TITLE: 11/23 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.31 μm/s
QUICK SUMMARY:
H1's been locked for 5 hrs. After RyanS's 1st Observing lock (which was about 1hr), the next (& current) lock was fully automated (w/ CHECK MICH FRINGES + PRMI).
Environmentally, microseism had a bump up about 12 hrs ago, but has been trending down slowly since. Winds are almost nonexistent. There was a light rain on the drive in.
It is Saturday, so I am assuming a calibration is planned this morning at 19:30 UTC (11:30 PT).
H1 is back to observing at 161Mpc as of 08:34 UTC following a two-day NPRO swap.
Reached low noise at 08:21 UTC but couldn't start observing due to several outstanding SDF diffs. I made my best guess for some of these, so I encourage the appropriate people to check on these changes and make updates as necessary. The models in question are:
Ryan had two locklosses from an oscillation just below 1 Hz that showed up in the SRCL error signal and CSOFT P (and a little less in yaw) this evening during the state TRANSITION_FROM_ETMX.
I had a look at the state and found a step where we were clearing histories repeatedly while waiting for the bias to reach 0. This probably wasn't the problem, but I added an increment to the counter to stop doing this.
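For illustration, a minimal guardian-style sketch of the counter pattern described above (clear histories once instead of on every run() cycle); the class, channel name, and threshold are placeholders, not the actual TRANSITION_FROM_ETMX code:

    from guardian import GuardState

    class TRANSITION_FROM_ETMX_SKETCH(GuardState):
        """Illustrative only; channel name and threshold are placeholders."""

        def main(self):
            self.counter = 0  # how many times histories have been cleared

        def run(self):
            # guardian provides the `ezca` channel-access object to state code
            bias = ezca['SUS-ETMX_L3_LOCK_BIAS_OFFSET']  # placeholder channel
            if abs(bias) > 1e-3:        # bias still ramping toward zero
                if self.counter < 1:    # clear histories at most once, not every cycle
                    # (the history-clearing calls would go here)
                    self.counter += 1
                return False            # stay here until the ramp completes
            return True                 # bias at zero; move on to the next step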
Since the oscillation rings up very slowly, I lowered the bias ramp times from 60 seconds to 30 seconds. This worked once, although I'm not sure it always will.
We've now passed through that state and the guardian is waiting in OMC_WHITENING. I will set it to automatically go into observing when the violins are low enough.
While waiting for the violins I had a look at SDF.
There were a couple of diffs from forcing the squeezer TTFSS to lock when the reference cavity was unlocked; I've reverted those.
I am not sure what's going on with HAM4 ISI FF but I've accepted what's shown in the screenshot.
Vicky triggered the SQZ ASC, and she thinks it was working. We then lost lock for some reason while waiting for the violins.
TITLE: 11/23 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: None
SHIFT SUMMARY: Once Jason, Vicky, and I finished recovering the PSL and taking closeout transfer functions, initial alignment began right away with the intent to get H1 back to observing tonight. These are just some quick notes about the locking progress this evening, which so far has not been successful due to locklosses at TRANSITION_FROM_ETMX and ALS-X randomly dropping out.
H1 is currently relocking up to ENGAGE_ASC_FOR_FULL_IFO, but if TRANSITION_FROM_ETMX continues to be unsuccessful, H1 will be left down overnight.
V. Xu, R. Short, J. Oberling
Short short version: The PSL NPRO swap is done and IFO recovery has begun. I'll get a more detailed alog in tomorrow, right now I'm exhausted.
Promised details from the final day of the NPRO swap.
Summary
Mode Matching Lens Positions
We first measured the new positions of mode matching lenses L02 and L21; I'll update the As-Built table layout with the new values. The new positions:
Amplifier Recovery
With the previous day's work resulting in the NPRO being ready for amplifier recovery, this is where we started our recovery work. Amplifier recovery was straightforward. As a reminder, we first measure the power into the amp and the unpumped power out, to assess our initial alignment. Then we raise the amp pump diode operating current in 1 A intervals until we get to the locking point, adjusting beam alignment into the amplifier at each step in current (so alignment follows the formation of the thermal lenses in the amplifier crystals). For Amp1 we had 1.612 W input and an unpumped output of 1.252 W. This is 77.6% throughput, which is above our requirement of 65% throughput before starting to pump the amplifier, so we proceeded with recovery of Amp1 (see the Amp1 columns in the table below). We finished Amp1 recovery with ~70.2 W output, so we calibrated the Amp1 power monitor PD to this value (it was off by a couple of watts). We then lowered the light level in the Amp1/Amp2 path (using the High Power Attenuator (HPA) after Amp1) and checked alignment; all looked good. We increased the Amp2 seed with the HPA to ~1.8 W and checked the unpumped output from Amp2. This measured ~1.5 W, which is ~83% throughput and above our 65% threshold, so we proceeded with recovery of Amp2. This also went very well, see the Amp2 columns in the table below; we had ~64.0 W output from Amp2 with ~1.8 W initial seed power (the power was bouncing between 63.9 W and 64.0 W). With Amp2 fully powered we then used the HPA to increase the Amp2 seed to max, which resulted in ~140.2 W output from Amp2.
Current (A) | Amp1 Initial Pout (W) | Amp1 Final Pout (W) | Amp2 Initial Pout (W) | Amp2 Final Pout (W)
1 | 1.25 | 1.25 | 1.6 | 1.6 |
2 | 1.46 | 2.37 | 3.0 | 3.0 |
3 | 6.11 | 8.43 | 9.1 | 9.3 |
4 | 16.6 | 18.1 | 18.2 | 18.4 |
5 | 28.6 | 28.6 | 28.1 | 28.4 |
6 | 39.6 | 39.6 | 38.0 | 38.2 |
7 | 50.3 | 50.4 | 47.4 | 47.5 |
8 | 60.7 | 60.7 | 55.8 | 56.8 |
9 | 70.2 | 70.2 | 63.7 | 64.0 |
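As a quick arithmetic check of the unpumped-throughput numbers quoted above against the 65% requirement (values taken straight from the text):

    def throughput_ok(p_in_w, p_unpumped_out_w, threshold=0.65):
        """Unpumped throughput fraction and whether it clears the 65% requirement."""
        ratio = p_unpumped_out_w / p_in_w
        return ratio, ratio >= threshold

    print(throughput_ok(1.612, 1.252))  # Amp1: (0.776..., True) -> ~77.6%
    print(throughput_ok(1.8, 1.5))      # Amp2: (0.833..., True) -> ~83%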
Stabilization System Recovery
PMC: After lunch we began recovering the PSL stabilization systems in order: PMC, ISS, FSS. We began by using the HPA after Amp2 to lower the power to ~100 mW to check our beam alignment up to the PMC. All looked good here, so we increased the power to max and measured the power incident on the PMC at ~129.4 W. We then toggled the PMC autolock to ON and it locked without issue. We needed to use the picomotor-equipped mirrors (M11 and M12 on the layout) to tweak the beam alignment into the PMC, but were only able to get ~102.0 W in transmission with ~27.0 W in reflection. This is 9 W more than we had after our last NPRO swap, indicating that we really need to take a look at PMC mode matching; since we still had more than enough power to deliver to the IFO, we decided to defer the mode matching work to a later Tuesday and continue with PSL recovery. The PMC Trans and Refl monitor PDs were calibrated to the newly measured values; they were pretty close to begin with, but still read 1-2 W differently from what our power meter was measuring. We then returned the amplifier pump diode currents to their previous operating values (9.0 A and 8.8 A for Amp1, 9.1 A and 9.1 A for Amp2), which lowered Amp1 output power from ~70 W to ~68 W and Amp2 output power from ~140 W to ~139 W; this also changed PMC Refl to ~24 W and PMC Trans to ~104 W, indicating our beam is better matched to our current mode matching solution at these pump diode currents.
ISS: Moving on to the ISS, we first measured the amount of power in our 1st order diffracted beam (the "power bank" for the ISS). With the loop off and the AOM diffracting a default of 4%, we expect ~5.7 W in this beam, and this is what we measured. AOM alignment was good, so we moved on to the ISS PDs in the ISS box. A voltmeter gets plugged into the DC Out ports on the ISS box and a HWP inside the box is adjusted until the PD voltages read ~10.0 V. We did this, but noticed the DC voltage reading on the ISS MEDM screen was much higher, ~12.5 V for PDA and ~13 V for PDB. We tried to lock the loop and, as expected with PD voltages that high, the loop thought it needed to remove more power from the beam and ran the diffracted power up really high.

We immediately unlocked the ISS and began looking into what could be the problem, as the MEDM reading on the ISS PDs generally matches the voltmeter reading (I say "generally matches" because, for reasons unknown to me, the ISS does not use the DC out from its PDs; it uses a Filter out and "derives" the DC and AC PD voltages from that). The PDs appeared to be working correctly, and we found no large dark voltages that would indicate a PD failure/malfunction. When we unplugged the Filter output the PD reading in MEDM began to slowly climb, but when we blocked the light onto the PDs the MEDM reading went to zero. Looking back at trends we saw the PDs behaving as expected before this most recent NPRO swap, only reading these higher values in MEDM since the relock of the PMC an hour or so prior. I had never seen this behavior in the past, so wasn't quite sure where the problem could be. Thinking maybe something had gone wrong in either the ISS inner loop servo box or something in the CER, we called Fil and asked if he could take a look at the CER electronics for the ISS while we moved on to FSS recovery.

It was at this point we found the problem. When the FSS MEDM screen was opened, the first thing we saw was one NPRO noise eater (NE) light green and the other red. The green light was our NE enable monitor, indicating that the NE toggle was switched ON in the PSL software; the red light was our NE monitor, which reads the Check output from our NPRO monitor PD and indicates whether or not the NPRO's relaxation oscillation was being suppressed. So we had the NE toggled ON but it was clearly not working, so we toggled it off and on again. The NE monitor went green, the channel monitoring the relaxation oscillation indicated it was working properly, and the ISS PD values on the ISS MEDM screen now read the correct values. So I learned that we have another indicator of whether the NE is working: the ISS PD readings on the MEDM screen go higher when it is not. Trending back, the NE stopped working at ~16:58 PST on Thursday, right before Ryan and I left the enclosure for the day. We'll keep an eye on this, as right now it's not clear why the NE turned off. At this point everything looked good for the ISS, so we moved on to the FSS.
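For reference, the trend-back that pinpointed the NE dropout can be done with something like the gwpy sketch below; the channel name and "working" threshold are placeholders, not the actual NE monitor channel:

    from gwpy.timeseries import TimeSeries

    chan = 'H1:PSL-NPRO_NOISE_EATER_MON'    # placeholder name for the NE monitor
    data = TimeSeries.get(chan, '2024-11-21 20:00:00', '2024-11-22 08:00:00')

    threshold = 0.5                          # placeholder "working" level
    bad = (data.value < threshold).nonzero()[0]
    if bad.size:
        print('NE monitor first dropped at GPS', data.times[bad[0]].value)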
FSS: For the FSS, we first tried to see if the RefCav would lock with the autolocker; it would not. We had to manually tune the NPRO temperature to find a RefCav resonance, and one was found with a slider value of ~ +0.06. The temperature search ranges were adjusted to this new value and we tried the autolocker again. While we could see clear flashes, the autolocker would not grab lock for some reason. The FSS guardian was paused so it would stop yanking the gains around upon lock acquisition, but this did not help; the autolocker refused to hold lock. So I did it manually (from the FSS manual screen, manually change the NPRO temperature until a resonance flashes through, then really quickly move the mouse up to turn the loop on; if the loop grabs, go back to the FSS MEDM screen and turn on the Temperature loop, if not, turn the loop off and try again), which worked. With a locked RefCav we measured a RefCav TPD voltage of ~0.84 V. The RefCav Refl spot looked pretty centered so we did not do any alignment tuning. This completed our work in the enclosure, so we cleaned up, turned the computers and monitors off, left the enclosure, and put it into Science mode. Outside, we scanned the NPRO for mode hop regions and measured TFs of the stabilization loops.
NPRO Temperature Scan
Now outside the enclosure, we set up to scan the NPRO temperature to check for mode hopping. We took the HV Monitor output from the PMC fieldbox to trigger an oscilloscope on the PZT ramp and used the PMC Trans PD to monitor the peaks. We set the PMC's alignment ramp to +/- 7.0 V and a 1 Hz scan rate, and monitored the peaks as we tuned the NPRO crystal temperature. We used the slider on the FSS Manual MEDM screen, which gives us a total range of approximately +/- 0.8 °C (0.01 on the slider changes the NPRO crystal temperature by roughly 0.01 °C and the slider goes from -0.8 to +0.8). Since we were close to zero on the slider, sitting at ~ +0.07, we started by moving lower (which reduces the NPRO crystal temperature); our starting crystal temperature, as read at the NPRO power supply front panel, was 24.22 °C. We got all the way to the negative end of the slider, which gave a crystal temperature of 23.38 °C, and did not see any evidence of mode hopping on the way down. Heading back up, we finally started to see early evidence of mode hopping near the top end of the slider; we could clearly see a new forest of peaks show up in the PMC PZT scan and one of them started to grow noticeably as the temperature was further increased. This mode hop region began at a crystal temperature of 24.76 °C, and the slider maxed out at 24.91 °C. At this point we still had not fully transitioned through the mode hop region, but we did have a peak starting to grow very large, indicating that we were almost there. Since we saw no evidence of mode hopping by making the temperature colder, we went back to our starting place of 24.22 °C and then reduced the temperature further to the next RefCav resonance below that; this resulted in a crystal temperature of 23.96 °C at a slider value around -0.17. Again I had to lock the RefCav manually, as the autolocker did not want to grab and hold lock. With all of the stabilization systems locked, we moved on to TF measurements.
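A rough conversion implied by the numbers above (1 slider unit ≈ 1 °C, anchored at slider +0.07 ↔ 24.22 °C), just to show the scan endpoints are self-consistent:

    SLIDER_REF, TEMP_REF = 0.07, 24.22      # anchor point from the text (slider, degC)

    def slider_to_temp(slider):
        return TEMP_REF + (slider - SLIDER_REF)   # 0.01 slider ~ 0.01 degC

    print(slider_to_temp(-0.80))   # ~23.35 degC (measured 23.38 degC at the bottom of the slider)
    print(slider_to_temp(-0.17))   # ~23.98 degC (measured 23.96 degC, next RefCav resonance)
    print(slider_to_temp(+0.80))   # ~24.95 degC (measured 24.91 degC at the top of the slider)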
Transfer Functions and Gains
We started with the PMC. With the current settings we have a UGF of ~1.6 kHz and 60° of phase margin, see first attachment. Everything looked good so we left the PMC alone.
For the ISS, we have a UGF of ~45 kHz and a phase margin of 37.5°, see second attachment. Again, everything looked normal here so we left the ISS alone.
For the FSS, we started with a Common gain of 15 dB. Everything looked OK, but since we had seen some potential zero crossings that like to hide in the longer range scans, we did a "zoomed in" scan from 100 kHz to 1 MHz. Sure enough, there were a couple of peaks in the 500 kHz to 600 kHz range that were pretty close to a zero crossing. We lowered the Common gain to 14 dB to move them away from the potential crossing; the third attachment shows this zoomed-in area with the Common gain at 14 dB, and the peaks in question are clearly visible. With this Common gain we have a UGF of ~378 kHz with ~60° of phase margin, see fourth attachment; we took this TF out to 10 MHz to check for any weirdness at higher frequency and did not see anything immediately concerning. To finish, we took a look at the PZT/EOM crossover (around 20 kHz) to set the Fast gain. The final attachment shows this measurement (a spectrum of IN1) at a Fast gain of 5 dB; this looks OK so we left the Fast gain as is.
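For anyone re-deriving these numbers offline, a generic numpy sketch of reading a UGF and phase margin off an exported open-loop TF; the arrays at the bottom are placeholders roughly shaped like the FSS measurement, not the actual data:

    import numpy as np

    def ugf_and_phase_margin(freq_hz, mag_db, phase_deg):
        """First downward 0 dB crossing and the phase margin there."""
        cross = np.nonzero((mag_db[:-1] >= 0) & (mag_db[1:] < 0))[0]
        if cross.size == 0:
            return None, None
        i = cross[0]
        frac = mag_db[i] / (mag_db[i] - mag_db[i + 1])            # linear interpolation
        f_ugf = freq_hz[i] + frac * (freq_hz[i + 1] - freq_hz[i])
        ph = phase_deg[i] + frac * (phase_deg[i + 1] - phase_deg[i])
        return f_ugf, 180.0 + ph    # phase margin = 180 deg + phase at the UGF

    # placeholder analyzer export
    f = np.array([1e3, 1e4, 1e5, 3.78e5, 1e6])
    mag = np.array([40.0, 20.0, 5.0, 0.1, -10.0])
    phase = np.array([-90.0, -100.0, -110.0, -120.0, -150.0])
    print(ugf_and_phase_margin(f, mag, phase))   # ~384 kHz, ~60 deg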
At this point the NPRO swap was complete, the PSL was fully recovered, and we handed things over to the commissioning team for IFO recovery. We still need to look at PMC mode matching, and will do so during future Tuesday maintenance periods. This closes WP 12210.
After the very successful PSL work this week, we're electing not to offload the temp changes for the ALS or SQZ lasers to their physical knobs tonight. This means that they all need Crystal Freqs of something near 1.6 GHz (1600 in the channel H1:ALS-X_LASER_HEAD_CRYSTALFREQUENCY). The new-ish ALS state CHECK_CRYSTAL_FREQ is written assuming that all the lasers have the changes offloaded to their knobs, so that the value in that channel is close to zero. After we went through SDF revert, we lost those values (and the search values, which Daniel has updated in SDF for the weekend), so we lost the PLL lock of the aux lasers. The CHECK_CRYSTAL_FREQ state was 'fighting' us by putting in candidate values closer to zero. I've updated its candidate values to be closer to 1600. Once we offload their temps to the knobs on their front panels (early next week), we'll want to undo this change.
Current crystal frequencies: ALSX +1595 MHz, ALSY +1518 MHz, and SQZ +1534 MHz.
I have now reverted the change to CHECK_CRYSTAL_FREQS, so that when Daniel and Vicky are done offloading the temps to the front panels of the lasers, this state will still work.
While the PSL team finishes up their work, I wanted to get a jump start on alignment, so I moved the ITMs, ETMs, and TMSs back to where their top mass OSEMs said they were the last time we had good transmission for both ALSX and ALSY while trying to lock. This indeed got somewhat okay flashes on both ALSs, so it will be a fine place to start from once the reference cavity is locked again.
However, I found that TMSY's pitch slider has a minus sign with respect to the OSEM readbacks for pitch. This is the only slider (out of pitch or yaw, for all 6 optics that affect ALS alignment for either arm) that seems to have this issue.
In the attached screenshot, when I try to move the TMS up the OSEM readbacks say it went down, and vice versa.
Not an urgent matter, but it may make it more challenging for ALS auto-alignment scripts to do their work.
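If someone wants to script the sign check, a pyepics sketch along these lines would do it; the channel names are my best-guess placeholders for the TMSY pitch slider and top-mass OSEM readback, and the step size and settle time are arbitrary:

    import time
    from epics import caget, caput

    SLIDER = 'H1:SUS-TMSY_M1_OPTICALIGN_P_OFFSET'   # placeholder slider channel
    OSEM = 'H1:SUS-TMSY_M1_DAMP_P_INMON'            # placeholder OSEM pitch readback

    slider0, osem0 = caget(SLIDER), caget(OSEM)
    caput(SLIDER, slider0 + 1.0)                    # small positive pitch step
    time.sleep(30)                                  # let the top mass settle
    delta = caget(OSEM) - osem0
    print('sign flipped' if delta < 0 else 'signs agree')
    caput(SLIDER, slider0)                          # restore the slider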
TITLE: 11/22 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY:
NPRO swap work continued today by Jason, Vicky, and Ryan, with quite a bit of progress this morning, but the PSL crew is still on the floor (currently seeing flashes of RefCav Trans), so the status of a return to observing tonight is not clear. (I will be in for my DAY shift tomorrow.)
Trend of the new PSL HV monitor H1:PEM-CS_ADC_4_28_2K_OUT_DQ is attached. It has been flat/constant for most of the last 24+hrs, but in the last couple hours it has moved a bit (correlated with PMC locking or ISS?).
LOG:
Adam Mullavey from LLO made some code to move the ETM and TMS in a spiral to scan for flashes in the arm. LLO has been using this for O4, and in terms of an automation step, this would be comparable to our Increase_Flashes state. Increase_Flashes is very simple in how it works: move in one direction, one degree of freedom at a time, look for better cavity flashes, and move the other way if they get worse. While this is reliable, it is very slow since we have to wait for one period of the quad between each step (20 seconds) to ensure we don't miss a flash.
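For concreteness, a minimal sketch of that one-direction-at-a-time logic; get_peak_flash() and step_optic() are stand-ins for the real guardian/EPICS calls, and the step size is arbitrary:

    import random, time

    def get_peak_flash(wait_s=20.0):
        """Stand-in for the peak arm transmission over one quad pendulum period."""
        time.sleep(wait_s)
        return random.random()

    def step_optic(optic, dof, amount):
        """Stand-in for nudging an alignment slider by `amount`."""
        pass

    def increase_flashes(optics=('ETM', 'TMS'), dofs=('P', 'Y'), step=0.5, wait=20.0):
        for optic in optics:
            for dof in dofs:
                direction = +1
                best = get_peak_flash(wait)
                while True:
                    step_optic(optic, dof, direction * step)
                    new = get_peak_flash(wait)               # waits a full quad period
                    if new > best:
                        best = new                           # flashes improved; keep going
                        continue
                    step_optic(optic, dof, -direction * step)  # undo the bad step
                    if direction == +1:
                        direction = -1                       # try the other direction
                    else:
                        break                                # both ways got worse; next DOF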
Over the last two days I spent some time converting Adam's state, Scan_Alignment, for use at LHO, adjusting thresholds and other parameters and trying to improve parts of it so it might work a bit more reliably. The most notable change I've made is to get data from the fast channel for the arm transmission, rather than collecting slow channel data. This seemed to help a bit, but it relies entirely on nds calls, which we've historically found to be less than 100% reliable. I've also lowered the minimum thresholds, thanks to the previous change. This allows it to start off from basically no light in the cavity and bring it up to a decent alignment.
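The fast-channel fetch looks roughly like the sketch below with the nds2 client; the server, GPS times, and channel name are placeholders for illustration, not necessarily what Scan_Alignment uses:

    import nds2

    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
    start, stop = 1416000000, 1416000020                     # placeholder 20 s window
    bufs = conn.fetch(start, stop, ['H1:LSC-TR_X_NORM_DQ'])  # placeholder fast channel
    print('peak flash:', bufs[0].data.max())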
After these changes it seems to really improve the flashes from a very misaligned starting point, but I'm not sure it's any faster than the Increase_Flashes state. In the attached example it took around 20-30 min to go from little to no light to a decent amount. I'm testing this without the PLL and PDH locked, so it's hard to say exactly how well aligned it is and how much better it can get. Next I'd like to take some time on a Tuesday to test with a PDH- and PLL-locked ALS, and compare the time it takes against Increase_Flashes for both a very misaligned cavity and a barely misaligned cavity.
I've committed the changes I've made to ALS_ARM.py and ALS_YARM.py (both in common) to the SVN. This created a new state, SCAN_ALIGNMENT, that I'll keep there, but it isn't in the state graph so it cannot be reached.
I've commented out the new import in the ALS_ARM guardian, since it was preventing reload of the ALS guardians.
Closes FAMIS 26342. Last checked in alog 81307.
All trends look good. No fans above or near the (general) 0.7 ct threshold. Screenshots attached.
Here are some plots relevant to understanding our uptime and downtime from the start of O4 until Nov 13th, with some comparisons to Livingston. I'm looking at times when the interferometer is in the low noise state (for H1, ISC_LOCK >= 600; for L1, ISC_LOCK >= 2000).
The first pie chart shows which guardian states we spend the most time in; this is pretty similar to what Mattia reported in 79623.
The histograms of lock segment lengths and relocking times show that L1's lock stretches are longer than H1's, and that we've had 34 individual instances where we were down for longer than 12 hours (and that H1 has a lot of short locks).
The rolling and cumulative average plot shows how the drop in duty cycle in O4b compared to O4a is due to individual problems, including the OFI break, pressure spikes, and laser issues.
Lastly, the final plot shows how we accumulate uptime and downtime binned by segment length. This shows that L1 accumulates more uptime than Hanford by having more locks in the 30-50 hour range. The downtime accumulation shows that just under half of our downtime is from times when we were down for more than 16 hours (serious problems), and about 1/4 of it is due to routine relocking that takes less than 2.5 hours.
The script and data used to make these plots can be found in DutyCycleO4a.py and H1(L1)ISCLockState_04.txt in this git repo.
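A condensed sketch of the segment bookkeeping behind these plots (this is not the actual DutyCycleO4a.py; the file format is assumed to be two columns of GPS time and ISC_LOCK state):

    import numpy as np

    LOW_NOISE = 600                               # H1 threshold; L1 would use 2000

    data = np.loadtxt('H1ISCLockState_04.txt')    # assumed two-column format
    t, state = data[:, 0], data[:, 1]
    locked = state >= LOW_NOISE

    edges = np.diff(locked.astype(int))           # +1 = lock start, -1 = lockloss
    starts, ends = t[1:][edges == 1], t[1:][edges == -1]
    if locked[0]:
        starts = np.insert(starts, 0, t[0])
    if locked[-1]:
        ends = np.append(ends, t[-1])

    lock_hr = (ends - starts) / 3600.0
    relock_hr = (starts[1:] - ends[:-1]) / 3600.0
    print('locks in the 30-50 hour range:', ((lock_hr > 30) & (lock_hr < 50)).sum())
    print('downtimes longer than 12 hours:', (relock_hr > 12).sum())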
Closes FAMIS#26018, last checked 81331
Nothing looks out of the norm
Yesterday Gerardo noticed that the southernmost section of the EY wind fence had some broken cables on the lower half of the fence, on the panel we did NOT replace last summer. I think we have a couple of ways we could go about repairing this. We will discuss options, but the weather is not great for this kind of work.
Fri Nov 22 10:10:42 2024 INFO: Fill completed in 10min 39secs
Gerardo confirmed a good fill curbside through the tumbleweeds. Minimum TC temps are getting close to their trip temps (trip=-100C, TCmins=-117C,-115C). I have increased the trip temps to -90C for tomorrow's fill. Looking at a yearly trend, we had to do this on 25 Nov last year, so we are on schedule.
Using the lockloss page, I picked out some times when the tag "IMC" or "FSS_OSCILLATION" was triggered. Then I made some simple time series comparison plots between several PSL channels and PSL PEM channels. In particular, the PSL channels I used were:
H1:PSL-FSS_FAST_MON_OUT_DQ, H1:PSL-FSS_TPD_DC_OUT_DQ, H1:IMC-MC2_TRANS_SUM_IN1_DQ, H1:PSL-PWR_NPRO_OUT_DQ, H1:PSL-PMC_MIXER_OUT_DQ, H1:PSL-PMC_HV_MON_OUT_DQ. I compared those to the following PSL PEM channels:
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ, H1:PEM-CS_ACC_PSL_PERISCOPE_Y_DQ, H1:PEM-CS_ACC_PSL_TABLE1_X_DQ, H1:PEM-CS_ACC_PSL_TABLE1_Y_DQ, H1:PEM-CS_ACC_PSL_TABLE1_Z_DQ, H1:PEM-CS_ACC_PSL_TABLE2_Z_DQ, H1:PEM-CS_MIC_PSL_CENTER_DQ. For this analysis, I've only looked at 3 time periods but will do more. Below are some things I've seen so far:
From my very limited dataset so far, it seems like the most interesting area is where the PSL_PERISCOPE_X/Y and PSL_TABLE1_X/Y/Z channels are located.
For the same 3 time periods, I checked whether the temporary magnetometer H1:PEM-CS_ADC_5_18_2K_OUT_DQ witnessed any glitches that correlate with the IMC/FSS/PMC channels. Attached are some more time series plots of this. During the time period on September 13th, I do not see anything interesting in the magnetometer. However, during the other two periods in November I do see a glitch that correlates with these channels. The amplitude of the glitch isn't very high (not as high as what is witnessed by the periscope_x/y and table1_x/y/z channels), but it is still there. As in the original slides posted, I don't see any correlation between the PWR_NPRO channel and glitches in the PEM channels on any of the days.
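The comparison plots themselves are just synchronized fetches of a few of the channels listed above; a gwpy sketch is below (the GPS time is a placeholder, not one of the actual lockloss times):

    import matplotlib.pyplot as plt
    from gwpy.timeseries import TimeSeriesDict

    channels = [
        'H1:PSL-FSS_FAST_MON_OUT_DQ',
        'H1:PSL-PMC_MIXER_OUT_DQ',
        'H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ',
        'H1:PEM-CS_ACC_PSL_TABLE1_Z_DQ',
    ]
    t0 = 1415000000                      # placeholder lockloss GPS time
    data = TimeSeriesDict.get(channels, t0 - 5, t0 + 5)

    fig, axes = plt.subplots(len(channels), 1, sharex=True, figsize=(8, 8))
    for ax, name in zip(axes, channels):
        ax.plot(data[name].times.value, data[name].value, label=name)
        ax.legend(loc='upper right', fontsize='small')
    fig.savefig('psl_pem_lockloss_comparison.png')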
Set up a scope near the PSL rack. The channels are FSS test2, PMC mixer out, ISS PDB, and IMC servo test 1. The trigger has been connected to the IMC REFL shutter. The shutter usually triggers upon a lockloss, or when more than ~4 W are detected in reflection of the IMC.
22 Nov 2024 around 20:30 PT, I disconnected the remote scope and all its input BNCs, and I unplugged the power strip for the PSL remote scope + 785 + Agilent because Ryan was close to relocking.
09:26 UTC lockloss was NOT an IMC lockloss, btw. Woo hoo.