We've been locked for about 4 hours. The range has been just over 160 Mpc.
TITLE: 04/19 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Commissioning time ran from 1900-2200 UTC, but there was a lockloss at the end. H1 is currently at 60W, so it is almost back up.
Picket Fence froze earlier in the shift.
LOG:
TITLE: 04/19 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 9mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Sheila, Gabriele, Camilla, Jennie
The other day I used Elenna's template to make a PRCL excitation: 77252
We can use this to check on our PRCL to SRCL feedforward element in the LSC input matrix, where we add some of the POP A 9I signal (PRCL) to the POP A 45I signal (SRCL) to cancel the POP45 sensitivity to PRCL.
When the PRCL excitation is on, the transfer function of POP45I/POP9I is -0.028716 cnts/cnt
We want to set our input matrix such that:
5_1 = -5_3 * POP45I/POP9I tf (with PRCL excitation on) to cancel the PRCL contribution to the SRCL error signal. This means that our POP 9I to SRCL ERR matrix element should be 0.107, but it is currently 0.06.
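As a quick arithmetic sketch of how the element follows from the measured TF (not part of the original measurement), where the 5_3 value below is just back-computed from the 0.107 quoted above and is illustrative only:

# Sketch: compute the 5_1 (POP9I -> SRCL) input matrix element from the
# measured POP45I/POP9I transfer function with the PRCL excitation on.
tf_pop45i_over_pop9i = -0.028716   # cnts/cnt, measured with the PRCL line on
matrix_5_3 = 3.73                  # assumed POP45I -> SRCL element (illustrative)
matrix_5_1 = -matrix_5_3 * tf_pop45i_over_pop9i
print(round(matrix_5_1, 3))        # ~0.107, vs the 0.06 currently loaded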
EDITING to Add:
Today I redid this measurement and saw that POP45I/POP9I is -0.02551 cnts/cnt, meaning that the input matrix element should be 0.0946. Updating the input matrix to this did improve the PRCL to SRCL decoupling, as shown in the second attachment.
I also tried moving the PRCL offset; this did not seem to change the POP45I/POP9I ratio much, so the PRCL to SRCL decoupling doesn't seem to need retuning each time the PRCL offset changes.
The third screenshot shows the PRCL to DARM coupling; there are only small changes (2-3 dB) with these changes in PRCL offset and PRCL to SRCL decoupling. This suggests that the main coupling of PRCL to DARM isn't through SRCL (and that the PRCL offset isn't helping much).
Gabriele remembered that the PRCL to DARM coupling could also be through MICH; indeed, the third screenshot shows that the PRCL signal is only a factor of 10 smaller in POP45 Q than in I. In 66151 Anamaria tuned the phasing so that the PRCL signal in 45Q was a factor of 100 smaller than in I, so we can try to improve this by rephasing POP45.
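As a rough illustration (not from this entry, and with the sign convention being an assumption), the demod phase change needed to null the PRCL line in POP45 Q can be estimated from the measured I and Q line amplitudes:

import math

# Illustrative PRCL line amplitudes in the POP45 I and Q demod outputs;
# the entry only states that Q is ~10x smaller than I.
pop45_i = 1.0
pop45_q = 0.1

# Phase rotation that would null the line in Q; the sign depends on the
# demodulator rotation convention, so treat this as a magnitude estimate.
delta_phase_deg = math.degrees(math.atan2(pop45_q, pop45_i))
print(f"rotate POP45 demod phase by ~{delta_phase_deg:.1f} deg")  # ~5.7 deg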
Edited again:
Camilla and I moved the POP45 phase from 8 to 11; this reduced the PRCL to POP45Q transfer function to about a factor of 100 below PRCL to POP45I (in the last screenshot, the blue references show the improvement from a phase of 8 to 11). We then remeasured the SRCL to PRCL input matrix element, which should be 0.087. This made the SRCL coherence in DARM rather bad, although the PRCL to MICH and PRCL to SRCL couplings were improved. We decided to revert these changes and wait until next week to make them at a time when we can immediately retune the feedforward. As I stepped the phase back, I accidentally moved us from 11 to 6 in one step, which unlocked the IFO.
[Jennie, Sheila, Gabriele]
Low frequency OMC alignment lines were injected between 20:19 UTC and 21:16 UTC, following a similar method to the one described in 76462.
Using the code attached, I looked at how the noise in DARM and the amplitude of the 410.x Hz calibration line change with respect to the OMC QPD signals.
One can see from a spectrogram that the noise in GDS-CALIB_STRAIN_NOLINES is strongly modulated by the OMC angular lines. The spectrogram shows GDS-CALIB_STRAIN_NOLINES normalized to the median over time.
Then one can compute BLRMS in several bands of GDS-CALIB_STRAIN_NOLINES to see how the noise changes, and of OMC-DCPD_SUM at 410-411 Hz to monitor the amplitude of the DARM calibration line, which is a proxy for the optical gain. Time series are shown in the attached figure, where the OMC-DCPD_SUM BLRMS time series are compared with the OMC QPD signals.
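A minimal sketch of this kind of BLRMS computation (separate from the code attached to this entry), assuming gwpy and NDS access; the _OUT_DQ channel suffixes, band edges, and strides are assumptions:

from gwpy.timeseries import TimeSeries

# Illustrative span: the line injection window quoted later in this entry.
start, end = 1397593158, 1397596597

# 410-411 Hz BLRMS of OMC-DCPD_SUM as a proxy for the DARM optical gain.
dcpd = TimeSeries.get('H1:OMC-DCPD_SUM_OUT_DQ', start, end)
dcpd_blrms = dcpd.bandpass(410, 411).rms(stride=10)  # 10 s stride, illustrative

# BLRMS of the strain channel in one of the bands of interest.
strain = TimeSeries.get('H1:GDS-CALIB_STRAIN_NOLINES', start, end)
strain_blrms = strain.bandpass(800, 900).rms(stride=10)

# These BLRMS time series can then be compared with, or scattered against,
# the OMC QPD signals (e.g. H1:OMC-ASC_QPD_A_PIT_OUT_DQ).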
Finally, I plotted the 410-411 Hz BLRMS of OMC-DCPD_SUM (optical gain) in a scatter plot vs all four QPD signals. This allows me to select the values of the QPD signals that maximize the optical gain. The negative of those values is the offset that should be added to the OMC QPDs on top of the existing offsets.
Signal | Additional offset (to be added to the existing value) |
---|---|
H1:OMC-ASC_QPD_A_PIT | -0.05 |
H1:OMC-ASC_QPD_A_YAW | +0.12 |
H1:OMC-ASC_QPD_B_PIT | +0.00 |
H1:OMC-ASC_QPD_B_YAW | -0.12 |
CAVEAT: However, looking at the noise in GDS-CALIB_STRAIN_NOLINES between 800 and 900 Hz (which should be shot noise), one sees that it is at a minimum for the current OMC offsets. Maybe this is an indication that the SQZ is aligned to the current OMC, but the current OMC is not well aligned to the IFO. I suspect that using the offsets above will increase the optical gain but make the SQZ worse.
The template we used to run these lines is saved in /ligo/home/jennifer.wright/Documents/OMC_Alignment/20240419_OMC_ALignment_EXC.xml
The lines ran from 1397593158 to 1397596597 GPS in parallel with Robert's PEM measurements.
Naoki, Camilla, Sheila
We did a 5+5 minute ON/OFF test of the ADF line at 322 Hz. While the ADF was ON, we changed the ADF demod phase to make the ADF SQZ angle around 0 so that we can use the ADF servo with the 322 Hz ADF line.
The script to change the ADF frequency is in (userapps)/sqz/h1/scripts/ADF/setADF.py. The command is as follows.
python setADF.py -f 322
1st cycle
ON (5 min)
UTC: 2024/04/19 19:02:10 UTC
GPS: 1397588548
OFF (5 min)
UTC: 2024/04/19 19:07:21 UTC
GPS: 1397588859
2nd cycle
ON (5 min)
UTC: 2024/04/19 20:02:52 UTC
GPS: 1397592190
OFF (5 min)
UTC: 2024/04/19 20:08:00 UTC
GPS: 1397592498
3rd cycle
ON (5 min)
UTC: 2024/04/19 20:13:16 UTC
GPS: 1397592814
OFF (5 min)
UTC: 2024/04/19 20:18:35 UTC
GPS: 1397593133
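As an illustration of how these ON/OFF segments could be compared (this analysis is not part of the entry), a gwpy sketch that averages DARM spectra over the first ON and OFF segments; the channel name and FFT settings are assumptions:

from gwpy.timeseries import TimeSeries

# First ON/OFF cycle, GPS times from the list above (each segment ~5 min).
on_start, off_start = 1397588548, 1397588859
seg = 300  # seconds

darm_on  = TimeSeries.get('H1:GDS-CALIB_STRAIN', on_start,  on_start  + seg)
darm_off = TimeSeries.get('H1:GDS-CALIB_STRAIN', off_start, off_start + seg)

# Averaged spectra; compare around 322 Hz to see the ADF line and any
# change in the noise level with the line on vs off.
asd_on  = darm_on.asd(fftlength=8, overlap=4)
asd_off = darm_off.asd(fftlength=8, overlap=4)
print(asd_on.crop(320, 324), asd_off.crop(320, 324))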
The x509 certificate for cdsldap0 was due to expire tomorrow. This would have caused the control room systems to not recognize users. Due to the age of our ldap server and an updated signature type from our regular certificate authority, we were unable to follow our usual certificate renewal and replacement procedure. To address this we issued a certificate from an internal CA for cdsldap0 and pushed out the required configuration changes to the clients today. At this point everything should be working, though CDS laptops may need to go through a cycle of puppet runs to get the updates in place.
As a note to our sysadmins, if there are issues the steps to check are as follows (these should all be done by puppet):
1. ensure the LHO_CDS_CA.crt file is installed
2. restart the nslcd service
3. reload the nscd service
We are also working on the replacement for cdsldap0, however it will not be ready before tomorrow.
[Jennie, Jim, Gabriele]
We tried again turning on the HAM1 yaw FF to ASC, like in 77254. This time we wanted to turn on one dof at a time.
We first turned on the CHARD_Y FF since we care mostly about that loop. It worked well and reduced the CHARD_Y noise below 20 Hz.
We then turned on the INP1_Y FF. This improved the INP1 dof slightly, but made CHARD_Y go back to the original noise level.
We did not try the other dofs, since the predicted subtraction had very little effect.
We left the CHARD_Y FF on and all other yaw FFs off. This improves CHARD_Y and maybe also DARM, to be confirmed.
Jim accepted the new values in SDF
Fri Apr 19 10:07:41 2024 INFO: Fill completed in 7min 38secs
Jordan confirmed a good fill curbside.
At 03:36 UTC (Thu 20:36 PDT) the picket fence server stopped updating. This happened at the exact same time on Monday evening this week (20:36 15th April 2024 PDT). Erik took a look at it Tuesday morning during maintenance and did not find a reason for the stoppage. I restarted Picket Fence by hand at 08:49 PDT.
For FAMIS 26240:
Laser Status:
NPRO output power is 1.813W (nominal ~2W)
AMP1 output power is 66.77W (nominal ~70W)
AMP2 output power is 139.1W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 16 days, 21 hr 17 minutes
Reflected power = 17.33W
Transmitted power = 108.7W
PowerSum = 126.0W
FSS:
It has been locked for 0 days 9 hr and 51 min
TPD[V] = 0.8222V
ISS:
The diffracted power is around 2.5%
Last saturation event was 0 days 10 hours and 2 minutes ago
Possible Issues:
Check diode chiller
TITLE: 04/19 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 13mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
H1's been locked for over 8.5 hrs with a range mostly just under 160 Mpc, with nice triple coincidence and Virgo up for 18 hrs! Much quieter night than last night earthquake-wise.
NOTE: Commissioning time is scheduled for noon-3pm Local time during this shift.
TITLE: 04/19 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing and have been Locked for 36 minutes.
Two unknown locklosses during my shift. The lockloss tool flagged the first one as windy, but since the wind only went just above 20 mph, and that was 4.5 minutes before we lost lock, I don't think that was the reason. The second lockloss had a set of small glitches in the 300 ms before the LL (attachment).
LOG:
23:00 Detector Observing and Locked for 6 hours
01:02 Lockloss
- Relocking
- Couldn't get past DRMI so I took us to DOWN to run an initial alignment
01:23 Initial alignment start
01:41 Initial alignment done, relocking
02:21 NOMINAL_LOW_NOISE
02:23 Observing
04:56 Lockloss
- Relocking
05:16 Lost lock at TRANSITION_DRMI_TO_3F, starting initial alignment
05:37 Initial alignment done
06:20 NOMINAL_LOW_NOISE
06:23 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:18 | PCAL | Francisco | PCAL Lab | y(local) | PCALin | 00:54 |
Jennie W, Sheila
Summary: We spent some time today resetting the A2L gains, which make a large impact on our range. We are leaving the gains set in guardian to be the ones we found 3 hours into the lock, but this will probably cost us about 5 Mpc of range early in the lock.
Overnight, our low range was partly because the angle to length decoupling was poor (and partly because the squeezing angle was poor). We were running the angle to length decoupling script this morning when an EQ unlocked the IFO, and now we have re-run it with a not-yet-thermalized IFO.
I manually changed the amplitudes for the A2L script again on a per-optic basis to get each A2L set in a first round; there was still significant CHARD P coherence. I've edited the run_all_a2L.sh script so that some degrees of freedom are run with excitation amplitudes of 1, 3, or 10 counts. This has now run and succeeded in tuning A2L for each DOF on each optic, and this second round seems to have improved the sensitivity. We may need to tune these amplitudes each time we run A2L for now.
After our second round of A2L, the ASC coherences were actually worse than after only one round. We tried some manual tuning using DHARD and CHARD injections, but that didn't work well, probably because we took steps that were too large.
After the IFO had been locked and at high power for ~3 hours, we re-ran the A2L script again, which again set all 4 A2L gains, and improved the range by ~5 Mpc compared to the A2L settings from early in the lock (see screenshot). I've accepted these in SDF and added them to LSCparams:
'FINAL':{
'P2L':{'ITMX':-0.9598, #+1.0,
'ITMY':-0.3693, #+1.0,
'ETMX':4.1126,
'ETMY':4.2506}, #+1.0},
'Y2L':{'ITMX':2.8428, #+1.0,
'ITMY':-2.2665, #+1.0,
'ETMX':4.9016,
'ETMY':2.9922 },#+1.0}
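For context, a minimal sketch (not from this entry or the guardian code) of how gains like these could be written to the front end with pyepics; the DRIVEALIGN channel names are assumptions:

from epics import caput

# Hypothetical example: push the FINAL P2L/Y2L gains to the suspension
# DRIVEALIGN gains. Channel names are assumed, not taken from lscparams;
# in practice the guardian applies these values.
final_a2l = {
    'P2L': {'ITMX': -0.9598, 'ITMY': -0.3693, 'ETMX': 4.1126, 'ETMY': 4.2506},
    'Y2L': {'ITMX': 2.8428, 'ITMY': -2.2665, 'ETMX': 4.9016, 'ETMY': 2.9922},
}
for dof, gains in final_a2l.items():
    for optic, gain in gains.items():
        caput(f'H1:SUS-{optic}_L2_DRIVEALIGN_{dof}_GAIN', gain)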
This means that the A2L probably won't be well tuned early in the locks when we relock, which may cost us range early in the locks. In O4a we were also using the camera servo, not ADS, and since we set the camera servo offsets to match the ADS alignment early in the locks, we probably had less than optimal decoupling late in the lock stretches. This probably had less of an impact on range then, since these noises contribute in the same frequency range as the ESD noise did in O4a.
Sheila, Naoki, Rahul, Kar Meng, Terry
Same as yesterday (77188), we are again having trouble locking the FC today. The wind is again in the 0-20 mph range. We can get green flashes but no locking; even with FC feedback turned off, VCO locking also fails.
Rahul changed the M1 coil driver state on FC1 and FC2 from state 1 (usually only used in TFs) to state 2: IFO nominal for other triples. State 2 contains a low pass filter.
Rahul took FC2 P and L transfer functions; they look healthy and the same as in 66414.
Rahul took FC2 OSEMINF spectra and checked their health, as we previously had issues with the FC1 BOSEMs (72563). We can see the 0.3 to 0.4 Hz peak; Rahul's not worried about it, but we could check old data for this peak.
Jim found that the HAM8 ISI has a resonance at the same place as FC2, a peak at 0.375 Hz (see attached). He edited a gain and the motion seemed to improve enough to lock the FC.
During this time Ibrahim, Oli and I had followed the ObservationWithOrWithoutSqueezing wiki, edited all the SQZ guardian code nominal states, and accepted the no-squeezing SDFs. We then reverted these changes once the FC locked.
From the HAM8 summary pages, I don't see this 0.36 Hz peak between Dec 15 and Jan 15. The peak is basically exactly where FC2_L is oscillating in yesterday's screenshot. Since Jan 15 2024, the peak is intermittently there or gone, pretty variable. Maybe excitation of this peak is related to wind? Or maybe this is all totally unrelated to wind. I don't think this was an issue in O4a; maybe something changed after Jan 15.
It seems like another broken GS13 on HAM8, this time a horizontal sensor. I took some driven measurements looking at the l2l CPS to GS13 transfer functions, and the H1 CPS to GS13 TF is lower than the other 2 sensors by about 2x (see first attached image). This affects the stability of the blend cross-over, which changes the gain peaking in the blends. I've compensated for now with a digital gain, but this may not work for long.
I tried compensating with a digital gain in the calibration INF filters for the ISI; this seems to have improved things, as shown in the second image. The top subplot is the M3 pit witness for FC2, the second line is the gain I adjusted, and the third line shows LOG BLRMS for the X, RZ and RX GS13s on HAM8. X and RX don't improve much, but the RZ motion improves a bit after changing the gain. The fourth line is the RZ CPS residual, which is much quieter after increasing the gain to compensate for the suspected low response of the H1 GS13.
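A minimal sketch (not from this entry) of how the relative GS13 response, and hence the compensating digital gain, could be estimated from driven CPS-to-GS13 data with scipy; the arrays, sample rate, and band below are placeholders:

import numpy as np
from scipy.signal import csd, welch

fs = 512  # Hz, placeholder sample rate
# drive: the driven excitation; gs13_h1, gs13_ref: GS13 readbacks
# (placeholder arrays standing in for the recorded time series)
drive, gs13_h1, gs13_ref = np.random.randn(3, fs * 600)

def tf_mag(exc, resp, fs, nperseg=fs * 10):
    # Magnitude of the excitation->response transfer function via CSD/PSD.
    f, pxy = csd(exc, resp, fs=fs, nperseg=nperseg)
    _, pxx = welch(exc, fs=fs, nperseg=nperseg)
    return f, np.abs(pxy / pxx)

f, h1_mag = tf_mag(drive, gs13_h1, fs)
_, ref_mag = tf_mag(drive, gs13_ref, fs)

# Average the ratio over a band where the sensors should match; this is
# the gain to put in the GS13 calibration INF filter.
band = (f > 1) & (f < 10)
comp_gain = np.mean(ref_mag[band] / h1_mag[band])
print(f"compensation gain ~ {comp_gain:.2f}")  # ~2 for a half-response sensor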
To add to Vicki's comment, the peak's behavior seems complicated: it started at 0.6 Hz in January, then some time around March it moved down to its current frequency of ~0.37 Hz. Lots of days are missing from the summary pages in that time, so it's hard to track. The transience of the peak is also consistent with broken seismometers we've seen in the past. The gain tweak I put in may not be a stable fix.
SDFs that were accepted for observing w/ sqz
FYI - FRS ticket 31005 is tracking the 1/2 gain GS-13
Jeff K, Sheila D
We are interested in which signals we can use to try to damp triple bounce and roll modes, since these are probably responsible for some of the peaks in our forest of peaks around 25-35Hz (77109 and 76505). Jeff is preparing to implement a damping path on the HSTSs so that we can try to damp these.
We looked at signals (in addition to DARM) that we may want as options for damping.
Dtt template is at /ligo/home/sheila.dwyer/SUS/HSTS_damping_signals_checks.xml
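A minimal sketch (separate from the DTT template) of how the coherence between DARM and a candidate damping error signal could be checked around these modes with gwpy; the GPS span, channel names, and FFT settings are assumptions:

from gwpy.timeseries import TimeSeries

# Placeholder GPS span during a low-noise lock stretch.
start, end = 1397580000, 1397581800

darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
# Example candidate error signal; swap in other ASC/LSC channels to compare.
cand = TimeSeries.get('H1:LSC-SRCL_OUT_DQ', start, end)

# Match sample rates before computing the coherence.
if cand.sample_rate != darm.sample_rate:
    cand = cand.resample(darm.sample_rate.value)

# High-resolution coherence to resolve the bounce/roll modes near 27 Hz,
# 28.5 Hz and 40-42 Hz.
coh = darm.coherence(cand, fftlength=64, overlap=32)
print(coh.crop(25, 45))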
"Yes and..." Here're some plots that demonstrate the coupling between these modes and ISC channels - at higher resolution, - separated into ASC and LSC signals, and - with the individual optics called out as per their ID in LHO:49643 In addition to the region that Sheila focuses on, 27 Hz and 28.5 Hz to isolate the highest vertical modes of the recycling cavity HSTS and HLTSs, I also show the 40-42 Hz region to show the highest roll modes of the recycling cavity HSTSs. (Note, I looked around ~45 Hz for the highest HLTS vertical modes, but they did not appear in DARM, nor any of these other sensors.) This should help us pick the error signals, and/or how much damping we should expect to get, if we don't want to use DARM_CTRL as our error signal (due to concerns about parasitic loops, or needed to account for it inn the DARM calibration, etc.).
Apologies. In my haste to produce the plots, I didn't read all the elements in the table of SUS resonances reported in LHO:49643, and incorrectly assumed that the table was sorted by frequency, picking off the first listed suspension that was close. Thus, in the first two attachments,
- 2024-04-15_HXTS_HighestVModes_Labeled_DARM_and_ASC_Signals.png and
- 2024-04-15_HXTS_HighestVModes_Labeled_DARM_and_LSC_Signals.png
I claim that the mode at ~27.42 +/- 0.01 Hz is SRM in these spectra (id'ed at 27.45 Hz in LHO:49643), but PR2 and MC3 are closer possibilities (id'ed at 27.41 and 27.42 Hz in LHO:49643). To definitively assign these modes to suspensions, we'll likely have to do some driven tests at high frequency resolution.
Back into NLN at 23:06 UTC, and we returned to OBSERVING at 23:15 UTC after Robert took apart his PEM setup/EXC.