At 03:36 UTC (Thu 20:36 PDT) the picket fence server stopped updating. The same thing happened at the same time of day on Monday evening this week (20:36 PDT, 15 April 2024). Erik took a look at it Tuesday morning during maintenance and did not find a reason for the stoppage. I restarted Picket Fence by hand at 08:49 PDT.
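Since the server has now stalled twice at the same time of day, an automatic staleness check could restart it until the root cause is found. A minimal sketch, assuming the server touches a heartbeat file on every update and runs as a systemd unit; both the file path and the unit name are hypothetical, not the real deployment:

# Hedged sketch: restart Picket Fence if its output goes stale.
# STATUS_FILE and SERVICE are hypothetical names.
import os
import time
import subprocess

STATUS_FILE = '/tmp/picket_fence_heartbeat'   # hypothetical heartbeat file
SERVICE = 'picket-fence.service'              # hypothetical systemd unit
STALE_SEC = 300                               # restart after 5 min of silence

while True:
    age = time.time() - os.path.getmtime(STATUS_FILE)
    if age > STALE_SEC:
        subprocess.run(['systemctl', 'restart', SERVICE], check=True)
        print(f'restarted {SERVICE}; heartbeat was {age:.0f}s stale')
    time.sleep(60)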
For FAMIS 26240:
Laser Status:
NPRO output power is 1.813W (nominal ~2W)
AMP1 output power is 66.77W (nominal ~70W)
AMP2 output power is 139.1W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked for 16 days, 21 hr, 17 minutes
Reflected power = 17.33W
Transmitted power = 108.7W
PowerSum = 126.0W
FSS:
It has been locked for 0 days 9 hr and 51 min
TPD[V] = 0.8222V
ISS:
The diffracted power is around 2.5%
Last saturation event was 0 days 10 hours and 2 minutes ago
Possible Issues:
Check diode chiller
TITLE: 04/19 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 13mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
H1's been locked for over 8.5hrs with a range mostly just under 160Mpc, with nice triple coincidence and Virgo up for 18hrs! Much quieter night than last night earthquake-wise.
NOTE: Commissioning time is scheduled for noon-3pm Local time during this shift.
TITLE: 04/19 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing and have been Locked for 36 minutes.
Two unknown locklosses during my shift. The lockloss tool flagged the first one as windy, but since the wind only went just above 20mph 4.5 minutes before we lost lock, I don't think that was the reason. The second lockloss had a set of small glitches in the 300ms before the LL (attachment).
LOG:
23:00 Detector Observing and Locked for 6 hours
01:02 Lockloss
- Relocking
- Couldn't get past DRMI so I took us to DOWN to run an initial alignment
01:23 Initial alignment start
01:41 Initial alignment done, relocking
02:21 NOMINAL_LOW_NOISE
02:23 Observing
04:56 Lockloss
- Relocking
05:16 Lost lock at TRANSITION_DRMI_TO_3F, starting initial alignment
05:37 Initial alignment done
06:20 NOMINAL_LOW_NOISE
06:23 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:18 | PCAL | Francisco | PCAL Lab | y (local) | PCALin | 00:54 |
Between yesterday 2024/04/17 16:48 UTC and today's lockloss at 2024/04/18 15:25 UTC, there were three sets of times where the calibration kappa values were suddenly louder than normal. Attachment1 and attachment2 show the three lock stretches (labeled 1, 2, and 3) over which this was seen, as well as kappa ranges from other locked times. The white dotted lines give an idea of the boundaries that the majority of kappa values fall within. The first time this showed up was 2024/04/17 16:48 UTC (stretch1), while we were locked in NOMINAL_LOW_NOISE and commissioning. The kappa values had flatlined for three minutes before coming back larger than normal. Stretch2 and stretch3 spanned the entirety of the next two locks. However, in the lock following those two, starting at 2024/04/18 17:26 UTC, the values seemed to be back to normal.
I mentioned this increase to Louis last night and he said, "I think we need to increase the calibration line amplitudes". I'm not sure whether something was done about this today between locks that brought the values back to what they were previously.
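For future occurrences, a quick trend of the kappas over the suspect stretch makes the excursions easy to flag. A minimal sketch using gwpy; the kappa channel name and the bounds are assumptions for illustration:

# Hedged sketch: trend one calibration kappa and count excursions
# outside a fixed band. Channel name and bounds are assumed.
from gwpy.timeseries import TimeSeries

CHAN = 'H1:GDS-CALIB_KAPPA_TST_REAL'          # assumed kappa channel name
kappa = TimeSeries.get(CHAN, '2024-04-17 16:00', '2024-04-18 16:00')

lo, hi = 0.98, 1.02                           # illustrative bounds
outliers = (kappa.value < lo) | (kappa.value > hi)
print(f'{outliers.sum()} of {kappa.size} samples outside [{lo}, {hi}]')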
Lockloss 04/19 01:02 UTC from unknown cause
02:23 Observing
TITLE: 04/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 8mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Observing and have been locked for 6 hours.
TITLE: 04/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
The day started out with plans of coordinated commissioning, but that soon changed after another earthquake took H1 down (so coordinated commissioning was shifted to later in the morning), with a return to Observing in the afternoon.
LOG:
FAMIS25987
All plots look good to me. Jim happened to be next to me while I ran this and said that there may have been an earthquake during the time this check uses, but it seems to have only raised the low frequency and not changed the high frequency that we care about for these CPS checks. All good.
Now that Sheila and Jennie have done lots of work to get our arm pointing and A2L couplings improved, there may be a small amount of jitter cleaning improvement that can be gained.
In the attached image, red and blue are from today after the commissioning time. Green and brown are from a week ago, when we had ~165 Mpc cleaned range. Red and green are pre-cleaning, and blue and brown are after line subtraction and cleaning. In the area between 100-200 Hz, the current (blue) trace looks perhaps not quite as flat as the brown trace, so maybe there is a teeny bit more that an updated cleaning coefficient set could eke out. Once the cleaned trace is flat, any overall level change probably wouldn't be able to be cleaned out.
I'll try to look at some Observing time from tonight, to see if there is an improved candidate set of coefficients we can try. As a reminder, the jitter coupling is still quite close to what it was in O4a, and the cleaning coefficients are currently the same as they have been since Fall 2023.
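As a starting point for that check, the residual coherence between a jitter witness and the cleaned strain in the 100-200 Hz band would indicate how much an updated coefficient set could still remove. A hedged gwpy sketch; both channel names and the GPS time are assumptions for illustration:

# Hedged sketch: residual jitter-witness/strain coherence in 100-200 Hz.
from gwpy.timeseries import TimeSeries

t0 = 1397500000   # illustrative GPS start time, not a vetted segment
strain = TimeSeries.get('H1:GDS-CALIB_STRAIN_CLEAN', t0, t0 + 600)   # assumed name
witness = TimeSeries.get('H1:IMC-WFS_A_DC_PIT_OUT_DQ', t0, t0 + 600) # assumed witness

# resample to a common rate before computing coherence, if needed
if witness.sample_rate != strain.sample_rate:
    strain = strain.resample(witness.sample_rate)

coh = witness.coherence(strain, fftlength=8, overlap=4)
f = coh.frequencies.value
band = (f >= 100) & (f <= 200)
print(f'mean 100-200 Hz coherence: {coh.value[band].mean():.3f}')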
Jennie W, Sheila
Summary: We spent some time today resetting A2L gains, which makes a large impact on our range. We are leaving the gains set in guardian as the ones we found 3 hours into the lock, but this will probably cost us about 5Mpc of range early in the lock.
Overnight, our low range was partially because the angle-to-length decoupling was poor (and partly because the squeezing angle was poor). We were running the angle-to-length decoupling script this morning when an EQ unlocked the IFO, and we have now re-run it with a not-yet-thermalized IFO.
I manually changed the amplitudes for the A2L script again on a per-optic basis to get each A2L set in a first round; there was still significant CHARD P coherence. I've edited the run_all_a2L.sh script so that some degrees of freedom are run with excitation amplitudes of 1, 3, or 10 counts. This has now run and succeeded in tuning A2L for each DOF on each optic, and this second round seems to have improved the sensitivity. We may need to tune these amplitudes each time we run A2L for now.
After our second round of A2L, the ASC coherences were actually worse than after only one round. We tried some manual tuning using DHARD and CHARD injections, but that didn't work well, probably because we took steps that were too large.
After the IFO had been locked and at high power for ~3 hours, we re-ran the A2L script, which again set all 4 A2L gains and improved the range by ~5Mpc compared to the A2L settings from early in the lock (see screenshot). I've accepted these in SDF and added them to LSCparams:
'FINAL': {
    'P2L': {'ITMX': -0.9598,  # +1.0,
            'ITMY': -0.3693,  # +1.0,
            'ETMX': 4.1126,
            'ETMY': 4.2506},  # +1.0},
    'Y2L': {'ITMX': 2.8428,   # +1.0,
            'ITMY': -2.2665,  # +1.0,
            'ETMX': 4.9016,
            'ETMY': 2.9922}},  # +1.0}
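For reference, a minimal sketch of pushing such a gain set out over EPICS. The drivealign channel pattern is my assumption of how these gains are addressed, and pyepics stands in for the guardian ezca wrapper; this is not the actual guardian/LSCparams machinery:

# Hedged sketch: write an A2L gain set to (assumed) drivealign gain channels.
from epics import caput

A2L_FINAL = {
    'P2L': {'ITMX': -0.9598, 'ITMY': -0.3693, 'ETMX': 4.1126, 'ETMY': 4.2506},
    'Y2L': {'ITMX': 2.8428, 'ITMY': -2.2665, 'ETMX': 4.9016, 'ETMY': 2.9922},
}

for dof, optics in A2L_FINAL.items():
    for optic, gain in optics.items():
        chan = f'H1:SUS-{optic}_L2_DRIVEALIGN_{dof}_GAIN'   # assumed pattern
        caput(chan, gain)
        print(f'{chan} = {gain}')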
This means that the A2L probably won't be well tuned early in the lock when we relock, which may cost us range early in the locks. In O4a we were also using the camera servo, not ADS, and since we set the camera servo offsets to match the ADS alignment early in the locks, we probably had less-than-optimal decoupling late in the lock stretches. That probably had less of an impact on range, since these noises contribute in the same frequency range that the ESD noise occupied in O4a.
Naoki, Camilla
We reverted PSAMS from 7.5/0.55 back to 8.8/-0.67, undoing the change made last Friday in 77133. Before and after the PSAMS change, we ran SCAN_ALIGNMENT and took 5+5 min of asqz/sqz data. The attached figure shows that 8.8/-0.67 is better than 7.5/0.55. In parallel, A2L tuning was ongoing, so the noise below 100 Hz in SQZ should be due to A2L. The segment times are listed below, with a reproduction sketch after them.
asqz 120/137 (strain voltage 7.5/0.55) (5 min)
PDT: 2024-04-18 10:59:48 PDT
UTC: 2024-04-18 17:59:48 UTC
GPS: 1397498406
sqz 120/137 (strain voltage 7.5/0.55) (5 min)
PDT: 2024-04-18 11:06:29 PDT
UTC: 2024-04-18 18:06:29 UTC
GPS: 1397498807
asqz 170/95 (strain voltage 8.8/-0.67) (5 min)
PDT: 2024-04-18 12:01:57 PDT
UTC: 2024-04-18 19:01:57 UTC
GPS: 1397502135
sqz 170/95 (strain voltage 8.8/-0.67) (5 min)
PDT: 2024-04-18 13:35:01 PDT
UTC: 2024-04-18 20:35:01 UTC
GPS: 1397507719
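To reproduce the asqz/sqz comparison from these segments, one can take strain ASDs over each 5-minute window and compare their ratio in a shot-noise-dominated band. A hedged gwpy sketch; the 1-2 kHz band is an illustrative choice, not the band used for the attached figure:

# Hedged sketch: sqz vs asqz strain ASD ratios for the segments above.
import numpy as np
from gwpy.timeseries import TimeSeries

CHAN = 'H1:GDS-CALIB_STRAIN'
SEGMENTS = {                        # GPS starts from the list above, 5 min each
    'asqz 7.5/0.55': 1397498406,
    'sqz 7.5/0.55': 1397498807,
    'asqz 8.8/-0.67': 1397502135,
    'sqz 8.8/-0.67': 1397507719,
}

asds = {k: TimeSeries.get(CHAN, t0, t0 + 300).asd(fftlength=8, overlap=4)
        for k, t0 in SEGMENTS.items()}

for setting in ('7.5/0.55', '8.8/-0.67'):
    a, s = asds[f'asqz {setting}'], asds[f'sqz {setting}']
    f = a.frequencies.value
    band = (f >= 1000) & (f <= 2000)                  # illustrative band
    db = (20 * np.log10(a.value[band] / s.value[band])).mean()
    print(f'{setting}: mean asqz/sqz ratio in 1-2 kHz = {db:.2f} dB')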
Thu Apr 18 10:06:27 2024 INFO: Fill completed in 6min 23secs
Jordan confirmed a good fill curbside.
TITLE: 04/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 117Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 6mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
H1's range was hovering between 150-155Mpc 24hrs ago. After the EQs last night, H1 made it back UP, but seems a bit off, with the range down near 110Mpc. Sheila has just walked in and mentioned that Commissioning time is planned from 8-11am this morning and she is jumping in on that---so we are out of OBSERVING. Sheila mentioned angular coupling, has tried No Squeezing, and is now looking at A2L (but we are at the very beginning of investigations and commissioning).
Toggling from No Squeezing back to Freq Dep Squeezing already helped with the range. Sheila is currently running an A2L, and it looks like L1 is also joining us in commissioning.
Jennie, Jim
We tried Gabriele's newest version of the HAM1 ASC feedforward. New filters were installed last week, but we ran out of time during the commissioning window. It seems like we can still only get good subtraction from the pitch degrees of freedom; we got a lot of 10-ish Hz noise injection when we engaged the yaw degrees of freedom.
To start, we turned off the HAM1 FF by setting H1:HPI-HAM1_TTL4C_FF_INF_{RX,RY,Z,X}_GAIN to zero, then shutting off H1:HPI-HAM1_TTL4C_FF_OUTSW. While collecting that data we switched to the new filters and zeroed the gains for all of the individual H1:HPI-HAM1_TTL4C_FF_CART2ASC{P,Y}_{DOF}_{DOF}_GAIN filter banks. We then tried turning on all of the pitch feedforward first, ramping the gains from 0 to 1 in a couple of steps. It seemed to work well, but I don't think we actually got to the full gain of 1. We then tried turning on the yaw FF, but that pretty quickly started injecting noise around 10hz. In the first attached spectra, red traces are with the new FF and blue are with the FF off.
The second image shows trends where Jennie tries to reconstruct the timeline, which is how we found the first pitch test wasn't complete. Sheila ran an A2L measurement, then we tried the pitch FF again; spectra are in the third plot. All of the red traces are the new P FF (1397415884), blue is the old (1397407980), and green is with the FF off (1397405754). This worked well: we got some slight improvements around 15hz, and the new CHARD P gets rid of some noise injection around 2hz. We didn't see the pitch FF affect the yaw DOFs.
We left the new pitch filters running and accepted them in the Observe SDF. The yaw filters were left off.
I looked at the attempt at engaging the HAM1 yaw FF:
1) the filters that were loaded were correct, meaning they were what I expected
2) retraining with a time when the HAM1 pitch FF was on yields yaw filters similar to what was tried, so it doesn't look like a pitch/yaw interaction (Jennie also pointed to some evidence of this in the alog)
I suspect there might be cross coupling between the various yaw DOFs. I would suggest that we upload the newly trained filters (attached) and try to engage the yaw FF one by one, starting with CHARD, which is the one we care about most; see the sketch below.
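A minimal sketch of that one-by-one engagement, ramping each yaw gain in steps and backing off if the ~10 Hz noise reappears. The DOF list, monitor channel, and back-off threshold are hypothetical; only the gain-channel pattern follows the one quoted above:

# Hedged sketch: engage HAM1 yaw FF one DOF at a time, backing off if a
# band-limited noise monitor jumps. DOF names, MONITOR, and the 2x
# threshold are hypothetical.
import time
from epics import caput, caget

YAW_DOFS = ['CHARD', 'DHARD', 'PRC2', 'SRC1']       # hypothetical DOF order
MONITOR = 'H1:ISI-HAM1_BLRMS_Y_10HZ'                # hypothetical 10 Hz BLRMS

def gain_chan(dof):
    # Pattern quoted above; exact DOF tokens are an assumption.
    return f'H1:HPI-HAM1_TTL4C_FF_CART2ASC_Y_{dof}_{dof}_GAIN'

baseline = caget(MONITOR)
for dof in YAW_DOFS:
    for g in (0.3, 0.6, 1.0):                       # ramp in steps
        caput(gain_chan(dof), g)
        time.sleep(30)                              # let the loop settle
        if caget(MONITOR) > 2 * baseline:           # noise injected: back off
            caput(gain_chan(dof), 0.0)
            print(f'{dof}: backed off at gain {g}')
            break
    else:
        print(f'{dof}: engaged at gain 1.0')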
Sheila, Naoki, Vicky, Camilla
We continue to see the nominal SQZ angle change lock-to-lock (attached BLRMs), so we edited SQZ_MANAGER to scan the sqz angle (takes 120s) every time the SQZ locks, i.e. every time we go into NLN.
In SQZ_MANAGER we have:
We tested this by taking SQZ_MANAGER down and back up; it went through SCAN_SQZANG as expected and improved the SQZ by 1dB, with the range improving by ~10Mpc.
We'll want to monitor this change, as SCAN_SQZANG may not give us the best angle at the start of the lock when SQZ is very variable. We expect this won't delay us going into observing, since it only adds 120s and ISC_LOCK is often still waiting for ADS to converge during this time.
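For context, the kind of scan SCAN_SQZANG performs can be sketched as stepping the angle and keeping the value that minimizes a noise figure of merit. The angle channel name is assumed, the BLRMS monitor is hypothetical, and this is not the actual SQZ_MANAGER code:

# Hedged sketch of a ~120s squeezing-angle scan.
import time
import numpy as np
from epics import caput, caget

ANGLE = 'H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG'   # assumed angle channel
FOM = 'H1:SQZ-DCPD_BLRMS_KHZ'                  # hypothetical noise monitor

angles = np.arange(100, 200, 5.0)              # illustrative scan range, deg
fom = []
for a in angles:
    caput(ANGLE, float(a))
    time.sleep(120.0 / len(angles))            # spread the ~120s budget
    fom.append(caget(FOM))

best = angles[int(np.argmin(fom))]             # lower kHz noise = better SQZ
caput(ANGLE, float(best))
print(f'best angle: {best} deg')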
I have removed this change of path by removing the weighting, as we can see that last night SCAN_SQZANG at the start of the lock took us to bad squeezing once thermalized.
We weren't seeing this changing angular dependence as much before 77133 with the older PSAMS settings of (8.8V, -0.7V) or (7.2V, -0.72V); the current setting is (7.5V, 0.55V). We think we should revert to the older setting.
Sheila, Naoki, Rahul, Kar Meng, Terry
Same as yesterday (77188), we are again having trouble locking the FC today. The wind is again in the 0-20mph range. We can get green flashes but no locking, even with the FC feedback turned off; VCO locking also fails.
Rahul changed the M1 coil driver state on FC1 and FC2 from state 1 (usually only used in TFs) to state 2: IFO nominal for other triples. State 2 contains a low pass filter.
Rahul took FC2 P and L transfer functions; they look healthy and the same as in 66414.
Rahul took FC2 OSEMINF spectra and checked their health, as we previously had issues with the FC1 BOSEMs (72563). We can see the 0.3 to 0.4Hz peak; Rahul's not worried about it, but we could check old data for this peak.
Jim found that the HAM8 ISI has a resonance at the same place as FC2, a peak at 0.375Hz (see attached). He edited a gain and the motion seemed to improve enough to lock the FC.
During this time, Ibrahim, Oli, and I had followed the ObservationWithOrWithoutSqueezing wiki, edited all the SQZ guardian code nominal states, and accepted the no-squeezing SDFs. We then reverted these changes once the FC locked.
From the HAM8 summary pages, I don't see this 0.36 Hz peak between Dec 15 and Jan 15. The peak is basically exactly where FC2_L was oscillating in yesterday's screenshot. Since Jan 15 2024, the peak looks intermittently there or gone, pretty variable. Maybe exciting this peak is related to wind? Or maybe this is all totally unrelated to wind. I don't think this was an issue in O4a; maybe something changed after Jan 15.
Seems like another broken GS13 on HAM8, this time a horizontal sensor. I took some driven measurements looking at the l2l cps-to-gs13 transfer functions, and the H1 cps-to-gs13 TF is lower than the other 2 sensors by about 2x, see the first attached image. This affects the stability of the blend cross-over, which changes the gain peaking in the blends. I've compensated for now with a digital gain, but this may not work for long.
I tried compensating with a digital gain in the calibration INF filters for the ISI, and this seems to have improved things, as shown in the second image. The top subplot is the M3 pit witness for FC2, the second line is the gain I adjusted, and the third line shows LOG BLRMS for the X, RZ, and RX GS13s on HAM8. X and RX don't improve much, but the RZ motion improves a bit after changing the gain. The fourth line is the RZ cps residual, which is much quieter after increasing the gain to compensate for the suspected low response of the H1 GS13.
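The arithmetic behind the compensation can be sketched as a transfer-function-magnitude comparison between the weak sensor and its healthy neighbors. A minimal sketch with synthetic data standing in for the real driven measurement; sample rate, band, and noise levels are illustrative:

# Hedged sketch: estimate cps->gs13 TF magnitudes from driven data and
# derive a compensating digital gain for the weak H1 sensor. Drive and
# responses here are synthetic stand-ins.
import numpy as np
from scipy import signal

fs = 512.0                                     # illustrative sample rate
rng = np.random.default_rng(0)
drive = rng.standard_normal(int(600 * fs))     # stand-in for the l2l drive
resp = {
    'H1': 0.5 * drive + 0.05 * rng.standard_normal(drive.size),  # ~2x low
    'H2': 1.0 * drive + 0.05 * rng.standard_normal(drive.size),
    'H3': 1.0 * drive + 0.05 * rng.standard_normal(drive.size),
}

def tf_mag(x, y, f_lo=0.2, f_hi=1.0):
    # |TF| = |CSD(x, y)| / PSD(x), averaged over the blend-relevant band
    f, pxy = signal.csd(x, y, fs=fs, nperseg=4096)
    _, pxx = signal.welch(x, fs=fs, nperseg=4096)
    band = (f >= f_lo) & (f <= f_hi)
    return np.mean(np.abs(pxy[band]) / pxx[band])

mags = {name: tf_mag(drive, y) for name, y in resp.items()}
gain = np.mean([mags['H2'], mags['H3']]) / mags['H1']
print(f'suggested compensating INF gain for H1: {gain:.2f}')   # ~2.0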
To add to Vicky's comment: the peak's behavior seems complicated. It started at .6hz in January, then some time around March it moved down to its current frequency of ~.37hz. Lots of days are missing from the summary pages in that span, so it's hard to track. The transience of the peak is also consistent with broken seismometers we've seen in the past. The gain tweak I put in may not be a stable fix.
SDFs that were accepted for observing w/ sqz
FYI - FRS ticket 31005 is tracking the 1/2 gain GS-13