FAMIS 20010
PMC reflected power has been slightly increasing over the past ~3 days, and FSS transmitted power has been decreasing over the same period, but I don't see that PMC transmitted power has changed much at all.
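A quick way to reproduce this kind of multi-day trend is with gwpy; the snippet below is only a sketch, and the channel names are placeholders for the PMC reflected/transmitted and FSS transmitted power channels, not necessarily the exact PSL channel names.

    # Sketch: trend PMC/FSS powers over ~3 days with gwpy.
    # Channel names are placeholders; minute trends ('.mean,m-trend') would be
    # a lighter-weight alternative to full-rate data over a 3-day span.
    from gwpy.timeseries import TimeSeriesDict

    channels = ['H1:PSL-PMC_REFL_DC_OUTPUT',
                'H1:PSL-PMC_TRANS_DC_OUTPUT',
                'H1:PSL-FSS_TPD_DC_OUTPUT']    # placeholders
    data = TimeSeriesDict.get(channels, 'Jan 6 2024', 'Jan 9 2024')
    plot = data.plot()
    plot.show()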
TITLE: 01/09 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is LOCKING and at TRANSITION_FROM_ETMX
EQ recovery is going smoothly
Lockloss alogs:
Lockloss 21:13 UTC (EQ)
Other:
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:24 | FAC | Tyler | EX, EY | N | Tumbleweed check | 17:24 |
17:17 | SUS | Randy | MX | N | Inventory | 18:03 |
18:03 | FAC | Karen | Optics/Vac Prep | N | Technical Cleaning | 18:03 |
18:29 | VAC | Travis | MX | N | Pfeiffer Box Check | 19:29 |
21:46 | SUS | Randy | LVEA | N | Tue Maint prep | 22:01 |
21:57 | PEM | Ryan C | CER | N | Looking at dust monitors | 22:15 |
22:45 | VAC | Gerardo | LVEA | N | Vacuum prep for Tue | 23:05 |
TITLE: 01/08 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.51 μm/s
QUICK SUMMARY:
H1 is relocking following a M7.0 earthquake; currently up to MOVE_SPOTS. All systems look good; wind is low and microseism is sitting just below the 90th percentile.
Since TJ implemented bootstrapping on the CO2_PWR guardians (74075), the power sent into vacuum has been much more stable.
We installed a new chiller on CO2X (73704) on October 27th. It has been more stable and decaying less quickly since then, but it has still been relocking around once a week. Plot attached.
As in 74872 and 74741, I have taken PEM_MAG_INJ and SUS_CHARGE from WAITING to DOWN so that they do not run tomorrow. Instead, tomorrow Louis and Sheila will try the risky DARM loop swaps and calibration from 7am PT. To re-enable the automated measurements, the nodes should be requested to INJECTIONS_COMPLETE before next Tuesday.
IFO was unlocked from wind this morning. Re-requested both guardians to INJECTIONS_COMPLETE.
Lockloss due to 7.0 EQ from Philippines
Staying in DOWN until it passes.
We've been seeing the SQZ angle not optimizing correctly (75245, 75151). At 20:05 Ibrahim took us into commissioning and I tried to change the ADF frequency H1:SQZ-ADF_VCXO_FREQ_SET from 1300Hz to 200Hz. The ADF line didn't move from the 1300Hz region; it just became noisy when I changed it, and the PLL didn't lock. Unsure why the ADF wouldn't move, I also tried 800Hz to no avail. The ADF frequency hasn't successfully been changed since Daniel adjusted the model in May (69453).
At ~16:15 UTC, when we got to NLN, I tried this again and again failed.
Vicky showed (image) that both H1:SQZ-ADF_VCXO_CONTROLS_SETFREQUENCYOFFSET and H1:SQZ-ADF_VCXO_FREQ_SET need to be changed; once both were changed, the ADF successfully moved. I also turned the size of the line down by turning up H1:SQZ-RLF_INTEGRATION_ADFATTENUATE, but it was still big and probably reduced our range by a few Mpc. Attached are the settings changed and then reverted.
It didn't seem to be able to converge on zero (plot attached). After trying twice I reverted the changes.
Dhruva points out that to correctly change both of these settings we can use the script in /sqz/h1/scripts/ADF/: 'python setADF.py -f newfrequency'
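For reference, a minimal sketch of what such a helper has to do, using pyepics. This is not the contents of setADF.py, just an illustration, and whether both channels take the same value (or one is an offset) is an assumption here.

    #!/usr/bin/env python
    # Sketch only: set both ADF frequency channels together, as described above.
    # Not the site script; assumes both channels take the new frequency in Hz.
    import argparse
    from epics import caput  # pyepics

    def set_adf_frequency(freq_hz):
        """Write the new ADF frequency to both channels that must move together."""
        caput('H1:SQZ-ADF_VCXO_CONTROLS_SETFREQUENCYOFFSET', freq_hz)
        caput('H1:SQZ-ADF_VCXO_FREQ_SET', freq_hz)

    if __name__ == '__main__':
        parser = argparse.ArgumentParser(description='Move the ADF line (sketch).')
        parser.add_argument('-f', '--frequency', type=float, required=True,
                            help='new ADF frequency in Hz, e.g. 200')
        args = parser.parse_args()
        set_adf_frequency(args.frequency)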
No particular cause for this lockloss.
The lockloss tool shows that EX L3 saturated first, prompting the lockloss.
The following lock acquisition was fully automatic.
Mon Jan 08 10:11:30 2024 INFO: Fill completed in 11min 26secs
Gerardo confirmed a good fill curbside.
Dave alerted me that this had frozen, as indicated by a blue screen on the client. The systemd service status reported:

● pylon-camera-server@H1-VID-CAM-FCES-IR-TRANS-B.service - Basler 2D GigE camera RTP H264 UDP server
     Loaded: loaded (/etc/systemd/system/pylon-camera-server@.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/pylon-camera-server@.service.d
             └─lho.conf
     Active: active (running) since Tue 2023-10-03 10:00:32 PDT; 3 months 3 days ago
   Main PID: 402217 (pylon-camera-se)
      Tasks: 40 (limit: 77094)
     Memory: 114.8M
        CPU: 3w 3d 1h 32min 52.488s
     CGroup: /system.slice/system-pylon\x2dcamera\x2dserver.slice/pylon-camera-server@H1-VID-CAM-FCES-IR-TRANS-B.service
             └─402217 /usr/bin/pylon-camera-server H1-VID-CAM-FCES-IR-TRANS-B.ini

Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Height: 540
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting GevSCPSPacketSize (packet size): 8192
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting GevSCPD (inter-packet delay): 25000
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Starting grabbing.
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting auto exposure: 0
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting exposure time: 200000
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting auto gain: 0
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting gain: 360
Jan 05 22:33:53 h1digivideo3 pylon-camera-server[402217]: The grab failed.
Jan 05 22:33:53 h1digivideo3 pylon-camera-server[402217]: The buffer was incompletely grabbed. This can be caused by performance problems of the network hardware used, i.e. network adapter, switch, or ethernet

I ran 'service pylon-camera-server@H1-VID-CAM-FCES-IR-TRANS-B restart' and it came back. This is the first I can recall of this occurring on this new server and code.
TITLE: 01/08 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.50 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING (17 hr 32 min lock).
TITLE: 01/08 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: H1 was locked the entire shift; current lock stretch up to 9.5 hours. Just one instance of lost observation from SQZ unlocking, but everything was brought back swiftly.
State of H1: Observing at 159Mpc
Very quiet shift with H1 observing the entire time except at 03:25 UTC when SQZ unlocked (SQZ_OPO_LR Guardian reported "PZT voltage limits exceeded"). Guardians were able to bring everything back automatically and observing was resumed within 2 minutes. BNS range improved by 5-6Mpc after this event.
We have a checker in SQZ_MANAGER to relock the OPO if the PZT is not in the 50-110V range while the IFO is down, to prevent this. The PZT changed too quickly for this checker to help though, probably caused by a 0.4 degF LVEA temperature change in zone 4 at the time.
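For context, the logic of that checker is roughly as below. This is a sketch only, not the actual SQZ_MANAGER code, and the PZT readback channel name is a placeholder.

    # Sketch of the SQZ_MANAGER-style OPO PZT range check (illustration only).
    # 'H1:SQZ-OPO_PZT_VOLTS' is a placeholder, not necessarily the real channel.
    from epics import caget

    PZT_MIN_V = 50.0
    PZT_MAX_V = 110.0

    def opo_pzt_needs_relock():
        """Return True if the OPO PZT voltage is outside the healthy range."""
        pzt_volts = caget('H1:SQZ-OPO_PZT_VOLTS')
        return not (PZT_MIN_V <= pzt_volts <= PZT_MAX_V)

    # The checker only acts on this while the IFO is down, which is why a fast
    # PZT drift during a lock (as happened here) slips past it.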
The range increase appears to be because the SQZ angle was in a bad place before the relock (220 degrees rather than the nominal 180), see attached. Unsure if we expect the OPO PZT changing to affect the SQZ angle.
This may be improved by moving the ADF closer to where we want to optimize (200Hz?). Currently the ADF is at 1.3kHz, but the best range is with SQZ not optimized at 1.3kHz (75151). There could be two zero crossings of the ADF servo at 1.3kHz, one with good 300Hz SQZ and one with bad 300Hz SQZ, and sometimes the servo takes us to the wrong one.
Attached is DARM before and after this relock.
TITLE: 01/07 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING
Other:
GraceDB query failure still flashing on and off at times - we (and LLO) were experiencing the same thing yesterday (01/06), so it's probably still mini-server delays/reconnections.
LOG:
None
TITLE: 01/07 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 7mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.61 μm/s
QUICK SUMMARY:
H1 has been locked and observing for 1.5 hours. Range this lock is a bit lower than last, but otherwise all systems look good.
Sheila, Louis, with help from Camilla and TJ
Louis and I have had several locklosses transitioning to the new DARM configuration. We don't understand why.
This transition was done several times in December; one of these times was 15:15 UTC on December 19th, when the guardian state was used (74977). Camilla used the guardian git to show us the code that was loaded that morning, which does seem very much the same as what we are using now (the only difference is a ramp time which Louis found was wrong and corrected; this ramp time doesn't actually matter though, since the value is only being reset to the value it already has).
We also looked in the filter archive and see that the H1SUSETMX filters have not been reloaded since December 14th, so the filters should be the same. We also looked at the filters and believe that they should be correct.
In the last attachment to 74790 you can see that this configuration has more drive to the ESD at the microseism (the reduction in the ESD RMS comes from reduced drive around a few Hz), so this may be less robust when there is more wind and microseism. I don't think this is our current problem though, because we are losing lock due to a 2.6Hz oscillation saturating the ESD.
We've tried to do this transition both in the way it was done in December (using the NEW_DARM state) and by setting the flag in the TRANSITION_FROM_ETMX state, which I wrote in December but which we hadn't tested until today. This code looks to have set everything up correctly, but we still lose lock due to a 2.6Hz saturation of the ESD.
Camilla looked at the transition we did on December 19th; there was also a 2.6Hz ring-up at that time, but perhaps with the lower microseism we were able to survive it. A solution may be to ramp to the new configuration more quickly (right now we use a 5 second ramp).
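If we do try a faster ramp, the usual CDS pattern is to shorten the filter module's ramp time before writing the new value; a rough sketch is below. The filter module name is a placeholder, not necessarily one involved in this transition, and the real transition ramps several settings, not just one gain.

    # Sketch: shorten a filter-module ramp (placeholder channel name).
    # Standard CDS filter modules expose _TRAMP (ramp time, s) and _GAIN channels.
    import time
    from epics import caput

    FM = 'H1:SUS-ETMX_L3_DRIVEALIGN_L2L'   # placeholder filter module

    def ramp_gain(new_gain, ramp_time=2.0):
        caput(FM + '_TRAMP', ramp_time)    # e.g. 2 s instead of the current 5 s
        caput(FM + '_GAIN', new_gain)
        time.sleep(ramp_time)              # wait for the ramp to finish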
Elenna suggested ASC could be making this transition unstable and that we could think about raising the gain of an ASC loop during the transition. On Friday's lockloss, attached, you can see CSOFT and DSOFT YAW wobble at 2.6Hz. The HARD loops look fine.
Camilla, Erik, Dave:
h1hwsmsr (HWS ITMX and /data RAID) computer froze at 22:14 Thu 04 Jan 2024 PST. The EDC disconnect count went to 88 at this time.
Erik and Camilla have just viewed h1hwsmsr's console, which indicated a HWS driver issue at the time. They rebooted the computer to get the /data RAID NFS shared to h1hwsex and h1hwsmsr1. Currently the ITMX HWS code is not running, we will start it during this afternoon's commissioning break.
One theory of the recent instabilities is the camera_control code I started just before the break to ensure the HWS cameras are inactive (in external trigger mode) when H1 is locked. Every minute the camera_control code gets the status of the camera, which along with the status of H1 lets it decide if the camera needs to be turned ON or OFF. Perhaps with the main HWS code getting frames from the camera and the control code getting the camera status, there is a possible collision risk.
To test this, we will turn the camera_control code off at noon. I will rework the code to keep the number of camera operations to the bare minimum.
At ~20:00 UTC we left the HWS code running (restarted ITMX) but stopped Dave's camera control code (74951) on ITMX, ITMY, ETMY, leaving the cameras off. They'll be left off over the weekend until Tuesday. ETMX is still down from yesterday (75176).
If the computers remain up over the weekend we'll look at incorporating the camera control into the hws code to avoid crashes.
Erik swapped h1hwsex to a new v1 machine. We restarted the HWS code and turned the camera to external trigger mode so it too should remain off over the weekend.
I've commented out the HWS test entirely (only ITMY was being checked) from DIAG_MAIN since no HWS cameras are capturing data. Tagging OpsInfo.
Trace from h1hwsmsr crash attached.
All 4 computers remained up and running over the weekend, with the camera on/off code paused. We'll look into either making Dave's code smarter or incorporating the camera on/off switching into the hws-server code, so that we don't send multiple calls to the camera at the same time, which is our leading theory for why these HWS computers have been crashing.
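One way to fold the on/off switching into the hws-server process so that a status poll can never overlap a frame grab is to put every camera operation behind a single lock and only touch the camera when the requested state actually changes. A rough sketch, with placeholder function names (not the real HWS or pylon API):

    # Sketch: serialize camera operations and skip redundant on/off calls.
    # camera.set_trigger_mode() and camera.grab() are placeholder calls.
    import threading

    camera_lock = threading.Lock()
    last_requested_state = None   # cache so we only touch the camera on changes

    def set_camera_active(camera, active):
        """Switch the camera on/off only when the requested state changes."""
        global last_requested_state
        if active == last_requested_state:
            return                          # no camera call if nothing changed
        with camera_lock:                   # one camera operation at a time
            camera.set_trigger_mode('internal' if active else 'external')
        last_requested_state = active

    def grab_frame(camera):
        with camera_lock:                   # frame grabs take the same lock
            return camera.grab()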
Jenne, Naoki, Louis, Camilla, Sheila
Here is a comparison of the DARM CLEAN spectrum with OM2 hot vs cold. The second screenshot shows a time series of OM2 cooling off. The optical gain increased by 2%, as was seen in the past (for example 71087). Thermistor 1 shows that the thermal transient takes much longer (12+ hours) than what thermistor 2 says (2 hours).
Louis posted a comparison of the calibration between the two states; there are small differences in calibration of ~1% (74913). While the DARM spectrum is worse below 25Hz, it is similar at 70 Hz, where in the past we thought the sensitivity was worse with OM2 cold. From 100-200 Hz the sensitivity seems slightly better with OM2 cold; some of the peaks are removed by Jenne's jitter subtraction (74879), but there also seems to be a lower level of noise between the peaks (which could be small enough to be a calibration issue). At high frequency the cold OM2 noise seems worse, which could be because of the squeezing. We plan to take data with some different squeezing angles tomorrow and will check the squeezing angle as part of that.
So, it seems that this test gives us a different conclusion than the one we did in the spring/summer: it now seems that we should be able to run with OM2 cold to have better mode matching from the interferometer to the OMC. We may not have had our feedforwards well tuned in the previous test, or perhaps some other changes in the noise mean that the result is different now.
Is this additional noise at low frequency due to the same non-stationarity we observed before, which we believe is related to the ESD upconversion? Probably not; here's why.
The first plot compares the strain spectrum from two times with cold and hot OM2. This confirms Sheila's observation.
The second and third plots are spectrograms of GDS-CALIB_STRAIN during the two periods. Both show non-stationary noise at low frequency. The third plot shows the strain spectrogram normalized to the median of the hot OM2 data: besides the non-stationarity, it looks like the background noise is higher below 30 Hz.
This is confirmed by looking at the BLRMS in the 16-60 Hz region for the two times, as shown in the fourth plot: it's higher with cold OM2.
Finally, the last plot shows the correlation between the ESD RMS and the strain BLRMS, normalized to the hot OM2 state. There is still a correlation, but it appears again that the cold OM2 state has additional background noise: when the ESD RMS is at the lower end, the strain BLRMS settles to higher values.
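For reference, the 16-60 Hz BLRMS traces used here can be reproduced with something like the gwpy snippet below; this is only a sketch, and the GPS times are placeholders for the hot/cold OM2 stretches.

    # Sketch: band-limited RMS of strain in the 16-60 Hz band with gwpy.
    # GPS times below are placeholders for the stretches compared in the plots.
    from gwpy.timeseries import TimeSeries

    start, end = 1388000000, 1388003600    # placeholder GPS times

    strain = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
    blrms_16_60 = strain.bandpass(16, 60).rms(stride=10)   # 10 s RMS stride
    blrms_16_60.plot()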
Here is the same comparison, without squeezing, using times from 74935 and 74834.
This suggests that where cold OM2 seemed better than hot OM2 above, that improvement is due to the squeezing (and the jitter subtraction Jenne added, which is also on in this plot for cold OM2 but not for hot OM2). The additional noise with cold OM2 reaches up to about 45Hz.
After we optimized the ADF demod phase in 74972, the BNS range seems better and is consistently 160-165Mpc. The attached plot shows the comparison of OM2 cold/hot with/without SQZ. The OM2 cold with SQZ is measured after the optimization of the ADF demod phase; the other measurements are the same as in Sheila's previous plots.
This plot supports what Sheila says in the previous alogs.
Echoing the above, and summarizing a look at OM2 with SQZ in both Sept 2023 and Dec 2023 (a running dictionary of GPS times is attached here).
If we compare the effect of squeezing, there is higher kHz squeezing efficiency with hot OM2. We can look either at just the DARM residuals, dB[sqz/unsqz] (top), or at the subtraction of non-quantum noise (bottom), which shows that hot OM2 improved the kHz squeezing level by ~0.5 dB at 1.7 kHz (the blue SQZ BLRMS 5). This is consistent with the summary pages: SQZ has not reached 4.5 dB since cooling OM2 (74861). This possibly suggests better SQZ-OMC mode-matching with hot OM2.
Without squeezing, cold OM2 has more optical gain and more low-frequency non-quantum noise. This suggests better IFO-OMC mode-matching with cold OM2.
In total, it's almost a wash for kHz sensitivity: heating OM2 loses a few % optical gain, but recovers 0.2-0.5 dB of shot noise squeezing.
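To spell out the two ways of quoting the squeezing level mentioned above: the raw DARM residual ratio in dB, and the same ratio after subtracting an estimate of the non-quantum noise in quadrature. A minimal numpy sketch, assuming the ASDs are already computed on a common frequency vector and that the non-quantum noise is the same at both times:

    # Sketch: two ways to quote squeezing level from sqz/no-sqz DARM ASDs.
    # asd_sqz, asd_nosqz, asd_classical are numpy arrays on the same frequency
    # vector; asd_classical is an estimate of the non-quantum noise.
    import numpy as np

    def sqz_db_raw(asd_sqz, asd_nosqz):
        """Raw DARM residual ratio, dB[sqz/unsqz] (negative means squeezing)."""
        return 20 * np.log10(asd_sqz / asd_nosqz)

    def sqz_db_quantum_only(asd_sqz, asd_nosqz, asd_classical):
        """Same ratio after removing the non-quantum noise in quadrature."""
        qn_sqz = np.sqrt(asd_sqz**2 - asd_classical**2)
        qn_nosqz = np.sqrt(asd_nosqz**2 - asd_classical**2)
        return 20 * np.log10(qn_sqz / qn_nosqz)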
It's worth noting the consistent range increases with SQZ tuning and improvements: even in FDS, there is a non-zero contribution of quantum noise down to almost 50 Hz. For example, Naoki's adjustment of the SQZ angle setpoint on 12/21 (74972) improved the range, as did Camilla's January SQZ tuning (75151). Looking at DARM (bottom green/purple traces), these SQZ angle tunings reproducibly improved quantum noise between about 60-450 Hz.
Here are some more plots of the times that Vicky plotted above.
The first attachment is just a DARM comparison with all 4 no-SQZ times, OM2 cold vs hot in December vs September.
Comparing OM2 hot in September vs December shows that our sensitivity from 20-40 Hz has gotten worse since September; the MICH coherence seems lower, while the jitter and SRCL coherences seem similar. The same comparison for OM2 cold shows that with OM2 cold our sensitivity has also gotten worse from 15-30 Hz.
Comparing cold vs hot, in September the MICH coherence did get worse from 60-80 Hz for cold OM2, which might explain the worse sensitivity in that region. The MICH coherence got better from 20-30 Hz, where the sensitivity was better for cold OM2. The December test had better-tuned MICH FF for both hot and cold OM2, so it is the better test of the impact of the curvature change.
As Gabriele pointed out with his BRUCO (74886), there is extra coherence with DHARD Y for cold OM2 at the right frequencies to help explain the extra noise. There isn't much change in the HARD pitch coherence between these December times, but the last attachment here shows a comparison of the HARD Y coherences for hot and cold OM2 in December.
Peter asked if the difference in coherence with the HARD Yaw ASC was due to a change in the coupling or the control signal.
Here is a comparison of the control signals with OM2 hot and cold; they look very similar at the frequencies of the coherence.