Mid X alarmed this morning due to low temperature in the main room. The heating coil for this space was limited to a 95 F discharge air temperature in the tuning parameters. This meant that even with a 100% heating call in the space, the coil was only allowed to run at ~50% to prevent it from putting out over 95 F. I increased the discharge air temperature limit from 95 F to 115 F to allow the heater to reach 100% capacity. The space has returned to setpoint.
[Dave, Erik]
CDS Login was updated to fix EPICS problems from a previous update which crashed the EDC. See here for details on the crash: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=74959
Lockloss alerts were temporarily down during the upgrade.
Workarounds used to keep things running in the interim were removed.
On 26th December (75041), the IFO unlocked during the ETMX stage of the in-lock charge measurements; in 75171 I adjusted the guardian code to make that transition more robust.
The ITMX, ITMY, and ETMY measurements did run; analysis attached. ETMX hasn't run since November. We will try to run them all this week in commissioning time.
Note that the x-axis of the plots isn't a to-scale time axis; we will try to change this.
Tue Jan 09 10:10:24 2024 INFO: Fill completed in 10min 20secs
Gerardo confirmed a good fill curbside.
We updated the tzdata and leapseconds database on some frozen infrastructure systems, including the FE computers, DAQD, and guardian machines. This was done with an apt-get install of tzdata only, not a general update of the software versions.
Vicky noticed that H1:SQZ-SHG_FIBR_REJECTED_DC_POWERMON is high. This is the rejected polarization sent through the OPO pump fiber and measured in HAM7; trend over time attached.
Followed instructions in 71761 to reduce it from 0.55 to 0.03 (plot attached). With the ISS on, H1:SQZ-OPO_ISS_CONTROLMON is at 4.1 V.
With SQZ_OPO_LR in DOWN, H1:SQZ-OPO_REFL_DC_POWER increased from 2.2 to 2.3. Our maximum after on-table AOM and fiber alignment is around 2.9 V (last done August, September, and November 74479).
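For quick reference, here is a minimal sketch of reading back the channels mentioned above from a control-room Python session, assuming pyepics and access to the H1 EPICS network (the actual adjustment procedure is the one in 71761):

from epics import caget

channels = [
    "H1:SQZ-SHG_FIBR_REJECTED_DC_POWERMON",  # rejected polarization into the OPO pump fiber (HAM7)
    "H1:SQZ-OPO_ISS_CONTROLMON",             # ISS control signal with the loop on
    "H1:SQZ-OPO_REFL_DC_POWER",              # OPO REFL power (checked with SQZ_OPO_LR in DOWN)
]

for ch in channels:
    print(ch, caget(ch))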
Lockloss at 08:35 UTC from an earthquake; winds were also high at the time.
TITLE: 01/09 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 33mph Gusts, 21mph 5min avg
Primary useism: 0.14 μm/s
Secondary useism: 0.83 μm/s
QUICK SUMMARY: Lost lock 7.5 hours ago and Ryan was unable to relock with the high wind and high useism. Maintenance activities have started.
Workstations were updated and rebooted. This was an OS package update only. Conda packages were not updated.
H1 called for assistance at 10:15 UTC; it seems to be stuck in initial alignment (IA) as we're still ringing down from an earthquake, and winds have picked up, which is making locking the Y-arm a struggle.
IA finished at 11:12 UTC, back to locking. I had to help the Y-arm lock during IA.
With the wind increasing and gusts up to 40 mph, I've taken SEI back to WINDY from MICROSEISM.
Wind gusts up to 60 mph! ALS is having lots of difficulty locking due to the wind and high microseism. The forecast on windy.com says the wind is likely to stay at the same level for the rest of the morning and then increase further, peaking around 11am today. So I'm not super hopeful of the IFO getting relocked unless the wind dies down for a short period.
TITLE: 01/09 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Quiet shift with one superevent. H1 has now been locked for almost 8 hours.
State of H1: Observing at 160Mpc
H1 has been locked for almost 4 hours. Quiet night so far with just one minor earthquake passing through.
Full DQ shift report is here.
FAMIS 26273, last checked in alog75087
Looks like MY fans were switched 5 days ago, but otherwise all fans look okay.
As in 74872 and 74741, I have taken PEM_MAG_INJ and SUS_CHARGE from WAITING to DOWN so that they do not run tomorrow. Instead, tomorrow Louis and Sheila will try the risky DARM loop swaps and calibration starting at 7am PT. To re-enable the automated measurements, the nodes should be requested to INJECTIONS_COMPLETE before next Tuesday.
The IFO was unlocked due to wind this morning. Re-requested both guardians to INJECTIONS_COMPLETE.
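For reference, a hedged sketch of how these requests could be made from Python, assuming pyepics and the usual H1:GRD-&lt;NODE&gt;_REQUEST channel convention (in practice this is normally done from the guardian control screens):

from epics import caput

for node in ("PEM_MAG_INJ", "SUS_CHARGE"):
    # Request the node back to its nominal post-measurement state.
    caput(f"H1:GRD-{node}_REQUEST", "INJECTIONS_COMPLETE")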
We've been seeing the SQZ angle not optimizing correctly (75245, 75151). At 20:05 Ibrahim took us into commissioning and I tried to change the ADF frequency, H1:SQZ-ADF_VCXO_FREQ_SET, from 1300 Hz to 200 Hz. The ADF line didn't move from the 1300 Hz region; it just became noisy when I changed it. The PLL didn't lock. Unsure why the ADF wouldn't move, I also tried 800 Hz to no avail. The ADF frequency hasn't been successfully changed since Daniel adjusted the model in May (69453).
At ~16:15 UTC, when we got to NLN, I tried this again and failed.
Vicky showed (image) that both H1:SQZ-ADF_VCXO_CONTROLS_SETFREQUENCYOFFSET and H1:SQZ-ADF_VCXO_FREQ_SET need to be changed; once both were changed, the ADF successfully moved. I also turned the size of the line down by turning up H1:SQZ-RLF_INTEGRATION_ADFATTENUATE, but it was still big and probably reduced our range by a few Mpc. Attached are the settings that were changed and then reverted.
It didn't seem to be able to converge on zero (plot attached). After trying twice, I reverted the changes.
Dhruva points out that to correctly change both of these settings, we can use the script in /sqz/h1/scripts/ADF/: 'python setADF.py -f newfrequency'.
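For illustration only, a hypothetical sketch of the idea behind that script (not its actual contents), assuming pyepics and that both settings take the same frequency value, which may not match the real script's mapping:

from epics import caput

def set_adf_frequency(freq_hz):
    # Write both ADF frequency settings together so they stay consistent.
    caput("H1:SQZ-ADF_VCXO_CONTROLS_SETFREQUENCYOFFSET", freq_hz)
    caput("H1:SQZ-ADF_VCXO_FREQ_SET", freq_hz)

# set_adf_frequency(200)  # e.g. move the ADF line to 200 Hz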