I have added a disk-usage section to the DAQSTAT system. It reports a warning if disk usage is between 93% and 95%, and an error if it is greater than 95%.
The disk usage information is updated hourly by cds_report running on cdsmanager.
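For reference, a minimal sketch of that threshold logic in Python (the function and path below are illustrative, not the actual cds_report code):

```python
import shutil

WARN_PCT = 93.0   # warn when usage is between 93% and 95%
ERROR_PCT = 95.0  # flag an error when usage exceeds 95%

def disk_usage_status(path="/"):
    """Return (status, percent_used) for the given mount point."""
    usage = shutil.disk_usage(path)
    pct = 100.0 * usage.used / usage.total
    if pct > ERROR_PCT:
        return "ERROR", pct
    if pct >= WARN_PCT:
        return "WARNING", pct
    return "OK", pct
```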
DAQSTAT was restarted several times yesterday and once today after lock-loss to install the new code.
Fri Jun 16 10:03:52 2023 INFO: Fill completed in 3min 52secs
Jordan confirmed a good fill curbside.
During a lock-loss around 10am local time, we took another device out to the CER switch to check that the port cam15 is plugged into is live; it was able to power the other device. We are referring the next steps to Filiberto: when he is available, he will go to HAM6 and find/replace the camera.
Lockloss @ 16:54 UTC, possibly from CSOFT_P ringup. I don't notice excess ground motion or other immediate cause.
After having been locked for 3+ hours, I started a "sensing function only" calibration measurement following the instructions on the TakingCalibrationMeasurements wiki. This measurement took ~30 minutes after taking H1 out of Observing mode and into NLN_CAL_MEAS.
Attached are a screenshot of the calibration monitor screen prior to the measurement and the generated calibration report: /ligo/groups/cal/H1/reports/20230616T161654Z/H1_calibration_report_20230616T161654Z.pdf
FAMIS 23810
Noise on EX_FAN2_570_2 seems to have slightly increased 3 days ago.
All other fans look good.
TITLE: 06/16 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 138Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 9mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY: Taking over from Austin; H1 has been locked and observing for 2 hours.
TITLE: 06/16 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 137Mpc
SHIFT SUMMARY:
- Lock #1:
- Waiting in DOWN for the ground motion to settle before starting an IA, which went smoothly
- Lock #2:
- Lock #3:
- Leaving H1 to Ryan S. locked and in observing
LOG:
No log for this shift.
LOCKLOSS @ 9:07, a 5.7 EQ from Tonga. Ground motion was still ringing down when this EQ hit, peakmon was ~1000 counts at the time of the lockloss. There was also what looked to be an ASC ringup for MICH P/Y on nuc29, but I'm attributing this to the earthquake. Peakmon is already starting to drop so I'm going to give it a few minutes and try relocking.
Following a lockloss caused by an earthquake, H1 has now just gotten back into observing as of 11:04 UTC.
About an hour ago the BSC5 AIP railed: the MEDM vacuum site overview screen shows a red field for this AIP, and the X-End station MEDM shows it railed at 10 mA. No action is required for now; we will take a look at it on Tuesday to assess the situation.
Closes 25869, last completed by Corey in May
All looks nominal, barring a few glitches here and there. The HEPI L0 CONTROL VOUT does look a bit variable, but not terribly so.
TITLE: 06/16 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 134Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
- H1 has been locked for 8.5 hours and all looks stable
- SEI/DMs/CDS ok
TITLE: 06/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 136Mpc
SHIFT SUMMARY:
H1 locked at NLN the entire shift. There was a little over 2hrs OUT of OBSERVING for COMMISSIONING work by Robert & Elenna. H1 rode through a 4.5 earthquake from Mexico. HAM6 OMC video is not working on nuc30 (I believe Erik has been made aware).
LOG:
Addressed TCS Chillers (10:30pm local) & later CLOSED FAMIS #21123:
[Measurements attached]
BSC CPS: Looks good.
HAM CPS: Looks good.
H1's been locked 4.5hrs. 2hrs of this was for commissioning time; otherwise OBSERVING. Just had a 4.5 EQ from Oaxaca roll through (hasn't posted on USGS for some reason). Otherwise, a nice shift with much less wind than the last 2 days!
Here is a list of the things we need to change to return as closely as possible to a desirable 60W configuration.
If you are looking for a timestamp to determine when the power change occurred, the last full lock at 60W was on April 6 from about 17:00 UTC to April 7 3:30 UTC.
Under LSC controls, I claimed that we should revert the PRCL loop design; however, Gabriele reminded me that the new PRCL design has better suppression (see alog 68817). We should keep this new design, but we still need to determine how/if we should change the gain to ensure the loop UGF is around 30 Hz.
Under LSC feedforward, I forgot to mention that we did not run with PRCL feedforward at 60W, so we can turn that back off at 60W.
I have also recovered the old MICH FF filter that was in FM9, called "May_d". At 60W, we will need to engage FM6-FM9, labeled May_a-d.
We will need to update the violin mode threshold checker. The counts value for the DARM offset was hard-coded and will be different at 60W. This value will only need to change if we change the DARM offset.
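As a sketch of the kind of change this implies (all names and numbers below are placeholders, not the actual checker code), the hard-coded counts value could be promoted to a single constant that gets updated whenever the DARM offset changes:

```python
# Placeholder value: the counts corresponding to the DARM offset would need
# to be remeasured at 60W if the DARM offset changes.
DARM_OFFSET_COUNTS = 20000.0

def violin_mode_ok(mode_rms_counts, ratio=1e-3):
    """Check a violin mode RMS against a threshold scaled by the DARM offset.

    The ratio here is made up for illustration.
    """
    return mode_rms_counts < ratio * DARM_OFFSET_COUNTS
```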
Tagging a lot of the teams who will either need to be involved in these changes, or at least be impacted by these changes when/while we revert.
J. Kissel, J. Driggers, N. Aritomi, S. Dwyer
Just FYI, I brought up the open question in Elenna's aLOG about
DARM offset: 20 mA, not sure if we want to revert this value
The quick consensus (without agreeing to write it in stone) is that we "plan" to *not* revert the DARM offset, leaving us with 40 mA of current on the DCPDs, as has been the case since May 05 2023 (see LHO:69358).
J. Kissel, J. Driggers, S. Dwyer
Regarding the following setting suggestions in this bullet point,
    SRCL offset: we had been running with an offset of -175. This was also with the previous LSC-POP_RF45 whitening at 21 dB. We could revert the whitening change as well if we think it's better for noise considerations.
The plan is to *definitely* go to the -175 ct SRCL offset, however -- upon discussion this morning -- we've decided *not* to revert the reduction in POP A RF45 whitening gain from +21 dB to +15 dB. Said with all positives to avoid confusion, we'll continue to reduce the gain to +15 dB rather than revert to +21 dB.
We think
- the extra ADC range headroom is nice,
- the sacrifice in SRCL / PRCL sensing noise is minimal, and/or has minimal impact***
- for now, today, when we power down, we want to change as little as is needed to achieve stability, rather than revert absolutely everything.
***One may find the assessment of the noise impact in LHO:69350.
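As a quick worked number for the headroom point: lowering the whitening gain from +21 dB to +15 dB reduces the signal amplitude into the ADC by 6 dB, i.e. a factor of ~2:

```python
gain_change_db = 21 - 15                       # dB of whitening gain removed
headroom_factor = 10 ** (gain_change_db / 20)  # amplitude ratio, ~2.0
print(f"extra ADC amplitude headroom: ~{headroom_factor:.1f}x")
```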
I forgot to include this in this alog, but the CSOFT P gain should probably be reduced to 20 again. This was a change made late last week.
Naoki, Vicky
To try damping the 80 kHz PI recently causing locklosses (LHO:70434), today we installed a new path on PI28, with drive sent to ETMX. To phase-lock the ESD damping drive to the PI mode, we bandpassed the DCPD signal around 80.296 kHz (foton here). This frequency was chosen based on the DCPDs' full-spectrum signal around 80.3 kHz, where today in full lock we saw the 3 peaks shown in red here, between 80.295 and 80.301 kHz. Our problem seems to be the bigger peak around *296, where the pink cursors are centered. We are trying to damp on ETMX first, since it seems like the PI29 80 kHz damping on ETMX could impact this mode.
The new path for PI28 has been updated and damping is guardianized, but this PI28 damping is untested, and there are no verbal alarms for PI28 yet.
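For illustration only (this is not the foton filter linked above), a band-pass isolating the 80.296 kHz mode might look like the following sketch; the sample rate and bandwidth here are assumptions:

```python
from scipy import signal

fs = 524288.0  # assumed fast DCPD sample rate (Hz)
f0 = 80296.0   # target PI mode frequency (Hz)
hw = 5.0       # assumed band-pass half-width (Hz)

# 4th-order Butterworth band-pass, realized as second-order sections
sos = signal.butter(4, [f0 - hw, f0 + hw], btype="bandpass", fs=fs, output="sos")

def bandpass_dcpd(samples):
    """Isolate the 80.296 kHz PI mode from a DCPD time series."""
    return signal.sosfilt(sos, samples)
```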
Summary with the current status of PI damping:
| PI frequency | PI damping mode number | Test Mass | PI Guardian status | _DAMP_GAIN |
| 10.428 kHz | 24 | ETMY | automated | 1000 |
| 10.431 kHz | 31 | ETMY | automated | 1000 |
| 80.302 kHz (LHO:68760) | 29 | ETMX | automated, likely working (LHO:70243) | 50000 |
| 80.296 kHz (LHO:70443) | 28 | ETMX | guardianized, testing now | 50000 |
SDFs were reconciled after, see screenshots -- mainly, we un-monitored guardian-controlled things like the damping phase and PLL integrator.
I've added a test for PI mode 28 to verbal alarms with an RMS threshold of 1 for now (the same as mode 29).
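The test itself amounts to something like this sketch (the function name is hypothetical, not the actual verbal alarms code; the threshold mirrors mode 29's):

```python
PI28_RMS_THRESHOLD = 1.0  # same threshold as mode 29 for now

def pi28_ringing_up(rmsmon_value):
    """Return True if the PI mode 28 RMS monitor exceeds the alarm threshold."""
    return rmsmon_value > PI28_RMS_THRESHOLD
```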
Just checking in: it looks like PI28 does at least see a mode come through ~1-2 hours into lock (the RMSMON spikes just before NLN are because of the OMC whitening; the second peak is the real one), but it hasn't run away yet. Then, ~10 min after PI28, it looks like PI29 sees something pass through.
Looking at disaggregated temperature trends across the EX VEA over the last year, it might be a bit misleading to only look at the "average EX VEA" temperature. Temperatures in some parts of the EX VEA seem to have drifted by up to 0.5-2 deg F over the past few months, and they now seem to be returning to where they were about 6 months to a year ago.
I think Robert and Aidan have both made the point that, to understand temperature drifts, it's helpful to look at the individual temperature sensors across the VEA rather than the average VEA temperature. For example, see Aidan's alog LLO:25785, where he also thought about stabilizing the VEA temperatures to a different sensor that is better correlated with the test mass temperature. For this 80 kHz PI, though, Aidan has also said that the temperature dependence of the mechanical mode frequency is about 80 ppm/K, so for ~0.5 K (~1 deg F) the mode frequency changes by ~3 Hz for the 80 kHz mechanical mode. It seems unlikely we're just within 3 Hz of the PI going unstable, so it's not totally convincing that our recent 80 kHz PI ring-ups are simply because of VEA temperature drifts. At least at LLO, he found their ETMY is most correlated with the EY VEA_202B sensor.
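A back-of-envelope check of those numbers:

```python
f0 = 80e3      # mechanical mode frequency (Hz)
coeff = 80e-6  # temperature coefficient, 80 ppm/K
dT = 0.5       # temperature drift (K), roughly 1 deg F
df = coeff * f0 * dT
print(f"mode frequency shift: ~{df:.1f} Hz")  # ~3.2 Hz
```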
I'm not sure which of LHO's EX VEA sensors are most correlated with ETMX. But, it may be worth considering more than the average VEA temperature. Especially since the individual temperature sensors have seen some drifts over the past year, which aren't seen by trends of the average VEA temperature.