FAMIS - 25996 H1 ISI CPS Noise Spectra Check - Weekly
Today's CPS spectra do not look much different from the previous noise spectra taken in alog 78352.
Vicky, Begum, Camilla: Vicky and Begum noted that the CLF ISS and SQZ laser are glitchy.
Vicky's plot (attached) shows the CLF ISS glitches started with O4b.
Timeline below shows SQZ laser glitches started May 20th and aren't related to TTFSS swaps. DetChar request: Do you see these glitches in DARM since May 20th?
Summary page screenshots from: before the glitches started, the first glitch on May 20th (see top left plot, 22:00 UTC), and the bad glitches since then.
Missed point:
In addition to the previous report, I should note that the glitches actually started on May 9th and recurred several times even before May 25th.
Glitches are usually accompanied by increased noise in the H1:SQZ-FIBR_EOMRMS_OUT_DQ and H1:SQZ-FIBR_MIXER_OUT_DQ channels.
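For anyone who wants to check a suspected glitch time themselves, something like the following gwpy sketch would pull and plot both channels around it (the GPS window here is a placeholder, not a real glitch time):

    # Sketch: trend the two SQZ fiber channels around a suspected glitch.
    from gwpy.timeseries import TimeSeriesDict

    channels = [
        "H1:SQZ-FIBR_EOMRMS_OUT_DQ",
        "H1:SQZ-FIBR_MIXER_OUT_DQ",
    ]
    start, end = 1400000000, 1400000060  # placeholder GPS window

    data = TimeSeriesDict.get(channels, start, end)
    plot = data.plot(separate=True, sharex=True)
    plot.savefig("sqz_fibr_glitch.png")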
Andrei, Camilla
Camilla swapped the TTFSS fiber box (alog 78641) on June 25th in hopes that this would resolve the glitching.
However, it made no difference: see the attached figure from 20:40 UTC, which is when the TTFSS box was swapped.
In preparation for a leak investigation and repair, the chillers at EX have been rotated. We are monitoring the rotation closely, and no interruption to HVAC or desired temps is expected. At this time, chiller 2 seems to be working as expected. The current configuration of lead/lag will remain until the next ~quarterly cycle. E. Otterman T. Guidry
TITLE: 06/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 20mph Gusts, 11mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY: Locked for 6 hours, range seems to be stable around 154Mpc (cleaned). Planned calibration and commissioning today from 8-12 PT (15-19 UTC).
TITLE: 06/20 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY: 1 lockloss, currently relocking after running an IA
LOG:
Lockloss at 06:38 UTC ending an almost 13-hour lock. I had to run an initial alignment to relock due to bad DRMI & PRMI flashes; otherwise relocking was fully automated. We're about to power up to 10W as of the end of my shift.
Lockloss at 06:38 UTC
We've been locked for just over 10 hours, all is calm on site. High frequency squeezing isn't quite as good as it was earlier in the week.
TITLE: 06/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: One lockloss with an automated relock. Our range was around 150 Mpc, but has since increased by a handful of Mpc. It looks like CHARD_Y and DHARD_Y are looking a bit better.
LOG:
TITLE: 06/19 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 10mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
SQZ_OPO_LR GRD is notifying: 'pump fiber rej power in ham7 high, nominal 35e-3, align fiber pol on sqzt0.'
It's currently just under 0.36 (0.3 is considered high).
Wed Jun 19 10:07:48 2024 INFO: Fill completed in 7min 45secs
Ended a 20-hour lock. Looking at the plots, it seemed to be a very fast lockloss. I see the LSC-DARM wiggle that we often see, but its magnitude is a bit smaller than what I have been seeing in the recent past.
Back to observing at 1748 UTC. Fully auto relock with an initial alignment.
TITLE: 06/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 10mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: Locked for 18.5 hours; the range has been decreasing a bit over the last hour or two. I'll run some checks to see what's going on.
There is a loud whining in/on the OSB that can be heard from the parking lot and also in the control room. I think it might be coming from the roof or somewhere in the ceiling. I'll see if I can localize it a bit before calling facilities.
Two updates:
The whining stopped around 7:40 PT and hasn't returned. It went away before I was able to localize it any better than somewhere in the OSB.
I ran Sheila's low range DTT template, and it looks like our <15 Hz noise is what's hurting us the most at the moment. In particular CHARD_Y, DHARD_Y, and maybe IMC-WFS_B YAW, but that last one is a bit tough to tell.
TITLE: 06/19 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We've been locked for 11:50, calm night. The trace on the bottom of nuc5 isn't showing up.
23:25 -23:27 UTC We dropped out of observing to run SCAN_SQZ_ANG to improve SQZing which did not look good following its last relock.
If we lose lock and relock, ISC_LOCK will need to be taken to INJECT_SQUEEZING and then back to NLN to reset the ISC_LOCK notification about SQZ_MANAGER.
Rahul ran the OPLEV charge measurement for ETMY this morning, so I processed the measurement.
TITLE: 06/19 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
We've been locked for almost 8 hours, the wind has died down a bit.
Last checked in alog78384
Closes FAMIS 28358
While the SUS_CHARGE guardian LOG shows no errors in the measurement, ITMX and ITMY failed to be processed. ITMX complained of bad coherence and/or failed measurements:
"UserWarning: Cannot calculate alpha/gamma because some measurements failed or have insufficient coherence!"
ITMY had the following error when trying to load the data:
"ValueError: no Fr{Adc,Proc,Sim}Data structures with the name H1:SUS-ITMY_L3_ESDAMON_DC_OUT_DQ"
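That ValueError suggests the channel simply isn't in the frames for the measurement interval. A quick way to confirm (a sketch with placeholder GPS times, not the actual charge-processing script) is to try fetching the channel directly:

    # Sketch: check whether the ESDAMON channel is readable for the
    # measurement interval (placeholder GPS times).
    from gwpy.timeseries import TimeSeries

    channel = "H1:SUS-ITMY_L3_ESDAMON_DC_OUT_DQ"
    start, end = 1400000000, 1400000060  # placeholder GPS times

    try:
        data = TimeSeries.get(channel, start, end)
        print(f"{channel}: {len(data)} samples at {data.sample_rate}")
    except (ValueError, RuntimeError) as exc:
        # A ValueError like the one quoted above means the channel is
        # not present in the frame files for this interval.
        print(f"{channel} not readable: {exc}")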
Camilla, Andrei, Naoki, Sheila
Over the last 3 months there have been many examples of times when the filter cavity length locking error signal peak-to-peak seems to be correlated with worse range; I've attached some screenshots of examples. This was true before the May 16th change in the status of the CO2 ISS channel (78217), and before the output arm damage that happened April 22nd.
Some of these times correspond to times when there is a whistle visible in the FC WFS signals (78263); others do not. These whistles in the FC WFS channels have been present throughout all of O4, but they sometimes go away for several days, and this last week they do not seem to have been present. Andrei has identified a candidate wandering line in the IOP channels for these WFS that last week was ~10 kHz away from the 105 kHz RLF-CLF signal; today that line seems to be gone.
Last week, the filter cavity error signal peak-to-peak became much noisier than it was previously (screenshot), until June 15th at around 15 UTC when things returned to the previous state. Camilla identified that this started a few hours after the ringdown measurement attempt (78422), and that there haven't been any ZM1/2/3 alignment changes. During that period, the FC error signal from around 0.7-9 Hz was higher and variable; in addition, the low frequency noise was changing and varying the RMS as it has done before. The attached screenshot shows some of the typical low frequency variation (compare yellow to blue), a whistle in the yellow trace, and, in red, a time during last week's elevated noise when the low frequency was relatively quiet but there is elevated noise from 0.7-9 Hz.
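For comparing quiet and elevated periods quantitatively, a band-limited RMS over the 0.7-9 Hz region is the natural figure of merit. A gwpy sketch of that comparison (the channel name and GPS windows here are placeholders; substitute the real FC length error signal channel and the times of interest):

    # Sketch: band-limited RMS of the FC length error signal in the
    # 0.7-9 Hz band, for a quiet window vs. an elevated-noise window.
    from gwpy.timeseries import TimeSeries

    channel = "H1:SQZ-FC_LSC_ERR"           # placeholder channel name
    windows = {
        "quiet": (1400000000, 1400000600),   # placeholder GPS windows
        "noisy": (1402000000, 1402000600),
    }

    for label, (start, end) in windows.items():
        data = TimeSeries.get(channel, start, end)
        blrms = data.bandpass(0.7, 9).rms(1)  # 1 s RMS stride
        print(label, blrms.mean())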
As part of a detchar request related to this, I ran the tool lasso on four different days in which there were some range drops. The days (and the linked results) are below:
Lasso normally uses the mean trends of the auxiliary channels to try to correlate with the range, but I used the max trends instead, as requested. The results from lasso are interesting. On the 17th, there is a correlation with H1:SUS-MC3_M3_OSEMINF_UR_OUTPUT.max and the CO2 channel H1:TCS-ITMX_CO2_ISS_CTRL2_OUT16.max. On the 21st, the range lines up pretty well with a filter cavity channel, H1:SUS-FC2_M3_NOISEMON_LR_OUT16.max. On the 27th, lasso still picks out the TCS ITMX CO2 channel, but the second most correlated channel is another FC2 noisemon, H1:SUS-FC2_M3_NOISEMON_LL_OUT16.max; there are two drops around ~4:30 and ~8:30 UTC that seem to match up with this FC2 channel, similar to what happened on the 21st. On the 30th, lasso picks out a lot of ISI BLRMS channels, which is different from the other days; the top channel is H1:ISI-HAM7_BLRMS_LOG_RY_1_3.max. Overall there does seem to be some possible relation between the CO2 channel and these filter cavity channels.
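For reference, the core of what lasso is doing here is an L1-regularized linear fit of the range against the matrix of channel trends, where the channels that end up with non-zero coefficients are the ones it "picks out". A minimal sklearn sketch of that idea (with synthetic placeholder arrays standing in for the real .max trends and range):

    # Sketch: L1-regularized fit of the range against aux-channel
    # max trends; arrays are synthetic placeholders for real data.
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_minutes, n_channels = 1440, 50              # one day of minute trends
    X = rng.normal(size=(n_minutes, n_channels))  # aux-channel .max trends
    # Fake range trend that actually depends on channels 3 and 17,
    # so the fit has something to find.
    y = 0.8 * X[:, 3] - 0.5 * X[:, 17] + rng.normal(scale=0.5, size=n_minutes)

    # Standardize so the L1 penalty treats all channels equally.
    X_std = StandardScaler().fit_transform(X)
    model = Lasso(alpha=0.1).fit(X_std, y)

    # Channels with non-zero coefficients are the ones lasso "picks out".
    for i in np.flatnonzero(model.coef_):
        print(f"channel {i}: coef = {model.coef_[i]:+.3f}")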