Lockloss after the IFO was in NLN for 9h14: GPS 1382730139.
No obvious cause; no DARM glitches (73818) before this lockloss. Secondary useism is high at 0.40 μm/s.
STATE of H1: Observing at 157Mpc. IFO locked for 8h30.
Rode through an EQ at ~16:20UTC. Had SRM saturations at the EQ time.
Microseism is increasing, plot attached.
The facilities team has worked to fix an issue with the AC at EY (73841), so we've had some EY temperature excursions that should return towards normal over the next few hours; plot attached.
ADS is taking more time to converge before switching to camera servos and going into Observing than it used to: it used to take ~15 minutes, but this weekend it has taken up to 24 minutes.
Ryan, Austin and Ibrahim looked into this in 69601 but didn't come to any conclusions. TJ said that high microseism increases the convergence time; microseism has been increasing, but it was not high on Oct 28th/29th.
Time between arriving in NLN and ADS switching to camera servos:
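Not part of the original log: a minimal sketch of how this interval could be pulled out of the guardian state records with gwpy. The ISC_LOCK channel follows the standard guardian naming, but the NLN state number and the camera-servo channel below are assumptions and would need checking.

# Sketch: trend the ISC_LOCK guardian state to find when we reached NLN, then
# look for the switch to camera servos afterwards.  The GRD channel naming is
# standard, but the state number and the camera-servo channel are placeholders.
from gwpy.timeseries import TimeSeries

span = ('2023-10-29 00:00', '2023-10-29 12:00')      # UTC window around one lock
isc = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N', *span)

NLN = 600                                            # assumed NOMINAL_LOW_NOISE state number
t_nln = isc.times[isc.value >= NLN][0]               # first sample at/after NLN

# Hypothetical indicator for the ADS -> camera servo switch; the real channel may differ.
cam = TimeSeries.get('H1:GRD-CAMERA_SERVO_STATE_N', *span)
t_cam = cam.times[(cam.times >= t_nln) & (cam.value == cam.value.max())][0]

print('NLN -> camera servos: %.1f minutes' % ((t_cam.value - t_nln.value) / 60.0))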
Closes FAMIS#26436, last checked 73177
BRS Driftmon values are looking good and are well within range for both BRSX and BRSY, although there has been a very slight downward trend in BRSX Driftmon over the past week, possibly related to the temperature drift this past week, which can be seen in both ETMX and ETMY.
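Not from the FAMIS procedure: a minimal sketch of pulling the week-long Driftmon trends for both end stations; the channel names are assumptions based on the usual ISI ground BRS naming and should be checked against the MEDM screens.

# Sketch: one-week trend of the BRS drift monitors at both end stations.
# Channel names are assumed; confirm against the FAMIS task before using.
from gwpy.timeseries import TimeSeriesDict

chans = ['H1:ISI-GND_BRS_ETMX_DRIFTMON', 'H1:ISI-GND_BRS_ETMY_DRIFTMON']
data = TimeSeriesDict.get(chans, '2023-10-23', '2023-10-30')

plot = data.plot()
plot.gca().set_ylabel('Driftmon [counts]')
plot.savefig('brs_driftmon_week.png')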
We found that the temp in EY was not holding steady over the weekend. The heating coil for the space was commanded on but was not running. After investigating, we found that the line voltage fuse in Phase C had blown which was keeping the variable transformer from operating. We took the variable transformer out of the circuit and swapped fan relay contacts to run the elements not controlled by the variable transformer. Heating elements are currently working and EY is slowly heating up. The heating elements controlled by the variable transformer will need to have their resistance checked for a short or an open circuit during the maintenance window.
Closes FAMIS 26215. Last checked 73664. Possible Issues: FSS TPD is low - plot attached
Laser Status:
NPRO output power is 1.812W (nominal ~2W)
AMP1 output power is 68.31W (nominal ~70W)
AMP2 output power is 137.3W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PMC:
It has been locked 13 days, 0 hr 55 minutes
Reflected power = 15.84W
Transmitted power = 109.9W
PowerSum = 125.8W
FSS:
It has been locked for 0 days 9 hr and 18 min
TPD[V] = 0.6423V
ISS:
The diffracted power is around 2.3%
Last saturation event was 0 days 8 hours and 3 minutes ago
Possible Issues:
FSS TPD is low - plot attached
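For a look at how low the TPD is sitting, a minimal sketch of trending it over the current lock stretch; the DQ channel name is an assumption and should be checked against the PSL screens.

# Sketch: trend the FSS transmitted PD over the ~9 h lock stretch to see its level.
# The channel name is assumed; confirm against the PSL MEDM / alog before using.
from gwpy.timeseries import TimeSeries

tpd = TimeSeries.get('H1:PSL-FSS_TPD_DC_OUT_DQ',
                     '2023-10-30 01:00', '2023-10-30 10:00')
print('mean TPD over the stretch: %.4f V' % tpd.mean().value)
plot = tpd.plot()
plot.gca().set_ylabel('FSS TPD [V]')
plot.savefig('fss_tpd_lock.png')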
Mon Oct 30 10:09:11 2023 INFO: Fill completed in 9min 7secs
Jordan confirmed a good fill curbside.
Jenne notes that the changing high frequency noise during the first hours of a lock, present since Wed Oct 25th (purple trace in 73798), may be caused by the new higher CARM gain (73738), changed on that day.
In 73798 I noted the 4.8 kHz noise that suddenly changes ~1h40 into NLN; Jenne suggests this could be CARM gain peaking aliased down. Looking at the 64 kHz channel (plot attached), the noise disappears from 18.4 to 18.7 kHz (very large peak) and appears at 16.6 to 16.8 kHz and at 21.1 kHz.
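As a back-of-the-envelope check of the aliasing idea (my own arithmetic, assuming the peaks fold about the 16384 Hz rate of the decimated DARM channel), a line at 21.1 kHz would fold down to ~4.7 kHz, close to the 4.8 kHz feature:

# Sketch: where out-of-band lines would land after (imperfect) decimation to a
# 16384 Hz channel -- simple frequency-folding arithmetic only.
fs = 16384.0  # Hz

def aliased(f, fs=fs):
    """Apparent frequency of a tone at f after sampling at fs."""
    f = f % fs
    return fs - f if f > fs / 2 else f

for f in (16600.0, 16800.0, 18400.0, 18700.0, 21100.0):
    print('%7.1f Hz -> %6.1f Hz' % (f, aliased(f)))
# 21100 Hz folds to ~4716 Hz, consistent with the ~4.8 kHz noise.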
We have had a weekend of shorter (~5 hour) locks and two locklosses from the LASER_NOISE_SUPPRESSION state #575 (73787, 73831), the state where this CARM gain is changed. Maybe this gain change has made us less stable; we'll discuss reverting it today.
CARM sliders reverted back to 6dB in ISC_LOCK (svn) and loaded.
On Monday, Naoki and Sheila (73855) saw that even with the CARM gain back at 12 dB, the high frequency squeezing was still bad and the optimal H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG sqz angle had to be adjusted a lot.
Maybe the CARM gain increase was affecting stability, but we don't think it was causing the high frequency noise, which was present in FDS and FIS but not in no-SQZ, plot attached. With adjustment of the SQZ angle the SQZ greatly improved. It wasn't clear to us why the SQZ angle changed.
Although the overall high frequency noise was still bad once the CARM gain was reduced from 12 dB back down to 6 dB, the peak around 4600 Hz did disappear once the gain was reverted. See the attached SQZ BLRMS 6 plot (purple trace) with CARM at 12 dB and at the nominal 6 dB.
See the attached high frequency plot showing peaks at 16.4 kHz and 18.7 kHz (purple) and, thermalized, around 16.6 kHz (red); the peaks disappeared once the CARM gain was reduced (green and blue traces). This may confirm that the peaks are CARM gain peaking, as Jenne suggested.
TITLE: 10/30 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.47 μm/s
QUICK SUMMARY: IFO has been in NLN for 4h40.
A high-pitched fire panel alarm in the CUR was on from 15:58 to 15:00:45 UTC as the FAC team worked on the fire system. Tagging DetChar.
ITMY modes 5 and 8 are not damping, so I will try turning their gains to zero; plot attached (73826). Tagging SUS.
ITMY mode 8 was damping with the old settings rather than the gain = 0 that Ryan had saved in lscparams, because VIOLIN_DAMPING had not yet been reloaded; I plan to do this when out of observing.
I've put in Ryan's new ITMY mode 8 settings (FM1+FM6+FM10, G=+0.4), which are damping it well. I have left ITMY mode 5 at 0 gain for now.
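For context, VIOLIN_DAMPING takes its per-mode filter and gain settings from lscparams; below is a purely hypothetical sketch of the kind of entry involved (the variable name and layout are illustrative, not the actual lscparams contents).

# Hypothetical sketch of per-mode violin damping settings as they might appear
# in lscparams; names and structure are illustrative only.
vio_damp = {
    'ITMY': {
        'MODE5': {'filters': 'FM6+FM8+FM10', 'gain': 0.0},   # held at zero until settings are confirmed
        'MODE8': {'filters': 'FM1+FM6+FM10', 'gain': +0.4},  # Ryan's new settings, damping well
    },
}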
There is no noticeable change in the data quality at 14:58 (assuming there is a typo in the original log entry). I've attached a spectrogram of the strain data from 14-15UTC. There is non-stationary noise at low frequency throughout the hour and no change is visible at the time of the fire alarm.
I reloaded the VIOLIN_DAMPING guardian so it will now keep the ITMY mode 8 gain at 0 to avoid ringing it up. Also changed the ITMY mode 5 gain to 0 after talking with Rahul. Tagging SUS.
The H1 system gave me a call this morning (~10:20 UTC 10/30), but when I checked the system in NoMachine, the IFO was at INCREASE DARM OFFSET and relocking, and I couldn't find any issue (perhaps the timer ran over the limit?). However, once we got to NLN (10:26), I did see that the SQZ_MANAGER guardian was having some trouble, showing the warning "SQZ ASC AS42 not on???" in the guardian message log. It would cycle between this message and going back into FDS, so I hit RESET_SQZ_ASC before trying to go back into FDS, which appears to have solved the issue. Tagging SQZ.
Tony and I looked into this, and the reason Austin was called was that the IFO had been relocking for 2 hours, so H1_MANAGER's 2-hour "wait_for_nln" timer was up. The reason for the long relocking period was a lockloss at the LASER_NOISE_SUPPRESSION state (575) at 1382693851; Ryan had a lockloss there this weekend too (73787).
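For reference, guardian states keep track of timeouts with the built-in self.timer dictionary; a minimal sketch of the kind of logic involved (the state name, channel read, and call-out helper are illustrative, not the actual H1_MANAGER code).

# Illustrative guardian-style state, not the real H1_MANAGER implementation:
# start a 2-hour timer when relocking begins and alert the operator if NLN is
# not reached before it expires.  ezca is provided by the guardian environment.
from guardian import GuardState

NLN = 600  # assumed ISC_LOCK state number for NOMINAL_LOW_NOISE

def notify_operator(message):
    """Stand-in for whatever actually phones the operator."""
    print('CALL OPERATOR:', message)

class WAIT_FOR_NLN(GuardState):
    def main(self):
        # assigning a number (seconds) starts a countdown; the entry reads True once expired
        self.timer['wait_for_nln'] = 2 * 60 * 60

    def run(self):
        if ezca['GRD-ISC_LOCK_STATE_N'] >= NLN:
            return True                                  # reached NLN, done waiting
        if self.timer['wait_for_nln']:
            notify_operator('relocking has taken more than 2 hours')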
NLN Lockloss at 8:20UTC - 1382689308
TITLE: 10/30 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: We stayed locked the whole shift, 13:07 as of 07:00 UTC. I found new settings for ITMY mode 8 (FM1+FM6+FM10, G=+0.4) and possibly for ITMY mode 5/6 (FM6+FM8+FM10, G=-0.02); I left mode 5's gain at zero since I'm not too sure about it yet.
05:21 I went to commissioning to run a calibration measurement (alog 73828).
05:53: Back to Observing
INFO | bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20231030T052240Z.xml
Simulines:
GPS start: 1382678928.729304
GPS stop: 1382680256.772195
2023-10-30 05:50:38,627 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20231030T052832Z.hdf5
2023-10-30 05:50:38,648 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20231030T052832Z.hdf5
2023-10-30 05:50:38,659 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20231030T052832Z.hdf5
2023-10-30 05:50:38,671 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20231030T052832Z.hdf5
2023-10-30 05:50:38,683 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20231030T052832Z.hdf5
ICE default IO error handler doing an exit(), pid = 2479290, errno = 32
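A quick cross-check of the measurement length (my own arithmetic, not from the log): GPS stop minus start is about 1328 s, roughly 22 minutes, e.g. with gwpy's time utilities.

# Sketch: convert the simulines GPS start/stop stamps to UTC and a duration.
from gwpy.time import tconvert

start, stop = 1382678928.729304, 1382680256.772195
print(tconvert(start), '->', tconvert(stop))
print('duration: %.1f minutes' % ((stop - start) / 60.0))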
STATE of H1: Observing at 155Mpc
TITLE: 10/29 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Camilla (DAY)
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 3mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.20 μm/s
QUICK SUMMARY:
The action of loading new filter modules likely caused an (expected) interruption in the output of the real-time model on h1lsc0; this is the cause of the transient DAQ checksum (CRC) error.
We are having large temperature swings at EY (73835) that could affect the IFO.
Started initial alignment at 20:16 UTC after losing lock at LOCKING_ALS ~4 times, even after waiting at LOCKING_ARMS_GREEN a little longer for the green arm signals to converge (tip from Ryan C).
This lock has the CARM gain back at 6 dB (73844), and the ITMY mode 5 and 8 gains at zero while new damping settings are finalized (73845). Loaded the h1lsc coefficients for the new MICHFF filter (73821).
IA finished at 20:38UTC, NLN at 21:22 UTC, currently doing some SQZ commissioning.