Continuing to assess the effects of last night's windstorm via the Control Room cameras; the leftmost panels (closest to the parking lot) of the EY Wind Fence are pulled away from a post (see attached photo). Sending an email to Jim/Mitch.
Wind is forecast to return tonight with a wind advisory from 7pm tonight to 7pm tomorrow night (LINK).
Closes FAMIS#31074, last checked 82857
Six days ago, on February 18th, PMC REFL OUT increased slightly and its fluctuations roughly doubled in amplitude. The mean has also oscillated a bit over the past day.
Everything else is looking normal.
We tried to tweak the PMC beam alignment last week, which coincides with the increase in PMC Refl variation. We did not see any change in PMC Trans with the tweak, but the PMC Refl signal clearly became noisier. At this time I'm not sure why this happened. We've seen in the past that an off PMC alignment can make the Refl signal very noisy, which is cured by fixing the alignment; that's not the case here, as the alignment was already good when we tried to tweak it. I don't see anything in the other trends that would indicate a reason for the noisier PMC Refl. We'll continue to monitor this.
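For anyone wanting to put a number on "twice as loud", a rough way to quantify it is to compare the RMS fluctuation of PMC REFL OUT about its mean in a quiet reference window versus a recent window. The sketch below uses synthetic placeholder arrays rather than a real channel fetch; actual data would come from minute trends or the frames.

import numpy as np

# Rough quantification of the change in PMC REFL noisiness. The arrays here
# are synthetic placeholders; real data would be fetched from the frames or
# minute trends for the PMC REFL readback.
def noisiness_ratio(refl_before, refl_after):
    """Ratio of RMS fluctuation about the mean: recent window / reference window."""
    return np.std(refl_after) / np.std(refl_before)

rng = np.random.default_rng(0)
before = 1.00 + 0.01 * rng.standard_normal(10_000)   # quiet reference stretch
after  = 1.02 + 0.02 * rng.standard_normal(10_000)   # recent, ~2x noisier stretch
print(f"noisiness ratio: {noisiness_ratio(before, after):.2f}")   # ~2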
Sheila, Camilla. After Jennie adjusted the OMC offsets, the high frequency SQZ was bad, so we ran SCAN_ALIGNMENT_FDS and SCAN_SQZANG_FDS. The alignment changes were considerable in yaw, so we popped back out of observing, added a -0.2 offset to H1:ASC-AS_B_RF42_YAW, and accepted it in the Observe/safe SDF. Plot attached showing the changes to ASC and to the high frequency (purple BLRMS) SQZ.
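For reference, the offset change itself is just an EPICS write plus an SDF accept. A minimal sketch using pyepics is below; the _OFFSET channel name is my assumption built from the top-level name quoted in this entry, so confirm the actual record on the ASC screen before using it.

from epics import caget, caput   # pyepics; the guardian ezca interface works similarly

# Assumed offset record, for illustration only.
CHANNEL = "H1:ASC-AS_B_RF42_YAW_OFFSET"

print(f"{CHANNEL} before: {caget(CHANNEL)}")
caput(CHANNEL, -0.2)   # offset value quoted in this entry
print(f"{CHANNEL} after:  {caget(CHANNEL)}")
# The new value still has to be accepted in the Observe/safe SDF tables
# before going back to observing, as described above.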
Following Gabriele's CHETA tests with the Hamamatsu detectors in CIT#612, we compared to our CO2X VIGO PWM 10.6 diodes.
The coherence between the two IN and OUT diodes (in the same path, separated by a 50/50 BS) is much worse: we have no coherence above 600 Hz, which is the opposite of what Gabriele sees.
Attached are plots with our slower and faster channels.
Even the coherence between the DC and AC channels on the same PD is similarly bad (attached; green is the CO2X OUT PD, purple is the CO2X IN PD). Gabriele suggests this could be shot noise, dark noise, electronics noise, or saturation in the PD electronics, as he saw at CIT.
We already know that the DC signal is dominated by dark noise (e.g. the plot in 82187), so that's probably what's reducing the coherence here.
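For anyone repeating this offline, coherence estimates like those in the attached plots can be made with scipy.signal.coherence; the sketch below uses synthetic placeholder data and an assumed sample rate rather than the actual DAQ channels.

import numpy as np
from scipy.signal import coherence

fs = 16384   # Hz, assumed fast-channel rate for this sketch
rng = np.random.default_rng(1)
common = rng.standard_normal(60 * fs)                  # shared (laser) fluctuations
in_pd  = common + 0.1 * rng.standard_normal(60 * fs)   # IN PD + its own noise
out_pd = common + 0.1 * rng.standard_normal(60 * fs)   # OUT PD + its own noise

f, cxy = coherence(in_pd, out_pd, fs=fs, nperseg=fs)   # 1 Hz resolution
# If either diode's own noise (dark, shot, or electronics) dominates above some
# frequency, the coherence drops toward zero there, as in the attached plots.
print(f"coherence at 100 Hz: {cxy[np.argmin(np.abs(f - 100))]:.2f}")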
Mon Feb 24 10:13:51 2025 INFO: Fill completed in 13min 47secs
TCmins [-144C, -143C] OAT (10C, 50F) DeltaTempTime 10:13:57
As per WP 12345 we put a custom build of the camera server on digivideo4 and digivideo3. We restarted all the cameras on digivideo4 (MC1, MC3, PRM, PR3) and one camera on digivideo3 (FC2). This gave us a better handle on the frame rate of the camera streams. We see the digivideo4 cameras producing a much higher frame rate, which is consistent with the higher CPU usage we have seen on digivideo4 and on the clients. Patrick is now looking at how to configure the camera client to better control the maximum frame rate if desired.
Summary:
- The average duty cycle for LHO this week was 62.5%
- The BNS range fluctuated between 140 and 160 Mpc this week, especially hitting the lower bound towards the end of the week
- 2 superevents on Tuesday
- Some locklosses can be explained by earthquakes and wind: Wednesday morning lockloss around 4:30 am, Saturday lockloss at 2:30 am
- Friday 21:15 lockloss due to wind
- No glitch rate and strain on Tuesday
- Multiple new channels in hveto
- Change in Fscan line count page on Thursday: line counts increased significantly from 750 to 2000 (change in format of line counts)
Full report: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20250210
Summary:
- Average duty cycle for LHO this week was 65.4%
- The BNS range fluctuated between 140 and 160 Mpc this week
- A couple of the locklosses can be explained by ground motion: Friday's earthquake at 04:51 UTC, and Saturday's earthquake at 23:31 UTC, which extended into Sunday morning
- Multiple superevents this week!
- Strange noise pattern in H1 BSC2 motion Y from Wednesday through Sunday
- Fscan line count was very low on Saturday and Sunday (below 600)
- Multiple new channels in Hveto this week
Full report: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20250203
Yashasvininad pointed out an interesting BSC2 (BS chamber) accelerometer motion wandering between 5 and 10 Hz that started around Feb 3rd-4th (/summary/day/20250204/pem/accelerometers_corner_bscs/) and that we still have. I cannot see it before February. It's very visible in the last two days, e.g. plot.
Reset the No SQZ AS42 Offsets by taking SQZ out of IFO and then running SQZ > SQZ Overview > IFO ASC > "! reset as42 nosqz". Changes were minimal and are attached.
After a 2nd alignment and finally getting DRMI to lock (post-windstorm), OFFLOAD_DRMI_ASC had very odd behavior: POP90 & PR_GAIN got very LOUD (see attachment #1), and in ASC, SRC2 pitch and yaw also behaved very oddly and noisily (see attachment #2). There were also SRM & SR2 verbal alarms.
We stopped at OFFLOAD_DRMI_ASC as soon as we noticed this, and it cleared up after about 2 minutes.
Sheila, Jennie, Camilla. We checked SRM and SR2 and it doesn't seem like it's a single coil driver that's causing an issue, plot attached.
Shown below are plots of an oscillation at about 0.97 Hz starting in SRC2 that seems to mark the beginning of one of these noisy states in OFFLOAD_DRMI_ASC (111). This also happened around 19:57 UTC on Sunday, as well as this morning as discussed above.
Is SRC2 going unstable at this frequency?
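To pin the frequency down more precisely than reading it off the time series, the peak can be picked out of a Welch spectrum of the SRC2 signal during one of the noisy stretches. The sketch below uses a synthetic 0.97 Hz sine plus noise in place of the real channel; the sample rate is a placeholder.

import numpy as np
from scipy.signal import welch

fs = 256   # Hz, placeholder sample rate
t = np.arange(0, 300, 1 / fs)   # ~5 minutes of data
rng = np.random.default_rng(2)
data = np.sin(2 * np.pi * 0.97 * t) + 0.3 * rng.standard_normal(t.size)   # stand-in for the SRC2 signal

f, pxx = welch(data, fs=fs, nperseg=64 * fs)   # 64 s segments -> ~0.016 Hz resolution
band = (f > 0.5) & (f < 2.0)
print(f"oscillation frequency: {f[band][np.argmax(pxx[band])]:.2f} Hz")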
TITLE: 02/24 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 7mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.25 μm/s
QUICK SUMMARY:
H1's been down for about 13hrs, with the big wind storm last night starting all the ruckus (winds are generally under 10mph now). H1 was kept at PREP FOR LOCKING for the last 3hrs due to struggles with locklosses in the early states of locking (primarily ALS), as noted by Oli, who had several wake-up calls last night. Currently have H1 running through an alignment.
Other than that, secondary microseism has stabilized down between the 50th & 95th percentile lines.
Detector still can't lock and called me again. It looks like it's been getting locklosses from Green Arms, Find IR, and DRMI, and it mostly looks like it's due to ALS dropping out. It even got to DRMI but then had multiple MC2 saturations before losing lock at DRMI_LOCKED_CHECK_ASC. I have no idea why it's struggling so much; I can only think that it might be related to the wind. The wind is still coming and going, and it isn't too strong, but maybe the changes in wind speed are still too much for the lower locking states right now? I'm putting the detector in DOWN until 14 UTC, at which point I'll try again and call someone if I don't think it's related to anything environmental.
To test the new h1digivideo4 server we moved four production cameras from servers running the old software to the new server. This required changes to the camera overview MEDM, which up to this time had been hand edited.
I took this opportunity to write a Python program to generate the camera overview MEDM (see attached).
The main changes are:
. cameras are sorted alphabetically by name, not by camera number.
. cameras are not grouped by the server machine they run on.
. camera data is stored in a yaml database. Camera details can be viewed by pressing the INFO buttons.
. process control buttons are provided for each camera (VID0 = h1digivideo0, etc). Servers running the new code have green text (VID3 and VID4).
SITEMAP.adl has been upgraded to use the new overview. To open the old overview, use the "Traditional Overview" button in the bottom right corner.
Paths for the generator code and the yaml database file are shown at the bottom of the window.
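For context on the generator itself, here is a rough sketch of the approach, not the actual program: a small YAML database of cameras gets sorted by name and turned into overview widgets. The YAML layout and the widget stanza below are assumptions for illustration, not the real schema or exact ADL syntax.

import yaml

# Assumed YAML layout: each camera entry carries its name, camera number,
# and the digivideo server it runs on.
EXAMPLE_YAML = """
cameras:
  - {name: MC1, number: 16, server: h1digivideo4}
  - {name: FC2, number: 27, server: h1digivideo3}
  - {name: PRM, number: 18, server: h1digivideo4}
"""

NEW_CODE_SERVERS = {"h1digivideo3", "h1digivideo4"}   # drawn with green text

def overview_stanza(cam):
    """Return a simplified, schematic widget stanza for one camera (not exact ADL syntax)."""
    colour = "green" if cam["server"] in NEW_CODE_SERVERS else "black"
    return (f'camera "{cam["name"]}" (CAM{cam["number"]:02d})\n'
            f'  server button: VID{cam["server"][-1]} [{colour} text]\n'
            f'  info button -> details from the yaml database\n')

db = yaml.safe_load(EXAMPLE_YAML)
# Sort alphabetically by camera name rather than by camera number.
for cam in sorted(db["cameras"], key=lambda c: c["name"]):
    print(overview_stanza(cam))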
FM6 + FM8 + FM10 Gain +0.01 (+30deg phase) looks to be working for now - see attached screenshot.
The other settings that I tried, which did not work, are listed below:
zero phase Gain 0.01 (IY05 increasing, IY06 decreasing).
-30deg phase gain 0.01 (IY05 increasing, IY06 decreasing).
We lost lock, and during the next lock Corey applied the above settings (in bold) and they seem to be working fine. Hence I will continue with this for the next few lock stretches. Not committing it to lscparams yet since things can still change.
FM6 + FM8 + FM10 Gain +0.01 (+30deg phase) have been committed to lscparams for ITMY mode 05/06.
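For the record, a hypothetical illustration of how this might be captured in a params file; the actual lscparams structure may differ, and only the values are from this entry.

# Hypothetical structure, for illustration only; values are the ones quoted above.
violin_damping = {
    "ITMY": {
        "MODE5_6": {
            "filter_modules": ["FM6", "FM8", "FM10"],   # +30 deg phase combination
            "gain": +0.01,
        },
    },
}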
Due to so much SQZ strangeness over the weekend, Sheila set use_sqz_ang_servo to False in sqzparams.py, and I changed the SQZ_ANG_ADJUST nominal state to DOWN and reloaded SQZ_MANAGER and SQZ_ANG_ADJUST.
We set the H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG to a normal good value of 190. If the operator thinks the SQZ is bad and wants to change this to maximize the range AFTER we've been locked 2+ hours, they can. Tagging OpsInfo.
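For operators wanting the concrete pieces of this change (an illustration only, not a script that was run): the servo flag lives in sqzparams.py, the guardian nodes get reloaded after the edit, and the phase channel is parked at the known-good value with an EPICS write.

# In sqzparams.py (edit by hand, then reload SQZ_MANAGER and SQZ_ANG_ADJUST):
#     use_sqz_ang_servo = False   # servo stays off until the ADF linearity is sorted out
from epics import caput
caput("H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG", 190)   # known-good value quoted above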
Daniel, Sheila, Camilla
This morning we set the SQZ angle to 90deg and scanned to 290deg using 'ezcastep H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG -s '3' '+2,100''. Plot attached.
You can see that the region with the best SQZ isn't in a good linear range of H1:SQZ-ADF_OMC_TRANS_SQZ_ANG, which is why the SQZ angle servo has been going unstable. We are leaving the SQZ angle servo off.
Daniel noted that we expect the ADF I and Q channels to rotate around zero, which they don't, so we should check that the math calculating them is what we expect. We struggled to find the SQZ-ADF_VCXO model block; it's in the h1oaf model (so that the model runs faster).
Today Mayank and I scanned the ADF phase via 'ezcastep H1:SQZ-ADF_VCXO_PLL_PHASE -s '2' '+2,180''. You can see in the attached plot that the I and Q phases show sine/cosine functions as expected. We think we may be able to adjust this phase to improve the linearity of H1:SQZ-ADF_OMC_TRANS_SQZ_ANG around the best SQZ so that we can again use the SQZ ANG servo. We started testing this (plot), but found that the SQZ was very frequency dependent and needed the alignment changed (83009), so we ran out of time.
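As a sanity check on the math Daniel mentioned: if the demodulated ADF I and Q traced a circle centred on zero, an angle recovered as atan2(Q, I) would be linear in the applied phase; a DC offset on either quadrature shifts the circle off centre and distorts the recovered angle, which would fit the poor linearity around the best SQZ. The sketch below is illustrative only: the 0.3 offset is made up, and atan2(Q, I) is my assumption for how SQZ-ADF_OMC_TRANS_SQZ_ANG is formed.

import numpy as np

phase = np.linspace(0, 2 * np.pi, 361)   # applied phase scan
I = np.cos(phase) + 0.3                  # hypothetical DC offset on the I quadrature
Q = np.sin(phase)

angle = np.unwrap(np.arctan2(Q, I))      # recovered angle
error = np.degrees(angle - phase)        # deviation from a linear response
print(f"max angle error from a 0.3 offset: {np.max(np.abs(error)):.1f} deg")   # ~17 deg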
FAMIS 31073
The only event of note I can see is that PMC REFL suddenly rose by about 0.5W starting 4 days ago; the rise has almost leveled out.
Forgot to add this comment last week, my apologies. The PMC Refl rise appears to coincide with a temperature change in the enclosure and LVEA. Seen on the Weekly Environment plots, there is a clear downward change in all 4 temperature sensors in the PSL enclosure Laser Room (TBLN, TBLS, ACN, ACS), and a clear drop and levelling out of the temperature in the enclosure Anteroom and the PSL LVEA sensor (sits on the exterior PSL enclosure wall between the enclosure and HAM1). This is also seen on a couple of items exposed to ambient temperatures, namely the NPRO laser pump diodes (in the NPRO laser head in the enclosure Laser Room) and the LVEA Control Box (in the PSL rack outside of the enclosure). On the Weekly Cooling set of plots, a clear change in these 3 channels (*_NPRO_LD[1/2]TEMP, *_CB1_TEMP) is seen coincident with the temperature change witnessed by the various PSL temperature sensors.
Seeing this, we attempted a brief alignment tweak of the PMC beam from the Control Room (the theory being that since the change in PMC Refl coincides with a temperature change, maybe a temperature-induced alignment shift caused the increase in PMC Refl). Unfortunately we saw no change in PMC Trans with the alignment tweak.