TITLE: 02/25 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 9mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.44 μm/s
QUICK SUMMARY:
H1 is currently Unlocked due to a High wind event.
Corey had just finished an Initial Alignment before the handoff and we have now gotten past DRMI locking stages.
More high winds are expected tonight so we shall see how long this lock lasts.
TITLE: 02/24 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
More Commissioning Than Observing This Shift.
Wind has been the highlight of the last 12hrs. H1 was able to lock once the winds died this morning, but immediately jumped into scheduled Commissioning time. Then, after about 30min of Observing, an earthquake took H1 down. Then came 71min of Observing, and then a HUGE (and fairly quick) wind event took H1 down again---with winds gusting up to 65mph (see attached).
There is a wind advisory for the next 24hrs starting at 7pm local time. Good luck Eve & Owl shifts! :-/
LOG:
Jennie W, Sheila
Summary: We altered the offsets on the H1:ASC_OMC_{A,B}_{PIT,YAW} QPDs which are used to align the beam into the OMC. This was aimed at giving us an improvement in optical gain. After doing this we aimed to measure the anti-symmetric port light changing as we change the DARM offset. We are trying to use both these measurements to narrow down where we have optical loss that could be limiting our observed squeezing. We performed both measurements successfully, but the different alignment of the OMC degraded the squeezing, so Camilla (alog #83009) needed to do some tuning.
Last time (alog #82938) I did this I used the wrong values as our analysis used the output channels to the loops instead of the input channels which come before the offsets are put in. The new analysis of our measurement of the optical gain as seen by the 410Hz PCAL line, changing with QPD offset, shows that we want the loop inputs to change to:
H1:ASC_OMC_A_PIT_INMON to 0.3 -> so we should change H1:ASC_OMC_A_PIT_OFFSET to -0.3
H1:ASC_OMC_A_YAW_INMON to -0.15 -> so we should change H1:ASC_OMC_A_YAW_OFFSET to 0.15
H1:ASC_OMC_B_PIT_INMON to 0.1 -> so we should change H1:ASC_OMC_B_PIT_OFFSET to -0.1
H1:ASC_OMC_B_YAW_INMON to 0.025 -> so we should change H1:ASC_OMC_B_YAW_OFFSET to -0.025
We stepped these up in steps of around 0.01 to 0.02 while monitoring the saturations on the OMC and OM3 suspensions and the optical gain, both to make sure we were going in the correct direction and that we were not near saturation of the suspensions, as happened last time I tried to do this.
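For illustration, here is a minimal sketch (not the script we actually ran) of how an offset could be walked slowly toward its target with pyepics while keeping an eye on a suspension drive; the OM3 monitor channel and saturation threshold below are assumed placeholders, not verified values.

# Minimal sketch: walk an ASC offset toward a target in small steps while
# watching a suspension drive signal, using pyepics. The monitor channel and
# SAT_LIMIT are illustrative assumptions only.
import time
import numpy as np
from epics import caget, caput

OFFSET_CH = 'H1:ASC-OMC_A_PIT_OFFSET'
MONITOR_CH = 'H1:SUS-OM3_M1_MASTER_OUT_UL_DQ'   # placeholder saturation proxy
TARGET = -0.3
STEP = 0.01            # step size, per the 0.01-0.02 steps described above
SAT_LIMIT = 120000     # back off if the drive gets close to the DAC limit

value = caget(OFFSET_CH)
while abs(value - TARGET) > STEP / 2:
    value += np.sign(TARGET - value) * min(STEP, abs(TARGET - value))
    caput(OFFSET_CH, value)
    time.sleep(10)                      # let the ASC loops settle between steps
    drive = abs(caget(MONITOR_CH))
    if drive > SAT_LIMIT:
        print(f'Drive {drive} near saturation at offset {value}; stopping.')
        break
print(f'Finished at offset {value}')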
Attached is the code and the ndscope showing the steps on each offset, (top row left plot, top row center right plot, second row left plot, second row center right plot). The top stage osems for OM3 suspension are shown in the third row left plot, the top stage osems for OMC suspension are in the third row center left plot, and the optical gain is shown in the third row right plot.
The optical gain improved by 0.0113731 from a starting value of 1.00595, so that is an improvement of 1.13% in optical gain.
Around 19:04:28 UTC I started the DARM offset step to see if the change in optical gain matches what we would see if we measured the throughput of HAM 6. Unfortunately I forgot to turn off the OMC ASC, which we know affects this measurement of the loss. We stood down from changing the OMC and Camilla did some squeezer measurements, then I made the same mistake again the next time I tried to run it (d'oh). Both times I control-C'd auto_darm_offset.py from the command line, which means the starting PCAL line values and DARM offset had to be reset manually before I ran the script successfully after turning the OMC ASC gain to 0 to turn it off.
The darm offset measurement started at 19:20:31 UTC. The code to run it is /ligo/gitcommon/darm_offset_step/auto_darm_offset_step.py
The results are saved in /ligo/gitcommon/darm_offset_step/data and /ligo/gitcommon/darm_offset_step/figures/plot_darm_optical_gain_vs_dcpd_sum.
From the final plot in the attached pdf, the transmission of the fundamental mode light between ASC_AS_C (anti-symmetric port) and the DCPDs is (1/1.139)*100 = 87.8 %. We can compare this to the previous measurement from last week with the old QPD offsets to see if the optical loss change matches what we would expect from such a change in optical gain.
Since the script didn't save the correct values for PCAL EY and EX (due to the script being run partially twice before a successful measurement), I reverted the PCAL values back using SDF before we went into observing. See attached screenshots.
Sheila accepted the new ASC-OMC_A and B OFFSET values in OBSERVE and SAFE (only have the pic for OBSERVE).
Comparing OMC losses calculated by OMC throughput and optical gain measurements.
If we take the improvement in optical gain noted above and calculate the improvement in the optical gain squared, i.e.
(g_f^2 - g_i^2)/ g_i^2 = 0.023 = 2.3 %
And compare it to the gain in OMC throughput from this entry to the measurement after changing the OMC ASC offsets above
(T_OMC_f - T_OMC_i)/ T_OMC_i = 0.020 = 2%
Both methods show a similar improvement in the coupling to the OMC, or, alternatively, a decrease in the HAM 6 losses. Since we improved the alignment of the OMC, it makes sense that the losses decrease, and their agreement validates our method of using DARM offset steps to calculate OMC throughput and thus the loss in HAM 6.
The optical gain must be squared as it changes with the square root of the power at the output (due to the DARM loop).
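For the record, a minimal sketch of this arithmetic (not the attached analysis code); the numbers are the ones quoted in this entry.

# Arithmetic of the comparison above, using the numbers quoted in this entry.
g_i = 1.00595              # optical gain before the QPD offset change
g_f = g_i + 0.0113731      # optical gain after the change (+1.13 %)

# The optical gain scales with the square root of the power at the output,
# so power-like quantities (e.g. OMC throughput) compare to the gain squared.
gain_sq_improvement = (g_f**2 - g_i**2) / g_i**2
print(f'optical gain^2 improvement: {gain_sq_improvement:.1%}')   # ~2.3 %

# OMC throughput implied by the DARM offset steps in this measurement:
T_f = (1 / 1.139) * 100
print(f'OMC throughput now: {T_f:.1f} %')                         # ~87.8 %
# Comparing T_f against T_i from last week's measurement (old QPD offsets;
# value in that entry, not reproduced here) via (T_f - T_i)/T_i gives the
# quoted ~2.0 %, consistent with the ~2.3 % gain^2 improvement.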
For this comparison I was not able to use the measurement of optical gain from the same day as the initial measurement of OMC throughput, (alog #82938) as the calibration was exported to the front-end between these two dates which would have changed the reference value for kappa C.
The code I used for calculations is attached.
As I did for the previous DARM offset measurement on the 20th Feb, in alog #83586, I checked that the DARM offset does not show a clear trend in the OMC REFL power. This would be another way of quantifying the mode-matching of the DARM mode to the OMC, but since the mode-matching is good, no trend can be seen in this channel (top plot) as we change the DARM offset.
Fringe wrapping excitations (all made with a test_L excitation of 10000 counts amplitude).
Rough summary: CPX and CPY have similar levels of fringe wrapping from a longitudinal excitation. For CPY, a third shelf appeared when I turned off the alignment offsets that Robert set to reduce the noise he sees while shaking the input arm.
From Accadia et al the shelf frequency = abs(2*velocity of scatterer/lambda). So our first shelf at 17Hz comes from some path with a velocity of 9 um/second, the second fringe at 33Hz from a path with twice that velocity (17.5 um/second), and the third fringe, which only shows up for CPY with no offsets, at 50Hz, three times that velocity (27 um/second). The max velocity of R0 should be 2*pi*0.2 Hz * 7.7um/2 = 4.8um/second. That's consistent with the first shelf being from some scatter path that hits the CP twice: light scatters out of the cavity by being reflected off one of the CP faces and also scatters back in by a reflection off the CP.
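As a back-of-envelope check of those numbers, a small sketch of the shelf-frequency arithmetic (assuming the 1064 nm main laser wavelength):

# Check the fringe-wrapping shelf frequencies quoted above, following
# Accadia et al.: f_shelf = |2 * v_scatter / lambda|.
import numpy as np

lam = 1.064e-6                       # main laser wavelength [m]

def shelf_velocity(f_shelf):
    """Scatter-path velocity implied by a shelf at f_shelf [Hz], in um/s."""
    return f_shelf * lam / 2 * 1e6

for f in (17, 33, 50):
    print(f'{f} Hz shelf -> {shelf_velocity(f):.1f} um/s')

# Max velocity of the R0 (reaction) chain at its ~0.2 Hz mode with ~7.7 um
# peak-to-peak motion: v_max = 2*pi*f*amplitude.
v_max = 2 * np.pi * 0.2 * (7.7e-6 / 2) * 1e6   # um/s
print(f'R0 max velocity ~ {v_max:.1f} um/s')
# A path that reflects off the CP twice sees ~2*v_max ~ 9.7 um/s,
# consistent with the 17 Hz shelf (~9 um/s).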
Side note, the damping signals for the two CPs seem rather different, I will try to come back later to look at why they are so different.
Editing to add after looking at the damping loops with help from Jeff:
ITMY's R0 F2 has a lot of 60Hz noise. Jeff suggests copying the 60Hz notch filter from the main chain damping loops into the oseminf filter banks for the reaction chain, and he recommends doing this for F1, F2, F3 and SD since these are all on the same electronics chain.
At first it seemed that we have rather different settings between the R0 damping filters for ITMX and ITMY, but the gains for ITMX are incorporated into the filter banks; in the end the damping loops seem to be the same. The attached spectrum comparison shows that the motion of the two reaction chains looks similar on the osems.
J. Kissel (with help from S. Dwyer and J. Oberling)
ECR E2400083

As part of the final design review, the SPI team is looking to understand the future duty cycle of the proposed SPI pick-off path, which gets light from downstream of the PMC; see ECR E2400083, and specifically the region highlighted in red in the "other files" proposed layout. In order to do so, here are some relevant channels for existing monitor PDs that measure the power in the current PMC / ALS / SQZ paths.

Channel [Units], Reported Power, Today's Location, Relevant Drawing, PD's Name/Location within Drawing:
(1) H1:PSL-PWR_PMC_TRANS_OUTPUT [W], 105.0, PSL, D1300348, "PD04" in transmission of M18, Row 19 / Col 100
(2) H1:ALS-C_SHG_IR_DC_POWER [mW], 2.200, ISCT1, D1201103, "DCPD" in reflection of BS1 in red "PSL" path from right-most ALS periscope
(3) H1:ALS-C_FIBR_EXTERNAL_DC_POWER [mW], 0.475, PSL, D1300348, "ThorLabs SM1PD1A" in transmission of ALS-M9, Row 34 / Col 149
(4) H1:ALS-C_FIBR_INTERNAL_DC_POWER [mW], 0.105, ALS Fiber, D1200136, "ThorLabs SM05PD1A" measuring 50% * 1% sample of beam from PSL Distribution Box

(1) This PD is calibrated to display the power downstream of the primary output port of the PMC, even though the PD is measuring a different PMC port and sits downstream of several power-changing optics (confirmed by Jason to be true).
(2 thru 4) These PDs are calibrated to simply report the light at the photodiode, not, say, the power of the path upstream of their respective pick-off beam splitters (confirmed by Sheila to be true).

For (2 thru 4), there are versions of those PD signals, (2' thru 4'), which *have* been calibrated to the power in the beam path:
(2') H1:ALS-C_SHG_IR_DC_POWERMON [mW] 56.0
(3') H1:ALS-C_FIBR_EXTERNAL_DC_POWERMON [mW] 44.7
(4') H1:ALS-C_FIBR_INTERNAL_DC_POWERMON [mW] 30.2

There's also a useful set of channels that explicitly report the duty cycle of the PMC:
(5) Days, hours, minutes of "uptime," which reset to zero upon a PMC lock loss event:
- H1:PSL-PMC_RELOCK_DAY [days]
- H1:PSL-PMC_RELOCK_HOUR [hours]
- H1:PSL-PMC_RELOCK_MIN [minutes]
How long the last relock took:
- H1:PSL-PMC_RELOCK_DUR [seconds]

So, if you're performing duty-cycle statistics, say, over a year, you'd trend the H1:PSL-PMC_RELOCK_DAY channel, find the points just before each drop to zero, then take a histogram of those values and find the quantiles to report the median (50% quantile) and then whatever quantile you like to represent "usually" (a sketch of this recipe follows this entry).

In the first attached screenshot of MEDM overview screens, I highlight where the channels live:
- On the ALS overview screen, (2) is circled in RED; from the hidden "FIBER DISTRIBUTION BOX" link, (3) and (4) are circled in GREEN and YELLOW.
- On the PSL PMC overview screen, (1) is circled in MAGENTA and (5) are circled in BLUE.
In the second attached screenshot of the ALS/SQZ subscreens, the screens themselves highlight the "power at the PD" versions, (2 thru 4), in PURPLE, and I highlight the "power in the relevant path" versions, (2' thru 4'), in GREEN.

Stay tuned for the results of looking at these channels to derive quantitative estimates of duty cycle.
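A minimal sketch of that duty-cycle recipe, assuming the H1:PSL-PMC_RELOCK_DAY trend has already been fetched (via ndscope export, NDS, etc.) into a numpy array; only the channel behavior described above is relied on here.

# Sketch: given a trend of H1:PSL-PMC_RELOCK_DAY (uptime in days, resetting
# to zero on each PMC lock loss), pick out the value just before each reset
# and report quantiles of the resulting lock-stretch lengths.
import numpy as np

def lock_stretch_quantiles(uptime_days, quantiles=(0.5, 0.9)):
    """Return requested quantiles of PMC lock-stretch durations [days]."""
    uptime_days = np.asarray(uptime_days, dtype=float)
    # A reset is a sample where the counter drops; the sample just before it
    # is the length of the lock stretch that just ended.
    drops = np.where(np.diff(uptime_days) < 0)[0]
    stretch_lengths = uptime_days[drops]
    return np.quantile(stretch_lengths, quantiles), stretch_lengths

# Example usage: the median (50% quantile) is the "typical" stretch, and a
# higher quantile (here 90%) is a reasonable stand-in for "usually".
# (q50, q90), stretches = lock_stretch_quantiles(uptime_days)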
Still qualitative at this point, but a good representative time period should be the two years between July 01 2022 00:00 UTC and July 01 2024 00:00 UTC. This time period contains two observatory cycles of "vent > commission > observe," with O4a lasting from May 24 2023 at 15:00 UTC to Jan 16 2024 at 16:00 UTC, and O4b resuming Apr 10 2024 15:00 UTC.

The ndscope command starts with
ndscope H0:VAC-LY_Y1_PT120A_PRESS_TORR H0:VAC-LY_Y1_PT120B_PRESS_TORR . H1:GRD-ISC_LOCK_STATE_N . H1:PSL-PWR_PMC_TRANS_OUTPUT . H1:PSL-PMC_RELOCK_DAY . H1:ALS-C_SHG_IR_DC_POWERMON . H1:ALS-C_FIBR_EXTERNAL_DC_POWERMON H1:ALS-C_FIBR_INTERNAL_DC_POWERMON &

I've saved the .yaml and .mat file of the session in /ligo/home/jeffrey.kissel/2025-02-24/:
alssqzpowr_July2022toJul2024_trend.mat
alssqzpowr_July2022toJul2024_trend.yaml
and I attach a screenshot of the session.
VIOLIN_DAMPING Guardian keeps us from Observing for ~2m30s each time we get to NLN. In Feb we've lost lock from state 600 three times a day on average, so this is around 0.5% of observing time lost. Is this wait timer in the VIOLIN_DAMPING Guardian required?
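For reference, the back-of-envelope arithmetic behind the ~0.5% figure:

# Arithmetic behind the ~0.5 % estimate above.
wait_per_lock = 2 * 60 + 30        # VIOLIN_DAMPING wait timer [s]
locklosses_per_day = 3             # Feb average of locks reaching state 600
seconds_per_day = 24 * 3600

fraction_lost = wait_per_lock * locklosses_per_day / seconds_per_day
print(f'{fraction_lost:.3%} of observing time')   # ~0.52 %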
WP12342
Janos, Dave:
VACSTAT was extended by the addition of the X6 and Y6 beam tube ion-pump vacuum gauges. These are located about 100m from their respective end stations, and are solar powered.
The service was restarted at 11:20. The extended medm is shown attached.
We had a lockloss, so I was able to take a few minutes and attempt a remote alignment tweak of the FSS RefCav from the Control Room. With the IMC OFFLINE, I was able to increase the RefCav TPD from a starting value of ~0.677 V up to ~0.801 V. With the IMC once again locked, the RefCav TPD sits right around ~0.799 V. This is the highest we've seen the RefCav TPD since our amplifier diode slope measurements (82635 & 82636) and better than the last on-table adjustment in January (82503). This means that an on-table alignment is not necessary at this point in time, so there will be NO PSL enclosure incursion during maintenance tomorrow. This closes LHO WP 12349.
Continuing to assess the effects of last night's windstorm via the Control Room cameras, I see that the leftmost (closest to the parking lot) panels of the EY Wind Fence are pulled away from a post (see attached photo). Sending an email to Jim/Mitch.
Wind is forecast to return tonight with a wind advisory from 7pm tonight to 7pm tomorrow night (LINK).
Closes FAMIS#31074, last checked 82857
Six days ago, on February 18th, PMC REFL OUT increased a bit but also got almost twice as noisy. The mean also seemed to oscillate a bit over the past day.
Everything else is looking normal.
We tried to tweak the PMC beam alignment last week, which coincides with the increase in PMC Refl variation. We did not see any change in PMC Trans with the tweak, but clearly the PMC Refl signal became more noisy. At this time I'm not sure why this happened. We've seen in the past that the PMC alignment being off can cause the Refl signal to get very noisy, which is cured by fixing the alignment. Not the case here, as the alignment was already good when we tried to tweak it. I don't see anything in the other trends that would indicate a reason behind this more noisy PMC Refl. We'll continue to monitor this.
Sheila, Camilla. After Jennie adjusted the OMC offsets, the high frequency SQZ was bad, we ran SCAN_ALIGNMENT_FDS and SCAN_SQZANG_FDS. The alignment changes were considerable in Yaw so we popped back out of observing and added a -0.2 offset to H1:ASC-AS_B_RF42_YAW and accepted in Observe/safe sdf. Plot attached showing changes to ASC and high freq (purple BLRMs) SQZ.
Following Gabriele's CHETA tests with the Hamamatsu detectors in CIT#612, we compared to our CO2X VIGO PWM 10.6 diodes.
The coherence is much worse between the two IN and OUT diodes (in the same path, separated by a 50/50 BS); we have no coherence above 600Hz, which is the opposite of what Gabriele sees.
Attached are plots with our slower and faster channels.
Even looking at the coherence between the DC and AC channels on the same PD is similarly bad (attached), where green is the CO2X OUT PD and purple is the CO2X IN PD. Gabriele suggests this could be shot noise or dark noise or electronics noise or saturation in the PD electronics, as he saw at CIT.
We already know that the DC signal is dominated by dark noise, e.g. plot from 82187 so that's probably what we're seeing reducing the coherence here.
Mon Feb 24 10:13:51 2025 INFO: Fill completed in 13min 47secs
TCmins [-144C, -143C] OAT (10C, 50F) DeltaTempTime 10:13:57
Summary:
- Average duty cycle for LHO this week was 65.4%
- The BNS range fluctuated between 140 and 160 Mpc this week
- A couple of the locklosses can be explained by ground motion: Friday's earthquake at 04:51 UTC and Saturday's earthquake at 23:31 UTC, which extended into Sunday morning
- Multiple Superevents this week!
- Strange noise pattern in H1 BSC2 motion Y Wednesday through Sunday
- Fscan line count was very low on Saturday and Sunday (below 600)
- Multiple new channels in Hveto this week
Full report: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20250203
Yashasvininad pointed out some interesting BSC2 (BS chamber) accelerometer motion wandering at 5-10Hz that started around Feb 3rd-4th (/summary/day/20250204/pem/accelerometers_corner_bscs/) and that we still have. I cannot see it before February. It's very visible in the last two days, e.g. plot.
FM6 + FM8 + FM10 Gain +0.01 (+30deg phase) looks to be working for now - see attached screenshot.
The other settings I tried that didn't work are listed below:
zero phase Gain 0.01 (IY05 increasing, IY06 decreasing).
-30deg phase gain 0.01 (IY05 increasing, IY06 decreasing).
We lost lock, and during the next lock Corey applied the above settings (in bold font) and they seem to be working fine. Hence, I will continue with this for the next few lock stretches. Not committing it to lscparams yet since things can still change.
FM6 + FM8 + FM10 Gain +0.01 (+30deg phase) have been committed to lscparams for ITMY mode 05/06.
Due to so much SQZ strangeness over the weekend, Sheila set the sqzparams.py use_sqz_ang_servo to False and I changed the SQZ_ANG_ADJUST nominal state to DOWN and reloaded SQZ_MANAGER and SQZ_ANG_ADJUST.
We set the H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG to a normal good value of 190. If the operator thinks the SQZ is bad and wants to change this to maximize the range AFTER we've been locked 2+ hours, they can. Tagging OpsInfo.
Daniel, Sheila, Camilla
This morning we set the SQZ angle to 90deg and scanned to 290deg using 'ezcastep H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG -s '3' '+2,100''. Plot attached.
You can see that the place with the best SQZ isn't in a good linear range for H1:SQZ-ADF_OMC_TRANS_SQZ_ANG, which is why the SQZ angle servo has been going unstable. We are leaving the SQZ angle servo off.
Daniel noted that we expect the ADF I and Q channels to rotate around zero, which they aren't. So we should check that the math calculating these is what we expect. We struggled to find the SQZ-ADF_VCXO model block; it's in the h1oaf model (so that the model runs faster).
Today Mayank and I scanned the ADF phase via 'ezcastep H1:SQZ-ADF_VCXO_PLL_PHASE -s '2' '+2,180''. You can see in the attached plot that the I and Q phases show sine/cosine functions as expected. We think we may be able to adjust this phase to improve the linearity of H1:SQZ-ADF_OMC_TRANS_SQZ_ANG around the best SQZ so that we can again use the SQZ ANG servo. We started testing this (plot), but found that the SQZ was very frequency dependent and needed the alignment changed (83009), so we ran out of time.
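For anyone repeating this in a script rather than with ezcastep, here is a rough pyepics sketch of the same scan; the phase channel is the one quoted above, while the ADF I and Q readback channel names are assumptions for illustration.

# Rough equivalent of the ezcastep scan above: step H1:SQZ-ADF_VCXO_PLL_PHASE
# by +2 deg every 2 s, 180 times, recording the ADF I and Q outputs at each
# step. I_CH and Q_CH are assumed (illustrative) channel names.
import time
from epics import caget, caput

PHASE_CH = 'H1:SQZ-ADF_VCXO_PLL_PHASE'
I_CH = 'H1:SQZ-ADF_OMC_TRANS_I_OUTPUT'   # assumed name for the ADF I readback
Q_CH = 'H1:SQZ-ADF_OMC_TRANS_Q_OUTPUT'   # assumed name for the ADF Q readback

phases, i_vals, q_vals = [], [], []
for _ in range(180):
    caput(PHASE_CH, caget(PHASE_CH) + 2)   # +2 deg per step
    time.sleep(2)                          # match ezcastep's -s '2' pacing
    phases.append(caget(PHASE_CH))
    i_vals.append(caget(I_CH))
    q_vals.append(caget(Q_CH))
# Plotting phases vs i_vals / q_vals should reproduce the sine/cosine shape
# seen in the attached plot.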
FAMIS 31073
The only event of note I can see is that PMC REFL suddenly rose by about 0.5W starting 4 days ago and has almost leveled out.
Forgot to add this comment last week, my apologies. The PMC Refl rise appears to coincide with a temperature change in the enclosure and LVEA. Seen on the Weekly Environment plots, there is a clear downward change in all 4 temperature sensors in the PSL enclosure Laser Room (TBLN, TBLS, ACN, ACS), and a clear drop and levelling out of the temperature in the enclosure Anteroom and the PSL LVEA sensor (sits on the exterior PSL enclosure wall between the enclosure and HAM1). This is also seen on a couple of items exposed to ambient temperatures, namely the NPRO laser pump diodes (in the NPRO laser head in the enclosure Laser Room) and the LVEA Control Box (in the PSL rack outside of the enclosure). On the Weekly Cooling set of plots, a clear change in these 3 channels (*_NPRO_LD[1/2]TEMP, *_CB1_TEMP) is seen coincident with the temperature change witnessed by the various PSL temperature sensors.
Seeing this, we attempted a brief alignment tweak of the PMC beam from the Control Room (the theory being that since the change in PMC Refl coincides with a temperature change, maybe a temperature-induced alignment shift caused the increase in PMC Refl). Unfortunately we saw no change in PMC Trans with an alignment tweak.