Lockloss @ 15:00 UTC - link to lockloss tool
Ends lock stretch at 7 hours. No obvious cause; environment is calm and no real sign of an ETM glitch.
TITLE: 03/22 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.30 μm/s
QUICK SUMMARY: Despite periods of high winds, H1 generally had a good night with only one lockloss. Lock stretch is currently up to almost 7 hours.
TITLE: 03/21 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
H1 was locked the entire shift (even with winds over 30 mph). Had one drop from observing due to the squeezer, but other than that it was a quiet shift.
LOG:
Created an ndscope for viewing the calibration injections: ndscope /ligo/home/anthony.sanchez/Desktop/CALsweepEXCchans.yaml
Tried simply looking at time series data of a number of calibration measurements and calibration excitation channels, trying to determine if there was anything easy to find that would set apart the cal sweeps that survive vs. the ones that end in lockloss.
I was also cross checking some of the injections with their Log files found here:
/opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1
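For anyone who wants to repeat this kind of check without the ndscope template, below is a minimal sketch using gwpy; the channel names and GPS window are placeholders (not the exact set in CALsweepEXCchans.yaml) and would need to be swapped for the real sweep channels and times.

```python
# Sketch: pull time series around a calibration sweep and plot them together.
# Assumes gwpy and NDS access to H1 data; the channel list and GPS window are
# placeholders, not the exact set saved in CALsweepEXCchans.yaml.
from gwpy.timeseries import TimeSeriesDict
import matplotlib.pyplot as plt

channels = [
    "H1:SUS-ETMX_L3_MASTER_OUT_UL_DQ",   # example ETMX L3 master output channel
    "H1:CAL-PCALY_EXC_SUM_DQ",           # placeholder calibration excitation channel
]
start, end = 1423000000, 1423000600      # placeholder GPS times around a sweep

data = TimeSeriesDict.get(channels, start, end)

fig, axes = plt.subplots(len(channels), 1, sharex=True, figsize=(10, 6))
for ax, name in zip(axes, channels):
    ax.plot(data[name].times.value, data[name].value, label=name)
    ax.legend(loc="upper right")
axes[-1].set_xlabel("GPS time [s]")
fig.tight_layout()
plt.show()
```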
I looked at 9 Different CAL measurements:
Things to Note:
As mentioned in Camilla's alog about the locklosses during calibration, there does seem to be a 43.6 Hz signal in ETMX_L3_MASTER that gets ramped up when we have a lockloss but not when we survive the calibration (a sketch of one way to check for this is at the end of this entry).
There is also a tendency for the DARM1_EXC signal to show up in ETMX_L3_MASTER_OUT, but only at certain frequencies; I'm currently not sure whether that is intentional.
Date | LL / Survived | Well-staggered gain ramp-ups? | DARM1_EXC signal in ETMX_L3_MASTER_OUT? | 43.6 Hz signal in ETMX_L3_MASTER?
Feb 1st | Survived | Yes, all gains ramped up one by one and staggered | Yes, 1200 Hz | Not visible on time series; a 7 Hz signal is, though
Feb 6th | LL | Yes | Yes, 1200 Hz | Yes
Feb 15th | LL | No, L2 & L3 CAL_EXC ramp at the same time | Yes, 1098 Hz (lost lock at 1098 Hz on this one) | Not visible on time series; a 7 Hz signal is, though. LL looks different than the rest
Feb 20th | Survived | No, DARM1_EXC and ETMX_L3_CAL | Yes, 1200 Hz | Not visible on time series; a 7 Hz signal is, though
Feb 22nd | Survived | Well staggered | Yes, 1200 Hz | Not visible on time series; a 7 Hz signal is, though
Mar 6th | LL | No, ETMX_L2_CAL & DARM1 ramp up at the same time | Yes, 1200 Hz | Yes
Mar 8th | LL | No, ETMX_L2 & L3_CAL & DARM1 ramp at the same time | Yes, 1200 Hz | Yes
Mar 13th | LL | No, ETMX_L2 & L3_CAL & DARM1 ramp at the same time | Yes, 1200 Hz | Yes
Mar 16th | Survived | Yes, very well staggered ramp times | No 1200 Hz line was run at all | No
I don't feel like this was all that fruitful, but hopefully someone else finds it useful.
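As a possible follow-up (not something I ran for the table above), here is a minimal sketch of one way to quantify whether the 43.6 Hz content in ETMX_L3_MASTER ramps up ahead of a lockloss, using a band-pass and a running RMS in gwpy; the channel name and GPS times are placeholders.

```python
# Sketch: band-pass an ETMX L3 master channel around 43.6 Hz and track its RMS,
# to see whether that line ramps up ahead of a lockloss. Channel name and GPS
# times are placeholders; gwpy and NDS access are assumed.
from gwpy.timeseries import TimeSeries

channel = "H1:SUS-ETMX_L3_MASTER_OUT_UL_DQ"   # placeholder master-out channel
start, end = 1423000000, 1423000600           # placeholder window ending near a lockloss

data = TimeSeries.get(channel, start, end)
band = data.bandpass(42.6, 44.6)              # 2 Hz band centred on 43.6 Hz
rms = band.rms(1.0)                           # 1-second running RMS

plot = rms.plot()
plot.gca().set_ylabel("RMS in 42.6-44.6 Hz band")
plot.show()
```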
H1 just had a drop from OBSERVING due to the Squeezer's OPO_LR. The Squeezer came back on its own in less than 2-min.
We do have the following message for the SQZ_OPO_LR node:
"pump fiber rej power in ham7 high, nominal 35e-3, align fiber pol on sqzt0"
And it looks like SHG Fiber Rejected Power has had a high value (over 0.350 counts) since Mon (see RyanC's alog83406) and has been railing at 0.753 the last few days (see attached). Trending back further shows that this rejected power hasn't had this kind of sustained railing since Aug-Oct 2023.
Since (1) H1's been running like this the last few days, (2) SQZ came back quickly, and (3) the H1 range looks normal (around 150 Mpc), I took H1 back to Observing and am tagging SQZ.
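For reference, a minimal sketch of trending this kind of channel over several days is below; the channel name is a placeholder and should be checked against the SQZT0 MEDM screen before use.

```python
# Sketch: trend the SHG fiber rejected power over several days using minute
# trends. The channel name below is a placeholder, not the verified name;
# gwpy and NDS access are assumed.
from gwpy.timeseries import TimeSeries

channel = "H1:SQZ-SHG_FIBR_REJECTED_POWER"    # placeholder channel name
start, end = "2025-03-17 00:00", "2025-03-22 00:00"

trend = TimeSeries.get(channel + ".mean,m-trend", start, end)
plot = trend.plot()
plot.gca().set_ylabel("SHG fiber rejected power [counts]")
plot.show()
```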
Mayank, Jennie, Siva, Keita
Continuing the measurements carried out on the ISS array in alog 83077 (between that alog and today, Siva and Mayank had set up measurements of the first four ISS array PDs with oscilloscopes).
One problem that emerged is that the QPD readout (shown in this image) does not give a number for how far from centre the beam is on the QPD in our lab setup (it uses an old LCD output box with a microcontroller we can't find the documentation for). Another is that the four PDs are best aligned away from the centre of the QPD, so after testing it may be necessary to move the QPD for best alignment.
This morning Siva and Mayank set up the readout for all 8 PDs and also added a polarising beamsplitter between the PZT-controlled mirror used for dithering the alignment to the array and the aperture on the array assembly input. This was to ensure the polarisation was not affecting the measurement.
Mayank and I measured the AC and DC signals for all 8 PDs using a horizontal dither on the PZT controlled mirror before the polarising beamsplitter, when:
1: the light was centered on the QPD and
2: when it was at the observed 'best alignment', where the coupling from the dither to the PDs was minimised.
Changing the DC alignment is done with the pitch and yaw screws on the PZT mirror mount.
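For context, the dither-coupling comparison amounts to looking at each PD's response at the dither frequency relative to its DC level; below is a minimal numpy sketch of that kind of lock-in style demodulation (the sample rate, dither frequency, and synthetic data are illustrative assumptions, not our actual scope settings).

```python
# Sketch: estimate how strongly a dither at f_dither couples into a PD signal
# by demodulating the PD time series at that frequency. The sample rate,
# dither frequency, and input data are assumptions for illustration only.
import numpy as np

def dither_coupling(pd_signal, fs, f_dither):
    """Return (DC level, amplitude of the component at f_dither)."""
    t = np.arange(len(pd_signal)) / fs
    i_quad = np.mean(pd_signal * np.cos(2 * np.pi * f_dither * t))
    q_quad = np.mean(pd_signal * np.sin(2 * np.pi * f_dither * t))
    ac_mag = 2 * np.hypot(i_quad, q_quad)   # amplitude of the dither line
    dc = np.mean(pd_signal)
    return dc, ac_mag

# Example with synthetic data: 1 V DC plus a small 10 Hz dither component.
fs, f_dither = 10_000.0, 10.0
t = np.arange(0, 5, 1 / fs)
fake_pd = 1.0 + 0.02 * np.sin(2 * np.pi * f_dither * t) + 0.001 * np.random.randn(t.size)
dc, ac = dither_coupling(fake_pd, fs, f_dither)
print(f"DC = {dc:.3f} V, dither amplitude = {ac:.4f} V, ratio = {ac / dc:.2e}")
```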
After the first couple of measurements we tried to find a time when the laser was not in a noisy state. This noise could be because the laser keeps running multi-mode, or it could be some pickup in the measurement cables - we're not sure, so further investigation is needed.
It does seem possible to obtain a spot in the top left-hand quadrant of the QPD where the dither coupling to all 8 PDs is minimised.
We also repeated measurements 1 and 2 with a vertical dither on the PZT. This is the LCD screen at the best alignment on the PDs for a vertical dither on the PZT mirror. This is the readout for PDs 1-4 and 5-8 at this position. This is the LCD screen when the input alignment is such that the beam is centred on the QPD.
I have attached our data file references for the measurements and some photos.
Keita also tuned the laser temperature control thermistor from a resistance of 10 kOhms to 9.602 kOhms, which seems to stop the laser from mode-hopping and becoming noisy. Siva has also hooked up a cable between the QPD and a different QPD readout box which we can get oscilloscope readouts from. More measurements to follow...
For FAMIS #26368: All looks well for the last week for all site HVAC fans (see attached trends).
TITLE: 03/21 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Several hours of downtime today thanks to some earthquakes this morning and high winds this afternoon. Eventually we were able to relock, and so far H1 has been observing for about 1.5 hours. Quiet day otherwise.
LOG:
TITLE: 03/21 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 7mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.33 μm/s
QUICK SUMMARY:
H1's been Observing for 90 min and so far the environment looks decent for evening operations (winds mostly under 10 mph); it's been raining the last few hours. Ryan passed on how locking went for him, and there is an opportunistic Observing-drop for Robert if L1 is out of Observing.
H1 returned to observing at 21:53 UTC after several hours of downtime due to earthquakes and high winds. The wind has calmed down enough that I was able to relock after an initial alignment.
I accepted the SDF diffs Sheila predicted in alog83483 before going to observing, screenshot attached.
Bi-Weekly TCS Chiller Water Level Top-Off Famis 27811
CO2X
CO2Y
There was no water in the leak cup.
Fri Mar 21 10:05:37 2025 INFO: Fill completed in 5min 34secs
HEPI Pump Trends Monthly. Last Checked in alog 82928. Closes FAMIS 37203.
Trends look as expected and are comparable in noise to last month.
Since I changed the ramp time to 2 seconds for the second time, there hasn't been a change in the rate of these locklosses; there have been 8 per NLN lockloss since March 18th at 21 UTC. I've now changed the ramp time back just to avoid an unnecessary change to ALS.
Oli used data from the lockloss tool to make this useful plot showing how many locklosses we've had from the LOCKING_ALS state (state 15 for ISC_LOCK), blue shows the total number of state 15 locklosses for that day and orange shows those that were tagged as high wind or EQ. This shows the problem starting on the 22nd or 23rd UTC time, which might line up in time with this change to the ALS demod phase and locking gain: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82388
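For anyone reproducing that plot, a minimal sketch of the counting step is below; it assumes a CSV export from the lockloss tool with timestamp, ISC_LOCK state at lockloss, and tag columns (the file and column names here are hypothetical).

```python
# Sketch: count LOCKING_ALS (ISC_LOCK state 15) locklosses per day, split into
# all vs. wind/EQ-tagged. Assumes a CSV export from the lockloss tool with
# columns "utc_time", "state", and "tags"; those names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("locklosses.csv", parse_dates=["utc_time"])
state15 = df[df["state"] == 15].copy()
state15["day"] = state15["utc_time"].dt.date
state15["env"] = state15["tags"].fillna("").str.contains("WINDY|EARTHQUAKE", case=False)

daily = state15.groupby("day").agg(total=("state", "size"), env_tagged=("env", "sum"))

ax = daily.plot.bar(figsize=(10, 4))
ax.set_ylabel("State 15 locklosses per day")
plt.tight_layout()
plt.show()
```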
Because of the large EQ right now, we can't lock the Y arm to measure the TF. I've reverted the changes, and accepted the changes in SDF safe.snap. They will need to be accepted in observe. It would also be a good idea to measure the OLG when we can.
FAMIS 26376, last checked in alog83399
Laser Status:
NPRO output power is 1.834W
AMP1 output power is 70.15W
AMP2 output power is 140.0W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 44 days, 19 hr 39 minutes
Reflected power = 22.58W
Transmitted power = 106.0W
PowerSum = 128.5W
FSS:
It has been locked for 0 days 0 hr and 12 min
TPD[V] = 0.7967V
ISS:
The diffracted power is around 3.9%
Last saturation event was 0 days 0 hours and 55 minutes ago
Possible Issues: None reported
Lockloss @ 15:00 UTC - link to lockloss tool
S-waves from two M6.2 earthquakes, one from Panama and another from the Aleutians, AK hit at pretty much the same time, sending H1 into EQ mode and losing lock. Since the R-waves are still 15 minutes out from the Panama quake and there have already been some aftershocks, I'm leaving H1 in DOWN until those pass by.
At the EY station the compressor is being replaced - after the one at EX is done. In this aLog, in the comments, the progress of this operation is tracked continuously, up to the first startup by the supplier, Rogers Machinery. Another important consideration here is that the purge line at EY needs to be replaced (based on an FTIR test - see DCC LIGO-E2300222-v2; as can be seen, the level of contamination reaches even the 10 µg/cm² value). This operation will be done after the April-May vent, so the EY station will be ready to be vented after O4.
02-25 (maintenance Tuesday): the old compressor was pulled out (it is temporarily stored in the EY receiving area). The beginnings of the purge and TMDS lines with the associated brackets and unistruts were taken off. The new compressor unit and dryer skid were anchored in the mechanical room. It is important to mention that the orientation of the inlet was brought closer to the purge line inlet into the VEA, so the overall length of the associated circulation lines will be much shorter. Next is the electrical and pneumatic installation, which will be completed in the next 1-2 weeks.
The filter tree was installed and supports anchored to the slab. Ken also reports that electrical installation is complete. Connection to the purge air header is awaiting CF fittings from the supplier. However, startup testing can continue prior to header connection.
The 1st startup of the compressor was carried out by Rogers Machinery on March 18th, during maintenance hours.
Jennie W, Sheila
Summary: We altered the offsets on the H1:ASC_OMC_{A,B}_{PIT,YAW} QPDs, which are used to align the beam into the OMC. This was aiming to give us an improvement in optical gain. After doing this, we measured the anti-symmetric port light changing as we change the DARM offset. We are trying to use both of these measurements to narrow down where we have optical loss that could be limiting our observed squeezing. We performed both measurements successfully, but the different alignment of the OMC made the squeezing worse, so Camilla (alog #83009) needed to do some tuning.
Last time I did this (alog #82938) I used the wrong values, as our analysis used the output channels of the loops instead of the input channels, which come before the offsets are applied. The new analysis of our measurement of the optical gain (as seen by the 410 Hz PCAL line) changing with QPD offset shows that we want the loop inputs to change to:
H1:ASC_OMC_A_PIT_INMON to 0.3 -> so we should change H1:ASC_OMC_A_PIT_OFFSET to -0.3
H1:ASC_OMC_A_YAW_INMON to -0.15 -> so we should change H1:ASC_OMC_A_YAW_OFFSET to 0.15
H1:ASC_OMC_B_PIT_INMON to 0.1 -> so we should change H1:ASC_OMC_B_PIT_OFFSET to -0.1
H1:ASC_OMC_B_YAW_INMON to 0.025 -> so we should change H1:ASC_OMC_B_YAW_OFFSET to -0.025
We stepped these up in steps of around 0.01 to 0.02 while monitoring the saturations on the OMC and OM3 suspensions and the optical gain, both to make sure we were going in the correct direction and that we were not near saturation of the suspensions, as happened last time I tried to do this.
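A minimal sketch of that kind of gradual stepping is below, written with pyepics caget/caput; the channel name, step size, and dwell time are illustrative only and not the exact procedure we used.

```python
# Sketch: step an ASC offset toward a target in small increments, pausing
# between steps so saturations and optical gain can be watched. Uses pyepics
# caget/caput; the channel, step size, and sleep time are illustrative.
import time
import numpy as np
from epics import caget, caput

def step_offset(channel, target, step=0.01, dwell=30.0):
    """Walk `channel` from its current value to `target` in `step`-sized moves."""
    current = caget(channel)
    if current is None or np.isclose(current, target):
        return
    direction = np.sign(target - current)
    for value in np.arange(current, target, direction * step):
        caput(channel, float(value))
        time.sleep(dwell)   # time to watch suspension outputs and optical gain
    caput(channel, target)  # land exactly on the requested value

# Example (illustrative channel and target only):
# step_offset("H1:ASC-OMC_A_PIT_OFFSET", -0.3, step=0.02, dwell=60.0)
```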
Attached is the code and the ndscope showing the steps on each offset, (top row left plot, top row center right plot, second row left plot, second row center right plot). The top stage osems for OM3 suspension are shown in the third row left plot, the top stage osems for OMC suspension are in the third row center left plot, and the optical gain is shown in the third row right plot.
The optical gain improved by 0.0113731 from a starting value of 1.00595, so that is an improvement of 1.13% in optical gain.
Around 19:04:28 UTC I started the DARM offset step to see if the change in optical gain matches what we would see if we measured the throughput of HAM 6. Unfortunately I forgot to turn off the OMC ASC, which we know affects this measurement of the loss. We stood down from changing the OMC and Camilla did some squeezer measurements, then I made the same mistake again the next time I tried to run it (d'oh). Both times I control-C'd the auto_darm_offset.py from the command line, which means the starting PCAL line values and DARM offset had to be reset manually before I ran the script successfully, after turning the OMC ASC gain to 0 to turn it off.
The darm offset measurement started at 19:20:31 UTC. The code to run it is /ligo/gitcommon/darm_offset_step/auto_darm_offset_step.py
The results are saved in /ligo/gitcommon/darm_offset_step/data and /ligo/gitcommon/darm_offset_step/figures/plot_darm_optical_gain_vs_dcpd_sum.
From the final plot in the attached pdf, the transmission of the fundamental mode light between ASC_AS_C (anti-symmetric port) and the DCPDs is (1/1.139)*100 = 87.8%. We can compare this to the previous measurement from last week with the old QPD offsets to see if the optical loss change matches what we would expect from such a change in optical gain.
Since the script didn't save the correct values for PCAL EY and EX (due to the script being run partially twice before a successful measurement), I reverted the PCAL values back using SDF before we went into observing. See attached screenshots.
Sheila accepted the new ASC-OMC_A and B OFFSET values in OBSERVE and SAFE (only have the pic for OBSERVE).
Comparing OMC losses calculated by OMC throughput and optical gain measurements.
If we take the improvement in optical gain noted above and calculate the fractional improvement in the optical gain squared, i.e.
(g_f^2 - g_i^2)/ g_i^2 = 0.023 = 2.3 %
And compare it to the gain in OMC throughput from this entry to the measurement after changing the OMC ASC offsets above
(T_OMC_f - T_OMC_i)/ T_OMC_i = 0.020 = 2%
Both methods show a similar improvement in the coupling to the OMC, or alternatively a decrease in the HAM 6 losses. Since we improved the alignment of the OMC, it makes sense that the losses decrease, and their agreement validates our method of using DARM offset steps to calculate OMC throughput and thus the loss in HAM 6.
The optical gain must be squared as it changes with the square root of the power at the output (due to the DARM loop).
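As a quick numerical check of the 2.3% figure, using only the optical gain numbers quoted above (the throughput values come from the linked entries and are not repeated here):

```python
# Quick check of the fractional improvement in optical gain squared quoted above.
g_i = 1.00595            # starting optical gain (relative)
dg = 0.0113731           # measured improvement in optical gain
g_f = g_i + dg

frac_gain = dg / g_i                          # ~0.0113 -> 1.13% in optical gain
frac_gain_sq = (g_f**2 - g_i**2) / g_i**2     # ~0.023 -> ~2.3% in optical gain squared

print(f"optical gain improvement:         {100 * frac_gain:.2f} %")
print(f"optical gain squared improvement: {100 * frac_gain_sq:.2f} %")
```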
For this comparison I was not able to use the measurement of optical gain from the same day as the initial measurement of OMC throughput, (alog #82938) as the calibration was exported to the front-end between these two dates which would have changed the reference value for kappa C.
The code I used for the calculations is attached.
As I did for the previous DARM offset measurement on the 20th Feb, in alog #83586, I checked that the DARM offset does not show a clear trend in the OMC REFL power. This would be another way of quantifying the mode-matching of the DARM mode to the OMC, but since the mode-matching is good, no trend can be seen in this channel (top plot) as we change the DARM offset.
H1 back to observing at 16:13 UTC. I went straight into an initial alignment after the lockloss, then everything went fully automatically after that.