Thu Jan 09 10:03:57 2025 INFO: Fill completed in 3min 55secs
Jordan confirmed a good fill curbside. TCmins [-90C, -88C] OAT (1C, 34F)
Broadband and simulines ran without the squeezer today. There was also a small earthquake that started to roll through at 1651 UTC during this measurement.
Simulines start:
PST: 2025-01-09 08:36:47.309046 PST
UTC: 2025-01-09 16:36:47.309046 UTC
GPS: 1420475825.309046
Simulines end:
PST: 2025-01-09 09:00:29.590371 PST
UTC: 2025-01-09 17:00:29.590371 UTC
GPS: 1420477247.590371
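As a quick way to cross-check the UTC/GPS stamps above (a minimal sketch, assuming gwpy is available on a control room workstation):
# Cross-check of the simulines UTC <-> GPS stamps (sketch only; assumes gwpy).
from gwpy.time import to_gps, from_gps
start_gps = to_gps('2025-01-09 16:36:47.309046')   # expect ~1420475825.309046
end_gps   = to_gps('2025-01-09 17:00:29.590371')   # expect ~1420477247.590371
print(start_gps, end_gps)
print(from_gps(start_gps))                          # back to a UTC datetime
print(float(end_gps) - float(start_gps), 'seconds of measurement')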
Files:
2025-01-09 17:00:29,514 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,524 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,528 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,535 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250109T163648Z.hdf5
This should conclude at 2000 UTC (1200 PT).
VACSTAT detected a single BSC3 glitch at 03:18 this morning. This glitch is consistent with sensor noise.
I attempted to restart vacstat_ioc.service on cdsioc0 at 07:44 to reset the glitch, which failed with a leap-seconds error.
I updated the tzdata package on cdsioc0 and was then able to start vacstat_ioc.service.
After the restart, I disabled PT110 HAM6 which is currently not running.
TITLE: 01/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 124Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.63 μm/s
QUICK SUMMARY: Locked for 16 hours, useism up slightly from yesterday. The range has been trending down and we've been going in and out of Observing since Tony had to intervene. Looks like something in the squeezer is unlocking. It just unlocked again as I'm typing this up. Investigation ongoing.
Since the squeeze guardians reported that it has been unlocking from the OPO LR node, I adjusted the OPO temperature hoping that it would help. The log of that node is incredibly long, and it is very difficult to find what actually happened. From SQZ_OPO_LR: "2025-01-09_15:34:33.836024Z SQZ_OPO_LR [LOCKED_CLF_DUAL.run] USERMSG 0: Disabling pump iss after 10 lockloss couter. Going to through LOCKED_CLF_DUAL_NO_ISS to turn on again."
I also should have mentioned that there is calibration and commissioning planned for today starting at 1630UTC (0830 PT).
The OPO temperature adjustment did help our range, but it just dropped again. The SQZ team will be contacted.
The green SHG pump power needed to get the same 80uW from the OPO has increased from 14mW to 21mW since the OPO crystal swap on Tuesday. The CONTROLMON was at the bottom of its range trying to inject this level of light, hence the OPO locking issues (see attached plot).
In the past we've gotten back to Observing by following 70050 and lowering the OPO trans set point. But since we haven't yet maxed out the power in the SHG-to-OPO path, we instead tuned this up with H1:SYS-MOTION_C_PICO_I motor 2. Following 70050 and lowering the OPO trans set point remains a valid temporary fix if the OPO is having issues and H1:SQZ-OPO_ISS_CONTROLMON is too low.
It's surprising that the required pump light has increased this quickly, and faster than after the last spot move (see attached plot), but our OPO REFL is still low compared to before, meaning the new crystal spot still has lower losses than the last one. We might need to realign the SHG pump fiber to increase the amount of light we can inject.
We can work on improving Guardian log messages.
TITLE: 01/09 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.53 μm/s
QUICK SUMMARY:
When I was first woken by the H1, we were in Nominal_Low_Noise, but not observing.
I noticed that the SQZ_MANAGER had an issue.
I was very much not awake yet, so I just watched the SQZ_MANAGER log and states and observed the following:
SQZ_MANAGER couldn't get all the way up to FREQ_DEP_SQZ[100] and would hit FC_WAIT_FDS[64] and then drop down to FDS_READY_IFO[50] again.
I then started to read the error messages we were getting; some beam diverter messages would flash up quickly.
I then read some alogs about recent SQZ issues.
I saw this alog from Camilla, which links to Oli's, but I wasn't sure if I was having the same exact issue.
But it did kind of point me in a direction.
I then checked the SQZ_LO_LR and SQZ_OPO_LR Guardian nodes and logs and INIT'ed SQZ_LO_LR. This did not solve the issue, so I tried other things from Oli's alog, like requesting RESET_SQZ_ASC and RESET_SQZ_ANG. I stepped away after hitting INIT on SQZ_MANAGER to wash the sleep from my eyes, and it was relocked when I came back.
TITLE: 01/09 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Very quiet shift, with H1 staying locked throughout and one BBH candidate event. H1 has now been locked for almost 7 hours.
TITLE: 01/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: One lock loss with an automated relock, and some SQZ tuning on the prior lock. Other than that it was a quiet shift.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:08 | OPS | LVEA | LVEA | N | LASER SAFE | 15:07 |
16:26 | FAC | Kim | Opt Lab | n | Tech clean | 16:49 |
21:40 | VAC | Gerardo, Jordan | EY | n | Ion pump work in mech room | 22:32 |
21:40 | - | Betsy | Mech room, OSB Rec | n | ISS array crate moves | 21:59 |
TITLE: 01/09 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 4mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.48 μm/s
QUICK SUMMARY: H1 has been locked for just over an hour.
For reference here is a link with a summary of somewhat recent status for the LIGO SUS Fiber Puller: alog #80406
Fiber pulling was on a bit of a hiatus with a busy fall/winter of Ops shifts and travel. This week I returned to the Fiber Puller lab to pull as many fibers as possible (I have some free days this week and next). I thought I would note today's work because of some odd behavior.
Yesterday I ended up spending most of the time making new Fiber Stock (we were down to 2, and about 12 more were made). I also packaged up Fiber Cartridge hardware returned from LLO and stored here to get re-Class-B-ed. I ended the day yesterday (Jan 7) pulling 2025's first fiber: S2500002. This fiber was totally fine, with no issues.
Today, I did some more lab clean-up but then around lunch I worked on pulling another fiber.
Issue with RESET Position
The first odd occurrence was with the Fiber Puller/LabVIEW app. When I RESET the system to take a fresh new Fiber Stock, the Fiber Puller/LabVIEW moved the Upper & Lower Stages to a spot where the Upper Stage/Fiber Cartridge Bracket was too "high" wrt the Fixed Lower Bracket, so I was not able to install a new Fiber Stock. After closing the LabVIEW app and a couple of "dry pulls", I was FINALLY able to get the system to the correct RESET position (not sure what the issue was here!).
Laser Power Looks Low
Now that the Fiber Stock was set to be POLISHED & PULLED, I worked on getting the Fiber Stock aligned (this is a step unique to our setup here at LHO). As I put the CO2 beam on the Fiber Stock, the first thing I noticed was very dim light on the stock (as seen via both cameras of the Fiber Puller). This was at a laser power of 55.5% (of the 200W "2018-new" laser installed for the Fiber Puller last summer). The alignment should not have changed for the system, so I wondered if there were issues with the laser settings. I noticed I had the laser controller at the "ANV" setting---it should be "MANUAL". But nothing improved the laser power.
Miraculously, the laser power (as seen with the cameras) returned to what I saw yesterday for S2500002, so I figured it was a glitch and moved on to a POLISH. But within a few minutes, while keeping an eye on the laser power on the stock via the cameras, the laser spot once again became dim. I did nothing and just let the POLISH finish (about 15 min). Fearing there was a power-related issue, I decided to run the Pull (which takes less than a minute), but at a higher power (75% vs. the 65% which had been the norm the last few months).
S2500003 pulled fine and passed.
I thought we were good, but when I set up for S2500004, I noticed more low and variable laser power. I worked on this fiber later in the afternoon after lunch. This one started with power which looked good to me (qualitatively), so I continued with setup. But in a case of déjà vu, the laser power noticeably dipped as seen via the cameras. For this fiber I decided to continue, but since S2500003 had a low-ish violin fundamental frequency (~498 Hz), I figured I would return to a power of 65% for the Pull, hoping the laser power was hot enough for the pull and not cool enough to cause the fiber to break during it (something which happened with our old and dying 100W laser that we replaced in 2024).
I pulled the fiber and it all looked fine, profiled it, and then ran an analysis on it: it FAILED.
This is fine. Fibers fail. Since Aug we have pulled 29 fibers and this was the 5th FAIL we have had.
This is where I ended the day, but I'm logging this work only because of some early suspicions about the laser. Stay Tuned For More!
Overall comments I wanted to add:
During yesterday's maintenance period, we removed the Inficon BCG450-SE pressure gauge that was on the C1 cross in the FC. This gauge has not been in use since the initial pumpdown of the filter cavity tube and was used in conjunction with the SS500 pump cart.
The +X RAV was closed, along with the redundant RAV before the Inficon gauge. The gauge volume was vented and the gauge was replaced with an o-ring angle valve and a KF40-to-CF adapter. Newly installed joints were pumped down and leak tested with the HLD, and showed no He signal above the leak detector background, <1E-11 Torr-l/s. After the leak check was complete, the +X RAV was opened to the main FC volume; the redundant RAV remains closed as a pump-out port.
This gauge is now a spare for the HAM6 gauge replacement to follow.
FAMIS 28456
Both CO2 lasers have generally been increasing in output power on each relock. The increased CO2-Y laser temperature lines up clearly with decreasing chiller flow rate on a few occasions in the past month.
ETMX glitches in the output drive <1s before lock loss.
In 81863 we looked at the RIN of the CO2 lasers on vs. off. Today we repeated the measurement with PWM (from the Synrad UC-2000); the noise level is considerably higher when PWM is on. Note that this isn't really the RIN, as we're not dividing by the DC power.
Data saved to camilla.compton/Documents/tcs/templates/dtt/20250107_CO2_RIN.xml and 20250107_CO2Y_RIN.xml
The CO2Y plots also show the dark noise as there is no photo detector plugged into CO2Y_ISS_IN.
For some of the CO2X data, the CO2 guardian was relocking (sweeping the PZT), as noted above.
These channels are calibrated in volts as the detector would output, before the gain of the PD's amplifier (D1201111).
DC levels of power on the ISS PDs at the different powers are:
With PWM on, the noise level increases by around a factor of 10. Unsure why the AC channel sees this increase less than the DC channels.
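For a true RIN estimate, the spectrum would be divided by the PD's DC level; a minimal sketch of doing that (assuming gwpy/NDS access; the channel name below is a hypothetical placeholder, not the real ISS PD channel):
# Sketch: normalize the PD spectrum by its DC mean to get a RIN-like quantity.
# Assumes gwpy with NDS access; channel name is a hypothetical placeholder.
from gwpy.timeseries import TimeSeries
chan = 'H1:TCS-ISS_PD_DC'                                  # hypothetical placeholder
data = TimeSeries.get(chan, 'Jan 07 2025 20:00', 'Jan 07 2025 20:10')
asd = data.asd(fftlength=8, overlap=4)                     # V/rtHz at the detector
rin = asd / data.mean().value                              # divide by DC level -> 1/rtHz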
Gabriele, Camilla
Erik showed me that we can look to higher frequencies using H1:IOP-OAF_L0_MADC{2,3}_TP_CH{10-13} channels at 65kHz. Can repeat 50% PWM next week.
Note that these are before the filtering (listed in 81868); we could apply the filters back in python (see the sketch below), but it's mainly gains and some shaping of the AC channel below 20 Hz and above 10 kHz, and a DC roll-off is expected around 100 kHz as in D1201111. DC values:
Matt Todd and I measured the powers before the CO2X ISS PDs today:
In 82182, using the specs of the PDs, we think we are measuring 30-50 mW. We didn't have time to check for clipping with an iris or to check alignment; both beams looked small, but the PDs are also small, ~1mm x 1mm.
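As a rough sketch of the kind of python re-filtering mentioned above (the actual gains and shaping are the ones listed in 81868; the gain and band edges below are hypothetical stand-ins only):
# Sketch: re-apply a gain plus simple AC-channel shaping to the raw 65 kHz MADC
# data in python. Assumes scipy; values here are hypothetical stand-ins, not the
# real filters from 81868.
import numpy as np
from scipy import signal
fs = 65536.0                               # IOP rate of the MADC test points
raw = np.random.randn(int(10 * fs))        # stand-in for the raw channel data
gain = 1.0                                 # hypothetical overall gain
sos = signal.butter(2, [20, 10e3], btype='bandpass', fs=fs, output='sos')
filtered = gain * signal.sosfilt(sos, raw)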
We investigated the SHV cable connectivity between the Ion Pump Controller at EY and the Ion Pump at Y2-8, checking for faults. We did not detect any obvious faults, but we did detect the prior splice, which was installed back in 2016. We used the FieldFox in TDR mode with a 1.2 meter launch cable in low-pass mode. I have attached scans of these findings:
The first photo is a shot from the controller: there is a large impedance change 19 meters from the connection point. This is the same location as the splice made in 2016, and the repair looks like we would expect a splice to look.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=28847
The second photo is a shot from the ion pump back to the end station. There is a very small bump 20 meters from the ion pump that is real but very tiny; it could be a kinked cable, but not enough to cause issues.
Otherwise, the cable looks fine. Future testing is warranted.
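For reference, the TDR distance scale is just the round-trip reflection delay scaled by the cable's velocity factor; a minimal worked example (the 0.66 velocity factor is an assumed typical coax value, not a measured one):
# Worked example: convert a TDR reflection delay to a distance-to-fault.
# The velocity factor is an assumed typical coax value, not a measured one.
c = 299792458.0               # m/s
vf = 0.66                     # assumed velocity factor of the SHV cable
round_trip_delay = 192e-9     # s; delay that would place a feature ~19 m out
distance = vf * c * round_trip_delay / 2
print(f"{distance:.1f} m from the connection point")   # ~19 m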
M. Pirello, G. Moreno, J. Vanosky
After the cable was deemed good, we took a look at the controller and found some "events" recorded in its log (see attached photo); one such event pertains to the glitch noticed here. All recorded events were kept on the controller. We are going to look further into this.
Elenna, Camilla
Ran the automatic DARM offset sweep via Elenna's instructions (took <15 minutes):
cd /ligo/gitcommon/labutils/darm_offset_step/
conda activate labutils
python auto_darm_offset_step.py
DARM offset moves recorded to /ligo/gitcommon/labutils/darm_offset_step/data/darm_offset_steps_2024_Oct_21_23_37_44_UTC.txt
Reverted the tramp SDF diffs afterwards (see attached). Maybe the script should be adjusted to do this automatically.
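One way the script could handle this itself would be to cache the tramp values it changes and write them back at the end; a minimal sketch (using pyepics, with purely hypothetical channel names):
# Sketch: cache and restore the tramp values the script touches, so the SDF
# diffs would not need a manual revert. Channel names are hypothetical.
from epics import caget, caput
tramp_channels = ['H1:EXAMPLE-TRAMP_1', 'H1:EXAMPLE-TRAMP_2']   # hypothetical
saved = {ch: caget(ch) for ch in tramp_channels}    # cache before the sweep
# ... run the DARM offset steps ...
for ch, val in saved.items():                       # restore afterwards
    caput(ch, val)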
I just ran Craig's script to analyze these results. The script fits a contrast defect of 0.742 mW using the 255.0 Hz data and 0.771 mW using the 410.3 Hz data. This value is lower than the previous 1 mW measured on July 11 (alog 79045), which matches up nicely with our reduced frequency noise since the OFI repair (alog 80596).
I attached the plots that the code generates.
This result then estimates that the homodyne angle is about 7 degrees.
Last year I added some code to plot_darm_optical_gain_vs_dcpd_sum.py to calculate the losses through HAM 6.
Sheila and I have started looking at those again post-OFI replacement.
Just attaching the plot of power at the anti-symmetric port (i.e., into HAM 6) vs. power after the OMC as measured by the DCPDs.
The plot is found in /ligo/gitcommon/labutils/darm_offset_step/figures/ and is also on the last page of the pdf Elenna linked above.
From this plot we can see that the power into HAM 6 (P_AS) is related to the power at the output DCPDs (P_DCPD) as follows.
P_AS = 1.220*P_DCPD + 656.818 mW
where the second term is light that will be rejected by the OMC, plus light that gets through the OMC but is insensitive to DARM length changes.
The throughput between the anti-symmetric port and the DCPDs is 1/1.22 = 0.820. So that means 18% of the TM00 light that we want at the DCPDs is lost through HAM 6.
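As a quick check of the numbers from the fitted slope:
# Throughput and loss from the fitted slope above (a slope of 1.220 means
# 1.220 mW at the AS port per 1 mW reaching the DCPDs).
slope = 1.220
throughput = 1 / slope        # ~0.820
loss = 1 - throughput         # ~0.18, i.e. ~18% of the TM00 light lost in HAM 6
print(f"throughput = {throughput:.3f}, loss = {loss:.1%}")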