TITLE: 01/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 124Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.63 μm/s
QUICK SUMMARY: Locked for 16 hours, useism up slightly from yesterday. The range has been trending down and we've been going in and out of Observing since Tony had to intervene. Looks like something in the squeezer is unlocking. It just unlocked again as I'm typing this up. Investigation ongoing.
TITLE: 01/09 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.53 μm/s
QUICK SUMMARY:
When I was first woken by the H1, we were in Nominal_Low_Noise, but not observing.
I noticed that the SQZ_MANAGER had an issue.
I was very much not awake yet, so I just watched the SQZ_MANAGER log and the SQZ_MANAGER states and observed the following:
SQZ_MANAGER couldn't get all the way up to FREQ_DEP_SQZ[100] and would hit FC_WAIT_FDS[64] and then drop down to FDS_READY_IFO[50] again.
I then started to read the error messages we were getting: some beam diverter messages that would flash up quickly.
I then started to read some alogs about recent SQZ issues.
I saw this alog from Camilla, which links to Oli's, but I wasn't sure if I was having the same exact issue.
But it did kinda point me in a direction.
I then checked the SQZ_LO_LR and SQZ_OPO_LR Guardian nodes and logs, and INIT'ed SQZ_LO_LR. This did not solve the issue, so I tried other things from Oli's alog, like requesting RESET_SQZ_ASC & RESET_SQZ_ANG. I stepped away after hitting INIT on SQZ_MANAGER to wash the sleep from my eyes, and it was relocked when I came back.
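For context only (this is an illustration, not the procedure followed from the MEDM screens): Guardian state requests like these ultimately go through the node's EPICS request channel, so a sketch of the equivalent command-line action might look like the following, assuming the usual H1:GRD-&lt;NODE&gt;_REQUEST / _STATE channel naming.

```python
# Illustrative sketch only: request a Guardian state via its EPICS channels.
# Channel names assume the usual H1:GRD-<NODE>_* convention; confirm them on the
# Guardian MEDM screen before using.
from epics import caget, caput

node = "SQZ_MANAGER"
print(caget(f"H1:GRD-{node}_STATE"))              # current state, e.g. FDS_READY_IFO
caput(f"H1:GRD-{node}_REQUEST", "RESET_SQZ_ASC")  # request the recovery state
```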
TITLE: 01/09 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Very quiet shift with H1 staying locked the whole shift and one BBH candidate event. H1 has now been locked for almost 7 hours.
TITLE: 01/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: One lock loss with an automated relock, and some SQZ tuning on the prior lock. Other than that it was a quiet shift.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:08 | OPS | LVEA | LVEA | N | LASER SAFE | 15:07 |
16:26 | FAC | Kim | Opt Lab | n | Tech clean | 16:49 |
21:40 | VAC | Gerardo, Jordan | EY | n | Ion pump work in mech room | 22:32 |
21:40 | - | Betsy | Mech room, OSB Rec | n | ISS array crate moves | 21:59 |
TITLE: 01/09 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 4mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.48 μm/s
QUICK SUMMARY: H1 has been locked for just over an hour.
For reference here is a link with a summary of somewhat recent status for the LIGO SUS Fiber Puller: alog #80406
Fiber pulling was on a bit of a hiatus with a busy Fall/Winter of Ops shifts and travel. This week I returned to the Fiber Puller lab to pull as many fibers as possible (I have some free days this/next week). I thought I would note today's work because of some odd behavior.
Yesterday I ended up spending most of the time making new Fiber Stock (we were down to 2, and about 12 more were made). I also packaged up Fiber Cartridge hardware that was returned from LLO and stored here, to get re-Class-B'ed. I ended the day yesterday (Jan 7) pulling 2025's first fiber: S2500002. This fiber was totally fine, with no issues.
Today, I did some more lab clean-up but then around lunch I worked on pulling another fiber.
Issue with RESET Position
The first odd occurrence was with the Fiber Puller/LabVIEW app. When I RESET the system to take a fresh new Fiber Stock, the Fiber Puller/LabVIEW moved the Upper & Lower Stages to a spot where the Upper Stage/Fiber Cartridge Bracket was too "high" wrt the Fixed Lower Bracket, so I was not able to install a new Fiber Stock. After closing the LabVIEW app and a couple of "dry pulls", I was FINALLY able to get the system to the correct RESET position. (Not sure what the issue was here!)
Laser Power Looks Low
Now that the Fiber Stock was set to be POLISHED & PULLED, I worked on getting the Fiber Stock aligned (this is a step unique to our setup here at LHO). As I put the CO2 beam on the Fiber Stock, the first thing I noticed was very dim light on the stock (as seen via both cameras of the Fiber Puller). This was at a laser power of 55.5% (of the 200W "2018-new" laser installed for the Fiber Puller last summer). The alignment should not have changed for the system, so I wondered if there were issues with the laser settings. I noticed I had the laser controller at the "ANV" setting (it should be "MANUAL"), but nothing improved the laser power.
Miraculously, the laser power (as seen with the cameras) returned to what I saw yesterday for S2500002, so I figured it was a glitch and moved on to a POLISH. But within a few minutes, while keeping an eye on the laser power on the stock via the cameras, the laser spot once again became dim. I did nothing and just let the POLISH finish (about 15 min). Fearing there was a power-related issue, I decided to run the Pull (it takes less than a minute), but at a higher power (75% vs the 65% which had been the norm over the last few months).
S2500003 pulled fine and passed.
I thought we were good, but when I set up for S2500004, I noticed more low and variable laser power. I worked on this fiber later in the afternoon, after lunch. This one started with power that looked good to me (qualitatively), so I continued with setup. But in a case of déjà vu, the laser power noticeably dipped as seen via the cameras. For this fiber I decided to continue, but since S2500003 had a low-ish violin fundamental frequency (~498Hz), I figured I would return to a power of 65% for the Pull, hoping the laser power was hot enough for the pull and not so cool as to cause the fiber to break during the pull (something which happened with our old and dying 100W laser, which we replaced in 2024).
I pulled the fiber and it all looked fine, profiled the fiber, and then ran an analysis on it: it FAILED.
This is fine. Fibers fail. Since Aug we have pulled 29 fibers and this was the 5th FAIL we have had.
This is where I ended the day, but I'm logging this work mainly to note some early suspicions about the laser. Stay Tuned For More!
Overall comments I wanted to add:
During yesterday's maintenance period, we removed the Inficon BCG450-SE pressure gauge that was on the C1 cross in the FC. This gauge has not been in use since the initial pumpdown of the filter cavity tube and was used in conjunction with the SS500 pump cart.
The +X RAV was closed, along with the redundant RAV before the Inficon gauge. The gauge volume was vented, and the gauge was replaced with an o-ring angle valve and a KF40-to-CF adapter. Newly installed joints were pumped down and leak tested with the HLD, and showed no He signal above the leak detector background, <1E-11 Torr-l/s. After the leak check was complete, the +X RAV was opened to the main FC volume; the redundant RAV remains closed as a pump-out port.
This gauge is now a spare for the HAM6 gauge replacement to follow.
FAMIS 28456
Both CO2 lasers have generally been increasing in output power on each relock. The increased CO2-Y laser temperature lines up clearly with decreasing chiller flow rate on a few occasions in the past month.
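For anyone repeating this check, a rough sketch of pulling the month-long trends is below; the channel names are placeholders, not verified H1 channel names.

```python
# Hedged sketch of the trend check: fetch a month of data with gwpy and compare
# CO2 laser output power against chiller flow. Channel names are placeholders.
from gwpy.timeseries import TimeSeriesDict

channels = [
    "H1:TCS-ITMY_CO2_LASERPOWER_OUTPUT",  # placeholder, not a verified channel
    "H1:TCS-ITMY_CO2_CHILLER_FLOW",       # placeholder, not a verified channel
]
data = TimeSeriesDict.get(channels, "2024-12-08", "2025-01-08")
for name, ts in data.items():
    print(name, float(ts.min()), float(ts.max()))
```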
ETMX glitches in the output drive <1s before lock loss.
Wed Jan 08 10:07:22 2025 INFO: Fill completed in 7min 19secs
Gerardo confirmed a good fill curbside. TCtrip = -60C, TCmins = [-93C, -90C], OAT 2C (36F)
Our range has been slowly falling, so I followed alog80461 to adjust the OPO temperature. We dropped Observing during this time (out from 16:33:14-16:35:14 UTC), and it looks to have helped our range a bit.
In 81863 we looked at the RIN of the CO2 lasers on vs off. Today we repeated the measurement with PWM (from the Synrad UC-2000); the noise level is considerably higher when PWM is on. Note that this isn't really the RIN, as we're not dividing by the DC power.
Data saved to camilla.compton/Documents/tcs/templates/dtt/20250107_CO2_RIN.xml and 20250107_CO2Y_RIN.xml
The CO2Y plots also show the dark noise as there is no photo detector plugged into CO2Y_ISS_IN.
For some of the CO2X data, the CO2 guardian was relocking (sweeping PZT), noted above.
These channels are calibrated in Volts as the detector would output, before the gain of the PD's amplifier D1201111.
DC levels of power on the ISS PDs at the different powers are:
With PWM on, the noise level increases by around a factor of 10. Unsure why the AC channel sees this increase less than the DC channels.
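To turn these into a true RIN, the amplitude spectral density of the DC-coupled signal would be divided by its mean level; a minimal sketch is below (how the channel data is fetched is left out, and the segment length is arbitrary).

```python
# Minimal RIN sketch: ASD of the DC-coupled PD signal divided by its mean (DC) level.
# `dc` is assumed to be the PD time series in volts and `fs` its sample rate.
import numpy as np
from scipy.signal import welch

def rin_asd(dc, fs):
    """Return (frequencies, RIN amplitude spectral density in 1/sqrt(Hz))."""
    f, psd = welch(dc, fs=fs, nperseg=int(16 * fs))  # ~0.06 Hz frequency resolution
    return f, np.sqrt(psd) / np.mean(dc)             # divide by DC level -> RIN
```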
Gabriele, Camilla
Erik showed me that we can look at higher frequencies using the H1:IOP-OAF_L0_MADC{2,3}_TP_CH{10-13} channels at 65kHz. We can repeat 50% PWM next week.
Note that these are taken before the filtering (listed in 81868); we could apply the filters back in with python, but it's mainly gains and some shaping of the AC channel below 20Hz and above 10kHz. A DC roll-off is expected around 100kHz, as in D1201111. DC values:
Matt Todd and I measured the powers before the CO2X ISS PDs today:
In 82182, using the specs of the PDs, we think we are measuring 30-50mW. We didn't have time to check for clipping with an iris or to check alignment; both beams looked small, but the PDs are also small (~1mm x 1mm).
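As a rough illustration of where an estimate like 30-50mW can come from (using the 66 Ohm shunt resistance quoted in the later filter-check comment, and a placeholder responsivity rather than the real VIGO spec):

```python
# Back-of-the-envelope power estimate: DC voltage -> photocurrent via the 66 Ohm
# shunt resistance, then -> optical power via the detector responsivity.
# The responsivity here is a placeholder, not the value from the PD datasheet.
R_SHUNT = 66.0        # Ohms, measured PD shunt resistance
RESPONSIVITY = 3.0    # A/W, placeholder value for illustration only

def optical_power_mW(v_dc):
    i_pd = v_dc / R_SHUNT              # photocurrent in A
    return 1e3 * i_pd / RESPONSIVITY   # optical power in mW

print(optical_power_mW(8.0))           # e.g. 8 V -> ~40 mW with these assumptions
```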
We investigated the SHV cable connectivity between the Ion Pump Controller at EY and the Ion Pump at Y2-8, checking for faults. We did not detect any obvious faults, but we did detect the prior splice, which was installed back in 2016. We used the FieldFox in TDR mode with a 1.2 meter launch cable in low pass mode. I have attached scans of these findings:
The first photo is a shot from the Controller. There is a large impedance change at 19 meters from the connection point; this is the same location as the splice made in 2016, and the repair looks like we would expect a splice to look.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=28847
The second photo is a shot from the Ion Pump back to the end station. There is a very small bump at 20 meters from the ion pump that is real but very tiny; it could be a kinked cable, but it is not enough to cause issues.
Otherwise, the cable looks fine. Future testing is warranted.
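For reference, the distances quoted above come from the TDR time base and the cable's propagation velocity; a quick sketch of the conversion is below (the velocity factor is a typical coax value, not the measured one for this run).

```python
# TDR time-to-distance conversion: distance = (velocity factor * c * round-trip time) / 2.
# The velocity factor is an assumed typical coax value, not measured for this cable.
C = 299_792_458.0        # m/s, speed of light
VELOCITY_FACTOR = 0.66   # assumed; solid-dielectric coax is typically ~0.66

def fault_distance_m(round_trip_s):
    return VELOCITY_FACTOR * C * round_trip_s / 2.0

# A reflection ~192 ns after launch would sit near the 19 m impedance change:
print(fault_distance_m(192e-9))   # ~19 m
```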
M. Pirello, G. Moreno, J. Vanosky
After the cable was deemed good, we took a look at the controller and found some "events" recorded in the log (see attached photo); one such event pertains to the glitch noticed here. All recorded events were kept on the controller. We are going to look further into this.
Looked at the spectrum of the 50W CO2 lasers on the fast VIGO PVM 10.6 ISS detectors when the CO2 is locked (using the laser's PZT) and unlocked/free running: time series attached. Small differences below 6Hz; see the spectrum attached.
Gabriele, Camilla.
We are not sure if this measurement makes sense.
Attached is the same spectrum with the CO2X laser turned off to see the dark noise. It appears that the measurement is limited by the dark noise of the diode above 40Hz. The ITMX_CO2_ISS_IN_AC channel dark noise is actually above the level when the laser is on, which doesn't make sense to me.
Gabriele and I checked that the H1:TCS-ITM{X,Y}_CO2_ISS_{IN/OUT}_AC de-whitening filter, zpk([20], [0.01], 0.011322, "n"), is as expected from the PD electronics D1201111, undoing the 105dB gain with the turning point around 20Hz; foton bode plot attached.
This means that both the AC and DC outputs should be the voltage out of the photodetector before the electronics, where the PD shunt resistance was measured to be 66 Ohms.
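A small sanity-check sketch of that filter's response is below, assuming foton's "n" plane corresponds to H(f) = k (1 + if/f_zero) / (1 + if/f_pole); with that assumption the high-frequency gain comes out at about -105dB, consistent with undoing the amplifier gain, but verify against the attached foton bode plot.

```python
# Sanity check of the de-whitening filter zpk([20], [0.01], 0.011322, "n"),
# under the assumption that foton's "n" plane means
# H(f) = k * (1 + i f/f_zero) / (1 + i f/f_pole).
import numpy as np

def dewhiten(f, f_zero=20.0, f_pole=0.01, k=0.011322):
    f = np.asarray(f, dtype=float)
    return k * (1 + 1j * f / f_zero) / (1 + 1j * f / f_pole)

print(20 * np.log10(np.abs(dewhiten(1e4))))   # ~ -105 dB well above the 20 Hz turn
print(20 * np.log10(np.abs(dewhiten(0.0))))   # ~ -39 dB at DC (just the gain k)
```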
Because we moved the OPO crystal position yesterday (8045), the crystal absorption will be changing for a few weeks, and we will need to adjust the temperature setting for the OPO until it settles, as Tony and I did last night (80455). Here are two sets of instructions: one that can be used while in observing to adjust this, and one that can be done when we lose lock or are out of observing for some other reason.
For these first few days, please follow the out of observing instructions when relocking, if this hasn't been done in the last few hours, and please try to do it in the last few hours of the evening shift so that the temperature is close to well tuned at the start of the owl shift.
Out of observing instructions (to be done while recovering from a lockloss, can be done while ISC_LOCK is locking the IFO) (screenshot) :
In observing instructions:
Both the OPO temperature and the SQZ phase are ignored in SDF and can be adjusted while we are in observing, but it's important to log the times of any adjustments made.
Changes made to make this easier (Thanks Corey for beta testing the instructions):
Update to the In observing instructions:
Both the OPO temperature and the SQZ phase are ignored in SDF, but it's important to go out of observing for the change and to log any adjustments made.
Since the squeezer guardians reported that it has been unlocking from the OPO LR node, I adjusted the OPO temperature hoping that it would help.
The log of that node is incredibly long, and it is very difficult to find what actually happened. From SQZ_OPO_LR: "2025-01-09_15:34:33.836024Z SQZ_OPO_LR [LOCKED_CLF_DUAL.run] USERMSG 0: Disabling pump iss after 10 lockloss couter. Going to through LOCKED_CLF_DUAL_NO_ISS to turn on again."
I also should have mentioned that there is calibration and commissioning planned for today starting at 1630UTC (0830 PT).
The OPO temperature adjustment did help our range, but it just dropped again. The SQZ team will be contacted.
The green SHG pump power needed to get the same 80uW from the OPO has increased from 14mW to 21mW since the OPO crystal swap on Tuesday. The CONTROLMON was at the bottom of its range trying to inject this level of light, hence the OPO locking issues (plot).
In the past we've gotten back to Observing by following 70050 and lowering the OPO trans set point. But as we haven't maxed out the amount of power in the SHG-to-OPO path yet, we tuned this up with H1:SYS-MOTION_C_PICO_I motor 2 instead. A temporary fix, if the OPO is having issues and H1:SQZ-OPO_ISS_CONTROLMON is too low, can still be to follow 70050.
It's surprising that the amount of pump light has increased this quickly, and quicker than after the last spot move (plot), but our OPO refl is still low compared to before, meaning that the new crystal spot still has lower losses than the last one. We might need to realign the SHG pump fiber to increase the amount of light we can inject.
We can work on improving Guardian log messages.