TITLE: 09/26 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
17:10 CP1 fill started
Lock#1:
19:51 UTC - I couldn't get PRMI or DRMI to lock, so I started an initial alignment (IA)
Back to NLN at 21:10 UTC, in Observing at 21:37 (it took 27 minutes for ADS to converge so the camera servo could turn on). Fully automated relock.
I noticed the LVEA temperatures were increasing this afternoon and texted Bubba about it; we should keep an eye on it.
LOG:
Start Time | System | Name | Location | Laser Haz | Task | Time End |
---|---|---|---|---|---|---|
15:04 | VAC | Jordan | MidY & EY | N | Turbopump test | 16:17 |
15:06 | FAC | Randy | LVEA | N | Forklifting | 16:35 |
15:06 | FAC | Kim | EndX | N | Tech clean | 16:41 |
15:06 | FAC | Karen | EndY | Y | Tech clean | 16:44 |
15:07 | CAL | Tony, Rick | EndY | Y | PCAL measurement | 17:00 |
15:07 | FAC | Cindi | FCES | N | Tech clean | 15:52 |
15:08 | FAC | Tyler | Outside, Beamtube | N | Move big forklift | 16:27 |
15:18 | EE | Ken | FCES, beamtube | N | Signs, lights... | 18:41 |
15:37 | VAC | Rogers (contractor) | Mech room | N | Kobelco compressor | 19:00 |
15:38 | CDS | Dave, Fil, Rahul | Remote/Mezz | N | SUSH7, power supply replacement | 16:29 |
15:53 | FAC | Cindi | FCES tube | N | Tech clean | 16:35 |
15:54 | PSL | RyanS | Remote | N | Adjust PSL ISS diff power | 16:00 |
16:17 | VAC | Jordan, Travis | LVEA | N | Vent Hepta | 16:46 |
16:24 | SEI | Jim | Office | N | ITMY measurements | 18:14 |
16:41 | FAC | Kim | LVEA | Y | Tech clean | 18:36 |
16:43 | EE | Fil | LVEA | Y | Control power down, dust&humidity | 18:45 |
16:46 | VAC | Jordan | MidY, EndY | N | Turn off turbopumps | 17:16 |
16:49 | FAC | Bubba | FCES | N | Checks | 16:51 |
16:55 | FAC | Karen | LVEA | Y | Tech clean | 18:16 |
17:02 | SUS | Austin, Ibrahim | LVEA | Y | ITMX OPLEV checks, centering | 19:21 |
17:06 | FAC | Cindi | High bay | N | Tech clean | 17:41 |
17:29 | | LVEA IS LASER HAZARD | LVEA | Y | LVEA IS LASER HAZARD | 19:29 |
18:00 | VAC | Jordan | MidY, EndY | N | Turbo turn off | 18:14 |
18:21 | SQZ | Camilla, Naoki | LVEA | Y | SQZT0 table checks | 19:42 |
19:48 | | Camilla | LVEA | N | Sweep | 20:00 |
20:31 | EE | Ken | Mech room | N | Check heaters | 22:16 |
21:03 | FAC | Tyler | Ends, X then Y | N | Chiller yard checks | 21:23 |
19:30 | CAL | Tony | PCAL lab | LOCAL | PCAL work | Ongoing |
Closes 25958
Script reports ITMX_ST2_CPSINF_H1/3 are elevated.
BS and ETMY ST2_CPSINF_H1/2 and H3 channels also look a bit elevated.
I. Abouelfettouh, A. Jennings
Today during Tuesday maintenance, we decided to look into the oplev PD sum counts, which have been dropping steadily since Jason and I upgraded the laser with an armored fiber and cooling enclosure two weeks ago (alog here). We took the LVEA to LASER HAZARD, then proceeded to turn off the oplev laser in its enclosure. When doing this, I immediately noticed warm air coming out of the enclosure. The ice packs and the laser box itself were both warm to the touch. This is a sign that the laser itself is starting to go, which makes sense since this is the oldest oplev laser we have in the detector. In addition, if the laser was indeed dying, the problem would have been further exacerbated when it was moved into a closed enclosure during the upgrade, trapping the heat and degrading the laser at an even faster rate.
Once we turned off the laser and verified there was no feedback on the oplev overview, we opened the laser housing to check whether the fiber optic cable had been pinched or frayed in some way we didn't notice when Jason and I made the initial upgrade. These photos (1 / 2 / 3) are exactly what we saw when we first opened the housing. The optical fiber looks to be in the same position, and we found no evidence of any fraying, pinching, or degradation anywhere on the fiber. We made sure that the zip tie we used to secure the fiber to the base plate was snug but not too tight, and confirmed this by being able to feed the fiber both in and out of the zip tie with relative ease.
After verifying that the optical fiber checked out, we put the housing back on, which was a struggle. After moving and readjusting the housing, we saw that some of the screws used to secure it (looks like 0.25" x 5/32) appeared to be stripped, which made them hard to put back in. More importantly, it seemed that some of the threads in the screw holes on the base plate might be stripped as well. It was hard to get a picture of this, but I have attached some photos here - 1 / 2. Even when we tried replacing the screws with fresh ones, they still would not thread into the holes (we verified that the issue was NOT the housing itself, as it was lifted clear when we tried this). After a lot of fiddling, we were able to get the majority of the screws in, torqued, and secured with relative ease. However, there was one screw we could only get midway in (attached), and one hole we could not get a screw into at all. With 12/14 screws set in place, one moderately secured, and the housing double-checked as secure, we felt comfortable leaving the housing in that state, but this should be fixed on another Tuesday (maybe use some helicoils to repair the screw holes?).
With the housing back on, we verified that the sum counts were back on the oplev overview. During this process it appears we slightly moved the alignment of the laser, with it now reading -15.5 P / -5 Y when it was near 0 before we started. We went back out to grab the oplev controller and were able to get the alignment to below 0.2 for both P and Y. We were also somehow able to get the oplev counts higher, ending up at ~4600 counts (up from 3600 before we started). Once we were satisfied with the alignment, we packed up the controller, verified the housing was secure a third time, and called it a day.
All this to say that we can confirm the fiber optic cable is intact and not the cause of the dropping sum counts. Based on everything we found today, I believe it is the oplev laser itself that is dying, and has been for some time, and that this was greatly exacerbated by putting it into an enclosure when Jason and I did the upgrade. After talking with Fil, he thinks it could also be that the 5/10 V power block feeding the cooling enclosure is faulty, which would also explain the dropping sum counts. However, he mentioned that if that were the case we should have seen the counts drop in a "staircase"-like pattern, whereas the sum counts from these past two weeks seem to have followed a more linear degradation. Next steps would most likely be to remove the current laser and swap it, and if the problem persists, look into the power block next.
This concludes the work in WP11445.
We ran the functionality test on the main turbopumps in MY and EY during Tuesday maintenance (9/26/23). The scroll pump is started to take the pressure down to the low 10^-2 Torr range, at which point the turbopump is started. The system reaches the low 10^-8 Torr range after a few minutes, then the turbopump system is left ON for about 1 hour, after which it goes through a shutdown sequence.
No issues were encountered while performing the functionality test on these 2 stations.
MY Turbo:
Bearing Life: 100%
Turbo Hours: 204
Scroll Pump Hours: 11393 - Needs tip seal replacement
EY Turbo:
Bearing Life: 100%
Turbo Hours: 1272
Scroll Pump Hours: 204
WP11438 Replace HAM7 IO Chassis +24V DC power supply
Fil, Rahul, Ryan, Dave:
This morning Fil replaced the Kepco dual-channel DC power supply which powers the h1sush7 and h1seih7 IO Chassis. The channel which powers h1sush7's chassis has been tripping at a rate of roughly once per year.
The procedure was:
Put HAM7 SUS and SEI into a safe state.
On h1[sus,sei]h7: stop the models, fence from Dolphin fabric, power computers down.
On the mezzanine, power down the IO Chassis, replace the power supply, power everything back up.
In the MSR, power h1[sus,sei]h7 back up. Verify all the IO Chassis cards can be seen (they can)
After the models had restarted, untrip the HAM7 SWWD. Clear IPC and CRC errors. Recover HAM7 to an operational state.
WP11443 Reroute LVEA Comtrol wiring
Fil, Patrick, Dave:
Fil powered down the Comtrol (ethernet to serial converter) which is used to read the LVEA Dust Monitors and the 2IFO dew-point sensors.
The system was down between 09:40 and 09:54.
I restarted the LVEA dust monitor IOC; it had lost connection with its dust monitors.
The 3IFO-DEWPOINT IOC did not need a restart.
No DAQ Restart Today
Tue26Sep2023
LOC TIME HOSTNAME MODEL/REBOOT
09:26:10 h1seih7 ***REBOOT***
09:26:19 h1sush7 ***REBOOT***
09:27:39 h1seih7 h1iopseih7
09:27:52 h1seih7 h1isiham7
09:28:06 h1sush7 h1iopsush7
09:28:19 h1sush7 h1susfc1
09:28:32 h1sush7 h1sussqzin
09:28:45 h1sush7 h1susauxh7
Camilla, Naoki
Since it was difficult to lock the pump ISS with 80uW OPO trans, we aligned the pump AOM and fiber following alog72081.
First we checked the AOM throughput with 0V ISS drivepoint. The AOM throughput was only 24.6 mW/34.7 mW = 71%. After we aligned the AOM, the AOM throughput is 34.2 mW/38 mW = 90%.
Then we set the ISS drivepoint at 5V and aligned the AOM by maximizing the +1st order beam, which is on the left side of the 0th order beam looking from the laser side. After the alignment, the 1st order beam is 11 mW and the 0th order beam is 23.4 mW.
After fiber alignment, we could lock ISS with 80uW OPO trans and the ISS control monitor is 4.8. The SHG output power is 47.7 mW and the pump going to fiber is 22.1 mW.
LVEA Swept following T1500386. Lights and WAP off.
There was a high-pitched whining and bumping noise coming from the wall vent to the mechanical room; I have a recording. I'll investigate and notify the Facilities team.
After speaking with Bubba and Ryan, the noise is from the purge air compressor (the big blue compressor in the Mech room), which Gerardo is leaving on until 1pm since it was worked on today.
The wind has been picking up over the past hour; maintenance is wrapping up and we are starting to relock.
NLN reacquired at 21:10 UTC
Sheila, Vicky - Summarizing SQZ losses from the continuously updated sqz wiki, and loss gsheet.
Total SQZ Losses inferred from Generated SQZ and Measured Anti-SQZ / SQZ: ~30-35%.
With total budgeted optical losses of 20%, this implies ~10-15% unexplained losses. Are these excess losses IFO- or SQZ- side?
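For reference, the loss inference from measured SQZ / anti-SQZ can be written in closed form under the usual simplifying assumptions (a single lumped efficiency eta, no phase noise or technical noise). A minimal sketch with illustrative numbers, not the actual measurement values:

```python
import numpy as np

def total_efficiency(sqz_db, antisqz_db):
    """Infer total efficiency eta from measured SQZ / anti-SQZ levels (dB
    relative to shot noise), assuming a single lumped loss and no phase noise:
        S = 1 - eta + eta*exp(-2r),   A = 1 - eta + eta*exp(+2r)
    Eliminating r gives:  1 - eta = (S*A - 1) / (S + A - 2)
    """
    S = 10 ** (sqz_db / 10.0)       # measured squeezing, linear power ratio
    A = 10 ** (antisqz_db / 10.0)   # measured anti-squeezing, linear power ratio
    loss = (S * A - 1.0) / (S + A - 2.0)
    return 1.0 - loss

# Illustrative numbers only (not the H1 measurement):
eta = total_efficiency(sqz_db=-4.5, antisqz_db=9.0)
print(f"inferred efficiency ~{eta:.2f}, i.e. total losses ~{1 - eta:.0%}")
```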
Summary of the 20% budgeted optical losses, excluding mode-mismatch:
--------------------------------------------------------------------------------------
Other non-loss mechanisms that reduce observed squeezing:
- technical noise -- From the noise budget in LHO:72717, laser frequency noise looks about -14 dB (=20*log10(0.7/3.5)) below unsqueezed shot noise at our best around ~1 kHz. Correlated noise estimate suggests even higher technical noise at kHz, see Sheila's log June 2023, LHO:70978 for plots with 60W and hot OM2, and Craig's correlated noise budget from July 2023, LHO:71333.
- phase noise -- From homodyne measurements LHO:67223, we expect phase noise is less than ~20 mrad. In DARM, we haven't seen sqz turn-arounds at high NLGs; NLG sweeps on DARM can be used to check this phase noise estimate.
- mode-matching -- between SQZ-IFO and SQZ-OMC. From SQZ-OMC mode scans (e.g. Jan 2023, LHO:66946) and attempts to improve ADF-OMC transmission (most recently July 2023, LHO:71270), it seems we have railed PSAMS before fully optimizing SQZ-OMC mode-matching. Although recent Aug. 2023 sqz measurements (72565) looked quite flat (maybe because we are loss-limited, or too much mismatch?), we do expect mode-mismatch with OM2 hot.
For AS port loss budgeting and estimating OMC mode-mismatch, it may be worth trying another sqz single-bounce OMC scan with hot OM2, comparing e.g. SQZT7 OPO-IR transmitted seed power with output PD's such as OMC_REFL, and playing with PSAMS.
--------------------------------------------------------------------------------------
To get to 4.5 dB sqz, we need to reduce technical noise by about 5-7 dB (aim for laser frequency noise 10-fold lower than shot noise in ASD, i.e. ~20 dB below it in power, versus the ~14 dB estimated above). We may also need to reduce losses by a few %, but we have seen almost 4.5 dB sqz at various IFO powers over the past year (e.g. 50W, 60W), so we suspect losses limit sqz at this 4.5 dB level.
To get to 6 dB sqz, we need to reduce losses by 10-15% and technical noise by 5-7 dB (as above). To find losses, we can use e.g. AS port measurements to constrain IFO-path readout losses, and in-air homodyne measurements to constrain SQZ-path injection losses from HAM7. Analysis of the quantum noise should also help us identify mode-matching losses as they become prominent.
Tue Sep 26 10:12:46 2023 INFO: Fill completed in 12min 41secs
Gerardo confirmed a good fill curbside.
Vicky noticed the HOM peaks had shifted this week compared to last week, see attached plot and 72943. The lower HOM peak (Y-arm?) is lower than usual, even comparing the same amount of time in lock. The squeezing has also been better since last week, plot attached, not including last night's lock 73100 with lowered opo_grTrans_setpoint_uW.
Things that have changed since last week: OM2 heater was turned back on 72967, CO2X laser power increased 73003, alignment changed either in IFO or SQZ 73105, adjusted SHG_SERVO_IN1GAIN 72978 (don't expect this to affect anything).
WP11438 HAM7 +24V IO Chassis Power Supply Swap
h1sush7 and h1seih7 are powered down, Fil is heading out to the mech room mezzanine to swap the power supply.
Fil and I went to the MER and replaced the DC power supply; the serial numbers are given below:
New s/n - S2001609
OLD s/n - S2001613
After replacing the power supply, Dave restarted the models, following which I restored the ISI and suspensions to their nominal state.
WP11438 is now closed.
Power supply replacement is complete. IO Chassis and front end computers are back online. All cards can be seen and the models are running correctly.
I untripped HAM7 SWWD and handed it over to the control room for recovery.
Broken Kepco power supply (removed) | S2001613 |
New Kepco power supply (installed) | S2001609 |
Lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1379531482
19:02 UTC PI24 starts to ring up.
I let the PI Guardian try to sort it out at first. When I noticed it was cycling too quickly through the PI damping settings, I tried fighting it a little. Eventually I turned off the PI Guardian and tried by hand. Unfortunately my first guess at what would damp PI24 actually made it worse, and when I finally found the setting that was turning it around, the IFO unlocked.
I struggled to find the right settings to damp it.
19:11 UTC, a few seconds after I found the right setting to bring PI24 down, a lockloss happened.
Naoki checked that PI24 is still at the correct frequency; his plot is attached.
We checked that the SUS_PI guardian was doing what was expected; it was. It stopped at 40 deg PHASE, which seemed to be damping, but as the RMSMON started rising it continued to cycle, see attached. The issue appears to be that the guardian couldn't find the exact frequency to damp the mode quicker than it was ringing up.
Naoki and I have edited SUS_PI to take steps of 45 degrees rather than 60 degrees to fix this. We've left the timer at 10 seconds between steps, but this may need to be reduced later. SUS_PI needs to be reloaded when next out of observe; tagging OpsInfo.
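For reference, a minimal sketch of the phase-stepping logic described above. This is NOT the actual SUS_PI guardian code; the channel names, thresholds, and the use of pyepics here are illustrative assumptions only:

```python
import time
import epics  # pyepics, assumed available

RMSMON = 'H1:SUS-PI_MODE24_RMSMON'     # hypothetical channel name
PHASE  = 'H1:SUS-PI_MODE24_PLL_PHASE'  # hypothetical channel name
STEP_DEG = 45   # was 60 deg; smaller steps so the damping phase isn't skipped over
WAIT_S   = 10   # dwell time between steps; may need to be shortened later

def step_phase_until_damping(max_steps=8):
    """Advance the damping phase in fixed steps until the mode RMS starts falling."""
    rms_prev = epics.caget(RMSMON)
    for _ in range(max_steps):
        time.sleep(WAIT_S)
        rms_now = epics.caget(RMSMON)
        if rms_now < rms_prev:
            return True                 # mode is ringing down; keep this phase
        # still ringing up: advance the damping phase by one step
        phase = (epics.caget(PHASE) + STEP_DEG) % 360
        epics.caput(PHASE, phase)
        rms_prev = rms_now
    return False
```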
Right after lockloss DIAG_MAIN was showing:
- PSL_ISS: Diffracted power is low
- OPLEV_SUMS: ETMX sums low
05:41 UTC LOCKING_ARMS_GREEN: the detector couldn't see ALSY at all, and I noticed ETM/ITM L2 saturations (attachment 1 - L2 ITMX MASTER_OUT as an example) on the saturation monitor (attachment 2) and the ETMY oplev moving around wildly (attachment 3).
05:54 I took the detector to DOWN and immediately could see ALSY on the cameras and ndscope; the L2 saturations were all still there.
05:56 and 06:05 I went to LOCKING_ARMS_GREEN again and GREEN_ARMS_MANUAL, respectively, but wasn't able to lock anything. I was able to go to LOCKING_ARMS_GREEN and GREEN_ARMS_MANUAL without the saturations getting too bad, but both ALSX and Y eventually locked for a few seconds each, then unlocked and went to basically 0 on the cameras and ndscope (attachment 4). Sometime after this, ALSX and Y went to FAULT and were giving the messages "PDH" and "ReflPD A".
06:07 Tried going to INITIAL_ALIGNMENT but Verbal kept calling out all of the optics as saturating, ALIGN_IFO wouldn't move past SET_SUS_FOR_ALS_FPMI, and the oplev overview screen showed ETMY, ITMX, and ETMX moving all over the place (I was worried something would go really wrong if I left it in that configuration)
I tried waiting a bit and then tried INITIAL_ALIGNMENT and LOCKING_ARMS_GREEN again but had the same results.
Naoki came on and thought the saturations might be due to the camera servos, so he cleared the camera servo history and all of the saturations disappeared (attachment5) and an INITIAL_ALIGNMENT is now running fine.
Not sure what caused the saturations in the camera servos, but it'll be worth looking into whether the saturations are what ended up causing this lockloss.
As shown in the attached figure, the output of the camera servo for PIT suddenly became 1e20 and the lockloss happened 0.3s later. So this lockloss is likely due to the camera servo. I checked the input signal of the camera servo. The bottom left is the input of PIT1 and should be the same as the BS camera output (bottom right). The BS camera output itself looks OK, but the input of PIT1 became red dashed and the output became 1e20. The situation is the same for PIT2 and PIT3. I am not sure why this happened.
Opened FRS29216. It looks like a transient NaN on the ETMY_Y camera centroid channel caused a latching of 1e20 on the ASC CAM_PIT filter module outputs.
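For context on why a single bad sample latches: one NaN entering the recursive state of an IIR filter makes every subsequent output NaN until the history is cleared, which is presumably how the 1e20 latching arises and why clearing the camera servo history recovered things. A minimal numpy illustration (not the CDS filter code):

```python
import numpy as np

x = np.ones(10)
x[3] = np.nan          # one transient NaN sample on the input
y = np.zeros_like(x)
for n in range(1, len(x)):
    # simple first-order low-pass as a stand-in for a CAM_PIT filter module
    y[n] = 0.1 * x[n] + 0.9 * y[n - 1]
print(y)   # every sample from index 3 onward is NaN: the recursive state is poisoned
```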
Camilla, Oli
We trended the ASC-CAM_PIT{1,2,3}_INMON/OUTPUT channels alongside ASC-AS_A_DC_NSUM_OUT_DQ (attachment 1) and found that, according to ASC-AS_A_DC_NSUM_OUT_DQ, the lockloss started (cursor 1) ~0.37s before PIT{1,2,3}_INMON read NaN (cursor 2), so the lockloss was not caused by the camera servos.
It is possible that the NaN values are linked to the light dropping off the PIT2 and PIT3 cameras right after the lockloss: when the cameras come back online ~0.2s later, both the PIT2 and PIT3 cameras are at 0, and looking back over several locklosses these two cameras tend to drop to 0 between 0.35 and 0.55s after the lockloss starts. However, the PIT1 camera is still registering light for another 0.8s after coming back online (the typical time for this camera).
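For anyone repeating this kind of check, a sketch of how one might trend the channels and compare the first-NaN time against the AS_A power drop (assuming gwpy/NDS2 access; the GPS window and threshold below are placeholders, and whether the NaNs survive in the archived data depends on the channel):

```python
import numpy as np
from gwpy.timeseries import TimeSeries

# Placeholder GPS window around a lockloss; not the actual event time.
t0, t1 = 1379531470, 1379531490

asum = TimeSeries.get('H1:ASC-AS_A_DC_NSUM_OUT_DQ', t0, t1)
cam  = TimeSeries.get('H1:ASC-CAM_PIT1_INMON', t0, t1)

# First time the AS port power drops below an illustrative threshold,
# vs. the first NaN sample on the camera servo input.
t_lockloss = asum.times.value[np.argmax(asum.value < 0.1 * asum.value[0])]
nan_idx = np.flatnonzero(np.isnan(cam.value))
t_nan = cam.times.value[nan_idx[0]] if nan_idx.size else None
if t_nan is not None:
    print(f"first NaN arrives {t_nan - t_lockloss:+.2f} s relative to power drop")
else:
    print("no NaN found in window")
```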
Patrick updated the camera server to solve the issue in alog73228.
Erik, Camilla
This was already installed on ETMY; the code is described by Huy-Tuong Cao in 72229. Today Erik installed conda on h1hwsex and we got the new code running. I had to take a new ETMX reference while the IFO was hot (old py2 pickle files have a different format/encoding than py3), so next Tuesday I should re-take this reference with a cold IFO. This update can be done to the ITMs tomorrow.
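On the py2/py3 pickle incompatibility: this is the usual bytes/str encoding issue, and re-taking the references sidesteps it. For reference, a generic sketch (not the actual HWS reference-loading code; the filename is a placeholder) of how a python2-era pickle can usually be read under python3:

```python
import pickle

# Python2-written pickles containing str/numpy data generally need an explicit
# encoding to load under python3; 'latin1' preserves the raw byte values.
with open('old_reference.pkl', 'rb') as f:   # placeholder filename
    try:
        ref = pickle.load(f)                 # works if written by python3
    except UnicodeDecodeError:
        f.seek(0)
        ref = pickle.load(f, encoding='latin1')  # python2-written pickle
```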
To install conda or add it to the path, Erik ran command: '/ligo/cds/lho/h1/anaconda/anaconda2/bin/conda init bash'
Then, after closing/reopening a terminal, I could pull the hws-server/-/tree/fix/python3 code:
Stop and restart the code after running 'conda activate hws' to use the correct python paths.
Had a few errors that needed to be fixed before the code ran successfully:
New code now running on all optics. Took new references on ITMs and ETMX.
New ETMX reference taken at 16:21 UTC with ALS shuttered, after the IFO had been down for 90 minutes. Installed at ITMX, new reference taken at 16:18 UTC. Installed at ITMY, new reference taken at 16:15 UTC. I realized that where I had previously edited code in /home/controls/hws/HWS/, I should have just pulled the new fix/python3 HWS code, so I went back and did this for the ITMs and ETMX.
ITMX SLED power has decayed quickly, see attached, so I adjusted the frame rate from 5Hz to 1Hz, instructions in the TCS wiki page. This SLED was recently replaced 71476 and was a 2021 SLED, so not particularly old. LLO has seen similar issues 66713 but over hours rather than weeks. We need to decide what to do about this; we could try to touch the trimpoint or contact the company. The ITMY SLED is fine.
There were two separate errors on ITMY that stopped the code in the first few minutes, a "segmentation fault" and a "fatal IO error 25". We should watch that this code continues to run without issues.
ITMY code keeps stopping and has been stopped for the last few hours. I cannot access the computer; maybe it has crashed? There is an orange light on h1hwsmsr.
ITMX spherical power is very noisy. This appears to be because the new ITMX reference has a lot of dead pixels; I've noted them but haven't yet added them to the dead pixels files as I cannot edit the read-only file - TCS wiki link.
ITMY - TJ and I restarted h1hwsmrs1 in the MSR and I restarted the code. We were getting a regular "out of bounds" error, attached, but the data now seems to be running fine. The fix/python3 code didn't have all the master commits in it, so when we restarted mrs1 some of the channels were not restarted, as found by Dave in 73013. I've updated the fix/python3 code; we should pull this commit and kill/restart the IOC and HWS ITMY code tomorrow.
ITMX - the bad pixels were added and the data is now much cleaner; updated instructions in the TCS wiki.
Pulled the new python3 code with all channels to both ITM and ETMX computers (already on ETMY). Killed and restarted the softIoc on h1hwsmsr1 for ITMY following instructions in 65966. The 73013 channels are now running again.
Took new references on all optics at 20:20UTC after the IFO had been unlocked and CO2 lasers off for 5 hours. RH settings nominal IX 0.4W, IY 0.0W, EX 1.0W, EY 1.0W.
Sheila, Naoki, Vicky
Last week we aligned the pump AOM and fiber in alog71875, but the alignment procedure was not correct. Today we realigned them with the correct procedure.
Procedure for pump AOM and fiber alignment:
1) Set the ISS drivepoint at 0V so that only the 0th order beam is produced, and check with a power meter that the AOM throughput is ~90%. The measured AOM throughput was 33.8 mW/36 mW = 94%.
2) Set the ISS drivepoint at 5V and align the AOM to maximize the 1st order beam. After the AOM alignment, the 1st order beam was 11 mW and the 0th order beam was 23 mW. We measured the AOM throughput again, including both the 0th and 1st order beams; it was 36 mW/38 mW = 95%.
3) Set the ISS drivepoint at 5V and align the fiber by maximizing H1:SQZ-OPO_REFL_DC_POWER.
4) Adjust the OPO temperature by maximizing H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT.
After this alignment, the SHG output is 42.7 mW, the pump going to the fiber is 20.5 mW, and the rejected power is 2.7 mW. The ISS can be locked with OPO trans of 80 while the ISS control monitor is 4.2, which is in the stable region.
Regarding 2), we maximized the +1st order beam, which is on the left side of the 0th order beam looking from the laser side of SQZT0.