Around 12:12 PST Tuesday, coincidentally around the DAQ restart time, H1:PEM-CS_TEMP_LVEA10_DUSTMON_DEGF dropped from 68F to 25F. Tagging PEM.
Around 10:30 UTC, H1_MANAGER requested help because the Initial Alignment had timed out.
- By the time I got on, it had just managed to lock the green arms and had offloaded fine.
- However, it couldn't get XARM_IR to lock because the IMC kept unlocking and kicking MC2.
11:08 I took the detector to DOWN
Restored all optics to 1385057770 (last second of the initial alignment before the 27-hour lock from two days ago).
- I then tried restoring the X and Y arms to 1385290825 (from when ALSX and ALSY had just locked fine), but ALSY needed a lot of adjustment and still couldn't get high enough to catch, so I restored the arm optics back to the 1385057770 values.
- Had to touch up both arms in GREEN_ARMS_MANUAL.
13:03 Went back into INITIAL_ALIGNMENT
- Locked both arms quickly, but ALSY kept drifting down while waiting for WFS, unlocking the Y arm (attachment 1)
13:15 Thinking that the issue might be in the specific alignment of the Y arm, I set ETMY and TMSY to the values they had while in INITIAL_ALIGNMENT at GREEN_ARMS_OFFLOADED a few hours ago (1385289601)
- Y arm was bad again and would not have caught, so I adjusted it again.
14:26 Into MANUAL_INITIAL_ALIGNMENT
- Same as before, both arms locked quickly, but then ALSY started drifting down until it unlocked again.
So now we're having a different issue from the initial one. Referencing 49309, LASERDIODE{1,2}POWERMONITOR are both within range and tolerance (attachment 2); LASERDIODE2POWERMONITOR does look to be slowly drifting down, but only very slightly, and it shouldn't be the issue anyway since ALSY was locking nicely just a couple of hours ago.
Something partially unrelated is that the ALSY spot on the camera is definitely a lot further over/cut off than it usually is, although it's possible that it just looks that way because I'm seeing it closer up than usual. However, it is causing the flashes from Y to bleed over into the X spot, producing little jumps in ALS-C_TRX_A_LF_OUT.
Updates:
To fix the drifting YARM when locked, Jenne adjusted PR3 and fixed how the green arms look on the camera. We got that locked and offloaded, but the PR3 value is currently reverted so we could get IR X to catch; it will be moved back to its new location once ASC is on.
After fixing this, we were back to the initial issue I was called for: XARM IR not being able to lock due to the IMC continuously unlocking. At least this time MC2 is not constantly saturating. Jenne tried skipping XARM IR and locking YARM IR instead, but we had the same issue.
16:16 Jenne just got XARM_IR to lock by running the dark_offset script, and they're now restarting an initial alignment.
TITLE: 11/29 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Two locklosses this shift which prompted a reversion of the changed AS72 whitening gain from maintenance day. Since then, things have been quiet, but BNS range is lower tonight for some reason.
LOG:
No log for this shift.
FAMIS 26068, last checked in alog 74211
I had to use the cds-py39 conda environment to run the coefficients.py script, as usual.
Sheila, Naoki
We increased the AS72 A/B whitening gain from 12dB to 21dB to reduce the ADC noise. We accepted it in safe.snap as shown in the first attached figure. To compensate for the whitening gain increase, we decreased the SRC1 gain from 4 to 1.4 (4/1.4 ≈ 9dB; a quick check of this ratio is sketched after the list below) by changing the following guardian code:
line 3429, 3438 in ISC_LOCK guardian, ENGAGE_ASC_FOR_FULL_IFO state
line 1095, 1097 in ISC_DRMI guardian, ENGAGE_DRMI_ASC state
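As a quick sanity check on this compensation (just the dB arithmetic, assuming the whitening gain and the SRC1 loop gain trade off directly):

    # Sketch: the SRC1 gain reduction from 4 to 1.4 roughly cancels the
    # 9 dB whitening gain increase (12 dB -> 21 dB).
    import math
    ratio_db = 20 * math.log10(4 / 1.4)
    print(f"{ratio_db:.1f} dB")   # ~9.1 dB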
Then we found that the IFO lost lock twice after DRMI ASC was engaged. We found that the dark offset of AS72 was large, which caused too large a SRC1 error signal. So we ran the dark offset script at userapps/isc/h1/scripts/dark_offsets/dark_offsets_exe.py. After that, the IFO could go to NLN.
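For context, a dark offset measurement of this kind typically averages each sensor output with no light on it and writes the negative of that average into the offset field. The sketch below only illustrates that idea with pyepics and placeholder channel names; it is not the actual dark_offsets_exe.py:

    # Illustration only: placeholder channels, not the real dark_offsets_exe.py.
    import time
    from epics import caget, caput

    SEGMENTS = ['ASC-AS_A_RF72_I1', 'ASC-AS_A_RF72_Q1']   # hypothetical channel stems

    for seg in SEGMENTS:
        samples = []
        for _ in range(100):                        # ~1 s of samples with no light on the PD
            samples.append(caget(f'H1:{seg}_INMON'))
            time.sleep(0.01)
        dark = sum(samples) / len(samples)
        caput(f'H1:{seg}_OFFSET', -dark)            # null the measured dark level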
We accepted a bunch of SDFs both in safe.snap and observe.snap as shown in the attached figures.
Since the BNS range was worse and the lock duration was only 1.5 hours with this whitening gain change, we reverted the whitening gain from 21dB to 12dB. We also reverted the whitening filter from 2 stages to 1 stage, which was done in 74231. Then we ran the dark offset script again and accepted a bunch of SDFs in safe.snap as shown in the attached figures. We still need to accept them in observe.snap. We also reverted the SRC1 gain in the ISC_LOCK and ISC_DRMI guardians.
I accepted these same SDF diffs in the OBSERVE tables when we relocked. Screenshots attached.
Lockloss @ 04:25 UTC - no obvious cause, but this is about the same duration into the lock as the last lockloss. We suspect this has to do with changes to the AS72 A/B whitening and SRC1 gains made earlier today (alog 74457). Naoki is reverting these changes and we will run the dark offsets script again before relocking.
State of H1: Observing at 135Mpc
H1 has been locked and observing for 1.5 hours. Range has been lower for both lock stretches since maintenance day and the power recycling gain is quite noisy; it's gotten worse over the past 10 minutes and some ASC control signals follow it (mainly CSOFT_P and CHARD_Y).
Lockloss @ 01:39 UTC - no obvious cause, online lockloss analysis failed.
Looks like PRCL saw the first motion by a very small margin.
H1 back to observing as of 03:01 UTC.
TITLE: 11/29 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
H1 is relocking following maintenance day, currently up to TRANSITION_FROM_ETMX. The PRG trace is noisier than usual, but otherwise things look okay.
H1 is back to observing as of 00:22 UTC.
I had to deal with one SDF diff in the CS_ISC table for the H1:ALS-C_DIFF_PLL_VCOCOMP channel. Trending it back, it seems to have been turned OFF last Tuesday during maintenance (ISC_LOCK in 'IDLE') and back ON earlier this afternoon at 20:46 UTC, during the "Beckhoff work" section of Ibrahim's day shift log. Prior to being switched OFF last week, it had been ON for about 8.5 years, i.e. since before O1. So, I ACCEPTED this diff.
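For reference, this kind of long trend can be pulled with gwpy from the minute-trend frames; a minimal sketch (the time span here is only an example, and an NDS/datafind server must be reachable):

    # Sketch: trend H1:ALS-C_DIFF_PLL_VCOCOMP to find when the switch was toggled.
    from gwpy.timeseries import TimeSeries

    data = TimeSeries.get('H1:ALS-C_DIFF_PLL_VCOCOMP.mean,m-trend',
                          start='2023-11-15', end='2023-11-29', verbose=True)
    print(data.value.min(), data.value.max())   # a 1 -> 0 -> 1 step shows the OFF/ON times
    data.plot()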
TITLE: 11/28 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
LOG:
Start Time | System | Name | Location | Laser Haz | Task | Time End |
---|---|---|---|---|---|---|
16:04 | FAC | Kim and Karen | EX, EY | N | Technical cleaning | 17:23 |
16:07 | SUS | Randy, Chris, Mitchell | EY, EX | N | EX cleanroom sock install | 18:32 |
16:08 | VAC | Jordan | MY, EY | N | Turbo pump tests | 17:01 |
16:11 | FAC | Cindi | FCES | N | Technical cleaning | 17:14 |
16:15 | VAC | Gerardo and Jordan | FCES | N | Valve install | 17:15 |
16:48 | CDS | Fernando and Marc | LVEA | N | SQZ 4 Beckhoff modifications | 19:44 |
16:55 | SQZ/CDS | Fil | CER/SQZ Racks | N | Pulling cables | 19:51 |
17:13 | | Richard | LVEA | N | Electrical walkthrough/escorting people | 17:43 |
17:14 | FAC | Cindi | Mech room | N | Cardboard collection | 17:44 |
17:43 | VAC | Ken and Gerardo | FCTE | N | Valve install | 20:13 |
17:49 | VAC | Travis | EX | N | Turbo Station Cooling Lines Upgrade | 20:15 |
17:50 | VAC | Jordan | MY, EY | N | Turbo pump tests | 18:46 |
18:00 | FAC | Karen and Cindi | LVEA | N | Technical cleaning + High bay check | 19:29 |
18:10 | VAC | Norco | CP8 EX | N | LN2 Fill | 20:05 |
18:13 | FAC | Ken | LVEA | N | Electrical work | 20:07 |
18:19 | | Richard | M-Station/Wandering/FCES | N | Smoke detector check | 19:56 |
18:47 | CDS | Erik | | N | IOC Server Reboot (and temp change) | 18:57 |
18:51 | VAC | Jordan | FCTE | N | Valve install assistance | 19:51 |
19:02 | TCS | Camilla | LVEA | N | TCS Setup | 20:11 |
19:22 | FAC | Karen | Receiving | N | Bringing car out | 19:54 |
19:28 | FAC | Eric | CER/Sup | N | Investigating temperature excursion | 19:51 |
19:45 | CDS | Fernando | | N | Rebooting with modifications | 19:56 |
19:48 | FAC | Mitchell and Eric | CER | N | Checking CER disconnects | 20:15 |
20:04 | CDS | Jonathan | | N | DAQ Restart | 20:23 |
20:06 | VAC | Travis | EX | N | Sensor correction | 20:17 |
20:14 | VAC | Gerardo | FCTE | N | Valve Opening | 20:24 |
Arianna, Camilla
There are many glitches making up the "fuzzy" range time from 07:00 UTC on 23 November 2023. We identified glitches using ndscope of the DARM BLRMS (attached plot shows the yellow, green, and blue BLRMS getting worse at the t-cursor) and then used the ldvw.ligo.caltech.edu Q-transform tool to plot omega scans of the glitches. Original troubleshooting in alog 74377.
Attached is a pdf containing the glitches seen between 07:00 UTC and the lockloss at 10:10 UTC. There are many more glitches than usual, and of many different types.
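A minimal gwpy sketch of the same workflow (band-limited RMS to pick out loud times, then a Q-transform around one of them); the channel choice, band, and times below are placeholders:

    # Sketch of the glitch-hunting workflow; channel, band, and times are examples only.
    from gwpy.timeseries import TimeSeries

    darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', '2023-11-23 07:00', '2023-11-23 07:30')

    # Band-limited RMS, similar in spirit to the DARM BLRMS viewed in ndscope
    blrms = darm.bandpass(30, 100).rms(stride=1)
    t_glitch = blrms.times[blrms.value.argmax()].value   # loudest second in this band

    # Omega/Q scan around that time, as done via ldvw.ligo.caltech.edu
    qspec = darm.crop(t_glitch - 8, t_glitch + 8).q_transform(
        outseg=(t_glitch - 1, t_glitch + 1))
    qspec.plot()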
WP11540 Remove SDF safe->OBSERVE exceptions for h1seiproc and h1brs
TJ, Jim:
TJ made changes to the guardian to remove the exception that h1seiproc and h1brs do not transition to OBSERVE.snap. For h1brs, OBSERVE == safe; h1seiproc has a separate OBSERVE.snap, which Jim will verify is correct.
WP11546 New FMCS STAT code base
Erik:
Erik rewrote the FMCS STAT IOC to use a new softIOC python module. The code was restarted.
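For anyone unfamiliar with python soft IOCs, here is a minimal sketch in the style of the pythonSoftIOC ('softioc') package; whether that is the exact module used here is an assumption, and the PV names are placeholders rather than the real FMCS STAT channels:

    # Minimal soft IOC sketch; device/PV names are placeholders.
    from softioc import softioc, builder
    import cothread

    builder.SetDeviceName('H0:FMC-STAT')
    status = builder.boolIn('OK', ZNAM='BAD', ONAM='GOOD', initial_value=1)

    builder.LoadDatabase()
    softioc.iocInit()

    def update():
        while True:
            status.set(1)        # would be replaced by a real FMCS health check
            cothread.Sleep(10)

    cothread.Spawn(update)
    softioc.interactive_ioc(globals())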
New Guardian node
Camilla, Dave:
Camilla started a new Guardian node called SQZ_ANG_ADJUST. I updated the H1EPICS_GRD.ini file. A DAQ+EDC restart was required.
DAQ Restart:
Jonathan, Erik, Dave:
The DAQ was restarted to include the new GRD channels into the EDC. This was a good restart with no issues.
cdsioc0 reboot
Erik, Dave:
Erik updated and rebooted cdsioc0. There were some issues getting Picket Fence running again, which Erik resolved. We were also reminded that I was running a temporary HWS ETMY IOC in a tmux session; Erik switched this over to a systemd service via puppet.
Tue28Nov2023
LOC TIME HOSTNAME MODEL/REBOOT
12:12:04 h1daqdc0 [DAQ] <<< 0-leg restart
12:12:13 h1daqfw0 [DAQ]
12:12:13 h1daqtw0 [DAQ]
12:12:14 h1daqnds0 [DAQ]
12:12:22 h1daqgds0 [DAQ]
12:12:35 h1susauxb123 h1edc[DAQ] <<< EDC restart for GRD
12:20:44 h1daqdc1 [DAQ] <<< 1-leg restart
12:20:56 h1daqfw1 [DAQ]
12:20:57 h1daqtw1 [DAQ]
12:20:58 h1daqnds1 [DAQ]
12:21:07 h1daqgds1 [DAQ]
Jordan
We ran the functionality test on the main turbo pumps at MY and EY during Tuesday maintenance (11/28/23). The scroll pump is started to take the pressure down to the low 10^-2 Torr range, at which point the turbo pump is started; the system reaches the low 10^-8 Torr range after a few minutes. The turbo pump system is then left ON for about 1 hour, after which it goes through a shutdown sequence.
MY Turbo:
Bearing Life:100%
Turbo Hours: 208
Scroll Pump Hours: 74
EY Turbo:
The scroll pump made a grinding sound after getting to ~5E-2 Torr. I closed all valves and stopped the test. The scroll pump only has 200 hours on it, so it will be disassembled to figure out the source of the noise. I have swapped in a new ISP250 scroll pump but did not have time to run the turbo test. I will resume next Tuesday and add a comment to this alog with the EY results.
Closing WP 11544 and FAMIS 24917
After swapping the scroll pump, I ran the functionality test on the EY main turbopump during Tuesday maintenance, no issues were encountered during this test.
Turbo Hours: 1275
Scroll Pump Hours: 72
Bearing life: 100%
Closing WP 11553 and FAMIS 24941
WP 11533
Checked the HAM4 ISI Coil Driver Chassis. A noisy fan was reported last week; the fan is spinning and the noise has not returned. Will leave the WP open another week.
Second week of monitoring fan. No issues, closing work permit.