WP12186
Richard, Fil, Erik, Dave:
We performed a complete power cycle of h1psl0. Note that this machine is not on the Dolphin fabric, so no fencing was needed. The procedure was as follows:
The system was power cycled at 10:11 PDT. When the IOP model started, it reported a timing error. The duotone signal (ADC0_31) was a flat line at about 8000 counts with a few counts of noise.
Erik thought the timing card had not powered up correctly, so we did a second round of power cycles at 10:30 and this time the duotone was correct.
NOTE: the second ADC failed its AUTOCAL on both restarts. This is the PSL FSS ADC.
If we continue to have FSS issues, the next step is to replace the h1pslfss model's ADC and 16bit DAC cards.
[ 45.517590] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 0 : Took 181 ms : ADC AUTOCAL PASS
[ 45.705599] h1ioppsl0: ERROR - GSC_16AI64SSA : devNum 1 : Took 181 ms : ADC AUTOCAL FAIL
[ 45.889643] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 2 : Took 181 ms : ADC AUTOCAL PASS
[ 46.076046] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 3 : Took 181 ms : ADC AUTOCAL PASS
C. Compton, R. Short
I've updated the PSL_FSS Guardian so that it won't jump into the FSS_OSCILLATING state whenever there's just a quick glitch in the FSS_FAST_MON_OUTPUT channel. Whenever the FSS fastmon channel goes over its threshold of 0.4, the FSS_OSCILLATION channel jumps to 1, and the PSL_FSS Guardian responds by dropping the common and fast gains to -10 dB and then stepping them back up to their previous values. This is meant to catch a servo oscillation in the FSS, but it has also been triggering when the FSS glitches. Since moving the gains around makes it difficult for the IMC to lock, the Guardian should now only do this when there's an actual oscillation, which should save time relocking the IMC.
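For illustration only, here is a minimal sketch of the kind of discrimination described above, i.e. requiring the fastmon signal to stay above the 0.4 threshold for a sustained period before declaring an oscillation. The function name, the dwell-time value, and the array-based interface are assumptions; the actual PSL_FSS Guardian code is the authoritative implementation.

```python
import numpy as np

# Illustrative only: the 0.4 threshold comes from the alog above, but the
# dwell time and interface are assumed, not taken from the real Guardian code.
FASTMON_THRESHOLD = 0.4   # threshold on the FSS fastmon channel (from above)
MIN_OSC_DURATION = 0.5    # seconds the signal must stay high (assumed value)

def is_real_oscillation(fastmon, sample_rate):
    """Return True only if the fastmon signal stays above threshold long
    enough to look like a servo oscillation rather than a single glitch."""
    above = np.abs(np.asarray(fastmon)) > FASTMON_THRESHOLD
    longest = run = 0
    for flag in above:
        run = run + 1 if flag else 0
        longest = max(longest, run)
    return longest / sample_rate >= MIN_OSC_DURATION
```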
FAMIS 31058
Trends are all over the place in the last 10 days due to several incursions, but I mostly tried to focus on how things have looked since we brought the PSL back up fully last Tuesday (alog 80929). Generally things have been fairly stable, and at least for now, I don't see the PMC reflected power slowly increasing anymore.
TITLE: 11/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 28mph Gusts, 19mph 3min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.56 μm/s
QUICK SUMMARY: H1 has been sitting in PREP_FOR_LOCKING since 08:05 UTC. Since the secondary microseism has been slowly decreasing, I'll try locking H1 and we'll see how it goes.
TITLE: 11/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism, LOCK_ACQUISITION
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
We spent most of the day with the secondary microseism mostly above the 90th percentile; it started to decrease ~8 hours ago, but about half of it still sits above the 90th percentile. The elevated microseism today comes from both the storms near Greenland and the Aleutians, as seen by the high phase difference between each arm and the corner station (bottom plot).
We keep losing lock at the beginning of LOWNOISE_ESD_ETMX. I see some ASC ringups at the top of NUC29 (CHARD_P, I think?), no FSS glitches before the locklosses, and no tags on the lockloss tool besides WINDY. It's always a few seconds into the run method. I was able to avoid this by slowly stepping through some of the higher states (PREP_ASC, ENGAGE_ASC, MAX_POWER, LOWNOISE_ASC). I'm not sure which of these is the most relevant to avoiding the ringup; probably not the ASC states, since when I paused there but forgot MAX_POWER I still saw the ringup.
LOG: No log.
For the OWL shift, I received the 12:03 AM PT notification, but I had a phone issue.
TITLE: 11/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently still trying to relock, now at POWER_25W. We've been working on getting back up all day, but the secondary microseism has been very high; it is slowly coming down, though. At one point we were in OMC_WHITENING for over an hour trying to damp violins so we could go into NLN, but we lost lock before we could get there.
LOG:
15:30 In DOWN due to very high secondary useism
15:38 Started relocking with an initial alignment
- When we got to PRC, I had to do the Pausing-PSL_FSS-Until-IMC-Is-Locked thing again (see 81022)
- 16:04 Initial alignment done, relocking
- Locklosses from ACQUIRE_DRMI_1F, OFFLOAD_DRMI_ASC, ENGAGE_ASC_FOR_FULL_IFO, TRANSITION_FROM_ETMX
- 18:07 Sitting in DOWN for a bit
- 18:45 Started relocking
- More locklosses
- 21:48 Lost lock after sitting in OMC_WHITENING damping violins for over an hour
- More trying to lock and losing lock
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 17:37 | PEM | Robert | CER | n | Improving ground measurement | 17:44 |
Most of the locklosses over this weekend have the IMC tag and those do show the IMC losing lock at the same time as AS_A, but since I don't have any further insight into those, I wanted to point out a few locklosses where the causes are different from what we've been seeing lately.
2024-11-02 18:38:30.844238 UTC
I checked some normal NLN times, and the H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ channel does not normally drop below 0.31 while we are in NLN (plot). In the lockloss times Oli listed above, it drops to 0.28 when we lose lock. Maybe we can edit the PSL glitch scripts (80902) to check this channel.
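As a rough sketch of what such a check could look like (assuming gwpy access to the channel; the function name and GPS times are placeholders, and the 0.31 level comes from the trend above):

```python
from gwpy.timeseries import TimeSeries

CHANNEL = 'H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ'
THRESHOLD = 0.31  # level the channel stays above during normal NLN (see above)

def aom_driver_drops(start_gps, end_gps):
    """Return the GPS times where the AOM driver monitor dips below threshold."""
    data = TimeSeries.get(CHANNEL, start_gps, end_gps)
    low = data.value < THRESHOLD
    return data.times.value[low]

# Example usage with placeholder GPS times around a lockloss:
# print(aom_driver_drops(1414600000, 1414600060))
```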
TITLE: 11/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 22mph Gusts, 18mph 3min avg
Primary useism: 0.13 μm/s
Secondary useism: 0.60 μm/s
QUICK SUMMARY:
Closes FAMIS#27801
Last checked a week ago by Camilla, but since the water swap was done recently, I am doing the next check only a week later.
CO2X
CO2Y
There was no water in the leak cup.
Sun Nov 03 10:13:13 2024 INFO: Fill completed in 13min 10secs
TITLE: 11/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.09 μm/s
Secondary useism: 0.72 μm/s
QUICK SUMMARY:
Looks like we have been sitting in PREP_FOR_LOCKING since a lockloss during relocking at 10:52 UTC (81032). I just started an initial alignment. The secondary microseism is really high, so I'm not sure how far we'll get.
18:07 UTC - Going to sit in DOWN for a bit since we haven't been able to get locked because of the high secondary useism (we made it up to TRANSITION_FROM_ETMX once, but the majority of locklosses have been from low locking states).
H1 has been having a hard time locking tonight due to very high microseismic motion. Unlocked at 05:43 UTC (lockloss tool) and has been down since; highest state I've seen H1 reach is TRANSITION_FROM_ETMX, alignment doesn't seem to be the problem.
Since H1 hasn't had any success locking and the microseism doesn't look like it's going to come down anytime soon, I'm leaving H1 in DOWN until the morning when hopefully conditions will be better.
TITLE: 11/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Short locks; the IMC kept losing lock during SDF_REVERT/READY/LOCKING_GREEN as it did yesterday. Primary useism has been increasing over the last few hours (there have been more small US EQs today than normal... Missouri, Idaho, Nevada, all in the 3s).
LOG: No log
The FC keeps losing IR lock; it reports "LO SERVO RAILED" pretty much as soon as the LO_LR guardian starts working after the FC locks IR.
RyanC, Vicky
I tried bringing SQZ_MANAGER to FDS_READY, then tried setting FC1 and FC2 back to where they were at the last good SQZ lock ~5 hours ago according to the top mass OSEMs... same issue. Trending the ZMs showed some drift over the past 5 hours, and I was thinking about moving these back as well, just as with FC2, when Vicky suggested trying to clear the ASC history. I tried this following Camilla's alog 80519, but the issue persisted, since clearing the ASC also cleared the FC ASC, which misaligned the FC. Vicky then logged on and started manually locking everything, successfully.
To summarize:
Ryan C, Vicky
What I think was the issue and what I did:
Building on work from last week, we installed a second PI AI chassis (S1500301) in order to keep the PI signals separate from the ESD driver signals. The original PI AI chassis is S1500299.
We routed LD32 Bank 0 through the first PI AI chassis to the L3 ESD drive, while keeping the old ESD driver signal driving the PI through the new PI AI chassis.
We routed the LD32 Bank 1 to the L2 & L1 suspension drive.
We did not route LD32 Bank 2 or Bank 3 to any suspensions. The M0 and R0 signals are still being driven by the 18 bit DACs.
The testing did not go as smoothly as planned: a watchdog on DAC slot 5 (the L1 & L2 drive 20-bit DAC) continuously tripped the ESD reset line. We solved this by attaching that open DAC port (slot 5) to the PI AI chassis to clear the WD error.
Looks like we made it to observing.
F. Clara, R. McCarthy, F. Mera, M. Pirello, D. Sigg
Part of the implication of this alog is that the new LIGO DAC is currently installed and in use for the DARM actuator suspension (the L3 stage of ETMX). Louis and the calibration team have taken the changes into account (see, e.g., alog 80155).
The vision as I understand it is to use this new DAC for at least a few weeks, with the goal of collecting some information on how it affects our data quality. Are there new lines? Fewer lines? A change in glitch rate? I don't know that anyone has reached out to DetChar to flag that this change was coming, but now that it's in place, it would be helpful (after we've collected some data) for some DetChar studies to take place, to help improve the design of this new DAC (which I believe is a candidate for installation everywhere for O5).
Analysis of glitch rate:
We selected Omicron transients during observing time across all frequencies and divided the analysis into two cases: (1) rates calculated using glitches with SNR>6.5, and (2) rates calculated using glitches with SNR>5. The daily glitch rate for transients with SNR greater than 6.5 is shown in Figure 1, with no significant difference observed before and after September 17th. In contrast, Figure 2, which includes all Omicron transients with SNR>5, shows a higher daily glitch rate after September 17th.
The rate was calculated by dividing the number of glitches per day by the daily observing time in hours.
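As a minimal sketch of that rate definition (the function and input names are illustrative; per-day trigger counts and observing hours are assumed to be computed elsewhere):

```python
import numpy as np

def daily_glitch_rate(glitch_counts, observing_hours):
    """Glitches per observing hour for each day: counts / hours,
    NaN on days with no observing time."""
    counts = np.asarray(glitch_counts, dtype=float)
    hours = np.asarray(observing_hours, dtype=float)
    return np.divide(counts, hours,
                     out=np.full_like(counts, np.nan),
                     where=hours > 0)

# e.g. 120 triggers with SNR > 6.5 in 10.5 observing hours -> ~11.4 per hour
# print(daily_glitch_rate([120], [10.5]))
```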
We decided to go ahead and replace the h1pslfss model's ADC and DAC cards: the ADC because of the continuous autocal failures, and the DAC to replace an aging card which might be glitching.
11:30 Powered the system down, replaced the second ADC and second DAC cards (see the attached IO chassis drawing).
When the system was powered up we had good news and bad news. The good news: ADC1 autocal passed after the previous card had been failing continually since at least Nov 2023. The bad news: we once again did not have a duotone signal in the ADC0_31 channel. Again it was a DC signal, with an amplitude of 8115 ± 5 counts.
11:50 Powered down for a 4th time today, replaced timing card and ADC0's interface card (see drawing)
12:15 powered the system back up, this time everything looks good. ADC1 AUTOCAL passed again. Duotone looks correct.
Note that the new timing card's duotone crossing time is 7.1 µs; the old card had a crossing of 7.6 µs.
Here is a summary of the four power cycles of h1psl0 we did today:
Card Serial Numbers
Detailed timeline:
Mon04Nov2024
LOC TIME HOSTNAME MODEL/REBOOT
10:20:14 h1psl0 ***REBOOT***
10:21:15 h1psl0 h1ioppsl0
10:21:28 h1psl0 h1psliss
10:21:41 h1psl0 h1pslfss
10:21:54 h1psl0 h1pslpmc
10:22:07 h1psl0 h1psldbb
10:33:20 h1psl0 ***REBOOT***
10:34:21 h1psl0 h1ioppsl0
10:34:34 h1psl0 h1psliss
10:34:47 h1psl0 h1pslfss
10:35:00 h1psl0 h1pslpmc
10:35:13 h1psl0 h1psldbb
11:43:20 h1psl0 ***REBOOT***
11:44:21 h1psl0 h1ioppsl0
11:44:34 h1psl0 h1psliss
11:44:47 h1psl0 h1pslfss
11:45:00 h1psl0 h1pslpmc
11:45:13 h1psl0 h1psldbb
12:15:47 h1psl0 ***REBOOT***
12:16:48 h1psl0 h1ioppsl0
12:17:01 h1psl0 h1psliss
12:17:14 h1psl0 h1pslfss
12:17:27 h1psl0 h1pslpmc
12:17:40 h1psl0 h1psldbb