TITLE: 11/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently still trying to relock, and at POWER_25W. All day we've been working on getting back up, but the secondary microseism has been very high. It is slowly coming down, though. At one point, we were in OMC_WHITENING for over an hour trying to damp violins so we could go into NLN, but we lost lock before we could get there.
LOG:
15:30 In DOWN due to very high secondary useism
15:38 Started relocking with an initial alignment
- When we got to PRC, I had to do the Pausing-PSL_FSS-Until-IMC-Is-Locked thing again (see 81022)
- 16:04 Initial alignment done, relocking
- Locklosses from ACQUIRE_DRMI_1F, OFFLOAD_DRMI_ASC, ENGAGE_ASC_FOR_FULL_IFO, TRANSITION_FROM_ETMX
- 18:07 Sitting in DOWN for a bit
- 18:45 Started relocking
- More locklosses
- 21:48 Lost lock after sitting in OMC_WHITENING damping violins for over an hour
- More trying to lock and losing lock
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 17:37 | PEM | Robert | CER | n | Improving ground measurement | 17:44 |
Most of the locklosses over this weekend have the IMC tag, and those do show the IMC losing lock at the same time as AS_A. Since I don't have any further insight into those, I wanted to point out a few locklosses where the causes are different from what we've been seeing lately.
2024-11-02 18:38:30.844238 UTC
I checked during some normal NLN times and the H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ channel does not normally drop below 0.31 while we are in NLN (plot). In the pre-lockloss times Oli looked at, it drops to 0.28 when we lose lock. Maybe we can edit the PSL glitch scripts (80902) to check this channel.
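As a rough idea of what such a check could look like (just a sketch, not the actual PSL glitch script; the data access method, threshold, and lookback window are my assumptions based on the numbers above):

```python
from gwpy.timeseries import TimeSeries

# Sketch only: flag a lockloss as a possible ISS/AOM glitch if the AOM driver
# monitor dips below its nominal NLN floor (~0.31) shortly before the lockloss.
CHANNEL = "H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ"
THRESHOLD = 0.31   # channel stays above this during normal NLN (see plot above)

def aom_dipped_before_lockloss(lockloss_gps, lookback=5):
    """Return True if the AOM driver monitor dropped below THRESHOLD
    in the `lookback` seconds before the lockloss GPS time."""
    data = TimeSeries.get(CHANNEL, lockloss_gps - lookback, lockloss_gps)
    return data.min().value < THRESHOLD
```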
TITLE: 11/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 22mph Gusts, 18mph 3min avg
Primary useism: 0.13 μm/s
Secondary useism: 0.60 μm/s
QUICK SUMMARY:
Closes FAMIS#27801
Last checked a week ago by Camilla, but since the water swap was done recently, I am doing the next check only a week later.
CO2X
CO2Y
There was no water in the leak cup.
Sun Nov 03 10:13:13 2024 INFO: Fill completed in 13min 10secs
TITLE: 11/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.09 μm/s
Secondary useism: 0.72 μm/s
QUICK SUMMARY:
Looks like we have been sitting in PREP_FOR_LOCKING since a relocking lockloss at 10:52 UTC (81032). I just started an initial alignment. The secondary microseism is really high, so I'm not sure how far we'll get.
18:07 UTC - going to sit in DOWN for a bit since we haven't been able to get locked because of the high secondary useism (we were able to get up to TRANSITION_FROM_ETMX once, but the majority of locklosses have been from low locking states)
H1 has been having a hard time locking tonight due to very high microseismic motion. Unlocked at 05:43 UTC (lockloss tool) and has been down since; highest state I've seen H1 reach is TRANSITION_FROM_ETMX, alignment doesn't seem to be the problem.
Since H1 hasn't had any success locking and the microseism doesn't look like it's going to come down anytime soon, I'm leaving H1 in DOWN until the morning when hopefully conditions will be better.
TITLE: 11/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Short locks; the IMC kept losing lock during SDF_REVERT/READY/LOCKING_GREEN as it did yesterday. Primary useism has been increasing over the last few hours (there have been more small US earthquakes today than normal... Missouri, Idaho, Nevada, all in the magnitude 3 range).
LOG: No log
The FC keeps losing IR lock; it says "LO SERVO RAILED" pretty much as soon as the LO_LR guardian starts working after the FC locks IR.
RyanC, Vicky
I tried bringing SQZ_MANAGER to FDS_READY, then I tried to set FC{1,2} back to where they were at the last good SQZ lock ~5 hours ago according to the top mass OSEMs... same issue. Trending the ZMs showed some drifting over the past 5 hours, and I was thinking about moving these back just as with FC2 when Vicky suggested trying to clear the ASC history. I tried this following Camilla's alog 80519, but the issue persisted, as clearing ASC also cleared the FC ASC, which misaligned the FC. Vicky then logged on and started trying to manually lock everything, successfully.
To summarize
Ryan C, Vicky
What I think was the issue and what I did:
02:28 UTC lockloss, IMC tag
00:44 UTC lockloss, ASC_AS_A and IMC lost lock ~10 ms apart.
02:00 UTC Observing
TITLE: 11/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently Observing at 157Mpc and have been Locked for over an hour. We relocked many times today, and relocking tended to take a while due to many locklosses, but for the most part it was hands-off. Only two things required help: 1) During both initial alignments I ran today, when we got to aligning PRC, the IMC unlocked and could not relock - every time it would relock, the FSS would glitch and unlock it. To solve this, I waited until the PSL_FSS guardian was in READY_FOR_MC_LOCK and then paused it. I then waited for the IMC to lock (it would lock fine on that next try), and once it had been locked for a few seconds, I unpaused the PSL_FSS guardian (tagging OpsInfo). 2) One of the times when we were in GREEN_ARMS, ALS_XARM was stuck in CHECK_CRYSTAL_FREQUENCY, but taking it to UNLOCKED and then ETM_TMS_WFS_OFFLOADED fixed it right away.
LOG:
14:30 Observing and locked for over 3 hours
14:47 Superevent S241102cy
15:17 Lockloss
15:17 Started an initial alignment
IMC lost lock when trying to align PRC
- Every time it would relock, the FSS would oscillate and cause both the IMC and the FSS to lose lock again
- I paused the FSS guardian once the FSS was locked and then unpaused once the IMC was good and locked
15:40 Initial alignment done, relocking
16:12 Lockloss from MOVE_SPOTS
17:04 NOMINAL_LOW_NOISE
17:05 Observing
17:13 Lockloss
18:05 NOMINAL_LOW_NOISE
18:08 Observing
18:38 Lockloss
18:38 Started an initial alignment
- Same issue as before - during PRC the mode cleaner unlocks and cannot relock due to the FSS glitching and unlocking.
Steps that have worked for me: (tagging Ops)
1) Wait until the PSL_FSS guardian is in READY_FOR_MC_LOCK
2) Once it is, pause PSL_FSS
3) Wait for IMC to lock
4) Once it has, unpause PSL_FSS
- For some reason, even once the FSS and IMC were locked and we were aligning PRC, the ISS diffracted power was jumping all over the place
18:58 Initial alignment done, relocking
- ALS_XARM CHECK_CRYSTAL_FREQUENCY issue - toggling Force/No Force did not work, so I changed ALS_XARM to AUTO and then took it to UNLOCKED. I selected ETM_TMS_WFS_OFFLOADED and it locked immediately before I could troubleshoot further
19:48 NOMINAL_LOW_NOISE
19:52 Observing
20:08 Lockloss
22:15 NOMINAL_LOW_NOISE
22:15 Observing
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 17:19 | PEM | Robert | LVEA | Y | Looking for grounding spot | 17:39 |
| 17:48 | PEM | Robert | CER | n | Setting up stuff | 18:02 |
TITLE: 11/02 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 8mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.24 μm/s
QUICK SUMMARY:
Using the darm_integral_compare.py script from the NoiseBudget repo (NoiseBudget/aligoNB/production_code/H1/darm_integral_compare.py) as a starting point, I made a version that is simplified and easy to run for when our range is low and we want to compare range vs frequency with a previous time.
It takes two start times, supplied by the user, and for each time it grabs the DARM data between that start time and an end time of start time + 5400 seconds (1.5 hours). Using this data, it calculates the inspiral range integrand and returns two plots (pdf): one showing the range integrand plotted against frequency for each set of data (png1), and a second showing DARM for each set of data along with a trace showing the cumulative difference in range between the two sets as a function of frequency (png2). These are saved both as pngs and in a pdf in the script's folder.
This script can be found at gitcommon/ops_tools/rangeComparison/range_compare.py. To run it, you just need to supply the GPS times for the two stretches of time that you want to compare, although there is also an optional argument you can use if you want the length of data taken to be different from the default 5400 seconds. The command used to generate the PDF and PNGs attached to this alog was as follows: python3 range_compare.py --span 5000
I apparently didn't do a very good job of telling you how to run this and forgot to put the example times in the command, so here's a clearer (and actually complete) explanation.
To find the script, go to:
cd /ligo/gitcommon/ops_tools/rangeComparison/
and then to run the script:
python3 range_compare.py [time1] [time2]
where time1 and time2 are the GPS start times for the two stretches of time that you want to compare. The default time span it will use for each start time is 5400 seconds, but this can be changed with the --span argument followed by the number of seconds you want. For example, the plots from the original alog were made by running the command python3 range_compare.py --span 5000 1414349158 1414586877
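For anyone curious, the core of what the script does for each time stretch is roughly the following (a simplified sketch using gwpy, not the script itself; the channel name, FFT settings, and use of gwpy's BNS inspiral range integrand are my assumptions about the details):

```python
import numpy as np
from gwpy.timeseries import TimeSeries
from gwpy.astro import inspiral_range_psd

# Assumed channel; the real script may read DARM from a different channel.
CHANNEL = "H1:GDS-CALIB_STRAIN"

def range_integrand(gps_start, span=5400):
    """Fetch `span` seconds of DARM data starting at `gps_start` and return
    the BNS inspiral range integrand (Mpc^2 / Hz) versus frequency."""
    data = TimeSeries.get(CHANNEL, gps_start, gps_start + span)
    psd = data.psd(fftlength=8, overlap=4)
    return inspiral_range_psd(psd)

# Comparing two stretches, e.g. the example times from above:
# d1 = range_integrand(1414349158, span=5000)
# d2 = range_integrand(1414586877, span=5000)
# Cumulative range (Mpc) vs frequency for each, for a rough comparison:
# r1 = np.sqrt(np.cumsum(d1.value) * d1.df.value)
# r2 = np.sqrt(np.cumsum(d2.value) * d2.df.value)
```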
Building on work last week, we installed a 2nd PI AI chassis (S1500301) in order to keep the PI signals separate from the ESD driver signals. Original PI AI chassis S1500299.
We routed the LD32 Bank 0 through the first PI AI chassis to the ESD drive L3, while keeping the old ESD driver signal driving the PI through the new PI AI chassis.
We routed the LD32 Bank 1 to the L2 & L1 suspension drive.
We did not route LD32 Bank 2 or Bank 3 to any suspensions. The M0 and R0 signals are still being driven by the 18 bit DACs.
The testing did not go as smoothly as planned: a watchdog on DAC slot 5 (the L1 & L2 drive 20-bit DAC) continuously tripped the ESD reset line. We solved this by attaching that open DAC port (slot 5) to the PI AI chassis to clear the WD error.
Looks like we made it to observing.
F. Clara, R. McCarthy, F. Mera, M. Pirello, D. Sigg
Part of the implication of this alog is that the new LIGO DAC is currently installed and in use for the DARM actuator suspension (the L3 stage of ETMX). Louis and the calibration team have taken the changes into account (see, eg, alog 80155).
The vision as I understand it is to use this new DAC for at least a few weeks, with the goal of collecting some information on how it affects our data quality. Are there new lines? Fewer lines? A change in glitch rate? I don't know that anyone has reached out to DetChar to flag that this change was coming, but now that it's in place, it would be helpful (after we've had some data collected) for some DetChar studies to take place, to help improve the design of this new DAC (that I believe is a candidate for installation everywhere for O5).
Analysis of glitch rate:
We selected Omicron transients during observing time across all frequencies and divided the analysis into two cases: (1) rates calculated using glitches with SNR>6.5, and (2) rates calculated using glitches with SNR>5. The daily glitch rate for transients with SNR greater than 6.5 is shown in Figure 1, with no significant difference observed before and after September 17th. In contrast, Figure 2, which includes all Omicron transients with SNR>5, shows a higher daily glitch rate after September 17th.
The rate was calculated by dividing the number of glitches per day by the daily observing time in hours.
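For reference, the rate calculation described here amounts to something like the following (an illustrative sketch only; the trigger list and livetime bookkeeping are placeholders, not the actual analysis code):

```python
from collections import defaultdict
from gwpy.time import from_gps

def daily_glitch_rate(triggers, observing_hours_by_day, snr_cut=6.5):
    """triggers: iterable of (gps_time, snr) Omicron triggers during observing time.
    observing_hours_by_day: dict mapping date -> observing livetime in hours.
    Returns dict mapping date -> glitches per observing hour above the SNR cut."""
    counts = defaultdict(int)
    for gps, snr in triggers:
        if snr > snr_cut:
            counts[from_gps(gps).date()] += 1
    return {day: counts[day] / hours
            for day, hours in observing_hours_by_day.items()
            if hours > 0}
```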