TITLE: 11/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.09 μm/s
Secondary useism: 0.72 μm/s
QUICK SUMMARY:
Looks like we have been sitting in PREP_FOR_LOCKING since a relocking lockloss at 10:52 UTC (81032). I just started an initial alignment. The secondary microseism is really high, so I'm not sure how far we'll get.
H1 has been having a hard time locking tonight due to very high microseismic motion. Unlocked at 05:43 UTC (lockloss tool) and has been down since; the highest state I've seen H1 reach is TRANSITION_FROM_ETMX, so alignment doesn't seem to be the problem.
Since H1 hasn't had any success locking and the microseism doesn't look like it's going to come down anytime soon, I'm leaving H1 in DOWN until the morning when hopefully conditions will be better.
TITLE: 11/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Short locks; the IMC kept losing lock during SDF_REVERT/READY/LOCKING_GREEN as it did yesterday. Primary useism has been increasing over the last few hours (there have been more small US EQs today than normal... Missouri, Idaho, Nevada all in the magnitude 3s).
LOG: No log
The FC keeps losing IR lock; it says "LO SERVO RAILED" pretty much as soon as the LO_LR guardian starts working after the FC locks IR.
RyanC, Vicky
I tried bringing SQZ_MANAGER to FDS_READY, then I tried to set FC1 and FC2 back to where they were during the last good SQZ lock ~5 hours ago according to the top mass OSEMs... same issue. Trending the ZMs showed some drifting over the past 5 hours; I was thinking about moving these back just as with FC2 when Vicky suggested trying to clear the ASC history. I tried this following Camilla's alog 80519, but the issue persisted since clearing ASC also cleared the FC ASC, which misaligned the FC. Vicky then logged on and started trying to manually lock everything, successfully.
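As an aside, the "trend back to the last good time" check can be scripted. Below is a minimal sketch, assuming gwpy is available and guessing at FC2 top-mass witness channel names (placeholders, not verified), that compares OSEM witness values between a good time and the problem time:

    # Sketch only: compare top-mass OSEM witness values between a known-good
    # time and the problem time to see how far an optic has drifted.
    # The FC2 channel names below are illustrative guesses, not verified.
    from gwpy.timeseries import TimeSeriesDict

    CHANNELS = [
        'H1:SUS-FC2_M1_DAMP_P_INMON',   # assumed pitch witness
        'H1:SUS-FC2_M1_DAMP_Y_INMON',   # assumed yaw witness
    ]

    good_time = 1414600000   # placeholder GPS for the last good SQZ lock
    bad_time = 1414618000    # placeholder GPS for the problem time

    good = TimeSeriesDict.get(CHANNELS, good_time, good_time + 60)
    bad = TimeSeriesDict.get(CHANNELS, bad_time, bad_time + 60)

    for chan in CHANNELS:
        drift = bad[chan].mean().value - good[chan].mean().value
        print(f'{chan}: moved {drift:+.2f} counts since the good lock')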
To summarize
Ryan C, Vicky
What I think was the issue and what I did:
02:28 UTC lockloss, IMC tag
00:44 UTC lockloss, ASC_AS_A and IMC lost lock ~10 ms apart.
02:00 UTC Observing
TITLE: 11/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently Observing at 157Mpc and have been Locked for over an hour. We relocked many times today, and relocking tended to take a while due to the many locklosses, but for the most part it was hands-off. The only things that required help were:
1) During both initial alignments I ran today, when we got to aligning PRC, the IMC unlocked and could not relock - every time it would relock, the FSS would glitch and unlock it. To solve this, I waited until the PSL_FSS guardian was in READY_FOR_MC_LOCK and then paused it. Then I waited for the IMC to lock, which it did fine on that next try, and once it had been locked for a few seconds, I unpaused the PSL_FSS guardian (tagging OpsInfo).
2) One of the times when we were in GREEN_ARMS, ALS_XARM was stuck in CHECK_CRYSTAL_FREQUENCY, but taking it to UNLOCKED and then ETM_TMS_WFS_OFFLOADED fixed it right away.
LOG:
14:30 Observing and locked for over 3 hours
14:47 Superevent S241102cy
15:17 Lockloss
15:17 Started an initial alignment
IMC lost lock when trying to align PRC
- Every time it would relock, the FSS would oscillate, causing both the IMC and the FSS to lose lock again
- I paused the FSS guardian once the FSS was locked and then unpaused once the IMC was good and locked
15:40 Initial alignment done, relocking
16:12 Lockloss from MOVE_SPOTS
17:04 NOMINAL_LOW_NOISE
17:05 Observing
17:13 Lockloss
18:05 NOMINAL_LOW_NOISE
18:08 Observing
18:38 Lockloss
18:38 Started an initial alignment
- Same issue as before - during PRC the mode cleaner unlocks and cannot relock due to the FSS glitching and unlocking.
Steps that have worked for me: (tagging Ops)
1) Wait until the PSL_FSS guardian is in READY_FOR_MC_LOCK
2) Once it is, pause PSL_FSS
3) Wait for IMC to lock
4) Once it has, unpause PSL_FSS
- For some reason, even once the FSS and IMC were locked and we were aligning PRC, the ISS diffracted power was jumping all over the place
18:58 Initial alignment done, relocking
- ALS_XARM CHECK_CRYSTAL_FREQUENCY issue - toggling Force/No Force did not work, so I changed ALS_XARM to AUTO and then took it to UNLOCKED. I selected ETM_TMS_WFS_OFFLOADED and it locked immediately before I could troubleshoot further
19:48 NOMINAL_LOW_NOISE
19:52 Observing
20:08 Lockloss
22:15 NOMINAL_LOW_NOISE
22:15 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:19 | PEM | Robert | LVEA | Y | Looking for grounding spot | 17:39 |
17:48 | PEM | Robert | CER | n | Setting up stuff | 18:02 |
TITLE: 11/02 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 8mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.24 μm/s
QUICK SUMMARY:
Using the darm_integral_compare.py script from the NoiseBudget repo (NoiseBudget/aligoNB/production_code/H1/darm_integral_compare.py) as a starting point, I made a version that is simplified and easy to run for when our range is low and we want to compare range vs frequency with a previous time.
It takes two start times, supplied by the user, and for each one it grabs the DARM data between the start time and an end time of start time + 5400 seconds (1.5 hours). Using this data it calculates the inspiral range integrand and produces two plots: one showing the range integrand plotted against frequency for each set of data (png1), and a second showing DARM for each set of data, along with a trace showing the cumulative difference in range between the two sets as a function of frequency (png2). These are saved both as PNGs and in a PDF in the script's folder.
This script can be found at gitcommon/ops_tools/rangeComparison/range_compare.py. To run it you just need to supply the GPS start times for the two stretches of time that you want to compare, although there is also an optional argument you can use if you want the length of data taken to be different from the default 5400 seconds. The command used to generate the PDF and PNGs attached to this alog was as follows: python3 range_compare.py --span 5000
I apparently didn't do a very good job of telling you how to run this and forgot to put the example times in the command, so here's a clearer (actually complete) explanation.
To find the script, go to:
cd /ligo/gitcommon/ops_tools/rangeComparison/
and then to run the script:
python3 range_compare.py [time1] [time2]
where time1 and time2 are the GPS start times for the two stretches of time that you want to compare. The default time span it will run with for each time is 5400 seconds after the start time, but this can be changed by using the --span option followed by the number of seconds you want. For example, the plots from the original alog were made by running the command: python3 range_compare.py --span 5000 1414349158 1414586877
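For anyone curious what the script is doing under the hood, here is a rough sketch of the core calculation, not the actual range_compare.py code: the BNS inspiral SNR^2 integrand goes roughly as f^(-7/3)/S(f), so a cumulative range comparison versus frequency can be built from the two DARM PSDs. The channel name, band, and normalization below are assumptions for illustration; the real script is based on the NoiseBudget machinery.

    # Rough sketch of the core calculation (not the actual range_compare.py code).
    # Assumes H1:GDS-CALIB_STRAIN is a reasonable DARM channel; everything is
    # relative / arbitrary units, so no absolute calibration is attempted.
    import numpy as np
    from gwpy.timeseries import TimeSeries

    def range_integrand(gps_start, span=5400, fftlen=8):
        """Return (freq, f**(-7/3)/PSD), proportional to dSNR^2/df for a BNS inspiral."""
        darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', gps_start, gps_start + span)
        psd = darm.psd(fftlength=fftlen, overlap=fftlen / 2)
        f = psd.frequencies.value
        keep = (f > 10) & (f < 2048)              # band that matters for the range
        return f[keep], f[keep] ** (-7.0 / 3.0) / psd.value[keep]

    f1, i1 = range_integrand(1414349158)          # reference time
    f2, i2 = range_integrand(1414586877)          # comparison time

    df = f1[1] - f1[0]
    cum1 = np.cumsum(i1) * df                     # cumulative SNR^2 up to each frequency
    cum2 = np.cumsum(i2) * df
    rel = np.sqrt(cum2 / cum1) - 1                # range scales as sqrt(integrated SNR^2)
    print(f'Relative range difference over the full band: {rel[-1]:+.1%}')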
Over the past 2 days I've been seeing lots of IMC locklosses right after a main lockloss while ISC_LOCK is in READY. This has been happening 10+ times between relocks. The IMC guardian seems to lock for a split second in its "ACQUIRE" state, then as soon as it moves on to "BOOST" it loses it. It looks like the MC2 transmission maybe isn't stable when the BOOST state turns on the MC2 lock filters and ramps gains. We also see the bottom stage of MC2 SUS saturate during this. Some are from FSS oscillations, but not all.
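To put a number on "10+ times between relocks", something like the hedged sketch below could tally how often the IMC guardian dropped out of a locked state during a down period. The state channel name and the numeric threshold for "locked" are assumptions for illustration, since guardian state numbers vary:

    # Sketch: count how many times the IMC guardian dropped out of a locked
    # state during a down period. Channel name and threshold are assumptions.
    from gwpy.timeseries import TimeSeries

    start, end = 1414700000, 1414707200    # placeholder GPS times for the down period
    state = TimeSeries.get('H1:GRD-IMC_LOCK_STATE_N', start, end)

    LOCKED_THRESHOLD = 100                 # assumed: state numbers >= this mean "locked"
    locked = state.value >= LOCKED_THRESHOLD
    drops = (locked[:-1] & ~locked[1:]).sum()   # locked -> unlocked transitions
    print(f'IMC dropped lock {drops} times between {start} and {end}')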
Lockloss @ 11/02 20:08 UTC after 20 minutes locked
Lockloss @ 11/02 18:38 UTC after only 33 minutes locked
19:52 UTC Observing
18:07 UTC - going to sit in DOWN for a bit since we haven't been able to get locked because of the high secondary useism (we were able to get up to TRANSITION_FROM_ETMX once, but the majority of locklosses have been from low locking states)