Lockloss @ 15:35 UTC - fast, no obvious cause.
LSC DARM loop shows first sign of a kick before the lockloss.
Back to observing as of 16:51 UTC.
TITLE: 09/11 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
09/11 07:05 UTC Vicky and Tony made changes to SQZ_MANAGER to take it to NO_SQUEEZING so we could be in Observing overnight without squeezing. (SDF Diffs1)
07:10 Vicky made some further edits along the same lines (I've already forgotten the specifics). (SDF Diffs2)
07:15 UTC While figuring out the workaround for the squeezer issues, we noticed that the ALIGN_IFO and INIT_ALIGN nodes were also keeping us out of Observing, since the IFO guardian was reading them as NOT_OK even though they were in their nominal states. INIT_ALIGN, for example, is nominally in IDLE and had been in IDLE since the previous lockloss at 05:53 UTC (72797). Tony attempted to get IFO to show INIT_ALIGN as OK by requesting INIT_ALIGN to DOWN and then back to IDLE, but this caused the detector to lose lock.
07:48 UTC Ran INITIAL_ALIGNMENT because the alignment was a mess (we got stuck going through ACQUIRE_PRMI three times), and in the hope that it would clear the issues with INIT_ALIGN, but we lost lock at PREP_FOR_PRX
07:58 restored optics to settings from time 1378428918 (~3hrs previous, during lock)
- restarting INITIAL_ALIGNMENT again
08:06 INITIAL_ALIGNMENT couldn't find the green arms, so we took the detector out of INITIAL_ALIGNMENT and into GREEN_ARMS_MANUAL and found green by hand for both arms
08:14 Green found; we requested DOWN and then INITIAL_ALIGNMENT again
08:39 Finished INITIAL_ALIGNMENT, requested NOMINAL_LOW_NOISE
09:20 Reached NOMINAL_LOW_NOISE
- ISC_LOCK is NOT_OK due to SQZ_MANAGER having been changed
- INIT_ALIGN is again listed as NOT_OK even though it is in its nominal state
- ALIGN_IFO listed as NOT_OK and is currently trying to go from SET_SUS_FOR_FULL_FPMI -> SET_SUS_FOR_FULL_FPMI (attachment3)
- we later figured out that INIT_ALIGN and ALIGN_IFO are somehow connected to the squeezer and the squeezer ISS, so the ISS issue was somehow causing them to be NOT_OK
09:41 Trying to revert all the changes that were made tonight to see if that would get rid of the NOT_OKs so we could get the detector into Observing, hoping that we would then be able to bypass the failing ISS
- Tony set SQZ_MANAGER's nominal to NO_SQUEEZING to see if that would help
- it did not (nominal was changed back to FREQ_DEP_SQZ)
09:42 - 10:20 Various other methods tried and considered (sdf3)
- e.g. taking SQZ_MANAGER to DOWN and then back to FDS - didn't work
10:27 UTC - Finally got into Observing! We did this by following alog 70050 to get around the squeezer ISS issue the same way Tony had been doing earlier in the evening:
- Since the script settings were already what they were supposed to be, Tony just used the alog as a reference: take SQZ_MANAGER to NO_SQUEEZING, take SQZ_OPO_LR to LOCKED_CLF_DUAL_NO_ISS, then to LOCKED_CLF_DUAL to help the ISS pump relock
- The ISS pump relocked, ALIGN_IFO, INIT_ALIGN, and ISC_LOCK changed to OK, we accepted the SDF diffs (sdf4), and we got into Observing
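For reference, a minimal sketch of that relock sequence, assuming pyepics for channel access and that the guardian request channels follow the usual H1:GRD-<NODE>_REQUEST pattern (the actual steps were done by hand from the guardian MEDM screens):

# Rough sketch only -- the real procedure was done by hand from the guardian
# screens; channel names assume the usual H1:GRD-<NODE>_REQUEST pattern.
import time
from epics import caput

def request(node, state, settle=30):
    # Write a guardian request and give the node some time to get there.
    caput('H1:GRD-' + node + '_REQUEST', state)
    time.sleep(settle)

# Take squeezing out of the IFO first
request('SQZ_MANAGER', 'NO_SQUEEZING')

# Cycle the OPO guardian through the no-ISS state so the pump ISS can relock
request('SQZ_OPO_LR', 'LOCKED_CLF_DUAL_NO_ISS')
request('SQZ_OPO_LR', 'LOCKED_CLF_DUAL')

# Once the ISS pump is relocked and the subnodes read OK, the SDF diffs are
# accepted by hand and Observing is requested as usual.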
After that, the ISS has unlocked and relocked multiple times, and the currently accepted SDF diffs are for the ISS being ON. We know this isn't ideal because of how often the ISS is losing lock and taking us out of Observing, but after Vicky and Tony troubleshot this issue for 6 hours, with me then working with Tony for a further 3.5 hours, we felt there was nothing more we could do on a weekend night at 3:30 am. Although the ISS will keep losing lock and taking us out of Observing, SQZ_MANAGER will eventually get the ISS back up and put us back into Observing, so this way we can at least get some Observing time in over the next few hours until more people can come in and fix the issue when the workday starts.
Things to note/tldr:
- SQZ_MANAGER's nominal state is back to being FREQ_DEP_SQZ, so that doesn't need to be changed
- SQZ_MANAGER needs to be taken out of the IFO ignore list
- SQZ ISS needs to be fixed (obviously)
Thank you to Tony for staying 3.5 hours past the end of his shift and to Vicky for staying up until the early hours to help troubleshoot!!
SQZ_MANAGER has been removed from the exclude_nodes list and the IFO top node was loaded at 15:37 UTC.
TITLE: 09/11 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
GRB-Short E436769 23:43 UTC Standing Down
https://gracedb.ligo.org/events/E436769/view/
23:59 UTC Dropped out of Observing due to a squeezing issue. DIAG_MAIN told me to go look at alog 70050.
After following the instructions there... see alog
00:31 UTC Dropped out of Observing again.
01:01 UTC Vicky got on Mattermost and gave me some guidance.
It happened again; this time I just brought SQZ_MANAGER to DOWN and then back up to FDS.
Alog of this saga:
Vicky made changes to the gains and we got back to Observing at 01:48 UTC.
Well, that lasted until 02:24 UTC.
Talked with Vicky for a while and let her try a handful of changes while we were out of Observing, but at best we could only get the SQZ system to stay locked for about an hour at a time.
GRB-E436819 4:26 UTC
https://gracedb.ligo.org/events/E436819/view/
The SQZ system has been up and down all night. It seems as though the SQZ system may need to be turned off for the night.
Vicky gave me the following instructions to get the IFO into Observing with no SQZ (a rough sketch of the same steps follows below).
Overall:
bring SQZ_MANAGER --> DOWN --> NO_SQUEEZING
(top of guardian code, line 36) set SQZ_MANAGER:
nominal = 'NO_SQUEEZING' (instead of 'FREQ_DEP_SQZ')
REVERT the following SDF diffs, such that we run with:
(SHG SERVO IN1GAIN = -9
FIBR_SERVO_COMGAIN = 15)
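As a rough sketch of what those steps amount to (illustrative only; the SDF revert itself was done from the SDF screen, and the full channel names are taken from the caget checks later in this entry):

# Sketch only. The nominal-state change is an edit to the SQZ_MANAGER guardian
# code (around line 36), followed by a guardian load:
#     nominal = 'NO_SQUEEZING'   # instead of 'FREQ_DEP_SQZ'
# The values below are what the two servo gains should read after the SDF
# revert; writing them directly with pyepics would give the same end result.
from epics import caget, caput

caput('H1:SQZ-SHG_SERVO_IN1GAIN', -9)
caput('H1:SQZ-FIBR_SERVO_COMGAIN', 15)

# Verify
print('SHG IN1 gain: ', caget('H1:SQZ-SHG_SERVO_IN1GAIN'))
print('FIBR COM gain:', caget('H1:SQZ-FIBR_SERVO_COMGAIN'))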
05:53 UTC LOCKLOSS
I was in the process of making the changes and accepting the SDF diffs to go back to OBSERVING when the IFO unlocked itself.
We got back to NOMINAL_LOW_NOISE at 06:59 UTC, but couldn't get back to OBSERVE because INIT_ALIGN, ALIGN_IFO, and SQZ_MANAGER were "NOT OK" according to the H1:GRD-IFO_SUBNODES_NOT_OK list.
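(If it's not obvious which subnodes are being flagged, the list can be read straight from that channel; a minimal check, assuming pyepics and that the channel can be read back as a string:)

# Quick check of which guardian subnodes the IFO top node considers NOT_OK.
# Assumes H1:GRD-IFO_SUBNODES_NOT_OK is readable as a string.
from epics import caget

not_ok = caget('H1:GRD-IFO_SUBNODES_NOT_OK', as_string=True)
print('Subnodes NOT_OK:', not_ok if not_ok else '(none)')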
LOCKLOSS 07:15 UTC due to a button press.
I tried reloading INIT_ALIGN to see if it would become "OK", and then I thought I should just take INIT_ALIGN to DOWN and back up to IDLE..... WHICH THEN UNLOCKED THE IFO. https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72797
Edited the IFO guardian node list to add SQZ_MANAGER to the exclude list; that is, we want to ignore SQZ_MANAGER for the night until someone can come in and make some adjustments.
Passing Oli an unlocked IFO.
LOG:
Vicky and I were troubleshooting the SQZ system basically all night.
Lockloss due to Button push:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1378451754
05:53 UTC LOCKLOSS
While trying to make the changes described in alog 72795, I was in the process of making them and accepting the SDF diffs to go back to OBSERVING when the IFO unlocked itself.
We got back to NOMINAL_LOW_NOISE, but couldn't get back to OBSERVE because INIT_ALIGN, ALIGN_IFO, and SQZ_MANAGER were "NOT OK" according to the H1:GRD-IFO_SUBNODES_NOT_OK list.
LOCKLOSS 07:15 UTC due to a button press.
I tried reloading INIT_ALIGN to see if it would become "OK", and then I thought I should just take INIT_ALIGN to DOWN and back up to IDLE..... WHICH THEN UNLOCKED THE IFO. OMG, I'm so sorry, Oli.
Dropped out of Observing due to a squeezing issue; DIAG_MAIN told me to go look at alog 70050.
I took SQZ_MANAGER to NO_SQUEEZING and edited line 12 of the sqzparams.py guardian code:
From:
opo_grTrans_setpoint_uW = 80 #OPO trans power that ISS will servo to. alog 70050.
to:
opo_grTrans_setpoint_uW = 50 #80 #OPO trans power that ISS will servo to. alog 70050.
Loaded the SQZ_OPO_LR guardian.
I then took the SQZ_OPO_LR guardian to LOCKED_CLF_DUAL_NO_ISS, then to LOCKED_CLF_DUAL.
The instructions told me to maximize H1:SQZ-OPO_TEC_SETTEMP; after a few slider bumps it was maxed.
Then I saw the update at the bottom, which told me to change opo_grTrans_setpoint_uW to 60, which I did.
I then took SQZ_MANAGER back up to FREQ_DEP_SQZ, accepted the SDF diffs, and took H1 back to Observing. It was happy with this for a brief moment, then dropped back out of Observing.
This is when Vicky checked in with me and my pleas for SQZr help.
She told me that this is likely due to the SQZr being tuned for opo_grTrans_setpoint_uW = 80. I will link this alog in a comment on alog 70050, which guided me to change it.
We have since dropped out two more times, and I have simply taken SQZ_MANAGER to DOWN and then back up to FREQ_DEP_SQZ, allowing Observing to be reached again, but it hasn't lasted.
More troubleshooting is needed. Vicky said she will be logging on shortly to check it out remotely.
Screenshot of the logs attached.
H1:SQZ-SHG_SERVO_IN1GAIN was changed in hopes of adding more stability to the SQZr.
SDF diff accepted.
While in the process of taking SQZ_MANAGER to NO_SQUEEZING so we could stay in Observing for longer than an hour, a lockloss happened.
Vicky gave me the following instructions to get the IFO into Observing with no SQZ.
Overall:
bring SQZ_MANAGER --> DOWN --> NO_SQUEEZING
(top of guardian code, line 36) set SQZ_MANAGER:
nominal = 'NO_SQUEEZING' (instead of 'FREQ_DEP_SQZ')
REVERT the following SDF diffs, such that we run with:
(SHG SERVO IN1GAIN = -9
FIBR_SERVO_COMGAIN = 15)
anthony.sanchez@cdsws13: caget H1:SQZ-SHG_SERVO_IN1GAIN
H1:SQZ-SHG_SERVO_IN1GAIN -9
anthony.sanchez@cdsws13: caget H1:SQZ-FIBR_SERVO_COMGAIN
H1:SQZ-FIBR_SERVO_COMGAIN 15
TITLE: 09/10 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Inherited a Locked IFO that has been Locked for 15 hours.
Everything looks great.
TITLE: 09/10 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Quiet shift today with H1 locked and observing throughout. Current lock stretch is up to 15 hours.
LOG:
No log for this shift.
State of H1: Observing at 149Mpc
Very quiet day so far with just a couple of small earthquakes passing through. H1 has been locked for 11 hours.
Sun Sep 10 10:07:48 2023 INFO: Fill completed in 7min 44secs
FAMIS 19993
There was a brief temperature spike a little over 6 days ago, but this quickly came back to normal. No other major events in the past 10 days.
TITLE: 09/10 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 6mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked for 7 hours.
IFO_NOTIFY asked for assistance after reaching NLN and having ADS converge, because the LOCKLOSS_SHUTTER_CHECK guardian node wasn't in its nominal state (nominal is HIGH_ARM_POWER).
It looks like when the lockloss at 09/10 06:46 UTC happened, LOCKLOSS_SHUTTER_CHECK jumped from HIGH_ARM_POWER to CHECK_SHUTTER and got stuck there with the message "USERMSG 0: run shutter check, then manually take to low power". I requested CHECK_SHUTTER (even though it was already in that state), then went into MANUAL mode and requested LOW_ARM_POWER. Since the arm power was above threshold, it automatically moved up to HIGH_ARM_POWER as soon as I switched back to AUTO mode, and we got into Observing. I've attached a txt file of the output log since my description of what I did might be a little hard to follow.
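For future reference, the recovery boils down to the request sequence below. This is only a sketch: the MANUAL/AUTO mode switches were made on the node's guardian MEDM screen, and the request channel name assumes the usual H1:GRD-<NODE>_REQUEST pattern.

# Sketch of the LOCKLOSS_SHUTTER_CHECK recovery, not a script that was run.
# Mode changes (AUTO <-> MANUAL) were made from the guardian MEDM screen.
from epics import caput

REQ = 'H1:GRD-LOCKLOSS_SHUTTER_CHECK_REQUEST'

# 1. Re-request the state it was already stuck in, per the USERMSG
caput(REQ, 'CHECK_SHUTTER')

# 2. With the node in MANUAL mode, step it down to low power
caput(REQ, 'LOW_ARM_POWER')

# 3. Back in AUTO mode, the node sees arm power above threshold and walks
#    itself back up to its nominal state, HIGH_ARM_POWER.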
TITLE: 09/10 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Lockloss 00:52 UTC https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72780
An attempt to relock right away was met with a lockloss at PRMI.
Initial alignment started 1:26 UTC After PRMI Lockloss.
Relocking started again at 1:49 UTC.
Nominal Low Noise Reached at 2:35 UTC
Observing Reached at 2:48 UTC
GRB-Short E436485 standing down
6:47 UTC unknown Lockloss. https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72782
Current Status: Relocking, and in PRMI_ASC
Unknown lockloss right before the end of the shift.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1378363592
Looking at this plot makes me think this lockloss looks very much like a lockloss I had the other day, alog 72772.
Tagging TCS because H1 dropped from Observing due to TCS changing settings and thus creating SDF differences.
The TCS_ITMX_CO2 guardian dropped down to FIND_LOCK_POINT for a minute for some reason.
I've set the setpoint for the OPO trans to 60 uW; this gives us better squeezing and a little bit higher range. However, the SHG output power sometimes fluctuates for reasons we don't understand, which causes the ISS to saturate and knocks us out of Observing. Vicky and the operators have fixed this several times, so I'm adding instructions here in the hope that we can leave the setpoint at 60 uW and operators will know how to fix the problem if it arises again.
If the ISS saturates, you will get a message on DIAG_MAIN; the operators can then lower the setpoint to 50 uW.
1) Take SQZ out of the IFO by requesting NO_SQUEEZING from SQZ_MANAGER.
2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor. In the sqzparams file you can set opo_grTrans_setpoint_uW to 50. Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, and after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint.
3) This change in the circulating power means that we need to adjust the OPO temperature to get the best SQZ. Open the OPO temp ndscope from the SQZ scopes drop-down menu on the SQZ overview (pink oval in screenshot). Then adjust the OPO temp setting (green oval) to maximize the CLF-REFL_RF6_ABS channel, the green one on the scope.
4) Go back to Observing by requesting FREQ_DEP_SQZ from SQZ_MANAGER. You will have 2 SDF diffs to accept, as shown in the attached screenshot.
Update: in the SDF diffs you will likely not see H1:SQZ-OPO_ISS_DRIVEPOINT change, just the one diff for OPO_TEC_SETTEMP. The *ISS_DRIVEPOINT channel is used for commissioning, but the ISS stabilizes the power to an un-monitored value that changes, H1:SQZ-OPO_ISS_SETPOINT.
Also, if the SQZ_OPO_LR guardian is stuck ramping in ENGAGE_PUMP_ISS (you'll see H1:SQZ-OPO_TRANS_LF_OUTPUT ramping), this is because the setpoint is too high to be reached, which is a sign to reduce opo_grTrans_setpoint_uW in sqzparams.py.
Update for operators:
2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor. In the sqzparams file you can set opo_grTrans_setpoint_uW to 60 (instead of 50). Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, and after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint. Check whether the OPO ISS control monitor (H1:SQZ-OPO_ISS_CONTROLMON) is around 3 by opening SQZ OVERVIEW -> SQZT0 -> AOM +80MHz -> Control monitor (attached screenshot). If the control monitor is not around 3, repeat 2) and adjust opo_grTrans_setpoint_uW to make it around 3.
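Putting the updated steps together, a minimal sketch of the procedure, assuming pyepics for channel access and the usual H1:GRD-<NODE>_REQUEST guardian request channels; the sqzparams.py edit and the guardian load are still done by hand, and the 2-4 window around the "around 3" control monitor target is only illustrative.

# Sketch of the OPO ISS setpoint recovery under the assumptions above.
# Step 0 (by hand): edit opo_grTrans_setpoint_uW in sqzparams.py and load
# the SQZ_OPO_LR guardian node.
import time
from epics import caget, caput

def request(node, state, settle=30):
    # Write a guardian request and give the node time to get there.
    caput('H1:GRD-' + node + '_REQUEST', state)
    time.sleep(settle)

# 1) Take squeezing out of the IFO
request('SQZ_MANAGER', 'NO_SQUEEZING')

# 2) Cycle the OPO node so the ISS comes on at the new setpoint
request('SQZ_OPO_LR', 'LOCKED_CLF_DUAL_NO_ISS')
request('SQZ_OPO_LR', 'LOCKED_CLF_DUAL')

# The OPO ISS control monitor should sit around 3; the 2-4 window below is
# only an illustrative tolerance around that target.
controlmon = caget('H1:SQZ-OPO_ISS_CONTROLMON')
print('OPO ISS control monitor:', controlmon)
if controlmon is None or not 2.0 < controlmon < 4.0:
    print('Adjust opo_grTrans_setpoint_uW in sqzparams.py and repeat step 2.')

# 3) By hand: tweak H1:SQZ-OPO_TEC_SETTEMP to maximize CLF-REFL_RF6_ABS
# 4) Request FREQ_DEP_SQZ from SQZ_MANAGER and accept the expected SDF diffs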
Vicky has asked me to make a comment that the line in sqzparams.py should stay at 80, since the SQZr is tuned for 80 rather than 50 or 60.
Line 12: opo_grTrans_setpoint_uW = 80 #OPO trans power that ISS will servo to. alog 70050.
relevant alog:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72791
Latest update on how to deal with this SQZ error message with a bit more clarity:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80413