Mon Sep 11 10:09:49 2023 INFO: Fill completed in 9min 45secs
Travis confirmed a good overfill curbside
Chris (Apollo), Jonathan, Patrick As part of the upgrade to the FMCS control system last week by Apollo, the subnet masks of the BACnet devices were changed from 255.255.255.0 to 255.255.0.0. The subnet mask of the computer running the BACnet to EPICS IOC has been 255.255.255.0. This needed to be changed to 255.255.0.0 to match the change to the BACnet devices. Jonathan and I did so this morning and this allowed the IOC to connect to the BACnet devices again. There are two more BACnet devices that Apollo still needs to change to 255.255.0.0. One is associated with the filter cavity station air handlers, the other to the LExC building. We do not translate any of the BACnet channels from the LExC building to EPICS, but we do for the filter cavity. The IOC was restarted a few times during this troubleshooting with permission from the operator.
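For context, a minimal illustration (using Python's standard ipaddress module; the 10.x addresses below are made-up placeholders, not the actual CDS addresses) of why the mask mismatch matters: with the old /24 mask the IOC host and a renumbered BACnet device no longer agree on what the local network and its broadcast address are, and BACnet/IP device discovery (Who-Is) relies on broadcasts.
import ipaddress

ioc    = ipaddress.ip_interface("10.106.0.15/255.255.255.0")   # IOC host, old /24 mask
device = ipaddress.ip_interface("10.106.1.20/255.255.0.0")     # BACnet device, new /16 mask

print(ioc.network, "broadcast:", ioc.network.broadcast_address)        # 10.106.0.0/24 broadcast: 10.106.0.255
print(device.network, "broadcast:", device.network.broadcast_address)  # 10.106.0.0/16 broadcast: 10.106.255.255

# With the /24 mask the IOC does not consider the device local, and its broadcasts
# do not cover the whole /16, so discovery and direct communication can fail.
print(device.ip in ioc.network)   # False until the IOC mask is widened to /16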
Lockloss @ 15:35 UTC - fast, no obvious cause.
LSC DARM loop shows first sign of a kick before the lockloss.
Back to observing as of 16:51 UTC.
TITLE: 09/11 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
09/11 07:05UTC Vicky and Tony made changes to SQZ_MANAGER to take it to NO_SQUEEZING to allow us to be in Observing overnight without squeezing. (SDF Diffs1)
07:10 Vicky further made some edits (I already forgot what the difference was for these, but it was in the same vein) (SDF Diffs2)
07:15UTC While figuring out the workaround for the squeezer issues, we noticed that the ALIGN_IFO and INIT_ALIGN nodes were also keeping us out of Observing since they were reading as being NOT_OK according to the IFO guardian, even though they were in their nominal states. INIT_ALIGN for example is nominally in IDLE, and had been in IDLE since the previous lockloss at 05:53UTC (72797). Tony attempted to get IFO to show INIT_ALIGN as OK by requesting INIT_ALIGN to DOWN and then to IDLE, but this caused the detector to lose lock.
07:48UTC Running INITIAL_ALIGNMENT because the alignment was a mess (stuck going through ACQUIRE_PRMI 3 times) as well as hoping that it would clear the issues with INIT_ALIGN, but we lost lock at PREP_FOR_PRX
07:58 restored optics to settings from time 1378428918 (~3hrs previous, during lock)
- restarting INITIAL_ALIGNMENT again
08:06 INITIAL_ALIGNMENT couldn't find green arms so we took the detector out of INITIAL_ALIGNMENT and into GREEN_ARMS_MANUAL and found green by hand for both arms
08:14 green found, we requested it to DOWN and then into INITIAL_ALIGNMENT
08:39 Finished INITIAL_ALIGNMENT, requested NOMINAL_LOW_NOISE
09:20 Reached NOMINAL_LOW_NOISE
- ISC_LOCK is NOT_OK due to SQZ_MANAGER having been changed
- INIT_ALIGN is again listed as NOT_OK even though it is in its nominal state
- ALIGN_IFO listed as NOT_OK and is currently trying to go from SET_SUS_FOR_FULL_FPMI -> SET_SUS_FOR_FULL_FPMI (attachment3)
- we later figured out that INIT_ALIGN and ALIGN_IFO are somehow connected to the squeezer and squeezer ISS and so the ISS issue somehow causes them to be NOT_OK
09:41 Trying to revert all changes that were made tonight to see if that will get rid of the NOT_OKs so we can get the detector into Observing. We were hoping that we would then be able to bypass the ISS failing
- Tony set SQZ_MANAGER's nominal to NO_SQUEEZING to see if that would help
- it did not (nominal was changed back to FREQ_DEP_SQZ)
09:42 - 10:20 Various other methods tried and thought out (sdf3)
- ex) taking SQZ_MANAGER to DOWN and then back to FDS - didn't work
10:27UTC - Finally got into Observing! Did this by following alog 70050 to try to get around the squeezer ISS issue the same way Tony had been doing earlier that evening:
- Since the script settings were what they were supposed to be, Tony just used the alog as a reference for taking SQZ_MANAGER to NO_SQUEEZING, taking SQZ_OPO_LR to LOCKED_CLF_DUAL_NO_ISS, then to LOCKED_CLF_DUAL to help the ISS pump relock
- ISS pump relocked, ALIGN_IFO, INIT_ALIGN, and ISC_LOCK changed to OK, and we accepted sdf diffs(sdf4) and got into Observing
After that, the ISS has unlocked and locked back up multiple times, and the currently accepted SDF diffs are for the ISS being ON. We know that this isn't ideal because of how often the ISS is losing lock and subsequently taking us out of Observing, but after Vicky and Tony troubleshot this issue for 6 hours, with me then working with Tony for a further 3.5 hours, we felt that there was no more we would be able to do on a weekend night at 3:30am. Although the ISS will keep losing lock and taking us out of Observing, SQZ_MANAGER will eventually get the ISS back up, putting us back into Observing, and this way we can at least get some amount of Observing time in the next few hours until more people can come in and fix the issue when the workday starts.
Things to note/tldr:
- SQZ_MANAGER's nominal state is back to being FREQ_DEP_SQZ, so that doesn't need to be changed
- SQZ_MANAGER needs to be taken out of IFO ignore list
- SQZ ISS needs to be fixed (obviously)
Thank you to Tony for staying for 3.5 hours past his shift end and Vicky staying up until the early hours to help troubleshoot!!
SQZ_MANAGER has been removed from the exclude_nodes list and the IFO top node was loaded at 15:37 UTC.
TITLE: 09/11 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
GRB-Short E436769 23:43 UTC Standing Down
https://gracedb.ligo.org/events/E436769/view/
23:59 UTC Dropped out of Observing due to a squeezing issue. Diag Main told me to go look at alog 70050.
After following the instructions there.... see alog
Dropped out of Observing again. 00:31 UTC
Vicki gets on Mattermost, and gives me some guidance. 1:01 UTC
Did it again. This time I just brought the squeeze manager to DOWN and then back up to FDS.
Alog of this saga:
Changes to the gains were made by Vicki and we got back to observing at 1:48UTC
Well that lasted until 2:24 UTC
Talked with Vicki for a while and allowed her to try a handful of changes after we dropped out of commissioning to see if we could resolve the issue, but at best we could only get the SQZ system to stay locked for an hour at a time.
GRB-E436819 4:26 UTC
https://gracedb.ligo.org/events/E436819/view/
The SQZ System has been up and down all night. It seems as though the SQZ System may need to be turned off for the night.
Vicki gave me the following instructions to get the IFO into Observing with no SQZ.
Overall:
bring SQZ_MANAGER --> DOWN --> NO SQUEEZING
(top of guardian code, Line 36) set SQZ_MANAGER:
nominal = 'NO_SQUEEZING' (instead of 'FREQ_DEP_SQZ')
REVERT following SDF diffs, such that we run with:
(SHG SERVO IN1GAIN = -9
FIBR_SERVO_COMGAIN=15)
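As a sketch of what the guardian-code edit above looks like (hypothetical excerpt of the top of SQZ_MANAGER.py; only the nominal assignment changes, and it gets reverted once SQZ is fixed):
# SQZ_MANAGER.py, near the top (Line 36 per the instructions above)
# nominal = 'FREQ_DEP_SQZ'   # normal configuration
nominal = 'NO_SQUEEZING'     # temporary workaround to observe without squeezing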
5:53 UTC LOCKLOSS
I was in the process of making the changes and accepting the SDF diffs to go back to OBSERVING when the IFO unlocked itself.
We Got back to NOMINAL_LOW_NOISE @ 06:59 UTC
But couldn't get back to OBSERVE because INIT_ALIGN, ALIGN_IFO and SQZ_MANAGER were "NOT OK" according to the H1:GRD-IFO_SUBNODES_NOT_OK list.
LOCKLOSS 7:15 UTC Lockloss due to button press.
I tried reloading INIT_ALIGN to see if it would become "OK", and then I thought, oh, I should just take INIT_ALIGN to DOWN and back up to IDLE..... WHICH THEN UNLOCKED THE IFO. https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72797
Edited the Guardian IFO node list to add SQZ_MANAGER; that is, we want to ignore SQZ_MANAGER for the night until someone can come in and make some adjustments.
Passing Oli An Unlocked IFO.
LOG:
Vicki and I were trying to troubleshoot the SQZ system basically all night.
Lockloss due to Button push:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1378451754
5:53 UTC LOCKLOSS
While trying to make the changes found in alog 72795,
I was in the process of accepting the SDF diffs to go back to OBSERVING when the IFO unlocked itself.
We got back to NOMINAL_LOW_NOISE, but couldn't get back to OBSERVE because INIT_ALIGN, ALIGN_IFO and SQZ_MANAGER were "NOT OK" according to the H1:GRD-IFO_SUBNODES_NOT_OK list.
LOCKLOSS 7:15 UTC Lockloss due to button press.
I tried reloading INIT_ALIGN to see if it would become "OK", and then I thought, oh, I should just take INIT_ALIGN to DOWN and back up to IDLE..... WHICH THEN UNLOCKED THE IFO. OMG I'm so sorry Oli.
Dropped out of Observing due to a squeezing issue; Diag Main told me to go look at alog 70050.
I took SQZ_MANAGER to NO_SQUEEZING, then edited line 12 of the sqzparams.py Guardian code:
From:
opo_grTrans_setpoint_uW = 80 #OPO trans power that ISS will servo to. alog 70050.
to:
opo_grTrans_setpoint_uW = 50 #80 #OPO trans power that ISS will servo to. alog 70050.
Loaded SQZ_OPO_LR Guardian.
I then took SQZ_OPO_LR Guardian to LOCKED_CLF_DUAL_NO_ISS, then to LOCKED_CLF_DUAL.
The instructions told me to maximize H1:SQZ-OPO_TEC_SETTEMP; after a few slider bumps it was maxed.
Then I saw the update at the bottom, which told me to change opo_grTrans_setpoint_uW to 60, which I then did.
I then took SQZ_MANAGER back up to FREQ_DEP_SQZ and accepted the SDF diffs, and took H1 back to Observing. It was happy with this for a brief moment, then dropped back out of observing.
This is when Vicki checked in with me and my pleas for SQZr help.
She told me that this is likely due to the SQZr being tuned for opo_grTrans_setpoint_uW = 80. I will link this alog as a comment to 70050, which guided me to change it.
We have since dropped out 2 more times, and I have simply taken SQZ_MANAGER to DOWN, then back up to FREQ_DEP_SQZ, allowing Observing to be reached again, but it hasn't stayed that way.
More troubleshooting is needed. Vicki said she will be logging on shortly to check it out remotely.
Screen shot of the logs attached.
H1:SQZ-SHG_SERVO_IN1GAIN was changed to hopefully add more stability to the SQZr
SDF Diff accepted.
While taking SQZ_MANAGER to NO_SQUEEZING to stay in Observing for longer than an hour, a lockloss happened.
Vicki gave me the following instructions to get the IFO into Observing with no SQZ.
Overall:
bring SQZ_MANAGER --> DOWN --> NO SQUEEZING
(top of guardian code, Line 36) set SQZ_MANAGER:
nominal = 'NO_SQUEEZING' (instead of 'FREQ_DEP_SQZ')
REVERT following SDF diffs, such that we run with:
(SHG SERVO IN1GAIN = -9
FIBR_SERVO_COMGAIN=15)
anthony.sanchez@cdsws13: caget H1:SQZ-SHG_SERVO_IN1GAIN
H1:SQZ-SHG_SERVO_IN1GAIN -9
anthony.sanchez@cdsws13: caget H1:SQZ-FIBR_SERVO_COMGAIN
H1:SQZ-FIBR_SERVO_COMGAIN 15
TITLE: 09/10 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Inherited a Locked IFO that has been Locked for 15 hours.
Everything looks great.
Lance, Genevieve, Robert
Recently, we shut down specific components of the HVAC system in order to further understand the loss of about 10 Mpc to the HVAC system (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72308 ). We noted that shutdown of the EX water pump had shown that the 52 Hz DARM peak is produced by the chilled water pump at EX. Based on coupling studies during commissioning time yesterday, the coupling of the water pump can be predicted from shaking injections in the area around the EX cryo-baffle, supporting the hypothesis that the water pump couples at the undamped cryo-baffle (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72769 ). Here we report on other results of the shutdown tests that we have been able to do so far.
CS Fans SF1, 2, 3, 4, 5, and 6 cost roughly 6 Mpc – coupling via input jitter noise and unknown coupling.
Figure 1 shows that the range increased by about 6Mpc when only the CS turbines were shut down; no chillers or chilled water pumps were shut down. Figure 2, a comparison of DARM spectra before, during, and after the fan-only shutdown, shows that there were two major differences. First, a decrease in peaks associated with input jitter noise, particularly the 120 Hz peak. Second, a broad band reduction in noise between about 20 and 80 Hz. This is not consistent with input jitter noise and represents an unknown noise source that we haven’t found yet.
There is a third difference that could be coincidence. The 9.8 Hz ITM bounce modes are higher in the before and after of Figure 2. I was tempted to wonder if the broad band noise was upconversion from the 9.8 Hz peak. We also have harmonics of roughly 10 Hz in the spectrum every so often. I compared BLRMS of 8.5-10 Hz to BLRMS of 39-50 Hz but didn't see any obvious correlation. But I'm not sure this eliminates the possibility.
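For anyone who wants to repeat that kind of check, a rough sketch of the BLRMS comparison described above (not the actual script used; it assumes gwpy and data access are available, and the channel name and GPS times are placeholders):
import numpy as np
from gwpy.timeseries import TimeSeries

# Placeholder channel and GPS span -- substitute the real stretch of interest.
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', 1378425000, 1378428600)

# Band-limit DARM to the two bands and take the RMS in 10 s strides.
blrms_low  = darm.bandpass(8.5, 10).rms(10)   # around the 9.8 Hz bounce modes
blrms_high = darm.bandpass(39, 50).rms(10)    # part of the broad 20-80 Hz excess

# Simple correlation check between the two band-limited RMS series.
rho = np.corrcoef(blrms_low.value, blrms_high.value)[0, 1]
print(f"correlation coefficient: {rho:.2f}")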
120 Hz peak in DARM due to periscope resonance matching new 120 Hz peak from HVAC, possibly due to a new leak in LVEA ducts.
Figure 3 shows that the 120 Hz peak in DARM went away when only SF1, 2, 3 and 4 were shut down. It also shows that the HVAC produces a broad peak between 115 and 120 Hz. I looked back and the 120 Hz vibration peak from the HVAC appears to have started during HVAC work at the end of May, beginning of June. There was a period when flows were increased to a high level for a short time that might have pushed apart a duct connection that is now whistling at 120 Hz. I think it would be worth checking for a leak in the ducts associated with SF1, 2, 3 and 4.
In addition to fixing a potential duct leak, we could mitigate the peak in DARM by moving the PSL periscope peak so that it doesn’t overlap with the HVAC peak. In the past I have moved PSL periscope resonances for similar reasons by attaching small weights.
EY HVAC does not contribute significantly to DARM noise
Figure 4 shows that the on/off/on/off/on/off/on series of EY fan, chiller and water pump shutdowns does not seem to correlate with range.
This data is the analysis of the 2023-Aug-18 data originally summarized in LHO:72331.
After a back-to-back set of earthquakes and an initial alignment,
H1 has made it back to NOMINAL_LOW_NOISE @ 02:58UTC.
There were SDF Diffs I had to accept to get into Observing @03:01 UTC.
SUS-FC2_M1_OPTICALALIGN_P_OFFSET
SUS-FC2_M1_OPTICALALIGN_Y_OFFSET
Tagging SUS team to document the SDF Diffs.
Tagging ISC and SQZ teams: (1) Why are the FC2 alignment offsets monitored by the SDF system? (2) Why would we have expected a change in the FC2 alignment? Is this concurrent and/or a result of other commissioning / optimizing?
1) FC2 alignment offsets in principle don't have to be monitored by the SDF system anymore, since we have commissioned ASC. While commissioning the squeezer, at some point it was helpful to do it this way before we set up ASC, but I don't think it makes a big difference now that filter cavity ASC is running. Sheila has already un-monitored several other alignment offsets in ZMs recently, so it should be fine to do that here as well.
2) FC2 alignment changed to help the filter cavity lock on a TEM00 mode after the string of earthquakes on Friday. There was significant pitch misalignment, so Naoki manually helped the FC green lock catch by aligning the FC2 (mostly pitch) slider. If FC does not catch green lock (similar to if ALS does not catch green lock due to misalignment) the system won't be able to go to FDS. So in this case, it can be nice to help it a bit by bumping FC2 P/Y sliders to get the green spot locking on FC green transmission camera (nuc33, bottom left).
Vicky, Naoki, Sheila, Daniel
Details of homodyne measurement:
This morning Daniel and Vicky reverted the cable change to allow us to lock the local oscillator loop on the homodyne (undoing change described in 69013). Vicky then locked the OPO on the seed using the dither lock, and increased the power into the seed fiber to 75mW (it can't go above 100mW for the safety of the fiber switch). We then reduced the LO power so that the seed and LO power were matched on PDA, and adjusted the alignment of the sqz path to get good (~97%) visibility measured on PDA. We removed the half wave plate from the seed path, without adjusting the rotation. With it removed, we checked the visibility on PDB, and saw that the powers were imbalanced.
Polarization issue (revisiting the polarization of sqz beam, same conclusion as previous work):
There is a PBS in the LO path close to the homodyne, so we believe that the polarization should be set to horizontal at the beamsplitter in that path. The LO power on the two PDs is balanced (imbalanced by 0.4%), so we believe this means that the beamsplitter angle was set correctly for p polarized light as we found it, and there is no need to adjust the beamsplitter angle. However, when we switched to the seed power, there was a 10% difference between the power on the two PDs without the halfwave plate in the path. We put the halfwave plate back, and the powers were again balanced (with the HWP angle as we found it). We believe this means that the polarization of the sqz path is not horizontal arriving at the homodyne, and that the half wave plate is restoring the polarization to horizontal. If the polarization rotation is happening on SQZT7, the half wave plate should be able to mitigate the problem; if it's happening in HAM7, it will look like a loss for squeezing in the IFO. Vicky re-adjusted the alignment of the sqz path after we put the HWP back in, because it slightly shifts the alignment. After this the visibility measured on PDA is 95.7% (efficiency of 91.6%) and on PDB visibility is 96.9% (efficiency of 93.9%).
SQZ measurements, unclipping:
While the IFO was relocking Vicky and Naoki measured SQZ, SN, ASQZ and mean SQZ on the homodyne and found 4.46dB sqz, 10.4dB mean sqz and 13.14dB anti-sqz measured from 500-550Hz. Vicky then checked for clipping, and saw some evidence of small clipping (order 1% clipping with 10urad yaw dither on ZM2). We went to the table to check that the problem wasn't in the path to the IR PD and camera, we adjusted the angle of the 50/50 beamsplitter that sends light to the camera, and set the angle of the camera to be more normal to the PD path. This improved the image quality on the camera. Vicky moved ZM3 to reduce the clipping seen by the IR PD slightly. She restored good visibility by maximizing the ADF, and also adjusted both PSAMs, moving ZM4 from 100V to 95V. (We use different PSAMs for the homodyne than the IFO). After this, she re-measured sqz at 800-850Hz: 5.2dB sqz, 13.6dB anti-sqz, and 10.6dB mean sqz.
Using the nonlinear gain of 11 (Naoki and Vicky checked its calibration yesterday), and the equations from Aoki, this sqz/asqz level implies total efficiency of 0.72 without phase noise; the mean sqz measurement implies a total efficiency of 0.704. From the sqz loss spreadsheet we have 6.13% known HAM7 losses; if we also use the lower visibility measured using PDA we should have a total efficiency for the homodyne of 0.916*0.9387 = 0.86. This means that we would infer an extra 16-18% losses from these homodyne measurements, which seems too large for homodyne PD QE and optics losses in the path. Since we believe that the polarization issue is reflected in the visibility, this means that these are extra losses in addition to any losses the IFO sees due to the polarization issue.
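For reference, a small worked recomputation of that inference (my own sketch using the single-mode OPO relations from the Aoki-style treatment referenced above, taking the phase-averaged quadrature variance as the "mean sqz"; not the actual analysis script):
import numpy as np

g = 11.0                    # measured nonlinear gain
x = 1 - 1/np.sqrt(g)        # normalized pump amplitude, from g = 1/(1-x)^2

def quad_noise(eta, x):
    # anti-squeezed / squeezed quadrature variances relative to shot noise
    asqz = 1 + eta * 4*x / (1 - x)**2
    sqz  = 1 - eta * 4*x / (1 + x)**2
    return asqz, sqz

# Efficiency implied by the 5.2 dB of squeezing measured at 800-850 Hz
sqz_meas = 10**(-5.2/10)
eta = (1 - sqz_meas) / (4*x / (1 + x)**2)
print(f"eta from sqz: {eta:.2f}")                           # ~0.72, as quoted

# Cross-check: phase-averaged ("mean") squeezing for eta = 0.704
asqz, sqz = quad_noise(0.704, x)
print(f"mean sqz: {10*np.log10((asqz + sqz)/2):.1f} dB")    # ~10.6 dB, as measured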
Screenshot from Vicky shows the measurement made including the dark noise.
Including losses from phase noise of 20mrad, dark noise -21dB below shot noise, and a more accurate calibration of our measured non-linear gain to generated sqz level (from the ADF paper vs. the Aoki paper Sheila referenced), the total efficiency could marginally be increased to 0.74. This suggests 26% loss based on sqz/asqz. This is also consistent with the 27% loss calculated separately from the mean sqz and generated sqz levels.
From the sqz wiki, we could budget 17% known homodyne losses. This includes 7% in-chamber loss to the homodyne (opo escape efficiency * ham7 optics losses * bdiverter loss), and 11% HD on-table losses (incl. 2% optics losses on SQZT7, and visibility losses of 1- 91.6% as Sheila said above (note this visibility was measured before changing alignments for the -5.2dB measurement; so there remains some uncertainty from visibility losses)).
In total, after including more loss effects (phase noise, dark noise), a more accurate generated sqz level, and updating the known losses -- of the 27% total HD losses observed, we can plausibly account for 17% known losses, lowering the unexplained homodyne losses to ~10-11% (this is still high).
From Sheila's alog LHO:72604 regarding the quantum efficiency of the homodyne photodiodes (99.6% QE for PDA, and 95% QE for PDB), if we accept this at face value (which could be plausible due to e.g. the angle of incidence on PD B), this would change the 1% budgeted HD PD QE loss to 5% loss.
This increases the amount of total budgeted/known homodyne losses to ~21%: 1 - [0.985(opo)*0.953 (ham7)*0.99 (bdiverter) * 0.98(on-table optics loss)*0.95(PD B QE)*0.916(hd visibility)].
From the 27% total HD losses observed, we can then likely account for about 21% known losses (~7% in-chamber, ~15% on-table), lowering unexplained homodyne losses to < 7%.
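A compact recomputation of that budget (just the numbers quoted above multiplied out):
# Known homodyne efficiencies quoted above, multiplied out.
known = {
    "OPO escape":      0.985,
    "HAM7 optics":     0.953,
    "beam diverter":   0.99,
    "on-table optics": 0.98,
    "PD B QE":         0.95,
    "HD visibility":   0.916,
}
total_eff = 1.0
for eff in known.values():
    total_eff *= eff

in_chamber = 0.985 * 0.953 * 0.99   # ~7% loss
on_table   = 0.98 * 0.95 * 0.916    # ~15% loss
observed   = 0.27                   # total HD losses inferred from the sqz data

print(f"budgeted known losses: ~{(1 - total_eff)*100:.0f}%")                        # ~21%
print(f"in-chamber ~{(1 - in_chamber)*100:.0f}%, on-table ~{(1 - on_table)*100:.0f}%")
print(f"unexplained: ~{(observed - (1 - total_eff))*100:.0f}%")                     # ~6%, i.e. < 7%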
Genevieve, Lance, Robert
To further understand the roughly 10Mpc lost to the HVAC (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72308), we made several focussed shutdowns today. These manipulations were made during observing (with times recorded) because such HVAC changes happen automatically during observing and also, we were reducing noise rather than increasing it. The times of these manipulations are given below.
One early outcome is that the peak at 52 Hz in DARM is produced by the chilled water pump at EX (see figure). We went out and looked to see if the vibration isolation was shorted; it was not, though there are design flaws (the water pipes aren't isolated). We switched from CHWP-2 to CHWP-1 to see if the particular pump was extra noisy. CHWP-1 produced a similar peak in DARM at its own frequency. The peak in accelerometers is also similar in amplitude to the one from the water pump at EY. One possibility is that the coupling at EX is greater because of the undamped cryobaffle at EX.
Friday HVAC shutdowns; all times Aug. 18 UTC
15:26 CS SF1, 2, 3, 4 off
15:30:30 CS SF5 and 6 off
15:36 CS SF5 and 6 on
15:40 CS SF1, 2, 3, 4 back on
16:02 EY AH2 (only fan on) shut down
16:10 EY AH2 on
16:20 EY AH2 off
16:28 EY AH2 on
16:45 EY AH2 and chiller off
16:56:30 EY AH2 and chiller on
17:19:30 EX chiller only off, pump stays on
17:27 EX water pump CHWP-2 goes off
17:32 EX CHWP-2 back on, chiller back on right after
19:34:38 EX chiller off, CHWP-2 pump stays on for a while
19:45 EX chiller back on
20:20 EX started switch from chiller 2 to chiller 1 - slow going
21:00 EX Finally switched
21:03 EX Switched back to original, chiller 1 to chiller 2
Turning Robert's reference to LHO:72308 into a hyperlink for ease of navigation. Check out LHO:72297 for a bigger-picture representation of how the 52 Hz peak sits in the broader DARM sensitivity; from the time stamps in Elenna's plots, they were taken at 15:27 UTC, just after the corner station (CS) SF 1, 2, 3, 4 were turned off. SF stands for "Supply Fans," i.e. those air handler unit (AHU) fans that push the cool air into the LVEA. Recall, there are two fans per air handler unit, for the two air handler units (AHU1 and AHU2) that feed the LVEA in the corner station. The channels that you can use to track the corner station's LVEA HVAC system are outlined more in LHO:70284, but in short, you can check the status of the supply fans via the channels
H0:FMC-CS_LVA_AH_AIRFLOW_1 Supply Fan (SF) 1
H0:FMC-CS_LVA_AH_AIRFLOW_2 Supply Fan (SF) 2
H0:FMC-CS_LVA_AH_AIRFLOW_3 Supply Fan (SF) 3
H0:FMC-CS_LVA_AH_AIRFLOW_4 Supply Fan (SF) 4
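If it's helpful, a quick sketch for pulling up those supply-fan airflow channels with pyepics (it just reads the values; no on/off threshold is implied here):
from epics import caget

for n in (1, 2, 3, 4):
    ch = f"H0:FMC-CS_LVA_AH_AIRFLOW_{n}"
    print(ch, caget(ch))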
My Bad: -- Elenna's aLOG is showing the sensitivity on 2023-Aug-17, and Robert's logging of times listed above is for 2023-Aug-18. Elenna's demonstration is during Robert's site-wide HVAC shutdown on 2023-Aug-17 -- see LHO:72308 (i.e. not just the corner station as I'd errantly claimed above).
For these 2023-Aug-18 times mentioned in this LHO aLOG 72331, check out the subsequent analysis of impact in LHO:72778.
I've set the set point for the OPO trans to 60 uW, which gives us better squeezing and a little bit higher range. However, the SHG output power sometimes fluctuates for reasons we don't understand, which causes the ISS to saturate and knocks us out of observing. Vicky and operators have fixed this several times; I'm adding instructions here so that we can hopefully leave the setpoint at 60 uW and operators will know how to fix the problem if it arises again.
If the ISS saturates, you will get a message on DIAG_MAIN; then the operators can lower the set point to 50 uW.
1) Take sqz out of the IFO by requesting NO_SQUEEZING from SQZ_MANAGER.
2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor. In the sqzparams file you can set opo_grTrans_setpoint_uW to 50. Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, then after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint.
3) This change in the circulating power means that we need to adjust the OPO temperature to get the best SQZ. Open the OPO temp ndscope from the SQZ scopes drop-down menu on the sqz overview (pink oval in screenshot). Then adjust the OPO temp setting (green oval) to maximize the CLF-REFL_RF6_ABS channel, the green one on the scope.
4) Go back to observing by requesting FREQ_DEP_SQZ from SQZ_MANAGER. You will have 2 SDF diffs to accept, as shown in the screenshot attached.
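A rough command-level sketch of steps 1, 2 and 4 above, assuming the standard Guardian EPICS request channels (H1:GRD-<node>_REQUEST) accept state-name strings; operators normally do all of this from MEDM, and the sqzparams.py edit, guardian load, OPO temperature tweak, and SDF accepts still have to be done by hand:
from epics import caput
import time

caput("H1:GRD-SQZ_MANAGER_REQUEST", "NO_SQUEEZING")         # step 1: take sqz out of the IFO

# step 2: after editing opo_grTrans_setpoint_uW in sqzparams.py and loading SQZ_OPO_LR
caput("H1:GRD-SQZ_OPO_LR_REQUEST", "LOCKED_CLF_DUAL_NO_ISS")
time.sleep(60)                                              # crude wait; confirm the node actually arrived
caput("H1:GRD-SQZ_OPO_LR_REQUEST", "LOCKED_CLF_DUAL")       # re-engage the ISS at the new setpoint

# step 3 (OPO temperature) and the SDF accepts are manual, then:
caput("H1:GRD-SQZ_MANAGER_REQUEST", "FREQ_DEP_SQZ")         # step 4: back toward observing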
Update: in the SDF diffs, you will likely not see H1:SQZ-OPO_ISS_DRIVEPOINT change, just the 1 diff for OPO_TEC_SETTEMP. The channel *ISS_DRIVEPOINT is used for commissioning, but the ISS stabilizes power to the un-monitored value that changes, H1:SQZ-OPO_ISS_SETPOINT.
Also, if SQZ_OPO_LR guardian is stuck ramping in "ENGAGE_PUMP_ISS" (you'll see H1:SQZ-OPO_TRANS_LF_OUTPUT ramping), this is b/c the setpoint is too high to be reached, which is a sign to reduce "opo_gr_TRANS_setpoint_uW" in sqzparams.py.
Update for operators:
2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor. In the sqzparams file you can set opo_grTrans_setpoint_uW to 60 (instead of 50). Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, then after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint. Check if the OPO ISS control monitor (H1:SQZ-OPO_ISS_CONTROLMON) is around 3 by opening SQZ OVERVIEW -> SQZT0 -> AOM +80MHz -> Control monitor (attached screenshot). If the control monitor is not around 3, repeat 2) and adjust the opo_grTrans_setpoint_uW to make it around 3.
Vicki has asked me to make a comment noting that the line in sqzparams.py should stay at 80, since the SQZr is tuned for 80, instead of 50 or 60.
Line 12: opo_grTrans_setpoint_uW = 80 #OPO trans power that ISS will servo to. alog 70050.
relevant alog:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72791
Latest update on how to deal with this SQZ error message with a bit more clarity:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80413