TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9 mph gusts, 4 mph 3-min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY:
IFO is LOCKING and in MAX_POWER
The PSL is glitchy, and commissioners are still unsure of the cause even after troubleshooting today. It is still causing FSS lockloss issues.
TITLE: 10/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Currently relocking and at MAX_POWER. We took a few extra hours after the normal maintenance period to continue troubleshooting the PSL issues, but the investigation didn't turn up much.
When I took SEI_ENV to MAINTENANCE we lost lock (most likely due to increased ground motion from the septic tank truck plus the MAINTENANCE mode switch).
LOG:
14:30 Locked for 3 hours and running magnetic injections
14:40 Injections done, back to Observing
14:45 Out of Observing, starting In-Lock SUS Charge measurements
15:10 Took SEI_ENV to MAINTENANCE
15:10 Lockloss due to ground motion while in MAINTENANCE
15:28 ISC_LOCK to IDLE
21:46 Sensor correction turned back on
21:52 Started manual initial alignment
22:29 Started relocking
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:57 | FAC | Chris | H2/CP1 | n | Moving snorkel lift to CP1 | 15:28 |
14:42 | | Christina | MX, MY | n | Inventory for 3IFO | 15:56 |
15:12 | FAC | Kim, Karen, Patty | FCETube | n | Tech clean | 18:23 |
15:18 | VAC | Jordan | FCETube | n | Installing pressure gauge and closing gate valves | 20:03 |
15:21 | EE | Fil | Beir Garden, CER | n | Prepping for seismometer | 17:21 |
15:21 | | Betsy | LVEA | n | O5 cabling tasks | 15:33 |
15:36 | SUS | RyanC | CR | n | OPLEV charge measurements, ETMs | 16:54 |
15:54 | | Betsy | OptLab | n | | 16:54 |
15:56 | | Christina | LVEA | n | Inventory walk around | 17:21 |
15:00 | FAC | Septic tank vendor | Sitewide | n | Pumping tanks | 16:13 |
16:17 | FAC | Tyler | LVEA | n | Looking for forklift | 16:23 |
16:29 | ISI | Jim | CER | n | Power cycling ITMX ISI | 17:07 |
16:40 | | Camilla | LVEA | n | Looking at arms | 18:27 |
16:45 | EE | Fernando | MSR | n | Check ISC computer | 22:42 |
16:54 | Tour | Cassidy and Austin | LVEA | n | Tour | 17:33 |
17:11 | CDS | Erik | EX | n | Power cycling GPS receiver | 17:57 |
16:00 | FAC | Chris, Eric | Opticslab | n | Taking apart fume hood | 17:29 |
17:37 | | Betsy | LVEA | n | Looking at things with Camilla | 18:23 |
17:58 | ISI | Jim | LVEA | n | Moving stuff | 18:08 |
18:10 | ISI | Jim | LVEA | n | Checking problem out | 18:55 |
18:24 | | Christina | Outside receiving | n | Throwing away stuff | 18:52 |
18:29 | CDS | Erik | EX | n | Taking out GPS receiver | 18:55 |
19:06 | | Tony | LVEA | n | Sweep | 19:23 |
19:20 | PSL | Sheila, Rick | LVEA | n | Testing stuff with NPRO | 19:23 |
19:30 | VAC | Janos | FCETube | n | Gate valves | 20:03 |
19:35 | FAC | Karen | Pre-VacPrep | n | Tech clean | 20:16 |
19:43 | CDS | Erik | EX | n | Replacing receiver | 20:12 |
20:25 | PSL | Sheila, Rick | LVEA | n | Unplug another cable for PSL issue | 20:35 |
20:50 | | Camilla | LVEA | n | Grabbing beamsplitters | 21:17 |
21:26 | PSL | Sheila & Rick | LVEA PSL Racks | n | Re-seating the DB cables and barrel connectors | 21:33 |
A new MKS pressure gauge was installed today on the FC-C3 tube section, on cross C7.
FC gate valves FCV7 & FCV8 were closed to isolate the C7 cross. The +x port angle valve remained closed, and a tee, gauge, and pump-out port were installed. CF flange joints were helium leak checked; no signal was found above the detector background of ~8E-10 Torr-l/s.
The volume was pumped to medium vacuum and remains isolated. It will need additional pumping and power/signal cables landed before joining with the main FC volume.
WP12122 HEPI HAM2 model change
Jim, Erik, Dave:
Installed Jim's new h1hpiham2 model, which has new top-level signal connections. No DAQ restart was required.
WP12123 Alarm system use Twilio for cell texting
Dave:
Because of recent issues with bepex being unable to send text messages to verizon.com for several hours at a time, I created a test alarm system which uses both bepex and our 3rd-party texting service Twilio. I will run the old and new alarms side by side for testing.
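For reference, a minimal sketch of what a Twilio-backed text path could look like; this is not the actual alarm-system code, and the credentials, phone numbers, and message text below are placeholders.

```python
# Minimal sketch of sending one alarm text via Twilio's REST API.
# Credentials and numbers are placeholders, not site values.
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder Twilio account SID
AUTH_TOKEN = "xxxxxxxxxxxxxxxx"                      # placeholder auth token
FROM_NUMBER = "+15095550100"                         # Twilio-provisioned number (placeholder)

def send_alarm_text(to_number, body):
    """Send a single alarm text message through Twilio."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    client.messages.create(to=to_number, from_=FROM_NUMBER, body=body)

if __name__ == "__main__":
    send_alarm_text("+15095550123", "H1 alarm test: Twilio path running alongside bepex")
```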
WP12125 CP1 Fill Time Change.
Dave, Janos, Gerardo:
CP1 fills happen at 8am during the summer before the dewar gets full sun and the head pressure increases. This is not needed during the winter, so I've reverted the fill time to 10:00.
Note that after the CP1 vaporizer disconnect work this morning, we did not need a second overfill run.
EX CNS-II independent GPS receiver failure
Dave, Daniel, Erik, Fil
(see Erik's alog for details). At 03:54 PDT this morning the EX CNS-II GPS receiver stopped running correctly:
The 1PPS became a noisy signal with an 8 second repeat.
The IRIG-B signal became a flat noise signal with a 1 Hz spike
The GPS MEDM froze at the last good values.
After its 12V DC wall-wart was replaced it is working again.
The H1:CAL-PCALX_IRIGB_DQ channel was down between these times:
offline | 10:54 Tue 08 Oct 2024 UTC (03:54 local) |
online | 20:01 Tue 08 Oct 2024 UTC (13:01 local) |
Timing Master port 7 activation
Dave, Daniel, Erik, Fernando
The MSR Timing Master port7 is temporarily connected to a timing card in the spare Beckhoff computer. Normally this computer is powered off.
This morning Fernando powered the computer up for updates. The MFO went into an error, correctly reporting a spurious signal on port 7. To green up the master, I activated port7. When Fernando powered the computer back down I returned the port to its deactivated state.
Last checked in alog80348
Nothing really of note; for the OUTs, MY_FAN2_270_1 is a little noisy, along with MR_FAN5_170_1 at the CS.
Still down for maintenance, and it will be up to two more hours before we can start relocking; some investigations/experiments will be done to try to narrow down the cause of the FSS issues.
The independent GPS receiver at EX has failed. Its 1PPS signal into the comparator started oscillating at 03:53 this morning. Its MEDM screen is frozen from that time.
Erik is on his way out to EX to investigate
I've brought the EX CNS clock back to the corner station for repair.
External power supply failed.
CNS (GPS receiver) restored with new power supply at EX
From before this morning's lockloss until 15:20 UTC, we left CO2X and CO2Y on at their nominal annular powers (1.7 W into vacuum) so that we could measure the IFO beam absorption on the ITMs using the HWS data.
B. Weaver, J. Kissel
WP 12109
Betsy, myself, and several others are looking to propose redlines to the WHAM3 flange cable layout (D1002874) in prep for O5 (see T2400150 for the DCN and discussion thereof). However, in doing so, we discovered that the flange layout may have double-counted the shared cable for the MC2/PR2 M1 (top) RTSD / T1T2 OSEMs. Other drawings (e.g. the Cable Routing D1101463 and the wiring diagrams D1000599) indicate "yes, there's an extra 'SUS-TRIPLE' entry somewhere between the D6 and D3 allocation," but we wanted to be sure. As such, Betsy went out to HAM3 today and confirmed that YES, the MC2/PR2 M1 (top) RTSD / T1T2 cable, labeled "SUS_HAM3_002" in the wiring diagram or "SUS-HAM3-2" in real life, does come out of D3-1C1 and *not* out of any D6 port, and thus we validate that D6's list of 4x DB25s to support the 'SUS-TRIPLE' (i.e. MC2) in D1002874 is incorrect. The attached pictures show the D3 flange, highlighting the SUS-HAM3-2 cable at the flange going to D3-1C1, and then the other end of that cable clearly going into the MC2-TOP/PR2-TOP satamp box in the SUS-R2 field rack (S1301887).
Oli, Camilla
Oli found there were 3 SDF diffs after the in-lock charge measurements this morning. One was from lscparams.ETMX_GND_MIN_DriveAlign_gain being changed and the SUS_CHARGE guardian not being reloaded; the others seem to come from an out-of-date filter and the original tramp not being reverted correctly. The tramp wasn't explicitly changed, but the gain was ramped using a non-nominal ramp time via ezca.get_LIGOFilter( ... ramp_time=60).
Code changes attached and all SUS charge guardians reloaded.
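For context, a minimal guardian-style sketch of ramping a gain with a temporary ramp time and then explicitly restoring the nominal TRAMP so no SDF diff is left behind. This is not the measurement code; the filter bank name, gain values, and nominal TRAMP below are illustrative, and the guardian-provided ezca instance is assumed.

```python
# Sketch only: assumes the guardian-provided ezca instance and an
# illustrative DRIVEALIGN filter bank; values are not the real settings.
NOMINAL_TRAMP = 2  # seconds (assumed nominal ramp time for this bank)

filt = ezca.get_LIGOFilter('SUS-ETMX_L2_DRIVEALIGN_L2L')

# Ramp the gain slowly for the measurement, using a 60 s ramp.
filt.ramp_gain(0.0, ramp_time=60, wait=True)

# ... in-lock charge measurement runs here ...

# Ramp back, then put the bank's TRAMP back to its nominal value so the
# SDF system sees the expected setting after the measurement.
filt.ramp_gain(1.0, ramp_time=60, wait=True)
ezca['SUS-ETMX_L2_DRIVEALIGN_L2L_TRAMP'] = NOMINAL_TRAMP
```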
H1 called due to an ITMX ISI watchdog trip.
Our range has increased back to around 160 Mpc on the CALIB CLEAN channel the past few days. I ran the DARM integral compare plots using an observing time on Sept 28 (before the OPO crystal and PRCL FF changes) and Oct 5 (after those changes). It appears the largest improvement has occurred at low frequency. Some of that can be attributed to the PRCL feedforward, but not all. Based on the previous noise budget measurements and the change in the coherence of PRCL from Sept 28 to Oct 5, I think the improvement in DARM from 10-30 Hz is likely due to the PRCL improvement. Above 30 Hz, I am not sure what could have caused that improvement. It doesn't appear there is much improvement above 100 Hz, which is where I would expect to see changes from the squeezing, if it improved from the OPO changes.
Sheila pointed out two things to me: first, that if we are not using median averaging, these plots might be misleading if there is a glitch, and second, that some of the improvement at low frequency could be squeezing related.
I went through the noise budget code and found that these plots were made without median averaging. However, changing the code to use median averaging is a simple matter of uncommenting one line of code in /ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/common/utils.py that governs how the PSD is calculated for the noise budget.
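For illustration (this is not the aligoNB code itself), the difference amounts to which averaging scheme the Welch PSD estimator uses; scipy's estimator exposes the choice directly, and the data and parameters below are placeholders.

```python
# Mean vs median Welch averaging: the median estimate is much less
# sensitive to a few loud glitches within the data span.
import numpy as np
from scipy import signal

fs = 16384                          # sample rate, illustrative
data = np.random.randn(fs * 600)    # placeholder for a 10-minute data span

f, psd_mean = signal.welch(data, fs=fs, nperseg=8 * fs, average='mean')
f, psd_median = signal.welch(data, fs=fs, nperseg=8 * fs, average='median')
```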
I reran the darm_integral_compare code using median averaging. The result shows much less difference in the noise at low frequency between these two times. The range is still improved from 10-50 Hz, but there is a small drop in the range between 50-60 Hz. I still think the change from 10-30 Hz is likely due to PRCL.
As a further confirmation of the necessity of median averaging here, I made a spectrogram of the data span on Sept 28, and a few glitches, especially around low frequency, are evident. I didn't see these glitches in the sensitivity channel that I used to choose the data spans (I just trend the sensmon CLEAN range and look for regions without big dips). However, the Oct 5 data span appears fairly stationary.
Sheila, Camilla.
New SQZ ASC using AS42 signals with feedback to ZM4 and ZM6 has been tested and implemented. We still need to watch that this can keep a good SQZ alignment during thermalization. In O4a we used a SQZ ASC with ZM5/6; we have not had a SQZ ASC for the majority of O4b.
Prep to improve SQZ:
Testing ASC from 80373:
In the first 20 minutes of the lock, the SQZ ASC appears to be working well, plot.
Note to operator team: if the squeezing gets really bad, you should be able to use the SQZ Overview > IFO ASC (black linked button) > "!graceful clear history" script to turn off the SQZ ASC. Then change use_ifo_as42_asc to False in /opt/rtcds/userapps/release/sqz/h1/guardian/sqzparams.py, go through NO_SQUEEZING then FREQ_DEP_SQUEEZING in SQZ_MANAGER, and accept the SDFs for not using the SQZ ASC. If SQZ still looks bad, put the ZM4/6 OSEMs (H1:SUS-ZM4/6_M1_DAMP_P/Y_INMON) back to where they were when squeezing was last good and, if needed, run scan SQZ alignment and scan SQZ angle with SQZ_MANAGER.
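As a reference for that note, the flag in question would look something like this in sqzparams.py; this is a sketched excerpt based on the description above, not a copy of the file.

```python
# /opt/rtcds/userapps/release/sqz/h1/guardian/sqzparams.py (sketched excerpt)
# Set to False to stop SQZ_MANAGER from using the AS42-based IFO ASC.
# After editing, reload SQZ_MANAGER, go through NO_SQUEEZING and then
# FREQ_DEP_SQUEEZING, and accept the resulting SDFs.
use_ifo_as42_asc = False
```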
Sheila moved the "0.01:0" integrators from the ASC_POS/ANG_P/Y filters into the ZM4/5/6_M1_LOCK_P/Y filter banks.
This will allow us to more easily adjust the ASC gains and to use the guardian ZM offload states. We turned them on on ZM4/6, edited OFFLOAD_SQZ_ASC to offload for ZM4, 5, 6, and tested by putting an offset on ZM4. We put ZM4/6 back to the positions they were in while in lock via the OSEMs. SDFs for the filters were accepted. I removed the "!offload AS42" button from the SQZ > IFO ASC screen (linked to sqz/h1/scripts/ASC/offload_IFO_AS42_ASC.py) as it caused a lockloss yesterday.
Oli tested the SQZ_MANAGER OFFLOAD_SQZ_ASC guardian state today and it worked well. We still need to make the state requestable.
The ASC now turns off before SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS. It will check whether the ASC is on via H1:SQZ-ASC_WFS_SWITCH and turn it off before scanning alignment or angle.
We changed the paths so that to get from SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS back to squeezing, the guardian will go through SQZ_ASC_FDS/FIS to turn the ASC back on afterwards.
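A minimal sketch of the pre-scan check described above, in guardian style; only H1:SQZ-ASC_WFS_SWITCH is taken from the text, while the helper name, ramp time, and the exact filter banks that get cleared are assumptions.

```python
# Sketch: turn the SQZ AS42 ASC off before scanning alignment or angle.
def turn_off_sqz_asc_if_on():
    """If the SQZ ASC is running, switch it off and zero its control gains."""
    if ezca['SQZ-ASC_WFS_SWITCH'] == 1:
        ezca['SQZ-ASC_WFS_SWITCH'] = 0
        # Zero the ASC loop gains so nothing keeps pushing ZM4/6 (bank names
        # assumed from the POS/ANG pitch/yaw description above).
        for dof in ('POS_P', 'POS_Y', 'ANG_P', 'ANG_Y'):
            ezca.get_LIGOFilter('SQZ-ASC_' + dof).ramp_gain(0, ramp_time=2, wait=False)
```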
Starting at 16:48 UTC, we have the IMC locked with the NPRO temperature at 0.3, compared to -0.18 for the last 1.5 years (the MEDM screen says this is in units of K). This was a suggestion from the PSL team to see if our problem is that the laser is near mode hopping.
Ryan Short noticed that this was still glitching at the higher temperature, so that hasn't solved the issue. The first two screenshots show times of the glitching, the glitches also show clearly in the PWR_NPRO channel, but they are not as clear when looking at minute trends as in the FSS channel. This test ran until 17:53 UTC.
We are now sitting with the IMC and FSS unlocked, to see if we see the glitches like this in the NPRO channel. This would rule out that the problem is coming from the FSS, and point to a laser problem. We will probably need to look at full data for the NPRO channel for this second test. We've been sitting here since 17:57 UTC.
We saw similar glitches in the NPRO power monitor with the FSS off as on, so the glitches don't seem to be coming from the FSS. (1st attachment)
Ryan next closed the shutter after the NPRO, before the first amplifier. We didn't see any glitches for nearly 2 hours, but then we saw a series of similar glitches (second screenshot). So this narrows the problem down to something in the laser or controller.
Continuing this glitch search from yesterday, the PSL has been locked to the reference cavity with an NPRO temperature of -0.7 since 15:35 UTC October 8th. At that temperature, there was a glitch which looked slightly different from the usual glitches. There was also an oscillation in the FSS.
At around 9 am I went to the diode room and turned off the noise eater. In that configuration I saw some glitches that looked fairly different from the ones seen regularly; they are mostly only visible in the FSS channel but can also be seen as a small step in the NPRO power channel. There were about 4 glitches like this in an hour.
Then we had the lower temperature (-0.7) with the noise eater on for about an hour, the glitches were not bad during this time.
Later, on a suggestion from Daniel, Rick and I went and disconnected the "diagnostic cables" which connect the power supply to the Beckhoff system. To do this, we first noted the set and actual temperatures and diode currents, as well as the A and B buttons. (I will add photos of these later.)
Then we went to the diode room and followed instructions that Ryan Short gave me to turn off the two amplifiers in order, then the shutter, and then we turned the NPRO off. We went to the rack, disconnected the cables, and turned the NPRO on using the button on the controller box. This controller doesn't have a switch on the front panel for the noise eater; it was replaced by a cable which is no longer used. Filiberto looked up some information about this and tells us that the noise eater would be off in this configuration. We quickly saw that there were many glitches visible in this configuration, while we had the laser temperature back to its usual -0.2 K. This test started at 12:42 Pacific.
At 1:30 Pacific we disconnected the "slow" BNC cable from the back of the controller, labeled NPRO temp; it was in this configuration from 1:30 to 2:15. We did see glitches in that time, but not the largest ones.
Now we've set the temperature back to normal, reconnected the cables, and turned the amplifiers and their watchdogs back on. Oli and Tony are proceeding with initial alignment, and Rick and I will reset the watchdogs before leaving.
Pictures of feedthroughs on the HAM6 chamber. These reflect the current layout as per D1002877 V14. In order, there are three pictures each of D3, D4, D5, D6, and the -Y door.