H1 is still in observing, currently locked for 12.5 hours. Detector appears to be stable, ground motion is low, and the wind seems to be slowly going down (currently hovering ~15-20 mph).
TITLE: 10/16 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 27mph Gusts, 23mph 5min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.38 μm/s
QUICK SUMMARY:
- H1 has been locked for 8.5 hours
- CDS/DMs/SEI ok
- Wind is picking up, right around 35 mph gusts - will monitor throughout the night
TITLE: 10/16 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
We dropped out of Observing to reverse the ring heater changes made last week (alog 73445), in 2 steps 2 hours apart: the first step was done from 20:39:51 - 20:40:23 UTC, the second from 22:41:46 - 22:24:19 UTC (alog 73503).
The wind has picked up over the past 2 hours, with gusts up to 35 mph, which is probably what's degrading our range over the same timeframe; microseism remains slightly elevated.
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 14:15 | FAC | Tyler | EY | N | Glycol things | 14:20 |
| 15:27 | FAC | Karen | Woodshop | N | Tech clean | 15:43 |
| 15:43 | FAC | Karen | Optics lab, vac prep | N | Tech clean | 16:03 |
| 16:30 | FAC | Kim & Karen | FCES | N | Tech clean | 18:42 |
| 16:42 | VAC | Jordan, Travis, Mitchell, Janos | FCES | N | Prep for tomorrow, move cleanroom | 18:44 |
| 17:17 | VAC | Gerardo | FCES | N | Join team | 18:47 |
| 19:02 | FAC | Kim & Karen | FCES | N | Tech clean | 20:05 |
| 19:34 | FAC | Cindy | Light bulb room | N | Put away light bulbs | 20:27 |
Ryan informed me at 20:48 UTC (13:48 PDT) that he was getting a PSL Chiller alarm, a likely indication that the level was low. I had to watch the chiller for several minutes as the level slowly moved up and down with the regular flow of water in and out of the reservoir, but sure enough the water dipped just below the threshold and the chiller threw a Low Water Level alarm. I added 150mL to get the reservoir to the Max line and the alarm went away.
We are averaging ~180 mL of water added every 6 weeks, and this rate has been fairly constant since late April 2023. I'll be in the enclosure tomorrow, so I will be able to do our customary leak check (a habit that's developed since we don't go into the enclosure unless necessary); I don't expect any leaks since the amount of water being added has not accelerated, but it's good practice to check while I'm in there.
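For scale, that top-up rate works out to only a few mL per day (a back-of-the-envelope figure, not stated in the log itself):

$$ \frac{180\ \mathrm{mL}}{6\ \text{weeks} \times 7\ \text{days/week}} \approx 4.3\ \mathrm{mL/day}. $$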
I checked the water lines in the enclosure today, and as expected found no leaks.
To see whether the range changes seen when we turned up the ETM RH from 1.0 W/segment to 1.2 W/segment in alog 73445 were real, we are reversing this change this afternoon.
At 20:40 UTC, Ryan took us out of Observing to reduce both H1:TCS-ETM{X,Y}_RH_SET{UPPER,LOWER}POWER to 1.1 W; we plan to reduce all the way to 1.0 W in 2 hours.
I made the second step down from 1.1 to 1.0 at 22:41UTC for H1:TCS-ETM{X,Y}_RH_SET{UPPER,LOWER}POWER.
Unsure if this made any difference to the range. The range looks improved after the step down, but the wind also dropped during this change (ndscopes attached). No large change seen in DARM.
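For illustration only, here is a minimal sketch of how a two-step ring heater reduction like this could be scripted with pyepics. The channel names come from the entries above, but the actual steps were made by hand; the script is an assumption about how one might automate the procedure, not what was used.

```python
# Hypothetical sketch: step both ETM ring heaters down in two 0.1 W steps,
# two hours apart, using pyepics. Channel names are expanded from
# H1:TCS-ETM{X,Y}_RH_SET{UPPER,LOWER}POWER as quoted in the log above.
import time
from epics import caput

RH_CHANNELS = [
    f"H1:TCS-ETM{arm}_RH_SET{seg}POWER"
    for arm in ("X", "Y")
    for seg in ("UPPER", "LOWER")
]

def set_rh_power(watts):
    """Write the same requested power (W/segment) to all four RH segments."""
    for ch in RH_CHANNELS:
        caput(ch, watts)

set_rh_power(1.1)      # first step: 1.2 W -> 1.1 W per segment
time.sleep(2 * 3600)   # wait ~2 hours between steps
set_rh_power(1.0)      # second step: 1.1 W -> 1.0 W per segment
```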
The corner station dust monitor pump has failed. I am waiting for a quote for rebuild kits. The service kits we have on site are for the end stations only.
Closes FAMIS 26213
Laser Status:
NPRO output power is 1.819W (nominal ~2W)
AMP1 output power is 67.79W (nominal ~70W)
AMP2 output power is 135.4W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PMC:
It has been locked 24 days, 4 hr 23 minutes
Reflected power = 16.32W
Transmitted power = 109.2W
PowerSum = 125.5W
FSS:
It has been locked for 0 days 6 hr and 52 min
TPD[V] = 0.7082V
ISS:
The diffracted power is around 2.3%
Last saturation event was 0 days 6 hours and 55 minutes ago
Possible Issues: None
Mon Oct 16 10:09:16 2023 INFO: Fill completed in 9min 12secs
I received a text Sunday morning about moderately rising temps at EY. When I logged into the FMCS, Chiller 1 was in a latched alarm. I enabled Chiller 2 as well as chilled water pump 2; the two pumps ran in parallel for approximately 15 minutes before I commanded chilled water pump 1 off. The latched alarm is once again an "Evaporator Flow Lost", likely from last week's work to fix the damaged wires/flow switch. No temperature excursions should be expected at that building. Work to remedy the latest issue will begin Thursday 10/19. - T. Guidry
TITLE: 10/16 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 8mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY:
ALS_DIFF had stepped over a resonant offset by about 35. I'm not yet sure if this offset is changing by this much each lock, or if there is something else going on.
Back to Observing at 14:43 UTC after an earthquake took us out. H1_MANAGER waited ~1.5 hours while the ground motion calmed down and then tried relocking.
TITLE: 10/16 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We've now been Locked for 2hrs 15mins and everything is looking good besides some wind at around 20mph.
While relocking from the 10/16 03:18 earthquake lockloss, I needed to touch both green arms; IR DIFF could not be found, so I adjusted the DARM offset, but lost lock while doing that, and during the next relock I had to help with DIFF again.
LOG:
23:00UTC Detector Observing and has been Locked for 11hours 35mins
00:00:02 Verbal said WAP on in LVEA, EX, EY (dropped to a value of 0 for 5s starting at 00:00:00)
- Rarely shows up (unless we actually left the WiFi on), but when it does it's always at 00:00 UTC
03:18 Lockloss due to nearby Canadian earthquake (73488)
- Relocking: needed to go to GREEN_ARMS_MANUAL for both arms, and needed to pause at CHECK_IR and touch DIFF Offset
03:55 Lockloss from CHECK_IR
04:46 Into NOMINAL_LOW_NOISE
04:57 Observing
Lockloss @ 10/16 03:18 UTC due to a sudden earthquake from the west Canadian coast. Peakmon jumped up to 2200 counts very quickly.
03:18:40 DIAG_MAIN message: SERVO_BOARDS: IMC_REFL_SERVO_SPLITMON > 8V
03:18:45 SRM saturation
03:18:49 Earthquake mode activated
03:18:54 SRM saturation
03:18:56 SRM saturation
and Lockloss
These past couple of days we have had a lot of sudden ground motion for some reason.
EDIT: USGS actually did measure this one as coming from off the west coast of Canada, so it makes sense that we did not get notified in time. The local earthquakes from previous days had not shown up on USGS, however, so they were definitely from much closer than this one.
04:46 Reached NOMINAL_LOW_NOISE
04:57 Observing
[Vicky, Regina, Dorotea, Sheila]
The attached plot shows the ratio of green light transmitted vs reflected through the OPO over the past year to see how green OPO cavity losses have changed.
We last moved the OPO crystal in Nov. 2022 (see relevant alog). Shortly after moving the crystal, the ratio was around 5% and degraded to around 4% within the first week; it has since settled between 2-3% for most of the past year.
We think the discontinuity around 23/04/16 was due to rejecting power away from the refl PD, which was done around that time because the PD was being saturated. If we look at the refl power alone we can see clear saturation occurring on the PD in the time before the power was lowered. We think the loss curve would be fairly continuous if not for the saturation, but can't exactly compensate since we don't know how much power is being lost. The plot shows only times from when the detector was "stable", but stability was calculated fairly naively, leading to some outlier points remaining from when the detector wasn't actually locked; however, the general trend seems pretty consistent.
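For anyone wanting to reproduce a trend like this, a rough sketch using gwpy minute trends is below. The OPO transmitted/reflected channel names are placeholders (I have not confirmed the exact monitors used for the attached plot), and the "stable detector" cut described above is omitted.

```python
# Sketch only: trend the ratio of green power transmitted vs. reflected by the
# OPO over ~1 year using minute trends. Channel names are placeholders for the
# real OPO TRANS/REFL power monitors; substitute the actual channels.
from gwpy.timeseries import TimeSeries

start, end = "2022-11-01", "2023-10-16"

# ',m-trend' requests minute-trend data over NDS2
trans = TimeSeries.get("H1:SQZ-OPO_TRANS_DC_POWERMON.mean,m-trend", start, end)
refl = TimeSeries.get("H1:SQZ-OPO_REFL_DC_POWERMON.mean,m-trend", start, end)

# Assumes both trends cover the same samples (no gaps); a real analysis would
# also mask out unlocked / unstable times as described above.
ratio = trans / refl.value * 100        # percent transmitted/reflected

plot = ratio.plot(ylabel="OPO green trans/refl [%]")
plot.savefig("opo_green_ratio_trend.png")
```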
History of crystal movements before O4, after the HAM7 installation. Compared to the ~1000+ steps between crystal co-resonances, the OPO crystal has not really moved to a different co-resonance position since installation.
Estimating spots between crystal co-resonances: from e.g. Maggie's MIT/LASTI ilog and LLO crystal scans (e.g. LLO:49568, where one side of the crystal to the other spanned +3,180 to -11,820 counts, with either 10 co-resonance spots per LLO:53429 or 13 per LLO:60710), it seems to average about (15000/13) ~ 1150 steps between fixed-temperature crystal co-resonances, aside from the fine-tuning needed at each position. Compared to this ~1150 steps between crystal spots, our previous moves have basically been circling one spot.
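Writing out the arithmetic behind that estimate, using the counts and spot numbers quoted above:

$$ \frac{(+3180) - (-11820)}{13\ \text{spots}} = \frac{15000\ \text{steps}}{13} \approx 1150\ \text{steps between co-resonances}. $$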
Both ETM RHs were turned up from 1.0 W to 1.1 W/segment at 19:03 UTC; we plan to increase by another 0.1 W later this afternoon. Follow-on from 73437. We will stay in observing during this test.
Made another step up of +0.1W to 1.2W/Segment on ETMX and ETMY at 21:07UTC.
Plots attached of HOM, DARM and ndscopes. Jenne pointed out we should use GDS-CALIB for DARM as it isn't affected by the calibration changing with the RH changes. On ndscopes, note that at -4 hours there's a step in SQZ that affects the range (73446).
High-frequency noise is reduced and DARM is maybe slightly better in the bucket, but circulating power is down 7 kW and Kappa_C is down 0.8%.
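A minimal sketch of the kind of before/after DARM comparison Jenne suggested, using H1:GDS-CALIB_STRAIN so the comparison tracks the calibration and isn't skewed by the RH-driven calibration change. The GPS times below are placeholders, not the times used for the attached plots.

```python
# Sketch: compare DARM spectra before and after a ring heater step using the
# calibrated strain channel. GPS stretches below are placeholders.
from gwpy.timeseries import TimeSeries

CHAN = "H1:GDS-CALIB_STRAIN"
before = TimeSeries.get(CHAN, 1381400000, 1381400600)   # placeholder 10-min stretch
after = TimeSeries.get(CHAN, 1381420000, 1381420600)    # placeholder 10-min stretch

asd_before = before.asd(fftlength=8, overlap=4)
asd_after = after.asd(fftlength=8, overlap=4)

plot = asd_before.plot(label="before RH step", color="C0")
ax = plot.gca()
ax.plot(asd_after, label="after RH step", color="C1")
ax.set_xlim(10, 5000)
ax.legend()
plot.savefig("darm_asd_rh_comparison.png")
```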
This doesn't seem to be doing anything bad to the range, so we can leave the ETM ring heaters at this 1.2 W setting for the weekend. If operators have any trouble, they can reduce H1:TCS-{ETMX,ETMY}_RH_SET{UPPER,LOWER}POWER from 1.2 W to 1.0 W.
Adding plots from 11:00 UTC, 13.5 hours after the ETM RH change. DARM around 1000 Hz looks like it thermalized to be worse than it was 2 hours after the RH change. The 6 kHz DARM noise continued to decrease, but it started that lock particularly high. Circulating power settled at 367 kW, 7 kW less than nominal. KAPPA_C dropped nearly 1%.
These RH changes were reverted 16 October 2023 73503.
To see whether the OM2/Beckhoff coupling is a direct electronics coupling or not, we've done an A-B-A test while the fast shutter was closed (no meaningful light on the DCPDs).
State A (should be quiet): 2023 Oct/10 15:18:30 UTC - 16:48:00 UTC. The same as the last observing mode. No electrical connection from any pin of the Beckhoff cable to the OM2 heater driver chassis. Heater drive voltage is supplied by the portable voltage reference.
State B (might be noisy): 16:50:00 UTC - 18:21:00 UTC. The cable is directly connected to the OM2 heater driver chassis.
State A (should be quiet): 18:23:00 - 19:19:30 UTC or so.
DetChar, please directly look at H1:OMC-DCPD_SUM_OUT_DQ to find combs.
It seems that even when the shutter is closed, once in a while a very small amount of light reaches the DCPDs (green and red arrows in the first attachment). One of them (red arrow) lasted a long time and we don't know what was going on there. One of the short glitches was caused by the BS being momentarily kicked (cyan arrow), with scattered light in HAM6 somehow reaching the DCPDs, but I couldn't find other glitches that exactly coincided with optics motion or the IMC locking/unlocking.
To give you a sense of how bad (or not) these glitches are, the 2nd attachment shows the DCPD spectrum for a quiet time in the first State A period (green), the strange glitchy period indicated by the red arrow in the first attachment (blue), a quiet time in State B (red), and the observing time (black, not corrected for the loop).
FYI, right now we're back to State A (should be quiet). Next Tuesday I'll inject something into the thermistors in chamber. BTW, the 785 was moved in front of the HAM6 rack, though it's powered off and not connected to anything.
I checked H1:OMC-DCPD_SUM_OUT_DQ and don't see the comb in any of the three listed intervals (neither state A nor B). Tested with a couple of SFT lengths (900s and 1800s) in each case.
Since it seems that the coupling is NOT a direct electronics coupling from Beckhoff -> OM2 -> DCPD, we fully connected the Beckhoff cable to the OM2 heater driver chassis and locked the OMC on the shoulder with an X single-bounce beam (~20 mA DCPD_SUM, not 40 mA like in the usual nominal low noise state). That way, if the Beckhoff is somehow coupling to the OMC PZT, it might cause visible combs in the DCPD.
We didn't see the comb in this configuration. See the 1st attachment: red is the shoulder lock and green is a time when the 1.66 Hz comb was visible with the full IFO (the same time reported by Ansel in alog 73000), showing just the two largest peaks of the 1.66 Hz harmonics in the green trace. (It seems that the 277.41 Hz and 279.07 Hz peaks are the 167th and 168th harmonics of 1.66 Hz.) Anyway, because of the higher noise floor, even if the comb were there we couldn't have seen these peaks. We've had a different comb spacing since then (alog 73028), but in any case I don't see anything at around 280 Hz. FYI, I used the same FFT length (2048) for both; red is a single FFT and green is an average of 6. This is without any normalization (like RIN).
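As a quick arithmetic check of those harmonic numbers:

$$ \frac{277.41\ \mathrm{Hz}}{167} \approx \frac{279.07\ \mathrm{Hz}}{168} \approx 1.661\ \mathrm{Hz}, $$

i.e. both peaks are consistent with harmonics of a comb spacing of about 1.66 Hz.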
In the top panel of the 2nd attachment, red is the RIN of OMC-DCPD_SUM_OUT_DQ during the shoulder lock, and blue and dark green are the RIN of the 2nd loop in- and out-of-loop sensor arrays. Magenta, cyan and blue-green are the same set of signals when H1 was in observing last night. The bottom panel shows the coherence between DCPD_SUM during the shoulder lock and the ISS sensors as well as IMC_F, which just shows that there's no coherence except at high kHz frequencies.
If you look at Georgia's length noise spectrum from 2019 (alog 47286), you'll see that it's not totally dissimilar to the top panel of our 2nd plot, even though Georgia's measurement used dither lock data. Daniel points out that the low-Q peak at around 1000 Hz is a mechanical resonance of the OMC structure causing real length noise.
Configurations: H1:IMC-PWR_IN ~25.2 W. ISS 2nd loop is on. Single-bounce X beam. DCPD_SUM peaked at about 38 mA when the length offset was scanned, and the lock point was set to the middle (i.e. 19 mA). DC pointing loops using AS WFS DC (DC3 and DC4) were on. OMC QPD loops were not on (they were enabled at first but were disabled by the guardian at some point before we started the measurement). We were in this state from Oct/17/2023 18:12:00 - 19:17:20 UTC.
BTW Beckhoff cable is still fully connected to the OM2 heater driver chassis. This is the first observation data with such configuration after Fil worked on the grounding of Beckhoff chassis (alog 73233).
Detchar, please find the comb in the obs mode data starting Oct/17/2023 22:33:40 UTC.
The comb indeed re-appeared after 22:33 UTC on 10/17. I've attached one of the Fscan daily spectrograms (1st figure); you can see it appear in the upper right corner, around 280 Hz as usual at the start of the lock stretch.
Two other notes:
Just to see if anything changes, I used the switchable breakout board at the back of the OM2 heater driver chassis to break the thermistor connections but kept the heater driver input coming from the Beckhoff. The only two pins that are conducting are pins 6 and 19.
That happened at around Oct/18/2023 20:18:00 to 20:19-something UTC when others were doing the commissioning measurements.
Detchar, please look at the data once the commissioning activities are over for today.
Because there was an elevated noise floor in the data from Oct/17/2023 18:12:00 mentioned in Keita's previous comment, there was some doubt as to whether the comb would have been visible even if it were present. To check this, we did a direct comparison with a slightly later time when the comb was definitely present & visible. The first figure shows an hour of OMC-DCPD_SUM_OUT_DQ data starting at UTC 00:00 on 10/18 (comparison time with visible comb). Blue and yellow points indicate the comb and its +/-1.235 Hz sidebands. The second figure shows the time period of interest starting 18:12 on 10/17, with identical averaging/plotting parameters (1800s SFTs with 50% overlap, no normalization applied so that amplitudes can be compared) and identical frequencies marked. If it were present with equivalent strength, it looks like the comb ought to have been visible in the time period of interest despite the elevated noise floor. So this supports the conclusion that the comb was *not* present in the 10/17 18:12 data.
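For reference, a rough sketch of this kind of check (1800 s, 50%-overlap averages of H1:OMC-DCPD_SUM_OUT_DQ with candidate comb teeth and +/-1.235 Hz sidebands marked). The comb spacing, marker choices, and plotting details here are illustrative assumptions rather than the exact parameters behind the attached figures.

```python
# Sketch: averaged spectrum of OMC DCPD_SUM using 1800 s FFTs with 50% overlap
# (no extra normalization), with assumed comb teeth and +/-1.235 Hz sidebands
# marked around 280 Hz. Times and spacing are illustrative only.
import numpy as np
from gwpy.timeseries import TimeSeries

CHAN = "H1:OMC-DCPD_SUM_OUT_DQ"
data = TimeSeries.get(CHAN, "2023-10-18 00:00:00", "2023-10-18 01:00:00")

asd = data.asd(fftlength=1800, overlap=900, method="median")

spacing = 1.661                          # assumed comb spacing in Hz
teeth = np.arange(160, 176) * spacing    # harmonics near 280 Hz
sidebands = np.concatenate([teeth - 1.235, teeth + 1.235])

plot = asd.plot(xlim=(265, 295))
ax = plot.gca()
for f in teeth:
    ax.axvline(f, color="tab:blue", alpha=0.4)
for f in sidebands:
    ax.axvline(f, color="gold", alpha=0.4)
plot.savefig("dcpd_sum_comb_check.png")
```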
Following up, here's about 4 hours of DELTAL_EXTERNAL after Oct 18 22:00. So this is after Keita left only the heater driver input connected to the Beckhoff on Oct/18/2023 20:18:00. The comb is gone in this configuration.
The cleanroom inside the filter cavity enclosure was set above the 6 way cross (C6). The cleanroom was turned ON at 18:03 UTC.