Closes FAMIS 21128
The cup in the back corner of the mezzanine was empty.
Mon Aug 28 10:10:25 2023 INFO: Fill completed in 10min 20secs
Travis confirmed a good fill curbside.
FAMIS 19991
Jason was in the anteroom last Tuesday for inventory, which can be seen in the environmental trends. Since then, the differential pressure between the anteroom and the laser room has been elevated, but not alarmingly so.
Well pump is running to replenish the fire water tank. The pump will run for 4 hours and automatically shut down.
TITLE: 08/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
H1 lost lock at 1152utc (452amPT), and as I type this it is at Nominal Low Noise waiting for the CAMERA_SERVO to have its channels converge! Camera Servo completed, so I just took H1 to OBSERVE at 1506utc (806amPT). Scanning Verbal, it looks like H1 made several lock attempts over the 3hrs and made it fairly far into locking each time (Power Up and a little beyond) before the locklosses. It does not look like an Initial Alignment was run overnight, and it looks like Tony received no locking alerts overnight.
Big picture for the range is our locks over the last day or two (153Mpc, and about 146Mpc at the very beginning of this current lock).
TITLE: 08/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
- Arrived with H1 locked for 17.5 hours
- 0:21 UTC - ISS pump turned off, followed alog 70050 to troubleshoot
- Lockloss @ 1:08
- Relocking was troublesome; I had to tinker a lot with ALS Y to try and find light on the PD. After a few slider restorations and trial-and-error attempts, I was finally able to get it to catch. Writing down the golden sus values here for future reference:
- Rest of relocking went unaided, acquired NLN @ 2:38, OBSERVE @ 2:54
- A 102 Hz peak is apparent for this lock, screenshot attached
- EX saturation @ 4:43/5:07
- Microseism looks to be on the rise, but otherwise all looks nominal. Setting H1 to managed and handing off to Tony for the night.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 23:33 | EPO | Corey | Deck + overpass | N | Tour +1 | 23:50 |
Following a lockloss and a cumbersome relocking process, we have just made it back to observing as of 2:54 UTC. A 102 Hz peak is apparent for this lock - screenshot attached.
Lockloss @ 1:08, no obvious cause.
TITLE: 08/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
Nice shift riding through breezes and Colombian seismic waves. If there's a lockloss, Dave may take the opportunity to clear the CRC errors on the CDS overview for SUSH2B.
LOG:
TITLE: 08/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 6mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
- H1 has been locked for just under 17 hours
- DMs/CDS ok
- Rode out a 5.7 EQ from Colombia; just riding through the remnants of it now
Quiet shift; winds slightly picking up. Chatted with Dave regarding (1) the RED CRC (Cyclic Redundancy Check) errors for SUSH2B (he will address this after a future lockloss) and (2) the h1digivideo3 memory issue bumping us out of Observing.
Sun Aug 27 10:09:33 2023 INFO: Fill completed in 9min 29secs
As Corey mentioned, the DAQ stream from h1sush2b to DC0 had a checksum mismatch in one of its 1/16th second data blocks during the 05:56:09 PDT second. The data stream to DC1 did not see this issue.
This was a true data error for this data block, which meant that FW0 wrote a different frame than FW1 for the frame which encompassed this 1/16th of a second. We know that FW1's frame is correct; FW0's frame has bad h1sush2b data.
For the full frame file H-H1_R-1377176128-64.gwf:
| FW | SIZE (bytes) | CKSUM |
|---|---|---|
| FW0 | 2115565285 | 4100757796 |
| FW1 | 2115562427 | 1319327622 |
Interestingly, the correct frame is smaller than the bad frame, meaning the corrupt signal did not compress as well as the actual signal.
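As an aside, this effect is easy to reproduce: a corrupted record generally compresses worse than the smooth signal it replaced. A minimal sketch (illustrative only; the channel content and the zlib compressor here are stand-ins, not the actual DAQ frame compression):

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
n = 2**16

# A smooth, band-limited "signal" as a stand-in for real channel data.
t = np.arange(n)
clean = (1000 * np.sin(2 * np.pi * t / 256)).astype(np.int32)

# A same-length record filled with random values, standing in for the corrupt block.
corrupt = rng.integers(-2**20, 2**20, size=n).astype(np.int32)

print("clean  :", len(zlib.compress(clean.tobytes())), "compressed bytes")
print("corrupt:", len(zlib.compress(corrupt.tobytes())), "compressed bytes")
# The corrupted record compresses far less well, so a frame containing it is larger.
```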
Looks like the CAMERA_SERVO had a CONNECTION ERROR for h1digivideo3 from 13:21:20-13:22:00 (H1 was taken out of Observing during the error and returned to OBSERVING at 13:22:01utc).
h1digivideo0 ran out of memory at 06:20 PDT this morning and the ETMX video server process crashed. It was restarted by monit (via systemd), which recouped 24% available memory. A 6-hour trend of available memory is attached.
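For context, a minimal sketch of the quantity being trended here (an assumption for illustration only: this is not the monit/systemd configuration or a site tool, just a read of /proc/meminfo reporting MemAvailable as a percentage of MemTotal):

```python
# Minimal sketch: read /proc/meminfo the way a simple watchdog might and
# report MemAvailable as a percentage of MemTotal.
def mem_available_percent(path="/proc/meminfo"):
    fields = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            fields[key] = int(rest.strip().split()[0])  # values are in kB
    return 100.0 * fields["MemAvailable"] / fields["MemTotal"]

if __name__ == "__main__":
    print(f"MemAvailable is {mem_available_percent():.1f}% of MemTotal")
```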
TITLE: 08/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
H1's been locked for almost 9hrs w/ a range of around 160Mpc.
On the CDS Overview, there is a little RED for the SUSH2B front ends (attached; looks like CRC sum errors).
Also happened to notice nuc30's H1 live DARM disappear for about 3-5 seconds at 1508utc.
TITLE: 08/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
- Arrived to an unlocked IFO; ran into the same issue as last night but was able to resolve it with these steps
- Back to NLN @ 1:27/OBSERVE @ 1:51 UTC
- EX saturations @ 3:47/4:04/4:22/4:35
- Lockloss @ 5:06
- Reacquired NLN unaided @ 6:09/OBSERVE @ 6:25
- EX saturation @ 6:36
- Setting H1 to auto and passing to Tony for the night
LOG:
No log for this shift.
TITLE: 08/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 11mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked for 46 hours.
Just a note: I was the Day Operator, but arrived late due to Route10 being CLOSED. :(