Sun Nov 12 10:15:27 2023 INFO: Fill completed in 15min 23secs
TITLE: 11/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 5mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.50 μm/s
QUICK SUMMARY:
H1 has been locked and observing for almost 32 hours.
Violins look much better this morning.
The winds died down around 7 hours ago.
Everything looks good for a quiet shift.
TITLE: 11/12 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
- While L1 was down and running an initial alignment, I took the opportunity to adjust the opo temp and SQZ angle; in doing this I was able to squeeze (ha ha) out ~3-4 Mpc, bringing us to 161 Mpc
- 5:11 - inc 5.9 EQ from Papua New Guinea
- 7:04 - SQZr unlocked, then relocked on its own, back to OBSERVE @ 7:08
LOG:
No log for this shift.
H1 just got back into observing after tuning the opo temperature and sqz angle while L1 was down. IFO seems to be stable, locked for 20 hours, and seismic activity looks to be calming down as well.
TITLE: 11/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
So far so good; the wind speed has picked up.
The PSL Dust mon just hit 300 again on H1:PEM-CS_DUST_PSL101_300NM_PCF
18:13 UTC Dropped out of Observing due to SQZ_MANAGER changing state:
2023-11-11_08:11:39.133044Z SQZ_MANAGER REQUEST: SQZ_READY_IFO
2023-11-11_08:11:39.133044Z SQZ_MANAGER calculating path: SQZ_READY_IFO->SQZ_READY_IFO
It seems like the SQZ Manager had some sort of issue when the FC-IR unlocked!
Tagging SQZ
18:15 UTC back to Observing
4 tours rolled through the control room and out to the overpass.
The PSL Dust Mon has been alarming all shift.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 19:38 | Tour | Corey Janos Mike Cassidy + Tours | Control room | N | Standard tours going to the overpass. | 22:38 |
TITLE: 11/12 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 19mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.51 μm/s
QUICK SUMMARY:
- H1 currently at a 15:45 (hh:mm) lock
- Wind has been high all day (gusts of 60+ mph!) but is starting to trend downward
- CDS/DMs ok
Sat Nov 11 10:11:59 2023 INFO: Fill completed in 11min 55secs
TITLE: 11/11 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 26mph Gusts, 19mph 5min avg
Primary useism: 0.09 μm/s
Secondary useism: 0.47 μm/s
QUICK SUMMARY:
Wind Speeds were gusting at over 50mph and we stayed locked!
Winds are still above 25 mph right now. The PSL had a dust alarm on dust mon 101 going up and down over the last several hours, with a peak value of 450 on channel H1:PEM-CS_DUST_PSL101_300NM_PCF.
Violins look elevated, above ~2e-16, but not as high as yesterday.
For the last ~2 hours the BS camera code has been repeatedly crashing, bringing us out of Observe, then coming back and letting us go back into Observing for a short time before repeating. It looks like it finally died for real this time and I can't bring it back via Monit. I'll continue to look into how to restart it.
Looks like this happened just the other week - alog73749
I'll get Dave involved here in a bit.
Back to Observing at 15:23 UTC. Erik will post an alog with more info on the camera, but it looks like the same fix that's listed in the above alog.
The fix was indeed the same as the previous alog. The steps have to be done in the proper order: first restart the camera, then restart the camera server.
1. Restart the camera by cycling power on its switch port: log in to the switch, sw-lvea-aux, either directly (instructions are in the password file) or from cdsmonitor.
Once logged in to the switch, these commands will reset the camera.
a. config t
b. interface gigabitEthernet 0/35
c. shutdown
d. no shutdown
2. Reset the server by logging into h1digivideo2 as root and killing any process named H1-VID-CAM26 (a minimal sketch follows below). The camera number can be found on the CDS->Digivideo MEDM screen.
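For reference, a minimal sketch of step 2. The host and process name come from the note above; that pkill's pattern match is appropriate here, and that monit restarts the server afterwards, are assumptions:

    ssh root@h1digivideo2     # log in to the video server as root
    pkill -f H1-VID-CAM26     # kill any process matching the camera name
    # monit (or a manual restart) should bring the server back; confirm the
    # camera image returns on the CDS->Digivideo MEDM screen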
TITLE: 11/11 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
- Ongoing commissioning work from 12 - 2:00 UTC
- Lockloss @ 2:05 - cause unknown
- Back to NLN @ 3:06/OBSERVE @ 3:25
- Attached are SDFs from SQZ that I accepted and from CAL that I reverted - Tagging SQZ/CAL
- 3:16 inc 5.0 EQ from Northern Cali
- Loaded ISC LOCK to pull in new CAL settings at 4:09 UTC
- 6:55 - inc 4.8 EQ from Ethiopia
- Lockloss @ 7:08 - cause unknown
- Relocking went through PRMI thrice, but is now continuing up without issue
- The LVEA is still LASER HAZARD and will remain that way for the weekend
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 00:53 | PEM | Robert | LVEA | YES | Move items | 01:13 |
| 02:20 | PEM | Robert | LVEA | YES | Check viewports on HAM 3 | 02:36 |
Lockloss @ 7:08 UTC, EX saturation right before. The SQZ LO looks like it saw a blip first, followed by ASC AS.
H1 is back in observing, just hit a 1 hour lock. Seismic motion is relatively low, and systems appear stable. Violins, while not great, are slowly starting to come down.
Lockloss @ 2:05 UTC. This one was odd as I had just flipped the intention bit back to observing 3 seconds before we lost lock...how's that for timing. Looks like ASC AS A sees the motion first.
Closes 26262, last completed in alog 74035
Looks like there were a couple of glitches for both EX fans ~4 days ago, but they have since leveled out. A couple of glitches over the past couple of days for the CS fan vibrometers as well.
I increased the amplitude gain of the TST SUS line at 17.6 Hz to 0.17 from 0.085 using the same command from LHO:71947. The change took place at GPS 1383694441. This change was in response to LHO:74113 and LHO:74136. It's meant to be a temporary measure until we can try the new MICH FF without the 17.7 Hz zp pair (LHO:74139). I've attached a scope of the ETMX L3 SUS line uncertainty to show that it's now down to about 0.5%, which is below the 1% threshold implemented by the GDS pipeline (LHO:72944). Here is the command I used:

val=0.17 && caput H1:SUS-ETMX_L3_CAL_LINE_CLKGAIN $val && caput H1:SUS-ETMX_L3_CAL_LINE_SINGAIN $val && caput H1:SUS-ETMX_L3_CAL_LINE_COSGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_CLKGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_SINGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_COSGAIN $val

Output:

Old : H1:SUS-ETMX_L3_CAL_LINE_CLKGAIN 0.085
New : H1:SUS-ETMX_L3_CAL_LINE_CLKGAIN 0.17
Old : H1:SUS-ETMX_L3_CAL_LINE_SINGAIN 0.085
New : H1:SUS-ETMX_L3_CAL_LINE_SINGAIN 0.17
Old : H1:SUS-ETMX_L3_CAL_LINE_COSGAIN 0.085
New : H1:SUS-ETMX_L3_CAL_LINE_COSGAIN 0.17
Old : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_CLKGAIN 0.085
New : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_CLKGAIN 0.17
Old : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_SINGAIN 0.085
New : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_SINGAIN 0.17
Old : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_COSGAIN 0.085
New : H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_COSGAIN 0.17
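A quick way to confirm all six gains took effect is to read them back with caget (a minimal sketch; it assumes the standard EPICS command-line tools are available wherever caput was run, and uses bash brace expansion purely for brevity):

    for pv in H1:SUS-ETMX_L3_CAL_LINE_{CLK,SIN,COS}GAIN \
              H1:CAL-CS_TDEP_SUS_LINE3_COMPARISON_OSC_{CLK,SIN,COS}GAIN; do
        caget $pv    # each should now report 0.17
    done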
tagging DetChar: please be on the lookout for any artifacts that may have been caused by increasing this ETMX L3 line at 17.6 Hz.
lscparams.py has been updated with the new SUS ETMX L3 gain
alog for Tony, Robert, Jim, Gerardo, Mitchell, Daniel
At 22:16 UTC the HAM1 HEPI started "ringing". Robert heard this while he was in the LVEA as a 1000 Hz "ringing" that he tracked to HAM1. Plot attached.
Gerardo, Mitchell, and Robert investigated the HEPI pumps on the mechanical room mezzanine and didn't find anything wrong. Robert physically damped the part of HEPI that was vibrating with some foam around 22:40 UTC, and the "ringing" stopped, with readbacks going back to nominal background levels. It can be seen clearly in the H1:PEM-CS_ACC_HAM3_PR2_Y_MON plot as well as the H1:HPI-HAM1_OUTF_H1_OUTPUT channels, plots attached. It must be downconverting to be visible in the 16 Hz HEPI channels. The HAM1 vertical IPSINF channels also looked strange, plot.
Jim checked that the HEPI readbacks are now okay.
Don't know why it started. The current plan is that it's okay now and more thorough checks will be done on Tuesday.
Snapshot of peak mon right after a lockloss during this time.
Lockloss page from the lockloss that happened during this event.
Robert reports it at 1 kHz, but it seems there are a number of features at 583, 874, and 882 Hz. Can't tell if there are any higher, because HEPI is only a 2k model. The attached plot shows the H1 L4C ASDs: red is from a couple weeks ago, blue is when HAM1 was singing, pink is after Robert damped the hydraulic line. Seems like the HAM1 motion is back to what it was a couple weeks ago. Not sure what this was; I'll look at the chamber when I get a chance on Monday or Tuesday, unless it becomes an emergency before then...
The second set of ASDs compares all the sensors during the "singing" to the time in October. Red and light green are the October data, blue and brown are the choir; the top row is the H L4Cs, the bottom row the V. The ringing is generally loudest in the H sensors, though H2 is quieter than the other three H sensors.