TITLE: 10/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 19mph Gusts, 15mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
- H1 has been locked for just over 8.5 hours
- SEI/CDS/DMs ok
TITLE: 10/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: We've been locked all day (8:30 so far) and just finished up commissioning for the day.
Norco arrived at 15:35UTC, started down the X-arm at 15:45, was parked at MX by 15:54, and left around 17:25.
We started commissioning at 19:00UTC with a calibration sweep (19:03 to 19:33). I also manually restarted NUC26, which had frozen late last night. Robert wrapped up his PEM injections around 22:45UTC and swept the LVEA as he left.
Camilla made the ring heater changes at 19:35UTC and undid them at 22:35UTC.
Back into observing at 22:46UTC
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:23 | FAC | Karen | Wood shop, fire pump room | N | Tech clean | 18:10 |
| 17:12 | FAC | Cindi | Weld shop/Laundry | N | Laundry | 19:07 |
| 17:51 | FAC | Kim | H2 | N | Tech clean | 18:10 |
| 19:05 | PEM | Robert | LVEA | N | Grab equipment | 19:12 |
| 19:15 | CAL | RyanC | CR | N | Calibration sweep | 19:34 |
| 19:16 | PEM | Robert | EndX | N | PEM injection | 20:54 |
| 19:33 | FAC | Cindi | Wood shop | N | grab supplies | 19:53 |
| 19:35 | TCS | Camilla | CR | N | Ring heater test | 20:23 |
| 20:54 | PEM | Robert | CR | N | ITM tests | 21:50 |
| 21:17 | VAC | Travis, Jordan | Mids | N | Gauge updates | 22:20 |
| 21:18 | | Marc +2 | Roof | N | Tour for new intern | 21:22 |
| 21:50 | PEM | Robert | LVEA | N | HAM3 Shaking Tests | 22:47 |
Starting RH changes at 19:35UTC: turned the ITM RHs up by +0.1W and the ETM RHs down by -0.1W. Plots of DARM and the HOM after 2 hours, plus an ndscope trend, are attached. High frequency noise increased and the HOM moved higher in frequency. Circulating powers increased. After 3 hours, at 22:35UTC, I turned the RHs back to nominal.
During this time Robert was doing some PEM injections, and we had only been locked at NLN for 5 hours, so we weren't completely thermalized at the start.
I added a 6600 to 6800Hz BLRMS as H1:OAF-RANGE_RLP_8 to monitor the high frequency noise (see attached); it didn't prove very useful. H1:SQZ-DCPD_RATIO_6_DB shows the trend better.
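For context, a minimal sketch of the band-limited RMS computation behind a BLRMS like this, assuming gwpy and NDS data access are available; the channel, GPS span, and stride below are placeholders for illustration, not the actual OAF filter-bank setup.

```python
# Minimal band-limited RMS (BLRMS) sketch, assuming gwpy is installed and
# NDS data access works. Channel, times, and stride are placeholders.
from gwpy.timeseries import TimeSeries

start, end = 1380483300, 1380483900  # placeholder GPS span during the test

data = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)

# Band-pass to 6600-6800 Hz, then take the RMS over fixed strides to get
# a slow trend of the high frequency noise in that band.
band = data.bandpass(6600, 6800)
blrms = band.rms(stride=60)  # one BLRMS value per 60 s

print(blrms)
```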
| Optic | Nominal (W/segment) | Test Values 19:35 to 22:35UTC |
|---|---|---|
| ITMX | 0.44 | 0.54 |
| ITMY | 0.0 | 0.1 |
| ETMX | 1.0 | 0.9 |
| ETMY | 1.0 | 0.9 |
Last week's test: 73093. Future tests: we can't go the other direction, as ITMY started with 0W RH. We could try turning up just the ETMs in common during a future while-observing test, since we saw decreased high frequency noise with both ETM RHs at ~1.4W/seg in February (67501), plot attached.
Updated plot showing thermalization back to nominal RH settings attached. Main changes were higher circulating power and more high frequency noise during the test.
I ran a calibration sweep today, starting with the broadband (BB) injection and then the simulines sweep.
BB output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20231004T190400Z.xml
Simulines:
GPS start: 1380481801.115491
GPS stop: 1380483144.778068
2023-10-04 19:32:06,617 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20231004T190945Z.hdf5
2023-10-04 19:32:06,638 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20231004T190945Z.hdf5
2023-10-04 19:32:06,650 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20231004T190945Z.hdf5
2023-10-04 19:32:06,662 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20231004T190945Z.hdf5
2023-10-04 19:32:06,674 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20231004T190945Z.hdf5
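For anyone post-processing these, a quick sketch for inspecting one of the output files and converting the GPS stamps above to UTC; this assumes h5py and gwpy are installed and makes no assumption about the internal layout of the measurement files.

```python
# Quick look at a simulines measurement file and the sweep GPS times.
# Assumes h5py and gwpy are available; the HDF5 internal layout is not
# assumed, we just walk whatever groups/datasets are present.
import h5py
from gwpy.time import tconvert

fname = '/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20231004T190945Z.hdf5'
with h5py.File(fname, 'r') as f:
    f.visit(print)  # print every group/dataset path in the file

# Convert the sweep start/stop GPS times quoted above to UTC datetimes.
print(tconvert(1380481801.115491), '->', tconvert(1380483144.778068))
```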
We've been locked and observing since 14:47UTC; we plan to commission from 19:00UTC to 23:00UTC. Plans include a calibration sweep, a ring heater test, and PEM injections.
Wed Oct 04 10:11:28 2023 INFO: Fill completed in 11min 24secs
Jordan confirmed a good fill curbside.
TITLE: 10/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
As Austin mentioned in his alog, nuc26 is not responding over the network. This is the upper camera FOM showing the MC and PR cameras.
Its web image has not updated since Wed 4th 00:08 UTC (Tue 3rd 17:08 PDT). The EDC lost connection to its 12 load_mon EPICS channels at the same time (list shown below, with a quick spot-check sketch after it).
Ryan says the local display in the control room is updating its images, so we will schedule a reboot of nuc26 at the next target of opportunity.
H1:CDS-MONITOR_NUC26_CPU_LOAD_PERCENT
H1:CDS-MONITOR_NUC26_CPU_COUNT
H1:CDS-MONITOR_NUC26_MEMORY_AVAIL_PERCENT
H1:CDS-MONITOR_NUC26_MEMORY_AVAIL_MB
H1:CDS-MONITOR_NUC26_PROCESSES
H1:CDS-MONITOR_NUC26_INET_CONNECTIONS
H1:CDS-MONITOR_NUC26_NET_TX_TOTAL_MBIT
H1:CDS-MONITOR_NUC26_NET_RX_TOTAL_MBIT
H1:CDS-MONITOR_NUC26_NET_TX_LO_MBIT
H1:CDS-MONITOR_NUC26_NET_RX_LO_MBIT
H1:CDS-MONITOR_NUC26_NET_RX_ENO1_MBIT
H1:CDS-MONITOR_NUC26_NET_TX_ENO1_MBIT
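One way to spot-check these PVs by hand (a minimal sketch assuming pyepics and channel access to the H1 network; this is not how the EDC itself monitors connections):

```python
# Minimal connectivity check for a few of the nuc26 monitor PVs listed above.
# Assumes pyepics is installed and channel access to H1 is available;
# this is just a spot check, not the EDC's own connection monitoring.
from epics import PV

channels = [
    'H1:CDS-MONITOR_NUC26_CPU_LOAD_PERCENT',
    'H1:CDS-MONITOR_NUC26_CPU_COUNT',
    'H1:CDS-MONITOR_NUC26_MEMORY_AVAIL_PERCENT',
]

for name in channels:
    pv = PV(name)
    ok = pv.wait_for_connection(timeout=2.0)
    print(f"{name}: {'connected, value=' + str(pv.get()) if ok else 'DISCONNECTED'}")
```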
Ryan rebooted nuc26 at 12:10 PDT. The web snapshot now looks good and the EDC has reconnected to its PVs.
TITLE: 10/03 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
- Arrived to a relocking IFO, back to NLN @ 0:12, OBSERVE @ 0:33 UTC
- PI 31 ringups @ 0:14 - successfully damped by guardian
- 0:33 - inc 5.5 EQ from Japan - back to CALM @ 1:08
- EX saturation @ 0:51
- 1:30 - Saturations on PR2/SR2/MC2 - have never seen these before
- Lockloss @ 3:09
- Relocking was smooth (had a slight issue with DRMI going to CHECK MICH, but otherwise ok), back to NLN @ 4:13, OBSERVE @ 4:28
- 3:54 - inc 5.7 EQ from Philippines
- 5:13 - inc 5.7 EQ this one from Japan again - 5:34 EQ mode activated/back to CALM @ 5:44
- EX saturations @ 4:31/5:49
- The H1EDC is reading red with 12 channels - all pertaining to nuc26 - Tagging CDS
LOG:
No log for this shift.
Lockloss @ 3:09. This one was very quick, DCPD saturation and then a lockloss immediately after. Looks like SRCL saw motion first for this one.
Quiet night so far, other than a 5.5 EQ from Japan. Seismic motion has since stabilized, violin modes look to have calmed down since this past weekend, rest of the systems look stable as well.
TITLE: 10/03 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
IFO is LOCKING (as of 22:46 UTC)
Notable Activities and Events Regarding Control Room and Tuesday Maintenance Tasks
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:09 | | | | | | 15:10 |
| 15:11 | FAC | Karen | EY | N | Technical Cleaning | 16:06 |
| 15:13 | FAC | Randy + Chris + Bubba | LVEA | N | Septum Plate Forklift + Move | 17:17 |
| 15:13 | FAC | Kim | EX | N | Technical Cleaning | 16:13 |
| 15:15 | FAC | Ken | LVEA | N | Mechanical Room Heaters Check | 19:41 |
| 15:16 | FAC | Cindi | FCES | N | Technical Cleaning | 16:16 |
| 15:23 | PEM | Mitchell | EX/EY | N | Dust Pump Famis Task | 16:11 |
| 15:26 | FAC | Bubba | LVEA | N | Snorkel Lift Ceiling Noise Check | 15:57 |
| 15:26 | FAC | Chris | LVEA | N | Pest Control | 15:42 |
| 15:27 | SUS | Fernando + Jason | LVEA | N | IX Oplev Laser Swap | 15:39 |
| 15:33 | VAC | Jordan + Gerardo + Travis | LVEA | N | Turbo Pumps | 17:22 |
| 15:41 | FAC | Chris | EY, MY, EX, MX | N | Pest Control | 16:42 |
| 15:46 | EE | Fil | CER | N | ISI IX Coil Driver | 16:21 |
| 15:49 | FAC | Tyler | MY | N | Looking at Fans | 19:55 |
| 16:22 | EE | Fil | CER | N | Grounding of PSL + Beckhoff Chassis | 17:00 |
| 16:43 | FAC | Chris | LVEA | N | First Aid + Eyewash Stations + First Aid (FAMIS) | 17:43 |
| 16:44 | FAC | Karen + Kim | LVEA | N | Technical Cleaning | 17:17 |
| 16:48 | VAC | Gerardo + Travis | FCES | N | Moving Cleanroom | 17:22 |
| 17:10 | SQZ | Sheila + Camilla + Naoki | LVEA | Local | Swapping homodyne at SQZ Table | 18:43 |
| 17:16 | EE | Fernando + Marc | LVEA, CER, Mezzanine | N | Electrical System Testing | 18:46 |
| 17:19 | FAC | Karen + Kim | Filter Cavity Tube | N | Technical Cleaning | 19:34 |
| 17:22 | TCS | Ryan C | Mechanical Room | N | Mechanical Room Chiller Check | 17:32 |
| 17:29 | FAC | Cindi | LVEA Receiving | N | Cardboard extraction | 17:46 |
| 17:40 | ISC | Keita | LVEA | N | HAM6 Noise Reduction Inspection on OM2 | 19:45 |
| 17:47 | | | | | | 17:47 |
| 17:48 | FAC | Cindi | High-Bay | N | Technical Cleaning | 18:19 |
| 18:13 | FAC | Chris | EX, EY | N | Eyewash Stations + First Aid (FAMIS) | 19:24 |
| 18:25 | SUS | Jason | LVEA | N | IX Oplev Check | 18:32 |
| 18:32 | VAC | Gerardo | LVEA | N | More Turbo Pump Work | 19:03 |
| 18:40 | EE | Fil | LVEA | N | ISI IX Coil Driver Turn On | 18:41 |
| 19:02 | SUS | Jason + Ryan S | LVEA, CER | N | Swapping DC converter power regulator | 20:07 |
| 19:51 | SAF | Ryan S + Oli | LVEA | N | Sweep | 20:15 |
| 20:08 | SUS | Jason | Optics Lab | N | Dropping off parts (Quick) | 20:09 |
| 21:33 | TCS | Camilla | LVEA | N | TCSY Laser Switch-On | 21:44 |
At 21:29UTC the CO2Y laser tripped off with an RTD/IR alarm. I went into the LVEA and had to power cycle the CO2Y chassis and wobble the IR cable connector box before the laser would turn on again. The VAC team bumped the table earlier today, but I doubt this has anything to do with the trip. I checked the annular pattern on ITMY via the HWS; it looked normal.
Ryan S and I did a sweep of the LVEA. Everything looked good, but there were a few things we thought were probably okay but still wanted to log:
attachment1 - Two power supply units were on at the y-arm termination slab.
Not pictured - Small puddle of condensation water on the tile close to y-arm termination slab from the pipes above. This water is separate from the puddles that collect from either side of the beam tube. - tagging VE
attachment2, attachment5 - OM2 power supply and multimeter connected to Beckhoff Output Interface 1 via mini hook clips
attachment3 - Computer for PCal X plugged in. I remember talking to TJ and Tony about this last time I swept and the conclusion (I believe) being that it had been plugged in for someone who had needed it at the time (not sure if still needed?) - tagging CAL
attachment4 - Power supply underneath Hartmann table on
Olli, Ryan, thanks for noticing this computer by the Xend Pcal system. While it happens to be located nearby, and we may have used it years ago, we (Pcal team) have not used that computer for some time and don't plan to use it in the future. As far as I know it is currently in an inoperable state, having been replaced by the CDS laptops that we carry down to the end stations from the corner station.
The remaining two slow controls chassis were grounded via a wire braid to a grounding terminal block, same as was done in alog 68096, alog 66402, and alog 66469. This scheme provides a low resistance path to the grounding block; the anodized racks prevent a solid grounding connection via the mounting screws. The PSL and TCS slow controls were grounded using the new scheme. See attached pictures. This completes all slow controls chassis in the CER and End Stations.
Tagging DetChar -- with the CW group in mind. After this maintenance day, there might be a change in comb behavior as a result of this work. This is action taken as a result of Keita, Daniel, and Ansel's work on ID'ing combs from the OM2 heater system -- see LHO:72967.
After Fil was done with the grounding work, I temporarily restored the connection between the Beckhoff cable and the heater chassis and used a normal breakout board to measure the voltage between the driver ground (pin 13) and the positive drive voltage (pin 6) of D2000212, just like I did on Aug 09 2023 (alog 72061).
1st attachment is today, 2nd attachment is on Aug 09. I see no improvement (OK, it's better by ~1dB today).
After seeing this, I swapped the breakout board back to the switchable one I've been using to connect only a subset of pins (e.g. only thermistor 1). This time, there's no electrical connection between any pins but the cable was physically attached to the breakout board. No connection between the cable shell and the chassis connector shell either. I expect that the comb will be gone, but I'd like detchar to have a look.
The heater driver is driven by the voltage reference on the nearby table, not Beckhoff.
J. Oberling, F. Mera
This morning we swapped the failing laser in the ITMx OpLev with a spare. The first attached picture shows the OpLev signals before the laser swap, the 2nd is after. As can be seen there was no change in alignment, but the SUM counts are now back around 7000. I'll keep an eye on this new laser over the next couple of days.
This completes WP 11454.
J. Oberling, R. Short
Checking on the laser after a few hours of warm up, I found the cooler to be very warm, and the box housing the DC-DC converter that powers the laser (steps ~11 VDC down to 5 VDC) was extremely warm. Also, the SUM counts had dropped from the ~7k we started at to ~1.1k. Seeing as how we just installed a new laser, my suspicion was that the DC-DC converter was failing. Checking the OpLev power supply in the CER it was providing 3A to the LVEA OpLev lasers; this should only be just over 1A, which is further indication something is up. Ryan and I replaced the DC-DC converter with a spare. Upon powering up with the new converter the current delivered by the power supply was still ~3A, so we swapped the laser with another spare. With the new laser the delivered current was down to just over 1A, as it should be. The laser power was set so the SUM counts are still at ~7k, and we will keep an eye on this OpLev over the coming hours/days. Both lasers SN 191-1 and SN 119-2 will be tested in the lab; my suspicion is that the dying DC-DC converter damaged both lasers and they will have to be repaired by the vendor, will see what the lab testing says. New laser SN is 199-1.
Noticing as the night progresses that the sum counts are slowly going up, starting from ~6200 and now at ~7100. Odd.
ITMX OPLEV sum counts are at about 7500 this morning.
Sum counts around 7700 this morning; they're still creeping up.
The ISS Second Loop engaged this lock with a low-ish diffracted power (about 1.5%). Oli had chatted with Jason about it, and Sheila noticed that perhaps it being low could be related to the number of glitches we've been seeing. A concern is that if the control loop needs to go "below" zero percent (which it can't do), this could cause a lockloss.
I "fixed" it by selecting IMC_LOCK to LOCKED (which opens the ISS second loop), and then selecting ISS_ON to re-close the second loop and put us back in our nominal Observing configuration. This set the diffracted power back much closer to 2.5%, which is where we want it to be.
This cycling of the ISS 2nd loop (a DC coupled loop) dropped the power into the PRM (H1:IMC-PWR_IN_OUT16) from 57.6899 W to 57.2255 W over the course of ~1 minute, 2023-Aug-07 17:49:28 UTC to 17:50:39 UTC. It caught my attention because I saw a discrete drop in arm cavity power of ~2.5 kW while trending around looking for thermalization periods.
This serves as another lovely example where time dependent correction factors (TDCFs) are doing their job well, and indeed quite accurately. If we repeat the math we used back in O3 (see LHO:56118 for the derivation), we can model the optical gain change in two ways:
- the relative change estimated from the power on the beam splitter (assuming the power recycling gain is constant and cancels out): relative change = (np.sqrt(57.6858) - np.sqrt(57.2255)) / np.sqrt(57.6858) = 0.0039977 = 0.39977%
- the relative change estimated by the TDCF system, via kappa_C: relative change = (0.97803 - 0.974355)/0.97803 = 0.0037576 = 0.37576%
Indeed the estimates agree quite well, especially given the noise / uncertainty in the TDCF (because we like to limit the height of the PCAL line that informs it). This gives me confidence that -- at least over these several-minute time scales -- kappa_C is accurate to within 0.1 to 0.2%. This is consistent with our estimate of the uncertainty from converting the coherence between the PCAL excitation and DARM_ERR into uncertainty via Bendat & Piersol's unc = sqrt( (1-C) / (2NC) ). It's nice to have these "sanity check" warm and fuzzies that the TDCFs are doing their job; it's also nice to have a detailed record of these weird random "what's that??" moments found while trending around looking for things. I also note that there's no change in cavity pole frequency, as expected.
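The two estimates above, reproduced as a few lines of Python (all numbers taken straight from this entry):

```python
# Reproduce the two optical-gain-change estimates quoted above
# (input powers in W, kappa_C values dimensionless, all from this entry).
import numpy as np

# From the power into the PRM, assuming constant power recycling gain:
from_power = (np.sqrt(57.6858) - np.sqrt(57.2255)) / np.sqrt(57.6858)

# From the TDCF system, via kappa_C:
from_kappa_c = (0.97803 - 0.974355) / 0.97803

print(f"from input power: {100*from_power:.4f}%")    # ~0.40%
print(f"from kappa_C:     {100*from_kappa_c:.4f}%")  # ~0.38%

# Coherence-based uncertainty on kappa_C (Bendat & Piersol):
# unc = sqrt((1 - C) / (2*N*C)) for coherence C over N averages.
```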
When the circulating power dropped ~2.5kW, kappa_c trended down, plot attached. This implies that the lower circulating powers induced in the previous RH test (73093) are not the reason kappa_c increases. Maybe a slight increase in high frequency noise can be seen as the circulating power is turned up, plot attached.