Ryan S and I did a sweep of the LVEA. Everything looked good, but there were a few things we thought were probably okay but still wanted to log:
attachment1 - Two power supply units were on at the y-arm termination slab.
Not pictured - Small puddle of condensation water on the tile close to y-arm termination slab from the pipes above. This water is separate from the puddles that collect from either side of the beam tube. - tagging VE
attachment2, attachment5 - OM2 power supply and multimeter connected to Beckhoff Output Interface 1 via mini hook clips
attachment3 - Computer for PCal X plugged in. I remember talking to TJ and Tony about this the last time I swept, and the conclusion (I believe) was that it had been plugged in for someone who needed it at the time (not sure if it's still needed?) - tagging CAL
attachment4 - Power supply underneath Hartmann table on
Olli, Ryan, thanks for noticing this computer by the Xend Pcal system. While it happens to be located nearby, and we may have used it years ago, we (Pcal team) have not used that computer for some time and don't plan to use it in the future. As far as I know it is currently in an inoperable state, having been replaced by the CDS laptops that we carry down to the end stations from the corner station.
(Jordan V., Gerardo M.)
Output mode cleaner tube turbo station;
Scroll pump hours: 5561.4
Turbo pump hours: 5621
Crash bearing life is at 100%
X beam manifold turbo station;
Scroll pump hours: 785
Turbo pump hours: 783.5
Crash bearing life is at 100%
Y beam manifold turbo station;
Scroll pump hours: 1878.6
Turbo pump hours: 603
Crash bearing life is at 100%
WP11437 DAQ file server network jumbo frames
Jonathan:
During the DAQ restart Jonathan increased the MTU on the ethernet ports connecting FW and NDS to their NFS file servers to 9000 bytes. This change required an extended FW downtime, and a second restart of the NDS.
WP11447 Add missing model slow channels to DAQ
Dave:
Both H1EPICS_FEC.ini and H1EPICS_SDF.ini were extended to add missing channels. DAQ+EDC restart was needed.
WP11449 Add missing Dust Monitor channels to DAQ
Dave:
H1EPICS_DUST.ini was extended to add missing channels. This file is now being generated by a python script. DAQ+EDC restart was needed.
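For context, a minimal sketch of the kind of generator involved (the channel names, particle-size bins, and ini fields below are purely illustrative placeholders; the real script and the exact ini format may differ):

```python
# Hypothetical generator for an EDC-style ini of dust monitor channels.
# Locations, size bins, and the datarate field are illustrative only.
DUST_LOCATIONS = ["LAB1", "LAB2", "LVEA5"]
SIZE_BINS = ["300NM", "500NM"]

with open("H1EPICS_DUST.ini", "w") as f:
    for loc in DUST_LOCATIONS:
        for size in SIZE_BINS:
            f.write(f"[H1:PEM-CS_DUST_{loc}_{size}_PCF]\n")
            f.write("datarate=16\n\n")
```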
WP11450 New camera server code
Patrick:
New server code was installed on h1digivideo3 for all cameras served by this machine. This mitigates the sending of NaN values to h1asc via cds_ca_copy.
WP11456 PSL Beckhoff SDF monitor Watchdog Channels
Ryan S, Jason, Dave:
New monitor.req and safe.snap files were installed on h1pslopcsdf.
DAQ Restart
Jonathan, Erik, Dave:
DAQ was restarted for the above changes. As noted, FWs were down for an extended time and NDSs needed a second restart to implement jumbo frames.
GDS1 needed a second restart to sync its channel list.
FW1 spontaneously restarted itself 20 minutes later.
Tue03Oct2023
LOC TIME HOSTNAME MODEL/REBOOT
11:30:51 h1daqdc0 [DAQ] <<< 0-leg restart
11:31:05 h1daqfw0 [DAQ]
11:31:06 h1daqnds0 [DAQ]
11:31:06 h1daqtw0 [DAQ]
11:31:14 h1daqgds0 [DAQ]
11:36:24 h1daqfw0 [DAQ] <<< FW0 2nd restart
11:39:49 h1daqnds0 [DAQ] <<< NDS 2nd restart
11:42:08 h1susauxb123 h1edc[DAQ] <<< EDC restart
11:44:40 h1daqdc1 [DAQ] <<< 1-leg restart
11:44:48 h1daqfw1 [DAQ]
11:44:49 h1daqtw1 [DAQ]
11:44:50 h1daqnds1 [DAQ]
11:44:58 h1daqgds1 [DAQ]
11:47:04 h1daqfw1 [DAQ] <<< FW1 2nd restart
11:47:50 h1daqgds1 [DAQ] <<< GDS1 2nd restart (chan list)
11:49:05 h1daqnds1 [DAQ] <<< NDS1 2nd restart
12:17:28 h1daqfw1 [DAQ] <<< FW1 spontaneous restart
DAQ Frame File Channel Changes
Fast channels: no additions or removals
Slow channels:
no channels removed
3,319 channels added.
Tuesday Maintenance activities have concluded
IFO is starting INITIAL ALIGNMENT, then acquiring lock (NOMINAL LOW NOISE) and moving into OBSERVING thereafter.
Vicky and Sheila showed that the quantum efficiency on PD-B of our homodyne was low in 72604 and 72973.
Today Sheila and I swapped the original S1800623 with the spare S2000862 homodyne and balanced the homodyne by adjusting the BS and PDB steering mirror to bring H1:SQZ-HD_DIFF_DC_OUTPUT to 0. Both PDs now have the same QE of >97%.
We removed the fudge factors of -1.09 and 1.11 in H1:SQZ-HD_{A,B}_DC, so these channels now read in mA. The dark noise and shot noise look similar to the old homodyne, see attached. Next week we will align and match the SQZ beam to the LO. We left the original HD in SQZT7.
| Channel | HD_{A,B}_DC | Ophir PM (s/n 756764, meter 751044) | QE w/Ophir (from 72604) | Thorlabs PM | QE w/Thorlabs (from 72604) |
|---|---|---|---|---|---|
| PD-A | 0.486 mA | 0.58 mW | 97.7% | 0.560 mW | 101.2% |
| PD-B | 0.486 mA | 0.58 mW | 97.7% | 0.563 mW | 100.7% |
* Example QE calculation: for the Ophir measurement, the responsivity is 0.486 mA / 0.58 mW ≈ 0.838 A/W; dividing by the 0.8582 A/W responsivity corresponding to 100% QE (from 72604) gives ≈97.7%.
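A small worked version of that arithmetic (values copied from the table above; the 0.8582 A/W responsivity at 100% QE is from 72604):

```python
# Worked QE example using the PD-B / Ophir numbers quoted above.
photocurrent_mA = 0.486     # H1:SQZ-HD_B_DC reading
power_mW = 0.58             # Ophir power meter reading
resp_100pct_qe = 0.8582     # A/W for QE = 100% (from 72604)

responsivity = photocurrent_mA / power_mW   # mA/mW == A/W  -> ~0.838 A/W
qe = responsivity / resp_100pct_qe          # -> ~0.976, consistent with the ~97.7% quoted above
print(f"responsivity = {responsivity:.3f} A/W, QE = {qe:.1%}")
```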
WP 11448
The plan for today was to replace the third ISI Coil Driver on ITMX. The spare unit powered on with fault lights on. The unit was pulled and needs to be checked to verify that all ECRs on it have been completed. The original ISI coil driver was reinstalled.
While looking at slowdowns on frame writing we noticed that the frame file server NICs had the default MTU of 1500. Today during the DAQ restart we updated the MTU to 9000 to allow larger packets between the framewriter, frame file server, and the NDS machines.
Basic process:
- on the daqd
  * stop the daqd
  * umount /frames
  * take the interface down on the daqd
- on daqframes
  * take the interface down
  * update the MTU
  * bring the interface up
- back on the daqd
  * bring the interface up
  * verify that both sides are at an MTU of 9000
  * mount /frames
  * start daqd
This was done on h1daqfw0, h1daqfw1, h1daqnds0, h1daqnds1, h1daqframes-0, h1daqframes-1.
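As a small illustration of the "verify both sides are at an MTU of 9000" step, something like the following could be run on each host (this is only a sketch, not the commands we actually used, and the interface name is a placeholder):

```python
# Minimal sketch: read the current MTU of an interface from sysfs and check it.
# The interface name below is hypothetical, not the actual NFS-facing port.
from pathlib import Path

def mtu(interface: str) -> int:
    """Return the current MTU of a network interface."""
    return int(Path(f"/sys/class/net/{interface}/mtu").read_text())

for iface in ("enp4s0",):              # hypothetical NFS-facing interface
    current = mtu(iface)
    assert current == 9000, f"{iface} is still at MTU {current}"
    print(f"{iface}: MTU {current} OK")
```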
The remaining two slow controls chassis were grounded via a wire braid to a grounding terminal block. Same as what was done in alog 68096, alog 66402 and alog 66469. This scheme provides a low resistance path to the grounding block. The anodized racks prevent a solid grounding connection via the mounting screws. The PSL and TCS slow controls were grounded using new scheme. See attached pictures. This completes all slow controls chassis in CER and End Stations.
Tagging DetChar -- with the CW group in mind. After this maintenance day, there might be a change in comb behavior as a result of this work. This is action taken as a result of Keita, Daniel, and Ansel's work on ID'ing combs from the OM2 heater system -- see LHO:72967.
After Fil was done with the grounding work, I temporarily restored the connection between the Beckhoff cable and the heater chassis and used a normal breakout board to measure the voltage between the driver ground (pin 13) and the positive drive voltage (pin 6) of D2000212, just like I did on Aug 09 2023 (alog 72061).
1st attachment is today, 2nd attachment is on Aug 09. I see no improvement (OK, it's better by ~1dB today).
After seeing this, I swapped the breakout board back to the switchable one I've been using to connect only a subset of pins (e.g. only thermistor 1). This time, there's no electrical connection between any pins but the cable was physically attached to the breakout board. No connection between the cable shell and the chassis connector shell either. I expect that the comb will be gone, but I'd like detchar to have a look.
The heater driver is driven by the voltage reference on the nearby table, not by Beckhoff.
Closes FAMIS 26158
There was no water in the cup in the corner.
| | TCS X | TCS Y |
|---|---|---|
| Previous Level | 29.6 | 9.5 |
| New Level | 30.0 | 10.0 |
| Water added | 150 mL | 150 mL |
Closes WP 11450. Ran 'apt-get install pylon-camera-server' as root on h1digivideo3. The code for each camera was restarted by the service manager. No issues were seen. This update is intended to fix a possible threading issue that may have been a source of 'NaN' input. See this commit for the relevant change.
J. Oberling, F. Mera
This morning we swapped the failing laser in the ITMx OpLev with a spare. The first attached picture shows the OpLev signals before the laser swap, the 2nd is after. As can be seen there was no change in alignment, but the SUM counts are now back around 7000. I'll keep an eye on this new laser over the next couple of days.
This completes WP 11454.
J. Oberling, R. Short
Checking on the laser after a few hours of warm-up, I found the cooler to be very warm, and the box housing the DC-DC converter that powers the laser (steps ~11 VDC down to 5 VDC) was extremely warm. Also, the SUM counts had dropped from the ~7k we started at to ~1.1k. Since we had just installed a new laser, my suspicion was that the DC-DC converter was failing. Checking the OpLev power supply in the CER, I found it was providing 3 A to the LVEA OpLev lasers; this should only be just over 1 A, which was a further indication something was wrong. Ryan and I replaced the DC-DC converter with a spare. Upon powering up with the new converter the current delivered by the power supply was still ~3 A, so we swapped the laser with another spare. With the new laser the delivered current was down to just over 1 A, as it should be. The laser power was set so the SUM counts are still at ~7k, and we will keep an eye on this OpLev over the coming hours/days. Both lasers, SN 191-1 and SN 119-2, will be tested in the lab; my suspicion is that the dying DC-DC converter damaged both lasers and they will have to be repaired by the vendor. We'll see what the lab testing says. The new laser's SN is 199-1.
Noticed that as the night progresses, the SUM counts are slowly going up, from ~6200 earlier to ~7100 now. Odd.
ITMX OPLEV sum counts are at about 7500 this morning.
Sum counts around 7700 this morning; they're still creeping up.
The outbuilding weather station IOCs stopped updating again at 04:02 this morning (they stopped at 04:27 Mon morning).
As part of the investigation I found that I am still running a 04:01 cronjob which restarts any stuck weather stations. This should only restart an IOC if its weather station PVs are in an invalid state. The cronjob was installed on 17th Jan 2023 following a spate of lock-ups which were occurring between 03:18 and 03:28.
I have disabled this crontab for now.
Instead of disabling the crontab, we have decided to reschedule it to run at 05:01 each morning:
01 05 * * * /ligo/home/controls/restart_weather > /dev/null
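For context, the check the restart script is meant to perform is roughly of this form (a sketch only, assuming pyepics; the channel name and the script invocation details are illustrative, not what the real script necessarily uses):

```python
# Rough sketch: restart the weather IOCs only if a station's PVs are in an
# INVALID alarm state or disconnected. Channel name is a placeholder.
import subprocess
from epics import PV

WEATHER_PVS = ["H1:PEM-EY_WIND_ROOF_WEATHER_MPH"]   # illustrative channel name
INVALID = 3                                          # EPICS INVALID alarm severity

needs_restart = False
for pvname in WEATHER_PVS:
    pv = PV(pvname)
    pv.get(timeout=5.0)
    if not pv.connected or pv.severity == INVALID:
        needs_restart = True

if needs_restart:
    subprocess.run(["/ligo/home/controls/restart_weather"])
```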
TITLE: 10/03 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Preventative Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 4mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.25 μm/s
QUICK SUMMARY:
IFO is DOWN for Planned Tuesday Maintenance
When I arrived, IFO was about halfway through locking.
Was locked for only 3 minutes at NLN and unlocked due to a DCPD Saturation at 14:41 UTC
Dust monitors were sounding and were acknowledged.
For the 14:41 lockloss, the signals look fine on the lockloss select tool scopes. The violins also look fine, they were all trending downwards. The OMC_DCPD signals looked interesting, they seem to have been diverging before the lockloss.
Right after lockloss DIAG_MAIN was showing:
- PSL_ISS: Diffracted power is low
- OPLEV_SUMS: ETMX sums low
05:41 UTC LOCKING_ARMS_GREEN: the detector couldn't see ALSY at all, and I noticed ETM/ITM L2 saturations (attachment1, L2 ITMX MASTER_OUT as an example) on the saturation monitor (attachment2), as well as the ETMY oplev moving around wildly (attachment3)
05:54 I took the detector to DOWN, and immediately could see ALSY on the cameras and ndscope; L2 saturations were all still there
05:56 and 06:05 I went to LOCKING_ARMS_GREEN again and to GREEN_ARMS_MANUAL, respectively, but wasn't able to lock anything. I was able to go to LOCKING_ARMS_GREEN and GREEN_ARMS_MANUAL without the saturations getting too bad, but ALSX and ALSY both eventually locked for a few seconds each, then unlocked and went to basically 0 on the cameras and ndscope (attachment4). Sometime after this, ALSX and ALSY went to FAULT and were giving the messages "PDH" and "ReflPD A"
06:07 Tried going to INITIAL_ALIGNMENT but Verbal kept calling out all of the optics as saturating, ALIGN_IFO wouldn't move past SET_SUS_FOR_ALS_FPMI, and the oplev overview screen showed ETMY, ITMX, and ETMX moving all over the place (I was worried something would go really wrong if I left it in that configuration)
I tried waiting a bit and then tried INITIAL_ALIGNMENT and LOCKING_ARMS_GREEN again but had the same results.
Naoki came on and thought the saturations might be due to the camera servos, so he cleared the camera servo history; all of the saturations disappeared (attachment5), and an INITIAL_ALIGNMENT is now running fine.
Not sure what caused the saturations in the camera servos, but it'll be worth looking into whether the saturations are what ended up causing this lockloss.
As shown in the attached figure, the output of the camera servo for PIT suddenly became 1e20 and the lockloss happened 0.3 s later. So this lockloss is likely due to the camera servo. I checked the input signal of the camera servo. The bottom left is the input of PIT1 and should be the same as the BS camera output (bottom right). The BS camera output itself looks OK, but the input of PIT1 became red dashed and the output became 1e20. The situation is the same for PIT2 and PIT3. I am not sure why this happened.
Opened FRS29216. It looks like a transient NaN on the ETMY_Y camera centroid channel caused a latching of 1e20 on the ASC CAM_PIT filter module outputs.
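As an illustration of why clearing the filter history fixed it, here is a small sketch (a generic IIR filter, not the front-end code) showing that a single NaN input sample contaminates the filter's state, so every subsequent output is NaN until the history is cleared:

```python
# Toy demonstration of NaN poisoning a filter history. In the front end the
# stuck outputs show up as the latched 1e20 values described above.
import numpy as np
from scipy.signal import lfilter

b, a = [0.1], [1.0, -0.9]          # simple first-order low-pass stand-in
state = np.zeros(1)                # clean filter history

x = np.ones(10)
x[3] = np.nan                      # one transient NaN on the input

y = np.empty_like(x)
for i, xi in enumerate(x):
    y[i:i + 1], state = lfilter(b, a, [xi], zi=state)

print(y)                           # outputs from sample 3 onward are all NaN
state = np.zeros(1)                # "clearing the history" restores operation
```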
Camilla, Oli
We trended the ASC-CAM_PIT{1,2,3}_INMON/OUTPUT channels alongside ASC-AS_A_DC_NSUM_OUT_DQ (attachment1) and found that, according to ASC-AS_A_DC_NSUM_OUT_DQ, the lockloss started (cursor 1) ~0.37 s before PIT{1,2,3}_INMON read NaN (cursor 2), so the lockloss was not caused by the camera servos.
It is possible that the NaN values are linked to the light dropping off of the PIT2 and PIT3 cameras right after the lockloss: when the cameras come back online ~0.2 s later, both the PIT2 and PIT3 cameras are at 0, and looking back over several locklosses it looks like these two cameras tend to drop to 0 between 0.35 and 0.55 s after the lockloss starts. However, the PIT1 camera is still registering light for another 0.8 s after coming back online (typical for this camera).
Patrick updated the camera server to solve the issue in alog 73228.
Detchar, please tell us if the 1.66Hz comb is back.
We changed the OM2 heater driver configuration from what was described in alog 72061.
We used a breakout board with jumpers to connect all OM2 thermistor readback pins (pin 9, 10, 11, 12, 22, 23, 24, 25) to Beckhoff at the back of the driver chassis. Nothing else (even the DB25 shell on the chassis) is connected to Beckhoff.
Heater voltage inputs (pin 6 for positive and 19 for negative) are connected to the portable voltage reference powered by a DC power supply to provide 7.15V.
BTW, somebody powered the OM2 heater off at some point in time, i.e. OM2 has been cold for some time but we don't know exactly how long.
When we went to the rack, half of the power supply terminal (which we normally use for 9V batteries) was disconnected (1st picture), and there was no power to the heater. Baffling. FYI, if it's not clear, the power terminal should look as in the second picture.
Somebody must have snagged the cables hard enough to disconnect this, and didn't even bother to check.
Next time you do it, since reconnecting is NOT good enough, read alog 72286 to learn how to set the voltage reference to 7.15V and turn off the auto-turn-off function, then do it, and tell me you did it. I promise I will thank you.
Thermistors are working.
There is an OSEM pitch shift of OM2 at the end of the maintenance period 3 weeks ago (Aug 29)
Having turned the heater back on will likely affect our calibration. It's not a bad thing, but it is something to be aware of.
Indeed it now seems that there is a ~5Mpc difference in the range calculations between the front-ends (SNSW) and GDS (SNSC) compared to our last observation time.
It looks like this has brought back the 1.66 Hz comb. Attached is an averaged spectrum for 6 hours of recent data (Sept 20 UTC 0:00 to 6:00); the comb is the peaked structure marked with yellow triangles around 280 Hz. (You can also see some peaks in the production Fscans from the previous day, but it's clearer here.)
To see if one of the Beckhoff terminals for thermistors is kaput, I disconnected thermistor 2 (pins 9, 11, 22 and 24) from Beckhoff at the back of the heater driver chassis.
For a short while the Beckhoff cable itself was disconnected but the cable was connected back to the breakout board at the back of the driver chassis by 20:05:00 UTC.
Thermistor 1 is still connected. Heater driver input is still receiving voltage from the voltage reference.
I checked a 3-hour span starting at 04:00 UTC today (Sept 21) and found something unusual. There is a similar structure peaked near 280 Hz, but the frequency spacing is different. These peaks lie on integer multiples of 1.1086 Hz, not 1.6611 Hz. Plot attached.
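For anyone wanting to repeat the spacing check, here's a rough sketch of the kind of test involved (the peak list below is made up for illustration, not the measured peaks):

```python
# Sketch: see whether a set of peak frequencies lies on integer multiples of a
# candidate comb spacing.
import numpy as np

def comb_rms_residual(peaks_hz, spacing_hz):
    """RMS offset of each peak from its nearest integer multiple of spacing_hz."""
    peaks = np.asarray(peaks_hz, dtype=float)
    n = np.round(peaks / spacing_hz)
    return np.sqrt(np.mean((peaks - n * spacing_hz) ** 2))

peaks = [277.15, 278.26, 279.37, 280.48, 281.59]    # hypothetical peaks near 280 Hz

for spacing in (1.6611, 1.1086):
    print(f"{spacing} Hz spacing: rms residual = {comb_rms_residual(peaks, spacing):.3f} Hz")
# the candidate spacing with the smaller residual describes the comb better
```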
Detchar, please see if there's any change in 1.66Hz comb.
At around 21:25 UTC, I disconnected OM2 thermistor 1 (pins 10, 12, 23, 25 of the cable at the back of the driver chassis) from Beckhoff and connected thermistor 2 (pins 9, 11, 22, 24).
Checked 6 hours of data starting at 04:00 UTC Sept 22. The comb structure persists with spacing 1.1086 Hz.
Electrical grounding of the Beckhoff systems has been modified as a result of this investigation -- see LHO:73233.
Corroborating Daniel's statement that the OM2 heater power supply was disrupted on Tuesday Aug 29th 2023 (LHO:72970), I've zoomed in on the pitch *OSEM* signals for both (a) the suspected time of the power disruption (first attachment), and (b) when the power and function of OM2 were restored (second attachment). One can see that upon power restoration and resumption of the HOT configuration of the TSAMS on 2023-09-19 (time (b) above), OM2 pitch *decreases* by 190 [urad] over the course of ~1 hour, with a characteristic "thermal time constant" exponential shape to the displacement evolution. Heading back to 2023-08-29 (time (a) above), we can see a similarly shaped event that causes OM2 pitch to *increase* by 160 [urad] over the course of ~40 minutes. At the 40-minute mark IFO recovery from maintenance begins, and we see the OM2 pitch *sliders* adjusted to account for the new alignment, as had been done several times before with the OM2 ON vs. OFF state changes. I take this to be consistent with:
- The OM2 TSAMS heater was inadvertently turned OFF and COLD on 2023-08-29 at 18:42 UTC (11:42 PDT), and
- The OM2 TSAMS heater was turned back ON and HOT on 2023-09-19 at 18:14 UTC (11:14 PDT).
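The "thermal time constant" statement can be made quantitative with a simple exponential-step fit; a sketch (with synthetic data standing in for the OSEM trend, not the real signal) looks like:

```python
# Sketch of fitting the exponential thermal-step response described above.
import numpy as np
from scipy.optimize import curve_fit

def thermal_step(t, p0, dp, tau):
    """Pitch relaxing from p0 toward p0 + dp with time constant tau."""
    return p0 + dp * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 3600, 200)                    # seconds after the heater change
pitch = thermal_step(t, 0.0, -190.0, 900.0)      # ~190 urad step, tau ~ 15 min (illustrative)
pitch += np.random.normal(0.0, 2.0, t.size)      # a little OSEM-like noise

(p0, dp, tau), _ = curve_fit(thermal_step, t, pitch, p0=(0.0, -100.0, 600.0))
print(f"fitted step = {dp:.0f} urad, time constant = {tau / 60:.1f} min")
```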