TITLE: 09/19 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY: Average maintenance day, relocked by early afternoon but lost lock after one superevent.
H1 is relocking, currently acquiring DRMI.
LOG:
Start Time | System | Name | Location | Laser Haz | Task | End Time |
---|---|---|---|---|---|---|
15:04 | FAC | Randy | EX, EY | - | Cleanroom panel work | 19:04 |
14:53 | FAC | Kim | EX | - | Technical cleaning | 16:09 |
14:53 | FAC | Karen | EY | - | Technical cleaning | 16:17 |
15:10 | FAC | Ken | Fire Tank | - | Drilling for grounding wire | 15:27 |
15:16 | FAC | Cindi | Mech Rm | - | Technical cleaning | 15:42 |
15:27 | CDS | Fil | CER | - | Scanning power supplies | 16:28 |
15:32 | TCS | TJ, Camilla | CR | - | Driving CO2Y PZT | 15:57 |
15:34 | FAC | Ken | FCES | - | Grabbing equipment | 16:34 |
15:35 | VAC | Gerardo | EX, EY | - | Safety checks | 19:09 |
15:36 | SUS | RyanC | CR | - | Oplev charge measurements | 16:55 |
15:37 | VAC | Jordan | LVEA X-man | - | Hepta line work | 17:58 |
15:39 | CDS | Erik | Remote | - | Picket fence updates | 17:11 |
15:49 | FAC | Christina | LSB/VPW | - | Moving pallets to VPW (forklift) | 18:48 |
15:50 | FAC | Chris | LVEA | - | Safety checks | 17:07 |
15:57 | FAC | Cindi | LVEA | - | Technical cleaning | 16:34 |
16:11 | TCS | Camilla | CR | - | CO2 laser calibration | 17:20 |
16:20 | FAC | Kim, Karen | LVEA | - | Technical cleaning | 18:40 |
16:22 | SEI | Jim | CR | - | HAM2/3 ISI measurements (IMC offline) | 18:09 |
16:30 | CDS | Fil | LVEA, EX, EY | - | Safety checks | 18:32 |
16:30 | SQZ | Sheila | LVEA SQZT7 | LOCAL | Homodyne path alignment | 19:03 |
16:34 | FAC | Cindi | FCES | - | Technical cleaning | 18:20 |
16:45 | CDS | Marc | EX, EY | - | Checking power supplies | 17:19 |
17:07 | FAC | Chris | FCTE | - | Take down lighting | 18:19 |
17:22 | CDS | Marc, Fernando | CER | - | Cable tracing | 18:47 |
17:24 | EPO | Oli, MJ | LVEA | - | Taking pictures | 18:05 |
17:33 | SQZ | Camilla | LVEA SQZT7 | LOCAL | Homodyne path alignment | 19:03 |
17:59 | VAC | Janos | LVEA, EX, EY | - | GV signage | 19:07 |
18:02 | SQZ | Daniel, Keita | LVEA | - | - | 18:21 |
18:20 | FAC | Chris | LVEA | - | Put up fire extinguisher | 18:37 |
18:27 | ISC | TJ, RyanC | CR | - | Testing ALS crystal fix | 18:59 |
18:56 | CDS | Fil | FCES, MER | - | Safety labeling | 19:20 |
19:03 | SAF | TJ, RyanC | LVEA | - | Sweep | 19:16 |
19:04 | CDS | Dave | Remote | - | DAQ restart | 19:20 |
21:10 | TCS | Camilla | CR | - | Inject 45Hz line with CO2Y | 21:28 |
After testing in 72959, TJ and I injected a 45Hz G=30 sine wave into H1:TCS-ITMY_CO2_PZT_OUT_GAIN_EXC while we waited for the ADS lines to converge before going into Observing. Excitation on from 2023/09/19 21:14:14 UTC and Gain = 30 from 21:15:30 UTC to 21:21:08 UTC.
We could see this line and some harmonics in DARM, plot attached. Daniel points out that this isn't useful unless we understand how much of a signal we are injecting...
I didn't save the signal on the H1:TCS-ITMX_CO2_LSRPWR_{HD,MTR}_OUT channels (they are not DQ'ed), and we will also need to understand how much of a 45Hz signal these thermal power meters can actually convert and see.
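As a rough way to bound how much of the 45Hz line a thermal power meter could pass, one could treat the sensor as a first-order low-pass; a minimal sketch, where the ~1 s time constant is purely a placeholder guess and not a measured value for these heads:

```python
import numpy as np

# Hypothetical thermal time constant of the power meter head (seconds);
# the real value would have to come from the sensor datasheet or a step test.
tau = 1.0

f_line = 45.0  # Hz, frequency of the injected CO2 PZT line

# First-order low-pass response: |H(f)| = 1 / sqrt(1 + (2*pi*f*tau)^2)
attenuation = 1.0 / np.sqrt(1.0 + (2.0 * np.pi * f_line * tau) ** 2)

print(f"Fraction of a {f_line} Hz power modulation seen by the meter: {attenuation:.2e}")
```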
Lockloss @ 22:30 UTC - after 1:20 lock stretch, no obvious cause. Lockloss tool is not updating online.
Accepted SDF for H1:SQZ-SHG_SERVO_IN1GAIN as 5. It gets set to 5 in the SQZ_SHG guardian DOWN script but was manually set to -9 last week during the SQZ troubles to try to improve stability (72794). Vicky says that we may get more green transmitted power with -9 gain, but it shouldn't matter.
WP11424 New Workstation Conda Environment
Erik:
Erik installed a new python3.10 conda env on all the workstations. Please see Erik's alog for details.
WP11431 Upgrade of Picket Fence EPICS IOC
Erik, Dave:
Erik installed a new version of the picket fence IOC which adds meta channels for the picket fence service and status channels for the IOC itself.
I upgraded generate_picket_fence_medm.py to add these new channels to the MEDM, and added code to generate the H1EPICS_PICKET_FENCE.ini DAQ ini file.
The INI file lists all the non-string PVs in alphabetical order. A DAQ+EDC restart was required.
The new MEDM (attached) has the extended SERVER section and a new IOC section in the lower right. The IOC is color coded blue to distinguish it from the Seismic data.
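For illustration only, a minimal sketch of how the ini-generation step described above could write non-string PVs into a DAQ ini file in alphabetical order. The channel names and the default datarate/datatype fields here are assumptions, not the actual contents of H1EPICS_PICKET_FENCE.ini or of generate_picket_fence_medm.py:

```python
# Hypothetical channel list; the real script would build this from the IOC database.
pv_list = [
    "H1:CDS-PICKET_FENCE_STATUS",
    "H1:CDS-PICKET_FENCE_LATENCY",
    "H1:CDS-PICKET_FENCE_UPTIME",
]

with open("H1EPICS_PICKET_FENCE.ini", "w") as ini:
    ini.write("[default]\n")
    ini.write("datarate=16\n")   # assumed slow-channel rate
    ini.write("datatype=4\n\n")  # assumed float type
    for pv in sorted(pv_list):   # non-string PVs, alphabetical order
        ini.write(f"[{pv}]\n\n")
```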
WP11433 Add CDS EPICS LOAD MON to FOM Nuc machines
Jonathan, Erik, Dave:
Jonathan started the EPICS_LOAD_MON IOC on all of the FOM Nuc machines in the control room. Erik installed these via puppet.
I modified generate_host_stats_medm.py to add these new machines to the MEDM (attached) and the H1EPICS_CDSMON.ini DAQ file. DAQ+EDC restart was required.
DAQ Restart
Dave, Jonathan, Erik:
This was a messy DAQ restart.
We restarted the 0-leg first and waited to see if FW0 would spontaneously restart itself after running for a few minutes, and indeed it did.
The EDC was then restarted on h1susauxb123.
After FW0 had come back, we then restarted the 1-leg. GDS1 required a second restart to resync the channel lists.
After a few minutes of running, FW1 also spontaneously restarted itself, and then a few minutes later did a second spontaneous restart.
DAQ Frame Writers' Missing Frames Due To Restarts:
FW0 Missing Frames [1379185408, 1379185472, 1379185664, 1379185728]
FW1 Missing Frames [1379185984, 1379186048, 1379186240, 1379186304, 1379186496, 1379186560, 1379186624]
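Assuming the usual 64-second raw frame length, a quick sketch of how the missing-frame lists above can be grouped into contiguous data gaps (GPS seconds); this is just illustrative bookkeeping, not the actual accounting tool:

```python
FRAME_LEN = 64  # seconds per raw frame file (assumed)

fw0_missing = [1379185408, 1379185472, 1379185664, 1379185728]
fw1_missing = [1379185984, 1379186048, 1379186240, 1379186304,
               1379186496, 1379186560, 1379186624]

def gaps(missing, frame_len=FRAME_LEN):
    """Group consecutive missing frame start times into (start, end) GPS gaps."""
    out = []
    for gps in sorted(missing):
        if out and gps == out[-1][1]:
            out[-1] = (out[-1][0], gps + frame_len)  # extend the current gap
        else:
            out.append((gps, gps + frame_len))       # start a new gap
    return out

for name, miss in [("FW0", fw0_missing), ("FW1", fw1_missing)]:
    for start, end in gaps(miss):
        print(f"{name}: missing data {start} - {end} ({end - start} s)")
```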
Tue19Sep2023
LOC TIME HOSTNAME MODEL/REBOOT
12:03:52 h1daqdc0 [DAQ] <<< 0-leg restart
12:04:03 h1daqfw0 [DAQ]
12:04:04 h1daqnds0 [DAQ]
12:04:04 h1daqtw0 [DAQ]
12:04:12 h1daqgds0 [DAQ]
12:05:22 h1susauxb123 h1edc[DAQ] <<< EDC restart for CDSMON and PICKET_FENCE channels
12:08:50 h1daqfw0 [DAQ] <<< FW0 crash and restart
12:13:27 h1daqdc1 [DAQ] <<< 1-leg restart
12:13:38 h1daqfw1 [DAQ]
12:13:39 h1daqtw1 [DAQ]
12:13:40 h1daqnds1 [DAQ]
12:13:48 h1daqgds1 [DAQ]
12:14:17 h1daqgds1 [DAQ] <<< GDS1 needed a second restart
12:18:15 h1daqfw1 [DAQ] <<< FW1 crash and restart
12:22:53 h1daqfw1 [DAQ] <<< FW1 crash and restart
Sheila, Camilla.
In 72604, Sheila and Vicky found the quantum efficiency of homodyne PD-B was ~5% lower than expected and suspected that the beam was not well aligned.
We tried to align the beam better, details below, but couldn't increase the current measured by PDB. The homodyne has been left in a configuration that isn't aligned to PDA.
We removed the steering mirror after the HD BS and adjusted the angle of the homodyne to be closer to normal to the beam. We then tried adding a 50mm ROC lens, and then a 25mm one, directly before PDB to reduce the beam size. Each time, after realigning, we only got back to our starting value of ~0.56mA on H1:SQZ-HD_B_DC_OUT.
Re-taking the measurements roughly with the Thorlabs power meter didn't replicate the QE difference that Sheila and Vicky measured; see below. We could try swapping to the spare HD.
Measurement | PDA | PDB |
---|---|---|
Distance between BS and PD | 6.25" | 6.0" |
Thorlabs Power Meter (not the calibrated Ophir PM) | 0.6mW | 0.576mW |
H1:SQZ-HD_{A,B}_DC_OUT | 0.554mA | 0.565mA |
Fudge Factor in H1:SQZ-HD_{A,B}_DC_GAIN | -1.09 | 1.11 |
H1:SQZ-HD_{A,B}_DC_OUT with fudge factor removed | 0.508mA | 0.509mA |
Responsivity | 0.847A/W | 0.884A/W |
QE (??), taken as Responsivity/0.8582 from 72604 | 98.7% | 100.3% ?? |
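The responsivity and QE rows follow directly from the rows above; a quick sketch of that arithmetic (numbers copied from the table, ideal 1064 nm responsivity of 0.8582 A/W taken from 72604). Note it gives ~98.7% for PDA but ~103% for PDB, which may be why the table entry carries the "??":

```python
# DC photocurrents with the DC_GAIN fudge factors removed (mA)
current_mA = {"PDA": 0.508, "PDB": 0.509}
# Thorlabs power meter readings (mW); not the calibrated Ophir meter
power_mW = {"PDA": 0.600, "PDB": 0.576}

R_IDEAL = 0.8582  # A/W, responsivity of a 100% QE photodiode at 1064 nm (per 72604)

for pd in ("PDA", "PDB"):
    responsivity = current_mA[pd] / power_mW[pd]  # A/W, since mA/mW
    qe = responsivity / R_IDEAL
    print(f"{pd}: responsivity = {responsivity:.3f} A/W, QE = {qe:.1%}")
```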
Maintenance activities have finished, VEAs have been swept, and H1 has started lock acquisition.
Starting initial alignment; ALS alignment is bad.
Looked pretty good; a few things we weren't sure about: an OPLEV power supply and a PEM shaker by GV5. We also took a picture of the TSAMS setup and where it was connected.
Per WP11435 I performed a health check on the Kepco power supplies at EY, EX, and CER.
EY -All supplies pass, no excess vibrations, no sounds, good airflow, no temperature issues.
EX - All supplies pass, no excess vibrations, no sounds, good airflow, no temperature issues.
CER - All but one supply passes; no excess vibrations, no sounds, good airflow, no temperature issues.
** ISC C1 - Slot 38 RHS N18V 6A: 99F at fan housing, 105F at rear, very slight fan stumbling / rumbling. We need to keep an eye on this one; a spare supply is staged.
While heading out to EX I spied some wildlife very close to the OSB. Coyote attacks on people are very rare. If a coyote approaches you: make noise, clap your hands, whistle, and it will move along.
Detchar, please tell us if the 1.66Hz comb is back.
We changed the OM2 heater driver configuration from what was described in alog 72061.
We used a breakout board with jumpers to connect all OM2 thermistor readback pins (pins 9, 10, 11, 12, 22, 23, 24, 25) to Beckhoff at the back of the driver chassis. Nothing else (not even the DB25 shell on the chassis) is connected to Beckhoff.
Heater voltage inputs (pin 6 for positive and 19 for negative) are connected to the portable voltage reference powered by a DC power supply to provide 7.15V.
BTW, somebody powered the OM2 heater off at some point in time, i.e. OM2 has been cold for some time but we don't know exactly how long.
When we went to the rack, half of the power supply terminal (which we normally use for 9V batteries) was disconnected (1st picture), and there was no power to the heater. Baffling. FYI, if it's not clear, the power terminal should look like the second picture.
Somebody must have snagged the cables hard enough to disconnect them, and didn't even bother to check.
Next time this happens, since just reconnecting is NOT good enough, read alog 72286 to learn how to set the voltage reference to 7.15V and turn off the auto-turn-off function, then do it, and tell me you did it. I promise I will thank you.
Thermistors are working.
There is an OSEM pitch shift of OM2 at the end of the maintenance period 3 weeks ago (Aug 29)
Having turned the heater back on will likely affect our calibration. It's not a bad thing, but it is something to be aware of.
Indeed it now seems that there is a ~5Mpc difference in the range calculations between the front-ends (SNSW) and GDS (SNSC) compared to our last observation time.
It looks like this has brought back the 1.66 Hz comb. Attached is an averaged spectrum for 6 hours of recent data (Sept 20 UTC 0:00 to 6:00); the comb is the peaked structure marked with yellow triangles around 280 Hz. (You can also see some peaks in the production Fscans from the previous day, but it's clearer here.)
To see if one of the Beckhoff terminals for thermistors is kaput, I disconnected thermistor 2 (pins 9, 11, 22 and 24) from Beckhoff at the back of the heater driver chassis.
For a short while the Beckhoff cable itself was disconnected but the cable was connected back to the breakout board at the back of the driver chassis by 20:05:00 UTC.
Thermistor 1 is still connected. Heater driver input is still receiving voltage from the voltage reference.
I checked a 3-hour span starting at 04:00 UTC today (Sept 21) and found something unusual. There is a similar structure peaked near 280 Hz, but the frequency spacing is different. These peaks lie on integer multiples of 1.1086 Hz, not 1.6611 Hz. Plot attached.
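A minimal sketch of the kind of check used to decide whether the peaks lie on integer multiples of 1.1086 Hz rather than 1.6611 Hz; the peak frequencies below are made-up placeholders, not the measured ones:

```python
import numpy as np

# Hypothetical peak frequencies (Hz) read off the averaged spectrum near 280 Hz.
peaks = np.array([277.15, 278.26, 279.37, 280.48, 281.59])

def comb_residual(peaks, spacing):
    """RMS distance of each peak from the nearest integer multiple of `spacing`."""
    harmonics = np.rint(peaks / spacing)
    return np.sqrt(np.mean((peaks - harmonics * spacing) ** 2))

for spacing in (1.6611, 1.1086):
    print(f"spacing {spacing} Hz: rms residual = {comb_residual(peaks, spacing):.4f} Hz")
```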
Detchar, please see if there's any change in 1.66Hz comb.
At around 21:25 UTC, I disconnected OM2 thermistor 1 (pins 10, 12, 23, 25 of the cable at the back of the driver chassis) from Beckhoff and connected thermistor 2 (pins 9, 11, 22, 24).
Checked 6 hours of data starting at 04:00 UTC Sept 22. The comb structure persists with spacing 1.1086 Hz.
Electrical grounding of the Beckhoff systems has been modified as a result of this investigation -- see LHO:73233.
Corroborating Daniel's statement that the OM2 heater power supply was disrupted on Tuesday Aug 29th 2023 (LHO:72970), I've zoomed in on the pitch *OSEM* signals for both (a) the suspected time of the power disruption (first attachment) and (b) the time when the power and function of OM2 was restored (second attachment). One can see that upon power restoration and resuming the HOT configuration of TSAMS on 2023-09-19 (time (b) above), OM2 pitch *decreases* by 190 [urad] over the course of ~1 hour, with a characteristic "thermal time constant" exponential shape to the displacement evolution. Heading back to 2023-08-29 (time (a) above), we can see a similarly shaped event that causes OM2 pitch to *increase* by 160 [urad] over the course of ~40 minutes. Then, 40 minutes in, IFO recovery from maintenance begins, and we see the OM2 pitch *sliders* adjusted to account for the new alignment, as had been done several times before with the OM2 ON vs. OFF state changes. I take this to be consistent with: the OM2 TSAMS heater was inadvertently turned OFF and COLD on 2023-08-29 at 18:42 UTC (11:42 PDT), and the OM2 TSAMS heater was restored ON and HOT on 2023-09-19 18:14 UTC (11:14 PDT).
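A sketch of how one could confirm the "thermal time constant" shape by fitting a single-pole exponential step to the OM2 pitch OSEM trend; the data here are synthetic stand-ins, and fetching the real OSEM channel is left out:

```python
import numpy as np
from scipy.optimize import curve_fit

def thermal_step(t, p0, dp, tau):
    """Pitch response to a heater step: offset p0, total change dp, time constant tau."""
    return p0 + dp * (1.0 - np.exp(-t / tau))

# Synthetic stand-in for the OM2 pitch OSEM trend after the heater turn-on (urad vs s).
t = np.linspace(0, 3600, 361)
pitch = thermal_step(t, 0.0, -190.0, 900.0) + np.random.normal(0, 2.0, t.size)

popt, _ = curve_fit(thermal_step, t, pitch, p0=(0.0, -150.0, 600.0))
print(f"fitted step: {popt[1]:.0f} urad, time constant {popt[2]:.0f} s")
```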
This morning I ran the OPLEV charge measurement for both of the ETMs. ETMY saw some overflows during the measurement.
ETMX seems to be trending downward towards zero but most of the quads are above +/- 50 [V]; ETMY seems stable and none of the quads are above +/- 50 [V].
Tue Sep 19 10:05:33 2023 INFO: Fill completed in 5min 30secs
Note, cooler outside temps mean the TCs did not saturate at -200C.
After we found in 72943 that the CO2X power has been degrading, I recalibrated the CO2 rotation stages this morning. This was last done in March (66724).
Added a README.txt file to /opt/rtcds/userapps/release/tcs/h1/scripts/RS_calibration/ with instructions.
Also adjusted lscparams to output the existing values. The plan is to relock with yesterday's output and then, during lock, adjust CO2X to output 1.68W as at the start of O4.
tcs_nom_annular_pwr = {'X': 0.95,  # change back to 1.03 after alog72943 test
                       'Y': 1.19}
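For reference, the rotation-stage calibration is essentially a fit of output power versus stage angle and an inversion of that fit to request a target power. A sketch of that kind of fit, assuming the usual sin^2 (Malus-law) model for a waveplate on a rotation stage and entirely made-up data; the actual RS_calibration script is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def rs_power(theta_deg, p_min, p_max, theta0_deg):
    """Transmitted power vs rotation-stage angle for a half-wave plate + polarizer."""
    theta = np.radians(theta_deg - theta0_deg)
    return p_min + (p_max - p_min) * np.sin(2.0 * theta) ** 2

# Made-up sweep of stage angle (deg) and measured laser power (W).
angles = np.linspace(0, 90, 19)
power = rs_power(angles, 0.02, 55.0, 4.0) + np.random.normal(0, 0.2, angles.size)

popt, _ = curve_fit(rs_power, angles, power, p0=(0.0, 50.0, 0.0))
p_min, p_max, theta0 = popt
print(f"fit: P_min={p_min:.2f} W, P_max={p_max:.2f} W, theta0={theta0:.2f} deg")

# Invert the fit to find the angle for a requested power, e.g. the 1.68 W target above:
target = 1.68  # W
theta_req = theta0 + np.degrees(0.5 * np.arcsin(np.sqrt((target - p_min) / (p_max - p_min))))
print(f"angle for {target} W: {theta_req:.2f} deg")
```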