TITLE: 09/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.24 μm/s
QUICK SUMMARY:
H1 was unlocked and had been stuck locking the green arms for 3 hours.
I'm not sure why yet but I will be trying to lock and will probably find out.
I've requested an initial alignment since there was an earthquake last night.
Both ALSX & ALSY locked easily enough (61% for X and 95% for Y), though ALSX DOF3 just nose-dived for some reason.
But eventually we got to PRC aligning without intervention, so I guess it's all working well.
I happened to be unable to sleep, checked on H1, and saw that it looked like a rough day after Maintenance. I also saw that H1 went immediately into the NLN_CAL_MEAS state and the GRD IFO was set to COMMISSIONING; H1 was also in Managed Operation.
I didn't see any notes in the alog about staying out of observing for the night, though Ibrahim noted there were some calibration issues.
So I took H1 to Observing. (There was an SDF diff for ISC EY; accepted, snapshot attached.)
But then there was also an EQ alert from Guatemala, which took us down.
I'm still unclear on the plan for the night. I am setting ISC_LOCK to NLN, leaving H1 in Automatic Operation, and taking GRD IFO back to Observing (vs. Commissioning). Going to try to sleep.
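For reference, a minimal sketch of how these Guardian requests can be made from a Python prompt with pyepics (a sketch only; the usual route is the Guardian MEDM screens, and it assumes the standard H1:GRD-<NODE>_REQUEST channels and that NLN corresponds to the NOMINAL_LOW_NOISE state):

from epics import caput

# request NLN from the ISC_LOCK node and take the IFO node back to Observing
caput('H1:GRD-ISC_LOCK_REQUEST', 'NOMINAL_LOW_NOISE')
caput('H1:GRD-IFO_REQUEST', 'OBSERVING')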
TITLE: 09/25 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Calibration
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
IFO is in MAINTENANCE and RELOCKING at FIND_IR
Shift was dominated by locking and calibration issues. Fought ALS to get to NLN, then lost lock during impromptu CAL maintenance (though it seems to be unrelated to the maintenance) right before shift end.
ALS Issues:
ALSY is still having locking issues. Thankfully, Keita was on-site when this happened and was able to help me out of it by shifting an offset (COMM Offset (V)) in ALS_CUST_CM_REFL (screenshot attached). This visibly allowed ALSY to catch, but the offset had to be moved to the end of its slider range (-10 V). The issue wasn't the counts, as it had been earlier today, but that ALSY was not locking at the maximum of its counts (0.83 cts, but locking closer to 0.6 and sometimes at 0.2, which is likely a different higher-order mode).
Weirder yet, after fixing this, neither PRMI nor DRMI was able to lock, which prompted me to start an initial alignment. The initial alignment reset the offset from -10 V back to the nominal -0.025 V, BUT ALS was still able to lock, and quite quickly. In fact, initial alignment was fully automated, and so was normal locking, all the way up to NLN!
I'll keep the offset at its normal -0.025 V instead of Keita's -10 V fix since we didn't end up needing it.
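As a note for reverting this kind of change by hand, a minimal pyepics sketch (the channel name below is a placeholder, not the real ALS_CUST_CM_REFL EPICS name; check the MEDM screen before writing anything):

from epics import caget, caput

OFFSET_CH = 'H1:ALS-Y_REFL_COMM_OFFSET'  # placeholder name for the COMM Offset (V) slider channel
print('current offset (V):', caget(OFFSET_CH))
caput(OFFSET_CH, -0.025)  # nominal value; the temporary fix had pushed this to the -10 V end of the slider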
Other than that, Keita went into commissioning very briefly (<2 min) to make/revert an ALSY change which I believe is related to alog 808280. I've attached the accepted SDF diff.
Calibration Issues:
Louis (and later, Joe B) got in contact with the LHO control room while I was powering up and asked to reset the calibration due to a very off/non-nominal calibration. Keita approved 1 hr of CALIBRATION_MAINTENANCE. This was done, but in the process we had to reload the GDS and reset the DTT, neither of which caused any issues. We attempted to run a broadband measurement (cal monitor attached), but weirdly enough the CAL lines did not disappear when I entered NLN_CAL_MEAS. By this time I had already run the BB, but was told by calibration to cancel it. Cancelling it via Ctrl-C and closing the terminal didn't work, so we tried to wait until the measurement was done. This did not work either, and it ran over the 5 min time by another 5 minutes. At this point, we went the awg line-clear route and forced the lines to close. So now we are back to the earlier calibration state, which has a bad 15% error. Louis and Joe B are attempting to revert it to a better state, which has <10% error. They had difficulties with GDS, so they had to reload coefficients again, resulting in a downed DTT again (for 12 mins). This completed successfully! Riding on this great luck, we lost lock 5 mins later...
Other:
LOG:
None
Lockloss during the calibration broadband sweep during the impromptu EVE shift cal maintenance. The maintenance is unlikely to be the cause. IY saturated.
Lockloss during Post-Event Standdown. Unknown cause, not environmental. Relocking now but having difficulties with both ALSY and ALSX.
Since ALS (both X and Y) has long been sub-optimal, to say the least, and since ALS difficulty reportedly accounts for a large part of our relocking time, I started investigating ALSY.
I didn't have time to finish anything and in the end I reverted everything back for now. I'll make similar measurements again next Tuesday for ALSX.
At 23:48:02 UTC, I incremented H1:CAL-CALIB_REPORT_ID_INT, held it for one second, then decremented it back to its original value, to force a save of the current CALIB_REPORT hash values, which had previously been blocked by a full drive.
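For reference, a minimal pyepics sketch of that increment/decrement trick (a sketch, not the exact commands I ran):

import time
from epics import caget, caput

ch = 'H1:CAL-CALIB_REPORT_ID_INT'
orig = caget(ch)
caput(ch, orig + 1)  # bump the value to trigger a save of the CALIB_REPORT hash values
time.sleep(1)        # hold the incremented value for one second
caput(ch, orig)      # restore the original value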
TITLE: 09/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147 Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: The LVEA remains in laser hazard. Maintenance ended a little late due to SR3 issues, and locking was also a small struggle as ALS-Y wouldn't lock because it had low IR and green power. After the ALS-Y issue was solved, relocking has been great.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD | 18:24 |
14:34 | FAC | Chris | Weldshop | N | Housekeeping, move forklift | 17:06 |
14:54 | FAC | Kim | EndX | Y | Tech clean | 16:27 |
14:54 | FAC | Karen | EndY | N | Tech clean | 15:53 |
14:58 | FAC | Nelly | HAM shack | N | Tech clean | 15:59 |
15:00 | CAL | Tony | PCAL lab | Y | PCAL work | 15:27 |
15:04 | CAL | Dripta | PCAL lab | Y | Check with Tony | 15:27 |
15:05 | ALS | Keita | EndY | N | Mode matching investigation | 19:38 |
15:19 | SUS | Jason, Oli | LVEA | Y | SR3 OPLEV centering, dead laser :( and translation stage | 20:03 |
15:19 | FAC | Eric, Tyler | Fire pump | N | Fire pump testing | 16:04 |
15:34 | CAL | Tony, Dripta | EndX | Y | PCAL measurement | 19:09 |
15:37 | FAC | Betsy | LVEA | Y | Talk to Jason | 15:48 |
15:47 | SQZ | Sheila | SQZ0 | Y | Pump align | 18:13 |
16:04 | FAC | Eric | Mech room | N | Heater coil repair | 18:35 |
16:09 | SEI | Jim | Office/EndX remote | N | BRS-X adjustments | 17:55 |
16:16 | VAC | Travis | LVEA | Y | Check feedthroughs | 16:51 |
16:21 | FAC | Richard | LVEA | Y | Check with Jason | 16:37 |
16:26 | EE | Marc & Fernando | EndX | Y | SUS DAC checks | 17:13 |
16:27 | FAC | Kim, Karen | Ham Shack | N | Tech clean | 17:09 |
17:09 | FAC | Kim & Karen | LVEA | Y | Tech clean | 18:07 |
17:12 | EE | Fil | MSR | N | Looking under the floors, wire routing for strobe light | 20:18 |
17:12 | FAC | Christina | Receiving | N | Forklift | 17:46 |
17:12 | FAC | Tyler | LVEA, Mids | Y | 3IFO checks | 17:49 |
17:36 | FAC | Chris | LVEA | Y | Load bearing labels for storage racks | 18:32 |
18:06 | PSL | Fernando | PSL racks | N | Checks | 18:25 |
18:12 | VAC | Janos, Jordan, Travis | FCT | N | Push a clean room to the FCES | 18:34 |
18:24 | SEI | Jim | CR | N | ETMX BLND checks/tests | 19:11 |
18:59 | OPS | RyanC | LVEA | Y | Run out FARO laptop to Jason | 19:12 |
19:43 | ALS | Keita | EndX | N | Take pictures of racks | 20:06 |
21:57 | ALS | Keita | EndY | Y | ALS-Y table adjustment | 22:33 |
22:33 | CDS | Tony | EndX | N | WIFI investigation | 23:10 |
Busy maintenance day; the completed work includes:
Team SR3 was the last one out and made sure the lights were off
Lock#1:
TITLE: 09/24 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 6mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.29 μm/s
QUICK SUMMARY:
IFO is in MAINTENANCE and MOVE_SPOTS
After Keita fixed the ALSY locking issues, we were able to automatically finish initial_alignment and are currently on track for a fully auto NLN acquisition.
WP12093 Add h1iopsusex new LIGO DAC to the SWWD
Ryan C, Erik, Dave:
A new h1iopsusex model was built and installed using the custom rcg-9eb07c3200.
The new IOP model was installed using the procedure:
We decided to reboot rather than just restart the models because in the past this had not worked and we ended up rebooting anyway. Since the IOP restart necessitated a full model restart, rebooting did not slow the process down much.
Now if the software watchdog (SWWD) on h1iopsusex were to initiate a local DACKILL, it will kill all six local DACs: three 18-bit, two 20-bit, and the one LIGO 28-bit.
WP12101 Reboot h1digivideo servers
Erik, Dave:
Due to a slow memory leak in the old digivideo servers h1digivideo[0,1,2], after an uptime of about 7 months h1digivideo2's memory usage had crept up to 95% (2024 trend attached).
We rebooted all three machines at 12:22 following the conclusion of h1alsey work (the h1alsey model receives ITMY camera data from h1digivideo2).
At time of writing the memory usages are: 0=20%, 1=22%, 2=18%.
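For reference, a minimal sketch of the kind of memory check involved (assuming psutil is available on the digivideo hosts; this is not the actual monitoring tool):

import psutil

mem = psutil.virtual_memory()
print(f'memory used: {mem.percent:.0f}%')  # h1digivideo2 had crept up to ~95% before the reboot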
Tue24Sep2024
LOC TIME HOSTNAME MODEL/REBOOT
12:23:23 h1susex h1iopsusex
12:23:36 h1susex h1susetmx
12:23:49 h1susex h1sustmsx
12:24:02 h1susex h1susetmxpi
Dr. Dripta and I went to End X to do an End Station measurement.
We followed a modified version of the T1500062 procedure that had extra pages available for taking a few more optical efficiency measurements.
Beam Spot before we started looked good.
We did our normal procedure and then did a little jumping around to minimize the number of times WSH(PS4) was passed under the beam tube.
The extra-long optical efficiency measurements used the same configuration that we normally have for our optical efficiency measurements, but the duration was 900 seconds. That analysis is still pending.
Beam spots after we were finished also looked good.
Analysis:
Martel tests looked a little lower than normal at 4 V.
WS @ TX png
WS @ RX png
WS @ RX Both Beams
WS "PS4" --date "2024-09-23"
Reading in config file from python file in scripts
../../../Common/O4PSparams.yaml
PS4 rho, kappa, u_rel on 2024-09-23 corrected to ES temperature 300.6 K :
-4.70978667092155 -0.0002694340454223 4.121866697713714e-05
Copying the scripts into tD directory...
Connected to nds.ligo-wa.caltech.edu
martel run
reading data at start_time: 1411229815
reading data at start_time: 1411230420
reading data at start_time: 1411230760
reading data at start_time: 1411234174
reading data at start_time: 1411234615
reading data at start_time: 1411234975
reading data at start_time: 1411237284
reading data at start_time: 1411237906
reading data at start_time: 1411238249
Ratios: -0.4623501228701653 -0.46585957548893536
writing nds2 data to files
finishing writing
Background Values:
bg1 = 8.765046; Background of TX when WS is at TX
bg2 = 5.318545; Background of WS when WS is at TX
bg3 = 8.952192; Background of TX when WS is at RX
bg4 = 5.319887; Background of WS when WS is at RX
bg5 = 8.841055; Background of TX
bg6 = 0.521412; Background of RX
The uncertainty reported below are Relative Standard Deviation in percent
Intermediate Ratios
RatioWS_TX_it = -0.462350;
RatioWS_TX_ot = -0.465860;
RatioWS_TX_ir = -0.456872;
RatioWS_TX_or = -0.460592;
RatioWS_TX_it_unc = 0.091320;
RatioWS_TX_ot_unc = 0.083249;
RatioWS_TX_ir_unc = 0.090438;
RatioWS_TX_or_unc = 0.091453;
Optical Efficiency
OE_Inner_beam = 0.988290;
OE_Outer_beam = 0.988868;
Weighted_Optical_Efficiency = 0.988579;
OE_Inner_beam_unc = 0.059378;
OE_Outer_beam_unc = 0.058135;
Weighted_Optical_Efficiency_unc = 0.083099;
Martel Voltage fit:
Gradient = 1636.760986;
Intercept = 0.654137;
Power Imbalance = 0.992467;
Endstation Power sensors to WS ratios::
Ratio_WS_TX = -1.077343;
Ratio_WS_RX = -1.390443;
Ratio_WS_TX_unc = 0.053215;
Ratio_WS_RX_unc = 0.043612;
=============================================================
============= Values for Force Coefficients =================
=============================================================
Key Pcal Values :
GS = -5.135100; Gold Standard Value in (V/W)
WS = -4.709787; Working Standard Value
costheta = 0.988362; Angle of incidence
c = 299792458.000000; Speed of Light
End Station Values :
TXWS = -1.077343; Tx to WS Rel responsivity (V/V)
sigma_TXWS = 0.000573; Uncertainity of Tx to WS Rel responsivity (V/V)
RXWS = -1.390443; Rx to WS Rel responsivity (V/V)
sigma_RXWS = 0.000606; Uncertainity of Rx to WS Rel responsivity (V/V)
e = 0.988579; Optical Efficiency
sigma_e = 0.000822; Uncertainity in Optical Efficiency
Martel Voltage fit :
Martel_gradient = 1636.760986; Martel to output channel (C/V)
Martel_intercept = 0.654137; Intercept of fit of Martel to output (C/V)
Power Loss Apportion :
beta = 0.998895; Ratio between input and output (Beta)
E_T = 0.993723; TX Optical efficiency
sigma_E_T = 0.000413; Uncertainity in TX Optical efficiency
E_R = 0.994823; RX Optical Efficiency
sigma_E_R = 0.000413; Uncertainity in RX Optical efficiency
Force Coefficients :
FC_TxPD = 7.889514e-13; TxPD Force Coefficient
FC_RxPD = 6.183572e-13; RxPD Force Coefficient
sigma_FC_TxPD = 5.348574e-16; TxPD Force Coefficient
sigma_FC_RxPD = 3.744092e-16; RxPD Force Coefficient
data written to ../../measurements/LHO_EndX/tD20240924/
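As a sanity check on the numbers above, the weighted optical efficiency is consistent with weighting the inner- and outer-beam efficiencies by the measured power imbalance (my assumption about the weighting, not necessarily what the script does):

OE_inner, OE_outer = 0.988290, 0.988868
imbalance = 0.992467  # Power Imbalance value reported above
OE_weighted = (OE_inner + imbalance * OE_outer) / (1 + imbalance)
print(OE_weighted)    # ~0.98858, matching Weighted_Optical_Efficiency = 0.988579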
Things to note about this measurement.
A: Long measurement, Analysis pending.
B: There was a maintenance day item for FAC - [CR] Raise the chilled water set point (Tyler, E. Otterman) WP12102, which could cause optics to "move more than usual". I don't think this had much impact on us.
This adventure was brought to you by Dr. Dripta & Tony.
ALS-Y is not locking, and the dither align scripts did not help. Trending OSEMs for the PRC, SRC, BS, and test masses did not help either; things are in the same spot, or adjusting did nothing.
Keita is returning to EndY to make some adjustments on the table; IR and green power dropped about 3 hours ago.
J. Oberling, O. Patane
Today we started to re-center the SR3 optical lever after SR3 alignment was reverted to its pre-April alignment. That's not quite how it went down, however...
We started by hooking up the motor driver and moving the QPD around (via the crossed translation stages it is attached to), and could not see any improvement in the OpLev signal. While we were moving the horizontal translation stage, it suddenly stopped and started making a loud grinding noise, like it had hit its limit (or some limit). Not liking the sound of that, we set about figuring out fall protection so we could climb on top of HAM4 to investigate. While the fall protection was getting figured out, we took a look at the laser and found it dead. No light, no life, all dead. So we grabbed a spare laser from the Optics Lab and installed it (did not turn it on yet).
Once the fall protection was figured out I climbed on top of HAM4 and opened the OpLev receiver. I couldn't visually see anything wrong with the stage. It was near the center of its travel range, and nothing else looked like it was hung up. I removed the QPD plate and the vertically mounted translation stage to get a better view of the stuck stage, and could still see nothing wrong. Oli tried moving the stage with the driver and it was still making the loud noise, and the stage was not moving. So it was well and truly stuck. We grabbed one of the two spare translation stages from the EE shop (where Fernando was testing the remote OpLev recentering setup), tested it to make sure it worked (it did!), and installed it in the SR3 OpLev receiver. The whole receiver was reassembled and the laser was turned on. Oli slowly turned up the laser power while I watched for the beam, and once it was bright enough Oli then moved the translation stages to roughly center it on the QPD.
Something interesting: as Oli was turning up the laser power it would occasionally flash bright and then return to the brightness it was at before the flash. They got it bright enough to see a SUM count of ~3k, and then re-centered the OpLev. At this point I closed up the receiver and came down from the chamber. I turned the laser power up to return the SUM counts to the ~20k they were at before the SR3 alignment shift and saw the SUM counts jump just like the beam would flash. This happened early in the power adjustment (for example: started at ~3k SUM, adjusted up and saw a flash to ~15k, then back down to ~6k) but leveled off once the power was higher (I saw no jumps once the SUM counts were above 15k or so). Maybe some oddness with a low injection current for the laser diode? Not sure. The OpLev is currently reading ~20k SUM counts and looks OK, but we'll keep an eye out to see if it remains stable or starts behaving oddly.
The SR3 optical lever is now fixed and working again.
New laser SN is 197-3, old laser SN is 104-1. SN of the new translation stage is 10371.
Forgot to add: once the translation stage became stuck, the driver was still recording movement (the counts would change when we tried to move the stage) even though the stage was clearly not moving. So the motor encoder for the stage was working while the stage itself was stuck.
During today's maintenance period I updated the calibration on CAL-CS and restarted the GDS pipeline. The kappas will be reset to 1. The biggest reason for this update is to account for uncompensated delays left over from the DAC swap (& others that have been in place but that we just never accounted for). New pydarm H1 ini file; latest pydarm report.
Vicky and Naoki aligned the pump ISS AOM 2 weeks ago (79993). Since then it has been steadily drifting (screenshot), such that it looks like it will rail in the next day or so. I went to the table to see if I could adjust the alignment.
Start: 52 mW incident on the AOM. Set drive to 0 V, measured 41 mW in the 0th order beam (78%). Set drive to 5 V: 12 mW in the 1st order beam (23%) and 28 mW in the 0th order beam.
I tried several rounds of translating and yawing or pitching the AOM, alternating between 0 V and 5 V drive. At one point I misaligned the SHG steering mirror with a pico because I was using too long a wrench, so I tweaked that up to bring the SHG output back to 75 mW, then brought some knobs over to continue the AOM alignment. I was able to get 100% throughput, or close to it, with 0 V drive a few times, but when I looked at where the beams were on the apertures they were obviously off center in yaw for these good throughputs. When the beams looked more centered on the apertures the throughputs were more like 80%. Several times I also aligned to increase the diffracted power but made the beam shape look obviously clipped on the card.
In the end I saw 15 mW in the diffracted beam with a round beam quality, looking roughly centered on the apertures. I decided to quit here, and then realized that the input power had increased, so this is only mediocre AOM alignment, with similar efficiency to how it started the day: 31.7 mW in the 0th order and 15.2 mW in the 1st order with 5 V drive (24%); with 0 V drive, 47.2 mW in the 0th order beam with 63 mW input power (75%).
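For clarity, the quoted percentages are just the measured beam power divided by the input power; as a quick check (assuming the 63 mW input applies to both the 0 V and 5 V readings):

P_in = 63.0          # mW incident on the AOM at the end
P_1st = 15.2         # mW in the 1st order beam with 5 V drive
P_0th = 47.2         # mW in the 0th order beam with 0 V drive
print(P_1st / P_in)  # ~0.24, the quoted 24%
print(P_0th / P_in)  # ~0.75, the quoted 75%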
With the OPO in DOWN, I realigned the pump fiber looking at OPO REFL, and also adjusted the wave plate for the SHG rejected path to minimize the rejected power. After these, the ISS locked at the nominal set point of 80 uW transmitted through the OPO with a 3 V drive signal; since I've closed up the table this has been drifting up, to 5.4 V now.
So the ISS should be in better shape than it was before I started: although the AOM alignment is not improved, the fiber alignment, half-wave plate adjustment, and drifts are helping. I think it would make sense to take a beam profiler out to the table and measure the beam size at the AOM; there should be space for a couple of measurements. On a card the beam looks like it is as large as the apertures.