TITLE: 04/11 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 167Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Very quiet shift (except for the air handler) with H1 observing for all but 2 minutes.
H1 has now been locked for 10 hours.
LOG:
No log for this shift.
State of H1: Observing at 161Mpc
H1 has been locked for 6 hours, things are running smoothly.
Starting around 00:20 UTC (5:20pm PDT) this evening, the airflow noise has been much louder in the control room. I called Bubba to ask about it, and he says it's related to Eric's work today on the AHU-3 VAV (alog77092) and that the temporary ducting may have come undone. Other than it being louder in the control room, the duct should be fine and will be addressed in the morning.
I have not noticed any change in the IFO range since this noise increase, but I'll tag PEM and DetChar just in case.
TITLE: 04/10 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 3mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY: H1 has been locked and observing for 3 hours.
TITLE: 04/10 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 160Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: One lockloss today with an easy relock. Some calibration work on the HVAC system caused more fan noise in the OSB from roughly 15:00 to 16:00 UTC. The rest of the day was quiet. We hit our current highest O4b range, 167.6Mpc!
LOG:
15:00ish-15:30ish Calibration of the CS HVAC system (77092) - fans much louder than normal - tagging DetChar
15:00UTC Detector relocking and at LOWNOISE_ASC
15:06 NOMINAL_LOW_NOISE
15:09 Observing
15:45ish HVAC calibration done but fans still slightly louder than normal while cooling the CS to correct temps - detchar
18:15-18:19UTC Richard and Eric going into LVEA to check for HVAC noise - detchar
19:46 Lockloss
20:41 NOMINAL_LOW_NOISE
20:56 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:59 | FAC | Karen | MY | n | Tech cleaning | 16:00 |
16:02 | VAC | Janos+others | MY | n | Vacuum tour | 18:17 |
18:15 | FAC | Richard, Eric | LVEA | n | Quick hvac noise check | 18:19 |
20:17 | VAC | Gerardo, Jordan | FCES | n | Checking cable racks | 21:00 |
21:39 | VAC | Gerardo | OSB receiving door | n | Moving stuff | 22:09 |
There have been three locklosses today (04/10), and they all seem like they could've been caused by the same issue. For all three, the lockloss was seen by L2 of all four quads before it was seen by DARM or ETMX L3, which are usually the first places we see a lockloss hit. These three locklosses were seen in the order: both ITMs at the same time, then ETMX, then ETMY, followed by DARM and ETMX L3. It's possible that the 19:46 LL had ETMX L2 see it at the same time as the ITMs, but the other two locklosses (07:21 UTC and 13:11 UTC) look to have hit the ITMs first. A sketch of this channel comparison follows the list below.
As a comparison, I also looked at the last lockloss before 04/10 that wasn't during high wind or commissioning, and that one had ETMX L3 and DARM seeing it first, as usual.
Lockloss 2024/04/10 07:21:20UTC - 77080
- ALL quad L2s saw the lockloss before DARM or EX L3 - Order: ITMs, EX, EY
Lockloss 2024/04/10 13:11:12UTC - 77082
- Same as above - Order: ITMs, EX, EY
Lockloss 2024/04/10 19:46:48UTC - 77093
- Same as above - Order: ITMs+EX (maybe), EY
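For reference, a minimal gwpy sketch of this kind of check (the channel names are my assumption for per-stage drive channels; substitute whatever you normally trend):

import numpy as np
from gwpy.timeseries import TimeSeriesDict

# fetch a few seconds around the 19:46:48 UTC lockloss and compare when
# each suspension stage first reacts
gps = 1396813626  # 2024-04-10 19:46:48 UTC (approximate)
channels = [
    "H1:SUS-ITMX_L2_MASTER_OUT_UR_DQ",
    "H1:SUS-ITMY_L2_MASTER_OUT_UR_DQ",
    "H1:SUS-ETMX_L2_MASTER_OUT_UR_DQ",
    "H1:SUS-ETMY_L2_MASTER_OUT_UR_DQ",
    "H1:SUS-ETMX_L3_MASTER_OUT_UR_DQ",
]
data = TimeSeriesDict.get(channels, gps - 5, gps + 2)
plot = data.plot()  # overlay the traces and eyeball which one moves first
plot.show()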
Lockloss 04/10 19:46 UTC
late alog from work done yesterday:
I bumped up the ETMX L2 line gain to keep the uncertainty in the kappa PUM SUS line below a threshold of 0.01. Uncertainties above 0.01 clash with the hourly uncertainty services that run on the LDAS clusters.
command and output:
gpstime;val=1.7 && caput H1:SUS-ETMX_L2_CAL_LINE_CLKGAIN $val && caput H1:SUS-ETMX_L2_CAL_LINE_SINGAIN $val && caput H1:SUS-ETMX_L2_CAL_LINE_COSGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_CLKGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_SINGAIN $val && caput H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_COSGAIN $val
PDT: 2024-04-09 15:42:06.558045 PDT
UTC: 2024-04-09 22:42:06.558045 UTC
GPS: 1396737744.558045
Old : H1:SUS-ETMX_L2_CAL_LINE_CLKGAIN 1.5
New : H1:SUS-ETMX_L2_CAL_LINE_CLKGAIN 1.7
Old : H1:SUS-ETMX_L2_CAL_LINE_SINGAIN 1.5
New : H1:SUS-ETMX_L2_CAL_LINE_SINGAIN 1.7
Old : H1:SUS-ETMX_L2_CAL_LINE_COSGAIN 1.5
New : H1:SUS-ETMX_L2_CAL_LINE_COSGAIN 1.7
Old : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_CLKGAIN 1.5
New : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_CLKGAIN 1.7
Old : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_SINGAIN 1.5
New : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_SINGAIN 1.7
Old : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_COSGAIN 1.5
New : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_COSGAIN 1.7
The ETMX L2 cal line gains are set by ISC_LOCK during the TURN_ON_CALIBRATION_LINES state, so Louis's gain changes had been reverted when we relocked. I dropped H1 out of observing at 00:07 UTC to re-run the command from Louis's alog above, update the gain value in lscparams.py, load ISC_LOCK, and accept the associated SDF diffs (screenshot attached).
PDT: 2024-04-10 17:08:06.083606 PDT
UTC: 2024-04-11 00:08:06.083606 UTC
GPS: 1396829304.083606
Old : H1:SUS-ETMX_L2_CAL_LINE_CLKGAIN 1.5
New : H1:SUS-ETMX_L2_CAL_LINE_CLKGAIN 1.7
Old : H1:SUS-ETMX_L2_CAL_LINE_SINGAIN 1.5
New : H1:SUS-ETMX_L2_CAL_LINE_SINGAIN 1.7
Old : H1:SUS-ETMX_L2_CAL_LINE_COSGAIN 1.5
New : H1:SUS-ETMX_L2_CAL_LINE_COSGAIN 1.7
Old : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_CLKGAIN 1.7
New : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_CLKGAIN 1.7
Old : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_SINGAIN 1.7
New : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_SINGAIN 1.7
Old : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_COSGAIN 1.7
New : H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_COSGAIN 1.7
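For reference, the same gain update can be scripted with pyepics rather than six chained caputs (a sketch, assuming a configured EPICS environment; it just mirrors the command above):

from epics import caput  # pyepics

GAIN = 1.7  # new cal-line gain (was 1.5); also kept in lscparams.py so
            # ISC_LOCK's TURN_ON_CALIBRATION_LINES state won't revert it
CHANNELS = [
    "H1:SUS-ETMX_L2_CAL_LINE_CLKGAIN",
    "H1:SUS-ETMX_L2_CAL_LINE_SINGAIN",
    "H1:SUS-ETMX_L2_CAL_LINE_COSGAIN",
    "H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_CLKGAIN",
    "H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_SINGAIN",
    "H1:CAL-CS_TDEP_SUS_LINE2_COMPARISON_OSC_COSGAIN",
]

for ch in CHANNELS:
    caput(ch, GAIN)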
Many of the VAV terminals on the lab side of the OSB have been showing erroneous airflow readings and damper positions. I believe the cause is the VAV programming, which was written to drive the dampers closed during unoccupied hours, during which the damper actuators would calibrate their zero points to maintain correct readings of airflow and damper position. Because these VAV units are not on a schedule and therefore never go unoccupied, the controllers have drifted over time and their zero points are no longer accurate.

To correct this, I switched half of the VAV units to unoccupied mode, but the fans in AHU-3 did not respond quickly, which led to an overpressure condition in the supply ducting that tore the flexible ducting at a VAV terminal in the control room. We had to shut down AHU-3 in order to get the duct reattached. This shutdown occurred ~8:30 local time and lasted about a half hour. Prior to the shutdown, there was a period of ~10 minutes in which the airflow noise in the control room dramatically increased.

The VAV reset was mildly effective: the cold deck static pressure was trending at .06" W.C. before the reset and closer to .09" W.C. after it. The hot deck static pressure was trending at .09" W.C. prior to the reset and at 1.4" W.C. after it. This shows that some of the terminal units have adjusted their flow rates, but there are still erroneous readings which will require further investigation.
HAM1 ASC FF was off from 1396381762 to 1396382969 (20 minutes starting 90 minutes into lock). Using this data I could retune the feedforward filters that subtract HAM1 motion from the ASC signals.
I based the attached jupyter code on an older version, and automated the subtraction training for all pitch and yaw ASC signals (CHARD, INP1, PRC2, DC1 and DC2).
Looking at the filters that are loaded now, we used to do the FF only for the pitch degrees of freedom. The subtraction seems to be doing something good in yaw too, so I suggest we upload and try the yaw feedforward filter.
The first plot shows the expected subtraction predicted by NonSENS. The second plot shows the absolute value of the transfer functions.
Finally, the attached text file contains the foton filter definitions for all the feedforward filter banks. I haven't uploaded those filters to foton, and haven't tried them yet. Before each filter is the name of the filter bank where it should be loaded, for example H1:HPI-HAM1_TTL4C_FF_CART2ASCP_1_1
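For context, a minimal sketch of the transfer-function idea behind the training (this is not the attached notebook; the sample rate and synthetic data are assumptions):

import numpy as np
from scipy.signal import csd, welch

fs = 512           # sample rate [Hz], assumed
nperseg = 8 * fs   # 8-second FFT segments

# stand-ins for data recorded while the FF was off: witness = HAM1 motion,
# target = an ASC error signal (e.g. CHARD P) with some witness coupling
rng = np.random.default_rng(0)
witness = rng.standard_normal(600 * fs)
target = 0.3 * witness + 0.1 * rng.standard_normal(witness.size)

f, Pxy = csd(witness, target, fs=fs, nperseg=nperseg)
_, Pxx = welch(witness, fs=fs, nperseg=nperseg)

tf = Pxy / Pxx   # witness -> target coupling transfer function
ff = -tf         # ideal feedforward: drive the negative of the coupling
# fitting `ff` to a rational filter then gives the foton definitions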
[Jennie, Jim, Gabriele]
We tried the new filters, and there's something wrong with them. Even with a gain of 0.5 (instead of 1) they make some of the ASC signals much worse; see attached plot.
We'll use the time we got today with FF off (starting 1396987833 and lasting 10 minutes) to debug the problem and retune the FF.
In the attached plot: green = FF off, blue = nominal FF, red = new FF with half gain
Gabriele, Jim, Jennie W
I saved the new HAM FF filters in the H1SEIPROC foton file. These will only be loaded in during our commissioning window tomorrow.
The code attached to this alog had a bug that produced the wrong filters. Attached is an updated version, hopefully without any more bugs, along with the new text file containing all the filters.
The two plots show the expected subtraction and the absolute value of the filters.
Gabriele's new coefficients have been loaded into SEIPROC and are available, but haven't been turned on or tested yet.
Closes FAMIS#25986, last checked in 76761
They all look very similar to at least the last few weeks of checks.
Wed Apr 10 10:12:00 2024 INFO: Fill completed in 11min 56secs
Gerardo confirmed a good fill curbside.
Added new PSL WD channel to PSLOPC SDF
Jason, Patrick, Dave:
H1:PSL-LASER_PDWD was added to the h1pslopcsdf slow controls sdf.
New 3IFO Storage Container #2 humidity sensor
Fil, Bubba, Dave:
A new dewpoint/humidity sensor was installed for CON2 (H0:VAC-3IFO_MOD_CON2_DP3, H0:VAC-3IFO_MOD_CON2_H2O_3). When the 3IFO dewpoint IOC was restarted 12mar2024 these channels were significantly different from the rest (dewpoint ~ 0C, PPM > 4000), which we interpreted as a sensor issue. In the 4 weeks since, these values have slowly dropped (see attachment). The new sensor continues where the last one left off, suggesting that this container's readings were in fact correct and there was nothing wrong with the original sensor.
TITLE: 04/10 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.24 μm/s
QUICK SUMMARY:
Detector relocking and at LOWNOISE_ASC.
15:09UTC Observing
Eric, Naoki, Vicky, Jennie W, Sheila
We did a SQZ OMC mode scan with cold OM2. The PSAMS is 175/100 (strain voltage 8.9/-0.69). The attached figure shows the result (ref 34). The SQZ OMC mode matching is 0.64/(0.64+2*0.03) ~ 91.4%.
We followed the instructions in 74892. Here are some notes.
PSAMS 175/100, cold OM2
DARK
I used my code to fit this scan and its C02 peak. As we saw with the PSL beam, with the new OMC we cannot resolve the C02 and C20 peaks.
But if I assume one is buried in the noise and do the fit anyway we get (0.027 - 0.0038)/(0.027 + 0.64 - 0.0038) = 3.5% mode-mismatch for the squeezed beam matching to the OMC, with a fitted HWHM of 0.33 MHz.
NB: The 0.0038 mA is an offset I had to add to the scan data to shift the baseline above zero; otherwise the fitting doesn't work well.
If I just use the measured height of C02 from the full scan (which is like assuming both C02 and C20 are overlapped) the mode-mismatch would be (0.029 - 0.0038)/(0.029 + 0.64 - 0.0038) = 3.8 %.
The parameters of the current OMC (001) are given in this alog by Matt Todd; they give a half-width at half-maximum of 0.31 MHz, which matches our fitted value fairly closely.
The C02 peak graph (second image) has the OMC scan data in blue, fit in red, and initial guesses for the fit in purple.
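Collecting the arithmetic above in one place (numbers as quoted; 0.0038 mA is the baseline offset described in the NB):

a00 = 0.64        # TEM00 peak height [mA]
offset = 0.0038   # baseline offset added so the fit converges [mA]

for a02, label in [(0.027, "fitted C02 only"), (0.029, "C02+C20 overlapped")]:
    mm = (a02 - offset) / (a02 + a00 - offset)
    print(f"{label}: mode mismatch ~ {100*mm:.1f} %")
# fitted C02 only: ~3.5 %; C02+C20 overlapped: ~3.8 %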
Looking at this SQZ-OMC mode scan (data, mode scan), and running the same scripts as before (e.g. LHO:70866), this loss estimate based on OMC visibility using the squeezer beam doesn't really make sense. Hopefully PSL scans will make more sense. May be worth re-taking this squeezer measurement sometime.
Mode-matching and visibility make sense: Single-bounce SQZ-OMC mode mismatch ~ 4% based on TEM02/20 is consistent with Jennie's peak fitting, HOM content ~ 9-12%, and OMC locked/unlocked visibility = 1-locked/unlocked ~ 90%.
Loss estimate does not make sense: the inferred OMC transmission is ~72%, and if we ignore mode mismatch (i.e. assume 100% matching) this corresponds to an OMC transmission of 81%. But ~20% OMC loss is ruled out by observed squeezing levels.
5.4 dB SQZ corresponds to a total loss of 20-25%, see params below including 12.3% expected SQZ losses. This limits OMC losses to < 10-15%, assuming no mode-mismatch. With mode-mismatch, squeezing suggests OMC losses << 10%. This is not saying much, except that this measurement is off.
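A rough sanity check of that statement, using a pure-loss model with phase noise ignored (the generated squeezing level is an assumption):

import numpy as np

S_gen = 11.0  # generated squeezing [dB], assumed
for loss in (0.20, 0.225, 0.25):
    eta = 1.0 - loss                             # total efficiency (all losses lumped)
    V = eta * 10 ** (-S_gen / 10) + (1.0 - eta)  # measured noise variance
    print(f"total loss {loss:.1%} -> measured squeezing {-10*np.log10(V):.2f} dB")
# ~5-6 dB measured for 20-25% total loss, consistent with 5.4 dB observed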
So squeezing rules out the ~20% OMC losses inferred from this SQZ-OMC visibility measurement. I'm not sure what exactly is wrong, but in the mode scan there was considerable TEM01/10 misalignment despite running OMC ASC + manual alignment. Unsure if alignment is the issue, or maybe something wasn't right with the squeezer beam such that alignment couldn't improve (worth checking the beam roundness on SQZT7?).
Script output below (code in git). Running this and Sheila's script in LHO:70866, I get the same output. So, the script seems OK, but something else is wrong with this measurement.
--------------- er16 sqz ------------------
processing measurement for er16 sqz
P_Refl_on_res*1e3 = 0.12831 mA
P_Refl_off_res*1e3 = 1.00626 mA
Trans_A*1e3 = 0.62987 mA
(I assume OMC-DCPD_SUM_OUTPUT is calibrated into mA with the new OMC, I did not check this calibration.)
removed trans blocked of -0.00332 mA
Power on refl diode when cavity is off-resonance: 1.006 mW
Power on refl diode when cavity is on-resonance: 0.128 mW
Trans power when cavity is on-resonance: 0.734 mW
Incident power on OMC breadboard (before QPD pickoff): 1.026 mW
Measured efficiency (DCPD current/responsivity if QE=1) / incident power on OMC breadboard: 71.546 %
Using 4.0% mode-mismatch from Jennie peak fitting, visibiilty_00*100 = 90.885%
assumed QE: 100 %
power in transmission (for this QE): 0.734 mW
HOM content infered (like mode matching): 11.964 %
Cavity transmission infered: 82.057 %
predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 71.546 %
omc efficency for 00 mode (incl R_inBS * cavity_transmission * QE, no mm): 81.270 %
round trip loss: 1606 (ppm)
Finesse: 372.613
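The headline numbers above can be reproduced from the quoted powers (a sketch; the mode-mismatch correction is my reading of the visibiilty_00 line):

p_refl_on  = 0.12831  # REFL PD, cavity on resonance [mW]
p_refl_off = 1.00626  # REFL PD, cavity off resonance [mW]
p_trans    = 0.734    # transmitted power on resonance [mW]
p_inc      = 1.026    # incident power on OMC breadboard [mW]
mm         = 0.04     # mode mismatch from the peak fitting

vis_raw = 1 - p_refl_on / p_refl_off  # ~87.2 % raw visibility
vis_00  = vis_raw / (1 - mm)          # ~90.9 %, matches visibiilty_00
eff     = p_trans / p_inc             # ~71.5 % measured efficiency
print(f"vis_00 ~ {100*vis_00:.1f} %, efficiency ~ {100*eff:.1f} %")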
Oli, Erik, Dave:
Around 04:30 this morning an ADC channel hop on h1iopomc0 caused all of the omc0 models to stop running, and also caused a corner station Dolphin glitch which DACKILLED h1susb123, h1sush34, h1sush56 and h1lsc0.
h1omc0 dmesg:
[Tue Apr 9 04:34:05 2024] rts_cpu_isolator: LIGO code is done, calling regular shutdown code
[Tue Apr 9 04:34:05 2024] h1iopomc0: ERROR - A channel hop error has been detected, waiting for an exit signal.
[Tue Apr 9 04:34:05 2024] h1omcpi: ERROR - An ADC timeout error has been detected, waiting for an exit signal.
[Tue Apr 9 04:34:05 2024] h1omc: ERROR - An ADC timeout error has been detected, waiting for an exit signal.
Initially I thought this was an IO chassis issue, so we power cycled h1omc0 rather than restarting all of its models (my confusion was because this front end only has one Adnaco, and is running the low-noise ADC). This brought h1omc0 back up and running.
We restarted the models on h1lsc0, which cleared the DACKILL.
Oli put the SUS and SEI for BSC1,2,3 and HAM3,4,5,6 into safe, I SWWD bypassed the SEI IOPs and we restarted the models on h1susb123, h1sush34, h1sush56. All came back with no problems.
I cleared the SWWDs, did a DIAG_RESET and cleared the DAQ CRCs.
Handing over to Oli for IFO recovery.
CDS Overview after DIAG_RESET run on all front ends:
Time of OMC crash:
04:32:45 PDT
11:32:45 UTC
1396697583 GPS
TJ, Camilla. WP 11730 Table: D1800270
TJ and I swapped the EX HWS fiber from a M92L02 200um 0.22NA multi-mode fiber to a M14L01 50um 0.22NA fiber, the same as LLO successfully uses. This gave us a focus ~150mm after the 125mm lens, whereas D1800125 suggests our focus should be 1.25m from the launcher (or ~100mm from L2 62398). TJ found we need to change the spacers in the D1800125 launcher from the design's 12mm down to 11mm to get the beam focus at 1.25m; a sketch of the underlying lens calculation follows below. We got a beam of a much more sensible size by only securing the launcher on one side, and could see a return beam off ETMX; an image will be added in a comment. We plan to buy/find more spacers before continuing this work.
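A minimal sketch of that lens calculation: propagate a Gaussian beam's complex q parameter through a thin lens and see where the new waist lands. All input numbers here are illustrative assumptions, not the measured launcher values; the point is that a ~1mm change in fiber-to-lens spacing moves the downstream waist by tens of mm.

import numpy as np

lam = 800e-9   # HWS SLED wavelength [m], assumed
w0 = 25e-6     # waist at the fiber tip [m] (~half the 50 um core), assumed
f = 0.020      # collimating lens focal length [m] (f = 20.0 mm)

zR = np.pi * w0**2 / lam            # input Rayleigh range
for d in (0.020, 0.021):            # fiber-to-lens spacing [m]; the spacers set this
    q = d + 1j * zR                 # q at the lens, waist at the fiber tip
    q = 1 / (1 / q - 1 / f)         # thin-lens transformation: 1/q' = 1/q - 1/f
    print(f"d = {d*1e3:.0f} mm: new waist {-q.real*1e3:+.0f} mm after lens, "
          f"zR' = {q.imag*1e3:.0f} mm")
# a 1 mm spacer change moves the waist from ~+20 mm to ~+77 mm after the lens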
LLO has recently been swapping 1" optics to 2" to reduce clipping (69891). We did this in 62995 and 73878, so we have M1A, M1B and M1C on EX as 2" optics, but currently there are no picomotors in the HWS path.
Attached is an image with the plate off. The beam looks much better than before in size and uniformity, but it would need more alignment and focusing if we decide to stay near this launcher-to-lens length.
From March 19th. TJ, Gabriele, Camilla
On March 12th, TJ and I tried changing the length of the spacers in D1800125 in 1mm increments. This still didn't give us the required beam size.
On March 19th, TJ, Gabriele and I measured the beam straight out of the fiber and SM05SMA adapter, before the spacers and the f = 20.0 mm bi-convex collimating lens. We used a ruler for horizontal distance, put white laminated paper on a stand to see the beam, and measured the diameter with calipers, as the beam is too large for the beam scanner. Results attached. We plan to use this to make a mode matching solution; a sketch of the fitting step is below.
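A sketch of turning the caliper measurements into a mode-matching input: fit the measured beam radii to the Gaussian divergence law to recover waist size and position (the data points below are placeholders, not our measurements):

import numpy as np
from scipy.optimize import curve_fit

lam = 800e-9  # wavelength [m], assumed

def radius(z, w0, z0):
    """Gaussian beam radius vs position: w(z) = w0*sqrt(1+((z-z0)/zR)^2)."""
    zR = np.pi * w0**2 / lam
    return w0 * np.sqrt(1 + ((z - z0) / zR) ** 2)

# placeholder data: distance from fiber tip [m], beam radius [m] (caliper diameter / 2)
z = np.array([0.05, 0.10, 0.15, 0.20])
w = np.array([0.005, 0.010, 0.015, 0.020])

(w0, z0), _ = curve_fit(radius, z, w, p0=[10e-6, 0.0])
print(f"waist w0 = {w0*1e6:.1f} um at z0 = {z0*1e3:.1f} mm")
# w0 and z0 then feed the mode-matching (lens solution) calculation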