TITLE: 02/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 05:58 UTC
Pretty quiet shift with one EQ-caused lockloss (alog bla bla).
We also dropped out of OBSERVING briefly because SQZ unlocked: SQZ_OPO_LR was not reaching its count threshold of 80. Following this wiki page, I first attempted to change the SHG temperature in order to increase the SHG power, which was indeed reduced compared to lock acquisition time. This did not work, so I ended up reverting my -0.015 C change and instead lowered the threshold from 80 to 74 in SQZ params, which did work. After this, I retuned the OPO temp, which prompted SQZ to automatically re-lock. After the SQZ ANGLE guardian optimized the angle, we were able to get back into lock. There were no SDFs to accept before going back to OBSERVING. Total troubleshooting/outage time was 40 minutes, from 02:21 UTC to 03:01 UTC.
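For reference, a minimal sketch of the kind of check involved, written in the guardian style used later in this log (the ezca object and log() come from guardian; the OPO transmission readback channel name is a hypothetical stand-in, and the threshold itself lives in the SQZ params file rather than in EPICS):
# Sketch only: compare the OPO green transmission against the SQZ params threshold
# before deciding to lower it. The transmission channel name is a guess;
# H1:SQZ-OPO_ISS_CONTROLMON is quoted elsewhere in this log.
threshold_uW = 80                                # value in SQZ params before this shift
opo_trans = ezca['SQZ-OPO_TRANS_LF_OUTPUT']      # hypothetical readback channel
controlmon = ezca['SQZ-OPO_ISS_CONTROLMON']      # ISS headroom; ~2-3 is healthy per this log
log('OPO green trans %.1f uW (threshold %d), ISS controlmon %.2f'
    % (opo_trans, threshold_uW, controlmon))
# If the transmission cannot reach the threshold, lowering the setpoint in SQZ
# params (here 80 -> 74) lets SQZ_OPO_LR lock, at the cost of less squeezing.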
LOG:
None
Earthquake-caused lockloss (very local magnitude 4.6)
TITLE: 02/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 4mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.42 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 21:26 UTC
Gerardo, Jordan, Camilla
We disassembled ASSY-D1700340-001 (installed in 39957, removed in 60667) with the plan of re-cleaning the VAC surface and reusing the metal parts with new DAR ZnSe in the upcoming vent.
Sadly I chipped the 0.5"-thick D1700340 window when removing it (photo attached); I will add a note to the ICS record. The dust damaged the in-vac o-ring, so we threw away the pair of o-rings, but we have spares in the LVEA TCS cabinets.
D1700338 is at C&B, the other parts are stored in the PCAL lab VAC storage, and the ZnSe will be placed in the LVEA TCS cabinets.
Gerardo notified me earlier today that he observed abnormal noise and some wetness coming from the Y+ chilled water pump. When Eric and I arrived, we observed more or less the same. It is most likely another failed pump seal, which will have to be replaced; one will be ordered shortly, assuming demand has lifted and there is available inventory. Pumps were swapped, and no interruption to temperatures should be seen. E. Otterman, T. Guidry
Matthew, Camilla
Today we went to EX to try and measure some beam profiles of the HWS beam as well as the refl ALS beam.
Without analyzing the profiles too much, it seems that the HWS beam, at least, matches previously taken data from Camilla and TJ.
The ALS beam still has the same behavior as reported a few years ago by Georgia and Keita (alog 52608; could not find another referencing alog), who saw two blobs instead of a nice Gaussian, just as we see now.
Attached is the data we took of the HWS beam coming out of the collimator, with the data from 11th Feb and 10th Dec (81741) plotted together.
We also took data of the return ALS beam, which, as Matt showed, is shaped like two lobes. Attached here. We measured the beam further downstream than when we did this at EY (81358) because the ALS return beam was very large at the ALS-M11 beamsplitter. Table layout D1800270. Distances on the table were measured in 62121; since then HWS_L3 has been removed.
We didn't take any outgoing ALS data as we ran out of time.
TITLE: 02/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: The shift started with the IFO unlocked. I kept it down since we had a full 4 hours of maintenance planned for the day. Relocking presented a few odd ALS issues, but Sheila managed to figure it out (see below). Other than that it was smooth relocking, and we have now been observing for an hour.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:54 | SAF | Laser SAFE | LVEA | YES | LVEA is SAFE | 16:54 |
15:45 | FAC | Kim, Nelly | EX, EY, FCES | n | Tech clean | 17:31 |
15:56 | VAC | Ken | EX mech | n | Compressor electrical | 20:01 |
16:13 | PCAL | Francisco | EX | EX VEA | PCAL spot move | 17:48 |
16:15 | FAC | Chris | LVEA | yes | FAMIS checks | 16:38 |
16:20 | TCS | Camilla | LVEA | yes | Table pictures | 16:58 |
16:22 | VAC | Jordan | LVEA | yes | RGA power cycle | 16:57 |
16:32 | FAC | Eric | Mech room | n | HVAC heater element replacement | 17:46 |
16:37 | VAC | Gerardo | Mech room | n | Kabelco startup | 17:07 |
16:37 | IAS | Jason, Ryan | LVEA | n | Alignment FARO work | 19:32 |
16:38 | FAC | Chris | outbuildings | - | FAMIS checks | 18:24 |
16:39 | ISC | Daniel | LVEA | n | ISC-R2 rack 36MHz rewire | 17:55 |
16:43 | FAC | Tyler | LVEA, Mids | - | 3IFO checks | 18:13 |
17:08 | VAC | Gerardo | LVEA | n | Check on vac things | 17:22 |
17:23 | VAC | Gerardo, Jordan | EY | n | Ion pump replacement | 19:03 |
17:24 | SEI | Jim, Fil | EY | n | SEI cabling | 18:13 |
17:31 | CDS | Marc | MY | n | Parts hunt | 17:56 |
17:35 | HWS/ALS | Camilla, Matt | EX | Yes | Table beam measurement | 19:48 |
17:51 | FAC | Kim, Nelly | LVEA | n | Tech clean | 18:50 |
17:57 | CDS | Marc, Sivananda | LVEA | n | Rack check | 18:04 |
18:18 | SEI | Jim, Mitchell | Mids | n | Find parts | 18:42 |
18:51 | FAC | Kim, Nelly | EX | yes | Tech clean | 19:47 |
19:20 | FAC | Tyler, Eric | EY | n | Looking at chilled water pump | 19:33 |
19:51 | VAC | Gerardo | LVEA | n | Dew point meas | 20:17 |
19:52 | - | Ryan C | LVEA | n | Sweep | 20:07 |
20:53 | ISC | Sheila | LVEA | Local | POP_X align | 21:28 |
21:36 | VAC/TCS | Camilla, Gerardo, Jordan, Travis | Opt Lab | n | Look at ZnSe viewport | ongoing |
I made two minor changes to these nodes today:
The POP X PZT was railed after yesterday's PR2 spot move, so Jennie and I went to the table and relieved it using the steering mirror right after the PZT.
Elenna is looking for some information about the POP X path, so Jennie W and I took this photo and added optic labels to it.
All maintenance day activities have finished and we just got back to observing at 21:26 UTC.
WP 12327. This is for the work Aidan is doing looking at LVEA temperature vs range (LLO alog 74999).
This morning, prior to sunrise, zone 2A in the LVEA experienced a rapid temperature decline. This was partially due to broken heating elements which prevented the duct heater from producing as much heat as it needed to. I have replaced the broken elements.
The LVEA has been swept following maintenance; nothing of note.
The POP-X readout chain has been moved from 45.5 MHz to 36.4 MHz. All RF chains have been checked and they look fine.
H1:ASC-POP_X_RF_DEMOD_LONOM was changed from 24 to 23.
POP_X 45 MHz signal trends while powering up. If we believe the POP_X_DC calibration, it detects around 12 mW at full input power.
Same trends with the 36 MHz signals.
All RF signals have an analog whitening gain of 21 dB. There is also a digital gain in the I/Q input segments of ~2.8 at the beginning and ~5.6 at the end of the power-up. At the time of the gain change, the 45 MHz modulation index is reduced by 6 dB, whereas the 9 MHz one is reduced by 3 dB. This means the 45 MHz signals are correctly compensated, whereas the 36 MHz signals are not, and their outputs become 3 dB smaller than expected. The maximum observed INMON signal is around 8000 counts, a factor of ~4 from saturating.
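The 3 dB statement follows from simple dB bookkeeping if, as I am assuming here, the 36.4 MHz signal is the beat of the 9 MHz and 45 MHz sidebands and therefore scales with the product of the two modulation indices:
# dB bookkeeping for the power-up gain change (assumption: the 36.4 MHz signal
# amplitude follows the product of the 9 MHz and 45 MHz modulation indices).
from math import log10

digital_gain_db = 20 * log10(5.6 / 2.8)    # ~6 dB digital compensation
d45_db = -6.0                              # 45 MHz modulation index change
d9_db = -3.0                               # 9 MHz modulation index change

net_45_db = d45_db + digital_gain_db              # 0 dB: correctly compensated
net_36_db = d45_db + d9_db + digital_gain_db      # -3 dB: 36 MHz reads low
print(round(net_45_db, 1), round(net_36_db, 1))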
I took a look at the locklosses during the calibration measurements the past week. Looking at DARM right before the locklosses, both times a large feature grows around ~42 Hz just before the lockloss (Sat LL, Thur LL).
Thursday:
DARM_IN was dominated by the long ~42 Hz oscillation and a short ~505 Hz oscillation until the lockloss; DARM_OUT was dominated by the violin mode harmonics at ~1020 Hz.
Saturday:
DARM_IN had a long and a short oscillation, at ~7.5 Hz and at the fundamental violin modes (~510 Hz); DARM_OUT was dominated by the violin mode harmonics at ~1020 Hz.
I'm not sure how to/where to see exactly what frequencies the simulines were injecting during and before the lockloss.
Looking into what's going awry.
I pushed for a change of calibration sweep amplitudes on the Pcal and the PUM (which had been tested a couple of months back), which was added to the calibration sweep wiki last week, labeled informatively as "settings_h1_20241005_lowerPcal_higherPUM.ini".
Both of these locklosses happened very near the end of the sweep, where the Pcal is driving at 7.68 Hz and the PUM is driving at either 42.45 Hz or 43.6 Hz, which should clarify the source of the signals you are pointing out in this aLog.
The driving amplitude of the Pcal at 7.68 Hz is about 20% lower than in the injections that were being run the week before. This was deliberately done to reduce kicking the Pcal during ramping, which couples broadband noise into DARM and would affect other measurement frequencies such as the L1 stage, which is driving at ~12 Hz at this time.
The driving amplitude of the PUM at ~42 Hz is unchanged from injections that had been running up until last week.
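Regarding the earlier question about where to see the injected frequencies, something like the following could dump the drive settings straight out of the .ini named above; the file path and its internal section/key layout are assumptions on my part:
# Hedged sketch: print every section and key of the sweep settings file so the
# Pcal/PUM drive frequencies and amplitudes can be read off directly.
import configparser

cfg = configparser.ConfigParser()
cfg.read('settings_h1_20241005_lowerPcal_higherPUM.ini')  # path is an assumption

for section in cfg.sections():
    print('[%s]' % section)
    for key, value in cfg[section].items():
        print('  %s = %s' % (key, value))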
Not seeing any SUS stage saturating at lock losses. Presently unconvinced lock losses are related to new sweep parameters.
Both locklosses coincided with the ramping ON of the final DARM1_EXC at 1200 Hz.
Tagging CAL properly
Sheila, Matt Todd
We are having more incidences of the nonstationary noise between 20 and 60 Hz that correlates with the CO2 ISS channel.
Here's a collection of links related to this issue in the past:
Here is a Lasso result for the noisy lock from this weekend: lasso Feb 9. Jane Glanzer is working on running lasso for some of the recent noisy times including the max as well as the mean channels, so that may provide additional clues. We still see the ITMX CO2 ISS channel correlated with range. Note that on Feb 7th Lasso chooses H1:IOP-OAF_L0_MADC2_EPICS_CH15, which is the same thing as ITMX_CO2_ISS_CTRL2_INMON.
For the 9th, the Rayleigh statistic also clearly shows this issue (summary page for Feb 9th), but comparing this to one of our normal days (range just below 160 Mpc and stable on Feb 4th), we also see nonstationarity at these frequencies. So it is possible that we normally have this nonstationary noise at a lower level and it is always limiting our sensitivity.
Feb 1st we had an incident where the ISS CO2 channel was correlated with the range (screenshot). Feb 1st there was also a remarkable change in the peak-to-peak of the FC length control signal, which has not shown any correlation with these range drops in the last week and a half, but did last May (78485). Matt found this alog about squeezer issues on the 1st (82581): we adjusted the SHG temperature and fiber polarization, and there was a temperature excursion in the FCES. The FC length control signal was noisy from Jan 2nd to Feb 1st, and has been back to normal since.
Operator request: If operators see the range fluctuating with lots of noise between 20-40 Hz, (similar to Feb 9th), could you drop out of observing and go to no squeezing for 10 minutes or so? We would like to see if this problem comes and goes with squeezing as it did last May.
I have run lasso for four different time periods as suggested by Sheila. As mentioned, these lasso runs differ from a traditional run in that I am using .max trends of the auxiliary channels to model the BNS range. Below are links to each run, along with brief comments on what I saw.
Feb 1st 10:20 - 18:40: The top channel is a SQZ channel, H1:SQZ-PMC_TRANS_DC_NORMALIZED. H1:ASC-POP_X_RF_Q4_OUTPUT and H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ were also picked out. The CO2 channel is correlated with the bigger dip around ~15 UTC.
Feb 7th 12:21:58 - 15:17:03: For some reason, lasso only runs until 15 UTC, even though I specified it to run until 18:30 UTC. I am not 100% sure why this happens; I think it may be because of a large range drop around 15 UTC. I have still included what lasso found for the initial 3 hours or so. Some top channels picked out are H1:SUS-ITMY_M0_OSEMINF_F2_OUT16 & H1:ASC-AS_A_RF45_Q2_OUT16. These are new channels that I don't think have been picked out before. I will say that farther down the list of correlated channels is H1:PSL-ISS_SECONDLOOP_QPD_SUM_OUT16.
Feb 7th 21:55:32 - Feb 8th 01:40:00: Top channel is H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ.
Feb 9th 07:04:07 - 11:59:26: Top correlated channels are H1:PSL-ISS_SECONDLOOP_PDSUMOUTER_INMON & H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ. These are also the top two channels from the regular lasso run (with .mean trends) as viewed from the summary pages.
The most consistently picked out channel is the ITMX_CO2 channel, but the .max trend method seems to pick up some PSL, SQZ, SUS, and ASC channels as well.
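For anyone wanting to reproduce the .max-trend approach outside the production lasso code, a minimal sketch along these lines should work; the GPS span and channel list below are placeholders, the range channel name should be double-checked, and scikit-learn's Lasso stands in for the site tool:
# Sketch: regress BNS range against .max minute trends of a few aux channels.
import numpy as np
from gwpy.timeseries import TimeSeriesDict
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

start, end = 1423119865, 1423137584   # placeholder GPS span
channels = [
    'H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC.mean,m-trend',   # BNS range
    'H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ.max,m-trend',
    'H1:PSL-ISS_SECONDLOOP_PDSUMOUTER_INMON.max,m-trend',
]
data = TimeSeriesDict.get(channels, start, end)

y = data[channels[0]].value
X = StandardScaler().fit_transform(
    np.column_stack([data[c].value for c in channels[1:]]))

model = Lasso(alpha=0.1).fit(X, y)
for chan, coef in zip(channels[1:], model.coef_):
    print('%+.3f  %s' % (coef, chan))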
Sheila and I are continuing to check various PD calibrations (82260). Today we checked the POP A LF calibration.
Currently there is a filter labeled "to_uW" that is a gain of 4.015. After some searching, Sheila tracked this to an alog by Kiwamu, 13905, with [cnts/W] = 0.76 [A/W] x 200 [Ohm] x 2^16/40 [cnts/V]. Invert this number and multiply by 1e6 to get uW/ct.
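Just to spell out where the 4.015 comes from (re-deriving Kiwamu's factor; the 2^16/40 cnts/V term is as written in his expression):
# Re-derive the POP_A_LF "to_uW" gain from the counts-per-watt expression above.
responsivity = 0.76          # A/W
transimpedance = 200.0       # Ohm
adc_cnts_per_V = 2**16 / 40  # cnts/V

cnts_per_W = responsivity * transimpedance * adc_cnts_per_V   # ~2.49e5 cnts/W
uW_per_cnt = 1e6 / cnts_per_W                                 # ~4.015 uW/cnt
print('%.3f uW/cnt' % uW_per_cnt)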
Trusting our recalibration of IM4 trans, we have 56.6 W incident on PRM. We trust our PRG is about 50 at this time, so 2.83 kW are in the PRC. PR2 transmission is 229 ppm (see galaxy optics page). Then, the HAM1 splitter is 5.4% to POP (see logs like 63523, 63625). So we expect 34 mW on POP. At this time, there was about 30.5 mW measured on POP according to Kiwamu's calibration.
I have added another filter to the POP_A_LF bank called "to_W_PRC", that should calibrate the readout of this PD to Watts of power in the PRC.
POP_A_LF = T_PR2 * T_M12 * PRC_W, and T_PR2 is 229 ppm and T_M12 is 0.054. I also added a gain of 1e-6 since FM10 calibrates to uW of power on the PD.
Both FM9 (to_W_PRC) and FM10 (to_uW) should be engaged so that POP_A_LF_OUT reads out the power in the PRC.
I loaded the filter but did not engage it.
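The implied FM9 gain, working through the relation above (my arithmetic, not a readback of the installed filter):
# to_W_PRC: convert the uW-calibrated PD readout (FM10 output) to Watts in the PRC.
T_PR2 = 229e-6    # PR2 transmission
T_M12 = 0.054     # HAM1 splitter fraction sent to POP
uW_to_W = 1e-6    # undo FM10's uW calibration

fm9_gain = uW_to_W / (T_PR2 * T_M12)   # ~0.081 W in the PRC per uW on the PD
print('%.4f W(PRC) per uW(PD)' % fm9_gain)
# Sanity check against the numbers in this entry: ~30500 uW on the PD -> ~2.5 kW
# in the PRC, consistent with the ~15% deficit noted below.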
More thoughts about these calibrations!
I trended back to last Wednesday to get more exact numbers.
input power = 56.8 W
PRG = 51.3
POP A LF (Kiwamu calibration) = 30.7 mW
predicted POP A LF = 0.054 * 229 ppm * 56.8 W * 51.3 W/W = 36 mW
ratio = 30.7 mW / 36 mW = 0.852
If the above calibrations of PRG and input power are correct, we are missing about 15% of the power on POP.
During commissioning this morning, we added the final part of the ESD glitch limiting by adding the actual limit to ISC_LOCK. I added a limit value of 524188 to the ETMX_L3_ESD_UR/UL/LL/LR filter banks, which are the upstream part of the 28-bit DAC configuration for SUS ETMX. These limits are engaged in LOWNOISE_ESD_ETMX, but turned off again in PREP_FOR_LOCKING.
In LOWNOISE_ESD_ETMX I added:
log('turning on esd limits to reduce ETMX glitches')
for limits in ['UL','UR','LL','LR']:
    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')
So if we start losing lock at this step, these lines could be commented out. The limit turn-off in PREP_FOR_LOCKING is probably benign.
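For completeness, the turn-off in PREP_FOR_LOCKING presumably mirrors the snippet above; a sketch, not copied from ISC_LOCK:
# Disable the ESD limits again early in lock acquisition so they cannot
# interfere with the larger drives used there.
for limits in ['UL', 'UR', 'LL', 'LR']:
    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s' % limits).switch_off('LIMIT')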
Diffs have been accepted in sdf.
I think the only way to tell if this is working is to wait and see if we have fewer ETMX glitch locklosses, or if we start riding through glitches that have caused locklosses in the past.
Using the lockloss tool, we've had 115 Observe locklosses since Dec 01, 23 of those were also tagged ETM glitch, which is around 20%.
Since Feb 4th, we've had 13 locklosses from Observing, 6 of these tagged ETM_GLITCH: 02/10, 02/09, 02/09, 02/08, 02/08, 02/06.
Jim, Sheila, Oli, TJ
We are thinking about how to evaluate this change. In the meantime we made a comparison similar to Camilla's: in the 7 days since this change, we've had 13 locklosses from observing, with 7 tagged by the lockloss tool as ETM glitch (and more than that identified by operators), compared to the 7 days before the change, when we had 19 observe locklosses, of which 3 had the tag.
We will leave the change in for another week at least to get more data on what its impact is.
I forgot to post this at the time: we took the limit turn-on out of the guardian on Feb 12, with the last lock ending at 14:30 PST, so locks since that date have had the filters engaged, but since they multiply to 1, they shouldn't have an effect without the limit. We ran this scheme from Feb 3 17:40 UTC until Feb 12 22:15 UTC.
Camilla asked about turning this back on, and I think we should do that. All that needs to be done is uncommenting the lines (currently 5462-5464 in ISC_LOCK.py):
#log('turning on esd limits to reduce ETMX glitches')
#for limits in ['UL','UR','LL','LR']:
# ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')
The turn-off of the limit is still in one of the very early states of ISC_LOCK, so nothing beyond accepting new SDFs should be needed.
The SHG power drop was probably due to LVEA temp swings; I touched the wave plates in the SHG path to reduce rejected SHG light (in SQZT0 and HAM7). Then I tried setting opo_grTrans_setpoint_uW back to 80 uW, but controlmon was at 1.5 (too low). Reverted back to 75 uW so H1:SQZ-OPO_ISS_CONTROLMON is at 2.4 (still not ideal). Didn't recheck the OPO temp.