Back to observing at 2057 UTC.
Maintenance recovery was slowed by ITMY CPS glitching (alog72683) that Jim fixed by replacing a CPS card. This was also the cause of some lock losses last night.
Jim kindly agreed to sweep the LVEA and didn't notice anything amiss.
J. Kissel, J. Warner

More details to come, but after trending around this morning, Jim found that Ryan's issues with lock losses last night were coincident with H1:ISI-ITMY_ST1 and ST2 capacitive position sensor (CPS) glitching, as loud as ~4 micron jumps in the 65-100 Hz band-limited RMS. Attached are trends of the new-ish band-limited RMS channels installed in April 2023 (see the CPS BLRMS ECR installation mentioned in LHO aLOG 68798, and design in SEI aLOGs 1849 and 1867) compared against ISC_LOCK_STATE_N, where 600 is NOMINAL_LOW_NOISE. The 2nd row of both plots shows the ST1 and ST2, H2 and V2 CPS BLRMS trends for both the 65-100 and 130-200 Hz bands. One can see the loudest of the glitching in these sensors; the y-axis is in nanometers. These 4 sensors are all in the corner 2 "rack," but each sensor has its own readout card. The loud glitches correspond to times of ITMY watchdog trips too, which is what Ryan saw in LHO aLOGs 72671 and 72666. Jim is swapping out some of the cards as I type, and will aLOG accordingly.
For future investigations, I call out these channels for ITMY explicitly:
H1:ISI-ITMY_ST1_CPSINF_H1_BLRMS_65_100
H1:ISI-ITMY_ST1_CPSINF_H2_BLRMS_65_100
H1:ISI-ITMY_ST1_CPSINF_H3_BLRMS_65_100
H1:ISI-ITMY_ST1_CPSINF_V1_BLRMS_65_100
H1:ISI-ITMY_ST1_CPSINF_V2_BLRMS_65_100
H1:ISI-ITMY_ST1_CPSINF_V3_BLRMS_65_100
H1:ISI-ITMY_ST1_CPSINF_H1_BLRMS_130_200
H1:ISI-ITMY_ST1_CPSINF_H2_BLRMS_130_200
H1:ISI-ITMY_ST1_CPSINF_H3_BLRMS_130_200
H1:ISI-ITMY_ST1_CPSINF_V1_BLRMS_130_200
H1:ISI-ITMY_ST1_CPSINF_V2_BLRMS_130_200
H1:ISI-ITMY_ST1_CPSINF_V3_BLRMS_130_200
H1:ISI-ITMY_ST2_CPSINF_H1_BLRMS_65_100
H1:ISI-ITMY_ST2_CPSINF_H2_BLRMS_65_100
H1:ISI-ITMY_ST2_CPSINF_H3_BLRMS_65_100
H1:ISI-ITMY_ST2_CPSINF_V1_BLRMS_65_100
H1:ISI-ITMY_ST2_CPSINF_V2_BLRMS_65_100
H1:ISI-ITMY_ST2_CPSINF_V3_BLRMS_65_100
H1:ISI-ITMY_ST2_CPSINF_H1_BLRMS_130_200
H1:ISI-ITMY_ST2_CPSINF_H2_BLRMS_130_200
H1:ISI-ITMY_ST2_CPSINF_H3_BLRMS_130_200
H1:ISI-ITMY_ST2_CPSINF_V1_BLRMS_130_200
H1:ISI-ITMY_ST2_CPSINF_V2_BLRMS_130_200
H1:ISI-ITMY_ST2_CPSINF_V3_BLRMS_130_200
but the equivalent of these BLRMS channels is now installed on ALL ISIs that have CPSs, so for the other BSC-ISIs, replace ITMY with ITMX, BS, ETMX, or ETMY. For the HAM-ISIs, take out the "_ST#" stage reference and use "HAM2" through "HAM8" (see the sketch below). Templates for the above attached trends live here:
/ligo/home/jeffrey.kissel/Templates/NDScope/
ISI_ITMY_ST1_CPS_65to200Hz_BLRMS.yaml
ISI_ITMY_ST2_CPS_65to200Hz_BLRMS.yaml
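If it's helpful, here is a quick sketch (mine, not part of the original entry) of generating the equivalent channel lists for another chamber following the substitution rule above:

# Sketch only: build the CPS BLRMS channel names for another chamber, following
# the naming rule described above (swap the optic name for BSC-ISIs, drop the
# _ST# stage reference for HAM-ISIs).
BANDS = ["65_100", "130_200"]
SENSORS = ["H1", "H2", "H3", "V1", "V2", "V3"]

def cps_blrms_channels(chamber):
    """chamber: 'ITMX', 'ITMY', 'BS', 'ETMX', 'ETMY', or 'HAM2' through 'HAM8'."""
    if chamber.startswith("HAM"):
        stems = ["H1:ISI-%s_CPSINF" % chamber]                             # HAM-ISIs: no stage
    else:
        stems = ["H1:ISI-%s_ST%d_CPSINF" % (chamber, s) for s in (1, 2)]   # BSC-ISIs: ST1, ST2
    return ["%s_%s_BLRMS_%s" % (stem, sen, band)
            for stem in stems for band in BANDS for sen in SENSORS]

print("\n".join(cps_blrms_channels("ETMX")))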
So far I have replaced the St1 H2 and St2 H2 sensors. As usual with these, there's no clear indicator of which sensor is misbehaving. The first sensor I replaced was St1 H2, because trends over the last couple of days for all of the corner 2 sensors suggested that the St1 H2 glitches were on average louder than the others. That worked for a while, but then St2 tripped a bit after we got to NLN. Looking at trends of the raw CPSINF channels, the St1 H2 sensor didn't even see this glitch, and it seemed loudest on St2 H2. I've now replaced the St2 H2 sensor; things seem quiet so far.
I had hoped that the 65-100 Hz or 130-200 Hz BLRMS for the CPSs would help find a glitching sensor, but so far that doesn't seem to be the case. Neither has looking at ASDs of the sensors or trends of the sensor readouts. For now, I don't have any clever ideas for telling which sensor is misbehaving beyond guessing at which sensor to replace and seeing whether the ISI stops tripping.
TLDR - Looks like it was a race condition during a lock loss that left a timer never getting reset. Put in some code to hopefully get around this.
Recently, on a few locks, we've seen IFO_NOTIFY alerting as soon as we get to the state NOMINAL_LOW_NOISE, rather than waiting for the camera servo timer or the NLN timer. Ryan S and I took a look at it last week and didn't find the issue, but left some more log messages to help us debug. These messages showed that the NLN timer was getting reset at a lockloss. The only way for this to happen is if, during a lock loss, the IFO node went out of its Observing state before ISC_LOCK went out of NOMINAL_LOW_NOISE. This happened twice last night because of the ITMY ST2 trips (alog72666), but looking at slow channels for other times makes it difficult to confirm.
I added a reset of the timer to some absurdly large number when ISC_LOCK is in DOWN. This should prevent the situation described above and also lets us use IFO_NOTIFY during commissioning time if we want. I loaded it into the node; hopefully this is the fix.
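As a rough illustration of the fix (a plain Python toy model, not the actual Guardian node code; the state names, wait time, and class here are made up for illustration):

import time

NEVER = 1e9        # "absurdly large" timeout (seconds), i.e. effectively disarmed
NLN_WAIT = 600     # made-up example: how long to wait after reaching NLN before alerting

class NlnNotifier:
    """Toy stand-in for the IFO_NOTIFY timer logic described above."""

    def __init__(self):
        self.deadline = time.monotonic() + NEVER   # start disarmed
        self.prev_state = None

    def update(self, isc_lock_state):
        now = time.monotonic()
        if isc_lock_state == 'DOWN':
            # The fix: whenever ISC_LOCK is in DOWN, push the deadline far into
            # the future, so a lockloss can never leave an already-expired timer
            # waiting to fire the moment we next reach NOMINAL_LOW_NOISE.
            self.deadline = now + NEVER
        elif isc_lock_state == 'NOMINAL_LOW_NOISE':
            if self.prev_state != 'NOMINAL_LOW_NOISE':
                self.deadline = now + NLN_WAIT     # arm the normal post-NLN wait
            elif now >= self.deadline:
                print("IFO_NOTIFY: alerting the operator")
        self.prev_state = isc_lock_state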
Maintenance almost wrapped up, starting to relock now.
Tue Sep 05 10:07:08 2023 INFO: Fill completed in 7min 4secs
Looking over the PSL this morning, I noticed the RefCav TPD dropped from ~0.86V to ~0.8V over the Labor Day weekend (see attachment). Checked with the on-duty Operator to ensure no one was using the IMC (no one was), so we took it offline and I remotely tweaked the beam alignment into the RefCav to attempt to recover some of the lost TPD signal. I was able to get the TPD up to ~0.885V with the IMC unlocked (~0.86V with the IMC locked, unsure why the difference). Will keep an eye on this; it looks like we may need a PSL enclosure incursion sometime in the next couple of weeks to adjust the FSS path alignment on the table.
WP11409
A new h1hpiham1 model was installed on h1seih16, no DAQ restart was needed.
TITLE: 09/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY: PEM injections and SUS charge measurements are finishing up; IFO locked for 45min. Maintenance is about to start.
The ITMY ISI watchdog tripped; it hasn't shown up on H1locklosses yet.
We're going through an initial alignment on the relock.
Reacquired NLN at 13:17, just waiting on the camera_servo.
ITMY ISI ST2 watchdog tripped at 13:21, not totally sure why. We lost lock 4 minutes later.
These lock losses were caused by the ITMY capacitive position sensors glitching. See LHO:72683.
ITMY stage2 watchdog tripped and killed the lock, not sure why it tripped. It's the second time in a row that it's happened.
Reacquired NLN at 14:09 UTC.
Vicky and I went to SQZT7 while calibration work was happening, to follow up on some of our observations from 72525 (and Vicky's comment).
Polarization issue:
With the seed dither locked, we placed a PBS before the half wave plate in the homodyne SQZ path and measured 67.2uW transmitted (vertical pol) and 750uW reflected (horizontal), i.e. 817uW total with 8% in the wrong polarization, a 16.5 degree polarization rotation. After the half wave plate we measured 5.48uW transmitted through the PBS (vertical) and 802uW reflected (horizontal), i.e. 807uW total with 0.7% in the wrong polarization, so the polarization is less than 5 degrees away from horizontal. We also placed the PBS right at the bottom of the periscope, and there measured 70uW transmitted through the PBS, with 820uW measured before the PBS was inserted (8.5% in the wrong polarization, 17 degrees of polarization rotation away from horizontal). This would not limit the squeezing measured on the homodyne, since we are able to correct it with the HWP, but measuring the same polarization rotation at the bottom of the periscope suggests that the beam could be coming out of HAM7 with this polarization error, which would look like an 8% loss to the squeezing level in the IFO.
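For reference, the quoted rotation angles are consistent with inferring the angle from the fraction of power in the wrong polarization (my reconstruction; the formula isn't stated explicitly above):

\theta \approx \arcsin\sqrt{P_\mathrm{wrong}/P_\mathrm{total}}:\qquad
\arcsin\sqrt{67.2/817} \approx 16.7^\circ,\quad
\arcsin\sqrt{5.48/807} \approx 4.7^\circ,\quad
\arcsin\sqrt{70/820} \approx 17.0^\circ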
In Sept 2022, during the vent for the OM2 swap, we measured the throughput of the seed beam from HAM7 to HAM6 65110, which agreed well with the only loss between HAM7 and HAM6 being the 65.6% reflectivity of SRM, and suggests that there was not an 8% loss in the OFI at that time.
Loss on SQZT7 (not bad):
Comparing the total power measurements here, we have 820uW at the bottom of the periscope, and 807uW measured right before the homodyne, so we have something like 1.6% loss on SQZT7 optics (small compared to the type of loss we need to explain our squeezing level).
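Spelled out, that estimate is just (same numbers as above):

1 - \frac{807\,\mu\mathrm{W}}{820\,\mu\mathrm{W}} \approx 1.6\%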
Seed transmitted power over reflected power ratio has dropped:
We also measured the seed power reflected from the OPO, so that we could compare the ratio of transmitted to reflected seed measured at the time of the squeezer installation in HAM7 in Feb 2022: 61904 (3.9% trans/refl). Today we saw 0.82mW seed transmitted, and 27mW of reflected seed at the bottom of the periscopes (3.03% trans/refl). This is 78% of the ratio measured at installation. Because this seems like a large drop, we repeated the measurement twice more, and got 3% each time. We also checked that the dither lock is locking at the maximum seed transmission.
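For reference, the ratio arithmetic with the numbers above:

\frac{0.82\,\mathrm{mW}}{27\,\mathrm{mW}} \approx 3.0\%, \qquad \frac{3.03\%}{3.9\%} \approx 78\%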
Homodyne PD QE check (QE of PDB might be low):
We used an Ophir power meter (calibrated in 2018) to measure the LO power onto the homodyne PDs; the filter and head are SN 889882 and the controller is SN 889428. For PDA we saw 0.6mW, and for PDB we saw 0.63mW.
Both PDs are calibrated into mA in the front end, which includes an anti-gain of gain(0.25)*gain(0.22027), a transimpedance of 0.001 (1 kOhm), two anti-whitening filters, and the cnts2V and mA factors. For PDA there is a fudge factor in the filter gain; if we divide this out, the readback is that the PDA photocurrent was 0.512mA, and 0.5126mA for PDB (with a drift of 0.5% over the measurement time). This gives a responsivity of 0.855A/W for PDA and 0.813A/W for PDB. For a QE of 1, the responsivity would be e*lambda/(h*c) = 0.8582 A/W, so our measurement is 99.6% QE for PDA and 95% QE for PDB. (See 63893, where Vicky measured higher reflection off PDB than PDA, and Haocun's measurement in 43452.)
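As a back-of-envelope check of the responsivity and QE numbers above (my own script; the small differences from the quoted 0.855 A/W and 99.6% are presumably just rounding of the quoted powers and currents):

# Back-of-envelope check of the homodyne PD responsivity / QE numbers above.
e   = 1.602176634e-19   # electron charge, C
h   = 6.62607015e-34    # Planck constant, J s
c   = 2.99792458e8      # speed of light, m/s
lam = 1064e-9           # wavelength, m

R_ideal = e * lam / (h * c)   # responsivity at QE = 1, ~0.858 A/W

for name, power_mW, current_mA in [("PDA", 0.60, 0.512), ("PDB", 0.63, 0.5126)]:
    R = current_mA / power_mW           # mA/mW is the same as A/W
    print("%s: %.3f A/W, QE = %.1f%%" % (name, R, 100 * R / R_ideal))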
Above I mixed up vertical and horizontal polarization. The LO beam arriving at the homodyne is vertically polarized, as is the seed beam coming out of the chamber.
Revisiting old alogs about the seed refl/trans (throughput) measurement:
At the first installation in Feb 2022, the trans/refl ratio was measured as 4% on Feb 24th (61904), and the ratio of IR trans arriving on SQZT7 to that right after the OPO was 95%, measured on Feb 10th (61698).
When the CLF fiber was swapped this measurement was redone: 64272. There we didn't measure the CLF refl, but combining the measurements of 37mW out of the fiber and 8mW rejected, we can expect 29mW of CLF refl. With 0.81mW reaching HAM7 this was a 2.8% trans/refl ratio. This is worse than at the initial installation but similar to what Vicky and I measured last week. However, that alog also indicated 95% transmission from right out of the OPO to SQZT7. So this second measurement is consistent with the one we made last week, and would indicate no excess losses in HAM7 compared to that time.
Polarization rotation is only an on-table problem for SQZT7, not an issue for the IFO. It can be attributed to the SQZT7 periscope. To close the loop, see LHO:73537 for Don's latest CAD layout with the squeezer beam going to SQZT7 at a 14.4 degree angle (90 - 75.58) from +Y. The SQZT7 periscope re-directs the beam to travel basically along +Y.
The nominal value of the squeezer laser diode current was changed to 1.863 from 1.95. The tolerance is unchanged at 0.1. Looking at trends, we sometimes read a value just below 1.85, leading to a failed laser condition, which in turn triggers a relocking of the squeezer laser. However, since we are already locked, all we see is the fast and common gains ramping down and up.
Looking at this diode current trend over the past 500 days, we see it fairly stable but trending down very slowly. It may have lost 10 mA over the past year. Resetting the nominal value should keep us in the good band for a while if this trend continues.
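Assuming the laser-OK check is simply nominal +/- tolerance, the acceptance band moves as follows (my arithmetic):

1.95 \pm 0.1\,\mathrm{A} \Rightarrow [1.85,\,2.05]\,\mathrm{A}, \qquad 1.863 \pm 0.1\,\mathrm{A} \Rightarrow [1.763,\,1.963]\,\mathrm{A}

so a reading just under 1.85 A fails the old band but sits comfortably inside the new one.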
So far this seems to have fixed the TTFSS gain changing issue! Haven't seen gain changes while locked in the past couple days, since Daniel changed the laser diode nominal current (bottom purple trend).
In the past week there wasn't a single TTFSS gain ramping incident during lock. The fast and common gains are again monitored in SDF.
Camilla and I removed the auxiliary laser from HAM7 this morning, but left the irises in place.
Before removing the laser we noted that there were 0.25 NSUM counts on both OMC QPDs, with 21mW out of the aux laser. This was with the laser current at 150 mA, and the OMC QPD sums were similar to last week.
We then switched to the squeezed beam from HAM7, using the OPO dither lock. We temporarily turned the power into the seed fiber (which has no fiber switch right now) up to 77mW, and measured 1.18mW leaving HAM7 at the gate valve. We measured 0.78mW arriving at HAM6 before OM1, which agrees well with SRM transmission of 34.34%. We saw the SQZ beam on the OMC QPDs, 0.015 and 0.013 counts NSUM. (This is a slightly higher ratio of NSUM counts to measured power incident on OM1 than for the aux laser. Keita tells us this is not surprising since the aux laser beam size was large.)
We wrapped the breadboard that the aux laser was on in foil and brought it to the optics lab, and Camilla turned the seed power back down.
The SRM transmission above was wrong; we have SRM06, which has a reflectivity of 32.34% according to E1700158 (https://galaxy.ligo.caltech.edu/optics/).
This means that our measurement above of 66% transmission from HAM7 to HAM6 indicates 97.6% transmission taking out the SRM transmission, which seems reasonable for two passes of the OFI with this kind of measurement accuracy.
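Spelling out that arithmetic with the numbers as quoted (treating the 32.34% from the correction above as the SRM number to remove):

\frac{0.78\,\mathrm{mW}}{1.18\,\mathrm{mW}} \approx 66\%, \qquad \frac{0.661}{1 - 0.3234} \approx 97.7\%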
We've had ITMY ST2 trip two more times since going back to observing, but we haven't broken lock. Jim is preparing a new CPS card, but we'll have to break the lock to swap it if we choose to.
Intentionally broke the lock to allow for a fix of the CPS glitching. Relocking now.