FAMIS 19979
Cooling: The chiller water temperature seems to have dropped 6 days ago, slightly cooling the power meters, amplifiers, diode boxes, and control boxes.
Environment: Temperatures in the laser room, anteroom, LVEA, diode room, and chiller room also dropped slightly at around the same time. Humidity seems to be falling everywhere, too.
Laser: This change 6 days ago also correlates with a drop in NPRO output power from 1.83 W to 1.82 W and a drop in AMP1 power.
Stabilization: PMC REFL is falling, but I don't see a clear corresponding increase in PMC TRANS. There was an ISS AA chassis failure 3 days ago (alog 70089).
Last week Elenna and Derek pointed out that there was some evidence of PRCL ringing up around 11 Hz at locklosses, due to the loop not having the appropriate unity gain frequency (alog 70044).
Since that report, I looked into the following locklosses that occurred from the guardian state 600 (NLN).
Some of the PRCL plots showed longer or shorter oscillation durations before lockloss; examples are shown in the attached images. Some of the PRCL plots did not show any evident oscillations. (A sketch of the per-lockloss scan appears after the list.)
Lockloss, 600 NLN, at 05/30/23 00:55:57 UTC, PRCL issue shown around 12 Hz (shown in first image, this was one of the longer oscillations)
Lockloss, 600 NLN, at 05/30/23 08:30:27 UTC, PRCL issue shown around 12.6 Hz (short oscillation)
Lockloss, 600 NLN, at 05/30/23 12:30:27 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 05/30/23 14:59:29 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 05/31/23 01:09:14 UTC, PRCL around 10 Hz (short oscillation)
Lockloss, 600 NLN, at 05/31/23 08:10:26 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 05/31/23 12:13:26 UTC, PRCL issue shown around 11 Hz (short oscillation)
Lockloss, 600 NLN, at 05/31/23 18:03:16 UTC, PRCL issue shown around 11 Hz (short oscillation) (last lock at 76 W input, Elenna lowered it by 1 W (alog 70042) after this)
Lockloss, 600 NLN, at 06/01/23 01:19:29 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 06/01/23 05:20:18 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 06/01/23 08:41:49 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 06/01/23 12:44:30 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 06/02/23 04:23:53 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 06/02/23 06:57:23 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 06/02/23 08:57:58 UTC, not sure what to make of the glitches shown…but they could be interesting to look into (shown in second image)
Lockloss, 600 NLN, at 06/02/23 20:13:27 UTC, PRCL issue not shown (I believe this was due to parametric instability in the mirrors)
Lockloss, 600 NLN, at 06/03/23 08:32:59 UTC, PRCL issue shown around 11 Hz (short oscillation)
Lockloss, 600 NLN, at 06/03/23 12:06:23 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 06/03/23 13:56:36 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 06/04/23 20:54:10 UTC, PRCL issue not shown
Lockloss, 600 NLN, at 06/05/23 01:44:44 UTC, PRCL issue shown around 11 Hz (short oscillation)
Lockloss, 600 NLN, at 06/05/23 11:16:09 UTC, PRCL issue shown around 11 Hz (shown in third image, this is an example of what the short oscillation looks like)
Lockloss, 600 NLN, at 06/05/23 18:17:00 UTC, PRCL issue shown around 13 Hz (longer oscillation)
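For reference, here is a minimal sketch of the kind of per-lockloss scan used above, assuming gwpy data access on the cluster; the PRCL channel name is my assumption, and the times come straight from the list:

```python
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

# UTC lockloss times from the list above (extend as needed)
locklosses = ["2023-05-30 00:55:57", "2023-05-30 08:30:27"]

for utc in locklosses:
    gps = to_gps(utc)
    # fetch the last 60 s of the PRCL control signal before the lockloss
    prcl = TimeSeries.get("H1:LSC-PRCL_OUT_DQ", gps - 60, gps)
    # a Q-transform makes a ~11 Hz ring-up easy to spot by eye
    qgram = prcl.q_transform(frange=(5, 30))
    qgram.plot().savefig(f"prcl_lockloss_{int(gps)}.png")
```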
Since we still see the PRCL issue after the power change, we will look into what the arm circulating power was for the last 6 locks to see if there is any pattern.
It seems like we still have range to gain by bringing the generated squeezing levels back up to nominal. This has been a challenge because of the LVEA temperature drifts, which I think have affected some on-table SQZT0 alignments, so we can't run with nominal/typical generated sqz levels. Moreover, the drifts have caused our ISS to rail several times recently, which we've had to drop out of observing to fix.
Looking at on-table power trends, we are having to launch more power into the fiber for the same power out of the fiber. The Y2 cursors show that for 20 mW launched into the fiber, we used to get 85 uW OPO trans, with lots of range on the pump ISS. Now we get much less OPO trans for more launched power and have no room on the ISS (see the quick arithmetic after the list). Specifically:
- >2 months ago, we launched 20 mW into the fiber for 85 uW OPO trans (with lots of room to spare on the ISS).
- 1 month ago, we launched 20 mW into the fiber for 75 uW OPO trans (some room to spare). Camilla aligned in mid-May (alog 69704), but then a bunch of LVEA temperature changes happened.
- Today, we launch 22 mW into the fiber for 60 uW OPO trans (almost no room to spare on the ISS).
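For concreteness, the quick arithmetic behind those numbers (values copied from the list above; this is just a throughput ratio, not a real coupling measurement):

```python
# (launched into fiber [W], OPO trans [W]) per epoch, from the list above
epochs = {
    ">2 months ago": (20e-3, 85e-6),
    "1 month ago":   (20e-3, 75e-6),
    "today":         (22e-3, 60e-6),
}

ref_launch, ref_trans = epochs[">2 months ago"]
ref_eff = ref_trans / ref_launch  # 4.25e-3, the reference throughput
for name, (launched, trans) in epochs.items():
    eff = trans / launched
    print(f"{name}: {1e3 * eff:.2f} mW trans per W launched "
          f"({eff / ref_eff:.0%} of reference)")
# "today" comes out near 64%, i.e. roughly a third of the effective
# launched-to-trans throughput has been lost
```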
Things we can try to get this back:
- Given the LVEA temperature changes, we can check/optimize the pump fiber coupling efficiency on SQZT0. We can also check the pump AOM throughput, but its diffraction efficiency still looks good (~60%).
- We can also check the OPO transfer function, or revert the power on the OPO's RF PD. In mid-April (~50 days ago, LHO:69677) I reduced the power so as not to saturate the REFL RF80 PD used for OPO PDH locking; even though the reported RF power was similar at the lowered DC power, maybe I should have adjusted the OPO servo gain. But we've recovered the "normal" situation since that change, so I don't think this alone is responsible.
TITLE: 06/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 132Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 18mph Gusts, 13mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY: Taking over from Austin. H1 has been locked and Observing for 1.5 hours
TITLE: 06/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 132Mpc
SHIFT SUMMARY:
- Arrived with the IFO locked and in observing
- Handing H1 off to Ryan S in observing
- Lock #1:
- Lock #2-??:
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:13 | FAC | Karen | Optics Lab/VAC/MY | N | Tech clean | 17:01 |
| 16:28 | FAC | Kim | MX | N | Tech clean | 17:08 |
| 19:46 | EE | Fil/Patrick | EX Mech | N | Wiring checks | 21:46 |
| 21:51 | EE | Fil | MY | N | Inventory | 22:51 |
We have been having some locklosses during the TURN_ON_BS_STAGE2 step. Looking at a couple of events, there are transients in the BS St2 CPS during this step from engaging the isolation loop boosts. The boost filters have ramp times; some of them were 5 seconds, probably changed after this alog, but the RX/RY loops still had 2-second ramps. All of these are now set to 5 seconds, which looks much smoother in the ISI CPS. Otherwise, these filters have not been changed since 2020. I also tried using the "Always On" option in foton; this went poorly and tripped the ISI.
The first attached plot shows one of the previous transitions; the middle row is the SEI_BS guardian state transition and the ISC_LOCK state, and the top and bottom rows are the St2 CPS, RX/RY on top and X/Y on the bottom. It seems likely that the 2-second ramp for RX/RY is too fast, causing X/Y to move around a bit through tilt-to-horizontal coupling. The second attached plot is the DRMI lock just now, where the CPS transition is much smoother.
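As a toy illustration only (not the real ISI loop model; the filter corners and signal here are made up), a sketch of why a slower gain ramp injects a smaller transient when a low-frequency boost is engaged on a signal with a static offset:

```python
import numpy as np
from scipy.signal import bilinear, lfilter

fs = 256.0
t = np.arange(0, 20, 1 / fs)
x = 1.0 + 0.1 * np.sin(2 * np.pi * 0.5 * t)  # static offset + slow motion

# crude DC boost: gain 10 below ~0.1 Hz, unity above ~1 Hz
b, a = bilinear(10 * np.array([1 / (2 * np.pi * 1.0), 1]),
                [1 / (2 * np.pi * 0.1), 1], fs)
boosted = lfilter(b, a, x)

for ramp in (2.0, 5.0):
    gain = np.clip((t - 10.0) / ramp, 0, 1)  # linear ramp starting at t = 10 s
    y = (1 - gain) * x + gain * boosted      # cross-fade the boost in
    kick = np.max(np.abs(np.gradient(y, 1 / fs))[t > 10])
    print(f"{ramp:.0f} s ramp: peak |dy/dt| = {kick:.2f}")
# the kick scales roughly with 1/ramp, so the shorter ramp hits harder
```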
Nothing else has changed on these filters or loops in many months; I don't know why we would be having more problems now, but maybe we're just paying more attention.
This is, of course, exactly the opposite of what I did back in 2019, when I was decreasing ramp times to get the boost engaged as quickly as possible. Shorter ramps seemed to fix locklosses at this state back then. But we were turning the stage 2 loops completely off to acquire DRMI at the time, and I wasn't using the St2 RX/RY loops then either. Now we only turn off the boosts, which leaves the dc isolation loops mostly engaged.
I've counted six lock attempts since loading the new ramp time settings in the boost filters and I don't see any kicks in the St2 CPS when engaging the boost filters. I also don't see strings of lock losses during this step like before. Seems like that means it at least didn't make things worse.
Mon Jun 05 10:07:23 2023 INFO: Fill completed in 7min 22secs
Gerardo confirmed a good fill curbside.
I have updated lscparams with new violin settings that damp our troublesome modes faster (I have only changed the gains). Here are the updated settings (a sketch of how such settings might be laid out follows the list):
ITMY mode 4 gain: +0.4 (was +0.1)
ITMY mode 5 gain: -0.04 (was -0.01)
ITMY mode 7 gain: +0.2 (was +0.1)
ITMY mode 8 gain: -0.2 (was -0.08)
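Purely as a hypothetical illustration (the actual structure of lscparams may differ), settings like these are typically kept in a dict keyed by optic and mode:

```python
# hypothetical layout only; the real lscparams structure may differ
VIOLIN_DAMP_GAINS = {
    # (optic, mode number): damping gain used by the violin damping code
    ("ITMY", 4): +0.4,   # was +0.1
    ("ITMY", 5): -0.04,  # was -0.01
    ("ITMY", 7): +0.2,   # was +0.1
    ("ITMY", 8): -0.2,   # was -0.08
}
```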
FAMIS 17590
All STS proof masses are within the healthy range (< 2.0 [V]). Great!
Here's a list of how they're doing, just in case you care (a sketch of the threshold check follows the lists):
STS A DOF X/U = -0.601 [V]
STS A DOF Y/V = -0.696 [V]
STS A DOF Z/W = -0.637 [V]
STS B DOF X/U = 0.481 [V]
STS B DOF Y/V = 0.828 [V]
STS B DOF Z/W = -0.336 [V]
STS C DOF X/U = -0.477 [V]
STS C DOF Y/V = 0.937 [V]
STS C DOF Z/W = 0.234 [V]
STS EX DOF X/U = -0.032 [V]
STS EX DOF Y/V = 0.08 [V]
STS EX DOF Z/W = 0.121 [V]
STS EY DOF X/U = 0.263 [V]
STS EY DOF Y/V = -0.894 [V]
STS EY DOF Z/W = 0.971 [V]
STS FC DOF X/U = 0.39 [V]
STS FC DOF Y/V = -0.77 [V]
STS FC DOF Z/W = 0.749 [V]
There are 12 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.64 [V]
ETMX T240 2 DOF Y/V = -0.75 [V]
ITMX T240 1 DOF X/U = -0.606 [V]
ITMX T240 1 DOF Y/V = 0.323 [V]
ITMX T240 1 DOF Z/W = 0.448 [V]
ITMX T240 3 DOF X/U = -0.545 [V]
ITMY T240 3 DOF X/U = -0.345 [V]
ITMY T240 3 DOF Z/W = -0.792 [V]
BS T240 1 DOF Y/V = -0.38 [V]
BS T240 3 DOF Y/V = -0.355 [V]
BS T240 3 DOF Z/W = -0.487 [V]
HAM8 1 DOF Z/W = -0.341 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.161 [V]
ETMX T240 1 DOF Y/V = 0.116 [V]
ETMX T240 1 DOF Z/W = 0.124 [V]
ETMX T240 2 DOF Z/W = -0.211 [V]
ETMX T240 3 DOF X/U = 0.098 [V]
ETMX T240 3 DOF Y/V = 0.067 [V]
ETMX T240 3 DOF Z/W = 0.101 [V]
ETMY T240 1 DOF X/U = 0.046 [V]
ETMY T240 1 DOF Y/V = 0.052 [V]
ETMY T240 1 DOF Z/W = 0.122 [V]
ETMY T240 2 DOF X/U = -0.077 [V]
ETMY T240 2 DOF Y/V = 0.142 [V]
ETMY T240 2 DOF Z/W = 0.077 [V]
ETMY T240 3 DOF X/U = 0.129 [V]
ETMY T240 3 DOF Y/V = 0.064 [V]
ETMY T240 3 DOF Z/W = 0.096 [V]
ITMX T240 2 DOF X/U = 0.13 [V]
ITMX T240 2 DOF Y/V = 0.219 [V]
ITMX T240 2 DOF Z/W = 0.213 [V]
ITMX T240 3 DOF Y/V = 0.134 [V]
ITMX T240 3 DOF Z/W = 0.159 [V]
ITMY T240 1 DOF X/U = 0.082 [V]
ITMY T240 1 DOF Y/V = 0.051 [V]
ITMY T240 1 DOF Z/W = -0.055 [V]
ITMY T240 2 DOF X/U = 0.105 [V]
ITMY T240 2 DOF Y/V = 0.194 [V]
ITMY T240 2 DOF Z/W = 0.027 [V]
ITMY T240 3 DOF Y/V = 0.04 [V]
BS T240 1 DOF X/U = -0.229 [V]
BS T240 1 DOF Z/W = 0.056 [V]
BS T240 2 DOF X/U = -0.113 [V]
BS T240 2 DOF Y/V = -0.023 [V]
BS T240 2 DOF Z/W = -0.207 [V]
BS T240 3 DOF X/U = -0.224 [V]
HAM8 1 DOF X/U = -0.264 [V]
HAM8 1 DOF Y/V = -0.166 [V]
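For reference, a minimal sketch of the threshold logic behind this check (sample values copied from the lists above; a real script would read the voltages over EPICS, e.g. with cdsutils):

```python
STS_LIMIT = 2.0   # [V] STS proof masses are healthy below this
T240_LIMIT = 0.3  # [V] T240 proof masses are flagged at or above this

def split_by_limit(readings, limit):
    """Split {name: volts} into (within_range, out_of_range) by |V|."""
    ok = {k: v for k, v in readings.items() if abs(v) < limit}
    bad = {k: v for k, v in readings.items() if abs(v) >= limit}
    return ok, bad

# sample values copied from the lists above
t240 = {"ETMX T240 2 X/U": -0.64, "ETMY T240 1 X/U": 0.046}
ok, bad = split_by_limit(t240, T240_LIMIT)
for name, volts in bad.items():
    print(f"{name} = {volts} [V] is out of range!")
```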
TITLE: 06/05 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 133Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 7mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
Current IFO Status: NOMINAL_LOW_NOISE & OBSERVING
11:16 UTC Lockloss Update: There were SUS ETMX L3 saturations right before the lockloss.
LVEA Temperature Update:
LVEA Temp has stayed elevated but stable over the last several hours.
Relocking:
INCREASE_FLASHES and I are having a hard time getting ALS-C_TRY_LF above 80% due to the low DIFF beat note being in the -40 range.
I'm going to try going through some of the manual initial alignment to see if I can get that a little higher. That was not a good idea: the DIFF beatnote got even worse, going down to the -50s. I took the sliders back to GPS time 1369976970, which is right before the most recent lock. That did not make the beat notes or the Y arm flashes any better, so I took the sliders back to where they were 5 minutes ago (GPS time 1370001722) and touched them up manually. I was able to get a COMM beatnote of -10 (not ideal) and a DIFF beatnote of -7 (which I think is good).
First locking attempt: lost lock at TRANSITION_DRMI_TO_3F. While watching H1:LSC-POPAIR_B_RF90_I_ERR_DQ, I saw a kick happen during the transition.
For the next locking attempt I waited in ENGAGE_DRMI_ASC for about a minute for the ASC signals to converge before moving on. DRMI lost lock, but it wasn't a full lockloss. I tried again, waiting about 4 minutes this time, and it smoothly transitioned to DRMI_LOCKED_CHECK_ASC.
Got to NOMINAL_LOW_NOISE at 13:35, but the CAMERA_SERVO guardian had yet to get all the way up to CAMERA_SERVO_ON. It turns out I was being impatient and it would have happened eventually.
13:51 UTC OBSERVING
This looks to me like the ADS was actually still converging this whole time, and there wasn't an issue with the guardian(s). The convergence of the smooth channels (e.g. H1:ASC-ADS_PIT3_SMOOTH_INMON) seems to take longer than usual for a reason I haven't looked into. The convergence checker in the CAMERA_SERVO node runs instantaneous checks of these smooth channels, looking for values below 0.0025. Since we are looking at 6 different channels (DOFs 4, 5, 6, P&Y for each), when all of them are below that threshold at the same time, we are probably converged enough.
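A minimal sketch of that instantaneous check, assuming guardian-style channel reads through ezca (the actual CAMERA_SERVO code may differ):

```python
THRESHOLD = 0.0025  # convergence threshold from the text above
DOFS = (4, 5, 6)    # ADS DOFs checked, pitch and yaw for each

def ads_converged(ezca):
    """True only if all six smooth channels are below threshold."""
    channels = [f"ASC-ADS_{axis}{n}_SMOOTH_INMON"
                for n in DOFS for axis in ("PIT", "YAW")]
    return all(abs(ezca[ch]) < THRESHOLD for ch in channels)
```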
- Generally noisier at the start of each observing segment; range improves over time as the noise subsides
- Limited observing time
- Number of lines seems very unstable
- Hveto is working well; on Saturday and Sunday it finds many low-frequency glitches associated with H1:ASC-PRC2_P_OUT_DQ
- Lasso is using channels it shouldn't be, like H1:CAL-CS_TDEP_SUS_LINE3_GRM_REAL_OUTPUT
Full report here: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20230522
Sun Jun 04 10:08:40 2023 INFO: Fill completed in 8min 40secs
TITLE: 06/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 130Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 9mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
- Taking over from Tony with H1 locked and in observing for about ~1:30 hours
- CDS/SEI/DMs ok
- No alerts
The low frequency (<50 Hz) strain sensitivity is worse now than it was a few months ago. One of the questions we had was whether this was due to the increase of input power from 60 to 75 W. The quick answer is:
the 20-50 Hz noise got worse both when the power was increased from 60 to 75 W and when the DARM offset was increased from 20 to 40 mA
To try and answer this question:
The plot attached below shows in the top panel how the noise in the 20-50 Hz region got worse over time: it shows the average noise in the band, normalized to the beginning of the analysis in March. There are two clear discrete steps where the noise got significantly worse. The second panel shows (on twin y-axes) the input power in black and the DCPD power in red. The two times when the noise got worse line up with when the input power was increased from 60 to 75 W and when the DARM offset was increased from 20 to 40 mA.
The colored X's in the second panel correspond to the DARM spectra shown in the bottom panel, same color.
The results shown here are obtained looking at CAL-DELTAL_EXTERNAL_DQ, but very similar results can be obtained with OMC-DCPD_SUM_OUT_DQ, compensating for the calibration line amplitudes to account for changes in the optical gain.
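A sketch of how the top-panel trend can be reproduced, assuming gwpy access to the channel; the start time, stride, and FFT settings below are placeholders, and in practice only locked segments would be used:

```python
import numpy as np
from gwpy.timeseries import TimeSeries

BAND = (20, 50)  # Hz

def band_noise(gps, duration=120):
    """Average ASD of CAL-DELTAL in BAND over `duration` s from `gps`."""
    data = TimeSeries.get("H1:CAL-DELTAL_EXTERNAL_DQ", gps, gps + duration)
    asd = data.asd(fftlength=8, overlap=4)
    f = asd.frequencies.value
    return asd.value[(f >= BAND[0]) & (f < BAND[1])].mean()

times = [1362000000 + i * 86400 for i in range(90)]  # ~daily from early March
trend = np.array([band_noise(t) for t in times])
trend /= trend[0]  # normalize to the beginning of the analysis
```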
Gabriele also made plots comparing the noise with relevant SQZ metrics; I've annotated some trends I see in the various frequency bands here. He compared the noise reduction in 20-50 Hz, 100-200 Hz, and 1-2 kHz with the filter cavity detuning, the generated sqz level, and the squeezer BLRMS. These are parameters I think could meaningfully change the noise in clear ways; there are more parameters we could compare (e.g. PSAMS, sqz angle, SRCL detuning, etc.), but this is a start. Note he has plotted all SQZ BLRMS for all bands, but the sqz BLRMS are band-selective, so comparing them all is a bit confusing.
Trends that we noticed when compared with squeezing, in line with his earlier comments, are:
20-50 Hz: probably technical noise. Higher noise is seen with higher IFO power and higher DCPD mA. When I looked at this before (e.g. summary here), it was consistent with my impression that this extra low-frequency noise is added technical noise. (normally BLRMS 1)
100-200 Hz: unclear whether this is quantum or classical. No consistent/clear trends with generated sqz level, FC detuning (though maybe?), or IFO power. Probably some extra technical noise here from the jump to 40 mA, seen as both elevated sqz BLRMS and worse noise. (normally BLRMS 2-3)
1-2 kHz: more squeezing and lower power seemed better for the high-frequency kHz noise. At 60 W, the improvement is probably related to lower IFO technical noise, which is also consistent with injecting more squeezing reducing the noise more. (normally BLRMS 4-5)
At the same time, there was some improvement in the noise between 100 and 200 Hz, but it seems very loosely correlated with the increased power: I would argue that the noise between 100 and 200 Hz improved gradually before the power increase.
There is almost no difference in the noise between 1 and 2 kHz.
Current IFO Status: Aligning & Relocking
11:16 UTC lockloss with currently unknown cause.
Ground motion is low and wind is low. Currently waiting for the lockloss analysis to return with some plots.
LVEA Temperature Update:
LVEA Temp has stayed elevated but stable over the last several hours.
TITLE: 06/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
SHIFT SUMMARY: Busy shift following first lockloss. Discovered and worked around BS ISI stage 2 kick, waited for earthquakes to pass, and adjusted PR3 due to a slight temperature excursion in the LVEA.
Lock #1:
Lock #2:
Lock #3:
Lock #4:
Lock #5:
Lock #6: (where I finally figure out what's happening)
My impression of this behavior is that the motion from turning on the BS ISI's stage 2 high-isolation damping was enough to break the DRMI lock, but once it was already on, DRMI was able to lock just fine. Tagging SEI for follow-up. It's possible Tony saw this behavior last night too when trying to lock (from alog 70114: "There were 2 Locklosses at TRANSITION_DRMI_TO_3F").
Locks #7-10:
After ground motion calmed down, I noticed that the beatnotes (especially COMM) had been steadily getting worse along with the TR{X,Y} signals. Trending the PR3 oplev, I saw that it had moved a fair amount in yaw compared to its normal fluctuation over the past few days. Since I still couldn't move past LOCKING_ALS, I decided to move PR3 in yaw (slider from 153.2 to 153.5) to bring it back closer to its position at the last lock acquisition, which improved TR{X,Y} and the COMM beatnote. I then ran another initial alignment and looked into whether there were any temperature changes that could've caused this motion. LVEA zone 1B has had an excursion this evening that appears to line up with the PR3 yaw motion (trend attached, tagging OpsInfo and FMCS), which is strange to me because zone 1B is closest to the vertex, not the input arm, but it may have more of an effect than I'm realizing.
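A sketch of the trend comparison described above; both channel names are guesses on my part (I haven't checked the real oplev/FMCS names), and a plain correlation coefficient is only a crude stand-in for the by-eye comparison:

```python
import numpy as np
from gwpy.timeseries import TimeSeries

start, end = "2023-06-05 00:00", "2023-06-05 08:00"
# both channel names below are hypothetical placeholders
pr3_yaw = TimeSeries.get("H1:SUS-PR3_M3_OPLEV_YAW_OUT_DQ", start, end)
zone1b = TimeSeries.get("H1:FMC-ZONE1B_TEMP", start, end)

# resample both to 1 Hz so the samples line up in time
a = pr3_yaw.resample(1.0).value
b = zone1b.resample(1.0).value
n = min(len(a), len(b))
print(f"PR3 yaw vs zone 1B temp: r = {np.corrcoef(a[:n], b[:n])[0, 1]:.2f}")
```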
Lock #11:
Handing off to Tony.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 23:30 | SQZ | Vicky, Guests | Overpass, LExC | - | Tour | 01:24 |
Added trends of the relevant oplevs at the time of the excursion. To my eye, it looks like PR3 Y and ITMX P potentially moved a decent amount.
EDIT: Added another long-term trend of the oplevs, with more zone temps. With the longer trends, ITMX P still seems a bit suspect (and seems to coincide with the temperature change).
Austin, Jim, Dave:
Starting at 06:56 PDT the h1seih7 ADC1 (2nd) channels 08-19 started misbehaving. All the other channels on this ADC are unaffected.
Austin confirmed that the AA chassis for this ADC looks correct on the mezzanine.
Jim confirmed that this block of channels are associated with a single ISI interface chassis which feeds the AA.
The six broken signals are:
H1:ISI-HAM7_L4CINF_[V2,H2]_MON
H1:ISI-HAM7_GS13INF_[V2,H2]_MON
H1:ISI-HAM7_CPSINF_[V3,H3]_MON
The ASDs of the CPS, GS13, and L4C in this chassis all look identically bad; the other sensors seem fine. All ISI controls are off, the loops can't be re-engaged at this point, and we probably can't get the squeezer back on. I don't think we can get to observing without fixing this.
The attached ADC monitor MEDMs for h1iopseih7 and h1isiham7 highlight the broken channels. The signals have two modes: they are either somewhat static at incorrect low values, or they are in a flashing mode, transitioning between the low value and a very high value (tens of thousands) at a flash frequency of about 1 Hz, then going back to the quiet mode for 10-20 seconds.
For about 30 minutes after the 06:58 failure these signals were very noisy all the time; they then settled into the static/flashing modes we are seeing now.
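As an aside, a crude sketch of how one might mechanically classify the two failure modes described above, given a stretch of raw samples per channel (thresholds are illustrative only):

```python
import numpy as np

def classify(samples, flash_level=10_000, quiet_span=100):
    """Crude classifier for the two observed failure modes."""
    samples = np.asarray(samples, dtype=float)
    if np.max(np.abs(samples)) > flash_level:
        return "flashing"    # ~1 Hz excursions to tens of thousands
    if np.ptp(samples) < quiet_span:
        return "static-low"  # stuck near an incorrect low value
    return "normal"
```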
Marc, Austin, and I pulled a spare ISI interface chassis from the staging building test stand and replaced the corner 2 unit. So far all signals look normal and the ISI was able to re-isolate. Marc opened the old chassis and it looks like a cap or something failed on the power board.
We also noticed at the mezzanine racks that all of the chassis there are quite hot; I think Marc said he saw 110 F on part of one chassis. It's warm out there, so maybe we should space those chassis out a bit on the rack.
We replaced S20000568 with S1201328. This spare interface is an ALIGO spare and the last one we have.
IFO stayed locked this whole time, but squeezer was down. Vicki has now relocked and we are back observing.
C5 blown on power regulator.
ASDs of the sensors after the chassis swap.
Nice recovery, well done all.
Tagging SEI, OpsInfo, and FRS. Opened and closed FRS Ticket 28196. Observation time lost is ~5 hours, see attached screenshot.