Sheila, Matt, Mayank
Because of yesterday's PR2 move, the beam was well off center on the ASC POP diodes. I believe that this was the cause of last night's locking difficulties (82678), rather than the arm alignment references, which should not have been affected (and weren't very different after being reset). This QPD is used in DRMI ASC, and the yaw loop was pushing in the wrong direction so that the beam was falling off the QPD. Turning off this loop (and not PR2) can lead to some unusual alignment that might cause difficulty locking, which could be avoided by moving CHARD, even though the real problem was in the PRC.
We were dropped out of observing because of the extreme PI damping (more on that later), so we took the chance to pico on these QPDs. We used the poorly named ndscope template /sheila.dwyer/ndscope/ASC/Pico_pop_wfs.yaml (similar to 81849)
The strategy that worked for us was to use motor 5 (ASC POP steering 1) to center QPD B and motor 6 (ASC POP steering 2) to center QPD A, but we had to walk very far to get both beams on the QPDs at the same time; we ended up moving by about 17000 counts on both picos in yaw (in the attached screenshot you can see we picked the wrong strategy first).
We ended up with both QPDs centered around 0, and the sums are about 3% higher after our pico'ing than before. We reset the offsets to 0, and accepted this in both safe.snap and observe.snap. We have set PRC1 + PRC2 ASC to off in the DRMI guardian, because we need to set these offsets at 2 W with all the ASC on.
Request for next time we lock: DRMI may not be well aligned because the PRC1+2 loops are off, but if initial alignment gets run that should fix it. It could probably also be fixed by running PRMI ASC, then locking DRMI. Next time we lock, we can trend what the POP_A QPD pitch and yaw are in the guardian state PREP_DC_READOUT and set the offsets to -1 times those values (sketched below). Then we should edit lines 1030 and 1031 in ISC_DRMI to turn the PRC1 and PRC2 ASC loops back on.
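A minimal sketch of what that offset-setting step might look like, assuming cdsutils and ezca are available on a control room workstation; the channel names and suffixes (H1:ASC-POP_A_PIT_OUT16, _OFFSET, etc.) are my guesses and should be checked against the real ASC model before use:

```python
# Minimal sketch (not a vetted procedure): average the POP_A QPD pitch/yaw
# while sitting in PREP_DC_READOUT and write -1x the averages into the
# QPD offsets.  Channel names below are illustrative guesses.
import cdsutils
from ezca import Ezca

ezca = Ezca('H1')   # EPICS prefix 'H1:'

AVG_SEC = 60  # how long to average while in PREP_DC_READOUT

pit = cdsutils.avg(AVG_SEC, 'H1:ASC-POP_A_PIT_OUT16')
yaw = cdsutils.avg(AVG_SEC, 'H1:ASC-POP_A_YAW_OUT16')

# Offsets are -1 * the measured values, so the loops servo back to the
# spot position we had in PREP_DC_READOUT.
ezca['ASC-POP_A_PIT_OFFSET'] = -1 * pit
ezca['ASC-POP_A_YAW_OFFSET'] = -1 * yaw
print(f'Set POP_A offsets: PIT {-pit:+.4f}, YAW {-yaw:+.4f}')
```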
Note for Detchar: Keita had pointed out that fixing this clipping on the QPDs could possibly fix our large glitches that looked like a scattered light problem, which was apparent in range fluctuations in this morning's lock. We don't have much data yet since going back to observing; see the attached spectrogram.
Fri Feb 07 10:07:26 2025 INFO: Fill completed in 7min 22secs
Jordan confirmed a good fill curbside. TCmins [-68C, -34C] OAT (0C, 32F) DeltaTempTime 10:07:35
18:30 UTC H1 dropped from Observing due to a PI ring-up that caused the SUS_PI Guardian to invoke XTREME_PI_DAMPING.
The SUS_PI guardian did great work and damped PI mode 24 down in 2 minutes.
H1 returned back to Observing at 18:32 UTC.
This happened again at 18:36 UTC, and we got back to Observing at 18:37 UTC.
This happened again at 18:42 UTC, and we returned to Observing at 18:44 UTC after another round of XTREME_PI_DAMPING.
Another PI24 ring-up took us into commissioning at 18:57 UTC.
This time we will stay in commissioning for a few minutes so Sheila can pico the ASC POP QPDs.
I've also taken SQZ MAN to NO_SQUEEZING[7] and switched back to FREQ_DEP_SQZ[100] at 19:28 UTC.
Sheila & Matt are still Pico-ing.
TITLE: 02/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY:
H1 has been locked for over three and a half hours. We have a tall violin mode.
SQZ
At 15:37 UTC we dropped from Observing because the SQZ subsystem dropped from FREQ_DEP_SQZ[100] to LOCK_OPO[28].
I opened up the SQZ Overview and saw that the Filter Cavity Guardian was working its way through the states without any issues, so I did not intervene.
We were back in Observing by 15:43 UTC.
TLDR: H1 would not lock due to alignment changes imposed earlier in the day, so after bringing alignment back, green references were updated and H1 returned to observing at 12:01 UTC. For OPS - An initial alignment MUST be run after the next lockloss due to updated green references.
H1 called for assistance at 07:48 UTC as it could not relock on its own (still trying to recover from Corey's shift). I discovered that H1 had run an initial alignment automatically, but every time it would lock DRMI, the ASC (specifically PRC1_Y) would pull alignment the wrong way. I first tried simply turning off the PRC1_Y loop, which worked to get through DRMI ASC, but then there would be a lockloss somewhere before or during ENGAGE_ASC. One of these times at ENGAGE_ASC, I paused in the state previous and went through ISC_LOCK line-by-line to turn on each ASC loop one at a time, but I eventually lost lock while turning on CHARD_Y. At this point, I reached out to Jenne for assistance, and she reminded me of the move_ARM_dev.py script in userapps/asc/ which would allow me to converge CHARD_Y before engaging the loop, which went well. I was then able to successfully go through every step of ENGAGE_ASC and continue locking.
While this was going on, Jenne speculated that the green alignment references had not been updated after the alignment changes during the commissioning period today, which would explain why alignment looked so bad following the initial alignment run earlier in the night. So, while waiting in PREP_DC_READOUT, I opened the ALS beam shutters to see if the arm would still lock in green to update the alignment references. Surprisingly, the alignments looked great and quickly locked on the TEM00 mode, so I ran the setEndGreenQPDOffsets_{X,Y}ARM.py scripts in userapps/als/ for about 5 minutes to set the QPD offsets, then updated the ITM camera offsets by setting them to be the average of the camera P/Y positions. All of these values have been accepted in both the SAFE and OBSERVE SDF tables (screenshots attached). Since the green references have been updated, Jenne encourages there to be an initial alignment run after the next lockloss to ensure the references are good. After all that, I re-shuttered the ALS beams and H1 continued locking without issue all the way to observing (sadly just before S250207bg).
See comments in 82683. We will leave these arm references in, but I don't think that was really the issue last night; it was mostly that the beam was falling off the POP_A QPD so that the PRC1 ASC was not working, as Ryan noted.
TITLE: 02/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Quiet shift for the most part until the waning minutes, when there was a lockloss. No obvious reason for the lockloss, but there was a PI24 ring-up a couple of minutes earlier, and an earthquake was on its way, though it looks small (the LOCKLOSS tool is still analyzing).
Before the lockloss, contemplated going out of observing for possible SQZ attention (due to lowish H1 range), but then the range would hover back up, so ultimately held off.
Similar to last night, have had hours of snow flurries, but only a dusting is sticking thus far.
LOG:
Tony Sanchez mounted a Cisco PoE switch in the MSR for initial setup and testing. Together we got the switch configured and hooked it up to the core switch. This switch is named sw-lvea-aux1. It will move into the CER racks and will act as a second camera switch. Today we moved a test camera onto it and will be using this to continue our evaluation of h1digivideo4.
TITLE: 02/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: The new settings I found yesterday for ITMY mode5/6 are still damping well today so I've added them to lscparams, VIOLIN_DAMPING needs to be reloaded. The range and SQZer look to be slightly degrading over the past 2 hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:16 | SAFETY | LASER SAFE (•_•) | LVEA | SAFE! | LVEA SAFE!!! | 19:08 |
16:43 | FAC | Eric | FCES | N | Temperature investigation | 17:45 |
17:05 | FAC | Kim | My | N | Tech clean | 17:41 |
17:41 | FAC | Kim | Mx | N | Tech clean | 18:38 |
19:06 | FAC | Kim | H2 | N | Tech clean | 19:22 |
19:25 | FAC | Ken | EndX | N | Grab wire cart | 19:48 |
19:51 | ALS | Sheila, Matt | LVEA | LOCAL | Adjust ALS beatnote ISCT1 | 20:07 |
22:51 | SQZ | Mayank | Optics lab | N | Ongoing |
TITLE: 02/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 10mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
Got the hand-off from RyanC (who had a nice GW candidate in the afternoon!), and on the to-do list is loading the ISC_DRMI guardian.
Currently, I've noticed the range drifting down. RyanC looked at the SQZ ndscope and noticed the SQZ BLRMS have been drifting up with this range drop. I'm looking at the SQZ wiki for what to do when SQZ looks bad, so I'm ready to address it soon.
ITMy M5 violin continues to ring down. And as I type, RyanC saved the settings which have been damping this mode down over the last 2 days.
Environmentally, it's chilly out (most of our snow melted), breezes are below 20 mph, and microseism continues to drift down, starting to touch the 50th percentile.
Added in temperatures for the VEAs for FCES, EX, and EY.
New channel prefixes are
H1:CDS-FMCS_STAT_ZONEFCES
H1:CDS-FMCS_STAT_ZONEEXA
H1:CDS-FMCS_STAT_ZONEEXB
H1:CDS-FMCS_STAT_ZONEEXC
H1:CDS-FMCS_STAT_ZONEEXD
H1:CDS-FMCS_STAT_ZONEEYA
H1:CDS-FMCS_STAT_ZONEEYB
H1:CDS-FMCS_STAT_ZONEEYC
H1:CDS-FMCS_STAT_ZONEEYD
Matt Todd, Jennie Wright, Sheila Dwyer
Today we lost lock right before the commissioning window, so we made another effort at moving the spot on PR2 while out of lock, correcting some mistakes made previously. Here's an outline of steps to take:
When relocking:
Today, we did not pico on these QPDs, but we need to. We will plan to do that Monday or Tuesday (next time we relock), and then we will need to update the offsets.
Today, I also forgot to revert the change to ISC_DRMI before we went to observing. So, I've now edited it to turn back on the PRC1 + PRC2 loops, but someone will need to load ISC_DRMI at the next opportunity.
We need to add one more step to this procedure: pico on the POP QPDs 82683
For more context, here's a brief history of where our spot has been:
| Period | PR3 yaw slider (urad) | PR2 Y2L coefficient | Spot position on PR2 [mm] (on +Y side of optic) |
|---|---|---|---|
| July 2018 until July 2024, except for a few days | 150 | -7.4 | 14.9 |
| July 2024 - Feb 6 2025 | 100 | -6.25 | 12.588 |
| May 21st 2024, and Feb 6th 2025 | -74 | -3 | 6 |
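As a quick consistency check on the table above, the spot position and the Y2L coefficient scale together at roughly -2 mm per unit of Y2L in all three configurations; the scale factor below is simply read off the table, not an independent calibration:

```python
# Quick consistency check of the PR2 spot-position table above.
# The ~-2.01 mm-per-unit ratio is inferred from the table itself.
rows = {
    "2018-2024": {"y2l": -7.4,  "spot_mm": 14.9},
    "2024-2025": {"y2l": -6.25, "spot_mm": 12.588},
    "Feb 2025":  {"y2l": -3.0,  "spot_mm": 6.0},
}

for name, r in rows.items():
    scale = r["spot_mm"] / r["y2l"]   # mm of spot offset per unit of Y2L gain
    print(f"{name}: spot/Y2L = {scale:.3f} mm per unit")
# All three give roughly -2.0, i.e. spot [mm] ~ -2 * Y2L coefficient.
```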
Today, we have some extra nonstationary noise between 20 and 50 Hz, which we hoped would be fixed by pico'ing on the POP QPDs, but it hasn't been fixed, as you can see from the range and Rayleigh statistic in the attachments.
Back in May 2024, we had an unrelated squeezer problem that caused some confusion: 78033. We were in this alignment from 5/20/24 at 19 UTC to 5/23/24 at 22:42 UTC. We did not see this large glitchy behavior at that time, and there was a stretch of time when the range was 160 Mpc, although there were also times when the range was lower.
J. Kissel, at the prodding of S. Dwyer, A. Effler, D. Sigg, B. Weaver, and P. Fritschel

Context
The calibration of the DC alignment range / position of the ITMY CP, aka CPy, has been called into question recently under the microscope of "how misaligned is the ITMX CP, and do we have the actuation range to realign it?" given that it's been identified to be a cause of excess scattered light (see e.g. LHO:82252 and LHO:82396).

What's in question / What metrics are valid to compare
Some work has been done in LHO:77557 to identify that we think CPy is misaligned "down," i.e. in positive pitch, by 0.55 [mrad] = 550 [urad]. Peter reminds folks in LHO:77587 that the DC range of the top mass actuators should be 440 [urad] and estimates that this drives ~45 [mA] through the coils, pointing to my calibration of the coil current readbacks from LHO:77545. But in that same LHO:77587, he calls out that the slider and OSEM calibrations into [urad] disagree by a factor of 1130 / 440 = 2.5x. #YUCK1. And Daniel points out that there's a factor of 2x error in my interpretation of the coil current calibration from LHO:77545. #YUCK2.

Note -- there's conversation about the optical lever readback disagreeing with these metrics as well, but the optical lever looks at the HR surface of the main chain test mass, so it's a false comparison to suggest that this is also "wrong." Yes, technically the optical lever beam hits and reflects off some portion of all surfaces of the QUAD, but by the time these spots all hit the optical lever QPD, they're sufficiently spatially separated that we have to choose one, and the install team works hard to make sure that they've directed the reflection off of the test mass HR surface onto the QPD and no other reflection. That being said, the fact that these optical lever readings of the test mass have been identified to be wrong in the past as well (see LHO:63833 for ITMX and LHO:43816 for ITMY) doesn't help the human sort which wrong metrics are the valid ones to complain about in this context. #FACEPALM

So, yes, a lot of confusing metrics around here, and all the ones we *should* be comparing disagree -- and seemingly by large factors of 2x to 4x. So let's try to sort out the #YUCKs.

Comparing the big picture of all the things that "should" be the same: #YUCK1
In our modeling and calibrating, we assume:
(1) All ITM reaction chains have the same dynamical response (in rotation, for the on-diagonal terms, that's in units of [rad]/[N.m]).
(2) All OSEM sensors on all chains have been normalized to have the same "ideal" calibration from ADC counts to [um].
(3) All mechanical arrangements of OSEMs are the same, so we can use the same lever arm to convert an individually sensed OSEM [um] into a rotation in [urad], and vice versa, so that a requested [N.m] drive in the EULER basis creates the same force [N] at each OSEM coil.
(4) All OSEM top-mass actuator chains are the same (with 18 or 20 bit DACs, QTOP coil drivers, and 10x10 magnets), so the same DAC counts produce the same force at the OSEM's location.
In order to check "are the same sensors / actuators reporting different values for (ideally) the same mechanical system," I used our library of historical data; the yaw comparison (allquads_2025-02-06_AllITMR0Chains_Comparison_R0_Y-Y_TF_zoomed.pdf) looks consistent across chains. However, for pitch, we do see a good bit of difference in allquads_2025-02-06_AllITMR0Chains_Comparison_R0_P-P_TF_zoomed.pdf.
Of course, we're used to looking at these plots over many orders of magnitude and calling what we see "good enough" once the resonances are all in the right place. If I actually call out the DC magnitude of the transfer functions in the comparisons, you do see several factors of two, and differences between our four instantiations of the same suspension:

ITM R0 P2P TF DC magnitude (Model: 0.184782):

| Meas | DC magnitude | Model/Meas | L1ITMY / Others |
|---|---|---|---|
| L1 ITMX | 0.0675491 | 2.7355 (~3x) | 1.4028 |
| L1 ITMY | 0.0939456 | 1.9501 (~2x) | 1 |
| H1 ITMX | 0.0517654 | 3.5696 (~3.5x) | 1.8305 |
| H1 ITMY | 0.0947546 | 1.9501 (~2x) | ~1 |

So, there is definitely something different about these -- ideally identical -- suspensions. I think it's an amazing testament to the install teams that both L1's and H1's ITMY have virtually identical DC magnitude (and AC transfer functions). Of course, "ideal," in terms of mechanics, is muddled by cables that are laced through the UIM / PUM / TST stages -- we've seen (from LONG ago) that specific (most) arrangements of the cables can stiffen the reaction chain, and setting the cables in such a way that they do *not* influence the pitch dynamics is hard -- see LHO:1769, LHO:2085, and LHO:2117. I attach the R0 P2P plot from LHO:2117 that shows how much influence the cabling *can* have. I had the impression that this was only an impact on the "3rd" mode of the transfer function, but when you actually look at it with this "factors of two at DC" in mind, the data clearly shows cabling impact on the DC stiffness as well, and again factors of two are possible with different cable arrangements. So, in pitch, when we request the actuators to push these suspensions at DC, we may get a different answer at the optic, i.e. the compensation plate, or CP. This may be some source of the disagreement between OSEM *sensors* and the requested drive from the OSEM coil sliders.

Resolving how much current is being driven through the coils, as reported by the FASTIMON or RMSIMON channels: #YUCK2
(A) At H1, I can confirm that all of the QUADs' top masses, both main chain and reaction chain, are using QTOP coil drivers, as designed, with no modifications -- see the e-Travelers within the "Quad Top Coil Drivers" serial numbers listed as related to H1 SUS C5 (S1301872).
(B) I was about to make the same claim of L1, but in doing the due diligence with L1 SUS C5 (S1105375), I see that the S1000369 Quad Top Driver was modified to give more drive strength on ITMY R0 F1, F2, F3 -- the pitch and yaw coils -- and there's no follow-up record suggesting it was reverted. The work permit from Stuart Aston mentioned in LLO:28375 indicates a request to increase the strength by 25%. The action is also documented by Carl Adams and Michael Laxen in LHO:28301. It would be helpful to confirm whether this mod is still in place, and if not, the e-Traveler should be updated with a record of the reverting. I'm guessing the mod is still in place, because there's mention of the serial number that was originally there being swapped in elsewhere in 2019 -- see LLO:46238.
(C) That being said, I can at least confidently state that all QUAD TOP coil drivers in play are using the same, original noise monitor circuit, D070480.
(D) Looking back at all the content on the DCC page, Daniel's right about my mis-calibration of the coil driver current monitor from LHO:77545. This darn monitor circuit will be the death of me.
The error comes in through a misunderstanding of how the single-ended output of the current monitor circuit is piped into our differential ADCs -- namely the '"single-ended voltage piped into only one leg of differential ADC" factor of two' line, because of which I added the factor of 2 [V_DF] / 1 [V_SE] to the calibration. If you look at the DB25 output J1 in the interconnect drawing, you can see that the "F" (for FASTIMON) and "S" (for slow RMSIMON) single-ended voltages are piped into the output DB25's positive pins, and the negative pins are connected to 0 V. This is a bit unusual, because typical LIGO differential ADC driver circuits copy and invert the single-ended voltage and pipe the original single-ended voltage to the positive leg and the negative copy to the negative leg, such that V_SE = V_{D+} = -V_{D-}.

Comparing these two configurations:
(i) piping a single-ended voltage into only one leg, with 0 V into the other, yields
    (V_{SE} - V_{REF}) - (0 - V_{REF}) = V_{SE}
(ii) copying and inverting the single-ended voltage yields
    (V_{SE+} - V_{REF}) - (V_{SE-} - V_{REF}) = (V_{SE} - V_{REF}) - (-V_{SE} - V_{REF}) = 2 V_{SE}

So, I'd used the (ii) configuration's calibration rather than (i), which is what's actually the case for the current monitors (and everything on that noise monitor board). The corrected RMSIMON calibration is thus
    calibration_QTOP [ct/A] = 2 * 40.00 [V/A] * (10e3 / 30e3) * 1 * (2^16 / 40 [ct/V]) = 4.3691e+04 [ct/A]
or 43.691 [ct/mA], or 0.0229 [mA/ct].

Taking the values Peter shows in the F1 RMSIMON in his ndscope session in LHO:77587:

| Slider [urad] | RMSIMON [ct] | RMSIMON [mA] |
|---|---|---|
| 440 | 4022.74 | 92.0732 |
| 0 | 113.972 | 2.6086 |
| Delta | 3908.77 | 89.4646 |

So, we're already driving a lot of coil current into the BOSEMs, if this calibration doesn't have any more flaws in it. I'd also like to super confirm with LLO that they've still got 25% more range on their ITMY QUAD top coil driver, because if they're consistently using any substantial amount of the supposed range, then they've been holding these BOSEMs at larger than 100 [mA] for a long time, which goes against Dennis' old modeled requirement (see LLO:13456). I'll follow up next Tuesday with some cold, hard measurements to back up the model of the coil driver and its current monitor.
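For convenience, here's the same arithmetic as a short script, with the values copied verbatim from the text above (nothing new is assumed beyond those numbers):

```python
# Corrected QTOP RMSIMON calibration from the entry above, plus the
# conversion of the RMSIMON counts quoted from LHO:77587 into mA.
# This just re-does the arithmetic in the text; no new numbers.

# calibration_QTOP [ct/A] = 2 * 40.00 [V/A] * (10e3/30e3) * 1 * (2^16 / 40 [ct/V])
cal_ct_per_A = 2 * 40.00 * (10e3 / 30e3) * 1 * (2**16 / 40)
cal_ct_per_mA = cal_ct_per_A / 1e3
print(f"calibration_QTOP = {cal_ct_per_A:.4e} ct/A = {cal_ct_per_mA:.3f} ct/mA")
# -> 4.3691e+04 ct/A, 43.691 ct/mA, i.e. 0.0229 mA/ct

# RMSIMON counts at the two slider settings quoted from LHO:77587
for slider_urad, rmsimon_ct in [(440, 4022.74), (0, 113.972)]:
    print(f"slider {slider_urad:3d} urad: {rmsimon_ct:8.2f} ct "
          f"= {rmsimon_ct / cal_ct_per_mA:7.4f} mA")
# Delta: 3908.77 ct ~= 89.46 mA
```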
Closes FAMIS#26029, last checked 82536
Compared to last week:
All corner station (except HAM8) ST1 H2 is elevated at 5.8 Hz
All corner station (except HAM8) ST1 elevated in all sensors at 3.5 Hz
HAM 5/6 H3 elevated at 8 Hz
ITMX ST2 H3 elevated between 5.5 and 8.5 Hz
ITMY ST2 H1/H2/H3 elevated between 5.5 and 8 Hz
BS ST2 H1/H2/H3 elevated between 5.5 and 8 Hz
Thu Feb 06 10:05:35 2025 INFO: Fill completed in 5min 32secs
TCmins [-62C, -44C] OAT (-2C, 28F) DeltaTempTime 10:05:47
Overnight, the electric reheat in the ducting of the VEA continued heating the space, even though the automation system was commanding the heat off. The space rose over set point by several degrees, which made relocking the IFO infeasible. I checked the control circuit of the heater and didn't find any obvious problems. Once I re-energized the control circuit, the heat remained off, making it difficult to find the cause of the problem. I watched the heater cycle normally per automation command, so for the time being it is working correctly. I will monitor it throughout the day.
Planned Saturday Calibration sweep done using the usual wiki.
Simulines start
PST: 2025-02-06 08:36:47.419400 PST
UTC: 2025-02-06 16:36:47.419400 UTC
GPS: 1422895025.419400
Simulines stop, we lost lock in the middle of the measurement.
PST: 2025-02-06 08:58:38.495605 PST
UTC: 2025-02-06 16:58:38.495605 UTC
GPS: 1422896336.495605
TITLE: 02/06 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY:
Sheila, Camilla, follow on from 82640.
We made some more changes to SQZ_MANAGER to hopefully simplify it:
Saved and added to svn but not loaded.
Once reloaded, since states have been changed, any open SQZ_MANAGER MEDMs should be closed and reopened.
I did load this today, there don't seem to have been any issues in this lock.
Sheila, Dave, Ryan Crouch, Tony
This afternoon, after the maintenance window, when we first started using guardian, once the ASC safe.snap was loaded by SDF revert we started sending large signals to the quads. We found that this was due to the camera servos having their gains set to large numbers. This was set this way in the safe.snap file.
After I set these to zero in safe.snap (which is really down.snap), Ryan again went through the guardian DOWN state, and this time we started to saturate the quads because of the arm ASC loops (which we probably didn't notice the first time because we ran DOWN as soon as we saw there was a problem, and DOWN would turn these off but not the camera servos).
Dave looked in the svn for this file, which he had committed this morning with this set of diffs from this morning's svn commit. Looking through these, it seems like the safe.snap may have somehow been overwritten with the observe.snap file.
Dave reverted that to the file from 7 days ago, which has Elenna's changes to the POP QPD offsets. Then I reverted all the diffs, so that we set all settings back to 7 days ago except those that are not monitored.
After this, Mayank and I were using various initial alignment states to make some clipping checks, which Mayank will alog. We noticed that the INP1Y loop (to IM4) was oscillating, so we changed its gain from 10 to 40 on line 917 of ALIGN_IFO.py. We also saw that there is an oscillation in the PRC ASC if we sit in PRX, but we haven't fixed that. These should not be due to whatever our safe.snap problem is, we hope.
Edit to add: We looked at the last lockloss, when the guardian went through SVN revert at 7 am yesterday, Feb 3rd. It looks like the camera gains were 0 in the safe.snap at that time, but they were 100 by the time we did the SDF revert at 20:51 UTC (1 pm Pacific) today.