Summary: H1 was out of observing from 16:31 to 21:18 UTC due to a lockloss and commissioning.
Soon after dropping observing for regularly scheduled commissioning time this morning, H1 lost lock at 16:34 UTC due to an M4.2 EQ from Idaho. Commissioners then decided to still use the commissioning time for PR2 spot moves with H1 unlocked (see other logs, especially 82481). Once most activities had finished, I ran H1 through an initial alignment and started locking, which went smoothly and automatically.
It was taking suspiciously long to converge the full IFO ASC at 2W, and Ryan and I saw that it was the PRM ADS that was taking forever to converge. This usually occurs because the POP A offsets are not set correctly, so the PRM doesn't go to the right place in DRMI ASC.
I saw that the POP A offsets were OFF, probably left that way after this alog: 81849.
I set the offsets where we want them and turned them on. These settings are SDFed. This will improve locking time by a small amount.
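For context, turning offsets like these on from a Guardian/Python session is just an ezca write plus a filter-module switch. Below is a minimal sketch; the filter-module names and offset values are illustrative assumptions, not the actual POP A channels or the values that were SDFed.
from ezca import Ezca

ezca = Ezca()  # picks up the IFO from the environment

# Illustrative values only; the real offsets are whatever got SDFed.
for dof, value in [('PIT', 0.0), ('YAW', 0.0)]:
    fm = 'ASC-POP_A_%s' % dof          # assumed filter-module name
    ezca[fm + '_OFFSET'] = value       # write the offset value
    ezca.switch(fm, 'OFFSET', 'ON')    # switch the offset on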
Mayank, Sheila, Jennie W, Ryan S, Elenna, Jenne, Camilla, Robert.
Follow-on from 82401; we mostly copied Jenne's procedure from 77968.
As soon as we started setting up, the IFO lost lock due to an earthquake, so we decided to do the spot move with the IFO unlocked.
Ryan locked the green arms in initial alignment and offloaded. Took ISC_LOCK to PR2_SPOT_MOVE (when you move PR3, it calculates and moves PR2, PRM, and IM4).
Green beatnotes were low but improved when we started moving. Steps taken: moved PR3 yaw with sliders until the ALS_C_COMM_A beatnote decreased to ~-14, then used pico A_8_X to bring it back. Repeated until PR3 M1 pitch was 1-2 urad off, then Mayank brought pitch back with the PR3 sliders. Repeated moving the PR3 yaw sliders and picos.
Started with PR3 (Pitch, Yaw) at (-122, 96) and went to (-125.9, 39.5); we were aiming for yaw at -34. But at (-124, 68) we lost the beam on AS_AIR, whoops.
Once we realized we had fallen off AS_AIR, we took PR3 back to the last place we had light on it (68 urad on the yaw slider), ignoring the green arms, with the plan of moving back to 38 urad by moving SR2 to keep light on AS_AIR. We moved SR2 in single bounce (ITMY misaligned) to increase light on AS_AIR. We couldn't go any further in PR3 yaw while keeping light on AS_AIR, so we decided to revert the green picos to work with 68 urad on PR3. We took PR3 back to (-125.9, 39.5) and reversed our steps of sliders and picos.
After we got here, Ryan offloaded the green arms and we tried to go to initial alignment. There were no flashes in initial alignment in either the X arm or the Y arm (we touched the BS for the Y arm). We would usually be able to improve the alignment while watching AS_AIR, but the beam wasn't clearly on AS_AIR. We improved the beam on AS_AIR by moving SR2/3.
Ran SR2 align in the ALIGN_IFO guardian. This seemed to make some clipping worse; are we clipping in SR2? Still working on SR2 alignment. Maybe we should update the ISC_LOCK PR2_SPOT_MOVE state to have SR2/3 follow the alignment so that we don't lose AS_C and AS_AIR.
Regarding "Improved beam on AS_AIR by moving SR2/3."
We found that by moving SR2/3 by hand, or by engaging SR2 align and moving SR3 (SR2 follows), we can make improvements in the AS_C NSUM and the AS_C yaw position, but that clearly does not fix all the problems with input align and the terrible beam shape we saw on the AS AIR camera. This leads us to believe that part of the problem is upstream, as in even if we fix everything at the output, we may have caused some other clipping problem in the PRC.
Sheila and I tried adjusting the pointing of PR2 to see if we could improve the input align issues, but that seemed to have very little effect.
I think a possible reason our PR2 spot moves have gone poorly is that the PR2 spot move function in the guardian does not have the correct constants to ensure the spot moves on PR2 and not on the other optics.
Looking back in time, the PR2 spot move function was first written by Stefan in 2016 (as far as I can find; see 28420, 28442). Looking at his code and the current guardian code, which was originally copied from his, the adjustment values are exactly the same:
pitPR3toPR2=-9.2;
yawPR3toPR2=+9.2;
pitPR3toIM4=56;
yawPR3toIM4=11;
pitPR3toPRM=1.5;
yawPR3toPRM=2.2;
These differ from the values you would calculate from the ray transfer matrix, which Stefan notes in a comment in 28442. My guess is that the difference in those values is related to whatever calibration we add into the optics sliders.
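To illustrate how constants like these get used (this is my sketch of the idea, not the actual PR2_SPOT_MOVE code; the slider channel pattern and the direction of the scaling are assumptions):
from ezca import Ezca  # Guardian's EPICS access layer

# Ratios copied from the values above; channel pattern and scaling
# direction are assumptions for illustration only.
RATIOS = {'PR2': {'P': -9.2, 'Y': 9.2},
          'IM4': {'P': 56.0, 'Y': 11.0},
          'PRM': {'P': 1.5,  'Y': 2.2}}

def move_pr3_with_followers(ezca, dof, delta_pr3_urad):
    """Step PR3 by delta_pr3_urad and step PR2/IM4/PRM by ratio * that move."""
    ezca['SUS-PR3_M1_OPTICALIGN_%s_OFFSET' % dof] += delta_pr3_urad
    for optic, ratio in RATIOS.items():
        ezca['SUS-%s_M1_OPTICALIGN_%s_OFFSET' % (optic, dof)] += ratio[dof] * delta_pr3_urad

# e.g. move_pr3_with_followers(Ezca(), 'Y', -1.0)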
Also, Jeff updated the IM slider calibrations to microradians last April, see: 77211. I can't find any alog (so far) that reports an update to the PR2 spot move values in the guardian to account for this recalibration.
Sheila pointed me to the May 20, 2024 spot move where she says she updated the adjustment values from the guardian: 77949. However, the values she used in that alog are not reported, and it doesn't look like the guardian numbers actually changed. I looked at the saved ndscope from that time and eyeballed the values to be approximately:
yawPR3toIM4 = 0.875
yawPR3toPR2 = 10
yawPR3toPRM = 2
You can check the attached screenshot to see how I calculated these values. These numbers are clearly different compared to the numbers above, so I don't really know what happened here. But, it seems that if we want to do a PR2 spot move again, we should check to make sure we are adjusting the optics appropriately.
You can also find the template for this scope in "/ligo/home/sheila.dwyer/ndscope/PR2_spot_move_jennie.yaml" if you want to check yourself.
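For anyone repeating this check, the arithmetic is just the ratio of the companion optic's slider move to the PR3 move over the same stretch of the trend. A trivial sketch (the slider deltas below are placeholders, not the real numbers from that screenshot):
# Placeholder slider moves read off an ndscope trend over the same time span.
delta_pr3_yaw = 10.0    # urad of PR3 yaw slider motion (made-up number)
delta_pr2_yaw = 100.0   # urad of PR2 yaw slider motion over the same span (made-up)

yawPR3toPR2 = delta_pr2_yaw / delta_pr3_yaw
print(yawPR3toPR2)  # ~10, to be compared with the values quoted above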
J. Kissel (with help from E. von Reis, D. Barker)
ECR E2500017, IIET Ticket LHO:33143, WP 12302
I'm extending the infrastructure I installed in July 2024 (LHO:78956) to include replicas of the OMC DCPD GW channels' infrastructure further down the chain, namely sending the output of the 524 kHz test infrastructure over IPC to the 16 kHz OMC model so the channels can be stored in the frames. This is a result of the last two weeks' worth of use of the existing prototype infrastructure, and the finding that there may be some discrepancies between the offline down-sampled versions of the 524 kHz outputs and the 16 kHz online down-sampled versions (see LHO:82420).
The simulink models touched for this update, under /opt/rtcds/userapps/release/, are
cds/h1/models/h1iopomc0.mdl
omc/h1/models/h1omc.mdl
omc/common/models/omc.mdl
In the attached collection of screenshots, I show "before" vs. "after" for these models and the parts affected:
(1) Top level of the h1iopomc0 model, before vs. after
(2) Inside the OMC top_names block of the h1iopomc0 model, before vs. after
(3) Inside the OMC_DCPD block of the h1iopomc0.mdl model, before vs. after
(4) Top level of the h1omc.mdl model, before vs. after
(5) Inside the OMCNEW block of the omc.mdl library, which is used and renamed as just OMC in the h1omc.mdl model, before vs. after
(6) Inside the OMC_DCPD block of the OMCNEW block, before vs. after
I note that as part of (6) I've re-organized the primary DCPD A and DCPD B SUM and NULL outputs so that the layout is much more legible than the previous version; again, see the before vs. after.
More details to follow in the comments.
Because these models run on the h1omc0 front end, they all need to be compiled on the h1build machine with a special environment. To create the special environment, after logging in as controls to the h1build machine, set the following bash environment variables:
$ export RCG_SRC=/opt/rtcds/rtscore/advligorts-5.3.1_ramp
$ export RTS_VERSION=5.3.0~~dual_tone_frequency
Then you can do the standard
$ rtcds build h1iopomc0
etc. for each model.
Pending DAQ Changes for new models:
h1iopomc0:
h1omc:
Mon Jan 27 10:03:54 2025 INFO: Fill completed in 3min 52secs
TCmins [-78C,-24C] OAT (-2C, 29F) DeltaTempTime 10:04:01. TC-B doesn't appear to be seeing the full LN2 flow.
The ITMY camera view was not working again. I cycled the port on the switch, then restarted the server via monit, but that did not work. I had to send a SIGKILL signal to the process to get it stopped. It is working again now.
TITLE: 01/27 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 0mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY: One lockloss overnight, but H1 relocked itself and has now been locked for 5 hours.
TITLE: 01/27 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We've been locked for 10:22. On NUC27, the glitch plot has timed out on the control room screenshots page, but it is live on the FOM in the control room (tagging CDS).
LOG: No log
The lockloss tool tags anthropogenic; there was a ground motion spike in the 0.3-1.0 Hz band that matches up with the lockloss. We lose lock right around the peak of the CS_Z motion.
Tagging PEM
TITLE: 01/27 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: One lockloss this shift, but otherwise a quiet day. H1 has been locked and observing for almost 5 hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:16 | SAFETY | LASER HAZ | LVEA | YES | LVEA is Laser HAZARD | Ongoing |
19:09 | PEM | Robert | LVEA | - | Viewport pictures | 19:34 |
TITLE: 01/27 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
Dropped Observing from 00:46 to 00:52 UTC as the SQZer lost lock.
Sun Jan 26 10:05:46 2025 INFO: Fill completed in 5min 43secs
TCmins [-87C, -51C] OAT (-1C, 30F) DeltaTempTime 10:05:55. Note TC-B didn't reach its minimum plateau.
Lockloss @ 17:57 UTC - link to lockloss tool
Usual signs of an ETMX glitch on this lockloss, otherwise cause not obvious. Ends lock stretch at just over 4 hours.
H1 back to observing at 19:41 UTC. PRM and SRM needed alignment help during DRMI locking, but otherwise the relocking process was automatic.
I also loaded VIOLIN_DAMPING to take in the new ETMY mode 20 damping setting (gain of 0).
Today we re-engaged the 16k digital AA filter in the A and B DCPD paths, then re-updated the calibration on the front end and in the gstlal-calibration (GDS) pipeline before returning to observing mode.
### IFO Changes ###
* We engaged FM10 in H1OMC-DCPD_A0 and H1OMC-DCPD_B0 (omc_dcpd_filterbanks.png). We used the command in LHO:82440 to engage the filters and step the OMC Lock demod phase (H1:OMC-LSC_PHASEROT) from 56 to -21 degrees (a 77 degree change). The 77 degree shift is necessary to compensate for the fact that the additional 16k AA filter in the DCPD path introduces a 77 degree phase shift at 4190 Hz, the frequency of the dither line that the OMC Lock servo is locked to (omc_lock_servo.png). All of these changes (the FM10 toggles and the new OMC demod phase value) have been saved in the OBSERVE and SAFE SDFs.
* It was noted in the control room that the range was quite low (153 Mpc), and we remembered that we might want to tune the squeezer again as Camilla had done yesterday (LHO:82421). We have not done this.
* Preliminary analysis of data taken with this newly installed 16k AA filter engaged suggests that the filter is helping (LHO:82420).
### Calibration Changes ###
We pushed a new calibration to the front end and the GDS pipeline based on the measurements in 20250123T211118Z. In brief, here are a few things we learned/did:
- The inverse optical gain (1/Hc) filter changes are not being exported to the front end at all. This is a bug.
- We included the following delays in the actuation path (a quick phase cross-check of these delays is sketched below):
uim_delay = 23.03e-6 [s]
pum_delay = 0 [s]
tst_delay = 20.21e-6 [s]
These values are stored in the pydarm_H1.ini file.
- The pyDARM parameter set also contains a value of 198.664 for tst_drive_align_gain, which is in line with CALCS (H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN) and the ETMX path in-loop (H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN).
- There is still a 5% error at 30 Hz that is not fully understood yet. Broadband pcal2darm comparison plots will be posted in a comment.
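As a quick cross-check of some of the numbers quoted above: the phase accumulated by a pure time delay tau at frequency f is 360*f*tau degrees. A short sketch using only figures from this entry (the choice of 30 Hz is just to show how small these delays are at the frequency where the residual error sits):
# Phase (degrees) of a pure time delay tau (seconds) at frequency f (Hz).
def delay_phase_deg(tau, f):
    return 360.0 * f * tau

uim_delay = 23.03e-6   # s, from the pydarm_H1.ini values above
tst_delay = 20.21e-6   # s

print(delay_phase_deg(uim_delay, 30.0))   # ~0.25 deg at 30 Hz
print(delay_phase_deg(tst_delay, 30.0))   # ~0.22 deg at 30 Hz

# The new 16k AA filter's 77 deg phase at the 4190 Hz dither line is what
# sets the OMC demod phase change: 56 - 77 = -21 deg.
print(56 - 77)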
I'm attaching a PCALY2DARM comparison to show where the calibration is now compared against what it was before the cal-related work started. At present (dark blue) we have a 5% error in magnitude near 30 Hz and roughly a 2 degree maximum error in phase. The pink trace shows a broadband comparison of PCALY to GDS-CALIB_STRAIN on Saturday, 1/25. This is roughly 24 hours after the cal work was done, and I plotted it to show that the calibration seems to be holding steady. The bright green trace is the same measurement taken on 1/18, before the recent work to integrate the additional 16k AA filter in the DCPD path began. All in all, we've now updated the calibration to compensate for the new 16k AA filter and have left the calibration better than it was when we found it. More discussion related to the cause of the large error near 30 Hz is to come.
I've updated the PEM_MAG_INJ and SUS_CHARGE Guardian nodes so that they will not run their associated injections (and therefore will not drop H1 from observing) if there is an active stand-down alert from SNEWS, Fermi, Swift, or KamLAND (i.e., supernova, GRB, and neutrino alerts). As a reminder, these injections normally start at 7:20am local time on Tuesdays if H1 is locked.
Due to a typo in the SUS_CHARGE Guardian (my fault), the in-lock charge measurements have not run since I implemented this change. I've fixed this and reloaded the node, so in-lock charge measurements should be back to running as usual at 7:45am Tuesday mornings if H1 is locked.
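For reference, the gate these nodes now apply looks roughly like the sketch below. This is not the actual Guardian code; the alert-flag channel name is a placeholder assumption for whatever flags an active stand-down alert.
# Sketch of a Guardian-style check before running an injection.
ALERT_FLAG = 'CAL-INJ_EXTTRIG_ALERT_ACTIVE'   # hypothetical channel name

def injections_allowed(ezca):
    """True only if no SNEWS/Fermi/Swift/KamLAND stand-down alert is active."""
    return ezca[ALERT_FLAG] == 0

# In the node's injection state, something like:
#   if not injections_allowed(ezca):
#       log('Active stand-down alert; skipping injection, staying in observing.')
#       return True   # skip the injection this week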