TITLE: 03/26 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Covered the morning for TJ while he did TCS work; below are the activities from when I was in the chair.
LOG:
Tue Mar 26 10:09:35 2024 INFO: Fill completed in 9min 31secs
TCs started lower, so I restored the trip temps to -120C (as shown on plot)
WP11786
Jeff, Joe, Jonathan, Dave
A new h1calcs model was installed, this adds two slow channels
++: slow channel H1:CAL-CALIB_REPORT_HASH_INT added to the DAQ
++: slow channel H1:CAL-CALIB_REPORT_ID_INT added to the DAQ
A DAQ restart was required.
As per ECR E1700387, these two channels were added to the h1daqgds[01] GDS broadcast streams. The h1daqgds[01] systems were restarted again as part of this process. This was around 8:50 am local time.
WP11743 SUS DACKILL removal
Jeff, Oli, TJ, Dave:
This morning we installed new HAM SUS and ISI models:
h1susmc[1,2,3], h1suspr[m,2,3], h1sussr[m,2,3], h1susfc[1,2], h1isiham[2,3,4,5]
All required a DAQ restart.
For h1sush2a and h1sush34, because we restarted all the user models, I elected to restart the IOP models as well to run a DAC AUTOCAL on these front ends. For all the other front ends, only the modified user models were restarted.
J. Kissel, T. Shaffer ECR E1700387 IIET 9392 WP 11743
In prep for today's watchdog upgrades (LHO:76696), TJ and I have:
- Ensured all HAM triple SUS have their alignment offsets accepted / stored in their safe.snap (they're typically unmonitored, so this takes special action)
- Brought the IMC guardian to OFFLINE, and brought the guardian to "AUTO" such that it's no longer managed by ISC_LOCK
- Brought the MC2, PRM, PR2, SR2, SRM guardians to "AUTO" such that they're no longer managed
- Brought MC1, MC2, MC3, PRM, PR2, PR3, SRM, SR2, SR3, FC1, and FC2 to SAFE
- Brought SEI MANAGER guardians for HAM2, HAM3, HAM4, HAM5, HAM7, and HAM8 to ISI_DAMPED_HEPI_OFFLINE
We're ready for restarts!
TITLE: 03/26 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: Maintenance
Wind: 2mph Gusts, 2mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY: We lost lock at 14:41 UTC; the cause is still to be investigated. Maintenance activities have started.
TITLE: 03/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: None
SHIFT SUMMARY:
Since the last mid-shift report there was another lockloss at 6:08 UTC.
Once again relocking went just fine without an Initial_Alignment, taking 56 minutes without intervention by the operator.
There was a PI 31 ring-up at 6:51 UTC, but while I was trying to find the right buttons to press it was suppressed.
LOG:
Camilla's measurements will start running again at 2am and continue until 6am.
TITLE: 03/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY:
Lock lasted 8 hours and 51 minutes.
Lockloss at 3:34 UTC; still investigating the cause.
Camilla had some measurements running while we were in Observing (see aLOG 76695) and was concerned that perhaps her measurements had made the arm powers drop significantly. So I pulled up some ndscopes of H1:GRD-ISC_LOCK_STATE_N and the circulating arm powers H1:ASC-X_PWR_CIRC_OUT16 & H1:ASC-Y_PWR_CIRC_OUT16.
I did see the arm power dip before H1:GRD-ISC_LOCK_STATE_N changed to a down state.
But!!! I also did the same for the lockloss that happened this morning, without her measurement running, and saw the same behavior of the arm power dipping before the down state was declared by ISC_LOCK. I then decided (arbitrarily) that it was reasonable to think that the time between the ISC_LOCK state change and an observed loss of 10% of the power in the arms would be a good metric to determine whether the lockloss was "caused" by this measurement.
I have effectively convinced myself that Camilla's A2L measurements have not caused this lockloss, but I'm open to evidence that I have overlooked.
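That time-between metric can be computed offline once the two channels are in hand (e.g. fetched with gwpy or exported from ndscope). This is a hypothetical sketch of the check, not the analysis actually run; the DOWN state value and the 10% threshold are adjustable assumptions.

```python
import numpy as np

def lockloss_lead_time(t, guard_state, arm_power, down_state=2, drop_frac=0.10):
    """Seconds between the arm power first dropping by `drop_frac` (relative
    to its pre-lockloss median) and ISC_LOCK reaching `down_state`.

    Positive result: the power dip preceded the guardian DOWN transition.
    Inputs are equal-length 1-D arrays standing in for data from
    H1:GRD-ISC_LOCK_STATE_N and H1:ASC-X_PWR_CIRC_OUT16 (hypothetical)."""
    t = np.asarray(t, dtype=float)
    guard_state = np.asarray(guard_state)
    arm_power = np.asarray(arm_power, dtype=float)

    down_idx = int(np.argmax(guard_state == down_state))  # first DOWN sample
    if guard_state[down_idx] != down_state:
        raise ValueError("guardian never reached the DOWN state in this span")

    nominal = np.median(arm_power[: max(down_idx, 1)])  # pre-lockloss level
    dipped = arm_power < (1.0 - drop_frac) * nominal
    dip_idx = int(np.argmax(dipped))
    if not dipped[dip_idx]:
        raise ValueError("arm power never dropped by the requested fraction")

    return t[down_idx] - t[dip_idx]
```

A similar lead time with and without the measurement running would support the conclusion that the A2L steps were not the cause.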
Relocking:
I have not taken ISC_LOCK through Initial_Alignment before relocking. H1 is already back to NOMINAL_LOW_NOISE: 44 minutes from NLN to NLN without intervention from the operator, and 49 min 9 sec from OBSERVING to OBSERVING.
Sheila, Jennie W
At around 21:49 UTC we used the step_45MHz.py in userapps/isc/h1/scripts to step the modulation depth in steps of 1dB down from 21dBm to 18dBm. This script adjusts the loop gains to compensate for a drop in power on the RF45 PDs.
It's also important to open the POP beam diverter to monitor what is happening to the RF45 PD signals.
This is to check that the noise does not decrease with decreasing modulation depth which would imply that the modulator was imposing noise on the carrier through the ISS loop.
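As a quick sanity check on what these dB steps mean for the RF45 PDs, the linear power ratios work out as below. This is a minimal sketch of the arithmetic only; the actual loop-gain compensation factors applied by step_45MHz.py are not reproduced here.

```python
def db_to_power_ratio(db):
    """Linear power ratio for a dB change: P2/P1 = 10**(dB/10)."""
    return 10.0 ** (db / 10.0)

# Stepping the 45 MHz modulation drive from 21 dBm down to 18 dBm in 1 dB steps:
steps_dbm = [21, 20, 19, 18]
ratios = [db_to_power_ratio(d - 21) for d in steps_dbm]
# ratios ~ [1.0, 0.79, 0.63, 0.50]: the full -3 dB step roughly halves the RF
# power on the PDs, which is why the script rescales the loop gains.
```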
At around 22:18 UTC, with the modulation depth down by 3 dB, we saw a 4% increase in the DARM broadband noise (measured at 2 kHz with cursors), which is the green trace in the second image. The level of KAPPA C (shown in the first image, bottom plot) only decreased by around 1% over this time, so we suspect some cause other than a change in optical gain is responsible.
When we stepped the modulation depth up by 3dB (third image) we saw no change in the noise from the nominal state (purple trace on second image). Kappa C (bottom plot) looks almost the same as the nominal also.
In the fourth image it can be seen that f_C does not change much either.
After we did this test Daniel commented that we should check what the PCAL lines did as we stepped down and up the 45 MHz modulation depth.
To do this I used the spectra we took and measured the height of the 410.3 Hz line (PCAL Y) in DARM in m.
Our nominal level was 1.35370e-19 m
The line height decreased by 1.1% when we decreased the modulation depth by 3dB. I think this is consistent with our optical gain decreasing by 1%. I am therefore not sure why lowering the mod depth causes a 4% increase in DARM.
The line height increased by 0.04% when we increased the modulation depth by 3 dB. I think this is consistent with us not seeing much decrease in the DARM noise when we increase the modulation depth.
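The percent changes quoted above can be checked with simple arithmetic against the nominal 410.3 Hz line height. The helper below is mine, for illustration, and not part of any analysis script.

```python
def percent_change(new, ref):
    """Signed percent change of `new` relative to `ref`."""
    return 100.0 * (new - ref) / ref

nominal = 1.35370e-19  # m, nominal 410.3 Hz PCAL Y line height in DARM

# The quoted -1.1% (mod depth down 3 dB) and +0.04% (up 3 dB) correspond to:
low = nominal * (1 - 0.011)    # line height with mod depth lowered
high = nominal * (1 + 0.0004)  # line height with mod depth raised
```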
DriptaB, LouisD, TonyS, FranciscoL
We noticed a 0.4% change from the value at the end of O4a for the X/Y ratio. We found alog 76562 where the Pcal EY EPICS values were changed. We have now changed them back to O4a values. See screenshot of SDF screen to find values. We will follow up with an updated value of X/Y to see if anything went wrong.
For future reference: steps to update SDF
Sheila, Naoki
We plan to scan the ZM alignment and need the fast SQZ BLRMS for that. We copied the BP filter for BLRMS 5 at 1.7 kHz to BLRMS 6 and replaced the 0.1 Hz LP with a 1 Hz LP, so BLRMS 6 is now the same as BLRMS 5 but with a 1 Hz LP instead of a 0.1 Hz LP. The SDF is attached.
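For illustration, the BLRMS signal chain described above (bandpass near 1.7 kHz, square, lowpass, square root) can be sketched with scipy. The filter orders and exact corner frequencies here are my assumptions, not the installed foton designs.

```python
import numpy as np
from scipy import signal

fs = 16384.0  # Hz, assumed sample rate

def blrms(x, fs, band=(1.6e3, 1.8e3), lp_corner=1.0):
    """Band-limited RMS: bandpass, square, lowpass, square root.
    Orders and corners are illustrative, not the installed foton designs."""
    bp = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
    lp = signal.butter(2, lp_corner, btype="lowpass", fs=fs, output="sos")
    banded = signal.sosfilt(bp, np.asarray(x, dtype=float))
    smoothed = signal.sosfilt(lp, banded ** 2)
    return np.sqrt(np.clip(smoothed, 0.0, None))
```

Moving the LP corner from 0.1 Hz to 1 Hz lets the BLRMS output settle roughly ten times faster, which is the point of the copy for the ZM alignment scan.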
TITLE: 03/25 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.25 μm/s
QUICK SUMMARY:
H1 has been locked for 4 hours and 22 minutes. We are currently in commissioning while Robert does injections.
The current plan is to stay in commissioning until 6 PM, then go into Observing.
TITLE: 03/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Locked most of the day; several commissioning activities ongoing with observing periods in between.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:17 | FAC | Karen | Opt/Vac Lab | - | Technical cleaning | 15:51 |
| 16:06 | FAC | Chris | FCES | - | Safety checks | 16:42 |
| 16:10 | PEM | Robert | LVEA | - | Set up for PEM injections | 16:31 |
| 16:17 | FAC | Karen | MY | - | Technical cleaning | 17:30 |
| 16:24 | FAC | Kim | MX | - | Technical cleaning | 17:46 |
| 21:07 | TCS | TJ | Opt Lab | - | Looking for parts | 21:56 |
| 21:26 | PEM | Robert | LVEA | - | Set up for PEM injections | 22:22 |
J. Kissel ECR E1700387 IIET 9392 WP 11743
Tomorrow, we continue on the adventure towards upgrading the suspension watchdog systems that Oli, Dave and I have been chugging along on for the previous two weeks (see, e.g., LHO aLOGs 76305, 76269 and 76545). This week, we're tackling a big chunk -- all of the HAM triple suspensions, both large and small, i.e. the 7 HSTS and 2 HLTS. Note :: these are the final suspensions that are hooked up to their respective ISIs, so this should be the last major upgrade. As such I've:
In the SUS model library parts,
- Converted the BLRMS trigger generating systems to include a trust-worthy RMS calculation, coupled with a down-stream low-pass filter
- Removed all evidence of the USER DACKILL watchdog aggregator system
- While I was there, removed remaining evidence of the long-defunct / deprecated "online detector characterization" ODC system, long since replaced by guardian state information
On the top level SUS models,
- Removed all evidence of the USER DACKILL watchdog aggregator system
- Removed the IPC sender of the USER DACKILL to the respective ISI
- Verified that the removal of library block output ports didn't botch any of the top level connections, reconnecting and re-organizing as needed
On the top level of the ISI models,
- Removed the IPC receiver of the USER DACKILL watchdog from all respective SUS, which impacts all of the following models:

| Top Level ISI | Top Level Optic | Governing Library Part |
|---|---|---|
| h1isiham2.mdl | h1susmc1.mdl | HSTS_MASTER.mdl |
| | h1susmc3.mdl | HSTS_MASTER.mdl |
| | h1suspr3.mdl | HLTS_MASTER.mdl |
| | h1susprm.mdl | RC_MASTER.mdl |
| h1isiham3.mdl | h1susmc2.mdl | MC_MASTER.mdl |
| | h1suspr2.mdl | RC_MASTER.mdl |
| h1isiham4.mdl | h1sussr2.mdl | RC_MASTER.mdl |
| h1isiham5.mdl | h1sussr3.mdl | HLTS_MASTER.mdl |
| | h1sussrm.mdl | RC_MASTER.mdl |
| (no change to isiham7) | h1susfc1.mdl | HSTS_MASTER.mdl |
| (no change to isiham8) | h1susfc2.mdl | HSTS_MASTER.mdl |

In addition, some of the lower level SUS library parts were also impacted: SIXOSEM_T_STAGE_MASTER.mdl, FOUROSEM_STAGE_MASTER.mdl, FOUROSEM_STAGE_MASTER_OPLEV.mdl. All of the above model changes have been committed to the userapps SVN repo in revs 27310, 27311, 27312, and 27313.
This set of screenshots shows the typical edits to the top level HAM models, before vs. after. Note, because the "payload" is more than one suspension on the HAMs, I got lazy and started connecting two constants to four places, rather than putting in eight constants. In doing so, I harnessed the trickery of the bus creator / bus selector, and re-arranged the order hidden within. The screenshot highlights this debauchery in purple.
Here are some before vs. after, example top level model changes for the various types of HSTS and HLTS: PRM (an RC_MASTER) before vs. after MC2 (an MC_MASTER) before vs. after PR3 (an HLTS_MASTER) before vs. after
We aimed to do A2L steps over the weekend, 15 minutes/step (76630, 76683), but as Elenna points out, with the camera servos running the beam spot stayed static on the mirror while the mirror actuation point changed.
Looking at data from A2L tuning in April 2023 (68384, attached plot), A2L steps are ~0.4 for a change of around 2 in the camera offset (TRAMP 120s). We'll keep the steps about the same as over the weekend: +/-1.0 per 15 minutes, up to +/-2.0, in the H1:ASC-CAM_{PIT,YAW}{1,2,3}_OFFSET channels during observing, with the nominal TRAMPs extended from 10s to 120s. These offsets are already unmonitored in SDF since they are reset each lock; I unmonitored the TRAMPs.
These offsets vary by up to 3 counts lock to lock (see attached plot); they are set based on ADS converging.
Script saved in /ligo/gitcommon/labutils/beam_spot_raster/camera_servo_offset_stepper.py Scheduled to run if we're in NLN via tmux session on cdsws26 at:
Tony will be here during the first set so can cancel the others if there is any issues.
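A minimal sketch of the stepping logic described above (not the real camera_servo_offset_stepper.py, which I haven't reproduced): the EPICS write, e.g. via ezca, is stubbed as a callback so the schedule can be checked offline, and the return-to-nominal pattern between excursions is an assumption.

```python
# Hypothetical sketch of the camera-offset stepping schedule; the real
# script in /ligo/gitcommon/labutils/beam_spot_raster/ may differ.

DWELL_S = 15 * 60   # 15 minutes per step, per the log
TRAMP_S = 120       # extended offset ramp time, per the log

def offset_schedule(nominal, step=1.0, max_delta=2.0):
    """Steps of +/-step out to +/-max_delta, returning to nominal between
    excursions (the return-to-nominal pattern is an assumption)."""
    deltas, d = [], step
    while d <= max_delta + 1e-9:
        deltas.extend([+d, 0.0, -d, 0.0])
        d += step
    return [nominal + x for x in deltas]

def run(nominal, write, sleep=lambda s: None):
    """Apply each offset via write(value), dwelling DWELL_S between steps."""
    for value in offset_schedule(nominal):
        write(value)
        sleep(DWELL_S)
```

With the quoted numbers, a full sweep of one offset channel takes eight 15-minute dwells, i.e. about two hours per camera degree of freedom.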
Jennie, Camilla
I looked at the CAM3 test from 08:58:40 UTC to 12:46:33 UTC and can see that a high offset (ASC-CAM_{PIT3, YAW3}_OFFSET) in both pitch and yaw corresponds to high power build-ups in the PRC and arms. Yaw seems to have more of an effect than pitch. The range dips when the offset is away from its nominal state. See first image. The offsets don't seem to have much effect on KAPPA C.
During the CAM2 test the IFO unlocked, so only the P2L steps seem to have run; it's hard to see a trend in the build-ups for this as the IFO was still thermalising - maybe this should be run again. See second image.
For the CAM1 test the yaw steps were interrupted by lockloss so we will have to redo these.
The pitch steps for CAM 1 show optimum build-ups/power recycling at Pitch offset = -231.6 counts and worst at -229.6 counts. The range is worst when the build-ups are best and vice versa.
I am not sure why lower circulating power makes the range drop for all these measurements.
We scanned the FC detuning as shown in the first attachment. The nominal FC detuning is -28. The FC detuning below -31 seems better around 30 Hz. The second attachment shows that the BNS range at -34 FC detuning seems a few Mpc better than nominal so we set the FC detuning at -34 as shown in the third attachment.
A sinc function / temperature measurement from the single pass (spare) SHG is shown. The basic idea of the experiment was to see if the crystal is intact (it is). For this purpose, the crystal was pumped with 1064 nm and after one pass (single pass) it was checked how much green light was generated. The temperature was adjusted to find the best possible phase matching temperature.
No similarity to Mount Saint Helens, as described in 76239, was observed. Conclusion: the crystal is intact. (What seems strange is the asymmetry in the side maxima.)
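For reference, the ideal single-pass SHG conversion curve follows sinc² of the phase mismatch, which is approximately linear in the temperature offset near phase matching. The sketch below uses illustrative parameters, not the spare SHG's actual phase-matching temperature or bandwidth; note the ideal curve has symmetric side maxima at roughly 4.7% of the peak, which the observed asymmetry departs from.

```python
import numpy as np

def shg_efficiency(T, T_pm, dT):
    """Ideal single-pass SHG conversion vs crystal temperature:
    eta ~ sinc^2(pi * (T - T_pm) / dT), dT being the phase-matching
    bandwidth. Parameters are illustrative, not the spare SHG's values."""
    x = (np.asarray(T, dtype=float) - T_pm) / dT
    return np.sinc(x) ** 2  # numpy's sinc is sin(pi*x)/(pi*x)

T = np.linspace(30.0, 45.0, 601)              # degC scan range (assumed)
eta = shg_efficiency(T, T_pm=37.5, dT=2.0)
# Ideal curve: peak of 1 at T_pm, nulls at T_pm +/- dT, and symmetric side
# maxima -- the measured asymmetry is the part this model cannot explain.
```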
Setup:
J. Kissel, L. Dartez ECR E2400103 IIET 30760 WP 11786
To better coordinate and confirm that the front end portion of the calibration pipeline and the GDS portion running on the DMT machines remain in sync in terms of the calibration model installed, which is necessary for good calibration, we are adding two slow 16 Hz EPICS channels to the h1/l1calcs models (so both LLO and LHO) that will be added to the GDS broadcaster list so the GDS calibration pipeline can read them. These are H1/L1:CAL-CALIB_REPORT_ID_INT and H1/L1:CAL-CALIB_REPORT_HASH_INT.
Louis and I created the top-level EPICS records today; see attached screenshot. The change has been committed to the userapps SVN under /opt/rtcds/userapps/release/cal/h1/models/h1calcs.mdl at rev 27307.
It's in no way a complicated addition to the model, but I ran a test compile of h1calcs.mdl on the build machine anyway just to check. The test compile succeeded without issue.
J. Kissel
These model changes were installed this morning. As such, I've updated the safe.snap SDF system to initialize the H1:CAL-CALIB_REPORT_ID_INT and H1:CAL-CALIB_REPORT_HASH_INT channels. I've also chosen to monitor them, since we don't expect them to change during observation and/or regularly, and this information will become critical for calibration librarianship.