I ran the scheduled calibration measurements starting at 16:30 UTC following the wiki.
Broadband Start Time: 1418056220
Broadband End Time: 1418056588
Simulines Start Time: 1418056620
16:43:42 UTC EX saturation
Simulines End Time: 1418058008
Files Saved:
2024-12-12 17:00:28,850 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20241212T163643Z.hdf5
2024-12-12 17:00:28,857 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20241212T163643Z.hdf5
2024-12-12 17:00:28,862 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20241212T163643Z.hdf5
2024-12-12 17:00:28,866 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20241212T163643Z.hdf5
2024-12-12 17:00:28,870 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20241212T163643Z.hdf5
ICE default IO error handler doing an exit(), pid = 900165, errno = 32
PST: 2024-12-12 09:00:28.919662 PST
UTC: 2024-12-12 17:00:28.919662 UTC
GPS: 1418058046.919662
Wed Dec 11 10:04:17 2024 INFO: Fill completed in 4min 14secs
Gerardo confirmed a good fill curbside. Late entry for yesterday's fill. Note: the Y marker on the trend is displayed incorrectly as 70C; it is actually 65C.
TITLE: 12/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 4mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.32 μm/s
QUICK SUMMARY:
I ran the coherence check and range comparison, comparing the better range at the start of the lock to now (though Sheila said she's not sure these plots are showing exactly what they should be).
Secondary microseism also took a step up above the 90th percentile in the past 30 minutes and brought SEI_ENV to USEISM.
TITLE: 12/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: One lockloss this shift with an unknown cause, but otherwise a quiet evening with no sign of The Noise seen last night. H1 has now been observing for about 30 minutes.
Lockloss @ 04:00 UTC - link to lockloss tool
No obvious cause, but maybe a very slight ETMX glitch right before the lockloss. Ends lock stretch at 26:51.
H1 back to observing at 05:31 UTC. DRMI looked quite bad, so I opted for an initial alignment, which ran automatically, as did the rest of main locking.
J. Oberling, R. Short
This afternoon, Jason and I started to look into why the FSS has been struggling to relock itself recently. In short, once the autolocker finds a RefCav resonance, it's been able to grab it, but loses it after about a second. This happens repeatedly, sometimes taking up to 45 minutes for the autolocker to finally grab and hold resonance on its own (which led me to do this manually twice yesterday). We first noticed the autolocker struggling when recovering the FSS after the most recent NPRO swap on November 22nd, which led Jason to manually lock it in that instance.
While looking at trends of when the autolocker both fails and is successful in locking the RefCav, we noticed that the fastmon channel looks the most different between the two cases. In a successful RefCav lock (attachment 1), the fastmon channel will start drifting away from zero as the PZT works to center on the resonance, but once the temperature loop turns on, the signal is brought back and eventually settles back around zero. In unsuccessful RefCav lock attempts (attachments 2 and 3), the fastmon channel will still drift away, but then lose resonance once the signal hits +/-13V (the limit of the PZT as set by the electronics within the TTFSS box) before the temploop is able to turn on. I also looked back to a successful FSS lock with the NPRO installed before this one (before the problems with the autolocker started, attachment 4), and the behavior looks much the same as with successful locks with the current NPRO.
It seems that with this NPRO, for some reason, the PZT is frequently running out of range when trying to center on the RefCav resonance before the temploop can turn on to help, but it sometimes gets lucky. Jason and I took some time familiarizing ourselves with the autolocker code (written in C and unchanged in over a decade) to give us a better idea of what it's doing. At this point, we're still not entirely sure what about this NPRO is causing the PZT to run out of range, but we do have some ideas of things to try during a maintenance window to make the FSS lock faster.
As part of my FSS work this morning (alog81865), I brought the State 2 delay down from 1 second to 0.5 seconds, and so far today every FSS lock attempt has succeeded on the first try. I'll leave this "Band-Aid" fix in until we find a reason to change it back.
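For future checks of the failure mode described above (the fastmon signal hitting the +/-13 V PZT limit before the temperature loop engages), here is a minimal gwpy sketch that pulls the fastmon signal around a lock attempt and finds when it first rails. The channel name is a placeholder for whichever fastmon channel gets trended, and the GPS times in the example are illustrative only.

# Sketch: find when the FSS fastmon signal first hits the +/-13 V PZT limit
# during a RefCav lock attempt. Channel name below is a placeholder.
from gwpy.timeseries import TimeSeries

FASTMON = "H1:PSL-FSS_FAST_MON_OUT_DQ"   # assumed/placeholder channel name
PZT_LIMIT = 13.0                          # volts, limit set in the TTFSS box

def first_rail_time(start, end, channel=FASTMON, limit=PZT_LIMIT):
    """Return the GPS time the fastmon signal first reaches +/-limit in
    [start, end), or None if it never does."""
    data = TimeSeries.get(channel, start, end)
    railed = abs(data.value) >= limit
    if railed.any():
        idx = int(railed.argmax())            # first sample at/over the limit
        return float(data.times.value[idx])
    return None

# Example (GPS window is made up):
# print(first_rail_time(1417700000, 1417700120))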
Starting from the somewhat strange RM1 spectra I saw earlier today (alog ), I have been looking at HAM1-related things. I don't think this is a strong correlation, and maybe this just means that the HAM1 FF is doing what it should be, but it seems that the TTL4Cs on HAM1 are qualitatively different between times of good range and poor range. However, the confusing thing is that the L4Cs seem bad at times when the range is good, which means that I don't really understand how they could be causing our troubles. Also, other times seem to not have this inverse pseudo-correlation. So, I'm not so sure that this is a sign of our troubles, or just something totally unrelated.
If one or more of the L4Cs is failing (which can be intermittent), that would change the effectiveness of the HAM1 TT asc ff. Turning off the HAM1 asc FF (as Elenna and I commented on earlier) would help narrow things down. I can try to do an assessment of the health of the L4Cs offline.
Sheila found that H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ stepped up from -0.2 to 0 on 2024/12/05 at 21:40:07 UTC (13:40 PST); plot attached. This is of interest as this channel has been a witness to our noisy/low range periods in DARM but is not connected to anything. I see no reason for this step up: the only person in the LVEA at the time was Robert setting up VP measurements (near HAM3, not CO2X) 81628, the CO2 laser remained locked, and we were not touching the CO2 chiller around that time 81634.
Since this step up, this channel has not been a good witness of our DARM noise, maybe the cable wasn't grounded and something changed to ground it on 12/05. Plot of it being a witness to the noise on 12/02 and not on 12/11.
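For reference, a minimal gwpy sketch of pulling the trend around the step quoted above (the channel and UTC time are the ones given in this entry; the window length and plot filename are arbitrary):

# Sketch: trend H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ around the reported step
# (2024-12-05 21:40:07 UTC) to confirm the -0.2 -> 0 change.
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

step_gps = to_gps("2024-12-05 21:40:07")    # convert the UTC time to GPS
window = 600                                 # look +/- 10 minutes around the step
data = TimeSeries.get("H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ",
                      step_gps - window, step_gps + window)
plot = data.plot()
plot.gca().set_ylabel("CTRL2 output [arb.]")
plot.savefig("co2x_iss_ctrl2_step.png")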
Before and after the step up, this CO2 channel is still a witness to the CO2 rotation stage moving, attached. Both the CO2X and CO2Y ISS CTRL2 channels see the rotation stages move (some crosstalk in the chassis?), though the CO2Y signal is orders of magnitude larger. Jason was looking into these channels and the chassis.
I don't see any change in the H1:TCS-ITMX_CO2_RIN_INLOOP_OUTMON channel at that time, but the reason for the step in the CTRL2 output is a digital offset in the H1:TCS-ITMX_CO2_AOM_SET_POINT bank that was turned off. I turned it off when we were looking at it and forgot to alog it; apologies.
Curious that we lost sensitivity to whatever this is when an offset was removed, but I think this is a good clue.
I've put the offset back in to see if we get our "monitor" back. Accepted in the safe and observe snaps, but only one screenshot.
It seems that with the offset on again, this channel is again a witness of the noisy times.
TITLE: 12/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: We stayed locked the entire shift, over 23 hours as of 00:30 UTC. Lots of investigations into the range drops today; the range has been steady since the last occurrence ~14 hours ago. If the drop happens again there's a plan to investigate out of observing (alog81774).
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:53 | OPS | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 21:13 |
| 15:56 | FAC | Tyler | MidX | N | Crane inspection, FCES next (~10/11am?) | 18:24 |
| 16:07 | FAC | Karen | Optics lab, vac prep | N | Tech clean | 16:51 |
| 16:34 | FAC | Chris | MidX | N | Join Tyler | 17:01 |
| 16:51 | FAC | Karen | OSB receiving | N | Cardboard moving, door rollup | 17:02 |
| 17:01 | FAC | Chris | MidX | N | Driving a trailer | 18:24 |
| 17:02 | FAC | Karen | Woodshop, firepump room | N | Tech clean | 17:14 |
| 18:24 | FAC | Tyler | FCES | N | Crane inspection | 18:35 |
| 18:59 | PSL | Jason | Optics lab | N | Put away optic | 19:05 |
| 19:18 | VAC | Gerardo | OSB receiving | N | Loading and moving away parts | 19:22 |
| 19:21 | FAC | Chris | OSB receiving | N | Moving the van away | 19:25 |
| 21:07 | CAL | Francisco | PCAL lab | LOCAL | PCAL work | 21:38 |
I can't find the right alog for directions, so here is an easily searchable set of directions for HAM1 FF.
There is a master switch for the HAM1 feedforward, but it can sometimes cause problems to slam it on and off that way. Instead, you can ramp the input to the feedforward down to zero. From sitemap:
SEI > ISI Sensor Config > [middle of the screen, see attachment] HAM1 ASC FF > L4CINF
This opens a filter bank page with four filter banks. They each have a gain of 1 and a ramp time of 20 seconds. Set all of these gains to zero to turn off the input to the feedforward. Ramp them back to 1 to engage.
I put CLI instructions in this alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=79033.
caput H1:HPI-HAM1_TTL4C_FF_INF_RX_GAIN 0 & caput H1:HPI-HAM1_TTL4C_FF_INF_RY_GAIN 0 & caput H1:HPI-HAM1_TTL4C_FF_INF_X_GAIN 0 & caput H1:HPI-HAM1_TTL4C_FF_INF_Z_GAIN 0 &
This should turn the HAM1 asc ff off in a safe way.
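The same thing can be scripted from Python with pyepics; a minimal sketch using the channels from the caput command above (the ramping still relies on each filter bank's 20-second ramp time):

# Sketch: ramp the HAM1 TTL4C feedforward input gains to 0 (off) or 1 (on).
# Channels are the same ones as in the caput command above.
from epics import caput

FF_GAIN_CHANNELS = [
    "H1:HPI-HAM1_TTL4C_FF_INF_RX_GAIN",
    "H1:HPI-HAM1_TTL4C_FF_INF_RY_GAIN",
    "H1:HPI-HAM1_TTL4C_FF_INF_X_GAIN",
    "H1:HPI-HAM1_TTL4C_FF_INF_Z_GAIN",
]

def set_ham1_ff(enabled):
    """Set all HAM1 ASC FF input gains to 1 (on) or 0 (off); each gain
    ramps over the filter bank's own ramp time (20 s per the note above)."""
    value = 1 if enabled else 0
    for chan in FF_GAIN_CHANNELS:
        caput(chan, value)

# set_ham1_ff(False)   # turn the feedforward input off
# set_ham1_ff(True)    # turn it back on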
As several of us just talked about in the control room, if The Noise is happening when folks are on site and there's a plan of a thing to try, please feel free to drop Observing to check. I think we'll keep the list of things to try elsewhere more dynamic than the alog (probably the LHO commissioning google doc). Some examples of things that we're thinking of right now are (a) turning off the HAM1 FF, or (b) walking (gently) around electronics racks (CER, EX ESD driver area) to see if we can hear any electronics 'whining' or otherwise going bad.
Please do send me a Mattermost message, which should audibly ping my phone, but this is causing enough problems for our data quality that there is no need to wait for a response from me before trying something.
TITLE: 12/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY: H1 has been observing for 23 hours. The vac prep lab dust monitor is reporting as disconnected.
This alog presents the first steps I am taking into answering the question: "what is the calibrated residual test mass motion from the ASC?"
As a reminder, the arm alignment control is run in the common/differential, hard/soft basis, so we have eight total loops governing the angular motion of the test masses: pitch and yaw for differential hard/soft and common hard/soft. These degrees of freedom are diagonalized in actuation via the ASC drive matrix. The signals from each of these ASC degrees of freedom are then sent to each of the four test masses, where the signal is fed "up" from the input of the TST ISC filter banks through the PUM/UIM/TOP locking filter banks (I annotated this screenshot of the ITM suspension medm for visualization). No pitch or yaw actuation is sent to the TST or UIM stages at Hanford. The ASC drive to the PUM is filtered through some notches/bandstops for various suspension modes. The ASC drive to the TOP acquires all of these notches and bandstops and an additional integrator and low pass filter, meaning that the top mass actuates in angle only at very low frequency (below 0.5 Hz).
Taking this all into account involves a lot of work, so to just get something off the ground, I am only thinking about ASC drive to the PUM in this post. With a little more time, I can incorporate the drive to the top mass stage as well. Thinking only about the PUM makes this a "simple" problem.
I have done just this to achieve the four plots I have attached to this alog. These plots show the ITM and ETM test mass motion in rad/rtHz from each degree of freedom and the overall radian RMS value. That is, each trace shows exactly how many radians of motion each ASC degree of freedom is sending to the test mass through the PUM drive. The drive matrix value is the same in magnitude for each ITM and each ETM, meaning that the "ITM" plot is true for both ITMX and ITMY (the drives might differ by an overall sign, though).
Since I am just looking at the PUM, I also didn't include the drive notches. Once I add in the top mass drive, I will make sure I capture the various drive filters properly.
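To make the PUM-only recipe concrete, here is a stripped-down sketch of the per-DOF calculation: take the ASD of one ASC DOF output, scale by the drive matrix element and a counts-to-torque factor, and apply the PUM-torque-to-test-mass-angle response. The channel name, matrix element, counts-to-torque number, and suspension response below are all placeholders, not the values used for the attached plots (those come from the front-end model and the suspension model).

# Sketch of projecting one ASC DOF drive through the PUM into test-mass angle.
# All calibration numbers here are placeholders for illustration only.
import numpy as np
from gwpy.timeseries import TimeSeries

CHANNEL = "H1:ASC-DHARD_P_OUT_DQ"   # example ASC DOF output (counts)
DRIVE_MATRIX = 1.0                   # placeholder output matrix element to this PUM
CTS_TO_TORQUE = 1.0e-8               # placeholder counts -> N*m at the PUM stage

def pum_angle_asd(start, end, sus_response):
    """Return (frequencies [Hz], ASD [rad/rtHz]) of test-mass angle driven by
    one ASC DOF through the PUM. sus_response(f) is the magnitude of the
    PUM-torque-to-test-mass-angle response in rad/(N*m)."""
    drive = TimeSeries.get(CHANNEL, start, end)
    asd = drive.asd(fftlength=64, overlap=32)                   # counts/rtHz
    freqs = asd.frequencies.value
    torque_asd = asd.value * abs(DRIVE_MATRIX) * CTS_TO_TORQUE  # N*m/rtHz
    return freqs, torque_asd * sus_response(freqs)              # rad/rtHz

# Toy suspension response just so the sketch runs end to end (not physical):
toy_sus = lambda f: 1.0 / np.maximum(f, 1.0)**2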
Some commentary: These plots make it very evident how different the drive is from each ASC degree of freedom. This is confusing because in principle we know that the "HARD" and "SOFT" plants are the same for common and differential, and could use the same control design. However, we know that the sensor noise at the REFL WFS, which controls CHARD, is different from the sensor noise at the AS WFS that controls DHARD, so even with the exact same controller, we would see different overall drives. We also know that we don't use the same control design for each DOF, due to the sensor noise limitations and also the randomness of commissioning that has us updating each ASC controller at different times for different reasons. For example, the soft loops both run on the TMS QPDs, but still have different drive levels.
Some action items: besides continuing the process of getting all the drives from all stages properly calibrated, we can start thinking again about our ASC design and how to improve it. I think two standout items are the SOFT P and CHARD Y noise above 10 Hz on these plots. Also, the fact that the overall RMS from each loop varies is something that warrants more investigation. I think this is probably related to the differing control designs, sensor noise, and noise from things like HAM1 motion or PR3 bosems. So, one thing I can do is project the PR3 damping noise that we think dominates the REFL WFS RMS into test mass motion.
I have just realized I mixed up the DACs and ADCs (again) and the correct count-to-torque calibration should be:
So these plots are wrong by a factor of 2. I will correct this in my code and post the corrected plots shortly.
The attached plots are corrected for the erroneous factor of two mentioned above, which has the overall effect of reducing the motion by a factor of 2.
As part of a different investigation (still trying to understand why our range goes bad sometimes), I incidentally may have found a source for some glitches / range drops.
I'm not going to look further into this, but instead tag detchar-request in hopes that someone else has some time to think about it. I'll also directly send messages to Jim and Elenna, who manage this feedforward system.
In the attached screenshot there are a few channels that include 'TTL4C' - those are the tabletop L4C seismic sensors on HAM1. There are also 'FFHAM1' channels that take a channel derived from those L4Cs, and feed it forward to the error signal of the ASC loop that is referred to in the name. A moment or so after there is a glitch in those channels, there is a range drop in DARM.
I will say that when I zoom in, it looks like the glitch appears in the FFHAM1 channel before it appears in the TTL4C channel, but based on my understanding of the signal flow, I'm not entirely sure how that's possible. I'm hoping that Elenna (who knows much more than I do) can help think this through.
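One way to check the apparent ordering without relying on eyeballed cursors would be to cross-correlate the two signals in a short window around a glitch and look at the sign of the lag at the correlation peak. A rough sketch is below; both channel names are placeholders, not the exact DQ names from the screenshot.

# Sketch: estimate which of two channels a glitch appears in first by
# cross-correlating them over a short window. Channel names are placeholders.
import numpy as np
from gwpy.timeseries import TimeSeries

CHAN_A = "H1:HPI-HAM1_TTL4C_PLACEHOLDER"   # tabletop L4C-derived channel (placeholder)
CHAN_B = "H1:ASC-FFHAM1_PLACEHOLDER"       # feedforward path channel (placeholder)

def glitch_lag(start, end):
    """Return the lag in seconds of CHAN_B relative to CHAN_A at the
    cross-correlation peak; positive means CHAN_B lags CHAN_A."""
    a = TimeSeries.get(CHAN_A, start, end)
    b = TimeSeries.get(CHAN_B, start, end).resample(float(a.sample_rate.value))
    n = min(len(a), len(b))
    x = a.value[:n] - a.value[:n].mean()
    y = b.value[:n] - b.value[:n].mean()
    xcorr = np.correlate(y, x, mode="full")
    lag_samples = int(np.argmax(np.abs(xcorr))) - (n - 1)
    return lag_samples / float(a.sample_rate.value)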
The PCAL team went to End Y today with PS4 to do a regular measurement and a "long measurement", consisting of 15 minutes in each position instead of 240 seconds.
PS4 rho, kappa, u_rel on 2024-10-25 corrected to ES temperature 299.3 K : -4.71053733727373 -0.0002694340454223 4.653616030093759e-05
Copying the scripts into tD directory...
Connected to nds.ligo-wa.caltech.edu
martel run
reading data at start_time: 1417885234
reading data at start_time: 1417885750
reading data at start_time: 1417886151
reading data at start_time: 1417886600
reading data at start_time: 1417886970
reading data at start_time: 1417887305
reading data at start_time: 1417887420
reading data at start_time: 1417888020
reading data at start_time: 1417888356
Ratios: -0.5346804302935332 -0.543306389094602
writing nds2 data to files
finishing writing
Background Values:
bg1 = 18.604505; Background of TX when WS is at TX
bg2 = 5.391990; Background of WS when WS is at TX
bg3 = 18.556794; Background of TX when WS is at RX
bg4 = 5.396890; Background of WS when WS is at RX
bg5 = 18.642247; Background of TX
bg6 = -0.202112; Background of RX
The uncertainties reported below are relative standard deviations in percent
Intermediate Ratios:
RatioWS_TX_it = -0.534680;
RatioWS_TX_ot = -0.543306;
RatioWS_TX_ir = -0.527163;
RatioWS_TX_or = -0.534899;
RatioWS_TX_it_unc = 0.055923;
RatioWS_TX_ot_unc = 0.051445;
RatioWS_TX_ir_unc = 0.062749;
RatioWS_TX_or_unc = 0.054710;
Optical Efficiency
OE_Inner_beam = 0.986010;
OE_Outer_beam = 0.984479;
Weighted_Optical_Efficiency = 0.985245;
OE_Inner_beam_unc = 0.044504;
OE_Outer_beam_unc = 0.041112;
Weighted_Optical_Efficiency_unc = 0.060587;
Martel Voltage fit:
Gradient = 1637.914766;
Intercept = 0.150812;
Power Imbalance = 0.984123;
Endstation Power sensors to WS ratios:
Ratio_WS_TX = -0.927655;
Ratio_WS_RX = -1.384163;
Ratio_WS_TX_unc = 0.044122;
Ratio_WS_RX_unc = 0.042178;
=============================================================
============= Values for Force Coefficients =================
=============================================================
Key Pcal Values : GS = -5.135100; Gold Standard Value in (V/W)
WS = -4.710537; Working Standard Value
costheta = 0.988362; Cosine of angle of incidence
c = 299792458.000000; Speed of Light
End Station Values : /ligo/gitcommon/Calibration/pcal
TXWS = -0.927655; Tx to WS Rel responsivity (V/V)
sigma_TXWS = 0.000409; Uncertainty of Tx to WS Rel responsivity (V/V)
RXWS = -1.384163; Rx to WS Rel responsivity (V/V)
sigma_RXWS = 0.000584; Uncertainty of Rx to WS Rel responsivity (V/V)
e = 0.985245; Optical Efficiency
sigma_e = 0.000597; Uncertainty in Optical Efficiency
Martel Voltage fit :
Martel_gradient = 1637.914766; Martel to output channel (C/V)
Martel_intercept = 0.150812; Intercept of fit of Martel to output (C/V)
Power Loss Apportion : beta = 0.998844; Ratio between input and output (Beta)
E_T = 0.992021; TX Optical efficiency
sigma_E_T = 0.000301; Uncertainty in TX Optical efficiency
E_R = 0.993169; RX Optical Efficiency
sigma_E_R = 0.000301; Uncertainty in RX Optical efficiency
Force Coefficients :
FC_TxPD = 9.138978e-13; TxPD Force Coefficient
FC_RxPD = 6.216600e-13; RxPD Force Coefficient
sigma_FC_TxPD = 4.923605e-16; Uncertainty in TxPD Force Coefficient
sigma_FC_RxPD = 3.250921e-16; Uncertainty in RxPD Force Coefficient
data written to ../../measurements/LHO_EndY/tD20241210/
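As a quick sanity check on the optical efficiency numbers above: the weighted value is consistent with a simple equal-weight average of the inner- and outer-beam efficiencies (assuming equal weighting, which may not be exactly what the analysis script does).

# Check that Weighted_Optical_Efficiency ~= equal-weight average of the two beams.
oe_inner = 0.986010           # OE_Inner_beam from the output above
oe_outer = 0.984479           # OE_Outer_beam from the output above
weighted_reported = 0.985245  # Weighted_Optical_Efficiency from the output above
equal_weight_avg = 0.5 * (oe_inner + oe_outer)
print(abs(equal_weight_avg - weighted_reported) < 1e-6)   # True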
Before: beam spot looking a little oblong, but not too bad.
Martel Voltage Test plots
WS_at_RX plots
WS at RX Side with Both Beams
WS at Transmitter Module
PCAL ES procedure & Log DCC T1500062 (modified for long measurement)
The analysis for the long measurement is still pending.
This adventure was brought to you by Dripta & Tony S.
I forgot to link to the trends doc:
https://git.ligo.org/Calibration/pcal/-/blob/master/O4/ES/measurements/LHO_EndY/tD20241210/LHO_EndY_PD_ReportV4.pdf?ref_type=heads