I accepted these P2L spot gain SDF Diffs to get into Observing.
Summary highlights of the DQ shift for the week from 2024-04-08 to 2024-04-14:
The IFO was in observing mode 74% of the time on average (lower during the first half of the week and better during the second half).
LHO was observing at a range of 158 Mpc, with some dips in range to 134 Mpc on the first two days of the week (the 8th and 9th).
For most of the week, whenever the IFO transitioned to observing after a lockloss, the strain h(t) showed noise in the lower-frequency region (10-50 Hz) and also broadband high-frequency noise (above 500 Hz), which reduced with time. On some days, these features lasted for more than a couple of hours.
Violin modes at 500 Hz and their harmonics were strong at the beginning of the observation time after a lockloss but damped down after a couple of hours.
Most of the high-SNR Omicron glitches were centered at ~40 Hz.
The dominant channels in Hveto were usually
H1:PEM-EX_VMON_ETMX_ESDPOWER48_DQ
H1:ASC-CHARD_Y_OUT_DQ
The number of Fscan lines has been on the higher side, in the range of 600 to 700, on most days this week.
Full DQ shift report with daily observations can be found in the wiki page here: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20240408
TITLE: 04/18 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM_EARTHQUAKE
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.97 μm/s
Secondary useism: 1.28 μm/s
QUICK SUMMARY:
Starting the shift, Oli and I noticed that the IFO was still relocking and needed an initial alignment.
Since then, initial alignment had stalled and been restarted, possibly because the IMC was unlocked. I'm unsure whether Oli had done that or not.
Once that completed, another earthquake hit, but it was not announced over Verbals.
M 5.3 - 60 km WSW of Pozo Dulce, Mexico, 2024-04-18 07:39:45 UTC, 26.546°N 110.267°W, 10.0 km depth
So I'm just waiting on the earthquake to pass.
TITLE: 04/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Quiet for most of the night, but a local earthquake knocked us out close to the end of my shift. Currently running an initial alignment under H1 MANAGER, with Tony watching to make sure H1 MANAGER gets us relocking once it's done.
LOG:
23:00 Detector Observing and Locked for over 15 hours
06:08 Earthquake mode activated
06:11 Lockloss due to local earthquake
- ALS flashes were really low and took a while to come up
07:03 Started an Initial Alignment
Lockloss 04/18 06:11 UTC due to local earthquake :( We were locked for 22.5 hours.
Observing at 155 Mpc and locked for 20 hours exactly. Calm evening with low wind.
Naoki, Vicky, Jennie W
Naoki followed the directions in to set up the SQZ beam OMC scan.
We had a problem with the DC centering loops. After we switched them on they saturated the OM1 and OM2 suspensions.
We got around this by switching each of the 4 degrees of freedom on first - ie. DC3 Y, DC3 P, DC4 Y, DC4 P.
Then we engaged OMC ASC and this seemed to work ok.
When we tried to manually lock the OMC length loop, we had problems: when we switched on the gain of 1, it would lose lock, even on a TEM00 mode of the expected height (0.6 mA on DCPD_SUM).
Vicky got around this by using a lower gain and not engaging the BOOST filter in the servo filter bank.
Then she had to touch up the alignment in lock with OM3.
locked quiet time 1397329970 GPS 1 min:
OMC-REFL_A_LF_OUT16 = 0.0930255 mW
OMC-DCPD_SUM_OUTPUT = 0.652156 mA
unlocked quiet time 1397330106 GPS 1 minute:
OMC-REFL_A_LF_OUT16 = 1.04825 mW
OMC-DCPD_SUM_OUTPUT = -0.00133668 mA
dark measurement 1397330625 GPS 1 minute:
OMC-REFL_A_LF_OUT16 = -0.0133655 mW
OMC-DCPD_SUM_OUTPUT = -0.00133668 mA
I noticed after taking the dark measurement that OM1 and OM2 were saturating again, and I needed to clear history twice on OM1 to remove this.
Reverted the OM1, OM2, OM3, and OMC sliders at 8:33 am (local time) on the 16th of April.
Data is saved as Ref 3, 4, and 5 in /ligo/home/jennifer.wright/Documents/OMC_scan/2024_04_16_OMC_scan.xml, where 3 is the scan channel OMC-DCPD_SUM_OUT_DQ on the bottom right plot, 4 is the PZT excitation channel OMC-PZT2_EXC on the bottom left plot, and 5 is the monitor of the actual PZT output voltage OMC-PZT2_MON_DC_OUT_DQ on the top right plot.
Using Sheila's code from this entry and updating the code with the current OMC values for transmission of the mirrors:
Tio = 7670e-6 #according to T1500060 page 116 input and output mirror transmission
R_inBS = 1-7400e-6
The output of the code gives the following values for the cavity incident power, efficiency, and finesse:
Power on refl diode when cavity is off resonance: 1.062 mW
Incident power on OMC breadboard (before QPD pickoff): 1.078 mW
Power on refl diode on resonance: 0.106 mW
Measured efficiency ((DCPD current / responsivity if QE=1) / incident power on OMC breadboard): 70.5 %
Assumed QE: 100 %
Power in transmission (for this QE): 0.760 mW
HOM content inferred: 8.748 %
Cavity transmission inferred: 77.827 %
Predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 70.493 %
OMC efficiency for 00 mode (including pick-off BS, cavity transmission, and QE): 77.251 %
Round trip loss: 2063 ppm
Finesse: 362.923
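The downstream arithmetic can be sketched in a few lines of Python (a sketch only, not Sheila's actual code; the ~1.5% QPD pick-off fraction is inferred from the quoted refl and breadboard powers and is an assumption):

```python
import math

# Dark-offset-subtracted refl powers from the measurements above [mW]
dark_refl = -0.0133655
p_refl_off = 1.04825 - dark_refl        # cavity off resonance, ~1.062 mW
p_refl_on = 0.0930255 - dark_refl       # cavity on resonance, ~0.106 mW

# DCPD sum current on resonance, dark-subtracted [mA]
i_dcpd = 0.652156 - (-0.00133668)

# Ideal (QE = 1) responsivity at 1064 nm: R = e * lambda / (h * c) [A/W]
e, h, c = 1.602176634e-19, 6.62607015e-34, 2.99792458e8
resp = e * 1064e-9 / (h * c)            # ~0.858 A/W

# Power transmitted through the OMC for the assumed QE = 100% [mW]
p_trans = i_dcpd / resp                 # ~0.760 mW

# Incident power on the OMC breadboard, undoing an assumed ~1.5% QPD pick-off
f_pickoff = 0.015                       # assumption, inferred from 1.062/1.078
p_incident = p_refl_off / (1 - f_pickoff)

eff_measured = p_trans / p_incident     # ~70.5%

# Predicted efficiency: R_inputBS * mode_matching * cavity_transmission * QE
R_inBS = 1 - 7400e-6
mode_matching = 1 - 0.08748             # 1 - inferred HOM content
cavity_transmission = 0.77827
eff_predicted = R_inBS * mode_matching * cavity_transmission

# Finesse from the lossy two-mirror cavity approximation:
# F ~= 2*pi / (T_in + T_out + L_roundtrip); lands close to the quoted 362.9
Tio = 7670e-6
L_rt = 2063e-6
finesse = 2 * math.pi / (2 * Tio + L_rt)

print(round(eff_measured, 4), round(eff_predicted, 4), round(finesse, 1))
```

This reproduces the quoted measured and predicted efficiencies to within rounding; the quoted finesse presumably comes from the exact cavity formula rather than the approximation above.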
I need to compare the HOM content measurement with that derived from the mode scan.
Just realising now that I need this data: I never posted the results of the OMC scan with this squeezed beam for the ZM4 = 120, ZM5 = 137 PSAMS settings.
The analysis was run with /labutils/omc_scan/fit_two_peaks_no_sidebands10.py, and the function this uses to fit the whole scan is OMCscan_no_sidebands10.py. This code is in the /ligo/gitcommon/labutils/omc_scan repository, on the dev branch.
The first graph shows the full scan; the second is zoomed in on the fit for the 02 mode. Since the astigmatism in our OMC is too small to resolve the two modes, this fit has less value than with the old OMC.
To work out the mode matching, it is probably enough to use the 02-mode height from the data rather than from the fit.
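The mode-matching bookkeeping from scan peak heights is simple arithmetic; a minimal sketch with hypothetical peak heights (the numbers below are illustrative, not our scan data):

```python
# Hypothetical peak heights from an OMC mode scan [arb. units].
# Each entry is the on-resonance transmission of one transverse mode family.
peaks = {"00": 0.913, "02": 0.052, "other_homs": 0.035}

total = sum(peaks.values())

# HOM content = power in everything except the 00 mode, as a fraction of total
hom_content = (total - peaks["00"]) / total

# Mode matching is the complement
mode_matching = peaks["00"] / total

print(f"HOM content: {hom_content:.1%}, mode matching: {mode_matching:.1%}")
```

With real data, `peaks` would be the measured heights of each resolved peak in the scan, directly from the data as suggested above.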
TJ, Camilla. WP 11811
Yesterday, we repeated the 76715 CO2 beam scan using a razor blade on a 25 mm translation stage while measuring the power transmitted to our PD (channel H1:TCS-ITMX/Y_CO2_LSRPWR_MTR_OUTPUT).
We realized that in our first set of measurements some of the blocked beam was transmitted through a hole in the razor blade (changing the radius measurement by ~1 mm), so the CO2X measurements in 76715 are less trustworthy.
All data taken for CO2X and CO2Y attached and summarized below:
Position Downstream of CO2Y Annular Mask [mm] (Downstream = +, Upstream = -) | Beam Radius [mm] | Prev. measured with hole in razor [mm] |
-238 | 11.39 | 12.34 |
0 | N/A | 11.27 (think hole was covered) |
+143 | 10.77 | 11.17 |
+194 | 10.61 | 11.39 |

Position Downstream of CO2X Annular Mask [mm] (Downstream = +, Upstream = -) | Beam Radius [mm] |
+32 | 9.85 |
+133 | 10.01 |
+184 | 10.49 |
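For reference, a razor-blade (knife-edge) scan infers the beam radius by fitting the transmitted power vs. blade position to an error-function profile. A minimal sketch with synthetic data (pure stdlib; this is an illustration of the technique, not the analysis code we used):

```python
import math

def knife_edge_power(x, p0, x0, w):
    """Power past a knife edge at position x for a Gaussian beam of
    1/e^2 intensity radius w centered at x0."""
    return 0.5 * p0 * math.erfc(math.sqrt(2) * (x - x0) / w)

# Synthetic scan: 25 mm of stage travel, beam radius 10.77 mm
# (a value from the CO2Y table above), beam centered mid-travel
w_true, p0, x0 = 10.77, 1.0, 12.5
xs = [i * 0.5 for i in range(51)]            # 0 to 25 mm in 0.5 mm steps
data = [knife_edge_power(x, p0, x0, w_true) for x in xs]

# Crude least-squares fit of w by grid search (x0 and p0 assumed known here;
# a real fit would float all three parameters)
def sse(w):
    return sum((knife_edge_power(x, p0, x0, w) - d) ** 2
               for x, d in zip(xs, data))

w_fit = min((0.01 * k for k in range(500, 1500)), key=sse)
print(f"fitted beam radius: {w_fit:.2f} mm")
```

The fit recovers the radius used to generate the data, since the model and data share the same functional form.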
TITLE: 04/17 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 126Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING
Commissioning today went well. We survived a few high-magnitude earthquakes.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:14 | FAC | Karen | Optics, Vac Prep Labs | N | Technical Cleaning | 15:43 |
15:43 | FAC | Ken | FC Enclosure | N | Electrical maintenance | 16:43 |
15:44 | FAC | Karen | MY | N | Technical Cleaning | 16:44 |
17:17 | FAC | Fil | CS Receiving Door | N | Roll up receiving door | 18:17 |
21:16 | SQZ | Andrei, Sheila | Optics Lab | N | Tour | 22:16 |
TITLE: 04/17 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 12mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
We're Observing and have been Locked for over 15 hours.
Jennie, Jim
We tried Gabriele's newest version of the HAM1 ASC feedforward. New filters were installed last week, but we ran out of time during the commissioning window. It seems like we can still only get good subtraction from the pitch degrees of freedom; we got a lot of 10-ish Hz noise injection when we engaged the yaw degrees of freedom.
To start, we turned off the HAM1 FF by setting H1:HPI-HAM1_TTL4C_FF_INF_{RX,RY,Z,X}_GAIN to zero, then shutting off the H1:HPI-HAM1_TTL4C_FF_OUTSW. While collecting that data we switched to the new filters and zeroed the gains for all of the individual H1:HPI-HAM1_TTL4C_FF_CART2ASC{P,Y}_{DOF}_{DOF}_GAIN filter banks. We then tried turning on all of the pitch feedforward first, by ramping the gains in a couple of steps from 0 to 1. It seemed to work well, but I don't think we actually got to the full gain of 1. We then tried turning on the yaw FF, but that pretty quickly started injecting noise around 10 Hz. In the first attached spectra, red traces are with the new FF, blue are with the FF off.
The second image shows trends where Jennie tries to reconstruct the timeline, which is how we found the first pitch test wasn't complete. Sheila ran an A2L measurement, then we tried the pitch FF again; spectra are in the third plot. All of the red traces are the new P FF (1397415884), blue is the old (1397407980), green is with the FF off (1397405754). This worked well: we got some slight improvements around 15 Hz, and the new CHARD P gets rid of some noise injection around 2 Hz. We didn't see the pitch FF affect the yaw DOFs.
We left the new pitch filters running and accepted them in the Observe SDF. The yaw filters were left off.
I looked at the attempt of engaging the HAM1 yaw FF:
1) the filters that were loaded were correct, meaning they were what I expected
2) retraining on a time when the HAM1 pitch FF was on yields yaw filters that are similar to what was tried, so it doesn't look like a pitch/yaw interaction (Jennie also pointed to some evidence of this in the aLOG)
I suspect there might be cross-coupling between the various yaw DOFs. I would suggest that we upload the newly trained filters (attached) and try to engage the yaw FF one by one, starting with CHARD, which is the one we care about most.
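The one-by-one engagement procedure above can be sketched generically (a hypothetical outline; `set_gain` stands in for however the gain is actually written, e.g. via an EPICS client, and the DOF names, ramp steps, and dwell time are placeholders):

```python
import time

def engage_ff_one_by_one(set_gain, dofs, ramp_steps=(0.3, 0.6, 1.0), dwell_s=30):
    """Engage feedforward gains one DOF at a time, ramping each in steps,
    so a DOF that injects noise can be identified and backed out on its own."""
    for dof in dofs:
        for gain in ramp_steps:
            set_gain(dof, gain)     # write the gain for this DOF only
            time.sleep(dwell_s)     # dwell so spectra can be checked

# Hypothetical usage, CHARD first since it's the DOF we care most about:
# engage_ff_one_by_one(write_epics_gain, ["CHARD_Y", "PRC2_Y", "SRC1_Y"])
```

The point of the structure is simply that only one DOF's gain moves at a time, unlike the all-at-once attempt described above.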
TCS Chiller Water Level Top-Off (Bi-Weekly, FAMIS #27787)
Closes FAMIS 27787
We wanted to check the PRCL offset during our commissioning time this morning; we ended up not having much time for that, but did re-run Elenna's template from 76814. It looks like the PRCL to REFL RIN transfer function is much the same as at that time. There seems to have been a change in the PRCL to DARM transfer function, but that measurement has low coherence at most frequencies.
Joe B from LLO and I ran the short and long stochastic injections with the commands below, starting at 19:08 UTC. Both ran successfully; back to Observing at 19:55 UTC.
hwinj stochastic short --time 1397416098 --run
ifo: H1
waveform root: /ligo/groups/cal/H1/hwinj
config file: /ligo/groups/cal/H1/hwinj/hwinj.yaml
GraceDB url: https://gracedb.ligo.org/api/
GraceDB group: Detchar
GraceDB pipeline: HardwareInjection
excitation channel: H1:CAL-INJ_TRANSIENT_EXC
injection group: stochastic
injection name: short
reading waveform file...
injection waveform file: /ligo/groups/cal/H1/hwinj/stochastic/SB_ER15HLShort_H1.txt
injection waveform sample rate: 16384
injection waveform length: 780.0 seconds
injection start GPS: 1397416098.0
hwinj stochastic long --time 1397417118 --run
ifo: H1
waveform root: /ligo/groups/cal/H1/hwinj
config file: /ligo/groups/cal/H1/hwinj/hwinj.yaml
GraceDB url: https://gracedb.ligo.org/api/
GraceDB group: Detchar
GraceDB pipeline: HardwareInjection
excitation channel: H1:CAL-INJ_TRANSIENT_EXC
injection group: stochastic
injection name: long
reading waveform file...
injection waveform file: /ligo/groups/cal/H1/hwinj/stochastic/SB_ER15HLLong_H1.txt
injection waveform sample rate: 16384
injection waveform length: 1800.0 seconds
injection start GPS: 1397417118.0
H1 is now finished with commissioning and the IFO is OBSERVING.
Following up on 77139, we set the camera offsets to those found, and added these to lscparams.py:
cam_offsets = {'PIT':{'1':-233,
'2':-173,
'3':-230},
'YAW':{
'1':-236,
'2':-422,
'3':-349.5},}
This increased the power in the X arm by 5 kW and the Y arm by 6 kW, as the screenshot shows. After this we re-ran the A2L script, starting with much lower amplitudes for ETMX yaw. It seems that with the A2L as badly tuned as this, we need to adjust the amplitude of the excitations for each test mass and each DOF separately.
optic, DOF | amplitude | starting A2L gain | final A2L gain |
ETMX, Y | 0.3 | 4.9315 | 5.3477 |
ETMY, Y | 3 | 3.061 | 0.8592 |
ITMX, P | 3 | -0.9709 | -1.0603 |
ITMY, P | 10 | -0.3830 | -0.3660 |
ETMX, P | 1 | 4.1183 | 3.10708 |
ETMY, P | 0.3 | 4.6013 | 4.46241 |
ITMX, Y | 1 | 2.7837 | 2.82803 |
ITMY, Y | 1 | -2.3962 | -2.38278 |
I edited the my_a2l.py script in userapps/isc/h1/scripts to round the A2L gains that it sets, to make it easier to deal with these values in SDF.
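The rounding itself is a one-liner; for illustration, a hypothetical sketch (not the actual my_a2l.py edit; four decimal places is an assumption based on the gain values in the table above):

```python
def round_a2l_gain(gain, ndigits=4):
    """Round a measured A2L gain before writing it to the front end,
    so the values that land in SDF are short and reproducible."""
    return round(gain, ndigits)

# Example with a gain like the finals in the table above
print(round_a2l_gain(5.34771234))   # 5.3477
```

Without rounding, the script writes full-precision floats, which makes SDF diffs noisy and hard to compare by eye.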
Editing to add: we found that there was still angular coherence, so we re-ran the script for all DOFs with an amplitude of 1. This changed the gains for the ITMX, ITMY, and ETMX Y2L filters and the ETMX and ETMY P2L filters, but not the ETMY Y2L gain or the ITMX and ITMY P2L gains. The end result is fairly high CHARD P coherence, which is limiting our range right now (see attached screenshot).
J. Kissel

During today's Systems Call, it came up that there was scant documentation on the timing performance of the existing fast shutter system. A quick look through the aLOG showed some results from the 2016 era, including examples like LHO aLOGs 29062, 28958, and 26263. Koji's LHO:29062 is probably the most conclusive, quoting (repeated here for convenience): "The fast mechanical shutter start blocking the beam at 1.7ms, reaches half blocking at 1.85ms, complete block at 2.0ms."

Here, I take some modern data with the digital signals stored in the frames:
[65 kHz] H1:OMC-PI_DCPD_64KHZ_AHF_DQ -- OMC DCPDs (downstream of the OMC cavity)
[4096 Hz] H1:ISI-HAM6_BLND_GS13Z_DQ -- vertical GS13s on the HAM6 ISI (nominally calibrated into nm/s, but I've scaled it arbitrarily to show up demonstratively on the OMC DCPD plot)
[2048 Hz] H1:ASC-OMC_A_NSUM_OUT_DQ -- OMC QPD A SUM (upstream of the OMC cavity, but still downstream of the fast shutter)
[2048 Hz] H1:ASC-OMC_B_NSUM_OUT_DQ -- OMC QPD B SUM (same)
[2048 Hz] H1:ASC-AS_C_NSUM_OUT_DQ -- AS_C QPD SUM (upstream of the fast shutter)
[16 Hz] H1:SYS-MOTION_C_FASTSHUTTER_A_STATE -- slow-channel record of the logical state of the shutter
[16 Hz] H1:SYS-MOTION_C_SHUTTER_G_TRIGGER_VOLTS -- slow-channel record of the trigger PD voltage (which is AS_C)

With these channels, I show three recent locklosses. It seems like all of the photodiode signals show a very long "turn off time" (of order ~1 [sec]) and are pretty unreliable for precision estimates of shutter timing.
This is likely a linear combination of:
- for the OMC DCPDs at least, an *additional* hardware "shutter," where the cavity's length actuator -- the OMC PZT -- is quickly railed to unlock the cavity after a lock-loss trigger,
- for the OMC DCPDs at least, a non-negligible cavity ring-down time, and
- for the OMC DCPDs and all the QPDs, the impulse response of the photodiode readout electronics and anti-aliasing filters.

However, the "clunk" of the shutter on the HAM6 ISI is quite obvious and precise, so I use the start of the impulse (a few samples after the transition from "no motion" to the large, fast, negative start of the clunk's ring-down) as an upper-bound estimate of when the shutter is completely closed. But with the imprecision of the [16 Hz] channels -- 62 ms -- it's tough to get anywhere. In short, what's stored in the frames are pretty bad metrics for precision timing of the fast shutter system.

Anyway, the upper limit of the time between the change of the logical state of the trigger and the HAM6 ISI GS13s registering a large kick is:
- 08:23:20 UTC :: 0.341064 (+/-0.000244) - 0.3125 (+/-0.0625) = 0.028564 [sec] = 28.6 [ms]
- 13:01:26 UTC :: 0.1604 - 0.125 = 0.0354 [sec] = 35.4 [ms]
- 23:39:28 UTC :: 0.328857 - 0.3125 = 0.016357 [sec] = 16.4 [ms]

I'll keep looking for better ways to increase the precision on this statement, but for now we should still roll with Koji's results from LHO:29062.
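The subtraction above, with the sample-rate quantization carried along as the dominant uncertainty, can be reproduced directly (a sketch using only the timestamps quoted in this entry):

```python
# (trigger-state time from the 16 Hz channel, GS13 kick time from the
# 4096 Hz channel), both in seconds within the given UTC second
events = {
    "08:23:20 UTC": (0.3125, 0.341064),
    "13:01:26 UTC": (0.125, 0.1604),
    "23:39:28 UTC": (0.3125, 0.328857),
}

for utc, (t_trigger, t_kick) in events.items():
    delta = t_kick - t_trigger
    # one sample of the 16 Hz channel dominates the error budget;
    # the 4096 Hz channel contributes only ~0.24 ms
    err = 1.0 / 16
    print(f"{utc}: {delta * 1e3:.1f} ms (+/- {err * 1e3:.0f} ms)")
```

This makes the point of the entry concrete: the upper-bound deltas are tens of milliseconds, but the 16 Hz quantization error is of the same order, so the frames can't constrain the shutter timing at the millisecond level.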
In G2201762 > O3a_O3b_summary.pdf, Niko and I showed that in O3b there was a subset of ~25 locklosses where the light fell off H1:ASC-AS_C_NSUM_OUT_DQ on the order of 1 ms, where this usually takes ~30 ms. There were none of these locklosses in O3a, and although the analysis is not currently online, I haven't noticed any of these 1 ms fast locklosses in O4.
These O3b "fast" locklosses often had the light fall off POPAIR (shutter now closed in NLN) and IMC_TRANS in the same 1 ms period, sometimes with the FSS signals oscillating beforehand, suggesting these locklosses may have been correlated with issues in the PSL system that were resolved after O3.
Ah HA! Thanks Camilla!

Camilla's comment is only tangentially related, but is fruitful anyway! The O3b study she points to is researching lock losses dubbed "fast lock losses" by the collaboration because they happen in under 1 [msec]; but as the plots on page two of her linked study show, light "disappears" from the interferometer because the IMC is unlocking, NOT because the fast shutter has triggered. Camilla thought they were related to my aLOG about the fast shutter because of conversations with Rana suggesting that "if these fast locklosses release all of their energy to the dark port prior to the fast shutter firing, then we need to know, and make the fast shutter faster!"

HOWEVER, this pointer reminds me of two more channels:
[2048 Hz] H1:ASC_A_DC_NSUM_OUT_DQ -- the "DC" or "QPD" component of WFS A (downstream of the fast shutter, but upstream of the OMC)
[2048 Hz] H1:ASC_B_DC_NSUM_OUT_DQ -- the "DC" or "QPD" component of WFS B (downstream of the fast shutter, but upstream of the OMC)

Apparently, whatever's going on with the OMC QPDs and the OMC DCPDs (also downstream of the fast shutter), which seem to take a lot longer for light to dissipate, *doesn't* happen with the AS WFSs. So these channels show the cut-off of light much more clearly. That said, they corroborate well with the HAM6 ISI GS13s.

The uncertainty still remains, though, on when the shutter gets the firing signal -- i.e. when the shutter's trigger PD gets enough voltage above threshold to trigger the shutter. For this we still only have the 16 Hz "SYS-MOTION" channels, and thus my conclusions about timing remain. Anyone know of a faster channel for the trigger PD?
WP11812
Jamie, Jonathan, Erik, Dave:
The two CAL HASH channels (H1:CAL-CALIB_REPORT_HASH_INT and H1:CAL-CALIB_REPORT_ID_INT) were moved from the h1calcs model, where they were being acquired as single-precision floats, to a new custom EPICS IOC called cal_hash_ioc, where they are acquired as signed 32bit integers.
The h1calcs.mdl model was modified to remove the EPICS-INPUT parts for these channels. The model was then restarted.
The IOC, which was being tested with H3 channels, was modified and restarted to serve the H1 channels as integers.
The EDC was modified to add H1EPICS_CALHASH.ini to its master list. This added 5 channels to the DAQ: the two hash channels and three IOC status channels.
The EDC was restarted, followed in short order by a DAQ restart.
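The motivation for the move: acquiring a 32-bit hash as a single-precision float silently corrupts it, since float32 has only a 24-bit mantissa and cannot represent most integers above 2^24. A quick stdlib demonstration (illustrative only, not related to the IOC code):

```python
import struct

def through_float32(value):
    """Round-trip an integer through IEEE-754 single precision,
    as happens when an int channel is acquired as a float32."""
    return int(struct.unpack("<f", struct.pack("<f", float(value)))[0])

hash_int = 0x7FFF1234                          # an arbitrary example hash value
print(through_float32(hash_int) == hash_int)   # False: float32 can't hold it
print(through_float32(12345) == 12345)         # True: small ints survive
```

Any hash value above 16777216 (2^24) is liable to come back changed, which is why the channels are now served and acquired as signed 32-bit integers.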
Tue16Apr2024
LOC TIME HOSTNAME MODEL/REBOOT
10:04:52 h1oaf0 h1calcs <<< Model change to remove channels
10:07:51 h1susauxb123 h1edc[DAQ] <<< EDC has the new channels
10:08:43 h1daqdc0 [DAQ] <<< 0-leg restart, no issues
10:08:55 h1daqfw0 [DAQ]
10:08:55 h1daqnds0 [DAQ]
10:08:55 h1daqtw0 [DAQ]
10:09:03 h1daqgds0 [DAQ]
10:11:55 h1daqdc1 [DAQ] <<< 1-leg restart, no issues
10:12:06 h1daqfw1 [DAQ]
10:12:07 h1daqtw1 [DAQ]
10:12:08 h1daqnds1 [DAQ]
10:12:16 h1daqgds1 [DAQ]
N.B. that changes to the channels H1:CAL-CALIB_REPORT_HASH_INT and H1:CAL-CALIB_REPORT_ID_INT will appear on the CDS SDF screen, *not the H1CALCS* screen. They get automatically re-populated each time pydarm export is used to update the calibration in the control room (which should be considered to happen atomically with GDS restarts). So, when updating the calibration, both the H1CALCS and the H1CDS SDF screens need to be checked.
WP 11805
ECR E2400083
Lengths for possible SPI Pick-off fiber. Part of ECR E2400083.
PSL Enclosure to PSL-R2 - 50ft
PSL-R2 to SUS-R2 - 100ft
SUS-R2 to Top of HAM3 (flange D7/D8) - 25ft
SUS-R2 to HAM3 (flange D5) - 20ft
J. Kissel [for J. Oberling]

Jason also took the opportunity during his dust-monitoring PSL incursion today to measure the distance from where the new fiber collimator would go on the PSL table to where it would exit at the point Fil calls the PSL enclosure. He says: SPI Fiber Collimator to PSL Enclosure = 9 ft.
J. Kissel [for F. Clara, J. Oberling]

After talking with Fil, I got some clarifications on how he defines/measures his numbers:
- They *do* include any vertical traversing that the cable might need to go through,
- Especially for rack-to-rack distances, he always assumes the cable will go to the bottom of the rack (typically 10 ft of height from cable tray to rack bottom),
- He adds two feet (on either end) so that we can neatly strain-relieve and dress the cable.

So -- the message -- Fil has already built some contingency into the numbers above. (More to the point: we should NOT consider them "uncertain" and in doing so add an additional "couple of feet here," "couple of feet there," "just in case.") Thanks Fil!

P.S. We also note that, at H1, the optical fibers exit the PSL at ground level on the +X wall of the enclosure, between the enclosure and HAM1, underneath the light pipes. They then immediately shoot up to the cable trays, wrap around the enclosure, and land in the ISC racks at PSL-R2. Hence the oddly long 50 ft number for that journey. Jason also reports that he rounded up to the nearest foot for his measurement of the 9 ft run from where the future fiber collimator will go to the PSL enclosure feed-through.
Upon discussion with the SPI team, we want to minimize the number of "patch panel" / "fiber feedthrough" connections in order to minimize loss and polarization distortion. As such, we prefer to go directly from the "SPI pick-off in the PSL" fiber collimator to the Laser Prep Chassis in SUS-R2. That being said, we'll purchase all of the above fiber lengths, so that we can re-create a "fiber feedthrough patch panel" system as a contingency plan. So, for the baseline plan, we'll take the "original, now contingency" PSL-R2 to SUS-R2 100 ft fiber and use it to connect the "SPI pick-off in the PSL" fiber collimator directly to the Laser Prep Chassis in SUS-R2. I spoke with Fil and confirmed that 100 ft is plenty to make that run (from the SPI pick-off in the PSL to SUS-R2).
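For bookkeeping, the two routing options work out as follows (a sketch; the segment lengths are the ones quoted above, and the route compositions are my reading of the plan, not a vetted cable schedule):

```python
# Fiber segment lengths from the measurements quoted above [ft]
segments = {
    ("SPI collimator", "PSL enclosure exit"): 9,
    ("PSL enclosure", "PSL-R2"): 50,
    ("PSL-R2", "SUS-R2"): 100,
    ("SUS-R2", "HAM3 top, flange D7/D8"): 25,
    ("SUS-R2", "HAM3 flange D5"): 20,
}

# Baseline: one continuous 100 ft fiber from the PSL pick-off collimator
# straight to the Laser Prep Chassis in SUS-R2, no patch panels
baseline_ft = 100

# Contingency: patch-panel route via the enclosure exit and PSL-R2
contingency_ft = (segments[("SPI collimator", "PSL enclosure exit")]
                  + segments[("PSL enclosure", "PSL-R2")]
                  + segments[("PSL-R2", "SUS-R2")])

print(f"baseline: {baseline_ft} ft, contingency: {contingency_ft} ft")
```

Since Fil's numbers already include vertical runs and strain-relief slack, these totals need no further padding.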