TITLE: 09/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 9mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
H1 has been locked and observing for the past 4 hours.
DMT seems to be having trouble retrieving data beyond the hour for some reason.
The DMT PEM NUC trends have been replaced with a temporary ndscope.
Sheila, Ibrahim
Context: ALSY has been misbehaving (which on its own is not new). Usually, problems with ALS locking stem from an inability to attain higher-magnitude flashes. However, in recent locks we have consistently reached values of 0.8-0.9 cts, which is historically very lockable, yet ALSY has not been able to lock in these conditions. As such, Sheila and I investigated the extent of misalignment and mode mismatch in the ALSY laser.
Investigation:
We took two references: a "good" alignment, where ALSY caught swiftly with minimal swinging, and a "bad" alignment, where ALSY caught with frequent suspension swinging. We then compared their measured/purported higher-order mode widths and magnitudes. The two attached screenshots are from two recent locks (last 24 hrs) from which we took this data. We used the known free spectral range and g-factor along with the ndscope measurements to obtain the expected higher-order mode spacing, and then compared this to our measurements. While we did not get exact integer values (mode number estimate column), we convinced ourselves that these peaks were indeed our higher-order modes (to an extent that will be investigated further). After confirming that our modes were our modes, we then calculated the measured power distribution for these modes.
The data is in the attached table screenshot (a copied-in version was not very readable).
Findings:
Next:
Investigation Ongoing
TJ and I had a look at this again this morning and realized that yesterday we misidentified the higher-order modes. In Ibrahim's screenshot, there is a small peak between the 00 mode and the one that is 18% of the total; this is the misalignment mode, while the mode with 18% of the total is the mode mismatch. This fits with our understanding that part of the problem we have with ALSY locking is due to bad mode matching.
Attached is a quick script to find the arm higher-order mode spacing; the FSR is 37.52 kHz and the higher-order mode spacing is 5.86 kHz.
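The attached script is not reproduced here, but the calculation it performs can be sketched as follows. The g-factors below are illustrative placeholders, not the measured H1 arm values; only the FSR comes from the log.

```python
import math

def hom_spacing(fsr_hz, g1, g2):
    """Transverse (higher-order) mode spacing of a two-mirror cavity:
    f_tms = FSR * arccos(+/- sqrt(g1*g2)) / pi,
    with the minus sign for a cavity whose g-factors are both negative."""
    sign = -1.0 if (g1 < 0 and g2 < 0) else 1.0
    return fsr_hz * math.acos(sign * math.sqrt(g1 * g2)) / math.pi

def folded_mode_frequency(n, f_tms, fsr_hz):
    """Frequency offset of the order-n mode, folded into one FSR."""
    return (n * f_tms) % fsr_hz

fsr = 37.52e3  # Hz, from the log
# Placeholder g-factors (assumption, not the H1 values):
f_tms = hom_spacing(fsr, -1.07, -0.78)
offsets = [folded_mode_frequency(n, f_tms, fsr) for n in range(4)]
```

Comparing each measured peak frequency against these folded offsets gives the "mode number estimate" column mentioned above.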
-Brice, Sheila, Camilla
We are looking to see if any aux channels are affected by certain types of locklosses. Understanding whether a threshold is reached in the last few seconds prior to a lockloss can help determine the type of lockloss and which channels are affected more than others.
We have gathered a list of lockloss times (using https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi) with:
(issue: the plots for the first 3 lockloss types wouldn't upload to this aLog. Created a dcc for them: G2401806)
We wrote a Python script to pull data from various auxiliary channels for the 15 seconds before a lockloss. Graphs for each channel are created, with a trace for each lockloss time stacked on each graph, and the graphs are saved to a png file. All the graphs have been shifted so that the time of lockloss is at t=0.
Histograms for each channel are created that compare the maximum displacement from zero for each lockloss time. There is also a stacked histogram based on 12 quiet-microseism times (all taken from between 4.12.24 0900-0930 UTC). The histograms are created using only the last second of data before lockloss, are normalized by dividing by the number of lockloss times, and are saved to a separate png file from the plots.
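The per-lockloss statistic and normalization described above can be sketched as below. The function names and array shapes are illustrative, not the actual code on GitLab.

```python
import numpy as np

def max_displacement(data, t, window=(-1.0, 0.0)):
    """Maximum |value| of a channel within a time window relative to
    lockloss. `t` is the time axis shifted so the lockloss is at t = 0."""
    mask = (t >= window[0]) & (t < window[1])
    return float(np.max(np.abs(data[mask])))

def normalized_hist(stats, bins=20):
    """Histogram of per-lockloss statistics, normalized by the number of
    lockloss times so groups of different sizes are comparable."""
    counts, edges = np.histogram(stats, bins=bins)
    return counts / len(stats), edges

# Illustrative use with fake data: 15 s at 256 Hz per lockloss.
rng = np.random.default_rng(0)
fs = 256
t = np.arange(-15 * fs, 0) / fs  # lockloss at t = 0
stats = [max_displacement(rng.normal(size=t.size), t) for _ in range(12)]
fractions, edges = normalized_hist(stats)
```

Excluding the last 0.1 s before lockloss (as done in a later comment) is then just `window=(-1.0, -0.1)`.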
These channels are provided via a list inside the python file and can be easily adjusted to fit a user's needs. We used the following channels:
After talking with Camilla and Sheila, I adjusted the histogram plots. I excluded the last 0.1 s before lockloss from the analysis. This is because, in the original post's plots, the H1:ASC-AS_A_NSUM_OUT_DQ channel had most of the last-second (blue) histogram at a value of 1.3x10^5, indicating that the last second of data captures the lockloss itself causing a runaway in the channels. I also combined the ground motion locklosses (EQ, Windy, and microseism) into one set of plots (45 locklosses) and left the Observe- (and Refined-) tagged-only locklosses as another set of plots (15 locklosses). Both groups of plots have 2 stacked histograms for each channel:
Take notice of the histogram for the H1:ASC-DC2_P_IN1_DQ channel for the ground motion locklosses. In the last second before lockloss (blue), we can see a bimodal distribution with the right grouping centered around 0.10. The numbers above the blue bars are the percentage of the counts in that bin: about 33.33% is in the grouping around 0.10. This is in contrast to the distribution for the Observe/Refined locklosses, where the entire (blue) distribution is under 0.02. This could indicate that a threshold could be placed on this channel for lockloss tagging. More analysis will be required before that (next I am going to look at times without locklosses for comparison).
I started looking at the DC2 channel and the REFL_B channel to see if there is a threshold in REFL_B that could be used for a new lockloss tag. I plotted the last eight seconds before lockloss for the various lockloss times. This time I split the times into different graphs based on whether the DC2 max displacement from zero in the last second before lockloss was above 0.06 (based on the histogram in the previous comment): Greater = the max displacement is greater than 0.06; Less = the max displacement is less than 0.06. However, I discovered that some of the locklosses that are above 0.06 for the DC2 channel are failing the logic test in the code: they are treated as having a max displacement less than 0.06 and get plotted on the lower plots. I wonder if this is also happening in the histograms, but that would only mean we are underestimating the number of locklosses above the threshold. This could be suppressing possible bimodal distributions in other histograms as well. (Looking into debugging this.)
I split the locklosses into 5 groups of 8 and 1 group of 5 to make it easier to distinguish between the lines in the plots.
Based on the plots, I think a threshold for H1:ASC-REFL_B_DC_PIT_OUT_DQ would be 0.06 in the last 3 seconds prior to lockloss.
Fixed the logic issue for splitting the plots into pass/fail the threshold of 0.06 as seen in the plot.
The histograms were unaffected by the issue.
Added code to the gitLab
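The threshold-splitting step that had the logic bug can be made robust with an explicit, NaN-safe comparison. A hypothetical sketch (the names are mine, not the GitLab code):

```python
import math

DC2_THRESHOLD = 0.06  # max |H1:ASC-DC2_P_IN1_DQ| in the last second

def split_by_threshold(max_disps, threshold=DC2_THRESHOLD):
    """Partition lockloss indices into (above, below-or-equal) groups.
    Coercing to float and rejecting NaN up front guards against the
    silent mis-sorting that string values or NaNs can cause."""
    above, below = [], []
    for i, d in enumerate(max_disps):
        d = float(d)
        if math.isnan(d):
            raise ValueError(f"lockloss {i}: max displacement is NaN")
        (above if d > threshold else below).append(i)
    return above, below
```

A string like "0.07" read from a file would compare lexicographically in Python 2 or raise in Python 3; coercing first makes the comparison unambiguous.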
Lockloss during SQZ commissioning during a suspect ZM4 move.
1410197873 Maybe squeezing can cause a lockloss.... we lost lock 400 ms after an 80 urad ZM4 pitch move (ramp time 0.1 s); sorry, plot attached. Maybe this caused extra scatter in the IFO signal. We were seeing a scatter peak in DARM around 100 Hz from PSAMS settings near where we were.
Took calibration sweep ~13 hours into the H1 NLN lock.
BB Start and End: 15:27 UTC, 15:35 UTC
File Names:
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240912T152702Z.xml
Simulines GPS Start and End: 1410190636, 1410192001
File Names:
File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240912T153646Z.hdf5
Calibration Monitor Screenshot taken right after hitting run on BB attached.
Camilla, Ibrahim and I ran a second set of simulines. This is not a full calibration measurement. Screenshot of the monitor lines attached. Got the following output at the end of the measurement; note the 30 minutes between 'Ending lockloss monitor' and 'File written':
2024-09-12 17:16:07,510 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2024-09-12 17:48:05,831 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,845 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,856 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,866 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,876 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240912T165244Z.hdf5
Thu Sep 12 08:14:19 2024 INFO: Fill completed in 14min 15secs
Jordan confirmed a good fill curbside.
TITLE: 09/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 14mph Gusts, 9mph 3min avg
Primary useism: 0.19 μm/s
Secondary useism: 0.29 μm/s
QUICK SUMMARY:
IFO has been in NLN and OBSERVING since 01:49 UTC (13 hr lock!)
Just entered EQ mode due to 4 back-to-back earthquakes arriving from Mexico (4.6-5.1 Mag).
Planning on going into Calibration + Commissioning time at 8:20 PST (15:30 UTC).
TITLE: 09/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
Short 48 min lock when the shift started; then we let H1 try to relock itself. After 5 locklosses and an initial alignment, we got to NLN.
1:47 UTC GRB-Short E511072
1:47 UTC GRB-Short E511073
1:48 UTC GRB-Short E511072
1:49 UTC Nominal_Low_Noise reached
1:49 UTC Observing reached
LOG:
No Log
PS here is a rainbow.
TITLE: 09/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 15mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
IFO is locked; it had been locked for 48 min before the last lockloss.
Everything was running smoothly until that lockloss of unknown cause.
TITLE: 09/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Calibration Early Morning (8AM-10AM):
Lockloss during calibration, with probable reasons below. It happened as soon as simulines ran an L1_SUSETMX injection. Either:
So in short, no calibration work was done.
Squeeze Late Morning (10AM-12PM):
We got to NLN again at 17:17 UTC! Then a 6.3 EQ came through and we lost lock while the ground PEAKMON was reading 4.7 microns. We initially thought we would make it, but alas.
Sadly, no squeeze work was done either.
Earthquakey, Windstormy Quiet Noon (12-4:30PM)
After this 6.3 EQ, we were hit by two more 5.8's from Vanuatu while wind speeds steadily went over 30 mph from 19:20 UTC to 20:20 UTC. We stayed in this unstable state for about 3 hours, losing lock approximately 20 times pre-DRMI. Wind peaked at 38 mph, but once it dropped below ~28 mph we were able to lock, and quite quickly as well. The successful acquisition run only took 50 minutes!
As though the tree has to fall when nobody is listening, I left the room for 40 minutes for the OPS meeting and came back to a fully automatically locked IFO in NLN. We were in OBSERVING minutes later. The wind also came down by ~10 mph on average in that time, so scientifically, it was probably that.
Initial Alignment Weirdness: Ongoing Issue
During initial alignment's SRC align, and only since the SR3 move, SRM, SR2, SQUEEZE_OUT and IFO_OUT saturate at times where they are not known to do so. Looking closely at ASC AS_A DC_SUM_OUT, this happens when the counts are sufficiently high but a glitch occurs that badly misaligns SRM. Ryan C found a temporary solution by going into IFO_NOTIFY and then pausing at PREP_FOR_SRY for a few seconds.
Sheila and I investigated very briefly and found that the ASC trigger sometimes malfunctions, activating when the signal is well below its activation threshold. More investigation and monitoring to come.
Other:
IMC Gain Redistribution during LASER_NOISE_SUPPRESSION worked! I’ve unmonitored its SDF (by instruction) and we’re testing/monitoring it. SDF Screenshot attached.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
15:19 | FAC | Karen | Optics Lab | N | Technical Cleaning | 16:19 |
17:49 | SQZ | Camilla | LVEA | YES | Turn on Hartmann Wavefront Sensor chassis | 17:51 |
21:04 | EE | Marc | MY | N | Part Search | 22:29 |
21:40 | VAC | Janos | Cryo: LVEA, MX/Y, EX/Y | N | Cryopump Check | 23:39 |
Samantha Callos, Robert Schofield
August 30, 2024
CER ACs turned off and on at the following times:
CER Bank 1 (Upstairs)
CER Bank 2 (Downstairs)
The new Daikin heat pump that serves the clean side of the VPW has now been fully tied into the building and is running. One of the two cooling circuits appears to be DOA, as it is completely flat. In the coming days the Facilities team will pull a vacuum on the problematic lineset to assess whether it has a leak and, if so, to what degree. In the meantime, the unit will still heat and, to a lesser degree, cool the spaces. E. Otterman C. Soike T. Guidry
We've been experiencing locking issues the whole day, briefly reaching NLN twice for <30mins. Here's why:
Lock 1: Calib Injection cause
Lock 2: 6.3 EQ Cause
Lock Attempt 3: 38 mph gusts + back-to-back 5.8 mag EQs are preventing ALS from locking. We've lost lock approximately 8 times now on the way to DRMI, only grabbing DRMI once for <1 minute.
Calibration Early Morning Commissioning 8AM-10AM:
Lockloss during calibration, with probable reasons below. It happened as soon as simulines ran an L1_SUSETMX injection. Either:
So in short, no calibration work was done.
Squeeze Late Morning 10AM-12PM:
We got to NLN again at 17:17 UTC! Then a 6.3 EQ came through and we lost lock while the ground PEAKMON was reading 4.7 microns. We initially thought we would make it, but alas.
Sadly, no squeeze work was done either.
Earthquakey, Windstormy Quiet Noon 12PM-Now
After this 6.3 EQ, we were hit by two more 5.8’s from Vanuatu while wind speeds steadily went over 30mph from 19:20 UTC to 20:20 UTC.
We've done one initial alignment, having issues there too at SRC align, which has a weird glitch that only sometimes prevents SRM from catching, also causing SQZ_OUT and SR2/SRM to saturate, and even tripped the SRM WD once. Pausing ALIGN_IFO at PREP_FOR_SRY seems to help, but for unknown reasons so far.
While the EQ has calmed down, reducing ground motion to a lockable state, the wind speeds have picked up past 35 mph (which statistically means >2 hr to lock 95% of the time). ALS is barely locking, and we haven't made it past Locking_ALS since 20:02 UTC (1 hr 48 min ago).
Sheila, Louis, Francisco
We changed the ASC-AS_A_DC_YAW_OFFSET from -0.15 to -0.3 and saw an increase in ASC-DHARD_Y power spectrum in the 10-30 Hz range.
Originally, we planned to make simulines and Pcal BB injections with two different WFS offset values given a thermalized and locked IFO. With a non-thermalized IFO, we instead decided to make calibration measurements after changing the WFS offset, revert the offset, and make calibration measurements again. We lost lock during the calibration measurement. However, we used DTT to look at the data from the different offset values during NOMINAL_LOW_NOISE (as seen in AS_A_DC_YAW_OFFSET).
DHARD_DARM_PS_10-30_Hz shows the power spectrum from 10 Hz to 30 Hz for the channels of relevance. From this plot we see that the YAW dof is coupled to DARM such that the lines are visible. The increase in magnitude suggests that a big offset makes things worse (also seen in DHARD_DARM_TF_10-30_Hz), so we might try smaller offsets next time.
DTT template can be found in /ligo/home/francisco.llamas/A2L/DHARD_DARM.xml
Yesterday we added the remote access ioc (RACCESS) uid channels to the DAQ so they can be trended.
I have written a program called who_is_logged_in which takes any gpstime-format time and lists who was logged into CDS remotely at that time.
The epoch for these channels is noon Tue 10 Sept 2024.
Example:
who_is_logged_in "17:00 yesterday"
Who is logged remotely into CDS on Tue Sep 10 17:00:00 2024 PDT
cdsssh Number of users 5
cdsssh elenna.capote (1 session)
cdsssh erik.vonreis (1 session)
cdsssh ezekiel.dohmen (1 session)
cdsssh gerardo.moreno (1 session)
cdsssh louis.dartez (2 sessions)
cdslogin Number of users 1
cdslogin david.barker (1 session)
The code uses minute trend data, so you will get the login list rounded to the minute.
If you ask who is logged in now, it defaults to 10 minutes ago, because minute trend data is not immediately available, e.g.
who_is_logged_in now
Who is logged remotely into CDS on Wed Sep 11 13:32:16 2024 PDT
cdsssh Number of users 7
cdsssh elenna.capote (1 session)
cdsssh erik.vonreis (1 session)
cdsssh ezekiel.dohmen (1 session)
cdsssh gerardo.moreno (1 session)
cdsssh jim.warner (1 session)
cdsssh louis.dartez (2 sessions)
cdsssh tyler.guidry (1 session)
cdslogin Number of users 2
cdslogin david.barker (1 session)
cdslogin root (1 session)
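The minute-rounding and 10-minute fallback described above can be sketched as follows. This is a hypothetical helper, not the actual who_is_logged_in source:

```python
from datetime import datetime, timedelta

def trend_query_time(requested, now):
    """Round a requested time down to the minute; if it falls within the
    last 10 minutes, use 10 minutes ago instead, since minute-trend data
    is not immediately available."""
    floored = requested.replace(second=0, microsecond=0)
    latest = (now - timedelta(minutes=10)).replace(second=0, microsecond=0)
    return min(floored, latest)
```

Asking for "now" at 13:32:16 would thus query the 13:22 minute trend, matching the behavior shown in the example output.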
Marc Daniel
We measured the gain and phase difference between the new DAC and the existing 20-bit DAC in SUS ETMX. For this we injected 1 kHz sine waves and measured the gain and phase shifts between the two. We started with a digital gain value of 275.65 for the new DAC and adjusted it to 275.31 after the measurement to keep the gains identical. The new DAC implements a digital AI filter that has a gain of 1.00074 and a phase of -5.48° at 1 kHz, which corresponds to a delay of 15.2 µs.
This puts the relative gain (new/old) at 1.00074±0.00125 and the delay at 13.71±0.66 µs. The variations could be due to gain variations in the LIGO DAC, the 20-bit DAC, the ADC, or the AA chassis.
DAC | Channel Name | Gain | Adjusted | Diff (%) | Phase (°) | Delay (us) |
0 | H1:SUS-ETMX_L3_ESD_DC | 1.00239 | 1.00114 | 0.11% | -5.29955 | -14.72 |
1 | H1:SUS-ETMX_L3_ESD_UR | 1.00026 | 0.99901 | -0.10% | -5.10734 | -14.19 |
2 | H1:SUS-ETMX_L3_ESD_LR | 1.00000 | 0.99875 | -0.12% | -4.93122 | -13.70 |
3 | H1:SUS-ETMX_L3_ESD_UL | 1.00103 | 0.99978 | -0.02% | -5.11118 | -14.20 |
4 | H1:SUS-ETMX_L3_ESD_LL | 1.00088 | 0.99963 | -0.04% | -5.21524 | -14.49 |
8 | H1:SUS-ETMX_L1_COIL_UL | 1.00400 | 1.00275 | 0.27% | -4.72888 | -13.14 |
9 | H1:SUS-ETMX_L1_COIL_LL | 1.00295 | 1.00170 | 0.17% | -4.88883 | -13.58 |
10 | H1:SUS-ETMX_L1_COIL_UR | 1.00125 | 1.00000 | 0.00% | -5.08727 | -14.13 |
11 | H1:SUS-ETMX_L1_COIL_LR | 1.00224 | 1.00099 | 0.10% | -4.92882 | -13.69 |
12 | H1:SUS-ETMX_L2_COIL_UL | 1.00325 | 1.00200 | 0.20% | -4.78859 | -13.30 |
13 | H1:SUS-ETMX_L2_COIL_LL | 1.00245 | 1.00120 | 0.12% | -4.55283 | -12.65 |
14 | H1:SUS-ETMX_L2_COIL_UR | 1.00175 | 1.00050 | 0.05% | -4.52503 | -12.57 |
15 | H1:SUS-ETMX_L2_COIL_LR | 1.00344 | 1.00219 | 0.22% | -5.00466 | -13.90 |
Average | | 1.00199 | 1.00074 | 0.07% | -4.93611 | -13.71 |
Standard Deviation | | 0.00125 | 0.00125 | 0.13% | 0.23812 | 0.66 |
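The phase-to-delay conversion used in the table is just delay = phase / (360 · f). As a quick check (pure arithmetic, no site code):

```python
def phase_to_delay(phase_deg, freq_hz):
    """Equivalent time delay of a phase shift measured at one frequency:
    delay = phase / (360 * f)."""
    return phase_deg / 360.0 / freq_hz

# Values from the log, at the 1 kHz injection frequency:
ai_filter_delay = phase_to_delay(5.48, 1e3)   # ~15.2 us (digital AI filter)
average_delay = phase_to_delay(4.93611, 1e3)  # ~13.71 us (table average)
```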
FPGA filter is
zpk([585.714+i*32794.8;585.714-i*32794.8;1489.45+i*65519.1;1489.45-i*65519.1;3276.8+i*131031; \
3276.8-i*131031;8738.13+i*261998;8738.13-i*261998], \
[11555.6+i*17294.8;11555.6-i*17294.8;2061.54+i*26720.6;2061.54-i*26720.6;75000+i*93675; \
75000-i*93675;150000+i*187350;150000-i*187350;40000],1,"n")
Vicky, Camilla
Repeated the PSAMS changes from 66946 with SQZ-OMC scans, using a new method of dithering the PZT around the TEM02 mode and minimizing it. With this we improved the mode mismatch from 4% to 3%. It will be interesting to see if these settings are still better in full lock. Plots of the OMC scan attached, and the same plot zoomed on the peaks attached.
Took OMC scans using template /sqz/h1/Templates/dtt/OMC_SCANS/Sept10_2024_PSAMS_OMC_scan_coldOM2.xml. Unlocked the OMC and set H1:OMC-PZT2_OFFSET to -50 (nominal is -17) before starting the scan.
ZM4/ZM5 PSAMS | TEM00 | TEM02 | Mismatch (% of TEM02) | Notes | Ref on Plot |
---|---|---|---|---|---|
5.5V/-0.8V | 0.6362 | 0.02684 | 4.048% | Starting | 0 (pink) |
4.0V/0.34V | 0.6602 | 0.02332 | 3.412% | Maximized TEM00 | 1 (blue) |
2.1V/0.2V | 0.6611 | 0.02097 | 3.074% | Minimized TEM02 | 2 (green). This is 0V on the ZM4 PZT. |
3.0V/0.85V | 0.6609 | 0.02209 | 3.234% | Minimized TEM02 | 3 (orange) |
4.0V/0.34V | N/A | N/A | N/A | Minimized TEM02 | Didn't scan; checked that we got similar results with the different method of minimizing TEM02 rather than maximizing TEM00 |
8.1V/-0.4V | 0.6422 | 0.03432 | 5.073% | Minimized TEM02 | 4 (cyan). Chose similar ZM4 settings to what we found was good in full lock with cold OM2 in 76986 |
2.1V/-0.1V | 0.6591 | 0.02162 | 3.176% | Minimized TEM02 | 5 (red) |
2.1V/0.2V | 0.6580 | 0.01954 | 2.884% | Back to best ref2 PSAMS values | 6 (brown). LEAVING HERE |
Over 1V and under -1V on ZM5 is bad at most ZM4 strains (tested at 4.0V ZM4). For each step we adjusted ZM4 and then fine-adjusted ZM5.
Note OM2 is cold currently for this measurement (and since the vent it seems).
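The "Mismatch (% of TEM02)" column above appears to be the TEM02 peak height as a fraction of the TEM00 + TEM02 total from the OMC scan; a quick sketch of that arithmetic (my reading of the table, not site code):

```python
def mode_mismatch(tem00, tem02):
    """Mode mismatch as the TEM02 peak height relative to the
    TEM00 + TEM02 total seen in the OMC scan."""
    return tem02 / (tem00 + tem02)

# First and last rows of the table:
start = 100 * mode_mismatch(0.6362, 0.02684)  # ~4.048%
final = 100 * mode_mismatch(0.6580, 0.01954)  # ~2.884%
```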
With the original PSAMS (5.5V/-0.8V) we had:
At the better PSAMS settings (2.1V/0.2V) plot attached: