Following a lockloss caused by an earthquake, H1 has now just gotten back into observing as of 11:04 UTC.
About an hour ago the BSC5 AIP railed; the MEDM vacuum screen site overview shows a red field for this AIP, and the X-End station MEDM shows the AIP railed at 10 mA. No action is required for now; we will take a look at it on Tuesday to assess its situation.
Closes 25869, last completed by Corey in May
All looks nominal, barring a few glitches here and there. The HEPI L0 CONTROL VOUT does look a bit in flux, but not terribly.
TITLE: 06/16 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
- H1 has been locked for 8.5 hours and all looks stable
- SEI/DMs/CDS ok
TITLE: 06/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 136Mpc
SHIFT SUMMARY:
H1 locked at NLN the entire shift. We spent a little over 2 hrs out of OBSERVING for COMMISSIONING work by Robert & Elenna. H1 rode through a 4.5 earthquake from Mexico. The HAM6 OMC video is not working on nuc30 (I believe Erik has been made aware).
LOG:
Addressed TCS Chillers (10:30pm local) & later CLOSED FAMIS #21123:
[Measurements attached]
BSC CPS: Looks good.
HAM CPS: Looks good.
H1's been locked 4.5 hrs; 2 hrs of this was for commissioning time, otherwise OBSERVING. Just had a 4.5 EQ from Oaxaca roll through (it hasn't posted on USGS for some reason). Otherwise, a nice shift with much less wind than the last 2 days!
I took an open loop gain measurement of CSOFT P today, using Craig's unbiased method. The measurement is rough, but it gave me enough of a picture to suggest that increasing the gain might help. I have changed the gain from 20 to 25. This has been SDFed in observe.snap. We should monitor whether we continue to see locklosses from CSOFT P to determine if this was enough of a gain increase, or if the problem was from something else entirely. I'll post the measurement results compared to a model as soon as I have time to do the processing.
Sheila said that LSC-POP_A caused us locklosses in our early 2022 efforts to power up. Now that we are running with a large SRCL offset, we have more RF power on that diode and should be wary of this.
She suggested we look into whether the LSC-POP_A photodiodes were saturated in some way, not necessarily in ADC counts but in the RF electronics. I've looked at 4 locklosses and see no evidence of POP_A saturations so far, see attached. Sheila thought this might be hard to identify. We could try measuring the LSC sensing matrix using Craig's script to check the phases; that's what we did to help us improve this before, 62662 62486.
Our fastest recorded channels of these RF signals are 16Hz, we could look further downstream for 16kHz recorded channels.
Here is a list of the things we need to change to return as closely as possible to a desirable 60W configuration.
If you are looking for a timestamp to determine when the power change occurred, the last full lock at 60W was on April 6 from about 17:00 UTC to April 7 3:30 UTC.
Under LSC controls, I claimed that we should revert the PRCL loop design, however Gabriele reminded me that the new PRCL design has better suppression, see alog 68817. We should keep this new design, but we should still determine how/if we need to change the gain to ensure the loop UGF is around 30 Hz.
Under LSC feedforward, I forgot to mention that we did not run with PRCL feedforward at 60W, so we can turn that back off at 60W.
I have also recovered the old MICH FF filter that was in FM9, called "May_d". At 60W, we will need to engage FM6-9, labeled May a-d.
We will need to update the violin mode threshold checker. The counts value for the DARM offset was hard-coded and will be different at 60W. This value will only need to change if we change the DARM offset.
Tagging a lot of the teams who will either need to be involved in these changes, or at least be impacted by these changes when/while we revert.
J. Kissel, J. Driggers, N. Aritomi, S. Dwyer
Just FYI I brought up the open question in Elenna's aLOG about
DARM offset: 20 mA, not sure if we want to revert this value
The quick consensus (without agreeing to write it in stone) is that we "plan" to *not* revert the DARM offset, leaving us with 40 mA of current on the DCPDs, as has been the case since May 05 2023 (see LHO:69358).
J. Kissel, J. Driggers, S. Dwyer
Regarding the following setting suggestions in this bullet point,
SRCL offset: we had been running with an offset of -175.
This was also with the previous LSC-POP_RF45 whitening at 21 dB.
We could revert the whitening change as well if we think it's better
for noise considerations
The plan is to *definitely* go to the -175 ct SRCL offset, however -- upon discussion this morning -- we've decided *not* to revert the reduction in POP A RF45 whitening gain from +21 dB to +15 dB. Said with all positives to avoid confusion, we'll continue to reduce the gain to +15 dB rather than revert to +21 dB.
We think
- the extra ADC range head room is nice,
- the sacrifice in SRCL / PRCL sensing noise is minimal, and/or has minimal impact***
- for now, today, when we power down, we want to change as little as is needed to achieve stability, rather than revert absolutely everything.
***One may find the assessment of the noise impact in LHO:69350.
I forgot to include this in this alog, but the CSOFT P gain should probably be reduced to 20 again. This was a change made late last week.
TITLE: 06/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 12mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
H1 is Observing, but will be taken out for COMMISSIONING shortly for Robert's EX imaging work & Elenna's measurement. Nuc27 had not been showing H1's range, but Erik recently fixed it.
Low winds *knock on wood*
TITLE: 06/15 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 135Mpc
SHIFT SUMMARY: Large EQ today kept us DOWN for ~4 hours. EX has been left in LASER HAZARD for the photos Robert plans to take during commissioning time later. Plan for commissioning time at 23:25 UTC.
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:10 | CDS | Dave | Remote | N | Rebooting nuc26, nuc30 and cameras with issues | 15:33 |
| 15:54 | CDS | WAP | Remote | N | Dave turned MSR WAP on/off 70480 | 15:56 |
| 17:08 | PEM | Plane | CR | N | Plane heard overhead (tagging DetChar) | 17:10 |
| 17:15 | VAC | Janos | MY | N | VAC checks | 17:50 |
| 18:33 | PEM | Robert | EX | YES | Installing cameras at EX WP#11264 | 19:59 |
| 18:43 | WAP | Robert | EX | N | Wifi turned ON | 22:40 |
| 19:52 | CDS | Erik, Jonathan | CER | N | Check on HAM6 camera cable 70478 | 20:00 |
| 20:00 | LASER | EX | EX | YES | EX Left in LASER HAZARD | 04:00 |
Thu Jun 15 10:06:34 2023 INFO: Fill completed in 6min 33secs
Jordan confirmed a good fill curbside.
Naoki, Vicky
To try damping the 80 kHz PI recently causing locklosses (LHO:70434), today we installed a new path on PI28, with drive sent to ETMX. To phase-lock the ESD damping drive to the PI mode, we bandpassed the DCPD signal around 80.296 kHz (foton here). This frequency was chosen based on the DCPDs' full-spectrum signal around 80.3 kHz, where today we saw the 3 peaks in red here between 80.295 and 80.301 kHz in full lock. It seems like our problem is the bigger peak around 80.296 kHz, where the pink cursors are centered. We are trying to damp on ETMX first, since it seems like the PI29 80 kHz damping on ETMX could impact this mode.
The new path for PI28 has been updated and damping is guardianized, but this PI28 damping is untested, and there are no verbal alarms for PI28 yet.
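The actual bandpass lives in foton; as a rough illustration of what isolating the 80.296 kHz line involves, here is a sketch in scipy. The filter order, passband width, and sample rate below are assumptions for illustration, not values from this entry.

```python
import numpy as np
from scipy import signal

# Illustrative scipy analogue of the foton bandpass around the 80.296 kHz
# PI mode. ASSUMED (not from this entry): 4th-order Butterworth, +/-2 Hz
# passband, and a 524288 Hz sample rate for the fast DCPD channel.
fs = 524288.0     # assumed sample rate
f0 = 80296.0      # PI28 mode frequency from the log
half_bw = 2.0     # assumed half-width; kept narrow since PI29 sits ~6 Hz away

sos = signal.butter(4, [f0 - half_bw, f0 + half_bw],
                    btype="bandpass", fs=fs, output="sos")

# Evaluate the response at the mode and 500 Hz to either side.
freqs = [f0, f0 - 500.0, f0 + 500.0]
_, h = signal.sosfreqz(sos, worN=freqs, fs=fs)
print(np.abs(h))  # near 1 at f0, heavily suppressed off-band
```

In practice the damping loop also needs the phase tracking described above, not just the amplitude selection sketched here.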
Summary of the current status of PI damping:
| PI frequency | PI damping mode number | Test Mass | PI Guardian status | _DAMP_GAIN |
|---|---|---|---|---|
| 10.428 kHz | 24 | ETMY | automated | 1000 |
| 10.431 kHz | 31 | ETMY | automated | 1000 |
| 80.302 kHz (LHO:68760) | 29 | ETMX | automated, likely working (LHO:70243) | 50000 |
| 80.296 kHz (LHO:70443) | 28 | ETMX | guardianized, testing now | 50000 |
SDFs were reconciled after, see screenshots -- mainly, we un-monitored guardian-controlled things like the damping phase and PLL integrator.
I've added in a test for PI mode 28 into verbal alarms with an RMS threshold of 1 for now (the same as mode 29).
Just checking in: it looks like PI28 does at least see a mode come through ~1-2 hours into lock (the RMSMON spikes just before NLN are because of OMC whitening; the second peak is the real one), but this hasn't run away yet. Then, ~10 min after PI28, PI29 sees something pass through.
Looking at disaggregated temperature trends across the EX VEA over the last year, it might be a bit misleading to only look at the "average EX VEA" temperature. Temperatures in some parts of the EX VEA seem to have drifted by up to 0.5-2 deg F over the past few months. Temps now seem to be returning to where they were about 6mo - 1 year ago.
I think Robert and Aidan have both made the point that, to understand temperature drifts, it's helpful to look at the individual temperature sensors across the VEA rather than the average VEA temperature. For example, see Aidan's alog LLO:25785, where he also thought about stabilizing the VEA temperatures to a different sensor, one better correlated with the test mass temperature. For this 80 kHz PI, though, Aidan also said that the temperature dependence of the mechanical mode frequency is about 80 ppm/K, so for ~0.5 K (delta ~1 deg F), the HOM frequency changes by ~3 Hz for the 80 kHz mechanical mode, and it seems unlikely we're just within 3 Hz of the PI going unstable -- so it's not totally convincing that our recent 80 kHz PI ring-ups are simply because of VEA temperature drifts. But at least, at LLO, he found their ETMY is most correlated with the EY VEA_202B sensor.
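The ~3 Hz number quoted above follows directly from the 80 ppm/K scaling; a quick back-of-envelope check:

```python
# Back-of-envelope check of the quoted scaling: ~80 ppm/K fractional
# frequency dependence of the ~80 kHz mechanical mode, for a ~0.5 K
# (~1 deg F) temperature drift.
f_mode_hz = 80e3      # mechanical mode frequency
coeff_per_K = 80e-6   # fractional shift per kelvin (quoted value)
delta_T_K = 0.5       # drift in kelvin

delta_f_hz = f_mode_hz * coeff_per_K * delta_T_K
print(f"expected mode frequency shift: {delta_f_hz:.1f} Hz")  # 3.2 Hz
```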
I'm not sure which of LHO's EX VEA sensors are most correlated with ETMX. But, it may be worth considering more than the average VEA temperature. Especially since the individual temperature sensors have seen some drifts over the past year, which aren't seen by trends of the average VEA temperature.
Brina, Sheila
The DCPD balance matrix is normally set by Jeff using the method described in 47217. Today I wanted to try setting the matrix so that the contributions of the two sensors to the DARM loop gain are equal, because I think this will make it easier to correct for the imbalance of the PDs while doing the offline cross correlation. I compared pcal line heights in DCPD_A and DCPD_B at 17.1 Hz (A/B = h = 1.0277) and at 410.3 Hz (A/B = 1.0362). We chose the 17 Hz value and calculated the matrix elements: DCPD A to the SUM, 2/(h+1) = 0.9863, and B to the SUM, 2h/(h+1) = 1.0137.
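The matrix elements follow directly from the measured ratio; a minimal sketch of the arithmetic:

```python
# Recompute the balance-matrix elements from the measured pcal line-height
# ratio h = A/B at 17.1 Hz. Choosing 2/(h+1) for A and 2h/(h+1) for B
# equalizes the two sensors' contributions to the SUM while the two
# elements still average to 1.
h = 1.0277

elem_A = 2.0 / (h + 1.0)       # DCPD A into the SUM
elem_B = 2.0 * h / (h + 1.0)   # DCPD B into the SUM
print(round(elem_A, 4), round(elem_B, 4))  # 0.9863 1.0137

# A's raw signal is larger by h, so after weighting the contributions match:
assert abs(elem_A * h - elem_B) < 1e-12
```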
The attached text file has commands used to copy and paste into a guardian shell to swap the DARM loop to one DCPD, and change the matrix elements. After doing the swap we measured the DARM OLG, which is the live trace in the attached dtt template.
no sqz start: 1370816035 (Jun 14 2023 22:13:37 UTC). lockloss: 22:23:18 UTC, we were just sitting there collecting data with no SQZ. The matrix elements have been reset by SDF.
deleted
Here are some plots of the cross correlation for this time.
The first is a comparison of the pyDARM model of the OLG to the measured OLG. At 24 Hz, the model predicted 2% less gain than the measurement, so here I've scaled the model up by 2% and used that for the estimation of the correlated noise.
The second plot is the loop-corrected DCPD sum ASD compared to the cross correlation. You can see that the cross correlation is above the DCPD sum by 7% at 24 Hz, which is incorrect. I considered whether this could be because of the imbalance of the DCPDs; I will attach a note here that explains how I attempted to handle this imbalance in estimating the cross correlation. (The DCPD_A and B channels are recorded before multiplying by the balance matrix; the sum channel is after that matrix.) In the end this did not make a significant difference: the cross correlation still overestimates DARM by nearly 7% at 24 Hz when I correct for this.
The third plot shows the cross correlation compared to an estimate of the correlated noise obtained by subtracting the calculated shot noise from the DCPD SUM asd in quadrature, this mostly agrees with the cross correlation except at low frequencies.
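The quadrature subtraction in that third plot amounts to the following sketch (the arrays here are synthetic placeholders, not the measured spectra); it is valid where the shot noise is uncorrelated with the remaining noise:

```python
import numpy as np

# Sketch of the correlated-noise estimate: subtract the calculated
# shot-noise ASD from the DCPD SUM ASD in quadrature.
sum_asd = np.array([2.0, 1.5, 1.2])    # loop-corrected DCPD SUM ASD (synthetic)
shot_asd = np.array([1.0, 1.0, 1.0])   # calculated shot-noise ASD (synthetic)

# Clip guards against small negative values where shot noise dominates.
correlated_asd = np.sqrt(np.clip(sum_asd**2 - shot_asd**2, 0.0, None))
print(correlated_asd)
```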
These plots were made using the code at https://git.ligo.org/sheila-dwyer/cross-corelation, commit 3c60740b. I will attempt to compare this code with Craig's cross correlation code to see if this problem is present there as well.
Here is a note describing how I corrected for the DCPD imbalance. Once the matrix was reset as above, things become simple. I will add a diagram to this note if I have time.
Daniel raised the point that perhaps a phase difference between the two PDs could explain a discrepancy at low frequency, in the correction I did I assumed that the two paths were only imbalanced by a scalar gain. The attached png shows a transfer function between the two DCPD channels, taken at the time of one of yesterday's broadband pcal injections. The frequency dependence of this at first glance doesn't seem right to explain what we see, ie, the error in the cross correlation doesn't have a wiggle between 20-30 Hz, although the error does seem to happen around the frequency of the cross correlation problem.
Commissioning period,
The HAM1 FF was turned off for testing on 06/14/23 at 21:32:15. It stayed off for 5 minutes and was turned back on at 21:37:15.
My motivation for this test came from a bruco that I ran on the data from a long lock over the weekend: https://ldas-jobs.ligo-wa.caltech.edu/~elenna.capote/brucos/CAL_1370343461/
Specifically, the CHARD P, INP1 P and HAM1 TT L4C RY coherence were much higher than expected, and much higher than they had been in the past after successful HAM1 FF tuning and A2L gain adjustments.
The test first confirmed that we are still seeing decent subtraction of the HAM1 noise from the ASC loops, as seen in an OMC DCPD sum comparison with the feedforward on and off. I also grabbed spectra of each of the ASC loops with the feedforward on and off (in this plot the red, live traces are with the feedforward off, and the blue reference traces are with the feedforward on).
I used the feedforward off time to run the NonSENS training code and calculate a new feedforward for CHARD P, INP1 P and PRC2 P. The code also makes plots of the expected subtraction of the loops. I compared the expected subtraction plots (linked below) to the current subtraction plots linked above, and I conclude that:
I didn't check any yaw loops because there is already decent noise removal, and coherences are low. I don't expect to see much improvement there.
I will install the new INP1 P and PRC2 P feedforward filters, labeled with today's date. I think they should be engaged for the next lock if possible.
I don't understand why the coupling has changed, but I think this is similar to the mystery change that recently affected several other things in the IFO- perhaps some new alignment from the PR3 move? In other words, unless we have to make another big alignment change like that, I don't expect us to need to update this feedforward for a while.
Turning off the HAM1 feedforward made the CHARD and PRC2 noise significantly larger, by about a factor of 10. The effect on DARM is significant, and consistent with the fact that CHARD or PRC2 are coupling more now than before, and that when the FF was on they were just below the measured DARM. This is consistent with the measured coherence between DARM and CHARD or the HAM1 sensors.
One possible reason for the higher coupling of HAM1 noise to DARM is that the beam spot might have moved on PR2 (where PRC2 is driven), and therefore the A2L we tuned some time ago might be wrong. It's worth a quick retuning of the PR2 A2L to see if that improves the coupling of PRC2 to DARM, and maybe even the DARM noise.
The new filters have not yet been installed because I am getting errors from foton when I try to copy them in. I will try again tomorrow.
I have installed the new feedforward filters for INP1 and PRC2. They are labeled with "0616" for today's date. They are currently not in use, but can be quickly tested during a commissioning period. A thermalized IFO is best for the test, but they can be tried at any time.
New filters implemented and SDFed.
Daniel, Sheila
We turned off the 9 MHz, 45 MHz, and 117 MHz sidebands in order to do an OMC loss measurement. We used a single-bounce beam off of ITMX, with 10 W input from the PSL. We spent some time trying to improve the alignment before making OMC scans.
locked: 1370711576 (OMC REFL avg 3.51mW, OMC DCPD sum 15.23mA)
unlocked: 1370711782 (OMC REFL avg 24.73 mW, OMC DCPD sum 0.078 mA)
OMC scan start: 1370712036 duration 100 seconds (2nd order modes are roughly 8% of the 00 mode).
shutter blocked: 1370712337 (OMC REFL avg -0.030 DCPD SUM 8e-4 mA).
Jennie Wright plans to analyze this data to estimate OMC losses.
Here are the plots of ASC-AS_C_NSUM, OMC-QPD_A_NSUM, OMC-QPD_B_NSUM and OMC-REFL_A_LF, during these measurements. ASC-AS_C_NSUM shows between 22.8 and 32.1mW, OMC-QPD_A_NSUM 23.4mW, OMC-QPD_B_NSUM 23.0mW, and OMC-REFL_A_LF 24.8mW. According to Keita OMC-REFL_A_DC has an incorrect calibration and shows 25.2mW. The average of the 2 QPDs would be 23.2mW, which is about 6.5% lower than 24.8mW.
The second screenshot shows a time when the IMC was unlocked. The DC offsets are at most in the 10s of uW.
Using data from the scan, I adapted the labutils OMCscan class to plot the fitted scan and adapted labutils/fit_two_peaks.py to fit a sum of two Lorentzian functions, to distinguish the carrier 20/02 modes.
The first graph is the OMC scan plot, the second is the curvefit for the second order carrier modes.
We expect the HOM spacing to be 0.588 MHz as per this entry and DCC T1500060 Table 25.
The spacing for the modes measured is 0.592 MHz.
From the heights of the two peaks, this suggests the mode mismatch of the OMC to be (C02+C20)/(C00+C02+C20) = (0.83+1.158)/(15.32+0.83+1.158) ≈ 11.5% mode mismatch.
From the locked/unlocked powers on the OMC REFL PD, the visibility on resonance is 1 - (3.51+0.03)/(24.73+0.03) = 85.7%.
If the total loss is 14.3%, this implies that the other non mode-matching losses are roughly 1.3%.
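The visibility number above can be reproduced from the logged REFL powers, using the shutter-blocked reading as the dark offset:

```python
# Reproduce the on-resonance visibility from the logged OMC REFL powers.
p_locked_mw = 3.51     # REFL with the OMC locked on resonance
p_unlocked_mw = 24.73  # REFL with the OMC unlocked
dark_mw = 0.03         # magnitude of the shutter-blocked reading

visibility = 1.0 - (p_locked_mw + dark_mw) / (p_unlocked_mw + dark_mw)
print(f"visibility = {visibility:.1%}")  # 85.7%
```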
To run the OMC scan code go to
/ligo/gitcommon/labutils/omc_scan/ and run
python OMCscan_nosidebands.py 1370712036 100 "Sidebands off, 10W input" "single bounce" --verbose --make_plot -o 2
in the labutils conda environment and on git branch dev.
To do the double peak fitting run:
python fit_two_peaks_no_sidebands.py
in the labutils conda environment and on git branch dev.
These scans were done with OM2 cold.
For comparison with new OMC measurements I used Sheila's code to process the visibility, but updated it to use nds2utils instead of gwpy, as I was having trouble using gwpy to get data.
The code is attached and should be run in the nds2utils conda environment on the CDS workstations.
Power on refl diode when cavity is off resonance: 24.757 mW
Incident power on OMC breadboard (before QPD pickoff): 25.239 mW
Power on refl diode on resonance: 3.525 mW
Measured efficiency (DCPD current / responsivity if QE=1) / incident power on OMC breadboard: 70.4 %
assumed QE: 100 %
power in transmission (for this QE): 17.760 mW
HOM content inferred: 13.472 %
Cavity transmission inferred: 82.111 %
predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 70.367 %
OMC efficiency for 00 mode (including pick-off BS, cavity transmission, and QE): 81.323 %
round-trip loss: 1605 ppm
Finesse: 371.769
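As a consistency check (not part of the original analysis), the quoted finesse and round-trip loss can be related through the standard high-finesse approximation F ≈ 2π / (total round-trip power loss), where the total includes both coupler transmissions plus the excess loss:

```python
import math

# Relate the quoted finesse to the quoted excess round-trip loss via
# F ~ 2*pi / (total round-trip power loss). The difference between the
# total and the 1605 ppm excess loss is the implied sum of the two
# coupler transmissions (which are not quoted in this entry).
finesse = 371.769
excess_loss = 1605e-6

total_rt_loss = 2.0 * math.pi / finesse
implied_T_sum = total_rt_loss - excess_loss
print(f"total round-trip loss ~ {total_rt_loss * 1e6:.0f} ppm")   # ~16901
print(f"implied coupler transmission sum ~ {implied_T_sum * 1e6:.0f} ppm")
```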