Attached is a cross correlation plot for 20 minutes of no sqz time taken June 21st (70668), after first reducing the input power to 60W (366kW circulating power); this was before the LSC feedforward was retuned, which improved the sensitivity below 50Hz. The first plot is in loop-corrected mA, so you can compare the correlated noise estimated by the cross correlation and by subtracting the shot noise. At high frequencies the shot noise subtraction doesn't work well; I believe this is due to imperfections in how the DCPD sum mA channel is calibrated into actual mA. At low frequencies the cross correlation overestimates the DCPD sum PSD; this isn't due to an error in the OLG measurement. See alog 70453 for some information about checks of the cross correlation and this low frequency problem. At mid frequencies the two methods agree. The second attachment is the same data calibrated into displacement, with a model of quantum radiation pressure noise included and subtracted from the two estimates. For this time the DARM OLG model from pyDARM was underestimating the OLG by 2% at 24Hz, so I've scaled it up by 2%.
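To make the comparison concrete, here is a minimal sketch of the two estimates described above, using simulated stand-ins for the two DCPD time series (the sample rate, data, and shot-noise level are all placeholders, not the real H1 values):

```python
import numpy as np
from scipy.signal import csd, welch

fs = 16384           # sample rate [Hz] (assumed)
nperseg = 10 * fs    # 10 s FFTs; scipy defaults to 50% overlap

# Simulated stand-ins: a common classical noise plus independent shot noise.
rng = np.random.default_rng(0)
common = rng.normal(size=fs * 120)
dcpd_a = common + rng.normal(size=common.size)
dcpd_b = common + rng.normal(size=common.size)

# Cross-correlation estimate: shot noise is uncorrelated between the two
# photodiodes, so the real part of the CSD converges to the classical noise.
f, Pab = csd(dcpd_a, dcpd_b, fs=fs, nperseg=nperseg)
classical_xcorr = 4 * np.real(Pab)      # the DCPD sum carries the common part twice

# Shot-noise-subtraction estimate: PSD of the DCPD sum minus the predicted
# shot-noise PSD (a flat placeholder here; really set by the DC photocurrent).
f, Psum = welch(dcpd_a + dcpd_b, fs=fs, nperseg=nperseg)
shot_psd_sum = 2 * (2.0 / fs)           # two uncorrelated unit-variance diodes
classical_sub = Psum - shot_psd_sum     # should agree with classical_xcorr
```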
The next set of attachments are the same two plots made for 2 hours of no sqz time from June 4th, when the HAM7 ISI had a problem (70117), in DCPD mA and in displacement. We don't have a DARM OLG measurement from this time, so we just use the pyDARM model without any scaling.
Evan H pointed out that the calibration into displacement for the plots above was incorrect. This is because I used pyDARM to get the calibration from DARM err to displacement, but forgot to update my hardcoded scalar that translates mA to DARM err. I also changed the ini file that is pointed to: for 75W I'm using '/ligo/home/jeffrey.kissel/2023-05-10/20230506T182203Z_pydarm_H1.ini'; for times after June 22nd (60W) I'm using '/ligo/groups/cal/H1/reports/20230621T211522Z/pydarm_H1_site.ini'. When I use pydarm to calibrate mA into meters, I am not applying corrections for the kappas, which GDS does. In all the attached calibrated plots, GDS_STRAIN shows higher noise from 60-90 Hz, which might be consistent with the fairly large GDS calibration error that has been mostly consistent throughout these configuration changes (see 70907 and 70705).
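As a sanity check on that chain, here is a sketch of the bookkeeping (every number and the DARM_ERR-to-meters curve below are hypothetical placeholders; the real values come from the pydarm ini files quoted above):

```python
import numpy as np

f = np.logspace(1, 3, 500)            # frequency vector [Hz]
asd_mA = 1e-5 * np.ones_like(f)       # DCPD-sum ASD [mA/rtHz] (placeholder)

# The scalar that was stale in the earlier plots: DCPD-sum mA -> DARM_ERR counts.
mA_to_darm_err = 3.2e4                # hypothetical value, NOT the real one

# Stand-in for the pyDARM DARM_ERR -> meters response magnitude. No kappa
# (TDCF) corrections are applied here, matching the plots above; GDS applies them.
darm_err_to_m = 1e-19 * np.sqrt(1 + (f / 400.0) ** 2)

asd_m = asd_mA * mA_to_darm_err * darm_err_to_m   # displacement ASD [m/rtHz]
```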
We also took another set of cross correlation data for 1 hour with the hot OM2 on Wed (70930); those plots are attached here as the last two attachments. It is interesting to open all three of these plots in a browser and flip back and forth between them. The most obvious changes are the jitter peaks and the improvement at low frequency from the power reduction. But there also seems to be a broad change in noise from 60-100Hz, which should probably be confirmed by looking at other times and double-checking the calibrations.
JoeB and I picked a time, and we're running python inject_mag_10to40.py from /opt/rtcds/userapps/release/pem/h1/scripts (attached here with the current gps time we used).
I also made sure that our amplifiers were on (I think / hope) via the MEDM screen, which I've also attached.
In case it's helpful, here's the state of the filter bank.
Injection was successful.
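For context, here is a hypothetical sketch of the kind of band-limited excitation the script's name suggests (the attached inject_mag_10to40.py is the authoritative version; it also handles the awg excitation channel and GPS timing):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 512        # excitation sample rate [Hz] (assumed)
dur = 300       # seconds, matching the 300 s analysis stretches below

# White noise shaped into the 10-40 Hz band by a 4th-order Butterworth bandpass.
rng = np.random.default_rng()
sos = butter(4, [10, 40], btype="bandpass", fs=fs, output="sos")
excitation = sosfilt(sos, rng.normal(size=fs * dur))
```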
Figs 1 & 2: Hx magnetometer - Lx magnetometer: coherence/CSD, before, during, and after injection
Figs 3 & 4: H strain - L strain: coherence/CSD, before, during, and after injection
Channels used:
Hx mag = H1:PEM-CS_MAG_LVEA_VERTEX_X_DQ
Lx mag = L1:PEM-CS_MAG_LVEA_VERTEX_X_DQ
H strain = H1:GDS-CALIB_STRAIN
L strain = L1:GDS-CALIB_STRAIN
Times used:
Before: start: 900 sec before injection - duration: 300 sec, 10 sec FFT, 50% overlap
Injection: start gps = 1372010416 (June 28 - 17:59:58 UTC) - duration: 300 sec, 10 sec FFT, 50% overlap
After: start: 600 sec after injection - duration: 300 sec, 10 sec FFT, 50% overlap
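A sketch of the coherence/CSD estimates in the figures, using the parameters listed above (10 s FFTs, 50% overlap over each 300 s stretch); the arrays here are placeholders for data that would really be fetched from NDS (e.g. with gwpy) around the GPS times above:

```python
import numpy as np
from scipy.signal import coherence, csd

fs = 256                   # magnetometer sample rate [Hz] (assumed)
nperseg = 10 * fs          # 10 s FFT
noverlap = nperseg // 2    # 50% overlap

rng = np.random.default_rng(1)
hx_mag = rng.normal(size=fs * 300)   # placeholder for H1:PEM-CS_MAG_LVEA_VERTEX_X_DQ
lx_mag = rng.normal(size=fs * 300)   # placeholder for L1:PEM-CS_MAG_LVEA_VERTEX_X_DQ

f, coh = coherence(hx_mag, lx_mag, fs=fs, nperseg=nperseg, noverlap=noverlap)
f, Pxy = csd(hx_mag, lx_mag, fs=fs, nperseg=nperseg, noverlap=noverlap)
```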
Wed Jun 28 10:11:47 2023 INFO: Fill completed in 11min 46secs
Jordan confirmed that this was not a good fill from curbside. The TC temps slowly dropped to the -120C trip point with no noticeable increase in the discharge line pressure.
Gerardo will perform a manual fill later today after all the temperatures have normalized.
We're running the stochastic long injection, in concert with LLO, since last Friday they were interrupted by an earthquake (alog 70771).
hwinj stochastic long --time 1372008389 --run
TITLE: 06/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
SHIFT SUMMARY: Quiet night, locked at NLN for 17h50.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 07:11 | TCS | | | | Out of Observing due to CO2X Laser Unlock 70906 | 07:13 |
| 08:39 | Fire Panel | | CUR | N | Buzzing from Fire Panel alarm @ 8:39 UTC. Tagging DetChar. | 08:39 |
| 14:36 | VAC | Janos | EY | n | Turn off pump for dewar | still out |
Plot attached of jitter noise at 120Hz and the noise at 4.5kHz changing throughout the lock (CAL-DELTAL_EXTERNAL plotted). The high frequency noise is much lower as soon as we lock (black dashed line), then quickly gets worse before the 4500Hz noise slowly starts decreasing. Also seen via SQZ BLRMS bands around 135Hz (orange) and 4.6kHz (purple), see attached plot, and in the DARM BLRMS 100-450Hz band plot. Tagging ISC.
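For reference, a minimal sketch of the band-limited RMS idea behind those BLRMS traces (input data, sample rate, and filter order are assumptions; the real SQZ/DARM BLRMS channels come from the front-end models):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def blrms(x, fs, f_lo, f_hi, stride=1.0):
    """Bandpass x to [f_lo, f_hi] Hz, then take the RMS over stride-second blocks."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    n = int(stride * fs)
    nblk = y.size // n
    return np.sqrt(np.mean(y[: nblk * n].reshape(nblk, n) ** 2, axis=1))

fs = 16384
darm = np.random.default_rng(2).normal(size=fs * 60)   # placeholder for DARM
trend = blrms(darm, fs, 100, 450)                      # the 100-450 Hz band above
```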
TITLE: 06/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 1mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: Locked for almost 18 hours, range in the 140s. Quiet on site so far.
Janos noted that while he was at EY he heard some loud banging off in the distance, possibly a concrete hammer or small explosions?
I took a quick look at the SEI 0.3-1 Hz BLRMS time series and didn't notice anything in the last hour (1st attachment). The EY microphones had one small feature at around 14:45:31 UTC (2nd screenshot), and it looks to be at lower frequency (3rd screenshot; blue trace at the time, red just before). So maybe this is what Janos was referring to? I'm not seeing any other features elsewhere, though.
STATE of H1: Observing at 144Mpc. Been at NLN for 14 hours.
Could not see a difference in DARM between the windy and not windy times above, see attached.
I'm attaching the calibration report produced by Tony in LHO 70902. Page three shows tonight's sensing function sweep in a quad plot overlaid with previous sensing sweeps in the current calibration epoch. It looks like we have evidence of a detuned spring in the sensing function.
I'm attaching a screenshot showing a comparison of Camilla's broadband calibration measurement (LHO 70854) with the hot OM2 (LHO 70849) vs a measurement taken last week while the OM2 was still cold. The pink trace shows a measurement taken last week on 6/21 and the green was taken while the OM2 was heated. I want to point out the right column, which shows H1:GDS-CALIB_STRAIN_CLEAN / H1:CAL-PCALY_RX_PD_OUT_DQ (since Nonsens cleaning is currently turned off as per LHO 70654, H1:GDS-CALIB_STRAIN_CLEAN is the same as H1:GDS-CALIB_STRAIN_NOLINES). The OM2 TSAMS changes have resulted in roughly 2% additional error at 20Hz, among several other changes that I can't fully explain yet.

It's worth noting here that the GDS pipeline applies time dependent corrections to H1:CAL-DELTAL_EXTERNAL in real time as picked up by the TDCFs ("the kappas"). The fact that there are such noticeable differences in H1:GDS-CALIB_STRAIN_CLEAN / H1:CAL-PCALY_RX_PD_OUT_DQ indicates that the effects of the heated OM2 on the sensing function are not being tracked by the kappas. If they were, we'd see two overlaid traces on the PCALY to GDS-CALIB_STRAIN plot. I'm also leaving a screenshot showing how the kappas changed after the OM2 adjustment (kappas_scope.png). KappaC and Fcc changed by ~1.6% and 3%, respectively. Notably, the UIM kappa was about 2% off and then slid to ~1.

I'm also attaching trends of the systematic error lines (sys_err_lines_scope.png) from /ligo/home/jeffrey.kissel/Templates/NDScope/uncertainty.yaml (not a typo). Several of the systematic error lines drift after the OM2 TSAMS change. Jeff suggested that changes to the sensing function, which will impact the systematic error measurements (1/(1+G)), could "bleed" into and influence the line measurements that the kappa calculations are based on. More on this later.

refs in attached dtt screenshot:
refs 0-9: same as listed in LHO 70705
6/27 OM2 hot:
ref 10: deltal_external
ref 11: gds calib strain clean
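The kappas are derived from injected calibration lines, so the drifting systematic-error lines are directly relevant; here's a minimal digital lock-in sketch of that line-tracking idea (line frequency, sample rate, and data are placeholders, not the real GDS pipeline):

```python
import numpy as np

fs = 16384
f_line = 410.3                       # placeholder calibration-line frequency [Hz]
t = np.arange(fs * 60) / fs
data = 1.03 * np.sin(2 * np.pi * f_line * t)   # stand-in channel with a line

# Demodulate: multiply by a complex oscillator at the line frequency and
# average. The magnitude tracks the line amplitude; slow drifts in ratios of
# such line measurements are what feed the kappa calculations.
z = 2 * np.mean(data * np.exp(-2j * np.pi * f_line * t))
print(abs(z))   # ~1.03
```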
Plot attached; the right panel is a zoom of the left. Unsure why the ITMX CO2 laser PZT nose-dived to 0V. Laser temperature looked stable but is now decreasing due to a slightly lower lock point.
TITLE: 06/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 14mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: Been in NLN for 10 hours.
The wind picked up outside and the EY dust monitor counts are rising with it, see attached.
TITLE: 06/28 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 15mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
Handing Camilla an IFO that has been locked for 10 hours.
The wind started to pick up in the last 3 hours and the dust monitors in the optics lab are starting to sound off.
TCS Chiller Water Level Top-Off - FAMIS 21124
| | TCS X | TCS Y |
|---|---|---|
| Previous Level | 30.5 | 9.7 |
| New Level | 30.5 | 10.5 |
| Water added | 0 mL | 300 mL |
Inherited H1, which was already locked for about 2 hours.
If Livingston unlocks, I will try to take a calibration Pcal broadband and sensing function measurement, reload DIAG_MAIN, and load IMC_LOCK.
1:32 UTC:
Dropped out of Observing and into commissioning due to an accident. See alog.
Since we were no longer in Observing, I took the opportunity to reload DIAG_MAIN & IMC_LOCK.
Livingston then fell out of observing, and we took the lucky opportunity to make a calibration measurement. See alog.
NOMINAL_LOW_NOISE was reached again at 2:35
Current H1 IFO Status: NOMINAL_LOW_NOISE & OBSERVING.
1:45 UTC:
Took ISC_LOCK to CNLN_CAL_MEAS
& ran the following terminal command:
pydarm measure --run-headless bb sens pcal
I took a screenshot of the Calibration monitor screen.
Report file path:
/ligo/groups/cal/H1/reports/20230628T015112Z/H1_calibration_report_20230628T015112Z.pdf
Got back to NOMINAL_LOW_NOISE at 2:35 UTC
I was preparing some new ASC filters for testing tomorrow, so I placed the new filters in unused filter banks of CHARD P. I loaded the coefficients, not realizing there were other differences in the ASC file; this pulled us out of observing. I am not sure what the ASC diffs were, other than that they were all related to the camera servos. Before Tony or I could investigate, the diffs resolved themselves. It looks like the camera servos briefly turned themselves off and back on.
Apologies for the out-of-observing! It looks like whatever the issue was, it resolved itself.
[Jenne, Jason, Daniel]
Since Daniel has seen so much coherence with REFL port RIN (alog 70611), we thought we'd try increasing the ISS gain in case it's due to our being gain limited at high frequencies.
We didn't have much phase margin at all in the second loop (Measured last week in alog 70684), but we did in the first loop (last measured Feb 2022, alog 61826).
So, we increased the first loop by 6 dB (first attachment), now UGF is 46kHz with 39 deg phase margin. The gain slider is now at 11 dB. We also had to adjust the output offset from 5 to 4.3. We can turn on / off the first loop at input power of 2W just fine.
We increased the second loop by 4 dB (second attachment), now UGF measured at the -20dB line is 37 kHz with 32 deg phase margin. We had tried increasing the second loop by a further 2 dB (third attachment), but we decided that 41 kHz UGF wasn't worth having only 23 deg phase margin. These measurements were both taken with the input power at our current nominal 60W, but PRM and both ITMs misaligned. For the current lock (as well as for all these measurements) we just changed the gain after the ISS second loop was closed and DC coupled.
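To illustrate how a few dB of extra gain trades UGF against phase margin, here's a sketch on a toy open-loop model (an integrator plus a fixed delay; this is NOT the measured ISS loop, and the numbers below won't match the measurements above):

```python
import numpy as np

f = np.logspace(3, 6, 4000)              # 1 kHz - 1 MHz
g0 = 2 * np.pi * 30e3                    # toy loop: 30 kHz UGF integrator
delay = 2e-6                             # 2 us loop delay (assumed)
olg = g0 / (2j * np.pi * f) * np.exp(-2j * np.pi * f * delay)

def ugf_and_margin(olg, f):
    i = np.argmin(np.abs(np.abs(olg) - 1))      # unity-gain crossing
    pm = 180 + np.degrees(np.angle(olg[i]))     # phase margin [deg]
    return f[i], pm

for extra_db in (0, 4, 6):
    f_ugf, pm = ugf_and_margin(olg * 10 ** (extra_db / 20), f)
    print(f"+{extra_db} dB: UGF = {f_ugf/1e3:.0f} kHz, margin = {pm:.0f} deg")
```

The same pattern shows up in the measurements above: each dB of gain pushes the UGF up and eats phase margin, which is why the extra 2 dB on the second loop wasn't worth it.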
Still to do: At low power / DOWN in IMC_LOCK, the ISS second loop gain slider should be -2 dB (that's where all the offsets were most recently tuned). Then, in CLOSE_ISS, once the ISS is closed, the gain slider should be increased to +2 dB.
Also to do - make sure we know / remember how to get the SR785 connected to the wifi so we can download the data to the control room workstations.
The IMC_LOCK guardian's DOWN state already looks at and uses the lscparams.ISS_acquisition_gain setting of -2.
I have added some logic to the OPEN_ISS state of IMC_LOCK to also reset the second loop gain slider to -2 (to prepare for re-closing it).
I have modified the logic in CLOSE_ISS to increase the second loop gain. The final gain value for H1:PSL-ISS_SECONDLOOP_GAIN when we get to Observing should be +2.
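A sketch of the kind of Guardian logic described above, assuming the standard GuardState/ezca interface (the real IMC_LOCK code is the authoritative version; ramping and settling details are omitted):

```python
from guardian import GuardState
import lscparams  # assumed importable here; IMC_LOCK already uses it

class OPEN_ISS(GuardState):
    def main(self):
        # reset the second-loop gain slider so the loop is ready to re-close;
        # ezca is provided by the Guardian runtime (channel prefix 'H1:' implied)
        ezca['PSL-ISS_SECONDLOOP_GAIN'] = lscparams.ISS_acquisition_gain  # -2
        return True

class CLOSE_ISS(GuardState):
    def main(self):
        # ... existing loop-closing and DC-coupling steps would go here ...
        # then step the gain slider up to the final +2 dB operating point
        for gain in range(lscparams.ISS_acquisition_gain + 1, 3):
            ezca['PSL-ISS_SECONDLOOP_GAIN'] = gain
        return True
```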
Since we're in Observing, I have not loaded the IMC_LOCK guardian. I'll leave a sticky note for the operator to do that when we're next out of Observe.
If, when we get to Observing, the H1:PSL-ISS_SECONDLOOP_GAIN is not at +2, it is okay (for tonight, until I debug / fix why guardian didn't do it right) to change the slider in units of 1 (which is the default slider click value) until it is +2.
If there are any troubles with this this evening, please give me a call.
Ran stochastic short and long hardware injections, in coincidence with LLO. These are of the style described in alog 69723.
hwinj stochastic short --gps 1371592362 --run
hwinj stochastic long --gps 1371593299 --run (Interrupted by earthquake. LLO lost lock, so I ctrl-c'd)
It turns out that those awg scripts don't release their excitations nicely, which means we weren't going to be able to go to Observe until they were cleared. In clearing them, I accidentally stopped the CW hardware injections. Once those were back in place, we were back to Observing.
The rest of this alog is some details, in case we need to refer to them long-term.
When trying to figure out how to stop an individual excitation (the Transient hardware excitation point) I misunderstood the meaning of the channel numbers on the GDS TABLE from the calinj's GDS screen (see attachment), and inadvertently stopped the CW injection rather than the Transient injection path. Those numbers are for *testpoints*, not awg slots (which is consistent with what the top row of the table says, but it didn't click for me). Erik reminded me later that I could have checked awg slots with diag> awg show 42 (where 42 is h1calinj's number), but it's still not clear that you can selectively clear a single excitation. Jamie and the CDS folks are going to follow this up.
In the end, I did an awg clear 42 * and also tp clear 42 *, and that released all the excitations (since we already had to restart the CW injections).
Apparently there is a 'monit' process that should automatically restart the CW injections if they are stopped for some reason like this. However, Dave found that because tconvert was printing a warning to its output, that monit process was not succeeding. Dave worked some magic so that tconvert no longer gives the warning, which meant the monit process was able to restart the CW injections. After that was done, we were able to go back to Observe long term.
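As an illustration (hypothetical, not the actual monit script) of how a stray warning breaks this kind of restart logic: any script that parses tconvert's output will choke if a warning line gets mixed in:

```python
import subprocess

out = subprocess.check_output(["tconvert", "now"], text=True)
# Works when out is just "1372010416\n"; raises ValueError if a warning
# message is prepended to the output, which is the kind of failure that
# kept monit from restarting the CW injections.
gps = int(out.strip())
```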
Until we understood the issue with the startup script, I had been holding us out of Observe since we didn't have the CW injections. Keith let me know that they have monitoring downstream for when those are or are not in place, so it would have been fine if the CW injections were missing for a few hours to a ~day while we debugged. I tried to set us to Observing; however, every few minutes the auto-startup process tried to start the injection, which begins by setting a gain to zero (so that it can later be ramped on), so that SDF diff kept popping us out of Observe. Dave stopped the monit process, so we went back to Observe and stayed there for a few minutes. By then, Dave had fixed the tconvert warning issue, and we went out of Observe one last time, Dave restarted the monit process, and it restarted the CW injections. Now we're *really* back in Observe.
EDIT: Dave's alog about the magic: alog 70775
It seems that the --gps option has been removed and no longer works (it did work last Friday). We successfully used the --time option to give it the gps time.