h1guardian1, the guardian server, was rebooted to test a new boot drive. This drive was hotswapped into guardian last week.
Some confusion arose when guardian defaulted to booting off a drive from an older guardian server. That drive had been installed as a reference but was no longer in use.
To get h1guardian1 booting reliably, the old drive was removed from the server.
J. Freed, S. Dwyer
Yesterday we did damping loop injections on all 6 BOSEMs on the PR3 M1. PR3 shows quite a lot of coupling in the 10-25 Hz range. This is a continuation of the work done previously for ITMX, ITMY, and PR2.
As some signals were quite strong, instead of a gain of 750, data were collected at gains of 300 and 600 (300 is labeled as low_noise). Also, this time the injections were performed in diaggui instead of awggui.
The plots, code, and flagged frequencies are located in /ligo/home/joshua.freed/20241021/scrpts, while the diaggui files are in /ligo/home/joshua.freed/20241021/data. This time the 600-gain data was also saved as a reference in the diaggui files (see below), in 20241021_H1SUSPR3_M1_OSEMNoise_T3.xml.
pr3.png shows all results for PR3, with the top half at 300 gain and the bottom at 600 gain. All sensors showed strong coupling in the 10-25 Hz range at 600 gain. [LF, RT, T2, T3] showed strong coupling in the 10-25 Hz range at 300 gain. [SD, T1] instead showed some coupling in the 46-48 Hz range at 300 gain. I am unsure if this is significant or just another noise source present while the test was performed.
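Not part of the scripts above, but as an illustration of the kind of flagging involved: compute the coherence between the injected drive and a witness channel, and flag bins above a threshold. The file names, sample rate, and 0.5 threshold below are assumptions for illustration, not the actual analysis in the scrpts directory.

# Illustrative sketch: flag frequencies where an OSEM damping-loop injection
# shows high coherence with a witness channel. File names, sample rate, and
# the 0.5 threshold are assumptions, not the actual analysis parameters.
import numpy as np
from scipy.signal import coherence

fs = 256.0                            # assumed sample rate of the saved time series
drive = np.load("drive.npy")          # hypothetical injection time series
witness = np.load("witness.npy")      # hypothetical witness time series

f, coh = coherence(drive, witness, fs=fs, nperseg=int(16 * fs))
flagged = f[(coh > 0.5) & (f >= 10) & (f <= 25)]
print("Flagged frequencies (Hz):", np.round(flagged, 2))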
Because of the failure of the CNS power supply at EX, which was the same age as the power supply at EY, and because of some glitches starting to occur with GPS 1 PPS signal from the CNS at EY, I replaced the power supply for the CNS at EY.
relevant alogs here:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80742
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80537
Tue Oct 22 10:11:21 2024 INFO: Fill completed in 11min 17secs
Gerardo confirmed a good fill curbside.
FAMIS 31056
Several things have happened in the past week that show on these trends:
TITLE: 10/22 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
IFO is LOCKING (but not for long). We have an 8 hr maintenance day today, expecting the following activities:
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
I was left in charge of the IFO with directions from Camilla on how to take us into observing and set TJ as the owl operator. There was a lockloss just as I was about to click "observe", at approx 12:21 UTC.
Elenna, Camilla
Ran the automatic DARM offset sweep via Elenna's instructions (took <15 minutes):
cd /ligo/gitcommon/labutils/darm_offset_step/
conda activate labutils
python auto_darm_offset_step.py

DARM offset moves recorded to /ligo/gitcommon/labutils/darm_offset_step/data/darm_offset_steps_2024_Oct_21_23_37_44_UTC.txt
Reverted the tramp SDF diffs afterwards, see attached. Maybe the script needs to be adjusted to do this automatically; one possible approach is sketched below.
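A minimal sketch of how the script could save and restore the ramp times itself (the channel list here is a hypothetical placeholder, not the tramps the script actually touches):

# Hypothetical sketch: save TRAMP settings before stepping the DARM offset and
# restore them afterwards, so no SDF diffs are left behind. The channel name
# below is a placeholder, not necessarily one the script uses.
from epics import caget, caput

tramp_channels = ["H1:OMC-EXAMPLE_TRAMP"]   # placeholder list of ramp-time channels

saved = {ch: caget(ch) for ch in tramp_channels}
try:
    pass  # ... run the DARM offset steps here ...
finally:
    for ch, val in saved.items():
        caput(ch, val)                      # restore original ramp times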
I just ran Craig's script to analyze these results. The script fits a contrast defect of 0.742 mW using the 255.0 Hz data and 0.771 mW using the 410.3 Hz data. This value is lower than the previous 1 mW measured on July 11 (alog 79045), which matches up nicely with our reduced frequency noise since the OFI repair (alog 80596).
I attached the plots that the code generates.
This result then estimates that the homodyne angle is about 7 degrees.
Last year I added some code to plot_darm_optical_gain_vs_dcpd_sum.py to calculate the losses through HAM 6.
Sheila and I have started looking at those again post-OFI replacement.
Just attaching the plot of power at the anti-symmetric port (i.e. into HAM 6) vs. power after the OMC as measured by the DCPDs.
The plot is found in /ligo/gitcommon/labutils/darm_offset_step/figures/ and is also on the last page of the pdf Elenna linked above.
From this plot we can see that the power into HAM 6 (P_AS) is related to the power at the output DCPDs as follows.
P_AS = 1.220*P_DCPD + 656.818 mW
where the second term is light that will be rejected by the OMC, plus light that gets through the OMC but is insensitive to DARM length changes.
The throughput between the anti-symmetric port and the DCPDs is 1/1.22 = 0.820. So that means 18% of the TM00 light that we want at the DCPDs is lost through HAM 6.
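For reference, a minimal sketch of the arithmetic behind the quoted fit and throughput numbers (the sample points below are made up; the real fit uses the DARM offset sweep data):

# Sketch of the linear fit behind P_AS = 1.220*P_DCPD + 656.818 mW and the
# quoted throughput. The data points are made up for illustration only;
# the real numbers come from the DARM offset sweep measurements.
import numpy as np

p_dcpd = np.array([20.0, 30.0, 40.0])        # mW at the DCPDs (illustrative)
p_as = 1.220 * p_dcpd + 656.818              # mW into HAM6 (illustrative)

slope, offset = np.polyfit(p_dcpd, p_as, 1)  # linear fit of P_AS vs P_DCPD
throughput = 1.0 / slope                     # fraction of DARM-sensitive light reaching the DCPDs
print(f"slope = {slope:.3f}, offset = {offset:.3f} mW")
print(f"throughput = {throughput:.3f}, loss = {1 - throughput:.1%}")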
TITLE: 10/21 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: TJ (remote)
SHIFT SUMMARY:
IFO is in NLN and COMMISSIONING as of 14:40
A lot happened today, mostly related to two brief power outages taking down a few systems. Here's the summary (times in UTC).
Power Outage Story Start (alog 80791)
Pre-Shift:
11:15:48 Local: PSL downed (PMC High Voltage, HAM6 PZT High Voltage, NPRO tripped); Ryan was called after 15 mins in READY. Ryan C and Jason troubleshoot, figuring out that it can’t be fixed remotely (OWL alog 80782). They hopefully go back to sleep. IFO in MAINTENANCE.
11:52 Local: Power Glitch 2, the sequel. IFO can’t lock, stays downed.
Shift Start:
14:30: I arrive and realize the PSL has no power and that NDS is being slow. I read the alog and mattermost and saw that Ryan C (OWL Ops) was called and troubleshooting.
14:45: Dave gets on TS and figures out that it was indeed a power glitch (alog bla bla) that took down 2 front ends, the PMC Hi-Voltage, the NPRO (tripping it), and the HAM6 PZT, and potentially caused some NDS troubles. Confusion ensued about what should and shouldn’t have gone down over such a glitch.
15:00: Jonathan joins, goes into the MSR, checks the UPS, and confirms that it had gone onto battery for the two outage times. This further confirms that there was indeed bad sitewide power, which is why the UPS switched to battery backup.
15:00: By now, Dave and Jonathan have understood the outage effects, apart from 2 front ends going down during the outages and auto-rebooting shortly thereafter. The two culprits are SUSAUXH2 and SUSAUXEX. These aren’t on the UPS, so CDS is still confused as to why those went down. Dave cleared the CDS overview alerts and CDS is investigating.
15:05: Fil joins and turns the HAM6 PZT and PMC Hi-Voltage back on.
15:10: Ryan S joins and turns on the PSL, allowing us to lock.
Power Outage Story End, Normal Locking Start:
15:25: Guardian begins initial alignment, but an FSS glitch (unrelated rabbit hole) happens, so we relock the IMC and continue.
16:00: Initial alignment ends and we begin locking! A lockloss from a high guardian state and alignment issues requiring another initial alignment delay us further (but not for too much longer!).
18:07: NLN Acquired, IFO OBSERVING
19:55: Lockloss alog 80798 (Not PSL, unknown cause).
21:15: Trucks on the move in prep for Earthmoving.
21:40: NLN Acquired, IFO COMMISSIONING
23:30: Shift End, still commissioning
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:36 | SAFE | HAZARD | LVEA | YES | !!!!!LVEA IS LASER HAZARD!!!! | 03:16 |
| 17:39 | FAC | Karen | MY | N | Technical cleaning | 17:39 |
| 17:39 | FAC | Karen | Wood Shop | N | Technical Cleaning | 17:56 |
| 18:25 | PSL | Jason, Ryan S | Optics Lab | Local | Preliminary PSL Optics Reconnaissance | 18:42 |
| 21:25 | FAC | Earth Movers (TM) | Staging Building Behind | N | Earth Move Prep | 22:25 |
| 21:26 | PSL | Jason, Ryan S | Optics Lab | Local | Spare NPRO Work | 22:26 |
Sheila, Vicky, Camilla
We have turned back on the SQZ angle servo using the ADF at 322 Hz. This was last briefly tried while testing ADS alignment in 80194. Turned on the ADF and used 'python setADF.py -f 322'. Then set H1:SQZ-ADF_OMC_TRANS_PHASE to get H1:SQZ-ADF_OMC_TRANS_SQZ_ANG close to zero, and checked by stepping the SQZ angle that there is a zero crossing in the ADF-measured SQZ angle, plot attached.
The servo adjusts the SQZ angle (H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG) by keeping the ADF-measured angle (H1:SQZ-ADF_OMC_TRANS_SQZ_ANG) at zero. The setpoint can be adjusted using the ADF phase (H1:SQZ-ADF_OMC_TRANS_PHASE).
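In effect this is a slow integrator from the ADF-measured angle onto the CLF RF6 demod phase. A minimal sketch of that logic using the channels above (the gain, sign, and update rate here are assumptions, not the actual servo settings):

# Minimal sketch of the ADF squeezing-angle servo logic: integrate the
# ADF-measured SQZ angle error onto the CLF RF6 demod phase so the error is
# held at zero. Gain, sign, and update rate are assumptions, not the values
# used by the real servo.
import time
from epics import caget, caput

GAIN = -0.01    # assumed integrator gain (deg of demod phase per deg of error per step)
PERIOD = 1.0    # assumed update period in seconds

while True:
    error = caget("H1:SQZ-ADF_OMC_TRANS_SQZ_ANG")          # ADF-measured SQZ angle
    phase = caget("H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG")    # actuator: SQZ angle
    caput("H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG", phase + GAIN * error)
    time.sleep(PERIOD)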
Tagging Detchar: ADF is now on at 322Hz. It was turned all the way off in 79573 by Alan. We can adjust the frequency 50-500Hz if there is a better place for a line.
Note to operators: if you want to run SCAN_SQZANG, the ADF servo will now overwrite the sqz angle. So BEFORE going back to FREQ_DEP_SQZ you'll want to tweak H1:SQZ-ADF_OMC_TRANS_PHASE (via sqz overview > ADF) to make H1:SQZ-ADF_OMC_TRANS_SQZ_ANG close to zero. Or you can tweak H1:SQZ-ADF_OMC_TRANS_PHASE (via sqz overview > ADF) until the SQZ BLRMs/ DARM is best.
Trends of the ADF servo stabilizing the SQZ angle overnight. Looks good: the ADF SQZ ANGLE servo can hold the maximum squeezing level throughout the lock! Last night was running with the ADF SQZ angle servo + SQZ-IFO AS42 ASC together.
In the first lock of the screenshot, the ADF SQZ ANGLE servo is not yet running, and the squeezing level drifts quite a bit (~0.5-1 dB in ~2 hours, ending up non-optimal). In the last 2 locks, the ADF SQZ ANGLE servo is running and successfully stabilizes the SQZ angle, though the 2 locks from last night stabilize at different SQZ angles (weird?). Note SQZ ASC is running in both of these locks, so it seems like ASC + the ADF SQZ ANG servo work well when used together.
Naoki looked at sqz trends with/without the ADF servo before in LHO:75000. Looking at sqz trends for yesterday, the ADF servo stabilized the SQZ angle in the first ~25 minutes. Then over the first ~2 hours, the ADF servo needed to move the CLF_RF6 demod phase by 5-10 degrees to hold the SQZ angle stable. This implies that the optimal injected squeezing angle changed by something like 2-5 degrees during IFO thermalization.
Also noting a reference to LHO:77292, where Naoki does an On/Off test with the ADF line at 322 Hz.
Checked against the 68139 list; we can see that 322 Hz is a good frequency for CW. We will look at trying to add this ADF line to the _CLEAN or _NOLINES subtractions.
Lockloss most likely due to PSL FSS Glitch. ASC and IMC lost lock within 63ms of one another, which is suspect. However, it seems AS_A lost lock first, so it might not be FSS related - unsure. Plots below.
No FSS tag on the lockloss tool
Tagging OpsInfo: I've added an ndscope template to the lockloss select tool that provides some good channels to look at to help determine if it was caused by the PSL glitches. Running it for this lockloss, I would say this does not look like an "FSS glitch" lockloss (see attached).
I ran the noise budget injections for frequency noise, input jitter (both pitch and yaw) and PRCL. All injections were run with CARM on one sensor (REFL B). The cable for the frequency injection is still plugged in as of this alog, but I reset the gains and switches so we are back on two CARM sensors and the injection switch is set to OFF.
All injections are saved in the usual /ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/H1/couplings folder under Frequency_excitation.xml, IMC_PZT_[P/Y]_inj.xml, and PRCL_excitation.xml.
I realized an intensity noise injection might be interesting, but when I went to run the template for the ISS excitation, I was unable to see an excitation. I think there's a cable that must be plugged in to do this? I am not sure.
*********Edit************
Ryan S. sent me a message with this alog that has notes about how intensity noise injections should be taken. Through this conversation, I realized that I had misread the instructions in the template. I toggled an excitation switch on the ISS second loop screen, when I should have instead set the excitation gain to 1.
I was allowed another chance to run the intensity injections, and I was able to do so, using the low, middle, and high frequency injection templates in the couplings folder.
Also, the input jitter injections have in the past been limited to 900 Hz, because the IMC WFS channels are DQed at 2048 Hz. However, the live IMC channels are at 16 kHz, so I edited the IMC injection templates to run up to 7 kHz and to use the live channels instead of the DQ channels. That allowed the measurements to run above 900 Hz. However, the current injections are band-limited to only 1 or 2 kHz. I think we can widen the injection band to measure jitter up to 7 kHz; I was unable to make those changes because we had to go back to observing, so this is a future to-do item. I also updated the noise budget code to read in the live traces instead of the DQ traces.
Unfortunately, in my rush to run these injections, I forgot to transition over to one CARM sensor, so both the intensity and jitter measurements that are saved are with CARM on REFL A and B.
I ran an updated noise budget using these new measurements, plus whatever previous measurements were taken by Camilla in this alog. Reminder: the whole noise budget is now being run using median averaging.
I used a sqz time from last night where the range was around 165 Mpc, starting at GPS 1412603869. Camilla and Sheila took a no-sqz data set today starting at 1412607778. Both data sets are 600 seconds long. I created a new entry in gps_reference_times.yml called "LHO_O4b_Oct" with these times.
To run the budget:
>conda activate aligoNB
>python /ligo/gitcommon/NoiseBudget/aligoNB/production_code/H1/lho_darm_noisebudget.py
All plots are found in /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/lho_darm_noisebudget/
I made one significant edit to the code, which is that I decided to separate the laser and input jitter traces on the main DARM noise budget. That means that the laser trace is now only a sum of frequency noise and intensity noise. Input beam jitter is now a trace that combines the pitch and yaw measurements from the IMC WFS. Now, due to my changes in the jitter injections detailed above, these jitter injections extend above 900 Hz. To reiterate: the injections are still only band-limited around 2 kHz, which means that there could be unmeasured jitter noise above 2 kHz that was not captured by this measurement.
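Presumably the combined trace is the quadrature sum of the pitch and yaw projections; a minimal sketch of that combination (assuming the projections are ASDs on a common frequency vector; this is not the actual aligoNB implementation):

# Illustrative sketch: combine pitch and yaw jitter projections into a single
# "input beam jitter" trace by quadrature sum. Assumes both inputs are ASDs on
# the same frequency vector; not the actual aligoNB code.
import numpy as np

def combine_jitter(asd_pitch, asd_yaw):
    # Quadrature sum of two incoherent noise projections (ASDs)
    return np.sqrt(np.asarray(asd_pitch) ** 2 + np.asarray(asd_yaw) ** 2)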
One reason I wanted to separate these traces is partly because it appears there has been a significant change in the frequency noise. Compared to the last frequency noise measurement, the frequency noise above 1 kHz has dropped by a factor of 10. The last time a frequency noise injection was taken was on July 11, right before the OFI vent, alog 79037. After the OFI vent, Camilla noticed that the noise floor around 10 kHz appeared to have reduced, as well as the HOM peak heights, alog 76794. She posted a follow-up comment on that log today noting that the IFO to OMC mode matching could have an effect on those peaks. This could possibly be related to the decrease in frequency noise. Meanwhile, the frequency noise below 100 Hz seems to be about the same as the July measurement. One significant feature in the high frequency portion of the spectrum is a large peak just above 5 kHz. I have a vague memory that this is approximately where a first order mode peak should be, but I am not sure.
There is no significant change in the intensity noise from July, except that there is also a large peak in the intensity noise just above 5 kHz. Gabriele and I talked about this briefly; we think this might be gain peaking in the ISS, but it's hard to tell from alog measurements if that's possible. We think that peak is unlikely to be from the CARM loop. We mentioned the ISS theory to Ryan S. on the off-chance it is related to the current PSL struggles.
The other significant change in the noise budget is the change in the LSC noise. The LSC noise has reduced relative to the last noise budget measurement, alog 80215, which was expected from the PRCL feedforward implementation. Looking directly at the LSC subbudget, PRCL has been reduced by a factor of 10, just as predicted from the FF performance. Now the overall LSC noise contribution is dominated by noise from MICH. Between 10-20 Hz we might be able to win a little more with a better MICH feedforward; however, that is a very difficult region to fit because of various high-Q features (reminder alog).
Just as in the previous noise budget, there is a large amount of unaccounted-for noise. The noise budget code uses a quantum model that Sheila and Vicky have been working on extensively, but I am not sure of the status, and how much of that noise could be affected by adjustments to the model. Many of the noisy low frequency peaks also appear very broad on the timescale of the noise budget plot. We could try running over a longer period of time to better resolve those peaks.
Between 100-500 Hz there are regions where the sum of known noises is actually larger than the measured noise. I think this is because the input jitter projections are made using CAL DELTA L, but the overall noise budget is run on CALIB CLEAN where we are running a jitter subtraction.
I believe these couplings were pushed to aligoNB repo in commit bcdd729e.
I reran the jitter noise injections, trying to increase the excitation above 2 kHz to better see the high-frequency jitter noise. The results were moderately successful; we could probably push even harder. The results indicate that jitter noise is within a factor of 2-3 of DARM above 1 kHz.
I have attached the updated DARM noise budget and input jitter budget. I'm also attaching the ASC budget (no change expected) just because I forgot to attach it in the previous post.