Recently we've been having some issues with SRY locking during initial alignment, where even though the flashes at AS_A are good, it won't catch the cavity resonance. My thought was that the suspensions are moving too much to catch, and that running through the DOWN state after failing to hold lock moves the suspensions again. This happened to me during initial alignment today, so I waited in DOWN for a bit and then tried to lock. It locked straight away, possibly confirming my suspicion. Based on this, I bumped up a timer in Prep_For_SRY in the ALIGN_IFO node to allow SRM to settle a bit more before trying to acquire. Hopefully this will help with the SRY locking issues we've been having, but I'll keep an eye on it.
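For context, a settle timer of this sort in a Guardian state typically looks like the sketch below. This is illustrative only, not the actual ALIGN_IFO / Prep_For_SRY code; the state name and the 30 s value are placeholders.

```python
# Illustrative sketch only -- not the real ALIGN_IFO / Prep_For_SRY code.
# The 30 s settle time is a placeholder for whatever value was actually chosen.
from guardian import GuardState

class PREP_FOR_SRY(GuardState):
    def main(self):
        # start a timer so SRM has time to settle before we try to acquire
        self.timer['srm_settle'] = 30  # [s], hypothetical value

    def run(self):
        # hold in this state until the timer expires; Guardian then moves on
        return self.timer['srm_settle']
```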
Code changes have been loaded and committed.
The electrical circuit for the purge air compressor at EX has been disconnected at both ends to allow for the removal of the mechanical parts.
All maintenance activities have completed for the day except for squeezer work which will continue while I finish initial alignment.
J. Kissel

Back in July 2024 (LHO:78956, LHO:78960, LHO:78975), I installed some infrastructure in the h1iopomc0 front-end model that picks off the 4x copies of the 2x OMC DCPDs directly at the ADC input, sends them through a matrix, H1:OMC-ADC_INMTRX_$m_$n, and pipes the matrix output into test filter banks, called
    H1:OMC-DCPD_A1
    H1:OMC-DCPD_B1
    H1:OMC-DCPD_A2
    H1:OMC-DCPD_B2
As shown in the above-mentioned aLOGs, these pick-offs are completely independent of the interferometer: the matrix output only goes to these four filter banks, and the filtered output is terminated in the front-end code. As such, there is no impact on the detector performance, noise, or calibration if and when the parameters of these pick-off paths are changed.

However,
- a lot of the noise investigations we'd like to do with these paths are most interesting while the detector is in nominal low noise, the state we're in most often for observing, and
- because this front-end is running at 524 kHz,
  - storing these extra, test-only, filtered output channels to frames is too taxing on the data storage system, and
  - even *looking* at more than 3 channels at a time live is taxing on the front-end's 1/(2^19 [Hz]) = 1.9 [usec] clock-cycle turn-around time.
This makes the usual method of testing and changing parameters during an observing run -- "just change something during a lock-loss, during commissioning times, or during maintenance, then look at the data in the past" -- intractable.

So, all this being said, in order to increase the amount of live time we have to commission and iterate on the tests done with this path, I've unmonitored all the EPICS records associated with these pick-offs' input matrix and filters: 52 channels total. The explicit list of unmonitored channels is attached as screenshots.

Details:
The h1iopomc0 model only has one SDF file in /opt/rtcds/userapps/release/cds/h1/burtfiles/h1iopomc0/
    safe.snap
The target area's safe and OBSERVE .snap files that the front-end uses are both soft links to this file:
/opt/rtcds/lho/h1/target/h1iopomc0/h1iopomc0epics/burt/
    OBSERVE.snap -> /opt/rtcds/userapps/release/cds/h1/burtfiles/h1iopomc0/safe.snap
    safe.snap -> /opt/rtcds/userapps/release/cds/h1/burtfiles/h1iopomc0/safe.snap
So, using the graphical SDF interface, I unmonitored the channels while the GUI pointed to the safe.snap in the target area (which thus overwrites the real file in the userapps area), and then committed the file to svn as rev r30313.
Tue Jan 07 10:07:23 2025 INFO: Fill completed in 7min 20secs
Following yesterday's work on the thermocouples, both TC-A and TC-B are today looking more like their pre-19Dec2024 values. Trip temp was lowered to -60C for today's fill.
TCmins = [-98C, -96C] OAT (3C, 37F).
While the damaged wiring was being removed, the stud on the associated heating element broke, so the entire element had to be replaced. The temperature fluctuated a bit but should even out before the end of the maintenance window.
TITLE: 01/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 6mph Gusts, 3mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.53 μm/s
QUICK SUMMARY: Magnetic injections currently running, locked for 2.5 hours. Maintenance day today.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
TITLE: 01/07 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
I don't fully understand why H1 Manager got stuck, but I had to request INIT on H1_MANAGER to get it unstuck.
11:57 UTC Ran an Initial_Alignment which worked well on the first try.
H1 Ran through all locking states quickly.
H1 Currently Observing 13:04 UTC
Initial alignment took over an hour so the timer there expired and prompted H1_MANAGER to go to Assistance_Required. Looks like ALS Y arm took most of that time.
TITLE: 01/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We lost lock from a 7.1 from Nepal, and we've had a high state lockloss on the way back up. Currently powering up.
LOG: No log.
We just lost lock from a 7.0 from Nepal, I'm going to hold in down for it to pass.
I ran some simple tests today to test out the PCALX_STAT Guardian node.
I started to test things at 17:00 UTC but ran into a collision with Jim's measurements.
During the first attempt I was able to determine that closing the shutter will cause the PCALX_STAT guardian to go to the FAULT [37] state after 60 seconds.
It then cycled between WAITING [4], NOMINAL_PCAL [10], and FAULT multiple times per second until the fault was resolved. I have remedied this issue.
After Jim finished his work, I was able to continue testing the guardian using a few other buttons.
Potential PCAL Failure modes:
Things I've tested:
If the channel H1:CAL-PCALX_OSC_SUM_ON gets turned off, the PCALX_STAT guardian alarms because it checks for this. This channel is also being monitored via SDF.
The H1:CAL-INJ_END_SW channel being switched from X to Y does not trigger a fault, but this channel is being monitored via SDF.
H1:CAL-INJ_MASTER_SW channel is also monitored via SDF.
The OFS loop open check worked well: the guardian checks that the H1:CAL-PCALX_OPTICALFOLLOWERSERVOENABLE channel is set correctly and checks the H1:CAL-PCALX_OFS_PD_OUT16 voltage against a known threshold.
The TXPD (H1:CAL-PCALX_TX_PD_WATTS_OUTMON) and/or RXPD (H1:CAL-PCALX_RX_PD_WATTS_OUTMON) signals could have some fault; the guardian worked well for this as well.
H1:CAL-PCALX_SWEPT_SINE_ON could be turned off. This channel is currently not being monitored; a potential solution is to just monitor it via SDF. I will be waiting for an opportunity to make this change during the next lockloss.
Edit: It may be easier to have the guardian check this channel directly (see the sketch at the end of this entry).
Things I have not tested:
If the laser diode current increases or decreases. I do have a threshold for this, outside of which the guardian will go to FAULT.
If the laser diode temperature increases or decreases. I do have a threshold for this as well.
If H1:CAL-PCALX_OFS_PD_OUT16 falls outside its threshold. I do have a check for this in the guardian, but I have yet to test it.
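For concreteness, a check of the style described above would look something like the following in Guardian. This is only a hypothetical sketch, not the actual PCALX_STAT code: the threshold value, notify messages, and state wiring are placeholders (ezca and notify are provided by Guardian at runtime).

```python
# Hypothetical sketch of a PCALX status check -- not the actual PCALX_STAT node.
from guardian import GuardState

OFS_PD_MIN = 0.5  # [V] placeholder threshold for the OFS PD readback

class NOMINAL_PCAL(GuardState):
    index = 10

    def run(self):
        # the oscillator sum should always be on
        if ezca['CAL-PCALX_OSC_SUM_ON'] == 0:
            notify('PCALX oscillator sum is off')
            return 'FAULT'
        # the optical follower servo should be enabled with a reasonable PD voltage
        if ezca['CAL-PCALX_OPTICALFOLLOWERSERVOENABLE'] == 0:
            notify('PCALX optical follower servo is not enabled')
            return 'FAULT'
        if abs(ezca['CAL-PCALX_OFS_PD_OUT16']) < OFS_PD_MIN:
            notify('PCALX OFS PD voltage below threshold')
            return 'FAULT'
        return True
```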
TITLE: 01/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently Observing and have been locked for over 14 hours. Chill shift today with some commissioning in the middle but mostly Observing.
LOG:
16:32 Out of Observing for Commissioning
19:30 Observing
20:29 Potential for noise in DARM due to a scheduled nearby explosion (probably up to a few hours after), tagging DetChar.
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:52 | | Cement truck | Staging building | n | Pouring cement | 17:22 |
15:00 | FAC | Kim | OpticsLab | n | Tech clean | 15:58 |
16:47 | SQZ | Sheila, Camilla | LVEA | y(local) | Adjusting OPO crystal position | 18:32 |
18:57 | PCAL | Tony | PCAL Lab | y(local) | PCALin' | 19:04 |
21:13 | ISC | Camilla | Optics Lab | n | Pulling out tools | 21:50 |
D. Davis, E. Capote, O. Patane
There was a discussion recently in the Detchar tools channel about how to interpret the cumulative range plots generated on the summary pages, such as today's cumulative range plot. Specifically, it seems incorrect that we could accumulate 30% of our range below 30 Hz.
Derek has pointed out that this is misleading because the calculation of cumulative range in this manner is actually performed somewhat incorrectly. In short, range can be thought of as analogous to SNR, a quantity that must be added in quadrature. The order therefore matters when calculating a cumulative range, i.e. the range acquired from 10-20 Hz, then 10-30 Hz, 10-40 Hz, etc. The total cumulative range number, the one we think about all the time (160 Mpc, for example), is correct, but determining the range over a subset of the band (such as 10-30 Hz) needs to be done more carefully so it is not misleading.
Once we started discussing this, I pointed out that this means the way we compare ranges is also misleading: when we run our DARM integral comparison scripts, we subtract the cumulative ranges of two different DARM PSDs in amplitude (Mpc) rather than in quadrature (Mpc^2).
Derek has created an improved way to calculate cumulative range, which they have coined the "cumulative normalized range". To get right to the point: it is better to normalize the cumulative range squared by the total range. This is an example plot showing how these two differ. This plot shows that, for a given DARM PSD, the cumulative normalized range better estimates the sensitivity gained over a particular range of frequency. The low frequency portion is still very important (this results from the f^(-7/3) dependence in the range calculation), but indeed we gain very little sensitivity between 10-20 Hz, for example. You can also see that, when using the normalized method, the curve where you integrate up in frequency and the curve where you integrate down in frequency intersect at about 50% of the range, which is what you would expect.
In equation form, this image attachment defines the total cumulative range, and this image attachment shows our definition of the normalized cumulative range.
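As a plain-text stand-in for those attachments (this is my reading of the description above; the attachments and the forthcoming DCC document are the authoritative definitions), with dR^2/df the usual range-squared integrand:

$$
R_{\mathrm{tot}}^2 = \int_{f_{\min}}^{f_{\max}} \frac{dR^2}{df}\,df,
\qquad
R_{\mathrm{cum}}^2(f) = \int_{f_{\min}}^{f} \frac{dR^2}{df'}\,df',
\qquad
R_{\mathrm{norm}}(f) = \frac{R_{\mathrm{cum}}^2(f)}{R_{\mathrm{tot}}}.
$$

Written this way, $R_{\mathrm{norm}}(f_{\max}) = R_{\mathrm{tot}}$, and the up-going and down-going cumulative curves sum to $R_{\mathrm{tot}}$ at every frequency, which is why they cross at 50% of the range.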
In order to more sensibly compare two sensitivities by frequency, we have also derived a way to calculate the cumulative normalized range difference. The derivation is slightly more complicated, but the result is that you subtract the two cumulative normalized quantities, and then normalize by the sum of the two ranges.
This image attachment shows the equation form of this.
To make sense of why this method is better than the one we use now, imagine that we have two PSDs, one with 100 Mpc of range, and one that is exactly the same except that between 10-20 Hz there is an additional gain of 20 Mpc, such that the total range is now 120 Mpc. If you compare these two bizarre PSDs, you would expect the cumulative range difference to accumulate 20 Mpc between 10-20 Hz and to accumulate nothing further above 20 Hz. This is an example plot showing how the cumulative range difference appears using the method where you simply subtract the two cumulative ranges versus the new normalized method. The normalized range calculation behaves as expected, while the straightforward subtraction of the two cumulative ranges overshoots the range gain from 10-20 Hz and then misleadingly indicates the range is decreasing above 20 Hz to make up for it.
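One way to write the difference that is consistent with both the full-band limit and the worked example above (again, my reading of the prose; the attached equation and P2500021 are authoritative) is

$$
\Delta R_{\mathrm{norm}}(f) = \frac{R_{\mathrm{cum},1}^2(f) - R_{\mathrm{cum},2}^2(f)}{R_{\mathrm{tot},1} + R_{\mathrm{tot},2}},
$$

which, by difference of squares, reduces to $R_{\mathrm{tot},1} - R_{\mathrm{tot},2}$ over the full band. For the example above, the numerator is fixed at $120^2 - 100^2 = 4400~\mathrm{Mpc}^2$ for all frequencies above 20 Hz, so $\Delta R_{\mathrm{norm}} = 4400/220 = 20~\mathrm{Mpc}$ from 20 Hz up, with no spurious decrease at higher frequencies.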
There is a lot of information to grasp here, and Derek and I will be posting a document to the DCC soon with a fuller explanation and full derivations. Oli has taken the time to implement these new methods in our DARM comparison scripts, and they will follow up here with more information about that.
As a start, I've only corrected these things in the range_compare script that I previously made, based on the Hanford Noise Budget darm_integral_compare script (81015). The script I made is a simplified version of the script used for creating NoiseBudget plots, so I thought it would be a good start for making these changes. There are also plans to correct the calculations in other places (the summary pages and the official NoiseBudget scripts, for example).
All changes have been committed to git and are up to date in gitcommon/ops_tools/rangeComparison/. In addition to the changes necessary to correct the cumulative range plots, I also swapped out the way we were grabbing data so it now uses GWPy, and I added an additional plot that shows the cumulative sum of the range over frequency. Here's a comparison of the old vs new cumulative range plots.
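For illustration only, the cumulative normalized range can be computed from an ASD along the lines below. This is not the rangeComparison code itself; it assumes the standard f^(-7/3)/PSD range-squared integrand noted above and skips the overall constant that sets the absolute Mpc scale.

```python
# Illustrative sketch, not the actual gitcommon/ops_tools/rangeComparison code.
import numpy as np

def cumulative_normalized_range_fraction(freq, asd, f_min=10.0):
    """Fraction of the total range accumulated from f_min up to each frequency,
    i.e. R_norm(f) / R_tot = R_cum^2(f) / R_tot^2.  Multiply by the total range
    in Mpc (from the usual range calculation) to get R_norm(f) itself."""
    sel = freq >= f_min
    f = freq[sel]
    dR2_df = f**(-7.0 / 3.0) / asd[sel]**2        # proportional to d(R^2)/df
    R2_cum = np.cumsum(dR2_df * np.gradient(f))   # cumulative range squared (relative)
    return f, R2_cum / R2_cum[-1]
```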
Derek and I have just uploaded a document to the DCC with a full workup of this change and some fun examples; see P2500021.
Because of drift issues that we've been seeing in the BBSS, we've been looking into center of mass issues, and we are now looking at the lowest stage, M3, and the possibility that the stainless steel center insert (which was installed upside down) is causing pitch issues.
We wanted to verify the physical d4 value, i.e. the physical location of the prism-clamp breakoff point, in the latest set of measurements we took at the end of October. We wanted to check this because when we were adjusting d4 back in early 2024 (75947, 76071), we never compared any measurements where we changed d4 with the stage2 parameter correctly turned on, meaning that those d4 comparisons all plotted changes in the effective d's instead of the physical d's.
I've plotted a few parameter sets where the stage2 parameter is on so we can more properly compare them.
I've plotted:
Here are the plots. The zoomed L and P plots are the best for seeing the differences between the different parameters.
The Oct31 measurement seems to match best somewhere between d4 - 1.0mm and d4 - 0.5mm, so the value of d4 that we have in our current set seems to still match pretty well.
It is interesting, though, to see the difference between the dark red and the yellow traces as compared to our latest measurement, since the only difference between those two is the d1 value. It looks like somewhere along the way, while adjusting the blade heights, we also moved away from the d1 = FDR - 2.5mm value and closer to the d1 value given in the FDR.
TITLE: 01/06 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.48 μm/s
QUICK SUMMARY:
Sheila, Camilla. Last done in 80451, followed those instructions.
Before translating the crystal, we reduced the green H1:SQZ-SHG_LAUNCH_DC_POWERMON power from 30mW down to 6mW. After we finished, we brought this up to 15mW and could still lock the OPO with 80uW on H1:SQZ-OPO_TRANS_LF_OUTPUT and around 6 on the ISS controlmon, meaning our losses have been reduced significantly.
We moved from the 3rd spot from the left (photo) to the 4th spot from the left (photo), i.e. 34 x steps of 50 to the right and then 19 x steps of 10 to the right. We have previously used the 2nd, 3rd, and 5th spots from the left; there are 6 spots in total.
Measured NLG to be 8.8 (0.0787/0.00892, 76542). This is lower than the 11 to 17 we usually run with, which means that this new spot has lower losses but a worse NLG.
FDS is the same but ASQZ is lower. Sheila said this is good and expected with lower losses, and means there can be less mis-rotated ASQZ injected. Plot attached.
Tagging OpsInfo: This change will mean that for the next week or so we'll need to more regularly adjust the OPO TEC temperature, instructions are in 80461. Should be done pro-actively when relocking and if range is low in observing. If you adjust while in Observing, please tag Detchar in your alog.
Operators, please actually go out of Observing to make this temperature change (no need for pre-approval from me, if the range check indicates it needs doing).
This alog from October points out that these temperature changes are so successful at improving our range so quickly that they cause some problems for the astrophysical searches. This is entirely mitigated if we pop out of Observing during the change, then go back in when the change is complete.
This alog follows up LHO:81769, where I calibrated the ASC drives to test mass motion for all eight arm ASC control loops. Now I have taken the noise budget injections that we run to measure the ASC coupling to DARM and used them to calibrate an angle-to-length coupling function in mm/rad. I have only done this for the HARD loops because the SOFT loops do not couple very strongly to DARM (with the notable exception of CSOFT P, which I will follow up on).
The noise budget code uses an excess power projection to DARM, but instead I chose to measure the linear transfer function. The coherence is just OK, so I think a good follow-up is to remeasure the coupling and drive a bit harder / average longer (these are 60 second measurements). This plot shows the transfer function of the noise budget injection as calibrated DARM over ASC PUM drive [m/Nm], along with the coherence of the measurement.
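For reference, the transfer-function estimate described here amounts to something like the following scipy sketch on two already-calibrated time series. This is not the noise budget code; the function name and the 10-second segment length are placeholders.

```python
# Illustrative sketch: estimate the drive-to-DARM transfer function and coherence
# from two time series (e.g. a calibrated ASC PUM drive and calibrated DARM).
import numpy as np
from scipy import signal

def measure_tf(drive, darm, fs, nperseg=None):
    """Return (f, H, coh) with H = P_xy / P_xx (drive -> DARM) and the coherence."""
    if nperseg is None:
        nperseg = int(10 * fs)                      # 10 s segments, placeholder choice
    f, Pxy = signal.csd(drive, darm, fs=fs, nperseg=nperseg)
    _, Pxx = signal.welch(drive, fs=fs, nperseg=nperseg)
    _, coh = signal.coherence(drive, darm, fs=fs, nperseg=nperseg)
    return f, Pxy / Pxx, coh
```

With only 60 seconds of data this gives a handful of averages per segment, consistent with the so-so coherence noted above; driving harder or averaging longer would tighten it up.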
I followed a similar calibration procedure to my previous alog:
I did not apply the drive matrix here, so the calibration is into ETM motion only (factor of +/-1), whereas the calibration into ITM motion would have an additional +/- 0.74 (+/- 0.72 for yaw) applied.
HARD Pitch Angle to Length coupling plot
HARD Yaw Angle to Length coupling plot
Overall, the best-measured DOF here is CHARD Y. In both CHARD Y and DHARD Y, there seem to be two clear coupling regions: a fairly flat region above 20 Hz in CHARD Y and above 30 Hz in DHARD Y, reaching between 20-30 mm/rad, and a steep coupling below those frequencies. This is reminiscent of the coupling that Gabriele, Louis, and I measured back in March and tried to mitigate with A2L and a WFS offset. We found that we could reduce the flatter coupling in DHARD Y by adjusting the A2L gain, and the steeper coupling by applying a small DC offset in AS WFS A yaw. We are currently not running with that WFS offset. The yaw coupling suggests that we have some sort of miscentering on both the REFL and AS WFS, which causes a steep low frequency coupling that is less sensitive to beam centering on the test mass (as shown by the A2L tests); meanwhile, the flat coupling is sensitive to beam miscentering on the test mass, which is expected (see e.g. T0900511).
The pitch coupling has the worst coherence here, but the coupling is certainly not flat. It appears to be rising with about f^4 at high frequency. I have a hard time understanding what could cause that. There is also possibly a similar steep coupling at low frequency like the yaw coupling, but the coherence is so poor it's hard to see.
Assuming that I have my calibration factors correct here (please don't assume this! check my work!), this suggests that the beam miscentering is higher than 1 mm everywhere and possibly up to 30 mm on the ETMs (remember this would be ~25% lower on the ITMs). This seems very large, so I'm hoping that there is another errant factor of two or something somewhere.
My code for both the calibrated motion and calibrated coupling is in a git repo here: https://git.ligo.org/ecapote/ASC_calibration
Today I had a chance to rerun these injections so I could get better coherence, injection plot. I ran all the injections with the calibration lines off.
The pitch couplings now appear to be very flat, which is what we expect. However, they are very high (100 mm/rad !!) which seems nearly impossible.
The yaw couplings still show a strong frequency dependence below 30 Hz and are flat above, at around 30-50 mm/rad, which is still large.
Whether or not the overall beam miscentering value is correct, this does indicate that there is some funny behavior in yaw only that causes two different alignment coupling responses. Since this is observed in both DHARD and CHARD, it could be something common to both (so maybe less likely to be related to the DARM offset light on the AS WFS).
I also ran a measurement of the CSOFT P coupling, injection plot. I was only able to get good coherence up to 30 Hz, but it seems to be fairly flat too, CSOFT P coupling.
Edit: updated coupling plots to include error shading based on the measurement coherence.
Observing at 2154 UTC.