We investigated the SHV cable connectivity between the Ion Pump Controller at EY and the Ion Pump at Y2-8, checking for faults. We did not detect any obvious faults, but we did detect the prior splice that was installed back in 2016. We used the FieldFox in TDR mode with a 1.2 meter launch cable in low pass mode. I have attached scans of these findings:
The first photo is a shot from the Controller. There is a large impedance change at 19 meters from the connection point, which is the same location as the splice made in 2016, and the repair looks like we would expect a splice to look.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=28847
The second photo is a shot from the Ion Pump back to the end station. There is a very small bump at 20 meters from the ion pump that is real but very tiny; it could be a kinked cable, but it is not enough to cause issues.
Otherwise, the cable looks fine. Future testing is warranted.
M. Pirello, G. Moreno, J. Vanosky
After the cable was deemed good, we took a look at the controller and found some "events" recorded in its log (see attached photo); one such event pertains to the glitch noticed here. All recorded events were kept on the controller. We are going to look further into this.
FAMIS 22015
pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.
Sheila, Jennie, Camilla. Last done in 76040. We want to get a homodyne measurement at our new OPO crystal spot location (82134), and plan to repeat this a few months later to see any difference.
Aligned the homodyne but ran out of time before taking any data. May try to repeat on Thursday during commissioning.
Sheila offloaded the IFO ASC and edited the guardian code for clearing the ASC so that it now includes clearing the integrators that we moved into the ZMs.
Turned up SEED injected power from 0.7mW to 72mW to match the power from the LO (0.544mW on HDA and B). We later turned the LO power down with the pico HWP to match the SEED power.
Jennie adjusted the LO beam with the SEED beam blocked to improve the subtraction: checked we were centered on PD A/B using the steering mirrors downstream of the HD BS and then measured the HD DIFF spectrum. Adjusted the BS, re-centered on the HD PDs, and re-checked whether the DIFF spectrum shot noise had improved.
Sheila and Jennie aligned the SEED beam onto the LO beam. We expected to be able to see fringing on HD A/B at this point but couldn't; adjusting the alignment with ZM4 didn't help. It turned out that the fringing was there all along: it was clear when using an oscilloscope, just unclear on ndscope. Sheila further tuned the SEED alignment to minimize fringing. We measured fringe peaks at 1.85V and a fringe minimum of 29mV with 2mV dark noise, giving a visibility of 97.1%.
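As a quick cross-check of that number, here is a minimal sketch assuming the standard visibility definition, (Vmax - Vmin)/(Vmax + Vmin), with the dark offset subtracted from both extrema:

    v_max, v_min, v_dark = 1.85, 0.029, 0.002     # volts, read off the scope
    vis = ((v_max - v_dark) - (v_min - v_dark)) / ((v_max - v_dark) + (v_min - v_dark))
    print(f"visibility = {vis:.1%}")              # -> 97.1%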
Then went back to the control room and measured NLG like normal (OPO to LOCKED SEED NLG), NLG = 10.3. Sheila turned down the SEED beam back to 0.7mW.
Then we could start taking HD SQZ measurements, using SQZ_MANAGER to go to SQZ READY HD (which takes the LO to LOCKED HD) and the OPO to LOCKED_CLF_DUAL. We needed to increase the SHG power from 10mW to ~16mW to allow 80uW on OPO Trans.
Plot attached: dark noise was the same as in the original plot, LO shot noise is ref10, and with NLG 10.3 the ASQZ was 13.8dB (ref2), which seems similar to the last measurements.
We planned to take ASQZ/SQZ data at different NLGs; however, I was having some issues with the OPO GRD unlocking and ran out of time. May continue on Thursday during commissioning time.
Sheila, Camilla
Following up on 82039, where Vicky remotely helped Oli to recover the squeezer alignment.
We have a state called RESET_SQZ_ASC (FDS or FIS), which can be used after something goes wrong with the ASC and someone wants to get rid of what the ASC has done. This state hadn't been edited since we moved the ASC integrators to the suspension lock filters, so when Vicky and Oli tried it the ASC wasn't really reset, making it more difficult to recover from that situation.
Today we added the LOCK filters to this state, and loaded and ran it. This should make things a little bit less confusing to debug next time.
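For context, the kind of clearing involved looks roughly like the following (a minimal sketch in the ezca style used by Guardian code, with hypothetical optic and channel names; the actual RESET_SQZ_ASC code differs):

    # hypothetical optics/DOFs for illustration only
    for optic in ['ZM4', 'ZM5', 'ZM6']:
        for dof in ['P', 'Y']:
            fb = 'SUS-{}_M1_LOCK_{}'.format(optic, dof)
            ezca[fb + '_RSET'] = 2      # clear the filter-bank history, including integrators
            ezca[fb + '_OFFSET'] = 0    # remove any leftover offset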
Recently we've been having some issues with SRY locking during initial alignment where, even though the flashes at AS_A are good, the cavity wouldn't catch resonance. My thought was that the suspensions are moving too much to catch, and then it would run through the DOWN state after failing to hold it, moving the suspensions again. During initial alignment today this happened to me, so I waited in DOWN for a bit and then tried to lock. It locked straight away, possibly confirming my suspicion. Based on this, I bumped up a timer in Prep_For_SRY in the ALIGN_IFO node to allow SRM to settle a bit more before trying to acquire. Hopefully this will help with the SRY locking issues we've been having, but I'll keep an eye on it.
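As a rough illustration of the change (not the actual ALIGN_IFO code, and with an illustrative timer value), lengthening a settle timer in a Guardian state looks like this:

    from guardian import GuardState

    class PREP_FOR_SRY(GuardState):
        def main(self):
            # give SRM extra time to quiet down after being moved in DOWN
            self.timer['srm_settle'] = 30      # seconds; illustrative, bumped-up value

        def run(self):
            # proceed toward SRY acquisition only once the settle timer has expired
            return self.timer['srm_settle']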
Code changes have been loaded and committed.
The electrical circuit for the purge air compressor at EX has been disconnected at both ends to allow for the removal of the mechanical parts.
All maintenance activities have been completed for the day except for squeezer work, which will continue while I finish initial alignment.
Observing at 2154 UTC.
J. Kissel
Back in July 2024 (LHO:78956, LHO:78960, LHO:78975), I'd installed some infrastructure in the h1iopomc0 front-end model that picks off the 4x copies of the 2x OMC DCPDs directly at the ADC input, sends them through a matrix, H1:OMC-ADC_INMTRX_$m_$n, and pipes the matrix output into test filter banks called
    H1:OMC-DCPD_A1
    H1:OMC-DCPD_B1
    H1:OMC-DCPD_A2
    H1:OMC-DCPD_B2
As shown in the above-mentioned aLOGs, these pick-offs are completely independent of the interferometer: the matrix output only goes to these four filter banks, and the filtered output is terminated in the front-end code. As such, there is no impact on the detector performance, noise, or calibration if and when the parameters of these pick-off paths are changed.
However,
    - a lot of the noise investigations we'd like to do with these paths are most interesting while the detector is in nominal low noise, the state we're in most often for observing, and
    - because this front-end is running at 524 kHz,
        - storing these extra, test-only, filtered output channels to frames is too taxing on the data storage system, and
        - even *looking* at more than 3 channels at a time live is taxing on the front-end's 1/(2^19 [Hz]) = 1.9 [usec] clock-cycle turn-around time.
This makes the usual method of testing and changing parameters during an observing run -- "just change something during a lock-loss, during commissioning times, or during maintenance, then look at the data in the past" -- intractable.
So, all this being said, in order to increase the amount of live time we have to commission and iterate the tests done with this path, I've unmonitored all the EPICS records associated with these pick-offs' input matrix and filters: 52 channels total. The explicit list of unmonitored channels is attached as screenshots.
Details:
The h1iopomc0 model only has one SDF file in /opt/rtcds/userapps/release/cds/h1/burtfiles/h1iopomc0/
    safe.snap
The target area's safe and OBSERVE .snap files that the front-end uses are both soft links to this file:
    /opt/rtcds/lho/h1/target/h1iopomc0/h1iopomc0epics/burt/
        OBSERVE.snap -> /opt/rtcds/userapps/release/cds/h1/burtfiles/h1iopomc0/safe.snap
        safe.snap -> /opt/rtcds/userapps/release/cds/h1/burtfiles/h1iopomc0/safe.snap
Using the graphical SDF interface, I unmonitored the channels while the graphical user interface pointed to the safe.snap in the target area (which thus overwrites the real file in the userapps area), and then committed the file to svn as rev r30313.
Tue Jan 07 10:07:23 2025 INFO: Fill completed in 7min 20secs
Following yesterday's work on the thermocouples, both TC-A and TC-B are today looking more like their pre-19Dec2024 values. Trip temp was lowered to -60C for today's fill.
TCmins = [-98C, -96C] OAT (3C, 37F).
While removing the damaged wiring, the stud broke on the associated heating element, so the entire element had to be replaced. Temperature fluctuated a bit but should even out before end of maintenance window.
TITLE: 01/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 6mph Gusts, 3mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.53 μm/s
QUICK SUMMARY: Magnetic injections currently running, locked for 2.5 hours. Maintenance day today.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
TITLE: 01/07 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
I don't fully understand why H1 Manager got stuck, but I had to request INIT on H1_MANAGER to unstick it.
11:57 UTC Ran an Initial_Alignment which worked well on the first try.
H1 ran through all locking states quickly.
H1 Currently Observing 13:04 UTC
Initial alignment took over an hour, so the timer there expired and prompted H1_MANAGER to go to ASSISTANCE_REQUIRED. It looks like the ALS Y arm took most of that time.
TITLE: 01/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We lost lock from a 7.1 from Nepal, and we've had a high state lockloss on the way back up. Currently powering up.
LOG: No log.
We just lost lock from a 7.0 from Nepal, I'm going to hold in down for it to pass.
I ran some simple tests of the PCALX_STAT Guardian node today.
I started to test things at 17:00 UTC but ran into a collision with Jim's measurements.
During the first attempt I was able to determine that closing the shutter causes the PCALX_STAT Guardian to go to the FAULT [37] state after 60 seconds.
It then cycles between WAITING [4], NOMINAL_PCAL [10], and FAULT multiple times per second until the fault is resolved. I have remedied this issue.
After Jim finished his work, I was able to continue testing the guardian using a few other buttons.
Potential PCAL Failure modes:
Things I've tested:
If the channel H1:CAL-PCALX_OSC_SUM_ON gets turned off, the PCALX_STAT Guardian alarms because it is checking for this. This channel is also being monitored via SDF.
The H1:CAL-INJ_END_SW channel being switched from X to Y does not trigger a fault, but it is being monitored via SDF.
The H1:CAL-INJ_MASTER_SW channel is also monitored via SDF.
The OFS loop open check worked well: it checks that the H1:CAL-PCALX_OPTICALFOLLOWERSERVOENABLE channel is set correctly and checks the H1:CAL-PCALX_OFS_PD_OUT16 voltage against a known threshold (see the sketch at the end of this entry).
The TXPD (H1:CAL-PCALX_TX_PD_WATTS_OUTMON) and/or RXPD (H1:CAL-PCALX_RX_PD_WATTS_OUTMON) could have some fault in their signals; the Guardian worked well for this as well.
H1:CAL-PCALX_SWEPT_SINE_ON could be turned off. This channel is currently not being monitored; a potential solution is to just monitor it via SDF. I will be waiting for an opportunity to change this during the next lockloss.
Edit: It may be easier to have the Guardian check this channel.
Things I have not tested:
If the Laser Diode Current increases or decreases: I do have a threshold for this, outside of which the Guardian will go to FAULT.
If the Laser Diode Temperature increases or decreases: I do have a threshold for this as well.
If the H1:CAL-PCALX_OFS_PD_OUT16 value falls out of threshold: I do have a check for this in the Guardian, but I have yet to test it.
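To make the OFS loop check described above concrete, here is a minimal sketch in the Guardian/ezca style; the threshold value and structure are illustrative, not the actual PCALX_STAT code:

    OFS_PD_THRESHOLD = 0.5    # volts; hypothetical value

    def ofs_loop_ok(ezca):
        # the optical follower servo must be enabled...
        if ezca['CAL-PCALX_OPTICALFOLLOWERSERVOENABLE'] != 1:
            return False
        # ...and the OFS PD output must sit above a known threshold
        return abs(ezca['CAL-PCALX_OFS_PD_OUT16']) > OFS_PD_THRESHOLD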
TITLE: 01/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently Observing and have been locked for over 14 hours. Chill shift today with some commissioning in the middle but mostly Observing.
LOG:
16:32 Out of Observing for Commissioning
19:30 Observing
20:29 Potential for noise in DARM due to a scheduled nearby explosion (probably up to a few hours after), tagging detchar
Start Time | System | Name | Location | Laser_Haz | Task | Time End
---|---|---|---|---|---|---
15:52 | Cement truck | | Staging building | n | Pouring cement | 17:22
15:00 | FAC | Kim | Optics Lab | n | Tech clean | 15:58
16:47 | SQZ | Sheila, Camilla | LVEA | y(local) | Adjusting OPO crystal position | 18:32
18:57 | PCAL | Tony | PCAL Lab | y(local) | PCALin' | 19:04
21:13 | ISC | Camilla | Optics Lab | n | Pulling out tools | 21:50
D. Davis, E. Capote, O. Patane
There was a discussion recently in the Detchar tools channel about how to interpret the cumulative range plots generated on the summary pages, such as today's cumulative range plot. Specifically, it seems incorrect that we could accumulate 30% of our range below 30 Hz.
Derek pointed out that this is misleading because the calculation of cumulative range in this manner is actually performed somewhat incorrectly. In short, range can be thought of as analogous to SNR, which is a quantity that must be added in quadrature. The order therefore matters when calculating a cumulative range, i.e. the range acquired from 10-20 Hz, then 10-30 Hz, 10-40 Hz, etc. The total cumulative range number, the one we think about all the time (160 Mpc, for example), is correct, but determining the range over a subset of the band (such as 10-30 Hz) needs to be done more carefully so it is not misleading.
Once we started discussing this, I pointed out that this means the way we compare ranges is also misleading: when we run our DARM integral comparison scripts, we subtract the cumulative ranges of two different DARM PSDs, but we subtract them in amplitude (Mpc) and not in quadrature (Mpc^2).
Derek has created an improved way to calculate cumulative range, which they have coined the "cumulative normalized range". To get right to the point: it is better to normalize the cumulative range squared by the total range. This is an example plot showing how these two differ. It shows that, for a given DARM PSD, the cumulative normalized range better estimates the sensitivity gained over a particular range of frequencies. The low-frequency portion is still very important (this results from the f^(-7/3) dependence in the range calculation), but indeed we gain very little sensitivity between 10-20 Hz, for example. You can also see that, when using the normalized method, the curve where you integrate up in frequency and the curve where you integrate down in frequency intersect at about 50% of the range, which is what you would expect.
In equation form, this image attachment defines the total cumulative range, and this image attachment shows our definition of the normalized cumulative range.
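For readers without the attachments handy, here is a plausible reconstruction of those definitions, consistent with the description above (the attached images and the forthcoming DCC document are the authoritative versions). Treating range like an SNR that adds in quadrature,

    R_{\mathrm{cum}}(f) = \left[ \int_{f_{\mathrm{low}}}^{f} \frac{dR^2}{df'}\,df' \right]^{1/2},
    \qquad
    R_{\mathrm{norm}}(f) = \frac{R_{\mathrm{cum}}^2(f)}{R_{\mathrm{tot}}},

so that R_norm reaches R_tot at the top of the band and contributions from disjoint frequency bands add linearly.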
In order to more sensibly compare two sensitivities by frequency, we have also derived a way to calculate the cumulative normalized range difference. The derivation is slightly more complicated, but the result is that you subtract the two cumulative normalized quantities, and then normalize by the sum of the two ranges.
This image attachment shows the equation form of this.
To make sense of why this method is better than the one we use now, imagine that we have two PSDs: one with 100 Mpc of range, and one that is exactly the same except that between 10-20 Hz there is an additional gain of 20 Mpc, such that the total range is now 120 Mpc. If you compare these two bizarre PSDs, you would expect the cumulative range difference between them to be 20 Mpc from 10-20 Hz, and then zero thereafter. This is an example plot showing how the cumulative range difference would appear using the method where you subtract the two cumulative ranges, and using this normalized range method. The normalized calculation behaves as expected, while the method that straightforwardly subtracts the two cumulative ranges overshoots the range gain from 10-20 Hz, and then misleadingly indicates the range is decreasing above 20 Hz to make up for it.
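To make the toy example concrete, here is a short numerical sketch. The normalized-difference formula used below is one plausible reading of the description above (the difference of the cumulative range-squared integrals, normalized by the sum of the two total ranges); the forthcoming DCC document has the authoritative definition.

    import numpy as np

    df = 0.5
    f = np.arange(10.0, 2000.0, df)                          # Hz
    # PSD 1: flat toy range-squared density giving 100 Mpc total range
    d_rsq_1 = np.full_like(f, 100.0**2 / (2000.0 - 10.0))    # Mpc^2 / Hz
    # PSD 2: identical, plus extra range-squared in 10-20 Hz so the total is 120 Mpc
    d_rsq_2 = d_rsq_1.copy()
    extra = 120.0**2 - 100.0**2                              # Mpc^2 gained between 10-20 Hz
    d_rsq_2[f < 20.0] += extra / 10.0                        # spread uniformly over the 10 Hz band

    cum_rsq_1 = np.cumsum(d_rsq_1) * df                      # cumulative range^2 [Mpc^2]
    cum_rsq_2 = np.cumsum(d_rsq_2) * df
    R1, R2 = np.sqrt(cum_rsq_1[-1]), np.sqrt(cum_rsq_2[-1])  # ~100 and ~120 Mpc

    # Method 1: subtract the two cumulative ranges directly (the current approach)
    diff_plain = np.sqrt(cum_rsq_2) - np.sqrt(cum_rsq_1)
    # Method 2: normalized cumulative range difference (plausible form; see caveat above)
    diff_norm = (cum_rsq_2 - cum_rsq_1) / (R1 + R2)

    i20 = np.searchsorted(f, 20.0)
    print(f"plain subtraction at 20 Hz:     {diff_plain[i20]:5.1f} Mpc (overshoots the 20 Mpc gain)")
    print(f"normalized difference at 20 Hz: {diff_norm[i20]:5.1f} Mpc (the expected 20 Mpc)")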
There is a lot of information to grasp here, and Derek and I will be posting a document to the DCC soon with a fuller explanation and full derivations. Oli has taken the time to implement these new methods in our DARM comparison scripts, and they will follow up here with more information about that.
As a start, I've only corrected these things in the range_compare script that I previously made based off of the Hanford Noise Budget darm_integral_compare script (81015). This script is a simplified version of the script used for creating NoiseBudget plots, so I thought it would be a good place to start making these changes. There are also plans to correct the calculations in other places (the summary pages and the official NoiseBudget scripts, for example).
All changes have been committed to git and are up to date in gitcommon/ops_tools/rangeComparison/. In addition to the changes necessary to correct the cumulative range plots, I also swapped out the way we were grabbing data so it now uses GWPy, and I added an additional plot that shows the cumulative sum of the range over frequency. Here's a comparison of the old vs new cumulative range plots.
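For reference, the GWPy data-grabbing pattern is along these lines (the channel name, GPS times, and FFT settings here are illustrative, not the script's actual values):

    from gwpy.timeseries import TimeSeries

    # fetch DARM-like data and compute an ASD; illustrative parameters only
    data = TimeSeries.get('H1:GDS-CALIB_STRAIN', 1420000000, 1420000600)
    asd = data.asd(fftlength=16, overlap=8, method='median')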
Derek and I have just uploaded a document to the DCC with a full workup of this change and some fun examples; see P2500021.