After seeing the reflected spots on the PSL Quad display this morning and the "ISS diffracted power low" notification, since no one was using the beam today I performed a remote beam alignment tweak for both the PMC and RefCav. I started shortly after 10am, after Masayuki and I finished his PMC measurements and sufficient time had passed for the PMC to recover. The PMC Refl spot looked brighter on the left side vs. the right, indicating a potential alignment shift in yaw, while the RefCav Refl spot looked like it needed a pitch adjustment.
Starting with the PMC, with the ISS OFF there was ~103.3 W in transmission and ~25.0 W in reflection. Tweaking the beam angle (entirely in yaw, as expected based on the Refl spot; pitch adjustment achieved nothing) resulted in ~105.2 W transmitted and ~22.7 W reflected; walking the beam got things a little better, with ~105.5 W transmitted and ~22.3 W reflected. It seems this drop in PMC transmission was due to a slow alignment drift as the enclosure recovered from our NPRO swap ~10 days ago. With the ISS ON and diffracting ~3.8% (I had to adjust the RefSignal to -1.97 V from -1.95 V due to the increase in PMC transmission), the PMC is now transmitting ~105.7 W and reflecting ~22.4 W.
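As a rough sanity check of the improvement (illustrative only; this assumes the power incident on the PMC is approximately the sum of transmitted and reflected power, ignoring losses):

```python
# Rough PMC coupling estimate, ignoring absorption/scatter losses (illustrative only).
before_T, before_R = 103.3, 25.0   # W, transmitted / reflected, ISS OFF, before the tweak
after_T,  after_R  = 105.5, 22.3   # W, after the yaw tweak and beam walking

for label, T, R in [("before", before_T, before_R), ("after", after_T, after_R)]:
    frac = T / (T + R)             # fraction of the (assumed) incident light transmitted
    print(f"{label}: ~{frac:.1%} of ~{T + R:.1f} W incident")
```

This gives roughly 80.5% before vs. 82.6% after, consistent with a small alignment-driven coupling loss that has now been recovered.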
Moving on to the RefCav, with the IMC locked the RefCav TPD was ~0.79 V. Adjusting the beam angle got things a little better, with a TPD of ~0.82 V; as expected, this adjustment was almost entirely in pitch. While walking the beam alignment the IMC unlocked and was having a hard time relocking, so I asked TJ to take it to OFFLINE while I finished the alignment; I note this because with the IMC unlocked the RefCav TPD is generally a little higher than with the IMC locked. With the IMC unlocked I was able to get the RefCav TPD to ~0.86 V by walking the beam alignment. While I was not able to get the TPD higher than this, the Refl spot on the PSL Quad display still looks like there's some alignment work to do; the central spot is usually centered in the ring that surrounds it, but now it looks a little low and left. I don't have an explanation for this currently; we'll keep an eye on it in the coming days and see if/how things change. TJ relocked the IMC and it did not have a problem this time.
Following from last week (alog81493), I've now replaced the Increase_Flashes (IF) state calls with SCAN_ALIGNMENT (SA). The former still exists (I've created parallel paths in the ALS arm nodes), but it's not called by ISC_LOCK or INIT_ALIGN. If there are any troubles, reverting ISC_LOCK and INIT_ALIGN should be all that's needed (no need to touch the ALS nodes).
Last week I wanted to return the functionality to immediately stop if it sees a flash above threshold. I added this back in, but since it has to watch slow channels, it rarely triggers. Even so, SA seems to be slightly faster than IF, and accuracy seems similar.
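For reference, the early stop is conceptually just a threshold check in the state's run method; below is a minimal Guardian-style sketch (the channel name, threshold, and structure are placeholders, not the actual SCAN_ALIGNMENT code):

```python
# Guardian-style sketch only -- 'ezca' and 'notify' are provided by the Guardian runtime.
from guardian import GuardState

FLASH_THRESHOLD = 0.8  # placeholder value

class SCAN_ALIGNMENT(GuardState):
    def run(self):
        # This polls a slow (EPICS-rate) channel, so a brief flash can easily fall
        # between samples -- which is why the early exit rarely triggers in practice.
        flash = ezca['ALS-X_TR_A_LF_OUTPUT']  # placeholder channel name
        if flash > FLASH_THRESHOLD:
            notify('flash above threshold -- stopping scan early')
            return True   # state completes and the requestor can move on
        # ... otherwise keep stepping the alignment offsets ...
        return False
```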
I've also added some more fault checking and a way to "gracefully" escape the state. I would still like to go through and clean up the code a bit, since it's currently a Frankenstein of comments from me integrating this; that includes some log messages and notifications that might still reference IF. For another time.
Maintenance wrapped up; the most notable item was an internet outage. We are back to Observing.
The LHO offsite network has been down since 06:30 PST this morning (Tue 03 Dec 2024). Alog has just become operational, but network access to CDS is still down.
TITLE: 12/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
LOG: Quiet shift, locked for just about 10 hours. DIAG_MAIN has been notifying "PSL ISS diffracted power is low"; it's around 2.8% right now.
J. Freed,
On Wednesday Nov 20th, the Double Mixer topology for SPI D2400315-x0 was tested, and it works as we intended as a way to generate an 80 MHz - 4096 Hz signal, although with a lot of noise.
DMPrototype.png is a diagram of the Double Mixer design.
DM_First_Test.png is a diagram of the test done on the Double Mixer. This initial test was just to check that the Double Mixer produces the expected output signal frequency; as such, the amplifier was ignored.
DM_FT_Result.png shows the output on the scope. Blue is the double mixer output. White is a reference from an SRS SG382 (synced with the 80 MHz OCXO via 10 MHz Timing) with a frequency of 79,995,904 Hz and power of -0.91 dBm. The SRS signal, timed with the OCXO, and the double mixer output correctly lock to each other. This gives credence that this design will work for the goal of the double mixer: a 79,995,904 Hz signal frequency-locked to an 80 MHz OCXO. However, there is a lot of noise in the double mixer signal. An investigation is underway to find and reduce this noise.
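For reference, the quadrature (sin/cos) paths implement standard single-sideband (image-reject) mixing; assuming ideal 90-degree splits and balanced mixers, with f1 = 80 MHz and f2 = 4096 Hz:

```latex
\cos(2\pi f_1 t)\,\cos(2\pi f_2 t) + \sin(2\pi f_1 t)\,\sin(2\pi f_2 t)
  = \cos\!\big(2\pi (f_1 - f_2)\, t\big),
\qquad f_1 - f_2 = 79{,}995{,}904~\mathrm{Hz}.
```

Amplitude or phase imbalance in the 90-degree split (or in the mixers) leaves a residual at the image frequency f1 + f2, which is one plausible contributor to the extra noise.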
ZMSCQ_Wave_Shape.png shows a possible source of the noise. The Double Mixer requires a phase delay for the sin path; we used a Mini-Circuits ZMSCQ-2-90B for this. I tested the ZMSCQ-2-90B by plugging its input signal and the signals from the two output ports into an oscilloscope. I split off the input signal with a Mini-Circuits ZMSC-2-1BR+ (summer/splitter) and had to normalize the input power pickoff in postprocessing by a factor of 0.68 due to the power difference. The results show that the output of the ZMSCQ-2-90B has a slightly different shape than what was plugged in, with the delay port (orange) having a slight "hump" on the rise. Will retake with a faster oscilloscope, as the signal is very noisy.
The investigation is continuing; the next step is checking the double mixer output signal with a network analyzer to look at the frequencies around 79,995,904 Hz. A phase noise measurement would also work, with the caveat that the phase adjustment needed to destructively interfere the double mixer signal with the reference signal would have to be set manually.
Sheila and I were looking at one of the most recent ETMX glitch locklosses and found something interesting. About 150 ms before this particular lockloss there is a glitch that saturates L3, which seems to be between a few hundred Hz and 1 kHz. The glitch is big enough to propagate to L2 and L1 very quickly. The IFO rides out one glitch, but seems to lose lock during a similar glitch immediately after. There were also 4 other glitches during this lock that looked somewhat similar but didn't cause a lockloss. The open loop for the ESD is around 70 Hz, so we think it might be possible to add some more low-pass to reduce the drive on the ESD and reduce the chance of saturating the DAC. I'm looking at that.
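As a rough illustration of the trade-off (not a proposed filter; the order and corner frequency below are placeholders), one can check how much suppression a candidate low-pass gives in the few-hundred-Hz glitch band against the phase it costs near the ~70 Hz crossover:

```python
import numpy as np
from scipy import signal

fs = 16384  # Hz, typical SUS model rate
# Placeholder design: 2nd-order Butterworth low-pass at 300 Hz
b, a = signal.butter(2, 300, btype='low', fs=fs)

# Gain/phase at the ~70 Hz crossover and in the band where the glitch lives
f_check = [70, 500, 1000]
w, h = signal.freqz(b, a, worN=f_check, fs=fs)
for f, resp in zip(f_check, h):
    print(f"{f:5.0f} Hz: gain {20*np.log10(abs(resp)):6.1f} dB, "
          f"phase {np.degrees(np.angle(resp)):6.1f} deg")
```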
While trying to understand the path to the DAC on this SUS, Sheila and I had a hard time finding the custom screen for the parts added to the top of the ETMX SUS model for the 32-bit (-ish?) DAC on that suspension. It lives on the WD tab on the sitemap and is called DAC_TEST, shown in the second image. There are calibration gains of 275.310 applied to top-level filter banks on the output of the model; these filter screens can be opened by clicking on the rows of black OUT GAIN and OUT VAL EPICS readbacks at the bottom of the screen. I don't think clicking any of the 0|1 buttons in the middle of this screen is safe.
TITLE: 12/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Currently Observing at 157Mpc and have been Locked for 4.5 hours. Today we had some trouble getting DRMI to lock, and a lockloss from TRANSITION_FROM_ETMX, but nothing too difficult to fix and bypass.
LOG:
15:30 Observing at 157Mpc and have been Locked for over 2 hours
16:33 Out of Observing for Commissioning
16:58 Lockloss due to commissioning activities
- During relocking, we stopped at DRMI_LOCKED_CHECK_ASC because SRM M3 and M1 LF/RT were railed. Clearing the SRCL1 history fixed M3, but not M1 LF/RT. We cleared the history on the M1 locking filters, leading to a lockloss and an SRM trip. This happened because, during the commissioning before the lockloss, SRM was accidentally moved so far out of alignment that during DRMI, ISC_LOCK thought the lock it caught was DRMI when it was actually PRMI, which caused it to push very hard on SRM.
- Lockloss from LOWNOISE_ESD_ETMX
- The next time we finished TRANSITION_FROM_ETMX, I waited 9 seconds after it completed before moving on to LOWNOISE_ESD_ETMX
- Multiple locklosses from lower states like ALS and IR
20:02 NOMINAL_LOW_NOISE
20:08 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:16 | FAC | Karen | OptLab/VacPrep | n | Tech clean | 16:42 |
18:05 | FAC | Karen | MY | n | Tech clean | 18:58 |
18:54 | VAC | Jordan, Gerardo | LVEA | n | Moving leak cart | 19:33 |
22:34 | | RyanC | OpticsLab | n | Testing blues | 23:36 |
23:15 | FAC | Tyler | Along XARM | n | Taking inventory | 00:29 |
TITLE: 12/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 0mph Gusts, 0mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.34 μm/s
QUICK SUMMARY: Observing for over 4 hours, calm environment.
Closes FAMIS26449, last checked in alog80350
BRS-Y temperature looks to be slowly trending upwards since ~10/22.
WP12223 FRS32776
Jonathan, Erik, Dave:
cdslogin is currently down for a rebuild following a disk failure yesterday. Alarms and Alerts are currently offline. EDC has 58 disconnected channels (cdslogin epics_load_mon and raccess channels).
Rebuild is finished. Alarms are working again.
Lockloss @ 12/02 16:58 UTC after 3:39 locked due to commissioning activities
Back to Observing as of 20:08UTC
TITLE: 12/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 142Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: A long relock today from DRMI locking struggles and lock losses at Transition_From_ETMX. Once back up to observing, the range hasn't been very stable, hovering between 140 and 150 Mpc. I was waiting for a drop in triple coincidence before trying to tune squeezing, but that didn't happen before the end of my shift. The 15-50 Hz area in particular looks especially poor.
LOG:
Plot of range and glitches attached.
It didn't look like squeezing was explicitly the issue last night when the range was low, as SQZ was stable (sqz plot). The range seemed to correct itself on its own as we stayed in Observing. If it happens again, we could go to NoSQZ or FIS to check it's not backscatter from SQZ or FC.
Oli and I ran a Bruco during the low range time (bruco website), but Sheila noted that the noise looks non-stationary, like scatter, so a bruco isn't the best way of finding the cause.
I ran a range comparison using 50 minutes of data from a time in the middle of the bad range and a time with good range after it stopped. Excess noise looks to be mostly below 100 Hz for sensmon; for DARM the inflection point looks to be at 60 Hz, and there is broadband noise, but low frequency again seems larger.
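For anyone wanting to reproduce this kind of check, here is a minimal gwpy sketch (the channel name and GPS times are placeholders, not the exact ones used):

```python
from gwpy.timeseries import TimeSeries

CHAN = 'H1:GDS-CALIB_STRAIN'
t_bad, t_good = 1417150000, 1417160000   # placeholder GPS start times
dur = 50 * 60                            # 50 minutes of data each

# ASDs of the bad-range and good-range stretches, then their ratio
asd_bad  = TimeSeries.get(CHAN, t_bad,  t_bad  + dur).asd(fftlength=16, overlap=8)
asd_good = TimeSeries.get(CHAN, t_good, t_good + dur).asd(fftlength=16, overlap=8)

plot = (asd_bad / asd_good).plot()
ax = plot.gca()
ax.set_xlim(10, 1000)
ax.set_ylabel('ASD ratio (bad / good)')
plot.savefig('range_drop_asd_ratio.png')
```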
I also checked these same times in my "Misaligned" GUI that compares SUS top mass OSEMs, witness sensors, and oplev average motion to compare alignments for relocking and to look for drifts. It doesn't look all that useful here; the whole IFO is moving together throughout the lock. I ran it for separate times within the good range block as well and it looked pretty much the same.
As discussed in today's commissioning meeting, if this low range with glitches on Omicron at low frequency happens again, can the operator take SQZ_MANAGER to NO_SQUEEZING for 10 minutes so that we can check this isn't caused by backscatter from something in HAM7/8. Tagging OpsInfo.
Runs of HVeto on this data stretch indicate extremely high correlations between strain glitches and glitches in SQZ FC channels. The strongest correlation was found with H1:SQZ-FC_LSC_DOF2_OUT_DQ.
The full HVeto results can be seen here: https://ldas-jobs.ligo-wa.caltech.edu/~detchar/hveto/day/20241202/1417132818-1417190418/
An example of H1 strain data and the channel highlighted by HVeto can be seen in the following attached plots:
Derek also kindly ran lasso for that time period (link to lasso run), and the top correlated channel is H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ. Back in May we were seeing correlations between drops in range, FC alignment, and the values in this same TCS channel (78089). Here's a screenshot of the range vs. that channel - the TCS channel matches how it was looking back in May. As stated in that May alog thread, the cable for this channel was and still is unplugged :(
We've now had two lock losses from the Transition from ETMX state, or immediately after, while trying to reacquire. Unlike back on Nov 22 (alog81430), SRCL seems fine. For the first one, the lockloss tool (1417133960) shows the IMC-SERVO_SPLITMON channel saturating ~5 sec before the lock loss, then some odd LSC signals 40 ms before the tool tagged the lock loss (attachment 1); this might just be the lock loss itself, though. The second one (lockloss tool 1417136668) hasn't tagged anything yet, but ETMY has a glitch 2.5 sec before the lock loss and ETMX seems to move more from that point on (attachment 2).
Another one while I was looking at the code for the Transition_From_ETMX state. We were there for a few minutes before I noticed the CHARD & DHARD inputs ringing up. Unsure how to save it, I just requested it to move on, but that led to a lock loss.
I ended up changing SUS-ITMX_L3_LOCK_BIAS_TRAMP from 30 to 25 to hopefully move to a safer place sooner. Since it was already changed from 60 a week ago, I didn't want to go too short. It worked this one time.
Sheila, Camilla
We had another lockloss with an ETMX_L2 glitch beforehand this morning (plot), and it seems that even a successful transition this morning had a glitch too, though a smaller one (plot). We looked at the ISC_LOCK.py code and it's not yet obvious what's causing this glitch. The successful transition also had a DARM wobble up to 2e6 (plot), but when we have the locklosses, DARM goes to ~10e6.
While looking at all the filters, we found that the ETMX_L2_LOCK_L ramp time is 20 s (screenshot), although we only wait 10 s in ISC_LOCK. We will edit this tomorrow when we are not observing. We don't think this will affect the glitch, as there is no input/output to this filter at the time of the glitch.
The only thing that seems like it could cause the glitch is DARM1 FM1 being turned off; we don't yet understand how, and we had similar issues we thought we had solved in 77640.
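For context on the mechanism: switching a filter module off ramps it out over the ramp time set in foton, so the Guardian code has to wait at least that long before taking the next step. A schematic sketch (not the actual ISC_LOCK code; 'ezca' is provided by the Guardian runtime):

```python
import time

# Schematic only -- illustrates the FM-switch-plus-wait pattern discussed above.
ezca.switch('LSC-DARM1', 'FM1', 'OFF')  # filter ramps out over its foton-defined ramp time
time.sleep(10)                          # must be >= that ramp time, otherwise the next
                                        # step happens while the filter is still ramping
```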
This morning I edited the ETMX_L2_LOCK_L FM1 ramp time down to 10 s and reloaded the coefficients.
Oli, Camilla WP12203. Repeat of some of the work done in 2019: EX: 52608, EY: 52636, older: part 1, part 2, part 3.
We misaligned ITMY and turned off the ALS-Y QPD servo with H1:ALS-Y_PZT_SWITCH and placed the Ophir Si scanning slit beam profiler to measure both the 532nm ALSY outgoing beam and the ALSY return beam in the HWS path.
The outgoing beam was a little oblong in the measurements but looked pretty clean and round by eye; the return beam did not! Photos of the outgoing and return beams attached. The outgoing beam was 30 mW, the return beam 0.75 mW.
Attached are the 13.5% and D4sigma measurements; I also have photos of the 50% measurements if needed. Distances are measured from the optic where the HWS and ALS beams combine, ALS-M11 in D1400241.
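As a reminder of how the two clip levels relate (exact only for an ideal TEM00 Gaussian; the structured return beam will deviate from this), the D4-sigma diameter equals the 13.5% (1/e^2) clip diameter:

```latex
I(r) = I_0\, e^{-2 r^2 / w^2} \;\Rightarrow\; \sigma = w/2, \qquad D_{4\sigma} = 4\sigma = 2w = D_{1/e^2}.
```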
We had previously removed HWS-M1B and HWS-M1C and translated HWS-M1A from what's shown in D1400241-v8 to remove clipping.
TJ, Camilla
We expanded on these measurements today: we measured the positions of the lenses and mirrors in both the ALS and HWS beam paths and took beam-scan data further from the periscope, where the beam is changing size more. Data attached for today, and all data together calculated from the VP. A photo of the beam scanner in the HWS-return ALS beam path is also attached.
Oli, Camilla
Today we took some beam measurements between ALS-L6 and ALS-M9. These are in the attached documents with today's data and all the data. The horizontal A1 measurements seemed strange before L6; we're unsure why, as further downstream, where the beam is larger and easier to see by eye, it looks round.
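If it helps with the mode-matching calculation, waist size and location can be extracted from the scan-vs-distance data with the usual Gaussian-beam fit; here is a minimal sketch (the data points are placeholders, not our measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

lam = 532e-9  # m, ALS wavelength

def beam_radius(z, w0, z0):
    """1/e^2 intensity radius of a TEM00 beam with waist w0 located at z0."""
    zR = np.pi * w0**2 / lam
    return w0 * np.sqrt(1 + ((z - z0) / zR)**2)

# Placeholder data: distance from the reference optic [m] vs. measured 1/e^2 radius [m]
z = np.array([0.2, 0.5, 0.9, 1.4, 2.0])
w = np.array([420e-6, 460e-6, 560e-6, 720e-6, 930e-6])

(w0, z0), _ = curve_fit(beam_radius, z, w, p0=[400e-6, 0.0])
print(f"waist w0 = {w0*1e6:.0f} um at z0 = {z0:.2f} m from the reference optic")
```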
Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both the CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y, but this made the chiller wobbly, so we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much; we had the chillers off for a long period on 25th October (80882) when we flushed the chiller line, and the issue was seen before this date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).
Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it was flattened and no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.
These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so our only change was a lower flow rate. The X chiller has already been at 3.7 gpm, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ adjusted the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 gpm. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab nebula.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before 1st flow reduction), December 16 (before most recent flow reduction) and December 18 (after most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.
Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18
Everything is operational now.