Wed Dec 18 10:12:11 2024 INFO: Fill completed in 12min 7secs
Today we had low TC mins [-156C, -155C] because of an unusually warm outside air temp of 56F (14C).
To hopefully help combat our recent bouts of PI24 ring-ups (alog81890), we've bumped the ETMY ring heater power from 1.1W per segment to 1.2W. The safe.snap and observe.snap are updated.
We also have a plan to update the PI guardian to temporarily increase the ETM bias to hopefully give us more range to damp the PI. We will need to be out of Observing for this, but we could hopefully save the lock. TBD.
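For reference, a minimal guardian-style sketch of what such a temporary bias bump could look like; the state name, channel names, scaling, and threshold here are all placeholders, not the actual plan:

from guardian import GuardState

# Placeholder names; in a real guardian module `ezca` is provided by the
# guardian runtime, and the real channels/threshold would differ.
BIAS_CHAN = 'SUS-ETMY_L3_LOCK_BIAS_OFFSET'      # assumed ESD bias channel
PI24_RMS = 'SUS-PI_PROC_COMPUTE_MODE24_RMSMON'  # assumed PI24 monitor
DAMPED_THRESHOLD = 10                           # counts; assumed

class DAMP_PI24_WITH_BIAS(GuardState):
    """Temporarily raise the ETMY bias to gain damping range on PI24."""
    request = False

    def main(self):
        self.nominal = ezca[BIAS_CHAN]
        ezca[BIAS_CHAN] = 1.2 * self.nominal  # temporary bias increase

    def run(self):
        if ezca[PI24_RMS] < DAMPED_THRESHOLD:
            ezca[BIAS_CHAN] = self.nominal    # restore once PI is damped
            return True                       # done; guardian can move on

The catch, as noted, is that changing the bias takes us out of Observing, so a state like this would only be worth entering when the PI is clearly running away.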
Sheila, TJ, Camilla
We've had 3 locklosses since 9th Dec from PI24 ringing up (10.4kHz on ETMY): last night 81883, Tuesday AM 81862, and on the 9th Dec; plot of all attached.
We had locklosses like this in September after the vent, Oli has a timeline in 80299. We increased the ETMY RH on September 26th 80320 and didn't have any PI locklosses since.
We normally damp this PI during the start of thermalization, and we successfully damped it after 13 hours locked on 10th Dec, plot.
We can see a 25Hz wiggle in DARM just before the lockloss, e.g. 81862. The lockloss tool isn't tagging these as DCPD Saturation so we should implement LLO's PI code: Locklost Issue #198
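For anyone picking up Locklost Issue #198, the check could look something like the sketch below. This is not LLO's actual code; the channel name and threshold are placeholders, and the channel must be sampled fast enough to resolve 10.4kHz:

from gwpy.timeseries import TimeSeries

def tag_pi_10k(lockloss_gps, channel='H1:OMC-DCPD_FAST_PLACEHOLDER',
               f0=10.4e3, bw=200., factor=5.):
    """Tag a lockloss as a 10.4 kHz PI if the band RMS around f0 grew
    by `factor` in the final seconds before the lockloss (all values
    illustrative)."""
    data = TimeSeries.get(channel, lockloss_gps - 120, lockloss_gps)
    band = data.bandpass(f0 - bw / 2, f0 + bw / 2)
    rms = band.rms(1)                  # 1 s RMS time series
    quiet = rms.value[:30].mean()      # reference level ~2 min before
    final = rms.value[-10:].max()      # level just before lockloss
    return final > factor * quiet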
Plotted is the location of the arm higher-order modes around 10kHz. They have moved so that their central frequency now coincides with the spikes around 10.4kHz, which excites the PIs more.
A possibly unrelated point: since the start of October, the circulating arm power has drifted down ~4kW (~1%) from 389 to 385kW when thermalized, plot attached. It's hard to know what's caused this; the IM4 TRANS drifted ~2% but was misaligned so would see alignment shifts 81735, and the POP also drifted ~1% before it was recentered 81329. Input power measured by IMC_PWR_IN remained constant.
You cannot just use L1's lockloss code for the tagging universally [should work for the 10kHz PI though].
Because of the way LHO changed their readouts for the OMC, digging into the Simulink model reveals that the PI code reads only the first 32kHz band for the RMS calculations.
You simply have no aliasing-down of the full band to use it the way we do, so it will never see an 80kHz PI.
Also, your scaling uses calibrated DARM units by the look of things, while we use a counts scale, so the numbers in the RMS readouts differ by many orders of magnitude.
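To make the aliasing point concrete, here is the folding arithmetic with an illustrative (not read-from-the-model) sample rate: a tone above Nyquist that is undersampled without an anti-alias filter folds to |f - round(f/fs)*fs|, whereas a path low-passed to the first 32kHz band simply never sees it.

fs = 65536.0  # illustrative acquisition rate (32.768 kHz band)
for f in (10.4e3, 80e3):
    f_alias = abs(f - round(f / fs) * fs)
    print(f"{f/1e3:5.1f} kHz tone -> appears at {f_alias/1e3:6.2f} kHz")
# 10.4 kHz is in band and unchanged; 80 kHz folds to ~14.46 kHz, so a
# deliberately aliased readout sees it while a band-limited one cannot.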
Very quick shot of the 10.4kHz HOMs 2 hours into the first lock with the EY ring heater bumped up to 1.2W from 1.1W. This will need to be checked again later, but it's promising so far.
We had two short locks under an hour last night, and both seem to have lost lock from the ETMX glitches that we see (1418536304 and 1418541959). The former was tagged as an ETM glitch and is clearly much more obvious; the latter was more minor, but still looks like there was something odd going on a few tenths of a second before the lockloss.
TITLE: 12/18 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 42mph Gusts, 32mph 3min avg
Primary useism: 0.09 μm/s
Secondary useism: 0.45 μm/s
QUICK SUMMARY: I just brought the IFO to IDLE since we are getting wind gusts into the 40s and ALS won't stay locked.
TITLE: 12/18 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Unknown lockloss at 5:51 UTC.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1418536304
Screenshot attached below
Current locking state: CARM_OFFSET_REDUCTION.
TITLE: 12/18 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.35 μm/s
QUICK SUMMARY:
Lockloss from a PI24 ring-up.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1418526747
The PI monitor tried to damp it but couldn't turn it around at all for over 15 minutes, at which point I attempted to manually damp it... this was not a successful activity.
TITLE: 12/18 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Just got back to Observing with the POP beam diverter open. We plan to leave this beam diverter open until the end of Tony's shift at 0600UTC. Relocking had three bits of testing slowing the process down: some ALS testing (alog81870), SRCL OLG (alog81872), and DRMI locking (alog81880), then a lockloss while closing the beam diverter... I pressed the wrong button. Relocking was straightforward after that.
LOG:
TITLE: 12/18 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 5mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.35 μm/s
QUICK SUMMARY:
IFO is locked and has been Observing for 26 minutes now.
Note about the Configuration tagging DetChar & PEM:
The CORNER_POP beam diverter is currently OPEN. When I leave tonight I will make sure to close the beam diverters, after saying "The beam diverters are about to close" in my best imitation of the verbal alarms voice.
Beam Diverter close instructions for future self:
Sitemap -> LSC -> Beam_Diverters -> Corner -> Corner_POP close button.
The dust monitor in the Vac Prep Lab is still not working.
Sheila D, TJ
Recently we've been having very long DRMI acquisition times. Sheila remembered that this might have to do with her PR3 changes on Nov 18 (alog81329). Tony confirmed this with some sleuthing (alog81879). Looking at the height of the POP air inputs to the MICH, SRCL, and PRCL triggers, these have almost doubled since the change on Nov 18th. So, we doubled the trigger levels for these in lscparams.py.
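The edit amounts to scaling threshold constants; a hypothetical sketch of the kind of change (the real variable names and numbers in lscparams.py differ):

# Hypothetical lscparams.py-style trigger thresholds: doubled so that the
# trigger stays at a similar percentage of the now ~2x higher POP
# buildups (cf. alog 44348 for the 2018 reference levels).
trig_thresh = {
    'PRCL': {'on': 2 * 40, 'off': 2 * 10},
    'MICH': {'on': 2 * 40, 'off': 2 * 10},
    'SRCL': {'on': 2 * 40, 'off': 2 * 10},
}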
Here's a reference from 2018 of what the trigger levels were, as a percentage of the locked buildups for DRMI, at a time when it was locking well: 44348
I added a comment here, 81885, pointing to Tony's alog. The main point is that we may also need to update the check that tells the guardian to run a PRMI alignment for the higher flashes.
WP12249
Jonathan, Dave:
The offloading of the past 6 months of raw minute trend files from h1daqtw0 SSD-RAID to permanent archive on HDD-RAID is completed.
nds0 restart      | 11:41 Mon 16dec2024                       |
file copy         | 12:06 Mon 16dec2024 - 12:59 Tue 17dec2024 | 24hr 53min
nds0 restart      | 13:27 Tue 17dec2024                       |
old file deletion | 13:38 - 15:56 Tue 17dec2024               | 2hr 18min
TW0 raid usage went from 91% to 2%. Jonathan made the DAQ puppet change to configure nds0's daqdrc file with the new archive.
Currently dust monitor LAB2 is not working: the 0.3u count is NaN and the 0.5u count is flatlined at zero.
Fil, Fernando, Patrick, Dave:
We are investigating a Beckhoff device error on the CS-AUX PLC, DEV1 on the CDS Overview. Device went into error at 11:21 PST.
Robert has powered down the chamber illuminator control chassis in the LVEA. This chassis permits remote control of the chamber illuminators via ethernet, which is not needed during O4. There is a worry that these chassis, even when in stand-by mode, could be a source of RF noise.
On the CDS overview I will change DEV1's display logic to be GREEN if the device count is 21 of 23 and RED for any other value.
New CDS Overview has GREEN DEV1 when device count = 21, RED otherwise.
J. Oberling, R. Short
This afternoon, Jason and I started to look into why the FSS has been struggling to relock itself recently. In short, once the autolocker finds a RefCav resonance, it's been able to grab it, but loses it after about a second. This happens repeatedly, sometimes taking up to 45 minutes for the autolocker to finally grab and hold resonance on its own (which led me to do this manually twice yesterday). We first noticed the autolocker struggling when recovering the FSS after the most recent NPRO swap on November 22nd, which led Jason to manually lock it in that instance.
While looking at trends of when the autolocker both fails and is successful in locking the RefCav, we noticed that the fastmon channel looks the most different between the two cases. In a successful RefCav lock (attachment 1), the fastmon channel will start drifting away from zero as the PZT works to center on the resonance, but once the temperature loop turns on, the signal is brought back and eventually settles back around zero. In unsuccessful RefCav lock attempts (attachments 2 and 3), the fastmon channel will still drift away, but then lose resonance once the signal hits +/-13V (the limit of the PZT as set by the electronics within the TTFSS box) before the temploop is able to turn on. I also looked back to a successful FSS lock with the NPRO installed before this one (before the problems with the autolocker started, attachment 4), and the behavior looks much the same as with successful locks with the current NPRO.
It seems that with this NPRO, for some reason, the PZT is frequently running out of range when trying to center on the RefCav resonance before the temploop can turn on to help, but it sometimes gets lucky. Jason and I took some time familiarizing ourselves with the autolocker code (written in C and unchanged in over a decade) to give us a better idea of what it's doing. At this point, we're still not entirely sure what about this NPRO is causing the PZT to run out of range, but we do have some ideas of things to try during a maintenance window to make the FSS lock faster:
As part of my FSS work this morning (alog81865), I brought the State 2 delay down from 1 second to 0.5, and so far today every FSS lock attempt has been grabbed successfully on the first try. I'll leave in this "Band-Aid" fix until we find a reason to change it back.
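The autolocker itself is decade-old C, but the logic of the Band-Aid is simple. A Python-flavored sketch of the sequence as we understand it (state names and structure are our paraphrase, not the real code):

import time

STATE2_DELAY = 0.5  # was 1.0 s; shorter so the temp loop engages before
                    # the PZT drifts into its +/-13 V rail

def autolock_once(find_resonance, engage_pzt, engage_temploop, is_locked):
    find_resonance()           # sweep until a RefCav resonance flashes
    engage_pzt()               # "State 2": PZT grabs the resonance
    time.sleep(STATE2_DELAY)   # the delay we shortened
    engage_temploop()          # temp loop recenters the PZT toward 0 V
    return is_locked()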
Sheila, Ibrahim, Jenne, Vicky, Camilla
After Ibrahim told Sheila that DRMI has been struggling to lock, she checked the POP signals and found that something's been drifting and POP now appears to be clipping; see POP_A_LF trending down. We use this PD in full lock so we don't want any clipping.
We had similar issues last year that gave us stability problems. Those issues last year didn't come with "IMC" locklosses, so we think this isn't the main issue we're having now, but it may be affecting stability.
Trends showing the POP clipping last year, and now. Last year we moved PR3 to remove this clipping while looking at the coherence between ASC-POP_A_NSUM (one of the QPDs on the sled in HAM3) and LSC-POP_A (LSC sensor in HAM1): 74578, 74580, 74581.
Similar coherence plots to 74580, made for the current data, show the coherence is bad:
Sheila is moving the PRC cavity now, which is improving the POPAIR signals; the plot attached is the ASC-POP_A_NSUM to LSC-POP_A coherence with DRMI only locked, before and during the move. See the improvement. Sheila is checking she's in the best position now.
We have been holding 2W with ASC on before powering up, and using the guardian state PR2_SPOT move, which lets us move the sliders on PR3 and moves PR2, IM4 and PRM sliders to follow PR3.
Moving PR3 by -1.7 urad (slider counts) increased the power on LSC POP and POPAIR 18, but slightly misaligned the ALS COMM beatnote. We continued moving PR3 to see how wide the plateau is on LSC POP; we moved it another -3 urad without seeing the power drop on LSC POP, but the ALS paths started to have trouble staying locked, so we stopped there. (POP18 was still improving, but I went to ISCT1 after the OFI vent and adjusted the alignment onto that diode 79883, so it isn't a nice reference.) I moved PR3 yaw back to 96 urad on the yaw slider (we started at 100 urad), a location where we were near the top for POP and POPAIR 18; so in total we started with PR3 yaw at 100 on the slider and ended at 96.
Please see Tony's comment about DRMI lock acquisition times here: 81879
When we moved PR3, we increased the level of light on the POPAIR B diode. That does two things: first, it means the LSC controls were triggered at a lower percentage of the full buildups, which could cause difficulty locking DRMI (we want to maintain trigger thresholds similar to what was documented in 44348). Also, the alignment check that Tony describes won't work well, because we didn't update the threshold the guardian uses to decide that the alignment is poor and we need to do PRMI.
From Tony's histograms I'm not sure which of these is the main impact on DRMI locking times, whether it's the triggering or the alignment check. We updated the trigger levels today, but not the threshold for the alignment check.
In the future we should check all these levels against 44348 again whenever we have a change in power on POPAIR.
Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
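For reference, a minimal gwpy sketch of the kind of time-frequency comparison used here; the microphone channel name and GPS times are assumptions:

from gwpy.timeseries import TimeSeries

start, end = 1418500000, 1418503600  # placeholder GPS stretch
for chan, name in (('H1:GDS-CALIB_STRAIN', 'darm'),
                   ('H1:PEM-CS_MIC_LVEA_BS_DQ', 'bs_mic')):  # mic name assumed
    data = TimeSeries.get(chan, start, end)
    specgram = data.spectrogram(30, fftlength=10, overlap=5) ** (1/2.)
    plot = specgram.plot(norm='log')  # look for a common drifting track
    plot.savefig(f'{name}_specgram.png')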
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y but this did make the chiller wobbly so we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much; we had the chillers off for a long period on 25th October 80882 when we flushed the chiller line, and the issue was seen before this date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 73704.
Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it was flattened and no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.
These ThermoFlex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0gpm, so our only change was a lower flow rate. The X chiller has been at 3.7gpm already, and the manual states that these chillers shouldn't be run below 3.8gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ touched the CO2 flow on Dec 12th around 19:45UTC 81791, so the flow rate was further reduced to 3.55gpm. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3gpm. This should push the chiller peak lower in frequency and further away from the Crab nebula.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering the flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before the 1st flow reduction), December 16 (before the most recent flow reduction) and December 18 (after the most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.
Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18
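A sketch of how the accelerometer overlay (attachments 4-6) can be regenerated; the channel name, GPS times, and plotting band are placeholders:

from gwpy.timeseries import TimeSeries

channel = 'H1:PEM-CS_ACC_TCS_CHILLER_Y_DQ'  # assumed accelerometer name
days = {'Dec 07': 1417629618, 'Dec 16': 1418407218, 'Dec 18': 1418580018}

plot, ax = None, None
for label, gps in days.items():
    asd = TimeSeries.get(channel, gps, gps + 3600).asd(fftlength=100)
    if ax is None:
        plot = asd.plot(label=label)
        ax = plot.gca()
    else:
        ax.plot(asd, label=label)
ax.set_xlim(55, 62)  # zoom around the Crab band (~59.3 Hz in h(t))
ax.legend()
plot.savefig('chiller_peak_tracking.png')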
Sheila, Jenne, Tony, Camilla
We've had locklosses in DRMI because the PRCL gain has been too high when locked on REFL 1F. Tony looked and thinks that this started with 77583, the day of our big shift in the output alignment.
Today we acquired DRMI with half the gain in the PRCL input matrix for 1F, this one acquisition was fast. I've attached the OLG measurements for PRCL and MICH after the change.
Tony is working on making histograms of the DRMI acquisition times, before the 23rd, from the 23rd to today, and eventually a histogram from today for the next few weeks to evaluate if this change has an impact on the DRMI acquisition times.
Jenne also found that our POP18 buildup seems higher in DRMI since the 23rd.
I'm no longer quite so sure about the conclusion that POP18 is higher, or at least higher by enough to really matter.
Here are 2 screenshots that I made extremely quickly, so they are not very awesome, but they can be a placeholder until Tony's much more awesome version arrives. They both have the same data, displayed 2 ways.
The first plot is POP18 and kappaC versus time. The x-axis is GPS time, but that's hard to interpret, so I made a note on the screenshot that it ranges from about April 20th (before The Big Shift) to today. Certainly during times when the optical gain was low, POP18 was also low. But POP18 is sometimes high even before the drop in optical gain, so it's probably unrelated to The Big Shift. That means that the big shift in the output arm is not responsible for the change in PRCL gain (which makes sense, since they should be largely separate).
The second plot is just one value versus the other, to show that there does seem to be a bit of a trend: if kappaC is low, then POP18 is definitely low. But the opposite is not true: if POP18 is low, kappaC isn't necessarily low.
The last attachment is the Jupyter notebook (you'd have to download it and fix up the suffix to remove .txt and make it a .ipynb again), with my hand-typed data and the plots.
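In the same spirit, a stand-in sketch of the two views (synthetic arrays for illustration; the real hand-typed data are in the attached notebook):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
kappa_c = rng.uniform(0.95, 1.02, 120)           # optical gain per lock
pop18 = kappa_c**2 * rng.uniform(0.9, 1.1, 120)  # loosely correlated stand-in
gps = np.linspace(1397000000, 1418500000, 120)   # ~Apr 20 through today

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(gps, pop18, '.', label='POP18')
ax1.plot(gps, kappa_c, '.', label='kappaC')
ax1.set_xlabel('GPS time')
ax1.legend()
ax2.plot(kappa_c, pop18, '.')  # one-sided trend: low kappaC -> low POP18
ax2.set_xlabel('kappaC')
ax2.set_ylabel('POP18')
fig.savefig('pop18_vs_kappac.png')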
I actually didn't load the guardian at the time of this change, so it didn't take effect until today.
So, we'd like histograms of DRMI acquisition times from before April 23rd, from April 23rd until today, and for a few weeks from today.
Using the summary pages, I was able to get a quick Google sheet to give me before-and-after histograms of how long ISC_LOCK was in DRMI 1F.
The first sheet's data is from before Nov 18th 2024, consisting of 100 gpstimes and durations where ISC_LOCK was in ACQUIRE_DRMI_1F.
The second sheet's data is from after Nov 18th 2024, also consisting of 100 gpstimes and durations where ISC_LOCK was in ACQUIRE_DRMI_1F.
Interesting notes about ISC_LOCK:
ISC_LOCK will request PRMI or CHECK_MICH_FRINGES somewhere between 180 seconds and 600 seconds, depending on how much light is seen on AS_AIR.
If AS_AIR sees flashes above 80, then ISC_LOCK will not kick us out of DRMI until 600 seconds.
So it looks like one of the changes that happened on or around Nov 18th made the flashes on AS_AIR higher, but we are still not actually locking DRMI.
We had fewer ACQUIRE_DRMI durations over 180 seconds before Nov 18th's changes; see the sketch below.
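A sketch of how the comparison can be reproduced outside of a Google sheet; the timeout logic is as described in the notes above, and the duration lists are stand-ins for the scraped summary-page data:

import matplotlib.pyplot as plt

def drmi_timeout(as_air_flash):
    # Per the notes above: flashes above 80 on AS_AIR buy the full 600 s
    # before ISC_LOCK falls back to PRMI / CHECK_MICH_FRINGES.
    return 600 if as_air_flash > 80 else 180

durations_before = [45, 90, 120, 200, 60, 300]   # stand-in samples
durations_after = [150, 400, 600, 550, 90, 600]  # stand-in samples

plt.hist([durations_before, durations_after], bins=range(0, 660, 60),
         label=['before Nov 18', 'after Nov 18'])
plt.xlabel('time in ACQUIRE_DRMI_1F (s)')
plt.ylabel('number of attempts')
plt.legend()
plt.savefig('drmi_acquisition_histogram.png')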
The previous timing master, which was again running out of range on the voltage to the OCXO (see alogs 68000 and 61988), has been retuned using the mechanical adjustment of the OCXO.
Today's readback voltage is at +3.88V. We will keep it running over the next few months to see if it eventually settles.
Today's readback voltage is at +3.55V.
Today's readback voltage is at +3.116V.
Today's readback voltage is at +1.857V.
Today's readback voltage is at +0.951V.
Today's readback voltage is at -2.511V.