Lockloss at 2025-06-25 21:45 UTC, probably from the wind. Three seconds before the lockloss, we had an EX saturation. The wind jumped up all of a sudden, peakmon jumped up (although from very low to still pretty low, LSC CPSFF was affected), DARM had a bit of oscillation, and almost all the ASC channels rang up starting 20-25 seconds before the lockloss.
FAMIS 26395, last checked in alog85021
All fans look largely unchanged compared to last week and are all within noise thresholds.
All looks well, aside from the known issue with LAB2; LVEA5 also seems frozen, and I'll investigate that tomorrow during maintenance.
LVEA5 being off is expected, it's a pumped dust monitor so we turned it off for observing.
Closes FAMIS26392
For the CS fans, they look fine, although MR_FAN5_170_2 is a bit noisy.
For the OUT building fans, there's a periodic noise increase on a few different fans. EY_FAN2_470_2, EX_FAN1_570_{1,2} and MX_FAN2_370_1.
Kiet, Robert, and Carlos
We report the results of calibrating the LEMI magnetometers in the Vault outside of the X arm.
We went out on May 15th, 2025 and took the following measurements, each lasting 2 minutes.
Far field injection: to calibrate the LEMI
1) 17:36:45 UTC; X-axis far-field injection; without preamp on the Bartington
2) 17:43:23 UTC; X-axis far-field injection; without preamp on the Bartington
3) 18:01:53 UTC; X-axis far-field injection; without preamp on the Bartington
4) 18:04:15 UTC; X-axis far-field injection; without preamp on the Bartington
5) 18:23:25 UTC; Y-axis far-field injection; with preamp on the Bartington
6) 18:26:00 UTC; Y-axis far-field injection; with preamp on the Bartington
The preamp gain is 20; all injections were done at 20 Hz. The coil used for far-field injection has 26 turns and 3.2 Ohms resistance.
The voltage used to drive the injection coil for the far-field injections was Vp-p: 13.2 +- 0.1 V. It was windy out, so we decided to use the preamp on the Bartington magnetometer.
The LEMI channels used for this analysis: H1:PEM-VAULT_MAG_1030X195Y_COIL_X_DQ; H1:PEM-VAULT_MAG_1030X195Y_COIL_Y_DQ
Bartington calibration
7) 18:49:38 UTC; with preamp on the Bartington
8) 18:52:50 UTC; with preamp on the Bartington
The voltage used to drive the injection coil for this calibration was Vp-p: 1.88 +- 0.01 V.
We inserted the Bartington magnetometer into the center of a cylindrical coil to calibrate its z-axis (1000 Ohms, 55 turns over 0.087 m).
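Given the coil parameters and drive voltage quoted above, the expected field at the coil center can be sanity-checked with the long-solenoid formula B = mu0*(N/L)*I. This is only a sketch: it assumes the coil is well approximated by a long solenoid (which may not hold for this geometry) and that the 1000 Ohms dominates the circuit impedance.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
N_TURNS = 55           # turns, from this log
LENGTH = 0.087         # m, coil length, from this log
R_COIL = 1000.0        # Ohms, from this log
V_PP = 1.88            # V peak-to-peak drive, from this log

# Peak-to-peak current through the coil (ignoring lead/source impedance)
I_pp = V_PP / R_COIL
# Field at the coil center under the long-solenoid approximation, p-p
B_pp = MU0 * (N_TURNS / LENGTH) * I_pp   # roughly 1.5e-6 T p-p
```

This puts the injected calibration field in the microTesla range, comfortably above ambient noise for a 2-minute line at 20 Hz.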
The final result of the LEMI calibration, after taking all the measurements into account, is (9.101 +- 0.210)*10^-13 Tesla/count. There is a 20% difference between this measurement and the measurement taken pre-O3. Robert noted that when taking the previous measurements, the calibrating magnetometer was not fully isolated from the LEMI; this time they are completely independent.
We recommend that analyses using LEMI data use the calibration value of 9.101 * 10^-13 Tesla/count +- 5% to be conservative.
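Applying the recommended value is a single multiplication; a minimal helper (the function name is ours, not site code) that also carries the conservative 5% uncertainty:

```python
LEMI_CAL = 9.101e-13   # Tesla per count, from this measurement
CAL_FRAC_ERR = 0.05    # recommended conservative fractional uncertainty

def lemi_counts_to_tesla(counts):
    """Convert raw LEMI counts to Tesla; returns (field, uncertainty)."""
    b = counts * LEMI_CAL
    return b, abs(b) * CAL_FRAC_ERR

# e.g. 1e6 raw counts corresponds to ~0.91 microTesla +- 5%
field, err = lemi_counts_to_tesla(1.0e6)
```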
For FAMIS #26389: All looks well for the last week for all site HVAC fans (see attached trends).
Jennie W, Robert S, Sheila, Camilla, Georgia
We are worried about jitter peaks in DARM and input pointing just now so Camilla and Sheila asked me to look at the PEM accelerometer we have on the main periscope in the PSL.
The first image is a time series of periscope acceleration on the top, lock state in the middle and range on the bottom. The first vertical cursor shows a time when we were locked at NLN before the vent on 1st April, and the second is during a quiet time with no squeezing injected that Camilla and co. were using for squeeze measurements yesterday 4th June. One can see that the rms motion increased from around 670 counts before the vent to around 1200 after the vent.
The second image shows the spectra of the accelerometer on the top left plot, the darm spectra on the bottom left, and the coherence from accelerometer to darm on the top right plot - plotted for both times.
Ref 0: periscope ASD now
Ref 1: periscope ASD pre-vent
Ref 2: DARM ASD pre-vent
Ref 3: periscope-to-DARM coherence pre-vent
Ref 4: periscope-to-DARM coherence post-vent
Ref 5: DARM ASD post-vent
After looking over these with Robert, he thinks that the jitter is not much higher up to about 200 Hz but noticeably worse between 500 Hz and 1 kHz. There have been vacuum pumps attached to HAM1 post-vent, but these would not cause the broadband features we see in this region and would instead be responsible for narrow peaks.
He wants to check the air handling in the PSL enclosure to rule this out as a source.
Since the coherence with DARM is not large for these peaks, they are likely not causing our jitter problems currently.
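The coherence check described above can be reproduced offline with scipy. This is a synthetic sketch only: the sample rate, line frequency, and coupling are made up, and the real measurement uses the actual accelerometer and DARM channels in DTT.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 2048                       # Hz, assumed sample rate for this sketch
t = np.arange(0, 16, 1 / fs)    # 16 s of data

# Synthetic periscope line at 600 Hz, weakly coupled into a noisy "DARM"
jitter = np.sin(2 * np.pi * 600 * t)
acc = jitter + 0.1 * rng.standard_normal(t.size)
darm = 0.3 * jitter + rng.standard_normal(t.size)

# Welch-averaged coherence with 1 Hz bins (~30 averages at 50% overlap)
f, Cxy = coherence(acc, darm, fs=fs, nperseg=fs)
peak = Cxy[np.argmin(np.abs(f - 600))]   # coherence at the injected line
```

With enough averages, coherence at the coupled line stands out well above the ~1/N-averages noise floor elsewhere, which is the same logic used to judge whether the periscope peaks matter for DARM.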
Closes FAMIS 26386, FAMIS 26384, and FAMIS . Last checked in alog 84405.
Everything is under threshold. Trended 2 weeks since the check was not done the week prior.
Camilla, TJ
I went to turn on the CO2 lasers to prep for locking today. I found both power supplies on the mechanical room mezzanine needed to have their outputs turned on. For TCSX, I was then able to turn the controller on, turn the key, hit the gate button, then turn on the laser via medm as usual. For TCSY though, the controller complained of a Flow Alarm as soon as the unit was turned on, and turning the key or hitting the gate would not clear it. The flow was reading 2.45gpm according to the paddle wheel flow meter on the floor, and before the vent we were just above 2.5gpm on the floor and 3.4gpm at the chiller. We had reduced the flow of this chiller back in December with Robert (alog81246), so I tried bumping up the flow slightly to 2.6gpm on the floor in the hope of clearing the flow alarm. Now, the flow alarm wasn't present when turning the unit on, but when I turned the key and hit the gate button, the flow alarm came back.
At this point Camilla and I checked cable connections, tried turning the chassis off and on while waiting a bit, patting our heads while rubbing our bellies, and some other non-fruitful things. Eventually we moved the flow back up to 3.8gpm at the chiller and 2.8gpm at the floor and the flow alarm never showed up. We tried to bring the flow down a bit, but the flow alarm would return. We have no idea why the controller isn't happy with us running the flow at levels that we have been for all of 2025. For now, sorry Crab Nebula.
(CoreyG, MitchR, RandyT)
Wind Fence News
Today, Location #4 (of 6) had its fabric panel started; it was attached above the middle (so 3 of 5 horizontal cables were secured to this panel). The panel was left secured at this stage since the afternoon winds were beginning to pick up.
At this point we moved to Location #5 (of 6) and continued attaching the big/thick horizontal cables (the bottom one was installed last week, so the remaining 4 were tensioned up and installed).
Bee News
After this work was done, several NEW bee swarms were observed along the X-arm as we drove back to the Corner!
(Earlier this week an X-arm bee swarm was observed and a bee person installed a bee hive box; this box seems to be getting populated, but we did see some bees continuing to go into the Beam Tube at this location).
But today, we saw (3) NEW Swarms at the base of the Beam Tube Enclosure---their swarms appeared within 2hrs! The bees are entering at the joints between the cement enclosures. Many of the joints have holes in the caulking toward the ground, and this is where the swarms were centered; as we drove back to the corner, several more of these holes had smaller groups of bees investigating them. Mitch went to notify Richard about the situation when we got back to the Corner Station.
(CoreyG, MitchR, RandyT)
This morning, Randy watered down the sand roads. Later, Mitch & Randy swapped in the new (& better) rigging hardware for the Wind Fence on all the big poles.
This afternoon, 2 (of 5) horizontal cables were installed at Location #4 (of 6).
And of course a new bee hive was observed. This one is up in the Wind Fence vicinity, and is forming on....a tumbleweed! Tyler was notified and he checked it out.
Next up is continuing to install horizontal cables at location #4, 5, & 6. Then panel install and securing panels.
(CoreyG, MitchR, ChrisS, RandyT, JimW)
This week's wind fence work continues, with a couple of issues carried over from last week that were addressed:
The first panel is mostly complete (on Tues) with one vertical cable remaining to be installed. Old panels had thinner cables removed and are folded up for future availability and storage.
A water tank was delivered to EX (Tues) to allow daily soaks of the sand to help improve travel.
Today (Wed) was a windy day with sustained winds of around 15mph, so this morning mostly focused on non-at-height work. The thick cables were cut for the remaining 3 panel locations.
Closes FAMIS37252, last checked in alog83817
The diode room DM was having some issues that I fixed the other week; LAB2 is also a known issue. No other issues of note.
The DM first started having issues at ~11am two weeks ago on April 22nd (Tues) for seemingly no reason; there wasn't any work or alogs that day that would make us suspect a cause.
Following my comments on alog84282, Dave showed me where the physical Comtrol box is in the CER. Restarting it brought the connection back, but I then found that the dust monitor had no flow :(. This DM hadn't been used since we got it calibrated in April of 2024, so I swapped it for the last pumpless spare we had, which has good flow, but now it's not connecting again. I've tried power cycling the DM itself, the IOC, and the Comtrol box then the IOC (which is what got the other DM to reconnect), and it's still getting "No reply from device within 1000 ms" for all its PVs. The DM itself is working and reading counts properly.
Checking the socket status (command "ss -e") on h0epics for the diode room port yielded:
ESTAB 0 0 10.105.0.80:37347 10.105.0.100:8000
timer:(keepalive,38min,0) uid:1001 ino:20939307 sk:ffff8800b67bd500
I opened up the DM that had no flow and found that the internal tubing was not even connected. I swapped this one back this morning and it came right back with no problem, and we can see it on EPICS. The one that I took off has some kind of network issue.
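For future debugging of this kind of dropout, a quick TCP reachability check against the dust monitor endpoint (the 10.105.0.100:8000 address from the ss output above) can be scripted. The helper below is just a sketch, not site tooling; a successful TCP connect only shows the port is open, not that the device protocol is healthy.

```python
import socket

def dm_port_open(host, port=8000, timeout=2.0):
    """Return True if a TCP connection to the device endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # refused, timed out, unreachable, etc.
        return False

# e.g. dm_port_open("10.105.0.100")  # the diode room DM port from ss
```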
(CoreyG, MitchR, RandyT, JimW)
Previous work posted here.
Today started with Randy and Mitch getting the small green tractor to EX and then working on compacting some of the very soft sand around the wind fence (on the front side of the Wind Fence---closest to EX).
Started to remove the final 3 wind fence panels (of 6 total). While driving the orange rental manlift (on the backside of the wind fence), it got dug into some pretty deep sand a few times. On the last dig-in, we kept the manlift there (dug out/compacted the sand around it) and focused work on the furthest-away wind fence panel, since we already had the orange manlift there. The plan is to do everything for this panel location all at once:
Today we made it to step 4 and did the 1st/bottom horizontal wire.
We ran out of the thick wire, so the trailer was brought back to the Corner Station so the 1200 lb spool can be loaded into the trailer in the morning. Then we will continue getting that one panel installed....and then work on getting the orange manlift unstuck.
Debasmita, Preeti, Gaby
Robert did some injections at 0.2 Hz in the BS HEPI which showed a slight increase in DARM. We wanted to check further if the microseismic ground motion (0.1-1.0 Hz) is creating any noise in LHO as it does in LLO. For this, we chose the glitches in DARM picked up by omicron, having frequency in the range 10-60 Hz and SNR in the range 5-20.
| | glitch rate vs ground motion in 0.1-0.3 Hz | glitch rate vs ground motion in 0.3-1.0 Hz |
| --- | --- | --- |
| Pearson correlation coefficient | 0.605 | 0.472 |
| Spearman correlation coefficient | 0.602 | 0.439 |
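The two coefficients reported above can be computed with scipy.stats. The arrays below are made-up stand-ins for illustration, not the actual LHO ground-motion and omicron glitch-rate data:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-day values (stand-ins, not the actual LHO data)
ground_motion = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # band-limited RMS, arb. units
glitch_rate = np.array([2.0, 3.0, 5.0, 4.0, 8.0, 9.0])    # omicron glitches/hour

r_pearson, _ = pearsonr(ground_motion, glitch_rate)    # linear correlation
r_spearman, _ = spearmanr(ground_motion, glitch_rate)  # rank (monotonic) correlation
```

Spearman being close to Pearson, as in the table, suggests the relationship is roughly monotonic without strong outliers dominating either statistic.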
The attached pdf shows qscans of some of the glitches chosen from high-microseism days. It is not very clear from the qscans whether the glitches have the arch-like morphology that is typical of glitches caused by scattered light. However, some of the glitches do repeat within a short time interval, which makes us suspect they are produced by scattered light. We are still working to figure out whether scattering is upconverting the low-frequency ground motion to create noise in the 10-60 Hz band.
In summary, it looks like the microseismic ground motion is affecting the DARM sensitivity at LHO, maybe not as much as it does at LLO.
TITLE: 05/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY: Vent work continues today with more HAM1 alignment and wind fence work.
The DR dust monitor counts did not look correct; it had been reading only zero for over 2 weeks. I went out and power cycled it and generated some counts, which I was not able to see on EPICS. I then tried to restart the IOC, and it struggled to come back. After a little over 5 minutes it did come back and started showing counts again.
The DR is having network issues again, ~an hour after the restart. I'll try swapping this one with one of our 2 pumpless spares.
2025/05/06 11:09:20.443023 gt521s1 H1:PEM-CS_DUST_DR1_HOLDTIMEMON: No reply from device within 1000 ms
2025/05/06 11:09:21.444241 gt521s1 H1:PEM-CS_DUST_DR1_STATE: No reply from device within 1000 ms
2025/05/06 11:09:21.787891 gt521s1 H1:PEM-CS_DUST_DR1_OPSTATUS: Input "SH 600*00337" mismatch after 0 bytes
2025/05/06 11:09:21.787919 gt521s1 H1:PEM-CS_DUST_DR1_OPSTATUS: got "SH 600*00337" where "OP " was expected
2025/05/06 11:09:22.344822 gt521s1 H1:PEM-CS_DUST_DR1_HOLDTIMEMON: Input "OP H 58*00404" mismatch after 0 bytes
2025/05/06 11:09:22.344838 gt521s1 H1:PEM-CS_DUST_DR1_HOLDTIMEMON: got "OP H 58*00404" where "SH " was expected
2025/05/06 11:09:22.394988 gt521s1 H1:PEM-CS_DUST_DR1_STATE: Input "H 57*00403" mismatch after 0 bytes
2025/05/06 11:09:22.395020 gt521s1 H1:PEM-CS_DUST_DR1_STATE: got "H 57*00403" where "OP " was expected
2025/05/06 11:09:22.445131 gt521s1 H1:PEM-CS_DUST_DR1_OPSTATUS: Input "P H 57*00403<8d>OP H 57" mismatch after 0 bytes
2025/05/06 11:09:22.445164 gt521s1 H1:PEM-CS_DUST_DR1_OPSTATUS: got "P H 57*00403<8d>OP H 57" where "OP " was expected
Swapped the dust monitor and now it won't come back and the IOC can't connect to/see the DM.
2025/05/06 11:42:20.506771 gt521s1 H1:PEM-CS_DUST_DR1_HOLDTIME: No reply from device within 1000 ms
2025/05/06 11:42:20.507000 _main_ H1:PEM-CS_DUST_DR1_HOLDTIME: @init handler failed
2025/05/06 11:42:20.507100 _main_ H1:PEM-CS_DUST_DR1_HOLDTIME: Record initialization failed
Bad init_rec return value PV: H1:PEM-CS_DUST_DR1_HOLDTIME ao: init_record
2025/05/06 11:42:21.508499 gt521s1 H1:PEM-CS_DUST_DR1_SAMPLETIME: No reply from device within 1000 ms
2025/05/06 11:42:21.508670 _main_ H1:PEM-CS_DUST_DR1_SAMPLETIME: @init handler failed
2025/05/06 11:42:21.508762 _main_ H1:PEM-CS_DUST_DR1_SAMPLETIME: Record initialization failed
Bad init_rec return value PV: H1:PEM-CS_DUST_DR1_SAMPLETIME ao: init_record
(CoreyG, MitchR, RandyT, JimW)
Attached photo files have names describing them and they are mostly in order of when they were taken (except for the first 4, which are "overall highlights").
EY Wind Fence Status: COMPLETED on May2nd
This work started the week of April 28th. The wind fences overall have been in need of a reconfiguration to UNDO what the contractors did to build them, because they were not great in high winds and had frequent failures during wind storms. The EY Wind Fence was the first structure to get its panels removed, hardware reconfigured/FIXED, and panels reinstalled. However, the final ("left-most") panel did NOT get the upgrade and had failures in the last year. This last panel needed its upgrade, and that is what started the week of April 28th. By the end of the week, this final panel received its upgrade and the EY Wind Fence was complete.
The upgrade was basically removing a lot of the original rigging equipment holding the fabric fence panel and replacing it with more rigging hardware.
EX Wind Fence Status: Started today on May 5th
The first job here was driving (2) manlifts from EY to EX (this took about 90min and a 1/4-tank of gas for each). Then there was a bit of prep work and gaming out where to start.
The EX Wind Fence still had the original equipment, so the entire EX fence (6 panels) needed upgrade work. Three panels of the EX Wind Fence were removed, but while trying to get to the 4th panel the smaller (blue) manlift got stuck in very soft sand. At this point, this work paused to reassess a way forward (we also phoned Tyler to take a look at the situation).
The plan now is to get the small green tractor down to EX and try to compact the sandy ground as best as possible. It's not clear if this will work, but this is where we currently stand.