Lockloss due to a 5.7 EQ in Mongolia. Initially I thought we rode it out, since Verbal announced the EQ about 30 minutes before we lost lock. However, looking at the later-arriving R waves, we seem to have lost lock 3 minutes after they were detected by the ISI sensors, the microseism FOM, and Picket Fence. Right after the lockloss, EQ mode activated. The high secondary microseism likely contributes to higher EQ susceptibility as well.
Shall begin to re-lock once it passes.
Robert, Tyler, Richard, Eric, TJ
Several years ago we were unable to lock because the suspensions were getting hot and lengthening when it was very cold out. The problem was that our temperature control sensors were mounted on the walls and were influenced by the temperature of the walls. As a result, the HVAC overheated the LVEA to keep the wall sensors at the same temperature when it was very cold out, or over-cooled the LVEA when the sun shone on the walls (33320). We moved some of the sensors to cable trays by the vacuum enclosure to combat this problem, but since the sensors on the cable trays don't provide full coverage, we still use many of the wall sensors. Yesterday we took more of the wall sensors out of the control average because we are having some problems in this cold snap with the suspensions lengthening.
In addition, TJ altered the LVEA temperature figure of merit to focus on the sensors on the cable trays so that we would be alerted to temperature increases at the chambers, where previously we were mainly monitoring the temperature at the walls.
The figure shows which sensors are now being used (checked boxes). It also shows the new FOM - the first 4 channels are on cable trays near the IFO.
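For reference, a minimal sketch of what the control average looks like once the wall sensors are excluded; the channel names, values, and selection list below are hypothetical placeholders, not the actual FMCS configuration.

```python
# Sketch: averaging only the selected (cable-tray) temperature sensors.
# Sensor names and readings are hypothetical placeholders, not the actual FMCS channels.
import numpy as np

sensor_readings_degC = {
    "LVEA_CABLE_TRAY_1": 19.8,   # near the vacuum enclosure (kept in average)
    "LVEA_CABLE_TRAY_2": 20.1,   # kept in average
    "LVEA_WALL_NORTH":   18.2,   # wall sensor (now excluded from control average)
    "LVEA_WALL_SOUTH":   21.4,   # wall sensor (now excluded from control average)
}

# Only the checked sensors feed the control average used by the HVAC setpoint logic.
selected = ["LVEA_CABLE_TRAY_1", "LVEA_CABLE_TRAY_2"]
control_average = np.mean([sensor_readings_degC[name] for name in selected])
print(f"Control average over selected sensors: {control_average:.2f} C")
```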
TITLE: 02/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: One lockloss this shift with a relatively straightforward relock. SQZ isn't performing perfectly, so range has been inconsistent and may continue to be through the weekend.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
19:34 | SAF | Laser Haz | LVEA | YES | LVEA is laser HAZARD!!! | Ongoing |
18:17 | ISC | Keita, Jennie, Mayank, Rick | Opt Lab | N | ISS array work, Keita out @ 20:55 | 21:52 |
21:07 | FAC | Robert | Opt Lab | N | Termites | 21:12 |
TITLE: 02/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 24mph Gusts, 15mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.37 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 22:49 UTC
Range is low due to poor high-frequency squeezing, which Sheila and Ryan were troubleshooting when I walked in. It will probably remain low over the weekend, since this long-standing issue is still somewhat of a mystery and likely needs table work.
Additionally, love is both in the air and the walls.
Jennie W, Rick S, Mayank C, Keita K
We went into the optics lab today with the intent for Rick to review the ISS array assembly we unboxed the other day (LHO alog #82731), on which we found some particulate contamination and some detached parts. Rick and Keita are unsure how the support rods (first picture) were bent during transit; they appear to have bent at the top and deformed the cover piece (D1300717) they were attached to, as they came unfixed at the other end and so were just resting on the top of the QPD mount plate (D1300719). They appear to have moved around and scored the surface of this platform. Maybe this happened in transit or storage, but it seems like it would have required a large force, and Rick says the storage containers were packed in foam during transit.
1st package: Base piece (D1101074-v2 S/N 004) is double-bagged in my office and I have put in a clean and bake order, as this spacer is site-specific in height and some of our ISS spares will have a spacer set to the L1 beam height instead, so we may need to swap it out.
2nd package: Cover piece D1300717 with the spacing posts attached is not bagged properly as we probably do not want to reuse this and so it is in Keita's office.
3rd package: Array and mirror assembly with PDs still attached was wrapped in foil, with dry sealed clean-room wipes protecting the top optic in the periscope, then double-bagged. Shown in the center left of this image. This will need to be re-cleaned if we want to scavenge parts from it.
4th package: All the other parts from the assembly shown in this image (apart from the PD assembly, baseplate, tool pan, and top cover) have been packaged up in foil and double-bagged in one package. These will also need to be re-cleaned if reused.
These last two packages are stored in the cabinet that the spare ISS array units are in, situated in the vacuum bake prep area next to the PCal lab. The shipping cover and base plate for the array are also back in this cupboard.
For reference, the assembly drawing can be found here.
The assembly we worked on has the DCC reference S12020967.
I don't have access to the entries for serial numbers, but the ISS array assembly DCC entry is here: https://dcc.ligo.org/LIGO-D1101059
Robert, Jennie, Keita, Mayank, Rick
People working in the optics lab noticed a sudden infestation of winged termites, unmated queens and kings participating in a nuptial flight. We found the nest from which they had emerged, full of alates and immature termites, in the wall behind the flow benches (see figure), and possibly extending into the fan boxes of the flow benches, which are wooden. We temporarily sealed up the nest entrance (see figure), but we should probably deal with the colony, both because of damage from feeding and because there will be occasional mating flights from the colony.
Photo of region at base of wall where termites were emerging.
And this shows the other side of the caster of the flow bench pictured by Rick in the above alog. You can see some kind of structure made by termites. Black ones as well as smaller white ones were coming out.
(Jordan V., Janos C., Gerardo M.)
Late entry.
On Tuesday 2/11/2025 we removed the controller for the ion pump located at Y2-8 (module Y2 and double door 8) due to the problems reported in aLOG 82162. No issues were encountered while performing the replacement; however, the new controller is not the same type as the one removed, since the "old" controller is an obsolete unit. Incidentally, the "old" unit was reporting the current used to power the ion pump as zero, but it was clearly drawing some current, as indicated by the pressure changes seen on adjacent gauges while the old unit was disconnected; something is wrong with it.
The new controller appears to be working well; however, there has been a pressure spike (see the second pressure spike on the attached plot), meaning the pump probably stopped working briefly, due to either a power glitch on the controller or the patched cable failing at the handful of joints made some time ago, see aLOG 33388. The attachment is a plot of the pressure spikes; the first one is due to the controller replacement.
Erik, Dave:
We added the FMCS STAT setpoint EPICS channels (high-alarm, low-alarm, rate-alarm) to the h1cdssdf. This will permit reporting if they are changed, and reverting after a FMCSSTAT restart.
+51 channels were added to h1cdssdf's monitor.req, increasing its count from 922 to 973. The SDF process was restarted at 14:30, the new channels were accepted and monitored.
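As a sketch of what this buys us, the sort of check below verifies that newly monitored setpoint channels are reachable over EPICS; the channel names are hypothetical placeholders for the FMCS STAT high/low/rate alarm setpoints, not the real channel list added to monitor.req.

```python
# Sketch: verify that newly monitored setpoint channels are readable via EPICS.
# Channel names below are hypothetical placeholders, not the actual FMCS STAT channels.
from epics import caget  # pyepics

candidate_channels = [
    "H0:FMC-CS_LVEA_TEMP_HIGH_ALARM",  # hypothetical high-alarm setpoint
    "H0:FMC-CS_LVEA_TEMP_LOW_ALARM",   # hypothetical low-alarm setpoint
    "H0:FMC-CS_LVEA_TEMP_RATE_ALARM",  # hypothetical rate-alarm setpoint
]

for chan in candidate_channels:
    value = caget(chan, timeout=2.0)
    status = "OK" if value is not None else "NO RESPONSE"
    print(f"{chan:40s} {status} {value}")
```

Once such channels are in h1cdssdf's monitor.req, any change from the accepted value shows up as an SDF diff and can be reverted after an FMCSSTAT restart, which is the point of adding them.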
Late entry.
Thursday of last week I replaced the leaky valve reported in aLOG 75980 (FRS ticket 30556), then allowed the parts/joints to dry and settle over the weekend. On Tuesday, 2/11/25, the purge air system was turned on to pressurize and leak test the system. No leaks were found. The purge air system ran for about 3 hours; the system checked out good, and we are now closing the above-mentioned FRS ticket.
BTW, I took a dew point measurement at the input mode cleaner purge port area; the reading was -40.5 °C.
Lockloss @ 21:15 UTC - link to lockloss tool
The lockloss tool tags this as WINDY (although wind speeds were only up to around 20mph) and there seems to be an 11Hz oscillation that starts about a second before the lockloss seen by all quads, PRCL, MICH, SRCL, and DARM.
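A minimal sketch of how one might look for that 11 Hz ring-up in the seconds before the lockloss is below; the GPS time is a placeholder and only DARM is shown (the same check can be repeated for PRCL/MICH/SRCL and the quads), so this is an illustration rather than the exact channels/times used by the lockloss tool.

```python
# Sketch: look for an ~11 Hz ring-up in the final seconds before a lockloss.
# The GPS time below is a placeholder; substitute the real lockloss GPS time.
from gwpy.timeseries import TimeSeries

lockloss_gps = 1423689318           # placeholder, not the actual event time
channel = "H1:LSC-DARM_IN1_DQ"      # repeat for PRCL/MICH/SRCL/quad channels as needed

data = TimeSeries.get(channel, lockloss_gps - 30, lockloss_gps)

# Compare the spectrum of the last 3 s against an earlier, quieter stretch.
asd_quiet = data.crop(lockloss_gps - 30, lockloss_gps - 10).asd(fftlength=2)
asd_final = data.crop(lockloss_gps - 3, lockloss_gps).asd(fftlength=2)

# A peak near 11 Hz that grows in the final seconds matches what the lockloss tool shows.
print(asd_final.crop(9, 13) / asd_quiet.crop(9, 13))
```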
H1 back to observing at 23:11 UTC. Fully automatic relock after I started an initial alignment soon after the last lockloss.
After H1 reached NLN, I ran the A2L script (unthermalized) for both P & Y on all quads. Results here:
| Quad / DoF | Initial | Final | Diff |
|---|---|---|---|
| ETMX P | 3.35 | 3.6 | 0.25 |
| ETMX Y | 4.94 | 4.92 | -0.02 |
| ETMY P | 5.48 | 5.64 | 0.16 |
| ETMY Y | 1.29 | 1.35 | 0.06 |
| ITMX P | -0.67 | -0.64 | 0.03 |
| ITMX Y | 2.97 | 3.0 | 0.03 |
| ITMY P | -0.06 | -0.03 | 0.03 |
| ITMY Y | -2.51 | -2.53 | -0.02 |
New A2L gains were updated in lscparams and ISC_LOCK was loaded. I also REVERTED all outstanding SDF diffs from running the script (time-ramps on quads and ADS matrix changes in ASC). The A2L gains themselves are not monitored.
Ran a coherence check for a time right after returning to observing to check the A2L gains (see attached). Sheila comments that this looks just a bit better than the last time this was run and checked, on Feb 11th (alog 82737).
Another lockloss that looks just like this one was seen on 02/16 at 09:48 UTC. Same 11 Hz oscillation 1 second before the lockloss, seen in the same places.
I've tightened the thresholds for VEA temperature alerts to be closer to historical limits. In the corner station, the range is 18.0 to 21.5 C for all zones. For the FCES, 19.0 - 21.25 C. End station ranges are 15.0 to 21.0 C for "A" and "D" sensors, 17.0 to 21.0 C for "B" and "C" sensors.
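Purely as an illustration, the new ranges could be captured in a simple lookup like the sketch below; the zone keys are hypothetical labels and this is not the actual alert-system configuration.

```python
# Illustrative summary of the tightened VEA temperature alert thresholds (deg C).
# Zone keys are hypothetical labels, not the actual alert-system configuration.
VEA_TEMP_LIMITS_C = {
    "corner_station": (18.0, 21.5),   # all corner-station zones
    "FCES":           (19.0, 21.25),
    "end_station_AD": (15.0, 21.0),   # "A" and "D" sensors
    "end_station_BC": (17.0, 21.0),   # "B" and "C" sensors
}

def in_range(zone: str, temp_c: float) -> bool:
    """Return True if a temperature reading is within the alert thresholds."""
    low, high = VEA_TEMP_LIMITS_C[zone]
    return low <= temp_c <= high

print(in_range("corner_station", 20.3))  # True
```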
Sheila, Camilla
We've seen our SHG power drop with LVEA temperature swings (82787, 82057), green REFL has also been increasing a little since our OPO crystal spot move, meaning our losses are increasing. Comparing our current powers to LLO's below:
Interesting that LLO sends in less SHG launch power than us but has a much higher NLG.
| | LHO | LLO |
|---|---|---|
| IR Laser | 815 mW | 930 mW |
| PMC Trans | 680 mW | 653 mW |
| SHG Trans | 100 mW | 135 mW |
| SHG launch | 25 mW | 20 mW |
| OPO Trans | 60 uW | 140 uW (from 75096, medm unclear) |
| OPO REFL DC | 1.5 mW | 2.3 mW |
| NLG | 10.2 in 82202 with 80 uW; expect lower with 60 uW OPO trans | 19 (from 75096) |
| CLF launch | 0.07 mW | 0.19 mW? |
| FC launch | 6.6 mW | 26 mW |
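For context, the usual below-threshold parametric-gain expression NLG = 1/(1 - sqrt(P/P_th))^2 shows why lower transmitted OPO (pump) power implies lower NLG. The sketch below uses an assumed threshold of ~170 uW chosen only so the expression reproduces roughly NLG ≈ 10 at 80 uW; it is not a measured LHO threshold, and LLO's threshold would be different.

```python
# Sketch: parametric (nonlinear) gain vs. OPO pump power, NLG = 1 / (1 - sqrt(P/Pth))^2.
# The threshold power below is an assumed illustrative value, not a measured one.
import numpy as np

P_THRESHOLD_UW = 170.0   # assumed, chosen so NLG(80 uW) comes out near the measured ~10

def nlg(p_uw: float, p_th_uw: float = P_THRESHOLD_UW) -> float:
    """Nonlinear gain for a below-threshold OPO pumped at p_uw."""
    x = np.sqrt(p_uw / p_th_uw)
    return 1.0 / (1.0 - x) ** 2

for p in (60.0, 80.0):   # the LHO OPO trans levels quoted above
    print(f"OPO trans {p:5.1f} uW -> NLG ~ {nlg(p):4.1f}")
```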
[Joe B, Vlad B, You-Ru Lee]

We've been trying to understand a ~5% systematic error in the LHO calibration for a while now. After eliminating possibilities one by one, Vlad found that the H1:SUS-ETMX_L1_LOCK_L FM6 aL1L3 filter definition in our calibration model (as well as in the h1calcs front-end model) did not match the actual interferometer's definition. Specifically, a 50 Hz pole had been changed to a 30 Hz complex pole pair on March 20th, 2024; see LHO alog 76546. Even though the UIM is rolled off relative to the other stages quite a bit by 30 Hz, it still has a several-percent contribution to the overall interferometer response there, so a ~50% error at 30 Hz due to the filter shape can contribute significantly to the overall error.

The other contributing factor is the SRC detuning effects, which up until this point have been neglected when pushing updated calibration models to the actual pipeline. We've been looking at early September, as that is part of the time we need to regenerate due to poor 60 Hz subtraction injecting noise, and we specifically wanted to eliminate this small but noticeable systematic.

When I combine the fit September SRC detuning (specifically report 20240905T200854Z) with an updated foton file (specifically ligo/svncommon/CalSVN/aligocalibration/trunk/Common/H1CalFilterArchive/h1susetmx/H1SUSETMX_1394948578.txt instead of H1SUSETMX_1394415894.txt), I get the attached plot, which seems to agree with the PCAL monitoring lines we run all the time. There is still some residual mismatch, but I didn't do a particularly good job of lining up the average kappa values I applied with the grafana page; it certainly will flatten out the large bump and phase wiggles at 30 Hz and below. The monitoring line data comes from Maddie's monitoring line grafana pages. There is still a bit more work to be done, and comparisons at other times, but we should aim to fix this next week.
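To get a feel for how moving a pole from 50 Hz to a 30 Hz complex pair changes the response near 30 Hz, here is a rough sketch comparing the two simplified forms; the Q value is an assumption for illustration only, and the actual aL1L3 foton filter has more structure than either of these (the documented mismatch at 30 Hz was ~50%).

```python
# Rough sketch: magnitude ratio of a 30 Hz complex pole pair vs. a single 50 Hz real pole.
# These are simplified stand-ins for the aL1L3 filter change, not the real foton design;
# the Q of the complex pair is an assumed illustrative value.
import numpy as np

f = np.array([10.0, 20.0, 30.0, 50.0])          # Hz
s = 2j * np.pi * f

f_pole, f_pair, Q = 50.0, 30.0, 1.0             # Q is an assumption for illustration
w_pole, w_pair = 2 * np.pi * f_pole, 2 * np.pi * f_pair

H_real_pole = w_pole / (s + w_pole)                              # single 50 Hz pole
H_cplx_pair = w_pair**2 / (s**2 + (w_pair / Q) * s + w_pair**2)  # 30 Hz complex pair

ratio = np.abs(H_cplx_pair / H_real_pole)
for fi, r in zip(f, ratio):
    print(f"{fi:5.1f} Hz: |H_pair/H_pole| = {r:5.2f}  ({(r - 1) * 100:+.0f}% difference)")
```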
BS Camera stopped updating just like in alogs:
This takes the Camera Servo guardian into a never-ending loop (and takes ISC_LOCK out of Nominal and H1 out of Observing). See attached screenshot.
So, I had to wake up Dave so he could restart the computer & process for the BS Camera. (Dave mentioned there is a new computer for this camera to be installed soon and it should help with this problem.)
As soon as Dave got the BS camera back, the CAMERA SERVO node got back to nominal, but I had accepted the SDF diffs for ASC which happened when this issue started, so I had to go back and ACCEPT the correct settings. Then we automatically went back to Observing.
OK, back to trying to go back to sleep again! LOL
Full procedure is:
Open BS (cam26) image viewer, verify it is a blue-screen (it was) and keep the viewer running
Verify we can ping h1cam26 (we could) and keep the ping running
ssh onto sw-lvea-aux from cdsmonitor using the command "network_automation shell sw-lvea-aux"
IOS commands: "enable", "configure terminal", "interface gigabitEthernet 0/35"
Power down h1cam26 with the "shutdown" IOS command, verify pings to h1cam26 stop (they did)
After about 10 seconds power the camera back up with the IOS command "no shutdown"
Wait for h1cam26 to start responding to pings (it did).
ssh onto h1digivideo2 as user root.
Delete the h1cam26 process (kill -9 <pid>), where the pid is given in the file /tmp/H1-VID-CAM26_server.pid
Wait for monit to restart CAM26's process, verify image starts streaming on the viewer (it did).
FRS: https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=33320
Forgot once again to note timing for this wake-up. This wake-up was at 2:33 AM PST (10:33 UTC), and I was roughly done with this work about 45 minutes after phoning Dave for help.
Matthew, Camilla
Today we went to EX to try and measure some beam profiles of the HWS beam as well as the refl ALS beam.
Without analyzing the profiles too much, it seemed like they (at least the HWS beam) match previously taken data from Camilla and TJ.
The ALS beam still has the same behavior as reported a few years ago by Georgia and Keita (possibly alog 52608; we could not find a definitive referencing alog), who saw two blobs instead of a nice Gaussian, just as we see now.
Attached is the data we took of the HWS beam coming out of the collimator, with the data from 11th Feb and 10th Dec (81741) plotted together.
We also took data of the return ALS beam, which, as Matt showed, is shaped like two lobes (attached here). We measured the beam further downstream than when we did this at EY (81358) because the ALS return beam is very large at the ALS-M11 beamsplitter. Table layout: D1800270. Distances on the table were measured in 62121; since then, HWS_L3 has been removed.
We didn't take any outgoing ALS data as we ran out of time.
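For the profile analysis, a minimal sketch of fitting measured beam radii to the Gaussian beam-width relation w(z) = w0*sqrt(1 + ((z - z0)/zR)^2) is below; the distances, radii, and wavelength are made-up placeholder numbers, not our measured HWS/ALS data.

```python
# Sketch: fit measured beam radii to w(z) = w0*sqrt(1 + ((z - z0)/zR)^2), zR = pi*w0^2/lambda.
# The data points below are placeholders, not the HWS/ALS measurements from this entry.
import numpy as np
from scipy.optimize import curve_fit

wavelength = 532e-9  # m, assumed (green ALS); use the appropriate wavelength for the HWS beam

def beam_width(z, w0, z0):
    zR = np.pi * w0**2 / wavelength
    return w0 * np.sqrt(1 + ((z - z0) / zR) ** 2)

# Placeholder measurements: distance along the beam (m) and 1/e^2 radius (m).
z_meas = np.array([0.10, 0.30, 0.50, 0.70])
w_meas = np.array([0.22e-3, 0.32e-3, 0.47e-3, 0.63e-3])

(w0_fit, z0_fit), _ = curve_fit(beam_width, z_meas, w_meas, p0=[2e-4, 0.0])
print(f"fitted waist w0 = {w0_fit*1e3:.2f} mm at z0 = {z0_fit:.2f} m")
```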
Back to OBSERVING as of 04:19 UTC