FAMIS 26290
Laser Status:
NPRO output power is 1.831W (nominal ~2W)
AMP1 output power is 64.55W (nominal ~70W)
AMP2 output power is 137.1W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 6 days, 23 hr 49 minutes
Reflected power = 21.13W
Transmitted power = 105.6W
PowerSum = 126.7W
FSS:
It has been locked for 0 days 1 hr and 37 min
TPD[V] = 0.6479V
ISS:
The diffracted power is around 1.9%
Last saturation event was 0 days 1 hours and 37 minutes ago
Possible Issues:
AMP1 power is low
PMC reflected power is high
FSS TPD is low
ISS diffracted power is low
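For context, the "Possible Issues" flags above follow from simple threshold comparisons against the nominal values quoted in the report. Below is a minimal Python sketch of that kind of check; the cutoff values and the structure are assumptions for illustration, not the actual FAMIS script.

# Hypothetical sketch of threshold checks behind the "Possible Issues" flags above.
# The cutoff values below are assumptions for illustration, not the actual FAMIS
# script configuration.

READINGS = {
    "AMP1 power [W]":           {"value": 64.55,  "low": 65.0,  "message": "AMP1 power is low"},
    "PMC reflected power [W]":  {"value": 21.13,  "high": 20.0, "message": "PMC reflected power is high"},
    "FSS TPD [V]":              {"value": 0.6479, "low": 0.8,   "message": "FSS TPD is low"},
    "ISS diffracted power [%]": {"value": 1.9,    "low": 2.0,   "message": "ISS diffracted power is low"},
}

def possible_issues(readings):
    """Return warning messages for any reading outside its assumed bounds."""
    issues = []
    for name, cfg in readings.items():
        value = cfg["value"]
        if "low" in cfg and value < cfg["low"]:
            issues.append(cfg["message"])
        if "high" in cfg and value > cfg["high"]:
            issues.append(cfg["message"])
    return issues

for message in possible_issues(READINGS):
    print(message)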
FAMIS 21189
Nothing much of note this week. PMC REFL is still increasing slowly while PMC TRANS is decreasing.
The FSS TPD signal is still low, and since I wasn't able to increase it much last week, we plan to go into the enclosure and tune up the FSS path on-table soon to fix it.
There were a few PCAL SDF diffs at CALEY when we made it back to low noise today. It looked like these were loaded into EPICS from the safe.snap, and those numbers disagreed with what was saved in the observe.snap file. I confirmed with Francisco that this was the case, and he had me verify that they agreed with alog77386. I then saved these new values in the safe.snap file as well. Interestingly, these channels are not monitored in the safe.snap.
I ran A2L while we were still thermalizing and might run it again later. There was no change for ETMY, but the ITMs had large changes. I've accepted these in SDF; I reverted the tramps that the picture shows I accepted. I didn't notice much of a change in DARM or in the DARM BLRMS. Results are in the table below (see the sketch after the table for the diff arithmetic).
Optic/DOF   Initial    Final     Diff
ETMX P         3.12     3.13     0.01
ETMX Y         4.79     4.87     0.08
ETMY P         4.48     4.48     0.00
ETMY Y         1.13     1.13     0.00
ITMX P        -1.07    -0.98     0.09
ITMX Y         2.72     2.87     0.15
ITMY P        -0.47    -0.37     0.10
ITMY Y        -2.30    -2.48    -0.18
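For reference, a minimal sketch that reproduces the Diff column above and flags the larger changes; the 0.05 cutoff for "large" is an assumption for illustration only.

# Minimal sketch reproducing the Diff column above and flagging the "large" changes
# mentioned in the text. The 0.05 threshold is an assumption, not a site standard.

a2l_gains = {
    "ETMX P": (3.12, 3.13),
    "ETMX Y": (4.79, 4.87),
    "ETMY P": (4.48, 4.48),
    "ETMY Y": (1.13, 1.13),
    "ITMX P": (-1.07, -0.98),
    "ITMX Y": (2.72, 2.87),
    "ITMY P": (-0.47, -0.37),
    "ITMY Y": (-2.30, -2.48),
}

THRESHOLD = 0.05  # assumed cutoff for calling a change "large"

for dof, (initial, final) in a2l_gains.items():
    diff = round(final - initial, 2)
    flag = "  <-- large change" if abs(diff) > THRESHOLD else ""
    print(f"{dof}: {initial:+.2f} -> {final:+.2f} (diff {diff:+.2f}){flag}")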
I am not sure how the A2L is run these days, but there is some DARM coherence with DHARD Y that makes me think we should recheck the Y2L gains. See attached screenshot from today's lock.
As a reminder, the work that Gabriele and I did last April found that the DHARD Y coupling had two distinct frequency regimes: a steep low-frequency coupling that depended heavily on the AS A WFS yaw offset, and a much flatter coupling above ~30 Hz that depended much more strongly on the Y2L gain of ITMY (this was, I think, before we started adjusting all the A2L gains on the test masses). Relevant alogs: 76407 and 76363
Based on this coherence, the Y2L gains at least deserve another look. Is it possible to track a DHARD Y injection during the test?
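As a rough illustration of the kind of coherence check being discussed, here is a minimal Python sketch using scipy with synthetic stand-in data. The sample rate, duration, band, and threshold are assumptions, and the real analysis would of course use the actual DARM and DHARD Y channels rather than these placeholders.

# Minimal coherence-check sketch with synthetic data (not the real H1 channels).
import numpy as np
from scipy.signal import coherence

fs = 512            # sample rate [Hz], assumed
duration = 600      # seconds of data, assumed
t = np.arange(0, duration, 1.0 / fs)

rng = np.random.default_rng(0)
dhard_y = rng.normal(size=t.size)                 # stand-in for the DHARD Y control signal
darm = 0.1 * dhard_y + rng.normal(size=t.size)    # stand-in for DARM with some DHARD Y coupling

# Welch-averaged coherence; nperseg sets the frequency resolution (~0.1 Hz here).
f, coh = coherence(darm, dhard_y, fs=fs, nperseg=int(10 * fs))

# Report frequencies in an example 10-60 Hz band where coherence exceeds an arbitrary 0.1.
band = (f > 10) & (f < 60)
for fi, ci in zip(f[band], coh[band]):
    if ci > 0.1:
        print(f"{fi:6.2f} Hz  coherence {ci:.2f}")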
Since I converted this script to run on all test masses and DOFs simultaneously, its performance hasn't been stellar. We've only run it a handful of times, but we definitely need to change something. One difference between the old version and the new one is the frequencies of the injected lines: right now they range from 23 to 31.5 Hz, but perhaps they need to be moved. In June, Sheila and I ran it and then swapped the highest and lowest frequencies to see if it made a difference (alog78495); in that one test it didn't seem to matter.
Sheila and I are talking about AS WFS offset and DHARD injection testing to try to understand this coupling a bit better. Planning is in progress.
Mon Aug 26 08:02:15 2024 INFO: Fill completed in 2min 13secs
A short but good fill. Gerardo is reducing the LLCV.
Jordan cleared an ice buildup at the end of the discharge pipe.
TJ, Jonathan, EJ, Dave:
Around 01:30 this morning we had a Dolphin crash of all the front ends at EY (h1susey, h1seiey, h1iscey). h1susauxey is not on the Dolphin network and so was not impacted.
We could not ping these machines, but were able to get some diagnostics from their IPMI management ports.
At 07:57 we powered down h1[sus,sei,isc]ey for about a minute and then powered them back on.
We checked that the IX Dolphin switch at EY was responsive on the network.
All the systems came back with no issues. SWWD and model WDs were cleared. TJ is recovering H1.
Screenshots of the consoles were retrieved via IPMI. h1iscey had a screen similar to h1seiey's, with the same crash dump. h1iscey and h1seiey crashed in the Dolphin driver; h1susey had a kernel panic, with a note that a LIGO real-time module had been unloaded.
Crash time: 01:43:47 PDT
Reboot/Restart LOG:
Mon26Aug2024
LOC TIME HOSTNAME MODEL/REBOOT
07:59:27 h1susey ***REBOOT***
07:59:30 h1seiey ***REBOOT***
08:00:04 h1iscey ***REBOOT***
08:01:04 h1seiey h1iopseiey
08:01:17 h1seiey h1hpietmy
08:01:30 h1seiey h1isietmy
08:01:32 h1susey h1iopsusey
08:01:45 h1susey h1susetmy
08:01:47 h1iscey h1iopiscey
08:01:58 h1susey h1sustmsy
08:02:00 h1iscey h1pemey
08:02:11 h1susey h1susetmypi
08:02:13 h1iscey h1iscey
08:02:26 h1iscey h1caley
08:02:39 h1iscey h1alsey
FYI: There was a pending filter module change for h1susetmypi which got installed when this model was restarted this morning.
TITLE: 08/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 0mph Gusts, 0mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: Looks like we lost the SUSEY and ISCEY front ends. This created connection errors and put many Guardians, including IFO_NOTIFY, into an error state. Contacting CDS team now.
Dave and Jonathan have fixed the CDS FE issues, and we are now starting recovery. I also found the HAM5 ISI tripped, as well as SRM and OFI; it looks like this happened about 4 hours ago, a few hours after the FEs tripped. No idea why they tripped yet.
TITLE: 08/26 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 136Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing and have been Locked for 1.5 hours. We were down for a while because of the big earthquake from Tonga, and recovery took a while because we were having issues with the ALSX crystal frequency. Eventually it fixed itself, as it tends to do. We had some lower-state locklosses but eventually were able to get back up with just the one initial alignment I ran when I first started relocking.
LOG:
23:30 Observing and Locked for 4 hours
23:41 Lockloss due to earthquake
01:07 Started trying to relock, starting initial alignment
- ALSX crystal frequency errors again
- Eventually went away; I didn't do anything besides toggling Force and No Force
02:05 Initial alignment done, relocking
02:09 Lockloss from FIND_IR
02:17 Lockloss from OFFLOAD_DRMI_ASC
02:21 Lockloss from LOCKING_ALS
03:21 NOMINAL_LOW_NOISE
03:23 Observing
TITLE: 08/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Fairly easy shift, with one lockloss in the middle and a BIG Tonga EQ to end the shift!
LOG:
TITLE: 08/25 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 14mph Gusts, 8mph 5min avg
Primary useism: 0.54 μm/s
Secondary useism: 0.62 μm/s
QUICK SUMMARY: Just lost lock from a large earthquake that's going to keep us down for a bit. Before that, we had been locked for 4 hours.
Lockloss 08/25 @ 23:41 UTC due to a 6.9 earthquake in Tonga. We were knocked out really quickly by the S-wave, and the R-wave won't arrive for another ~30 min, so it might be a while before we're back up.
03:23 UTC Observing
For FAMIS 26286:
Laser Status:
NPRO output power is 1.829W (nominal ~2W)
AMP1 output power is 64.59W (nominal ~70W)
AMP2 output power is 137.5W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 5 days, 19 hr 56 minutes
Reflected power = 21.01W
Transmitted power = 105.7W
PowerSum = 126.7W
FSS:
It has been locked for 0 days 7 hr and 40 min
TPD[V] = 0.6499V
ISS:
The diffracted power is around 2.3%
Last saturation event was 0 days 6 hours and 33 minutes ago
Possible Issues:
AMP1 power is low
PMC reflected power is high
FSS TPD is low
TITLE: 08/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
The BS camera has died on us a couple of times in the last 24 hrs (this requires contacting Dave to restart the camera's computer, at least until this camera can be moved to another computer), so the beginning of the shift was spent restoring it and also figuring out why H1 had a lockloss BUT did not drop the Input Power down from 61W to 2W.
After the camera issue was fixed, I ran an alignment with no issues, and then took H1 all the way back to NLN, also with no issues.
Then I completed taking calibration measurements after 1.5 hrs and 3 hrs. Oli also ran LSC feedforward measurements.
Then there was ANOTHER BS Camera Computer Crash! Dave brought us back fast.
Now back to OBSERVING!
LOG:
Looking into why the input power didn't come back down after a lockloss at ADS_TO_CAMERAS, it seems that the decorators that check for a lockloss are missing from the run method (but are present in main). This means that while ISC_LOCK was waiting for the camera servos to turn on, it didn't notice that the IFO had lost lock, and therefore didn't run through the LOCKLOSS or DOWN states, which would have reset the input power.
I've added the decorators to the run method of ADS_TO_CAMERAS, so this shouldn't happen again.
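For anyone unfamiliar with the Guardian pattern involved, here is a minimal sketch of why the decorator matters on run() as well as main(). GuardState is the real guardian base class, but the lockloss_check decorator, the helper stub, and the state body below are hypothetical illustrations rather than the actual ISC_LOCK code.

# Minimal sketch: a lockloss-checking decorator must wrap run() as well as main(),
# otherwise a lockloss during the wait in run() goes unnoticed and Guardian never
# jumps to LOCKLOSS/DOWN to reset the input power. Names below are hypothetical.
from guardian import GuardState


def ifo_is_locked():
    """Placeholder for a check on a lock-status channel (assumption)."""
    return True


def lockloss_check(func):
    """Hypothetical decorator: redirect to LOCKLOSS if lock is dropped mid-state."""
    def wrapper(self, *args, **kwargs):
        if not ifo_is_locked():
            return 'LOCKLOSS'
        return func(self, *args, **kwargs)
    return wrapper


class ADS_TO_CAMERAS(GuardState):
    @lockloss_check
    def main(self):
        pass  # placeholder: turn on the camera servos here

    @lockloss_check  # this is the decorator that was missing from run()
    def run(self):
        return True  # placeholder: return True once the camera servos have converged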
R. Short, F. Clara
In the ongoing effort to mitigate the 9.5Hz comb recently found to be coming from the PSL flow meters (alog79533), this afternoon Fil put the PSL control box in the LVEA PSL racks onto its own separate 24V bench-top power supply. Once I had shut down the PSL in a controlled manner (in order: ISS, FSS, PMC, AMP2, AMP1, NPRO, chiller), Fil switched CB1 to the separate power supply, then I brought the system back up without any issues. I'm leaving the ISS off while the system warms back up, and I'm leaving WP12051 open until enough data can be collected to say whether this separate supply helps the 9.5Hz comb or not.
Please be aware of the new power supply cable running from under the table outside the PSL enclosure to behind the racks; Fil placed a cone here to warn of the potential trip hazard.
This looks promising! I have attached a comparison of a high-resolution daily spectrum from July (orange) vs yesterday (black), zoomed in on a strong peak of the 9.5 Hz comb triplet. Note that the markers tag the approximate average peak positions of the combs from O4a, so they are a couple of bins off from the actual positions of the July peaks.
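For anyone wanting to repeat this kind of comparison numerically, here is a minimal sketch that compares the peak ASD value near each 9.5 Hz harmonic between two spectra. The frequency resolution, window width, and the synthetic placeholder ASDs are assumptions; in practice the inputs would be the high-resolution daily spectra referenced above.

# Minimal sketch: compare comb-harmonic peak heights between two spectra.
# The ASDs here are random placeholders, not real data.
import numpy as np

df = 0.001                                  # assumed frequency resolution [Hz]
freqs = np.arange(0, 100, df)
rng = np.random.default_rng(1)
asd_july = np.abs(rng.normal(1e-23, 1e-24, freqs.size))   # placeholder reference ASD
asd_now = np.abs(rng.normal(1e-23, 1e-24, freqs.size))    # placeholder current ASD

comb_spacing = 9.5                          # Hz
harmonics = np.arange(comb_spacing, 100, comb_spacing)

for f0 in harmonics:
    # Take the peak ASD value in a narrow window (+/- 5 mHz) around each harmonic.
    window = (freqs > f0 - 0.005) & (freqs < f0 + 0.005)
    before = asd_july[window].max()
    after = asd_now[window].max()
    print(f"{f0:6.2f} Hz  before {before:.3e}  after {after:.3e}  ratio {after/before:.2f}")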