Lockloss @ 21:41, cause unknown.
To archive our lock-loss alert contact changes in the userapps subversion repository, I have written a script which automatically accepts any pending contact changes and commits the resulting burt snap file to the userapps repository. The commit message explicitly notes that this is a robot commit, and gives the date-time of the commit.
The script runs periodically as a cronjob.
For the initial roll-out, the cronjob runs on opslogin0 every 15 minutes as david.barker, and uses my current Kerberos ticket for authentication with the subversion repository server.
This service will eventually move to a CDS server and use a robot authentication ticket.
Note that currently the contact channels are indexed by ID. The MEDM needs to be consulted to map an ID to the contact's name.
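The commit step described above can be sketched roughly as follows (the snap-file name, script path, and exact commit-message wording are hypothetical; only the robot-commit flag and timestamp behavior are from this entry):

```python
"""Minimal sketch of the periodic archiving step.
All paths and file names here are illustrative placeholders."""
from datetime import datetime, timezone

def svn_commit_command(snap_file, now=None):
    """Build the svn commit command for the burt snap file, with a
    commit message that flags this as a robot commit and stamps the
    date-time, as described above."""
    now = now or datetime.now(timezone.utc)
    msg = "ROBOT COMMIT: lock-loss alert contacts snapshot, {}".format(
        now.strftime("%Y-%m-%d %H:%M:%S UTC"))
    return ["svn", "commit", "-m", msg, snap_file]

# From cron this would run every 15 minutes, e.g.:
# */15 * * * * /usr/bin/python3 /path/to/archive_contacts.py
cmd = svn_commit_command("lockloss_alert_contacts.snap")
print(cmd[3])  # the robot commit message
```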
Louis, Vicky
In 72525, on SQZT7 Sheila and I adjusted the beamsplitter angle between the OPO_IR_TRANS PD & camera, which slightly changed the beamsplitter's refl/trans ratio. Today Louis and I re-measured and updated the OPO_IR_PD calibration. If we want to try a SQZ single-bounce measurement through the AS port, it'll be important to have an accurate calibration of the squeezer's injected power at the beam diverter. I think this calibration update is likely good to within ~2% (and very likely good to within ~3.5%)**.
With ~10mW into the seed fiber, using the Ophir PD (calibrated 10/18, at least the filter is part of S/N 889882), at 11:05AM PT we measured 52.7uW on the OPO_IR_TRANS PD, and then at 11:08AM PT we measured 99-100uW going into the beamsplitter. While the EPICS calibration was fine, the slow-controls calibration was off, so we accordingly updated the PD responsivity (0.23 to 0.235) and the PD splitter ratio (59% to 53.5%); SDF screenshot attached.
** This assumes we trust the Ophir PD's calibration into watts, but this PD was last calibrated in 2018 and was due for re-calibration in 2020, which never happened. We first measured with a Thorlabs PD, but that PD was inaccurate and read high in uW by >10%. We assume the Thorlabs PD is the inaccurate one because the Ophir reading was closer to the previous PD calibrations in slowcontrols/EPICS, so we are choosing to trust the Ophir.
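For reference, the updated splitter ratio follows directly from the two measurements above; a minimal sketch of the arithmetic (the formula relating responsivity and splitter ratio to inferred power is my assumption about how the slow-controls calibration is structured, not taken from the actual code):

```python
# Splitter ratio inferred from the two Ophir measurements: power reaching
# the OPO_IR_TRANS PD divided by power going into the beamsplitter.
p_trans_uW = 52.7    # measured on the OPO_IR_TRANS PD (11:05 PT)
p_into_bs_uW = 98.5  # midpoint of the ~99-100uW measured into the beamsplitter
split = p_trans_uW / p_into_bs_uW
print(round(split, 3))  # close to the 53.5% used in the update

# Assumed form of the slow-controls calibration: photocurrent -> watts
# via responsivity [A/W] and splitter ratio (hypothetical helper).
def pd_power_W(photocurrent_A, responsivity_A_per_W=0.235, split_ratio=0.535):
    return photocurrent_A / (responsivity_A_per_W * split_ratio)
```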
Following a string of local EQs, H1 is now relocking at LOWNOISE_COIL_DRIVERS. We are also about to start a short commissioning period alongside Livingston for PEM testing, beginning at 19:00 UTC and running until 21:00.
The undamped EX cryopump manifold baffle was shown to make noise in DARM, and I suggested that I might be able to damp it through the viewports (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=69578). The two-page figure shows in more detail how this would work.
The corner station LVEA and end station VEA dust monitors can be used to provide temperature trends with a signal granularity of 1°F. The attached 7-day trend shows the dust monitor data for the two LVEA dust monitors and the end station VEAs (one per VEA). I have plotted the 300nm dust counts alongside to show that the dust monitors are being run periodically (shown with y-axis log scaling).
ndscope yaml file can be found at ~david.barker/ndscope/dustmon_temps.yaml
Fri Sep 08 10:06:43 2023 INFO: Fill completed in 6min 39secs
Gerardo confirmed a good fill curbside.
Closes 26208, last completed Sep 1st.
Laser Status:
NPRO output power is 1.827W (nominal ~2W)
AMP1 output power is 67.07W (nominal ~70W)
AMP2 output power is 135.0W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PMC:
It has been locked 33 days, 0 hr 0 minutes
Reflected power = 16.43W
Transmitted power = 109.4W
PowerSum = 125.8W
FSS:
It has been locked for 0 days 4 hr and 6 min
TPD[V] = 0.8649V
ISS:
The diffracted power is around 2.3%
Last saturation event was 0 days 4 hours and 6 minutes ago
Possible Issues: None
TITLE: 09/08 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 0mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
- IFO has been locked for just under 3 hours following the EQ lockloss from early this morning
- CDS/SEI ok
- Alert handler channels are showing up white, most likely related to the server issue
H1 Manager flagged me for assistance relocking the detector at 09/08 10:38 UTC. It looks like the reason it could not relock itself was earthquakes from New Zealand (slightly into yellow / large earthquake) and Japan (mid-green / earthquake) passing through. I marked when H1 Manager asked for assistance on attachment 1; as you can see, we were at the tail end of the earthquakes, so once I intervened by taking the detector out of INITIAL_ALIGNMENT and to DOWN, H1 Manager was able to take us back through INITIAL_ALIGNMENT and all the way back up to NLN without any more intervention from me.
Regarding the earthquake, Earthquake Mode was activated at 09:23 UTC, almost a minute before the warning for the New Zealand earthquake even came in, so there must have been a third, more local earthquake. We survived its first 10 minutes before being knocked out of lock and leaving poor H1 Manager to fight against it.
We reached NOMINAL_LOW_NOISE at 12:12 UTC and entered Observing as of 12:30 UTC.
TITLE: 09/08 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 23:32 UTC (13 hour lock).
Nothing new/relevant to report
LOG:
None
IFO is in NLN and OBSERVING since 23:32 UTC
Other:
None of the BACnet FMCS channels except the chillers are being seen by the EPICS IOC. This is likely associated with an upgrade to the server done earlier this week by Apollo. Restarting the IOC has not helped; this may require different IOC code. For now, this is just to note that it is a known issue.
Opened FRS29055. At this point in time, only the CS Chiller channels are active.
TITLE: 09/07 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 136Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 12mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING since 23:06 UTC (the squeezer unlocked for 3 minutes, so we dropped out of observing temporarily)
FMCS systems still down
TITLE: 09/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: We've been locked for 5 hours after a slower lock loss and automated relock. We lost the FMCS EPICS channels site-wide around noon during work to fix the FMCS system.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:00 | FAC | Kim | MX | n | Tech clean | 16:22 |
| 17:52 | PEM | Robert | LVEA | n | Picture of magnetometer | 18:02 |
| 18:02 | PSL | Jason | MX | n | Check on container | 18:36 |
| 20:04 | FAC | Tyler, Contractor | FCES, EY | n | Check on HVAC servers | 20:58 |
| 22:04 | FAC | Fil, Contractor | MY, EY | n | Connect FMCS system | 22:36 |
| 22:21 | FAC | Richard | FCES | n | Checking on HVAC servers | 22:36 |
While FMCS work is ongoing, I have bypassed the cell-phone alarms for the channels shown below:
Bypass will expire:
Fri 08 Sep 2023 03:06:06 PM PDT
For channel(s):
H0:FMC-CS_CY_H2O_PUMPSTAT
H0:FMC-CS_CY_H2O_SUP_DEGF
H0:FMC-CS_FIRE_PUMP_1
H0:FMC-CS_FIRE_PUMP_2
H0:FMC-CS_WS_RO_ALARM
H0:FMC-EX_CY_H2O_PUMPSTAT
H0:FMC-EX_CY_H2O_SUP_DEGF
H0:FMC-EY_CY_H2O_PUMPSTAT
H0:FMC-EY_CY_H2O_SUP_DEGF
All the FMCS EPICS channels have been static since around 11:00 this morning. The attached plot shows a representative channel from CS and each out-building, showing that they stopped updating between 10:47 and 11:06. Interestingly, most of the EPICS channels did not go invalid, but some did. This explains the mixture of green/white values on the FMCS MEDMs seen this afternoon. Jonathan and Patrick restarted the FMCS IOC at 15:01, at which point all the EPICS channels went to VAL=0, SEVR=INVALID.
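The green/white mixture on the MEDMs is just a function of each channel's alarm severity; a toy sketch of how one might tabulate cached (channel, SEVR) pairs (the channel severities in the example are illustrative, not real data):

```python
# Toy classification of FMCS channels by EPICS alarm severity.
INVALID = 3  # EPICS SEVR value for INVALID

def split_by_severity(sevr_by_channel):
    """Return (valid, invalid) channel lists; on an MEDM screen the
    invalid ones typically render white, the rest green."""
    valid = [ch for ch, s in sevr_by_channel.items() if s != INVALID]
    invalid = [ch for ch, s in sevr_by_channel.items() if s == INVALID]
    return valid, invalid

valid, invalid = split_by_severity({
    "H0:FMC-CS_CY_H2O_PUMPSTAT": 0,   # example: still updating
    "H0:FMC-EX_CY_H2O_SUP_DEGF": 3,   # example: went invalid
})
print(invalid)
```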
At the time of writing, the corner station chiller-yard and wood-shop FMCS channels continue to be good. Therefore I have un-bypassed the critical fire-pump alarms for the weekend.
The current cell-phone alarm bypass list is now
Bypass will expire:
Mon 11 Sep 2023 03:16:29 PM PDT
For channel(s):
H0:FMC-CS_CY_H2O_PUMPSTAT
H0:FMC-CS_CY_H2O_SUP_DEGF
H0:FMC-CS_WS_RO_ALARM
H0:FMC-EX_CY_H2O_PUMPSTAT
H0:FMC-EX_CY_H2O_SUP_DEGF
H0:FMC-EY_CY_H2O_PUMPSTAT
H0:FMC-EY_CY_H2O_SUP_DEGF
The ISS Second Loop engaged this lock with a low-ish diffracted power (about 1.5%). Oli had chatted with Jason about it, and Sheila noted that its being low could perhaps be related to the number of glitches we've been seeing. A concern is that if the control loop needs to go "below" zero percent (which it can't do), this could cause a lockloss.
I "fixed" it by selecting IMC_LOCK to LOCKED (which opens the ISS second loop), and then selecting ISS_ON to re-close the second loop and put us back in our nominal Observing configuration. This set the diffracted power back much closer to 2.5%, which is where we want it to be.
This cycling of the ISS 2nd loop (a DC-coupled loop) dropped the power into the PRM (H1:IMC-PWR_IN_OUT16) from 57.6899 W to 57.2255 W over the course of ~1 minute, 2023-Aug-07 17:49:28 UTC to 17:50:39 UTC. It caught my attention because I saw a discrete drop in arm cavity power of ~2.5 kW while trending around looking for thermalization periods.

This serves as another lovely example where the time-dependent correction factors are doing their job well, and indeed quite accurately. If we repeat the math we used back in O3 (see LHO:56118 for the derivation), we can model the optical gain change in two ways:
- the relative change estimated from the power on the beam splitter (assuming the power recycling gain is constant and cancels out):
  relative change = (np.sqrt(57.6858) - np.sqrt(57.2255)) / np.sqrt(57.6858) = 0.0039977 = 0.39977%
- the relative change estimated by the TDCF system, via kappa_C:
  relative change = (0.97803 - 0.974355) / 0.97803 = 0.0037576 = 0.37576%

Indeed the estimates agree quite well, especially given the noise / uncertainty in the TDCF (because we like to limit the height of the PCAL line that informs it). This gives me confidence that -- at least over several-minute time scales -- kappa_C is accurate to within 0.1 to 0.2%. This is consistent with how much uncertainty we estimate from converting the coherence between the PCAL excitation and DARM_ERR into uncertainty via Bendat & Piersol's unc = sqrt((1-C) / (2NC)).

It's nice to have these "sanity check" warm and fuzzies that the TDCFs are doing their job; it's also nice to have a detailed record of these weird random "what's that??" moments found while trending around looking for things. I also note that there's no change in cavity pole frequency, as expected.
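The two estimates and the coherence-to-uncertainty conversion above can be reproduced directly (all numbers are taken from this entry; N, the number of averages in the coherence estimate, is not quoted above, so the helper leaves it as a parameter):

```python
import numpy as np

# Relative optical-gain change from the power on the beam splitter
# (assuming the power recycling gain is constant and cancels out).
p0, p1 = 57.6858, 57.2255  # W into the PRM, before / after
rel_from_power = (np.sqrt(p0) - np.sqrt(p1)) / np.sqrt(p0)

# Relative change estimated by the TDCF system, via kappa_C.
k0, k1 = 0.97803, 0.974355
rel_from_kappa = (k0 - k1) / k0

# Bendat & Piersol conversion of coherence to relative uncertainty.
def bendat_piersol_unc(C, N):
    """unc = sqrt((1 - C) / (2 N C)); C = coherence, N = averages."""
    return np.sqrt((1.0 - C) / (2.0 * N * C))

print(rel_from_power, rel_from_kappa)
```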
When the circulating power dropped ~2.5 kW, kappa_C trended down, plot attached. This implies that the lower circulating powers induced in the previous RH tests (73093) are not the reason kappa_C increases. We may see a slight increase in high-frequency noise as the circulating power is turned up, plot attached.