Following a string of local EQs, H1 is now relocking at LOWNOISE_COIL_DRIVERS. We are also about to start a short commissioning period alongside Livingston for PEM testing, beginning at 19:00 UTC and running until 21:00.
The undamped EX cryopump manifold baffle was shown to make noise in DARM, and I suggested that I might be able to damp it through the viewports ( https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=69578 ). The two-page figure shows in more detail how this would work.
The corner station LVEA and end station VEA dust monitors can be used to provide temperature trends with a signal granularity of 1°F. The attached 7-day trend shows the dust monitor data for the two LVEA dust monitors and the end station VEAs (one per VEA). I have plotted the 300nm dust counts alongside to show that the dust monitors are being run periodically (shown with y-axis log scaling).
The ndscope yaml file can be found at ~david.barker/ndscope/dustmon_temps.yaml
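For anyone who wants the same trends outside of ndscope, here is a minimal sketch using gwpy. The channel names are placeholders (the real ones are listed in the yaml file above), so treat this as illustrative only.

```python
# A minimal sketch (not the ndscope template itself) of pulling the same
# trends with gwpy. The channel names below are placeholders -- substitute
# the real H0:PEM dust monitor temperature / 300nm count channels from the
# yaml file above.
from gwpy.time import tconvert
from gwpy.timeseries import TimeSeriesDict

end = tconvert('now')
start = end - 7 * 86400          # 7-day trend

channels = [
    'H0:PEM-CS_DUST_LVEA_TEMP_DEGF',   # placeholder temperature channel
    'H0:PEM-CS_DUST_LVEA_300NM_PCF',   # placeholder 300nm particle counts
]

data = TimeSeriesDict.get(channels, start, end, verbose=True)
plot = data.plot()
plot.gca().set_yscale('log')     # log y-axis, as in the attached trend
plot.show()
```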
Fri Sep 08 10:06:43 2023 INFO: Fill completed in 6min 39secs
Gerardo confirmed a good fill curbside.
Closes 26208, last completed Sep 1st.
Laser Status:
NPRO output power is 1.827W (nominal ~2W)
AMP1 output power is 67.07W (nominal ~70W)
AMP2 output power is 135.0W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PMC:
It has been locked 33 days, 0 hr 0 minutes
Reflected power = 16.43W
Transmitted power = 109.4W
PowerSum = 125.8W
FSS:
It has been locked for 0 days 4 hr and 6 min
TPD[V] = 0.8649V
ISS:
The diffracted power is around 2.3%
Last saturation event was 0 days 4 hours and 6 minutes ago
Possible Issues: None
TITLE: 09/08 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 0mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
- IFO has been locked for just under 3 hours following the EQ lockloss from early this morning
- CDS/SEI ok
- Alert handler channels are showing up white, most likely related to the server issue
H1 Manager flagged me for assistance relocking the detector at 09/08 10:38 UTC. It looks like the reason it could not relock itself was earthquakes from New Zealand (slightly into yellow / large earthquake) and Japan (mid-green / earthquake) passing through. I marked when H1 Manager asked for assistance on attachment 1; as you can see, we were at the tail end of the earthquakes, so once I intervened by taking the detector out of INITIAL_ALIGNMENT and to DOWN, H1 Manager was able to take us back through INITIAL_ALIGNMENT and all the way up to NLN without any more intervention from me.
With regards to the earthquake, Earthquake Mode was activated at 09:23 UTC, almost a minute before the warning for the New Zealand earthquake even came in, so there must have been a third, more local earthquake. We were able to ride out the first 10 minutes of it before being knocked out of lock and fighting against poor H1 Manager.
We reached NOMINAL_LOW_NOISE at 12:12 UTC and entered Observing at 12:30 UTC.
TITLE: 09/08 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 23:32 UTC (13 hour lock).
Nothing new/relevant to report
LOG:
None
IFO is in NLN and OBSERVING since 23:32 UTC
Other:
None of the BACnet FMCS channels except for the chillers are being seen by the EPICS IOC. This is likely associated with an upgrade to the server done earlier this week by Apollo. Restarting the IOC has not helped, and a fix may require different IOC code. For now, this is just to note that it is a known issue.
Opened FRS29055. At this point in time, only the CS Chiller channels are active.
TITLE: 09/07 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 136Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 12mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING since 23:06 UTC (Squeezer unlocked for 3 minutes so dropped out of observing temporarily)
FMCS systems still down
TITLE: 09/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: We've been locked for 5 hours after a slow lock loss and automated relock. We lost the FMCS EPICS channels site-wide around noon during work to fix the FMCS system.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:00 | FAC | Kim | MX | n | Tech clean | 16:22 |
| 17:52 | PEM | Robert | LVEA | n | Picture of magnetometer | 18:02 |
| 18:02 | PSL | Jason | MX | n | Check on container | 18:36 |
| 20:04 | FAC | Tyler, Contractor | FCES, EY | n | Check on HVAC servers | 20:58 |
| 22:04 | FAC | Fil, Contractor | MY, EY | n | Connect FMCS system | 22:36 |
| 22:21 | FAC | Richard | FCES | n | Checking on HVAC servers | 22:36 |
While FMCS work is ongoing, I have bypassed the cell-phone alarms for the channels shown below:
Bypass will expire:
Fri 08 Sep 2023 03:06:06 PM PDT
For channel(s):
H0:FMC-CS_CY_H2O_PUMPSTAT
H0:FMC-CS_CY_H2O_SUP_DEGF
H0:FMC-CS_FIRE_PUMP_1
H0:FMC-CS_FIRE_PUMP_2
H0:FMC-CS_WS_RO_ALARM
H0:FMC-EX_CY_H2O_PUMPSTAT
H0:FMC-EX_CY_H2O_SUP_DEGF
H0:FMC-EY_CY_H2O_PUMPSTAT
H0:FMC-EY_CY_H2O_SUP_DEGF
All the FMCS EPICS channels have been static since around 11:00 this morning. The attached plot shows a representative channel from the CS and each out-building, showing that they stopped updating between 10:47 and 11:06. Interestingly, most of the EPICS channels did not go invalid, but some did, which explains the mixture of green/white values on the FMCS MEDMs seen this afternoon. Jonathan and Patrick restarted the FMCS IOC at 15:01, at which point all the EPICS channels went to VAL=0, SEVR=INVALID.
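For reference, here is a hedged sketch of how one can spot-check whether one of these records has gone stale or invalid with pyepics; it is a diagnostic illustration only, not part of the IOC fix.

```python
# Hedged sketch: check whether an FMCS EPICS channel is stale or invalid
# using pyepics.
import time
from epics import PV

pv = PV('H0:FMC-CS_CY_H2O_SUP_DEGF')
pv.wait_for_connection(timeout=5.0)

tv = pv.get_timevars() or {}            # severity, status, timestamp from the IOC
age = time.time() - tv.get('timestamp', 0)

print(f"value={pv.get()}  severity={tv.get('severity')}  "
      f"last update {age / 60:.1f} min ago")
# severity 3 == INVALID; a large 'age' means the record has stopped updating
```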
At the time of writing, the corner station chiller-yard and wood-shop FMCS channels continue to be good. Therefore I have un-bypassed the critical fire-pump alarms for the weekend.
The current cell-phone alarm bypass list is now
Bypass will expire:
Mon 11 Sep 2023 03:16:29 PM PDT
For channel(s):
H0:FMC-CS_CY_H2O_PUMPSTAT
H0:FMC-CS_CY_H2O_SUP_DEGF
H0:FMC-CS_WS_RO_ALARM
H0:FMC-EX_CY_H2O_PUMPSTAT
H0:FMC-EX_CY_H2O_SUP_DEGF
H0:FMC-EY_CY_H2O_PUMPSTAT
H0:FMC-EY_CY_H2O_SUP_DEGF
The ISS second loop engaged this lock with a low-ish diffracted power (about 1.5%). Oli had chatted with Jason about it, and Sheila noted that the low value could be related to the number of glitches we've been seeing. A concern is that if the control loop needs to go "below" zero percent diffracted power (which it can't do), this could cause a lockloss.
I "fixed" it by selecting IMC_LOCK to LOCKED (which opens the ISS second loop), and then selecting ISS_ON to re-close the second loop and put us back in our nominal Observing configuration. This set the diffracted power back much closer to 2.5%, which is where we want it to be.
This cycling of the ISS 2nd loop (a DC-coupled loop) dropped the power into the PRM (H1:IMC-PWR_IN_OUT16) from 57.6899 W to 57.2255 W over the course of ~1 minute, 2023-Aug-07 17:49:28 UTC to 17:50:39 UTC. It caught my attention because I saw a discrete drop in arm cavity power of ~2.5 kW while trending around looking for thermalization periods.

This serves as another lovely example where the time dependent correction factors are doing their job well, and indeed quite accurately. If we repeat the math we used back in O3 (see LHO:56118 for the derivation), we can estimate the optical gain change in two ways:
- the relative change estimated from the power on the beam splitter (assuming the power recycling gain is constant and cancels out):
  relative change = (np.sqrt(57.6858) - np.sqrt(57.2255)) / np.sqrt(57.6858) = 0.0039977 = 0.39977%
- the relative change estimated by the TDCF system, via kappa_C:
  relative change = (0.97803 - 0.974355) / 0.97803 = 0.0037576 = 0.37576%

Indeed the estimates agree quite well, especially given the noise / uncertainty in the TDCF (because we like to limit the height of the PCAL line that informs it). This gives me confidence that -- at least over these several-minute time scales -- kappa_C is accurate to within 0.1 to 0.2%. That is consistent with the uncertainty we estimate from converting the coherence between the PCAL excitation and DARM_ERR into uncertainty via Bendat & Piersol's unc = sqrt( (1-C) / (2NC) ).

It's nice to have these "sanity check" warm and fuzzies that the TDCFs are doing their job, and also nice to have a detailed record of these weird, random "what's that??" moments found while trending around. I also note that there's no change in cavity pole frequency, as expected.
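A quick sketch of the arithmetic above, using the numbers quoted in this entry (the coherence and average count in the last step are illustrative assumptions, not measured values):

```python
# Reproduce the two optical-gain-change estimates quoted above.
import numpy as np

# (1) from the power on the beam splitter (assumes constant recycling gain)
P_before, P_after = 57.6858, 57.2255   # W into the PRM
dP = (np.sqrt(P_before) - np.sqrt(P_after)) / np.sqrt(P_before)

# (2) from the TDCF system, via kappa_C
k_before, k_after = 0.97803, 0.974355
dk = (k_before - k_after) / k_before

print(f"from input power: {dP:.5%}")   # ~0.400%
print(f"from kappa_C:     {dk:.5%}")   # ~0.376%

# Bendat & Piersol coherence-based uncertainty: unc = sqrt((1-C)/(2NC))
C, N = 0.99, 10                        # example coherence and averages (assumptions)
unc = np.sqrt((1 - C) / (2 * N * C))
print(f"example kappa_C uncertainty: {unc:.3%}")
```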
When the circulating power dropped ~2.5 kW, kappa_C trended down, plot attached. This implies that the lower circulating powers induced in the previous RH tests (73093) are not the reason kappa_C increases. We may also see a slight increase in high-frequency noise as the circulating power is turned up, plot attached.
J. Kissel, D. Barker

As of today, Dave helped me install the new front-end, EPICS-controlled oscillators discussed in LHO:71746. Then, after crafting a few new MEDM screens (see comments below), I've turned ON some of those oscillators in order to replace the unstable function of the CAL_AWG_LINES guardian.

So, there are no "new" calibration lines (not since we turned CAL_AWG_LINES back ON last week at 2023-07-25 22:21:15 UTC -- see LHO:71706) -- but they're now driven by front-end, EPICS-controlled oscillators rather than by guardian using the python bindings for awg (which were unstable across computer crashes and other connection interruptions). This is true as of the first observation segment today: 2023-08-01 22:02 UTC.

However, due to a mishap with me misunderstanding the state of the PCALY SDF system (see LHO:71879), I accidentally overwrote the PCALXY comparison line at 284.01 Hz, and we went into observe. Thus, the short observation segment between 22:02 - 22:11 UTC is out of nominal configuration, because there's no PCALY line contributing to the PCALXY comparison. This was rectified by the second observation segment starting at 2023-08-01 22:16 UTC.

Also, because of these changes the subtraction team should switch their witness channel for the DARM_EXC frequencies to H1:LSC-CAL_LINE_SUM_DQ. The PCALY witness channel remains the same, H1:CAL-PCALY_EXC_SUM_DQ, as the newly used oscillators sum in to the same channel.

Below, I define which oscillator number is assigned to which frequency. Here's the latest list of calibration lines:

| Freq (Hz) | Actuator | Purpose | Channel that defines Freq | Changes Since Last Update (LHO:69736) |
|---|---|---|---|---|
| 8.825 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC1_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 8.925 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC5_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 11.475 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC2_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 11.575 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC6_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 15.175 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC3_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 15.275 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC7_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 24.400 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC4_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 24.500 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC8_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 15.6 | ETMX UIM (L1) | SUS \kappa_UIM excitation | H1:SUS-ETMY_L1_CAL_LINE_FREQ | No change |
| 16.4 | ETMX PUM (L2) | SUS \kappa_PUM excitation | H1:SUS-ETMY_L2_CAL_LINE_FREQ | No change |
| 17.1 | PCALY | Actuator kappa reference | H1:CAL-PCALY_PCALOSC1_OSC_FREQ | No change |
| 17.6 | ETMX TST (L3) | SUS \kappa_TST excitation | H1:SUS-ETMY_L3_CAL_LINE_FREQ | No change |
| 33.43 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC4_OSC_FREQ | No change |
| 53.67 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC5_OSC_FREQ | No change |
| 77.73 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC6_OSC_FREQ | No change |
| 102.13 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC7_OSC_FREQ | No change |
| 283.91 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC8_OSC_FREQ | No change |
| 284.01 | PCALY | PCALXY comparison | H1:CAL-PCALY_PCALOSC4_OSC_FREQ | Off briefly between 2023-08-01 22:02 - 22:11 UTC, back on as of 22:16 UTC |
| 410.3 | PCALY | f_cc and kappa_C | H1:CAL-PCALY_PCALOSC2_OSC_FREQ | No change |
| 1083.7 | PCALY | f_cc and kappa_C monitor | H1:CAL-PCALY_PCALOSC3_OSC_FREQ | No change |
| n*500+1.3 (n=[2,3,4,5,6,7,8]) | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC1_OSC_FREQ | No change |
As part of deprecating CAL_AWG_LINES, I've updated the ISC_LOCK guardian to use the new main switch for the DARM_EXC lines in the transitions between NOMINAL_LOW_NOISE and NLN_CAL_MEAS. That main switch channel is H1:LSC-DARMOSC_SUM_ON, which enables excitations to flow through to the DARM error point when set to 1.0 (and blocks them when set to 0.0). I've committed the new version of ISC_LOCK to the userapps repo, rev 26039.
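As a rough illustration of what the ISC_LOCK change amounts to, the relevant writes in guardian/ezca style look something like the sketch below. This is not the actual guardian code; see userapps rev 26039 for that.

```python
# Hedged sketch, guardian/ezca style, of enabling/disabling the new front-end
# DARM excitation oscillators via the main switch channel described above.
def enable_darm_cal_lines(ezca):
    # 1.0 lets the DARMOSC excitations through to the DARM error point
    # (used for NLN_CAL_MEAS)
    ezca['LSC-DARMOSC_SUM_ON'] = 1.0

def disable_darm_cal_lines(ezca):
    # 0.0 blocks the excitations (NOMINAL_LOW_NOISE)
    ezca['LSC-DARMOSC_SUM_ON'] = 0.0
```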
Here are the updated screens:
/opt/rtcds/userapps/release/lsc/common/medm/
LSC_OVERVIEW.adl
LSC_DARM_EXC_OSC_OVERVIEW.adl
LSC_CUST_DARMOSC_SUM_MTRX.adl
The new DARM oscillators screen (LSC_DARM_EXC_OSC_OVERVIEW.adl) is linked in the top-middle of the LSC_OVERVIEW.adl. The only sub screen on the LSC_DARM_EXC_OSC_OVERVIEW.adl is the summation matrix (LSC_CUST_DARMOSC_SUM_MTRX.adl).
I have not yet gotten to adding all the new PCAL oscillators to their MEDM screens, but I'll do so in the fullness of time.
detchar-request git issue for tracking purposes.
I found a bug in the
/opt/rtcds/userapps/release/lsc/common/medm/
LSC_DARM_EXC_OSC_OVERVIEW.adl
where DARMOSC1's TRAMP field was errantly displayed as all 10 oscillators' TRAMPs; a leftover from the copy-paste I did during screen generation.
Fixed it. Now committed to the above location as of rev 26170.
Finally got around to updating the PCAL screens. Check out
/opt/rtcds/userapps/release/cal/common/medm/
PCAL_END_EXC.adl
CAL_PCAL_OSC_SUM_MATRIX.adl
as of userapps repo rev 26179.
See attached screenshots.