TITLE: 10/06 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 9mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY:
All is nominal for H1 thus far this early evening, with H1 approaching 24.5 hrs of being locked. There has been a slight increase in the microseismic band over the last 8 hrs, and winds are mostly under 10 mph.
I've made a few updates to my script that graphs the Inlock and Oplev charge measurements. I've run the code with the latest measurements and included a minute trend of Kappa TST (H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT) on the same 6-month time scale, with the Y-axis scaled a bit for better readability. The code is located at /ligo/home/ryan.crouch/Desktop/Charge_measurements/compare.py, and you need to be in the nds2utils environment to run it (conda activate nds2utils). I've also included a standalone, unmodified Kappa trend.
| Test Mass | Inlock slope [V/week] | Oplev slope [V/week] |
| ETMY | -0.18 | -0.17 |
| ETMX | +0.41 | -0.26 |
The ETMY charge values seem to be trending down towards zero. The ETMX charge values seem to be converging towards zero as well now (yay?). Kappa_TST appears to be slowly increasing, or slowly decaying with a sign flip.
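For reference, here is a minimal sketch (not compare.py itself) of pulling the same kind of 6-month minute trend of the Kappa TST channel. It assumes gwpy is importable from the nds2utils environment; the date range and y-limits below are placeholders.

# Minimal sketch, not compare.py: fetch a ~6-month minute trend of
# Kappa TST over NDS2 and plot it with a tightened y-axis.
# Assumes gwpy is available in the nds2utils conda environment;
# the dates and y-limits are placeholders.
from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps

chan = 'H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT.mean,m-trend'  # minute-trend mean
start = to_gps('2023-04-06')
end = to_gps('2023-10-06')

kappa = TimeSeries.get(chan, start, end)
plot = kappa.plot(ylabel='Kappa TST')
plot.gca().set_ylim(0.98, 1.02)  # placeholder limits for readability
plot.savefig('kappa_tst_trend.png')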
I've added a test to DIAG_MAIN that shows "Check PSL chiller" when any of the PSL chiller status channels are non-nominal (ENABLED, FLOWERROR, or ALARM). The most likely scenario for this message to show is a low chiller water level, since if the chiller flow stops or the chiller isn't enabled, the NPRO can't run at all.
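Roughly, DIAG_MAIN tests are generator functions that yield a message when a check fails. A sketch of the kind of test described above follows; the channel names and nominal values are hypothetical placeholders, not the real PSL chiller channels.

# Sketch of a DIAG_MAIN-style test. The channel names and nominal values
# here are hypothetical placeholders, not the real PSL chiller channels;
# ezca is the EPICS interface provided by the Guardian environment.
PSL_CHILLER_NOMINALS = {
    'H1:PSL-CHILLER_ENABLED': 1,     # placeholder: 1 = chiller enabled
    'H1:PSL-CHILLER_FLOWERROR': 0,   # placeholder: 0 = no flow error
    'H1:PSL-CHILLER_ALARM': 0,       # placeholder: 0 = no alarm
}

def PSL_CHILLER_CHECK():
    """Yield one message if any chiller status channel is non-nominal."""
    for chan, nominal in PSL_CHILLER_NOMINALS.items():
        if ezca[chan] != nominal:
            yield 'Check PSL chiller'
            break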
Fri Oct 06 10:11:02 2023 INFO: Fill completed in 10min 58secs
Gerardo confirmed a good fill curbside.
Oli, Camilla. We popped into commissioning at 17:15 UTC to adjust the ITM ring heaters up from 0.05 W/segment; ITMX is now at 0.49 W and ITMY at 0.05 W. Accepted the changes in SDF. Back in observing while the slow thermalization happens; we plan to revert to nominal in 3 hours, at 20:15 UTC.
This week's past ring heater tests are 73289, 73093, and 73272.
During this test, our high-frequency noise got worse and our circulating power dropped slightly. HOM peaks moved down in frequency, as expected when we add more RH power. DARM attached; the 52 Hz jitter noise didn't seem to change.
Similar to Thursday morning, the picket fence server data updates slowly degraded overnight and eventually stopped.
All was working until 02:48 PDT, when some data cycles started taking about 35 seconds (normally a cycle takes about 3 seconds). This transition is shown in the top trend.
About an hour later, at 03:48, the long reads became more frequent and regular (bottom plot).
At 07:13 the updates stopped completely; this is the current situation.
Thanks Dave - we'll follow up with you and Erik offline to see if we can figure out what's up. We are not seeing this at Stanford. (Also tagging SEI so that I see it.)
- Later update - yes, we are seeing this at Stanford; see comments below :(
I ran the code on opslogin0 in non-EPICS mode (no caputs to the IOC) and it appears to be working. For the record, here is the command-line output:
/opt/rtcds/userapps/trunk/isi/h1/scripts/Picket-Fence/Picket_fence_code_v2.py:751: ObsPyDeprecationWarning: Deprecated keyword loglevel in __init__() call - ignoring.
self.seedlink_clients.append(SeedlinkUpdater(self.stream, myargs=self.args, lock=self.lock))
Downloading from server: cwbpub.cr.usgs.gov:18000
US_HLID:00BHZ, US_NEW:00BHZ, US_MSO:00BHZ
Downloading from server: pnsndata.ess.washington.edu:18000
UW_OTR:HHZ, UO_LAIR:HHZ
Here is the cleaned-up pickets dictionary being used (the commented-out BBB and NLWA entries have been removed):
pickets = {
    "HLID": {
        "Latitude": 43.562,
        "Longitude": -114.414,
        "Channel": "US_HLID:00BHZ",
        "PreferredServer": "cwbpub.cr.usgs.gov:18000"
    },
    "NEW": {
        "Latitude": 48.264,
        "Longitude": -117.123,
        "Channel": "US_NEW:00BHZ",
        "PreferredServer": "cwbpub.cr.usgs.gov:18000"
    },
    "OTR": {
        "Latitude": 48.08632,
        "Longitude": -124.34518,
        "Channel": "UW_OTR:HHZ",
        "PreferredServer": "pnsndata.ess.washington.edu:18000"
    },
    "MSO": {
        "Latitude": 46.829,
        "Longitude": -113.941,
        "Channel": "US_MSO:00BHZ",
        "PreferredServer": "cwbpub.cr.usgs.gov:18000"
    },
    "LAIR": {
        "Latitude": 43.16148,
        "Longitude": -123.93143,
        "Channel": "UO_LAIR:HHZ",
        "PreferredServer": "pnsndata.ess.washington.edu:18000"
    }
}
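As a quick sanity check, here is a small sketch (not part of Picket_fence_code_v2.py) that groups the pickets above by their preferred server, reproducing the per-server channel lists in the output shown earlier:

# Sketch only, not the picket-fence code: group picket channels by their
# preferred seedlink server and print a summary like the output above.
from collections import defaultdict

def channels_by_server(pickets):
    servers = defaultdict(list)
    for name, info in pickets.items():
        servers[info["PreferredServer"]].append(info["Channel"])
    return servers

for server, channels in channels_by_server(pickets).items():
    print("Downloading from server: " + server)
    print(", ".join(channels))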
I restarted picket fence on nuc5 by running /opt/rtcds/userapps/release/isi/h1/scripts/Picket-Fence/picket_epics.sh from a controls VNC remote display session, and it is running normally again.
USGS sent an email at 10am reporting that one of the servers LHO uses (cwbpub.cr.usgs.gov) is being migrated. We are working on using a backup server in its place.
email:
The NEIC is in process of migrating services currently hosted at the Denver Federal Center to a new location. As a result, the availability of waveform services provided on the CWB 137.227.224.97 (cwbpb) are in flux. This may last for some time, possibly into early 2024.
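One simple way to handle the backup-server plan mentioned above would be to try the preferred server first and fall back if it is unreachable. A minimal sketch follows (stdlib only; the backup hostname is a made-up placeholder, and this is not the production code):

# Minimal sketch: return the first seedlink server that accepts a TCP
# connection, so a migrated or unreachable preferred server falls back
# cleanly to a backup.
import socket

def pick_server(servers, timeout=5.0):
    """Return the first 'host:port' entry that accepts a TCP connection."""
    for server in servers:
        host, port = server.rsplit(":", 1)
        try:
            with socket.create_connection((host, int(port)), timeout=timeout):
                return server
        except OSError:
            continue
    raise RuntimeError("No seedlink server reachable: " + ", ".join(servers))

# Example usage; 'backup.example.org' is a placeholder, not a real server.
# server = pick_server(["cwbpub.cr.usgs.gov:18000", "backup.example.org:18000"])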
Thank you very much for this monitoring. The new EPICS channels are coming in handy.
We also had a similar crash at Stanford, and there is a report of a restart at LLO this morning: LLO aLog 67616. The evidence seems to indicate the problem stems from the USGS server side.
Looking at the error messages on our local computer, it seems that the connection to the servers timed out ---> the picket fence attempts automatic restarts ---> the picket fence fails because lsim chokes on trying to filter an empty data vector. This failure is not handled gracefully in the filtering script, so I need to dig into what to do to make sure the restart works properly.
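A minimal sketch of the kind of guard that would avoid this failure mode; this is not the actual picket-fence filtering code, just an illustration of skipping the lsim call when the data vector comes back empty after a timeout:

# Illustration only, not the picket-fence filtering script: skip the
# lsim step when the seedlink read returns an empty data vector, so an
# automatic restart does not crash on empty input.
import numpy as np
from scipy import signal

def filter_trace(system, data, dt):
    """Apply an LTI filter to a trace; return None if the trace is empty."""
    data = np.asarray(data, dtype=float)
    if data.size == 0:
        # Nothing to filter yet (e.g. right after a restart or timeout).
        return None
    t = np.arange(data.size) * dt
    _, filtered, _ = signal.lsim(system, data, t)
    return filtered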
More importantly, we need to ping our USGS friends to see if this is part of some maintenance situation.
Edgard
TITLE: 10/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
Up and locked for 16 hours. The dustmon alarms for LVEA6 300nm and LVEA10 300nm are both going off, but it looks like their counts maxed out at 70 and 170, respectively, about 3 hours ago, so not too high.
The picket fence script still looks broken.
The non-Beckhoff weather stations are still stopping in the early morning (at random times between 3am and 5am). This morning they stopped at 03:26 and were restarted by my cronjob at 05:00. The EDC briefly disconnected from these 52 channels at 03:35.
I have increased the frequency of the crontab which restarts any invalid weather stations to hourly. This crontab runs as user controls on h0epics:
# <WEATHER>
# 4am restart out-building weather station IOCs, for unknown reasons they
# regularly lose connection between 03:18-03:28.
# D.Barker LHO 17jan2023
# 06oct2023 DB Run hourly, freeze ups currently happening between 3am and 4am
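# fields: minute hour day-of-month month day-of-week; 01 * * * * = minute 01 of every hour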
01 * * * * /ligo/home/controls/restart_weather > /dev/null
# </WEATHER>
TITLE: 10/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
- Arrived with H1 locked, waiting for ADS to converge
- EX saturation @ 0:35
- DM 102 alert for PSL102 500 nm, trend attached, looks like the counts got up to 100, will monitor
- 1:43 - EQ mode activated, no alert from verbal for any EQ, peakmon counts are hovering around 600-700
- 3:51 - another EQ, this one was a 5.4 from Panama
LOG:
No log for this shift.
A couple of EQs that we were able to ride through; ground motion has since settled. Otherwise, H1 appears stable, currently locked for 4 hours.
TITLE: 10/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 5mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
- IFO is currently at NLN, waiting for ADS to converge
- CDS/SEI/DMs ok
Both of the cleanrooms in the Optics Lab and the flow bench furthest from the door have been running. Robert said that in the past (23169?) we haven't seen noise from these, but we should repeat the test now that we have higher sensitivity.
Cleanrooms and flow bench were turned off for 8-minute intervals at the times below (±5 seconds).
Turned off 22:40:00 to 22:48:05 UTC
Turned off 22:56:00 to 23:04:00 UTC
Turned off 23:12:00 to 23:20:00 UTC
I investigated these times as part of my DetChar DQ shift. I see no evidence for any difference in the noise distribution during the tests.
Turned on a small cleanroom inside the filter cavity enclosure at 21:45:10 UTC, let it run for 15 minutes, then turned it off. Turned the cleanroom back ON at 22:00:00 UTC and once again let it run for 15 minutes. The cleanroom was not moved and is now off.
Tagging PEM.