Fri Oct 06 10:11:02 2023 INFO: Fill completed in 10min 58secs
Gerardo confirmed a good fill curbside.
Oli, Camilla. We popped into commissioning at 17:15 UTC to adjust the ITM ring heaters up from 0.05 W/segment; ITMX is now at 0.49 W and ITMY remains at 0.05 W. Accepted the changes in SDF. We are back in observing while the slow thermalization happens and plan to revert to nominal in 3 hours at 20:15 UTC.
This week's past ring heater tests are 73289, 73093, and 73272.
During this test, our high-frequency noise got worse and our circulating power dropped slightly. HOM peaks moved down in frequency, as expected when we add more RH power. DARM is attached; the 52 Hz jitter noise didn't seem to change.
Similar to Thursday morning, the picket fence server's data updates slowly degraded overnight and eventually stopped.
All was working until 02:48 PDT, when some data cycles started taking about 35 seconds (normally a cycle takes about 3 seconds). This transition is shown in the top trend.
About an hour later, at 03:48, the frequency of the long reads increased and became regular (bottom plot).
At 07:13 the updates stopped completely; this is the current situation.
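For reference, here is a minimal sketch (not the production picket fence code) of the kind of cycle-time check described above: time each data cycle and warn when it runs long. The 10-second threshold and the fetch/handle callables are assumptions for illustration.

import time

NORMAL_CYCLE_S = 3.0      # a cycle normally takes about 3 seconds
SLOW_THRESHOLD_S = 10.0   # assumed cutoff for flagging a slow cycle

def run_cycles(fetch_data, handle_data, n_cycles=100):
    """Run fetch/handle cycles and warn about unusually long ones."""
    for _ in range(n_cycles):
        t0 = time.monotonic()
        data = fetch_data()      # e.g. a seedlink read
        handle_data(data)        # e.g. caputs to the IOC
        elapsed = time.monotonic() - t0
        if elapsed > SLOW_THRESHOLD_S:
            print(f"WARNING: data cycle took {elapsed:.1f} s "
                  f"(normally ~{NORMAL_CYCLE_S:.0f} s)")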
Thanks Dave - we'll follow up with you and Erik offline to see if we can figure out what's up. We are not seeing this at Stanford. (Also tagging SEI so that I see it.)
- Later update - yes, we are seeing this at Stanford; see comments below :(
I ran the code on opslogin0 in non-EPICS mode (no caputs to the IOC) and it appears to be working. For the record, here is the command-line output:
/opt/rtcds/userapps/trunk/isi/h1/scripts/Picket-Fence/Picket_fence_code_v2.py:751: ObsPyDeprecationWarning: Deprecated keyword loglevel in __init__() call - ignoring.
self.seedlink_clients.append(SeedlinkUpdater(self.stream, myargs=self.args, lock=self.lock))
Downloading from server: cwbpub.cr.usgs.gov:18000
US_HLID:00BHZ, US_NEW:00BHZ, US_MSO:00BHZ
Downloading from server: pnsndata.ess.washington.edu:18000
UW_OTR:HHZ, UO_LAIR:HHZ
Here is the cleaned-up pickets dictionary being used (the commented-out BBB and NLWA entries have been removed):
pickets = {
    "HLID": {
        "Latitude": 43.562,
        "Longitude": -114.414,
        "Channel": "US_HLID:00BHZ",
        "PreferredServer": "cwbpub.cr.usgs.gov:18000"
    },
    "NEW": {
        "Latitude": 48.264,
        "Longitude": -117.123,
        "Channel": "US_NEW:00BHZ",
        "PreferredServer": "cwbpub.cr.usgs.gov:18000"
    },
    "OTR": {
        "Latitude": 48.08632,
        "Longitude": -124.34518,
        "Channel": "UW_OTR:HHZ",
        "PreferredServer": "pnsndata.ess.washington.edu:18000"
    },
    "MSO": {
        "Latitude": 46.829,
        "Longitude": -113.941,
        "Channel": "US_MSO:00BHZ",
        "PreferredServer": "cwbpub.cr.usgs.gov:18000"
    },
    "LAIR": {
        "Latitude": 43.16148,
        "Longitude": -123.93143,
        "Channel": "UO_LAIR:HHZ",
        "PreferredServer": "pnsndata.ess.washington.edu:18000"
    }
}
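As a side note, here is a minimal sketch (not the actual Picket_fence_code_v2.py logic) of how the pickets dictionary above maps onto the "Downloading from server" lines in the output: group the channels by their preferred server.

from collections import defaultdict

def channels_by_server(pickets):
    """Group picket channel names by their preferred seedlink server."""
    grouped = defaultdict(list)
    for picket in pickets.values():
        grouped[picket["PreferredServer"]].append(picket["Channel"])
    return grouped

for server, channels in channels_by_server(pickets).items():
    print(f"Downloading from server: {server}")
    print(", ".join(channels))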
I restarted picket fence on nuc5 by running /opt/rtcds/userapps/release/isi/h1/scripts/Picket-Fence/picket_epics.sh from a controls VNC remote display session, and it is running normally again.
USGS sent an email at 10am reporting that one of the servers LHO uses (cwbpub.cr.usgs.gov) is being migrated. We are working on using a backup server in its place.
email:
The NEIC is in process of migrating services currently hosted at the Denver Federal Center to a new location. As a result, the availability of waveform services provided on the CWB 137.227.224.97 (cwbpb) are in flux. This may last for some time, possibly into early 2024.
Thank you very much for this monitoring. The new EPICS channels are coming in handy.
We also had a similar crash at Stanford, and there is a report of a restart at LLO this morning: LLO aLog 67616. The evidence seems to indicate the problem stems from the USGS server side.
Looking at the error messages on our local computer, it seems that the connection to the servers timed out ---> the picket fence attempts automatic restarts ----> the restart fails because lsim chokes on trying to filter an empty data vector. This error is not handled in the filtering script, so I need to dig into what to do to make sure the restart works properly.
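As a sketch of the kind of guard that could let the automatic restart survive this, the filtering step could pass an empty trace through instead of handing it to lsim. The use of scipy.signal.lsim and the function/argument names below are assumptions for illustration, not the actual filtering script.

import numpy as np
from scipy import signal

def filter_trace(lti_sys, data, dt):
    """Filter a ground-motion trace; pass an empty trace through untouched."""
    data = np.asarray(data, dtype=float)
    if data.size == 0:
        # The server timed out and the restart handed us no samples: skip the
        # lsim call instead of choking on an empty data vector.
        return data
    t = np.arange(data.size) * dt
    _, y, _ = signal.lsim(lti_sys, U=data, T=t)
    return y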
More importantly, we need to ping our USGS friends to see if this is part of some maintenance situation.
Edgard
TITLE: 10/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
Up and locked for 16 hours. The dustmon alarms for LVEA6 300nm and LVEA10 300nm are both going off, but it looks like their counts maxed out at 70 and 170, respectively, about 3 hours ago, so not too high.
The picket fence script still looks broken.
The non-Beckhoff weather stations are still stopping in the early morning (at random times between 3am and 5am). This morning they stopped at 03:26 and were restarted by my cronjob at 05:00. The EDC briefly disconnected from these 52 channels at 03:35.
I have increased the frequency of the crontab entry that restarts any invalid weather stations to hourly. This crontab runs as user controls on h0epics:
# <WEATHER>
# 4am restart out-building weather station IOCs, for unknown reasons they
# regularly lose connection between 03:18-03:28.
# D.Barker LHO 17jan2023
# 06oct2023 DB Run hourly, freeze ups currently happening between 3am and 4am
01 * * * * /ligo/home/controls/restart_weather > /dev/null
# </WEATHER>
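For reference, a hypothetical sketch of what such an hourly check could look like; the channel names, the use of pyepics caget, and the restart command below are placeholders for illustration, not the contents of the actual restart_weather script.

import subprocess
from epics import caget   # pyepics

# Hypothetical out-building weather station channels to poll.
STATIONS = {
    "MidX": "H0:PEM-MX_WIND_MPH_AVG",
    "MidY": "H0:PEM-MY_WIND_MPH_AVG",
    "EndX": "H0:PEM-EX_WIND_MPH_AVG",
    "EndY": "H0:PEM-EY_WIND_MPH_AVG",
}

for station, channel in STATIONS.items():
    value = caget(channel, timeout=5.0)
    if value is None:
        # No response from the station's IOC: restart it (placeholder command).
        subprocess.run(["/ligo/home/controls/restart_weather_ioc", station])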
TITLE: 10/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
- Arrived with H1 locked, waiting for ADS to converge
- EX saturation @ 0:35
- DM 102 alert for PSL102 500 nm, trend attached, looks like the counts got up to 100, will monitor
- 1:43 - EQ mode activated, no alert from Verbal for any EQ, peakmon counts are hovering around 600-700
- 3:51 - another EQ, this one was a 5.4 from Panama
LOG:
No log for this shift.
A couple of EQs that we were able to ride through; ground motion has since settled. Otherwise, H1 appears to be stable, currently locked for 4 hours.
TITLE: 10/05 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 5mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
- IFO is currently at NLN, waiting for ADS to converge
- CDS/SEI/DMs ok
TITLE: 10/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
We dropped out of observing for a quick ring heater change from 17:34:23 to 17:35:25 UTC. We dropped out of observing again, starting at 20:39:26 UTC, to revert the ring heater changes and for Dave to restart picket fence, which had been frozen, but unfortunately we lost lock soon after at 20:46 UTC.
While relocking, I had to play the DOF dance a little with the Y-arm, turning off WFS_DOF_1_P to get ALS to lock. I couldn't get any flashes on DRMI or PRMI, and CHECK_MICH lost lock, so I started an initial alignment at 21:37 UTC, which finished at 22:05 UTC. Robert and Ryan went into the LVEA to investigate the PSL airflow while we were relocking, and Robert did a sweep as he left.
We reacquired NLN at 22:46 UTC; we're just waiting on ADS to converge to go into observing. PIs 24 and 31 rang up early on but were quickly damped by the guardian.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End
---|---|---|---|---|---|---
15:02 | FAC | Randy | MidX | N | Delivery, 1445 | 15:11
15:37 | FAC | Karen | Optics lab, vac prep | N | Tech clean | 16:12
18:16 | VAC | Jordan | VPW | N | Move pumps to mech room | 18:29
20:28 | FAC | Fil | MidY | N | Inventory | 21:58
20:36 | VAC | Jordan, Travis | EndY | N | Mech room pumps | 21:55
21:55 | VAC | Jordan, Travis | EndX | N | Pumps | 22:19
22:01 | PEM | Robert | Outside PSL enclosure | N | Check mouse traps/guards | 22:18
22:19 | PSL | RyanS, Robert | LVEA, PSL | N | Check PSL air handler | 22:34
Picket fence has not been updating recently. Ryan restarted the process on nuc5, which did not fix the issue.
Attached MEDM shows the 08:24 restart, a server uptime of only 2 mins, and no update for 24 mins.
We are investigating.
I restarted the service on nuc5 at 13:47. It has been running with no issues since that time (51 minutes).
We've been running with both cleanrooms in the Optics Lab and the flow bench furthest from the door on. Robert said that in the past (23169?) we haven't seen noise from these, but we should repeat the test now that we have higher sensitivity.
The cleanrooms and flow bench were turned off for 8-minute intervals at the times below (±5 seconds).
Turned off 22:40:00 to 22:48:05 UTC
Turned off 22:56:00 to 23:04:00 UTC
Turned off 23:12:00 to 23:20:00 UTC
I investigated these times as part of my DetChar DQ shift. I see no evidence for any difference in the noise distribution during the tests.
Turned on a small cleanroom inside the filter cavity enclosure at 21:45:10 UTC, let the cleanroom run for 15 minutes, then turned it off. Turned the cleanroom back ON at 22:00:00 UTC and once again let it run for 15 minutes. The cleanroom was not moved and is now off.
Tagging PEM.
At the start of commissioning at 19:01 UTC, we went out of observing to turn the CO2X power up from 1.53 to 1.67 W. The power had dropped 7% since May, see 72943. I've edited lscparams and reloaded the TCS_ITMX_CO2_PWR guardian, expecting we'll want to keep this change.
Over the last 2 weeks the CO2X power has dropped 0.04 W (2%), plot attached. We should keep an eye on this and maybe bump up the requested power again in the coming weeks. We plan to replace the CO2X chiller when a new one arrives. We may also want to replace the laser with the re-gassed laser.
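For context, a hypothetical illustration of the kind of one-line lscparams change mentioned above; the parameter name is invented for illustration and is not the real name used in lscparams.py.

# CO2X requested power raised to compensate for the ~7% output drop since May
# (alog 72943); the TCS_ITMX_CO2_PWR guardian is reloaded to pick this up.
CO2X_REQUESTED_POWER_W = 1.67   # was 1.53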