I had a look at lock loss times and their durations during O1 (only lock losses from nominal low noise). I've had a brief look at a couple of the physical environment channels (H1:PEM-EY_WIND_ROOF_WEATHER_MPH for wind and H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M for seismic) to see how they are correlated with these lock losses. A few plots are attached below. All of the data for these plots are mean minute trends.
Plots:
Duty_cycle_noncumulative.pdf - This is a 3D plot showing the duty cycle for different wind and seismic bins (percentiles)
Duty_cycle_cumulative.pdf - This is a 3D plot showing the duty cycle when the wind or seismic behaviour is greater than or equal to the given percentiles
Downtime_wind_seis.pdf - This is a 3D column plot showing the integrated 'downtime' that results from a lock loss in the given wind and seismic bins (percentiles)
lock_losses.pdf - This plot shows the number of lock losses per percentile bin; MATLAB's surf makes it look a bit weird.
I've also attached lock_losses.txt, which is a list of the GPS times when the lock losses occur (first column) and their durations in seconds (second column). The durations and loss times are unfortunately only good to the nearest minute, since I used minute trend data.
Percentiles.txt contains the wind (first row) and seismic (second row) channel values that correspond to 5% intervals starting from the 0th percentile (0, 5, 10, ...). Note these are for the mean minute trends, which are much lower than the maximum minute trends.
The scripts used to generate these plots are located at: https://dcc.ligo.org/T1600211
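For reference, a minimal sketch (not the DCC scripts) of how the lock losses can be binned by wind/seismic percentile with numpy; the file names below are placeholders for the minute-trend data and for the channel values at the lock-loss times.

import numpy as np

# Placeholder inputs: full-span mean minute trends of the two channels, and
# the same channels sampled at each lock-loss time.
wind_trend = np.loadtxt('wind_minute_trend.txt')
seis_trend = np.loadtxt('seis_minute_trend.txt')
wind_at_loss = np.loadtxt('wind_at_lockloss.txt')
seis_at_loss = np.loadtxt('seis_at_lockloss.txt')

# Percentile edges every 5%, as in Percentiles.txt (0, 5, 10, ..., 100).
pct = np.arange(0, 101, 5)
wind_edges = np.percentile(wind_trend, pct)
seis_edges = np.percentile(seis_trend, pct)

# Number of lock losses in each (wind, seismic) percentile bin.
counts, _, _ = np.histogram2d(wind_at_loss, seis_at_loss,
                              bins=[wind_edges, seis_edges])
print(counts)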
I have built a 24-hour version of SenseMonitor on the redundant machine (h1dmt2) and it is apparently running now. I also modified it to initialize the dmtviewer time series from the minute trends (this should get rid of the errors that showed up every time the DMT was shut down without shutting down SenseMon cleanly in advance). This version is available right now as the SenseMonitor_rhoft_H1 monitor. If it looks like it is working well, I will make this version the default. A few notes:
a) As of now, the rhoft version will not come back without a little fidgeting. Let me know if it stops working.
b) Because of (a), it would be good to get a definitive statement about whether this is what is desired, so I can install the modified code.
c) Jim requested that the _CAL_ version be modified, but I changed the _hoft_ version. This was purely for ease of testing. Once I install the new executable, all versions will be 24 hours. If you want some to be 24 and some to be 12 hours, I can fix that by juggling configurations.
d) Yes, Chris and Vern, this means that I can now start up a BBH SenseMon.
Attached are two illustrations: the first is a notional schematic of the PUM RMS "watchdog" showing the interconnections between the RMS circuit, the "trigger logic", and the driver amplifier. The second drawing shows the signal levels through the circuitry. The information for these illustrations came from drawings D070483, PUM Coil Driver Sheets 1 & 2, and D0900975, Monitor Board. (D0900975 is an annotated update of the old D070480 Monitor Board.)
The RMS WD can be reset in two ways: either by powering off, waiting a minute, and powering back on, which allows the reset circuit (R17, R18, C18, and U15B) to clear the D flip-flops, or by a high-to-low voltage transition on P18 of connector J7 on the PUM Coil Driver card. A binary I/O and software (MEDM button) are available to do the latter remotely; Richard M. and Fil C. are checking the integrity of this. The state of the trigger D flip-flops (tripped or not tripped) is available on the MEDM screen for the PUM drivers.
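As a rough illustration of the remote reset path (not the installed code), the high-to-low transition could be driven from Python with EPICS; the channel name below is a placeholder for whatever binary output the MEDM button actually writes to.

import time
import epics  # pyepics

RESET_CHANNEL = 'H1:SUS-ETMX_BIO_L2_WD_RESET'  # hypothetical channel name

# Drive the line high, pause briefly, then low: the reset triggers on the
# high-to-low transition at P18 of connector J7.
epics.caput(RESET_CHANNEL, 1)
time.sleep(0.5)
epics.caput(RESET_CHANNEL, 0)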
0915 - 0930 hrs. local -> To and from Y-mid. Opened exhaust check valve bypass valve. Opened LLCV bypass valve 1/2 turn -> LN2 at exhaust after 28 seconds -> Returned valves to as-found state. Next overfill to be Monday, June 13th.
SEI: BRS upgrade at End-X underway and going well. SUS: RMS WD at End-X is tripped and cannot be reset. Being worked on. VAC: Installing RGAs at both mid stations. All other groups report no outstanding issues.
Sheila, Evan, Carl, Terra
At the end of the night we had several locklosses when trying to engage the soft loops. It seems like the problem could be turning on the extra 10 dB of offloading gain in the top mass, which we have been doing without a problem for several weeks now. In the last lockloss like this, the L2 RMS watchdogs tripped on all 4 test masses (not all coils). Evan manually reset the Y arm optics, but we couldn't reset the X arm optics. For ETMX, Terra and I went to the end station and tried power cycling the coil driver, but this didn't allow us to reset the watchdogs. Carl and Evan power cycled the ITMY coil driver and were able to reset the RMS WDs, but the software WD could not be reset.
Also, we have had several EQs tonight (both large and small) and tried turning sensor correction on and off.
Here is a time series of the L2 master outs during the lockloss where the RMS WDs tripped. This lockloss happened during a 15 second sleep in the guardian, which is why DOWN was not run for so long after the lock was lost. We've eliminated this particular long sleep, but there are other places where we have similar sleeps that need to be fixed. We went through the guardian states up to DC_READOUT this afternoon, rewriting the places that have long sleeps. We've tested this up to ENGAGE_SOFT_LOOPS.
The version of ISC_LOCK committed just before starting these edits is 13570; the version after is 13576.
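For illustration, this is the kind of rewrite we are doing (a simplified stand-in, not the actual ISC_LOCK code): instead of a blocking time.sleep(15) inside a state's main(), we set a guardian timer and poll it in run(), so guardian can still respond to a lock loss during the wait. The state and timer names below are made up.

from guardian import GuardState

class EXAMPLE_STATE(GuardState):
    def main(self):
        # ...engage whatever this state engages...
        self.timer['settle'] = 15      # replaces time.sleep(15)

    def run(self):
        if not self.timer['settle']:   # timer evaluates True once 15 s elapse
            return False               # keep waiting; guardian stays responsive
        # ...continue once the wait is over...
        return True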
TITLE: 06/10 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: None
SHIFT SUMMARY: H1 was locking fairly well until the commissioners left for their dinner break. Then we were slammed by a series of EQs (a 6.1 in Nicaragua with 3 aftershocks, and a 5.9 in the Solomon Islands). Since then, things have not been locking so well.
LOG:
3:46 Sensor Correction turned off in anticipation of incoming EQ.
3:57 ITMx and ITMy ISI WD tripped.
4:35 Attempted to start locking again, but could not get past DRMI. Realized this was due to another incoming EQ. Set IFO to DOWN.
5:58 Started Initial Alignment
6:35 Finished IA, back to locking
Bounce damping settings were changed for IY and EX so that they damp at all. We don't know why the old settings no longer work.
For EX I just flipped the sign.
For IY, I rotated the phase by 210 degrees in effect and increased the gain significantly. The gain increase might be unnecessary, but I left it like that because it seems to work faster.
EX:
Old: Gain = -0.03 (negative), FM1 (+60 deg), FM4 (BP)
New: Gain = +0.03, FM1 (+60 deg), FM4 (BP)
IY:
Old: Gain = -0.03 (negative), FM1 (+60 deg), FM3 (BP), FM6 (+30 deg) -> negative sign + 60 deg + 30 deg = -90 deg total
New: Gain = -0.15 (negative), FM2 (-60 deg), FM3 (BP) -> negative sign - 60 deg = +120 deg total
ISC_Lock was changed but not loaded.
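For reference, a small sketch of the phase bookkeeping in the table above: a negative gain counts as 180 deg, the filter-module phases add, and the total is wrapped into (-180, 180].

def total_phase(gain_sign, *fm_phases_deg):
    """Total damping phase: 180 deg for a negative gain plus the FM phases."""
    total = (180 if gain_sign < 0 else 0) + sum(fm_phases_deg)
    return (total + 180) % 360 - 180   # wrap into (-180, 180]

print(total_phase(-1, 60, 30))   # old IY settings: -90 deg
print(total_phase(-1, -60))      # new IY settings: +120 deg (a 210 deg rotation)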
Travis and I added a filter with negative gain to the IY (and EX) bounce damping, so the gains for all bounce and roll damping filters will be positive from now on. The guardian is updated for this.
We found that in the DARM_OFFSET state we got good damping for ETMX with a positive gain and -60 degrees of phase. This worked better than the settings that Keita described in the DARM_OFFSET state, but we also used Keita's settings successfully in DHARD_WFS.
TJ and I saw that with the Keita settings at DARM_OFFSET we were ringing up the bounce mode for ETMX. So, now in the RESONANCE state the bounce damping gets set to the Sheila settings. It is unclear why this one needs a 120 deg phase change but no others do; perhaps it is related to why the settings needed changing in the first place. When the bounce damping is first turned on in DHARD_WFS, it goes back to the Keita settings.
We have been using a bandpass filter for ETMX bounce mode damping that is designed for the ETMY bounce frequency. For the other optics we use a broad bandpass, so I've now set the guardian to do that for ETMX as well. The new settings are:
FM3 (broad BP), and FM8 (gain -1). These are in the guardian for now, to be used in all states, so I've commented out the filter change in RESONANCE.
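For illustration, a hedged sketch of how the guardian might engage these filters with ezca; the filter-bank name below is a placeholder, not necessarily the real bounce-damping bank, and FM4 is assumed to be the old ETMY-frequency bandpass.

def set_etmx_bounce_damping(ezca):
    """Sketch only: engage the new ETMX bounce damping filter settings."""
    bank = 'SUS-ETMX_M0_DARM_DAMP_V'        # hypothetical filter-bank name
    ezca.switch(bank, 'FM3', 'FM8', 'ON')   # broad BP and the gain = -1 filter
    ezca.switch(bank, 'FM4', 'OFF')         # assumed old narrow (ETMY-frequency) BP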
When we have the notches in the ASC off, we can damp the bounce modes as seen in the DARM spectrum. However, even when they appear to be reduced in the DARM spectrum they seem to be increasing in the MICH ASC signals.
I copied quad bounce and roll notches to the BS ASC loops as well.
I looked at a 1 mHz spectrum of the bounce modes from last night's lock, and the frequencies are consistent with what we reported one year ago:
ETMY: 9.731 Hz
ETMX: 9.776 Hz
ITMY: 9.831 Hz
ITMX: not rung up last night, but should be 9.847 Hz
The confusion about the ITMY mode frequency (corrected by Evan in the comment) was unfortunately propagated to our monitoring filters. I've corrected the monitor bandpass for ITMY and made all of the bandpasses 10 mHz wide (before they were all 30 mHz wide, meaning that the monitors could not distinguish between the ITMs).
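Since ITMY (9.831 Hz) and ITMX (9.847 Hz) are only about 16 mHz apart, the old 30 mHz wide bands overlapped both modes; 10 mHz wide bands keep them separate. A rough sketch of one such monitor bandpass with scipy (the 256 Hz sample rate and the filter order are assumptions, not the monitor's actual settings):

from scipy.signal import butter, sosfiltfilt
import numpy as np

fs = 256.0                 # assumed sample rate of the monitored channel
f0 = 9.831                 # corrected ITMY bounce frequency [Hz]
bw = 0.010                 # 10 mHz full width
sos = butter(4, [f0 - bw / 2, f0 + bw / 2], btype='bandpass', fs=fs, output='sos')

# Example: band-limit a (placeholder) time series and take its RMS.
x = np.random.randn(int(600 * fs))
rms = np.sqrt(np.mean(sosfiltfilt(sos, x) ** 2))
print(rms)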
Sheila and I rewrote the analog CARM transition so that we now hop directly from the digital normalized in-air REFL signal to the analog in-vacuum REFL signal (rather than the analog in-air signal). This has worked twice in a row, so maybe this will help us avoid some of the CARM switching instabilities.
Title: 06/09/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: IFO unlocked. Half day of maintenance.
Commissioning: Recovering the IFO and working on problems
Outgoing Operator: None
Activity Log: All times in UTC (PT)
14:45 (07:45) Ken – Working in upstairs CER
14:47 (07:47) Peter – Taking survey of IO tables in LVEA
14:48 (07:48) Tim Nelson on site – to see Bubba
15:01 (08:01) Krishna & Mike – Going to End-X to work on BRS
15:21 (08:21) Gerardo – Going to End-Y compressor room to work on IP11 wiring
16:04 (09:04) Bubba & Tim – Going into LVEA for facilities inspection
16:05 (09:05) Gerardo – Back from End-Y
16:07 (09:07) Electrical parts delivery for Richard – dropped off at the LVEA rollup door
16:15 (09:15) Krishna & Mike – Back from End-X
16:16 (09:16) Bubba & Tim – Finished in the LVEA
16:17 (09:17) Kyle – Going to Mid-X to take measurements
16:18 (09:18) Filiberto & Gerardo – Going to End-X to work on binary IO (WP #5925)
16:25 (09:25) Bubba & Tim – Going to End-X for facilities inspection
16:30 (09:30) Jason & Ed – Going into the PSL enclosure to work on DBB alignment
16:54 (09:54) TJ & Jim – Starting new Guardian nodes for both end stations (WP #5907)
16:56 (09:56) Kyle – Back from Mid-X
17:00 (10:00) Filiberto & Gerardo – Back from End-X
17:20 (10:20) Christina – Forklifting wood crates from mechanical building to staging building
17:27 (10:27) Bubba & Tim – Finished at End-X, going to End-Y
17:38 (10:38) Haocun – Taking a tour through the LVEA
17:50 (10:50) Kyle – Going into the LVEA to get a helium tank
17:58 (10:58) Bubba & Tim – Finished at End-Y
18:00 (11:00) Haocun – Out of LVEA
18:15 (11:15) Sheila, Haocun, & Filiberto – Going to Mid-Y to get some boards
18:27 (11:27) Bubba & Tim – Going into the CER for facilities inspection
18:34 (11:34) TJ & Jim – Finished with Guardian work
18:35 (11:35) Jason & Ed – Out of PSL
18:40 (11:40) Karen – Cleaning in the H2 building
19:00 (12:00) Sheila, Haocun, & Filiberto – Back from Mid-Y
19:05 (12:05) Karen – Out of the H2 building
19:14 (12:14) Cheryl – Going into Squeezer Bay to look for parts
19:49 (12:49) Carlos – Going into MSR to take inventory
19:51 (12:51) Cheryl – Out of the LVEA
20:00 (13:00) Start alignment and locking after maintenance window
20:09 (13:09) Dave – Restarting all PI models and the DAQ
23:00 (16:00) Turn over to Travis
End of Shift Summary:
Title: 06/09/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
Support: Evan, Sheila
Incoming Operator: Travis
Shift Detail Summary: Adjusted Y-arm fiber polarization from 16.5% to 6.0%. After a half-plus day of maintenance, worked on recovering the IFO. Considerable work was needed to get the IFO back into a functional alignment. That completed, the IFO is locking up to ENGAGE_SOFT_LOOPS. There is still a problem getting through REFL_IN_VACUO. Sheila and Evan are working on this problem.
The wires that were off the connector for the IP11 signal cable were fixed today; IP11 is back on and running without problems.
I zoomed in on where the oplev signal was before the outage, and where it is now.
While I was checking through the list of running nodes, I noticed that there were a total of 5 TCS nodes. It seems that the old TCS_ITMX diagnostic node was still running, but no one knew since it was not on the overview or anywhere else. Since the newer TCS_ITMX_CO2 node has replaced this one, I destroyed the node after confirming with Nutsinee.
The DAQ EDCU was briefly GREEN until this node was removed; it is now back to PURPLE.
Added new SEI configuration Guardian nodes to all of the corner station chambers except HAM1 (soon). For the CS BSCs the code is almost identical to what is run at the end stations, but with the BRS checks removed. The HAM config nodes are a bit different, as they do not change the blends, only sensor correction.
In the near future there will be a SEI configuration manager that can change all of the chamber configurations at once. This will let us prep the IFO for incoming earthquakes or change the configuration for different environments.
Updated Guardian Overview shot attached.
WP#5907
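As a rough illustration (not the actual node code), a HAM configuration node that only toggles sensor correction might look like the skeleton below, written the way guardian loads it (ezca is provided by guardian); the state names and the channel are placeholders.

from guardian import GuardState

nominal = 'SC_ON'

class SC_ON(GuardState):
    request = True
    def main(self):
        # hypothetical sensor-correction gain channel
        ezca['ISI-HAM2_SENSCOR_GND_STS_X_GAIN'] = 1.0
        return True

class SC_OFF(GuardState):
    request = True
    def main(self):
        ezca['ISI-HAM2_SENSCOR_GND_STS_X_GAIN'] = 0.0
        return True

edges = [('SC_ON', 'SC_OFF'), ('SC_OFF', 'SC_ON')]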
The hourly cronjob which keeps the nds jobs directory from overfilling was started on h1fs0 (moved from h1boot). Details at:
https://lhocds.ligo-wa.caltech.edu/wiki/CleanupNdsJobsDirectory
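For illustration only, the cleanup the cron job does is roughly of this form; the path and age threshold below are placeholders, and the real script and settings are on the wiki page above.

import os
import time

JOBS_DIR = '/path/to/nds/jobs'    # placeholder; see the wiki for the real path
MAX_AGE = 6 * 3600                # placeholder age threshold (6 hours)

now = time.time()
for name in os.listdir(JOBS_DIR):
    path = os.path.join(JOBS_DIR, name)
    if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE:
        os.remove(path)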
Tega, Dave:
New code for h1omcpi, h1susitmpi, h1susetmxpi, and h1susetmypi. Slow channels were modified, so a DAQ restart was needed.
The DAQ restart also caught the recent changes to Guardian (new nodes) and HWS (removed heartbeat from ETMY).
The DAQ EDCU is now GREEN again.
Attached is a mode scan of the DBB HPO path after some alignment attempts were made. The TEM01/10 modes seem to be within reasonable values. Vertical alignment into the PMC seems to be slightly better than the reference. Horizontal was marginally worse, yet locking of the PMC was still not possible.
This scan was taken so we could have an idea of where we are after today's alignment work. Based on what I see, the HPO path of the DBB looks pretty good. Higher-order mode power of 6.9% is reasonable; TEM01 (vertical alignment indicator) looks nice and low, while TEM10 (horizontal alignment indicator) looks a little high, but not too bad. The mode matching peaks (the two larger peaks left and right of center) both look good when compared to the reference. All in all, I think the HPO alignment into the DBB is pretty good. That said, as Ed says above, we are still unable to lock the DBB PMC. At this point I'm not sure what or where the problem is. Based on this mode scan, I don't think it's an alignment/mode matching issue. Will have to think on this one...
No work was done on the Front End (FE) path today, will work on that once the HPO path is fully up and running.