Here is a summary of some of the best curves obtained in the last few days (past Thursday, Saturday, Sunday), with corresponding UTC times for reference. There are large variations in the noise from lock to lock, and also during the same lock (see for example the beginning and end of last night's ~7h lock). The obvious thing that we are not controlling right now is the SRM alignment, so recommissioning the SRM ASC is the next step. Note: we also haven't been consistently setting the IMC WFS offsets to minimize the peaks between 200 Hz and 1 kHz.
It took 31 seconds for LN2 to be evident at the exhaust after opening the LLCV bypass valve 1/2 turn. The next over-fill is scheduled for Wednesday.
Jeff B reported the lockloss: [23:41 (16:41) Lockloss – Mag 6.3 EQ near the Philippines]. If it was indeed this EQ, this is certainly the surface waves producing the lockloss. At this distance the P & S waves have come and gone, and the surface waves began arriving minutes earlier, based on general rules of thumb. The attached shows the corner station ground seismometer exhibiting higher-amplitude, lower-frequency components tens of seconds before the lockloss.
Unlike the closer 6.2 near Japan, this one made it through the P & S wave arrivals. What a difference 1500 miles makes! Also, the moment tensor view indicates this was a Strike/Slip fault. The near-Japan EQ was a thrust fault... This could be an important component of the IFO response.
The 9 MHz modulation depth reduction is now in guardian, just before NomLowNoise. It'll take the modulation depth down by 6 dB.
Currently, the 45 MHz is only being reduced by 3 dB.
Everything should be reset for relocking in the Down state.
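For quick reference, a back-of-the-envelope sketch (not part of the guardian code; it assumes the dB figures follow the 20*log10 amplitude convention) of the linear factors these reductions correspond to:

# Sketch only: convert a dB reduction to the linear factor applied to the
# modulation depth, assuming the 20*log10 (amplitude) convention.
def db_to_linear(db_reduction):
    return 10 ** (-db_reduction / 20.0)

print(db_to_linear(6))  # ~0.50, i.e. the 9 MHz depth is roughly halved
print(db_to_linear(3))  # ~0.71, i.e. the 45 MHz depth goes to ~71%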
Sebastien, Dave:
Last Thursday we ran seismon in continuous mode, using the script seismon_run_traveltimes. This did not work well and missed several earthquakes (the EQ data was indeed stored, and running seismon_traveltimes manually detected the events correctly). We also found that the earthquake processing time per event was very long, about 90 seconds, which is much longer than that observed at MIT. Investigation is continuing.
Summary of LHO DAQ over the past week. Prior to last Tuesday the DAQ had been stable for several weeks. Starting Tuesday evening (9/20), h1fw0 and h1fw1 became slightly unstable. They sometimes reported errors normally associated with QFS file system problems, while h1fw2 (non-QFS) was stable. Thursday lunchtime we power cycled both Solaris QFS/NFS machines (h1ldasgw0,1), which has helped with QFS issues in the past. Since then fw0 has been stable; fw1 has restarted twice in 4 days, with the last restart early Friday morning.
All three frame writers are running RG3.2 code; fw0 and fw1 have identical hardware. Our attempt to clone fw0 from fw1 last week did not work; we may want to try this again since fw0 is still asking for retransmissions and fw1 is not. (fw2 did ask for a moderate number of retransmissions last week, none since then.)
model restarts logged for Sun 25/Sep/2016
2016_09_25 13:14 h1calcs
2016_09_25 13:14 h1iopoaf0
2016_09_25 13:14 h1ngn
2016_09_25 13:14 h1oaf
2016_09_25 13:14 h1odcmaster
2016_09_25 13:14 h1pemcs
2016_09_25 13:14 h1tcscs
2016_09_25 13:15 h1susprocpi
Restart of h1oaf0 models to clear DAC error which caused TCS laser trip.
model restarts logged for Sat 24/Sep/2016
no restarts reported
model restarts logged for Fri 23/Sep/2016
2016_09_23 00:24 h1fw1
unexpected restart of fw1 (following Thursday solaris restarts)
model restarts logged for Thu 22/Sep/2016
2016_09_22 03:44 h1fw1
2016_09_22 05:56 h1fw1
2016_09_22 06:08 h1fw1
2016_09_22 09:05 h1fw1
2016_09_22 10:44 h1psliss
2016_09_22 10:48 h1broadcast0
2016_09_22 10:48 h1dc0
2016_09_22 10:48 h1fw0
2016_09_22 10:48 h1fw1
2016_09_22 10:48 h1fw2
2016_09_22 10:48 h1nds0
2016_09_22 10:48 h1nds1
2016_09_22 10:48 h1tw0
2016_09_22 10:50 h1tw1
2016_09_22 12:03 h1fw1
2016_09_22 12:25 h1fw1
2016_09_22 12:28 h1fw0
early unexpected fw1 restarts. PSL ISS model change with associated DAQ restart. h1fw1/h1ldasgw1 power cycle followed by h1fw0/h1ldasgw0 power cycle. h1fw1 unexpected restart while h1fw0 was down.
Jeff B reported an IFO lockloss Thursday from the 6.2 EQ near Japan.
Based on the timing and the cavity build-up, it looks like the vertical P wave is the strongest candidate for the lockloss. This is a pretty big EQ and only ~4400 miles away. At this distance, the P-wave travel time is roughly 12 minutes and the S (shear) wave travel time is about 20 minutes.
The two attached plots show a transmitted light sum and the CS ground seismometers. This is a very nice example of classical earthquake ground motion, with distinct arrivals of the P, S, and surface waves. The second attachment has the transmitted light sum overlain on the seismometers. The vertical correlates well and shows a stronger signal, although the X signal on the seismometer also shows color at the loss time. This makes sense, as the P wave is arriving from depth (Z) and the great circle route is very much from the NW direction (X).
This is news, as we were thinking our EQ locklosses were primarily from surface and shear waves. I still think that is mostly true, but closer, larger EQs may be P-wave problems.
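As a sanity check on those numbers, a rough sketch (assumed average path velocities of ~10 km/s for P and ~6 km/s for S; real estimates should come from standard travel-time tables such as IASP91):

# Back-of-the-envelope teleseismic arrival-time estimate. The average path
# velocities below are assumptions for illustration, not measured values.
MILES_TO_KM = 1.609

def arrival_minutes(distance_miles, velocity_km_s):
    # crude travel time in minutes at a constant average speed
    return distance_miles * MILES_TO_KM / velocity_km_s / 60.0

distance = 4400  # miles, roughly the epicentral distance quoted above
print("P wave: ~%.0f min" % arrival_minutes(distance, 10.0))  # ~12 min
print("S wave: ~%.0f min" % arrival_minutes(distance, 6.0))   # ~20 min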
Unfortunately, despite swapping out the 16-bit DAC card in h1oaf0 almost two weeks ago, yesterday we got another zeroed-DAC event. At 11:53 PDT on Sunday 25th September, the h1iopoaf0 model recorded ADC, DAC, and DK errors in its STATE_WORD and from then onwards was correctly outputting zeros to all DAC channels. As usual, this instructed the TCS chillers to drive to a low water temperature, and 52 minutes later the chiller tripped the TCS lasers (12:45 PDT).
Because we received both an ADC and DAC error, we could try swapping the first and second ADC cards.
T.J. has made his EPICS alarm announcement for this event more insistent, repeating every 2 minutes. If the control room can restart the models on h1oaf0 in time, we think the laser trips could be averted.
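For illustration, a minimal sketch of what a repeating check/announcement along these lines could look like with pyepics; the channel name and error mask are placeholders, not the actual alarm configuration:

# Hedged sketch of a repeating alarm loop, not the site alarm code.
# 'H1:IOP-OAF0_STATE_WORD' and ERROR_MASK are hypothetical placeholders.
import time
from epics import caget

CHANNEL = "H1:IOP-OAF0_STATE_WORD"
ERROR_MASK = 0b111  # assumed ADC/DAC/DK error bits

while True:
    state = caget(CHANNEL)
    if state is not None and int(state) & ERROR_MASK:
        print("ALARM: h1oaf0 DAC error -- restart models before the TCS lasers trip")
    time.sleep(120)  # repeat the announcement every 2 minutes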
There has been a suspicion that, lately, switching the DBB shutters has been causing the IMC to lose lock. Perhaps this action is causing just enough noise to perturb the injection locking? Until further investigation is done, scans will be taken at times when the mode cleaner can be down for a period of time, such as maintenance days.
Weekly Xtal - No changes from last week
Weekly LASER - No changes from last week
Weekly Env - RH for ANTERM, LASERRM and DIODERM are higher than last week. All temps seem to show no change.
Weekly Chiller - nothing unusual except for the change made to the flow rate on 2/19
Given the number of trips over the last week (which are easily seen on the above attached trends), everything looks normal.
Summary of H1's nice night. (Almost 9hr lock with range going from 60 to 70Mpc).
Discussion about running over before/after holiday break, O2 Break, operator 24/7 coverage.
Open Work Permit review.
Maintenance:
Trouble with MODE17 (ITMX 15542 Hz) could be due to instability from large phase changes as the frequency drifts slightly within our damping loop. The mechanical mode frequency at the beginning of a cold lock sits around 15541.875 Hz and has drifted to 15541.95 Hz after about three hours. The previous bandpass filter carried ~40 deg of phase change over that range. In an effort to fix this, I've made two BP filters that are shifted versions of one another, to be turned on as the frequency migrates. The guardian uses the PLL frequency monitor to track the average frequency of the mode peak and makes the filter switch at the appropriate time. Fall-offs and notches in the filters are such that center of BP --> switching point is ~10 deg. I've watched the guardian successfully make this change.
The proof of concept appears to have worked; Mode17 was kept stable throughout the frequency drift, well beyond the previous ~3 hour mark. Lockloss occurred after ~6 hours from the partner Mode25, which saw a phase change of ~140 deg over this time and probably ended up driving up this usually easily damped mode. The attachment shows the frequency drift over last night's lock. I've now added stepping bandpasses to both modes (controlled by the guardian) to accommodate frequency drift over much longer locks.
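A minimal sketch of the switching logic described above (not the actual SUS PI guardian code; the PLL readback channel, filter-bank name, and hand-over frequency are placeholders):

# Illustrative only: step between two shifted bandpass filters as the
# mechanical mode frequency drifts. All channel/filter names are assumed.
from ezca import Ezca

ezca = Ezca()
SWITCH_FREQ_HZ = 15541.92  # assumed hand-over point between the two BPs

def select_bandpass():
    freq = ezca["SUS-PI_PROC_COMPUTE_MODE17_PLL_FREQ"]          # hypothetical readback
    bp = ezca.get_LIGOFilter("SUS-PI_PROC_COMPUTE_MODE17_BP")   # hypothetical bank
    if freq < SWITCH_FREQ_HZ:
        bp.turn_on("FM1")    # lower-frequency bandpass
        bp.turn_off("FM2")
    else:
        bp.turn_on("FM2")    # shifted bandpass for the drifted frequency
        bp.turn_off("FM1")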
I have moved IM2 and IM3 closer to their O1 alignment values, as indicated by Cheryl's alog 29948. At the same time, I was walking the POP_A offsets to keep the jitter coupling low. I had an excitation from 310-350Hz on IM3 yaw, but the effect it had on DARM didn't really change during this series of moves. But, the other jitter/intensity peaks seem to have perhaps gotten a teeny bit better. More significantly, this set of moves improved the carrier recycling gain as well as the 45 MHz sideband buildup in the SRC.
I only moved IM2 and IM3 in yaw; however, this also made their pitch values line up more closely with their O1 values. IM2's witness sensors read (P: 608, Y: -209), while IM3's witness sensors read (P: 1954, Y: -32). The POP_A offsets are currently (P: 0.39, Y: 0.47). This gives a carrier recycling gain of 30.5.
If the IFO is a struggle to lock tomorrow, trend the IM2 and IM3 witness sensors and put them back. I'm leaving them in their new spots for tonight.
I also ran a2l.
We should check in the morning if these IM2 and IM3 positions help get us away from the REFL PD clipping that Sheila has been looking into.
IFO left undisturbed starting 08:20:00 UTC.
This lock lasted for 8+ hours. PI modes 17 and 25 rang up and kept increasing in amplitude until the lockloss.
Going to stay with changes to the IMs Jenne made (from Cheryl's suggestion) for getting back to an O1 alignment.
Operator NOTE: This will give us a list of a few WARNINGs on the Guardian DIAG_MAIN (if we are happy with where the IMs are, we'll want to update Guardian to ACCEPT this IM alignment).
Especially with the DRMI needing alignment help during most re-locks, I have created a new state to help check BS-only alignment.
Recall that if the DRMI flashes don't look so good, you can select the PRMI_Locked state that Sheila created some time ago. This state will misalign the SRM and attempt to lock the power recycling cavity with the arms held off resonance.
Sometimes, though, even these flashes aren't very good, and the PRMI isn't catching lock. Now you can select the new state Check_MICH_Fringes in the ISC_LOCK guardian, and it will misalign the PRM. This state is currently not set to try to lock MICH, so it'll never look quite like MICH_dark_locked from initial alignment. But you can watch the flashes on the AS air camera and move the BS until they are as circularly symmetric as possible. Note that sometimes you'll only see a few flashes per minute, so this requires a bit of patience, but it's often still faster than giving up and doing a full initial alignment. Once the flashes on the AS air camera are circularly symmetric, re-select PRMI_Locked; this will realign the PRM and go back to the usual locking sequence.
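For reference, a minimal sketch of how such a state might be written in guardian (this is not the actual ISC_LOCK code; the misalignment channel and offset are placeholders):

# Hedged sketch of a guardian state that misaligns the PRM so that only
# Michelson fringes appear at the AS port. Channel name and offset value
# are illustrative, not copied from ISC_LOCK.
from guardian import GuardState
# 'ezca' below is the channel-access object provided by the guardian runtime.

class CHECK_MICH_FRINGES(GuardState):
    request = True

    def main(self):
        # push the PRM away in yaw via its top-stage alignment offset (assumed mechanism)
        ezca["SUS-PRM_M1_OPTICALIGN_Y_OFFSET"] -= 200  # counts, illustrative

    def run(self):
        # nothing here tries to lock MICH; the operator steers the BS using
        # the AS air camera until the flashes are circularly symmetric, then
        # re-selects PRMI_Locked to restore the PRM.
        return True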
The PSL tripped around 21:37 UTC; it took us about an hour to recover from this. We called Jason and Peter; Jason called back and walked me through restarting it remotely. I added about 200 mL of water to the chiller; there was water on the floor in front of the chiller. Jason confirmed that this is normal, and that the turbulence and bubbles in the chiller tube are from a known slow leak.
The PSL tripped due to the power meter flow. The 1st attachment shows the power meter flow dropping just before the crystal chiller interlock trips. The 2nd attachment shows the power meter flow dropping ~1 second before the FE flow drops. Interestingly, it takes ~6 seconds after the power meter flow drops and the crystal chiller interlock trips for the crystal chiller flow to drop (3rd attachment). Not sure if this is due to the rate at which the Beckhoff PC polls the crystal chiller, or if it really takes 6 seconds from interlock trip to chiller shutdown (I suspect the former). The full timeline of the trip, assuming t0 = 21:37:51 UTC (time of trip):
Filed FRS #6319.
Kiwamu, Nutsinee
We did a quick test with the HWS yesterday, hoping it might fix the issue of the HWSY sled beam reflecting off the CP (alog 29905). We first moved the ITMX and ITMY CPs by a few hundred counts to confirm that only HWSY has a sled reflection from the CP. Then we swapped the X (790 nm) sled and Y (840 nm) sled and repeated the process. HWSY still had most of its reflection from the CP. The conclusion is that using a different wavelength didn't matter.
We removed the HWS plates and stopped the code; this is still the configuration.
During this test we noticed that HWSX looks clipped. Not sure when this started happening, but Kiwamu said the data from HWSX he's been using makes sense. There has been no major misalignment of SR3, ITMX, or the BS in the past few days. A small touch to the top and bottom periscopes fixed this clipping.
The sleds were swapped back on Sunday. The HWS plates are still off.