TITLE: 12/02 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 73.9893Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
Wind: 2mph Gusts, 1mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY:
As reported in logs 32076 and 32079, the HAM3 ISI Corner 2 CPSs started glitching around 8am yesterday. Looking this morning: they glitched several times within a few hours once they started yesterday, but have not glitched again since the satellite rack was power cycled and the probe gauge boards reseated a few times, so I'd say the issue has been suppressed for now.
Given that saturations are automatically removed after 1 hour, just looking at the HAM3 watchdog periodically is not sufficient to know if saturations have occurred. If one wants to know if the platform has seen sensor saturations, trend:
H1:ISI-HAM3_WD_CPS_SAT_COUNT and
H1:ISI-HAM3_WD_CPS_SAT_SINCE_RESTART
The former channel is reset upon a WD trip; the latter accumulates saturations.
TITLE: 12/02 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 75.0267Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Quiet shift. Been locked all night. TCS hasn't been a problem. BRS thing also didn't happen again during the shift.
TITLE: 12/02 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 73.5385Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: Mostly a quiet shift, but there were a couple SDF issues
LOG:
Corey had just relocked after an earthquake when I arrived. Shortly after going to OBSERVE, the TCS guardians knocked us out, as Kiwamu logged. Then quiet until just a couple of minutes ago, when SDF_DIAG kicked us out of OBSERVE; looking at the log I find:
2016-12-02T07:51:08.18971 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 1
2016-12-02T07:51:10.70676 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 3
2016-12-02T07:51:17.18839 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 1
I can't find an (the? I didn't know we had one) NGN guardian, but I know where the CS_BRS guardian lives. When I looked at that guardian, it had just started a recenter cycle at the same time:
2016-12-02T07:51:07.99681 CS_BRS JUMP target: TURN_OFF_DAMPING
2016-12-02T07:51:07.99694 CS_BRS [RUN.exit]
2016-12-02T07:51:08.05901 CS_BRS JUMP: RUN->TURN_OFF_DAMPING
2016-12-02T07:51:08.05920 CS_BRS calculating path: TURN_OFF_DAMPING->RUN
2016-12-02T07:51:08.05959 CS_BRS new target: TURN_OFF_PZT_CTRL
...
Did the CS_BRS guardian throw an SDF difference in NGN that dropped us out of OBSERVE?
Sheila and Jenne informed me that the TCS ITMY CO2 guardian had suddenly changed its state this evening, kicking us out of observing mode.
I trended some relevant data and checked the guardian logs. It seems that since the upgrade of the guardian machine this morning (32072), the TCS CO2 guardians for both the X and Y arms have unlocked multiple times for some reason. The first attachment is a trend of some relevant channels over the past 12 hours. As shown in the upper panels, the guardians changed their states multiple times in the past hours because the lasers unlocked. Reading the guardian logs (attached) and code, these events were identified as being due to the PZT output signals exceeding thresholds set in the guardian code. This had not happened in the four or five days before this morning. The number of incidents seems to decrease as a function of time. I am hoping that the CO2 lasers will eventually settle to some kind of stable point at which the guardians no longer need to change state.
For operators,
If this happens again (which should be obvious as it flips the intent bit back to commissioning), please follow the following procedure.
Additionally, the attached is a magnified version of the trend focusing on the last incident. Initially the laser power dropped monotonically, in sync with the PZT output signal, which was monotonically increasing. The PZT signal then crossed the upper threshold of 70 counts, which was detected by the guardian. The guardian subsequently switched its state, and this change in the guardian state was detected by the IFO guardian, which lowered the intent bit to commissioning.
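For illustration only, here is a minimal sketch of the kind of PZT threshold check described above. This is not the actual TCS guardian code; the lower bound is assumed, and only the 70-count upper threshold comes from this entry.

# Illustrative sketch only (not the actual TCS guardian code): flag the CO2
# laser as unlocked when the PZT output leaves an allowed range. The upper
# threshold of 70 counts is quoted above; the lower bound is an assumption.
PZT_UPPER = 70.0    # counts, from this entry
PZT_LOWER = -70.0   # assumed symmetric lower bound

def co2_pzt_out_of_range(pzt_counts, lower=PZT_LOWER, upper=PZT_UPPER):
    """Return True if the PZT output is outside the allowed range, i.e. the
    guardian would change state and the intent bit would drop to commissioning."""
    return pzt_counts > upper or pzt_counts < lower

# Example: the magnified trend shows the PZT ramping up until it crosses 70.
for value in (40.0, 55.0, 69.5, 70.3):
    print(value, co2_pzt_out_of_range(value))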
TJ investigated why the LOCKLOSS_SHUTTER_CHECK guardian was sometimes mistakenly identifying locklosses when there had not been any lockloss while ISC_LOCK was in the DRMI_ON_POP state.
As a reminder, the only purpose of LOCKLOSS_SHUTTER_CHECK is to check that the shutter triggers after locklosses in which we had more than 25kW circulating power in the arms. The lockloss checking for this guardian is independent of any other guardian.
The problem TJ found was a consequence of the work described in 31980. Since that work, when we switch the whitening gain on the transmon QPDs there is a large spike in the arm transmission channels, which the LOCKLOSS_SHUTTER_CHECK guardian recognizes as a lockloss (TJ will attach a plot).
We edited the ISC_LOCK guardian to hold the output of the TR_A,B_NSUM filters before making the switch, and to turn the hold off after the switch is done (see the sketch below). We loaded this when we were knocked out of observing by TCS. This is a simple change, but if operators have any trouble with DRMI_ON_POP tonight you can call TJ or me.
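As a rough sketch of the idea (not the actual ISC_LOCK edit), assuming ezca filter-module access, that these banks expose the standard hold-output switch, and that the bank and whitening channel names below are right:

# Rough sketch (not the actual ISC_LOCK code): hold the arm transmission NSUM
# filter outputs while the transmon QPD whitening gain is switched, so the
# resulting spike never reaches LOCKLOSS_SHUTTER_CHECK. Bank names, whitening
# channel, and gain value below are assumptions.
from ezca import Ezca

ezca = Ezca(prefix='H1:')

tr_banks = ['LSC-TR_A_NSUM', 'LSC-TR_B_NSUM']   # assumed bank names

# Hold the filter outputs at their current value.
for bank in tr_banks:
    ezca.get_LIGOFilter(bank).switch_on('HOLD')

# ... switch the transmon QPD whitening gain here (channel name hypothetical):
# ezca['ASC-X_TR_A_NSUM_WHITEN_GAIN'] = 3

# Release the holds once the switch is done and the spike has passed.
for bank in tr_banks:
    ezca.get_LIGOFilter(bank).switch_off('HOLD')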
Here are some plots with the TR_A,B_NSUM channels and the numeric states for ISC_LOCK. The spike made the Lockloss Shutter Check node think that the power in the arms was above its 25 kW threshold, so it moved to its High Power state. That state checks whether the arm power drops below its low threshold; when the spike decayed it did, so the node concluded there had been a lockloss and jumped to the Check Shutter state. There it takes the last 10 sec of data and tests for a kick in the HAM6 GS13s. This test would fail since there was no lockloss. We were not even at high power at the time.
TITLE: 12/01 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
Much_Of_Morning Main Issues:
LOG:
Locking Notes:
After hand-off this morning, held at VIOLIN_MODE_DAMPING. Kiwamu came in to take a look at H1 & wanted to note a few items he checked.
A note on the OMC whitening:
The 4.7 kHz mode was super rung up, and this was causing the saturations and a giant comb of upconversion around the line. I turned off a stage of whitening so that we would have a hope of damping anything, which is nearly impossible to do while saturations are happening everywhere. Anyhow, hopefully this won't be a problem anymore since we have found filters that work well for this mode, but any operator can use this trick to save a lock if a mode is super rung up and needs serious damping.
To remove a stage of whitening, I opened the "all" screen of the OMC_LOCK guardian, and selected RemoveWhiteningStage. Once it starts that state, you can re-select ReadyForHandoff (the nominal state) and it'll return there when it is done. You should see 2 SDF diffs in the OMC, which ensures that you aren't going to Observe with this weird state - it's just for use while damping seriously bad modes.
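For completeness, here is a sketch of doing the same thing from a script instead of the MEDM "all" screen, assuming pyepics is available and that the guardian request channel accepts the state names exactly as written above (an assumption; in practice operators should just use the OMC_LOCK guardian screen):

# Sketch only: request the OMC_LOCK guardian states from a script. Assumes
# pyepics and that the request channel accepts the state names as shown in
# this entry; the guardian channel names follow the standard GRD pattern.
import time
import epics

REQUEST = 'H1:GRD-OMC_LOCK_REQUEST'
STATE   = 'H1:GRD-OMC_LOCK_STATE'

# Ask the guardian to remove a whitening stage...
epics.caput(REQUEST, 'RemoveWhiteningStage')
time.sleep(5)   # give the node a moment to start the state

# ...then re-request the nominal state; the node returns there when done.
epics.caput(REQUEST, 'ReadyForHandoff')
print('OMC_LOCK now in state:', epics.caget(STATE, as_string=True))

Remember the 2 SDF diffs in the OMC will still be there until the whitening stage is restored, which is what keeps you from going to Observe in this configuration.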
Young-min and I looked into the 22:08 lockloss that is still unexplained, and attempted to use the BLRMS tool.
The first suspensions to saturate are the ETMY ESD channels, which saturate at almost exactly the lockloss time. There isn't much in the ASC until after the lockloss, and other than DARM the other LSC loops don't seem to be having trouble.
The first thing that we see happening is a fast glitch in the DCPDs. We don't see anything in the CARM signals, OMC PZTs, or ISS, but there is a similar glitch in AS_C_SUM, AS_A, and AS_B.
It is hard to imagine optics moving fast enough to cause this lockloss, but I am not sure what would have caused it.
Corey, Sheila, Jim W
TerraMon and LLO warned Corey that this EQ was coming, with a predicted R-wave velocity of 4.8 um/second (it showed up in our EQ band BLRMS, peaking at about 1 um/second RMS at about the predicted time). Our useism BLRMS is around 0.3-0.4 um/second right now.
Since Corey had a warning he consulted with Jim W who suggested trying BLEND_QUIET_250_SC_EQ for both end station ISIs (one at a time). The attached screenshot shows the transition from BLEND_QUIET_250_SC_EQ back to our normal windy configuration BLEND_QUIET_250_SC_BRS, which is much quieter at 50-70 mHz.
Jim explains that this sensor correction has a notch at around 50 mHz (he will attach a plot), and that this worked OK during the summer when the microseism was very low. However, it reduces the amount of isolation that we get at the microseism, which was fine when Jim first tested it in the summer months.
If an EQ moves the whole site in common, we can lock all the chambers to the ground at EQ frequencies to reduce the common motion. Our problem this time was probably that we switched only the end stations without changing the corner.
For now, the recommended operator action during earthquakes is:
If the IFO is locked, don't do anything. We want to collect some data about what size EQ we can ride out with our normal WINDY configuration.
If the IFO unlocks, and the earthquake is going to be large enough to trip ISIs (several um/sec), switch the ISI configuration node to LARGE_EQ_NOBRSXY. This just prevents the ISIs from tripping.
Once the BLRMS are back to around 1 um/sec you can set SEI_CONF back to WINDY and ask ISC_LOCK to try LOCKING_ARMS_GREEN. If the arms stay locked for a minute or so, you can try relocking the IFO. (A minimal scripted version of this sequence is sketched below.)
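A minimal scripted version of the sequence above, for illustration only: the ground BLRMS channel name and its units are assumptions, and in practice operators do this from the SEI_CONF and ISC_LOCK guardian screens.

# Illustrative sketch of the recovery sequence (operators normally do this
# from the guardian MEDM screens). The ground BLRMS channel name and units
# are assumptions; the request channels follow the standard GRD pattern.
import time
import epics

SEI_CONF_REQUEST = 'H1:GRD-SEI_CONF_REQUEST'
ISC_LOCK_REQUEST = 'H1:GRD-ISC_LOCK_REQUEST'
EQ_BLRMS = 'H1:ISI-GND_STS_ITMY_Z_BLRMS_30M_100M'   # assumed EQ-band BLRMS channel

# 1. The IFO has unlocked and the EQ is big enough to trip ISIs:
epics.caput(SEI_CONF_REQUEST, 'LARGE_EQ_NOBRSXY')

# 2. Wait for the EQ-band ground motion to settle back to ~1 um/s.
while epics.caget(EQ_BLRMS) > 1.0:   # channel assumed to read in um/s
    time.sleep(60)

# 3. Go back to the nominal configuration and test the arms in green.
epics.caput(SEI_CONF_REQUEST, 'WINDY')
epics.caput(ISC_LOCK_REQUEST, 'LOCKING_ARMS_GREEN')

# 4. If the arms stay locked for a minute or so, try relocking the IFO.
time.sleep(60)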
I took a quick look at Seismon performance on the MIT test setup. The internal notice was written a few hundred seconds after the earthquake.
Internal:
File: /Seismon/Seismon/eventfiles/private/pt16336050-1164667247.xml
EQ GPS: 1164667247.0
Written GPS: 1164667525.0
H1 (P): 1164667949.1
L1 (P): 1164667773.9
We beat the p-wave arrival by about 200 s at LLO and 400 s at LHO. Arrivals below:
-bash-4.2$ /usr/bin/python seismon_info -p /Seismon/Seismon/seismon/input/seismon_params_earthquakesInfo.txt -s 1164667243 -e 1164670843 --eventfilesType private --doEarthquakes --doEPICs /Seismon/Seismon/all/earthquakes_info/1164667243-1164670843
1164667246.0 6.3 1164667949.1 1164667963.2 1164671462.6 1164669655.5 1164668932.7 4.52228e-06 1164667900 1164671500 -15.3 -70.5 8.433279e+06 H1
1164667246.0 6.3 1164667773.9 1164667787.6 1164670002.2 1164668821.0 1164668348.5 1.12682e-05 1164667700 1164670100 -15.3 -70.5 5.512348e+06 L1
1164667246.0 6.3 1164668050.7 1164668064.9 1164672594.3 1164670302.2 1164669385.3 6.73904e-06 1164668000 1164672600 -15.3 -70.5 1.069658e+07 G1
1164667246.0 6.3 1164668041.4 1164668055.5 1164672479.8 1164670236.7 1164669339.5 3.22116e-06 1164668000 1164672500 -15.3 -70.5 1.046759e+07 V1
1164667246.0 6.3 1164667831.5 1164667845.3 1164670438.5 1164669070.3 1164668523.0 6.99946e-06 1164667800 1164670500 -15.3 -70.5 6.385045e+06 MIT
1164667243.2 6.3 1164667948.9 1164667953.5 1164671451.9 1164669648.2 1164668926.7 4.74116e-06 1164667900 1164671500 -15.3 -70.8 8.417411e+06 H1
1164667243.2 6.3 1164667773.6 1164667778.0 1164669993.8 1164668815.0 1164668343.5 1.15920e-05 1164667700 1164670000 -15.3 -70.8 5.501199e+06 L1
1164667243.2 6.3 1164668052.2 1164668056.8 1164672601.7 1164670305.2 1164669386.6 7.10833e-06 1164668000 1164672700 -15.3 -70.8 1.071690e+07 G1
1164667243.2 6.3 1164668043.0 1164668047.6 1164672488.9 1164670240.7 1164669341.5 3.35518e-06 1164668000 1164672500 -15.3 -70.8 1.049125e+07 V1
1164667243.2 6.3 1164667832.1 1164667836.6 1164670436.3 1164669067.8 1164668520.5 7.31460e-06 1164667800 1164670500 -15.3 -70.8 6.386137e+06 MIT
1164667247.0 6.2 1164667941.5 1164667978.2 1164671455.2 1164669651.7 1164668930.3 2.75907e-06 1164667900 1164671500 -15.4 -71.0 8.416356e+06 H1
1164667247.0 6.2 1164667767.1 1164667802.5 1164669998.4 1164668819.2 1164668347.6 7.79549e-06 1164667700 1164670000 -15.4 -71.0 5.502860e+06 L1
1164667247.0 6.2 1164668045.1 1164668082.4 1164672612.9 1164670313.2 1164669393.4 3.86408e-06 1164668000 1164672700 -15.4 -71.0 1.073178e+07 G1
1164667247.0 6.2 1164668035.9 1164668073.2 1164672500.5 1164670249.0 1164669348.4 1.94756e-06 1164668000 1164672600 -15.4 -71.0 1.050694e+07 V1
1164667247.0 6.2 1164667825.7 1164667861.5 1164670443.9 1164669073.8 1164668525.8 4.24775e-06 1164667800 1164670500 -15.4 -71.0 6.393872e+06 MIT
J. Kissel, J. Driggers, C. Gray
We've successfully damped the 4735.09 [Hz] violin mode harmonic. The key: the +/- 60 deg phase filters in the ETMY_L2_DAMP_MODE9 bank, which we thought were moving the damping control signal phase around the unit circle, were actually not adjusting the phase at 4.7 [kHz] since they were tuned for the 2nd harmonics at 1 [kHz] (thanks to Jenne, who dug into foton to check that these filters made sense). This left us with essentially only 0 [deg] (+ gain) or 180 [deg] (- gain) as options, and neither worked well. After rearranging some filters between the MODE9 and MODE10 filter banks, Jenne was able to create new phase adjustment filters with 30 [deg] increments for 5 [kHz]. +60 [deg] with a positive gain worked well for the first two orders of magnitude, but we eventually needed to nudge by -30 [deg] once other modes around these frequencies began to be resolved, confusing the error signal.
Thus, the final settings that we think will work well:
MODE10: gain = +0.02 (we were able to use up to +0.1 while we were trying hard)
FM4 ("100dB") gain(100,"dB")
FM9 ("+30deg5k") zpk([0],[28.5238+i*4744.41;28.5238-i*4744.41],1,"n")gain(2.07488e-06)
FM10 ("4735") gain(100,"dB")*butter("BandPass",4,4734.5,4735.5)gain(120,"dB")
I think all the success I claimed yesterday () was merely from turning on the notch filter in the DARM path and waiting. I've updated the LHO Violin Mode Wiki, and I've also updated the ISC_LOCK guardian code to ensure this continues to get turned on (specifically, I edited the gen_VIOLIN_MODE_DAMPING functional state in the ISC_GEN_STATES.py subfunction that creates VIOLIN_MODE_DAMPING_1 and VIOLIN_MODE_DAMPING_2 in the acquisition sequence). Unclear if this is related to Cheryl's reported problems with the fundamentals overnight (LHO aLOG 32059). Note that neither the Violin Mode Wiki nor the ISC_LOCK guardian has been updated with her changes.
See the first attachment, showing the ISC lock state (Ch16) dropping as the HAM3 ISI WD (Ch1) trips to State3 (damping only).
All the CPSs show a twitch and shift as the DC-coupled ISO loops disengage, but note the scale of the glitch on the H2 and V2 CPSs. What is curious to me is how the C2 sensors clearly exceed the saturation level (20000) but the SAT_COUNT (Ch2) does not count them. Ahhh, I think that any 'trip' of the watchdog, even if only to damping, will clear the saturations. I think we should change this in the code, although I think someone coded it explicitly this way (BTL/HRAP). I think this hinders us, though, and maybe we can rework it.
I decided to power cycle the CPSs and, while they were off, reseated the Corner 2 gauge boards in the satellite rack. Did this several times. The ISI reisolated without difficulty.
Attached as well is another view, from a few hours ago, of this glitching dropping the IFO out of some state better than zero. Prior to the ISI trip there is another glitch on the Corner 2 CPS that was not enough to rile things up.
Finally, a 7 hour trend capturing all the recent occurrences of this. Prior to this, 22 Nov saw a CPS saturation and trip, but that was a Tuesday.
Hugh - Recall that we added the channel H1:ISI-HAM3_WD_CPS_SAT_SINCE_RESTART, which can probably do what you want. It will have some big number in it, but it should only count up; the delta will be what you want. I think we added this just for you, so merry Christmas! -Brian
All BSC spectra look pretty normal. Noise below 20 Hz is due to different platforms behaving differently.
HAM spectra look fine above 20 Hz except for some inconsequential bumps between 20 and 40 Hz in HAMs 2 & 5.
Jenne, Sheila, Keita
We had another instance of a jump in POP90, in which both the I and Q phases increased. We think this is a problem with the readback, similar to what is described in 31181.
We were acquiring lock, and had no ASC running on the SRC. We looked at witness sensors for all the interferometer optics, and it looks like none of them moved at the time. We also don't see changes in other RF sensors, like AS36, AS90, or POP18. We looked at both quadratures of POP90 before rotation and it seems to have both a phase shift and a 3.5 dB increase in the sum of I+Q. The RFmon and LO mon on the demod don't have any jumps nearly that large, so if it is a problem in the demod it is probably downstream of the directional coupler for the RFmon.
This seems not to be the same as the jumps in SRC alignment that started after last Tuesday's maintenance (31804, 31865, and other alogs), but since the symptom is very similar, fixing this issue would make debugging the other problem easier. Since we use POP90 for a dither lock of the SRM angle during lock acquisition, this can cause a lockloss if it happens while we are trying to lock.
I tested the chassis that was pulled out (S1000977). During the testing I did not see any level changes or glitches in either the I or Q channel outputs, except when a pair of cables attached to the front panel via a BNC tee was strongly wiggled. Removing the tee and wiggling the cables directly didn't induce any changes. Attached is an oscilloscope trace of the I&Q monitor output for the POP90 channel. It is fuzzy because of an RF amplitude modulation I was applying; however, the distortion discontinuities are present with the modulation off. Daniel pointed out to me that the distortion is due to my not looking at the signal differentially on the oscilloscope. Sure enough, it looks a lot cleaner when processed differentially. I did, however, notice that if the RF input is more than -11 dBm, the monitor signals on the rear panel are saturated/distorted. The only other output level change that was observed was when the chassis was turned off in the evening and back on again the following morning.
The chassis (strictly) failed the following tests:
- front panel monitor coupling factors (all channels at 100 MHz, failed by less than 1 dB)
- IF beat note amplitude versus frequency (all channels, I & Q, at 100 kHz, failed by as little as 50 mV and as much as 360 mV)
- IF output noise level (channel 3, I & Q, failed by as little as 3 dB and as much as 4 dB). Channel 3 is labelled as REFLAIR_27.
By any chance, when the chassis was in the rack, was there more than one cable attached to the (one) front panel BNC connector?
I hit load and cleared the error. But I have no idea about the state of the injection since it didn't happen the first time. Looks like it will try again at GPS 1163938117?
And looks like another error.
I hit init. Seems like it's going to try the next one at 1163938617.
Alright I'm seeing some malicious noise in DARM now. I think the injection finally came through.
Looks like only the first two injections (1169397617 and 1163938117) didn't happen.
Looking at the error message, it appears that the injection "failed" as opposed to being skipped: https://dcc.ligo.org/DocDB/0113/T1400349/013/hwinjections.pdf (p11)
CAL-PINJX TINJ OUTCOME is set as follows:
- 1 = success
- -1 = skipped because interferometer is not operating normally
- -2 = skipped due to GRB alert
- -3 = skipped due to operator override (pause OR override)
- -4 = injection failed
- -5 = skipped due to detector not being locked
- -6 = skipped due to intent bit off (but detector locked)
In previous experience, injections have failed when AWG has been unable to access a test point. Sometimes this error is fixed by rebooting the awg computer. I'm not sure why it went away this time.
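A small sketch for checking the outcome of the last injection from the command line, assuming pyepics and that the EPICS record name is H1:CAL-PINJX_TINJ_OUTCOME (inferred from the channel named above, not confirmed here):

# Sketch: translate the TINJ outcome code listed above into a readable
# message. Assumes pyepics and that the EPICS record name is
# H1:CAL-PINJX_TINJ_OUTCOME (an inference, not confirmed).
import epics

TINJ_OUTCOMES = {
     1: 'success',
    -1: 'skipped: interferometer not operating normally',
    -2: 'skipped: GRB alert',
    -3: 'skipped: operator override (pause or override)',
    -4: 'injection failed',
    -5: 'skipped: detector not locked',
    -6: 'skipped: intent bit off (but detector locked)',
}

outcome = int(epics.caget('H1:CAL-PINJX_TINJ_OUTCOME'))
print('Last injection outcome: %d (%s)'
      % (outcome, TINJ_OUTCOMES.get(outcome, 'unknown code')))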
That's exactly what happened. I went and UNmonitored all of the CBRS channels in SDF so this can't happen again.
The rest of the NGN channels are being monitored, but I'm not sure if they should be since they are not tied into the IFO at all. I'll talk to the right people and find out.
Oh, yeah, I'm glad that you not-mon'ed the cBRS channels. Anything in the NGN Newtonian noise model is totally independent of the IFO, and shouldn't be stuff that'll knock us out of observing.
Probably the cBRS and its need for occasional damping is the only thing that will change some settings and knock us out of Observe, so maybe we can leave things as-is for now. The rest of the NGN channels are just seismometers, whose output doesn't go anywhere in the front ends (we collect the data offline and look at it). Since all of those calibrations are in and should be fine, I don't anticipate needing to change any other settings in the NGN EPICS channels.