TITLE: Nov 5 DAY Shift 16:00-23:00 UTC (08:00-15:00 PST), all times posted in UTC
STATE Of H1: Observing
OUTGOING OPERATOR: Jim
QUICK SUMMARY: IFO is in Observing @ ~78.2 Mpc. Earthquake seismic bands are all in the .23 micron range. Microseism is around 0.2 µm. Wind is <10 mph. All lights appear to be off at the end stations, mid stations, CS & PSL. CW injections are running. Cal lines are running. Livingston is up and running. Quite_90 blends are being used.
Lost lock about 2 hours ago, not sure why. Trying to relock now, but DRMI is not cooperating. Otherwise, things are quiet. Winds are down, microseism is down. RF45 has not been awful.
TITLE: 11/04 [EVE Shift]: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE Of H1: Observing @ ~80 Mpc
SHIFT SUMMARY: Rode through a magnitude 5.2 earthquake in Russia. Taken out of observing for 20 minutes to offload SR3 M2 to M1. RF45 noise has not reappeared. No major change in wind or seismic.
SUPPORT: Jenne
INCOMING OPERATOR: Jim
ACTIVITY LOG:
Lost lock twice on DRMI split mode.
00:28 UTC Evan and Keita to LVEA to check on RF45 noise.
00:29 UTC Caught lock on split mode. Moved BS in pitch from 164.84 to 165.32 to lock DRMI on 1f.
00:34 UTC Lock loss on CARM_5PM
00:51 UTC Lock loss on PARK_ALS_VCO
00:57 UTC Lock loss on CARM_ON_TR
01:01 UTC Keita and Evan done
01:31 UTC Observing
07:21 UTC Out of observing to offload SR3 M2 to M1
07:41 UTC Observing
Jenne finished offloading SR3 M2 to M1.
Intention Bit: Commissioning (Nov 5 07:21:59 UTC)
Intention Bit: Undisturbed (Nov 5 07:40:52 UTC)
Jenne, Patrick
We happened to notice that a bit on the ODC mini overview on nuc1 was red and traced it to a DAC overflow on SR3 M2. Jenne trended it back; it looks like it started about 2 hours ago. The signal originates from the guardian cage servo. We have taken the IFO out of observing to manually offload it to M1.
The SR3 cage servo hit its limit, and started going crazy. This wouldn't have caused (and didn't cause) an immediate lockloss, but probably would have caused one eventually, after SR3 drifted far enough that the other ASC loops ran out of range.
As a reminder, the "cage servo" puts an offset into the M2 stage of SR3, such that the OSEM witness sensors on the M3 stage are kept at a constant pitch value. There is no offloading of this offset to M1, so we simply ran out of range on the M2 actuators.
In the attached plot, the offset that we send to the M2 actuator is the TEST_P_OUTPUT, which is multiplied by 8.3 and then sent to the M2 coils. This factor of 8.3 in the Euler-to-OSEM matrix means that if the TEST_P output is above about 15,600 counts, we'll saturate the DAC output. You can see that at about the point where the output hits 15,000, SR3_M3_WIT starts to drift away from the 922 setpoint, and the TEST_P_OUTPUT starts running away because the servo thinks it needs to keep pushing harder while the witness stays below 922.
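As a sanity check on those numbers, here is a back-of-the-envelope sketch only, assuming the usual 18-bit, ±131,072-count DAC range (the 8.3 is the Euler-to-OSEM factor quoted above):

# Back-of-the-envelope check of the SR3 M2 pitch saturation point.
# Assumes an 18-bit DAC range of +/-131,072 counts; the 8.3 is the
# Euler-to-OSEM matrix element described above.

DAC_FULL_SCALE = 2**17   # 131,072 counts
EULER_TO_OSEM_P = 8.3    # pitch drive gain onto each M2 coil

test_p_saturation = DAC_FULL_SCALE / EULER_TO_OSEM_P
print("TEST_P output that saturates a coil: ~%.0f counts" % test_p_saturation)
# ~15,800 counts, consistent with the ~15,600 figure above (the exact value
# depends on the precise matrix elements and any other drives on the coils).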
While I don't think the data is compromised, we did take the IFO out of Observing mode while I manually offloaded the pitch actuation to the M1 stage. I moved the M1_OPTICALIGN_P_OFFSET by about 3 urad, and that brought the cage servo offset back to near zero. The SRC1 and SRC2 ASC loops reacted to this, but I did the offloading slowly enough that we didn't have any problems.
I have added a notification to the SR3 cage servo guardian to put up a message if the TEST_P_OUTPUT gets above 10,000 counts, so there's plenty of time (days) to offload the SR3 alignment before we run into this problem again.
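The check itself is just a threshold test on the M2 TEST_P output. Here is a minimal sketch of that logic; the channel name and the guardian-style notify() call are illustrative, not the literal production code:

# Sketch of the threshold warning added to the SR3 cage servo guardian.
# Channel name and notify() usage are illustrative only.

OFFLOAD_WARNING = 10000  # counts; well below the ~15,600-count saturation point


def check_cage_servo_output(ezca, notify):
    """Warn the operator when the M2 pitch drive is getting close to saturation."""
    test_p_output = ezca['SUS-SR3_M2_TEST_P_OUTPUT']
    if abs(test_p_output) > OFFLOAD_WARNING:
        notify("SR3 cage servo output above 10k counts: "
               "offload SR3 alignment to M1 within the next few days")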
Have remained in observing. RF45 noise has not reappeared. Rode through a jump in the 0.03-0.1 Hz seismic band to slightly above 0.1 um/s approximately 90 minutes ago. From USGS: M5.2, 11 km WSW of Ust'-Kamchatsk Staryy, Russia, 2015-11-05 01:59:22 UTC, 4.6 km deep. Terramon reports it arriving at 18:24:17 PST (02:24:17 UTC).
ISI blends are at Quite_90. Seismic and winds are unchanged from the start of the shift. No SDF differences to check. RF45 appears to have subsided for now. Range is ~77 Mpc.
TITLE: 11/04 [EVE Shift]: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE Of H1: Lock acquisition
OUTGOING OPERATOR: Jeff
QUICK SUMMARY: Lights appear off in the LVEA, PSL enclosure, end X, end Y and mid X. I cannot tell from the camera if they are off at mid Y. Winds are between ~5 and 15 mph. ISI blends are at Quite_90. Earthquake seismic band is between 0.01 and 0.03 um/s. Microseism is between 0.08 and 0.2 um/s. Evan and Keita just finished looking at RF45 cabling in the PSL enclosure. Attempting to relock.
Jordan, TJ, Jess, Andy, Nairwita
We noticed a significant increase in the rate of loud-SNR glitches at the end of the 3rd November lock (Fig. 1 and 2). While going through the HVETO results we noticed that these glitches were vetoed out at round 1, and the auxiliary channel used to veto them is PEM-EY_MAINSMON_EBAY_QUAD_SUM (Fig. 3). An Omega scan of one such glitch can be seen in Fig. 4. The spectrum of one of the EY EBAY mainsmon channels is also attached: the blue trace is the current spectrum (4th Nov ~10:45 pm UTC) and the red trace corresponds to the time of one of the coincident glitches. Though the red trace is slightly elevated compared to the blue one, it is hard to believe that mainsmon glitches with such small SNR can create those glitches in DARM. We did notice some related glitches in the EX mainsmon (but not in the CS mainsmon channel, any magnetometers, or microphones). HEPI L4C, GS13, and ACC channels at the corner station have some glitches around the same time. The main question is why these glitches show up in the end station mainsmon channels and around the same time in the corner station seismometers.
Activity Log: All Times in UTC (PT)
15:51 (07:51) Check TCS chillers
16:00 (08:00) Take over from Jim
16:16 (08:16) Lockloss – Unknown
16:23 (08:23) Keita – Going to End-X to look for missing equipment
16:39 (08:39) Keita – Back from End-X
16:44 (08:44) Chris – Replacing filters in both mid stations
17:21 (09:21) Contractor (Tom) on site for John
18:10 (10:10) Bubba & John – Going to End-Y chiller yard
18:17 (10:17) Locked at NOMINAL_LOW_NOISE, 22.5W, 76 Mpc
18:25 (10:25) Set intent bit to Observing
18:56 (10:56) Bubba & John – Back from End-Y
19:10 (11:10) Chris – Beam tube sealing on X-arm between mid & end stations
19:26 (11:26) Set intent bit to Commissioning while Evan & Filiberto are in the LVEA
19:30 (11:30) Evan & Filiberto – Going into the LVEA
19:40 (11:40) Kyle & Gerardo – Going to Y28 to look for equipment
19:45 (11:45) Lockloss – Evan & Filiberto at electronics rack
20:05 (12:05) Kyle & Gerardo – Back from Y28
20:40 (12:40) IFO locked & intent bit set to Observing
21:12 (13:12) Joe – Going to work with Chris on beam tube sealing on X-arm
21:49 (13:49) Kyle & Gerardo – Going to X28
22:38 (14:38) Set the intent bit to Commissioning
22:40 (14:40) Keita & Evan – Going into PSL to check cabling for the RF45 problem
22:57 (14:57) Lockloss – Due to PSL entry
23:00 (15:00) Robert – Staging shakers in the LVEA
23:20 (15:20) Joe & Chris – Back from X-arm
23:46 (15:46) Kyle & Gerardo – Back from X28
00:00 (16:00) Turn over to Patrick

Shift Summary: Title: 11/04/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
Support: Kiwamu, Evan, Keita, Mike, Filiberto, Daniel, Jason
Incoming Operator: Patrick
Lockloss at 16:16 (08:16) – Unknown. After the lock loss, had trouble getting past DRMI 1F, so decided to do an initial alignment. Relocked the IFO at 17:47 (09:47). Ran the A2L script. Had several SDF notifications for both ETMs and ITMs; could not find a SUS person, so took a snapshot of all and accepted. Set the intent bit to Observing at 18:25 (10:25). After relocking this morning the RF45 noise came back, at a rate of several times per minute. This is knocking the range down into the high teens and low 20s Mpc. Conferred with Evan, Keita, Mike, Daniel, and Filiberto. Evan & Filiberto went into the LVEA to check cabling. The cabling checks outside the PSL did not change the RF45 glitch rate. Keita & Evan then went into the PSL to check for cabling issues there. Lockloss when the PSL environmental controls were switched on. Crew is out of the PSL. Starting to relock.
Keita, Evan
We went into the PSL to see if we could find a source for today's 45 MHz glitches.
We didn't find anything conclusive. Mostly, it seems that bending the main cable (the LMR195 that carries the 45 MHz into the PSL) causes large glitches in the AM stabilization control signal, similar to what is seen by bending/tapping the LMR195 cable at ISC R1. We did not really see anything by bending the slow controls / power cables, or the rf cable that drives the EOM.
The main cable passes from the ISC rack, through the PSL wall, through the (overstuffed) cable protector on the HPO side of the PSL table, over the water pipes underneath the PSL, and then terminates at the EOM driver, which sits just underneath the PMC side of the table. Keita notes that the pipes don't seem to be vibrating.
It is worth noting that these glitches, which are clearly seen in the control signal time series of the EOM driver in the PSL, do not show up in the control signal time series of the spare driver hooked up in the CER.
After this, Keita and I went to the ISC rack, inserted a 20 dB coupler after the balun on the patch panel, and looked at the coupled output on a scope. We didn't see anything.
However, around the time that we inserted this coupler, it seems that the glitches went away. The attachment shows 12 hours of the AM stabilization control signal. The first loud burst appears to coincide with the lockloss at 16:20 Z. The second loud burst around 19:40 was Fil and me wiggling the cable. The third loud burst around 23:30 is Keita and me in the PSL. The dc shift in the control signal around 00:30 is the time period with the coupler inserted.
When inserting the coupler, I noticed that the balun casing is slightly loose; I was able to twist the face of this casing just by unscrewing the cable from it.
Here is the current list of source files with outstanding local modifications that should be committed to the SVN repository. Following Jeff's request, for each file with a local mod I've printed out the owner and date of the modification. I'm in contact with Hugh regarding the changes made to the BSC ISI observe.snap files.
david.barker@sysadmin0: ./check_h1_files_svn_status.bsh
SVN status of front end code source files...
done (list of files scanned can be found in /tmp/source_files_list.txt)
SVN status of filter module files...
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSMC1.txt
sheila.dwyer Oct 6 12:33
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSMC3.txt
sheila.dwyer Oct 6 12:32
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSPRM.txt
sheila.dwyer Oct 6 12:14
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSPR3.txt
sheila.dwyer Oct 6 12:44
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSPR2.txt
sheila.dwyer Oct 6 12:15
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSSR2.txt
sheila.dwyer Oct 6 12:16
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSSRM.txt
sheila.dwyer Oct 6 12:17
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSSR3.txt
sheila.dwyer Oct 6 12:44
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSITMY.txt
evan.hall Oct 28 14:31
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSITMX.txt
evan.hall Oct 28 13:39
M /opt/rtcds/userapps/release/isc/h1/filterfiles/H1OAF.txt
kiwamu.izumi Oct 25 21:14
M /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt
jeffrey.kissel Oct 20 12:25
M /opt/rtcds/userapps/release/lsc/h1/filterfiles/H1LSC.txt
sheila.dwyer Oct 13 12:27
M /opt/rtcds/userapps/release/asc/h1/filterfiles/H1ASC.txt
sheila.dwyer Oct 10 14:01
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSETMY.txt
evan.hall Oct 29 15:47
M /opt/rtcds/userapps/release/sus/h1/filterfiles/H1SUSETMX.txt
evan.hall Oct 29 13:36
M /opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALEX.txt
sudarshan.karki Oct 27 12:29
done (list of filter module files scanned can be found in /tmp/full_path_filter_file_list.txt)
SVN status of safe.snap files...
done (list of safe.snap files scanned can be found in /tmp/safe_snap_files.txt)
SVN status of OBSERVE.snap files...
M /opt/rtcds/userapps/release/isi/h1/burtfiles/h1isiitmy_OBSERVE.snap
hugh.radkins Nov 3 03:25
M /opt/rtcds/userapps/release/isi/h1/burtfiles/h1isibs_OBSERVE.snap
hugh.radkins Nov 3 03:25
M /opt/rtcds/userapps/release/isi/h1/burtfiles/h1isiitmx_OBSERVE.snap
hugh.radkins Nov 3 03:25
M /opt/rtcds/userapps/release/isi/h1/burtfiles/h1isietmy_OBSERVE.snap
hugh.radkins Nov 3 03:25
M /opt/rtcds/userapps/release/isi/h1/burtfiles/h1isietmx_OBSERVE.snap
hugh.radkins Nov 3 03:25
done (list of observe.snap files scanned can be found in /tmp/observe_snap_files.txt)
SVN status of guardian files...
done
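For anyone who wants to reproduce this kind of report, it is essentially an svn status over a list of files, plus the owner and modification date of each locally modified file. Below is a rough Python sketch of that idea; it approximates the owner/date columns with the filesystem owner and mtime, and the real check is of course done by check_h1_files_svn_status.bsh:

# Rough sketch of an SVN local-modification report in the spirit of
# check_h1_files_svn_status.bsh. The file list path comes from the log above;
# the owner/date are approximated by the filesystem owner and mtime.
import os
import pwd
import subprocess
import time

def report_local_mods(file_list):
    for path in file_list:
        status = subprocess.check_output(['svn', 'status', path]).decode()
        if status.startswith('M'):  # locally modified
            st = os.stat(path)
            owner = pwd.getpwuid(st.st_uid).pw_name
            mtime = time.strftime('%b %d %H:%M', time.localtime(st.st_mtime))
            print(status.rstrip())
            print('    %s  %s' % (owner, mtime))

if __name__ == '__main__':
    with open('/tmp/full_path_filter_file_list.txt') as f:
        report_local_mods([line.strip() for line in f if line.strip()])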
After Evan & Filiberto were out of the LVEA, relocked the IFO on the first attempt with no apparent problems. Ran the A2L script and cleared the SUS SDF notifications. Back to Observing mode at 20:40 (12:40). SDF notifications are listed in the attached snapshots. NOTE: Still getting RF45 noise and ETMY saturations. Range is being adversely affected.
The A2L script was updated yesterday to match the LLO script - the big change was having separate folders for the data for each site, so it's easier to tell which data is from where. The new folder didn't have write access from the Ops account though, so the A2L script was failing for Jeff. There are so many EPICS writes and messages during the script that the error message scrolled off the top of the terminal window before he saw it.
Anyhow, I have chmoded the directory, so it should be fine in the future. I have also reverted to the P2L and Y2L values that we have been using for the last month or so, since it's not clear that the new values are okay (the script had some errors partway through).
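A pre-flight check at the top of the script would make this kind of failure obvious instead of letting the error scroll off the terminal. A small sketch of such a check follows; the directory path here is only illustrative, not the actual repository layout:

# Sketch of a write-access check for the per-site A2L data directory.
# The path below is hypothetical.
import os
import sys

data_dir = '/opt/rtcds/userapps/release/isc/h1/scripts/a2l/data_H1'  # illustrative

if not os.access(data_dir, os.W_OK):
    sys.exit('ERROR: no write access to %s; fix permissions before running A2L.'
             % data_dir)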
Generally not a good morning. The IFO broke lock at 16:16 (08:16). After running an initial alignment, relocked the IFO and went into Observing mode at 18:25 (10:25). Shortly after relocking we started seeing a lot of RF45 noise events, at a rate of several per minute. It was decided to have Evan & Filiberto go into the LVEA to check the RF cabling. While Evan & Filiberto were in the LVEA working on the cabling, the IFO dropped lock. Waiting for them to come out before relocking.
RE lockloss at 8:16.
Bubba and I were turning on a few heaters and one of these feeds the output arm (HAM5-6).
The plots show duct temperature near the heater, zone temperature near the floor, and 10 days of zone temperatures.
I have multiplied the Observation_Ready bit by 70 to get it on the same scale as the temperature.
Coincidence?
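For anyone remaking this kind of plot, the trick is just that the intent bit is 0 or 1, so scaling it by ~70 puts its transitions on the same axis as a temperature in degrees F. A minimal sketch with stand-in data (the real traces would come from the trend archive):

# Sketch of overlaying the (0/1) Observation_Ready bit on a temperature trend.
# Stand-in arrays are used here; in practice both would be minute trends.
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0, 6 * 3600, 60.0)                 # 6 hours of minute-trend samples
zone_temp = 69.5 + 0.3 * np.sin(t / 3600.0)      # stand-in zone temperature [degF]
obs_ready = (t < 4 * 3600).astype(float)         # stand-in 0/1 intent bit

plt.plot(t / 3600.0, zone_temp, label='FMC zone temperature [degF]')
plt.plot(t / 3600.0, 70 * obs_ready, label='Observation_Ready x 70')
plt.xlabel('time [hours]')
plt.legend()
plt.show()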
Robert suggested that looking at the OpLevs could give an idea as to whether this lockloss could have been caused by the temperature. I trended the OpLevs and compared their motion with the LVEA temperatures, and there is definitely some correlation between the SR3 OpLev movements and the LVEA temperature. You could probably make an argument for the ITMY and PR3 OpLevs as well, but it is not as clear as for SR3.
So was this temperature excursion the cause of the lockloss? Well, the ASC signals didn't seem to be showing any signs of a problem before the lockloss, seismic activity was low, winds were low, and I didn't see any other obvious culprits.
I attached a trend of some of the CS OpLevs and LVEA temp in a 6 hour window, as well as a group of Lockloss plots with ASC signals and such.
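One way to make the "some correlation" statement more quantitative is a simple correlation coefficient between the two trends over the same window. A sketch with stand-in data follows; the real inputs would be minute trends of the SR3 OpLev pitch and an LVEA zone temperature:

# Sketch of quantifying the OpLev-vs-temperature comparison with a
# correlation coefficient. Stand-in arrays only.
import numpy as np

t = np.arange(0, 6 * 3600, 60.0)                       # 6-hour window, minute trend
lvea_temp = 69.5 + 0.3 * np.sin(t / 3600.0)            # stand-in temperature [degF]
sr3_oplev_pit = 2.0 + 0.5 * np.sin(t / 3600.0) + 0.1 * np.random.randn(t.size)

r = np.corrcoef(lvea_temp, sr3_oplev_pit)[0, 1]
print('SR3 OpLev pitch vs LVEA temperature: r = %.2f' % r)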
If air temperature was a factor in this lockloss I would be suspicious of electronics outside the vacuum near HAM5 or 6. I would not expect in-vacuum hardware to respond so promptly.
[JeffB, Jenne]
After some closer looking, we don't think that the temperature was a direct cause of the lockloss. By the time the lock was lost, the temperature in FMC-ZONE4 had only changed by a few tenths of a degree, well within our regular diurnal temperature changes. About 16 seconds before the lockloss, SRC2 pitch and yaw both started walking away. I don't know why they started walking away, but there's probably nothing to do about this, other than eventually moving the SRC ASC from AS36 to some other signal, as we're planning to do after O1 anyway.
We have a few pieces of evidence suggesting that anthropogenic noise (probably trucks going to ERDF) couples to DARM through scattered light, most likely hitting something that is attached to the ground in the corner station.
On Monday, Evan and I went to ISCT6 and listened to DARM and watched a spectrum while tapping and knocking on various things. We couldn't get a response in DARM by tapping around ISCT6. We tried knocking fairly hard on the table and the enclosure, tapping aggressively on all the periscope top mirrors and several mounts on the table, and nothing showed up. We did see something in DARM at around 100 Hz when I tapped loudly on the light pipe, but this seemed like an excitation much louder than anything that would normally happen. Lastly we tried knocking on the chamber walls on the side of HAM6 near ISCT6, and this did make some low frequency noise in DARM. Evan has the times of our tapping.
It might be worth revisiting the fringe wrapping measurements we made in April by driving the ISI, the OMC suspension, and the OMs. It may also be worth looking at some of the things done at LLO to look at acoustic coupling through the HAM5 bellows (19450 and
14:31: tapping on HAM6 table
14:39: tapping on HAM6 chamber (ISCT6 side), in the region underneath AS port viewport
14:40: tapping on HAM6 chamber (ISCT6 side), near OMC REFL light pipe
14:44: with AS beam diverter open, tapping on HAM6 chamber (ISCT6 side)
14:45: with OMC REFL beam diverter open, tapping on HAM6 chamber (ISCT6 side)
14:47: beam diverters closed again, tapping on HAM6 chamber (ISCT6 side)
All times 2015-10-19 local
I've made some plots based on the tap times Evan recorded (the recorded times seem to be off by half a minute or so compared to what actually shows up in the accelerometer and DARM). Not all taps created signals in DARM, but every signal that did show up in DARM has the same feature in a spectrogram (visible at ~0-300 Hz, 900 Hz, 2000 Hz, 3000 Hz, and 5000 Hz; see attachment 2). Time series also reveal that whether or not a tap shows up in DARM does not seem to depend on the overall amplitude of the tap as seen in the HAM6 accelerometer (see attachment 3). The PEM spectra during different tap times don't give any clue as to why one tap shows up in DARM more than another (attachments 4, 5). Apologies for the wrong conclusion I drew earlier based on a spectrum plotted with the wrong GPS time (those plots have been deleted).
I zoomed in a little closer at higher frequency and realized this pattern is similar to the unsolved n*505 glitches. Could this be a clue to figuring out the mechanism that caused the n*505?
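For reference, here is a rough sketch of how the tap-time DARM spectrograms above can be regenerated, assuming gwpy and site frame/NDS access are available; the GPS time below is an approximate placeholder for one of the 2015-10-19 taps, not an exact tap time:

# Rough sketch of a DARM spectrogram around one of the recorded tap times.
# Assumes gwpy is installed and data are accessible; the GPS time is an
# approximate placeholder for the 14:31 local tap on 2015-10-19.
from gwpy.timeseries import TimeSeries

tap_gps = 1129325478   # approximate; the log notes the recorded times are off by ~30 s
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', tap_gps - 15, tap_gps + 15)

specgram = darm.spectrogram2(fftlength=0.5, overlap=0.25) ** (1 / 2.)
plot = specgram.plot(norm='log', vmin=1e-24, vmax=1e-20)
plot.gca().set_ylim(10, 6000)   # features reported at ~0-300, 900, 2000, 3000, 5000 Hz
plot.show()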
Not sure what the cause of the power drift / alignment drift was, but it looks like we may have lost lock when the power recycling gain dropped below 33.5-ish. See aLog 23164 for some plots and details.