Reloading some Guardian code knocked us out of Observe. I took this opportunity to run a2l. PI modes 27 and 28 started ringing up during a2l, so Terra and I tweaked some settings and beat them back down.
J. Kissel, J. Driggers, S. Dwyer, J. Bartlett, D. Barker
Today's (not that big a deal) rabbit hole: while looking for what was preventing the IFO from being OBSERVATION_READY, we found the IFO master node complaining that the IMC_LOCK guardian was not ready (i.e. H1:GRD-IMC_LOCK_OK == False), even though the node was in its nominal state, managed correctly, and in EXEC. The key was that the IMC input power was just ever so slightly out of range of a recently added conditional statement that prevented the run portion of the ISS_DC_COUPLED state from ever completing:

    # Hack, since power_adjust() won't change the gain when it's less than 1dB different, 30Nov2016 JCD
    if self.counter == 0:
        if ezca['IMC-PWR_IN_OUTMON'] > 28 and ezca['IMC-PWR_IN_OUTMON'] < 32:
            ezca['IMC-REFL_SERVO_IN1GAIN'] = -8
            self.counter += 1

Today, for the first time, we hit a low of 27.9 [W], so self.counter never incremented, and the code never advanced because all subsequent conditionals relied on that self.counter having incremented. Jenne recalls that this conditional statement was added to account for the IMC-REFL_SERVO_GAIN adjustment that happens as the ISS figures out what power it wants to send into the interferometer on a given day (see LHO aLOG 32267 for a side conversation about that), and to prevent SDF from complaining that this gain was set to -7 dB instead of -8 dB.
The solution: bring the incrementing of self.counter out of the power check:

    # Hack, since power_adjust() won't change the gain when it's less than 1dB different, 30Nov2016 JCD
    if self.counter == 0:
        if ezca['IMC-PWR_IN_OUTMON'] > 28 and ezca['IMC-PWR_IN_OUTMON'] < 32:
            ezca['IMC-REFL_SERVO_IN1GAIN'] = -8
        self.counter += 1

This bug fix has been loaded and committed to the userapps repo. Once the bug fix was installed, the code was able to run successfully, the IMC_LOCK status cleared, and the IFO happily went into OBSERVATION_READY.
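As a minimal, standalone illustration (not the actual Guardian state code) of why the original ordering stalls while the reordered version advances:

    # Illustrative sketch only -- simplified from the ISS_DC_COUPLED run logic.
    def run_step_fixed(power_in, counter):
        """One pass of the (simplified) fixed logic; returns the updated counter."""
        if counter == 0:
            if 28 < power_in < 32:
                pass  # the real code sets ezca['IMC-REFL_SERVO_IN1GAIN'] = -8 here
            counter += 1  # increment regardless of whether the power was in range
        return counter

    # With the buggy ordering (increment inside the power window), an input power
    # of 27.9 W leaves the counter at 0 forever and the state never completes.
    # The fixed ordering advances on the first pass either way:
    assert run_step_fixed(27.9, 0) == 1
    assert run_step_fixed(30.0, 0) == 1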
TITLE: 12/07 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing
OUTGOING OPERATOR: Jeff
CURRENT ENVIRONMENT:
Wind: 13mph Gusts, 12mph 5min avg
Primary useism: 0.28 μm/s
Secondary useism: 0.31 μm/s
QUICK SUMMARY: Ran through initial alignment (IA) at the beginning of the shift. We were back to Observing at 1:28 UTC.
While working on a different script, I found that both the X and Y cameras can be talked to simultaneously, so I'm leaving both codes running again. If the DIAG_MAIN guardian complains that the HWS code has stopped running, please do not comment out the HWS diagnostic check just to get rid of the message. I need to know if the codes stop running so we can try to troubleshoot it some more.
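For context, a DIAG_MAIN-style check for this would look roughly like the sketch below; the heartbeat channel name and timeout are assumptions for illustration, not the actual diagnostic code.

    # Rough sketch of a DIAG_MAIN-style test; the channel name and timeout are
    # hypothetical placeholders, not the production diagnostic. 'ezca' is the
    # EPICS interface provided by the Guardian environment.
    import time

    _hws_last = {'value': None, 'stamp': time.time()}

    def HWS_CODE_RUNNING():
        """Yield a message if the (assumed) HWS heartbeat channel stops updating."""
        hb = ezca['TCS-ITMX_HWS_HEARTBEAT']  # hypothetical heartbeat channel
        now = time.time()
        if hb != _hws_last['value']:
            _hws_last['value'] = hb
            _hws_last['stamp'] = now
        elif now - _hws_last['stamp'] > 60:
            yield 'ITMX HWS code appears to have stopped running'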
All of the dust monitor EPICS IOCs, the FMCS EPICS IOC, and the dewpoint EPICS IOC were restarted due to a reboot of the computer on which they were running.
J. Kissel (with information from S. Dwyer)
I have been blaming the rotation stage for the 5 [W] range of IFO input power from lock stretch to lock stretch, but it looks like I've been wrong this whole time. I attach a 25 day trend of the power into the IMC, compared to
- the power transmitted from the PMC,
- the power reflected off the input of the PMC,
- the relative power diffracted by the AOM (servoed by the 1st loop), and
- the 2nd loop reference voltage (servoed by the 2nd loop).
One can see:
- The range of power into the IMC is 5 [W], with an average of 30 [W], which means a relative power range of 16%.
- The range of transmitted power from the PMC shows a similar range: 10 [W] out of an average of 60 [W], a relative range of 16%.
- The power changes are anti-correlated with the PSL diffracted power, as expected, but that channel appears to be poorly calibrated, since it only ranges between 2 and 6%. This is likely the cause of the power variations, and they are under-reported.
- The power reflected by the PMC shows some long-term trend over the 25 days, but no correlation with the lock-to-lock differences.
- The second loop reference voltage seems to track the long-term trend of the PMC REFL power.
- Though there may be some correlation in isolated events, the human-adjusted (and NOT every lock stretch) 1st loop voltage reference is not correlated with the lock-to-lock changes in power.
The fundamental problem: the DC voltage reference for the ISS' 1st loop is not stable. Currently we have some alarms on the diffracted power, but they're not very tight, and they are only designed to prevent saturation of the servo. The power fluctuations *could* be remedied with some hacky slow digital servo (see the sketch below), or a tightening of the threshold on the diffracted power, but it seems like we should be able to get a more stable voltage reference.
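To make the "hacky slow digital servo" idea concrete, here is a rough sketch; the channel names, setpoint, gain, sign of the correction, and update rate are all placeholders for illustration, not a proposed implementation.

    # Rough sketch of a slow digital servo holding the diffracted power at a
    # setpoint by trimming the 1st-loop reference. All channel names, the
    # setpoint, gain, and limits are hypothetical placeholders, and the sign
    # of the correction would need checking against the real servo.
    import time
    import epics  # pyepics

    SETPOINT_PCT = 4.0   # target diffracted power [%]
    GAIN = 0.01          # correction per step [V per %] -- deliberately small
    LIMIT_V = 0.5        # never move the reference more than this from its start

    ref0 = epics.caget('H1:PSL-ISS_REFSIGNAL')
    while True:
        diffracted = epics.caget('H1:PSL-ISS_DIFFRACTION_AVG')
        error = diffracted - SETPOINT_PCT
        new_ref = epics.caget('H1:PSL-ISS_REFSIGNAL') - GAIN * error
        # clamp so a bad readback can't walk the reference away
        new_ref = min(ref0 + LIMIT_V, max(ref0 - LIMIT_V, new_ref))
        epics.caput('H1:PSL-ISS_REFSIGNAL', new_ref)
        time.sleep(60)   # slow: one small correction per minute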
The virtual machine which serves MATLAB licenses is being rebooted. This machine also serves FMCS EPICS data, so RO alarms are being generated.
Ops Shift Log: 12/06/2016, Day Shift 16:00 – 00:00 UTC (08:00 – 16:00 PT)
State of H1: IFO is down due to two earthquakes within about an hour
Intent Bit: Commissioning
Support: Jeff K., Keita, Jenne, Terra
Incoming Operator: Travis
Shift Summary: After the scaled-down maintenance window (due to maintenance work done on Monday), tried relocking. Found green power low in both arms. Ran an initial alignment and relocked on the first attempt. Sat at NOMINAL_LOW_NOISE in Commissioning while Evan did some commissioning work on the ITM reaction chain alignment. Went to Observing when Evan finished. Rode through a 5.8 Mag EQ centered near Trinidad. Lost lock due to a 6.4 Mag EQ in Indonesia. Put IFO into DOWN until microseism drops below 1.0 um/s.
Activity Log: Time - UTC (PT)
07:00 (00:00) Take over from TJ
16:19 (08:19) Chris – Escorting pest control through LVEA
16:20 (08:20) MY_CP4_LT250_PUMP_LEVEL_PCT alarm – Cleared on its own
16:34 (08:34) Take IFO to Commissioning for start of maintenance
16:45 (08:45) Richard – Moving fiber bundles in MSR (WP #6292)
16:45 (08:45) Chris – Finished at CS, escorting pest control to Mid-Y and End-Y
16:46 (08:46) Christina – Going to End-X for cleaning
16:48 (08:48) Karen – Going into the LVEA to clean around HAM4, 5, 6, and the high bay
16:50 (08:50) Jim – Going to End-X to unplug the WAP
16:50 (08:50) Carlos – Going to End-Y to unplug the WAP
17:00 (09:00) Richard – Taking Scott M (LLO) into LVEA to look at vacuum racks
17:00 (09:00) Joe – Going into LVEA to check eyewash stations
17:05 (09:05) Cintas on site to service mats
17:10 (09:10) Kyle & Gerardo – Going into LVEA to work on PT180 (WP #6377)
17:11 (09:11) Norco on site to deliver N2 to CP2
17:18 (09:18) Jim – Back from End-X
17:19 (09:19) Richard & Scott – Out of LVEA
17:31 (09:31) Lockloss – Due to maintenance activities
17:37 (09:37) Carlos – Back from End-Y – Reports the parking lot is very icy
17:45 (09:45) Christina – Leaving End-X
17:46 (09:46) Joe – Out of the LVEA
17:46 (09:46) Chris – Pest control finished at ends and corner – Working down the arms
17:50 (09:50) Karen – Out of the LVEA
18:08 (10:08) Alfredo & Elizabeth – Going to Mid-Y to do inventory
18:30 (10:30) Dave – Going to Mid-X and Mid-Y to unplug WAP
18:44 (10:44) Joe – Going into LVEA to service first aid stations
18:47 (10:47) Norco leaving site
18:52 (10:52) Rana – Escorting tour in the LVEA (WP #6387)
18:57 (10:57) Richard & Evan – Focusing camera at SRM (WP #6371)
19:06 (11:06) Dave – Back from mid-stations
19:20 (11:20) Richard & Evan – Out of the LVEA
19:27 (11:27) Alfredo & Elizabeth – Back from Mid-Y
19:36 (11:36) Paradise Water on site to deliver water
19:40 (11:40) Dave – Power cycle LIGO file system server (WP #6388)
19:44 (11:44) Bubba – Inspection tour of LVEA
20:05 (12:05) Patrick – Doing sweep of the LVEA
20:10 (12:10) Reset fiber polarization on X arm
20:12 (12:12) Kyle & Gerardo – Out of the LVEA
20:15 (12:15) Start to relock
21:31 (13:31) After initial alignment – Relocked at NOMINAL_LOW_NOISE, 27.9 W, 66.7 Mpc
21:31 (13:31) In Commissioning mode – Evan & Rana work on ITM reaction chain alignment
22:00 (14:00) Damp PI modes 9, 26, 27, 28
22:14 (14:14) Rode through a Mag 5.8 EQ near Trinidad
22:19 (14:19) Set mode to Observing
23:08 (15:08) Lost lock due to 6.4 Mag EQ in Indonesia. Primary microseism up to 7.0 um/s
23:09 (15:09) Put IFO into DOWN until microseism settles down
00:00 (16:00) Turn over to Travis
The file /ligo/cds/lho/h1/camera/H1-VID-CAM17.ini was modified to change from
Frame Type = Mono12
to
Frame Type = Mono8
Now the TIFF image files are not 32-bit and can be read. Note: cameras 18, 19, and 21 also have Mono12 defined.
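A quick way to verify the change is to check the bit depth of a newly saved snapshot, e.g. with Pillow (the file path below is just an example):

    # Quick bit-depth check of a saved snapshot; the path is an example placeholder.
    from PIL import Image

    img = Image.open('/tmp/H1-VID-CAM17_snapshot.tiff')
    # Pillow mode 'L' = 8-bit grayscale; 'I;16' or 'I' indicate 16/32-bit frames.
    print(img.mode, img.size)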
J. Kissel, J. Bartlett
Another impressive show by the SEI / ISC team, in which the IFO rode through sustained ~0.5-1 [um/s] BLRMS EQ band (30-100 mHz) motion, during ~75th percentile, ~0.5 [um/s] microseism (100-300 mHz BLRMS) and 5-10 [mph] winds. A good study for future reference of what we can withstand. The lock loss occurred @ 2016-12-06 23:07:33 UTC, after about 1.5 hours of EQ evidence on site as recorded by the BLRMS. This was the survival of several ~6.0 mag EQs in a row:
- 5.8 [mag] Scarborough, Trinidad and Tobago
- 6.4 [mag] Reuleuet, Indonesia
- 4.3 [mag] Scarborough, Trinidad and Tobago
As per Sheila and Jim's recommendation (see LHO aLOG 32086), we've used the SEI_CONF manager node to move the platforms to the LARGE_EQ_NOBRSXY state after the IFO unlocked. This moves the HAMs to using no sensor correction ("SC_OFF") and the BSCs to "BLEND_Quite_250_SC_None". We will restore the configuration to the nominal "WINDY" configuration once the EQ band falls below 1 [um/s] BLRMS.
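For reference, switching this configuration amounts to requesting the state on the SEI_CONF manager node; a sketch using its Guardian request channel is below (the GRD-*_REQUEST channel naming is assumed from the usual Guardian pattern, not verified here).

    # Sketch: request the earthquake configuration on the SEI_CONF manager node.
    import epics

    epics.caput('H1:GRD-SEI_CONF_REQUEST', 'LARGE_EQ_NOBRSXY')  # during the event
    # ...and once the EQ band falls back below 1 [um/s] BLRMS:
    # epics.caput('H1:GRD-SEI_CONF_REQUEST', 'WINDY')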
Rode through a 5.8 Mag EQ near Trinidad OK. Primary microseism peaked at just a little over 1.0 um/s. Lost lock at 23:08 due to the 6.4 Mag EQ near Indonesia. Primary microseism peaked at 7.0 um/s. Holding the IFO in DOWN until the microseism improves. Jeff K. switched to LARGE_EQ_NOBRSXY after the lockloss.
| Work Permit | Date | Description | alog/status |
| 6388.html | 2016-12-06 11:18 | Power cycle the /ligo file system server, it is reporting errors. This will freeze all CDS workstations and many servers for about 10 minutes. Front ends and DAQ should be unaffected. | 32247, 32249, & 32250 |
| 6387.html | 2016-12-06 10:49 | Rana is going to give a tour to a guest during the maintenance window. | |
| 6386.html | 2016-12-06 10:23 | Vent annulus volume between 2K input Mode Cleaner tubes A and B. Connect annulus ion pump hardware and pump with rotating shaft vacuum pumps until ion pump can maintain unassisted. Can be done limited to maintenance days until complete | 32254 |
| 6385.html | 2016-12-06 09:53 | Install OS and setup Conlog on conlog-master and conlog-replica machines in the MSR. | |
| 6384.html | 2016-12-05 17:31 | Now that the IMC ASC model is running at high frequencies, we can include jitter injections to 7kHz in our noise budget. This is a quick (10 minute) measurement injecting pitch and yaw on the IMC PZT. | |
| 6383.html | 2016-12-05 13:50 | At both end stations: disconnect WAP ethernet cables from switch and re-activate switch port to permit future use of WAP during O2 when needed. [“WAP” = wireless access point] | 32201, 32342, & 32250 |
| 6382.html | 2016-12-05 13:39 | Add second camera capture card in h1hwsmsr and see if we can capture two images without interference between each other. | 32250 |
| 6381.html | 2016-12-05 12:18 | Replace the TCSY Flow meter. Turn TCSY laser off, valve out piping volume, remove yellow paddle wheel flow meter, install new one (same version/model number). Check flow, get laser going again - will need to stabilize. | |
| 6380.html | 2016-12-05 12:05 | Insert zp(160,16) for all four segments of ETMX oplev. See https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32115 | 32199 |
| 6379.html | 2016-12-05 11:57 | To search for a permanent solution for the ISS first loop, we will make the following temporary change in the optical configuration for the ISS box. This shouldn't impact the H1 noise or stability performance because the ISS box is used only for monitoring purposes. We will swap mirror M32 with a low-reflectivity mirror (R ~ a few %). This will reduce the amount of light entering the ISS box from approximately 400 mW to ~10 mW. In addition, we will place an extra beam dump (a black hole type) to dump the transmission behind M32. Once the mirror is swapped, we will adjust the half wave plate in the ISS box to maximize the p-polarized light falling on the PDB photodetector. | 32206 & 32215 |
| 6378.html | 2016-12-05 11:31 | ECR1600364 add 5 slow channels to DMT broadcaster to help with ETMX ESD diagnostics by detchar. | 32195 & 32212 |
| 6377.html | 2016-12-05 08:58 | Isolate PT180 from site vacuum volume and pump with temporarily connected rotating shaft pumps to compare gauge drift behavior. Requires work at height via ladder also LASER SAFE in LVEA | 32185, 32209, & 32260 |
| 6376.html | 2016-12-05 08:57 | Clean the PSL enclosure Anti-Room and Laser Room. Peter K. will escort and assist | 32183 |
| 6375.html | 2016-12-05 08:40 | Adjust the reference cavity periscope alignment because the reference cavity transmission has fallen by ~1/3rd. | 32183 |
| 6374.html | 2016-12-05 07:55 | ECR: Add a 16 kHz DAQ channel to monitor the RIN in reflection | 32191 & 32212 |
| 6373.html | 2016-12-05 07:54 | ECR: Increasing processing rate of ASCIMC model | 32192 & 32212 |
| 6372.html | 2016-12-05 07:12 | Install new High Voltage supplies for the ESD Replacing the Kepco units. This should stop the random tripping that has occurred in the past. | 32186 |
| 6371.html | 2016-12-05 07:09 | Replace analog camera with GigE camera removed from ISCT6. Use same network path so no switch work is required. | 32197, 32212, & 32245 |
| 6370.html | 2016-12-04 12:02 | Update calibration on the DMT to gstlal-calibration-1.1.0-v1 and restart it during next maintenance or opportunity on 12/5 or 12/6 at LHO. * Bug fix to make the output of primary and redundant pipeline identical * Additional coherence gating in the kappa_tst calculation | 32188 |
| 6369.html | 2016-12-02 15:15 | Try some different ST2 & ST1 configurations during high microseism and wind. Specifically, try changing ST2_CONF nodes to 250_SC_A and see if there is any improvement in IFO controls. Requires consideration of impact on science segments. Transitions, testing to be done only during single IFO time. | 32147 |
| 6368.html | 2016-12-02 12:01 | Continue with schedule of roaming high-frequency calibration line from PCALX to establish high-frequency calibration uncertainty. Switching frequencies will only occur in either Single IFO time or when IFO is down, otherwise we should be observation ready. Detchar will continue to be informed. We expect to complete the schedule in ~1 week, and then line will be turned off until further notice. | 32179 |
| 6367.html | 2016-12-01 11:29 | I will build a new guardian machine on a fast front-end computer using Debian 8 as the OS; the long-term plan is to move the guardian processes and nodes onto this new machine gradually. We will move all the processes without interruption to the current production server. | |
| 6366.html | 2016-11-30 16:43 | Memory usage on h1guardian0 has increased significantly over the past few weeks. At the next opportunity we will increase it from 12 GB to 48 GB. | 32112 |
| 6365.html | 2016-11-30 12:45 | Fix bug discussed in bug report 1062 and aLOG 31996 and comments | |
| 6364.html | 2016-11-30 08:16 | Restart h1hwsmsr computer. |
(see WP #6377) Having not asked for permission, I am now asking for forgiveness! It was my intention to shut down the temporary pump setup @ BSC8 at the end of maintenance today. Instead, I have left the setup running. Obviously, we will shut it down if/when directed to do so. As is, we have just this morning entered into the "pressure region of interest" for our long-term gauge drift data collection. The nature of the problem doesn't lend itself to 4 hours per week of data collecting. Recall that ON/OFF tests of this setup while in a locked "Low Noise" IFO state produced nothing of interest to the PEM folks just prior to the start of O2.
~1630 - 1635 hrs. local -> Kyle in and out of LVEA At the request of others, I shut down the setup at PT180/BSC8. This included isolating PT180, each of the three vacuum pumps and de-energizing all of the related components. The ladder which had been left up against BSC8 was also removed. NOTE: PT180 is left isolated from the site vacuum volume and all pumping sources. As such, PT180's indicated pressure value does not represent the pressure at BSC8 etc. and will be rising until further notice.
Keita, Rana, Evan
After refocusing the HAM5 camera, we see in full lock that there are at least two ghost beams hitting the SRM composite mass. These ghost beams move when CPY (but not CPX) is moved.
The attached plots show the DARM spectrum for different CPY alignments. There is no obvious sweet spot, but perhaps we will find one by looking at some long-term DARM BLRMS.
This image shows the HAM5 camera with the nominal CPY alignment (-150 µrad in pitch, 0 µrad in yaw). The two bright, vertically aligned spots on the left-hand side are the CPY ghost beams.

Also, the take snapshot button for camera 17 just saves blank images.
Here are 2 other old alogs that are relevant if you are worried about scatter from CPs:
Robert saw scattering from CPY in the PR2 camera: 31243
I saw that 6 times higher drive was needed on CPX than CPY to make noise show up in DARM, and the noise that did show up was clear fringe wrapping shelves for CPX and a broad shelf for CPY. 30979
CPY scattering seemed like it was not an immediate problem, since the drive required to see noise in DARM was 100 um at the error point of the damping loop at 0.1 Hz. I don't know the gain of the damping loop, but assume that it is not less than -20 dB at 0.1 Hz, so that we are pushing the mass at least 10 um, probably more. This should be larger than the normal path length modulation. It would be good to look at what the gain of the damping loop actually is, to see if this is really a much larger path length modulation than what we would normally expect.
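Making that back-of-the-envelope estimate explicit (the -20 dB figure is the assumption from above, not a measurement):

    # Back-of-the-envelope: how much the mass actually moves for a 100 um drive
    # at the damping-loop error point, assuming -20 dB of gain at 0.1 Hz.
    drive_um = 100.0
    gain_db = -20.0                      # assumed, not measured
    factor = 10 ** (gain_db / 20.0)      # -20 dB -> 0.1
    print(drive_um * factor)             # ~10 um of mass motion, probably more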
The SRC model shows that CPY is installed nearly parallel to the HR surface of ITMY (misalignment angle of 0.07 degrees). This number gives a vertical offset of the ghost beam on SR2 of 2 cm and on SRM of 10 cm. This is how I read the camera image. One might also suggest that the plate is misaligned even further from the nominal position, by 0.14 degrees. In this case the ghost beams swap and we can't tell the difference.
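As a rough consistency check of those offsets (treating the ghost beam as deviating by about twice the plate misalignment angle; this is only a geometric estimate, not a statement about the actual SRC path lengths):

    # Rough geometry check: propagation distances implied by the quoted offsets,
    # assuming the ghost beam deviates by ~2*theta from the main beam.
    import math

    theta = math.radians(0.07)           # CPY misalignment relative to ITMY HR
    deviation = 2 * theta                # assumed ghost-beam angular deviation
    for optic, offset_m in [('SR2', 0.02), ('SRM', 0.10)]:
        print(optic, offset_m / deviation, 'm implied lever arm')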
Not sure of the cause yet; everything seemed good and normal. Running lockloss plots now, though, and I'll update if I find anything.
But don't worry, we are back to Observing at 10:36 UTC.
I haven't seen anything of note for the lockloss. I checked the usual templates, with some screenshots of them attached.
This seems like another example of the SR3 problem. (alog 32220 FRS 6852)
If you want to check for this kind of lockloss, zoom the time axis right around the lockloss time to see if the SR3 sensors change fractions of a second before the lockloss.
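If you prefer to check offline, a gwpy sketch along these lines pulls the SR3 top-mass OSEM signals around the lockloss time (the channel names follow the usual SUS pattern and should be double-checked, and the GPS time is a placeholder):

    # Sketch: fetch and plot SR3 top-mass OSEM signals around a lockloss time.
    # Channel names assume the usual H1:SUS-SR3_M1_OSEMINF_*_OUT_DQ pattern.
    from gwpy.timeseries import TimeSeriesDict

    t_lockloss = 1164800000  # placeholder GPS time; substitute the actual lockloss
    channels = ['H1:SUS-SR3_M1_OSEMINF_%s_OUT_DQ' % osem
                for osem in ('T1', 'T2', 'T3', 'LF', 'RT', 'SD')]
    data = TimeSeriesDict.get(channels, t_lockloss - 5, t_lockloss + 2)
    plot = data.plot()       # look for a step or glitch ~0.2 s before the lockloss
    plot.savefig('sr3_lockloss_check.png')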
J. Kissel, B. Weaver, T. Sadecki
Just for reference, I include a lockloss that was definitely *not* caused by the SR3 glitching, for future comparison and for distinguishing whether this SR3 glitch has happened or not. Also, remember: although the OPS wiki's instructions suggest that one must and can only use lockloss2, not everyone has the alias for this more advanced version yet. You can make the plots and do everything you need with the more basic version:
lockloss -c /ligo/home/ops/Templates/Locklosses/channels_to_look_at_SR3.txt select
It would also be great to get the support of @DetChar on this one. The fear is that these glitches begin infrequently, but get successively more frequent. Once they do, we should consider replacing electronics. The fishy thing, though, is that LF and RT are on separate electronics chains, given the cable layout of HAM5 (see D1101917). Maybe these glitches are physical motion? With statistics of only two, it's unclear whether LF and RT just *appear* to be the culprits, or whether it's a random set of OSEMs glitching.
See my note in aLOG 32220, namely that Sheila and I looked again and we see that the glitch is on the T1 and LF coils, which share a line of electronics. The second lockloss TJ started with in this log (12/06) is somewhat inconclusively linked to SR3 - no "glitches" like the first one on 12/05, but instead all 6 top-mass SR3 OSEMs show motion before the lockloss.
Sheila, Betsy
Attached is a ~5 day trend of the SR3 top stage OSEMs. T1 and LF do have an overall step in the min/max of their signals which happened at the time of that lockloss which showed the SR3 glitch (12/05 16:02 UTC)...
I used the lockloss2 script that automatically checks for sus saturations and plots them using the lockloss tool, and saw that one of the three locklosses (2016-12-05 16:02:42 UTC) in the last day or so was probably caused by a glitch on SR3. The attached screenshot shows the timeline, there is clearly a glitch on the top mass of SR3 about 0.2 seconds before the lockloss.
The dither outputs (which we use for the cage servo) don't show anything unusual until after the lockloss, which means that this is not a cage servo problem. Looking at the top-mass OSEMINF channels, LF and RT are the two that seem to glitch first, at about the same time.
I've added a lockloss template /ligo/home/ops/Templates/Locklosses/channels_to_look_at_SR3.txt for any operators who have an unexplained lockloss and want to check if it is similar to this one.
Sheila and I looked again at this particular lockloss (2016-12-06 10:05:39 UTC) and agree that the glitches that likely caused the lockloss are actually on the T1 and LF top-stage OSEMs. These are indeed on the same cabling, satellite amp, and driver run. See the attached updated lockloss plot, this time with the OSEMINF channels. We'll keep watching locklosses to see if this happens more.
Back to Observe 2:01 UTC.