(see WP #6377) Having not asked for permission, I am now asking for forgiveness! It was my intention to shut down the temporary pump setup @ BSC8 at the end of maintenance today. Instead, I have left the setup running. Obviously, we will shut it down if/when directed to do so. As is, we have just this morning entered into the "pressure region of interest" for our long-term gauge drift data collection. The nature of the problem doesn't lend itself to 4 hours per week of data collecting. Recall that ON/OFF tests of this setup while in a locked "Low Noise" IFO state, done just prior to the start of O2, produced nothing of interest to the PEM folks.
Keita, Rana, Evan
After refocusing the HAM5 camera, we see in full lock that there are at least two ghost beams hitting the SRM composite mass. These ghost beams move when CPY (but not CPX) is moved.
The attached plots show the DARM spectrum for different CPY alignments. There is no obvious sweet spot, but perhaps we will find one by looking at some long-term DARM BLRMS.
This image shows the HAM5 camera with the nominal CPY alignment (-150 µrad in pitch, 0 µrad in yaw). The two bright, vertically aligned spots on the left-hand side are the CPY ghost beams.

Also, the take snapshot button for camera 17 just saves blank images.
Here are 2 other old alogs that are relevant if you are worried about scatter from CPs:
Robert saw scattering from CPY in the PR2 camera: 31243
I saw that 6 times higher drive was needed on CPX than CPY to make noise show up in DARM, and the noise that did show up was clear fringe wrapping shelves for CPX and a broad shelf for CPY. 30979
CPY scattering seemed like it is not an immediate problem, since the drive required to see noise in DARM was 100 um at the error point of the damping loop at 0.1 Hz. I don't know the gain of the damping loop, but assuming it is not less than -20 dB at 0.1 Hz, we are pushing the mass at least 10 um, probably more. This should be larger than the normal path length modulation. It would be good to look at what the gain of the damping loop actually is, to see whether this really is a much larger path length modulation than we would normally expect.
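For reference, a minimal sketch of the conversion used above; the -20 dB figure is the assumed worst-case loop suppression stated in the text, not a measured value:

```python
# Rough conversion from an error-point drive to physical mass motion,
# assuming the damping-loop suppression at 0.1 Hz is no worse than -20 dB
# (an assumption from the log, not a measurement).
drive_um = 100.0          # drive amplitude at the damping-loop error point [um]
loop_gain_db = -20.0      # assumed worst-case suppression at 0.1 Hz [dB]

suppression = 10 ** (loop_gain_db / 20.0)   # -20 dB -> factor of 0.1
motion_um = drive_um * suppression          # lower bound on the mass motion

print(f"Estimated mass motion >= {motion_um:.0f} um")   # ~10 um
```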
The SRC model shows that CPY is installed nearly parallel to the HR surface of ITMY (misalignment angle of 0.07 degrees). This number gives a vertical offset of the ghost beam of 2 cm on SR2 and 10 cm on SRM, which is consistent with how I read the camera image. One might also suggest that the plate is misaligned even further from the nominal position, by 0.14 degrees. In that case the ghost beams swap and we cannot tell the difference.
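As a rough sanity check on those numbers, here is a minimal small-angle sketch. The lever-arm distances from CPY to SR2 and SRM below are placeholders chosen only for illustration, not values taken from the SRC model, and the factor of 2*theta assumes a simple reflection off the tilted plate (refraction inside the CP is ignored):

```python
import math

# Small-angle estimate of where a ghost beam reflected off a compensation
# plate tilted by theta lands downstream: offset ~ 2 * theta * L.
theta_deg = 0.07                      # CPY misalignment quoted in the log
theta = math.radians(theta_deg)

# Hypothetical propagation distances (placeholders, NOT from the SRC model).
L_to_sr2 = 8.0    # [m] CPY -> SR2, assumed for illustration
L_to_srm = 40.0   # [m] CPY -> SRM (folded path), assumed for illustration

for name, L in [("SR2", L_to_sr2), ("SRM", L_to_srm)]:
    offset_cm = 2 * theta * L * 100
    print(f"{name}: ~{offset_cm:.0f} cm offset")   # ~2 cm and ~10 cm, respectively
```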
Laser Status:
SysStat is good
Front End Power is 34.4 W (should be around 30 W)
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 0.0 days, 4.0 hr 49.0 minutes (should be days/weeks)
Reflected power is 13.54 W and PowerSum = 72.45 W.
FSS:
It has been locked for 0.0 days 1.0 hr and 18.0 min (should be days/weeks)
TPD[V] = 3.587 V (min 0.9 V)
ISS:
The diffracted power is around 4.966% (should be 5-9%)
Last saturation event was 0.0 days 4.0 hours and 49.0 minutes ago (should be days/weeks)
(Kyle, Gerardo)
Installed annulus ion pump hardware at the H2 input mode cleaner tube. The entire system is ready to be pumped down and turned on next maintenance day.
Notes about installation: we vented the (already vented) annulus space, installed all hardware (pump and controller), and managed to pump it down to 4.2x10^-05 torr before the maintenance period was over.
Items to complete next maintenance day: pump the system down, leak check, and turn the ion pump on.
Work done under WP#6386.
J. Kissel
Continuing the schedule of the roaming PCALX line. We'll run this line for one more day, so that we get two points at 4001.3 Hz with a 30 [W] IFO to check consistency between the two repeats of the schedule, and then shut off the line until further notice.

Current pass through the schedule:
Frequency   Planned Amplitude   Planned Duration   Actual Amplitude   Start Time                  Stop Time                   Achieved Duration
(Hz)        (ct)                (hh:mm)            (ct)               (UTC)                       (UTC)                       (hh:mm)
------------------------------------------------------------------------------------------------------------------------------------------------
1001.3      35k                 02:00              39322.0            Nov 28 2016 17:20:44 UTC    Nov 30 2016 17:16:00 UTC    days @ 30 W
1501.3      35k                 02:00              39322.0            Nov 30 2016 17:27:00 UTC    Nov 30 2016 19:36:00 UTC    02:09 @ 30 W
2001.3      35k                 02:00              39322.0            Nov 30 2016 19:36:00 UTC    Nov 30 2016 22:07:00 UTC    02:31 @ 30 W
2501.3      35k                 05:00              39322.0            Nov 30 2016 22:08:00 UTC    Dec 02 2016 20:16:00 UTC    days @ 30 W
3001.3      35k                 05:00              39322.0            Dec 02 2016 20:17:00 UTC    Dec 05 2016 16:58:57 UTC    days @ 30 W
3501.3      35k                 05:00              39322.0            Dec 05 2016 16:58:57 UTC    Dec 06 2016 21:09:56 UTC    ~15:00 @ 30 W
4001.3      40k                 10:00              39322.0            Dec 06 2016 21:09:56 UTC
4301.3      40k                 10:00              39322.0
4501.3      40k                 10:00              39322.0
4801.3      40k                 10:00              39222.0
5001.3      40k                 10:00              39222.0

Previous pass through the schedule (for reference):
Frequency   Planned Amplitude   Planned Duration   Actual Amplitude   Start Time                  Stop Time                   Achieved Duration
(Hz)        (ct)                (hh:mm)            (ct)               (UTC)                       (UTC)                       (hh:mm)
------------------------------------------------------------------------------------------------------------------------------------------------
1001.3      35k                 02:00              39322.0            Nov 11 2016 21:37:50 UTC    Nov 12 2016 03:28:21 UTC    ~several hours @ 25 W
1501.3      35k                 02:00              39322.0            Oct 24 2016 15:26:57 UTC    Oct 31 2016 15:44:29 UTC    ~week @ 25 W
2001.3      35k                 02:00              39322.0            Oct 17 2016 21:22:03 UTC    Oct 24 2016 15:26:57 UTC    several days (at both 50 W and 25 W)
2501.3      35k                 05:00              39322.0            Oct 12 2016 03:20:41 UTC    Oct 17 2016 21:22:03 UTC    days @ 50 W
3001.3      35k                 05:00              39322.0            Oct 06 2016 18:39:26 UTC    Oct 12 2016 03:20:41 UTC    days @ 50 W
3501.3      35k                 05:00              39322.0            Jul 06 2016 18:56:13 UTC    Oct 06 2016 18:39:26 UTC    months @ 50 W
4001.3      40k                 10:00              39322.0            Nov 12 2016 03:28:21 UTC    Nov 16 2016 22:17:29 UTC    days @ 30 W (see LHO aLOG 31546 for caveats)
4301.3      40k                 10:00              39322.0            Nov 16 2016 22:17:29 UTC    Nov 18 2016 17:08:49 UTC    days @ 30 W
4501.3      40k                 10:00              39322.0            Nov 18 2016 17:08:49 UTC    Nov 20 2016 16:54:32 UTC    days @ 30 W (see LHO aLOG 31610 for caveats)
4801.3      40k                 10:00              39222.0            Nov 20 2016 16:54:32 UTC    Nov 22 2016 23:56:06 UTC    days @ 30 W
5001.3      40k                 10:00              39222.0            Nov 22 2016 23:56:06 UTC    Nov 28 2016 17:20:44 UTC    days @ 30 W (line was OFF and ON for Hardware INJ)
Unplugged an unused extension cord (maybe two). Turned off a power strip that a computer was plugged into; the computer appeared to be off, but I wanted to make sure. The left door on the left table enclosure behind the black laser-safe curtains near the H1 PSL is unlocked; I did not have a key with me to lock it. Went through the PSL science mode checklist hanging next to the air handling controls for the H1 PSL enclosure. Did nothing for the H2 PSL enclosure. Was not certain of the procedure for checking the ISC table fans, but did not hear humming. The pump near BSC8 is still running, and there is a tall ladder against BSC8. Made sure both card readers were off.
6388 Power cycle cdsfs0
Jim, Dave, Ryan, Carlos:
cdsfs0 (NFS server for /ligo) was powered down, the RAID controller card and cables were inspected, and it was powered back up between 11:42 and 12:02 PST. This is the first action in an investigation of recent periods of /ligo unavailability.
6383 outbuilding WAP disconnects
Carlos, Jim, Dave:
The WAP ethernet cables were disconnected from the local switch at all outbuildings (EX, EY, MX, MY). These switch ports were then reactivated, permitting future use of CDS WiFi if the cables are reconnected to local switch port #12.
6382 MSR TCS Hartmann Wavefront Sensor
Nutsinee:
The problem of the ITMX HWS image being glitched by running the ITMY camera could not be reproduced. If this problem re-appears, we will install a second camera card in h1hwsmsr. For now, no further action will be taken.
Looking at the logs on cdsfs0 over the past week, here are the times at which an error was reported that could have led to /ligo being unavailable for a short period (a sketch of this kind of log scan is included after the list):
Wed Nov 30 04:43
Wed Nov 30 05:51
Wed Nov 30 18:37
Wed Nov 30 20:50
Fri Dec 2 04:40
Mon Dec 5 03:48
Mon Dec 5 14:52
Mon Dec 5 16:17
Mon Dec 5 17:32
Mon Dec 5 19:15
Tue Dec 6 10:42
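For anyone repeating this check, a minimal sketch of the scan; the log path and the error keywords to match are assumptions for illustration, not the exact ones used on cdsfs0:

```python
#!/usr/bin/env python
# Pull timestamps of NFS/RAID-related errors out of a syslog-style file.
# The path and keywords below are placeholders, not the exact ones on cdsfs0.
import re

LOGFILE = "/var/log/syslog"                                     # assumed log location
PATTERN = re.compile(r"(nfsd|raid|I/O error)", re.IGNORECASE)   # assumed error keywords

hits = []
with open(LOGFILE) as f:
    for line in f:
        if PATTERN.search(line):
            # syslog lines start with e.g. "Dec  6 10:42:13 cdsfs0 ..."
            hits.append(" ".join(line.split()[:3]))

for stamp in hits:
    print(stamp)
print(f"{len(hits)} matching entries")
```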
Maintenance activities are complete. Reset Fiber Polarization on X-Arm. Starting relocking.
After detecting failures on /ligo, we powered off cdsfs0, reviewed all cable connections, and verified that the controller card was seated correctly on the board. After powering it back on, we manually remounted and exported /ligo. We also verified that all workstations were connected again.
Richard, Patrick, Evan
We refocused the SRM digital camera (cam17) while watching IR flashes on the SRM. The camera aperture was also stopped down a bit.
I've unmonitored all elements of the ASC DC5 input matrix for OBSERVE, in the hope that this is somewhat useful for passive ASC sensing measurements.
Unmonitored channels are H1:ASC-INMATRIX_P_16_XX and H1:ASC-INMATRIX_Y_16_XX where XX is 1, 2, 3, ... 33.
DC5 is not used at LHO, the outputs of the DC5 filters are still monitored (attached), and as soon as the output is turned on we're kicked out of observing, so this has zero impact on IFO performance.
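For convenience, a tiny sketch that expands the unmonitored channel names listed above (purely illustrative; the names simply follow the pattern given in this entry):

```python
# Expand the unmonitored SDF channel names (66 channels total: P and Y, 1-33).
channels = [
    f"H1:ASC-INMATRIX_{dof}_16_{i}"
    for dof in ("P", "Y")
    for i in range(1, 34)
]
print(len(channels))               # 66
print(channels[0], channels[-1])   # first and last names in the list
```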
WP 6383
Carlos, Jim
The wireless access points at EX and EY have been unplugged and the switch ports turned on. To reactivate the WAP for CDS access, the red ethernet cable will need to be plugged into the switch as outlined in the VEAsWirelessAccess page in the CDS Wiki. Please remember to disconnect it when done!
I've unplugged both mid stations as well.
Carlos reports the parking lot at End-Y is very icy. Please be careful.
17:31 Lockloss due to maintenance activities
Not sure of the cause yet; everything seemed good and normal. Running lockloss plots now, and I'll update if I find anything.
But don't worry, we are back to Observing at 10:36 UTC.
I haven't seen anything of note for the lockloss. I checked the usual templates, with some screenshots of them attached.
This seems like another example of the SR3 problem. (alog 32220 FRS 6852)
If you want to check for this kind of lockloss, zoom the time axis right around the lockloss time to see if the SR3 sensors change fractions of a second before the lockloss.
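If you'd rather pull the data directly than use the lockloss templates, here is a minimal sketch using gwpy. The SR3 top-mass OSEM channel names and the GPS time are my assumptions (the usual SUS M1 damping-input naming and a placeholder lockloss time), so check them against the template file before trusting the plot:

```python
# Quick look at the SR3 top-mass OSEMs around a lockloss, to see whether the
# sensors step a fraction of a second before lock is lost.
# Channel names are assumed (usual SUS M1 damping-input convention); verify
# against channels_to_look_at_SR3.txt. The GPS time is a placeholder.
from gwpy.timeseries import TimeSeriesDict

lockloss_gps = 1164988937          # placeholder GPS time of the lockloss
osems = ["T1", "T2", "T3", "LF", "RT", "SD"]
channels = [f"H1:SUS-SR3_M1_DAMP_{o}_IN1_DQ" for o in osems]

data = TimeSeriesDict.get(channels, lockloss_gps - 5, lockloss_gps + 1)

plot = data.plot()
plot.gca().axvline(lockloss_gps, color="k", linestyle="--")
plot.savefig("sr3_osems_around_lockloss.png")
```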
J. Kissel, B. Weaver, T. Sadecki
Just for reference, I include a lockloss that was definitely *not* caused by the SR3 glitching, for future comparison and for distinguishing whether this SR3 glitch has happened or not.
Also, remember: although the OPS wiki's instructions suggest that one must, and can only, use lockloss2, not everyone has the alias yet for this more advanced version. You can make the plots and do everything you need with the more basic version:
lockloss -c /ligo/home/ops/Templates/Locklosses/channels_to_look_at_SR3.txt select
It would also be great to get the support of @DetChar on this one. The fear is that these glitches begin infrequently but get successively more frequent; once they do, we should consider replacing electronics. The fishy thing, though, is that LF and RT are on separate electronics chains, given the cable layout of HAM5 (see D1101917). Maybe these glitches are physical motion? With statistics of only two, it's unclear whether LF and RT just *appear* to be the culprits, or whether a random set of OSEMs is glitching.
See my note in alog 32220, namely that Sheila and I looked again and we see that the glitch is on the T1 and LF coils, which share a line of electronics. The second lockloss TJ started with in this log (12/06) is only inconclusively linked to SR3: no "glitches" like the first one (12/05); instead, all 6 top-mass SR3 OSEMs show motion before the lockloss.
Sheila, Betsy
Attached is a ~5 day trend of the SR3 top stage OSEMs. T1 and LF do have an overall step in the min/max of their signals, which happened at the time of the lockloss that showed the SR3 glitch (12/05 16:02 UTC)...
Lockloss due to PSL tripping off. Currently on the phone with Jason working to remotely restart if possible. Will update as I know more.
Jason noticed that the chiller was complaining about low water level as he was bringing it back up. This is apparently due to the fact that when the chiller trips off, it burps a bunch of water onto the floor. I topped the Xtal chiller off with 300 mL H2O.
Filed FRS ticket 6853 for this trip.
Also, back to Locking now.
Also, Rana and photographer are on site. I let them onto the Observation Deck to take pics while we are relocking.
Sorting through the myriad of signals leads me to think that the laser trip was due to the NPRO passing out, although it is possible that the flow rate in head 3 dipped below the 0.4 lpm limit. Head3FlowNPRO.png suggests that the NPRO tripped out before the flow rate in head 3 reached its limit.
When restarting the laser last night, the status screen on the PSL Beckhoff PC indicated a trip of the "Head 1-4 Flow" interlock; although looking at the graphs Peter posted above it appears that the laser lost power before any of the flow sensors dropped below the trip threshold.
Further forensics: Attached are trends of the laser head temperatures around the time of last night's PSL trip. To my eye nothing looks out of the ordinary.