[Jenne, Vaishali, Karl]
We have replaced the razor beam dump on ISCT1 that was causing scattered light problems (see alog 35538) with an Arai-style black glass dump, provided by the 40m (see 40m log 8089, first style). You can see the new dump just to the left of the PD in the attached photo. I was thinking about sending the reflection from this dump (after several black glass bounces) to the razor dump, but I can't see any reflection with just a card, so I skipped this step for now. We can come back to it with an IR viewer if we have more time in the future.
We're on our way to NLN, so maybe we'll see if this helps any, if we happen to get high ground motion sometime.
[Jenne, Vaishali, Karl, Betsy]
Koji pointed out to me that even though the new black glass beam dump had been sitting on a HEPA table at the 40m, since it has been so long since it was cleaned, it could have accumulated a bit of dust or film.
So, we temporarily put the razor dump back, disassembled the black glass dump, and with Betsy's guidance cleaned the surfaces of the black glass with First Contact. We then reassembled the dump and put it back on the table.
Taking advantage of a few minutes while those working on the cleanroom took a short break, we transitioned to laser hazard so that we could do a fine alignment of the beam dump with the DRMI flashing. The LVEA was transitioned back to laser safe after this brief work was completed, so that the cleanroom team could work more easily.
Jeff K, Jonathan, Dave:
h1calcs was restarted with new code at 10:15 PDT. Jonathan and I took the opportunity to update the H1EDCU_RACCESS.ini file with missing channels, and then restarted the DAQ.
This h1calcs model change was covered under the continued work described in WP #6572 (ECR E1700121, II 7828). This restart was to incorporate a few more EPICS monitor channels (24 or so) to support commissioning of the new SRC Detuning Infrastructure (see LHO aLOG 35547). In addition, I moved the pick-offs for the calculated time-dependent correction factors that feed into the subsequent calculations upstream of the final-answer 128 [sec] low pass. No change was made to the functionality of the infrastructure for h(t); this is all control-room-only, redundant infrastructure. The changes to the h1calcs model are actually just changes to common library parts, which have been committed to the userapps repo under /opt/rtcds/userapps/release/cal/common/models/:
CAL_CS_MASTER.mdl
CAL_LINE_MONITOR_MASTER.mdl
These EPICS channels have been added to the various MEDM screens, and those screens have been committed to the userapps repo under /opt/rtcds/userapps/release/cal/common/medm/:
CAL_CS_TDEP_F_C_OVERVIEW.adl
CAL_CS_TDEP_F_S_OVERVIEW.adl
CAL_CS_TDEP_KAPPA_PU_OVERVIEW.adl
CAL_CS_TDEP_KAPPA_TST_OVERVIEW.adl
CAL_CS_TDEP_OVERVIEW.adl
Initial results (i.e. the 30 minute NLN lock we just had) indicate that moving the pick-offs that pass one answer to the next calculation upstream of their 128 sec low-pass has cleaned up the final answers. Plots to come when we have more data.
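To make the pick-off change concrete, here is a minimal sketch of the two topologies. This is not the real front-end code: the sample rate, the single-pole 128 s low pass, the noise level, and the toy relationship between the two "kappa" quantities are all illustrative assumptions. The point is only that in the old arrangement the next calculation consumed the already low-passed previous answer, while in the new arrangement it consumes the raw pick-off and only the final answers see the 128 s low pass.

import numpy as np
from scipy.signal import lfilter

# --- illustrative assumptions (not the real h1calcs numbers) ----------------
fs  = 16.0           # calculation rate [Hz]
tau = 128.0          # final-answer low-pass time constant [s]
t   = np.arange(0, 3600.0, 1.0 / fs)

# Toy "kappa_1": a slow drift plus measurement noise
rng     = np.random.default_rng(0)
kappa_1 = 1.0 + 0.02 * np.sin(2 * np.pi * t / 1800.0) \
              + 0.05 * rng.standard_normal(t.size)

def lowpass_128s(x):
    """Single-pole IIR low pass with ~128 s time constant (stand-in for the
    final-answer low pass in the CAL-CS infrastructure)."""
    alpha = 1.0 / (tau * fs)
    y, _ = lfilter([alpha], [1.0, -(1.0 - alpha)], x, zi=[x[0] * (1.0 - alpha)])
    return y

def next_calculation(k1):
    """Toy stand-in for the next correction-factor calculation that consumes
    the previous answer (the real relationship lives in the CAL_CS model)."""
    return 1.0 / k1

# OLD topology: next calculation fed from the already low-passed answer
kappa_2_old = lowpass_128s(next_calculation(lowpass_128s(kappa_1)))

# NEW topology: pick-off moved upstream of the low pass; only the final
# answer is smoothed
kappa_2_new = lowpass_128s(next_calculation(kappa_1))

# Compare the settled final answers over the last 10 minutes
tail = t > t[-1] - 600.0
print("old topology: mean %.4f, std %.2e" % (kappa_2_old[tail].mean(), kappa_2_old[tail].std()))
print("new topology: mean %.4f, std %.2e" % (kappa_2_new[tail].mean(), kappa_2_new[tail].std()))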
For future reference:
With the card in the horizontal position and the bottom tab of the PCIe card on the left (as viewed from behind the computer), the connector's wide end is in the upper location. The ports in use are:
h1hwsex: Left hand port
h1hwsey: Left hand port
h1hwsmsr: Right hand port (ITMX readout)
Nutsinee, Dave:
We had two problems in getting h1hwsex running again: it was running a new kernel which is missing the frame grabber driver, and the fiber was plugged into the wrong port on the card.
h1hwsex by default runs a later kernel (3.19.0-78-generic) than h1hwsey (3.19.0-49-generic). The '78' kernel is missing the edt kernel object needed to drive the frame grabber card. As a quick fix for today, I rebooted h1hwsex while standing in front of its console at EX and interrupted the boot to force a boot into the 3.19.0-49-generic kernel (it is under the U14 advanced options on the boot screen, using the non-recovery version of the '49' kernel).
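Until the '78' kernel gets the EDT driver installed, a pre-flight check like the sketch below could catch this before the HWS code starts. This is only a sketch on my part: the kernel string is from above, but the assumption that the frame grabber driver shows up as a module containing "edt" in /proc/modules is mine, not something in the HWS code.

#!/usr/bin/env python
# Sketch of a pre-flight check before starting the HWS code on h1hwsex.
import platform
import sys

GOOD_KERNEL = "3.19.0-49-generic"   # kernel known to have the EDT driver

kernel = platform.release()
with open("/proc/modules") as f:
    modules = f.read()

if kernel != GOOD_KERNEL:
    print("WARNING: running kernel %s, expected %s" % (kernel, GOOD_KERNEL))

if "edt" not in modules:
    sys.exit("EDT frame grabber module not loaded -- reboot into %s before "
             "starting the HWS code." % GOOD_KERNEL)

print("Kernel %s with EDT module loaded; OK to start HWS code." % kernel)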
The fiber/transceiver was plugged into the right hand port of the card in h1hwsex. When I moved it to the left port, the code detected the camera. I went to EY and verified that the transceiver is also on the left port of the card in h1hwsey.
We'll have a brief commissioning break from 9AM to 1PM Pacific tomorrow, coincident with LLO.
Task list (will be down selected):
Measure ASC sensing matrix, including POP WFS. (Should be quick - amplitudes all tuned already)
Now that we've got infrastructure in place for tracking SRC detuning parameters live, I'd like to repeat Evan Hall's study from LHO aLOG 27675: put offsets into the SRCL loop and check whether the new infrastructure / parameters track the change in detuning that study confirmed. I request 30 minutes of full, NLN IFO time.
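A minimal sketch of how that measurement might be scripted from the control room, assuming EPICS access via pyepics. The offset and readback channel names (H1:LSC-SRCL1_OFFSET, H1:CAL-CS_TDEP_F_S_OUTPUT), step sizes, and dwell time below are placeholders, not the real channel list or procedure.

import time
import epics  # pyepics

# Hypothetical channel names -- substitute the actual SRCL offset channel and
# the SRC detuning (f_s) output from the new CAL-CS TDEP infrastructure.
SRCL_OFFSET_PV = "H1:LSC-SRCL1_OFFSET"
F_S_PV         = "H1:CAL-CS_TDEP_F_S_OUTPUT"

offsets = [-30, -15, 0, 15, 30]   # counts; placeholder step sizes
dwell   = 300                     # seconds per step, longer than the 128 s low pass
nominal = epics.caget(SRCL_OFFSET_PV)
results = []

try:
    for off in offsets:
        epics.caput(SRCL_OFFSET_PV, nominal + off)
        time.sleep(dwell)                      # let the tracked f_s settle
        f_s = epics.caget(F_S_PV)
        results.append((off, f_s))
        print("SRCL offset %+d -> tracked f_s = %s Hz" % (off, f_s))
finally:
    epics.caput(SRCL_OFFSET_PV, nominal)       # always restore the nominal offset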
ASCII file saved in the desktop folder of the RGA machine in the control room.
Here is a link to a Vertex scan taken a few months ago: https://alog.ligo-wa.caltech.edu/aLOG/uploads/29517_20160907141626_09062016_Vertex_SEM1500_analog.pdf Note: the apparent amplitude discrepancies are due to differing multiplier voltage settings. As such these two scans are for qualitative comparisons only.
Completed WP 6579 and then some.
*left three turbo stations and Kobelco energized, water still valved into QDP80 (#2) for cooling
Follow-up maintenance items:
I replaced the burned-out incandescent lamp with an upgraded LED version (Data Display Products, miniature, wedge-base 24-28VAC white WB200-FW4K28HD).
I de-energized Kobelco and valved out water to QDP80. Tomorrow during commissioning I will enter LVEA and de-energize turbo stations.
This completes WP #6587
Checked and adjusted the air bypass on the dust monitor vacuum pumps in the CS and at both ends. All pumps are OK; pressures and temperatures are within normal limits. The pump at End-Y is a bit noisy; we will keep an eye on it and rebuild it if it gets worse. Closed out FAMIS 7512.
Maintenance Update as of 19:00 UTC:
Completed maintenance by replacing the one-way valve that compressor #2 needed.
Work done under WP 6541, which is now closed.
[Jenne, Vaishali, Karl, Heather]
We have removed all cabling and all L4Cs that were in the beergarden and potentially in the way of the vent work (incl. prep).
Since work was ongoing on the HWS table, we have not yet removed the ~4 sensors that are in the vicinity of the north HAM4 door. We can do this opportunistically during the week if the IFO is offline for some reason, otherwise we'll do it next Tuesday.
Since the HWS table work was completed before we were ready to start locking, we pulled the rest of the L4Cs.
Please let me know if there are others that need removing.
[Greg Mendell, Aaron Viets] The GDS calibration pipelines were restarted at GPS time 1176571587. The output, latency, and CPU usage look as expected. The command line arguments and filters have not changed from last week, as the same filters and options will work with the older code.
Unfortunately we could not get DMT hoft to send output downstream with this change. We are going back to calibration 1.1.5 for now.
Both pipelines were restarted again at 1176577576 to pick up version 1.1.5 again.
This morning I swapped the ITMy oplev laser, per LHO WP 6565. This was in response to this alog from K. Venkateswara and J. Kissel, indicating that an ~0.44 Hz feature on the ITMy oplev (which showed up after the laser swap on March 7th) was possibly due to an issue with the laser and was causing range issues. The old laser SN was 189-1; the new laser SN is 191. This laser will need a few hours to come to thermal equilibrium; once that is done I will assess whether or not the laser needs a power tweak for glitch-free operation. I will leave WP 6565 open until this is complete.
Went out today to tweak the power of this laser, as it is still glitching, and found the interior of the box to be very warm. This is highly unusual. I also found the exterior mounted box that holds the DC/DC converter that powers the laser to be very warm as well. My suspicion is that something, likely the laser, is drawing more current than it should. I checked the other oplev lasers in the LVEA and none of them were running warm like this. The quickest fix at this point is to swap in another laser. Unfortunately the only laser currently ready for swap-in is SN 189-1, which happens to be the laser I just removed from the ITMy oplev due to concerns over an ~0.44 Hz feature that might be from the laser. A quick spectrum of the new ITMy oplev laser SUM, pitch, and yaw signals after install on 4/11/2017 (attachment 1) and this morning (attachment 2) shows that the laser is currently very noisy compared to just after install (likely due to the increased operating temperature), but also still shows a feature at ~0.44 Hz. As this ~0.44 Hz feature is still there with this new laser, the feature is likely either from the electronics of the ITMy oplev (QPD, whitening chassis) or is something real that the oplev is only a witness for. Assuming laser SN 189-1 checks out in the lab, I will re-install SN 189-1 into the ITMy oplev at the next available opportunity.
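For reference, a quick way to reproduce this kind of spectrum comparison around 0.44 Hz. This is only a sketch assuming NDS access through gwpy; the channel name and GPS stretches below are placeholders for the actual oplev SUM/PIT/YAW channels and the times of the two attachments.

from gwpy.timeseries import TimeSeries

# Placeholder channel and GPS spans -- substitute the actual ITMy oplev
# channels and the times corresponding to the two attachments.
CHAN = "H1:SUS-ITMY_L3_OPLEV_SUM_OUT_DQ"
SPANS = {
    "after install (2017-04-11)": (1175900000, 1175900600),
    "this morning":               (1176500000, 1176500600),
}

for label, (start, end) in SPANS.items():
    data = TimeSeries.get(CHAN, start, end)       # fetch 10 min of data via NDS
    asd = data.asd(fftlength=100, overlap=50)     # 0.01 Hz resolution near 0.44 Hz
    peak = asd.crop(0.3, 0.6).max()               # look for the ~0.44 Hz feature
    print("%s: peak ASD in 0.3-0.6 Hz band = %s" % (label, peak))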