TITLE: 04/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 68Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Nutsinee covered the first hour or so of my shift while I finished up work on the FMCS IOC changes. I'm afraid I am not certain if the LVEA has been swept. I do know that the WAP and FLIR cameras are off. I spent a considerable amount of time damping the ITMY bounce and roll modes with Sheila's help. Greg helped me verify that the calibration code was running after I went into observing. At one point while attempting to lock, the ETMX RMS WD tripped; I reset it. The verbal alarms script crashed when I set the intention bit to observing; I restarted it. I had to accept some minor SDF differences to go into observing. A screenshot of these is attached to the mid-shift summary. The ops_auto_alog.py script does not return from attempting to get data when run for the shift transition log. I suspect an NDS issue.
LOG:
00:28 UTC Kiwamu to CER to turn off LVEA WAP
00:31 UTC Kiwamu back
00:41 UTC Kyle back from mid stations
00:53 UTC ETMX RMS WD tripped. Stopped at ENGAGE_REFL_POP_WFS to damp ITMY bounce and roll modes.
01:59 UTC Moving on
02:20 UTC NLN. Accepted SDF differences.
02:24 UTC Running a2l.
02:33 UTC a2l done. Set intent bit to observing. Verbal alarms script crashed. Restarted it.
02:42 UTC Damped PI mode 28 by changing phase
02:54 UTC Damped PI mode 27 by changing phase
WP 6577
Dave B., Carlos P., Bubba G., John W., Patrick T.

I have migrated a subset of the EPICS channels provided by the FMCS IOC on h0epics to an IOC I created on fmcs-epics-cds. The IOC on fmcs-epics-cds connects to the BACnet server that Apollo has installed as part of the FMCS upgrade. The channels that I migrated have been taken over by this upgrade and can no longer be read out by the server that the IOC on h0epics reads from. The fmcs-epics-cds computer connects to the slow controls network (10.105.0.1) on eth0 and the BACnet network (10.2.0.1) on eth1. It is running Debian 8.

The IOC on h0epics is started from the target directory /ligo/lho/h0/target/h0fmcs (https://cdswiki.ligo-wa.caltech.edu/wiki/h0fmcs). I commented out the appropriate channels from the fmcs.db and chiller.db files in the db directory of this path and restarted this IOC. I made no changes to the files in svn.

The IOC on fmcs-epics-cds uses code from SNS (http://ics-web.sns.ornl.gov/webb/BACnet/) and resides in /home/cdsadmin/BACnet_R0-8. This is a local directory on fmcs-epics-cds. This IOC is started as cdsadmin:

> ssh cdsadmin@10.105.0.112
cdsadmin@fmcs-epics-cds: screen
Hit Enter
cdsadmin@fmcs-epics-cds: cd /home/cdsadmin/BACnet_R0-8/iocBoot/e2b-ioc/
cdsadmin@fmcs-epics-cds: ../../bin/linux-x86_64/epics2bacnet st.cmd
Hit CTRL-a then 'd'

Issues:

I came to realize during this migration that the logic behind the binary input channels is different in BACnet. In BACnet a value of 0 corresponds to 'inactive' and a value of 1 corresponds to 'active'. In the server being migrated from, a value of 0 corresponds to 'invalid'. This was verified for the reverse osmosis alarm, H0:FMC-CS_WS_RO_ALARM. In the BACnet server it reads as 0 or 'inactive' when not in alarm. When John W. forced it into alarm it read as 1 or 'active'. I believe Dave has updated his cell phone alarm notifier to match this.

A similar situation exists for the state of the chiller pumps. In the server being migrated from, a value of 1 appears to correspond to 'OFF' and a value of 2 appears to correspond to 'ON'. It has not been verified, but I believe in the BACnet server a value of 0 corresponds to 'OFF' and a value of 1 corresponds to 'ON'. The pump status for each building is calculated by looking at the state of the pumps. The calculation in the database for the IOC being migrated from appears to be such that as long as one pump is running the status is 'OPERATIONAL'; if no pump is running the status is 'FAILED'. I need to double check this with John or Bubba. I updated the corresponding calc records in the database for the BACnet IOC to match this (a sketch of this logic follows this entry).

In the server being migrated from, channels that are read by BACnet as binary inputs and binary outputs are read as analog inputs. I changed these in the database for the BACnet IOC to binary inputs and set the ONAM to 'active' and the ZNAM to 'inactive'.

The alarm levels also need to be updated. They are currently set through the autoburt snapshot files, which contain a separate channel for each alarm field. The autoburt request file has to be updated for the binary channels to have channels for .ZSV and .OSV instead of .HSV, .LSV, etc. So currently there is no control room alarm set for the binary channels, including the reverse osmosis alarm. I also need to update the medm screens to take account of this change.

Also, there is an invalid alarm on the control room alarm station computer for the mid X air handler reheat temperature.
Looking at the BACnet FMCS server, this channel does appear to be genuinely invalid. It should be noted that this BACnet IOC is a temporary install until an OPC server is installed on the BACnet server. I would like to leave the permit for this work open until the FMCS upgrade is complete and all the channels have been migrated to the BACnet IOC.
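To illustrate the binary-input convention and the pump-status calculation described above, here is a minimal Python sketch using pyepics. Only the reverse osmosis alarm PV comes from this entry; the pump PV names are hypothetical placeholders, and the 0 = 'OFF' / 1 = 'ON' mapping on the BACnet side is the unverified assumption noted above.

# Minimal sketch: read the binary channels over channel access and reproduce
# the building pump-status calculation (OPERATIONAL if any pump is running,
# FAILED otherwise). Pump PV names below are placeholders, not real PVs.
from epics import caget  # pyepics

# Real channel from this entry: reverse osmosis alarm, 0 = inactive, 1 = active.
ro_alarm = caget('H0:FMC-CS_WS_RO_ALARM')
print('RO alarm:', 'active' if ro_alarm == 1 else 'inactive')

# Hypothetical chiller pump state channels (placeholders).
pump_pvs = ['H0:FMC-CS_CY_PUMP1_STATE', 'H0:FMC-CS_CY_PUMP2_STATE']

# Assumed BACnet mapping (unverified, see above): 0 = OFF, 1 = ON.
pump_states = [caget(pv) for pv in pump_pvs]
status = 'OPERATIONAL' if any(s == 1 for s in pump_states) else 'FAILED'
print('Chiller pump status:', status)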
Back to observing. Accepted attached SDF differences. Spent a significant amount of time damping ITMY roll and bounce modes with Sheila's help. Confirmed that the calibration code is running with Greg's help.
Nutsinee covered the first hour or so of my shift while I finished migrating the BacNET FMCS channels to EPICS. Commissioning appears to be complete. Attempting to relock. Note: The ops_auto_alog.py script hangs while trying to get data. NDS issue?
TITLE: 04/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Patrick
SHIFT SUMMARY:
LOG:
15:00 Chris, Bubba, Christina, Gerardo to LVEA, LVEA to laser safe
15:15 Fil to HAM3 for temp sensor work
15:30 Fred, Mike & guests out to LVEA, out 16:45
15:30 JimW, Krishna to LVEA to move cBRS
16:15 Nutsinee to EY, transitioning VEA to laser hazard
16:15 Jason to LVEA, ITMY oplev power tweak
16:30 Gerardo & Chandra to LVEA
16:30 Jason to EY
16:45 Fred, Mike & guests to EY
17:30 JeffK doing charge measurements
19:00 Krishna, Hugh to EY to recenter T240 on BRS table
19:30 LVEA to laser hazard, Nutsinee to HWSX, Jenne looking at ISCT1, out 20:00
21:30 Richard out of LVEA
21:45 JeffK starting rubbing measurement on ITMY
22:45 Gerardo to both ends
22:45 Nutsinee to EX
Today I mounted the CP1 ion pump (described as "auxiliary ion pump" in the ECR?) to the floor beneath CP1. I did not make the vacuum connection at this time - TBC. In the process, I managed to dislodge the accelerometer that is mounted to the floor with double-sided tape. I set it back on the tape but have no idea (1) if I damaged the unit and (2) if my re-attachment attempt is adequate.
Tagging CDS, PEM, and DetChar -- this smells like one of Robert's. He should comment.
The LVEA crane is not in its regular parking spot. It is located above the clean room just west of the bier garden, the clean room that was slated to be moved into the bier garden. We ran into some electrical issues with some of the FFUs, which were resolved with the help of Richard and Fil; however, we ran out of time before we were able to place the clean room in the bier garden. We will relocate the clean room at the next opportunity.
WP6590 Gather Ubuntu12 workstations from outbuildings and LVEA
Richard, Carlos:
All ubuntu12 workstations were removed from the LVEA and VEAs and are stored in the CUR awaiting OS upgrade to Debian8.
WP6594 NDS2 client update
Jonathan:
The new NDS2 client 0.14 was made the default for matlab.
WP6577 Read out new FMCS data using the BacNET EPICS IOC
Patrick, Carlos, Dave, Bubba, John:
EPICS FMCS channels which are now being monitored by the new FMCS system were removed from the old IOC (h0epics) and added to the new IOC (fmcs-epics-cds). Due to a difference in the way binary data is handled, the cell phone text alert configuration was changed and restarted. This change is being propagated to the MEDM screens by Patrick.
Carlos installed the new server in the MSR; it spans the CDS-SLOW (10.105) and the FMCS (10.2) networks, talking BACnet on the FMCS network and EPICS on the SLOW network.
WP6603 Rack up 4th DMT computer
Dan, Carlos, Dave:
An iLIGO Sun X2200 was racked up in the MSR (second DAQ rack) above h1dmt2. This will be configured as the fourth DMT machine (h1dmt3) and be used as a test system.
WP6602 Downgrade DMT calibration/hoft code
Greg, Aaron, John Z.
The DMT code was downgraded to the version as of three weeks ago. This is a temporary status while the new code is made ready for release.
WP 6584
Continued work from last week, see alog 3564. Final temperature sensor cable was pulled from HAM3 over to HAM2. This completes work for WP 6584.
At the request of the control room, and as outlined in WP #6607, the new nds client package has been made the default on the debian 8 workstations. You no longer need to source an additional environment file or do a javaaddpath in matlab.
The default matlab (2012b) for the debian 8 workstations will pick up the new client by default.
This does not affect the remaining Ubuntu12 systems (including the guardian).
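With the new client on the default path, fetching data should work out of the box. A minimal sketch from Python (assuming the Python bindings of the nds2-client package are installed alongside the matlab ones; the server name, channel, and GPS times here are just examples):

import nds2

# Connect to an NDS2 server (example host/port) and fetch 4 s of data.
conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
buffers = conn.fetch(1177181448, 1177181452, ['H1:GDS-CALIB_STRAIN'])
print(len(buffers[0].data), 'samples fetched')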
Gerardo, Chandra
Soft closed GV 5,7 this morning in preparation for craning over large cleanroom. Unfortunately there were some issues with the cleanroom so it didn't get moved today. We opened valves at 2:30 pm local.
We've also reserved three hours for calibration on Thu. Apr. 27, 1600-1900 UTC (0900-1200 Pacific) in coincidence with LLO.
Keita writing as Corey.
ASC -> Jenne, Sheila and others. Locked IFO needed.
EY oplev -> Jason. IFO status doesn't matter.
Additionally, the following might happen.
OMC jitter measurement -> Sheila, locked IFO.
Removing/putting on the Hartmann plate on the end station HWF camera when the IFO is unlocked -> Nutsinee.
J. Kissel
I've taken this week's effective bias voltage measurements that track accumulated charge on the test mass/reaction mass electrostatic drive system. The standard metrics:
- Monitoring the single-frequency actuation strength of each quadrant as a function of requested bias voltage, using Pitch and Yaw optical lever signals to measure the response.
- Monitoring the overall longitudinal strength by comparing an ESD excitation against a photon calibrator excitation at adjacent single frequencies.
The conclusions from both of these metrics have not changed from the last two measurements (LHO aLOGs 35553, 35366): Just before we vent on May 8th, during last-minute preparations as we bring the IFO down, let's
- Turn OFF the ETMX ESD bias completely, and leave it OFF
- Leave the ETMY bias ON at +400 [V], with the opposite sign as in observation (Negative in NLN, switch to Positive for the vent) for the duration of the vent (i.e. until we open back up to the arms)
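For reference, the first metric essentially reduces to a straight-line fit: the single-frequency actuation strength is linear in the requested bias voltage, and the zero crossing of that line gives the effective bias voltage, i.e. the accumulated charge. A rough sketch of that fit with made-up numbers (not the actual measurement code or data):

import numpy as np

# Made-up example data: requested bias [V] vs. measured actuation strength
# (arbitrary units) for one quadrant.
bias = np.array([-380.0, -190.0, 0.0, 190.0, 380.0])
strength = np.array([-0.95, -0.48, 0.02, 0.51, 0.99])

# Linear fit; the zero crossing gives the effective bias voltage.
slope, intercept = np.polyfit(bias, strength, 1)
effective_bias = -intercept / slope
print('Effective bias voltage: %.1f V' % effective_bias)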
Sheila, Jason, Evan G, Krishna
Mode hopping of the ETMY oplev laser has been showing up in hveto since April 19th, although the oplev damping is off. The glitches that show up are definitely glitches in the sum, and the oplev is well centered, so the issue is not that the optic is moving. There is a population of DARM glitches around 30 Hz that is present on days when the oplev is glitching but not on other days. We are curious about the coupling mechanism for these glitches and wonder if this coupling could be causing problems even when the oplev is not glitching loudly.
Evan, Jason and I connected the monitor on the oplev laser diode power to one of the PEM ADC channels used to monitor SUS rack power (we used H1:PEM-EY_ADC0_14_OUT_DQ, which was monitoring the +24V power and is channel 7 on the field rack patch panel). Jason can make the laser glitch by tapping it; with this test we saw clear glitches in the sum but no sign of anything in the monitor, so this monitor might not be very useful. Plugging this in means that the lid of the cooler is slightly open.
We also unplugged the fiber, so that for the time being there is no light going into the chamber from the oplev. If these glitches are coupling to DARM electromagnetically, we expect to keep seeing them in DARM. If they were somehow coupling through the light (radiation pressure, something else), we would expect them to go away now. One glitch that we looked at is about a 75 uW drop in the laser power on the optic. (A=2P/(c*m*omega^2)= 3e-19 meters if all the power were at 20 Hz). We don't really know how centered the beam is on the optic, or what the reflectivity is for the oplev laser, but it seems like radiation pressure could be at the right level to explain this.
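Plugging in numbers (a rough check, assuming the ~40 kg aLIGO test mass; the result scales as 1/f^2, so it lands within a factor of a few of the 3e-19 m quoted above depending on whether you evaluate at 20 Hz or at the ~30 Hz where the DARM glitches appear):

import math

dP = 75e-6   # drop in oplev power on the optic [W]
c = 3.0e8    # speed of light [m/s]
m = 40.0     # assumed test mass [kg]

for f in (20.0, 30.0):
    omega = 2.0 * math.pi * f
    A = 2.0 * dP / (c * m * omega**2)   # A = 2*dP/(c*m*omega^2)
    print('f = %2.0f Hz: A = %.1e m' % (f, A))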
Using an ASD of the oplev sum during a time when the oplev is quiet, this noise is more than 3 orders of magnitude below DARM at 30 Hz.
The fiber was disconnected at ~19:05:00 25 April 2017 UTC. There will not be any Hveto correlations after this time because the OpLev QPD will not be receiving any light. We will be looking at Omicron histograms from the summary pages to determine whether this is the cause of noise.
With this test over, I have reverted the above changes; the ETMy oplev is now fully functional again, as of ~18:30 UTC (11:30 PDT). I also unplugged the cable we used for the laser diode monitor port and reconnected H1:PEM-EY_ADC0_14_OUT_DQ so it is now once again monitoring the +24V power on the SUS rack.
To address the glitching, I increased the oplev laser output power, using the Current Mon port on the back of the laser:
The laser will need several hours to come to thermal equilibrium, as the cooler was left open overnight (as a result of the above test). Once this is done I can assess the need for further output power tweaks.
If the glitch happens tonight while Jason is unavailable, leave it until tomorrow when Jason can make another attempt to tune the temperature.
Even when H1 is observing, operators can go out of observing, let Jason work for a short while, and go back into observation as soon as he's done.
But it's clear that we need a long term solution that doesn't include intensive babysitting like this.
As I noted here, the oplev laser SN 191 was found to be running very warm; this in turn made it very difficult to eliminate glitches. In light of this, as per WP 6591, this morning I re-installed laser SN 189-1 into the ITMy oplev. The laser will need a few hours to come to thermal equilibrium, then I can assess whether or not further tweaks to the laser output power are needed to obtain glitch-free operation. I will leave WP 6591 open until the laser is operating glitch-free.
SUM counts for this laser have been very low; ~2.8k versus the ~30k the last time this laser was used (March 2017). Today I pulled the laser out of the cooler and tested it in the Pcal lab and found the output power to be very low; at the setting being used I measured 0.11 mW versus the 2.35 mW I measured before I installed the laser. By tweaking the lens alignment (the lens that couples light from the laser diode into the internal 1m fiber) I was able to increase the output power to ~0.2 mW. There is clearly something not quite right with this laser, my suspicion being either a gross misalignment of the coupling assembly (which takes longer than a maintenance period to correct) or something going bad with the laser diode. Knowing the history of these lasers, both are equally probable in my opinion.
Unfortunately there is not a spare currently ready for install. In light of this, since the laser is currently working, I reinstalled SN 189-1 into the ITMy optical lever so at least we have a functional ITMy oplev. Once I get a spare ready for install this laser will be swapped out at the earliest opportunity. I have closed WP 6591.
[Greg Mendell, Aaron Viets] Due to the bug found in gstlal-calibration-1.1.5, the DMT machines have been reverted to version 1.1.4. Primary and redundant pipelines, as well as the DMTDQ processes, were restarted at 1177181448. Output looks normal so far.
Aggregation of calibration hoft has been restarted on DCS, with these channels removed,
H1:GDS-CALIB_F_S
H1:GDS-CALIB_F_S_NOGATE
H1:GDS-CALIB_SRC_Q_INVERSE
H1:GDS-CALIB_SRC_Q_INVERSE_NOGATE
starting from 1177181608 == Apr 25 2017 11:53:10 PDT == Apr 25 2017 18:53:10 UTC
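For reference, the GPS-to-UTC conversion quoted above can be checked with, e.g., gwpy (a sketch; any tconvert-style tool does the same job):

from gwpy.time import from_gps

# GPS 1177181608 -> 2017-04-25 18:53:10 UTC
print(from_gps(1177181608))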
[Jenne, Vaishali, Karl]
We have replaced the razor beam dump on ISCT1 that was causing scattered light problems (see alog 35538) with an Arai-style black glass dump, provided by the 40m (see 40m log 8089, first style). You can see the new dump just to the left of the PD in the attached photo. I was thinking about sending the reflection from this dump (after several black glass bounces) to the razor dump, but I can't see any reflection with just a card, so I skipped this step for now. We can come back to it with an IR viewer if we have more time in the future.
We're on our way to NLN, so maybe we'll see if this helps any, if we happen to get high ground motion sometime.
[Jenne, Vaishali, Karl, Betsy]
Koji pointed out to me that even though the new black glass beam dump had been sitting on a HEPA table at the 40m, it has been so long since it was cleaned that it could have accumulated a bit of dust or film.
So, we temporarily put the razor dump back, disassembled the black glass dump, and with Betsy's guidance cleaned the surfaces of the black glass with first contact. We then reassembled the dump and put it back on the table.
Taking advantage of a few minutes while those working on the cleanroom took a short break, we transitioned to laser hazard so that we could do a fine alignment of the beam dump with the DRMI flashing. The LVEA was transitioned back to laser safe after this brief work was completed, so that the cleanroom team could work more easily.