These graphs compare the STS2-C instrument (Roam2) with the ITMY instrument (STS2-B). Roam2 sits ~24" in the -X-Y direction from the +X-Y (NE) leg of the WBSC8 (H2 ITMY) chamber. The graphs show velocity ASDs of both instruments during calm (<5 mph) and windy (20+ mph) periods, along with the calm- and windy-period coherence; plots are segregated by DOF. The calm period begins at 1300 UTC 26 April; the windy period begins at 0000 UTC 26 April. The wind direction was generally from the S to SSW (from +Y and -X).
Comparing with the similar plots in alog 35186, a few things may be concluded:
Despite the calm period being even calmer here at Roam2, the X axis noise is greater than at Roam1 (drawing coming soon) by ~10x; likewise during the windy period.
In the Y axis, the calm period produces similar noise on the two seismometers, while the windy period elevates the noise at the Roam2 position by a factor of a few.
The Z axis also shows a noisier response during higher winds.
The coherence plot (not shown in the 35186 alog) shows either more or less coherence between the instruments, depending on frequency and DOF. I could report specifics, but see for yourself.
If these comparisons are valid and I haven't screwed anything up, the conclusion is that this location is not nearly as good as the current ITMY or Roam1 positions, and that the chamber is not providing any pinning against tilting of the seismometer.
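For reference, here is a minimal sketch of how such an ASD and coherence comparison could be put together with gwpy. The Roam2 channel name and the GPS times are placeholders for illustration, not the actual channels/times used for the attached plots.

from gwpy.timeseries import TimeSeries

# Placeholder GPS times for a calm stretch; swap in the actual calm/windy spans.
start, end = 1177254018, 1177257618

# Ground-velocity channels; the Roam2 channel name here is a guess for
# illustration, the ITMY STS channel follows the usual ISI-GND naming.
roam2 = TimeSeries.get('H1:PEM-CS_SEIS_ROAM2_X_DQ', start, end)
itmy = TimeSeries.get('H1:ISI-GND_STS_ITMY_X_DQ', start, end)

# Velocity ASDs for each instrument
asd_roam2 = roam2.asd(fftlength=128, overlap=64)
asd_itmy = itmy.asd(fftlength=128, overlap=64)

# Coherence between the two instruments over the same stretch
coh = roam2.coherence(itmy, fftlength=128, overlap=64)

plot = asd_roam2.plot(label='STS2-C (Roam2)')
ax = plot.gca()
ax.plot(asd_itmy, label='STS2-B (ITMY)')
ax.set_xlabel('Frequency [Hz]')
ax.set_ylabel('Velocity ASD')
ax.legend()
plot.show()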
TITLE: 04/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 56Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: Started rough after the EX SEI crash, but running smoothly now
LOG:
The big event was the recovery after the EX seismic front end crashed (see Jeff and Dave's logs for details). Otherwise a few people went to the mid stations today, but we were a little distracted with recovery.
Posted the 7-day OpLev trends.
SUM: ITMY - no data after 04/24; zoomed the plot way out and could find a bottom. ETMY - big jump after the 04/25 maintenance window; this is a result of Jason's tune-up during the window.
Yaw: ITMX - near the 10 urad line; trend line is flat.
Pitch: SR3 - near the 10 urad line; trend line is flat.
Everything looks normal with the trends. ITMx yaw is beginning to get close to the 10 µrad limit, and will likely need re-centering in the near future.
Regarding ITMy SUM, it changed due to my swap of the laser (reported here). Unfortunately it's not entirely out of the woods yet, as the laser is showing signs of low output power. While the SUM signal doesn't show up on Jeff's plot above (not sure if this is due to something in the script, an odd artifact of the OS change on the control room workstations, or something else entirely), it is there: the DetChar summary page for the ITMy oplev shows it, as does the oplev MEDM screen. The oplev seems to be functioning normally; its output power just looks low. I plan to investigate this during the 5/2/2017 maintenance window.
J. Kissel, J. Warner, J. Driggers, J. Oberling, C. Gray

Executive Summary
Running an old, infrequently used "ditherAlign" script to recover green spots after a gross X arm misalignment (i.e. because of the SEI front-end failure early this morning; see LHO aLOGs 35824, 35834) caused more than the usual trouble regaining X ARM ALS angular control. After slowly working / stumbling our way through identifying the problems by performing "the usual" troubleshooting (i.e. G1602280), we were able to return to full initial alignment and lock acquisition and achieved nominal low noise by 20:00 UTC. Total down time: 7 hours, from 2017-04-27 13:00 to 20:00 UTC.

Lessons Learned
- There are four places any given operator goes for information to diagnose a problem when alone on evening / night shifts:
(1) Jenne's H1 Troubleshooting Presentation: G1602280
(2) The OPS Wiki Troubleshooting Page: Trouble Shooting the IFO
(3) The OPS Wiki Useful Scripts Page: Useful Scripts For Operators
(4) Nutsinee's own Trouble Shooting Page: Nutsinee's Page
When an operator has just restarted shifting after a month off, and nothing has gone wrong for that operator in a while, they forget even the location of these resources, let alone which one to use. It would be a giant effort to merge these documents, but we could at least link each of them to the others. Jim recommends we banish (2), update and maintain (3), and acknowledge that (4) is not canon if used by anyone other than Nutsinee. Jenne acknowledges that (1) has several pending updates, and will add a few things from today's experience.
- It's important to have a "ditherAlign" script that rescues us from a gross arm misalignment in which we've lost the spots. But we have those events so infrequently that the script suffers from bit rot between uses (e.g. it uses the TDS library, and we just upgraded the control room to Debian 8, which doesn't support the TDS library). As such, we should upgrade, debug, and fix this script and update the associated documentation. However, after looking at it, it's a beast of a spaghetti monster -- also a giant effort.
- When we get such a gross misalignment, we should not expect *any* operator to be able to fix the problem (let alone diagnose it) quickly or by themselves. It took all of the authors patiently sitting through the problem, picking up clues, trying this and that, and looking in 15 different places (only possible with 3-4 pairs of eyes) to solve the problem and later identify it. We should simply expect this after we lose a seismic front end.
- Since we've moved toward the O2 model of "do not call commissioners if you have a problem," operators have in general become reticent to call if there's a problem, especially on owl shifts -- and that call list is Keita. Further, in the era of 71-hour locks and 80-90% duty cycle, commissioners and detector engineers are far less regularly in the control room. Yet further, shift changes are a really tough point in the chain of communication and on the day operator: not only do they have the stress of a mid-night failure about which they don't have all the information, but that gets compounded by the phones ringing, everyone coming in asking what's wrong and/or whether it's fixed, and not knowing who can actually stay to help. I make this last statement with no proposed solution, but merely to expose what happens these days during an unidentified mid-night failure mode and to encourage patience and cooperation by all.
Detailed Timeline
- The seismic front end crashed.
- After seismic computer and platform recovery (see LHO aLOGs 35824, 35834), we did not see any spot on the ALS X green camera.
- After some manual tries (unclear if any restoration to slider / oplev / OSEM values was done), an infrequently used script to recover green spots after gross arm misalignment, /opt/rtcds/userapps/release/asc/h1/scripts/ditherAlign.py, was run on both TMSX (twice) and ITMX. These scripts failed, and left a whole bunch of stuff in a bad alignment state, namely:
  - all X arm optics (ITMX, ETMX, TMSX) were aligned to a bad location,
  - the ITM *misalignment* offsets were changed, and
  - the green camera (CAM) reference OFFSETS were changed.
  Some, but not all, recovery was made, because the users weren't aware of everything this script touched. (And before you cry "but the SDF system!", remember that there are lots of DIFFs that appear in down snaps that are usually overlooked because things work out in the end.)
(Corey departs, Jim arrives)
- After a restoration of the TMSX alignment sliders to the previous observation stretch's OSEM location, we regained spots on the camera. Jim then heroically pushed the ETMX and ITMX alignment around until he recovered *decent* arm cavity transmission.
- As is standard practice, he then went through an initial alignment of the X arm (INITIAL ALIGNMENT state on ALS_XARM), which turns on automatic alignment, including green WFS and green camera (CAM) ITMX spot restoration. However, because the end station alignment was still far enough off that the WFS error signals were too large, and the green CAM references had been errantly changed by the ditherAlign script, the WFS and CAM servos blew up after every attempt to automatically close them as normal.
(Jeff arrives)
- After further efforts to manually increase the transmission to reduce the WFS and CAM error signals without success, we remembered that one has to clear the WFS / SUS offsets if/when/after they drive optics into the weeds.
(Jenne arrives)
- Having cleared the weeds, we were able to close the green YAW WFS 1 & 2 loops that control TMSX and ETMX. To do so, we needed to set the ALS_XARM guardian to ENABLE_WFS and force the triggering of the alignment servos to be ALWAYS ON (see the sketch after this timeline), i.e.:
  - set H1:ALS-X_WFS_TRIG_THRESH_ON and H1:ALS-X_WFS_DOF_FM_TRIG_THRESH_ON to -0.1 (i.e. anything below zero), and
  - flip the master gain switch (H1:ALS-X_WFS_SWITCH) to ON.
  However, with the triggering forced ON, if the arm lost lock we would need to quickly turn off the inputs to the WFS loops we had under control so more weeds would not grow. Recall from G1602280 that successful closure of these loops was only possible if the error signals were less than ~1000, and preferably better than 500 [ct]. We were using Jenne's premade StripTool template for the WFS error signals, /ligo/home/jenne.driggers/Templates/Strip/Green_Y_AlignErrorSigs.stp
(Jason arrives)
- With TMSX and ETMX under control in YAW, we tried slowly moving ITMX in yaw (with the green WFS) to reduce the CAM YAW error signal (at this point both were in the 0.7 range, and we want better than 0.1). As we (very slowly) moved the ITM (so that the end-station optics' WFS could follow), we realized that reducing the CAM error signal made the MICH fringes on the AS AIR camera look worse, implying that we were doing the wrong thing. So we reverted the ITM location and went to close the pitch loops.
- Closing the WFS A / DOF1 / ETMX pitch loop was relatively easy, but even though the error signals were sufficiently small, closing the WFS B / DOF2 / TMSX loop would start growing weeds and break. After several tries, we realized we might be on the wrong side of the WFS A / DOF1 / ETMX PDH error signal hump, so we pushed the ETM back in the opposite direction, indeed went over the hump, and began to reduce the error signal again. After that discovery, both PIT DOFs closed nicely.
- Now having ALL end-X optic WFS-controlled DOFs closed, and the green arm transmission up in the high 0.9s to 1, we again went back to ITMX to reduce the CAM error signal. This again made the AS AIR spot / MICH fringes look like crap, and reduced the green arm transmission. We [incorrectly] concluded that the cameras must have moved, and remeasured and set the CAM offsets.
- At this point we were able to resume regular initial alignment. All went well until SRC_ALIGN, where we saw excess fringing on the AS port camera. Jenne knew this was because some optic was not entirely *misaligned*, so we trended the *misalignment offsets* in the M0_TEST bank of ITMX, and found they were wrong by half. Restoring these allowed us to complete initial alignment as normal.
- All the rest of normal lock acquisition worked swimmingly.
- Upon arrival at nominal low noise, we began checking for SDF differences -- and it was only then that we started to put the pieces together that the ditherAlign script had messed with everything listed above.
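A minimal sketch of the trigger-forcing step described in the timeline above, assuming command-line access via pyepics. The channel names come from the log, but the numeric value used for "ON" on the master gain switch is an assumption.

from epics import caput

# Force the trigger thresholds below zero so the alignment servos are always on
caput('H1:ALS-X_WFS_TRIG_THRESH_ON', -0.1)
caput('H1:ALS-X_WFS_DOF_FM_TRIG_THRESH_ON', -0.1)

# Flip the master gain switch to ON (the value 1 for "ON" is an assumption)
caput('H1:ALS-X_WFS_SWITCH', 1)

# Reminder: with triggering forced ON, turn the WFS inputs off promptly after a
# lockloss so the closed loops don't run away.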
While updating the FMCS MEDM screens for the migration to the BACNet IOC I noticed that the alarm status for chiller pump 2 at end Y (H0:FMC-EY_CY_ALARM_2) has been active (value = 2) for at least a month. Is this normal?
This is NOT a chiller pump alarm; it is a "Chiller 2" alarm. There are two devices supplying chilled water for the building HVAC: the chiller, a refrigeration machine which cools the water, and a separate chilled water pump (CWP) which circulates the water through the chiller and up to the building.
This chiller has two cooling circuits; one has a known fault, hence the alarm.
To confuse the issue further, there are two chillers and two chilled water pumps at the end station; this provides redundancy in case of failure.
The critical alarm is the "Chilled Water Supply Temperature". This temperature is currently normal.
Yesterday Gerardo (with my assistance) landed another near-perfect silicate bond of an ear to a test mass. Only a tiny "feature" appeared in the upper corner of the bond after the ~2 hr mark of curing: a speck or dimple which is extremely hard to spot, with no apparent bubble of air around it, but instead a bit of an interference rainbow. This feature did not change over the following 3 hours of yesterday afternoon and did not warrant any alarm, so we considered it good. The placement of the ear is within 0.04 mm of spec (under the 0.1 mm tolerance). Upon inspection this morning, the "feature" had not changed in shape or size.
Spare ITM06 test mass can be used if needed anytime after May 26th, 2017.
Carlos, Dave:
We connected the fourth DMT computer (Sun X2200) to the DMT switch for its DMT-VLAN (10.121) connection. We will not connect it to the Broadcaster network until Tuesday. Dan can now install SL7 on this machine and use the DMT-VLAN to go offsite.
The sw-msr-dmt switch configuration was changed and committed to SVN; some old changes were also committed.
I have produced filters for offline calibration of Hanford data starting at GPS time 1173225472. The filters can be found in the calibration SVN at this location:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/GDSFilters/H1DCS_1173225472.npz
The new filters have EPICS and calibration line parameters for computing SRC detuning parameters; see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35041
For suggested command line options to use when calibrating this data, see: https://wiki.ligo.org/Calibration/GDSCalibrationConfigurationsO2
The filters were produced using this Matlab script in SVN revision 4584:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/TDfilters/H1_run_td_filters_1173225472.m
The parameter files used (all in revision 4584) were:
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/2017-01-24/modelparams_H1_2017-01-24.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/params/2017-01-24/H1_TDparams_1175954418.conf
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CAL_EPICS/callineParams_20170411.m
ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O2/H1/Scripts/CAL_EPICS/D20170411_H1_CAL_EPICS_VALUES.m
Several plots are attached. The first four (png files) are spectrum comparisons between CALCS, GDS, and DCS; GDS and DCS agree to the expected level. Brief time series of the kappas and coherences are attached for comparison with CALCS. Time domain vs. frequency domain comparison plots of the filters are also attached.
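As a quick aside, here is a minimal sketch of how one could inspect the contents of the new filter file with numpy. The key names printed depend on how the file was written and are not documented here.

import numpy as np

# Load the DCS filter file and list its arrays and their shapes
filters = np.load('H1DCS_1173225472.npz')
for key in sorted(filters.files):
    print(key, np.shape(filters[key]))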
As a sanity check, I have calibrated data from early June to test whether these filters are up to date. Spectrum comparisons between the C00 frames and these filters are shown, and some unexpected discrepancy is noted at higher frequencies. Time series of the kappas are attached as well. These agree with the summary pages (i.e., the GDS pipeline).
It is now evident that this change did not occur during the vent, as the attached ASD ratio plot is from C01 and C00 data from May 08, 2017 at 14:00:00 UTC (GPS 1178287218), right before the vent.
It seems this change occurred during maintenance on Tuesday, April 11. The first ASD ratio is from data right before maintenance activities, and the second is from right after. Also it is confirmed that this is not being caused by a difference in the applied kappas (so the EPICS records agree). The most likely culprit is a change in the inverse sensing filtering.
Relevant aLOGs around April 11th -- LHO aLOG 35474, and more specifically the comment, LHO aLOG 35476, in which a "small change in writeEPICs code" is mentioned. Can we compare the EPICS records committed to the repo around that time?
I traced the problem to the GDS filters installed on April 11: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35462 Simply regenerating those filters seems to have fixed the problem, as shown in the attached ASD ratio plot comparing C01 to the corrected GDS filter output right after maintenance on April 11. The filters were generated as described in the above aLOG, except that the SVN revision was 4781, so the exact reason for the problem is unknown. The C00 data affected by this problem starts on April 11 (GPS 1175976351), and it will continue to be affected until we restart the GDS pipeline, no later than next Tuesday, June 20. The C01 frames are not affected by this and should be fine.
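For reference, a minimal sketch of the kind of C00 vs. C01 ASD-ratio comparison used in this investigation, assuming gwpy and site/cluster frame access; the frame types, channel names, and GPS times below are assumptions for illustration.

from gwpy.timeseries import TimeSeries

# Placeholder 10-minute stretch; pick times on either side of the April 11
# maintenance window to reproduce the before/after comparison.
start, end = 1175976418, 1175977018

c00 = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end, frametype='H1_HOFT_C00')
c01 = TimeSeries.get('H1:DCS-CALIB_STRAIN_C01', start, end, frametype='H1_HOFT_C01')

# Ratio of the two strain ASDs over the same stretch
ratio = c00.asd(fftlength=64, overlap=32) / c01.asd(fftlength=64, overlap=32)

plot = ratio.plot()
ax = plot.gca()
ax.set_ylabel('ASD ratio C00 / C01')
plot.show()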
WP6608
The FMCS channels reporting whether either of the wood shop fire pumps is running were added to the cell phone alarm system (and the system was restarted). If either or both pumps start to run, we will get a cell phone text within a minute.
At 13:10:07 UTC (06:10:07 PDT) h1seiex went into an error mode:
Real-time code continued to run normally
Dolphin IPC channel data from h1seiex to h1iscex continued to run normally
DAQ data coming from the open-mx gigabit ethernet port to h1dc0 stopped running
The CDS network gigabit ethernet port stopped working (EPICS data, SSH login, NFS logging)
The console froze, no keyboard entry possible
The system continued in this state until around 13:57 UTC when h1seiex was powered down. The system was fully recovered at 14:22:19 UTC (07:22:19 PDT).
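As an illustration of this failure mode (real-time code running while the EPICS/network path is dead), here is a hedged sketch of a quick responsiveness check an operator could run from a workstation with pyepics; the heartbeat channel name below is hypothetical and would need to be replaced with the real FEC record for the h1seiex DCUID.

import time
from epics import caget

# Hypothetical heartbeat/GPS channel for the h1seiex front end
CHANNEL = 'H1:FEC-89_TIME_DIAG'

first = caget(CHANNEL, timeout=2.0)
time.sleep(2)
second = caget(CHANNEL, timeout=2.0)

if first is None or second is None:
    print('No EPICS response: the network/EPICS path to the front end looks dead.')
elif first == second:
    print('EPICS responds but the value is frozen: the IOC or model may be hung.')
else:
    print('Front-end EPICS looks alive.')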
The computer h1seiex has been running since the last site power outage of 06:08 Sep 30 2016 PDT. The SEI models have been running since the Dolphin glitch of 11:40 Oct 23 2016 PDT. In other words, this machine has not been running any longer than most other H1 front end computers.
7 weeks of BRSX and BRSY driftmon channels.
Channel names are H0:FMC-CS_FIRE_PUMP_1 and H0:FMC-CS_FIRE_PUMP_2. Also updated ONAM and ZNAM fields of binary input channels.
I have added these to the control room alarm handler.
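For illustration, a minimal sketch of how these channels could be watched with a pyepics callback, roughly mirroring what the cell phone alarm system does; the notify() helper is hypothetical and stands in for the real text/alarm path.

import time
from epics import PV

def notify(message):
    # hypothetical stand-in for the real cell-phone/text alert path
    print(message)

def on_change(pvname=None, value=None, **kw):
    # binary input convention assumed here: 1 / 'active' means the pump is running
    if value:
        notify('%s is ACTIVE (pump running)' % pvname)

pumps = [PV(name, callback=on_change)
         for name in ('H0:FMC-CS_FIRE_PUMP_1', 'H0:FMC-CS_FIRE_PUMP_2')]

# keep the process alive so callbacks keep firing
while True:
    time.sleep(1)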
The full report can be found on the DetChar wiki: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20170424
Below I summarize the main highlights of this shift:
- The duty cycle was 64.3%. The range was around 65 Mpc, but it dropped by about 5 Mpc, mainly due to seismic noise.
- We were out of observing due to the calibration pipeline problem from 6:42 to 11:24 UTC on Monday (see more details).
- The fundamental violin modes were seen during 1:30 - 4:30 UTC on Tuesday and during 2:30 - 9:00 UTC on Wednesday.
- Commissioning started at 16:10 UTC on Wednesday for ASC measurements and OMC jitter.
- A 30 Hz blob of glitches has been showing up, but after disconnecting the EY OpLev fiber we have not observed any glitching around the 30 Hz region. The coupling to h(t) might be caused by the changing radiation pressure as the OpLev mode hops.
- The 50 Hz glitch still persists and remains mysterious.
Completes WP 6576
Added 50 mL H2O to Xtal chiller. Diode chiller reported water level OK. Canister filters look clean. This closes FAMIS task 6520.
TITLE: 04/27 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 61Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
Nice shift up until a quick lockloss, after which we had more significant downtime due to EX computer issues.
LOG:
Random lockloss (no obvious seismic to blame).
Out of OBSERVING for 33min.
The wrong gain of CHARD P was because I forgot to load the guardian. Should be fixed now. Thanks Corey.
WP 6577
Dave B., Carlos P., Bubba G., John W., Patrick T.

I have migrated a subset of the EPICS channels provided by the FMCS IOC on h0epics to an IOC I created on fmcs-epics-cds. The IOC on fmcs-epics-cds connects to the BACNet server that Apollo has installed as part of the FMCS upgrade. The channels that I migrated have been taken over by this upgrade and can no longer be read out by the server that the IOC on h0epics reads from. The fmcs-epics-cds computer connects to the slow controls network (10.105.0.1) on eth0 and the BACNet network (10.2.0.1) on eth1. It is running Debian 8.

The IOC on h0epics is started from the target directory /ligo/lho/h0/target/h0fmcs (https://cdswiki.ligo-wa.caltech.edu/wiki/h0fmcs). I commented out the appropriate channels from the fmcs.db and chiller.db files in the db directory of this path and restarted this IOC. I made no changes to the files in svn.

The IOC on fmcs-epics-cds uses code from SNS (http://ics-web.sns.ornl.gov/webb/BACnet/) and resides in /home/cdsadmin/BACnet_R0-8, a local directory on fmcs-epics-cds. This IOC is started as cdsadmin:
> ssh cdsadmin@10.105.0.112
cdsadmin@fmcs-epics-cds: screen
Hit Enter
cdsadmin@fmcs-epics-cds: cd /home/cdsadmin/BACnet_R0-8/iocBoot/e2b-ioc/
cdsadmin@fmcs-epics-cds: ../../bin/linux-x86_64/epics2bacnet st.cmd
Hit CTRL-a then 'd'

Issues:
I came to realize during this migration that the logic behind the binary input channels is different in BACNet. In BACNet a value of 0 corresponds to 'inactive' and a value of 1 corresponds to 'active'. In the server being migrated from, a value of 0 corresponds to 'invalid'. This was verified for the reverse osmosis alarm, H0:FMC-CS_WS_RO_ALARM: in the BACNet server it reads as 0 or 'inactive' when not in alarm, and when John W. forced it into alarm it read as 1 or 'active'. I believe Dave has updated his cell phone alarm notifier to match this.

A similar situation exists for the state of the chiller pumps. In the server being migrated from, a value of 1 appears to correspond to 'OFF' and a value of 2 appears to correspond to 'ON'. It has not been verified, but I believe in the BACNet server a value of 0 corresponds to 'OFF' and a value of 1 corresponds to 'ON'. The pump status for each building is calculated by looking at the state of the pumps. The calculation in the database for the IOC being migrated from appears to be such that as long as one pump is running the status is 'OPERATIONAL'; if no pump is running the status is 'FAILED'. I need to double check this with John or Bubba. I updated the corresponding calc records in the database for the BACNet IOC to match this (a sketch of this logic appears after this entry).

In the server being migrated from, channels that are read by BACNet as binary inputs and binary outputs are read as analog inputs. I changed these in the database for the BACNet IOC to binary inputs and set the ONAM to 'active' and the ZNAM to 'inactive'.

The alarm levels also need to be updated. They are currently set through the autoburt snapshot files that contain a separate channel for each alarm field. The autoburt request file has to be updated for the binary channels to have channels for .ZSV and .OSV instead of .HSV, .LSV, etc. So currently there is no control room alarm set for the binary channels, including the reverse osmosis alarm. I also need to update the MEDM screens to account for this change.

Also, there is an invalid alarm on the control room alarm station computer for the mid X air handler reheat temperature.
Looking at the BACNet FMCS server, this channel does appear to be genuinely invalid. It should be noted that this BACNet IOC is a temporary install until an OPC server is installed on the BACNet server. I would like to leave the permit for this work open until the FMCS upgrade is complete and all the channels have been migrated to the BACNet IOC.
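To make the pump-state conventions and the building-status calculation concrete, here is a minimal sketch in Python; the actual logic lives in EPICS calc records, so this is only an illustration of the intended mapping, and the BACNet ON value is the assumption noted above.

def pump_is_on(value, bacnet=True):
    """Interpret a chilled-water-pump state value."""
    if bacnet:
        return value == 1   # BACNet binary input: 1 = 'active' (assumed ON)
    return value == 2       # legacy server: 2 appears to mean ON

def building_status(pump_values, bacnet=True):
    """OPERATIONAL as long as at least one pump is running, else FAILED."""
    return 'OPERATIONAL' if any(pump_is_on(v, bacnet) for v in pump_values) else 'FAILED'

# Example: one pump on, one off, read through the BACNet IOC
print(building_status([1, 0]))   # -> OPERATIONAL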
I updated the autoBurt.req file.
I have set the alarm on the RO channel: caput H0:FMC-CS_WS_RO_ALARM.OSV MAJOR
I have updated the medm screens.
Note: The 'ARCH = linux-x86' line in the Makefile under 'BACnet_R0-8/iocBoot/e2b-ioc' had to be changed to 'ARCH = linux-x86_64' in the code copied from SNS.