Today I connected the TMDS Gas Delivery Table to the 100 psi tap of the X-end vent/purge air supply and let the dry air flow through the table's plumbing. The ionizer assembly, scroll pump, 6' of 1" SS tube, and 3' of 3/8" copper tubing that would nominally be connected during an actual discharge exercise were not connected, so the air exhausted into the room instead. I had to make a few minor "tweaks" and fix a few leaks. The observed flow rate (rotameter) was ~50-60 lpm for ~20-25 psi regulator output. This is consistent with the flow vs. pressure relationship observed by LLO (see Harry, Ryan, Scott notes "TMDS_ETMX_2-3-16.pdf" from LLO log entry?). I shut down the vent/purge compressors and removed the TMDS table from the VEA (brought it back to the Corner Station Mechanical Room). I will now remove the table's various plumbing components and have them cleaned.
NOTES TO SELF:
- Compressor #4 needs a replacement pressure relief valve.
- The 1" MNPT to 1" tube adapter at the VEA point-of-use valve is still leaking.
- Need to adapt one of the various iLIGO aluminum billet "door stops" to serve as a pipe support for the 1" line that will route beneath the ISC table.
- The dedicated TMDS scroll pump has a factory label stating that it is configured for "US 120VAC" but, in fact, it is still configured internally for 220VAC; need to correct this before using.
- Need to integrate the AC drop-out mechanism and NEMA 20 twist-lock connectors into the scroll pump power cord.
- Need to modify the terminal cover and add 120VAC wiring for the spring-close isolation solenoid valve.
Jonathan, Carlos, Dave:
The GraceDB database access certificate on h1fescript0 expired today, meaning that GRB and SN alerts could not be raised in the control room. Jonathan and Carlos obtained a new cert and installed it in record time. I noticed this machine had been running for 383 days and needed a reboot to install patches, so I took this opportunity to reboot it.
There was some confusion about starting the seismon IOC code (currently we need both the old and new code), and I missed restarting the camera copy program.
I'm updating the relevant CDS wiki pages related to code running on this machine.
Installed cable roller guides (pulleys) in the cable tray from the CER to HAM6. This is in preparation for pulling in the new RF and DC cabling for SQZ.
A pressure relief valve sits on the Pump Station output line below the reservoir. This valve is factory set to 125 psi, but that setting looks very coarse. The output from the valve is plumbed to a small drum, and I found this clear line full of fluid. This suggests to me that there have either been several pressure spikes on pump station restart (not good) or a slowly leaking valve (less bad, but still not ideal). I've drained the line as best I can and marked the hose for monitoring.
The pressure spikes upon restart are caused by poor operation (not following the restart guidance: https://cdswiki.ligo-wa.caltech.edu/wiki/SEI) and should be avoided, to limit fluid loss and disposal pain down the road. Plus, the system will very likely not have restarted successfully if best practice has not been followed.
The attached plot shows the pump drive and the output pressure closest to the relief valve during the July 29 OU3 fault; the bottom plot is zoomed in on the pressure during the restart attempts. The bottom line to remember when operating this system: manually zero the PID output before pressing the red button or hitting the fault reset.
The PID loop knows nothing about the VFD (maybe this will change with the potential Beckhoff upgrade); it knows only that the process variable, differential pressure, is not at the process setpoint. So the PID increases its output to max. In this state, as shown in these trends, the reset button was pushed with the VOUT at max, and as a result the pressure spiked to more than 100 psi very quickly. Whether this spike opens the relief valve is unknown: these EPICS trends could easily miss the highest pressure, and the relief valve and this pressure gauge are likely not tightly calibrated against each other. A couple of things can happen when this spike occurs: the relief valve opens and spills fluid, and/or the fluid level in the reservoir is pulled down too quickly and the pump station trips. Obviously something like this happened twice here. Once the VOUT was zeroed, the restart proceeded nicely.
Again, bottom line, Operators: if the HEPI fluid pressure is not okay, manually reduce the output of the PID before pushing any hardware buttons.
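Spelled out as a shell recipe, the safe order of operations looks like the following; note that the PV name below is a hypothetical placeholder for illustration, not the real HEPI pump station channel (take the real one from the pump station MEDM screen):

# Step 1: zero the PID output so the loop is no longer railed at max.
caput H1:HPI-PUMP_PID_VOUT 0   # hypothetical PV name, for illustration only
# Step 2: only now press the red button / fault reset on the hardware.
# Step 3: let the PID ramp the differential pressure back to the setpoint,
#         watching the output pressure and reservoir level as it recovers.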
Maintenance tasks: all times UTC
Current:
19:46 Fil to CER to pick up his notebook.
19:51 Fil out
19:50 Beginning Initial alignment.
19:55 Sheila and Thomas to End stations to revert ETMX ESD cabling
20:04 Jason to Mid X
20:30 Jason back
20:45 Having trouble keeping the green arms locked once ALS WFS + CAM are engaged, even with good flashes to start
Found the Green ITM camera image processing had stalled since h1fescript0 was rebooted.
21:00 Fire Alarm
21:15 Guardian node errors, a result of the incompletely reverted morning attempt to lock ALS on ETMY
Sheila and Thomas also went to EX to confirm that ESD EX is cabled up correctly.
While snooping, they found that the C and D fault lights were on for the TMSX OSEM sat amps. Thinking this was a problem, we restarted the coil drivers twice (no change), then unplugged and reseated the CD-to-SatAmp cables at the sat amp (no change). Then we remembered that the TMS's second top-mass signal chain only runs 2 OSEMs, on channels A and B, so C and D will always have their fault lights on.
All cables and power restored, TMSX returned to normal functionality.
SudarshanK, RichardS
We moved the Pcal beams back to their optimal positions, 111.6 mm away from the center of the optic. The actual positions of the current Pcal beams (last column), along with the history of where they have been, are in the table below. The numbers quoted in the table are the distances of the Pcal beams (in mm) from their optimal positions of [0, +/-111.6] mm.
| Beam | Before 07/25/2017 | 07/25/2017 | 08/01/2017 | 08/08/2017 |
| Upper Beam | [1.9, 0.3] | [2.5, -8.4] | [1.1, 14.5] | [0.8, 0.6] |
| Lower Beam | [-1.0, 0.3] | [-1.3, 8.6] | [-0.5, -14.1] | [-0.8, -0.2] |
We also re-centered the Pcal beams on the receiver side to relieve any clipping that was occurring outside the vacuum. The attached spectra show no significant clipping on the Rx beams.
We will run a set of calibration lines (from 1501.3 Hz to 4501.3 Hz at 500 Hz intervals) with this Pcal beam configuration for about a week.
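For reference, that interval works out to seven line frequencies, e.g. with GNU seq:

$ seq 1501.3 500 4501.3
1501.3
2001.3
2501.3
3001.3
3501.3
4001.3
4501.3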
After this Pcal beam configuration change, we turned on the two Pcal lines at 333.9 and 1083.3 Hz using the Pcal at ENDX. We will collect about 2-3 hours' worth of data after we acquire lock, and then turn them off. After that, we plan to initiate the HIGH_FREQ_LINES guardian node to acquire data at high frequency.
Attached are the weekly charge plots with today's new data appended. The charge on both ETMs continues to trend away from zero. Sadly.
Cheryl asked for a command-line program to write operations logs to a text file. I have created a simple bash script called oplog.
Here is the help page (printed if oplog is called with no arguments, or with a single 'help' argument):
david.barker@zotws6: oplog help
Usage:
oplog text to be entered into log file | Simple text entry
oplog 'text with non-alphanumeric characters' | Complex text entry
oplog help | Show this help page
oplog show | Print content of your log file
Each user has their own log file, dated with the current day's date, in the /tmp directory. The log file can be listed with the 'oplog show' command:
oplog show
Aug 08 2017 18:03:12 UTC one two three four
Aug 08 2017 18:03:32 UTC five six seven eight
Aug 08 2017 18:13:26 UTC here is a long text line, it has many characters - including a dash
Aug 08 2017 18:17:31 UTC how about
Aug 08 2017 18:17:40 UTC how about & character?
Aug 08 2017 18:18:02 UTC show
Aug 08 2017 18:29:29 UTC
Aug 08 2017 18:32:28 UTC show the text
Aug 08 2017 18:33:18 UTC reboot h1fescript0
Following Ryan's excellent suggestion, the log file has been moved from the /tmp directory into the user's home directory as a 'dot' file, specifically from:
/tmp/<date>_<username>
to:
/ligo/home/<username>/.<date>_<username>
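For reference, here is a minimal sketch of how such a script could be put together. This is an illustration of the behavior described above, not the actual site script; in particular, the date format used in the file name is a guess:

#!/bin/bash
# oplog -- sketch of a per-user, per-day operations log (illustration only).
# The log is a dot file in the user's home directory: .<date>_<username>
LOGFILE="${HOME}/.$(date -u +%Y-%m-%d)_${USER}"

case "$1" in
    "" | help)
        echo "Usage:"
        echo "  oplog text to be entered into log file | Simple text entry"
        echo "  oplog 'text with non-alphanumeric characters' | Complex text entry"
        echo "  oplog help | Show this help page"
        echo "  oplog show | Print content of your log file"
        ;;
    show)
        cat "$LOGFILE"
        ;;
    *)
        # Timestamp each entry in UTC, matching the example output above.
        echo "$(date -u '+%b %d %Y %H:%M:%S') UTC $*" >> "$LOGFILE"
        ;;
esac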
J. Kissel

I've checked the last of the suspensions for any sign of rubbing. Preliminary results look like "Nope." The data have been committed to the SUS repo here:

/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM1/SAGM1/Data/
    2017-08-08_1629_H1SUSIM1_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1629_H1SUSIM1_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1629_H1SUSIM1_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM2/SAGM1/Data/
    2017-08-08_1714_H1SUSIM2_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1714_H1SUSIM2_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1714_H1SUSIM2_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM3/SAGM1/Data/
    2017-08-08_1719_H1SUSIM3_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1719_H1SUSIM3_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1719_H1SUSIM3_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM4/SAGM1/Data/
    2017-08-08_1741_H1SUSIM4_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1741_H1SUSIM4_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1741_H1SUSIM4_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/OM1/SAGM1/Data/
    2017-08-08_1544_H1SUSOM1_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1544_H1SUSOM1_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1544_H1SUSOM1_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/OM2/SAGM1/Data/
    2017-08-08_1546_H1SUSOM2_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1546_H1SUSOM2_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1546_H1SUSOM2_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/OM3/SAGM1/Data/
    2017-08-08_1625_H1SUSOM3_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1625_H1SUSOM3_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1625_H1SUSOM3_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/RM1/SAGM1/Data/
    2017-08-08_1516_H1SUSRM1_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1516_H1SUSRM1_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1516_H1SUSRM1_M1_WhiteNoise_Y_0p01to50Hz.xml

/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/RM2/SAGM1/Data/
    2017-08-08_1520_H1SUSRM2_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1520_H1SUSRM2_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1520_H1SUSRM2_M1_WhiteNoise_Y_0p01to50Hz.xml

Will post results in due time, but my measurement processing / analysis / aLOGging queue is severely backed up.
J. Kissel

Processed the IM1, IM2, and IM3 data from above. Unfortunately, it looks like I didn't actually save an IM4 Yaw transfer function, so I don't have plots for that suspension. I can confirm that IM1, IM2, and IM3 do not look abnormal compared with their past measurements, other than a scale-factor gain. Recall that the IMs had their coil driver range reduced in Nov 2013 (see LHO aLOG 8758); otherwise I can't explain the electronics gain drift, other than to suspect OSEM LED current decay, as has been seen to a much smaller degree in other, larger suspension types. Will try to get the last DOF of IM4 soon.
All HTTSs are clear of rubbing. Attached are:
- the individual measurements, to show the OSEM-basis transfer function results,
- each suspension's transfer functions as a function of time,
- all suspensions' (plus an L1 RM) latest TFs, just to show how they're all nicely the same (now).
Strangely, and positively: though RM2 has always shown an extra resonance in YAW (the last measurement was in 2014, after the HAM1 vent work described in LHO aLOG 9211), that extra resonance has now disappeared, and RM2 looks like every other HTTS. Weird, but at least a good weird!
J. Kissel

Still playing catch-up -- I was finally able to retake IM4 Y. Processed data is attached. Still confused about scale factors, but the SUS is definitely not rubbing, and its frequency dependence looks exactly as it did 3 years ago.
The attached 10-day trends around the April vent show the change in HAM2's position sensors' Cartesian location. The middle panels show the pressure and the guardian request state.
At the beginning, the HAM remained locked through the venting and was then taken to DAMPED. Clear shifts in position are seen at that time.
This table gives the position in the two states. Isolated/vacuum values are in nm and Damped/vented values in um, as labeled for Z and Y in the trends; the rotational DOFs presumably follow in nrad and urad:

| DOF | Isolated/vacuum | Damped/vented |
| Z   | -39800          | -20 to -17    |
| Y   | -57000          | -61.5         |
| X   | 4300            | 15            |
| RX  | 26000           | 26.3          |
| RY  | -33400          | -19.5         |
| RZ  | 85800           | 86.3 to 88.5  |
This vent was short, about 2 days, and the observant reader will notice the ranges in the above table for Z and RZ, along with the trends for these DOFs moving during the vent period. I'm guessing this trend is the continuing thermal transition after coming to atmosphere. After pumping down, the ISI remained unisolated for a few more days, and the positional change between atmosphere and vacuum is evident. Finally, the ISI was re-isolated back to where it started.
The important point is the isolated versus vented position. At atmosphere, the ISI would be stopped on the lockers if work were to be done, and we strive to keep the shift between locked and unlocked to less than 50 um in local coordinates. The locking is usually a bit better than this.
During at-atmosphere work, these LOCATIONMON channels will give an indication of position changes, with the caveat that all references are on the slab.
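A quick way to eyeball these during a vent would be something like the following from a control room shell, using the EPICS caget tool. The channel-name pattern here is a hypothetical placeholder; take the exact LOCATIONMON names from the HAM2 ISI MEDM screens:

# Hypothetical channel-name pattern -- check the ISI MEDM screens for
# the real LOCATIONMON names before using.
for dof in X Y Z RX RY RZ; do
    caget H1:ISI-HAM2_LOCATIONMON_${dof}
done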
Restarted at 17:21 UTC
Sparked by an email from Jeff B, I talked to Jason and went over what the Guardian should turn off if we lose the PSL light. Previously the LASER_PWR node would turn off the following when H1:PSL-PWR_HPL_DC_LP_OUTPUT < 1:
H1:PSL-FSS_AUTOLOCK_ON = 0
H1:PSL-PMC_LOCK_ON = 0
H1:PSL-ISS_AUTOLOCK_ON = 1
H1:PSL-ISS_SECONDLOOP_OUTPUT_SWITCH = 0
Today I added to the list:
H1:PSL-ISS_LOOP_STATE_REQUEST = 0
H1:PSL-PMC_RAMP_ON = 0
I tested this by temporarily changing the above-listed condition to > 1 and then manually putting the node into that state. Everything was turned off and did what it should. I reverted the condition, and we should be good.
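For reference, the same set of writes could be exercised by hand from a control room shell with the EPICS caput tool. This is just a sketch of the channel writes the Guardian node performs, not the Guardian code itself:

# Replicate the LASER_PWR shutdown list by hand with EPICS caput.
# Channels and values are from the lists above.
caput H1:PSL-FSS_AUTOLOCK_ON 0
caput H1:PSL-PMC_LOCK_ON 0
caput H1:PSL-ISS_AUTOLOCK_ON 1
caput H1:PSL-ISS_SECONDLOOP_OUTPUT_SWITCH 0
# channels added today:
caput H1:PSL-ISS_LOOP_STATE_REQUEST 0
caput H1:PSL-PMC_RAMP_ON 0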
FAMIS7450
Laser Status:
SysStat is good
Front End Power is 33.9 W (should be around 30 W)
HPO Output Power is 154.8W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked for 20 days, 23 hr, 9 minutes (should be days/weeks)
Reflected power = 17.42 W
Transmitted power = 57.23 W
PowerSum = 74.65 W
FSS:
It has been locked for 0 days 2 hr and 35 min (should be days/weeks)
TPD[V] = 1.875V (min 0.9V)
ISS:
The diffracted power is around 2.9% (should be 3-5%)
Last saturation event was 0 days 0 hours and 55 minutes ago (should be days/weeks)
Possible Issues:
PMC reflected power is high
Attached are trends of the HPO laser head flow rates for the last 4 days. Once again, no change since the last report; flows are holding steady.
This morning I completed the weekly PSL FAMIS tasks.
HPO Pump Diode Current Adjustment (FAMIS 8434)
With the ISS OFF, I adjusted the HPO pump diode operating currents. All currents were increased by 0.1 A; I have attached a screenshot of the PSL Beckhoff main screen for future reference. The changes are summarized in the table below:
| Diode Box | Old Current (A) | New Current (A) |
| DB1 | 49.3 | 49.4 |
| DB2 | 52.3 | 52.4 |
| DB3 | 52.3 | 52.4 |
| DB4 | 52.3 | 52.4 |
I did not adjust the operating temperatures of the diode boxes. The HPO is now outputting ~154.9 W; the ISS is now turned ON. This completes FAMIS 8434.
PSL Power Watchdog Reset (FAMIS 3662)
I reset both PSL power watchdogs at 15:52 UTC (8:52 PDT). This completes FAMIS 3662.
TITLE: 08/08 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 7 mph gusts, 6 mph 5-min avg
Primary useism: 0.47 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Maintenance has already started because of the EQ.
TITLE: 08/08 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Jeff (covering for Travis)
SHIFT SUMMARY: Remained in observing until a 6.5 mag earthquake in China knocked us out of lock. Left ISC_LOCK at DOWN and started maintenance early. ISI_CONFIG is set to LARGE_EQ_NOBRSXY.
Issues (see earlier alogs):
- GraceDB query failure
- Possible mismatch on ops overview for ETMY RMS WD; the current overview on video2 is not the same as the one currently linked on the sitemap?
- ALS X fiber polarization
- Seismon not updating
- Tripped watchdogs from the earthquake
LOG:
12:32 UTC restarted video4
14:05 UTC Lockloss, 6.5 mag earthquake in China
14:17 UTC Chris to CER to work on scaffolding
14:32 UTC Karen opening receiving door
14:46 UTC Bubba to CER to work with Chris on scaffolding
~15:02 UTC Karen driving car to OSB receiving
M 6.5 - 36 km W of Yongle, China; 2017-08-08 13:19:49 UTC; 33.217°N 103.843°E; 10.0 km depth.
Lockloss at 14:05 UTC.
It seems the System Time for Seismon stopped updating at GPS 1186191618 (Aug 08 2017 01:40:00 UTC), yet its System Uptime and Keep Alive are still updating.
14:10 UTC ISI platforms started tripping. Transitioned ISI config to LARGE_EQ_NOBRSXY. Tripped:
- ISI ITMY stage 2
- ISI ETMY stage 2
- ISI ITMY stage 1
- ISI ITMX stage 1
- ISI ITMX stage 2
- ISI ETMY stage 1
- SUS TMSY
Leaving ISC_LOCK in DOWN. Starting maintenance early.
14:18 UTC Set observing mode to preventive maintenance.
I don't quite know what happened with seismon. I updated the USGS client yesterday and restarted that code, but I didn't make any changes to the seismon code itself. I've restarted the seismon_run_info code that we are using, and that seems to have fixed it. Maybe the seismon code crashed when I added geopy yesterday?