Maintenance tasks: all times UTC
Current:
SudarshanK, RichardS
We moved the Pcal beams back to their optimal positions, 111.6 mm away from the center of the optic. The current positions of the Pcal beams (last column), along with the history of where they have been, are given in the table below. The numbers quoted in the table are the offsets of each Pcal beam (in mm) from its optimal position of [0, +/-111.6] mm.
| | Before 07/25/2017 | 07/25/2017 | 08/01/2017 | 08/08/2017 |
| Upper Beam | [1.9, 0.3] | [2.5, -8.4] | [1.1, 14.5] | [0.8, 0.6] |
| Lower Beam | [-1.0, 0.3] | [-1.3, 8.6] | [-0.5, -14.1] | [-0.8, -0.2] |
We also re-centered the Pcal beams on the receiver side to relieve any clipping that was happening outside the vacuum. The spectra attached below show no significant clipping on the Rx beams.
We will run a set of calibration lines (from 1501.3 to 4501.3 Hz at 500 Hz intervals) with this Pcal beam configuration for about a week.
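For reference, the set of line frequencies implied above (assuming a simple comb from 1501.3 Hz to 4501.3 Hz in 500 Hz steps) can be listed from the shell with:

    # should print 1501.3, 2001.3, ..., 4501.3
    seq 1501.3 500 4501.3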
After this Pcal beam configuration change, we turned on the two Pcal lines at 333.9 Hz and 1083.3 Hz using the Pcal at ENDX. We will collect about 2-3 hours of data after we acquire lock and then turn them off. After that, we plan to initiate the HIGH_FREQ_LINES guardian node to acquire data at high frequency.
Attached are the weekly charge plots with today's new data appended. The charge on both ETMs continues to trend away from zero. Sadly.
Cheryl asked for a command-line program to write operations logs to a text file. I have created a simple bash script called oplog.
Here is the help page (printed if oplog is called with no arguments, or a single 'help' argument)
david.barker@zotws6: oplog help
Usage:
oplog text to be entered into log file | Simple text entry
oplog 'text with non alpha-numeric characters' | Complex text entry
oplog help | Show this help page
oplog show | Print content of your log file
Each user has their own log file, dated with the current day's date, in the /tmp directory. The log file can be listed with the 'oplog show' command:
oplog show
Aug 08 2017 18:03:12 UTC one two three four
Aug 08 2017 18:03:32 UTC five six seven eight
Aug 08 2017 18:13:26 UTC here is a long text line, it has many characters - including a dash
Aug 08 2017 18:17:31 UTC how about
Aug 08 2017 18:17:40 UTC how about & character?
Aug 08 2017 18:18:02 UTC show
Aug 08 2017 18:29:29 UTC
Aug 08 2017 18:32:28 UTC show the text
Aug 08 2017 18:33:18 UTC reboot h1fescript0
Following Ryan's excellent suggestion, the log file has been moved from the /tmp directory into the user's home directory as a 'dot' file, specifically from:
/tmp/<date>_<username>
to:
/ligo/home/<username>/.<date>_<username>
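For reference, a minimal sketch of the oplog behavior described above (not the installed script; the exact file naming and date format here are guesses based on the examples):

    #!/bin/bash
    # oplog sketch: append timestamped entries to a per-user, per-day 'dot' file
    # in the home directory; 'show' prints it, 'help' or no arguments prints usage.
    logfile="$HOME/.$(date -u +%Y%m%d)_$(whoami)"   # assumed naming convention

    if [ $# -eq 0 ] || { [ $# -eq 1 ] && [ "$1" = "help" ]; }; then
        echo "Usage:"
        echo "  oplog text to be entered into log file    | Simple text entry"
        echo "  oplog 'text with non alpha-numeric chars'  | Complex text entry"
        echo "  oplog help                                 | Show this help page"
        echo "  oplog show                                 | Print content of your log file"
    elif [ $# -eq 1 ] && [ "$1" = "show" ]; then
        cat "$logfile"
    else
        # prepend a UTC timestamp matching the format in the examples above
        echo "$(date -u '+%b %d %Y %H:%M:%S UTC') $*" >> "$logfile"
    fi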
J. Kissel

I've checked the last of the suspensions for any sign of rubbing. Preliminary results look like "Nope." The data has been committed to the SUS repo here:

/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM1/SAGM1/Data/
    2017-08-08_1629_H1SUSIM1_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1629_H1SUSIM1_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1629_H1SUSIM1_M1_WhiteNoise_Y_0p01to50Hz.xml
/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM2/SAGM1/Data/
    2017-08-08_1714_H1SUSIM2_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1714_H1SUSIM2_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1714_H1SUSIM2_M1_WhiteNoise_Y_0p01to50Hz.xml
/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM3/SAGM1/Data/
    2017-08-08_1719_H1SUSIM3_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1719_H1SUSIM3_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1719_H1SUSIM3_M1_WhiteNoise_Y_0p01to50Hz.xml
/ligo/svncommon/SusSVN/sus/trunk/HAUX/H1/IM4/SAGM1/Data/
    2017-08-08_1741_H1SUSIM4_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1741_H1SUSIM4_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1741_H1SUSIM4_M1_WhiteNoise_Y_0p01to50Hz.xml
/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/OM1/SAGM1/Data/
    2017-08-08_1544_H1SUSOM1_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1544_H1SUSOM1_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1544_H1SUSOM1_M1_WhiteNoise_Y_0p01to50Hz.xml
/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/OM2/SAGM1/Data/
    2017-08-08_1546_H1SUSOM2_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1546_H1SUSOM2_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1546_H1SUSOM2_M1_WhiteNoise_Y_0p01to50Hz.xml
/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/OM3/SAGM1/Data/
    2017-08-08_1625_H1SUSOM3_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1625_H1SUSOM3_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1625_H1SUSOM3_M1_WhiteNoise_Y_0p01to50Hz.xml
/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/RM1/SAGM1/Data/
    2017-08-08_1516_H1SUSRM1_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1516_H1SUSRM1_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1516_H1SUSRM1_M1_WhiteNoise_Y_0p01to50Hz.xml
/ligo/svncommon/SusSVN/sus/trunk/HTTS/H1/RM2/SAGM1/Data/
    2017-08-08_1520_H1SUSRM2_M1_WhiteNoise_L_0p01to50Hz.xml
    2017-08-08_1520_H1SUSRM2_M1_WhiteNoise_P_0p01to50Hz.xml
    2017-08-08_1520_H1SUSRM2_M1_WhiteNoise_Y_0p01to50Hz.xml

Will post results in due time, but my measurement processing / analysis / aLOGging queue is severely backed up.
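A quick way to confirm that each of the above suspensions has all three DOF files committed, using the directory layout quoted above (a sketch; adjust the optic list as needed):

    # list the white-noise TF files for each HAUX/HTTS suspension measured today
    svnroot=/ligo/svncommon/SusSVN/sus/trunk
    for sus in HAUX/H1/IM1 HAUX/H1/IM2 HAUX/H1/IM3 HAUX/H1/IM4 \
               HTTS/H1/OM1 HTTS/H1/OM2 HTTS/H1/OM3 HTTS/H1/RM1 HTTS/H1/RM2; do
        ls "$svnroot/$sus/SAGM1/Data/"*_WhiteNoise_{L,P,Y}_0p01to50Hz.xml
    done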
J. Kissel

Processed the IM1, IM2, and IM3 data from above. Unfortunately, it looks like I didn't actually save an IM4 Yaw transfer function, so I don't have plots for that suspension. I can confirm that IM1, IM2, and IM3 do not look abnormal compared to their past measurements, other than a scale factor gain. Recall that the IMs had their coil driver range reduced in Nov 2013 (see LHO aLOG 8758), but otherwise I can't explain the electronics gain drift, other than to suspect OSEM LED current decay, as has been seen to a much smaller degree in other, larger suspension types. Will try to get the last DOF of IM4 soon.
All HTTSs are clear of rubbing. Attached are:
- the individual measurements, to show the OSEM-basis transfer function results,
- each suspension's transfer functions as a function of time,
- all suspensions' (plus an L1 RM) latest TFs, just to show how they're all nicely the same (now).
Strangely, and positively, though RM2 has always shown an extra resonance in YAW (the last measurement was in 2014, after the HAM1 vent work described in LHO aLOG 9211), that extra resonance has now disappeared, and it looks like every other HTTS. Weird, but at least a good weird!
J. Kissel

Still playing catch-up -- I was finally able to retake IM4 Y. Processed data is attached. Still confused about the scale factors, but the SUS is definitely not rubbing, and its frequency dependence looks exactly as it did 3 years ago.
The attached 10-day trends around the April vent show the change in HAM2's position sensors' Cartesian location. The middle panels show the pressure and guardian request state.
At the beginning, the HAM remained locked through the venting and then was taken to DAMPED. Clear shifts in position are seen at that time.
This table is the position in the two states:
| DOF | Isolated/vacuum | Damped/vented |
| Z | -39800 nm | -20 -- -17 um |
| Y | -57000 | -61.5 um |
| X | 4300 | 15 |
| RX | 26000 | 26.3 |
| RY | -33400 | -19.5 |
| RZ | 85800 | 86.3 -- 88.5 |
This vent was short, about 2 days, and the observant reader will notice the ranges in the above table for Z and RZ, along with the trends for those DOFs moving during the vent period. I'm guessing this is the continuing thermal transition after coming to atmosphere. After pumping down, the ISI remained unisolated for a few more days, and the positional change between at-atmosphere and at-vacuum is evident. Finally, the ISI is re-isolated back to where it started.
The important point is the isolated versus vented position. At atmosphere, the ISI would be stopped on the lockers if work were to be done, and we strive to keep the shift between locked and unlocked to less than 50 um in local coordinates. The locking is usually a bit better than this.
During at-atmosphere work, these LOCATIONMON channels will give an indication of position changes with the caveat that all references are on the slab.
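As a rough illustration of reading the table above (assuming the isolated column is in nm/nrad and the damped column in um/urad, as the Z row's labels suggest), the translational DOFs convert as:

    # convert the isolated column (assumed nm) to um for comparison with the damped column
    awk 'BEGIN {
        printf "Z isolated: %.1f um (damped: -20 to -17 um)\n", -39800/1000
        printf "Y isolated: %.1f um (damped: -61.5 um)\n",      -57000/1000
        printf "X isolated: %.1f um (damped: 15 um)\n",           4300/1000
    }'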
Restarted at 17:21 UTC
Sparked by an email from Jeff B, I talked to Jason and went over what the Guardian should turn off if we lose the PSL light. Previously the LASER_PWR node would turn off the following when H1:PSL-PWR_HPL_DC_LP_OUTPUT < 1:
H1:PSL-FSS_AUTOLOCK_ON = 0
H1:PSL-PMC_LOCK_ON = 0
H1:PSL-ISS_AUTOLOCK_ON = 1
H1:PSL-ISS_SECONDLOOP_OUTPUT_SWITCH = 0
Today I added to the list:
H1:PSL-ISS_LOOP_STATE_REQUEST = 0
H1:PSL-PMC_RAMP_ON = 0
I tested this by changing the condition listed above to > 1 and then manually putting the system into that state. Everything was turned off and did what it should. I brought it back, and we should be good.
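For reference, a command-line sketch of the same shutdown logic (this is not the actual LASER_PWR Guardian code), assuming the standard EPICS caget/caput tools; the channel names and the < 1 condition are taken from the lists above:

    # read the HPL power and, if below the threshold, write the shutdown values
    pwr=$(caget -t H1:PSL-PWR_HPL_DC_LP_OUTPUT)
    if awk -v p="$pwr" 'BEGIN { exit !(p < 1) }'; then
        caput H1:PSL-FSS_AUTOLOCK_ON 0
        caput H1:PSL-PMC_LOCK_ON 0
        caput H1:PSL-ISS_AUTOLOCK_ON 1
        caput H1:PSL-ISS_SECONDLOOP_OUTPUT_SWITCH 0
        # the two channels added today
        caput H1:PSL-ISS_LOOP_STATE_REQUEST 0
        caput H1:PSL-PMC_RAMP_ON 0
    fi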
FAMIS 7450
Laser Status:
SysStat is good
Front End Power is 33.9 W (should be around 30 W)
HPO Output Power is 154.8 W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 20 days, 23 hr 9 minutes (should be days/weeks)
Reflected power = 17.42 W
Transmitted power = 57.23 W
PowerSum = 74.65 W
FSS:
It has been locked for 0 days 2 hr and 35 min (should be days/weeks)
TPD[V] = 1.875 V (min 0.9 V)
ISS:
The diffracted power is around 2.9% (should be 3-5%)
Last saturation event was 0 days 0 hours and 55 minutes ago (should be days/weeks)
Possible Issues:
PMC reflected power is high
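As a quick sanity check on the numbers above, the reflected and transmitted powers do sum to the quoted PowerSum:

    awk 'BEGIN { printf "17.42 + 57.23 = %.2f W\n", 17.42 + 57.23 }'   # prints 74.65 W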
Attached are trends for the HPO laser head flow rates for the last 4 days. Once again, no change since the last report; flows are holding steady.
This morning I completed the weekly PSL FAMIS tasks.
HPO Pump Diode Current Adjustment (FAMIS 8434)
With the ISS OFF, I adjusted the HPO pump diode operating currents. All currents were increased by 0.1 A; I have attached a screenshot of the PSL Beckhoff main screen for future reference. The changes are summarized in the table below:
| Diode Box | Old Operating Current (A) | New Operating Current (A) |
| DB1 | 49.3 | 49.4 |
| DB2 | 52.3 | 52.4 |
| DB3 | 52.3 | 52.4 |
| DB4 | 52.3 | 52.4 |
I did not adjust the operating temperatures of the diode boxes. The HPO is now outputting ~154.9 W; the ISS is now turned ON. This completes FAMIS 8434.
PSL Power Watchdog Reset (FAMIS 3662)
I reset both PSL power watchdogs at 15:52 UTC (8:52 PDT). This completes FAMIS 3662.
TITLE: 08/08 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 6mph 5min avg
Primary useism: 0.47 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Maintenance has already started because of the EQ.
TITLE: 08/08 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Jeff (covering for Travis)
SHIFT SUMMARY: Remained in observing until a 6.5 mag earthquake in China knocked us out of lock. Left ISC_LOCK at DOWN and started maintenance early. ISI_CONFIG is set to LARGE_EQ_NOBRSXY.
Issues (see earlier alogs):
GraceDB query failure
Possible mismatch on ops overview for ETMY RMS WD; current overview on video2 not the same as the one currently linked on sitemap?
ALS X fiber polarization
Seismon not updating
Tripped watchdogs from earthquake
LOG:
12:32 UTC restarted video4
14:05 UTC Lockloss, 6.5 mag earthquake in China
14:17 UTC Chris to CER to work on scaffolding
14:32 UTC Karen opening receiving door
14:46 UTC Bubba to CER to work with Chris on scaffolding
~15:02 UTC Karen driving car to OSB receiving
M 6.5 - 36 km W of Yongle, China, 2017-08-08 13:19:49 UTC, 33.217°N 103.843°E, 10.0 km depth
Lockloss at 14:05 UTC.
It seems the System Time for Seismon stopped updating at 1186191618 GPS (Aug 08 2017 01:40:00 UTC), yet its System Uptime and Keep Alive are still updating.
14:10 UTC ISI platforms started tripping. Transitioned ISI config to LARGE_EQ_NOBRSXY. Tripped:
ISI ITMY stage 2
ISI ETMY stage 2
ISI ITMY stage 1
ISI ITMX stage 1
ISI ITMX stage 2
ISI ETMY stage 1
SUS TMSY
Leaving ISC_LOCK in DOWN. Starting maintenance early.
14:18 UTC Set observing mode to preventive maintenance.
I don't quite know what happened with seismon. I updated the USGS client yesterday and restarted that code, but I didn't make any changes to the seismon code. I've restarted the seismon_run_info code that we are using and that seems to have fixed it. Maybe the seismon code crashed when I added geopy yesterday?
Started appearing intermittently (see attached).
I am assuming I should not try to adjust the fiber polarization at this point (triple coincidence observing).
Have remained in observing. No issues beyond the GraceDB query failure to report.
The Ops overview MEDM screen is showing a red block that reads 'GraceDB query failure' (see attached). I found the following wiki page: https://cdswiki.ligo-wa.caltech.edu/wiki/ExternalAlertNotification and followed the instructions to log in to h1fescript0 and run 'ps aux | grep exttrig'. It reports the following:

root 787 0.0 0.0 77568 3636 ? Ss 23:41 0:00 sshd: exttrig [priv]
exttrig 963 0.0 0.0 77568 1612 ? S 23:41 0:00 sshd: exttrig@pts/5
exttrig 964 0.9 0.0 26860 8112 pts/5 Ss 23:41 0:00 -bash
exttrig 1217 0.0 0.0 18148 1260 pts/5 R+ 23:41 0:00 ps aux
exttrig 1218 0.0 0.0 9380 940 pts/5 R+ 23:41 0:00 grep --color=auto exttrig
exttrig 2386 0.0 0.0 28320 1640 ? S 2016 348:20 caRepeater
exttrig 5449 0.0 0.0 26992 1464 ? Ss Jun20 0:00 SCREEN
exttrig 5450 0.0 0.0 27416 8820 pts/8 Ss Jun20 0:00 /bin/bash
exttrig 6107 0.0 0.0 129772 9276 pts/8 Sl+ Jun20 33:49 python ./viralert_epics_ioc.py

This does not match what the wiki page says should be shown, but the wiki page was last updated in 2015, so maybe it is just not up to date? I suspect this is the reason that there was no verbal alert of the earlier GRB (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=38051)?

As an aside, the screen that I brought up locally shows a red block near end Y that reads 'RMS WD'. This is not present on the one currently on the wall monitor. Looking at the ETMY suspension MEDM screen, to my eye I do not see anything out of place (see attached).
The wiki page does not give instructions for manually starting the code.
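For the next time this comes up, the same check can be made without the grep command matching itself (a minor convenience, not from the wiki page):

    # list any exttrig/viralert processes; the brackets keep grep from matching its own command line
    ps aux | grep -E '[e]xttrig|[v]iralert'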
19:46 Fil to CER to pick up his notebook
19:51 Fil out
19:50 Beginning initial alignment
19:55 Sheila and Thomas to end stations to revert ETMX ESD cabling
20:04 Jason to Mid X
20:30 Jason back
20:45 Having trouble keeping the green arms locked once ALS WFS + CAM are engaged, even with good flashes to start. Found the green ITM camera image processing had stalled since h1script0 was rebooted.
21:00 Fire alarm
21:15 Guardian node errors as a result of the failed reverting of the morning's attempt to lock ALS on ETMY

Sheila and Thomas also went to EX to confirm that ESD EX is cabled up correctly. While snooping, they found that the C and D fault lights were on for the TMSX OSEM sat amps. Thinking this was a problem, we restarted the coil drivers twice (no change), then unplugged and reseated the CD-to-SatAmp cables at the sat amp (no change). Then we remembered that the TMS's second top mass signal chain only runs 2 OSEMs, on channels A and B, so C and D will always have fault lights on. All cables and power were restored, and TMSX returned to normal functionality.