After notifying the operator I drove past the CS @ 00:02 and arrived at Y-Mid station @ 00:07.
Started filling CP3 @ 00:08: opened the exhaust check valve bypass valve, then opened the LLCV bypass valve 1/2 turn; started to notice flow @ 00:18 and closed the LLCV bypass valve @ 00:19.
5 minutes later I closed the exhaust check valve bypass valve.
Started driving back from Y-Mid @ 00:26, arrived at the CS @ 00:30.
TITLE: 1/5 DAY Shift: 16:00-00:00UTC (08:00-16:00PST), all times posted in UTC
STATE of H1: DOWN
Incoming Operator: Cheryl
Support: Jenne (locking commissioner), Kiwamu fixing Dark Michelson (yay!)
Quick Summary:
Maintenance was fairly busy, and part of it included Calibration work, which is going to run to around 4:30. Toward the end of the shift, Jeff/Kiwamu gave us the OK to run an Initial Alignment. Gary T. (visiting from LLO) did the honors of doing the alignment (the only note here is that for Input Align, we took the XARM gain to 0.2 to lock the X-arm)... oh, and Dark Michelson locked up for the first time in weeks!
Leaving ISC_LOCK in DOWN while Jeff finishes up his ETMy coil driver work.
Maintenance/Shift Activities: (all times PST)
Jenne, Kiwamu,
We have spent some time today trying to fix the long-standing issue with MICH_DARK (which started on the 20th of December as far as I know; alog 24349), because we needed it fixed for a calibration measurement scheduled this week.
We finally fixed it and now MICH_DARK locks.
The reason it has not been locking seems to be excessively high sensing noise in ASAIR_A, which kept saturating the beam splitter's DACs (assuming that the digital filters and DACs of the beam splitter are unchanged). I have checked the whitening filters and gain of ASAIR_A, and they have been unchanged for at least the past 4 months or so. I don't have a quantitative comparison of the noise floor then and now, so at this point we have no idea why the sensing noise increased.
Anyway, we have inserted two extra low-passes in order to shave off sensing noise above the unity gain frequency (which is typically somewhere between 4 and 10 Hz). These low-passes fixed the issue. I have not changed any foton filters; I have recycled unused ones. The first filter resides in LSC-MICH FM5 and the second in BS_ISCINF FM6. In addition, I adjusted the gains of the MICH loop so that it smoothly grabs a fringe: the initial acquisition gain is now -333 and the final one -500. We have edited the following guardian code, all of which is checked into svn: ALIGN_IFO.py, lscparams.py, and ISC_DRMI.py. The guardian was tested 5 or 6 times with the new settings and we have not seen a failure yet.
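As a rough illustration of why a low-pass above the ~4-10 Hz unity gain frequency suppresses sensing noise without affecting the loop, here is a sketch of the magnitude response of a 2nd-order Butterworth low-pass (the corner frequency and filter shape are hypothetical; the actual recycled foton filters are not quoted in this entry):

```python
import math

# Sketch: magnitude of a 2nd-order Butterworth low-pass with a
# hypothetical 20 Hz corner.  |H(f)| = 1/sqrt(1 + (f/fc)^(2n)) for
# order n, so the response is flat well below the corner (leaving the
# loop near the 4-10 Hz UGF untouched) and rolls off above it.
def lowpass_mag_db(f, f_corner=20.0, order=2):
    return -10.0 * math.log10(1.0 + (f / f_corner) ** (2 * order))

for f in (1.0, 10.0, 100.0):
    print(f"{f:6.1f} Hz: {lowpass_mag_db(f):7.2f} dB")
```

At 1 Hz the attenuation is negligible, while at 100 Hz a 2nd-order filter with this corner already gives roughly 28 dB of suppression of the sensing noise reaching the DACs.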
Patrick, Jim, Dave:
we investigated the conlog crash at 16:00 PST Thursday 31st Dec 2015. As Patrick logged, h1conlog1-master syslogged that the kernel applied a leap second at that time. Looking at other machines, we noticed that all systems which are NTP clients of the CDS NTP server (a Symmetricom SyncServer S250 unit with a GPS antenna) similarly logged the leap second being applied, whereas machines not using this NTP server did not.
The details of the conlog crash: UNIX time (seconds since the 01 Jan 1970 epoch) repeats a second when a leap second is added. A noisy channel monitored by conlog changed every second around this time, so conlog attempted to update the database with two events one second apart but with the same time stamp. This crashed conlog and alerted us to the problem.
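The failure mode can be sketched as follows. The schema and key layout here are illustrative, not conlog's actual ones; the point is only that a primary key built from (channel, timestamp) collides when a leap second makes the same UNIX second occur twice:

```python
# Sketch of the conlog failure: a (channel, timestamp) primary key
# collides when a leap second repeats a UNIX second.  This is an
# illustrative model, not conlog's real schema.
def insert_event(db, channel, stamp, value):
    key = (channel, stamp)
    if key in db:
        # MySQL would report error 1062 (duplicate entry) here.
        raise KeyError(f"Duplicate entry {key} for key 'PRIMARY'")
    db[key] = value

db = {}
insert_event(db, "H1:SUS-SR3_M1_DITHER_P_OFFSET", 1451606399, 0.1)
insert_event(db, "H1:SUS-SR3_M1_DITHER_P_OFFSET", 1451606400, 0.2)  # 23:59:60 leap second
try:
    # 00:00:00 repeats the same UNIX stamp -> duplicate primary key
    insert_event(db, "H1:SUS-SR3_M1_DITHER_P_OFFSET", 1451606400, 0.3)
except KeyError as e:
    print("conlog would exit here:", e)
```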
We then verified that all the clocks and computer times within CDS have the correct time, and they are indeed correct.
We logged into the Symmetricom NTP server and viewed the logs (attached). They show that it reported a leap second addition on 30th June 2015 (correct) and on 31st December 2015 (incorrect).
We are assuming that all the computer clocks did leap by one second at 16:00:00 PST Thursday 31st December 2015, but then skewed back to the correct time soon after.
We are planning to upgrade to a new NTP server soon and donate the old server to GC.
Pipes under the beam tube have been rerouted to allow access under the tube with a pallet jack. This relocation required a brief shutdown of the instrument air which caused some alarms. The control room was notified in advance of the shutdown. The instrument air is back to normal operation.
No new news here--Everyone chugging along. Attached is 40 days of the first pressure sensor on the Pump Station manifold and the control output to the motor.
Remarks: All four CS pumps have a perfectly flatlined output (showing only one); no, this is not news. EX's excess noise continues in evidence; notice the ~15 days of quieter readings. At EndY, the daily and weekly glitches continue as before.
No indications of pump output loss.
Attached is a plot of the STS2's Mass Positions. Finally got them all centered (something below 5000cts)
Found that ETMY did not have the Binary I/O cable connected from the binary output to the STS2 interface, so it was not going to re-center remotely anyway. Vincent pulled that cable, but we don't quite understand why. I can tell you, though, that recentering does not work via the MEDM/cable path, while the signal select and period select toggling does work. We've left that cable unplugged and did the centering at ETMY via the front-panel button.
C. Cahillane

I have constructed the uncertainty budget of LHO at GPSTime = 1135136350. The kappa values are given at 11546.

A reminder on the differences between the calibration versions (according to me and my results):

C00 = Sensing, actuation, and digital filters given by DARMmodel (I used the DARMparams from GPSTime = 1127083151).
C01 = No kappa corrections, but static frequency dependence corrected.
C02 = kappa_tst and kappa_C corrected, as well as static frequency dependence. kappa_pu and cavity pole f_c NOT corrected.
C03 = "Perfect" calibration, i.e. all kappas and static frequency dependence corrected.

All comparisons between calibration versions have the C03 "Perfect" calibration in the denominator. C03 represents the best possible knowledge of the detector according to the calibration group; all others have some known systematic errors.

For C01, strain magnitude uncertainty is below 7% and strain phase uncertainty is below 3.5 degrees. (See PDF 1 Plot 2.)
For C01, the maximum strain magnitude error over C03 is 0.95, and the maximum strain phase error over C03 is below 3 degrees. (See PDF 1 Plot 1.)
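For readers unfamiliar with the kappa notation, the corrections above act on the DARM response model, which can be written schematically as follows (my sketch of the standard form; parameter values and exact factorization are not quoted from this entry):

```latex
% Schematic DARM model (illustrative notation).  The time-dependent
% correction factors are the kappas: \kappa_C and the cavity pole f_c
% scale the sensing function C, while \kappa_{tst} and \kappa_{pu}
% scale the two actuation paths in A.  D is the digital filter.
\[
  C(f;t) = \frac{\kappa_C(t)\,C_0(f)}{1 + i f/f_c(t)}, \qquad
  A(f;t) = \kappa_{\mathrm{tst}}(t)\,A_{\mathrm{tst}}(f)
         + \kappa_{\mathrm{pu}}(t)\,A_{\mathrm{pu}}(f)
\]
\[
  R(f;t) = \frac{1 + C(f;t)\,D(f)\,A(f;t)}{C(f;t)}
\]
```

Each Cxx version applies some subset of these corrections, and the quoted errors compare a partially corrected response to the fully corrected C03 one.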
Here are the trends of the PSL data from the past 20 days. In response to Jeff K's request for more commentary on this data, I have conversed with the expert(s), and Jason agreed to provide more analysis.
After reviewing the trends I posted yesterday I determined that the ITMy and SR3 oplevs needed re-centering. This is now complete. The old and new pitch/yaw values for these oplevs are below.
|      | Old Pitch (µrad) | Old Yaw (µrad) | New Pitch (µrad) | New Yaw (µrad) |
|------|------------------|----------------|------------------|----------------|
| ITMy | -11.1            | -12.1          | -0.2             | 0.0            |
| SR3  | -13.8            | +5.2           | -0.2             | -0.1           |
This completes LHO WP #5676.
I reset the PSL 35W FE watchdog at 17:33 UTC (9:33 PST).
Ops Owl Summary: 08-16UTC
State of H1: down for maintenance
Shift Summary:
10:20 Lockloss due to earthquake in the Pacific.
12:30 Back up to observing
14:30 Lockloss from POP decay. Range was ratty, maybe due to bad alignment.
15:00 Start initial alignment. I had to touch TMSY to get the Y-arm back.
15:30 Richard to EX to work on webcam
15:45 JeffB to LVEA
J. Kissel, J. Warner

Since the IFO is in such poor shape anyway with POP / alignment woes, I've suggested we just begin with the calibration measurements needed for today. Jim has brought the acquiring IFO to DOWN, I'm heading down to the Y-End to begin measuring ETMY coil drivers, and the PCAL team should be in shortly and will go to the X-End to begin their standard calibration of the EX RX and TX PDs. Wish us luck!
Just got back to Observe after an earthquake in the Pacific knocked us down. Slightly more difficult than normal, as I lost lock a couple times at fairly high states (Engage ASC Part 3 and Coil Drivers). I noticed that every time I got to Engage ASC Part 3, the Y arm ALS power would start to drop. It would continue to roll off as I waited for ASC loops to converge. Might be an issue there.
LR behavior has changed over 150 days - trend attached.
Power spectrum of PR2 and SR2 to compare attached also.
I looked into the recent change in H1 behavior, with H1:LSC-POP_A_LF_OUT dropping and causing lock loss, and see alignment changes that could be contributing to this issue.
The shifting IMs in HAM2 cannot account for alignment changes in the 150 day trend, so there is something else at work here.
Changes in yaw over 150 days:
Changes in pitch, comparing PRM to PR2 (different chambers)
The change in PR2 yaw of +51urad is significantly larger than the changes in the PRM pitch and yaw, and PR2 pitch.
The changes in PR2 yaw may be a response to the cumulative effects of the smaller changes seen in IM1, IM2, and IM3.
Changes in IM pitch:
The change in IM3 yaw of -26 µrad is significantly larger than the other changes in pitch or yaw across IM1, IM2, and IM3.
H1 out of Observe at 01:16:37UTC:
Lockloss at 01:35:56UTC:
Summary:
Ops Eve Summary: 00-08UTC
State of H1: locked in Observe, range is 74Mpc
Fellow: Keita
Shift Summary:
Executive summary: In regard to narrow lines, the (nearly) full O1 H1 data set is little changed from what was reported for the first week's data: a pervasive 16-Hz comb persists throughout the CW search band (below 2000 Hz), accompanied by a much weaker and more sporadic 8-Hz comb; there remain several distinct 1-Hz and nearly-1-Hz combs below 140 Hz, along with other sporadic combs. The 1459.5 hours of 30-minute FScan SFTs used here span from September 18 to the morning of January 3. The improved statistics make weaker and finer structures more visible than in the first week's data. As a result, many new singlet lines have been tagged, and it has become apparent that some previously marked singlets actually belong to newly spotted comb structures. The improved statistics also make it more apparent that the originally spotted combs span a broader bandwidth than marked before.

Details: Using 1459.5 hours of FScan-generated, Hann-windowed, 30-minute SFTs, I have gone through the first 2000 Hz of the DARM displacement spectrum (CW search band) to identify lines that could contaminate CW searches. This study is very similar to prior studies of ER7 data, ER8 data, and the first week of O1 data, but for completeness I will repeat some earlier findings below. Some sample displacement amplitude spectra are attached directly below, and more extensive sets of spectra are attached in a zipped file. As usual, the spectra look worse than they really are because single-bin lines (0.5 mHz wide) appear disproportionately wide in the graphics. A flat-file line list is attached with the same alphabetic coding as in the figures.

Findings:
In week 1 Keith identified a comb-on-comb (labeled K, see attached plot), fine spacing 0.08842 Hz, which shows up sporadically at around 77, 154, and 231 Hz. We found it in a large group of channels, centered at the INPUTOPTICS/SUS-BS/SUS-ITM (see full attached list). It remains clearly visible (especially at 77 Hz) in those channels until week 5 of O1, during which it disappears from all of them in all three regions (see attached example). Therefore, it seems likely that its presence in the full O1 data is an artifact from the first four weeks.
I recently re-analyzed this data while testing a comb-finding algorithm, and in the process found a new comb which accounts for several peaks marked as singlets in Keith's original post. This comb has a 2.040388 Hz spacing, with visible harmonics from 9th (18.3635 Hz) to 38th (77.5347 Hz). The code I used, and its docs, can be found on gitlab (requires login).
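The quoted harmonic endpoints follow directly from the comb spacing; as a quick sanity check (this is just arithmetic on the numbers above, not the comb-finding algorithm itself):

```python
# Comb reported above: fundamental spacing 2.040388 Hz, with visible
# harmonics from the 9th through the 38th.  Recompute the endpoints
# quoted in the log (18.3635 Hz and 77.5347 Hz).
spacing = 2.040388  # Hz
harmonics = {n: n * spacing for n in range(9, 39)}

print(f"9th:  {harmonics[9]:.4f} Hz")   # log quotes 18.3635 Hz
print(f"38th: {harmonics[38]:.4f} Hz")  # log quotes 77.5347 Hz
```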
The conlog process on h1conlog1-master failed soon after the UTC new year. I'm pretty sure it did the same last year, but I could not find an alog confirming this. I followed the wiki instructions for restarting the master process. I initially tried to mysqlcheck the databases, but abandoned that after 40 minutes. I then started the conlog process on the master and configured it with the channel list. After a couple of minutes all the channels were connected and the queue size went down to zero. H1 was out of lock at the time due to an earthquake.
For next year's occurrence, here is the log file error from this time around:
root@h1conlog1-master:~# grep conlog: /var/log/syslog
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.
This indicates that it tried to write an entry for H1:SUS-SR3_M1_DITHER_P_OFFSET twice with the same Unix time stamp of 1451606400.000453212, which corresponds to Fri, 01 Jan 2016 00:00:00 GMT. I'm guessing a leap second was applied.
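The timestamp-to-date conversion is easy to verify; the integer part of the stamp lands exactly on the UTC new year:

```python
from datetime import datetime, timezone

# Integer part of the duplicated stamp from the error message:
# 1451606400.000453212 seconds since the epoch.
stamp = 1451606400
t = datetime.fromtimestamp(stamp, tz=timezone.utc)
print(t.isoformat())  # 2016-01-01T00:00:00+00:00
```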
Of course, there was no actual leap second scheduled for Dec 31 2015, so we need to take a closer look at what happened here.
The previous line before the error reports the application of a leap second. I'm not sure why, since you are right, none were scheduled.

Dec 31 15:59:59 h1conlog1-master kernel: [14099669.303998] Clock: inserting leap second 23:59:60 UTC
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.