J. Kissel

In an effort to finally nail down the mysterious "zero" around 50 [Hz] in the UIM (see LHO aLOG 24296), I've measured the TOP, UIM, and PUM drivers using three different measurement configurations. The hope is that, with this "matrix" of differences, we can find out what's going on with the UIM and see whether it's something specific to the driver, or specific to the OSEM chain upstream of the driver. From what I've seen watching the measurements go by on the SR785, both the TOP and PUM stages show zero-like behavior similar to the UIM's, though the TOP mass has pole-like behavior above the zero and eventually rolls off. Both the PUM and the UIM look as though they are rolling up to "infinity" by 10 [kHz], without even a phase shift indicating there might be a pole "soon" in frequency. More detailed analysis and plots to come.

The data has been committed to the repo here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Measurements/Electronics/
    EYPUMDriver/2016-01-05/TFSR785*.txt
    EYTOPDriver/2016-01-05/TFSR785*.txt
    EYUIMDriver/2016-01-05/TFSR785*.txt
where the key to each driver's measurement set is in the log files,
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Measurements/Electronics/
    EYPUMDriver/2016-01-05/2016-01-05_EYTOP_Measurement_log.txt
    EYTOPDriver/2016-01-05/2016-01-05_EYUIM_Measurement_log.txt
    EYUIMDriver/2016-01-05/2016-01-05_EYPUM_Measurement_log.txt

I also attach a whiteboard sketch that indicates each of the measurement configurations:

(0) [Not shown] The "reference" measurement, identical to the reference setup shown on page 2 of the 2nd attachment in LHO aLOG 18518. It is divided out of every transfer function to remove the response of the differential driver (see the sketch after this list).

(1) The response of the coil driver as it is typically measured on the bench by CDS. This uses a differential-to-single-ended receiver and the 40 [ohm] internal load (as I only found out later -- the documentation on this box, D1000931, leaves something to be desired), so the response differs from configuration (2) only by a scale factor of 2.

(2) The response of the coil driver as typically measured in the field when the OSEM and/or satellite amplifier is not available. The box (D1100278) is, as far as the coil chain is concerned, just a load of two 20 [ohm] resistors (40 [ohm] total).

(3) The "real life" scenario, where we measure the response of the driver with the full SatAmp and OSEM included as a load on the driver's output. This is the configuration in which the magic happens, apparently.
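As a concreteness aid, here is a minimal sketch (in Python) of the reference division described in configuration (0); the file names and the 3-column magnitude/phase export format are assumptions, not confirmed against the actual TFSR785*.txt files:

# A minimal sketch, assuming the SR785 exports 3-column ASCII
# (frequency [Hz], magnitude [dB], phase [deg]) and that both sweeps
# share the same frequency vector. File names are hypothetical.
import numpy as np

def load_sr785_tf(path):
    # Convert a dB/degree export into a complex transfer function.
    f, mag_db, ph_deg = np.loadtxt(path, unpack=True)
    return f, 10**(mag_db / 20) * np.exp(1j * np.deg2rad(ph_deg))

f, tf_meas = load_sr785_tf("TFSR785_config3.txt")    # e.g. a config (3) sweep
_, tf_ref  = load_sr785_tf("TFSR785_reference.txt")  # the config (0) reference

# Dividing out the reference removes the differential-driver response,
# leaving only the coil driver (plus its load) in the result.
driver_tf = tf_meas / tf_ref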
Since the IFO has decided to give up on Calibration Week, I've put together some .graffle diagrams of what I show in the whiteboard picture above, so that a future user and/or LLO can more easily replicate my results if need be (or if they don't trust what Evan G. is about to post). Also, a note for future me: the source code for these diagrams lives in
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Measurements/Electronics/EYTOPDriver/2016-01-05/
    CoilDriverChassisSetup_BenchTestSetup.graffle
    CoilDriverChassisSetup_DummyOsemChain.graffle
    CoilDriverChassisSetup_RealOsemChain.graffle
Summary:
I plotted the results of Jeff's measurements for the H1 End Y driver electronics for the top, UIM, and PUM masses. All show odd behavior at high frequencies that we cannot model as an ideal inductor.
Details:
For each measurement of a given mass's driver electronics (TOP, UIM, or PUM), I divided the measured BOSEM (for TOP/UIM) or AOSEM (for PUM) transfer functions (normalized) by the corresponding transfer function taken with a dummy 40 ohm resistor terminating the output of the drive electronics (also normalized). In addition to plotting the results of the measurements, I also investigated the high-frequency behavior. Nominally, we expect to observe the inductance of the BOSEMs or AOSEMs as a simple zero, i.e. a high-frequency dependence that goes as f. Instead, we observe different dependences for the BOSEMs of the top and UIM masses and the AOSEMs of the PUM (states 1 and 3). We attempted to fit some zero-pole models to these Bode plots by hand but were unable to replicate the high-frequency dependence (a sketch of the expected ideal behavior follows below).
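For reference, here is a hedged sketch of the single-zero model we would expect from an ideal coil inductance; the component values are illustrative placeholders, not measured:

# Expected ideal behavior: a coil of inductance L in series with total
# resistance R gives a simple zero at f0 = R / (2*pi*L), so the
# normalized ratio |OSEM-loaded TF / dummy-load TF| rises as f above f0
# and carries +90 deg of phase lead. R and L below are illustrative.
import numpy as np

R = 40.0                        # [ohm] series resistance (dummy-load value)
L = 10e-3                       # [H] illustrative OSEM coil inductance
f0 = R / (2 * np.pi * L)        # zero frequency of the ideal model

f = np.logspace(1, 4, 400)      # 10 Hz to 10 kHz
ideal_ratio = 1 + 1j * f / f0   # normalized simple-zero model

# The measured ratios instead keep rising toward 10 kHz with a slope and
# phase that neither this single zero nor hand-tuned zero-pole variants
# of it could reproduce.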
After notifying the operator I drove past the CS @ 00:02 and arrived at Y-Mid station @ 00:07.
Started filling CP3 @ 00:08: opened the exhaust check valve bypass valve, then opened the LLCV bypass valve 1/2 turn; started to notice flow @ 00:18 and closed the LLCV bypass valve @ 00:19.
5 minutes later I closed the exhaust check valve bypass valve.
Started driving back from Y-Mid @ 00:26, arrived at the CS @ 00:30.
TITLE: 1/5 DAY Shift: 16:00-00:00UTC (08:00-16:00PST), all times posted in UTC
STATE of H1: DOWN
Incoming Operator: Cheryl
Support: Jenne (locking commissioner), Kiwamu fixing Dark Michelson (yay!)
Quick Summary:
Maintenance was fairly busy, and part of it included calibration work, which is going to run until around 4:30. Toward the end of the shift, Jeff/Kiwamu gave us the OK to run an Initial Alignment. Gary T. (visiting from LLO) did the honor of doing the alignment (the only note here is that for Input Align, we took the XARM gain to 0.2 to lock the X arm)... oh, and the Dark Michelson locked up for the first time in weeks!
Leaving ISC_LOCK in DOWN while Jeff finishes up his ETMy coil driver work.
Maintenance/Shift Activities: (all times PST)
Jenne, Kiwamu,
We have spent some time today trying to fix the long-standing issue with MICH_DARK (which started on the 20th of December as far as I know, alog 24349), because we needed it fixed for a calibration measurement scheduled this week.
We finally fixed it and now MICH_DARK locks.
The reason it has not been locking seems to be excessively high sensing noise in ASAIR_A, which kept saturating the beam splitter's DACs (assuming that the digital filters and DACs of the beam splitter have been unchanged). I have checked the whitening filters and gain of ASAIR_A; they have been unchanged for at least the past 4 months or so. I don't have a quantitative comparison of the noise floor back then and now, so at this point we have no idea why the sensing noise increased.
Anyway, we have inserted two extra low-passes in order to shave off sensing noise above the unity-gain frequency (which is typically somewhere between 4 and 10 Hz). These low-passes fixed the issue. I have not changed any foton filters; I have recycled unused ones. The first filter resides in LSC-MICH FM5, and the second in BS_ISCINF FM6. In addition, I adjusted the gains of the MICH loop so that it smoothly grabs a fringe: the initial acquisition gain is now -333 and the final one is -500 (see the sketch below). We have edited the following guardian code, all of which is checked into svn: ALIGN_IFO.py, lscparams.py, and ISC_DRMI.py. The guardian was tested 5 or 6 times with the new settings and we have not seen a failure yet.
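For the record, here is a minimal sketch of what the gain handoff might look like in guardian code; the gains are from this entry, but the ezca channel names and filter-module paths are assumptions, not the actual edits (which live in ALIGN_IFO.py, lscparams.py, and ISC_DRMI.py):

# Hedged sketch only: channel and filter-module names are assumed.
MICH_ACQUIRE_GAIN = -333   # initial acquisition gain (from this entry)
MICH_FINAL_GAIN   = -500   # final gain (from this entry)

def grab_mich_fringe(ezca):
    # Acquire with the gentler gain so the loop smoothly grabs a fringe.
    ezca['LSC-MICH_GAIN'] = MICH_ACQUIRE_GAIN          # assumed channel name
    # ... wait here until the fringe is grabbed ...
    # Engage the recycled low-passes that shave off sensing noise above
    # the ~4-10 Hz unity-gain frequency.
    ezca.switch('LSC-MICH', 'FM5', 'ON')               # first extra low-pass
    ezca.switch('SUS-BS_M2_ISCINF_L', 'FM6', 'ON')     # assumed BS_ISCINF path
    ezca['LSC-MICH_GAIN'] = MICH_FINAL_GAIN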
Patrick, Jim, Dave:
we investigated the conlog crash at 16:00 PST Friday 31st Dec 2015. As Patrick logged, h1conlog1-master syslogged that the kernel applied a leap second at that time. Looking at other machines, we noticed that all systems which are NTP clients of the CDS NTP server (a Symmetricom SyncServer S250 unit with GPS antenna) similarly logged the leap second being applied, whereas machines not using this NTP server did not.
The details of the conlog crash: UNIX time (seconds since the 01 Jan 1970 epoch) repeats a second when a leap second is added. We had a noisy channel, monitored by conlog, that changed every second around this time. The result was conlog attempting to update the database with two events one second apart but with the same time stamp, which crashed conlog and alerted us to the problem (illustrated in the sketch below).
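To make the collision concrete, here is a small sketch; the key format is inferred from the syslog excerpt quoted later in this log, and the repeated nanosecond timestamp is taken from that same line:

# During a positive leap second, POSIX time repeats a second:
#   23:59:59 UTC -> 1451606399...
#   23:59:60 UTC -> 1451606400...  (the inserted leap second)
#   00:00:00 UTC -> 1451606400...  (POSIX time repeats)
# A channel changing once per second at the same sub-second offset thus
# produces two events with identical timestamps, and a primary key built
# from (channel name, nanosecond timestamp) collides.
events = {}
for utc, posix_ns in [("23:59:59", 1451606399000453212),
                      ("23:59:60", 1451606400000453212),   # leap second
                      ("00:00:00", 1451606400000453212)]:  # repeated stamp
    key = f"H1:SUS-SR3_M1_DITHER_P_OFFSET-{posix_ns}"
    if key in events:
        raise KeyError(f"Duplicate entry '{key}' (cf. MySQL error 1062)")
    events[key] = utc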
We then verified that all the clocks and computer times within CDS are correct, and indeed they are.
We logged into the Symmetricom NTP server and viewed the logs (attached). It shows that it reported the leap second addition on 30th June 2015 (correct) and on 31st December 2015 (incorrect).
We are assuming that all the computer clocks did leap by one second at 16:00:00 PST Friday 31st December 2015, but then skewed back to the correct time soon after.
We are planning to upgrade to a new NTP server soon and donate the old server to GC.
Pipes under the beam tube have been rerouted to allow access under the tube with a pallet jack. This relocation required a brief shutdown of the instrument air which caused some alarms. The control room was notified in advance of the shutdown. The instrument air is back to normal operation.
No new news here--Everyone chugging along. Attached is 40 days of the first pressure sensor on the Pump Station manifold and the control output to the motor.
Remarks: All four CS pumps have perfectly flatlined outputs (showing only one); no, this is not news. EX's excess noise continues in evidence; notice the ~15 days of quieter readings. At End Y, the daily and weekly glitches continue as before.
No indications of pump output loss.
Attached is a plot of the STS2 mass positions. Finally got them all centered (everything below 5000 cts).
Found that ETMY did not have the Binary I/O cable connected from the binary output to the STS2 interface, so it was not going to re-center remotely anyway. Vincent pulled that cable, but we don't quite understand the behavior: recentering does not work via the MEDM/cable path, while the signal-select and period-select toggling does work. We've left that cable unplugged and did the centering at ETMY via the front-panel button.
C. Cahillane

I have constructed the uncertainty budget of LHO at GPSTime = 1135136350. The kappa values are given in 11546.

A reminder on the differences between the calibration versions (according to me and my results):

C00 = Sensing, actuation, and digital filters given by the DARM model (I used the DARM params from GPSTime = 1127083151).
C01 = No kappa corrections, but static frequency dependence corrected.
C02 = kappa_tst and kappa_C corrected, as well as static frequency dependence. kappa_pu and cavity pole f_c NOT corrected.
C03 = "Perfect" calibration, i.e. all kappas and static frequency dependence corrected.

All comparisons between calibration versions have the C03 "perfect" calibration in the denominator (sketched below). C03 represents the best possible knowledge of the detector according to the calibration group; all the others have some known systematic errors.

For C01, strain magnitude uncertainty is below 7% and strain phase uncertainty is below 3.5 degrees. (See PDF 1, Plot 2)
For C01, maximum strain magnitude error over C03 is 0.95 and maximum strain phase error over C03 is below 3 degrees. (See PDF 1, Plot 1)
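For concreteness, here is a hedged sketch of what each comparison trace represents, written in standard DARM-model notation (the notation is mine, not taken from this entry):

% Each comparison has the C03 response in the denominator:
\[
  \frac{h_{\mathrm{C0}i}(f)}{h_{\mathrm{C03}}(f)}
  = \frac{R_{\mathrm{C0}i}(f)}{R_{\mathrm{C03}}(f)},
  \qquad
  R(f) = \frac{1 + A(f)\,D(f)\,C(f)}{C(f)},
\]
% where C is the sensing function, A the actuation, and D the digital
% filters; the versions differ only in which time-dependent corrections
% ($\kappa_{\mathrm{tst}}$, $\kappa_{\mathrm{pu}}$, $\kappa_C$, $f_c$)
% are applied before forming R.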
Here are the trends of the PSL data from the past 20 days. In response to Jeff K's request for more commentary on this data, I have conversed with the expert(s), and Jason agreed to provide more analysis.
After reviewing the trends I posted yesterday I determined that the ITMy and SR3 oplevs needed re-centering. This is now complete. The old and new pitch/yaw values for these oplevs are below.
|       | Old (µrad)     | New (µrad)     |
|       | Pitch  | Yaw   | Pitch  | Yaw   |
| ITMy  | -11.1  | -12.1 | -0.2   |  0.0  |
| SR3   | -13.8  | +5.2  | -0.2   | -0.1  |
This completes LHO WP #5676.
I reset the PSL 35W FE watchdog at 17:33 UTC (9:33 PST).
Ops Owl Summary: 08-16UTC
State of H1: down for maintenance
Shift Summary:
10:20 Lockloss due to earthquake in the Pacific.
12:30 Back up to observing
14:30 Lockloss from POP decay. Range was ratty, maybe due to bad alignment.
15:00 Start initial alignment. I had to touch TMSY to get the Y-arm back.
15:30 Richard to EX to work on webcam
15:45 JeffB to LVEA
J. Kissel, J. Warner Since the IFO is in such poor shape anyway with POP / alignment woes, I've suggested we just begin with the calibration measurements needed for today. Jim has brought the acquiring IFO to DOWN; I'm heading down to the Y-End to begin measuring the ETMY coil drivers, and the PCAL team should be in shortly and will go to the X-End to begin their standard calibration of the EX RX and TX PDs. Wish us luck!
Just got back to Observe after an earthquake in the Pacific knocked us down. Slightly more difficult than normal, as I lost lock a couple times at fairly high states (Engage ASC Part 3 and Coil Drivers). I noticed that every time I got to Engage ASC Part 3, the Y arm ALS power would start to drop. It would continue to roll off as I waited for ASC loops to converge. Might be an issue there.
LR behavior has changed over 150 days - trend attached.
Power spectrum of PR2 and SR2 to compare attached also.
I looked into the recent change in H1 behavior, where H1:LSC-POP_A_LF_OUT drops and causes lock loss, and I see alignment changes that could be contributing to this issue.
The shifting IMs in HAM2 cannot account for alignment changes in the 150 day trend, so there is something else at work here.
Changes in yaw over 150 days:
Changes in pitch, comparing PRM to PR2 (different chambers)
The change in PR2 yaw of +51urad is significantly larger than the changes in the PRM pitch and yaw, and PR2 pitch.
The changes in PR2 yaw may be a response to the cumulative effects of the smaller changes seen in IM1, IM2, and IM3.
Changes in IM pitch:
The change in IM3 yaw of -26urad is significantly larger than the other changes in pitch or yaw in IM1, IM2, and IM3.
H1 out of Observe at 01:16:37UTC:
Lockloss at 01:35:56UTC:
Summary:
Ops Eve Summary: 00-08UTC
State of H1: locked in Observe, range is 74Mpc
Fellow: Keita
Shift Summary:
The conlog process on h1conlog1-master failed soon after the UTC new year. I'm pretty sure it did the same last year, but I could not find an alog confirming this. I followed the wiki instructions for restarting the master process. I did initially try to mysqlcheck the databases, but abandoned that after 40 minutes. I started the conlog process on the master and configured it with the channel list. After a couple of minutes all the channels were connected and the queue size went down to zero. H1 was out of lock at the time due to an earthquake.
For next year's occurrence, here is the log file error this time around:
root@h1conlog1-master:~# grep conlog: /var/log/syslog
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.
This indicates that it tried to write an entry for H1:SUS-SR3_M1_DITHER_P_OFFSET twice with the same Unix time stamp of 1451606400.000453212. This corresponds to Fri, 01 Jan 2016 00:00:00 GMT. I'm guessing there was a leap second applied.
Of course, there was no actual leap second scheduled for Dec 31 2015, so we need to take a closer look at what happened here.
The previous line before the error reports the application of a leap second. I'm not sure why, since you are right, none were scheduled.

Dec 31 15:59:59 h1conlog1-master kernel: [14099669.303998] Clock: inserting leap second 23:59:60 UTC
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 301: process_cac_messages: MySQL Exception: Error: Duplicate entry 'H1:SUS-SR3_M1_DITHER_P_OFFSET-1451606400000453212-3' for key 'PRIMARY': Error code: 1062: SQLState: 23000: Exiting.
Dec 31 16:00:12 h1conlog1-master conlog: ../conlog.cpp: 331: process_cas: Exception: boost: mutex lock failed in pthread_mutex_lock: Invalid argument Exiting.