The complete result of this DQ shift can be found here.
Activity Log: All Times in UTC (PT)
00:00 (16:00) Take over from TJ
01:30 (17:30) Robert – Going to End-X and End-Y to set up for injections
02:00 (18:00) Robert – Back from end stations
02:33 (18:33) Relocked at NOMINAL_LOW_NOISE
02:42 (18:42) In Observing mode after clearing SDFs
08:00 (00:00) Turn over to Nutsinee
End of Shift Summary:
Title: 01/14/2016, Evening Shift 00:00 – 08:00 (16:00 – 00:00) All times in UTC (PT)
Support: Jenne, Evan H., Dave, Jim, TJ, Jeff K.
Incoming Operator: Nutsinee
Shift Detail Summary: After correcting the problem with the WD reset (see aLOG #24950), ran through initial alignment. The IFO relocked on the first attempt. Went to Observing mode after clearing several SDF notifications. The balance of the shift (5-plus hours) was spent in Observing mode. Environmental conditions remained favorable. No issues or problems to report.
There is a danger in using DTT together with NDS2, as described here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=22128
When you look at past data for a channel whose sampling rate was changed at some point, DTT can get confused and output a totally bogus calculation. If you are only looking at a spectrum, this is just a matter of a wrong frequency axis, but if you look at the coherence between a channel with this problem and a channel without it, the coherence is completely destroyed. I was recently hit by this behavior again and spent a day figuring it out.
In the first attachment, I was looking at the coherence between PEM magnetometers and DARM using DTT. On the bottom row left half, you can see that DARM and H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_QUAD_SUM_DQ are sometimes coherent with each other (BTW this cannot be magnetic coupling into HAM6, because the level of magnetic noise is too small according to Robert), but the individual X, Y and Z channels don't show any coherence (top row right half, middle row). The strange thing is that QUAD_SUM is made inside the front end by adding the squares of X, Y, and Z.
In the second attachment, I did the same measurement using Matlab, and the X coherence is actually larger than that of QUAD_SUM. So this is not as mysterious as the DTT plot made it seem.
The difference, it turns out, is that the sampling rate of the X, Y and Z channels was increased from 4096 Hz to 8192 Hz (but not that of SUM). DTT cannot handle this and assumes they were always 4096 Hz. If you look at the spectrum of, say, the X channel in DTT, you'll see that the first mains line peak is at 30 Hz, not 60 Hz.
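To make the failure mode concrete, here is a minimal sketch (not part of the original analysis) of how interpreting 8192 Hz data with an assumed 4096 Hz sampling rate halves every apparent frequency, so a 60 Hz mains line shows up at 30 Hz:

import numpy as np

# One second of a 60 Hz "mains" line sampled at the true rate of 8192 Hz.
fs_true = 8192
t = np.arange(fs_true) / fs_true
x = np.sin(2 * np.pi * 60 * t)

# A tool that wrongly assumes the old 4096 Hz rate builds its frequency axis
# from the wrong fs, so the peak lands at half the true frequency.
fs_assumed = 4096
spectrum = np.abs(np.fft.rfft(x))
f_assumed = np.fft.rfftfreq(len(x), d=1.0 / fs_assumed)
print("apparent line frequency: %.1f Hz" % f_assumed[np.argmax(spectrum)])
# prints 30.0 Hz instead of 60 Hz, i.e. the misplaced mains peak seen in DTT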
This is really annoying.
After settling down from the problems during the first half of the shift, we are in Observing mode. General environmental conditions are good. At this time there are no apparent problems to report.
Conclusion:
We can account for 0.25% of the discrepancy measured by Jeff. In addition, the Foton filter values for the H1 Y-end Pcal have been incorrect by 1.5% since the beginning of the run because of a change in the measured optical efficiency: the current calibration factors in the front-end filters are based on a measurement from May 2015, during which ETMY was misaligned. The current hypothesis is that the ETM alignment during the end-station calibration measurements can affect the measurements, so the misalignment could have changed the amount of light reaching the receiver module. Together (0.25% + 1.5% ≈ 1.75%), we believe these effects should resolve the 1.8% discrepancy measured recently. Evaluating the two epochs separately could give a better measure of the systematic uncertainties (the difference between the front-end filter values and the correct values) in each epoch, rather than a combined uncertainty.
These measurements should be repeated for L1.
Dave and Jim corrected the problem with the WD reset. Did an initial alignment and relocked with no apparent problems. Power is 21.4 W, with a range of 80 Mpc. Cleared a few SDF issues and set the intent bit to Observing.
Transition Summary: Title: 01/14/2016, Evening Shift 00:00 – 08:00 (16:00 – 00:00) All times in UTC (PT)
State of H1: IFO unlocked and OMC WD tripped. Working on fixing the WD trips (see Dave's aLOG), initial alignment, and relocking.
Outgoing Operator: TJ
Jeff K., Jeff B., Jenne, TJ, Jim, Dave:
Around 4pm PST TJ reported that the OMC had tripped and the watchdog could not be untripped. Jeff K. recommended a model restart. Unfortunately, due to a communication problem, we first mistakenly restarted the OMC model on the LSC front end (sorry OMC). Then we restarted the correct SUS-OMC model on SUSH56. This did not fix it. We then restarted all the models on SUSH56 (including the IOP). This did not fix it either. We then stopped all models and started only the IOP and SUS-SRM to do further debugging (in the meantime the SWWD on the IOP had tripped SEI for HAM5 and HAM6). After some debugging we found that the Perl script sus/common/scripts/wdreset_all.pl was throwing an error about not finding the Perl CA library. Jim tracked this down to a missing CaTools.pm Perl module in the userapps/release/guardian directory. It turns out this file was removed from the SVN repository way back on 2nd March 2015, and the LHO working directory was only updated this afternoon by Jenne and TJ. This all nicely ties in with the watchdog resets working last night but not this afternoon.
In the meantime we had manually reset the watchdogs for SUS-SRM/SR3/OMC and SEI HAM5,6 and set the SDF back to OBSERVE for SUSH56IOP, SUSSRM/SR3/OMC and OMC.
For now we have manually copied the CaTools.pm file into userapps/release/sus/common/scripts to get the watchdog reset script working again.
This raises an FRS:
A perl module which is used by the watchdog systems has been deprecated. The watchdog system should be changed to no longer use PERL and instead use PYTHON (or perhaps BASH for exceptionally simple scripts).
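As a rough sketch of what a Python replacement could look like (this is not the actual reset logic; the channel names and reset value below are placeholders, and it assumes the pyepics Channel Access bindings are available):

from epics import caput   # pyepics Channel Access bindings

# Hypothetical watchdog reset channels -- a real script would build this list
# from the installed SUS/SEI models rather than hard-coding it.
WD_RESET_CHANNELS = [
    "H1:SUS-SRM_WDMON_RESET",
    "H1:SUS-SR3_WDMON_RESET",
    "H1:SUS-OMC_WDMON_RESET",
]

def reset_watchdogs(channels):
    """Write a reset request to each watchdog channel via Channel Access."""
    for pv in channels:
        caput(pv, 1, wait=True)
        print("reset requested:", pv)

if __name__ == "__main__":
    reset_watchdogs(WD_RESET_CHANNELS)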
We experienced a seemingly identical occurrence of this issue at LLO last Wednesday (see LLO aLOG entry 24156). However, in addition to the SUS/SEI watchdog reset scripts, our initial alignment script was also affected, since it has Perl dependencies. It is still unknown how the symbolic link to CaTools.pm became broken at LLO; see #4180.
Stuart, it was broken because I updated that same folder when I was visiting LLO. I am at fault for the CaTools.pm links being broken at both sites, though I had no idea that simply updating the SVN could cause this.
Thanks for shedding light on this mystery! I would suspect that svn'ing up pushed the changes to deprecate the Perl module sooner than was intended.
Last night's burst injections were scheduled for times when H1 was down, so we're attempting to do them again. This time I've scheduled 5 sets of burst injections, the first set starting at 22:00 CT (06:00 UTC), with the sets spaced 2 hours apart and the last set starting at 06:00 CT (14:00 UTC). Within each set the injections are spaced 20 minutes apart. We're also including a BNS CBC injection between the sets of burst injections. All up there will be 30 injections, one every 20 minutes, starting at 22:00 CT (06:00 UTC) and ending at 07:40 CT (15:40 UTC). I'll also mention that transient hardware injections only go ahead when the IFO is in observation mode, so they won't interfere with any PEM measurements that may be happening at the time.
Here is the updated schedule:
1136872817 2 1.0 burst_GPS_76.259_
1136874017 2 1.0 burst_GPS_76.262_
1136875217 2 1.0 burst_GPS_76.263_
1136876417 2 1.0 burst_GPS_76.264_
1136877617 2 1.0 burst_GPS_76.266_
1136878817 1 1.0 coherentbns1_1135135335_
1136880017 2 1.0 burst_GPS_76.259_
1136881217 2 1.0 burst_GPS_76.262_
1136882417 2 1.0 burst_GPS_76.263_
1136883617 2 1.0 burst_GPS_76.264_
1136884817 2 1.0 burst_GPS_76.266_
1136886017 1 1.0 coherentbns1_1135135335_
1136887217 2 1.0 burst_GPS_76.259_
1136888417 2 1.0 burst_GPS_76.262_
1136889617 2 1.0 burst_GPS_76.263_
1136890817 2 1.0 burst_GPS_76.264_
1136892017 2 1.0 burst_GPS_76.266_
1136893217 1 1.0 coherentbns1_1135135335_
1136894417 2 1.0 burst_GPS_76.259_
1136895617 2 1.0 burst_GPS_76.262_
1136896817 2 1.0 burst_GPS_76.263_
1136898017 2 1.0 burst_GPS_76.264_
1136899217 2 1.0 burst_GPS_76.266_
1136900417 1 1.0 coherentbns1_1135135335_
1136901617 2 1.0 burst_GPS_76.259_
1136902817 2 1.0 burst_GPS_76.262_
1136904017 2 1.0 burst_GPS_76.263_
1136905217 2 1.0 burst_GPS_76.264_
1136906417 2 1.0 burst_GPS_76.266_
1136907617 1 1.0 coherentbns1_1135135335_
06:00 UTC = 00:00 CT = 22:00 PT
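For reference, each row of the schedule appears to be "GPS start time, injection type, scale factor, waveform tag". A quick sketch (a hypothetical helper, not part of the injection infrastructure) that parses the first group and checks the 20-minute spacing:

# First group of the schedule above, one injection per line.
schedule = """\
1136872817 2 1.0 burst_GPS_76.259_
1136874017 2 1.0 burst_GPS_76.262_
1136875217 2 1.0 burst_GPS_76.263_
1136876417 2 1.0 burst_GPS_76.264_
1136877617 2 1.0 burst_GPS_76.266_
1136878817 1 1.0 coherentbns1_1135135335_
"""

rows = [line.split() for line in schedule.strip().splitlines()]
gps_times = [int(r[0]) for r in rows]

# Consecutive injections should be 20 minutes (1200 s) apart.
gaps = [b - a for a, b in zip(gps_times, gps_times[1:])]
print("gaps (s):", gaps)                          # [1200, 1200, 1200, 1200, 1200]
print("all 20 min apart:", all(g == 1200 for g in gaps))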
TITLE: 1/14 Day Shift: 16:00-00:00UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Calibration ongoing
SHIFT SUMMARY: Calibration work for the majority of the day. Lockloss at 23:15 tripped the OMC WatchDog. It would not untrip so Dave had to restart the IOP, but this also tripped HAM5/6. Now SRM will not UNtrip. Dave and Jeff B are currently working to figure it out.
INCOMING OPERATOR: Jeff B
ACTIVITY LOG:
OMC WD would not UNtrip; Dave had to restart IOPSUSH56 to clear this, which also tripped HAM5 and HAM6.
We are currently untripping everything.
The SSD RAID for h1tw0 has failed, so h1tw0 is down until we can get it fixed. FRS 4228
For operators, you will notice that h1tw0 will be WHITE-boxed on the DAQ Detail medm (which you should be checking every shift in the Ops Checksheet).
I reduced one heater, HC1B, from 10 mA to 9 mA control current at 1:47 local time.
I have revamped my initial alignment lazy script and made a new medm to help operators and commissioners bring up what they need a bit faster.
The Script:
Location: /userapps/.../isc/h1/scripts/Init_align_lazy.py
Action: This will bring up 5 StripTools (XARM_GREEN_WFS.stp, YARM_GREEN_WFS.stp, PITCH_ASC_INITIAL_ALIGNMENT.stp, YAW_ASC_INITIAL_ALIGNMENT.stp, initall_alignment.stp) as well as the new medm I made (INIT_ALIGN.adl). The script will arrange them on the left monitor in a 3x2 grid. They can then be moved if so desired. If any of these windows are open in another workspace, the window manager will get confused and move that one instead, so keep that in mind if they aren't in their proper positions.
I have aliased this as 'initial_alignment' for the "ops" account.
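A minimal sketch of the idea behind the script (the launch commands here are assumptions for illustration, not the script's actual contents):

import subprocess

# StripTool templates and the medm screen to bring up (paths abbreviated;
# the real files live under the userapps directories quoted above).
STRIPTOOLS = [
    "XARM_GREEN_WFS.stp",
    "YARM_GREEN_WFS.stp",
    "PITCH_ASC_INITIAL_ALIGNMENT.stp",
    "YAW_ASC_INITIAL_ALIGNMENT.stp",
    "initall_alignment.stp",
]
MEDM_SCREEN = "INIT_ALIGN.adl"

# Launch each StripTool and the medm screen as background processes;
# the real script additionally arranges the windows in a 3x2 grid.
procs = [subprocess.Popen(["StripTool", stp]) for stp in STRIPTOOLS]
procs.append(subprocess.Popen(["medm", "-x", MEDM_SCREEN]))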
The medm:
Location: /userapps/.../isc/h1/medm/INIT_ALIGN.adl
Combines the ALIGN_IFO, X and Y ALS Guardians with the alignment sliders for ETMX, ETMY, ITMX, ITMY, BS, and PR3. These are the most commonly adjusted optics for IA.
I have not linked this medm off of the sitemap yet, but if/when I do it will most likely be under OPS.
Comments and questions are always welcome.
Jenne noticed the control room symptoms of the ETMX ISI starting to ring up (H1:IMC-F_OUT16 on our Tidal.stp will begin to oscillate [screenshot attached], along with the ASC control signals). I brought up the template to watch it and, sure enough, ISI ETMX X was ringing up. I switched the X direction blend to the 90 mHz blend and it immediately settled down.
I'm attaching screenshots of the control room environment tools, which will hopefully help.
TITLE: 1/14 Day Shift 16:00-00:00UTC (08:00-16:00 PDT), all times posted in UTC
STATE Of H1: Observing at 78Mpc for 2hrs
OUTGOING OPERATOR: Travis
QUICK SUMMARY: wind <10 mph, useism 0.4 um/s, CWinj running. More calibration measurements to come today.
Title: 1/14 Owl Shift 8:00-16:00 UTC (0:00-8:00 PST). All times in UTC.
State of H1: Observing
Shift Summary: Unlocked at the beginning of my shift. After an IA, it locked up pretty quickly. Then there was a lockloss due to an EQ. After the EQ rang down, we stumbled through the CARM section a few times before getting back to NLN.
Incoming operator: TJ
Activity log:
9:27 Locked at NLN
9:30 cleared ETMx TIM error
9:32 Observing
10:06 Set to Commissioning to tweak TMS in an attempt to get POP power higher (it wasn't drifting down, just not as high as usual, 15800). Unsuccessful.
10:13 Observing
11:42 Lockloss. EQ?
13:50 Locked NLN, attempting to tweak POP power again
14:08 Another unsuccessful POP attempt. I did however manage to ring up the ASC loops like crazy! Waiting for them to ring down.
14:21 Observing
I have been silently checking the signal chain of the REFLAIR and POPAIR RFPDs using the AM laser (a.k.a. PD calibrator) to make sure that they are functioning as expected.
Summary
The RF frequency of the AM modulation was adjusted in each measurement such that the demodulated IF signal was below 50 Hz.
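In other words, the AM drive was placed within a few tens of Hz of each demodulator's LO so that the beat note shows up as a near-DC signal. A toy illustration (the LO value below is a made-up placeholder, not one of the frequencies actually used):

# Pick an AM drive frequency close to the demod LO so the demodulated (IF)
# beat note lands below 50 Hz.
f_lo = 9.1e6          # Hz, hypothetical demodulator LO frequency
f_am = f_lo + 20.0    # Hz, AM laser drive frequency chosen 20 Hz away

f_if = abs(f_am - f_lo)
assert f_if < 50.0
print("demodulated IF frequency: %.1f Hz" % f_if)   # 20.0 Hz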
Calibration of the amplitude modulation depth
We recalibrated the AM laser.
The current setting of the laser was changed recently because we opened up the current driver when we thought the laser diode had died in early December. The laser head and its current driver were then sent to Rich at Caltech for extensive testing, although the laser magically fixed itself and he didn't find anything wrong. So this was the first time we used the AM laser since it came back. Because of that mysterious event, I wanted to recalibrate the laser. First, Yuta and I measured the power to be 2 mW with an Ophir Vega, without the attenuation filter. Then we measured the modulation depth for the amplitude modulation using a New Focus 1611 as a reference.
The new calibration for the amplitude modulation is:
P_am = 5.13 mW x (P_dc / 1 mW) * (1 V / V_drive)
where P_dc is the laser power at DC and V_drive is the drive voltage when driven by a 50 Ohm source. For example, if one puts this laser on a PD which then shows a DC laser power of, say, 2 mW, the AM coefficient is now 5.13 mW x (2 mW / 1 mW) / V_drive = 10.26 mW/V_drive.
REFLAIR_A_RF9 (S1203919)
Remarks:
The signal chain is OK. The PD response is smaller by 15% for some reason.
It seems as if the transimpedance is smaller by 15% than what had been measured at Caltech (LIGO-S1203919). The cable loss from the RFPD to the rack was measured to be 0.47 dB. Be aware that the demod gain is half that of the quad I/Q demodulator because this is a dual-channel demod (see E1100044). The demod conversion gain is assumed to be 10.9 according to LIGO-F1100004-v4.
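For bookkeeping, the comparisons in these entries are just products of linear factors expressed in dB. A small sketch of the conversions (it treats the quoted dB values as amplitude ratios, which is an assumption):

import math

def to_db(ratio):
    """Express an amplitude ratio in dB (20*log10)."""
    return 20.0 * math.log10(ratio)

def from_db(db):
    """Convert dB back to an amplitude ratio."""
    return 10.0 ** (db / 20.0)

# Numbers quoted in this entry:
print("0.47 dB cable loss -> factor %.3f" % from_db(-0.47))   # ~0.947
print("15%% low response  -> %.2f dB" % to_db(0.85))          # ~-1.41 dB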
REFLAIR_A_RF45 (S1203919)
Remarks:
The signal chain is healthy.
We found a cable loss of about 1.5 dB. The measurements agree excellently with the loss-included expectation.
POPAIR_A_RF9 (S1300521)
Remarks:
The signal chain is healthy.
The measurement suggests that there is loss of 1 dB somewhere. I didn't measure the cable loss this time.
POPAIR_A_RF45 (S1300521)
Remarks:
The signal chain is OK, though the loss seems a bit too high.
The measurement suggests a possible loss of 2.6 dB somewhere. I didn't measure the cable loss.
REFLAIR_B_RF27 (S1200234)
Remarks:
The signal gain is bigger than the expectation by a factor of 2.3.
REFLAIR_B_RF135 (S1200234)
Remarks:
The signal gain is bigger than the expectation by a factor of 1.5.
POPAIR_B_RF18 (S1200236)
Remarks:
The signal gain is bigger than the expectation by a factor of 2.3.
POPAIR_B_RF90 (S1200236)
Remarks:
The signal gain matches the expected value, but I don't believe this.
There was a typo in the calibration formula above. The incorrect version was:
P_am = 5.13 mW x (P_dc / 1 mW) x (1 V / V_drive)
The correct version is:
P_am = 5.13 mW x (P_dc / 1 mW) x (V_drive / 1 V)
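A quick numeric check of the corrected formula (a sketch only, re-doing the worked example from the entry above):

def p_am_mw(p_dc_mw, v_drive_v):
    """Amplitude modulation in mW from the corrected calibration formula."""
    return 5.13 * (p_dc_mw / 1.0) * (v_drive_v / 1.0)

# A PD reading 2 mW of DC light gives an AM coefficient of
# 5.13 * 2 = 10.26 mW per volt of drive.
print(p_am_mw(2.0, 1.0))   # 10.26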
For 27 MHz and 136.5 MHz, the RF gains are +19.8 dB and +50.7 dB, respectively (see S1400079).
The response of the BBPD isn't really flat over all frequencies. See D1002969.
The description in D1002969 is for the initial version (the schematic seems up-to-date). The latest version has the RF performance as attached.
This is a follow up of the calibration measurements for REFLAIR_B and POPAIR_B.
I have updated the expected signal gains for these photodetector chains using the more realistic gains that Koji gave (see his comments above). Now all the values make sense. Note that I did not perform any new measurements.
In the following calculations, the quantities in red represent the updated parameters.
REFLAIR_B_RF27(S1200234)
Remarks:
The signal chain is healthy. There is loss of 0.92 dB somewhere.
REFLAIR_B_RF135(S1200234)
Remarks:
The signal chain is OK. There is loss of 3.9 dB somewhere.
POPAIR_B_RF18 (S1200236)
Remarks:
The signal chain is healthy. The signal was bigger than expected by 9%.
POPAIR_B_RF90 (S1200236)
Remarks:
The signal chain is healthy. There is loss of 1.2 dB somewhere.
From these measurements, we can use POPAIR to infer the calibration for POP.
I looked at a recent lock acquisition while the interferometer was trying to engage the outer ISS loop. The LSC is relatively stable during this time, and the POP beam diverter is still open.
After undoing whitening gain and digital gain (2 ct/ct for POPAIR9/45, and 32 ct/ct for POP9/45), we find the following TFs:
This implies calibrations of 1.7×10^6 ct/W for POP9 and 1.8×10^6 ct/W for POP45.
There's a factor of 4 difference in power between POP and POPAIR (17 mW versus 68 mW with a PSL power of 23 W), so the values I gave above are off by a factor of 4. The demod gains should be 6.4×10^6 ct/W for POP9 and 7.2×10^6 ct/W for POP45.