TITLE: 11/03 [EVE Shift]: 00:00-08:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE Of H1: Lock acquisition
OUTGOING OPERATOR: Jeff
QUICK SUMMARY: Lights appear off in the LVEA, PSL enclosure, end X, end Y, and mid X. I cannot tell from the camera whether they are off at mid Y. Winds are less than 10 mph but appear to be increasing. ISI blends are at Quite_90. The earthquake seismic band is between 0.01 and 0.02 um/s. Microseism is between 0.1 and 0.3 um/s. Jeff has been having trouble with the ITMY bounce and roll modes. He has been stopping at REDUCE_CARM_OFFSET_MORE to allow Jenne and Evan to investigate.
Following Keith's LLO install, I have configured monit on h1fescript0 to automatically start the GraceDB notification system on system reboot, and to restart it if it stops running. Here are the details:
Created a new exttrig account local to h1fescript0. Currently this has the standard controls password.
In the exttrig home directory, installed the script run_ext_alert.sh.
Installed a start/stop script in /etc/init.d/ext_alert.
Installed monit and mailx on the machine.
Configured monit via /etc/monit/monitrc and /etc/monit/conf.d/monit_ext_alert.
Changed the ownership of the cert files from user controls to user exttrig.
Tested by killing the ext_alert process and checking that monit restarted it.
I have removed the obsolete startup instructions in the [[ExternalAlertNotification]] wiki page
This closes FRS3415.
Activity Log: All Times in UTC (PT)

15:51 (07:51) Norco – LN2 delivery to Mid-Y
16:00 (08:00) Take over from Jim
16:00 (08:00) Bubba – Greasing fans in the LVEA
16:00 (08:00) Ken – Working on solar panel on X-Arm
16:00 (08:00) Joe – Removing grouting forms from HAM1
16:15 (08:15) Christina & Karen – Cleaning at End-Y
16:15 (08:15) Jodi & Mitchel – Going to Mid-X to unload 1-Ton
16:18 (08:18) Filiberto & Leslie – Working on Temp sensors at BSC1 & BSC3
16:24 (08:24) Richard – Going into LVEA to check Beckhoff cabling
16:30 (08:30) Bubba – Finished in the LVEA, going to End-X then Mid-X to grease fans
16:46 (08:46) Richard – Out of LVEA – Going to End-Y
16:50 (08:50) Restart GraceDB
16:52 (08:52) Norco – LN2 delivery to Mid-X
17:00 (09:00) Christina & Karen – Finished at End-Y – Going to End-X
17:04 (09:04) Jason – Finished with OpLev survey in the LVEA – Going to End-X
17:06 (09:06) Kyle & Gerardo – Going to Mid-Y, Mid-X, and End-X, moving vacuum equipment
17:09 (09:09) Nutsinee – Going into LVEA to work on CO2Y table
17:14 (09:14) Jodi & Mitchel – Done with storage moves at Mid-X
17:15 (09:15) Richard – Back from End-Y
17:16 (09:16) Hugh & Richard – Going into CER to work on ITM-Y Coil Driver
17:20 (09:20) Bubba – Finished with fans on Y arm – Going to X-Arm
17:21 (09:21) Jason – Finished with survey at End-X – Going to End-Y
17:22 (09:22) Keita – Going into LVEA to check transimpedance gain for ACB diode amps
17:45 (09:45) Restart GraceDB
17:48 (09:48) Christina & Karen – Finished at End-X – Going to the LVEA
17:49 (09:49) Richard – Going into CER to test ITM-Y Coil Driver with Hugh
17:50 (09:50) Jason – Back from End-Y – Finished with OpLev survey
18:10 (10:10) US Linen on site
18:25 (10:25) Nutsinee – Out of LVEA – Going to End-Y to restart PCal camera controller
18:30 (10:30) Richard & Ken – Going to work on solar panel on Y-Arm
18:33 (10:33) Peter – Going into the LVEA
18:35 (10:35) Keita – Finished at CS – Going to End-X and then End-Y
18:40 (10:40) Peter – Out of the LVEA
18:45 (10:45) Hugh – Finished with Coil Driver work
18:45 (10:45) Hugh – Going to end stations to check HEPI fluid levels
19:05 (11:05) Nutsinee – Back from end stations
19:11 (11:11) KingSoft – Going to RO to check water system
19:14 (11:14) Richard & Ken – Going to solar panel on Y-Arm
19:16 (11:16) Hugh – Back from End Stations HEPI check – Checking the CS HEPIs
19:19 (11:19) Bubba – Finished greasing fans
19:20 (11:20) Nutsinee – Going back to End-Y to work on camera controller
19:22 (11:22) Keita – Finished at End-X – Going to End-Y
19:32 (11:32) Kyle & Gerardo – Finished
19:35 (11:35) Filiberto & Leslie – Finished in LVEA
19:36 (11:36) Richard & Ken – Finished at Y-Arm – Going to X-Arm
19:45 (11:45) Nutsinee – Back from End-Y
19:55 (11:55) Ed & Jeff B – Sweep the LVEA
20:00 (12:00) Keita – Finished at End-Y – Turned off lights and WAP
20:02 (12:02) Jeff & Jason – Running charge measurements at End-Y and End-X
21:05 (13:05) Start relocking
22:16 (14:16) Kyle & Gerardo – Going to X28
23:42 (15:42) Keita & Kiwamu – Going into LVEA to check on ISS Second Loop
23:48 (15:48) Keita & Kiwamu – Out of the LVEA
00:00 (16:00) Turn over to Patrick

End of Shift Summary:
Title: 11/03/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
Support: Jeff K, Ed, Jenne
Incoming Operator: Patrick
Shift Summary: Completed maintenance and measurement tasks at ~13:00. Ran initial alignment. Working on relocking.
J. Kissel, J. Oberling

Following the recently updated instructions, I've taught Jason how to measure charge today during the tail end of maintenance. We've gathered data today in hopes that, with the bonus data we'd gotten on Friday (LHO aLOG 22991) and last week's regular measurement (LHO aLOG 22903), we can make a more concrete conclusion about the current rate of charge accumulation on ETMY. This will help us predict whether we need to flip the ETMY bias voltage sign again before the (potential) end of the run on Jan 14.

In short -- based on today's data, I think we will need to flip ETMY's ESD bias voltage again before the run is over, unless we change how often we keep ETMY's bias ON, or reduce it by half as LLO has done.

I attach the usual trend plots for both test masses, showing the effective bias voltage as well as the trend in actuation strength (as measured by the optical lever; I hope to show the data with the last two weeks of kappa_TST as well, demonstrating the longitudinal actuation strength -- stay tuned for a future aLOG). Further, as the 5th attachment, I plot the same results for the actuation strength in ETMY, but with the X-axis extended out to Jan 14th. If we take the last 4 dates' worth of data points (and really the last three, since it appears the data just after the flip was anomalous), we can see, by eye, that we'll need a flip sometime around mid-to-late December 2015 if the trend continues as is. I'll call this Option (1).

Alternatively, I can think of two other options going forward:
(2) Reduce the bias voltage by 1/2 now-ish (again, as LLO has done), and take the same 8-hour shift duty-cycle hit to characterize the actuation strength at 1/2 the bias -- essentially taking the hit now instead of later.
or
(3) Edit the lock acquisition sequence in such a way that we turn OFF the H1 ETMY bias when we're not in low noise.

If we do (2), we're likely to slow down the charging rate continuously, and potentially not have to flip the sign at all. It's essentially no different from a sign flip in the characterization of its effects, but we'd *definitely* have to change the CAL-CS calibration, where we did not before.

In support of (3), the trends from ETMX confirm that turning OFF the ESD bias voltage for long periods of time (e.g. for 24+ hour-long observation segments [yeah!]) will reduce the charge accumulation rate. It's unclear (to me, right now) how *much* ETMY ESD OFF time we would get (though it is an answerable question by looking at a sort-of "anti"-duty-cycle pie chart), but I have a feeling that any little bit helps. This would require continued vigilance on charge measurements.

Why bother changing anything -- what's a well-known 10-20% systematic if you're going to correct for it at the end of the run, you ask? The calibration group's stance on this is that we have many systematics of which we need to keep track, some of which we have no control over (unlike this one) and some of which we haven't yet identified. We believe we're within our required uncertainty budget, but unknown systematics are always killer. As such, we want to at least keep the controllable, known systematics as low as reasonable. If we can recover/reduce a slowly increasing 10% systematic in the instrument without having to digitally correct for it, chunk the analysis segments into many "epochs" where you have to do this or that different thing over the course of the run, and/or have them carry different uncertainties, then we should do so.
It's essentially the "if you can, fix the fundamental problem" argument, especially considering the other known and unknown systematics over which we don't have control. Also, the accumulation is still sufficiently slow (~8 [V/week], or ~2 [% Act. Strength Change/week]) that we can decide when to make the change (and which of the options to pursue) based on the man-power availability over the next holiday-filled two months.

On a side note: today's data set contains only 3 points per quadrant, where we normally get 4 to 5. This should not have a huge impact on accuracy, or on our ability to make a statement about the trend, especially since the sqrt(weighted variance) uncertainty bars are so small. It might also mean that, if pinched for maintenance-day or bonus time, it may be OK to continue to get only 3 data points instead of 5.
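If it helps to make the by-eye extrapolation above concrete, a straight-line fit to the last few measurements gives a projected flip date. Below is a minimal sketch with placeholder numbers; the dates, voltages, and flip threshold are illustrative assumptions (chosen to match the ~8 V/week rate quoted above), not the measured values.

    import numpy as np
    from datetime import date, timedelta

    # Placeholder measurement epochs and effective bias voltages [V] -- illustrative only.
    days  = np.array([0, 7, 14, 21])            # days since the first measurement
    v_eff = np.array([5.0, 13.0, 21.0, 29.0])   # effective bias voltage [V], ~8 V/week

    slope, intercept = np.polyfit(days, v_eff, 1)    # [V/day], [V]
    v_flip = 60.0                                    # hypothetical threshold for flipping the sign [V]
    days_to_flip = (v_flip - intercept) / slope

    start = date(2015, 11, 3)
    print("accumulation rate: %.1f V/week" % (slope * 7))
    print("projected flip date: %s" % (start + timedelta(days=float(days_to_flip))))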
I have spent about an hour today checking some basic behavior of the ISS 2nd loop servo circuits. I did the following two measurements.
Nothing crazy was found in today's tests. Our plan is that whenever we have issues with the ISS 2nd loop in the future, we will repeat the same type of measurements with the aim of identifying what has changed.
Offset measurement
I checked how the reference signal propagates to the servo signal output by changing the reference signal. The measurement was done with the IMC intentionally unlocked, so there was no light incident on the ISS PD array in HAM2. The result is shown in the first attachment. An important quantity to note here is the signal output value when zero reference voltage was requested: it was 2.76 V. Also note that zero output signal is obtained when the reference signal is requested to be between 5 and 10 mV. This does not sound crazy to me. According to the resultant plot, the linearity also seems fine. Additionally, I checked the reference monitor as well, which also showed good linearity. Later, I did a coarse version of the same test with the full power incident on the IMC (~22 W). Even though the actual RIN was too high to make a precise measurement, the linear coefficient between the reference and output signals seemed consistent with what I measured with no light. I also attach the raw data in txt format.
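As a cross-check of the linearity claim, the linear coefficient, the output at zero reference, and the reference value that zeroes the output can all be pulled out of the attached data with a straight-line fit. A minimal sketch, assuming the txt file has two columns, reference [V] and output [V] (the file name and column order are my assumptions):

    import numpy as np

    # Assumed format: two columns, reference signal [V] and servo output [V].
    ref, out = np.loadtxt('iss_secondloop_offset.txt', unpack=True)

    slope, intercept = np.polyfit(ref, out, 1)
    print("linear coefficient: %.0f V/V" % slope)
    print("output at zero reference: %.2f V" % intercept)                     # ~2.76 V reported above
    print("reference for zero output: %.1f mV" % (-intercept / slope * 1e3))  # should fall in the 5-10 mV range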
Operating point measurement
Then I locked the IMC and changed the input power step by step with a step size of about 2 [W]. At each step, I manually moved the reference signal such that the output signal stayed around zero [V]. This tells us how the operating point evolves as a function of the input power. I did this test from 2 [W] to 22 [W]. The result is shown in the second attachment. As you can see in the plot, the operating point evolves linearly with the input power, as expected. Additionally, I recorded IM4_TRANS_SUM as well in order to check the linearity of the output power of the IMC; IM4_TRANS also seems to be linear. The data in txt format is attached as well.
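The same kind of straight-line check works for the operating-point data, e.g. by looking at the fit residuals. A sketch, again with an assumed file name and column layout:

    import numpy as np

    # Assumed columns: input power [W], reference at zero output [V], IM4_TRANS_SUM [arb.]
    p_in, ref_op, im4 = np.loadtxt('iss_secondloop_operating_point.txt', unpack=True)

    for label, y in [('operating point', ref_op), ('IM4_TRANS_SUM', im4)]:
        fit = np.polyval(np.polyfit(p_in, y, 1), p_in)
        resid = (y - fit) / np.ptp(y)     # residual as a fraction of the full range
        print("%s: max residual %.1f%% of range" % (label, 100 * np.abs(resid).max()))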
Note that all the tests were done with PD1-4 as the intensity sensor. PD5-8 were not checked during the tests.
There are two things that don't make sense.
1. DC gain mismatch between the drawing/traveller and the measurement.
When the Second Loop boost, integrator, and additional gain are all off and the variable gain slider is set to 0 dB, the gain from the REF signal monitor to the Second Loop Output should be -220, taking into account the modification of the second loop electronics on the floor (D1300439, https://dcc.ligo.org/S1400214): -1 for the REF summation, -10 dB (or 1/3.2) instead of 0 dB for the variable gain amplifier, -100 for the last stage of the "whitening" path (U34), and -7 for the daughter board.
D1300439 DC gain = (-1) * (1/3.2) * (-100) * (-7) ≈ -220.
From Kiwamu's plots, the measured DC gain is about -12.5 V / 0.04 V ≈ -310.
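Spelling out the arithmetic behind the mismatch (nothing here beyond the numbers already quoted above):

    # Expected DC gain from D1300439, with the on-the-floor modifications:
    # REF summation (-1) x variable gain at the 0 dB slider setting (-10 dB -> 1/3.2)
    # x last stage of the whitening path, U34 (-100) x daughter board (-7)
    expected = -1 * (1 / 3.2) * -100 * -7
    print("expected DC gain: %.0f" % expected)                   # ~ -220

    # Measured from Kiwamu's plot: ~ -12.5 V of output per 0.04 V of reference
    measured = -12.5 / 0.04
    print("measured DC gain: %.0f" % measured)                   # ~ -310
    print("measured/expected: %.2f" % (measured / expected))     # ~1.4, the unexplained mismatch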
2. Mystery factor of two in REFSIG DAC output.
"REFSIG" slider has a mystery factor of 2 in the filter for the DAC output. As a result, when the slider is set to -0.577V, the output of the filter reads -1889 counts instead of -0.577V/40Vpp * 2^16ctspp = -945counts.
However, the REF signal monitor, which I think is a readback of the offset voltage coming out of the REFSIG DAC channel, reads -940 counts and 0.574 V.
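And the counts arithmetic behind the factor of two (again, just the numbers quoted above):

    # DAC range is 40 V peak-to-peak over 2**16 counts.
    requested_v  = -0.577
    expected_cts = requested_v / 40.0 * 2**16
    print("expected DAC output: %.0f counts" % expected_cts)                      # ~ -945
    print("filter output reads -1889 counts -> factor %.2f" % (-1889 / expected_cts))
    # The REF signal monitor readback (-940 counts, 0.574 V) matches the expected value,
    # so the extra factor of 2 appears only in the REFSIG DAC output filter.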
Temperature sensors for BSC1 and BSC3 were installed this morning. The sensors were installed on the southeast corner of each chamber. Power cables were run from the SUS-R6 field rack (next to BSC2). The units seem to have been modified and have a different gain compared to the units installed at the end stations. Richard secured the temperature sensor at EY.
Ed & Jeff B. Sweep LVEA. Unplugged WAP, Removed stair from South side between HAM5 & HAM6, and turned off the lights.
Kyle, Gerardo CP5 LLCV typically 85% open under normal conditions -> Increased vapor pressure should reduce this and broaden the range of control
Kyle, Gerardo X-end @ 1000-1005 hrs. local. LEA @ ~1120-1125 hrs. local
Dave, Nutsinee
First I went to restart the UT1 unit at EY. I came back to find that the camera was still not connecting to the Camera Control Pro software. Dave was able to ping the camera, so I restarted the h1pcaly minimac and went back to restart the UT1 unit again. Still not working.
ECR E1500428
II 1149
WP 5587
BrianL RichardM DaveB JeffK Hugh
With email approval of the ECR from PeterF, and code testing completed at Stanford by BTL & Hugo (T1500555), we implemented this change at LHO today.
See T1500555 for details of the change with before & after model views and testing output.
First we power cycled the coil drivers, BIO chassis, and the front-end computer for ITMY. This is an attempt to reset/clear who knows what might be causing the occasional glitching of that platform noted in Integration Issue 1149.
Next, after retrieving the new isi2stagemaster.mdl from the svn and making/installing the new model, ITMY was restarted. We did further testing by unplugging the BIO cable from the coil driver and confirmed that the watchdog would not trip unless the signal was low for 10 seconds.
Then all BSC ISIs were made safe and restarted after recompiling the model.
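Regarding the 10-second watchdog behavior confirmed above: for illustration only (the real logic lives in the isi2stagemaster front-end model, not in Python), the intended behavior is a trip that fires only after the BIO signal has been continuously bad for 10 seconds. A minimal sketch of that trip-delay logic:

    import time

    TRIP_DELAY = 10.0   # seconds the BIO signal must stay bad before the watchdog trips

    class BioWatchdog(object):
        """Sketch of the intended trip-delay behavior (not the actual front-end code)."""
        def __init__(self, delay=TRIP_DELAY):
            self.delay = delay
            self.bad_since = None

        def update(self, bio_ok, now=None):
            """Call periodically with the current BIO status; returns True when a trip is due."""
            now = time.time() if now is None else now
            if bio_ok:
                self.bad_since = None     # healthy signal resets the timer
                return False
            if self.bad_since is None:
                self.bad_since = now      # signal just went bad: start timing
            return (now - self.bad_since) >= self.delay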
Per JimW's request (alog 23032), I edited the ISC_LOCK guardian such that it throws notification messages when the ISS 2nd loop fails in the engagement. I have reloaded ISC_LOCK and checked the code into the SVN.
The notification messages will be displayed (1) if the IMC_LOCK guardian falls back to the LOCKED state (which is the programmed failure sequence) and/or (2) if the engagement process takes more than 4 minutes. Below is the new version of the ENGAGE_ISS_2ND_LOOP state; the red lines are the ones I newly added.
* * * * * * in ENGAGE_ISS_2ND_LOOP * * * * * *
    def main(self):
        nodes['IMC_LOCK'] = 'ISS_ON'
        self.wait_iss_minutes = 4.0                        # in [min]
        self.timer['ISSwait'] = self.wait_iss_minutes*60   # in [sec]

    @get_watchdog_IMC_check_decorator(nodes)
    @nodes.checker()
    def run(self):
        # notify when the ISS fails. 2015-Nov-3, KI
        if nodes['IMC_LOCK'] == 'LOCKED' or nodes['IMC_LOCK'] == 'OPEN_ISS':
            notify('!! 2nd loop engagement failed !!')
        # notify when the ISS is taking too many minutes. 2015-Nov-3, KI
        if self.timer['ISSwait']:
            notify('ISS spent more than %d minutes. Check ISS' % (self.wait_iss_minutes))
        # if IMC arrives at the requested state, move on.
        return nodes['IMC_LOCK'].arrived
Later today, we confirmed that the ISC_LOCK guardian behaved as intended -- it displayed the messages when the ISS failed in the engagement. Also there were a few minor typos which are now fixed. The guardian code is checked into the SVN.
I reset the PSL power watchdog at 16:25 UTC (8:25 PST).
TITLE: 11/3 Owl 8-16 UTC
STATE Of H1: Observing
SUPPORT:
SHIFT SUMMARY: Lost lock early on, switched blends around, relocking had some issues
ACTIVITY LOG:
9:30 Lockloss, cause not clear
9:30-11:00 Relocking, switched blends a couple of times, ultimately left the ISIs in 90 mHz blends, which seems to have made some low-frequency issues a little better
11:00 Relocked, back to observing.
Transition Summary:
Title: 11/03/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: 16:00 (08:00) The IFO is locked at NOMINAL_LOW_NOISE, in Observing mode
Outgoing Operator: Jim
Quick Summary: Preparations for the maintenance window.
On Tuesdays at 9:05 AM and/or 9:06 AM Pacific Time, we will have regular testing of the control room alerts at Hanford, WA. Events will be type Burst CWB2G and will be labeled H1OPS, L1OPS, and DQV. All events will also have the log comment 'This is a fake event for testing the control room alerts.'.
LHO received and responded to this alert in the GraceDB. LHO and LLO did not receive an audible alert for this event via the VerbalAlerts.
TITLE: Ops Eve Shift, 00:00-08:00 UTC (16:00-23:59 PDT), all times posted in UTC
STATE Of H1: Locked, on its way to Low Noise, in Observe
SUPPORT: Keita and Kiwamu
SHIFT SUMMARY: Locked - something rang up - lockloss - relocking issues - back to Low Noise
INCOMING OPERATOR: Jim
ACTIVITY LOG:
- IFO locked almost exactly 4 hours
- 04:13 UTC - lockloss after something rang up, visible on the FOMs; did not get a chance to investigate
- relocking, the X arm VCO railed at -5 V; Keita and Kiwamu identified the issue, and Kiwamu fixed it
- relocking, DRMI did not lock, and then PRMI did not lock
- initial alignment
- first attempt at DRMI resulted in lockloss/down
- second attempt at DRMI was successful
- transition from DRMI lock to Low Noise went well
- 07:54UTC - IFO back in Observe
[detchar whistle team]
We see DARM whistles in H1 on October 30th. They are also seen in the REFL_SERVO channel. The plots below illustrate glitch frequency vs. PSL VCO frequency at the time of the glitch (color is log10(omicron snr)). They do not appear to couple below 2 kHz, and the whistle frequency that appears in DARM is very, very loud in REFL_SERVO.

Further investigation is ongoing for other channels and over recent locks for the last several days to try to pinpoint when this started.
The frequency of the crossing that we see in DARM is 79.073 MHz, and these RF whistles track the beatnote, not the harmonic (i.e., the absolute value of the PSL VCO frequency minus this fixed value). The other two lines seen in REFL servo are 79.1385 MHz, also a direct coupling, and 79.0018 MHz, which seems to follow twice the frequency difference. We don't see these lines coupling in the FSS mixer, the ISS AOM driver, or MICH. Not seeing it in MICH (while we do in SRCL and PRCL) is perhaps evidence that this is coupling through frequency noise. We should look over more locks at how these appear in the above-mentioned channels, and compare to L1, where things are much more complicated.
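To check the beatnote hypothesis on other locks, the predicted whistle frequencies can be computed directly from the PSL VCO frequency using the crossings quoted above. A small sketch (the 50 Hz matching tolerance is an arbitrary choice of mine):

    import numpy as np

    # Crossing frequencies identified above [Hz]
    F_DIRECT = np.array([79.073e6, 79.1385e6])   # couple directly to the beatnote
    F_TWICE  = 79.0018e6                         # follows twice the frequency difference

    def predicted_whistles(f_vco):
        """Predicted whistle frequencies [Hz] for a given PSL VCO frequency [Hz]."""
        beat  = np.abs(f_vco - F_DIRECT)
        twice = 2 * np.abs(f_vco - F_TWICE)
        return np.append(beat, twice)

    def is_whistle(glitch_freq, f_vco, tol=50.0):
        """True if an omicron glitch frequency [Hz] matches a predicted whistle within tol [Hz]."""
        return bool(np.any(np.abs(predicted_whistles(f_vco) - glitch_freq) < tol))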
The roll modes have been conquered. When they were too high, the DHARD loops became unstable and we lost lock. We stopped at REDUCE_CARM_OFFSET_MORE, which was the highest state that we could sit at without losing the lock.
ITMY's roll mode was the worst offender according to the monitor screen, so we only worked on damping that mode. We used the usual filter settings (which are always left in the correct state) and the usual negative sign for the gain. The usual gain that guardian leaves for the ITMY roll mode damping is -40, but I was using larger gains to get the mode to go down faster. Once the roll modes were all around the "okay" level on the monitor screen, we let guardian continue the locking sequence.
We were concerned at one point that guardian had frozen on us, but it turns out that we were just being fooled. It looked like guardian was not finishing the DC_READOUT.main state, and not even starting the DC_READOUT.run state. However, it was all fine. I think I have been fooled by this before, but as a reminder (to myself especially), if an EPICS value is not going to change, guardian will not log that it tried to write the value. So, the last few steps of the main sequence were to write some gain values that were not going to change, which is why it looked like it was just stuck. Also, there was nothing in the run part of the state that was going to cause any log messages, so it didn't look like anything had happened, even though we were already getting the "return true" at the end of the state. In the end, we just requested a higher state, and guardian moved on as usual.