The ROC and ITM substrate defocus estimator seems to be working correctly. The following two plots show the last 17 hours of (a) the XARM transmission and (b) the ROC of the four test masses as well as the single pass thermal lenses in the substrates of the two ITMs.
The estimator was engaged about 17 hours ago, which is when it first saw a signal from the ETM RHs and the ITMX CO2 laser. Hence the ETM ROCs are decreasing over time as the RH effect takes hold (also showing the classic RH overshoot after approximately 3 hours). Also, the ITMX substrate lens starts from a static lens of around -1.2E-5 diopters and increases over around 4 hours to settle at its nominal value.
Additionally, the IFO was being locked and unlocked. The six estimators show this transient behaviour as (a) ROC increasing during a lock and (b) ITM substrate defocus increasing during lock.
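For reference, a mirror with radius of curvature R has focusing power 2/R in reflection, so a small ROC change dR corresponds to a defocus change of roughly -2*dR/R^2. A minimal numpy sketch of that conversion (the nominal ROC and the step size below are representative numbers, not the measured values):

import numpy as np

# Mirror focusing power in reflection: P = 2/R (diopters, with R in meters).
# A small ROC change dR therefore gives a defocus change dP ~= -2*dR/R**2.
R_nominal = 2245.0   # m, representative test-mass ROC (assumption, not the measured value)
dR = -0.5            # m, example ROC decrease as the ring heater takes effect

dP = -2.0 * dR / R_nominal**2
print(f"ROC change of {dR:+.2f} m -> defocus change of {dP:+.3e} diopters")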
(all times in UTC)
Handoff from TJ, who had been aligning the SRC, but it looked off. Sheila & I found that SR3 was off in yaw (via the oplev trend & AS_C PD), so this was steered back, and then we went through an SRC alignment.
Den: tested some MICH & BS filters for DRMI acquisition
Then spent most of the evening trying to get H1 to NLN, but there are some ASC issues floating around.
One cool thing is seeing DRMI locking super fast. I noted some of the acquisition times here:
o 0:32-1:05 Dave/Jim heading to EY to check seismometer
o 3:52 acknowledged a GRB, but we were locking.
Leaving Operating Mode in Locking as I leave H1 to Den for the evening.
I'm not sure why, but BS gets a large kick in Pit and Yaw just before it goes into the coil drivers state. We've seen this 3 times in a row now, although these were the first times since ~lunch that the IFO was locked past DC readout. So it's perhaps related to our alignment/locking troubles, which may in turn be due to the LVEA temperature change that happened on Tuesday.
The transient is visible in the LOCK_P and LOCK_Y channels for BS M3, but not in LOCK_L. It is also visible in the WFS signals.
Had a couple of random HAM5 ISI / SRM M1 watchdog trips. For both trips, it looks like the HAM5 ISI went first. The trips happened during two different states of working on H1.
1) At the end of TJ's shift, when there was a lockloss from DRMI. The GS13s & actuators both hit their limits. NOTE: on this one, SR3 was quite a bit off in yaw!
2) While aligning H1, HAM5 tripped in a similar way. NOTE: No issue with SR3 here!
For both cases, all the M1 OSEM RMS values start increasing (not enough to trip the WD), but it's interesting to see them all ramp up.
For the first case, Sheila made a change to Guardian to make sure we look at when DRMI drops out of lock. Still not really sure what caused case #2.
In the first case, plot 9, the ISC input goes up with a big ramp until the WD trip. Is it possible that SR3 was driven into the stop by some ?? control signal, and that the whack of the impact saturated the GS-13s (like the fast shutter on HAM6), which made the ISI trip? I don't see that in the second one, though. So even if true, that is not the only issue.
Received an FMCS Air Handler HIGH RED alarm this evening. I've acknowledged it, and it will probably stay in the RED state until addressed (if this is a new running state, we should update the alarm handler).
This might be correlated with John turning off heaters on Tuesday, but not sure.
Will send an email to John/Bubba.
Hmmm. I note that this seems to roughly correspond in time with our locking (and especially alignment) troubles of the last 2 days.
A better representation of LVEA temperature is this signal:
H0:FMC-LVEA_CONTROL_AVTEMP_DEGF
see plot for the last 4 days.
Keita, Sheila
While the average temperature in the LVEA has been stable, the temperature has changed by about 1 degree C in several zones. Here is a plot of PR3 PIT (oplev and osem) as the temperature changes. These sensors could both be sensitive to temperature, but the optic could also have really moved with the temperature change.
It still seems plausible that our alignment difficulties of the last few days are related to temperature.
I reduced the gain of the OMC length locking loop by a factor of 3 (the UGF has gone from 10 Hz to 3 Hz) and added a 30 Hz low pass. The new open loop is attached, as well as a spectrum of the drive signal before and after this change. There are low passes in the PZT driver, which means this is not as dramatic a change in the OMC length RMS as it seems.
Noise projections coming soon.
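As a rough sanity check on the quoted UGF shift, here is a minimal scipy sketch that models the old open loop as a pure 1/f integrator with a 10 Hz UGF, applies the factor-of-3 gain reduction, and adds a single real pole at 30 Hz. The real loop shape and low pass are the ones in the attached measurement and the foton file; this only illustrates the expected scaling:

import numpy as np
from scipy import signal

f = np.logspace(-1, 3, 2000)          # Hz
w = 2 * np.pi * f

# Old open loop approximated as a pure integrator with a 10 Hz UGF (assumption).
G_old = signal.freqresp(signal.lti([2 * np.pi * 10], [1, 0]), w)[1]

# New loop: gain reduced by 3, plus a single real pole at 30 Hz (the low-pass order is an assumption).
lp30 = signal.freqresp(signal.lti([2 * np.pi * 30], [1, 2 * np.pi * 30]), w)[1]
G_new = G_old / 3.0 * lp30

for name, G in [("old", G_old), ("new", G_new)]:
    ugf = f[np.argmin(np.abs(np.abs(G) - 1.0))]
    print(f"{name} loop UGF ~ {ugf:.1f} Hz")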
Sheila, Den
We have tuned the DRMI servos to reduce the acquisition time. First of all, we noticed that the MICH servo saturates the BS actuator during the flash. To solve this issue we copied the control filters from LLO; the attached plot shows a comparison of the LLO and old LHO MICH servos. Next, we changed the locking threshold for SRCL and the ramp time for the PRCL and SRCL boosts. The new changes are in the guardian's "NEW_DRMI_SETTINGS" state. We tested these filters (~10 times) and concluded that there is an improvement in lock acquisition time; the average DRMI wait time is now 1 minute.
The main change to the MICH loop is that we are no longer using the invPlant filter and are now using a simpler compensator, and that we use more aggressive low-pass filtering for acquisition. Attached is a (not great) measurement of the new loop compared to the old one (in blue), and uncalibrated spectra of the error signals showing that things are fairly similar in the end.
We have now made these new settings the default guardian settings, and there are no longer two alternative DRMI paths in the guardian.
This means that our MICH loop is a little different in low noise, and that we will have to adjust our MICH FF to compensate. We have put the new compensation filter in the MICH FF bank, and Den found that a gain of -15.9 gave the best subtraction at 30 Hz, and that the coherence was low. We can check with broadband excitations, but we have had trouble getting back to low noise since we did this.
We also made these changes to the MICH loop that is used in initial alignment to lock the dark Michelson.
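For reference, a single-frequency feedforward gain like the one quoted above can be estimated from the cross- and power spectra of the witness and target signals. Below is a minimal sketch on synthetic data; the coupling value, noise level, and sampling rate are made up for illustration (the coupling is chosen so the recovered gain lands near -15.9), and this is not necessarily the procedure Den actually used:

import numpy as np
from scipy import signal

fs = 2048.0
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic witness (MICH drive) and target (DARM) with a made-up coupling of 15.9 plus noise.
mich = rng.standard_normal(t.size)
darm = 15.9 * mich + 5.0 * rng.standard_normal(t.size)

f, Pxx = signal.welch(mich, fs=fs, nperseg=8192)
_, Pxy = signal.csd(mich, darm, fs=fs, nperseg=8192)
_, Cxy = signal.coherence(mich, darm, fs=fs, nperseg=8192)

i = np.argmin(np.abs(f - 30.0))
ff_gain = -np.real(Pxy[i]) / Pxx[i]   # gain to apply in the FF path so it cancels the coupling
print(f"best single-coefficient FF gain at {f[i]:.1f} Hz: {ff_gain:.2f}, coherence {Cxy[i]:.2f}")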
[Cao, Ellie, Dave O, Aidan]
To whoever will be in the control room last after commissioning, here are the steps for the SR3 scan to align the Y-arm HWS:
1- Turn off the SR3 cage servo and make sure that all mirrors are in the ALIGNED state.
2- Run the arbitrary waveform generator for H1:SUS-SR3_M1_OPTICALIGN_P_EXC.
Frequency has been set to 41.7 mHz
Amplitude has been set to 750 urad
3- Run the arbitrary waveform generator for H1:SUS-SR3_M1_OPTICALIGN_Y_EXC immediately after step 2.
Frequency has been set to 10.3 mHz
Amplitude has been set to 1550 urad
Let it run overnight (a sketch of the resulting scan pattern is at the end of this entry).
All relevant windows have been left open on computer operator1 10.20.0.81
Started ca 08:33:00 Z.
Stopped ca 16:23:00 Z.
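For reference, here is a quick numpy/matplotlib sketch of the pitch/yaw pattern these two excitations trace out over the run, assuming pure sine waves at the frequencies and amplitudes listed above (any static offsets and the exact awg waveform settings are not included):

import numpy as np
import matplotlib.pyplot as plt

# Excitation parameters from the steps above.
f_p, a_p = 41.7e-3, 750.0    # pitch: 41.7 mHz, 750 urad
f_y, a_y = 10.3e-3, 1550.0   # yaw:   10.3 mHz, 1550 urad

duration = 8 * 3600          # seconds; roughly the 08:33-16:23 Z run quoted above
t = np.arange(0, duration, 1.0)

pit = a_p * np.sin(2 * np.pi * f_p * t)
yaw = a_y * np.sin(2 * np.pi * f_y * t)

plt.plot(yaw, pit, lw=0.3)
plt.xlabel("SR3 M1 yaw offset [urad]")
plt.ylabel("SR3 M1 pitch offset [urad]")
plt.title("Scan pattern traced by the two excitations")
plt.show()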
This afternoon, we replaced the Sat Amp units for TMSX. The new units have the same modifications as the SR3/OMC units: two decoupling capacitors (C602: 0.1 uF, C601: 10 uF) were placed on the input of the negative regulator (U503), as implemented at LLO.
New Unit S1100098 / Old Unit S1100168
New Unit S1000292 / Old Unit S1100066
Title: 2/17 OWL Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
State of H1:
Shift Summary: Pretty unsuccessful in terms of locking. There was an earthquake at the beginning of the shift, then bits of commissioning/fixing to help relock, but then CDS went to replace the sat amps for TMSX, bringing us down for a bit longer.
Incoming Operator: Corey
Activity Log:
15:10 Hugh into LVEA STS2A setup
16:35 Hugh into LVEA by bier garten
17:08 Chris, Nicole into EX for parts hunt
17:06 Hugh out
17:24 Mitchel, film crew into LVEA and out a few times until ~20:00 UTC
19:50 Vinny, Brin to EY for magnetometer work
21:02 Vinny, Brin back
23:35 Vinny, Brin To EY again
At 14:11 PST, h1dc0 froze up with a kernel panic. We reset the computer by pressing the front panel RESET button, but it did not come back cleanly.
When h1dc0's daqd process was first restarted, it ran for only a short period and then died. It was logging GPS timing jumps of one to many seconds.
Monit restarted it and it is now running correctly. The other DAQ systems restarted at this time correctly, except h1broadcast0 took a long time to get going.
The frame gap due to this problem is:
1139782208 to 1139782912
Feb 17 2016 22:09:51 UTC to Feb 17 2016 22:21:35 UTC
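A quick way to double-check the gap, assuming gwpy is available for the GPS-to-UTC conversion:

from gwpy.time import tconvert

start, stop = 1139782208, 1139782912

print("gap start:", tconvert(start))          # -> 2016-02-17 22:09:51 UTC
print("gap stop: ", tconvert(stop))           # -> 2016-02-17 22:21:35 UTC
print("gap length:", stop - start, "seconds") # 704 s, about 11.7 minutes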
The TCS simulation model to estimate the lenses and ROC of the optics is now running. The following list gives the nominal values and the transfer functions used for all the filters:
This screen is accessible from the TCS_MASTER > SIMULATION button.
The static lenses from the ITMs have also been added (see T1400602).
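As an illustration of the kind of filter involved, the slow thermal response to a ring heater or CO2 power step is roughly a first-order low pass, DC_gain / (1 + s*tau). The sketch below uses made-up numbers; the real nominal values and transfer functions are the ones listed in the model, and the real RH surface response has more structure (e.g. the overshoot mentioned in the estimator entry above), which a single pole cannot reproduce:

import numpy as np
from scipy import signal

# Hypothetical first-order thermal response: step in RH power -> ROC change.
dc_gain = -0.9          # m of ROC change per W of RH power (made-up value)
tau = 4 * 3600.0        # thermal time constant of ~4 hours (made-up value)

therm = signal.lti([dc_gain], [tau, 1])    # H(s) = dc_gain / (tau*s + 1)

t = np.linspace(0, 17 * 3600, 1000)        # the ~17 hour span shown in the plots
t, roc_step = signal.step(therm, T=t)      # response to a 1 W step at t = 0
print(f"ROC change after 4 h: {np.interp(4*3600, t, roc_step):.2f} m (63% of final value)")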
I filled the PSL diode chiller with 350 ml of water. Attached is a 30-day plot showing where the diode chiller was filled on Feb 4th, and then the alarm level increasing starting around Feb 13th.
How does this alarm? Is it on Verbal? Is there an audible alarm out in the Diode Room? Or is the red light the only way to tell if it's low? In general, the Crystal Chiller is the one which requires action every week. For the Diode Chiller, all we have is the red LED to warn us of low levels (I've never seen it). We (operators) just need to remember to also keep checking the Diode Chiller when performing this task.
Nominally, we check this every Thursday. If the alarm goes off on Friday, is it OK for this chiller to be in alarm until it's checked the next Thursday?
I wonder if we should include trending the alarm level in the FAMIS procedure for this, as Cheryl did.
Unfortunately, the red LED on the face of the diode chiller is the only alarm we have, and checking this LED is part of the FAMIS procedure. Additionally, especially with Peter and me in Germany until mid-March, if anyone is in the diode room for any reason and sees this red LED lit, fill the diode chiller immediately until the light goes out, and post in the alog that water was added and how much. If the water level gets too low, the chiller will shut off, which also shuts off the entire PSL. We need a better alarm (especially for the hypothetical situation Corey brings up, which could result in a laser shutdown if not caught before the next scheduled check), but this is what we have for now.
I have increased the LLCV valve setting to 18% (from 15%) to try and compensate for changing conditions.
Factors that are changing are:
The supply pressure is falling as the liquid level in the storage dewar falls.
Heat leak to the liquid transfer line is increasing with outdoor temperature. This results in more vapor in the line and less liquid to the pump.
Radiation from the beam tube is increasing as the beam tube warms.
The attached plot shows 100 days of the control valve setting. The flat line at 15% is due to the failure of the differential pressure sensing; we have had to resort to setting this valve manually.
For reference the liquid capacity of the pump reservoir is ~77 gallons when full.
I forgot about the heat into the pump due to radiation from the surrounding vacuum chamber in the mid station. Here is a plot of recent VEA temperatures at MID Y.
That is illuminating, thanks. We take feedback controls for granted around here; don't miss it till it's gone. Hope this sophisticated PIL* control algorithm will tide us over until we can regen & fix the level sensor.
*Primate In Loop
Jeff, Darkhan, Kiwamu,
We have been meaning to do this but kept missing the chance. Today we flipped the sign of the ESD bias voltage on both ETMX and ETMY.
This alog only describes what we did on the digital front-end models. We will perform a set of confirmation measurements and verify the H1 DARM model later.
See alog 22135 for a concise summary of the bias flip.
[Bias flip on ETMY]
[Bias flip on ETMX]
We took open loop and Pcal sweep measurements in nominal low noise after the biases were flipped. They reside at:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Measurements/DARMOLGTFs/2016-02-16_H1_DARM_OLGTF_7to1200Hz.xml
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PostO1/H1/Measurements/PCAL/2016-02-16_PCALY2DARMTF_7to1200Hz.xml
Darkhan will process the data later.
P.S.
I have updated the SDF accordingly for SUS-ETMX, SUS-ETMY, and CAL-CS while the interferometer was in nominal low noise.
Kiwamu, Darkhan,
Summary
Our analysis of the DARM OLG TF and PCALY to DARM TF measurements taken after the ESD bias sign flip on Feb 16, 2016 showed that:
Discussion (actuation function)
The ETM ESD drivers' electronics and the front-end FOTON filters associated with the ESD responses were recently updated; see LHO alogs 25468 and 25485. Since the front-end filters do not perfectly cancel the ESD response, the Matlab parameter files had been using better-approximated ZPK responses based on measurements of the old ESD electronics. Since we do not currently have TF measurements for the new ESD driver electronics, we have removed all of these compensations (for the imperfect front-end filters) from the Matlab analysis of the DARM and PCALY to DARM TF measurements.
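To illustrate what imperfect cancellation means for the actuation model, here is a sketch comparing a hypothetical single-pole ESD driver response against a front-end inverse filter with a slightly mismatched corner frequency; the pole frequencies and the single-pole form are made up for illustration and are not the measured H1 electronics:

import numpy as np
from scipy import signal

f = np.logspace(0, 3, 500)
w = 2 * np.pi * f

# Hypothetical "true" ESD driver response: single real pole at 2.2 Hz (made-up value).
driver = signal.freqresp(signal.lti([2 * np.pi * 2.2], [1, 2 * np.pi * 2.2]), w)[1]

# Front-end compensation: inverse of a pole at 2.0 Hz instead (made-up mismatch).
comp = 1.0 / signal.freqresp(signal.lti([2 * np.pi * 2.0], [1, 2 * np.pi * 2.0]), w)[1]

residual = driver * comp   # what is left in the actuation path after the "cancellation"
i = np.argmin(np.abs(f - 100.0))
print(f"residual magnitude error at 100 Hz: {abs(residual[i]) - 1:+.2%}")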
Measurement files, scripts
DARM OLG TF and PCALY to DARM TF measurements have been committed to CalSVN:
Runs/PostO1/H1/Measurements/DARMOLGTFs/2016-02-16_H1_DARM_OLGTF_*.txt
Runs/PostO1/H1/Measurements/PCAL/2016-02-16_PCALY2DARMTF_*.txt
Up to date filter files have been copied to
Common/H1CalFilterArchive/h1omc/H1OMC_1125881488.txt
Common/H1CalFilterArchive/h1susetmy/H1SUSETMY_1126240802.txt
DARM parameter file and comparison script have been committed to
Runs/PostO1/H1/Scripts/DARMOLGTFs/CompareDARMOLGTFs_O1andPostO1.m
Runs/PostO1/H1/Scripts/DARMOLGTFs/H1DARMparams_1139722878.m
Results (plots) are in
Runs/PostO1/H1/Results/DARMOLGTFs/2016-02-16_*.pdf