J. Kissel, N. Kijbunchoo: Found the ALS fiber polarization high at ~20% for the X Arm fiber PLL. Went to the CER and reduced it. Couldn't get it better than ~9%, but that should be good enough.
(see Dave B's entry @ https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=32741)
TITLE: 12/19 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 66.0999Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
A bit of a slog. H1 was locked for a few hours at the beginning of the shift and then taken down (while L1 was down) to swap the ETMy L2 Coil Driver. This was thought to be the suspected cause of H1's glitchiness, but we may be finding out that swapping it is not trivial.
LOG:
J. Kissel, J. Driggers, C. Gray: Ever since the sudden drop in X-arm green laser power in Oct 2016 (LHO aLOG 30884), it's been a challenge to pass the metric for determining whether the arms are locked well enough to begin locking ALS -- namely that ALS-C_TRX_A_LF_OUTMON and ALS-C_TRY_A_LF_OUTMON are both greater than 0.9 (normalized [ct]). However, we often see via other metrics that the arms have settled and are in a fine enough condition to move through ALS. As such, we've lowered the threshold on X arm transmission to 0.85. This is a modification to /opt/rtcds/userapps/release/isc/h1/guardian/ISC_library.py. The change has been committed to the repository.
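As an illustration only (this is not the actual ISC_library.py code; the function name and structure are assumptions, only the channel names and threshold values come from the entry above), the check now looks something like:

TRX_THRESHOLD = 0.85   # lowered from 0.90 after the Oct 2016 drop in X-arm green power
TRY_THRESHOLD = 0.90

def arms_ok_for_ALS(trx, try_):
    # trx, try_: normalized arm transmissions from ALS-C_TRX_A_LF_OUTMON / ALS-C_TRY_A_LF_OUTMON [ct]
    return (trx > TRX_THRESHOLD) and (try_ > TRY_THRESHOLD)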
What is going on to explain the exhaust pressures depicted in the attached?
It seems the LVEA temperature is not as stable as it should be, and among other things this might be correlated with the slow downward trend of the ALS diff signal. It used to be 0 dBm; now it's -8 to -10 dBm.
Since diff (X-Y) is degrading much faster than comm (X-SHG), and faster than the SHG power itself, this is probably just an alignment drift of some steering mirror on ISCT1. This needs to be fixed tomorrow during maintenance.
Keita and I looked at the zone 4 and 5 temperatures (Output MC and Input MC) and noticed that they are oscillating in opposition.
In response I have done the following:
Reduced heat zone 3A (from 10 mA to 4 mA) - this is a reduction of ~33 kW, or one stage of a 3-stage heater.
Reduced heat zone 5 (from 10 mA to 8 mA) - Variac controlled - not sure of the power reduction, but the duct temperature dropped by ~5 degF.
Reduced the chilled water setpoint from 47 degF to 43 degF to provide more control authority for the chilled-air portion of the air handler. The face bypass damper has been swinging from rail to rail for the last few days.
No warranties are implied.
Bubba and I will continue to monitor and adjust.
Temperature of the LVEA is controlled based on the average called H0:FMC-LVEA_CONTROL_AVTEMP. Don't expect H0:FMC-LVEA_AVTEMP to behave as nicely.
An aLOG entry from DetChar points to the ETM PUM chassis as a possible source of the glitchiness seen over the lock stretches. Robert Schofield says a magnetometer in the ETM SUS rack confirms the glitchiness is coming from inside the rack. This morning we replaced PUM chassis S1102652 with S1102648.
Tagging CAL. Also tagging DetChar so they're aware the change was made and of any collateral damage after potentially fixing zee glitching. This change can potentially affect the calibration of the IFO at low frequency, since we use ETMY during nominal low-noise control. In order to quantify this, we're going to take an L2/PUM stage actuation strength sweep once we get the IFO back up and running. Then, tomorrow during maintenance, we'll re-measure the chassis alone to check whether all poles and zeros of the filtering need to be re-compensated, as we did back in May 2016 (see LHO aLOGs 27180 and 21232). If there is a significant change in the frequency dependence, then we must update the PUM stage compensation filters (and no change will be necessary to the calibration). If there is a significant change to the actuation strength, then we'll need to change the actuation strength of the L2 stage in the front-end CAL-CS model. Further, the coil balancing for each stage will potentially be different as well, so we may need to resurrect the coil re-balancing a la LHO aLOG 11392. I sincerely hope we don't have to change any of the finely tuned L2P or L2Y filters that are in place in the L2_DRIVEALIGN_L2P and L2_DRIVEALIGN_L2Y banks. I can't find the original design aLOG, but Sheila found back in March 2015 (LHO aLOG 17322) that the L2 L2Y filter doesn't do anything, so I suspect only L2 L2P may need to be adjusted. Hopefully someone remembers how that filter was designed.
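As a rough sketch of the decision logic above (illustrative only, not the actual calibration pipeline; the function and tolerance are assumptions), one could compare the new chassis measurement to the reference and decide which update is needed:

import numpy as np

def classify_response_change(freq, tf_ref, tf_new, flatness_tol=0.02):
    # freq [Hz]; tf_ref, tf_new: complex transfer functions from the old and new measurements
    ratio = tf_new / tf_ref
    gain = np.median(np.abs(ratio))                                   # overall actuation strength change
    mag_flat = np.std(np.abs(ratio)) / gain < flatness_tol
    phase_flat = np.std(np.unwrap(np.angle(ratio))) < flatness_tol    # radians
    if mag_flat and phase_flat:
        # flat ratio: only the strength changed -> rescale the L2 stage in the CAL-CS front end
        return 'gain-only change, scale factor %.4f' % gain
    # frequency-dependent ratio: re-fit poles/zeros and update the PUM compensation filters
    return 'frequency-dependent change'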
Kyle, Gerardo, Dave:
This morning we ran the cp3 and cp4 autofill code, testing its monitoring of the discharge line pressure. Ryan was at MY during the tests, and introduced a high discharge line pressure (DLP) by manually controlling the bypass valve.
Attached are trend plots of the two events, CP3 on the left, CP4 on the right. Each 2x2 plot shows the two thermocouple (TC) readings on the left, the fill valve position (upper right), and the DLP (bottom right).
CP3:
This CP filled before the DLP could climb over the 2.0 PSI threshold. The autofill terminated normally, with the TCs reading a temperature below the threshold of -30C. The log text shows a normal fill sequence:
Mon Dec 19 11:05:02 PST 2016
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 25% open. Fill completed in 43 seconds. LLCV set back to 19.0% open.
CP4:
This CP caught the DLP-above-threshold error before, or simultaneously with, the TCs dropping below the -60C threshold. This is reflected in the autofill log text (the error was shown as bold text in the original entry):
Mon Dec 19 11:10:02 PST 2016
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Over pressure. Fill stopped. LLCV set back to 36.0% open.
We believe this successfully tests that the code will stop filling if the DLP goes high. On Wednesday the 21st we will run the system in hands-off mode.
For the record, here is a description of the code and how it is run.
The code runs from a crontab as user vacuum on the machine vacuum1.
The code runs on Mondays, Wednesdays, and Fridays. The CP3 fill happens at 11:00 local time, CP4 at 11:10.
Here is the crontab text:
00 11 * * 1,3,5 /opt/rtcds/userapps/release/cds/h1/scripts/run_autofill_cp3.bsh &
10 11 * * 1,3,5 /opt/rtcds/userapps/release/cds/h1/scripts/run_autofill_cp4.bsh &
The bash scripts are attached; they specify the LLCV percentage open, the TC threshold, and the timeout as arguments to the cpn_fill.py python scripts.
The cpn_fill.py python scripts are also attached.
(Note: files were renamed with a .txt suffix so they can be uploaded into alog.)
All scripts are maintained in the CDS SVN.
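For readers without SVN access, here is a minimal sketch of the fill control flow described above (illustrative only, not the actual cpn_fill.py; the EPICS channel names are hypothetical, and the real scripts are attached):

import time
import epics   # pyepics

def autofill(llcv_open_pct, nominal_pct, tc_threshold_degC, timeout_s, dlp_limit_psi=2.0):
    epics.caput('HVE-MY:CP3_LLCV_POS_REQ', llcv_open_pct)    # hypothetical PV: open the fill valve
    t0 = time.time()
    try:
        while time.time() - t0 < timeout_s:
            dlp = epics.caget('HVE-MY:CP3_DLP_PSI')          # hypothetical PV: discharge line pressure
            if dlp > dlp_limit_psi:
                print('Over pressure. Fill stopped.')
                return False
            tc_a = epics.caget('HVE-MY:CP3_TC_A_DEGC')       # hypothetical PVs: exhaust thermocouples
            tc_b = epics.caget('HVE-MY:CP3_TC_B_DEGC')
            if tc_a < tc_threshold_degC and tc_b < tc_threshold_degC:
                print('Fill completed in %d seconds.' % (time.time() - t0))
                return True
            time.sleep(1)
        print('Timeout reached before fill completed.')
        return False
    finally:
        epics.caput('HVE-MY:CP3_LLCV_POS_REQ', nominal_pct)  # always restore the nominal valve position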
This morning, with only H1 in OBSERVING, we took H1 to CORRECTIVE MAINTENANCE to swap out the ETMy L2 Coil Driver.
Shortly after this work started we received a GRB alarm & we chose to IGNORE this GRB since we were already OUT of OBSERVING. I did confirm that Joe Hanson (LLO operator) also received the GRB alarm.
There are continuing high dust counts in both the LVEA (mostly in the Bier Garten) as well as in both rooms of the PSL. I have increased the alarm levels on PSL-101, PSL-102, LVEA-6, and LVEA-10 by 10x to reduce operator alarm fatigue while I try to figure out the source of these particles. Posted is a 5-day trend of these four dust monitors and the wind at the CS.
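For reference, bumping the alarm levels by 10x could be done along these lines (a sketch under assumptions: the dust-monitor PV names below are hypothetical, and the actual change may have been made through the alarm-handler or dust-monitor configuration instead):

import epics   # pyepics

DUST_PVS = ['H0:PEM-CS_DUST_LVEA6_300NM_PCF',     # hypothetical PV names for the four monitors
            'H0:PEM-CS_DUST_LVEA10_300NM_PCF',
            'H0:PEM-PSL_DUST_101_300NM_PCF',
            'H0:PEM-PSL_DUST_102_300NM_PCF']

for pv in DUST_PVS:
    for field in ('HIGH', 'HIHI'):                 # minor/major EPICS alarm threshold fields
        old = epics.caget('%s.%s' % (pv, field))
        if old is not None:
            epics.caput('%s.%s' % (pv, field), 10 * old)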
Maintenance Discussion: Aiming for a short day, but longer tasks could be accommodated if needed.
Went over Work Permits.
The philosophy for the PSL is to keep all the systems as similar as possible. With help from folks at LHO, I've attempted to summarize the differences between the site PSLs as they currently stand. This will hopefully help track all the differences so we can converge again at some stage, and also help with noise hunting and with understanding why we might see something at one site and not the other.
See T1600592. I'm hoping for this to be a living document, and that as new changes are made, or old changes reverted, they are tracked there.
Andy, following work from TJ and Josh
My best guess for the cause of the extreme glitchiness is an electronic problem in the ETMY L2 driver. I think TJ and Josh were thinking the same thing, but I don't want to speak for them. We've been seeing the glitchiness get worse and worse at LHO for at least a week. These are short glitches, and now they are loud and frequent (at least once every few minutes). The humidity has been dropping as the weather gets cold, and last winter Robert and others showed that this was correlated with increases in these kinds of glitches. We also see that the only associated channels are the ETMY L2 noisemons. These do see the drive signal, so they will show glitches whenever DARM does, but these are very loud. I also quickly developed some code to measure a transfer function from data taken a few weeks ago and use it to subtract out the drive signal. The subtraction seems to work well, but the glitches remain, and get shorter, sharper, and louder -- so they are not just bleed-through of the drive. My hypothesis is that the low humidity is exacerbating an electronics problem in the ETMY L2 driver, maybe due to sparking. While LLO is down, would it be possible to replace the driver with a spare, or at least try to diagnose whether it is working properly?
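A minimal sketch of a transfer-function subtraction of the kind described (an assumed approach, not Andy's actual code): estimate the drive-to-noisemon transfer function from reference data a few weeks old, then subtract the predicted drive contribution from the noisemon at the time of interest.

import numpy as np
from scipy import signal

fs = 16384  # Hz, assumed sample rate

def estimate_tf(drive_ref, noisemon_ref, nperseg=16 * 16384):
    # H1 estimator H = P_xy / P_xx from the quiet reference stretch
    f, pxy = signal.csd(drive_ref, noisemon_ref, fs=fs, nperseg=nperseg)
    f, pxx = signal.welch(drive_ref, fs=fs, nperseg=nperseg)
    return f, pxy / pxx

def subtract_drive(drive, noisemon, f_tf, tf):
    # Predict the drive's contribution to the noisemon and remove it
    n = len(drive)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    tf_i = np.interp(freqs, f_tf, tf.real) + 1j * np.interp(freqs, f_tf, tf.imag)
    predicted = np.fft.irfft(np.fft.rfft(drive) * tf_i, n=n)
    return noisemon - predicted     # residual: glitches that survive are not drive bleed-through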
As an alternative to swapping the coil driver, one could also try swapping the control back to ETMX.
While we have been OBSERVING with a glitchy H1 for the last 19 hrs, there was mention that noisy violin modes could be a possible culprit.
Before taking any action, I consulted with Keita (Run Coordinator) to ask whether this would be acceptable (because L1 is currently down, so we would not jeopardize double coincidence). Keita pointed me to a note about diagnosing violin modes (alog 32615). So, I went through his diagnostics for determining whether we have ADC or DAC saturations (a main symptom of big violin modes).
Diagnostics for saturations due to violin modes:
1) Make sure the OMC DCPD is far from +/- 32k and the ESD is far from +/- 131k; this can be checked via time series.
2) Make sure there are no large values in a spectrum of these channels.
Both of these measurements are attached, and I believe they look OK (please check me on this). So I will not bother with an attempt at violin mode damping.
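For anyone repeating the check, here is a rough sketch of the two tests (an assumed approach with assumed headroom limits; the actual check was done with time series and spectra as described in alog 32615, not with this code):

import numpy as np
from scipy import signal

def check_saturation(omc_dcpd, esd_drive, fs=16384):
    # omc_dcpd, esd_drive: raw-count time series for the two channels
    dcpd_ok = np.max(np.abs(omc_dcpd)) < 0.9 * 32768     # ADC limit ~ +/-32k counts
    esd_ok = np.max(np.abs(esd_drive)) < 0.9 * 131072    # DAC limit ~ +/-131k counts
    # look for abnormally large lines near the violin fundamentals (~500 Hz)
    f, psd = signal.welch(omc_dcpd, fs=fs, nperseg=4 * fs)
    violin_band = (f > 490) & (f < 520)
    violin_peak_asd = np.sqrt(np.max(psd[violin_band]))
    return dcpd_ok, esd_ok, violin_peak_asd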
The decision not to damp violin modes seems like a good one today, but I would add two more things to check, since we have seen glitches around the violin modes and harmonics even when we are not close to having saturations (see alog 32026 for the symptom and alog 32050 for a solution).
First, you can check the glitch plot on the wall or the summary page (under Lock > Glitches) for a line of glitches around the violin modes or harmonics (500 Hz, 1 kHz, 1.5 kHz, etc.).
Second, you can look at the main spectrum on the wall and see if the violin modes are high compared to the reference.
Josh, Andy, David
After stumbling on the ringing phones in the LVEA in O2 (entry 32503), we sent this request to GravitySpy citizen scientists to find other times in h(t) that look like phone-ringing glitches. Though it is airing some dirty laundry, this is a way to engage a network of many interested folks to find issues.
The reply from user @EcceruElme pointed to two similar glitches in O1/ER10/O2.
One is in Livingston at 1132398529.14 (Nov 24th 11:08 UTC - link) and looks like an ER10 instance of engaging violin mode damping while in observation-ready but before SDF checks were being enforced, so that one is OK.
One is in Hanford at 1128269436.32 which is 2015-10-07 at 16:00 during a long nice science mode lock (summary page for that day). This sounds a little like a fax machine then some bangs and a human voice through a loudspeaker and some more bangs. In the hour around that time there are also some other loud bangs. We have no idea what it could be but it happened in otherwise unbroken excellent O1 data so perhaps folks on site know what it is and if it can be (or has been) turned off.
Attachments:
- Audio file of Input Optics Microphone with no filtering.
- Spectrogram of Input Optics Mic (it was also loud in BS Mic, others in LVEA but quiet in PSL room Mics) and Strain
- OmegaScan of Strain (from GravitySpy)
I listened to this in QuickTime with the bass at max and treble at min, and it sounds like a crane being actuated on/off multiple times, which I hear in the clicking. I think the screeching is metal-to-metal contact as the crane moved. I hear one voice.
Listening with treble at max and bass at min gave the same results for me.
inputoptics-mic-2015-10-07.wav sounds to me like an accidental dial into the public address system in the OSB/LVEA with feedback coming from standing near one of the overhead speakers throughout the building.
Is it possible to dial into the PA systems in the VEAs? That should definitely be disabled during science runs. I can see that for safety it may need to be turned on for, e.g., Tuesday maintenance, but it is such an easy way to corrupt the data with rich signals that it needs to be off limits. May I also ask that the card reader records at the time of the event (the audio event, that is) be inspected, in order to determine whether anyone entered the LVEA around that time?
Card reader shows no activity in the LVEA between Oct /07/2015 00:00:00 and Oct/08/2015 00:00:00 Pacific (that should be Oct/07/2015 07:00:00 to Oct/08/2015 07:00:00 UTC).
The event time is around Oct/07/2015 16:09:43 UTC.