After the chiller problems noted in LLO aLOG #25879, I did a visual inspection of both LHO chillers. There was no evidence of damage or deformation to the compressor or the crossbar. The coolant reservoir is not leaking. When the chiller is shut off there is a large back-surge of water, which has enough force to blow the filler plug across the room. If the filler plug is screwed in too tightly and there is no other pressure relief in the coolant system, it is possible that the rapid pressure increase from the water surging back into the reservoir could burst a seam in the reservoir. We leave our filler plug loosely secured so that it acts as a pressure relief valve.
PEM: All new dust monitors installed except in the PSL enclosure.
VAC: Replacement of fuse for CP1 electric fill valve. Investigate failure of BSC4 Pirani gauge. Investigate fill control of CP5.
Phil is replacing the fuse at CP1 with one larger than 1 A, since the new electronic valves draw at least that much current. CP1 has overfilled as a result. Current fill level is at 5%.
Chris B., Jamie R.

The hardware injection guardian node has been set up at LHO. The node should be ready to perform injections for the engineering run. Many thanks to Jamie. The node is called INJ_TRANS. I have paused it. Code is in: /opt/rtcds/userapps/release/cal/common/guardian

States that can be requested

A graph of the guardian states is attached. There are two states that can be requested:
* INJECT_SUCCESS: Request this when you want to do injections
* INJECT_KILL: Request this to cancel an injection

You should request INJECT_SUCCESS to perform an injection. The node will move to WAIT_FOR_NEXT_INJECT, which continuously checks for injections that are going to happen in the next five minutes (so if there are no injections for a long time, the node will spend a long time in this state). Once an injection is imminent, the node uploads an event to GraceDB, reads the waveform data, and waits to inject. Eventually it will move into the injection state and inject the waveform. It will then move back to the WAIT_FOR_NEXT_INJECT state and begin waiting for the next injection. While the node is preparing to do an injection (e.g. the GraceDB upload), there will be a USERMSG letting the operator know an injection is about to occur. See the MEDM screen below.

How to schedule an injection

These are just some shorthand notes on how to schedule an injection with the guardian node until a document is in the DCC. There are three steps:
(1) Update the schedule file and validate it
(2) Reload the guardian node
(3) Request INJECT_SUCCESS if it is not already requested

The current schedule file at the time of writing is located here: https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/schedule/schedule_1148558052.txt
The location of the schedule file is defined in https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/cal/common/guardian/INJ_TRANS.py; search for the variable schedule_path.

An example line is:
1145685602 INJECT_DETCHAR_ACTIVE 0 1.0 /ligo/home/christopher.biwer/projects/guardian_hwinj/test_waveforms/box_test.txt None
Where:
* The first column is the GPS start time of the injection.
* The second column is the name of the guardian state that will perform the injection. Choices are INJECT_CBC_ACTIVE, INJECTION_BURST_ACTIVE, INJECT_DETCHAR_ACTIVE, and INJECT_STOCHASTIC_ACTIVE.
* The third column says whether you want to do the injection in observing mode. If this is 1, the injection is done only if the IFO is in observing mode. Otherwise set this to 0.
* The fourth column is the scale factor. This is a float that is multiplied with the timeseries. For example, 2.0 makes the waveform's amplitude twice as large and 0.5 makes it half as large.
* The fifth column is the path to the waveform file. Please use full paths.
* The sixth column is the path to the meta-data file. Please use full paths. If there is no meta-data file, then type None.

Do not schedule injections closer than 300 seconds apart. If you want to schedule injections closer than 300 seconds, you will need to tune imminent_seconds in INJ_TRANS.py.

You should validate the schedule file. To run the validation script on an LHO workstation do:
PYTHONPATH=/opt/rtcds/userapps/release/cal/common/guardian/:${PYTHONPATH} python /opt/rtcds/userapps/release/cal/common/scripts/guardian_inj_schedule_validation.py --schedule /opt/rtcds/userapps/release/cal/common/guardian/schedule/schedule_1148558052.txt --min-cadence 300
Note you need the glue and gracedb python packages to run this script - there is currently an FRS to get these installed.
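For reference, below is a minimal parsing/cadence-check sketch for a schedule file in the six-column format described above. This is illustrative only; it is not the actual guardian_inj_schedule_validation.py or INJ_TRANS.py code, and the function and variable names (parse_schedule, check_cadence, MIN_CADENCE) are hypothetical.

```python
# Minimal sketch (illustrative only): parse a six-column schedule file as
# described above and check the minimum cadence between injections.
MIN_CADENCE = 300  # seconds; same value as --min-cadence above

VALID_STATES = {
    "INJECT_CBC_ACTIVE",
    "INJECTION_BURST_ACTIVE",
    "INJECT_DETCHAR_ACTIVE",
    "INJECT_STOCHASTIC_ACTIVE",
}

def parse_schedule(path):
    """Return a sorted list of (gps_start, state, observing_only, scale, waveform, metadata)."""
    entries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            gps, state, obs_mode, scale, waveform, metadata = line.split()
            if state not in VALID_STATES:
                raise ValueError("unknown injection state: %s" % state)
            entries.append((int(gps), state, int(obs_mode), float(scale),
                            waveform, None if metadata == "None" else metadata))
    return sorted(entries)

def check_cadence(entries, min_cadence=MIN_CADENCE):
    """Raise if any two scheduled injections are closer together than min_cadence seconds."""
    for previous, current in zip(entries, entries[1:]):
        if current[0] - previous[0] < min_cadence:
            raise ValueError("injections at %d and %d are less than %d s apart"
                             % (previous[0], current[0], min_cadence))

entries = parse_schedule("schedule_1148558052.txt")  # placeholder path
check_cadence(entries)
```

This mimics the essence of the --min-cadence 300 check performed by the validation script, but it is not a substitute for running that script.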
Failure states

There are a number of failure states, e.g. the waveform file cannot be read. If you validate the schedule, the node shouldn't run into any failures. If a failure state is entered, the node will not leave it on its own. To leave a failure state: identify the problem, resolve it, request INJECT_SUCCESS, and reload the node. Places where a failure could occur will print a traceback in the guardian log.

GraceDB authentication

I write this for anyone not familiar with the process. Running this guardian node requires a robot certificate because the node uploads events to GraceDB automatically. To get a robot certificate, follow the instructions at https://wiki.ligo.org/viewauth/AuthProject/LIGOCARobotCertificate. We created a robot certificate for the controls account at LHO for the h1guardian0 machine. We had to ask the GraceDB admins (Alex P.) to add the subject line from the cert to the grid-map file. In the hardware injection guardian node environment we set X509_USER_CERT to the file path of the cert and X509_USER_KEY to the file path of the key. Tested the GraceDB API with: gracedb ping (a minimal sketch of this check appears at the end of this entry).

Successful injections on GraceDB

Injections on GraceDB are given the INJ label if they are successful. A success message is also printed on the GraceDB event page, with the corresponding line from the schedule file. For example, H236068.

Test injections

At the end of the night I did a 2-hour series of CBC injections separated by 400 seconds. I've attached plots of those injections as sanity checks that everything looks alright.
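For anyone setting this up elsewhere, here is a minimal sketch of the authentication check described in the GraceDB authentication section above. It assumes the ligo-gracedb Python client and its GraceDb.ping() method; the certificate/key paths are placeholders, and this is not the code used in INJ_TRANS.py.

```python
# Minimal sketch (illustrative only) of checking robot-certificate authentication
# to GraceDB, along the lines of the `gracedb ping` test mentioned above.
import os

# Placeholder paths; point these at the robot certificate and key discussed above.
os.environ["X509_USER_CERT"] = "/path/to/robot_cert.pem"
os.environ["X509_USER_KEY"] = "/path/to/robot_key.pem"

from ligo.gracedb.rest import GraceDb

client = GraceDb()        # picks up the X509 credentials from the environment
response = client.ping()  # a successful response means authentication works
print(response)
```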
Command line to bring up MEDM screen: guardmedm INJ_TRANS
TITLE: 04/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
LOG:
To continue the ASC input matrix stability testing from last night, I restored the AS_A / AS_B RF36 WFS combinations we were using (-1 / 1 for PIT, and -0.8 / 0.5 for YAW). Apparently these haven't made it into the guardian yet. Ramping them on by hand appeared to improve the alignment. I also enabled a length dither, which we plan to use to monitor the DARM pole (LSC-OUTPUT_MTRX_1_9 set to 1). Leaving H1 locked in this state overnight.
DIAG_MAIN notification shows that HWSX had bad peak counts for a moment. The timeseries shows that this happened between 4:10 and 4:13 UTC but got better on its own. The glitch was probably caused by me stopping and restarting the code to look at stream images. The images look fine. The output power off the HWSX sled is ~0.8W.
I think you mean 0.8mW
Oops yes. Sorry.
Chris, Keita, Evan
Today we were able to lock the outer ISS loop with the modecleaner at 20 W (and no interferometer). We looked at several PSL/IOO PD signals (the FSS transmission PD, the ISS inner-loop PDs, the IM4 transmission PD, and the ISS outer-loop PDs) and tried to understand their behavior in different ISS configurations.
Naively one would expect all these signals (except the in-loop ISS PDs) to agree with each other, since they should all be out-of-loop sensors for the RIN leaving the PMC. Together, these signals monitor three of the four PMC ports: the FSS transmission sees the RIN of one port, the out-of-loop inner-loop ISS PD sees the RIN of another port, and IM4 trans and the out-of-loop outer-loop ISS PD see the RIN of yet another port.
These are the behaviors we observed (see attached pdf):
We think that a possible explanation for these effects is that both ISS PDs are seeing some correlated noise that is not seen by either the FSS PD or the post-IMC PDs. In this scenario, the inner-loop ISS would suppress the HPO noise but impress this correlated noise on the light entering the PMC.
Briefly we entertained the idea that the light circulating in the PMC could be multimoded (either from the NPRO or the HPO), but judging from the RIN before and after the IMC, this seems to not be the case (png attachment).
One other idea is that some of the 808 nm light is getting through the PMC and onto the ISS.
Is this really incompatible with jitter? There are a lot of variations visible on the PMC reflected camera. The finesse of the PMC isn't that great (~100), and neither is the jitter suppression. If there is a static misalignment into the PMC, there would also be a linear term for the jitter-to-intensity conversion. The two inner-loop detectors see rather different signals at 10 Hz if the inner loop is engaged but not the outer one.
Certainly the jitter seen on the IMC WFS is worse than before the HPO turn-on.
Before the turn-on, the jitter below 100 Hz was 1 nrad/√Hz or so (LHO#21212). Now it is 10 nrad/√Hz at 10 Hz, with a 1/f slope.
The attachment shows IMC signals with the inner ISS loop off (dashed) and on (solid).
Update: BS alert. Read the next entry.
Jitter is much larger than before, but jitter alone doesn't seem to simultaneously explain all of our observations when the 1st loop is closed but the 2nd loop is open.
PDA=P+a*J+Sa, PDB=P+b*J+Sb, IM4=P+x*c*J+Sim4
P is the intensity noise leaving the AOM. When the loop is open it's just the free running noise P0.
J is the beam jitter (01 amplitude relative to 00) coming out of PMC.
a, b and c are the jitter to intensity coupling at PDA, PDB and IM4 trans due to clipping or diode inhomogeneity or whatever.
x is the attenuation of 01 mode amplitude by IMC, which is about 0.3%.
Sa, Sb and Sim4 are the sensing noise.
When 1st loop is closed, J is imprinted on P:
P=P0/(1+G) - G/(1+G) *(b*J + Sb) ~ P0/G - b*J - Sb,
PDA ~ P0/G + (a-b)*J +Sa-Sb,
IM4 ~ P0/G + (x*c-b)*J + Sim4-Sb ~ P0/G -b*J +Sim4-Sb. (note x=3E-3.)
where G is the OLTF.
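For completeness, the same closed-loop substitution written out explicitly (a restatement of the lines above in LaTeX, using the large-|G| approximation; no assumptions beyond the definitions already given):

```latex
\begin{align}
P &= \frac{P_0}{1+G} - \frac{G}{1+G}\,(b\,J + S_b)
   \;\approx\; \frac{P_0}{G} - b\,J - S_b \\
\mathrm{PDA} &= P + a\,J + S_a
   \;\approx\; \frac{P_0}{G} + (a-b)\,J + S_a - S_b \\
\mathrm{IM4} &= P + x\,c\,J + S_{\mathrm{IM4}}
   \;\approx\; \frac{P_0}{G} + (x\,c - b)\,J + S_{\mathrm{IM4}} - S_b
   \;\approx\; \frac{P_0}{G} - b\,J + S_{\mathrm{IM4}} - S_b
\end{align}
```

where the last step uses x ≈ 3e-3, so the x*c term is taken to be negligible compared to b.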
Allowing some conspiracies, but not extreme ones, the lack of coherence between PDA and IM4 is explained by either of the following:
The first case is false because swapping PDA and PDB makes no difference in IM4.
In the second case, the PDA spectrum should look like it is all sensing noise, but this "sensing" noise is in reality large at 10 Hz.
So, even if the clipping effect is common to PDA and PDB so that PDA and IM4 become incoherent, we need another noise that behaves like large sensing noise, i.e. one that has about the same amplitude on PDA and PDB, is incoherent between PDA and PDB, and does not appear on downstream sensors.
I take my words back about PDA-downstream coherence.
I was looking at the coherences from this morning, and it seems like when only the first loop is on, the 1st-loop out-of-loop sensor is coherent with the downstream sensors before and after the IMC (attached, bottom red and blue). The plot is calibrated in RIN.
Note that we switched the control photodiode from PDB to PDA last night, so in this plot the out of loop sensor is PDB. I switched them back again at 17:49:10 UTC.
Anyway, the out-of-loop sensor is more coherent with the downstream sensors than the HPL monitor is at f < 10 Hz (bottom red|blue vs. brown|pink), but HPL is more coherent from 10 to 200 Hz. The difference between the bottom brown and bottom pink traces probably doesn't mean much; it's just the noise floor difference between IMC-PWR and MC2_TRANS.
Some thinking is necessary, but at the moment I cannot say that jitter cannot explain everything.
John and I visited CP5 before lunch today to investigate why the liquid level is so noisy (compared to CP6). We verified the wires were tight at the controls rack, and eventually made our way to the LL transducer. We closed the exhaust at the transducer and the flow stabilized, suggesting the instability is caused by a real pressure differential and is not electrical. We did not check the LLCV pneumatic actuator. After trending the numbers this evening, it looks like we did a real number on the system. See plots attached. The % valve open is ranging full scale, and the LL spans 90-95% full.
Could this be related to the midstation air compressor replacement?
Richard just reset the PID values (same values). The LL seems to have stabilized (well, to its prior stability which is still relatively noisy). We will watch it throughout the day.
Per https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=26797, set the valve 25% open in manual mode. It currently reads 94% full. Tomorrow we will transition to PID.
Gerardo, Kyle
We torqued BSC4's dome bolts and found many that were very loose. We then valved out the pump cart, and the ion pump responded by temporarily "railing" but has since come on-scale on its own - i.e. we fixed the outer O-ring leak -> The pump cart was then shut down (this pump cart has been running near the SW corner of BSC4 for the past few weeks and shutting it off, finally, is "very exciting" for us!). During this process, it looks as if the signal cable to the Diagonal Volume's pressure gauge pair, PT140, was disturbed, resulting in an anomalous reading from the Pirani gauge. This then tripped off the Cold Cathode gauge -> We were able to "wiggle" the cable etc. and get them to resume normal readings. Additionally, we determined that HAM11's ion pump needs to be replaced, but we are electing not to replace it since HAM12's ion pump can keep the combined HAM annulus volumes at adequate vacuum for the time being. Currently there are two pump carts running in the LVEA by HAM11 and HAM12; these will be shut down in the next day or two.
Yesterday (Monday 25th April) the MY vacuum controls system was upgraded to Beckhoff. Today (Tuesday 26th April) both LVEA systems (LX and LY) were upgraded to Beckhoff. I have performed the following on all three systems:
* = in the new system, CP pump levels cannot exceed 100%; therefore I reduced the HIGH ALARM limit from 100% to 99%
still to do:
I updated the control room alarm handler.
Reset ITMX.
15:02 UTC Peter to H1 PSL enclosure
15:06 UTC Turned off BRS sensor correction at end X and end Y for Karen to enter the VEA to clean
15:19 UTC Gerardo shimming CP1 LLCV and CP2 LLCV in preparation for Beckhoff vacuum controls upgrade
15:23 UTC Filiberto taking tools to LVEA for Beckhoff vacuum controls upgrade
15:27 UTC Joe to LVEA to charge batteries
15:27 UTC Jeff K. starting charge measurements on ETMX and ETMY
15:30 UTC Bubba and Nicole to LVEA
15:33 UTC Sprague through gate
15:33 UTC Richard to LVEA to work with Filiberto
15:33 UTC Betsy and Travis to LVEA and optics lab
15:42 UTC Ed to LVEA to work with Richard and Filiberto
15:59 UTC Joe out of LVEA, taking Sprague to mid and end stations
16:05 UTC Carlos working on DMT network
16:12 UTC LN2 delivery through gate
16:12 UTC Karen out of end X, getting shoe covers from corner station
16:34 UTC Karen done dropping off shoe covers at end X, going to end Y
16:39 UTC Beckhoff vacuum controls upgrade done at LY
16:46 UTC John and Chandra to mid X to look at signals
16:47 UTC Hugh to end stations to check HEPI fluid levels
16:54 UTC Jeff K. done charge measurements
16:56 UTC Joe back from escorting Sprague
17:19 UTC Kyle to end Y to record number for property audit
17:21 UTC Gerardo to BSC4 to tighten bolts around dome
17:22 UTC Paradise water delivery
17:24 UTC Karen and Chris done at end Y
17:30 UTC Ryan done WP 5831 (internet back)
17:37 UTC Kyle back from end Y
Richard and Filiberto starting Beckhoff vacuum controls upgrade at LX
17:56 UTC John and Chandra back from mid X
17:59 UTC Keita escorting film crew to X arm mid point beam tube tunnel
18:03 UTC DAQ restart for LX and LY Beckhoff vacuum control channels
18:10 UTC Joe to LVEA
18:15 UTC Jim W. taking ETMY ISI and HEPI down for measurements
18:28 UTC Joe back
18:44 UTC Keita done from escorting film crew
18:45 UTC Gerardo done at BSC4
18:46 UTC Vending machine delivery
19:42 UTC Filiberto to LVEA to work on dust monitor wiring
19:42 UTC Carlos done
20:28 UTC Richard to look at CP1
20:29 UTC Filiberto done with dust monitor work, going to beer garden to see what is needed for cabling PT170 and PT180 BPG402 gauges to Beckhoff vacuum controls
20:35 UTC Bubba replacing mid X instrument air compressor
20:36 UTC Peter done in H1 PSL enclosure
20:45 UTC Travis to LVEA to drop off bins and parts
20:59 UTC Travis back
21:23 UTC Filiberto to LVEA to set up for pulling cable
21:45 UTC Filiberto pulling cable from LX vacuum rack to beer garden
21:58 UTC Jeff B. to LVEA to work on dust monitor wiring
23:18 UTC Filiberto done
C. Cahillane
I have produced statistical uncertainty spectrograms for all of O1. For the most part the uncertainty doesn't change much over all of O1. The biggest concern is a couple of days around Nov 17th with large kappa variations. Again, Jeff has suggested that I detrend all of the kappas to eliminate this spike in uncertainty. I believe this is good evidence that the statistical uncertainty over all of O1 is fairly constant. For the LLO statistical uncertainty spectrograms, please see LLO aLOG 25652.
C. Cahillane
I have detrended the kappas and reproduced the above spectrograms without the massive spikes of uncertainty.
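For reference, here is a minimal sketch of the kind of detrending step described above, assuming a kappa time series is available as a NumPy array. The channel handling and spectrogram code are not shown, and scipy.signal.detrend is just one standard choice, not necessarily the routine actually used here; the array name kappa_tst and the fake data are illustrative.

```python
# Minimal sketch (illustrative only): remove a linear trend from a kappa time
# series before propagating it into the statistical uncertainty spectrograms.
import numpy as np
from scipy.signal import detrend

np.random.seed(0)
t = np.arange(0.0, 3600.0, 60.0)  # one hour of minute-cadence samples (fake times)
# Fake kappa data: value near 1 with a slow drift plus noise (placeholder for a real channel).
kappa_tst = 1.0 + 1e-4 * (t / t[-1]) + 1e-3 * np.random.normal(size=t.size)

# detrend() subtracts the best-fit line; adding back the mean keeps the kappa
# at its physical scale (~1) while removing the slow drift.
kappa_detrended = detrend(kappa_tst) + np.mean(kappa_tst)
```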