TITLE: 09/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY: I will start H1 by running an alignment, then try locking to see where we're still having trouble.
TITLE: 09/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: NONE
SHIFT SUMMARY:
IFO is in IDLE for MAINTENANCE (and EQ) for the night. OWL is cancelled due to ongoing corrective maintenance from the most recent outage.
7.4 EQ from Kamchatka, Russia.
Sheila got to it first and untripped the tripped systems (alog 86896). I noticed that some of the suspensions (PRs and SRs) stalled, so I set them to their respective request states, and they got there successfully.
IFO is still in EQ mode but ringing down.
No HW WDs tripped and everything seems to be "normal" now.
LOG:
None
Ops overview screenshot attached. I untripped things but will not try locking tonight.
While untripping suspensions I noticed that, oddly, the IM2 watchdog RMS monitors were just below the trip threshold, while the other IMs were 2-3 orders of magnitude smaller. The second screenshot shows this curious behavior. What about IM2 undamped is so different from the other IMs?
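As a possible starting point for chasing this down, here is a minimal, hedged sketch of comparing the IMs directly: band-limited RMS of each IM's top-mass OSEM-derived pitch signal over the same undamped stretch. The channel names and GPS span are assumptions and would need checking against the SUS model.

```python
# Hedged sketch: compare RMS of a top-mass OSEM-derived signal across the IMs
# to see why IM2's watchdog RMS sits so close to its threshold.
# Channel names are assumed (not verified); the GPS span is a placeholder.
from gwpy.timeseries import TimeSeriesDict

start, stop = 1441900000, 1441900600  # placeholder 10-minute undamped span
channels = [f"H1:SUS-IM{n}_M1_DAMP_P_IN1_DQ" for n in (1, 2, 3, 4)]

data = TimeSeriesDict.get(channels, start, stop)
for name, ts in data.items():
    rms = ts.detrend().rms(1).mean().value  # 1-second RMS, averaged over the span
    print(f"{name}: mean RMS = {rms:.3g}")
```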
Summary: omicron glitch rate increases after the power outage, and when the ISS loop is turned off
Using omicron, I made some plots of the glitch rates both before/after the power outage, and then again when the ISS loops were turned on/off. The times chosen were based on the detchar request made by Elenna in alog 86878. For the glitches, I used the omicron frequency/snr parameters of 10 < freq < 1000 Hz and 5 < snr < 500. The first slide in the attached pdf is a comparison of before/after the power outage. It's fairly obvious from the summary pages that the glitch rate increased, but I wanted to quantify just how much it changed. Before the power outage, it was around 45 glitches per hour, and after, it jumped up to about 600 glitches per hour. For both periods, I used around 10 hours of low-noise time for the comparison.
I then did a comparison between when the ISS loop was on versus off. I found that the glitch rate increased when the ISS loop was turned back off, as can be seen in slide 2. On slide 3, I have summary page plots showing the WFS A YAW error signal, which looks noisier when the ISS loop is off, and the glitchgram below. On slide 4, I made a spectrum comparing H1:CALIB_STRAIN_CLEAN while the ISS loop was off versus on, and we can see an increase in noise from ~0.06 Hz - 2 Hz, and at slightly higher frequencies of ~10 Hz - 40 Hz.
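For reference, a minimal sketch of how such an hourly rate can be computed once the Omicron triggers are on disk (this is not the exact script used; the filename, file format, and column names are assumptions that depend on how the triggers were collected):

```python
# Sketch: hourly glitch rate from Omicron triggers with the same cuts as above
# (10 < freq < 1000 Hz, 5 < snr < 500). The filename and table/column names
# are placeholders; adjust for the actual trigger files.
from gwpy.table import EventTable

SPAN_HOURS = 10  # length of the low-noise stretch used for the comparison

events = EventTable.read("H1-OMICRON_TRIGGERS-before_outage.xml.gz",
                         format="ligolw", tablename="sngl_burst")
cut = events.filter("peak_frequency > 10", "peak_frequency < 1000",
                    "snr > 5", "snr < 500")

print(f"{len(cut)} triggers -> {len(cut) / SPAN_HOURS:.0f} glitches/hour")
```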
IFO is still trying to lock. I was given instructions to lock until PREP_ASC_FOR_FULL_IFO but after an initial alignment, the highest we could get to was CARM_TO_ANALOG. DRMI is just very unstable, even with manual intervention to maximize POP signal width.
IFO will keep trying to lock until PREP_ASC_FOR_FULL_IFO. I will change the request per instructions.
TITLE: 09/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: It's been another busy day of troubleshooting following the power outage. There have been many logs entered covering things people have looked into and tried (please see those for more information, as it would be impossible to summarize all of it here), but the gist is that we have accepted, for now, the apparent alignment shift through the IMC, even though this shows much higher reflected power, and have reduced the amount of light on the IMC REFL WFS to keep them safe. Since then, we have been trying to relock H1 in this new configuration, but this has so far been plagued with challenges; see Elenna's alog for more on that.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:41 | FAC | Randy | Hi-bay | N | Moving boom lift outside | 16:41 |
15:54 | ISC | Sheila | LVEA | N | Plugging SR785 in for IMC measurement | 16:06 |
16:11 | VAC | Richard | LVEA | N | Checking vacuum pump | 16:18 |
17:48 | PEM | TJ | LVEA | N | Check DM by HAM6 | 18:18 |
18:44 | VAC | Gerardo | LVEA | N | Replacing AIP controller on HAM6 | 19:05 |
19:42 | ISC | Camilla | Opt Lab | N | Getting some bolts and power meter | 20:15 |
19:58 | ISC | Sheila, Elenna | LVEA | LOCAL | Adjusting IOT2L waveplate | 20:59 |
21:01 | ISC | Elenna | LVEA | N | Check SR785 | 21:37 |
21:37 | CDS | Marc | MY | N | Finding parts | 22:37 |
We have been trying to lock the IFO since we reduced the IMC REFL power. DRMI locking has been causing lots of trouble. The first time, we locked DRMI quickly, but then as the ASC engaged it pulled the buildups away and I couldn't turn off the ASC fast enough before a lockloss occurred. Then, locking DRMI and PRMI started taking a very long time, 30 minutes or more. Sheila and I tried turning off the DRMI ASC except for the beamsplitter. Then, Ryan S touched up the rest of the DRMI alignment by hand. We made it to ENGAGE ASC FOR FULL IFO, but as the ASC engaged again the POP 18 buildup started dropping, and then we saw large glitches in all the signals, LSC and ASC, then lockloss. We don't know what caused the issues in full IFO ASC, but it seems similar to the DRMI issue: for whatever reason the ASC is not working properly.
We would like to go back to PREP ASC FOR FULL IFO and slowly engage the ASC by hand to figure out what's going wrong. But DRMI locking is still taking 30 minutes or more, and in our most recent attempt even the beamsplitter ASC didn't work properly. I think it's safe to say we won't be able to lock, let alone observe, for a while. Even if we solve these ASC issues, we still need to get to laser noise suppression and check various things to ensure the IMC and ISS loops are working properly, that we have the right amount of laser power, and that there are no significant glitches or degradations.
TITLE: 09/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 8mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
IFO is LOCKING in DRMI
We still don't know exactly what happened or whether we fixed it, but:
Now dealing with DRMI instability. Hoping to get locked.
Removed and replaced the ion pump controller for the annulus system at HAM6.
No issues doing the replacement, but the AIP signal remains railed; we will wait and continue troubleshooting next week.
(Anna I., Jordan V., Gerardo M.)
Late entry.
Last Tuesday we attempted to "activate" the NEG pump located on one of the top ports of the output mode cleaner tube, but we opted to do a "conditioning" instead due to issues encountered with the gauge associated with this NEG pump. What prompted the work on the NEG pump was a noted bump in the pressure inside the NEG housing seen by its gauge; the bump was also noted inside the main vacuum envelope. See the attachment for the pressure at the NEG housing (PT193) and at PT170.
Side note: Two modes of heating are available for the NEG pump, ACTIVATION and CONDITIONING. The difference between the two options is the maximum applied temperature: for "conditioning" the max temperature is 250 °C, whereas for "activation" it is 550 °C. The maximum temperature holds for 60 minutes.
We started by making sure that the isolation valve was closed; we checked it and it was closed. We connected a pump cart plus can turbo pump to the system and pumped down the NEG housing volume; the aux cart dropped to 5.4x10^-05 torr, ready for "activation". However, we noted no response from the gauge attached to this volume: the PT193 signal remained flat. So we aborted the "activation" and instead ran a "conditioning" on the NEG. After the controller finished running the program, we allowed the system to cool down while pumping on it, then the NEG housing was isolated from the active pumping, the hoses and can turbo were removed, and the system was returned to "nominal status", i.e. no mechanical pumping on it.
Patrick logged in to h0vaclx; the system showed the gauge as nominal, but it would not allow the filaments to be turned on because the pressure reading was high.
Long story short, the gauge is broken. Next week we'll visit NEG1, and its gauge PT191.
ls /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO -lt | head -n 6
total 295
-rw-r--r-- 1 guardian controls 160 Sep 9 07:58 ETMX_12_Hz_1441465134.txt
-rw-r--r-- 1 guardian controls 160 Sep 9 07:50 ETMY_12_Hz_1441464662.txt
-rw-r--r-- 1 guardian controls 160 Sep 9 07:50 ITMY_15_Hz_1441464646.txt
-rw-r--r-- 1 guardian controls 160 Sep 9 07:50 ITMX_13_Hz_1441464644.txt
-rw-r--r-- 1 guardian controls 160 Sep 2 07:58 ETMX_12_Hz_1440860334.txt
In-Lock SUS Charge Measurements did indeed run this Tuesday.
python3 all_single_charge_meas_noplots.py
Cannot calculate beta/beta2 because some measurements failed or have insufficient coherence!
Cannot calculate alpha/gamma because some measurements failed or have insufficient coherence!
Something went wrong with analysis, skipping ITMX_13_Hz_1440859844
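For convenience, a small Python equivalent of the `ls -lt | head` check above, flagging whether any measurement files in that directory were written within the last week:

```python
# List the newest in-lock charge measurement files and mark any written in the
# last 7 days (i.e. whether the measurements ran this Tuesday).
import time
from pathlib import Path

rec_dir = Path("/opt/rtcds/userapps/release/sus/common/scripts/quad/"
               "InLockChargeMeasurements/rec_LHO")
one_week_ago = time.time() - 7 * 24 * 3600

for f in sorted(rec_dir.glob("*.txt"),
                key=lambda p: p.stat().st_mtime, reverse=True)[:6]:
    tag = "NEW" if f.stat().st_mtime > one_week_ago else "old"
    print(f"{tag}  {time.ctime(f.stat().st_mtime)}  {f.name}")
```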
I've put together a number of histograms that can be used for a quick glance at locking-state performance.
Sheila & Elenna wanted some DRMI stats dating back to Aug 19th.
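Here is a rough, hedged sketch of one way such DRMI acquisition statistics could be pulled from the ISC_LOCK guardian state channel; the state numbers below are placeholders and would need to be taken from the real ISC_LOCK state list, and the GPS span should be set to cover Aug 19 onward.

```python
# Sketch: estimate DRMI acquisition durations by following the ISC_LOCK
# guardian state number. State numbers and GPS span are placeholders.
import numpy as np
from gwpy.timeseries import TimeSeries

START, END = 1439650000, 1441700000   # placeholder GPS span (~Aug 19 onward)
ACQUIRE_DRMI = 101                    # placeholder: start of DRMI acquisition
DRMI_LOCKED = 102                     # placeholder: DRMI locked

state = TimeSeries.get("H1:GRD-ISC_LOCK_STATE_N", START, END)

durations, t_start = [], None
for t, s in zip(state.times.value, state.value):
    if s == ACQUIRE_DRMI and t_start is None:
        t_start = t
    elif s >= DRMI_LOCKED and t_start is not None:
        durations.append(t - t_start)
        t_start = None
    elif s < ACQUIRE_DRMI:
        t_start = None   # dropped back toward DOWN before DRMI locked

if durations:
    print(f"{len(durations)} acquisitions, "
          f"median {np.median(durations) / 60:.1f} min")
```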
Sheila and I went out to IOT2L. We measured the power before and after the splitter called "IO_MCR_BS1" in this diagram of IOT2L. This is listed as a 50/50 beamsplitter. However, there is no possible way it can be, because we measure about 2.7 mW just before that splitter, and 2.45 mW on the IMC refl path and 0.25 mW on the WFS path. So we think it must be a 90/10.
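A quick check of those numbers shows why 50/50 can't be right:

```python
# The measured powers around IO_MCR_BS1 are consistent with ~90/10, not 50/50.
p_in, p_refl, p_wfs = 2.7, 2.45, 0.25   # mW, measured
print(f"REFL path: {p_refl / p_in:.0%}, WFS path: {p_wfs / p_in:.0%}")
# -> REFL path: 91%, WFS path: 9%
```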
Sheila then adjusted the half-wave plate upstream of this splitter so that the light on IMC refl dropped from 2.4 mW to 1.2 mW, as measured from the IMC refl diode.
We compensated this change by raising the IMC gains by 6 dB in the guardian. I doubled all the IMC WFS input matrix gains, SDF attached.
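As a sanity check on the compensation: halving the detected IMC REFL power removes a factor of 2 of optical gain, which is presumably what the +6 dB servo gain change and the doubled WFS input-matrix elements put back.

```python
# Factor of 2 expressed in dB (what the guardian gain change compensates).
import math
print(f"20*log10(2) = {20 * math.log10(2):.2f} dB")   # ~6.02 dB
```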
SDFed the IMC PZT position that we think we like.
After the waveplate adjustment and updating of gains in Guardian, we took an IMC OLG measurement with the IMC locked at 2W input. The UGF was measured at 35.0 kHz.
(In the attached 1-week trend, the [-7 day, -2 day] range is before the power outage and [-1 day, 0 day] is after the long high-power lock post power outage. Ignore [-2 day, -1 day] because the ISS 2nd loop was messing with the AOM during that time.)
The ISS QPD (top right) shows a clearly measurable shift in the alignment, mostly in DX (YAW). The QPD is downstream of the PMC; the calibration is supposed to be in mm, so nominally the shift is just 5.4 um, but there's a focusing lens in front of it and unfortunately I don't know the focal length nor the distance between the lens and the QPD for now, so the only thing I can say is that the alignment changed downstream of the PMC in YAW.
The BEA (Bull's Eye Sensor, top left) also shows an alignment shift upstream of the PMC. The input matrix of this one is funny, so don't read too much into the magnitude of the change. I'm not even sure if it was mostly in PIT or in YAW. (The beam size, 2nd from the top in the left column, also shows a change, but I won't trust that because that output is supposed to change when there's an alignment change.)
In our first long lock after the power outage, the ISS second loop increased the input power to the IMC by 0.1 W/hour for 10 hours; MC2 trans was stable, while IMC refl DC increased by 1.7 mW per hour (first screenshot).
Last night we left the IMC at 2 W and saw no further degradation (second screenshot).
Now we have increased the power to 60 W for an hour and see no further degradation after the initial transient, during which the ASC was moving and there may have been a thermal transient (third screenshot).
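For reference, a hedged sketch of how the drift rate quoted above (~1.7 mW/hour) can be extracted from the minute trend of the REFL DC channel; the GPS times are placeholders for the lock stretch of interest:

```python
# Sketch: fit a line to the IMC REFL DC minute trend to get a drift rate in
# mW/hour. The GPS span is a placeholder; the channel reads out in mW.
import numpy as np
from gwpy.timeseries import TimeSeries

start, stop = 1441000000, 1441036000   # placeholder 10-hour span

refl = TimeSeries.get("H1:IMC-REFL_DC_OUT16.mean,m-trend", start, stop)
t_hours = (refl.times.value - refl.times.value[0]) / 3600.0
slope, _ = np.polyfit(t_hours, refl.value, 1)
print(f"IMC REFL DC drift: {slope:.2f} mW/hour")
```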
Daniel suggested that, as the power on the IMC REFL PD (IOT2L D1300357) has increased by a factor of around 3 (it was 19 mW at 60 W in, now 60 mW at 60 W in), we should reduce the power on this diode, as it doesn't work well with high powers. H1:IMC-REFL_DC_OUT16
At 2 W input, it was 0.73 mW and is now 2 mW. Aim for a factor of 2 reduction (to make it simple to reduce electronics gains).
Aim to reduce power on IMC REFL to 1mW using IOT2L waveplate when there is 2W incident on IMC.
Decided we should do this with the IMC OFFLINE. Ryan S put 0.1W into the IMC with the PSL waveplate. This gives 2.37mW on H1:IMC-REFL_DC_OUT16.
Aim to reduce this by a factor of 2 to 1.18mW.
The SEI and SUS trips from the EQ moved alignment of the IMs quite a bit, so to lock XARM_IR, I restored these alignments based on top-mass OSEMs to what they were pre-trip. This was the only intervention needed for initial alignment. Starting main locking now.
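For the record, a hedged sketch of the restore step: reading an IM's top-mass (M1) pitch/yaw from just before the trip, to use as the target when moving the alignment back. The channel names, units, and GPS time are assumptions, not the exact ones used.

```python
# Sketch: fetch pre-trip top-mass OSEM-derived alignment of one IM to use as
# a restore target. Channel names and the GPS time are placeholders.
from gwpy.timeseries import TimeSeriesDict

pre_trip = 1441850000   # placeholder GPS time shortly before the EQ trip
channels = ["H1:SUS-IM4_M1_DAMP_P_INMON",   # assumed channel names
            "H1:SUS-IM4_M1_DAMP_Y_INMON"]

data = TimeSeriesDict.get(channels, pre_trip, pre_trip + 60)
for name, ts in data.items():
    print(f"{name}: restore target ~ {ts.mean().value:.2f} (channel units)")
```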
Lockloss at DHARD_WFS. After locking DRMI, I tried moving PRM, SRM, and IM4 to get the buildups better, but I could not. I was then stepping through the following states slowly to see if an issue cropped up, and sure enough, after DHARD_WFS had completed and I was waiting for DHARD to converge, the alignment was pulled away, eventually causing a lockloss; screenshot attached. I believe this was happening yesterday also. POP18/90 had been slowly degrading since locking DRMI, which may be because only MICH is enabled during DRMI ASC.