TITLE: 06/27 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 137Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY: In NLN for 5h45. ITMY Mode5/6 violins slightly high
Dust monitors, SUS, SEI, CDS, VAC okay. Our DMT "Observe" data on online.ligo.org isn't updating.
TITLE: 06/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 137Mpc
SHIFT SUMMARY:
- H1 had a power glitch, but has since been recovered
- During an IA post-recovery, I had to move SRM by 20 urad in P and 13 in Y to get SRY to catch - something we haven't had to do for a while
- Lockloss - due to commissioning error
- Upon relocking, I had an issue where there was no light on ALS X. After trying to move the EX suspensions to no avail, I tried a slider revert for ETMX and TMSX back to the last time the green arms were locked - no dice
- Held at OMC whitening to damp violins for ~10 minutes
- Acquired NLN @ 1:22, OBSERVE @ 1:23
- S230627c @ 1:53
- EX saturation @ 2:55
- Leaving H1 to Camilla with H1 still going strong :)
LOG:
No log for this shift.
H1 has been locked and in Observing for just under 2 hours. Ground motion and wind are low, and the IFO appears stable. Event @ 1:53 UTC.
Since we are back to a lower and more stable operating power, I reduced the power step-up time in the MAX_POWER guardian state. Each 5W step up from 25W took 60 seconds; that time is now down to 45 seconds. Tomorrow I'd like to try a 30 second step time, which I think should be fine since the ASC is much more stable up to 60W. This has already reduced the relocking time by almost 2 minutes (15 seconds saved on each of the seven steps from 25W to 60W), and will reduce it more.
Today I reduced this timer to 30 seconds, and we just had a successful power-up! This is now guardianized; it should shave 3.5 minutes off the locking process.
Note: the ASC loops are notoriously unreliable. I encourage operators to pay attention to the power-up process; if there are problems with ASC instabilities during the final power-up, increasing this timer will likely be a good solution. Tagging ops.
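For illustration, here is a minimal sketch of how a guardian state can pace the power-up with a tunable step timer. This is not the actual ISC_LOCK code; the channel name and the 60W target are placeholders (ezca is provided by the guardian environment):

    from guardian import GuardState

    STEP_W = 5         # 5 W per step, per the log above
    STEP_WAIT = 30     # seconds between steps: was 60, then 45, now 30

    class MAX_POWER(GuardState):
        def main(self):
            self.target = 60              # placeholder final power [W]
            self.timer['step'] = 0        # let the first step fire immediately

        def run(self):
            if ezca['IMC-PWR_REQUEST'] >= self.target:   # placeholder channel
                return True               # power-up complete, state is done
            if self.timer['step']:        # timer expired, take the next step
                ezca['IMC-PWR_REQUEST'] += STEP_W
                self.timer['step'] = STEP_WAIT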
When relocking (currently at MAX_POWER), Elenna and I noticed a possible error on the violin mode overview. The overview was reading that some of the 500 Hz monitor filters, particularly ITMY 5/6 and ETMX 18/19, are showing abnormally high values. However, DARM shows that the peaks, while not great, are nowhere near as bad as the violin overview claims - both screenshots attached. In addition, the DCPD overview is actually converging and we are getting no verbal saturations anywhere. This is the first we're seeing of this, so we're not really sure what to make of it (Elenna theorized that it could have been a faulty impulse response). Will keep monitoring as we continue up; curious what will happen with the VIOLIN_DAMPING guardian once ISC_LOCK gets to DAMP_VIOLINS_FULL_POWER.
(Jordan V., Gerardo M.)
Today at 21:06 UTC we removed the BSC5 AIP controller and replaced it with a "new" one, in an attempt to solve the oscillation noted on the BSC5 readback channel; see attachment.
Unfortunately, after looking at the trend data, it appears the oscillation remains despite the new controller. We will continue to look into this.
Dust pump 2950, removed from EX, failed to turn over when plugged in on the test bench.
After we opened it up following the procedure in T2100415-v1 (dcc.ligo.org/LIGO-T2100415), we found the vanes were broken and had jammed the pump, stopping it from rotating. See pictures attached.
When we tested the pump after the rebuild it had excellent performance on the test bench. But once again when we redeployed the unit it seemed to have a hard time pumping down.
After adjusting the tightness of the black plastic filter caps and seals on the front of the pump housing, I noticed that it was leaking from the exhaust seal. Once I realized that there wasn't anywhere the exhaust could easily be expelled from, it all made sense. The brass manifold muffler was clogged, and installing a new one resolved the issue of the pumps not being able to pump down properly when deployed. The reason the pumps work on the test bench but not when deployed is that the volume of air that needs to be pumped down when deployed is much greater than the volume they pump down during testing.
Dust pump 1194 from the CS was opened up. The filters had holes in them and there was a lot of black fuzz on the inside. The vanes looked good, though. We rebuilt it anyway because it hadn't been rebuilt in a while.
While it was on the test bench, I made a cable to power it from AC power. This pump seemed to lurch forward and then sputter out, lurch then sputter - kinda sounded like a lawn mower running out of gas. I think the extension cable we used in the welding shop may not be able to provide enough energy to run this particular pump, as it's much bigger than the corner station pumps. But the vacuum pressure was good during the sputtering, so we took it to the Corner Station, tested it in place, and decided to leave it running. The Brass Manifold Muffler was replaced on this pump.
Pump 3238 was removed from duty and placed on the floor, as it likely needs a new Brass Manifold Muffler like all the others.
I updated the maintenance log.
Documents:
LIGO Document T2100415-v1 Checking Dust Monitor Vacuum Pumps
LIGO Document E1600132-v4 Gast Rotary Vane Vacuum Pump rebuild procedures (note: I have not yet modified this document to denote the replacement of the Brass Manifold Muffler)
LIGO Document Q2200010-v4 Dust Monitor Pump History & Rebuild Maintenance Log
I've adapted Camilla's squeezer measurement script for a measurement that we plan to do early in the morning before the maintenance window.
The test will chop the DARM offset up and down twice, sitting for a minute at each DARM offset. Once it has finished, it should set the DARM offset back to the original value and then set the OM2 ring heater to maximum. The DARM offset step will be an SDF difference that will take us out of observing; after the steps are done (by 5:05 am) we should be able to go back to observing, but we will have to unmonitor or accept the OM2 setting. The script will then wait 2 hours and repeat the DARM offset changes at 7 am.
The script is attached here, and at /ligo/home/sheila.dwyer/OMC/script_to_move_DARM_offset_move_TSAMS.py
To run it: python script_to_move_DARM_offset_move_TSAMS.py -s 1371902418, where the GPS time is the start time. This script is already running on ZOTWS17.
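For reference, a stripped-down sketch of what the script does (the DARM offset channel and step size below are assumptions on my part; only AWC-OM2_TSAMS_POWER_SET comes from this entry):

    import time
    from ezca import Ezca

    ezca = Ezca()                        # IFO prefix from the environment

    OFFSET = 'OMC-READOUT_X0_OFFSET'     # assumed DARM offset channel
    TSAMS = 'AWC-OM2_TSAMS_POWER_SET'    # OM2 TSAMS setting, from this entry
    STEP = 1e-5                          # assumed offset excursion

    def chop(nominal):
        # step the DARM offset up and down twice, dwelling 60 s at each value
        for sign in (+1, -1, +1, -1):
            ezca[OFFSET] = nominal + sign * STEP
            time.sleep(60)
        ezca[OFFSET] = nominal           # restore the original offset

    nominal = ezca[OFFSET]
    chop(nominal)                        # first set, done by ~5:05 am
    ezca[TSAMS] = 2.0                    # assumed TSAMS maximum -> SDF difference
    time.sleep(2 * 3600)
    chop(nominal)                        # second set, a few minutes after 7 am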
For the owl shift operator (Corey), all that you should need to do is wait for the script to run, then when it finishes at around 5:05, accept the SDF difference for H1:AWC-OM2_TSAMS_POWER_SET. At a few minutes after 7 am it will again change the DARM offset and kick us out of observing.
I caused a lockloss from 2W DC readout while testing this script, because the DARM offset value I used there was too extreme. Sorry!
Tagging CDS: please do not restart zotws17 until this script has finished running.
TITLE: 06/26 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Corrective Maintenance
SHIFT SUMMARY:
Lock#1:
Out of Observing at 16:20 for some DETCHAR safety injections, back into Observing at 16:30
The PCALX OFS saturated during the injection at 16:26 and was toggled off and on at 16:29
Out of Observing at 17:16, when we went to NLN_CAL_MEAS to rerun the DetChar safety injections (LHO:70817); back into Observing at 17:26
19:33 Hanford Site emergency monthly test call
20:40 Within a minute, we had a glitch with a corresponding EX saturation, the lights flickered in the control room, both arms' HEPI watchdogs and both ISI stage watchdogs all tripped, and then we lost lock. A small, fast storm was passing over the site at the time. I tried to reset the watchdogs; this worked fine for the ISIs, but the HEPIs retripped after a few minutes. I put everything into SAFE for Team CDS to try restarting the models, to no avail. After calling Jim, we sent Fil and Erik to the Ends to hit the physical reset button for the VFD panels, after seeing that the HEPIs were reporting 0 pressure (H1:HPI-PUMP_EX_PRESS_DIFF_PSI) instead of the nominal 70.
Once the button was pressed, the voltage was recovered manually from the MEDM X/YArm -> HEPI -> Beckhoff Pump Controller screen.
I opened an ndscope to monitor the pressure (H1:HPI-PUMP_EX_PRESS_DIFF_PSI), set the rightmost controller to manual, set the step size to 0.1 V, and stepped fairly quickly up to 1 V, then slowly up to about 2 V, watching the pressure the whole time. Once the pressure reached about 68 PSI, I set the controller back to auto; the automation took over, overshot a bit, then recovered, and we got back to a steady 70.00 +/- 0.03 PSI.
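The stepping above was done by hand through MEDM, but for reference it would look roughly like this scripted with ezca (the setpoint channel name is hypothetical; the pressure channel is the one trended above):

    import time
    from ezca import Ezca

    ezca = Ezca()

    SETPOINT = 'HPI-PUMP_EX_CTRL_VOLTS'      # hypothetical manual setpoint channel
    PRESSURE = 'HPI-PUMP_EX_PRESS_DIFF_PSI'  # differential pressure readback

    ezca[SETPOINT] = 0.0
    while ezca[SETPOINT] < 1.0:              # fairly quickly up to 1 V
        ezca[SETPOINT] += 0.1
        time.sleep(1)
    while ezca[PRESSURE] < 68 and ezca[SETPOINT] < 2.0:  # then slowly toward 2 V
        ezca[SETPOINT] += 0.1
        time.sleep(10)
    # at ~68 PSI, hand the controller back to auto and let it settle at 70 PSI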
Lock#2:
Yarm wasn't terrible, but there was no light on Xarm. I started off by trending the oplevs for ETMX and tried to return to the same positions, which yielded some light, but it wasn't great. After some tapping around, I decided just to do an initial alignment; to recover Xarm, I used the restore script to set it back to when the green arms locked during the last acquisition, and it immediately looked better. Increase Flashes then came on and brought it back.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 14:57 | FAC | Karen | Optics lab | N | Technical cleaning | 15:26 |
| 15:20 | FAC | Karen | MidY | N | Tech clean | 16:36 |
| 16:20 | CAL | Jenne | CR | N | DETCHAR safety injections | 16:33 |
| 16:27 | FAC | Kim | MidX | N | Tech clean | 17:15 |
| 17:17 | CAL | Jenne | CR | N | DetChar inj | 17:29 |
| 20:01 | FAC | Bubba | Overpass, FCES | N | Driving to the FCES | 20:20 |
| 20:06 | FAC | Ken | Mech room | N | Check out some lights | 20:14 |
| 20:50 | VAC | Gerardo & Jordan | EndX | N | Replace a pump | 21:21 |
| 21:03 | FAC | Randy | | N | Move horse trailer | 21:44 |
| 21:08 | PEM | Tony & Mitchell | EndX | N | Replace dust monitor vac pump | 22:08 |
| 21:29 | EE | Erik & Fil | EndY | N | Reset VFD panel | 22:14 |
TITLE: 06/25 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 137Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 11mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.04 μm/s
QUICK SUMMARY:
- H1 is currently going through an IA, recovering from a power glitch
- SEI motion low, DMs ok
- Will commence locking once the IA is complete
All the HEPI and ISI watchdogs at both Ends tripped, and we saw the lights flicker in the control room. No issues on CDS. I tried to untrip them after seeing the values were below the trip points; the ISIs stayed untripped, but the HEPIs retripped after a few minutes.
Attached mainsmon channels for EX show a glitch at 13:39 PDT
Here is a zoomed-in plot of the 3 phases for each building, showing about 5 cycles of the 60 Hz. I think I can convince myself that the CS glitch was not as great as the EX and EY glitches.
Storms were just SW of us at the time of the lights flickering, and although it didn't look crazy, the radar showed more severe thunderstorms in the area than anticipated. A series of them appeared to move east and a bit north within the hour.
Reduced the baffle PD gain in ETMX PD 1 to 0dB from 20dB (H1:AOS-ETMX_BAFFLEPD_1_GAIN), since it was pegged at 10V.
ETMX baffle PDs 2, 3 and 4 are ok and gain at 20dB.
ETMY baffle PD 1 is close to saturation and its gain is 20dB.
ETMY baffle PDs 2 and 3 have a gain of 0dB and show very small signals; they could be increased to 20dB.
ETMY baffle PD 4 saturates at the beginning of a lock. Its gain could be decreased to 0dB.
ITMX baffle PD 1 gain at 0dB, 20dB for all others, signal ok.
The ITMY baffle PD 3 readout seems broken since at least January 2022; maybe related to this upgrade.
All ITMY gains are at 20dB; signals for all but PD 3 seem ok.
The attached trend plot shows all baffle PDs at 75W and 60W input. The transition happens near -5 days in the plot. The power reduction is as high as a factor of 2, but typically much less.
Daniel has also taken it as an action item to work with an operator to ensure that the gain changes are handled nicely by the baffle PD script, since we use PDs 1 and 4 on each test mass to help us align the green beam if we cannot see flashes. Daniel's suggestion is to have the script increase the gain values to what they had been, and then after the script completes, return them to the lower values that they have now. This should allow the PDs to be useful for both the pre-initial alignment, as well as be useful when in full lock.
Changed and restored the baffle PD gain in the baffleAlign.py script (function alignToBafflePD). Needs testing.
Script tested
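For the record, a rough sketch of the raise/restore pattern now in alignToBafflePD; the function structure and channel list here are illustrative (only the ETMX PD 1 gain channel name appears in this entry):

    from ezca import Ezca

    ezca = Ezca()

    # PDs 1 and 4 on each test mass are used for green-beam alignment
    GAIN_CHANS = ['AOS-ETMX_BAFFLEPD_1_GAIN', 'AOS-ETMX_BAFFLEPD_4_GAIN']

    def align_to_baffle_pd(do_alignment):
        saved = {chan: ezca[chan] for chan in GAIN_CHANS}  # remember in-lock gains
        for chan in GAIN_CHANS:
            ezca[chan] = 20          # raise to 20 dB so the green beam is visible
        try:
            do_alignment()           # the existing alignment work
        finally:
            for chan, gain in saved.items():
                ezca[chan] = gain    # return to the low in-lock values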
Since LLO was briefly out of lock, and the DetChar safety injections are way overdue, and those don't need to be done in coincidence with LLO, I ran them.
I used the instructions in 69723 and ran hwinj detchar safety --run. I specifically did not set a GPS time, so that it would just go 10 seconds after I hit Enter, and it automatically avoided beginning on an integer second. Sidd notes that there is a Google doc with instructions, although the script should take care of most of it, including the output switch. The filter modules and gain noted in the first section of that document are the same ones needed for the stochastic injections, so they should already be SDFed on at both sites after last Friday.
The first time I ran this, initialization took a few seconds longer than I think the developers anticipated, so I got a 'Block time is already past' error. I just re-ran it; it initialized faster the second time and worked as expected. Git issue started here.
Once the injections were complete, RyanC noticed that PCALx (used for these injections) had its OFS servo saturating. RyanC toggled it off and on, and it is happy again. I wonder if the amplitudes were tuned at a time when the now-always-on PCALx lines were not yet on. It sounds like the amplitude tuning happened at LLO, and at that time they were well away from the saturation limits, so it could also be that we have a slightly different set of things going on with our PCALx. Git issue started here.
Restarted, but this time in NLN_CAL_MEAS. 1371835071.45 is the approx start time.
This seems to have solved the saturation issue.
GPS time of the first waveform:
first set of injections: 1371831717.24
second set of injections: 1371835081.45
The injection parameters and gracedb id etc can be found here https://git.ligo.org/siddharth.soni/hardware_injections/-/tree/main/data
Attached in below.png is H1:CAL-PCALX_OFS_PD_OUT16, with the injections that were picked up by KleineWelle marked by the vertical lines, for the first set of injections before the calibration lines were turned off. It looks like the PCAL behavior changes right before the injections stop being picked up (they should last more or less the entire duration of the plot). The same channel for the second set of injections (after the calibration lines were turned off) is shown in after.png and looks better behaved. after_omegacan.png also shows that we recover what we expect. We'll add a step to the hwinj instructions to turn off the calibration lines before performing safety injections in the future.
We have updated the Google document to include a check for turning off the PCAL X lines before performing the injections.
I have installed a 'hwinj' application at both sites that handles opportunistic hardware injections for the detchar and stochastic groups:
The command takes two arguments: the injection group (in this case "detchar" or "stochastic") and a specific injection name. The available injection names/waveforms are configured in the hwinj.yaml config file. By default, injections are scheduled for 10 seconds after the command is executed. You need to pass the '--run' option to actually execute the injection (see --help for more info). The detchar injection will be followed by a GraceDB upload, and the script should handle automatically fetching the authentication cert for the user executing it. The '--dry' option can be used to test everything without actually initiating the injection or GraceDB upload.
These injections should happen in nominal low noise. Assuming everything is configured correctly and the DIAG_EXEC guardian nodes are properly tied into the overall guardian IFO status, initiating an injection should take us out of OBSERVING automatically.
The setup currently supports three different injections: one "safety" injection for detchar (~7 min), and a long (30 min) and a short (13 min) injection for the stochastic group. The relevant injection commands would then be:
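(The detchar command below is the one used earlier in this log; the stochastic injection names are my guess at what hwinj.yaml configures.)

    hwinj detchar safety --run
    hwinj stochastic long --run
    hwinj stochastic short --run

Each can be exercised first with '--dry' in place of '--run'.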
We should coordinate with the detchar and stochastic groups to run these injections during this last week of engineering run.
Before performing the Detchar Safety injections, please go through the checks in this google document.
ITMY modes 5/6 damped fine once I engaged the nominal damping settings (IY mode 5 @ -0.04). As the modes rang up, the VIOLIN_DAMPING guardian had set the IY mode 5 gain to the "max_gain" value of 0; we could change this to -0.04 or -0.01. Tagging SUS.
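For reference, applying the nominal setting by hand would be a one-liner; the channel name below follows the usual SUS violin-damping pattern, so verify it before use:

    from ezca import Ezca
    ezca = Ezca()
    ezca['SUS-ITMY_L2_DAMP_MODE5_GAIN'] = -0.04  # nominal gain that damped the mode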