Kyle, Chandra
These will provide a means to mount 55 l/s ion pumps to the floor and eventually to connect them to the vent/purge valves of these two 80K pumps (once the ECR is approved, that is!)
All four turbine flow sensors for heads 1-4 were replaced with new ones. The signals from all four sensors look okay, but the one from head 3 is the noisiest. Some air bubbles were seen in the flow sensors but appeared to have worked their way out by the time the oscillator lid was put back on. Going by the froth in the crystal chiller spout, we might still have some air bubbles in the system. A large copper-coloured metal chip, approximately 2 mm x 2 mm, came out of the head 1 flow sensor. The vortex flow sensors for the power meter and front-end laser were re-attached and released in Beckhoff - they were previously force-written. If problems come up again, we can force-write them again. The bypass valve is fully open. Jason / Peter
Given the colour of the chip, it likely came from one of the following:
i. the pump light block
ii. the ASE block, or
iii. the pump light monitoring block
Maintenance:
15:15 - Chris and Joe to EY - done
15:15 - Christina and Karen to EX - done
15:20 - Fil and Alfredo to CER to pull cables - done
15:20 - SEI_CONF set to SC_OFF_NOBRSXY which is: sensor correction off, BRSX and BRSY set to be not in use
15:21 - Hugh to EY - done
15:30 - Christina and Karen to MX, then EY - done
15:31 - Travis to EX - done
16:00 - Chris and Joe to EX then MX - done
16:10 - Kiwamu helped me diagnose the state of H1 because the Down script did not complete - done
16:20 - Hugh leaving EY heading to EX - done
16:21 - Ryan restarted alog - done
16:25 - Bubba to LVEA to check 3IFO - done
16:27 - Jonathan finished restarting the DMT login box - done
16:37 - Kyle to LVEA to install bolts - done
16:41 - Richard finished working on the ITMY oplev; it came back and damped as expected - done
16:42 - Bubba and John at the exit gate which isn't working
16:45 - Karen leaving EY - done
16:50 - Betsy to LVEA for 3IFO - done
16:56 - Richard - back to LVEA - changing to laser hazard
17:00 - LVEA is laser hazard
17:10 - Betsy and Nutsinee to LVEA for TCS clipping check
17:15 - Tour of high school students
17:45 - Travis back from EY
17:46 - Travis filled the crystal chiller while Peter and Jason restarted it after swapping flow sensors
18:06 - PSL chiller back on
18:07 - Kyle and Chandra out of LVEA
18:30 - Kyle to EY
19:06 - Kyle back from EY
This morning Verbal Alarms raised two bogus hardware injection alarms. The code that checks these alarms is monitoring h1calcs channels instead of h1calex. I've opened FRS ticket 7626.
Moved the ITMY Oplev power from the PEM chassis to the OpLev supply at the top of the rack between BSC1 and BSC2. This is per Filiberto's design.
I have added notes to the procedure outside the PSL to account for the AC units needing to be off. We now turn off the breakers while in science and turn them on before entry into the enclosure. Please note the additional sheet.
Hoping to reduce the lowest frequency noise of the T240, I added 3/4" thick (radial) pipe insulation to the three platform legs. See the photos for before and after with a shot of the insulation too. The legs are 2" diameter and 12ish" long. Don't think much thicker insulation could be used until we do something different with the C-clamps holding the table legs to the base platform.
Cheryl, Hugh, & Krishna (on phone)
I ignored it while doing other things, but Cheryl noticed that the BRS was not damping down after my insulating incursion and, more importantly, that the BRS BOXBIT (a health indicator seen in H1:IOP-SEI_EY_MADC0_EPICS_CH30) had zeroed after typically running at 8192.
The problem was the ethernet cable on the Beckhoff box internal to the BRS enclosure. The box is barely accessible and this cable is on the blind back side. I was able to pull the cable out without releasing any locking tab, but I could not push it back in so that it clicked/locked in place. After pushing hard (actually, pulling blind toward me), it now seems connected (although again, not locked in place), and the BOXBIT is good and the damping worked.
For operators: if we ISI guys walk away and say everything is okay and you see that the signals are still rung up after an hour, it isn't going to damp and something needs attention. You tell us to get it together and fix it! Thanks!
Jim: watch the drift mon, it may hit -15k before it returns on the warm-up!
Lowered CP3 actuator setting to 14% open after Dewar was filled this morning.
TITLE: 03/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.26 μm/s
Secondary useism: 0.22 μm/s
QUICK SUMMARY:
TITLE: 03/14 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
INCOMING OPERATOR: Cheryl
SHIFT SUMMARY: Quiet shift.
LOG: Nothing to report. Not even a peep from a PI.
TITLE: 03/14 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 66Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
Nice quiet shift with H1 riding through a 6.0 India quake. H1 has been locked for over 24hrs. A 5.9 Indonesia quake was supposed to shake us at 6:53 UTC, but we have not seen evidence of it locally yet (as of 7:00 UTC).
TITLE: 03/14 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 68Mpc
OUTGOING OPERATOR: Jeff
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 5mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
Overall useism trends are slowly getting quieter over the last 24hrs & we have low winds as well (it's currently raining out).
QUICK SUMMARY:
Bartlett handed off an OBSERVING H1 (currently over 17.5hr of lock).
Noticed a User Message on the IFO Guardian node (which is part of the Intent Bit): a flag about an SPM Diff for the HAM2 HEPI (Time Ramp values of 30 sec [vs 0 sec] for the horizontal & vertical channels). These could probably be taken back to 0 sec, but I don't want to do that while in OBSERVING.
Shift Summary: Ran A2L DTT check – Pitch is OK, Yaw is a bit elevated. The tumbleweed balers are starting to clear the Y-Arm today so the N2 delivery to CP-7 can go ahead on Tuesday.
There is a vehicle stuck in the sand approximately 500 meters down from End-X, near the beam tube. Bubba is investigating.
Good Observing shift. Environment is good. No issues to report.
Activity Log: Time - UTC (PT)
Around 10:30 AM local time, while inspecting the arm roads for tumbleweeds, John and I found a GSA vehicle stuck ~500 meters south of the Y End station and ~50 meters east of the access road. I have tracked the vehicle by license plate number to the fleet manager, and she is contacting the manager that the vehicle is assigned to. I told her that in the future anyone entering the site should 1. enter by the gate and not from the desert, and 2. notify the control room when on site. I also asked that the vehicle be removed tomorrow during our maintenance period.
The manager of GWS has contacted me and agreed to remove the vehicle tomorrow during our maintenance period. He indicated that they did not think anyone was on site yesterday, which is why they chose to enter from the desert. I informed him that there is someone here 24/7 and to please inform the control room of their intentions henceforth.
GWS = Ground Water Services, i.e. well testing.
I distinctly remember the X ARM.....
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 2192 seconds. TC B did not register fill. LLCV set back to 19.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 2276 seconds. TC A did not register fill. LLCV set back to 41.0% open.
Increased each by 1%. CP3 now 20% open and CP4 42% open.
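For context, a minimal sketch of what one of these manual overfill sequences could look like in code, assuming pyepics for channel access. The channel names, the TC threshold, and the timeout are invented for illustration and do not match the real CP3/CP4 database:

```python
# Hypothetical sketch of the manual LLCV fill sequence logged above.
# Channel names are invented for illustration; the real CP3/CP4 LLCV
# and thermocouple channels differ. Uses pyepics (caget/caput).
import time
from epics import caget, caput

def manual_fill(cp, fill_pct, nominal_pct, tc_channel, timeout=3600):
    """Open the LLCV to fill_pct, wait for the fill thermocouple to go
    cold (or time out), then restore the nominal valve position."""
    llcv = f"HVE-{cp}:LLCV_POS"    # hypothetical position channel
    mode = f"HVE-{cp}:LLCV_MODE"   # hypothetical manual/auto switch
    caput(mode, "MANUAL")
    caput(llcv, fill_pct)
    t0 = time.time()
    filled = False
    while time.time() - t0 < timeout:
        # A sharp temperature drop on the overflow TC indicates LN2
        # overflow, i.e. the pump is full. Threshold is assumed.
        if caget(tc_channel) < -100.0:
            filled = True
            break
        time.sleep(10)
    caput(llcv, nominal_pct)       # restore the nominal setting
    print(f"{cp}: fill {'completed' if filled else 'NOT registered'} "
          f"in {time.time() - t0:.0f} s")

# e.g. the CP3 fill logged above: 50% open during the fill, 19% after
# manual_fill("CP3", 50, 19, "HVE-CP3:TC_B")
```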
Miriam, Hunter, Andy

A subset of blip glitches appear to be due to a glitch in the ETMY L2 coil driver chain. We measured the transfer function from the ETMY L2 MASTER channel to the NOISEMON channel (specifically, for the LR quadrant) and used it to subtract the drive signal out of the noisemon, so that what remains would be any glitches in the coil drive chain itself (and not just feedback from DARM). The subtraction works very well, as seen in plot 1, with the noise floor a factor of 100 below the signal from about 4 to 800 Hz.

We identified some blip glitches from Feb 11 and 12 as well as Mar 6 and 7. Some of the Omega scans of the raw noisemon signals looked suspicious, so we performed the subtraction. The noisemons seem to have an analog saturation limit at +/- 22,000 counts, so we looked for cases where the noisemon signal is clearly below this. In some cases, nothing was seen in the noisemon after subtraction, or what remained was small and seemed like it might be due to a soft saturation or nonlinearity in the noisemon. However, we have identified at least three times where there is a strong residual; these are the second through fourth plots. We now plan to automate this process to look at many more blip glitches and check all test mass L2 coils in all quadrants (a sketch of the subtraction is given after the times below).
In case someone wants to know, the times we report here are:
1170833873.5
1170934017
1170975288.38
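A minimal sketch of the subtraction described in this entry, assuming gwpy for data access and a CSD/PSD estimate of the drive-to-noisemon transfer function. The DQ channel names, analysis parameters, and frequency-domain projection are assumptions, not the authors' actual code:

```python
import numpy as np
import scipy.signal as sig
from gwpy.timeseries import TimeSeries

# Assumed DQ channel names for the LR quadrant discussed above.
DRIVE_CH = "H1:SUS-ETMY_L2_MASTER_OUT_LR_DQ"
MON_CH   = "H1:SUS-ETMY_L2_NOISEMON_LR_DQ"

def measure_tf(drive, mon, fs, nperseg=16384):
    """Estimate the drive->noisemon transfer function as CSD/PSD
    over a quiet stretch of data."""
    f, p_dd = sig.welch(drive, fs, nperseg=nperseg)
    _, p_dm = sig.csd(drive, mon, fs, nperseg=nperseg)
    return f, p_dm / p_dd

def subtract_drive(drive, mon, f_tf, tf, fs):
    """Propagate the drive through the measured TF and subtract it
    from the noisemon; any residual is noise the drive cannot explain,
    i.e. a candidate glitch in the coil driver chain itself."""
    n = len(drive)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    tf_interp = (np.interp(freqs, f_tf, tf.real)
                 + 1j * np.interp(freqs, f_tf, tf.imag))
    predicted = np.fft.irfft(np.fft.rfft(drive) * tf_interp, n)
    return mon - predicted

def residual_at(t_quiet, t_glitch, dur=64):
    """Measure the TF on quiet data, then subtract around a glitch."""
    drive_q = TimeSeries.get(DRIVE_CH, t_quiet, t_quiet + dur)
    mon_q   = TimeSeries.get(MON_CH, t_quiet, t_quiet + dur)
    fs = drive_q.sample_rate.value
    f_tf, tf = measure_tf(drive_q.value, mon_q.value, fs)
    drive_g = TimeSeries.get(DRIVE_CH, t_glitch - dur / 2, t_glitch + dur / 2)
    mon_g   = TimeSeries.get(MON_CH, t_glitch - dur / 2, t_glitch + dur / 2)
    return subtract_drive(drive_g.value, mon_g.value, f_tf, tf, fs)

# e.g. one of the times reported above:
# res = residual_at(1170830000, 1170833873.5)
```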
I have noticed glitches with a similar cause on March 10, in particular for the highest SNR Omicron glitch of the day.
Looking at the OmegaScan of this glitch in H(t) and then at the highest SNR coincident channels, these are all the quadrants of H1:SUS-ETMY_L2_NOISEMON.
Hi Borja,
Could you point us to the link to those omega scans? I would like to see the time series plots to check whether the noisemon channels are saturating (we have seen that they sometimes look like that in the spectrogram when the channel saturates).
I am also going to look into the blip glitches I got for March 10 to see if I find more of those (although I won't have glitches with such high SNR as the one you posted).
Thanks!
Hi Miriam,
The above OmegaScan can be found here.
Also, I noticed that yesterday the highest New SNR glitch for the whole day reported by PyCBC Live 'Short' is of this type as well. The OmegaScan for this one can be found here.
Hope it helps!
Hi Miriam, Borja,
While following up on a GraceDB trigger, I looked at several glitches from March 1 which seem to match those that Borja posted. The omegascans are here, in case these are also of interest to you.
Hi,
Borja, in the first omega scan you sent, the noisemon channels are indeed saturated. In that case it is difficult to tell whether that is the reason the spectrogram looks like that or whether there might indeed be a glitch in the coil drive. Once Andy has a more final version of his code, we can check on that. In the second omega scan, the noisemon channels look just like the blip glitch looks in the calib_strain channel, which means the blip was probably already in the DARM loop and the noisemon channels are just hearing it. Notice also that, besides PyCBC_Live 'Short', we have a version of PyCBC_Live dedicated specifically to finding blip glitches (see aLog 34257), so at some point we will be looking into times coming from there (I will keep in mind to look into the March 10 list).
Paul, those omega scans do not quite look like what we are looking for. We did look into some blip glitches where the noisemon channels looked like what you sent, and we did not find any evidence of glitches in the coil drive. But thanks for your omega scans; I will check those times once Andy has a final version of the subtraction code.
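As a footnote to the saturation discussion, a small hedged sketch of the kind of screen one could run before trusting a residual, using the approximate +/- 22,000-count analog limit quoted in the main entry. The channel name and the gwpy fetch are assumptions:

```python
# Flag noisemon segments that approach the analog rail, so saturated
# stretches are not mistaken for coil-driver glitches after subtraction.
from gwpy.timeseries import TimeSeries

SAT_LIMIT = 22000   # approximate analog saturation, from the main entry

def is_saturated(channel, t0, dur=4, margin=0.95):
    """True if the raw noisemon gets within `margin` of the rail."""
    data = TimeSeries.get(channel, t0, t0 + dur)
    return abs(data.value).max() > margin * SAT_LIMIT

# e.g. for the LR quadrant around one of the reported times:
# is_saturated("H1:SUS-ETMY_L2_NOISEMON_LR_DQ", 1170833873.5)
```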