TITLE: 05/03 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 67Mpc
OUTGOING OPERATOR: Travis
CURRENT ENVIRONMENT:
Wind: 8 mph gusts, 6 mph 5-min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.20 μm/s
QUICK SUMMARY: LLO is no longer locked and seems like it may be down for a bit due to severe weather. Meanwhile we have been going for 15 hrs.
TITLE: 05/03 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Locked for the entire shift and for ~15 hours total so far. No issues.
LOG:
22:09 John adjusting temperature at EY from control room.
22:33 LLO is down. Taking advantage of the downtime to run some measurements for Sheila. INJ_TRANS set back to INJECT_SUCCESS. Commissioning mode.
22:35 Betsy to LVEA to take pics.
22:42 Betsy out.
22:42 Giving up on taking Sheila's measurements. I increased the amplitude of the excitation to 10^4 times her suggested value (started at 10^-3, ended at 10). Not entirely sure I am doing it correctly and wanted to play it safe and not break the lock at the end of my shift. Will try again tomorrow if conditions allow.
22:45 Back to Observing.
FAMIS 6896: I do not see any that are particularly elevated.
Outdoor temperatures have risen dramatically today and I noticed that we were on the verge of losing control of temperature in the END Y VEA.
I have reduced the drive signal to the HC3 heater from 12.5 mA to 11.5 mA. I'll continue to monitor.
No issues to report. Lock is 11.5 hours old.
Starting CP3 fill. LLCV enabled, set to manual control, and opened to 50%. Fill completed in 309 seconds. TC B did not register fill. LLCV set back to 18.0% open.
Starting CP4 fill. LLCV enabled, set to manual control, and opened to 70%. Fill completed in 326 seconds. TC A did not register fill. LLCV set back to 42.0% open.
This was a fully automated fill using the new Debian 8 system.
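For reference, a minimal sketch of what an automated fill sequence like the above could look like over EPICS channel access (using pyepics); the PV names, thermocouple threshold, and timeout here are hypothetical, not the actual Debian 8 code:

    import time
    from epics import caget, caput

    LLCV = 'HVE-LY:CP3_LLCV'   # hypothetical PV prefix for the CP3 LLCV
    TC_B = 'HVE-LY:CP3_TCB'    # hypothetical exhaust thermocouple PV

    caput(LLCV + '_ENABLE', 1)        # enable the LLCV
    caput(LLCV + '_MODE', 'MANUAL')   # switch from PID to manual control
    caput(LLCV + '_POSITION', 50.0)   # open to 50% for the fill

    t0 = time.time()
    # Poll the exhaust thermocouple: a sharp temperature drop means LN2 has
    # reached the exhaust and the fill is done (with a 10 minute timeout)
    while caget(TC_B) > -20.0 and time.time() - t0 < 600:
        time.sleep(5)

    print('Fill completed in %.0f seconds' % (time.time() - t0))
    caput(LLCV + '_POSITION', 18.0)   # restore the nominal 18.0% open setting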
TJ Massinger, Derek Davis, Laura Nuttall
Summary: the intervention on April 26th cleaned up the ETMY oplev glitching but the ETMX oplev is still causing problems. Both noise sources have been seen to cause loud background triggers for transient searches when they're glitching.
Since the end station oplev laser glitches have been seen to couple into h(t), we had to develop vetoes based on the BLRMS of the OPLEV_SUM channels to flag and remove these transients from CBC and Burst searches. Using the thresholds that capture the coupling into h(t), we were able to take some long trends of the impact of the oplev glitching using the veto evaluation toolkit (VET).
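As a rough illustration (not the actual detchar pipeline), a BLRMS flag of this kind could be built with gwpy along the following lines; the channel name and the threshold value are assumptions for the ETMY case:

    from gwpy.timeseries import TimeSeries

    # An hour of the oplev sum readout (channel name assumed)
    data = TimeSeries.get('H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ',
                          1176422418, 1176426018)

    # Band-limit to the band that couples into h(t) for ETMY (10-50 Hz),
    # then take a 1 s running RMS to get the BLRMS time series
    blrms = data.bandpass(10, 50).rms(1)

    # Threshold the BLRMS and round to integer-second veto segments
    flag = (blrms > 65).to_dqflag(name='H1:DCH-ETMY_OPLEV_BLRMS_HIGH:1',
                                  round=True)
    print(flag.active)   # segments to hand to the CBC/Burst searches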
ETMY results: https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/detchar/VET/ETMY_OPLEV/1176422418-1177779618/65/
The take-home message from this result page is the Omicron glitchgrams. In the first attached image, every blue point is an Omicron trigger. Those in red are triggers coincident with the ETMY L3 OPLEV SUM having a high 10-50 Hz BLRMS. This population of glitches in the bucket was severely damaging for transient searches, but the problem seems to have gone away since the intervention on April 26th (alog 35798). Looking at the segments where the OPLEV SUM BLRMS was high (attachment 2), we see that there are few times after April 26th when the flag is active, which indicates that there are fewer fluctuations in the OPLEV SUM readout.
ETMX results: https://ldas-jobs.ligo-wa.caltech.edu/~tjmassin/detchar/VET/ETMX_OPLEV/1176422418-1177779618/7_5/
Once again, looking at the Omicron glitchgram (attachment 3), every blue point is an Omicron trigger. Those in red are triggers coincident with the ETMX L3 OPLEV SUM having a high 45-100 Hz BLRMS (chosen to capture the power fluctuations in the oplev). The ETMX oplev glitches aren't quite as damaging as the ETMY oplev glitches, but they're still producing loud background triggers in the transient searches. Looking at the segments where this flag was active (attachment 4), we see that the ETMX oplev laser has been glitching on and off over the last two weeks and coupling into h(t).
I talked to Jason Oberling about this earlier, and it sounds like the same quick fix that worked for ETMY won't work for ETMX, since that laser is already operating near the high end of its power range and I turned up the power on Friday (35887). Jason is working on tuning up the one other laser that he has so it's ready for a swap on Tuesday, but the ITMY oplev laser may also be failing, so the new laser might be needed for that instead. If we need to fix the problem in hardware, the only immediate option we have is to turn off the oplev completely, which will mean that we lose our only independent reference for the optic angle. We're reluctant to lose this, especially since we have had several computer crashes recently.
Is the message of the alog above that the veto is good enough until the laser can be fixed (probably not until the vent)?
Also, although Jason can probably fix this laser in a few weeks, these lasers fail frequently and we probably will continue to have problems like this throughout the rest of the run.
It would be interesting if detchar could also have a look at the ITMs and L1. I'd like to know whether similar glitches in oplev power cause similar glitches in DARM regardless of the test mass and IFO, which should be the case if the coupling is radiation pressure.
Sheila: The OPLEV SUM channels are good witnesses for this, so we can monitor them and veto glitchy times from the searches. The deadtime from the ETMX glitches isn't much: VET shows 0.04% deadtime over the roughly two weeks I ran this for, so it's not damaging to the search in terms of analysis time if we need to continue to veto them.
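As a sanity check on that number, deadtime is just vetoed time over analyzed time; a quick back-of-the-envelope in Python, using the GPS span from the VET pages above (the segment list itself is illustrative):

    # GPS span from the VET runs above (~15.7 days)
    start, end = 1176422418, 1177779618
    total = end - start                       # 1,357,200 s

    # Illustrative veto segments as (start, end) GPS tuples
    veto_segments = [(1176500000, 1176500120), (1177000000, 1177000423)]
    vetoed = sum(t1 - t0 for t0, t1 in veto_segments)

    print('deadtime = %.4f%%' % (100.0 * vetoed / total))
    # 0.04% of this span corresponds to roughly 540 s (~9 min) of vetoed time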
Keita: I'm also curious about the ITMs and L1; they're next up on the list.
Keita: It looks like the glitches in the ITMs typically have a lower SNR than those in the ETMs. I mentioned this in LLO alog 33531, but will attach the relevant figure here as well.
The attached figure shows SNR histograms of Omicron triggers from the LHO test mass OPLEV_SUM channels, with the x-axis restricted to show loud (SNR > 100) triggers. The loudest Omicron triggers in the ITM OPLEV_SUM channels have SNRs of 450 (ITMX) and 550 (ITMY), and they're part of a sparse tail of loud triggers. For the ETMs, the loudest triggers have SNRs of 750 (ETMX) and 5500 (ETMY), and both channels have a larger population of loud glitches (ETMY in particular).
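For anyone who wants to reproduce a figure like this, a minimal sketch of the histogram step (the trigger loading is elided; the SNR arrays here are random stand-ins for the real Omicron triggers):

    import numpy as np
    import matplotlib.pyplot as plt

    # Stand-in SNR arrays; in practice these would be read from the
    # Omicron trigger files for each OPLEV_SUM channel
    rng = np.random.default_rng(0)
    snr_by_channel = {name: rng.pareto(2, 10000) * 20 + 8
                      for name in ('ITMX', 'ITMY', 'ETMX', 'ETMY')}

    bins = np.logspace(2, 4, 50)   # restrict to loud triggers, SNR > 100
    fig, ax = plt.subplots()
    for name, snr in snr_by_channel.items():
        ax.hist(snr[snr > 100], bins=bins, histtype='step', label=name)
    ax.set_xscale('log')
    ax.set_xlabel('Omicron trigger SNR')
    ax.set_ylabel('Number of triggers')
    ax.legend()
    fig.savefig('oplev_sum_snr_hist.png')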
TITLE: 05/03 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing
OUTGOING OPERATOR: Jeff
QUICK SUMMARY: No issues were handed off. Lock is just over 7 hours old.
Shift Summary: Ran A2L check script. Pitch is OK; Yaw is slightly elevated below 20 Hz.
Lockloss – Unknown – No problems or issues at lockloss. Relocked after a quick tweak of ETM-Y for ALS, PR3, and the BS.
After lockloss, A2L Pitch slightly elevated; Yaw up to 0.9. LLO is down, so I dropped out of Observing to run the A2L repair script.
Locked and Observing for the past 6.75 hours. A smooth shift with favorable environmental conditions and a mostly well-behaved interferometer.
Looks like CP4 is too full so I lowered LLCV actuator setting from 43% open to 42% open.
One lock loss early in the shift. Relocked with no problems. Ran A2L repair script to fix Yaw. Environmental conditions are good. No problems or issues to report.
FAMIS #8296: Completed monthly PSL chiller filter inspection. Both filters are clean and in good condition. No debris or discoloration noted. Photos of both filters are attached. Close FAMIS #8296.
TITLE: 05/03 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: Had one scare with a 5.9M earthquake in the Atlantic, but made it through and have been at 65Mpc for 9.5 hrs.
Here is a link to a BRUCO scan that Evan G and I ran today on just a minute of data.
Below 30 Hz, DHARD Y is dominating; we could try adding more aggressive cutoffs, as Brett Shapiro has noted and as we did a few weeks ago for CHARD.
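As a hedged illustration of what a more aggressive cutoff could mean in practice, a steeper low-pass added to the loop rolls off control noise above the band where DHARD Y dominates, at the cost of extra phase lag in band (the corner frequency, order, and design below are purely illustrative, not the filter actually used for CHARD):

    import numpy as np
    from scipy import signal

    # Illustrative 4th-order elliptic low-pass with a 25 Hz corner
    fs = 16384                     # model sample rate (illustrative)
    b, a = signal.ellip(4, 1, 40, 25, btype='low', fs=fs)

    # Inspect the magnitude response around the band of interest
    f, h = signal.freqz(b, a, worN=np.logspace(0, 3, 512), fs=fs)
    for freq, mag in zip(f[::64], 20 * np.log10(np.abs(h[::64]))):
        print('%7.1f Hz  %6.1f dB' % (freq, mag))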
From 30-60 Hz we have coherence with the ETMX oplev sum. Looking at the summary pages, you can see that the ETMX oplev is glitching. Taking a longer coherence measurement, I don't see coherence with the oplev, but this might just be a result of the glitching causing momentary problems in DARM. Maybe we should think about unplugging the oplevs unless we think it will get easier to prevent the glitching.
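For what it's worth, a coherence check like the ones above can be redone along these lines with gwpy (the GPS times and the oplev channel name are illustrative):

    from gwpy.timeseries import TimeSeries

    start, end = 1177900000, 1177900060   # one minute of data (illustrative)
    darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
    oplev = TimeSeries.get('H1:SUS-ETMX_L3_OPLEV_SUM_OUT_DQ', start, end)

    # Resample both onto a common rate so the FFT bins line up, then take
    # the coherence with 8 s FFTs and 50% overlap
    coh = darm.resample(256).coherence(oplev.resample(256),
                                       fftlength=8, overlap=4)
    plot = coh.plot(ylabel='Coherence')
    plot.savefig('darm_etmx_oplev_coherence.png')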
At higher frequencies, our jitter coupling has gotten worse, as noted over the last two weeks (IMC WFS P, PSL bullseye).