After the jump here, we resynchronized the atomic clock with GPS.
The fault codes listed correspond to: 0x16 - reboot alert; 0x07 - CBT signal degradation. So this looks like a reminder that the clock has been running for a long time and is aging.
TITLE: 06/13 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY: Starting Maintenance day. Ryan has taken the SEI_CONF to Maintenance, IFO has stayed locked so far.
Dust monitors, VAC, SUS, SEI, CDS okay
TITLE: 06/13 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY:
Lock #1
Lock #2
Both arms went through increasing flashes; the X arm took a while (~10 mins)
Couldn't catch DRMI or PRMI, and it was clear from AS AIR that something was badly aligned, so I ran an initial alignment. DRMI still struggled a bit despite good flashes and took about 5 minutes to lock.
NLN at 14:51
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 13:10 | CDS | Erik | Remote | N | Restarting NUCs OPSLogin0 | 13:22 |
| 14:24 | FAC | Tyler | MidY | N | Slowly move snorkel lift to MidY | Ongoing |
Lockloss at 13:16 UTC; we were getting hit by a 5.4 from Papua New Guinea, and seismic noise was starting to increase. Verbal did not say whether SEI_CONF went to EQ mode.
Control room workstations were updated and rebooted, including opslogin0 (nomachine). Only operating systems were updated. Conda environments were not affected.
STATE of H1: Observing at 137Mpc
We've been locked for 4:18, in observing for 4:07
We had an EX saturation at 10:19:45UTC
No alarms, 1 very small EQ ~2hrs ago (max of 165 on Peakmon)
TITLE: 06/13 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 132Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 7mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
Taking over from Austin
TITLE: 06/12 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 133Mpc
SHIFT SUMMARY:
- Lock #1:
- Lock #2:
- Had issues with ALS locking, so I redid an IA, during which the X beatnote was poor again, so I reset the crystal frequency to get the beatnote to ~39 MHz; that seemed to fix the issue, and the rest of IA went fine
- Lock #3:
- Leaving H1 to Ryan C. in observing
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 00:31 | PCAL | Tony | PCAL Lab | N | Prep for PCAL measurement | 00:43 |
LOCKLOSS @ 4:35; not seeing a whole lot of ASC movement, but am seeing some motion in the LSC loops - scope attached
Following a lockloss, H1 has now been locked and in observing for ~1:30, all appears nominal.
The following features have been removed from the Vacuum screen on nuc22, found at /opt/rtcds/userapps/release/cds/h1/scripts/fom_startup/nuc22/H0_VAC_SITE_OVERVIEW_CUSTOM.ui and H0_VAC_SITE_OVERVIEW_CUSTOM.adl, upon the request of Gerardo:
NEG pump pressure transducers at the corner station: PT-191, PT-192, and PT-193.
NEG pump pressure transducers at the Y-End station: PT-426 and PT-428.
NEG pump pressure transducer at the X-End station: PT-526.
The corner station "CP1 IP" and "CP2 IP".
The old versions still exist and have been renamed with a .adl.old file extension. The MEDM screen that Sitemap uses lives here: /opt/rtcds/userapps/release/vacuum/h0/medm/Target/H0_VAC_SITE_OVERVIEW_CUSTOM.adl
LOCKLOSS @ 0:07; had an SRM saturation right before the lockloss. Seeing some movement in INP1 P - scope attached
We have had several locklosses recently that can probably be attributed to a CSOFT P ring-up. Starting on 6/8, I looked through all the locklosses and found 6 in total that had a CSOFT P ring-up with the oscillation at 0.45 Hz. Some of these have already been noted in various alogs (shout out to Camilla for pointing them out in alog 70267).
CSOFT P locklosses:
6/8 2:45 UTC
6/8 6:53 UTC
6/10 6:53 UTC
6/10 19:50 UTC
6/11 21:45 UTC
6/12 12:12 UTC
The operating power for each of these locks seems to be about 430 kW in the X arm and 432 kW in the Y arm, ±1 or 2 kW, which is pretty standard, so it does not appear to be power-related (at least lock to lock). The TMS servo signals also look stable; we are staying well centered on the TMS B QPDs and are not very far off center on the A QPDs.
Each of these locks had a different duration, ranging from 2 to 9 hours, and there have been several other locks in this period whose locklosses are not attributed to ASC. Whatever the problem is, it does not appear to be consistent from lock to lock.
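The 0.45 Hz ring-up check described above can be scripted. The sketch below is a hypothetical illustration on synthetic data (the channel access, sample rate, and thresholds are assumptions, not the analysis actually used): it flags a narrow spectral line near 0.45 Hz that stands well above the nearby background in the last few minutes before a lockloss.

```python
import numpy as np
from scipy.signal import welch

def has_ringup(ts, fs, f0=0.45, band=0.1, ratio=10.0):
    """Flag a narrow-band ring-up: compare the PSD near f0 against
    the median PSD level elsewhere in the 0.1-2 Hz band."""
    f, pxx = welch(ts, fs=fs, nperseg=int(60 * fs))  # 60 s segments
    near = np.abs(f - f0) < band / 2
    bg = (f > 0.1) & (f < 2.0) & ~near
    return bool(pxx[near].max() > ratio * np.median(pxx[bg]))

# Synthetic example: 5 minutes of a 16 Hz error-signal-like trace,
# quiet vs. with an exponentially growing 0.45 Hz oscillation.
fs = 16.0
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(0)
quiet = rng.normal(0.0, 1.0, t.size)
ringup = quiet + 5 * np.exp(t / 100 - 3) * np.sin(2 * np.pi * 0.45 * t)
```

With real data, the same function could be applied to the CSOFT P error signal fetched for the minutes preceding each lockloss time in the list above.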
We hadn't had ASC problems for a while before this. These are the main differences from our known-to-be-stable configuration that could affect the ASC: the ring heater change on EX, the 1 W drop in input power, and the new input alignment from the PR3 move.
One thing we could try is increasing the loop gain slightly and seeing if these locklosses persist. Right now the EPICS gain is set to 20. I have been able to increase this gain by up to 3 dB in the past without issues.
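For reference, a 3 dB increase is modest in linear terms. This quick check (assuming the usual 20·log10 amplitude-gain convention for loop gains) shows what that would mean for an EPICS gain currently at 20:

```python
def db_to_linear(db):
    """Convert an amplitude gain change in dB to a linear factor."""
    return 10 ** (db / 20)

# A 3 dB increase on an EPICS gain of 20 is roughly a factor 1.41:
new_gain = 20 * db_to_linear(3)  # ~28.25
```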
TITLE: 06/12 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 133Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 11mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
- IFO is locked and in observing as of 12:41
- CDS/SEI/DMs ok
TITLE: 06/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 133Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 10mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Lockloss at 16:13 UTC
Started trying to relock immediately.
Lockloss at ACQUIRE_DRMI_1F 16:29 UTC.
Starting Initial_Alignment 16:30 UTC.
Initial Alignment went smoothly.
Relocking went smoothly as well.
Arrived at NOMINAL_LOW_NOISE at 17:33 UTC.
Had a strange SDF diff on H1:ISC-RF_Y_AMP71M_OUTPUTMON: it was 20.5 and is now 19.5. Dr. Sigg accepted it.
OBSERVING Reached at 17:46 UTC
Unlocked due to someone clicking an MEDM button they thought was safe to click, but it was not.
See alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=70375
Reached NOMINAL_LOW_NOISE at 21:40 UTC
Sheila and Brina did an excitation test for the gain frequency before the IFO was taken to Observing mode.
Observing 22:06 UTC
Current IFO Status: NOMINAL_LOW_NOISE & OBSERVING
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:17 | FAC | Kim | Mid X | N | Technical Cleaning | 17:30 |
| 16:44 | FAC | Karen | MidY | N | Technical Cleaning | 17:27 |
| 18:06 | FAC | Karen | optic lab | N | Technical Cleaning | 18:20 |
| 21:27 | VAC | Gerardo | MIDX | N | Checking on a Vacuum pump | 21:57 |
Earlier lockloss from an unknown cause.
Lockloss from NLN:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1370621620
I tried trending back a number of channels that seemed to have a large magnitude or an interesting aberration, but when comparing them to live IFO channels, they seem to be channels that usually have larger magnitudes.
Even after the analysis finished, there isn't a well-defined reason for the lockloss that is obvious to me.
This has been kind of hard to pick out, but it definitely seems like the IFO is less robust against earthquakes now than it was before we increased the power into the IMC above 60 W. Basically since then, we haven't been able to stay in NLN through any earthquakes over ~0.5 μm/s, as measured by the peakmon EQ witness channel.
The attached plot shows peakmon vs. lock state trends since Feb 10 of this year. The blue trace is the peakmon ground velocity minute trend; each green point indicates where the IFO was still locked 2 minutes after a local maximum in ground velocity (found using the MATLAB peakfinder routine with some prominence and time-separation requirements). The two marked points indicate where IMC power was increased above 60 W (X = 79730 minutes after Feb 10) and the start of the run (X = 148255 minutes after Feb 10). Before the power-up, we had multiple locks survive multiple 500 nm/s EQs, a few around 1 μm/s, and one at almost 3 μm/s. After the power-up, the IFO doesn't ride out any EQs over 500 nm/s, and we have basically not survived any notable EQs since the start of the run.
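The peak-finding step described above (done here with MATLAB's peakfinder) has a close Python analogue in scipy.signal.find_peaks. The sketch below runs on synthetic peakmon-like minute-trend data; the prominence and separation values are illustrative assumptions, not the parameters used for the plot:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic minute-trend "peakmon" ground velocity (nm/s): a low
# uniform background with a few earthquake spikes injected.
rng = np.random.default_rng(1)
velocity = rng.uniform(20, 60, 10_000)
for idx, amp in [(1200, 900), (4500, 520), (8100, 2800)]:
    velocity[idx] = amp

# Require prominence so background wiggles are ignored, and a
# minimum separation (in minutes) so one event isn't counted twice.
peaks, props = find_peaks(velocity, prominence=300, distance=120)
eq_times = peaks           # minutes after the start of the trend
eq_amps = velocity[peaks]  # peak ground velocity, nm/s
```

Each detected peak time could then be compared against the lock-state channel 2 minutes later to produce the green points in the plot.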
This is further reinforced by TJ's alog from earlier today, when the SEI_CS transition knocked us out of OBSERVE. Unless SEI_CS was dropped from the exclude list recently, I suspect we never noticed because we were losing lock before the seismic system switched.
There have been no changes to the SEI eq controls. I looked at the small eq Camilla noted this morning and didn't see anything suspicious in the seismic systems, but I'll keep digging.
I will also add that ~0.5 μm/s EQ-band velocity is a level the primary microseism can hit for a week or two at a time during the winter; those conditions were already challenging for us in prior runs.
Another possible culprit from around the time of April 7: removing 12 dB of low-frequency (< 1 Hz) gain from the Michelson loop to try to reduce the amount of sensor noise reinjection in the GW band (LHO:68432).
It would be a relatively simple test to (1) increase the EPICS gain of the loop by 3 dB to place the UGF back around 10 Hz, and then (2) re-engage the 4:1 Hz boost (LSC-MICH2 FM3). If the noise increase in DARM is acceptably low, perhaps you could run like this for a week to see if the duty cycle improves.
If this extra gain helps during EQs, it could also be made part of the EQ guardian response.
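To make the proposed test above concrete: if the Michelson open-loop gain falls off roughly as 1/f near its unity-gain frequency (an assumption for illustration; the actual loop shape should be checked against the measured transfer function), then a +3 dB gain change moves the UGF by a factor of about 1.41, consistent with restoring a ~7 Hz UGF back toward 10 Hz:

```python
def ugf_after_gain_change(f_ugf, delta_db, slope=-1.0):
    """Shifted unity-gain frequency when the loop gain changes by
    delta_db, assuming |G(f)| ~ f**slope near the UGF."""
    factor = 10 ** (delta_db / 20)      # linear gain factor
    return f_ugf * factor ** (-1.0 / slope)

# With a ~7 Hz UGF and a 1/f slope, +3 dB lands near 10 Hz:
shifted = ugf_after_gain_change(7.0, 3.0)  # ~9.9 Hz
```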
Brina, Sheila,
We ran some excitations while locked; the plots still need review (will come back to this alog to update).