TITLE: 12/11 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 72.7965Mpc
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
Lock is going on 31+hrs with the range drifting slightly down from 75 to 70Mpc over the last 8-10hrs. useism has sort of flattened out just under the 90th percentile.
LOG:
Cheryl observed an instance of H1 being dropped out of OBSERVING due to SDF changes tracked down to the Computer/SDF Node: sysecaty1plc2
(This was for the Yarm. We noticed this same issue a week ago for the analogous Computer/Node for the Xarm: sysecatx1plc2.)
I continued the work of figuring out which pesky channel is dropping us out of OBSERVING. The first thing I did was look at the (3) channels Keita found for X last week and see if the Y-arm counterparts changed today---found nothing in dataviewer. I then ran the scripts Cheryl ran and came up with the same result of seeing a change with the channel H1:FEC-1031_SDF_DIFF_CNT. But this is just a name for a channel SDF uses.
I then just went to where sysecaty1plc2 is on medm. This is related to Beckhoff, so maybe the channel can be tracked down by snooping around medm land. To get to a baseline/starting point, I went to:
SITE MAP / SYS / EtherCAT overview / H1 Y1 PLC2 /
From here you have several different subsystems (Als, Asc, Isc, Lsc, Sys). So, I went through all of these subsystems and the screens nested within them. The first thing I did was to find the "*_ERROR_FLAG" status box for each subsystem (it's green for all, and I reckon if there was a change to the system, it would go red). So I grabbed this channel for all the subsystems mentioned above, and the only one which changed when we dropped from OBSERVING was the Als one. I then played the same game--go into the nested windows within and trend "*_ERROR_FLAG" channels for each component within Als. Ultimately, I ended up finding a single channel which had activity around the time in question. It was found here:
SITE MAP / SYS / EtherCAT overview / H1 Y1 PLC2 / Als / Y / Fibr / Lock / Temperaturecontrols (i.e. H1ALS_Y1PLC2_Y_FIBR_LOCK_TEMPERATURECONTROLS.adl)
And on this medm, the channel in question is: H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON
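For reference, this kind of *_ERROR_FLAG sweep could also be scripted instead of trended by hand in dataviewer. This is only a minimal sketch assuming gwpy/NDS2 access; the flag channel names below are illustrative guesses (read the real ones off the MEDM screens), and the GPS time is the one used with the lockloss tool later in this log:

from gwpy.timeseries import TimeSeries

# GPS time of the OBSERVING drop (placeholder; same value used with the lockloss tool below)
DROP_GPS = 1165432802
START, END = DROP_GPS - 60, DROP_GPS + 60

# Guessed Beckhoff error-flag channel names for each H1 Y1 PLC2 subsystem;
# substitute the actual names shown on the MEDM screens.
error_flags = [
    'H1:SYS-ETHERCAT_Y1PLC2_ALS_ERROR_FLAG',
    'H1:SYS-ETHERCAT_Y1PLC2_ASC_ERROR_FLAG',
    'H1:SYS-ETHERCAT_Y1PLC2_ISC_ERROR_FLAG',
    'H1:SYS-ETHERCAT_Y1PLC2_LSC_ERROR_FLAG',
    'H1:SYS-ETHERCAT_Y1PLC2_SYS_ERROR_FLAG',
]

for chan in error_flags:
    data = TimeSeries.get(chan, START, END)   # fetch from NDS2
    if data.value.min() != data.value.max():
        print(chan, 'CHANGED around the drop')
    else:
        print(chan, 'steady at', data.value[0])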
I'm not saying this is the ONLY channel which could be the culprit for the OBSERVING drop, but this is the one I saw drop out at that time (see attachment#1). BUT there is a caveat: if I look at 20min before the drop, the ALS channel in question had some similar drop outs (see attachment#2). For the earlier one, the drops only lasted about 10sec (attachment#3). For the drops which took us out of OBSERVING (attachment#1), we dropped out of OBSERVING after about 15sec of drops (& overall the ALS ON switch went off/on for about 40sec). So maybe the SDF changes have to persist for a certain amount of time before latching us out of OBSERVING?
As another check, I looked at the last 12hrs of this lock, and the only times H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON had these fits of turning OFF for a handful of seconds were in that 20min period when we dropped out.
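Here is roughly how that 12-hour check could be automated (again a sketch assuming gwpy/NDS2 access; the end GPS time is a placeholder), listing every OFF interval and its duration so the "needs to persist ~15sec" idea above can be tested:

import numpy as np
from gwpy.timeseries import TimeSeries

CHAN = 'H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON'
END = 1165432802            # placeholder GPS near the end of the stretch trended
START = END - 12 * 3600     # 12 hours earlier

data = TimeSeries.get(CHAN, START, END)
dt = 1.0 / data.sample_rate.value

# 1 wherever the switch reads OFF, 0 where it reads ON
off = (data.value == 0).astype(int)
# Pad with zeros so a diff marks every OFF segment's start (+1) and end (-1)
trans = np.diff(np.r_[0, off, 0])
off_starts = np.flatnonzero(trans == 1)
off_ends = np.flatnonzero(trans == -1)

for i0, i1 in zip(off_starts, off_ends):
    print('OFF at GPS %.0f for %.1f s' % (data.times.value[i0], (i1 - i0) * dt))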
Question: Is this enough to warrant NOT MONITORING H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON? Or should we keep searching?
WP 6385 I installed Conlog on the conlog-master and conlog-replica machines. I am going to leave it stopped and not acquiring data until I finalize the channel list with Dave. The work permit should be left open.
Keita made a template to passively measure the Quality of A2L for H1 & he said it would be good to look at this over a long period (alog#32106). We are currently on a 25+ Hour Lock. During this lock:
Before running the 2nd A2L, I ran Keita's template once an hour during the lock. I saw that many hours (~16hrs) after the 1st A2L, the DARM & ASC Coherence for PITCH began to increase again, and then I ran the second A2L. (I mainly focused on PITCH because it was the DOF which showed the most change.)
After both A2Ls, I didn't really notice an improvement in range. I did notice a better-looking DARM spectrum (on nuc3).
Question: If the DARM & ASC Coherence increases like this, do we want to run the A2L? Is this the right thing to do? Or do we just let the Coherence increase?
Attached is a look at the PITCH coherence for every hour during this lock (with A2L moments marked).
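This is not Keita's template itself (that lives in DTT), but a rough offline equivalent of the hourly coherence check, assuming gwpy and scipy are available; the ASC channel here is just an illustrative pitch loop, not necessarily the one the template uses:

from gwpy.timeseries import TimeSeries
from scipy.signal import coherence

DARM = 'H1:GDS-CALIB_STRAIN'
ASC = 'H1:ASC-DHARD_P_OUT_DQ'       # example pitch channel; swap in the template's channels

start = 1165432802 - 3600            # placeholder: one hour of data
darm = TimeSeries.get(DARM, start, start + 3600)
asc = TimeSeries.get(ASC, start, start + 3600)

# Downsample both to a common rate before estimating coherence
fs = min(darm.sample_rate.value, asc.sample_rate.value)
darm, asc = darm.resample(fs), asc.resample(fs)

# 16-second FFT segments; print frequencies with appreciable coherence
f, coh = coherence(darm.value, asc.value, fs=fs, nperseg=int(16 * fs))
for fi, ci in zip(f, coh):
    if ci > 0.5:
        print('%7.2f Hz  coherence %.2f' % (fi, ci))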
Timeline for A2L on current 24+hr lock:
SDF Channel File Changed
Also wanted to take this time to look at the channels involved with the SDF node which dropped Cheryl out of Observing during her shift. (Do we really want to be dropped out of OBSERVING for looking at some SDF channel files??)
OBSERVATORY_MODE
Took it to "CALIBRATION" for these 4mins of downtime from OBSERVING
TITLE: 12/11 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 74.5957Mpc
OUTGOING OPERATOR: Cheryl
CURRENT ENVIRONMENT:
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.47 μm/s
QUICK SUMMARY:
Can see a slight increase in the "secondary useism" over the last 12hrs.
Looking at Violin Mode peaks on DARM, we have:
LiyuanZ, PeterK, BetsyW, AlenaA, RickS (with support from EddieS, CalumT, StephenA, DennisC, GarilynnB, MalikR, et al.)
Some time ago, LLO removed PMC SN08 from operation due to a glitchy PZT.
We recently removed and replaced the PZT and the curved mirror it actuates, using an original-style PZT ordered by Pking and a spare mirror from the original PMC mirrors (supposedly) provided by BennoW.
We characterized the losses in the cavity using a setup in the LHO "Triples Lab" (upper floor of Staging Building) that utilizes an NPRO and three Pcal-style integrating spheres and associated photodetectors (see LIGO-T1600204-v3).
We made some improvements to our measurement setup that preclude direct comparisons of the estimated losses before and after replacing the M4 mirror, but we estimate that replacing this one mirror reduced the total round-trip losses by about a factor of two (investigations of other highly contaminated PMCs indicate that the PZT is the source of the contaminants and that the mirror bonded to the PZT is the most contaminated).
Our current best estimate of the average losses per mirror for this cavity is about 60 ppm (see attached table).
We were surprised to find that the transmitted light level through M4 is about 40 times smaller than through M3. The spec for the M3 and M4 transmission is 60 ppm, and we calculate the M3 transmission to be about 65 ppm, but the M4 transmission of only 1.6 ppm is a mystery. It doesn't appear to have come from the same coating run, as we had expected. However, discussions with DanielS indicated that this might be acceptable. The M4 transmitted light is used for the ISS path and is typically attenuated by about a factor of 100 on the PSL table.
We also tested a new concept for fabricating the PMCs that relies on machining tolerances for setting the orientation of the four cavity mirrors and eliminates all gluing from the assembly. Two original PMC bodies were re-machined at a local machine shop in a single setup, with the hope of achieving relative accuracy at the level of 5 micrometers between the points on which the cavity mirrors register.
We assembled both "all-bolted" prototypes under a clean bench by mounting the mirrors against three balls that register at the bottoms of counterbores in the aluminum bodies and holding the mirrors (and the sandwiched PZT) in place using off-the-shelf SS flexures (see attached photos). We used mirrors recently procured by PeterK from ATF.
We discovered that there was an error in the coating of the new PMC flat mirrors; the transmission is only 2,400 ppm when it was supposed to be 24,000 ppm. Thus the cavity finesse is 10x higher than desired. While this won't work for the PSL, it aids in measuring the mirror losses. The results of two measurements are tabulated in the attached table for the S/N10 body. The average losses per mirror are estimated to be about 11 ppm. We have not measured the losses for the other "all-bolted" cavity yet.
These measurements confirm the ability to machine the bodies to the required tolerances. We will test the PZT performance as best we can in our lab setup when time allows.
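Going back to the coating error above: as a sanity check on the "10x higher finesse" statement, here is a back-of-the-envelope estimate using my own numbers and the usual approximation finesse ~ 2*pi / (total round-trip loss) for a ring cavity coupled through its two flat mirrors (an illustration, not part of our measurement):

import math

T_flat_spec = 24000e-6       # intended flat-mirror (coupler) transmission
T_flat_actual = 2400e-6      # transmission as delivered
other_losses = 4 * 11e-6     # ~11 ppm average loss per mirror, from the attached table

for label, T in [('spec', T_flat_spec), ('as-coated', T_flat_actual)]:
    round_trip = 2 * T + other_losses        # two coupling mirrors per round trip
    print('%s: finesse ~ %.0f' % (label, 2 * math.pi / round_trip))

This gives roughly 130 vs 1300, consistent with the factor of ~10 quoted above.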
TITLE: 12/10 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 73.9225Mpc
INCOMING OPERATOR: Corey
Assistance:
SHIFT SUMMARY:
Corey suggested looking at DIAG_SDF log, and there is activity that coincides with H1 going out of Observe:
19:19:45UTC - H1 out of Observe, and DIAG_SDF shows:
Now, how do I know what "USERMSG 0: DIFFS: sysecaty1plc2: 1" is?
Keita's alog 32134 - instructions on how to look for channels that changed
My bad - while investigating, I looked at SDF and kicked H1 out of Observe:
DIAG_SDF log:
From Keita:
I took
/opt/rtcds/lho/h1/target/h1sysecaty1plc2sdf/h1sysecaty1plc2sdfepics/OBSERVE.snap
and stripped unnecessary information, split it into 20-line chunks, and
put them here:
/ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2
Could you again run the lockloss tool by
for ii in ecaty1plc2*; do lockloss -c ${ii} plot -w '[-10,10]' gpstime; done
This morning (Monday Dec 12) I ran the lockloss script and I can see that H1:ALS-Y_FIBR_LOCK_TEMPERATURECONTROLS_ON was flipping (see attached, second column from the left). Other things like LASER_HEAD_CRYSTALFREQUENCY, CRYSTALTEMPERATURE and VCO_TUNEOFS were also changing, but these were not monitored.
Anyway, it's strange that this was not found when Cheryl and Corey ran lockloss tool. Maybe NDS2 was misbehaving?
Just to make sure, what I did is:
cd /ligo/home/keita.kawabe/LockLoss/SDFERRORS/h1sysecaty1plc2
for ii in ecaty1plc2_a*; do lockloss -c ${ii} plot -w '[-10, 10]' 1165432802; done
H1 has been locked 15+ hours. I ran some dataviewer trends looking at 2 hours and found some optic alignment changes that I think are interesting, and some that might show up in DARM.
TITLE: 12/10 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 70.6121Mpc
OUTGOING OPERATOR: Jim
CURRENT ENVIRONMENT:
Wind: 19mph Gusts, 14mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.38 μm/s
QUICK SUMMARY:
TITLE: 12/10 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
Once I sat down at H1, I was able to get to NLN with no issues. We've had a handful of EY saturations during this lock. Seismic trends are steady & winds are getting quiet.
LOG:
Forgot to note that while handing off to Jim last night, he did have to make some quick adjustments to MODE28 (in the 8-9 UTC hour).
With the ASC IMC model now running at 16384 Hz, we look at the coherence of jitter as measured by the IMC WFS and other channels up to 7.4 kHz. Not sure we can conclude anything except that pointing errors contaminate everything.
We can compare this with an older 900-Hz bandwidth measurement from alog 31631 which was taken before the piezo peak fix (alog 31974).
Note that the 1084Hz thing doesn't have coherence with the IMC WFS.
Can you check the DC sum channels for the IMC WFS as well? They are the ones that hVeto keeps finding as related to the 1080 Hz noise, and they see a modulation in the noise rather than a steady spectrum.
Done; again nothing for the bump in question, though there are coherence bumps for f>1100Hz and f<800Hz.
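For completeness, a hedged sketch of how the DC-sum check could be reproduced offline (gwpy + scipy assumed; the WFS channel name is a guess and should be read off the IMC WFS MEDM screen), reporting the peak DARM coherence in a band around the 1080 Hz bump:

from gwpy.timeseries import TimeSeries
from scipy.signal import coherence

DARM = 'H1:GDS-CALIB_STRAIN'
WFS_DC = 'H1:IMC-WFS_A_DC_SUM_OUT_DQ'    # guessed channel name

start = 1165432802                        # placeholder GPS time during a lock
darm = TimeSeries.get(DARM, start, start + 600)
wfs = TimeSeries.get(WFS_DC, start, start + 600)

# Downsample to the lower of the two rates so the arrays line up
fs = min(darm.sample_rate.value, wfs.sample_rate.value)
darm, wfs = darm.resample(fs), wfs.resample(fs)

f, coh = coherence(darm.value, wfs.value, fs=fs, nperseg=int(4 * fs))
band = (f > 1050) & (f < 1110)
if band.any():
    print('peak coherence in 1050-1110 Hz: %.2f' % coh[band].max())
else:
    print('WFS channel rate too low to reach the 1080 Hz band')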