Not sure if anyone has already caught this. Switching the ETMX blend to Quiet_90 on Dec 14th caused glitches to appear around the 20 Hz region (Fig 1, starting at 20:00:00 UTC), while switching both ETMX and ETMY to Quiet_90 everywhere caused glitches to appear around the 10 Hz and 30 Hz regions (Fig 2, starting at 9:00:00 UTC). Wind speed has been low (<5 mph) and the useism (0.03-0.1 Hz) has been around 0.4 um/s. The BNS range has been glitchy since the blend was switched, but the lock has been relatively more stable. The question is, do we want clean data but a constant risk of losing lock when the tidal rings up, or slightly glitchy data but a relatively more stable interferometer?
Tried switching ETMX X to 45 mHz again. Looking good so far.
After talking to Mike on the phone we decided to try switching both ETMs back to the 45 mHz blend. I'm doing this slowly, one dof at a time. Things got better momentarily when I switched ETMX X to the 45 mHz blend, but soon tidal and CSOFT started running away. I had to leave ETMX X at 90 mHz. Out of Observing from 11:58:07 - 12:11:02 UTC.
And the tidal is back... I switched ETMX X to 90mHz. 45 mHz is used everywhere else.
Switching to the 90 mHz blends resulted in the DARM residual becoming dominated by the microseism. The attachment shows the residual before and after the blend switch on the 14th; the rms increases from 5×10^-14 m to 8×10^-14 m.
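For reference, here is a minimal MATLAB sketch of how an rms number like the ones above can be read off a spectrum. It assumes f and asd already hold the calibrated DARM residual's amplitude spectral density [m/rtHz] on an ascending frequency vector [Hz]; the variable names are placeholders, not channels or files from this entry:

% integrate the PSD from the top frequency downward to get the rms accumulated from above
psd    = asd.^2;
binpow = 0.5*(psd(1:end-1) + psd(2:end)).*diff(f);   % trapezoidal power per bin [m^2]
cumrms = sqrt(flipud(cumsum(flipud(binpow(:)))));    % rms from each frequency up to f(end) [m]
loglog(f(1:end-1), cumrms); grid on
xlabel('Frequency [Hz]'); ylabel('rms accumulated from above [m]')

With the 90 mHz blends, most of the step in this curve lands in the 0.1-0.3 Hz microseism band, which is what "dominated by the microseism" means quantitatively.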
As a first test in eliminating this nonstationarity, we should try engaging a boost to reduce the microseism contribution to DARM.
The other length loops (PRCL, MICH, SRCL) are not microseism dominated.
Similar to DARM, the dHard residuals are microseism-dominated and could also stand to be boosted, although this would require some care to make sure that the loops remain stable.
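To make the "boost" suggestion concrete, here is a hedged sketch of the kind of filter meant: a zero/pole pair that adds low-frequency gain and rolls off to unity above the microseism band, checked against a toy loop. The corner frequencies and the open-loop model G0 below are stand-ins for illustration only, not the real DARM or dHard loop shapes (MATLAB Control System Toolbox):

f_z = 0.5;  f_p = 0.05;                    % assumed boost corners [Hz]
boost = zpk(-2*pi*f_z, -2*pi*f_p, 1);      % ~20 dB extra gain at DC, ~unity above 0.5 Hz
G0 = zpk([], 0, 2*pi*40);                  % toy 1/f open loop with a ~40 Hz UGF
[gm0, pm0] = margin(G0);                   % margins without the boost
[gm1, pm1] = margin(G0*boost);             % margins with the boost engaged
fprintf('phase margin: %.1f deg -> %.1f deg\n', pm0, pm1)

With a DARM-like ~40 Hz UGF the boost costs well under a degree of phase; with a few-Hz UGF like the dHard loops, the same corners cost of order 10 degrees, which is why the stability check matters there.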
[Also, the whitening filter for the calibrated DARM residual is misnamed; the actual filter is two zeros at 0.1 Hz and two poles at 100 Hz, but the filter name said 1^2:100^2. I've changed the foton file to fix this, so it should be reloaded on next lock loss.]
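As a quick sanity sketch of the difference the name change represents (MATLAB, with an arbitrary unity-gain-at-high-frequency normalization; this is just the zpk shape, not the actual foton design):

p       = -2*pi*[100 100];
H_inst  = zpk(-2*pi*[0.1 0.1], p, 1);   % what is actually loaded: zeros at 0.1 Hz
H_named = zpk(-2*pi*[1   1  ], p, 1);   % what the old name 1^2:100^2 implied
bodemag(H_inst, H_named, {2*pi*1e-2, 2*pi*1e3}); grid on
legend('installed 0.1^2:100^2', 'old name 1^2:100^2')

The two shapes differ by up to 40 dB at low frequency, so the name matters to anyone undoing the whitening on the calibrated residual.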
Tidal was running away. Switched the 45mHz blend to 90mHz and saved the lock. Out of Observing from 09:00:12 - 09:01:43 UTC. ETMY blend remains unchanged.
TITLE: 12/16 OWL Shift 08:00-16:00UTC (00:00-08:00 PST), all times posted in UTC
Out going Ops: Corey
Quick Summary: Useism still near the 50th percentile but slowly increasing. Wind speed ~10-20 mph. Tidal fluctuation comes and goes. Using the Quiet_90 blend on ETMX Y and ETMY X; the rest is on 45 mHz.
TITLE: 12/15 EVE Shift: 00:00-08:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: NLN
Incoming Operator: Nutsinee
Support: Jenne as Maintenance Re-Locking Commissioner
Quick Summary:
Not sure what caused the lockloss. DRMI locked up in about 11 min (just before I was going to try PRMI). Everything went smoothly through the Guardian steps, except when I had to engage the 2nd ISS loop by hand; I think I was not quite at zero, and this generated an OMC DCPD saturation (which produced huge dips on many signals on the StripTool... but we managed to stay locked!). Waited for the range to get up to around 80 Mpc, and then took H1 to Observing.
All is going well. Useism looks like it has been increasing slightly over the last 24 hrs. Thought the winds were going to die down, but they jumped up to around 10 mph.
Doug at LLO called to inform me they are out of Observing mode due to A2L measurements and will need to stay out to bring back their TCSy laser (which will most likely break lock).
Notice that there is some redness on the ODC Mini Overview window on nuc1 (some of this is related to the BS ISI Fully Isolated change).
Range is at a nice 80Mpc.
After being down 12.5+ hrs, H1 is back to Observing. For the last 1-2 hrs, the Bounce & Roll modes had to be carefully addressed.
There were a few SDFs which needed to be ACCEPTED (namely, the ETMX & ETMY BLENDs... we are basically still running with the 45 mHz blend state [with useism at 0.5 um/s]).
Jenne ran an A2L measurement just prior to going to Observing.
TITLE: 12/15 EVE Shift: 00:00-08:00UTC (16:00-00:00PDT), all times posted in UTC
STATE of H1: Locking & recovering from Maintenance Day
Outgoing Operator: Jim
Quick Summary:
TITLE: 12/15 Day Shift 16:00-24:00UTC
STATE Of H1: Re-locking
SUPPORT: Typical CR crowd
END-OF-SHIFT SUMMARY: Busy Tuesday.
Activity:
J. Kissel

Gathered the weekly charge measurements today, and -- because we're getting close to the (prior) threshold for flipping the ESD bias sign on ETMY, and we weren't sure whether we were seeing a convincing pattern / trend of charge accumulation / slow-down (see LHO aLOG 23866) -- gathered the longitudinal actuation strength as well. The messages:
- ETMX = still fine. No worries there.
- ETMY =
  - The ETMY UR quadrant, measured in Yaw, has now surpassed -30 [V] effective bias voltage according to the optical lever measurements. This was the level at which we'd flipped the bias last time, when 3 or 4 out of 8 measurements had shown ~ -30 [V] (see LHO aLOG 22561 and LHO aLOG 22135).
  - All other quadrants are only at roughly -10 [V] effective bias, including the Pitch measurement for the same quadrant.
  - This quadrant (as measured by Yaw) is definitely charging faster than the other quadrants (again, as measured by Yaw), and the relative actuation strength in Yaw is already back to +7% of the reference time.
  - The longitudinal relative actuation strength has not changed as much; it has only trended to 5%.

It is still difficult to say whether there is a significant decrease in the rate of strength change (let's call it the deceleration of actuation strength). By eye, you could easily convince yourself that the rate of change is decreasing, but if you gather data points in the longitudinal strength spaced ~two weeks apart, the rate is inconsistently jumping around 0.7 - 1% of change per 2 weeks, or 0.4 - 0.5% per week. There's 1 month left in the run, or 4 weeks. So, if 5-6% of relative actuation strength discrepancy from the reference time is our threshold (like it was for the last flip), then we'll need to do the flip before the end of the run (see the back-of-envelope sketch below).

As usual, you can ask "well, what's LLO doing?" Their test mass stage actuation strength discrepancy w.r.t. their reference time is also at roughly 5%. They had made noise about fixing it (not by flipping the bias, but by creating a new reference model) two-three weeks ago when they were first investigating their 10 [Mpc] drop problems (e.g. LLO aLOG 23488), but they've not pulled the trigger since (which I prefer!!). However -- though Stuart hasn't yet made the same L, P, and Y comparison -- my impression is that their actuation strength acceleration / deceleration is lower than ours.

In summary, we're still in a (dark) grey area for making the call on whether it's worth the 8-hour down-time of flipping the bias, especially with respect to getting another round of full actuation strength measurements a la LHO aLOG 21280. Of course, we *could* try to gather both at the same time, but that starts to get really ambitious -- and we've already seen that when pressed for time, we make mistakes (e.g. LHO aLOG 21005). Further complicating the issue are the holidays -- such measurements require Kiwamu and me to be present for the full time, and the holidays make that cross-section lower than normal (this is why I suggested to Mike and Brian that we either do it this week or wait until the end of the run).
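A hedged back-of-envelope on that projection, using only the rates quoted above (MATLAB):

current    = 5.0;                          % [%] longitudinal discrepancy from the reference time today
rate       = [0.4 0.5];                    % [%/week] observed spread in the rate of change
weeks_left = 4;                            % ~1 month left in the run
projected  = current + rate*weeks_left     % -> 6.6 to 7 %, i.e. past a 5-6 % threshold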
----------------------------
Some details on how to gather the test mass longitudinal actuation strength data from the LDAS/DMT tool "SLM" (Standard Line Monitor):

1) Gather the ascii dump of the raw data (which merely calculates the amplitude of each calibration line) from the SLM Tool's webpage. These are available / updated once a day (https://ldas-jobs.ligo-wa.caltech.edu/~gmendell/pcalmon/daily-pcalmonNavigation.html), but for these studies you want the monthly collection, which is available here (https://ldas-jobs.ligo-wa.caltech.edu/~gmendell/pcalmon/monthly-pcalmonNavigation.html). These should be saved to /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Measurements/CAL_PARAM/ (simply because the CalSVN is far more accessible in the control room and on laptops than Greg's home folder on the LDAS clusters).

2) Analyze the raw data using /ligo/svncommon/CalSVN/aligocalibration/trunk/Projects/PhotonCalibrator/scripts/SLMTool/SLMData_analysis.m, for which you should only need to change the name of the parameter file that is called on Line 6 (right under the comment that says "Load Parameter File"). Of course, you should create a new parameter file (there are plenty of examples in that same folder, called parameter_*.m). This will save a new .mat file into /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/ (with the name specified in your SLMData_analysis parameter file) that covers the duration of data that is new.

3) Concatenate earlier results with the new results you've gathered. This, regrettably, is still a rather clunky, by-hand process. For example, today I had to take the previously generated .mat file,
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-12-01_Sep-Oct-Nov_ALLKappas.mat,
load it along with my newly generated data from thus far in December,
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/2015-12-15_H1_Kappas_Nov30_to_Dec15_2015.mat,
concatenate each variable (e.g. kappa_TST = [file1.kappa_TST(:); file2.kappa_TST(:)]), and then save it as a new file,
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/2015-12-15_H1_ALLKappas_Sep10_to_Dec15.mat
(see the sketch below). Since the results from the SLM tool are only available in at-most month-long chunks, this has to be done every new month. Note that, for the current month, even though the link suggests only data up to the first of the month is available, all data up to 00:00 UTC of the current day of the month is available.

To compare these new longitudinal results against charge measurements, I ran the usual analysis instructions (see aWiki) to completion, ending with /ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/Scripts/Long_Trend.m, but then by hand saved select variables from the workspace,
save([dataDir chargeDataFile],'DayDate','WeightedMeanPitch','AppliedBias','VariancePitch','WeightedMeanYaw','VarianceYaw','limit')
where I've defined dataDir and chargeDataFile in the workspace beforehand (also by hand) to point to /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Results/CAL_PARAM/2015-12-15_H1SUSETMY_ChargeMeasResults.mat. Then I run the (sadly still scripted, and copied from date to date when I have a new time I want this run) /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O1/H1/Scripts/CAL_PARAM/compare_chargevskappaTST_20151215.m, which produces the last plot attached, as long as all of the input file names are changed correctly to the latest date.
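The by-hand concatenation in step 3, as a minimal MATLAB sketch using the file names from the example above (paths abbreviated); kappa_TST is the only variable shown, and any other variables in the .mat files (time stamps, the other kappas) would be stacked the same way:

file1 = load('2015-12-01_Sep-Oct-Nov_ALLKappas.mat');
file2 = load('2015-12-15_H1_Kappas_Nov30_to_Dec15_2015.mat');
kappa_TST = [file1.kappa_TST(:); file2.kappa_TST(:)];              % repeat for each variable in the files
save('2015-12-15_H1_ALLKappas_Sep10_to_Dec15.mat', 'kappa_TST');   % add the other variables to the save list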
FRS 4077 The GraceDB monitoring process ext_alert failed during the GraceDB maintenance this morning, and Monit stopped trying to restart it. I modified the Monit configuration to NOT time out when GraceDB is down; instead, Monit will continue attempting to restart ext_alert indefinitely.
Following aLog 24208, all LHO ISI platforms were restarted. Most platforms deisolated nicely, but HAM2 decidedly did not. New safe.snaps were captured for all platforms. This captured the FF paths being off at startup and the GS13s being in low gain. Guardian was adjusted for HAM2 to disable the GS13 gain switching.
All snaps and isi/h1 guardians were committed to the svn.
The HAM ISIs were restarted to capture a code correction that clears saturation immediately upon request. The BSCs got this fix as well.
Also, since HAM2 & 3 do not tolerate GS13 gain switching via guardian, that feature, while available, is disabled. So, upon FE restart, the GS13s will be in low gain and the safe.snap SDF will be happy. But, under OBSERVE.snap, the GS13s in low gain will be red. These will need to be switched via the SEI Commands scripts.
Attached is a comparison of the GS13s with stage 2 isolated (blue) vs damped (red).
We still haven't seen the impact on DARM, but I've added this to the ISC_LOCK guardian, similar to the way it is done at LLO. This means I also changed the nominal state for SEI_BS to FULLY_ISOLATED.
This morning we investigated the problem with the lighting controller at EndX. Water dripping from a pipe had entered the cabinet, causing a fault. We verified no further damage was done, replaced the relay that faulted with an unused one in the same cabinet, and restored the system. The cabinet is dry and the system is working.
J. Oberling, E. Merilh
Today we swapped the glitching BS oplev laser with one that has been stabilized. Old laser SN 130-1, new laser SN 129-2. This laser will thermally stabilize over the next 4-6 hours or so. Once this is done, I will evaluate it to see if we need any small power tweaks to achieve stable operation (the thermal environment in the LVEA differs from the lab the laser was stabilized in).
We also adjusted the whitening gain at the BS whitening chassis to suit the new laser. The old SUM counts were ~75k with 15 dB of whitening gain. After adjustment the new SUM counts are ~34k with a whitening gain of 9 dB; this is safer as we won't cause saturation if all the light happens to fall on one quadrant of the QPD. A picture of the Output Configuration Switch is attached and these changes are captured here: T1500556-v2.
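For the record, a hedged back-of-envelope on those numbers, assuming the whitening gain is a voltage gain in dB so counts scale as 10^(dB/20) (MATLAB):

old_sum = 75e3;  old_dB = 15;
new_sum = 34e3;  new_dB = 9;
old_raw = old_sum/10^(old_dB/20)   % ~13.3k counts referred to 0 dB
new_raw = new_sum/10^(new_dB/20)   % ~12.1k counts referred to 0 dB, i.e. similar optical power on the QPD
headroom = old_sum/new_sum         % ~2.2x more margin before saturating if all light lands on one quadrant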
This completes WP #5651.
Increased the power by ~5%. New SUM count reading ~33.8k. Will see if this clears the glitching. Continuing to monitor...
WP--5649, ECR E1500456, II 1167. See aLog 23853 and SEI log 887 and 890.
In preparation for updating the ISI per above, I'm updating our local model and src areas:
hugh.radkins@opsws1:models 0$ svn st -u
? gnd_sts_library_NEW.mdl
* 12022 isi2stagemaster.mdl
Status against revision: 12272
hugh.radkins@opsws1:models 0$ svn up isi2stagemaster.mdl
U isi2stagemaster.mdl
Updated to revision 12272.
hugh.radkins@opsws1:src 0$ svn st -u
* 11156 WD_SATCOUNT.c
? ISIWD_GPS_veryOLD.c
* BLENDMASTER_ST1.c
* 10437 .
Status against revision: 12272
hugh.radkins@opsws1:src 0$ svn up
U WD_SATCOUNT.c
A BLENDMASTER_ST1.c
Updated to revision 12272.
hugh.radkins@opsws1:bscisi 0$ svn up
U ISI_CUST_CHAMBER_ST1_BLEND_ALL.adl
Updated to revision 12272.
hugh.radkins@opsws1:bscisi 0$
I've successfully rebuilt all the ISI platforms. Will wait until tomorrow for install & restart
FYI: The SATCOUNT.c update was a bug fix that applies to all ISI platforms; this is why the HAMs were also rebuilt and restarted. The fix makes ISI saturations (seen on the watchdog screen) clear immediately. An added benefit for the BSC ISIs is that those platforms will no longer trip on stale T240 saturations. All good.