EPICS variables that originate from the RCG don't seem to be updating this morning, although ones that originate in Beckhoff are updating. I don't see anything wrong on the CDS overview except the timing system errors, which have been there since Tuesday.
On the GDS screens, the GPS times are all stuck at slightly different times around Sep 03 2017 11:42:43 UTC (so far I have seen times within about 10 seconds of each other, with all models on the same IOP stopped at the same time).
We have had what looks like many nearby EQs over the last 16 hours.
Today I ran some charge measurements and worked on the script for analyzing and plotting. My script for transitioning DARM control to ETMX in low noise had a bug which caused a lockloss. After this I grounded the ETMY ESD cables and tried to relock, but spent ~40 minutes on violin modes. While I was damping violins we got a 5.3 in Idaho which tripped several ISIs; everything is re-isolated now, but I'm leaving the SEI state as Large EQ. The next person to try locking should be careful about violins.
ETMY ESD is re-connected for now, I hope to do the grounding test tomorrow.
S Dwyer, J Kissel, T Vo
After running absorption measurements and adjusting the initial alignment once again, we were able to get past the CARM_5PM stage by adjusting the CARM gain manually. Then Sheila adjusted the references for alignment to make re-locking easier.
We've returned to Nominal Low Noise, but there is no appreciable change in the sensitivity from before the discharging at X-End. Sheila and Jeff took in-lock charge measurements and BSC-ISI to DARM coupling measurements, respectively; they will post results in a separate aLOG.
Sheila, Daniel, TVo
Executive Summary
It doesn't look like the TMDS caused any extra absorption on ETMX.
Following my aLOG-38476, where I alluded to the possibility of increased loss due to absorption as a reason we had trouble locking last night, we wanted to try to measure the loss in each arm and compare the two arms to each other. This is done by looking at the reflected power and comparing the locked and unlocked levels. In the attached time series, the first drop is the X-ARM locking, the second drop is the Y-ARM locking.
For this measurement, we configured the IFO as such:
- The arm cavity of interest was first locked on ALS to get aligned well (it would be mis-aligned during the actual measurement)
- The other arm cavity was misaligned
- Then we locked a single arm with Guardian (ALIGN_IFO)
- We also turned on the DC Centering on the AS WFS in order to maximize their power.
- Then to get a decent dip in the reflected power off the arm cavity, we increased the input power from the PSL from 2 Watts to 25 Watts.
Below is the summary of results:
XARM
| Channel | Locked Power (Cts) | Unlocked Power (Cts) | Visibility | Abs PPM (Calc'd) |
| LSC-REFLAIR_B_LF | 0.0114 | 0.0124 | 92% | 343 |
| LSC-ASCAIR_B_LF | 0.0423 | 0.0462 | 91% | 357 |
| ASC-AS_A_DC_NSUM | 4241 | 4454 | 95% | 220 |
YARM
| Channel | Locked Power (Cts) | Unlocked Power (Cts) | Visibility | Abs PPM (Calc'd) |
| LSC-REFLAIR_B_LF | 0.0114 | 0.0124 | 92% | 479 |
| LSC-ASCAIR_B_LF | 0.0417 | 0.0461 | 91% | 409 |
| ASC-AS_A_DC_NSUM | 4265 | 4458 | 95% | 209 |
Some of the sensors at the AS port didn't give us good results when we locked and unlocked, but it's not fully understood why. The total loss is a combination of mode matching, alignment, etc., and these were not taken into account.
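For reference, here is a minimal sketch (in Python) of the kind of arithmetic behind the Visibility and "Abs PPM (Calc'd)" columns, assuming a simple over-coupled two-mirror cavity with an ITM power transmission of roughly 1.4% and perfect mode matching. This is only an illustration of the method; it will not reproduce the table values exactly, since the actual calculation may use different parameters and the caveats above (mode matching, alignment) are not included.

    # Illustrative only: estimate the arm round-trip loss from the reflected-power
    # dip, assuming an over-coupled cavity, R_res ~ ((T - L)/(T + L))^2, and
    # perfect mode matching. T_ITM here is an assumed value.
    import math

    T_ITM = 0.0148  # assumed ITM power transmission (~1.4%)

    def visibility_and_loss(p_locked, p_unlocked, t_itm=T_ITM):
        ratio = p_locked / p_unlocked            # "Visibility" column (locked/unlocked)
        r = math.sqrt(ratio)
        loss = t_itm * (1.0 - r) / (1.0 + r)     # inferred round-trip loss
        return ratio, loss * 1e6                 # loss in ppm

    # X-ARM LSC-REFLAIR_B_LF numbers from the table above:
    vis, loss_ppm = visibility_and_loss(0.0114, 0.0124)
    print(f"visibility = {vis:.0%}, estimated loss ~ {loss_ppm:.0f} ppm")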
J. Kissel, S. Dwyer
We're getting verbal alarms about timing errors constantly because the end station GPS receivers are in the wrong configuration (see LHO aLOG 38439). To save our sanity until the GPS receiver configurations are fixed, we've commented out the TIMING error function from the verbal alarms script by excluding it from the all_tests list defined on line 1253 of
/opt/rtcds/userapps/release/sys/h1/scripts/VerbalAlarms/VerbalAlarms.py
This should be put back in once the GPS receiver configuration is fixed.
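For the record, the change is of the kind sketched below; this is a hypothetical illustration only, and the actual test names and structure in VerbalAlarms.py may differ.

    # Hypothetical sketch of the edit described above -- not the actual contents
    # of VerbalAlarms.py, whose test names and structure may differ.
    def test_ifo_state():
        pass            # placeholder for one of the other alarm checks

    def test_timing():
        pass            # placeholder for the TIMING error check

    all_tests = [
        test_ifo_state,
        # test_timing,  # excluded until the end-station GPS receiver config is fixed
    ]

    for test in all_tests:   # the alarm loop only runs the tests left in the list
        test()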
It seems that our trouble locking last night was due to a change in the initial alignment references. We are now locked at DC readout with a recycling gain of just over 30.
In order to lock I changed the TR_CARM offset (sqrt(TRX+TRY), each normalized to single arm power) at which we transition to the REFL PD from -41 to -35. This allowed us to lock with a recycling gain of just over 20. After that I moved the ITMs by hand with all the ASC except the soft loops on, and we could recover a good recycling gain with the old TMS offsets. That means that the problem was just our initial alignment references.
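As an aside, here is a minimal sketch of the TR_CARM quantity described in the parenthetical above (assumed form, for illustration only; the front-end normalization details and the sign convention of the quoted offset are not captured here).

    # Sketch of the TR_CARM quantity described above: sqrt(TRX + TRY), with each
    # arm transmission normalized to its single-arm locked power. The offset
    # change above (-41 to -35) refers to the setpoint of this quantity at which
    # control is handed off to the REFL PD.
    import math

    def tr_carm(trx, try_counts, trx_single_arm, try_single_arm):
        trx_norm = trx / trx_single_arm          # TRX normalized to single-arm power
        try_norm = try_counts / try_single_arm   # TRY normalized to single-arm power
        return math.sqrt(trx_norm + try_norm)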
At DC readout, I used the scripts described in 37033 to reset the green alignment references, although I had trouble getting the Yarm to lock in green. Screenshot shows the changes in settings.
TITLE: 09/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Commissioning
SHIFT SUMMARY: efforts to recover NLN continue
LOG: activities today
Summary: It floats!
Previous update here: 38330
On Thursday afternoon, Travis helped me pull the wires and get them clamped. The one issue we had was two screws failing and almost breaking off. Neither of us thought that we were over torquing them, but it is possible. Luckily we stopped as soon as we felt them giving so they were still partially attached and we could just back them out slowly. I forgot to get a picture of these but they are still sitting on the table and I'll grab one next week.
Today I went in to release and balance the suspension and was able to do so successfully. It took a fair amount of poking, sliding, and exchanging masses, but eventually I used 19.65 kg excluding the screws.
Attached are a few pictures of the wires and the floating suspension
And what else floats?
Apples! Bread! Very small rocks!
A duck!
J. Kissel
Even though we no longer suspect there's anything wrong with the ETMX test mass after discharging on Tuesday and Wednesday (aLOGs with convincing refuting evidence to come from Sheila and Thomas; see claims of suspicion in LHO aLOG 38476), I've gathered some digital red camera images from the camera archive,

    /ligo/data/camera/archive/2017/09/01/
        H1 ETMX (h1cam25)_2017-09-01-22-04-39.tiff
    /ligo/data/camera/archive/2017/08/29/
        H1 ETMX (h1cam25)_2017-08-29-02-02-08.tiff

imported them into matlab (I'm still using the workstation's default version 8.0.0.783 [R2012b]),

    [dcreadout_0829,~] = imread([imageDir{1} 'H1 ETMX (h1cam25)_2017-08-29-02-02-08.tiff']);
    [dcreadout_0901,~] = imread([imageDir{2} 'H1 ETMX (h1cam25)_2017-09-01-22-04-39.tiff']);

subtracted the images,

    difference_ETMX = (dcreadout_0829 - dcreadout_0901);

and plotted them, e.g.

    figure(1); imagesc(dcreadout_0829); colormap(gray);

just to see if there were any major changes in point scatters because of the TMDS use. There is no major change in point scatters as a result of the use of the TMDS system on ETMX.
Script to import and plot the images lives here:
/ligo/home/jeffrey.kissel/2017-09-01/
plot_H1ETMX_Camera_Images_20170901.m
but is attached for future convenience.
Pump down curve attached. Corey helped me move the TMDS stuff from the X-end to the Y-end today. I'll file a WP today for September 4th (Monday-Thursday) TMDS of ETMy.
BRSX looks stable. BRSY shows a re-centering by Jim W.
[ colour code reminder: GREEN = front end restart, BLUE = daq restart, PURPLE = unexpected or unanticipated restart, RED = crash ]
model restarts logged for Thu 31/Aug/2017 - Wed 30/Aug/2017: No restarts reported
model restarts logged for Tue 29/Aug/2017
2017_08_29 10:01 h1iopsusey
2017_08_29 10:01 h1susetmy
2017_08_29 10:01 h1susetmypi
2017_08_29 10:01 h1sustmsy
2017_08_29 10:13 h1hpietmy
2017_08_29 10:13 h1iopiscey
2017_08_29 10:13 h1iopseiey
2017_08_29 10:13 h1pemey
2017_08_29 10:14 h1alsey
2017_08_29 10:14 h1caley
2017_08_29 10:14 h1iscey
2017_08_29 10:14 h1isietmy
2017_08_29 13:18 h1hpiham1
2017_08_29 13:18 h1hpiham6
2017_08_29 13:18 h1iopseih16
2017_08_29 13:18 h1isiham6
2017_08_29 16:38 h1broadcast0
2017_08_29 16:38 h1dc0
2017_08_29 16:38 h1fw0
2017_08_29 16:38 h1fw1
2017_08_29 16:38 h1fw2
2017_08_29 16:38 h1nds0
2017_08_29 16:38 h1nds1
2017_08_29 16:38 h1tw1
Maintenance day. BIOS work on h1susey, unexpected restarts of h1seisy and h1iscey. Unexpected restart of h1seih16. DAQ restart for Beckhoff ecatc1 upgrades.
model restarts logged for Mon 28/Aug/2017
2017_08_28 11:53 h1susitmy
2017_08_28 12:00 h1broadcast0
2017_08_28 12:00 h1dc0
2017_08_28 12:02 h1fw0
2017_08_28 12:02 h1fw1
2017_08_28 12:02 h1fw2
2017_08_28 12:02 h1nds0
2017_08_28 12:02 h1nds1
2017_08_28 12:02 h1tw1
SUS model fixing with associated DAQ restart
model restarts logged for Sun 27/Aug/2017
2017_08_27 16:16 h1lsc
2017_08_27 16:18 h1omc
2017_08_27 16:18 h1susitmx
2017_08_27 16:20 h1susitmy
2017_08_27 16:23 h1broadcast0
2017_08_27 16:23 h1dc0
2017_08_27 16:23 h1fw0
2017_08_27 16:23 h1fw1
2017_08_27 16:23 h1fw2
2017_08_27 16:23 h1nds0
2017_08_27 16:23 h1nds1
2017_08_27 16:23 h1tw1
New ITM-SUS, LSC and OMC code. Associated DAQ restart.
model restarts logged for Sat 26/Aug/2017: No restarts reported
ETMY and ITMY both seem to be either near or out of spec.
As can be seen from the attached 2-hour trend, both ITMy and ETMx have been in both the Aligned and Misaligned states recently, likely due to ongoing commissioning activities. Both are now Aligned, as a locking attempt is being made, and the pitch and yaw values of the oplevs are well within spec.
Added 100ml to crystal chiller.
We performed more FINESSE simulations on the AS72 scheme, focusing on how the signal changes when we switch to the new SRM (T_srm=0.37 -> 0.32), and on possible solutions if we are sitting in the ill-conditioned regions.
Conclusions:
1. If the AS72 sensing matrix measured now (with differential wavefront distortion; T_srm=0.37) behaves well, the sensing should be fine after we replace the SRM.
2. If the sensing matrix for AS72 is close to being degenerate, after replacing the SRM the sensing is likely to be even worse.
In case the sensing matrix is bad (e.g. we are sitting at an AS port Gouy phase of 45-75 deg in the plots), possible solutions are:
i. Increase the SRC one-way Gouy phase to ~20 deg (e.g. with the SR3 ring heater; the current nominal SRC Gouy phase is ~18 deg). This should be the BEST solution. For more details, please see 37222.
ii. Use ASA_72Q and ASB_36Q for SRM/BS sensing. This scheme works only for a very narrow (AS port Gouy phase + diff lens) space.
iii. Use ASA_72Q and (ASA_72I - ff x ASA_DCQPD), where ff is set to decouple the spot centering (a sketch of this combination follows below). This should cover the AS port Gouy phase space corresponding to 45-70 deg in the plots.
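Here is a minimal sketch of option (iii), assuming the coefficient ff is determined from a measured spot-centering response; the numbers below are hypothetical placeholders, not measured values.

    # Sketch of the combined sensor ASA_72I - ff * ASA_DCQPD, with ff chosen so
    # that a pure spot-centering motion produces zero signal. All numbers are
    # hypothetical placeholders for illustration.
    def decoupled_as72i(asa_72i, asa_dcqpd, ff):
        return asa_72i - ff * asa_dcqpd

    # If a spot-centering excitation produces responses d_72i in ASA_72I and
    # d_dcqpd in the AS DC QPD, then ff = d_72i / d_dcqpd nulls that direction:
    d_72i, d_dcqpd = 0.8, 2.0                    # hypothetical responses
    ff = d_72i / d_dcqpd
    print(decoupled_as72i(d_72i, d_dcqpd, ff))   # ~0 for the centering direction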
Sheila, Jeff, Corey, TVo
After opening the gate valve at ETMX, we started locking to measure the charge in-lock using Sheila's script in aLOG-38387
However, we had trouble getting past the CARM_10PM state, and Sheila noticed that the alignment might be bad because the ratio of REFL power locked to unlocked was too low and the PRC gain was lower than normal as well. Even after running initial alignment twice, the same problem persisted, so we're thinking there may be some extra absorption in the arm cavities; we will follow up tomorrow.
For the record, Sheila referenced LHO aLOG 36439, which documents symptoms of poor alignment during the CARM reduction causing lock losses. However, after several initial alignment attempts, and some by-hand tuning of the alignment, the power recycling gain did not improve; hence this investigation into arm losses.
h1boot locked up around 04:40 PDT. Sheila is rebooting it.
h1boot is back, front ends look good. Sheila will try some testpoints and excitations.
Here are h1boot's system messages from early this morning; the last message before the freeze-up was an ntpd status change at approximately the time of the freeze. The next message is the reboot at 09:39:08.
Sep 3 01:19:48 h1boot -- MARK --
Sep 3 01:39:48 h1boot -- MARK --
Sep 3 01:59:48 h1boot -- MARK --
Sep 3 02:19:48 h1boot -- MARK --
Sep 3 02:39:48 h1boot -- MARK --
Sep 3 02:59:48 h1boot -- MARK --
Sep 3 03:19:48 h1boot -- MARK --
Sep 3 03:39:48 h1boot -- MARK --
Sep 3 03:59:49 h1boot -- MARK --
Sep 3 04:19:49 h1boot -- MARK --
Sep 3 04:39:49 h1boot -- MARK --
Sep 3 04:41:40 h1boot ntpd[4865]: kernel time sync status change 6001
Sep 3 09:39:08 h1boot syslog-ng[4227]: Syslog connection established; fd='7', server='AF_INET(10.99.0.99:514)', local='AF_INET(0.0.0.0:0)'
Impact of the h1boot freeze-up:
The front end real-time processes were not affected by the freeze, nor was their data transfer to the DAQ. All EPICS IOCs on the front ends froze up, which mainly impacted the Guardian nodes, since they received stuck data. MEDMs were also frozen at their 04:41 PDT values, and conlog did not receive any updates either. I suspect testpoint and excitation operations would have been unavailable during the freeze.