So the user might be more quickly alerted to an issue with the fluid system causing problems for platform isolation. This also raises an alarm in the alarm handler system.
Attached is a snapshot of the HEPI and BSC & HAM ISI overviews with the OK/NOT OK widget circled. If these lights are not green, HEPI isolation will likely fail, with Actuator Watchdog trips.
Edited files have been committed to the SVN.
David.M, Filiberto.C
Yesterday I was looking at the NN array output channels to check everything was working and noticed that the 7th NN channel (H1:NGN-CS_L4C_Z_7_OUT) was producing a noisy output about 3 orders of magnitude higher than expected. I thought the L4C itself might be busted, so I went into the LVEA this morning and swapped it out for another one that we tested earlier (L41429). The problem remained with the new sensor, and even when the sensor was unplugged. We went into the CER to diagnose where the problem was. Turning off the L4C interface chassis and turning off the AA chassis both failed to fix the problem, which seems to indicate that the large noise level in this channel is caused by a problem in the I/O chassis.
This problem was not fixed with an I/O chassis power cycle, so it may be a problem with the ADC card.
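For future reference, this is roughly the kind of check that flags the bad channel (a minimal sketch, assuming NDS2 access from a control room machine; the server, GPS times, and the x100 threshold are placeholders, not what we actually ran):

```python
# Sketch: flag any NN L4C channel whose RMS is far above the median of the array.
import numpy as np
import nds2

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)   # placeholder NDS2 server
gps_start, gps_stop = 1153800000, 1153800060               # placeholder 60 s stretch
channels = ['H1:NGN-CS_L4C_Z_%d_OUT' % i for i in range(1, 31)]

bufs = conn.fetch(gps_start, gps_stop, channels)
rms = {chan: np.std(buf.data) for chan, buf in zip(channels, bufs)}

median = np.median(list(rms.values()))
for name in sorted(rms):
    flag = '  <-- suspect' if rms[name] > 100 * median else ''
    print('%-28s rms = %.3g%s' % (name, rms[name], flag))
```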
Attached is a plot of the cooling circuit performance for the ~1-5/8 days since the water manifold swap-out. About halfway through there is an increase in the manifold pressure, which is reflected in the flow rates of heads 1-3 but not head 4, nor in the power meter circuit. Not sure what to make of things yet other than to simply provide a time reference.
I'm testing an addition to the PI guardian code. If PI MODE2 starts to ring up while we're in INCREASE_POWER, please wait to attempt damping until it is above an amplitude of ~30 on the PI RMS Monitor StripTool. This ring-up is due to a phase flip that changes the required gain sign, and I'd like to automate the gain sign change. Turn-around should happen around an amplitude of 10, so if it continues to rise past that, damp away.
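For the record, the logic I'm adding is along these lines (a simplified sketch of the intent, not the installed guardian code; only the 30 and 10 amplitude numbers come from the monitor, the rest is placeholder):

```python
# Simplified sketch of the intended MODE2 handling: if the mode rings up well
# past where a correct-sign damp should have turned it around, assume the
# phase has flipped and flip the damping gain sign.
RING_UP_THRESHOLD = 30.0   # RMS amplitude indicating the gain sign is wrong
TURN_AROUND = 10.0         # amplitude where a correct-sign damp should turn the mode around

def mode2_damping_gain(rms_amplitude, current_gain):
    """Return the damping gain to apply, given the current RMS monitor value."""
    if rms_amplitude > RING_UP_THRESHOLD:
        return -current_gain   # sign flip: keep damping with the opposite sign
    return current_gain
```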
[Jenne, Terra, Koji, Ed]
Koji suspects that we might have a length offset in PRCL of about 1 nm, so we tried giving PRCL an offset to see if that would help the recycling gain. Nope. Using the calibration for the PRCL error signal, Kiwamu told us that 1 nm in PRCL is about 1,000 counts of offset. Empirically, about 100 counts of offset breaks the lock, and we don't see any change in the PRC gain. We'd expect a gain change of about 1 for 0.1 nm if a length offset were the true problem; since we don't see that, we infer that the PRCL length is not the source of all our PRC gain troubles.
After doing math today, I realized that yesterday's moves of about 16urad in IM3 correspond to only about 50um in spot motion on PRM (IM4 is flat, 1.5889m between IM3 and PRM AR face according to E1200616). That's basically nothing. Tonight I moved much farther, and was able to see changes in the PRC gain, although I couldn't get it above 30. On the other hand, this was done without optimizing the soft offsets, so maybe we can get some more out of that.
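For reference, the spot-motion number above is just the lever-arm estimate (a sketch; it assumes the beam deflection is twice the IM3 tilt and that flat IM4 adds no angle):

```python
# Spot displacement on PRM from an IM3 tilt, small-angle lever-arm estimate.
theta_im3 = 16e-6        # IM3 tilt [rad] (yesterday's move)
lever_arm = 1.5889       # IM3 to PRM AR face [m], from E1200616
spot_motion = 2 * theta_im3 * lever_arm   # reflection doubles the beam angle
print('%.0f um' % (spot_motion * 1e6))    # ~51 um
```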
I also tried moving the spot on PR2, but that didn't do anything for my recycling gain.
Also of note is that I was able to get more PRC gain out of PRC1 offsets by also including a yaw offset. In the end, I was happiest with +0.5 pitch offset, and +0.4 yaw offset. This is the starting PRC gain in the striptool plot below, before I got even more PRC gain from moving the PRM spot.
When we lost lock, the alignment was terrible, which is perhaps not too surprising. What is surprising is that somehow the PRM sliders got changed by more than 1,000 urad after the lockloss. In the PRM plot below, you can see that the OSEM readbacks start aligned, and the sliders are at some value. Then, at lockloss, the PRM is misaligned, so the sliders stay the same but the OSEMs read different values. Then, the sliders get moved. This should never happen, and isn't called for in any guardian that I (a) can find now or (b) have ever seen. Unclear what that was.
Just finished initial alignment, so the IFO is ready to go for the morning operator to start locking. With the OMC situation and my guardian "hack" (see alog 28686), you should be able to select IncreasePower basically any time after DRMI has locked, and everything else will run through smoothly and get you to 50W.
23:00 Kyle to MY for CP3 spillage
23:20 Kyle back from MY
23:39 Initial Alignment
6:13 After lockloss we are searching for IR resonance
6:25 Initial alignment
Opening statement: The script to do the transition aLog was not working, and the initial alignment that followed left doing the entry by the wayside. The script seems to be working now.
TITLE: 07/28 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
We have a serious issue with the OMC. Even after a day of trying, we are unable to resonate a 00 mode.
Many people,
(Anyone, please add comments if I am missing something or inaccurate)
[Time line]
The interferometer was locked with a 50 W PSL last night (28670) with the DC readout. At around 8:13 UTC (1:13 local), the interferometer lost lock due to a human error in which an integrator of the OMC LSC servo in the digital system (FM2 of OMC-LSC-SERVO) was accidentally disengaged. 20-30 msec after the disengagement of the integrator, the laser power in HAM6, according to ASAIR_A_LF, went up to at least 150 W for a short duration of roughly 50 msec. Since ASAIR_A saturated, this power is a lower limit on the actual laser power in HAM6. In terms of energy, it is about (50 msec) * (150 W) = 7.5 J at least. According to OMC-LSC_SERVO_OUT, the OMC seemed to have escaped the resonance before the laser pulse arrived. Therefore it is unclear how much energy was actually deposited on the cavity mirrors of the OMC from this particular lockloss.
No locking attempt was made until 16:00 UTC (9:00 AM local) this morning. Later, the interferometer was locked with a 2 W PSL with the RF readout. We noticed that the OMC was unable to acquire a 00 carrier mode at all. After one hour or so of investigation, the interferometer was intentionally unlocked. We started investigating the OMC with a single bounce configuration.
[The symptom]
No matter how we changed the length offset, the OMC did not show a visible 00 mode on the OMC trans camera. Instead, the resonances the OMC swept across appeared to be higher order modes with Airy-disk-type halos around them. In fact, we could not get a visible 01 or 10 mode either. Keita studied the effect of the OMC SUS and OM tip-tilt alignment, though, and was able to get a visible TEM11 mode.
We do not think this symptom is due to some kind of misalignment --- we steered the OM mirrors and OMC suspension around, typically by more than several hundred urad, but were never able to get a visible 00, 01 or 10 mode on the camera. The PZT2 DC voltage monitor told us that PZT2 was getting the correct voltage.
The beam shape of OMC REFL at ISCT6 visually looked fine -- it appeared to be a Gaussian beam. We steered the input optics back to where they used to be (28670) before Jenne moved them.
[Shutters were not functioning]
Daniel discovered that neither the mechanical shutter nor the PZT shutter had been working in the months since the HAM6 vent in April. Richard and Daniel found that the shutter trigger box had incorrect cabling. For this reason, we believe that the OMC and the DCPDs have been exposed to high intensity light at every lockloss. They fixed the cable and now the shutters should be triggered as intended.
We are going to try going forward with high power work tonight using RF instead of DC readout. There is a new value in lscparams.py, "use_dc_readout". It is currently set to zero, so guardian will not try to transition to DC readout. When we're ready, we should just have to flip this to 1.
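For anyone picking this up, the flag is meant to be used roughly like this (a sketch of the intent only; the real guardian code may differ, and the transition function here is just a stub standing in for the actual steps):

```python
# In lscparams.py (as described above):
#     use_dc_readout = 0    # 0 = stay on RF readout; flip to 1 to re-enable DC readout

# Sketch of how a guardian state would consult the flag.
import lscparams

def do_dc_readout_transition():
    """Hypothetical stand-in for the actual RF -> DC readout transition steps."""
    pass

if lscparams.use_dc_readout:      # currently 0, so this branch is skipped
    do_dc_readout_transition()
```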
The plot shows that the shutters had not been triggered since Apr 4, 2016.
(Stefan was working on this but I extended it to look at the other lock losses)
Plots of ASAIR_B and DCPD_SUM for the last 4 lock losses:
Jul 27, 2016
lockloss1: 3:48
lockloss2: 5:38
lockloss3: 6:15
lockloss4: 8:15 (Last one)
These tell us that the last one was not a particularly unusual lock loss. We regularly had lock losses at a similar level.
The mode which gives us ~10% of transmitted light through the OMC doesn't look like a mode of a misaligned cavity. There are multiple concentric rings around the center spot, more reminiscent of a fringe pattern from a central aperture.
This would be compatible with a worst-case scenario where we have an OMC optic with a damaged coating. The DCPDs look healthy, without any indication of elevated dark current. This runs counter to our intuition that the DCPDs would be the most vulnerable.
We tried a mode scan using a single-bounce beam with QPD alignment, and no sensible mode was visible at all. The maximum transmission measured by DCPD_SUM was about 0.7 mA or so, when we would expect O(100 mA) for a 00 mode.
Later I found that when I misaligned the OMC enough, I recovered some sensible-looking higher order modes, but only the ones with a node at the center. We were never able to visibly identify any mode that doesn't have a node at the center.
In the attached plot, the OMC suspension was yawed considerably, OMC automatic alignment was disabled, and the PZT was scanned over a bit more than one FSR. The X axis is the PZT2 voltage, the Y axis is DCPD_SUM.
The two modes visibly identified were a plus-shaped HG11-type mode (i.e. 2nd order, about 8 mA) and an LG3-type mode (i.e. 3rd order, 6 bright spots, about 6.5 mA); both have a node at the center. These are both O(10%) of the power coming to the OMC.
We were also able to see what is arguably an HG10-type mode, but one of the two bright spots was more like an ugly blob with a lot of structure in it. This HG10-type peak is also very broad compared with the HG11- and LG3-type peaks.
Everything else was kind of hard to identify, but the transverse mode spacing tells us the positions of the 00, 4th-order and 5th-order modes.
It seems like the 00 peak is tiny, and even broader than the first-order mode.
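As a reminder of how we map scan peaks to transverse order (a sketch only; both cavity numbers below are assumed nominal values, not anything fitted from this scan):

```python
# Where transverse order n should land within one FSR of the PZT scan.
f_fsr = 264.8e6   # assumed OMC free spectral range [Hz]
f_tms = 58.0e6    # assumed transverse mode spacing [Hz]
for n in range(6):
    frac = (n * f_tms / f_fsr) % 1.0
    print('order %d sits at %.2f FSR from the 00 resonance' % (n, frac))
```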
Attached is a trace of ASAIR_B_LF_OUT, calibrated in Watts out of HAM6. The top panel is the fatal lock loss, the bottom one is the one before it.
For the OMC REFL light, we realigned the GigE camera and took some pictures to quantitatively assess how Gaussian the beam is.
The measurement was done with a 2 W PSL in a single bounce configuration (with ITMY misaligned). The OMC was in a non-resonant state where I saw almost no light in the OMC trans camera. The OMs and OMC SUS were initially servoed to their nominal operating points using the ASC DC loops and the OMC SUS QPD loop.
Clearly, the OMC REFL showed some discrepancy from a pure Gaussian, but not a lot. It is unclear from the image which optic introduced the distortion. Moving the OMC REFL camera around did not improve the beam quality on the camera.
The last attachment is a tar.gz of the images in numpy npz format.
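If you want to poke at the images, something like this should open them after untarring (a sketch; the filename is a hypothetical example, and I'm not assuming anything about the array names stored inside):

```python
# Minimal sketch for loading one of the attached images.
import numpy as np

data = np.load('omc_refl_image.npz')   # hypothetical filename from the tarball
print(data.files)                      # names of the arrays stored in the file
image = data[data.files[0]]            # grab the first array, whatever it is called
print(image.shape, image.dtype)
```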
David McManus, Jenne Driggers
IMPORTANT: The channel numbers which correspond to locations of the L4Cs have been changed since previous posts, see the updated table and map below and ignore information in previous posts.
When I reference channel numbers in this post, I mean the channels H1:NGN-CS_L4C_Z_#_OUT, where # is the channel number.
After today, 29 of the 30 L4Cs have been fixed to the floor and connected to the CDS. Each sensor on the floor is positioned inside a foam cooler as in previous posts, and the boxes are now clearly labelled with the channel number of the L4C, as shown in the first picture (with sensor 25). There are now proper threaded connections between each sensor and its cable, which prevent the cables from being pulled out.
The map attached to this post shows the locations of all sensors in the array, and their channel numbers in cds. L4C number 1 is the only one which is not currently placed because it is right in the middle of a busy and narrow walkway.
The attached table shows which L4C serial numbers correspond to which channels, and also which channel each serial number used during the calibration huddle test. Note that because not all 30 sensors could be plugged in during the huddle, certain channels were used for different sensors at different times. For this reason the table includes the start and end dates for when each L4C was attached to the channel listed during the huddle.
The other pictures attached show areas where cable protectors are now laid out in the LVEA to stop people tripping on the cables.
With Jenne's approval, I set the IMs back to the nominal values from alog 28016.
Lowered CP3 LLCV from 19% to 18% open. Interesting change in exhaust pressure trend going from 20% to 19% this morning.
I had a look at lock loss times and their durations during O1 (only lock losses from nominal low noise). I also had a brief look at a couple of the physical environment channels (H1:PEM-EY_WIND_ROOF_WEATHER_MPH for wind and H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M for seismic) to see how they are correlated with these lock losses. A few plots are attached below. All of the data for these plots are mean minute trends.
Plots:
Duty_cycle_noncumulative.pdf - This is a 3D plot showing the duty cycle for different wind and seismic bins (percentiles)
Duty_cycle_cumulative.pdf - This is a 3D plot showing the duty cycle when the wind or seismic behaviour is greater than or equal to the given percentiles
Downtime_wind_seis.pdf - This is a 3D column plot showing the integrated 'downtime' that results from a lock loss in the given wind and seismic bins (percentiles)
lock_losses.pdf - This plot shows the number of lock losses per percentile bins, surf (matlab) makes it look a bit weird.
I've also attached lock_losses.txt which is a list of the times in GPS when the lock losses occur (first column) and their duration in seconds (second column). The durations and loss times are only to the nearest minute unfortunately since I used minute trend data.
Percentiles.txt contains the wind (first row) and seismic (second row) channel values that correspond to 5% intervals starting from the 0th percentile (0,5,10...). These are for the mean minute trends though which are much lower than the maximum minute trends.
The scripts used to generate these plots are located at: https://dcc.ligo.org/T1600211
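For anyone who just wants the gist without opening the DCC scripts, the binning behind the duty-cycle and lock-loss plots is along these lines (a minimal sketch, not the actual analysis code; the file names are hypothetical stand-ins, and the percentile edges here are computed from the lock-loss samples themselves rather than from the full O1 trends):

```python
# Sketch: count lock losses per 5%-wide wind/seismic percentile bin.
import numpy as np

# Mean minute-trend value of each channel at every lock-loss time.
wind = np.loadtxt('wind_at_locklosses.txt')   # hypothetical file, one value per lock loss
seis = np.loadtxt('seis_at_locklosses.txt')   # hypothetical file

# 5% percentile edges (the real analysis takes these from the full O1 trends,
# i.e. the values listed in Percentiles.txt).
wind_edges = np.percentile(wind, np.arange(0, 101, 5))
seis_edges = np.percentile(seis, np.arange(0, 101, 5))

counts, _, _ = np.histogram2d(wind, seis, bins=[wind_edges, seis_edges])
print(counts)   # number of lock losses in each wind/seismic percentile bin
```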