This morning I adjusted the alignment into the reference cavity, mostly with vertical adjustments on the periscope. When I started, the reference cavity transmission was ~0.6 V; afterwards it was ~3.56 V. I measured the open loop transfer function, originally with the common gain slider at 16 dB. After the measurement I changed this to 18 dB to sit at the peak of the phase bump. While I was measuring the transfer function at 18 dB, a glitch of some sort occurred, which resulted in a roughly 2 dB loss of gain, so the unity gain frequency was no longer sitting on the peak of the phase bump. Coincident with the glitch I noticed that the Pockels cell monitor saturated, perhaps caused by the input mode cleaner trying to acquire? The difference in the common gain is not much. It might be better to return to 16 dB if 18 dB is problematic for some reason.
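As an aside, here is a minimal sketch (not the actual measurement script) of how one might pull the unity gain frequency and phase margin out of a measured open loop transfer function; the frequency vector and toy response below are made up:

```python
import numpy as np

# Hypothetical measured OLTF: frequency vector in Hz and complex response,
# e.g. as exported from the transfer function measurement.
freq = np.logspace(3, 6, 2000)                                # 1 kHz - 1 MHz
oltf = 3e5 / (1j * freq) * np.exp(-2j * np.pi * freq * 1e-7)  # toy 1/f loop

mag_db = 20 * np.log10(np.abs(oltf))
phase_deg = np.angle(oltf, deg=True)

# Unity gain frequency: point where the magnitude crosses 0 dB
idx = np.argmin(np.abs(mag_db))
print(f"UGF ~ {freq[idx]/1e3:.0f} kHz, phase margin ~ {180 + phase_deg[idx]:.0f} deg")

# A 2 dB change in the common gain scales |OLTF| by 10**(2/20) ~ 1.26 and
# slides the unity gain point along the magnitude slope.
```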
Attached are some images of the old and new schemes for holding the calcite polarisers in the output Faraday isolator. A fit check was carried out with the old, scratched-up polarisers. Everything fits just fine.
Sheila, Thomas Vo, Hang Yu,
There was a magnitude 8 EQ off the coast of Mexico tonight. We went into LARGE_EQ_NOBRSXY on the seismic configuration node after the first waves hit, but all the ISIs tripped anyway. We were able to keep most of the quads damped most of the time, although ITMY was tripped for a little bit. LLO called, and after talking with them we raised the WD thresholds to allow us to bring the ISIs to damped. This helped to stop the optics from swinging so much. A few minutes after we got everything damped, a few more ISIs tripped.
The attached screenshot shows a comparison of the ETMX STS during this EQ to the Montana one on July 5th. The STS was saturated tonight.
We quickly checked that all four quads moved when we applied offsets to the top mass. Although ITMY only moved 2 urad for a 10 urad offset, I think this could be a calibration issue with the alignment slider. We left all ISIs and suspensions damped for the night, with all HPIs isolated.
It probably would have been better to have switched the GS13s and T240s to low gain, rather than bypass the watchdogs (by setting the saturation threshold above the DAC range). It seems like a particularly bad idea to bypass the HEPI actuators. If the HEPI drive is saturating with sensor correction turned off (i.e. SEI CONF is in LARGE_EQ or SC_OFF), it seems like it would be best to just let HEPI trip and try to get the ISIs damped with the sensors in low gain. Remember the sensors can be switched from the Commands link on the ISI overview. The switch should be done with the ISI damped or off; switching to low gain while isolated will probably trip the ISI.
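For reference, this is roughly what the sensor gain switch would amount to if scripted rather than done from the Commands screen; the EPICS channel names here are placeholders only and should be checked against the real ISI database before anything like this is used:

```python
from epics import caget, caput  # pyepics

# Placeholder PV names -- not verified, check the ISI Commands / MEDM screens.
DAMP_STATE_PV = "H1:ISI-ETMX_ST1_DAMP_STATE"   # hypothetical
GS13_GAIN_PV  = "H1:ISI-ETMX_ST2_GS13_GAIN"    # hypothetical

# Only switch with the ISI damped (or off); switching to low gain while
# isolated will probably trip the platform.
if caget(DAMP_STATE_PV) == 1:
    caput(GS13_GAIN_PV, 0)      # 0 = low gain (placeholder convention)
else:
    print("ISI not damped -- leaving GS13 gain alone")
```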
Attached is a trend of the PSL status over the past 60 days. It seems like the FSS_TPD is slowly diminishing over time and needs to be looked at. We think this is the reason we had trouble locking in aLOG-38568 and the reason we had to increase the FSS gain by +6 dB.
The ALS requirement for power going into the fibre after the reference cavity is 10 mW. Fortuitously, 10 mW corresponds to 1 V of reference cavity transmission. I do not know how forgiving the ALS power requirement is, but the reduced reference cavity transmission might have something to do with the trouble locking in green. The fact that the servo gain needed to be increased by 6 dB probably means that the unity gain frequency of the FSS had dropped to a point where it may have presented problems for the IMC.
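To put rough numbers on that (assuming the stated 1 V ≈ 10 mW scaling holds linearly for the transmission PD):

```python
# Stated above: 1 V of reference cavity transmission ~ 10 mW into the fibre.
MW_PER_VOLT = 10.0   # assumed linear scaling

def fibre_power_mw(trans_volts):
    """Rough power into the fibre for a given refcav transmission reading."""
    return MW_PER_VOLT * trans_volts

print(fibre_power_mw(0.6))    # ~6 mW  -- this morning, below the 10 mW requirement
print(fibre_power_mw(3.56))   # ~36 mW -- after the realignment
print(10 ** (6 / 20))         # a +6 dB gain change is a factor of ~2 in loop gain
```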
Sheila, Hang, Patrick, TVo
The ALS was particularly noisy tonight so the ISC_LOCK guardian had trouble finding IR on the Y-ARM. To get around this, we had to manually shift the ALS-DIFF OFFSET to get some resonant flashing.
We saw that the Diffracted Power % in the ISS was too low and adjusted the ISS second loop following the instructions in aLOG-31245, but the IMC would not stay locked when the second loop was turned on. After this procedure the IMC wouldn't stay locked for more than 5 minutes at a time, until Sheila added 6 dB to the PSL-FSS_COMMON_GAIN; now it seems to be more stable.
Hang was able to lock DRMI_1f for his 72 MHz measurements after the change in the common gain, but a magnitude 8.0 earthquake off the coast of Mexico broke the lock.
Sheila and I had similar problems; we weren't able to get the realtime graph to come up while connected to the h1nds1 server. Not sure if this is a temporary problem...
The DAQ uptime on the CDS overview screen for h1nds1 is not going up; h1nds0 still works.
I could not find any obvious error messages in any of h1nds1's logs. I restarted the daqd process at 09:24 PDT and this seems to have fixed the issue. Investigation continues.
24 channels added. 95 channels removed. (list attached)
TITLE: 09/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: None
SHIFT SUMMARY: End X HEPI, ISI & TMS were tripped upon arrival (see alogs 38549, 38554). Have been trying unsuccessfully to get back to NLN all day. Have not made it past ENGAGE_DRMI_ASC. Sheila and TVo continuing to diagnose.
LOG:
15:02 UTC Peter to optics lab
15:10 UTC Richard to Biergarten to retrieve equipment
15:14 UTC Set ISI config to SC_OFF_NOBRSXY
15:23 UTC Richard back
15:26 UTC Set ISI config back to WINDY
15:29 UTC Reset IOP DACKILL WD for TMS, then SEI, then HEPI WD, then ISI WD
15:31 UTC Peter back
16:42 UTC Peter to optics lab
17:31 UTC Kyle to end Y VEA, then end X MR
17:47 UTC Corey to LVEA to drop off periscope in squeezer bay
17:49 UTC Running ditherAlign.py TMSX
17:51 UTC Corey back
Dither align script did not find photodiodes.
18:01 UTC Kyle back
18:39 UTC Peter done
19:15 UTC h1ecaty1 EPICS IOC crashed. Restarted.
19:28 UTC Initial alignment done.
19:31 UTC Lock loss from ALS DIFF
19:33 UTC Christina to end X (not VEA) then mid X.
19:35 UTC Lock loss from ALS DIFF (ramping DARM1_GAIN to 400).
20:22 UTC Lock loss (same)
20:25 UTC Ed to mid Y
21:34 UTC Elizabeth to LVEA to retrieve equipment
21:40 UTC h1asc model restart (WP 7142)
21:42 UTC DAQ restart
Elizabeth back
22:40 UTC SRM tripped
The original Perl script cds/common/scripts/beckhoff_gang_whiten no longer works on the new Debian 8 machines. As per ECR E1700056, when we find broken Perl scripts we rewrite them in Python.
beckhoff_gang_whiten is now a Python script. I think there was a bug in the Perl script whereby an invalid argument could cause the FE digital whitening switching without its corresponding Beckhoff analog switching; I fixed this in the new code. There was also some redundancy in the code, which I removed.
Sheila tested the new code on ASC; it appears to work.
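Not the actual script, but a minimal sketch of the failure mode and the fix described above: validate the requested whitening setting before touching either side, so the front-end digital switch can never be flipped without the matching Beckhoff analog switch. All names here are illustrative:

```python
import sys

VALID_STAGES = {"1", "2", "3"}   # illustrative set of whitening stages

def gang_whiten(stage, enable, set_fe_digital, set_beckhoff_analog):
    """Switch the FE digital whitening and the Beckhoff analog whitening
    together, or not at all."""
    if stage not in VALID_STAGES:
        # The old Perl script could fall through here and flip only the
        # digital side; bail out before doing either.
        sys.exit(f"invalid whitening stage: {stage!r}")
    set_fe_digital(stage, enable)
    set_beckhoff_analog(stage, enable)
```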
FAMIS6539
No water was added to either chiller; both were full. Looks like they were filled two days ago, so this makes sense. The filter was clear and the water looked good.
Work Permits:
https://services.ligo-la.caltech.edu/LHO/workpermits/view.php?permit_id=7141
https://services.ligo-la.caltech.edu/LHO/workpermits/view.php?permit_id=7142
Hang, Daniel, TiVo, Sheila, Patrick, Dave:
The isc/common/models/WFS.mdl file was edited to modify the WFS_DOUBLE_DEMOD part (only used by the h1asc model). h1asc was compiled, installed and restarted.
New h1ecatc1plc2 code was installed, which complements the front end h1asc change.
A new H1EDCU_ECATC1PLC2.ini file was installed into the DAQ, along with a new H1EDCU_GRD.ini which removes the temporary nodes (bounce, als-esd).
The DAQ was restarted to use new H1ASC.ini, H1EDCU_ECATC1PLC2.ini and H1EDCU_GRD.ini. Only slow channels were added/removed.
This is an update on my investigation into alignment of HAM2 SEI and the alignment of the IMs on HAM2, both when the chamber is vented and when under vacuum.
In May 2017, the IM optics were aligned and damped during the vent. The ISI was in its nominal state before the vent, and briefly while the chamber was vented, with the IMs still aligned and damped.
Using the numbers from that vent, I calculated the largest alignment changes in the IM optics (160urad in pitch, 108urad in yaw) and the total change in the HAM2 optical table alignment (15urad).
For vented alignment, with doors off, and a level and centered beam through the IO optics, I used the SEI and IM alignment data from the 2014 vent when Kiwamu aligned and recorded the IM alignments in the alog.
For current alignment under vacuum I used the alignment in May 2017, when I did an IMC beam spot measurement.
Using the vented 2014 data and under-vacuum data from May 2017, the change in the SEI (HEPI + ISI) is 142urad.
I set the maximum expected change due to SEI equal to 300urad (~SEI change * 2).
In May 2017 the IMC beam centering was measured and found to be off-center on MC3 by -5.7mm in yaw; from this I calculated that the beam on IM1 was -6.6mm off-center in yaw.
I calculated the position and angle changes of the beam on CW1 of the IO Faraday; the position results from my Matlab code are shown in the table below:
IMC beam centering | IM1 and IM2 yaw alignment changes (2014 to 2017) | beam position from center on CW1
calculated, centered | IM1 = -737urad, IM2 = -90.5urad | 1.1mm
measured, off-center | no change to IM1 or IM2 alignment | 6.9mm
measured, off-center | IM1 = -737urad, IM2 = -90.5urad | 8.0mm
measured, off-center | IM1 = -737urad, IM2 = -90.5urad, and +/-300urad on one or both | smallest value was 7.4mm
The measured off-center beam on MC3 is consistent with the loss of the IMC Trans beam. My diagram showing this is in alog 36408.
Some changes to IM1 and IM2 alignment have been made since May; however, those changes are small compared to what would be required to recover a centered beam on the IO Faraday. From the May alignment values for IM1 and IM2, a single change of IM1 yaw of 4050urad, or a single change of IM2 yaw of 22500urad, would bring the CW1 beam back to within 1mm of center. Both changes are significantly larger than the changes measured in the SEI or IMs.
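For context, here is a stripped-down sketch of the geometry behind the table above (not the actual Matlab code): propagate a (position, angle) ray through a drift and apply each mirror's yaw change as a 2x(misalignment) kick on reflection. The path lengths are placeholders, not the real IO layout distances:

```python
def drift(x, theta, length):
    """Propagate transverse position x (m) and angle theta (rad) through free space."""
    return x + length * theta, theta

def mirror_yaw(x, theta, dyaw):
    """A mirror yaw change of dyaw deflects the reflected beam by 2*dyaw."""
    return x, theta + 2.0 * dyaw

# Placeholder path MC3 -> IM1 -> IM2 -> CW1 with made-up distances
x, th = -5.7e-3, 0.0                  # measured -5.7 mm off-centre on MC3
x, th = drift(x, th, 0.4)             # MC3 to IM1 (placeholder 0.4 m)
x, th = mirror_yaw(x, th, -737e-6)    # IM1 yaw change, 2014 -> 2017
x, th = drift(x, th, 0.5)             # IM1 to IM2 (placeholder 0.5 m)
x, th = mirror_yaw(x, th, -90.5e-6)   # IM2 yaw change
x, th = drift(x, th, 0.6)             # IM2 to CW1 (placeholder 0.6 m)
print(f"beam on CW1: {x*1e3:.2f} mm from centre")
```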
Same as FRS 8220.
Restarted IOC.
Here is the timeline of what happened this morning at EX (see attached minute trend plot, showing time span 03:00 - 10:00 PDT).
At 03:28 h1iopsusex lost track of its DAC channels and went into the safe state of not driving any DAC outputs. At this point TMSX and ETMX were not being damped, but the ISI was still operational (red line jumps to 132 in the plot). Over the next hour TMSX slowly rang up and eventually hit the software watchdog limit of 110 mV RMS. Ten minutes later the SWWD tripped the h1iopseiex DACs (purple line drops to zero), stopping the ISI isolation, which then quietened the suspensions down over the next three hours. When the control room untripped the SWWD at 08:28 the sequence repeated itself. The problem was resolved by restarting the h1iopsusex model at 09:43.
The initial problem of the IOP model losing track of its DAC channels has been seen before, and is more likely on the end station SUS machines due to their increased ADC glitch rate (faster computers).
Bottom line is the software watchdog acted correctly, and in this case resolved the ring-up of TMSX. For O3, the hardware watchdog will be active at EX as a fall-back watchdog in case the SWWD becomes non-functional.
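For the record, the SWWD check is essentially a running RMS of the top-mass OSEM signals compared against the 110 mV threshold; a toy version (the signal and sample rate are made up):

```python
import numpy as np

SWWD_LIMIT_MV = 110.0   # software watchdog RMS limit quoted above
FS = 256                # Hz, placeholder sample rate

def rms_mv(block):
    """Plain RMS of a block of samples, in mV."""
    return np.sqrt(np.mean(np.asarray(block) ** 2))

# Toy ring-up: a 0.45 Hz oscillation whose amplitude grows over an hour
t = np.arange(0, 3600, 1 / FS)
signal = (t / 3600) * 200 * np.sin(2 * np.pi * 0.45 * t)   # mV, made up

# Check one-minute blocks and report when the threshold is first exceeded
for start in range(0, 3600, 60):
    if rms_mv(signal[start * FS:(start + 60) * FS]) > SWWD_LIMIT_MV:
        print(f"SWWD threshold exceeded at t = {start} s into the ring-up")
        break
```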
Sheila restarted the models on h1iopsusex to fix the problem.
Kyle, Gerardo
Today we leak tested the new 2.75 CFF, valved-in the RGA and NEG pump, dumped GV18's unpumped gate annulus volume into the adjacent (pumped) annulus volume and opened GV18. This completes WP #7139 and #7139. Attached is the pumpdown. Note: the controller for the Y2-8 ion pump is showing 5600 V, 1.0 x 10^-11 Torr and 0.0 microamps pump current - hmmm. I STOPPED then STARTED the HV but get the same reading. Neither the "Torr" nor the "current" are as expected. This might be related to the controller setup parameters. Otherwise, the pump isn't actually pumping! I'll consult Gerardo tomorrow.
Another ion pump cable issue?
What does X2-8 read?
Yes - the ion pump current could be at the limit of the controller's measurement circuit. Pressures on the two arms look to be the same now. I thought there was a factor of 10 difference yesterday but maybe I had the wrong glasses on. Kyle or Gerardo?
I checked the display of the controller for X2-8 this morning and found it displays the same as the Y2-8 controller. That's great and all, and obviously they are pumping, but I thought that we had observed leakage current in the 300' HV cable that amounted to micro-amps. Also, as a general rule of thumb, you figure 1 micro-amp of pump current for a 100 l/s ion pump @ 1 x 10^-9 Torr. As such, we should be seeing 5 - 10 micro-amps of pump current in addition to any leakage current.
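Putting numbers on that rule of thumb (the linear scaling with pressure and pump speed is the assumption here):

```python
def expected_ion_pump_current_ua(pressure_torr, speed_l_per_s=100):
    """Rule of thumb from above: ~1 uA per 100 l/s of pump speed at 1e-9 Torr,
    assumed to scale linearly with both pressure and pump speed."""
    return 1.0 * (speed_l_per_s / 100.0) * (pressure_torr / 1e-9)

print(expected_ion_pump_current_ua(5e-9))   # ~5 uA
print(expected_ion_pump_current_ua(1e-8))   # ~10 uA
# So a controller reading of 0.0 uA (plus whatever cable leakage) looks suspicious.
```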
I suggest turning off one or both ion pumps and looking for a response on the vacuum gauge over a period of a day or more. Hopefully this would confirm that they are pumping. As to leakage current - maybe things have dried out significantly over the summer and therefore leakage has fallen? Need to get some of these signals into EPICS to enable trending.