Everything looks normal with these trends.
J. Kissel, D. Sigg

Investigating the last corner station Beckhoff error, regarding the 9 MHz EOM driver offset. The complaint is that the readbacks (H1:LSC-MOD_RF9_AM_RFOUT and H1:LSC-MOD_RF9_AM_RFSETMON) are out of range at ~18.6 [dB], where the request (H1:LSC-MOD_RF9_AM_RFSET) is 16.8 [dB]. It is unclear what the tolerance is; it appears to be internally set and not broadcast over EPICS. Side note: leading up to and during O2, the request is 16.8 while the IFO is down/acquiring, and it's reduced to 10.8 in NLN.

The offset was set to 16.8 dB on Feb 23 2016 (see LHO aLOG 25687) to balance the RF power of the driver, and its readback has been slowly drifting *up* from there since. Looking back further, we see a similar upward drift during O1, even though the offset's request was different. The 45 MHz Beckhoff readbacks are stable. The fast ADC version of the control voltage (H1:LSC-MOD_RF9_AM_CTRL_OUT16) doesn't show the drift, and its values don't agree with the Beckhoff readbacks. The tolerance was increased in LHO aLOG 30406 (perhaps unbeknownst to the increaser) to accommodate the drift.

Chassis assembly E1400445, controller circuit D0900761. These channels are all read out by a single 4-channel ADC: ecatc6, Middle Rail 10 is the module (from D1300745). Maybe internal voltage reference drift?

Recommendations:
- check the cabling at the ADC
- measure the input to the ADC to confirm the expected voltages
- or replace the ADC
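For anyone wanting to reproduce the trend, here is a minimal sketch (assuming the standard nds2 Python client and the usual LHO NDS server; GPS times are placeholders) of pulling the request and readback minute trends to confirm the slow drift:

# Sketch: pull minute trends of the 9 MHz EOM driver request vs. readback
# channels quoted above. Assumes the nds2-client Python bindings and that
# nds.ligo-wa.caltech.edu:31200 is reachable; adjust the GPS span as needed.
import nds2

channels = [
    'H1:LSC-MOD_RF9_AM_RFSET.mean,m-trend',     # request
    'H1:LSC-MOD_RF9_AM_RFSETMON.mean,m-trend',  # readback
    'H1:LSC-MOD_RF9_AM_RFOUT.mean,m-trend',     # readback
]

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
start, stop = 1160000000, 1180000000   # placeholder span (~Oct 2016 - May 2017)
buffers = conn.fetch(start, stop, channels)

for buf in buffers:
    print(buf.channel.name, buf.data[0], '->', buf.data[-1])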
Opened FRS ticket 8250 with regards to suspicious Beckhoff readback for the 9 MHz EOM driver.
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 34 seconds. LLCV set back to 20.0% open.
Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 985 seconds. TC B did not register fill. LLCV set back to 39.0% open.
Last week we took advantage of the down time to do Z diagonalization measurements on the BSC ISIs, as Jeff outlined in his alog 36360. I used the data we got to calculate and install coefficients on all of the chambers that needed decoupling. The first five plots are the closeout Z to X/Y transfer functions comparing the before and after results, showing the reduced coupling via tilt from Z drive to X/Y motion seen by the St1 T240s. For these plots, blue is always the before measurement and red is the after measurement. I forgot to include y-axis units, but they are in nm/s / nm*rtHz because I didn't convert the T240 signal to displacement.
Yesterday I looked at repeating this on the output HAMs. I was just efficient enough to get HAM4 mostly done, but I need more time to finish HAMs 2, 3, 5, and 6. The last attached plot is the Z to X/Y plots before and after for HAM4. The magnitude plots are in nm / nm*rtHz. I also got data for HAM4 Y to RX, but it looks like the ISI is good enough in that DOF. HAM5 was similarly clean for Z to RX/RY, and I have a coefficient calculated for HAM6 Z to RX, but I would like to do a measurement after installing the cpsalign elements before I make a permanent change.
The measurements for the HAMs are essentially the same as for the BSCs: driving at the inputs to the isolation banks while the ISI is isolated with high blends (750 mHz for the HAMs) and with sensor correction off. Drives are usually 1-5e7 in awggui, for as long as you have patience for.
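For reference, a minimal sketch of how the Z-drive to X-motion transfer function can be estimated from the drive and T240 time series with scipy. This is not the actual measurement script; the sample rate and data are stand-ins, and real data would come from the DAQ channels.

# Sketch: estimate a Z-drive -> X-motion transfer function from time series.
# 'drive' and 'response' are numpy arrays sampled at fs; in practice these
# would be the isolation-bank excitation and the ST1 T240 X signal.
import numpy as np
from scipy import signal

fs = 256.0                      # placeholder sample rate [Hz]
rng = np.random.default_rng(0)
drive = rng.standard_normal(int(600 * fs))                        # stand-in drive
response = 0.003 * drive + 0.1 * rng.standard_normal(drive.size)  # stand-in T240

nperseg = int(100 * fs)         # ~100 s segments -> 10 mHz resolution
f, Pxy = signal.csd(drive, response, fs=fs, nperseg=nperseg)
_, Pdd = signal.welch(drive, fs=fs, nperseg=nperseg)
_, coh = signal.coherence(drive, response, fs=fs, nperseg=nperseg)

tf = Pxy / Pdd                  # H(f) = S_dr(f) / S_dd(f)
print(np.abs(tf[1:5]), coh[1:5])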
For the record, and to be explicit for future reference, here are the BSC-ISI ST1 tilt decoupling matrix elements (H1:ISI-${CHAMBER}_ST1_CPSALIGN_${M}_${N}) that Jim has now installed. One reads the matrix as "the corrected RX and RY CPS signals used in the isolation loops contain small portions of X, Y, and Z signals." Columns are the raw (misaligned) CPS X/Y/Z signals; rows are the corrected (aligned) signals.

ETMX
         X          Y          Z
X        1          0          0
Y        0          1          0
Z        0          0          1
RX      -0.0030    -0.0020    +0.0060
RY      +0.0013    +0.0001    -0.0025
RZ       0          0          0

ETMY
         X          Y          Z
X        1          0          0
Y        0          1          0
Z        0          0          1
RX      +0.0010    -0.0010     0
RY      +0.0005    +0.00075    0
RZ       0          0          0

ITMX
         X          Y          Z
X        1          0          0
Y        0          1          0
Z        0          0          1
RX      +0.0015    -0.0025     0
RY      +0.0015    +0.0015    +0.0025
RZ       0          0          0

ITMY
         X          Y          Z
X        1          0          0
Y        0          1          0
Z        0          0          1
RX      +0.0015    -0.0007    -0.0057
RY      +0.00075   +0.0005    +0.0030
RZ       0          0          0

BS
         X          Y          Z
X        1          0          0
Y        0          1          0
Z        0          0          1
RX      -0.0001    -0.0007    -0.0035
RY      +0.00075    0         +0.0025
RZ       0          0          0
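As a concrete illustration of how these elements act, here is a sketch using the ETMX numbers above. It assumes (hedged, not checked against the front-end model) that each rotational DOF also passes through with unity gain, so the correction simply adds small portions of the translational CPS signals; the real correction runs in the front-end CPSALIGN matrix, not in Python.

# Sketch: applying the ETMX ST1 CPSALIGN translation->rotation elements, e.g.
#   RX_aligned = RX_raw + a_RX_X * X_raw + a_RX_Y * Y_raw + a_RX_Z * Z_raw
import numpy as np

# translation -> rotation coupling block for ETMX (columns X, Y, Z)
A = {
    'RX': np.array([-0.0030, -0.0020, +0.0060]),
    'RY': np.array([+0.0013, +0.0001, -0.0025]),
    'RZ': np.array([ 0.0,     0.0,     0.0   ]),
}

raw_xyz = np.array([10.0, -5.0, 20.0])          # example raw CPS X, Y, Z
raw_rot = {'RX': 1.0, 'RY': -2.0, 'RZ': 0.5}    # example raw CPS rotations

for dof, row in A.items():
    aligned = raw_rot[dof] + row @ raw_xyz
    print(f'{dof}_aligned = {aligned:+.4f}')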
We've seen lately (since the IMC transmission has been different - see alog 36361 and others) that the IMC won't re-lock itself consistently. Most of the time someone has to give a kick to the MC2 suspension to make the cavity flash so that it can lock.
Jim and I were looking at it, and it appears that with the different IMC transmission situation, the lowpass that effectively helps the IMC sweep while unlocked (IMC-MCL FM10) was not being triggered properly. It is supposed to be on when the IMC is unlocked, and off once we get locked. It was properly off when the IMC was locked, but it was flickering on and off when the IMC was unlocked. The FM trigger thresholds just needed to be reset, which included changing the hard-coded values in the IMC_LOCK guardian.
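For context, here is a hedged sketch of what resetting such trigger thresholds from a guardian state can look like. The channel names and numbers are hypothetical placeholders for illustration only; the real values and channels live in the IMC_LOCK guardian code.

# Hypothetical sketch only: filter-module trigger thresholds set from a
# guardian state. Channel names and values are placeholders, not the actual
# IMC_LOCK code. 'ezca' is provided by the guardian environment at runtime.
from guardian import GuardState

MCL_TRIG_ON = 50     # placeholder: enable the low-pass (FM10) below this level
MCL_TRIG_OFF = 200   # placeholder: disable it once the IMC is locked

class DOWN(GuardState):
    def main(self):
        ezca['IMC-MCL_TRIG_THRESH_ON'] = MCL_TRIG_ON    # hypothetical channel
        ezca['IMC-MCL_TRIG_THRESH_OFF'] = MCL_TRIG_OFF  # hypothetical channel
        return True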
With one or two unlock tests, the IMC seems to be once again relocking itself reliably. Please let me know if someone finds that it's not working again.
model restarts logged for Sat 27/May/2017 - Tue 30/May/2017 No restarts reported
model restarts logged for Fri 26/May/2017
2017_05_26 12:56 h1nds1
2017_05_26 13:03 h1nds0
2017_05_26 15:43 h1fw0
2017_05_26 16:15 h1fw0
2017_05_26 16:36 h1fw0
2017_05_26 16:46 h1ldasgw0
Restart of NDS for minute trend offloading. Unexpected crashes of h1fw0 (disk error); a restart of h1ldasgw0 fixed the issue.
model restarts logged for Thu 25/May/2017
2017_05_25 16:43 h1ecaty1
PLC code froze up on the EY Beckhoff; required a reboot to fix.
I analyzed the spherical power measured by HWSY from last night's lock and compared it to the estimated thermal lens from the SIM model. Once the HWS and SIM models were all put into the same single-pass scale (HWS data measures a double-pass of the test mass thermal lens and the SIM data estimates the single-pass thermal lens), I could see that the SIM model was underestimating the thermal lens by about 25%.
The old absorption value was 280ppb. The new absorption value (required to fit the SIM model estimate to the HWS data) is 360ppb.
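Since the thermal lens scales linearly with absorbed power, the updated absorption is just the old value scaled by the measured-to-modeled lens ratio. A minimal sketch of that arithmetic (the 1.28 ratio is inferred from the ~25% underestimate and the 280 -> 360 ppb numbers above):

# Sketch of the absorption rescaling described above.
old_absorption_ppb = 280.0

# HWS measures the lens in double pass, so halve it before comparing with the
# single-pass SIM estimate (as described above). The SIM model came out ~25%
# low, i.e. measured/model ~ 1.28.
lens_ratio_measured_over_model = 1.28

new_absorption_ppb = old_absorption_ppb * lens_ratio_measured_over_model
print(round(new_absorption_ppb), 'ppb')   # ~360 ppb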
TITLE: 05/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Engineering
OUTGOING OPERATOR: N/A
CURRENT ENVIRONMENT:
Winds are calm; primary and secondary microseism are calm
QUICK SUMMARY: ops lazy script isn't doing transition (-t).
[Vaishali, Patrick, Sheila, Jenne]
A few other locking notes from today, although nothing ground-breaking.
* I asked Vaishali to write a note "Xarm green weirdo - adjusting fiber polarization fixed it", thinking to myself that of course I'd remember exactly what I meant by weirdo. I think (esp. from the attached screenshot) that I thought adjusting the fiber polarization fixed the drifting of the Xend laser power, since that stopped for about 2 hours, although obviously it didn't actually fix it. Anyhow, Vaishali can comment here if she remembers more than I do....
* We really, really struggled with ALS DIFF while the wind was high earlier this afternoon. For a long while the whole wind trend plot was above the 20 mph line, gusting to above 40 mph. We weren't having trouble before that (it started pretty suddenly), and we were fine again after it died down and was averaging closer to 10 mph. The seismic config was in "Windy" the whole time, per the table on the config medm screen. We tried going through DIFF by hand several times, but I think the real key was just to wait until the wind was more calm.
* I saw that the power recycling gain dropped when we increased power. I tried re-setting the SOFT offsets, and in the end it turns out that the QPD offsets that Sheila had found were being overwritten in the ISC_LOCK guardian - a remnant of a time when we had 2 states for those that we liked - so we have reverted them to Sheila's offsets (since mine were nearly identical), and removed the hard-coded offsets from the guardian.
* I measured DHARD yaw just after increasing the PSL power to 30W (was actually closer to 27W during the measurement), and everything looked fine for going through with the Lownoise_ASC state. I did so by hand, and then again through at least one more lock letting guardian run through the state, and I had no problems. We aren't sure what this is, but it's not due to the SOFT offsets, since Sheila had been re-setting them by hand each lock at that time.
* On our final successful lock from this evening, I did the reduction of 45MHz modulation depth, skipped reducing the 9MHz modulation depth, and skipped the SRC ASC high power state, but otherwise was able to complete Noise_tunings, which basically puts us at NomLowNoise. After a while the CSOFT / dP/dTheta instability showed up, and we lost lock. I had increased the CSOFT pit gain from 0.6 to 0.8, but we lost lock while that was ramping, so I don't know if it would have helped or not. If all the oplevs were functional, we'd like to try oplev damping again. Alternatively, perhaps we'll try using the ISS 3rd loop again.
* After this, we've been unable to hold ALS_COMM locked, and we're suspicious of both the CARM / IMC loop (see the thread that starts with alog 36546), and the new ALSX laser (see alog 36550).
In conclusion, lots of measurements and testing, but no fundamental progress beyond the pre-ALS-laser-fiasco situation. Hopefully someone has inspiration for us in the morning regarding the ALSX laser power fluctuations, and we'll be able to keep moving forward.
If I remember correctly, we found it strange that the transmitted power was dropping of its own accord, and when you would tweak up the alignment a bit, it would appear to increase and then drop again.
S. Dwyer, J. Kissel

We found that the Beckhoff error handling system was reporting an intermittent error on the ALS Y WFS A demod local oscillator (LO) power (H1:ALS-Y_WFS_A_DEMOD_LOMON) having surpassed its threshold (H1:ALS-Y_WFS_A_DEMOD_LONOM). Trending reveals that
(a) the error bit has been green since the Beckhoff ADC was replaced on Sept 22nd 2016 (see LHO aLOG 29848), and
(b) the LO's power has been slowly increasing over time, and was now on the hairy edge of the threshold.

After consulting Sheila, who suggests "it's probably fine," I increased the threshold from 21 [dBm?] to 22.5 [dBm], and accepted the new threshold in both the safe and OBSERVE .snaps in the SDF system. (I'm not confident that this monitor is actually calibrated into physical units, but 21 [dBm] doesn't sound crazy, so I put confidence in the designer of the system having done so.)

Hopefully this doesn't turn out to be a canary for the demodulator like the diode laser power was for the now-replaced ALS X laser...
Opened FRS Ticket 8248 regarding missing 6 dB attenuator on the ALS COMM VCO path.
Other thresholds updated / cleaned up:

H1:ALS-C_COMM_A_DEMOD_LONOM increased from 12 to 19 [dBm?]. The monitor channel H1:ALS-C_COMM_A_DEMOD_LOMON increased from about 12.4 to 18.3 [dBm?] on March 01 2017 ~23:00 UTC (mid-afternoon local time). I could not find an associated aLOG about this.

H1:ISC-RF_C_AMP137M_OUTPUTNOM increased from 20 to 21.5 [dBm?]. The monitor channel H1:ISC-RF_C_AMP137M_OUTPUTMON had been as high as 22 [dBm?] between Feb 7 and Mar 23rd, but then was brought back to ~21 with some small steps between then and now. No aLOGs about this one either.

All of these monitor channel changes occurred through the course of the observing run, so we presume that this is the new normal. The new thresholds have been accepted into the SDF system.
A nominal LO for a length sensor is around ~10-13 dBm. For a WFS the signal level is divided between the 4 segments. In software, the readbacks are added back together, so the LO sum should be similar in value. For the demodulators the RF power is measured after an internal 10-11 dB amplifier. So, a normal readback will be around 21-24 dBm for the LSC and the ASC sum. There is no amplifier for the phase-frequency discriminators, so their readbacks will be between 10-13 dBm.
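A short sketch of the dBm bookkeeping described above (segment splitting, the software sum, and the internal amplifier), using representative numbers:

# Sketch of the expected WFS LO readback arithmetic: a nominal 10-13 dBm LO is
# split across 4 segments, the per-segment readbacks are summed back together
# in software, and the demod RF monitor sits after the ~10-11 dB internal amp.
import math

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10.0)

def mw_to_dbm(mw):
    return 10.0 * math.log10(mw)

lo_dbm = 12.0                                              # nominal LO level
per_segment_dbm = mw_to_dbm(dbm_to_mw(lo_dbm) / 4.0)       # ~6 dB below total
software_sum_dbm = mw_to_dbm(4 * dbm_to_mw(per_segment_dbm))  # back to ~12 dBm
readback_dbm = software_sum_dbm + 10.5                     # after internal amp

print(per_segment_dbm, software_sum_dbm, readback_dbm)     # ~6, ~12, ~22.5 dBm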
For a distribution amplifier the power is measured before the 8-way splitter and is nominally around 22 dBm.
The ALS-C_COMM demod is driven by the COMM VCO. Alog 34512 describes a measurement involving the COMM VCO. It looks like a 6 dB attenuator was left out when the changes were reverted. This should be fixed.
Not sure what happened on Feb 7, but on March 23 the harmonics oscillator was swapped, which required a readjustment of some of the RF attenuators. Looks like H1:ISC-RF_C_AMP137M_OUTPUTNOM was effectively reduced by 1 dB, see alog 35051. This is fine.
Checking the ALS WFS RF readbacks:
This indicates that the EL3104 module L7 in EtherCAT end 3 (D1400175-v2) of EY is broken too.
Temp A: 24.4 °C
Temp B: 21.6 °C
Diode: 1.80 A
Laser Crystal Temp: 29.60 °C
Doubler Temp: 33.62 °C
Changed set points:
H1:ALS-X_FIBR_A_DEMOD_RFMAX: 10 from 4
H1:ALS-X_FIBR_LOCK_BEAT_RFMIN: -10 from -15
H1:ALS-X-LASER_HEAD_LASERDIODEPOWERNOMINAL: 1.80
H1:ALS-X-LASER_HEAD_LASERDIODEPOWERTOLERANCE: 0.2
The cable to the H1:ALS-X_LASER_IR_DC photodetector is intermittent and needs to be re-terminated. Currently, the readback is broken.
S/N (on the controller) Pulled out -> 2011B, Installed -> 2011A
Interlock cable was installed by Filiberto.
Handles on the controllers were swapped as the new one didn't have the tapped holes necessary for mounting on the table enclosure extension. The untapped handles will be drilled and tapped later.
Kiwamu and I went out to the end station after they swapped the laser and tweaked the beatnote alignment. We ended up with more power on the BBPD than before the laser swap (20 mW now compared to 19 mW from the old laser before the power dropped on Sunday evening). We also moved the lens in the laser path before the beatnote beam splitter 1.5 inches closer to the beamsplitter. This increased the beat note power to 8 dBm, compared to about 0 dBm from the old laser before Sunday evening.
After we finished up the hardware work on the ISCTEX table, Jenne and Ed aligned the X arm for the green laser, which resulted in a highest (normalized) transmission of roughly 0.8 when fully resonant. The amount of green light power reaching the corner station therefore decreased by 20% from what it used to be. Since the output power of the new laser at that point was lower than the old laser's by 10%, half of the reduction in transmission can be explained by the reduced laser power. I think the remaining 10% is due to mode-matching loss.
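The split between laser power and mode matching follows from a simple ratio. A sketch of that bookkeeping, using the round numbers above (for losses this small the ratio and the additive 10% + 10% picture agree to within a percent):

# Sketch: decomposition of the reduced green arm transmission quoted above.
transmission_ratio = 0.80      # new / old normalized green transmission
laser_power_ratio = 0.90       # new / old laser output power at that point
mode_matching_ratio = transmission_ratio / laser_power_ratio

print(f'mode matching relative to before: {mode_matching_ratio:.2f}')  # ~0.89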
Opened FRS Ticket 8249 regarding broken readback of ALS X IR PD.
We had a potentially scary situation tonight at mid-Y and, through crazy coincidence, managed to fix it before it became a serious problem. Sheila contacted me around 10 pm local time about a verbal pressure alarm that was going off in the control room for BSC7 (gauge PT-170). I checked the MEDM screen from home and didn't see anything abnormal - except that the pressure is a bit high since the vent (7e-8 Torr). Most likely it's alarming because of the set point setting.
This alarm made me look at our site pressure trend (48 hr trend here), and I noticed that PT-210 at mid-Y had been drifting up quickly starting around 7 pm. I suspected CP3 and/or CP4 were warming up due to the very hot temperatures we've had this weekend. Gerardo was unable to remotely log into CDS to initiate a remote overfill, even though we were supposed to have permission until June 1. I drove out to the site to manually overfill both cryopumps at the skid by opening the bypass valve 1/2 turn (just like the good ole days). I filled CP3 first and observed an almost immediate drop in pressure. It took 50 min. to overfill (verified by watching LN2 pour out of the exhaust). As soon as I started the fill, the exhaust flow became turbulent. CP4 didn't exhibit the same turbulent behavior, and took 30 minutes to overfill. The conclusion is that CP3's valve actuator setting from Friday was too low at 15% open. I reset it to 18%. I also increased CP4 from 37% to 39% open. Tomorrow is supposed to be 98 F!
Need to learn what the current pressure alarms are set to; I propose we tighten these just for mid-Y so vacuum staff is alerted quickly when pressure starts to rise. I also suggest we try to maintain seconds vs. minutes of overfill time as we approach a hot summer.
Based on this log entry from last June 24, it took 35 minutes to overfill CP4 until LN2 poured out the exhaust. This was before CP4 clog - we were experimenting with durations and flow rates to create a work-around for CP3.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27950
Tonight's real-life scenario has been my nightmare for the past 18 months (since ice plugs have required manual filling of CP3 and CP4). It happened on my watch and I am responsible for it. In my defense, this was not the result of inattention or a false sense of security on my part. I had lowered CP3's manual LLCV %open value to 15% open, down from 17% open, in response to Friday's automated fill having only taken 17 seconds. This would have been an appropriate response, perhaps, for springtime ambient temperature conditions, but it proved too much of a reduction/correction for this weekend's warmest-of-the-year weather.

I look at the vacuum site overview screen multiple times on non-work days and am quite familiar with what the "normal" values are. Today was no different. At around 07:30 pm local time, I looked and noticed that PT243 was 3.97 x 10^-9 Torr, which is higher than normal and caught my attention. I reasoned that this was probably hydrogen emitting from the BT steel on this "hot" day, but was concerned enough that I resolved to check it again before going to bed. At approx. 10:30 pm local time, I looked and saw that PT243 had fallen to 2.?? x 10^-9 Torr. Minutes later, I checked my phone before going to bed and became aware of a text thread between Chandra R. and Gerardo M. which had been in progress for the previous 30 minutes. So, the reduction in PT243 at 10:30 pm was the result of the fact that Chandra was already on site and had started filling CP3 manually by opening the LLCV bypass valve. Had Sheila D. not contacted Chandra at approx. 09:50 pm, and had Chandra not responded by doing a manual fill, the pressure shown by PT243 at 10:30 pm would have been much higher than the previously "concerning" value seen at ~07:30, and I feel that I would have responded appropriately.

Still, this didn't have to happen. As Chandra reminded me, pressure trends are available (new location) for remote viewing and, had I reviewed these in addition to the Vacuum Site Overview, I would have noticed that the Y-mid values were increasing independently of the rest of the site pressures. This would have dispelled my "hydrogen" theory at 07:30 and I would have done a manual refill then.
Kyle, you shouldn't feel responsible. I'm usually the one who manipulates the LLCV settings based on temperature fluctuations and fullness of Dewar and have a better feel for adjustments. Sorry I didn't explain that better before I left. We can start to think about the next level of automation on this system which would increase/decrease valve setting based on how long it took to fill the previous time. Folks should also recognize that the work we're proposing to do post O2 by either decommissioning and/or regenerating these CPs will eliminate these risks.
Also, I doubt PT-170 alarming was actually a crazy coincidence. I forgot to trend its pressure, but I'm guessing it was also starting to increase due to loss of cryopump action at MY. And because its pressure was already high from the vent, it alerted us before we had to wait for 10^-8 Torr range alarms in the arm. Thank goodness Sheila was in the lab at the time to catch it!
Good save, I should have thought of this. The dominant boiloff load is (should be) blackbody radiation from the tube, which is at BTE ambient temperature. I will make a number for the fractional effect on liquid mass flow per degree, so we can add that % on to our "open-loop estimate".
EDIT: see post 36496
Worth noting, though, that the high ambient (BTE) temperature is raising the hydrogen diffusion flux, which doubles every 6 C or so (harmlessly, as long as we have ion pumps). So the pressure trend (and particularly, any attribution to the CP) has to be interpreted carefully.
Even before today, the ice plugs gave me nightmares. We have to fix them, and stop any more from happening.
Dave Barker suggested increasing to daily auto overfills rather than Mon-Wed-Fri. I like this idea. We'll discuss with vacuum group this week.
The cell phone alarm system currently monitors the vacuum gauge pairs at the ends of the 2km beam tube sections. For MY those are PT243 (closest to corner) and PT246 (closest to EX). I can certainly add all the other gauges in MY (PT244, PT245, PT210) to the system if needed.
I've created a remote access permit for the vacuum group, good through the end of the year.
No cell phone alarms were raised for this event, their upper alarm range is 5.0e-08 torr, an order of magnitude higher than what MY saw Sunday night (trend attached).
Gerardo was able to remotely log into CDS from home; he had a permit open. He was unable to directly log into the vacuum1 machine to make vacuum changes due to a recent ssh cert change. I recommend that the vacuum group test remote login to vacuum1 every week to verify this is possible.
More clues....or confusion. MidY IP9 current plotted with PT-210 pressure increase from Sunday evening. Strange behavior in IP.
Trended PT-170 over the weekend to understand the verbal alarm Sheila heard in control room. PT-170 pressure has steadily been falling since the vent, but at about 9:30pm local time on Sunday, it was crossing over the 7e-8 Torr alarm threshold causing the verbal alarm. What luck!
Outside temperature plotted over 30 days along with PT-243 at mid-Y.
Daniel, Kiwamu per WP6650,
We made two major changes to IOO today as follows.
After these tasks, we confirmed that the IMC could still lock with a UGF of 50 kHz (it had been 54-ish kHz before we made changes today, approximately corresponding to a 1 dB loss in the sensing system). Tomorrow, we will try to measure the modulation index for 118 MHz using the Michelson fringes.
[IMC length RFPD]
The old unit (S1203263) was replaced by the one with the correct resonant circuit (S1203775). I repeated the same measurement as yesterday with the AM laser (36322) to check out the response of the new RFPD. The first attached figure shows a comparison of the new and old RFPD responses. The magnitude at 24 MHz was found to be lower than the old one's by half a dB, so the response is virtually unchanged at 24 MHz. The raw data is attached as well.
I made sure about the alignment of the beam onto the RFPD by touching the 2" steering mirror in front of it. I also made sure that the reflection from the RFPD was dumped properly with a razor blade.
[Addition of 118 MHz modulation]
We newly installed two IFR function generators in an ISC rack in the CER. One of them provides the modulation RF source at 118 MHz, while the other does so for the demodulation at 72 MHz or whatever frequency we desire. The 118 MHz generator was then hooked up to an RF amplifier, a ZHL-1-2W, and then to the patch panel which transfers it to the ISC field racks by the PSL. The 118 MHz signal is then combined with the existing 24 MHz modulation signal via a power combiner (a MECA 2-way combiner). As the power combiner turned out to have a relatively high insertion loss of ~3 dB, we decided to increase the RF power for 24 MHz as compensation. We did this by installing another MECA 2-way combiner and letting it do the coherent sum. So in total we have installed two 2-way combiners in series, as seen in the attached picture.
We also studied RF losses for 118 MHz and found extra frequency-dependent losses in the cable that connects the field and remote racks. The cable loss for 118 MHz was found to be higher than that for 24 MHz by a couple of dB. In the end, we measured an RF power of 30 dBm for 118 MHz at the output of the last power combiner, which we think is reasonably high but not so high as to cause serious problems for the EOM. Note that we tried inserting a balun, which turned out to be as lossy as 3-4 dB, so we decided not to use a balun for 118 MHz. We are thinking of installing a DC block instead.
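For bookkeeping, here is a sketch of the dB budget through the 118 MHz path. Only the ~3 dB combiner insertion loss and the 30 dBm measured at the last combiner output come from this entry (plus the 13 dBm source setting quoted later in the thread); the amplifier gain and cable loss below are illustrative placeholders chosen so the chain closes.

# Sketch: simple dB bookkeeping for the 118 MHz modulation path.
def chain_output_dbm(source_dbm, stage_gains_db):
    """Sum a source level [dBm] with a list of stage gains/losses [dB]."""
    return source_dbm + sum(stage_gains_db)

source_dbm = 13.0            # IFR generator setting quoted later in this thread
stages_db = [
    +32.0,                   # placeholder RF amplifier gain (ZHL-1-2W class)
    -3.0,                    # MECA 2-way combiner insertion loss (from entry)
    -12.0,                   # placeholder cable + patch-panel loss at 118 MHz
]

print(chain_output_dbm(source_dbm, stages_db))   # ~30 dBm at the combiner output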
As this setup messed up the RF phase, we then re-adjusted the delay of the delay line unit for 24 MHz by flipping the 1 and 2 nsec switches. This gave us almost the same UGF of 50 kHz when the IMC was locked with a 2 W input and with the new RFPD in use. Good.
The 118 MHz phase modulation capability is for a new scheme of alignment sensing for the SRM, described in T1700215.
Hooked up the demodulation chain by switching all 90 MHz sensors at the AS port to 72 MHz. This includes the two ASC AS WFS and the LSC ASAIR_B.
The modulation frequencies are:
Currently, the demodulation chain is set with a 100 Hz offset to 72.801940 MHz. This means the DC signal will appear at 100 Hz which should allow us to measure the modulation index.
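As an aside on how a modulation index could be extracted from such a measurement: the standard relation between first-order-sideband and carrier power for phase modulation is P_sb/P_c = (J1(m)/J0(m))^2, which can be inverted numerically. The sketch below shows only that inversion step; the actual procedure for turning the 100 Hz signal into that ratio is not spelled out in the entry.

# Hedged sketch: recover a phase-modulation index m from a measured
# first-order-sideband-to-carrier power ratio using Bessel functions.
from scipy.optimize import brentq
from scipy.special import jv

def mod_index_from_ratio(power_ratio):
    """Invert (J1(m)/J0(m))^2 = power_ratio for m (small-to-moderate m)."""
    f = lambda m: (jv(1, m) / jv(0, m)) ** 2 - power_ratio
    return brentq(f, 1e-6, 2.0)

print(mod_index_from_ratio(0.0025))   # a ratio of 0.0025 -> m ~ 0.1 rad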
The 30 dBm RF power to the modulator was measured with the signal source set to 13 dBm. It is currently set to -10 dBm, but can easily be increased when needed.
The 8th and 13th harmonics are driven by two Aeroflex 2023A generators which are synchronized to the external 10 MHz source. We also reinstalled the 2023A RF source used for diagnostics. I would expect the RF phase between the 2023As to be stable, but not relative to the OCXO.
Something is odd here. The serial numbers may be backwards. Here's why I think that:
The "old" RFPD is listed in this log entry as SN S1203263.
The "new" RFPD is listed in this log entry as SN S1203775.
Unless someone has re-tuned things from the original values, my records indicate S1203775 is actually tuned for 21.5 MHz, which would make it a PSL FSS spare. The other unit (S1203263) is tuned for 24.078 MHz and was intended to lock the IMC. It would appear as though the log entry may have swapped the serial numbers.
Rich,
You are right, I had them mixed up in the above entry. It should read, "S1203775 was replaced by S1203263". Additionally, the label in the plot is wrong in the same manner. Sorry for the confusion.