With the AC off in the PSL, and the laser restored, we are back to locking. Tonight we started to implement the high bandwidth hard loops that we made filters for over the weekend. The idea here is to make some ASC loops that will be high bandwidth and introduce noise into DARM, but be relatively simple to keep stable as we power up. Then we can worry about a low noise loop only at our final power. The loop designs using Jenne's model are attached; they are the same filters for CHARD and DHARD, so only CHARD is attached. Since the top mass damping was made more uniform today (alog 27464), the damping is probably not accurate in the model anymore. We were able to turn the CHARD yaw loop up to 10 Hz (gain of -250) and turn the pitch loop up to about 5 Hz, but something is unstable when they are both high bandwidth. We checked that the gain of the MICH loop is not changing as we increase the gain in CHARD.
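For reference, here is a minimal sketch of how one can check where the UGF lands for a given digital gain. This is not Jenne's model; the plant and controller below are made-up placeholders (a 1 Hz pendulum-like response and a simple boost), just to show the bookkeeping.

# Sketch only: toy plant/controller, NOT the real suspension/ASC model.
import numpy as np
from scipy import signal

f = np.logspace(-1, 2, 2000)          # 0.1 Hz to 100 Hz
w = 2 * np.pi * f

# placeholder plant: 1 Hz pendulum, Q ~ 10
f0, Q = 1.0, 10.0
plant = signal.TransferFunction([(2*np.pi*f0)**2],
                                [1, 2*np.pi*f0/Q, (2*np.pi*f0)**2])
# placeholder controller: low-frequency boost, flat above 1 Hz
ctrl = signal.TransferFunction([1, 2*np.pi*1.0], [1, 2*np.pi*0.01])

gain = 250.0                           # digital gain magnitude
_, mag_p, _ = signal.bode(plant, w)
_, mag_c, _ = signal.bode(ctrl, w)
olg_db = mag_p + mag_c + 20*np.log10(gain)

ugf = f[np.argmin(np.abs(olg_db))]     # assumes a single 0 dB crossing
print("UGF ~ %.1f Hz for gain %g" % (ugf, gain))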
Driving an admixture of hard/soft is bad, as illustrated in these Bode plots. The left plot shows the PUM to UM TF in the optic basis, and the right one is in the hard/soft basis.
With different damping in the TOP stage, the actuation TF is different for each suspension and so we end up with some of these zeros that we see in the left plot. Matching the actuators should make the loop shapes more like the ones on the right, avoiding some of the multiple UGFs we see in measurements.
But what's a good way to make sure that we have pure hard/soft actuators? We don't have pure sensors.
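To illustrate the admixture point, here is a toy sketch (not the real g-factor-weighted hard/soft basis, and the pendulum parameters are placeholders): if the ETM and ITM actuation responses differ because of different top-mass damping, a nominally pure hard drive also excites the soft degree of freedom, and the cancellation in the unwanted path is only as good as the actuator matching.

# Toy illustration of hard/soft cross-coupling from mismatched actuators.
import numpy as np
from scipy import signal

f = np.logspace(-1, 1.5, 3000)
w = 2 * np.pi * f

def pendulum(f0, Q):
    return signal.TransferFunction([(2*np.pi*f0)**2],
                                   [1, 2*np.pi*f0/Q, (2*np.pi*f0)**2])

# slightly mismatched single-optic responses (placeholder numbers)
_, A_etm = signal.freqresp(pendulum(0.55, 3.0), w)
_, A_itm = signal.freqresp(pendulum(0.60, 5.0), w)

# toy hard/soft combinations of the optic-basis drives
hard_to_hard = 0.5 * (A_etm + A_itm)   # intended path
hard_to_soft = 0.5 * (A_etm - A_itm)   # admixture; zero only if matched

print("max |hard->soft| / |hard->hard| = %.2f"
      % np.max(np.abs(hard_to_soft) / np.abs(hard_to_hard)))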
Unfortunately the current RMS watchdog on the L2 stage of ETMX tripped at around 3:00 AM local this morning. This seems to have shut off the drive signal to all four coils, and therefore the ETMX mode was not damping at a good rate.
By the way, we were unable to untrip the watchdog from the control room using the SUS-ETMX_BIO_L2_RMSRESET channel. This seems to be a known issue (alog 20282). I drove to EX and power-cycled the PUM coil driver.
Travis opened an FRS for this issue of being unable to reset the watchdog of the current rms on the ETMX PUM (aka L2) driver.
Evan, Rana, Sheila
This afternoon there was a problem with the ITMX M0 BIO. The TEST/COIL enable indicator was red, and setting H1:SUS-ITMX_BIO_M0_CTENABLE to 0 and back to 1 could not make it green. Evan tried power cycling the coil driver, which did not work. We were able to reset this and damp the suspension again by setting the BIO state to 1; it seems that anything other than state 2 works.
This might be a hardware problem that needs to be fixed, but for now we can use the suspension like this.
Opened FRS Ticket 5616.
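For the record, here is a sketch of the reset toggle described above using pyepics. This is illustrative only, not a vetted procedure: the CTENABLE channel name is the one quoted above, but the state-request channel name here is my guess and should be checked against the MEDM screen.

# Sketch of the ITMX M0 BIO reset workaround (illustrative only).
import time
from epics import caput, caget

def toggle_itmx_m0_bio():
    # toggle the coil-test enable bit off and back on
    caput('H1:SUS-ITMX_BIO_M0_CTENABLE', 0)
    time.sleep(1.0)
    caput('H1:SUS-ITMX_BIO_M0_CTENABLE', 1)
    # request a BIO state other than 2 (state 1 worked for us);
    # channel name below is a guess, check the actual PV on the screen
    caput('H1:SUS-ITMX_BIO_M0_STATEREQ', 1)
    time.sleep(1.0)
    print('CTENABLE =', caget('H1:SUS-ITMX_BIO_M0_CTENABLE'))
    print('STATEREQ =', caget('H1:SUS-ITMX_BIO_M0_STATEREQ'))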
After the weekend power outage, we see that the ITMX M0 BIO settings are back to their nominal combo of:
STATE REQUEST 2.000
COIL TEST ENABLE 1.000
- And the BIT statuses all show green.
Toggling them to the alternate values (1.000 and 2.000 respectively) and back to nominal turns them from red back to green. Nothing seems to be stuck now, and the ill combo that Sheila reported the other day above doesn't seem to be a problem now.
TITLE: 5/31 Eve shift 16:00-0:00 PST
STATE of H1: Commissioning
SHIFT SUMMARY: Mostly Sheila and Evan trying to lock
INCOMING OPERATOR: Empty chair.
ACTIVITY LOG:
16:20 Gerardo and Manny out of LVEA
19:00 Sheila and I adjust Y arm polarization
Other than that, a lot of lock losses in DRMI/PRMI.
Rana, Evan
The top mass pitch and yaw damping filters for the quads were inconsistent between different suspensions. We cleaned up and regularized the filters (in particular, by turning off the lower-Q "newboo" filter in one of the ETM pitch loops), and settled on the following configurations:
The step responses look reasonable (Q of 4 or so). We tuned the gains by increasing them until we saw oscillation, and then we backed off by 6 dB or so. The overall pitch loops are basically the same as before, while the yaw loops have 12 dB more gain.
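As a sanity check on the numbers quoted above, here is a short sketch of what a "Q of 4 or so" step response looks like, and the factor that a 6 dB backoff corresponds to. The 0.55 Hz mode frequency is a placeholder, not a measured value.

# Sketch: step response of a Q~4 mode and the 6 dB gain backoff factor.
import numpy as np
from scipy import signal

f0, Q = 0.55, 4.0                      # placeholder mode frequency, target Q
w0 = 2 * np.pi * f0
sys = signal.TransferFunction([w0**2], [1, w0/Q, w0**2])

t, y = signal.step(sys, T=np.linspace(0, 20, 2000))
overshoot = (y.max() - 1.0) * 100.0
print("overshoot ~ %.0f%% for Q = %g" % (overshoot, Q))

backoff = 10 ** (-6.0 / 20.0)          # 6 dB down in amplitude
print("6 dB backoff -> multiply gain by %.2f" % backoff)   # ~0.5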
Chandra, Ken (with help from John & Richard)
Removed the pneumatic actuators from LLCVs on CP 2,3,5,6 and replaced with the electronic version. Upgraded fuses to 2.5 A. Will finish up WP 5906 with CP3 tomorrow morning. CP 2,4,5,6 are in PID mode.
I decoupled the flow meter at CP4 exhaust until PID settled. Will reconnect tomorrow.
Kiwamu, Nutsinee, Jim, Dave
At noon we power cycled the h1oaf0 IO Chassis to hard reset the 16-bit DAC card. We were surprised that both TCS chillers stopped running. During last week's IOP dac-disable events (which zeroed the DAC drives), the chillers kept on running for about an hour before tripping. As a further test, at 14:00 PDT we again powered down the h1oaf0 CPU followed by its IO Chassis. At this point the chillers were operational. We then power cycled the AI chassis, which tripped the chillers. We then powered up the IO Chassis and computer, at which point the chillers tripped again. We were unsure whether the trip happened at the time the IOP model started, or when the h1tcscs model started driving the DAC channels.
Take home message: before restarting h1oaf0 IOP, CPU, IO Chassis or DAC-AI unit please contact Nutsinee.
Nutsinee, Kiwamu,
Today, we made a final touch on the alignment of the CO2Y table optics. As a result, we got 5.7 W of output power coming out of the table, which is twice the value we measured back in April (alog 26645). The beam profile now looks good -- almost axisymmetric about the center of the beam.
The next step: the alignment of the CO2 beam with respect to the interferometer beam.
[Details]
At the beginning, as a coarse alignment between the CO2 beam and the aperture mask, we moved the position of the mask by a few mm in order to improve the horizontal beam profile. We then touched M5 in both pitch and yaw to further refine the alignment. Later we repositioned the beam dump which catches the reflection from the mask. This time, we used the FLIR camera as a reference, which is much more sensitive than an image card with the UV light. The attached images are the ones after the fine adjustment.
Once we optimized the intensity profile, we re-aligned the beam through the two irises that have been serving as the fiducial points for projecting the beam to the test mass. After the alignment, we placed a power meter right behind the first iris which read 5.7 W when the rotation stage was at 18 deg (which should give us almost the maximum transmission). Before closing the table, we put the beam dump back in place to block the beam to the FLIR camera.
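A quick sanity check on the rotation stage setting: assuming the usual half-wave-plate + polarizer arrangement, the transmitted power goes roughly as sin^2 of twice the waveplate angle relative to its minimum-transmission angle. The offset below is made up for illustration, not the calibrated value for this stage.

# Rough check of the rotation stage transmission (hypothetical offset).
import numpy as np

P_max = 5.7            # W, measured behind the first iris
theta = 18.0           # deg, rotation stage request
theta_min = -27.0      # deg, hypothetical minimum-transmission offset

T = np.sin(np.radians(2 * (theta - theta_min))) ** 2
print("fractional transmission ~ %.2f -> %.1f W" % (T, T * P_max))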
TITLE: 05/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Jim
SHIFT SUMMARY: After the PSL was brought back to life and maintenance activities subsided, we are having issues getting past the DRMI locked state.
LOG:
14:56 Ken to MY working on HVAC
15:06 Cristina and Karen to LVEA
15:35 Fil and Ed pulling cables in beer garden
15:43 Hugh to ends HEPI maint.
15:50 turned off BRS at both ends for maint. work
15:50 Gerardo and Manny to all VEAs, pulling extension cords in prep for power outage
16:12 Kiwamu working on TCS alignment
16:34 Hugh done
16:48 Nutsinee to TCS table
17:00 Karen to EY
17:05 Fil and Ed out
17:54 Fil working on ESD in beer garden
18:00 Gerardo and Manny back from ends, to LVEA
18:02 Jason to diode room
18:05 PSL back up
19:00 Kiwamu done
19:51 Fil to beer garden
20:21 Bubba to LVEA checking 3IFO N2 purge
21:07 Fil out
21:25 Kiwamu and Nutsinee to TCS table
22:27 turned BRS back on at both ends
22:39 Kiwamu and Nutsinee out
Required use of N. crane to stage equipment (~0900 - 0930 hrs. local). Temporary aux. pumps removed and HAM4 AIP system is back in its nominal, as-found configuration.
In the morning we were wondering what the SRM composite mass looks like, in case we want to drive it to find out the frequency and Q of the first internal mode.
https://dcc.ligo.org/LIGO-D1200886
There is a ring-shaped holder which has two nubs and a set screw. This holds the 2" optic. This holder is then bolted into the larger aluminum mass. Two views from the SolidWorks drawings are shown with some parts made transparent. If this drawing is right, I expect that the resonant frequency is in the 500-1000 Hz range.
Here is a picture of the actual SRM surrogate optic in its inner holder (the LLO one is shown). The 2" x 0.75" thick optic is held in the ring by what I believe are two black carbon PEEK set screws, which are not shown in the assembly drawings.
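A very rough back-of-the-envelope check of the 500-1000 Hz guess (heavily hedged: the density assumes a fused silica optic, and the set-screw contact stiffness is unknown, so the sketch only reports what stiffness would be needed to land in that band).

# Back-of-the-envelope check of the 500-1000 Hz guess; illustrative only.
import numpy as np

rho = 2200.0                     # kg/m^3, assuming fused silica
r = 0.0254                       # m, 2" diameter -> 1" radius
t = 0.75 * 0.0254                # m, 0.75" thickness
m = rho * np.pi * r**2 * t
print("optic mass ~ %.0f g" % (m * 1e3))

for f in (500.0, 1000.0):
    k = m * (2 * np.pi * f) ** 2  # stiffness for a simple mass-on-spring mode
    print("f = %4.0f Hz needs k ~ %.1e N/m" % (f, k))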
Saturday 28th
08:47 PSL Diode Chiller monitor signals flat-line (computer had been running for about 24 hours since last reboot)
10:24 TCS DAC output drops to zero volts (recovered soon after)
17:04 h1nds1 kernel panics (computer rebooted)
23:46 TCS DAC output drops to zero volts (recovered Monday)
Sunday 29th
Monday 30th
11:00 PSL laser power drops to zero (recovered Tuesday)
TCS Y-table FLIR camera
Kiwamu, Dave:
The TCS-Y infrared camera was not responding. Restarting the controller unit at the table appears to have resolved it.
h1nds1 restart
h1nds1 stopped running. Monit was not monitoring it; we manually instructed monit to restart daqd and it is running again.
h1oaf0 power cycle
Following three occurrences of the TCS DAC being driven to zero volts, this morning we power cycled the IO Chassis for h1oaf0. Power was removed from the chassis using the front panel switch, and it was down for about one minute. h1tcscs was manually burt restored to 09:10 PDT this morning.
roof camera
Richard, Jim, Dave:
We went onto the roof and verified that the CDS AUX vlan was operational at the fiber media converter; it was.
J. Oberling, J. Bartlett, P. King (via phone)
Following yesterday's initial investigation into the PSL diode chiller issues (see alogs here), we swapped the control panel for the diode chiller with the one from the chiller we recently removed from service (this is a known working unit, just installed last year). After installation the chiller restarted without issue and ran for several minutes, also without issue. The serial number of the control panel we installed in the diode chiller is 44806P605; the faulty unit we removed from the diode chiller had no SN on it.
We then took the time to restart the PSL Beckhoff PC to unstick the frozen diode chiller channels. According to Dave Barker these channels froze sometime on Saturday morning. Fortunately the PSL interlocks are not tied to these channels, otherwise we would not have had the PSL shutting down when the diode chiller shuts down (as it did yesterday). Once restarted, the channels appeared to be reading OK. It is possible that these channels freezing for no reason can be used as an early warning sign of imminent chiller failure. The chiller communicates with the PSL Beckhoff PC via a serial RS-232 interface, and we have seen channels freeze when the cables are unplugged and plugged back in but the PC is not restarted (which is expected behavior, as RS-232 is not hot swappable), but this is the first time I've seen the chiller spontaneously lose communication with the PSL Beckhoff PC. Jeff Bartlett is setting up a temporary StripTool on the PSL monitor in the control room that will monitor these channels. If anyone sees that these are flatlined (there should always be some variation), please let someone on the PSL team (Peter King, Jeff Bartlett, Ed Merilh, Rick Savage, or myself) know immediately. Thank you.
We then turned on the HPO, which came up with no problems. We let it sit and run for a little while, then restarted the rest of the PSL. As of now, everything is up and running. We are going to continue to monitor over the next couple of days to ensure that everything is working correctly. The removed control panel will go back to Termotek with the chiller we are sending back, and will be replaced as part of the service being done on that chiller.
Completely forgot to mention that after performing the above front panel swap and restarting the laser, we adjusted the calibration of the vortex flow sensors in both chillers. Using the chiller we just recently removed from service, we hooked it up to an external flow meter and compared the 2 readings (1 from the chiller's internal flow meter and 1 from our external flow meter) and calculated a new pulses/liter calibration for the vortex flow sensors. According to that measurement the vortex flow sensors should be set to 494 pulses/liter (they were originally set to ~970 pulses/liter, a number we got from LZH). The flow information from both chillers is now accurate.
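The recalibration arithmetic, as I understand it, is sketched below. The flow readings are placeholders (I don't have the actual numbers written down), chosen so the result reproduces the ~494 pulses/liter we ended up with; only the formula is the point.

# Sketch of the pulses/liter recalibration: the pulse rate is fixed by the
# hardware, so new_cal = old_cal * (chiller-displayed flow / external flow).
old_cal = 970.0        # pulses/liter, original setting from LZH
flow_internal = 9.0    # lpm shown by the chiller (placeholder value)
flow_external = 17.7   # lpm from the external flow meter (placeholder value)

new_cal = old_cal * flow_internal / flow_external
print("new calibration ~ %.0f pulses/liter" % new_cal)   # ~490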
Sheila, Terra, Craig
The PSL is off; it has been since about 17:30 UTC. The chillers don't seem to be tripped. Confusingly, the laser screen indicates that the chillers are fine (two green boxes), while the PSL_STATUS screen has a red box for the crystal chiller (screenshot attached). Jason will come out to the site to investigate/restart the laser.
We also noted that the temperature trends for the PSL have been unusual since Thursday morning's incursion (2nd screenshot). I went to the controller box and saw that the north AC unit was on, which was probably unintentional (the south unit was off, and they are normally both off in science mode). I turned it off at noon local time. Terra noted that the PSL microphone has seen an elevated level of noise in the last few days, which went back to normal as soon as the AC unit was off. (In the third attached screenshot, blue traces are from the time when the AC was on.) The monitors on the RF AM stabilization also changed when we turned off the AC, and some channels on the AM stabilization box seem to have been sensitive to some kind of switching of the PSL HVAC over the last few days.
It seems like we need a better way to monitor whether the PSL environment settings are correct, maybe adding them to DIAG_MAIN if we can find a good set of tests to write. It is also surprising to me that our RF system seems to be so sensitive to acoustic pickup in the PSL. Has anyone in detchar looked at PSL PEM monitors to see if glitches there are correlated to the "RF45" glitches seen during O1?
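One possible test along those lines, sketched below: flag a chiller channel as frozen if its recent samples show essentially no variation. This is purely illustrative, not an actual DIAG_MAIN test; the threshold is a guess and the data fetching (NDS/cdsutils) is not shown.

# Sketch of a flatline check for the PSL chiller channels (illustrative).
import numpy as np

FLATLINE_STD = 1e-6     # counts; below this we call the channel frozen (guess)

def is_flatlined(samples, threshold=FLATLINE_STD):
    """Return True if the last stretch of data is suspiciously constant."""
    samples = np.asarray(samples, dtype=float)
    return np.std(samples) < threshold

# usage sketch: 'data' would be the last few minutes of a chiller channel,
# fetched via NDS or similar (not shown here)
data = np.random.normal(20.0, 0.05, 1024)   # fake healthy data
print("frozen?", is_flatlined(data))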
The TCS chillers are also tripped, with the same DAC problem we have been having (alog 27435). This happened about 36 hours ago.
Bottom chiller screen; flashing between 'temperature' and 'warning'
The laser was up and running this morning when I checked it around 6 am (local). The gibberish message on the diode chiller controller I've never seen before, and it is most likely a controller malfunction. To fix the problem, I would try (in order):
- power cycling the chiller with the power switch located at the rear of the chiller
- replacing the chiller controller (if Jeff Bartlett happens to have a spare handy)
- installing the spare chiller (which will take a bit of work because ... )
  * the turbine flow sensors need to be replaced with the vortex ones
  * the 3-phase power plug needs to be installed
  * some filters need to be removed
  * the coolant lines will need to have any air pockets removed
The problem with the first solution is that it is hard to gauge how long the "fix" might be valid for before the laser could trip out again.
We used Sheila's very instructive alog to kill and restart all the models on the OAF machine, reset the TCS chillers and restart the TCS laser.
J. Oberling, S. Dwyer
Attempted to bring the PSL back up but were ultimately unsuccessful. Came in and found the crystal chiller running and the diode chiller off, although the Laser MEDM screen indicated the diode chiller was up and running. EPICS channels frozen again?
The diode chiller turned on without an issue, although the weirdness on the main screen, seen in Terra's photos above, did not go away. We let the chiller run for several minutes and then attempted to power on the HPO. Approximately midway through the pump diode power-up everything stopped, and we found the diode chiller shut off. To see if it was a coincidence we reset the interlocks and attempted to turn the HPO on again, this time monitoring the chillers. The HPO got to its second stability range and the diode chiller immediately shut off. We power cycled the diode chiller (which, by the way, cleared the funky front panel issue seen in the photos above). This time the HPO achieved the second stability range for 10 whole seconds before the diode chiller shut off again; it almost seems as if the chiller is shutting off as soon as it sees a heat load. During all this the crystal chiller remained up and running without issue.
At this time I'm out of ideas, although the chiller behavior coupled with the front screen weirdness makes me think we may have a control panel problem with the diode chiller (as Peter mentions above); I seem to recall that when we had the chiller flow sensor issues last year (April/May 2015) we also had some weird issues with the chiller (the one we just recently removed from service) that were solved by replacing the control panels. I left the PSL off; the diode chiller is also off and the crystal chiller is running. Please do not attempt to turn it on; we will investigate more fully tomorrow morning.
Filed FRS #5605.
(see attached) Will investigate Tues.
AIP = "annulus ion pump"
Evan and I spent most of the day trying to investigate the sudden locklosses we've had over the last 3 days.
1) We can stay locked for ~20 minutes with ALS and DRMI if we don't turn on the REFL WFS loops. If we turn these loops on, we lose lock within a minute or so. Even with these loops off we are still not stable, though, and we saw last night that we can't make it through the lock acquisition sequence.
2) In almost every lockloss, you can see a glitch in the SR3 M2 UR and LL noisemons just before the lockloss, which lines up well in time with glitches in POP18. Since the UR noisemon has a lot of 60 Hz noise, the glitches can only be seen there in the OUT16 channel, but the UR glitches are much larger. (We do not actuate on this stage at all.) However, there are two reasons to be skeptical that this is the real problem:
It could be that the RF problem that started in the last few days somehow makes us more sensitive to losing lock because of tiny SR3 glitches, or that the noisemons are just showing some spurious signal which is related to the lockloss/RF problems. Some lockloss plots are attached.
It seems like the thing to do would be trying to fix the RF problem, but we don't have many ideas for what to do.
We also tried running Hang's automatic lockloss tool, but it is a little difficult to interpret the results from this. There are some AS 45 WFS channels that show up in the third plot that appears, which could be related to either a glitchy SR3 or an RF problem.
One more thing: h1nds1 crashed today and Dave helped us restart it over the phone.
For the three locklosses that Sheila plotted, there actually is something visible on the M3 OSEM in length. It looks like about two seconds of noise from 15 to 25 Hz; see first plot. There's also a huge ongoing burst of noise in the M2 UR NOISEMON that starts when POP18 starts to drop. The second through fourth attachments are these three channels plotted together, with causal whitening applied to the noisemon and osem. Maybe the OSEM is just witnessing the same electrical problem as is affecting the noisemon, because it does seem a bit high in frequency to be real. But I'm not sure. It seems like whatever these two channels are seeing has to be related to the lockloss even if it's not the cause. It's possible that the other M2 coils are glitching as well. None of the other noisemons look as healthy as UR, so they might not be as sensitive to what's going on.
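For anyone wanting to reproduce the causal whitening step, here is one way to do it (a sketch only; I don't know exactly which filter was used for the plots above): fit an AR model to a quiet stretch of the channel and apply the resulting prediction-error filter, which is a causal FIR filter by construction.

# Sketch of causal whitening via an AR(p) prediction-error filter.
import numpy as np
from scipy.signal import lfilter

def yule_walker_ar(x, order):
    """Estimate AR coefficients a[1..p] from the (biased) autocorrelation."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def causal_whiten(x, quiet, order=16):
    """Whiten x with an AR model trained on the 'quiet' reference data."""
    a = yule_walker_ar(quiet, order)
    # prediction-error filter: e[n] = x[n] - sum_k a[k] x[n-k]  (causal FIR)
    b = np.concatenate(([1.0], -a))
    return lfilter(b, [1.0], np.asarray(x, float) - np.mean(quiet))

# usage sketch with fake data standing in for the noisemon/OSEM channels
quiet = np.random.normal(size=16384)
loud = np.concatenate([quiet, quiet + 5 * np.random.normal(size=16384)])
white = causal_whiten(loud, quiet)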
RF "problem" is probably not a real RF problem.
Bad RFAM excess was only observed in out-of-loop RFAM sensor but not in the RFAM stabilization control signal. In the attached, top is out-of-loop, middle is the control signal, and the bottom is the error signal.
Anyway, whatever this low frequency excess is, it should come in after the RF splitter for the in- and out-of-loop boards. Since this is observed in both the 9 and 45 MHz RFAM chassis, it should be due to a difference in how the in- and out-of-loop boards are configured. See D0900761. I cannot pinpoint what that is, but my guess is that this is some DC stuff coming into the out-of-loop board (e.g. the auto bias adjustment feedback, which only exists in the out-of-loop board).
Note that even if it's a real RFAM, 1 ppm RIN at 0.5 Hz is nothing, assuming that the calibration of that channel is correct.
Correction: The glitches are visible on both the M2 and M3 OSEMs in length, also weakly in pitch on M3. The central frequency looks to be 20 Hz. The height of the peaks in length looks suspiciously similar between M2 and M3.
Just to be complete, I've made a PDF with several plots. Every time the noise in the noisemons comes on, POP18 drops and it looks like lock is lost. There are some times when the lock comes back with the noise still there, and the buildup of POP18 is depressed. When the noise ends, the buildup goes back up to its normal value. The burst of noise in the OSEMs seems to happen each time the noise in the noisemons pops up. The noise is in a few of the noisemons, on M2 and M3.
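If someone wants to automate this coincidence check, here is a rough sketch: compute a band-limited RMS of the noisemon around 20 Hz and look for bursts to line up against drops in POP18. The sample rate, band, and thresholds are placeholders, and the fake data below just stands in for the real channels.

# Sketch of a noisemon-burst vs POP18 coincidence check (illustrative).
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 2048.0                                   # placeholder sample rate

def blrms(x, f_lo=15.0, f_hi=25.0, fs=fs, seg=0.25):
    # band-limited RMS in non-overlapping segments of length 'seg' seconds
    sos = butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
    y = sosfiltfilt(sos, np.asarray(x, float))
    n = int(seg * fs)
    nseg = len(y) // n
    return np.sqrt(np.mean(y[:nseg * n].reshape(nseg, n) ** 2, axis=1))

# usage sketch: 'noisemon' and 'pop18' would be synchronized time series
noisemon = np.random.normal(size=int(60 * fs))
b = blrms(noisemon)
burst_times = np.where(b > 5 * np.median(b))[0] * 0.25   # seconds into segment
print("burst segment times:", burst_times)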