8:30 am local: 1/2 turn open on LLCV bypass valve - took 54 sec. to overfill CP3. Ken and I changed the LLCV actuator right beforehand, which may have contributed to the overfill when decoupling to the needle valve.
Ken, Chandra: Removed the pneumatic actuator from the LLCV on CP3 and replaced it with the electronic version. Changed the fuse to 2.5 A. Set to 19%. Still need to remove the shims.
The attached spectra suggest that the H3 performance is indistinguishable from the other sensors and looks much better than the H3 sensor did in March after some card swapping. We ultimately ended up with the same board in place and have done nothing since. I recommend we close 4424 and monitor.
Below are the past 10-day trends for the PSL and its environment. Unfortunately it remains riddled with higher-than-normal incursions and the ongoing chiller woes.
This task has officially been added to FAMIS; the task number is 6098.
I just reloaded all the coefficients in the TCS SIM model. None of the coefficients had been set for 4 days. It's likely the existing coefficients are being backed up, but not in the SAFE.SNAP file.
I also just restored the CO2 QPD coefficients and set the SIM_ITMX_SUB_DEFOCUS_CO2_GAIN to 3.57E-5.
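For reference, a restore like this can be scripted. Below is a minimal sketch using pyepics; the full H1:TCS- channel name is my assumption, since only the SIM_ITMX_SUB_DEFOCUS_CO2_GAIN suffix appears above:

    # Minimal sketch, assuming pyepics and an H1:TCS- prefix for the SIM model channels.
    from epics import caput, caget

    chan = "H1:TCS-SIM_ITMX_SUB_DEFOCUS_CO2_GAIN"  # prefix is an assumption
    caput(chan, 3.57e-5)                           # restore the gain quoted above
    print(chan, "=", caget(chan))                  # read back to confirm the write took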
I forgot to post these yesterday. Perhaps we should move this task to Wednesday on the OPS checklist so it does not get lost in maintenance day activities. See the attached screenshot for the values before the reset.
Good point. The Ops Check Sheet has been updated to move the HEPI WD Counter check to Wednesday. Thanks, Travis!
With the AC off in the PSL and the laser restored, we are back to locking. Tonight we started to implement the high-bandwidth hard loops that we made filters for over the weekend. The idea here is to make some ASC loops that will be high bandwidth and will introduce noise into DARM, but will be relatively simple to keep stable as we power up. Then we can worry about a low-noise loop only at our final power. The loop designs using Jenne's model are attached; the filters are the same for CHARD and DHARD, so only CHARD is attached. Since the top mass damping was made more uniform today (alog 27464),
the damping is probably no longer accurate in the model. We were able to turn the CHARD yaw loop up to 10 Hz (gain of -250) and turn the pitch loop up to about 5 Hz, but there is something that is unstable when they are both high bandwidth. We checked that the gain of the MICH loop is not changing as we increase the gain in CHARD.
Driving an admixture of hard/soft is bad, as illustrated in these Bode plots. The left plot shows the PUM to UM TF in the optic basis, and the right one is in the hard/soft basis.
With different damping in the TOP stage, the actuation TF is different for each suspension and so we end up with some of these zeros that we see in the left plot. Matching the actuators should make the loop shapes more like the ones on the right, avoiding some of the multiple UGFs we see in measurements.
But what's a good way to make sure that we have pure hard/soft actuators? We don't have pure sensors.
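For context, this is the standard Sidles-Sigg picture (a textbook result, not something measured in this entry): for a single arm with circulating power P, length L, and mirror g-factors g_i and g_e, the radiation-pressure torque stiffness matrix is, modulo sign conventions,

    K = \frac{2 P L}{c\,(1 - g_i g_e)} \begin{pmatrix} g_e & -1 \\ -1 & g_i \end{pmatrix}

and the hard/soft modes are its eigenvectors: one eigenvalue is positive (the hard mode is stiffened) and one is negative (the soft mode is anti-sprung). A "pure" hard or soft actuator is a drive whose (ITM, ETM) vector lies along one of these eigenvectors, which in practice requires the two suspensions' actuation TFs to match -- hence the point above about matching the actuators.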
Unfortunately the current RMS watchdog on the L2 stage of ETMX tripped at around 3:00 AM local this morning. This seems to have shut off the drive signal to all four coils, and therefore the ETMX mode was not damping at a good rate.
By the way, we were unable to untrip the watchdog from the control room using the SUS-ETMX_BIO_L2_RMSRESET channel. This seems to be a known issue (alog 20282). I drove to EX and power-cycled the PUM coil driver.
Travis opened an FRS for this issue of being unable to reset the watchdog of the current rms on the ETMX PUM (aka L2) driver.
Evan, Rana, Sheila
This afternoon there was a problem with the ITMX M0 BIO. The TEST/COIL enable indicator was red, and setting H1:SUS-ITMX_BIO_M0_CTENABLE to 0 and back to 1 could not make it green. Evan tried power cycling the coil driver, which did not work. We were able to reset this and damp the suspension again by setting the BIO state to 1; it seems that anything other than state 2 works.
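For what it's worth, the toggle we tried can be written as a short script; a minimal sketch with pyepics (the one-second settle time is an arbitrary choice on my part):

    from epics import caput
    import time

    chan = "H1:SUS-ITMX_BIO_M0_CTENABLE"
    caput(chan, 0)    # disable the coil-test enable...
    time.sleep(1)     # arbitrary settle time
    caput(chan, 1)    # ...and re-enable; in this case the indicator stayed red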
This might be a hardware problem that needs to be fixed, but for now we can use the suspension like this.
Opened FRS Ticket 5616.
After the weekend power outage, we see that the ITMX M0 BIO settings are back to their nominal combo of:
STATE REQUEST 2.000
COIL TEST ENABLE 1.000
- And the BIT statuses all show green.
Toggling them to the alternate values of 1.000 and 2.000 respectively and back to nominal turns them from red back to green. Nothing seems to be stuck now, and the ill combo that Sheila reported the other day above doesn't seem to be a problem anymore.
TITLE: 5/31 Eve shift 16:00-0:00 PST
STATE of H1: Commissioning
SHIFT SUMMARY: Mostly Sheila and Evan trying to lock
INCOMING OPERATOR: Empty chair.
ACTIVITY LOG:
16:20 Gerardo and Manny out of LVEA
19:00 Sheila and I adjust Y arm polarization
Other than that, a lot of lock losses in DRMI/PRMI.
Rana, Evan
The top mass pitch and yaw damping filters for the quads were inconsistent between different suspensions. We cleaned up and regularized the filters (in particular, by turning off the lower-Q "newboo" filter in one of the ETM pitch loops), and settled on the following configurations:
The step responses look reasonable (Q of 4 or so). We tuned the gains by increasing them until we saw oscillation, and then we backed off by 6 dB or so. The overall pitch loops are basically the same as before, while the yaw loops have 12 dB more gain.
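(Standard dB arithmetic, for reference: a 6 dB backoff scales the gain by 10^{-6/20} \approx 0.5, so each loop ends up at roughly half the gain at which oscillation set in.)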
Chandra, Ken (with help from John & Richard): Removed the pneumatic actuators from the LLCVs on CP 2, 3, 5, 6 and replaced them with the electronic version. Upgraded the fuses to 2.5 A. Will finish up WP 5906 with CP3 tomorrow morning. CP 2, 4, 5, 6 are in PID mode.
I decoupled the flow meter at CP4 exhaust until PID settled. Will reconnect tomorrow.
Kiwamu, Nutsinee, Jim, Dave
At noon we power cycled the h1oaf0 IO chassis to hard reset the 16-bit DAC card. We were surprised that both TCS chillers stopped running. During last week's IOP dac-disable events (which zeroed the DAC drives), the chillers kept running for about an hour before tripping. As a further test, at 14:00 PDT we again powered down the h1oaf0 CPU followed by its IO chassis. At this point the chillers were still operational. We then power cycled the AI chassis, which tripped the chillers. We then powered up the IO chassis and computer, at which point the chillers tripped again. We were unsure whether the trip occurred when the IOP model started or when the h1tcscs model started driving the DAC channels.
Take home message: before restarting h1oaf0 IOP, CPU, IO Chassis or DAC-AI unit please contact Nutsinee.
Nutsinee, Kiwamu,
Today we made a final touch on the alignment of the CO2Y table optics. As a result, we got 5.7 W of output power coming out of the table, which is twice the value we measured back in April (26645). The beam profile now looks good -- almost axisymmetric about the center of the beam area.
The next step: the alignment of the CO2 beam with respect to the interferometer beam.
[Details]
At the beginning, as a coarse alignment between the CO2 beam and the aperture mask, we moved the position of the mask by a few mm in order to improve the horizontal beam profile. We then touched M5 in both pitch and yaw to further refine the alignment. Later we repositioned the beam dump which catches the reflection from the mask. This time we used the FLIR camera as a reference, which is much more sensitive than an image card with the UV light. The attached images are the ones taken after the fine adjustment.
Once we optimized the intensity profile, we re-aligned the beam through the two irises that have been serving as the fiducial points for projecting the beam to the test mass. After the alignment, we placed a power meter right behind the first iris which read 5.7 W when the rotation stage was at 18 deg (which should give us almost the maximum transmission). Before closing the table, we put the beam dump back in place to block the beam to the FLIR camera.
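(For context: if the rotation stage is a half-wave plate ahead of a polarizer -- my assumption here, with unknown offset angle \theta_0 -- the transmitted power follows the usual Malus-type curve P(\theta) = P_max \sin^2(2(\theta - \theta_0)), which is flat near its maximum, consistent with 18 deg giving almost the maximum transmission.)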
TITLE: 05/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Jim
SHIFT SUMMARY: After the PSL was brought back to life and maintenance activities subsided, we are having issues getting past DRMI locking.
LOG:
14:56 Ken to MY working on HVAC
15:06 Cristina and Karen to LVEA
15:35 Fil and Ed pulling cables in beer garden
15:43 Hugh to ends HEPI maint.
15:50 turned off BRS at both ends for maint. work
15:50 Gerardo and Manny to all VEAs, pulling extension cords in prep for power outage
16:12 Kiwamu working on TCS alignment
16:34 Hugh done
16:48 Nutsinee to TCS table
17:00 Karen to EY
17:05 Fil and Ed out
17:54 Fil working on ESD in beer garden
18:00 Gerardo and Manny back from ends, to LVEA
18:02 Jason to diode room
18:05 PSL back up
19:00 Kiwamu done
19:51 Fil to beer garden
20:21 Bubba to LVEA checking 3IFO N2 purge
21:07 Fil out
21:25 Kiwamu and Nutsinee to TCS table
22:27 turned BRS back on at both ends
22:39 Kiwamu and Nutsinee out
Required use of the N. crane to stage equipment (~0900 - 0930 hrs. local). Temporary aux. pumps removed, and the HAM4 AIP system is back in its nominal, as-found configuration.
In the morning we were wondering what the SRM composite mass looks like, in case we want to drive it to find out the frequency and Q of the first internal mode.
https://dcc.ligo.org/LIGO-D1200886
There is a ring-shaped holder which has two nubs and a set screw. This holds the 2" optic. This holder is then bolted into the larger aluminum mass. Two views from the SolidWorks drawings are shown with some parts made transparent. If this drawing is right, I expect that the resonant frequency is in the 500-1000 Hz range.
Here is a picture of the actual SRM surrogate optic in its inner holder (the LLO one is shown). The 2" x 0.75" thick optic is held in the ring by what I believe are two black carbon PEEK set screws, which are not shown in the assembly drawings.
Evan and I spent most of the day trying to investigate the sudden locklosses we've had over the last 3 days.
1) We can stay locked for ~20 minutes with ALS and DRMI if we don't turn on the REFL WFS loops. If we turn these loops on, we lose lock within a minute or so. Even with these loops off we are still not stable, though, and we saw last night that we can't make it through the lock acquisition sequence.
2) In almost every lockloss, you can see a glitch in the SR3 M2 UR and LL noisemons just before the lockloss, which lines up well in time with glitches in POP18. Since the UR noisemon has a lot of 60 Hz noise, the glitches can only be seen there in the OUT16 channel, but the UR glitches are much larger. (We do not actuate on this stage at all.) However, there are two reasons to be skeptical that this is the real problem:
It could be that the RF problem that started in the last few days somehow makes us more sensitive to losing lock because of tiny SR3 glitches, or that the noisemons are just showing some spurious signal which is related to the lockloss / RF problems. Some lockloss plots are attached.
It seems like the thing to do would be to try to fix the RF problem, but we don't have many ideas for what to do.
We also tried running Hang's automatic lockloss tool, but it is a little difficult to interpret the results. There are some AS 45 WFS channels that show up in the third plot that appears, which could be related to either a glitchy SR3 or an RF problem.
One more thing: nds1 crashed today and Dave helped us restart it over the phone.
For the three locklosses that Sheila plotted, there actually is something visible on the M3 OSEM in length. It looks like about two seconds of noise from 15 to 25 Hz; see first plot. There's also a huge ongoing burst of noise in the M2 UR NOISEMON that starts when POP18 starts to drop. The second through fourth attachments are these three channels plotted together, with causal whitening applied to the noisemon and osem. Maybe the OSEM is just witnessing the same electrical problem as is affecting the noisemon, because it does seem a bit high in frequency to be real. But I'm not sure. It seems like whatever these two channels are seeing has to be related to the lockloss even if it's not the cause. It's possible that the other M2 coils are glitching as well. None of the other noisemons look as healthy as UR, so they might not be as sensitive to what's going on.
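As a rough sketch of this kind of look (gwpy-style; the channel name and GPS times are placeholders, and gwpy's whiten() is zero-phase rather than the strictly causal whitening used here):

    from gwpy.timeseries import TimeSeries

    # Placeholder channel and GPS span -- not the actual ones used above.
    chan = "H1:SUS-SR3_M2_NOISEMON_UR_OUT_DQ"
    data = TimeSeries.get(chan, 1148500000, 1148500060)

    # Whiten, then band-limit around the ~20 Hz feature described above.
    white = data.whiten(fftlength=4, overlap=2).bandpass(15, 25)
    plot = white.plot()
    plot.show()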
RF "problem" is probably not a real RF problem.
The bad RFAM excess was only observed in the out-of-loop RFAM sensor, not in the RFAM stabilization control signal. In the attachment, top is out-of-loop, middle is the control signal, and bottom is the error signal.
Anyway, whatever this low-frequency excess is, it should come in after the RF splitter for the in- and out-of-loop boards. Since this is observed in both the 9 and 45 MHz RFAM chassis, it should lie in the difference in how the in- and out-of-loop boards are configured. See D0900761. I cannot pinpoint what that is, but my guess is that this is some DC stuff coming into the out-of-loop board (e.g. the auto bias adjustment feedback, which only exists in the out-of-loop board).
Note that even if it's a real RFAM, 1 ppm RIN at 0.5 Hz is nothing, assuming that the calibration of that channel is correct.
Correction: The glitches are visible on both the M2 and M3 OSEMs in length, also weakly in pitch on M3. The central frequency looks to be 20 Hz. The height of the peaks in length looks suspiciously similar between M2 and M3.
Just to be complete, I've made a PDF with several plots. Every time the noise in the noisemons comes on, POP18 drops and it looks like lock is lost. There are some times when the lock comes back with the noise still there, and the buildup of POP18 is depressed. When the noise ends, the buildup goes back up to its normal value. The burst of noise in the OSEMs seems to happen each time the noise in the noisemons pops up. The noise is in a few of the noisemons, on M2 and M3.