TITLE: 06/15 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 12mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
H1 is Observing, but will be taken out shortly for COMMISSIONING for Robert's EX imaging work & Elenna's measurement. Nuc27 had not been showing H1's range, but Erik recently fixed it.
Low winds *knock on wood*
TITLE: 06/15 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 135Mpc
SHIFT SUMMARY: Large EQ today kept us DOWN for ~4 hours. EX has been left in LASER HAZARD for the photos Robert plans to take during commissioning time later. Plan for commissioning time at 23:25 UTC.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:10 | CDS | Dave | Remote | N | Rebooting nuc26, nuc30 and cameras with issues | 15:33 |
| 15:54 | CDS | WAP | Remote | N | Dave turned MSR WAP on/off 70480 | 15:56 |
| 17:08 | PEM | Plane | CR | N | Plane heard overhead (tagging DetChar) | 17:10 |
| 17:15 | VAC | Janos | MY | N | VAC checks | 17:50 |
| 18:33 | PEM | Robert | EX | YES | Installing cameras at EX WP#11264 | 19:59 |
| 18:43 | WAP | Robert | EX | N | Wifi turned ON | 22:40 |
| 19:52 | CDS | Erik, Jonathan | CER | N | Check on HAM6 camera cable 70478 | 20:00 |
| 20:00 | LASER | EX | EX | YES | EX Left in LASER HAZARD | 04:00 |
Thu Jun 15 10:06:34 2023 INFO: Fill completed in 6min 33secs
Jordan confirmed a good fill curbside.
Holding IFO in IDLE for the large earthquake to pass through, see 70485. Peakmon is now below 2000, plot attached, so I will start untripping WDs and relocking soon.
There were some 20+mph winds that made the dust counts increase, up to 400 counts of 0.3um particles in PSL101, see attached. Tagging PSL. Slightly unusual, as we often see the anteroom (102) dust counts increase more than the main (101) PSL room's.
I chose three times when we had BNS ranges of 140+ Mpc in three different configurations, see the second screenshot. The sensitivity below 50Hz was best at 65W input power, even compared to the current configuration with cleaning. This agrees roughly with 70437; the high frequency sensitivity doesn't seem to be degraded at the current setting which is different from 70289 and comments. This is probably due to choosing a different point in the thermalization to compare.
1370888354 Lost lock from the s-wave of a Mag7.2 Earthquake, near Tonga. Holding in DOWN until the Earthquake passes.
All BSC Watchdogs tripped; leaving them tripped until after the R-waves pass and Peakmon comes back down. Jim suggested taking SEI_ENV to LARGE_EARTHQUAKE and IMC_LOCK to OFFLINE.
On Robert's request, we are keeping the CDS WIFI transmitter in the MSR powered on. I have modified the CDS SDF accordingly.
Camilla, Dave:
nuc26:
nuc26 had an OS issue overnight which caused it to transition its root filesystem to read-only mode. I rebooted it at 08:11 PDT and it is operating correctly now.
nuc30:
The camera image for cam15, HAM6 OMC TRANS, went blue screen at 07:56 PDT this morning. I restarted the camera server on h1digivideo1 and then power cycled the camera by POE toggling on sw-lvea-aux to no avail, h1cam15 still does not respond to pings. Further investigation is needed.
FRS28295 opened for h1cam15 not responding issue.
Erik and Jonathan unplugged and replugged the cam15 cable at the switch during a lock loss (about 1 PM local time). No change on the camera. Next step is sending someone to the camera itself, likely with a spare.
Connected to camera via laptop, no response. Camera will need to be replaced but requires laser hazard. Camera not required for relocking. Working with commissioners to find time to replace/align camera. Patrick T. will update serial number and mac address of new camera.
WP11274 including laser hazards submitted: https://services2.ligo-la.caltech.edu/LHO/workpermits/view.php?permit_id=11274
TITLE: 06/15 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY:
Lock#1:
NLN at 06:00, in Observing at 06:12, out at 06:43 from the SQZer unlocking, back in Observing at 06:45
Lockloss at 08:04, most likely from some local ground motion event. Ground motion seen mostly in the 1-3, 3-10 Hz bands. The 10-30Hz band saw the motion too but not as strongly.
Lock#2:
Xarm ran through increase flashes twice, couldn't get DRMI, went through PRMI which locked quickly, then locked DRMI in ~5 minutes.
NLN at 09:23, in Observing at 09:29
Lockloss at 13:52
Lock#3:
Yarm went through increase flashes; DRMI locked without needing to go through PRMI
NLN at 14:55, waiting on ADS to converge to go into observing
LOG:
Went out of Observing due to OPO_SERVO railing; the SQZ_OPO guardian requested itself to DOWN, see attached.
TITLE: 06/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY:
More observing than last night, but there was another shorter wind storm (winds over 30mph) which prevented locking for 3+ hours. Commissioning time was used for a Robert measurement and a PRCL OLG. The squeezer knocked H1 out of Observing for about 3 minutes toward the end of the shift, but it came back on its own.
LOG:
Tagging SUS. At the first 23:21 UTC lock acquisition, the last guardian to arrive and allow us into Observing was VIOLIN_DAMPING. Can we think about reducing its sleep timer?
Another pretty windy day. I've been looking at the ETMY ISI to see if any changes can be made to help us stay locked. The only easy thing I've found so far is turning off the off-axis sensor correction. Watching the outputs here, it is contributing to the ISI swinging several microns. I don't think it will change much, but I don't have a lot of other things to try at the moment, and it probably can't hurt. I've accepted the diff in the observe file, so this shouldn't affect our ability to go to observe when Corey is done relocking.
Attached trend shows the St1 X CPS on the top; on the bottom, 2 different points of the X sensor correction path to show when I turned it off: green is the filter output, orange is the output of the downstream match bank where I turned off the output. When the sensor correction is on, the CPS is mostly dominated by it at low frequency, swinging almost 10 microns at points. With sensor correction off (when the orange trace goes to 0 on the bottom), the CPS stops moving around so much.
This was turned off 06/14 23:50UTC, and will be kept off for now. Tagging DetChar.
Naoki, Vicky
To try damping the 80 kHz PI recently causing locklosses (LHO:70434), today we installed a new path on PI28, with drive sent to ETMX. To phase-lock the ESD damping drive to the PI mode, we bandpassed the DCPD signal around 80.296 kHz (foton here). This frequency was chosen based on the DCPDs' full-spectrum signal around 80.3 kHz, where today in full lock we saw the 3 peaks in red here, between 80.295-80.301 kHz. It seems like our problem is the bigger peak around *296, where the pink cursors are centered. We are trying to damp on ETMX first, since it seems like the PI29 80 kHz damping on ETMX could impact this mode.
The new path for PI28 has been updated and damping is guardianized, but this PI28 damping is untested, and there are no verbal alarms for PI28 yet.
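For illustration only, here is a minimal scipy sketch of a narrow bandpass like the one described above. The production filter was made in foton; the sample rate and band edges below are assumptions.

```python
# A minimal sketch (not the production foton filter) of a narrow bandpass
# around the 80.296 kHz PI28 peak. The 524288 Hz sample rate and the exact
# band edges are assumptions for illustration.
import numpy as np
from scipy import signal

fs = 524288                      # assumed DCPD PI channel sample rate, Hz
band = [80.293e3, 80.299e3]      # ~6 Hz band around the 80.296 kHz peak

# 4th-order Butterworth bandpass; second-order sections for numerical safety
sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")

# Compare the gain at the PI28 peak against the neighboring 80.302 kHz
# PI29 mode that we want to leave alone.
freqs = [80.296e3, 80.302e3]
_, h = signal.sosfreqz(sos, worN=freqs, fs=fs)
for f, hf in zip(freqs, h):
    print(f"{f/1e3:.3f} kHz: {20*np.log10(abs(hf)):+.1f} dB")
```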
Summary with the current status of PI damping:
| PI frequency | PI damping mode number | Test Mass | PI Guardian status | _DAMP_GAIN |
|---|---|---|---|---|
| 10.428 kHz | 24 | ETMY | automated | 1000 |
| 10.431 kHz | 31 | ETMY | automated | 1000 |
| 80.302 kHz (LHO:68760) | 29 | ETMX | automated, likely working (LHO:70243) | 50000 |
| 80.296 kHz (LHO:70443) | 28 | ETMX | guardianized, testing now | 50000 |
SDFs were reconciled afterward, see screenshots -- mainly, we un-monitored guardian-controlled things like the damping phase and PLL integrator.
I've added in a test for PI mode 28 into verbal alarms with an RMS threshold of 1 for now (the same as mode 29).
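As an illustration of what such a verbal-alarm test amounts to, a minimal polling sketch follows. The channel name below is a guess for illustration; the real test lives in the verbal alarms code.

```python
# A hedged sketch of the kind of RMS threshold test added to verbal alarms.
# The channel name is hypothetical; verbal alarms implements this itself.
import time
from epics import caget   # pyepics

PI28_RMSMON = "H1:SUS-PI_PROC_COMPUTE_MODE28_RMSMON"  # hypothetical name
THRESHOLD = 1.0   # same RMS threshold as mode 29

while True:
    rms = caget(PI28_RMSMON)
    if rms is not None and rms > THRESHOLD:
        print(f"PI mode 28 is ringing up! RMS = {rms:.2f}")
    time.sleep(10)
```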
Just checking in: looks like PI28 does at least see a mode come through ~1-2 hours into lock (the RMSMON spikes just before NLN are b/c of the OMC whitening; the second peak is the real one), but this hasn't run away yet. Then ~10 min after PI28, it looks like PI29 sees something pass through.
Looking at disaggregated temperature trends across the EX VEA over the last year, it might be a bit misleading to only look at the "average EX VEA" temperature. Temperatures in some parts of the EX VEA seem to have drifted by up to 0.5-2 deg F over the past few months. Temps now seem to be returning to where they were about 6mo - 1 year ago.
I think Robert and Aidan have both made the point that, to understand temperature drifts, it's helpful to look at the individual temperature sensors across the VEA, rather than the average VEA temperature. For example, see Aidan's alog LLO:25785, where he also thought about stabilizing the VEA temperatures to a different sensor that is better correlated with the test mass temperature. For this 80 kHz PI though, Aidan has also said that the temperature dependence of the mechanical mode frequency is about 80 ppm/K, so for ~0.5 K (delta ~1 degF) the HOM frequency changes by ~3 Hz for the 80 kHz mechanical mode, and it seems unlikely we're just within 3 Hz of the PI going unstable -- so it's not totally convincing that our recent 80 kHz PI ring-ups are simply b/c of VEA temperature drifts. But at least, at LLO, he found their ETMY is most correlated with the EY VEA_202B sensor.
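For scale, the arithmetic behind that ~3 Hz figure (a quick back-of-envelope check, nothing more):

```python
# Quick check of the scaling quoted above: ~80 ppm/K temperature dependence
# of the ~80 kHz mechanical mode frequency.
f_mode = 80.3e3   # Hz, mechanical mode frequency
dfdT = 80e-6      # fractional shift per kelvin (80 ppm/K)
dT = 0.5          # K, roughly a 1 degF drift

df = f_mode * dfdT * dT
print(f"Expected mode frequency shift: {df:.1f} Hz")   # ~3 Hz
```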
I'm not sure which of LHO's EX VEA sensors are most correlated with ETMX. But, it may be worth considering more than the average VEA temperature. Especially since the individual temperature sensors have seen some drifts over the past year, which aren't seen by trends of the average VEA temperature.
Commissioning period,
The Ham1 FF was turned off for testing on 06/14/23 21:32:15. It stayed off for 5 minutes and was turned back on at 21:37:15.
My motivation for this test came from a bruco that I ran on the data from a long lock over the weekend: https://ldas-jobs.ligo-wa.caltech.edu/~elenna.capote/brucos/CAL_1370343461/
Specifically, the CHARD P, INP1 P and HAM1 TT L4C RY coherence were much higher than expected, and much higher than they had been in the past after successful HAM1 FF tuning and A2L gain adjustments.
The test first confirmed that we are still seeing decent subtraction of the HAM1 noise from the ASC loops, as seen in an OMC DCPD sum comparison with the feedforward on and off. I also grabbed spectra of each of the ASC loops with the feedforward on and off (in this plot the red, live traces are with the feedforward off, and the blue reference traces are with the feedforward on).
I used the feedforward off time to run the NonSENS training code and calculate a new feedforward for CHARD P, INP1 P and PRC2 P. The code also makes plots of the expected subtraction of the loops. I compared the expected subtraction plots (linked below) to the current subtraction plots linked above, and I conclude that:
I didn't check any yaw loops because there is already decent noise removal, and coherences are low. I don't expect to see much improvement there.
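As a rough illustration of the kind of estimate such feedforward training is built on (this is not the NonSENS code; the sample rate, channel roles, and coupling below are made up for the example), the optimal witness-to-target filter is the standard Wiener estimate, -CSD(witness, target)/PSD(witness), computed from FF-off data:

```python
# Toy sketch of frequency-domain feedforward estimation from FF-off data.
# Everything here (fs, coupling, "witness", "target") is a stand-in.
import numpy as np
from scipy import signal

fs = 512                        # Hz, assumed sample rate
t = np.arange(0, 600, 1/fs)     # 10 minutes of fake FF-off data
rng = np.random.default_rng(0)

witness = rng.standard_normal(t.size)            # e.g. a HAM1 L4C channel
coupling = 0.3
target = coupling * witness + 0.05 * rng.standard_normal(t.size)  # e.g. CHARD P

f, Pww = signal.welch(witness, fs=fs, nperseg=4096)
_, Pwd = signal.csd(witness, target, fs=fs, nperseg=4096)

ff_tf = -Pwd / Pww              # feedforward transfer function estimate
print(f"Recovered coupling near 10 Hz: {-ff_tf[np.argmin(abs(f - 10))].real:.3f}")
```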
I will install the new INP1 P and PRC2 P feedforward filters, labeled with today's date. I think they should be engaged for the next lock if possible.
I don't understand why the coupling has changed, but I think this is similar to the mystery change that altered several other things in the IFO recently - perhaps some new alignment from the PR3 move? In other words, unless we have to make another big alignment change like that, I don't expect us to need to update this feedforward for a while.
Turning off the HAM1 feedforward made the CHARD and PRC2 noise significantly larger, by about a factor of 10. The effect on DARM is significant, and consistent with CHARD or PRC2 coupling more now than before; when the FF was on, they were just below the measured DARM. This is consistent with the measured coherence between DARM and CHARD or the HAM1 sensors.
One possible reason for the higher coupling of HAM1 noise to DARM is that the beam spot might have moved on PR2 (where PRC2 is driven), so the A2L we tuned some time ago might now be wrong. It's worth a quick retune of the PR2 A2L to see if that improves the coupling of PRC2 to DARM, and maybe even the DARM noise.
The new filters have not yet been installed because I am getting errors from foton when I try to copy them in. I will try again tomorrow.
I have installed the new feedforward filters for INP1 and PRC2. They are labeled with "0616" for today's date. They are currently not in use, but can be quickly tested during a commissioning period. A thermalized IFO is best for the test, but they can be tried at any time.
New filters implemented and SDFed.
Our range and SQZ BLRMs were low in this lock compared to yesterday, so, after checking that our RF6 power H1:SQZ-CLF_REFL_RF6_DEMOD_RFMON was still high (good NLG) after Sheila adjusted it last week, I adjusted the SQZ angle from 16:45 to 16:55 UTC and from 17:05 to 17:15 UTC. This is currently allowed during Observing, but tagging DetChar just in case. It's a little confusing that improving the SQZ BLRMs decreased the H1 range, see attached. Started at 141 deg, ended at 130 deg; I think there's room for more improvement and something is off.
Before Tuesday Maintenance (first t-cursor) our SQZ BLRMs and range looked good.
After Tuesday Maintenance and before the SQZ angle change they were not good; unsure what changed.
Since yesterday's SQZ angle change (2nd t-cursor) the BLRMs are better, but it seems like the angle is not perfect, as the BLRMs reach a minimum and then turn up after a few hours.
TITLE: 06/13 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 131Mpc
SHIFT SUMMARY: Easy relock after Tuesday Maintenance, but the wind is forecast to pick up this evening.
LOG:
There is a computer Patrick left turned on in the EY mech room on the HEPI pump controller. Robert powered off the monitor, but said that his previous research showed new-style LED monitors are less noisy than the computers themselves.
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 13:14 | CDS | Erik | Remote | N | Restarting NUCs OPSLogin0 | 13:22 |
| 14:24 | FAC | Tyler | MidY | N | Slowly move snorkel lift to MidY | 15:10 |
| 15:00 | PCAL | Tony | PCAL lab then CER | N | PCAL measurement prep | 15:27 |
| 15:00 | FAC | Karen | EndY | N | Technical Cleaning | 16:01 |
| 15:07 | FAC | Bubba | Air Handler Room | N | Fans alog 70404 | 17:07 |
| 15:09 | VAC | Travis | EX, MX | N | Monthly Turbo Pump Checks | 17:54 |
| 15:14 | FAC | Tyler | OSB Receiving | N | Forklift Move | 15:15 |
| 15:15 | COMM | Sheila, Jason | LVEA | N | PSL Racks - turn off sidebands | 15:25 |
| 15:16 | PCAL | Rick, Tony, Julianna | EX | YES | PCAL Measurement, WP 11253 | 17:40 |
| 15:17 | FAC | Kim | EX | YES | Technical Cleaning | 16:17 |
| 15:22 | PSL | Jason | CR | N | Touch up RefCav Alignment alog 70402 | 16:12 |
| 15:23 | EE | Fil, Patrick | EY | N | New HEPI Beckhoff, Jim turned off HEPI | 19:06 |
| 15:31 | COMM | Sheila | CR | N | Prep for OMC scans, and OMC scans (ETMs misaligned) alog 70409 | 17:44 |
| 15:36 | SEI | (Jim) | EY | N | EY HEPI offline | 19:17 |
| 15:44 | CDS | WAP | All buildings | N | WAP turned on | 19:17 |
| 15:50 | COMM | Daniel | LVEA/CER | N | PSL Racks turn off 117MHz sideband | 15:58 |
| 16:26 | FAC | Karen, Kim | FCES | N | Technical Cleaning | 17:19 |
| 16:29 | SEI | Jim | FCES | n | Feedthrough protection | 17:24 |
| 16:54 | EE | Marc | CER Mezz, EX, EY | N | Check on Kepco power supplies (not in VEAs) | 19:04 |
| 17:40 | EPO | Mike + 12 | LVEA | N | Tour | 18:13 |
| 17:43 | VAC | Gerardo | FCES | N | length measurements - cables and pipes | 18:18 |
| 17:43 | COMM | Sheila | LVEA | N | Turn on RF sidebands | 17:54 |
| 17:45 | VAC | Janos, Jordan | LVEA, EY, EY | N | Labeling Equipment | 18:57 |
| 17:45 | OPS | Tony | EX | N | Dust Monitor Pump Replacement | 18:54 |
| 18:13 | Tour | Rick + 5 | LVEA | N | Tour | 18:55 |
| 18:26 | PEM | Robert, Bubba | EX | N | HVAC setup | 18:59 |
| 18:48 | SEI | Jim | CR | N | HAM4 TFs | 21:06 |
| 18:10 | SEI | Jim | EY | N | Assist with HEPI Beckhoff | 18:40 |
| 18:00 | FAC | Kim, Karen, Cindy | LVEA | N | Technical Cleaning | 18:50 |
| 19:05 | PCAL | Tony | PCAL Lab | N | PCAL lab | 19:45 |
| 19:07 | PEM | Robert | EY | N | Get equipment | 20:02 |
| 19:24 | | Betsy | LVEA | N | Walkthrough | 19:36 |
| 19:31 | VAC | Janos, Jordan | MX, MY | N | Labeling Equipment in VEA | 19:58 |
| 19:40 | CDS | Dave | Remote | N | ALS restarts and DAQ restart | 19:40 |
| 19:52 | PEM | Robert | LVEA | N | Accelerometers near Oplevs | 20:12 |
| 21:03 | PCAL | Tony | PCAL Lab | N | PCAL | ongoing |
Went out of Observing due to SQZ-OPO_PZT_1 getting to the edge of its 40-110V range. No changes have been made to mitigate this, as it only changed so rapidly because the LVEA temperature was changing, see 70477.
I think the recent issues with the squeezer pump ISS railing are likely resolved after today's on-table realignments. The green pump AOM was significantly clipping the beam (SQZT0 layout here, "GAOM1"), both reducing the total power after the AOM and degrading the fiber coupling efficiency due to the clipped, bad mode shape. See GAOM1 sweeps from this morning (before aligning, left) and now (after, right). Along the pump path, we now have better GAOM1 throughput (~90%), comparable diffraction efficiency (~60%), and improved fiber coupling. For the 60uW OPO trans we've been using, we previously launched 22mW with no buffer (hence the ISS railed); now we only have to launch 15mW for the same OPO transmitted power, with lots of buffer.
GAOM1 typically has ~90% throughput, e.g. previously this was 14.4mW out from 16mW incident. Today I first measured 25mW out for 44mW incident (~57%): this means >30% of the power from the SHG was being clipped at the AOM aperture. After re-aligning GAOM1 using the mount screws, we're back up to 40mW out for 45mW incident, so almost 90% throughput again. I think this could be better, but it's good enough for now.
Relieving the clipping also significantly improved the fiber coupling efficiency, presumably because it improves the mode shape: we use the 0th-order AOM beam for the pump light, not the cleaner diffracted order that gets cleaned up by the diffraction, so AOM clipping shows up directly on the beam we are trying to fiber-couple.
-- Before, for 20mW into the pump fiber, we had 1.8mW on OPO_REFL_DC_POWER on the other end, with GAOM1 driven at 2-3V (almost no room), and all power on that path going into the fiber (SHG_REJECTED = 0mW).
-- After, for 20mW into the pump fiber, we have 2.9mW on OPO_REFL_DC_POWER (60% higher power), with GAOM1 driven at 4.5V (mid-range), and a healthy buffer of rejected power on the fiber path (SHG_REJECTED = 8mW).
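For the record, a quick check of the arithmetic above (all numbers copied from this entry):

```python
# GAOM1 throughput and fiber-path improvement, from the quoted measurements.
print(f"GAOM1 throughput before: {25/44:.0%}")   # ~57%, badly clipped
print(f"GAOM1 throughput after:  {40/45:.0%}")   # ~89%, near the typical ~90%

# OPO_REFL_DC_POWER per 20 mW launched into the pump fiber:
print(f"Coupling improvement: {2.9/1.8 - 1:.0%}")  # ~60% more power through
```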
I've upped the generated sqz levels, back to more "normal", hopefully nominal, values. In $(userapps)/sqz/h1/guardian/sqzparams.py, Line 12, I've set opo_grTrans_setpoint_uW = 85, and re-tuned the OPO co-resonance temperature at this pump power (temp decreased from 31.7C to 31.68C, expected as the higher power heats the opo crystal more).
This OPO green trans setpoint of 85 uW corresponds to a generated SQZ level of about 15 dB, or non-linear gain of 12.3. By comparison, 60uW ~ Gen SQZ = 12dB ~ NLG 6.4. So, this is nearly doubling the squeezer gain, aka increasing the generated squeeze level by about 3dB.
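For anyone wanting to reproduce the NLG-to-generated-squeezing conversion, one common below-threshold OPO parameterization (a sketch, not an official calibration) recovers the numbers above: with x = sqrt(P_pump/P_threshold), NLG = 1/(1-x)^2, and the generated (anti)squeezing is 20*log10((1+x)/(1-x)) dB.

```python
# Generated squeezing level from nonlinear gain, under the common
# below-threshold OPO parameterization described above (a sketch only).
import numpy as np

def gen_sqz_db(nlg):
    x = 1 - 1/np.sqrt(nlg)          # invert NLG = 1/(1-x)^2
    return 20*np.log10((1 + x)/(1 - x))

for nlg in (12.3, 6.4):
    print(f"NLG {nlg:5.1f} -> generated squeezing ~{gen_sqz_db(nlg):.1f} dB")
# NLG 12.3 -> ~15.6 dB and NLG 6.4 -> ~12.2 dB, matching the ~15 dB and
# ~12 dB figures quoted above.
```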
To check whether pump ISS failures are from aom clipping in the future, we can try the following. SQZT0 medm stuff should look like this screenshot, with teal circle highlighting the path we're talking about, and red arrows showing some knobs to turn.
1) Bring SQZ_OPO_LR guardian "DOWN". Now PUMP_ISS is disabled, so GAOM1 box is yellow.
2) Turn off pump fiber flipper.
3) Set H1:SQZ-OPO_ISS_DRIVEPOINT = 0. This turns off GAOM1 diffraction, and sends all power to the fiber (pump light uses 0-order).
4) See the total pump power after GAOM1 (sum of H1:SQZ-SHG_REJECTED_DC_POWERMON + H1:SQZ-SHG_LAUNCH_DC_POWERMON).
5) If GAOM1 is well-aligned and not clipping badly, I'd expect total power after GAOM1 to be about 70% of the SHG output green power, H1:SQZ-SHG_GR_DC_POWERMON.
Broken down, this ratio is from SHG's green output light (H1:SQZ-SHG_GR_DC_POWERMON) being split ~25% to FC green locking, and the remaining ~75% to the pump path GAOM1 which lets through ~90%.
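A quick bookkeeping sketch of where that ~70% comes from (splits copied from the paragraph above):

```python
# Expected (REJECTED + LAUNCH)/SHG_GR ratio for step 5, using the quoted
# splits: ~25% of SHG green to FC locking, ~75% to GAOM1, ~90% through a
# healthy (non-clipping) GAOM1.
shg_out = 1.0                     # normalize SHG_GR_DC_POWERMON to 1
to_gaom1 = 0.75 * shg_out         # after the ~25% FC green locking pickoff
after_gaom1 = 0.90 * to_gaom1     # GAOM1 throughput when not clipping

print(f"Expected ratio after GAOM1: {after_gaom1:.0%}")  # ~68%, i.e. ~70%
```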
I'm a bit suspicious of how this happened in the first place: I wonder if the beam is slightly large for the AOM aperture (though the AOM's throughput is not terrible), whether SHG or lab temperature drifts are changing the output alignment or mode shape slightly, or whether we've bumped the SHG output path doing other on-table work. If ISS railing becomes a problem again at this SHG power, we can look into it more. For reference, we last increased the SHG power by ~15% on May 16th (69671) in an attempt to resolve exactly this issue; we don't need all this SHG power anymore.
Attaching quick SHG, OPO, CLF transfer functions taken after table alignments, with pd powers on medm.
Just following up on trends since the AOM alignment -- this on-table fix seems to have mostly resolved the previous squeezer pump ISS issues.