Gabriele and I think we have found the problem causing the large 102 Hz line. Today I plotted the LSC control and error signals, the LSC FF out signals and the OMC DCPD sum.
The 102 Hz line is clearly evident in SRCLFF out and OMC DCPD sum, but not present in the LSC control or error signals, or in MICHFF out.
The line showed up on August 4 when the SRCL feedforward was retuned. We have made no changes to the SRCL FF since.
Looking at the actual SRCL FF filter, there is an extremely high-Q (and therefore narrow, and hard to see without fine resolution) feature at precisely 102.1 Hz. Gabriele will post more details in a comment.
In short, we think this is the problem, and are taking the steps to fix it.
Edit to add:
How can we avoid this problem in the future? This feature is likely an artifact of running the injection to measure the feedforward with the calibration lines on, so a spurious feature right at the calibration line frequency appeared in the fit. Since it is so narrow, it required very fine resolution to see in a plot; for example, Gabriele and I had to bode plot in foton from 100 to 105 Hz with 10000 points to see the feature. However, the feature is immediately evident just by inspecting the zpk of the filter, especially if you use the "mag/Q" view in foton and look for poles and zeros with a Q of 3e5 (!!). If we make sure to run feedforward injections with the cal lines off, and do a better job of checking our work after we produce a fit, we can avoid this problem.
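To make the "check your work" step concrete, here is a minimal sketch of that kind of scan (a hypothetical helper, not the real foton interface; foton's own "mag/Q" view shows the same information directly). For a complex s-plane root pair, Q = |s| / (2 |Re(s)|), so anything suspiciously narrow stands out immediately:

import numpy as np

def flag_high_q(roots_hz, q_threshold=1e3):
    """Return (frequency_Hz, Q) for complex s-plane roots with Q above threshold.

    roots_hz: iterable of complex s-plane roots expressed in Hz (s / 2pi).
    Real roots carry no resonance and are skipped.
    """
    flagged = []
    for s in roots_hz:
        s = complex(s)
        if s.imag == 0:
            continue  # real root: no narrow feature
        f0 = abs(s)  # resonant frequency in Hz
        q = np.inf if s.real == 0 else f0 / (2 * abs(s.real))
        if q > q_threshold:
            flagged.append((f0, q))
    return flagged

# Example: a zero (or pole) pair at 102.1 Hz with Q ~ 3e5 is flagged immediately
sigma = 102.1 / (2 * 3e5)
print(flag_high_q([-sigma + 102.1j, -sigma - 102.1j]))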
We did make sure to check the MICH feedforward in case the same error had occurred, but luckily everything looks fine there!
We removed the high Q zero/pole pair, saved and reloaded the filter.
We have relocked with no sign of the 102 Hz peak. Great! Tagging DetChar since they pointed out this problem first and Cal since they made adjustments to calibration lines to avoid this problem (and may want to undo those changes).
Lockloss @ 23:50 UTC - no immediate obvious cause.
H1 had relocked to NLN by 23:18 UTC after SQZ team ran their checks, but the range was significantly lower (~120Mpc), so investigations were ongoing into the loss of sensitivity. H1 was not observing between reaching NLN and this lockloss.
Winds have picked up this afternoon and are now over 30 mph. It's unclear whether this caused the lockloss, but it will make relocking challenging.
The hourly forecast (link) suggests we should have similar / worse than right now wind for the next several hours, and then the wind will start to calm down (a bit, not a lot) around 8pm or 9pm. I've suggested to RyanS that if the IFO doesn't lock in the next few tries, that he leave the IFO in DOWN for 2-3 hours until the wind starts to come down.
Separately, we had a surprising amount of trouble locking this afternoon, and much of that trouble was before the wind picked up. We don't really know why the one successful lock worked. I'm hopeful that once the wind calms down, locking will be straightforward, but it may not be.
(Janos C., Gerardo M.)
The old ion pump body was removed and replaced with a refurbished Galaxy-type unit. The annulus system was pumped down while the replacement took place.
After the 4-1/2" flange was torqued, the ion pump volume was added to the rest of the annulus system. The aux cart pumped on the annulus system for 3 hours; once the pressure reached 4.5x10^-05 Torr, the ion pump took over the pumping with no problem, and after 20 minutes the aux cart and "can" turbo were removed. The system is back to normal.
Vicky, Naoki, Sheila, Daniel
Details of homodyne measurement:
This morning Daniel and Vicky reverted the cable change to allow us to lock the local oscillator loop on the homodyne (undoing the change described in 69013). Vicky then locked the OPO on the seed using the dither lock and increased the power into the seed fiber to 75 mW (it can't go above 100 mW, for the safety of the fiber switch). We then reduced the LO power so that the seed and LO powers were matched on PDA, and adjusted the alignment of the sqz path to get good (~97%) visibility measured on PDA. We removed the half wave plate from the seed path without adjusting its rotation. With it removed, we checked the visibility on PDB and saw that the powers were imbalanced.
Polarization issue (revisiting the polarization of sqz beam, same conclusion as previous work):
There is a PBS in the LO path close to the homodyne, so we believe that the polarization should be set to horizontal at the beamsplitter in that path. The LO power on the two PDs is balanced (imbalanced by 0.4%), so we believe this means that the beamsplitter angle was set correctly for p-polarized light as we found it, and there is no need to adjust the beamsplitter angle. However, when we switched to the seed beam, there was a 10% difference between the power on the two PDs without the half wave plate in the path. We put the half wave plate back, and the powers were again balanced (with the HWP angle as we found it). We believe this means that the polarization of the sqz path is not horizontal arriving at the homodyne, and that the half wave plate is restoring the polarization to horizontal. If the polarization rotation is happening on SQZT7, the half wave plate should be able to mitigate the problem; if it's happening in HAM7, it will look like a loss for squeezing in the IFO. Vicky re-adjusted the alignment of the sqz path after we put the HWP back in, because it slightly shifts the alignment. After this the visibility measured on PDA is 95.7% (efficiency of 91.6%) and on PDB the visibility is 96.9% (efficiency of 93.9%).
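As a reminder of why a half-wave plate can undo an upstream linear-polarization rotation (a standard Jones-calculus result, not anything specific to this table):

% Half-wave plate with its fast axis at angle \theta (up to a global phase):
H(\theta) = \begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix},
\qquad
H(\theta) \begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix}
        = \begin{pmatrix} \cos(2\theta - \alpha) \\ \sin(2\theta - \alpha) \end{pmatrix},

so linear polarization at angle \alpha emerges at 2\theta - \alpha, and choosing \theta = \alpha/2 restores horizontal polarization downstream of the wave plate.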
SQZ measurements, unclipping:
While the IFO was relocking, Vicky and Naoki measured SQZ, SN, ASQZ and mean SQZ on the homodyne and found 4.46 dB sqz, 10.4 dB mean sqz and 13.14 dB anti-sqz measured from 500-550 Hz. Vicky then checked for clipping and saw some evidence of small clipping (order 1% clipping with a 10 urad yaw dither on ZM2). We went to the table to check that the problem wasn't in the path to the IR PD and camera; we adjusted the angle of the 50/50 beamsplitter that sends light to the camera, and set the angle of the camera to be more normal to the PD path, which improved the image quality on the camera. Vicky moved ZM3 to slightly reduce the clipping seen by the IR PD. She restored good visibility by maximizing the ADF, and also adjusted both PSAMs, moving ZM4 from 100V to 95V. (We use different PSAMs for the homodyne than for the IFO.) After this, she re-measured sqz at 800-850 Hz: 5.2 dB sqz, 13.6 dB anti-sqz, and 10.6 dB mean sqz.
Using the nonlinear gain of 11 (Naoki and Vicky checked its calibration yesterday) and the equations from Aoki, this sqz/asqz level implies a total efficiency of 0.72 without phase noise; the mean sqz measurement implies a total efficiency of 0.704. From the sqz loss spreadsheet we have 6.13% known HAM7 losses; if we also use the lower visibility measured using PDA, we should have a total efficiency for the homodyne of 0.916*0.9387 = 0.86. This means we would infer an extra 16-18% losses from these homodyne measurements, which seems too large for homodyne PD QE and optics losses in the path. Since we believe that the polarization issue is reflected in the visibility, these extra losses are in addition to any losses the IFO sees due to the polarization issue.
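As a simplified cross-check of this kind of inference (this ignores phase noise and does not use the measured nonlinear gain, so it will not reproduce the 0.72 exactly), the lossy-squeezer model V_± = eta*exp(±2r) + (1 - eta) can be inverted directly from the measured sqz/asqz levels:

import numpy as np

def efficiency_from_sqz_asqz(sqz_db, asqz_db):
    """Infer generated squeezing and total efficiency from measured sqz/asqz.

    Assumes a lossy squeezed state with no phase noise:
        V_sqz  = eta * exp(-2r) + (1 - eta)
        V_asqz = eta * exp(+2r) + (1 - eta)
    sqz_db: measured squeezing below shot noise (positive dB)
    asqz_db: measured anti-squeezing above shot noise (positive dB)
    """
    v_sqz = 10 ** (-sqz_db / 10.0)
    v_asqz = 10 ** (asqz_db / 10.0)
    e2r = (v_asqz - 1.0) / (1.0 - v_sqz)      # exp(+2r): generated anti-squeezing
    eta = (1.0 - v_sqz) / (1.0 - 1.0 / e2r)   # total efficiency
    return eta, 10 * np.log10(e2r)

# The 500-550 Hz numbers above (4.46 dB sqz, 13.14 dB asqz) give eta ~ 0.66,
# a bit lower than the 0.72 quoted from the NLG-based Aoki equations.
print(efficiency_from_sqz_asqz(4.46, 13.14))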
Screenshot from Vicky shows the measurement made including the dark noise.
Including losses from phase noise of 20 mrad, dark noise 21 dB below shot noise, and a more accurate calibration of our measured nonlinear gain to generated sqz level (from the ADF paper vs. the Aoki paper Sheila referenced), the total efficiency could marginally be increased to 0.74. This suggests 26% loss based on sqz/asqz, which is also consistent with the 27% loss calculated separately from the mean sqz and generated sqz levels.
From the sqz wiki, we could budget 17% known homodyne losses. This includes 7% in-chamber loss to the homodyne (opo escape efficiency * ham7 optics losses * bdiverter loss), and 11% HD on-table losses (including 2% optics losses on SQZT7, and visibility losses of 1 - 91.6% = 8.4% as Sheila said above; note this visibility was measured before changing alignments for the -5.2 dB measurement, so there remains some uncertainty from visibility losses).
In total, after including more loss effects (phase noise, dark noise), a more accurate generated sqz level, and updating the known losses -- of the 27% total HD losses observed, we can plausibly account for 17% known losses, lowering the unexplained homodyne losses to ~10-11% (this is still high).
From Sheila's alog LHO:72604 regarding the quantum efficiency of the homodyne photodiodes (99.6% QE for PDA, and 95% QE for PDB), if we accept this at face value (which could be plausible due to e.g. the angle of incidence on PD B), this would change the 1% budgeted HD PD QE loss to 5% loss.
This increases the amount of total budgeted/known homodyne losses to ~21%: 1 - [0.985 (OPO) * 0.953 (HAM7) * 0.99 (beam diverter) * 0.98 (on-table optics loss) * 0.95 (PD B QE) * 0.916 (HD visibility)].
From the 27% total HD losses observed, we can then likely account for about 21% known losses (~7% in-chamber, ~15% on-table), lowering unexplained homodyne losses to < 7%.
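For bookkeeping, the ~21% budgeted-loss number above is just one minus the product of the individual efficiencies quoted in the text:

import math

# Budgeted homodyne efficiencies, copied from the entries above
budget = {
    "OPO escape efficiency": 0.985,
    "HAM7 optics": 0.953,
    "beam diverter": 0.99,
    "SQZT7 on-table optics": 0.98,
    "PD B quantum efficiency": 0.95,
    "homodyne visibility efficiency (V^2)": 0.916,
}

total_efficiency = math.prod(budget.values())
print(f"budgeted efficiency = {total_efficiency:.3f}, "
      f"budgeted loss = {1 - total_efficiency:.1%}")  # ~0.79, i.e. ~21% known loss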
TITLE: 08/29 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 17mph 5min avg
Primary useism: 0.13 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY: Taking over H1 while it's relocking following maintenance day; currently holding in ADS_TO_CAMERAS while SQZ team runs some checks.
TITLE: 08/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
PSL anteroom (101) dust counts have been elevated throughout the day (related alog 72507). The EY chiller work continued into the afternoon, and the annulus pump was left on at EX until 21:00 UTC. We struggled to relock today after maintenance.
Lock#1:
No flashes during DRMI or PRMI, lost it during CHECK_MICH_FRINGES
Lock#2-6:
Lock#7-12:
We survived to PARK_ALS_VCO but then lost it again from Xarm. So this isn't an ALS issue, since ALS isn't used anymore at this point.
We lost lock at DRMI_LOCKED_PREP_ASC twice without the Xarm issue happening.
After several more attempts and a few more sets of eyes on the IFO, we were able to slowly step up through the states to PREP_ASC, where we decided to just request NLN and let it go. We lost lock at CHECK_VIOLINS at 21:40 UTC.
We continued to struggle at ALS, IR, and DRMI, but after a bit we were able to make it up to ADS_TO_CAMERAS, where we are as of 23:00 UTC.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:00 | FAC | Karen | EndX | N | Tech clean | 16:09 |
15:02 | FAC | Cindy | FCES | N | Tech clean | 15:55 |
15:08 | OAF | Dave | Remote | N | WP11387: New h1oaf and h1bos models for Jenne, no DAQ restart | 18:31 |
15:09 | PEM | Dave | Remote | N | WP11393: New h1pemcs model for Cosmic Ray, DAQ restart is required | 18:31 |
15:10 | DAQ | Dave | Remote | N | WP11391: Restart DAQ to enable MD5 checksum file generation, pemcs changes, and possibly Beckhoff changes | 18:32 |
15:12 | OPS | Oli | LVEA | N | Quarterly DM checks | 15:56 |
15:16 | VAC | Jordan | MidY, EndY | N | Turbo pump tests | 17:26 |
15:28 | FAC | Tyler + Mac Miller | EndY | N | Chiller investigation | 18:00 |
15:32 | VAC | Gerardo | LVEA | N | Grab parts | 15:41 |
15:41 | FAC | Mitch | LVEA | B | Checks | 16:20 |
15:42 | VAC | Janos, Gerardo | EndX | N | Annulus pump replacement | 18:04 |
15:48 | FAC | Chris | LVEA | N | FAMIS checks | 16:38 |
15:49 | SQZ | Vicky | LVEA | LOCAL | SQZT7 LOCAL HAZARD work | 18:33 |
15:58 | OPS | Oli | FCES | N | DM checks | 16:33 |
16:00 | EE | Fil | CER | N | Inventory | 18:54 |
16:08 | SEI | Jim | CR | N | HAM1 TFs | 17:51 |
16:09 | FAC | Karen | MidX, beamtube | N | Tech clean | 16:47 |
16:12 | EE | Marc | CER | N | Cosmic ray detector | 18:54 |
16:39 | FAC | Chris | EndY, then X | N | FAMIS checks | 17:41 |
16:40 | FAC | Mitch, Randy | LVEA | N | Craning HEPI piers | 18:50 |
16:45 | OPS | Oli, Richard | Ends | N | DM checks, PA checks | 17:40 |
16:47 | SQZ | Sheila | LVEA | LOCAL | SQZT7 work | 18:33 |
16:51 | EE | Ken | MSR | N | Lights | 20:51 |
17:06 | FAC | Cindy | LVEA | N | Tech clean | 18:37 |
17:08 | CDS | Patrick | Office | N | Camera servo updates | 17:33 |
17:58 | SQZ | Naoki | LVEA | LOCAL | SQZT7 work | 18:32 |
18:04 | VAC | Jordan, Janos | LVEA | N | Check hepta line | 18:48 |
18:19 | EPO | Jeff + Tour | LVEA | N | Tour | 19:05 |
18:41 | PEM | Lance | LVEA | N | Check a bolt | 18:50 |
18:30 | FAC | Cindy | High bay | N | Tech clean | 18:49 |
18:51 | VAC | Gerardo | FCES | N | Measurements | 19:11 |
20:38 | SQZ | Vicky, Sheila | LVEA | LOCAL | SQZT7 pd alignment check | 21:06 |
20:57 | VAC | Gerardo | Xarm | N | Check on annulus pump | 22:06 |
21:46 | CAL | Tony | PCAL lab | LOCAL | PCAL work | 23:46 |
22:18 | FAC | Tyler + 1 | Site, EndY | N | Site tour, air handlers | 23:18 |
WP11393 Add fast Cosmic Ray channels to DAQ
Ray Frey, Robert Schofield, Marc Pirello, Dave:
A new h1pemcs model was installed, which corrects a naming error with the photomultiplier tube test points and adds them to the DAQ at the full 16kHz rate. DAQ restart was required.
WP11387 NonSENS changes to h1oaf and h1bos models
Jenne, Dave:
New h1oaf and h1bos models were installed, no DAQ restart was needed.
WP11391 DAQ Frame Writers storing frame file MD5 checksums
Jonathan, Dave:
Jonathan reconfigured the frame writers, via puppet, to store the MD5 checksum for all gwf files written. DAQ restart was required.
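For reference, the stored checksum is the standard MD5 digest of each written gwf file; a minimal sketch of computing one (illustrative only, not the frame-writer code, and the path below is made up):

import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Return the MD5 hex digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# e.g. md5_of_file("/frames/full/H-H1_R-1377300000-64.gwf")  # hypothetical path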
Adding New Digivideo Saturated EPICS Channels to the DAQ
Patrick, Dave:
Patrick made an EPICS IOC change to the new video server, which created a SATURATED channel for each camera. I modified the script which generates H1EPICS_DIGVID.ini to add these channels to the DAQ. DAQ restart was required.
DAQ Restart
Jonathan, Dave:
The DAQ and the EDC were restarted for the above tasks. As was seen before, FW0 spontaneously restarted soon after it started running -- in this case after it had written only one full frame file, and before we got a chance to start FW1.
Also as before, both GDS0 and GDS1 needed a second restart to sync up their channel lists.
Swept after maintenance activities ended. The only thing out of the ordinary was the equipment plugged into the OM2 heater driver in the rack near HAM6 (picture attached).
Last week and yesterday I regenerated the NonSENS c-code for all of the types of subtraction I have in the online infrastructure (Jitter, LaserNoise, LSC_PRCL on h1oaf, and LSC MICH/SRCL, ASC Hard, Mainsmon, and calibration lines on h1bos), using either cdsws03 or opslogin0. Dave rebooted both models during maintenance today.
Some of the previous c-code had been generated early this spring using one of the clusters, with an old-ish version of NonSENS. It seems that the versions of NonSENS were different enough that, to get good subtraction, I needed to train on the same version of NonSENS that the model was compiled with. Since the LHO control room workstations have been upgraded with more memory, it works quite well to do the training in the control room, so I've compiled everything using the newer version of NonSENS on the control room workstations. This re-compile last week for the Jitter subtraction fixed some things I was confused about (see, e.g., alog 72276), so I'm hopeful that it'll fix the similar minus-sign issue I found the other day in the LSC subtraction (alog 72397).
The attached screenshot is there to remind me when, prior to this week, I had last generated each of the c-codes for the various subtraction types.
We've begun relocking after maintenance activities have mostly finished.
WP 11398. I updated the pylon-camera-server code on h1digivideo3 to 0.1.10. This version adds an attempted fix for the "memory leak" and an EPICS channel to indicate whether the current frame is saturated: {channel_prefix}SATURATED is 1 if and only if the maximum pixel value of the frame is greater than or equal to the pixel dynamic range (read only). I am currently monitoring H1:CDS-MONITOR_H1DIGIVIDEO3_MEMORY_AVAIL_PERCENT to see if it improves the "memory leak".
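The saturation test itself is a one-liner on the frame data; a minimal sketch of the logic (not the actual pylon-camera-server code; the bit depth, and reading "dynamic range" as the maximum representable pixel value, are assumptions):

import numpy as np

def frame_is_saturated(frame: np.ndarray, bit_depth: int = 8) -> int:
    """Return 1 if any pixel has reached the top of its dynamic range, else 0."""
    max_value = (1 << bit_depth) - 1  # e.g. 255 for 8-bit pixels
    return int(frame.max() >= max_value)

# Example: a synthetic 8-bit frame with one saturated pixel
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100, 200] = 255
print(frame_is_saturated(frame))  # -> 1, i.e. the SATURATED channel would read 1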
J. Kissel, J. Warner

Given the lovely "natural experiment" of the Aug 25th 2023 ~3.2 [deg F] / 1.8 [deg C] temperature increase "impulse" over 4 hours at EY (see LHO:72444 and LHO:72428), I wanted to understand both
(a) how the IFO handles it / what caused the lock loss, and
(b) the levels and time-scales of alignment change that resulted in bad alignment for *several* lock stretches -- indeed *days* after the impulse.

We often wave our hands saying things like "well, the vacuum system acts like a low pass filter, with a time constant of [[insert hand-waiver's favorite time-scale on the order of hours]]." I wanted to see if we could quantify that, and if not, add a bit more clarity to how complicated the situation is.

To do so, I looked at the Y end station's signals in Z, RZ, PIT, and YAW that are either (1) out of loop, or (2) when in-loop, the loop's feedback control output, using the "classic trick" of approximating G/(1+G) ~ 1 where G >> 1, such that (plant) * CTRL = the out-of-loop signal as though the loop wasn't there. Those sensors include:
- HPI ST1 ISO OUT (the IPS, under DC-coupled feedback control) -- calibrated into nano- meters or radians
- ISI ST1 ISO OUT (the CPS, under DC-coupled feedback control) -- calibrated into nano-
- ISI ST2 ISO OUT (the CPS, under DC-coupled feedback control) -- calibrated into nano-
- SUS ETMY M0 LOCK OUT (i.e. the WFS, under DC-coupled global ASC control) -- calibrated into micro- meters or radians
- SUS ETMY M0 DAMP IN ("out-of-loop" because the local damping loops are AC-coupled) -- calibrated into micro-
- SUS TMSY M1 DAMP IN (equally "out-of-loop") -- calibrated into micro-
- ETMY L3 optical levers -- calibrated into micro-
(I'm pleasantly surprised at how well they all agree, to the ~0.25 micro- kind of level that I have for these trends.)

I conclude:
(I) The IFO yaw is most impacted by the SEI system's RZ motion, due to the Z-to-RZ cross-coupling of the radially symmetric system of triangular ISI blade springs, as the blades sag from the temperature increase:
  (i) the total SEI system's yaw swung ~2 [urad] during the excursion, dominated by ISI ST1,
  (ii) the SUS ETMY and SUS TMSY follow this input in common, and
  (iii) ISI ST1 takes the longest time to recover alignment -- trending over *days* slowly back to the pre-impulse equilibrium -- and is still not yet there as of Aug 28.
(II) The IFO pitch is most impacted by the ETMY and TMSY SUS systems' expected pitch and vertical sag from the temperature increase:
  (i) the IFO's global alignment drives the pitch of ETMY, which drifted *down* in pitch by over ~10 [urad] before losing lock, presumably from running out of range,
  (ii) the ASC signals seem to slowly drive ETMY off into the weeds trying to recover the original, pre-impulse alignment, causing *eventual* subsequent lock losses as they push the optic *past* the pre-impulse alignment position,
  (iii) the TMS, which does not have global control, pitches a similar amount, ~14 [urad] in the *opposite direction, up*, and also takes *days* to get back to the original value (somewhat alleviated), and
  (iv) the fact that the sus-point blades are the *same* for the QUAD and the TMS, they're the biggest blades in either SUS, and the order of pitching is about the same implies to me that the pitching is dominated by the upper, sus-point blades.

I attached the trends that drive me towards these conclusions. Give yourself time -- I've stared at these all afternoon to come to these conclusions.
And honestly, I *still* don't think I've looked at enough plots (e.g. I don't show the ETMY alignment sliders, which are the operators and/or initial alignment trying to make up for the SEI blade yaw and SUS blade pitch). I also attach a .txt file that goes into more detail about how I calibrated the various CTRL signals, including where I got the transfer function values that scale the trends you see.
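Spelling out the "classic trick" above (standard single-loop algebra; d is the free disturbance, P the plant from control output to sensed DOF, K the controller, and G = PK the open-loop gain):

y = \frac{d}{1 + PK}, \qquad
u = -K\,y = -\frac{K\,d}{1 + PK}, \qquad
P\,u = -\frac{PK}{1 + PK}\,d \;\approx\; -d \quad \text{for } |PK| \gg 1,

so within the bandwidth of the DC-coupled loops, (plant) * CTRL reproduces (minus) the out-of-loop signal, which is why the ISO OUT / LOCK OUT control channels can be read as alignment drifts.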
Very Interesting!
I've added a few more plots which show the in-loop motion as measured by the CPS sensors on ETMY. These show that the platform didn't move down or twist during the temperature excursion. This is what you would expect, given Jeff's plots from above - the springs sag, and the servos compensate. That's all just peachy, so long as there is no yaw seen at the optic - but the oplev does see yaw.
Either:
- there is some yaw in the ISI which is not seen by the CPS sensors (e.g. the sensor itself is temperature sensitive, but this should be pretty small), or
- the yaw is just from SUS, or
- oplev is affected by temperature, or
- the yaw is coming from somewhere else (HEPI, piers, SUS, the devil, etc)
I've not thought about this very hard yet, but I attach a 20-hour time stretch from the temp sensor (calibration is crazy, but the shape matches; not sure what's up) and 4 CPS cart-basis sensor signals (calibration should be nanometers or nanoradians). The in-loop change on the CPS is less than 1 nanoradian.
J. Kissel, D. Barker

As of today, Dave helped me install the new front-end, EPICS-controlled oscillators discussed in LHO:71746. Then, after crafting a few new MEDM screens (see comments below), I've turned ON some of those oscillators in order to replace the unstable function of the CAL_AWG_LINES guardian. So, there are no "new" calibration lines (not since we turned CAL_AWG_LINES back ON last week at 2023-07-25 22:21:15 UTC -- see LHO:71706) -- but they're now driven by front-end, EPICS-controlled oscillators rather than by guardian using the python bindings for awg (which was unstable across computer crashes and other connection interruptions). This is true as of the first observation segment today: 2023-08-01 22:02 UTC.

However, due to a mishap with me misunderstanding the state of the PCALY SDF system (see LHO:71879), I accidentally overwrote the PCALXY comparison line at 284.01 Hz, and we went into observe. Thus, the short observation segment between 22:02 - 22:11 UTC is out of nominal configuration, because there's no PCALY line contributing to the PCALXY comparison. This was rectified by the second observation segment starting 2023-08-01 22:16 UTC.

Also, because of these changes, the subtraction team should switch their witness channel for the DARM_EXC frequencies to H1:LSC-CAL_LINE_SUM_DQ. The PCALY witness channel remains the same, H1:CAL-PCALY_EXC_SUM_DQ, as the newly used oscillators sum into the same channel. Below, I define which oscillator number is assigned to which frequency.

Here's the latest list of calibration lines:

Freq (Hz) | Actuator | Purpose | Channel that defines Freq | Changes Since Last Update (LHO:69736)
8.825 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC1_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
8.925 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC5_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
11.475 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC2_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
11.575 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC6_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
15.175 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC3_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
15.275 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC7_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
24.400 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC4_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
24.500 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC8_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
15.6 | ETMX UIM (L1) | SUS \kappa_UIM excitation | H1:SUS-ETMY_L1_CAL_LINE_FREQ | No change
16.4 | ETMX PUM (L2) | SUS \kappa_PUM excitation | H1:SUS-ETMY_L2_CAL_LINE_FREQ | No change
17.1 | PCALY | actuator kappa reference | H1:CAL-PCALY_PCALOSC1_OSC_FREQ | No change
17.6 | ETMX TST (L3) | SUS \kappa_TST excitation | H1:SUS-ETMY_L3_CAL_LINE_FREQ | No change
33.43 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC4_OSC_FREQ | No change
53.67 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC5_OSC_FREQ | No change
77.73 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC6_OSC_FREQ | No change
102.13 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC7_OSC_FREQ | No change
283.91 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC8_OSC_FREQ | No change
284.01 | PCALY | PCALXY comparison | H1:CAL-PCALY_PCALOSC4_OSC_FREQ | Off briefly between 2023-08-01 22:02 - 22:11 UTC, back on as of 22:16 UTC
410.3 | PCALY | f_cc and kappa_C | H1:CAL-PCALY_PCALOSC2_OSC_FREQ | No change
1083.7 | PCALY | f_cc and kappa_C monitor | H1:CAL-PCALY_PCALOSC3_OSC_FREQ | No change
n*500+1.3 (n=[2,3,4,5,6,7,8]) | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC1_OSC_FREQ | No change
As part of deprecating CAL_AWG_LINES, I've updated the ISC_LOCK guardian to use the new main switch for the DARM_EXC lines for the transitions between NOMINAL_LOW_NOISE and NLN_CAL_MEAS. That main switch channel is H1:LSC-DARMOSC_SUM_ON, which lets excitations flow through to the DARM error point when set to 1.0 (and blocks them when set to 0.0). I've committed the new version of ISC_LOCK to the userapps repo, rev 26039.
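For illustration, the guardian-side usage amounts to flipping that single switch around the calibration-measurement states; a hypothetical sketch (not the actual ISC_LOCK code), assuming ezca:

from ezca import Ezca

# In a guardian state the `ezca` object is provided automatically; standalone:
ezca = Ezca()  # channel prefix (e.g. 'H1:') is taken from the environment

# Entering NLN_CAL_MEAS: let DARM oscillator excitations through to the DARM error point
ezca['LSC-DARMOSC_SUM_ON'] = 1.0

# Back to NOMINAL_LOW_NOISE: block the excitations again
ezca['LSC-DARMOSC_SUM_ON'] = 0.0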
Here are the updated MEDM screens in /opt/rtcds/userapps/release/lsc/common/medm/:
- LSC_OVERVIEW.adl
- LSC_DARM_EXC_OSC_OVERVIEW.adl
- LSC_CUST_DARMOSC_SUM_MTRX.adl
The new DARM oscillators screen (LSC_DARM_EXC_OSC_OVERVIEW.adl) is linked in the top-middle of LSC_OVERVIEW.adl. The only sub-screen on LSC_DARM_EXC_OSC_OVERVIEW.adl is the summation matrix (LSC_CUST_DARMOSC_SUM_MTRX.adl). I have not yet gotten to adding all the new PCAL oscillators to their MEDM screens, but I'll do so in the fullness of time.
detchar-request git issue for tracking purposes.
I found a bug in /opt/rtcds/userapps/release/lsc/common/medm/LSC_DARM_EXC_OSC_OVERVIEW.adl where DARMOSC1's TRAMP field was errantly displayed as all 10 oscillators' TRAMPs, a residual from the copy-paste I did during the screen generation. Fixed it; now committed to the above location as of rev 26170.
Finally got around to updating the PCAL screens. Check out, in /opt/rtcds/userapps/release/cal/common/medm/:
- PCAL_END_EXC.adl
- CAL_PCAL_OSC_SUM_MATRIX.adl
as of userapps repo rev 26179. See attached screenshots.
On cdsws04 there is a script running that will step the DARM offset up and down for a few minutes, then step up the heat on OM2, wait two hours for the thermal transient, and repeat this process 4 times.
The DARM offset steps will cause an SDF diff that will knock H1 out of observing; after the script finishes each set of DARM offset steps and moves OM2, the operators can take us back to observing. The OM2 heater will cause some SDF diffs which can be accepted for tonight.
If the operators need to stop the script (if there is a reason to stand down, or if H1 loses lock), you can hit Ctrl-C in the terminal on cdsws04.
This script also sets the DARM offset TRAMP to 5 seconds; I've accepted this in SDF for now, but it will go back to 1 second next lock.
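A rough sketch of the loop the script performs, with hypothetical step sizes and dwell times (the DARM offset channel name and voltage values are assumptions; the real script on cdsws04 may differ), assuming ezca:

import time
from ezca import Ezca

ezca = Ezca()  # channel prefix from the environment

DARM_OFFSET_CHAN = 'OMC-READOUT_X0_OFFSET'    # assumed DARM offset channel
OM2_HEATER_CHAN = 'AWC-OM2_TSAMS_POWER_SET'   # OM2 TSAMS heater request
HEATER_STEPS_V = [1.15, 2.30, 3.35, 4.60]     # assumed heater voltage steps

def step_darm_offset(nominal, delta=0.1, dwell_s=60):
    """Step the DARM offset up, down, and back to nominal, dwelling at each value."""
    for offset in (nominal + delta, nominal - delta, nominal):
        ezca[DARM_OFFSET_CHAN] = offset
        time.sleep(dwell_s)

nominal_offset = ezca[DARM_OFFSET_CHAN]
for voltage in HEATER_STEPS_V:
    step_darm_offset(nominal_offset)   # a few minutes of DARM offset steps
    ezca[OM2_HEATER_CHAN] = voltage    # step up the OM2 heat
    time.sleep(2 * 3600)               # wait two hours for the thermal transient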
Because of EQ locklosses the operators stopped this script twice (71493 and 71492). Just now I've started it for the last step. Hopefully it is not overly confusing that we got this data set across three different locks; we may want to re-run this sometime.
This aLOG marks the start of the OM2 heater being ON and "HOT" permanently, from 2023-07-19 17:16:16 UTC (10:16:16 PDT) until now (2023-08-29, and into the foreseeable future).

Below, I document the OM2 TSAMS heater's "step period" cross-referenced against the observation intent bit being set high, using the channels
- H1:AWC-OM2_TSAMS_POWER_SET
- H1:AWC-OM2_TSAMS_THERMISTOR_1_TEMPERATURE
- H1:AWC-OM2_TSAMS_THERMISTOR_2_TEMPERATURE
as the channels which indicate the requested voltage applied to the TSAMS heater and the corresponding temperatures of the TSAMS heater.

- 2023-07-18 10:33:28 UTC (2023-07-18 03:33:28 PDT): [lock loss], 0.00 V -- end of the last observation stretch with OM2 COLD
- 2023-07-19 00:00:37 UTC (2023-07-18 17:00:37 PDT): OBSERVING, 1.15 V
- 2023-07-19 00:32:08 UTC (2023-07-18 17:32:08 PDT): [lock loss]
- 2023-07-19 03:34:55 UTC (2023-07-18 20:34:55 PDT): OBSERVING
- 2023-07-19 05:31:22 UTC (2023-07-18 22:31:22 PDT): out of observe
- 2023-07-19 05:35:24 UTC (2023-07-18 22:35:24 PDT): 2.30 V
- 2023-07-19 05:38:28 UTC (2023-07-18 22:38:28 PDT): OBSERVING
- 2023-07-19 07:38:51 UTC (2023-07-19 00:38:51 PDT): [lock loss]
- 2023-07-19 07:39:24 UTC (2023-07-19 00:39:24 PDT): 3.35 V
- 2023-07-19 10:01:30 UTC (2023-07-19 03:01:30 PDT): OBSERVING
- 2023-07-19 17:06:31 UTC (2023-07-19 10:06:31 PDT): out of observe
- 2023-07-19 17:10:35 UTC (2023-07-19 10:10:35 PDT): 4.60 V
- 2023-07-19 17:16:16 UTC (2023-07-19 10:16:16 PDT): OBSERVING

Thus, as of 2023-07-19 17:16:16 UTC (10:16:16 PDT), the OM2 heater has been "ON" and "HOT" (with 4.6 V requested, and temperatures of 33.1 and 56.7 [deg C] on thermistors 1 and 2, respectively).

Note -- as of 2023-08-15 20:15:55 UTC (13:15:55 PDT), Keita disconnected the Beckhoff system that monitored the TSAMS temperature -- see LHO:72241.
Since we have relocked with a cold OM2, our range has dropped to 123 Mpc. Some of this range loss can be explained by the increase of the 120 Hz jitter peak, but there is also a loss of low frequency range (see first plot). This loss of sensitivity is likely due to an increase in MICH and PRCL coherence (see second plot). Gabriele and I performed an iterative retuning of MICH feedforward during last week's commissioning time, using MICH feedforward injection data from when OM2 was hot. Now that OM2 is cold, it appears this new iterative feedforward is no longer optimal. Luckily, the previous MICH feedforward was tuned on June 22, after powering down to 60W and with a cold OM2. If we revert to that MICH FF, it should take care of some (perhaps all) of the low frequency sensitivity loss. We can investigate later if further iterative tuning is needed.
What to change: MICH FF filter from FM3 to FM5. This should be changed in the LOWNOISE_LENGTH_CONTROL guardian state (ISC_LOCK.py line 5431, change "FM3" to "FM5"). It can also be changed in lock: ramp the MICH FF gain to zero, switch FM3 to FM5, and ramp the gain back to 1 (see the sketch below). This will also need to be SDFed in Observe. This is a very quick fix; I recommend that if we go out of observe or lose lock it should be made promptly.
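A sketch of that in-lock swap (assuming ezca and that the MICH feedforward lives in the LSC-MICHFF filter bank; double-check the bank name and ramp time before using anything like this):

import time
from ezca import Ezca

ezca = Ezca()  # channel prefix from the environment

FF_BANK = 'LSC-MICHFF'  # assumed MICH feedforward filter bank name

# Ramp the feedforward gain to zero so the filter swap is glitch-free
ezca[FF_BANK + '_TRAMP'] = 5
ezca[FF_BANK + '_GAIN'] = 0
time.sleep(6)

# Swap filter modules: FM3 (OM2-hot tuning) off, FM5 (June 22 OM2-cold tuning) on
ezca.switch(FF_BANK, 'FM3', 'OFF')
ezca.switch(FF_BANK, 'FM5', 'ON')

# Ramp the gain back to 1
ezca[FF_BANK + '_GAIN'] = 1
time.sleep(6)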
There is also increased PRCL coherence, but that is likely to change once the MICH FF is fixed. It doesn't appear that the SRCL FF is affected by the OM2 change.
Edit: I updated the ISC_LOCK code. It will need to be reloaded whenever we next lose lock. Tagging opsinfo. Also expect an SDF diff for the LSC model.
The filter has been changed back to FM5. I think this bought us a few more Mpc; the previous lock was around 125 Mpc and we're at 130 Mpc now. The rest of the limit to the range appears to be from jitter peaks such as the 120 Hz peak, the 52 Hz peak, and other 13 Hz harmonics.
There is still some MICH coherence present (see the attachment), which indicates to me we could get a bit more range around 40 Hz if we did another iterative tuning. I don't know what our plan for the OM2 TSAMS is; we should hold off on the iterative tuning until we are happy with the settings there. Once the OM2 and OMC alignment fiddling is complete, dedicating maybe 20 minutes of future commissioning time to this would be beneficial.
This aLOG implicitly marks the start of a brief, 6-day period when we operated with the OM2 TSAMS system "cold" again, after originally turning it on Jun 27 2023 12:04:02 UTC (05:04:02 PDT -- see LHO:70849).

The last observation-ready segment with the OM2 TSAMS heater ON and "HOT" ended on 2023-07-12 14:48:47 UTC (07:38 PDT). The first observation-ready segment with the OM2 TSAMS heater OFF and "COLD" of this 6-day period started 2023-07-13 01:04:41 UTC (2023-07-12 18:04:41 PDT). (It gets turned on again, "permanently," on 2023-07-19, with the first observation-ready segment with the TSAMS back ON starting 2023-07-19 00:00:37 UTC -- see LHO:71484.)

Here, ON and "HOT" means:
- 4.6 V requested of the heater (H1:AWC-OM2_TSAMS_POWER_SET), and
- its temperature steady at 33.02 and 56.60 [deg C] on thermistors 1 and 2, respectively (H1:AWC-OM2_TSAMS_THERMISTOR_[1/2]_TEMPERATURE).
Out of Observing 12:00 to 12:05 UTC for DARM offset steps as described in 70835; back in Observing now with H1:AWC-OM2_TSAMS_POWER_SET set to 4.6, SDF accepted for the 2-hour test.
As OM2 warms up, our Kappa C is dropping and the reported range is increasing, see plot. Our SQZ BLRMS are reporting better squeezing around 100 Hz, visible in DARM too, see attached. Unsure if this is real or due to the calibration changing...
A no-SQZ time with hot OM2 was taken from 14:11 to 14:16 UTC. Plot attached.
Optical gain seems to have reduced by ~2%; cavity pole higher by ~3 Hz.
Looking closer at the sqz vs. no-sqz times from Camilla, at hot vs. cold OM2 settings, here are some things I notice:
Coherences for a time with lower range (OM2 cold) and higher range (OM2 hot):
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_GDS_1371896710_lower_range/
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_GDS_1371906344_higher_range/
Coherence with jitter is reduced with the hot OM2. Also there is some broadband improvement.
SRCL coherence is slightly larger when the range is higher, so there might be even more improvement to be gained with a retuned FF.
Also, CHARD_P has larger coherence, while CHARD_Y is slightly better, so I guess the optimal A2L must have changed.
Tagging CAL. This is when the OM2 TSAMS heater first gets turned ON during O4. As Camilla indicates, the "ON" button was hit -- i.e. H1:AWC-OM2_TSAMS_POWER_SET was set to 4.6 V -- at 2023-06-27 12:04:02 UTC (05:04:02 PDT); yes, at 5 a.m. on the Tuesday prior to maintenance day. She was on OWL shifts during O4, prior to when owl shifts became remote/on-call. We were back in observing by 2023-06-27 12:05:30 UTC (05:05:30 PDT).

The thermistors on the TSAMS heater unit took much longer to thermalize, with
- H1:AWC-OM2_TSAMS_THERMISTOR_1_TEMPERATURE taking *days* to reach equilibrium at 33.0 [deg C], and
- H1:AWC-OM2_TSAMS_THERMISTOR_2_TEMPERATURE taking *hours* to reach equilibrium at 56.60 [deg C].

Indeed the calibration measurements taken on Tuesday 2023-06-28 01:50 UTC (2023-06-27 18:50 PDT -- see LHO:70902 and LHO:70908), in the lock and observation stretch *after* the above-mentioned turn-on segment, were taken in the middle of the THERMISTOR_2_TEMPERATURE thermalization. The TSAMS heater remained ON until 15 days later, 2023-07-12 14:48:47 UTC -- LHO:71285.
Sheila suggested that we should skip shuttering ALS (as we did by hand during one of the locks yesterday), so that we don't have to re-lock the green lasers when we're ready to check the green initial alignment setpoints. So, I've uncommented the edge that we've used in the past, that allows us to jump from DHARD_WFS to CARM_OFFSET_REDUCTION.
In addition to that, I've added a flag to lscparams and have DHARD_WFS check that flag: depending on its value, DHARD_WFS will either return True or return the PARK_ALS_VCO state. I've loaded this, but we're already past this point in the guardian, so we'll see how it works next lock.
Attached is a screenshot showing what I did (in case it's not the right thing to have done).
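For context, the pattern being described looks roughly like this (a hypothetical sketch, not the actual ISC_LOCK/lscparams code; the flag name is made up):

from guardian import GuardState

import lscparams  # hypothetical: flag added to lscparams, name assumed here

class DHARD_WFS(GuardState):
    def run(self):
        # ... the usual DHARD WFS engagement / convergence checks go here ...
        if lscparams.shutter_als_for_lock:
            return 'PARK_ALS_VCO'  # returning a state name makes guardian jump there
        return True                # returning True lets guardian follow the normal edges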
We seem to have successfully left the greens open this lock. If I think about it, I'll switch the flag to check the opposite behavior, for next lock.
This lock, I tried with the flag set to False, and it did what I wanted: it went through shuttering ALS, then kept going toward the requested state. So I think it works.
Jenne, Sheila
We found this logic confusing when we tried to request DHARD_WFS as part of diagnosing locking problems, since it forces DHARD_WFS to return a later state. We removed this logic and instead added a weight of 5 to the edge from DHARD_WFS to CARM_OFFSET_REDUCTION. If someone wants to return to locking with green on, they can change this weight to 1 by editing the ISC_LOCK edges (line 6157 for now):
('DHARD_WFS', 'CARM_OFFSET_REDUCTION', 5), #if you want to leave green locked (to reset green refs) change this weight to 1)
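For reference, guardian follows the lowest-total-weight path from the current state to the request, so in the ISC_LOCK edge list this looks something like the sketch below (only the weighted edge quoted above is confirmed here; the neighboring edge and its weight are assumptions):

edges = [
    # ... other ISC_LOCK edges ...
    ('DHARD_WFS', 'PARK_ALS_VCO', 1),           # assumed default path: shutter ALS
    ('DHARD_WFS', 'CARM_OFFSET_REDUCTION', 5),  # set this weight to 1 to leave green locked
    # ... other ISC_LOCK edges ...
]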