ISC_LOCK.py changes since 2024-01-07
Jonathan, Patrick, Dave:
Around 14:00 yesterday (Wed 06mar2024) we upgraded the camera network link to h1digivideo3 from 1GE TPE to 10GE fiber. After running overnight, all of the cameras on this server are still running and, with the exception of ETMY, no further VALID=0 events have been seen (2 day trend attached, upgrade at the 18h mark).
ETMY continues to have an hourly VALID=0 which flashes the camera client images blue-screen for a few seconds. This happens at roughly the 20 minute mark in the hour and it slowly advances through the hour.
The ETMY periodicity is not changed by h1digivideo3 reboots, suggesting it is in the camera itself. To test this, I power cycled the ETMY camera at 08:34 this morning.
Power cycling the ETMY camera (h1cam27) appears to have fixed the hourly blue-screen flashing.
We have had no camera drop-outs or VALID=0 issues over the past 24 hours. Looks like h1digivideo3's problems have been resolved, so I'm closing out FRS30615.
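As an aside, this is roughly how one could keep an eye on a camera VALID flag from a workstation. A minimal sketch only (not the production camera monitoring), assuming pyepics is available; the channel name is a guess, not a confirmed H1 channel.

from epics import camonitor
import time

def on_change(pvname=None, value=None, **kw):
    # print a timestamped note whenever the VALID flag drops to 0
    if value == 0:
        print(f"{time.ctime()}: {pvname} dropped to 0 (blue-screen flash)")

camonitor('H1:VID-CAM27_VALID', callback=on_change)  # hypothetical channel name
while True:
    time.sleep(60)  # camonitor runs in the background; just keep the script alive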
TITLE: 03/07 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
Arrived with Sheila & Gabriele running a Manual Initial Alignment. Randy and Eric are moving items around the site as well. Low winds currently and a couple of earthquakes overnight (5 & 7 hrs ago). No alarms or major redness on the CDS Overview.
Whoops--missed this: Sheila let me know that on DIAG_MAIN there is a notification of "Check PSL Chiller." (RyanC messaged Jason)
[Additionally there is a notification for SEI_CONF "being stuck", but Sheila mentioned work by Jim yesterday (i.e. EX BRS).]
I looked at the PSL chiller this morning and the water level was hovering right above the minimum fill line (this is the most common reason for the "check PSL chiller" message on DIAG_MAIN), so I added 150mL of lab water.
Jenne, Elenna, Jennie W., Camilla, Ryan S., Austin, Matt, Gabriele, Georgia.
Going to offload DRMI ASC and try and offload things by hand as build-ups are decaying.
Got to Offload DRMI ASC.
Elenna changing PRM to improve build-ups. LSC-POPAIR_B_RF90_I_NORM and LSC-POPAIR_B_RF18_I_NORM.
PR2 helped.
SRM and SR2.
All of these needed their pitch changed.
DHARD hit some limits at CARM offset reduction and we lost lock. First image.
DHARD YAW and DHARD PITCH rung up.
Another lockloss (image 2) from state 309, but the lockloss is being reported with the wrong state number.
We think it's losing lock around DHARD WFS.
SRM yaw changed. Improves things.
Noticing some glitches on DHARD Pitch out - not sure what is causing them. Third image.
Changing CHARD alignment to improve ASC.
Lost lock. Possibly from state 305 (DHARD WFS)
PRM pitch being changed helps with build-ups (OFFLOAD DRMI ASC state).
Then stepping the guardian manually up the states making tweaks.
Tried to go from CARM offset reduction to CARM 5 picometers and LSC channels rung up at 60 Hz and we lost lock.
One of the LSC loops might have too much gain.
Elenna is measuring TFs to check the loops, but the DARM measurement is not giving good coherence, so she is increasing the excitation gain.
Power normalisation of some loops uses IM4_TRANS_QPD, and this is very mis-centred, so it may be adding noise. (image 4)
Lockloss due to excitation?
Reached OFFLOAD DRMI ASC again and Elenna moving IM3.
Moved IM4 and there is less clipping on IM4_TRANS.
We lost lock.
DARM is normalised by X_ARM_TRANS and BRS X has been taken out of loop so Jenne changed the normalisation to be with Y-ARM in ISC_LOCK guardian.
Jenne aligning BS and we are in FIND IR.
Lost lock again.
Elenna went to the MANUAL Initial Alignment state.
Had to undo the changes Jenne made to IM3 and IM4.
Gabriele is going to measure DARM with white noise measurement.
This shows the UGF is 15Hz instead of what we think it should be (50Hz).
Increased DARM gain by 50%.
Lockloss.
Problem comes with the DHARD gain increase during the DHARD WFS state (305). Georgia is commenting this out in lines 2337 and 2338 of ISC_LOCK.
Keep losing lock from the LOCKING GREEN ARMS state.
When we lose lock the power normalisation for DARM should reset to 0, but it did not, and so it was causing noise on the ALS DIFF input. Probably because the guardian's DOWN state is not set to do it.
Elenna updated prep for locking to set this to 0, even with the Y arm now used for the power normalisation instead of X.
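For the record, the shape of that fix looks something like the following. This is a hedged sketch only, not the actual ISC_LOCK code, and the channel name is a placeholder.

from guardian import GuardState   # ezca is provided by the guardian environment at runtime

class DOWN(GuardState):
    def main(self):
        # zero the arm-transmission power normalization feeding DARM, now taken
        # from the Y arm instead of the X arm, so a stale value cannot couple
        # noise into ALS DIFF on the next acquisition (placeholder channel name)
        ezca['LSC-DARM_POWER_NORM'] = 0
        return True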
We fell out at CARM to TR.
Adding some overall thoughts/conclusions:
It seems like there is some alignment *somewhere* that is still bad, but we can't figure out what it is. Earlier today we could reach "Prep asc for full ifo", and things seem to have degraded to where we cannot pass DHARD WFS. It seems like part of the issue is matching the alignment of the arms, since engaging DHARD WFS is such a problem. However, we are struggling to correct that alignment during the carm reduction in a way that maintains the lock. Also, some earlier attempts involved us trying to fix some of the input pointing that could be clipping, but the changes that Jenne made to IM3 and IM4 were "bad" in the sense that the INPUT ALIGN state no longer worked after that change.
We are also worried about the DARM unity gain in "darm to rf". It seems low to us (~15 Hz) but we actually don't know how high it should be (it should be closer to 60 Hz by the time carm is on resonance). It's also worrying to see the POP alignment degrade through the carm reduction process, but that's the normal process- we don't have PRC or SRC ASC engaged during the process normally either. We can achieve decent DRMI alignment by hand before the carm offset reduction phase.
It would be helpful to think of some way we can "offload" beneficial alignment steps from lock to lock as we retry carm offset reduction so we don't need to start from scratch every lock process. I think we should still also be concerned with whatever is going on at EX (BRS, ISI, etc). We get saturation warnings for EX that don't match our actions so maybe things aren't great down there.
Adding: Georgia moved the SRM before engaging DHARD and it helped significantly, especially with the glitchiness in the DHARD control signal.
Important to note: we commented out the gain increase for DHARD in the guardian. After the SRM move, Georgia tried increasing the gains again by hand. We immediately lost lock. So the higher DHARD gain is no good and is still commented out in the guardian.
Further notes:
In a fit of frustration I re-engaged the DRMI ASC loops in the guardian except for PRC1 (PRM) and SRC2 (SR2) since those rely on QPDs whose offsets we do not trust. This made life easier, and before engaging DHARD WFS, I adjusted the SR2 alignment to improve the arm matching, and then the PRM alignment to increase the RF18 buildup (SRM is controlled by SRC1 and was offloaded in the DRMI ASC state). I also further walked DHARD in pitch and yaw using the move_arms_dev.py script. This improved the DHARD engagement further and the DHARD alignment converged decently. However, before I could move to the next state, I watched the green arm signals. The ALS Y signal drops out slightly before lockloss. I don't think there is any feedback on this signal, but maybe this is a sign that the arms aren't doing so well after all in this alignment, even though DHARD converges.
My current method: take the IFO to "DRMI_LOCKED_CHECK_ASC" and move PRM and SR2 to improve the POP build ups and camera image. Wait for other ASC signals to reconverge. Then, go to DRMI ASC alignment offload. From there, go to "DARM_TO_RF" and check the arm alignment.
I decided to revert the ITMs to the position they were in the last time we achieved "prep asc for full ifo" (see screenshot). I trended back the ITM oplevs. It appears that the most movement has occurred in ITMY yaw (4.8 versus 1.5). Perhaps this is one reason why our attempts to engage DHARD are failing. After this change I reran manual initial alignment to get the beamsplitter back to a good place.
Even with that change, the ALS Y arm buildup is still less than one (at locking arms green). This seems wrong, but nothing we have done makes it better. We did have a better buildup for ALS Y early yesterday that all our alignment efforts seem to degrade.
Ok, this did not work. Engaging DHARD WFS still pulls the Y arm ALS buildup off and then causes a lockloss. I am leaving it in DOWN. Please check ITM alignment before beginning locking attempts in the morning.
Camilla Naoki Daniel Georgia Nutsinee
This morning we had difficulty seeing squeezing in the HD. The problem was a combination of a railing CLF ISS and alignment. We had to reduce the power sent to the CLF fiber from 1.2 mW back to 0.7 mW (the value prior to the PMC alignment work). The SHG ISS was working fine. We moved FC1, ZM5 and ZM4 (mostly FC1) and eventually saw some squeezing. The OPO IR camera was a useful indication of which DOF we should move these optics in. We optimized the OPO crystal temperature. NLG was 17.39. We tried squeezing at CLF launch powers of 0.07, 0.1, and 0.7 mW. The common mode board gain and squeeze angle were optimized every time the power changed. No significant change in squeezing level was observed. If we lower the CLF or LO gain, a coherence between HD DIFF and LO/CLF around 1-2 kHz can be observed. I suspect this is due to vacuum pump noise on the floor. Reducing the LO gain will degrade the overall squeeze level. Reducing the CLF level will make acoustic noise worse.
The pressures:
HAM7: ~2.9E-7 Torr
HAM8: ~3.3E-7 Torr
Corner: ~4.9E-8 Torr
EX: ~5.4E-9 Torr
Today's activities:
- The X-manifold turbo-station water lines have been updated - now the controller and pump lines are separated
- HAM8 RGA scan is done, details in the comments
- HAM8 IP was valved in
- HAM7 Annulus Ion Pump has railed; it is caused supposedly by a leaky plug on the septum plate, or just the AIP controller - either way, it will be found out soon
- Relay tube - HAM7 - HAM8 further schedule: this volume will be valved in to the main volume around the end of the week
- RV1; FCV1; FCV2; FCV3; FCV4; FCV8; FCV9 will be opened; preferably after the HAM7 AIP issue is solved
HAM8 scans are collected in T240018.
The RGA tree was baked at 150C for 5 days following the replacement of the leaking calibrated leak with a blank.
TITLE: 03/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY:
Locking troubleshooting continues. The commissioning team was able to get to PREP ASC before losing lock, so good progress is being made. Alog on today's locking extravaganza here.
Last shift from me, peace out yall :)
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:29 | SAF | LASER SAFE | LVEA | - | The LVEA is LASER SAFE | 03:14 |
16:00 | FAC | Ken | High Bay | N | Lighting | ?? |
16:09 | FAC | Karen/Kim | LVEA | N | Tech clean (Kim out 1646) | 17:26 |
17:05 | VAC | Jordan | HAM 8 | N | RGA prep | 17:33 |
17:15 | - | Hanford Fire | Site | - | Tumbleweed burn | ?? |
17:47 | SEI | Jim | Remote | N | Restart BRS X | ?? |
18:25 | FAC | Karen | MY | N | Technical cleaning | 19:11 |
18:48 | VAC | Travis/Janos | LVEA | N | Turbo pump water line work | 20:32 |
19:12 | ISC | Sheila/Matt/Trent | ISCT1 | YEYE (LOCAL) | Beam dump installation | 19:46 |
20:33 | VAC | Gerardo, Jordan | LVEA | N | Climbing on HAM6 for turbopump | 21:33 |
21:59 | VAC | Jordan | HAM 8 | N | RGA scans | 23:25 |
22:08 | VAC | Janos | HAM 7 | N | Annulus ion pump work | 22:20 |
22:46 | VAC | Travis | LVEA | N | Disconnect pump cart | 23:12 |
Elenna, Sheila, Gabriele, Jennie W et al.
NB: LHO alog 62110 shows PRG gain plots.
[Jennie W took this log over from Gabriele]
Matt and I checked on the POPAIR checker (added 70882). It was acting correctly, moving to PRMI when RF18 (not RF90) doesn't flash above 95 in the first minute.
Matt lowered the threshold to flashes of RF18 not above 80 in the first 120 seconds. This threshold should be rechecked once we are locking reliably.
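For reference, a hedged sketch of what that checker logic looks like; this is not a copy of the real ISC_DRMI code, and the state name returned on failure and the exact channel suffix are assumptions.

import time
from guardian import GuardState   # ezca is provided by the guardian environment

FLASH_THRESHOLD = 80    # counts; lowered from 95 per the note above
WAIT_SECONDS    = 120   # lengthened from the first 60 s

class CHECK_POPAIR(GuardState):
    def main(self):
        self.t0 = time.time()

    def run(self):
        # watch RF18 (not RF90) flashes while trying to lock DRMI
        if ezca['LSC-POPAIR_B_RF18_I_NORM'] > FLASH_THRESHOLD:
            return True            # DRMI is flashing well enough; carry on
        if time.time() - self.t0 > WAIT_SECONDS:
            return 'PRMI_LOCKED'   # no good flashes: fall back to PRMI (state name assumed)
        return False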
We had issues yesterday with h1dmt1 receiving both gds broadcast streams. Last night John Zweizig and I left it working with only one stream active. Today John and I did two tests.
1. We switched which ethernet port was deactivated. During this test we still saw issues with retransmits during periods with both broadcasts being received. When we disabled the other port that receives the broadcasts, things settled down.
2. We rebooted the machine to see if there was some state that was bad and just needed a reboot. This seems to have done the trick.
Presently we do not know what exactly was cleared up. We checked the network settings (including buffer sizes, tweaks, ...), software package versions, physical links, ... between h1dmt1 and h1dmt2 and could not find a difference that would explain this behavior. h1dmt1 is running, and the control room should have access to all the DMT products that are needed (range, darm, ...).
The CDS view into this is facilitated through two PVs, H1:DAQ-GDS0_BCAST_RETR and H1:DAQ-GDS1_BCAST_RETR. These PVs show the broadcast retransmit requests that h1daqgds0 and h1daqgds1 receive each second. We get a stream of these, usually in sets of 3 (1 for each DMT machine). When the pattern changes and/or we start receiving many more requests, that is a sign that there are problems with the broadcast into the DMT.
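A minimal sketch of watching those retransmit counters from a workstation, assuming pyepics is installed and the PVs are readable; the alert threshold is arbitrary/illustrative, not an operational limit.

import time
from epics import caget

PVS = ['H1:DAQ-GDS0_BCAST_RETR', 'H1:DAQ-GDS1_BCAST_RETR']
NORMAL_MAX = 3   # retransmit requests usually come in sets of 3 (one per DMT machine)

while True:
    for pv in PVS:
        retr = caget(pv)
        if retr is not None and retr > NORMAL_MAX:
            print(f"{time.ctime()}: {pv} = {retr} retransmit requests/s -- "
                  "possible problem with the broadcast into the DMT")
    time.sleep(1)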
WP11753
Jonathan, Dave:
As part of the investigation into recent camera image instabilities we upgraded h1digivideo3's 106 VLAN network port from 1GE copper to 10GE fiber. A Solarflare PCIe card was installed in h1digivideo3 at 14:05 this afternoon.
The new fiber is connected to a spare 10GE port on sw-msr-h1aux (ethernet 3/1/2).
On h1digivideo3 we have left the original copper connection to eno2, the new fiber port is enp1s0f0np0.
Currently the EPICS_LOAD_MON channel being trended by the DAQ is still H1:CDS-MONITOR_H1DIGIVIDEO3_NET_RX_ENO2_MBIT; the new channel is H1:CDS-MONITOR_H1DIGIVIDEO3_NET_RX_ENP1S0F0NP0_MBIT. Jonathan is working on remapping the new port's data onto the ENO2 channel so we don't need a DAQ restart.
I updated the load_mon_epics code on digivideo3 to make the eno2 channel hold the traffic data from ENP1S0F0NP0, so we can keep a consistent trend of the traffic from the cameras.
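To illustrate the idea (a hedged sketch, not the real load_mon_epics code): read the RX byte counter for the new 10GE fiber port and publish the rate under the existing ENO2 channel so the DAQ trend stays continuous. The PV write method and the 1 s update cadence are assumptions.

import time
from epics import caput

IFACE   = 'enp1s0f0np0'                                    # new 10GE fiber port
PV_NAME = 'H1:CDS-MONITOR_H1DIGIVIDEO3_NET_RX_ENO2_MBIT'   # existing trended channel

def rx_bytes(iface):
    # return the cumulative RX byte counter for iface from /proc/net/dev
    with open('/proc/net/dev') as f:
        for line in f:
            name, sep, rest = line.partition(':')
            if sep and name.strip() == iface:
                return int(rest.split()[0])
    raise RuntimeError(f'{iface} not found')

prev, t_prev = rx_bytes(IFACE), time.time()
while True:
    time.sleep(1)
    now, t_now = rx_bytes(IFACE), time.time()
    mbit = (now - prev) * 8 / (t_now - t_prev) / 1e6   # Mbit/s over the last interval
    caput(PV_NAME, mbit)
    prev, t_prev = now, t_now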
As we went through SDF revert yesterday when locking, all the OMC SDFs were reset. I turned off the QPD A offsets and reset the input and output ASC matrices to those we had found allowed us to lock the OMC with ASC and no saturation of the OMC suspension.
First picture is the reference picture from yesterday before the revert. I have not set back the changes to DCPD offsets as I think these were to make sure DCPD SUM output did not dip to a negative number due to dark noise.
Second picture is the OMC model SDFs now.
The guardian has been set to not go through SDF revert when we lose lock; however, the guardian has the ASC gains hard-coded in, so we may want to replace these with the new values in the OMC guardian once we verify they are correct by manually locking the OMC when the full IFO is locked.
I accepted these OMC model values above in SDF. Picture attached.
The new POS_X and ANG_Y gain values (accepted in previous comment's screenshot) have been updated in the OMC_LOCK Guardian's ASC_QPD_ON state (where they are hard-coded in). Changes loaded and updated in svn.
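For context, a schematic sketch of what that hard-coding looks like in a guardian state; the channel names and gain values below are placeholders, the real ones live in OMC_LOCK and the accepted SDF screenshot above.

from guardian import GuardState   # ezca is provided by the guardian environment

NEW_POS_X_GAIN = 1.0   # placeholder: the accepted SDF value goes here
NEW_ANG_Y_GAIN = 1.0   # placeholder: the accepted SDF value goes here

class ASC_QPD_ON(GuardState):
    def main(self):
        # gains are hard-coded here, so they must track any SDF changes by hand
        ezca['OMC-ASC_POS_X_GAIN'] = NEW_POS_X_GAIN   # channel name assumed
        ezca['OMC-ASC_ANG_Y_GAIN'] = NEW_ANG_Y_GAIN   # channel name assumed
        return True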
Per commissioner request, I've made two changes to the early main locking steps as set in ISC_LOCK:
Changes have been loaded and committed to svn.
I've also commented out SQZ_MANAGER from the list of managed nodes in ISC_LOCK. This allows SQZ to work independently without main IFO locking telling SQZ_MANAGER what to do for now.
EDIT: We later learned that lines 214-215 of ISC_LOCK needed to be commented out as well since this is a request of SQZ_MANAGER in the DOWN state.
Furthering this effort as main IFO locking is progressing, I've commented out the first couple lines in the LOWNOISE_LENGTH_CONTROL state which interacts with SQZ_MANAGER, which at this point is not managed.
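To illustrate the shape of those edits, here is a sketch under assumptions rather than the actual ISC_LOCK source; the other node names are examples and the NodeManager import path may differ in the site install.

from guardian import GuardState, NodeManager   # import path may differ locally

nodes = NodeManager(['ALS_XARM', 'ALS_YARM', 'LSC_CONFIGS',   # example managed nodes
                     # 'SQZ_MANAGER',                          # temporarily unmanaged
                     ])

class DOWN(GuardState):
    def main(self):
        # nodes['SQZ_MANAGER'] = 'DOWN'   # commented out (the lines 214-215 change above)
        return True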
Naoki, Vicky - We have undone these guardian changes for the break (brought back SQZ_MANAGER in the list of managed nodes, restored the first few lines of LOWNOISE_LENGTH_CONTROL, and restored lines 214-215 requesting SQZ to DOWN).
SQZ_MANAGER is back to being managed in ISC_LOCK as usual. We will see the lock sequence through a few times and get it running smoothly, and update on that after relocking.
Matthew, Jennie W, Gabriele
In the initialization of the OMCscan code (which gets OMC scan data, analyzes it and then plots it), I updated several values to reflect the transition from OMC003 to OMC001, so that OMC analyses are done accurately. For example, several small changes include:
The values were obtained from T1500060 Table 19, which reports the OMC optical test results for OMC001; note: the conversion from nm/V to MHz/V is found by the relation delta(f)/f = delta(L)/L, where delta(L) is 2*PZTresponse in nm/V, L is the round-trip cavity length, and f is the laser frequency corresponding to 1064 nm, expressed in MHz.
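As a worked example of that relation (numbers here are illustrative: the 12.7 nm/V PZT response is the PZT2 figure discussed in the comments below, and the ~1.13 m round-trip length is an assumed OMC design value, not taken from this entry):

c = 299792458.0                       # m/s
lam = 1064e-9                         # m, laser wavelength
f = c / lam                           # Hz, laser frequency (~281.8 THz)

pzt_nm_per_V = 12.7                   # nm/V, single-pass PZT response (PZT2 figure)
dL_per_V = 2 * pzt_nm_per_V * 1e-9    # m/V, round-trip length change per volt
L_rt = 1.13                           # m, assumed OMC round-trip cavity length

MHz_per_V = f * dL_per_V / L_rt / 1e6
print(f"{MHz_per_V:.2f} MHz/V")       # ~6.3 MHz/V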
Are we sure that the previous OMC used OMC's "PZT2" (12.7nm/V) for the scan, not OMC's "PZT1" (11.3nm/V)?
I mean: there is a possibility that the indication of PZT2 on the screen may not mean PZT2 on the OMC.
Also the response of the PZT is nonlinear and hysteretic.
I'd rather believe the frequency calibration using the cavity peaks (e.g. FSR/Modulation freqs) than the table top calibration of the PZTs.
Good suggestion!
Computing the PZT response from the FSRs we get around 6.3 MHz/V.
And on your note about certainty of using PZT2 response, I am not sure.
I think we usually used the channel PZT2 to perform scans with OMC 003. But yeah, I am not sure if this corresponds to PZT2 on the real OMC. The PZT calibration is only used in the scan analysis to get an initial guess; the final calibrated scan does indeed find the carrier 00 and 45 MHz 00 peaks to fit the non-linearity of the PZT.
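A small sketch of that FSR-based calibration, with assumed numbers (the FSR and the peak voltages below are illustrative, not measured values from this scan):

FSR_MHz = 264.8              # MHz, assumed OMC free spectral range
V1, V2 = 10.0, 52.0          # V, hypothetical PZT voltages of two consecutive carrier 00 peaks
pzt_MHz_per_V = FSR_MHz / (V2 - V1)
print(f"{pzt_MHz_per_V:.2f} MHz/V")   # ~6.3 MHz/V for these example numbers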
No changes in ALIGN_IFO, ALS_ARM (ALS_XARM/YARM), ALS_COMM, ALS_DIFF
ISC_DRMI changes since 2023-12-21
ALS_DIFF changes since 2023-11-07
IMC_LOCK changes since 2023-11-21
Some notes on sqz guardian changes that were made, then reverted: 76154
The change to use TRY as the DARM normalization was reverted back to the formerly-nominal TRX.
When we were using TRY, we had also made some changes in PREP_DC_READOUT_TRANSITION, but those have now been reverted.