I went down to End Y to retrieve the USB stick that I had remotely copied the c:\slowcontrols directory on h1brsey to, and also to try to connect h1brsey to the KVM switch in the rack. I eventually realized that what I thought was a VGA port on the back of h1brsey was probably not one. Instead I found some odd-seeming wiring running from what I am guessing is an HDMI or DVI port on the back of h1brsey, to some kind of converter device, and then to a USB port on a network switch. I'm not sure what this is about, so I am attaching pictures.
Tue Jun 10 10:12:01 2025 INFO: Fill completed in 11min 57secs
Around 11:15 local I observed an outlier on the VEA temperature trend. Zone 2 (Y output) appeared to be running beyond its norm. Because Eric was troubleshooting a heater coil in this particular zone (per WP 12589) this morning, this was not terribly surprising, but I decided to investigate anyway. According to FMCS, heating stage 1 was manually forced on. It appeared to hold at least a 40% heating command in this condition. Neither Eric nor I recall a reason for this being manually enabled. Since disabling it, the heating command has dropped from 40% to 0 and supply temperatures have fallen from 73F to 58F. This might cause a sharper than usual course correction, but I would expect zone 2 to fall in line with the rest of the VEA by day's end. E. Otterman T. Guidry
Sheila, Elenna, Camilla
Sheila was questioning whether something is drifting such that we need an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot, both while powering up/moving spots and while in NLN. It's unclear from the BS alignment inputs plot what's causing this.
This was also happening before the break (see below), and the operators were similarly needing more regular initial alignments then too. One year ago this was not happening, plot.
These large BS PIT changes began 5th-6th July 2024 (plot). The first lock like this happened 5th July 2024 19:26 UTC (12:26 PT), during the day shift: 78877. At the time we were doing PR2 spot moves. There was also a SUS computer restart 78892, but that appeared to be a day after this started happening.
Sheila, Camilla
This reminded Sheila of when we heated a SUS in the past, causing the bottom mass to pitch and the ASC to move the top mass to counteract it. Then, after lockloss, the bottom mass would slowly go back to its nominal position.
We do see this on the BS since the PR2 move, see attached (top 2 left plots). In the green bottom-mass oplev trace, when the ASC is turned off on lockloss, the BS moves quickly and then continues to move slowly over the next ~30 minutes; we do not see similar behavior on PR3. Attached is the same plot from before the PR2 move. Below is a list of the other PR2 positions we tried; all of the other positions have also produced this BS drift. The total PR2 move since the good place is ~3500urad in Yaw.
To avoid this heating and BS drift, we should move back towards a PR2 YAW closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.
I did a bit of alog archaeology to re-remember what we'd done in the past.
To put back the soft turn-off of the BS ASC, I think we need to:
Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight. Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.
I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian, and loaded. We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (eg, from a DRMI lockloss).
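For illustration only, here is a minimal sketch of what a 'soft let-go' Guardian state could look like. The channel name, ramp time, and state structure are assumptions for the sketch, not the actual ISC_DRMI code (ezca is provided by the Guardian framework):

from guardian import GuardState

class LET_GO_BS_PIT(GuardState):
    """Hedged sketch: ramp the BS pitch ASC drive to zero slowly after lockloss,
    so the top-mass offset is released gently instead of being cleared instantly."""
    request = False

    def main(self):
        # assumed: the BS pitch ASC drive lives in the ASC-MICH_P filter bank
        self.bs_pit = ezca.get_LIGOFilter('ASC-MICH_P')
        self.bs_pit.ramp_gain(0, ramp_time=30, wait=False)  # slow 30 s ramp (illustrative value)
        self.timer['ramp_done'] = 30

    def run(self):
        # hold here until the ramp has completed, then let the node move on
        return bool(self.timer['ramp_done'])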
This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different.
In the attached plots, the sharp spike up and decay back down around -8 hours is how it had been looking for a long time (as Camilla notes in previous logs in this thread). Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much and is mostly flattened out after a shorter amount of time. I also note that this time (-2 hours ago) we didn't need to do an initial alignment (which was needed at the -8 hours ago time). However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.
We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments. If needed, we can try to tweak the time constant of the 'soft let-go' so the optical lever signal settles more flatly overall.
The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on in SDF revert. The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1 - I may need to go through and ensure it's on for MICH initial alignment.
RyanS, Jenne
We've looked at a couple of times that the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically done and where it wants to be for the next acquisition of DRMI. Attached is one such example.
By contrast, a day or so ago Tony had to do an initial alignment. On that day, it seemed like the BS took much longer to get to its quiescent spot. I'm not yet sure why the behavior is different sometimes.
Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.
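As a rough way to quantify and compare these settle times, something like the sketch below could be run against the oplev trend. The channel name, GPS time, and the +/-0.5 urad 'settled' band are assumptions for illustration, not an agreed-upon definition:

import numpy as np
from gwpy.timeseries import TimeSeries

LOCKLOSS_GPS = 1234567890                      # placeholder lockloss GPS time
CHANNEL = 'H1:SUS-BS_M3_OPLEV_PIT_OUT_DQ'      # assumed oplev channel name

# fetch an hour of oplev data following the lockloss
data = TimeSeries.get(CHANNEL, LOCKLOSS_GPS, LOCKLOSS_GPS + 3600)
fs = data.sample_rate.value

# take the median of the last 10 minutes as the quiescent value
quiescent = np.median(data.value[-int(600 * fs):])

# "settled" = within an assumed +/-0.5 urad band of quiescent, and staying there
settled = np.abs(data.value - quiescent) < 0.5
idx = (np.max(np.where(~settled)[0]) + 1) if (~settled).any() else 0
print('settle time ~ %.1f minutes' % (idx / fs / 60))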
Last night Corey ran a simulines measurement shortly after the start of the lock, 84908. This measurement was mainly done as a test to confirm simulines wasn't breaking the lock, so we were not well thermalized. We can first report that simulines did not break the lock, so the previous lockloss that occurred during a simulines measurement was likely unrelated to simulines itself.
GPS start of measurement: 1433563079
I did a time machine on the calibration monitor screen for the GPS start of the measurement.
I was able to generate a report by running pydarm report --skip-gds
I have attached the generated PDF from this report, and I took a screenshot of the first page since there is an interesting result. The sensing function shows a large spring, probably because we are operating with a significant SRCL offset, which is designed to compensate for 1.4 degrees of SRCL detuning, 84794.
However, it is important to remember that this calibration measurement was made while the IFO was unthermalized, whereas the SRCL offset measurement linked above was performed with the IFO thermalized.
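For context, one common parameterization of a detuned-SRC sensing function (a hedged sketch of the general form used in pyDARM-style models, not the fitted result from this report) is

C(f) \;\approx\; H_C \,\frac{f^{2}}{f^{2} + f_{s}^{2} - i\, f f_{s}/Q}\;\frac{1}{1 + i f/f_{\mathrm{cc}}}

where H_C is the overall gain, f_cc the coupled-cavity pole, and f_s, Q the optical-spring frequency and quality factor. A significant SRCL offset/detuning pushes f_s away from zero, which is how a "large spring" appears in the sensing plot.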
The results from this calibration report are saved in /ligo/groups/cal/H1/reports/20250610T035741Z/
Camilla and I swept the LVEA this morning between locks. The VAC team still has HAM1 pumps to turn off and valve in, as well as a setup with a laptop on the Y arm near the manifold, and a turbo on the output arm. They will get to this later in the day. Other notable things found on our walkthrough:
The heating coil in zone 2A of the LVEA was repaired this morning. There will be some variation in the trending while the PI loop adjusts.
The CDS SDF has had 1 diff for the past 4 days because the second outlet of the EY Tripplite power strip was turned on around 9am Friday 06jun2025. Strangely, EY's lights were not turned on at all on Friday, so either this was a mistake or something had been plugged into the outlet ahead of time.
Asking around the control room, no one knows why this was turned on. Because driving to EY and entering the VEA is invasive on locking, we elected to turn it off for now.
Following the drop in LN2 level over the last few days (see plot), we decided to bump up the alarm level from 80% to 85% on all CPs to give the vacuum team an earlier warning that the PID control system is not able to maintain a nominal level.
The alarms service was restarted at 08:16 with the following changes:
Channel name="H0:VAC-LX_CP2_LT150_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP2 Pump LN2 Level">
Channel name="H0:VAC-MY_CP3_LT200_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP3 Pump LN2 Level">
Channel name="H0:VAC-MX_CP5_LT300_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP5 Pump LN2 Level">
Channel name="H0:VAC-MX_CP6_LT350_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP6 Pump LN2 Level">
Channel name="H0:VAC-EY_CP7_LT400_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP7 Pump LN2 Level">
Channel name="H0:VAC-EX_CP8_LT500_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP8 Pump LN2 Level">
TITLE: 06/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
H1 was in IDLE when I arrived.
I will start trying to lock now.
I've changed the sign of the damping gain for ITMX 13 in lscparams from +0.2 to -0.2 after seeing it damp correctly in 2 lock stretches. The VIOLIN_DAMPING GRD could use a reload to see this change.
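For context, the change amounts to flipping one sign in the violin damping settings; a hedged, purely illustrative sketch of what such an entry could look like (the real lscparams structure and filter choices differ):

# illustrative only -- not the actual lscparams layout
vio_damp_settings = {
    'ITMX': {
        13: {'filters': 'FM1+FM10',  # placeholder filter modules
             'gain': -0.2},          # was +0.2; sign flipped after it damped correctly in 2 locks
    },
}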
I have loaded the violin damping guardian, since the setting RyanC found still works.
TITLE: 06/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: n/a tonight
SHIFT SUMMARY:
LOG:
Elenna C., Corey G., Oli P.
With this being the first lock at NLN post the HAM1 vent, and since we had a rung-up fundamental violin, we decided to address it for overnight operation. It looked like it was our infamous ITMy MODE5/6.
Before Oli left, they mentioned that if IY M5/6 rings up it might be worth trying the 2W settings.
Sure enough, it was slowly ringing up, and with Elenna also assisting, we decided to change the settings. Here they are:
NEW:
ITMy MODE5 : FM6 + FM7 + FM10, gain = 0.01
OLD:
ITMy MODE5 : FM6 + FM8 + FM10, gain = 0.01
The 3rd image shows when the change was made (marked with the cursor) and how the mode begins turning around about 10 min later. Saved this new change in lscparams and hit LOAD on the VIOLIN_DAMPING and ISC_LOCK nodes.
I did some quick reconciliation of some of the observe SDF diffs. An easy one was all the dark offsets. I also accepted many of the ASC SDFs, since I am responsible for many of them. This includes things like PD phasing, new input matrix values, gain changes, etc. I also accepted several LSC SDFs, which likewise include PD phasing, matrix changes, feedforward changes, and the MICH filter change. The only one I am not familiar with in the attached screenshot is the MCL trig threshold.
Headline summary: We are very nearly back to NLN, prevented from returning only by the violin modes, which are too high to engage OMC whitening. We have not yet been able to calibrate because of a very fast lockloss of unknown origin.
The alog was down for most of the afternoon and evening, so I will do my best here to copy messages from the Mattermost chat, which served as the temporary alog.
Minor struggles in returning to full lock:
Once we achieved full power, we proceeded to try to solve some of the final instabilities left over from last week. Changes made to avoid locking stability problems:
The plan was to try testing simulines again; however, we have had multiple very fast locklosses with no known origin. Two have happened shortly after arriving at OMC whitening. These are not coming from ASC, and I don't see any ringup in DARM or the LSC loops.
Had a look at the fast locklosses from OMC_WHITENING; the lockloss tool tags windy for both (~20mph, so not bad), and we are waiting for the violins to damp before engaging OMC_WHITENING.
Note, although these aren't slow ring ups, these are our typical type of locklosses and are not IMC fast locklosses.
TITLE: 06/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 18mph Gusts, 9mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.17 μm/s
SHIFT SUMMARY:
Covering for Ryan C.
OFI Returned to aligned... No output. Apparently this is normal.
SRM already aligned by Elenna.
Attempting to start relocking 19:00 UTC
Potential Lockloss from FIND_IR when I requested SEI_ENV to cycle between Maintenance & CALM.
Lots of saturations when we ran through LOWNOISE_COIL_DRIVERS
Unknown lockloss from Low_Noise ASC (we manualed over a handful of states and ended up manualing back to this state).
Adjusted the ALS polarization of both X and Y arms.
Ran a Manual_Initial_Alignment; finished @ 22:26 UTC.
No internet due to a GC switch restart at 22:40 UTC. Internet back up a few moments later.
Lots of saturations when we ran through LOWNOISE_COIL_DRIVERS again, we did not lose lock either time.
Lockloss due to DHARD P ring up while in LOW_NOISE_LENGTH_CONTROL.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:47 | OPS | TJ, Camilla | LVEA | N -> Y | Put in bellows | 15:28 |
14:56 | FAC | Kim, Nellie | LVEA | N -> Y | Technical clean | 15:23 |
15:03 | SAF | Richard | LVEA | N | Safety checks | 15:20 |
15:07 | OPS | TJ | LVEA | N -> Y | LASER hazard transition | 15:28 |
15:20 | VAC | Jordan & Gerardo | LVEA | Y | Checks & Balances on HAM1 | 16:00 |
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD ദ്ദി(⎚_⎚) | 22:54
15:24 | FAC | Kim, Nellie | FCES | N | Tech clean | 16:47 |
15:26 | FAC | Randy | LVEA | Y | Move forklift to receiving | 15:56
15:34 | VAC | Gerardo | LVEA | Y | Open GV2 then GV1 | 16:00 |
15:53 | ISC | Camilla | LVEA | Y | Quick table check | 15:55 |
16:00 | FAC | Tyler, Chris | High bay, MidY | N | Forklift to mids | 18:09 |
16:03 | VAC | Gerardo, Jordan | EndY | N | VAC checks | 16:40 |
16:16 | ISC | Camilla, TJ | LVEA | Y | ISCT1 alignment still | 19:55 |
16:24 | ALS | Keita | Ends | N | Take pictures of electronics racks | 17:07 |
16:24 | ISC | Elenna | LVEA | Y | Join table alignment crew | 17:56 |
16:24 | ISC | Elenna | LVEA | Y | Alignment on ISCT1 | 16:24 |
16:47 | FAC | Kim | MidX | N | Tech clean | 18:04 |
16:48 | FAC | Nellie | MidY | N | Tech clean | 17:38 |
16:52 | VAC | Gerardo, Jordan | EndY | N | CP7 checks | 17:10 |
16:57 | FAC | Richard | LVEA | N | Check out network stuff | 17:02 |
18:01 | ISC | Elenna | LVEA | Y | Table work | 18:59 |
18:09 | ISC | Keita | MidY | N | Grab part out of storage | 18:24 |
19:01 | CDS | Patrick | Ey & EX | N | Getting data from BRS computers | 20:23 |
19:42 | FAC | Randy | LVEA | y | Looking for tools in LVEA | 19:57 |
19:51 | VAC | Travis & Tyler | MidX | N | Looking for Cryopump storage space. | 21:13 |
20:32 | PEM | Marc, Kiet | Mid X | N | Looking at Fiber box connections to vertex Vault | 20:50 |
21:14 | VAC | Gerardo | End Y | N | Twiddling Vac Valves | 21:53 |
22:44 | PCAL | Francisco | PCAL Lab | yes | Shutting apertures & laser apertures | 23:04
I noticed that the LLCV is railing at its top value, 100% open; it can't open any further. This is a known issue, but it appears as if the valve is reaching 100% sooner than expected, i.e., when the tank is almost half full. First, I'm going to try a re-zero of the LLCV actuator and await the results. The first attachment is a 2-day plot of the LLCV railing today and yesterday. The second plot is a 3-year history of the tank level and the LLCV; it rails at 100% a few times.
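To illustrate why a railed LLCV means the fill loop has run out of authority, here is a minimal, hedged PI-style sketch with the output clamped at 100% open (purely illustrative; not the actual CP7 controller code or tuning):

def llcv_pi_step(setpoint, level, state, kp=2.0, ki=0.05, dt=60.0):
    """One controller update; 'state' carries the integrator between calls.
    Gains and timestep are arbitrary illustrative values."""
    error = setpoint - level                 # % level error
    state['integral'] += error * dt
    out = kp * error + ki * state['integral']
    clamped = max(0.0, min(100.0, out))      # valve command railed at 0-100% open
    if clamped != out:
        state['integral'] -= error * dt      # crude anti-windup while railed
    return clamped

state = {'integral': 0.0}
print(llcv_pi_step(setpoint=92.0, level=88.0, state=state))
# once the command sits at 100.0, the loop cannot raise the fill rate any further,
# so the LN2 level keeps falling regardless of the error term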
BURT restore? PID tuning ok? CP2 @ LLO PID parameters attached for comparison.
Thanks Jon. However, this system has a known issue: it turns out that the liquid level control valve is not suitable for the job, which is why it reaches 100% sooner rather than later. But it appears as if something slipped, and now it reaches 100% at a higher tank level, which is why I want to re-zero the actuator.
Attached is the Fill Control for CP7. The issue was first mentioned in aLOG 4761, but I never found out who discovered it; it is only briefly mentioned by Kyle. Another entry is in aLOG 59841.
As a dirty solution to the LLCV railing at 100% open, we used the bypass valve: we opened it up by 1/8 of a turn and that did the job. It wasn't a single shot, but eventually we settled on that turn number. PID took over and the LLCV settled at around 92% open. Today we received a load for the CP7 tank. We are still going to calibrate the actuator.
(This is Oli)
Once we had been at 60W for two hours, I started a calibration measurement. I started with Simulines since yesterday I had gotten a broadband measurement done (84808). A couple minutes into the measurement, we lost lock. The cause is unknown, but I've attached the output from the simulines measurement (txt).
Lockloss happened as Calibration signals were ramping on, see attached. First glitch in L3, see attached.
Attaching trend of OMC DCPD sum during LL. Plot suggests the DCPDs were not the cause of the lock-loss.