LHO VE
janos.csizmazia@LIGO.ORG - posted 14:56, Tuesday 18 February 2025 - last comment - 11:36, Tuesday 25 March 2025(82886)
New EX clean air supply is installed
The new clean air supply (compressor, tank, dryer and a series of extra filters), which was received in the end of 2024, was installed in the EX mechanical room as a replacement for the old system. The installation work was carried out in the last few weeks, the last step - the startup by Rogers Machinery - happened today.
This new system delivers 69 cfm of air using three 7.5 HP motors (22.5 HP total). In comparison, the old system delivered 50 cfm using five 5 HP motors (25 HP total). Moreover, the new system (unlike the old one) has an automatic dew point monitor and a complete pair of redundant dryer towers.
So this new system is a major improvement. The 69 cfm limit (and not more) was chosen so that cooling of the compressor units in the MER remains feasible, and so that the existing filters and airline can still accommodate the airflow without any upgrades.
On paper, both the new and old systems are able to produce air with at least a -40 deg F dew point. During the startup, however, the new system did much better than this - ~-70 deg F (and dropping) - as you can see in the attached photo.

Last but not least, a huge congratulations to the Vacuum team for the installation: this was the first time the installation of a clean air system was carried out by LIGO staff, so this is indeed a major achievement. Also, big thanks to Chris, who cleaned some parts for the compressor; to Tyler, who helped a lot with the heavy lifting; and to Richard & Ken, who did the electrical wiring.

Starting next week, we will repeat this same installation at the EY station.
Images attached to this report
Comments related to this report
travis.sadecki@LIGO.ORG - 11:36, Tuesday 25 March 2025 (83549)

Connection to the purge air header was completed today.  Next up, acceptance testing.

Images attached to this comment
H1 SUS
matthewrichard.todd@LIGO.ORG - posted 14:29, Tuesday 18 February 2025 - last comment - 14:58, Tuesday 18 February 2025(82884)
Checking SUS TFs for rubbing

[ Matt, TJ ]

We looked at the transfer functions of the suspensions of the Test Masses, SR3 and PRM (top stage of each), to see if we could see signs of rubbing. This is a limited number of transfer functions; however, it does seem like there is some peak shifting in the ITMY-P.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 14:58, Tuesday 18 February 2025 (82887)

Measured PRM L/P/Y and BS L/P/Y. I had to turn off the BS OpLev damping, which is mentioned in Jeff K's TF instructions wiki.

Images attached to this comment
H1 General
thomas.shaffer@LIGO.ORG - posted 14:27, Tuesday 18 February 2025 (82885)
LVEA Swept

I swept the LVEA after maintenance activities today. I unplugged one power strip in the CER, and heard a UPS in one of the vacuum racks going off. I silenced the alarm and talked to Gerardo and Richard, but we should keep an ear out.

H1 AOS
jonathan.hanks@LIGO.ORG - posted 14:02, Tuesday 18 February 2025 (82882)
WP 12333 Installed sw-lvea-aux1 in the CER, moved 4 cameras to the switch and to h1digivideo4
As per WP 12333 we installed a POE switch (sw-lvea-aux1) in the CER, moved four cameras onto the switch and migrated them to h1digivideo4.  This is to expand and test our infrastructure to host more digital cameras.

Jonathan H. and Austin F. moved the switch from its test location in the MSR to the CER.  As part of the work we moved sw-lvea-aux up 1 space in the rack to make more room and installed a cable management bar beneath the switches to provide some strain relief.

Fil C. set up the DC power for the switch.  We had wanted to also convert sw-lvea-aux to full DC power, but Fil would like us to split the load between the two switches more evenly before removing the single AC power supply from sw-lvea-aux.

Patrick worked to migrate the cameras from h1digivideo1 to h1digivideo4.
H1 AOS (OpsInfo)
jason.oberling@LIGO.ORG - posted 13:29, Tuesday 18 February 2025 (82883)
PR3 Optical Lever

J. Oberling, R. Short

After the recent PRC alignment work (moving PR3 to better center the beam on PR2), the OpLev beam had fallen off of its QPD, so today I attempted to re-center it.  Using only the translation stages, I could not re-center the beam since the horizontal translation stage hit its max travel limit well before the beam was back on the QPD (SUM counts went from ~700 to ~850, well short of the usual ~22k).  I enlisted Ryan S. to help and we set out to adjust the OpLev beam alignment from the transmitter.  It turns out that the beam gets clipped before we can get it centered on the QPD.

We first tried to move the translation stage away from its limit but could only get the beam on 2 of the QPD segments (segments 1 and 3) before the SUM counts started dropping; at no point could we get the beam onto QPD segments 2 and 4 in this configuration.  We then moved the horizontal stage to its limit and tried to align the beam onto the QPD; this worked a little better, but still not great.  The best SUM counts we could get with a "centered" QPD was ~17k, roughly 75% of the total SUM counts for this OpLev, so still some significant clipping.

We decided to unclip the beam to make the OpLev more trustworthy (we've had questions about this OpLev in the past).  Max SUM counts were ~22.7k, and this gives a Yaw reading of roughly -12.5 µrad; any closer to zero and the beam begins to clip again.  We left the OpLev in this configuration and will investigate the clipping at a later date.

Tagging OpsInfo: Around -12.5 µrad is the current "yaw zero" for the PR3 OpLev until we can figure out and alleviate the clipping.

As for the clipping, my best guess at this point is that we're clipping somewhere on/in the ~4'-long tube that runs between the OpLev's exit viewport and the receiver's QPD.  We'll have to climb on top of HAM3 to check this, which we will do at a later date.
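For reference, here is a minimal sketch of how the pitch/yaw readings relate to the four QPD segment signals; the segment-to-quadrant mapping and normalization are assumptions for illustration, not the actual front-end filter arrangement.

def oplev_readout(s1, s2, s3, s4):
    """Toy pitch/yaw estimate from four QPD segment powers (counts).

    Assumed layout (illustrative only): 1 = upper-left, 2 = upper-right,
    3 = lower-right, 4 = lower-left.
    """
    total = s1 + s2 + s3 + s4                # SUM counts
    pitch = ((s1 + s2) - (s3 + s4)) / total  # top minus bottom
    yaw = ((s2 + s3) - (s1 + s4)) / total    # right minus left
    return total, pitch, yaw

# Rough numbers from this entry: a "centered" but clipped beam gave ~17k SUM
# versus ~22.7k unclipped, i.e. about a quarter of the light was being lost.
print(f"clipped fraction ~ {1 - 17e3 / 22.7e3:.0%}")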

This closes WPs 12336 and 12341.

Edit: Fixed typo in yaw reading in bolded sentence (12.7 to 12.5).

H1 SQZ
camilla.compton@LIGO.ORG - posted 12:33, Tuesday 18 February 2025 (82881)
SHG EOM, AOM and fiber pointing realigned, powers measured.

Camilla, Sheila, and Vicky moved the AOM and realigned it in 80830.

Summary: Today we measured the powers on SQZT0, see attached photo. The aim was to understand where we were losing our green light, as we had to turn down the OPO setpoint last week 82787. Adjusting the SHG EOM, AOM, and fiber pointing dramatically increased the available green light. The OPO setpoint is back to 80uW with plenty of extra green available.

After taking the attached power measurements we could see that the SHG conversion was fine: 130mW green out of the SHG for 360mW of IR in.

We were losing a lot of power through the EOM: 96mW in to 76mW out (expect ~100%). We could see this clipping in yaw by taking a photo. Sheila adjusted the EOM alignment, increasing the power out to 95mW.

Then the AOM and fiber were aligned following 72081:
1) Set the ISS drive point at 0V to make only the 0th-order beam and check for ~90% AOM throughput with a power meter. Started with 95mW in, 50mW out. Improved to 95mW in, 80mW out = 84%.
2) Set the ISS drive point at 5V and align the AOM to maximize the 1st-order beam (which is to the left of the 0th-order beam looking from the laser side of SQZT0). After the AOM alignment, the 1st-order beam was 27.5 mW and the 0th-order beam was 49 mW. We measured the AOM throughput again including both 0th- and 1st-order beams: 77.5mW/93mW = 83% (throughput arithmetic sketched at the end of this entry).
3) Set the ISS drive point at 5V and align the fiber by maximizing H1:SQZ-OPO_REFL_DC_POWER. I needed to adjust the SHG waveplate to reduce the power to stop the PD from saturating.
After this alignment, the SHG output is 42.7 mW, the pump going to the fiber is 20.5 mW, and the rejected power is 2.7 mW. The ISS can be locked with an OPO trans of 80uW while the ISS control monitor is 7.0. Also checked the OPO temperature.
Note that the power does change depending on the SHG PZT voltage. Maybe caused by an alignment change from the PZT.

As an ISS control monitor of 7.0 is a little high with 26.5mW SHG launched (OPO unlocked), I further turned the SHG launch power down to 20mW and adjusted the HAM7 rejected light. The ISS control monitor was left at 5.4, which is closer to the middle of the 0-10V range.
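As a quick sanity check on the numbers above, the throughput arithmetic in one place (the values are the measurements quoted in this entry; the helper is just illustrative):

def throughput(p_in_mw, p_out_mw):
    """Transmission as a fraction, given input/output powers in mW."""
    return p_out_mw / p_in_mw

# EOM before and after the alignment touch-up
print(f"EOM before: {throughput(96, 76):.0%}")       # ~79%, expect ~100%
print(f"EOM after:  {throughput(96, 95):.0%}")       # ~99%
# AOM, 0th order only (ISS drive at 0 V)
print(f"AOM 0th order: {throughput(95, 80):.0%}")    # ~84%
# AOM, 0th + 1st orders together (ISS drive at 5 V)
print(f"AOM 0th+1st:   {throughput(93, 77.5):.0%}")  # ~83%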

Non-image files attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:44, Tuesday 18 February 2025 (82880)
Tue CP1 Fill

Tue Feb 18 10:08:06 2025 INFO: Fill completed in 8min 3secs

Gerardo confirmed a good fill curbside. TCmins [-51C, -48C] OAT (+2C, 36F) DeltaTempTime 10:08:06.

Missing data in the plot is due to a DAQ restart at that time.

Images attached to this report
H1 CAL (CAL)
vladimir.bossilkov@LIGO.ORG - posted 10:03, Tuesday 18 February 2025 - last comment - 12:03, Thursday 20 February 2025(82878)
Calibration sweeps losing lock.

I reviewed the weekend lockloss where lock was lost during the calibration sweep on Saturday.

I've compared the calibration injections and what DARM_IN1 is seeing [ndscopes], relative to the last successful injection [ndscopes].
Looks pretty much the same, but DARM_IN1 is even a bit lower because I've excluded the last frequency point in the DARM injection, which sees the least loop suppression.

It looks like this time the lockloss was a coincidence. BUT. We desperately need to get a successful sweep to update the calibration.
I'll be reverting the cal sweep INI file, in the wiki, to what was used for the last successful injection (even though it includes that last point which I suspected caused the last 2 locklosses), out of an abundance of caution and in the hope that the cause of the locklosses is something more subtle that I'm not yet catching.

Images attached to this report
Comments related to this report
vladimir.bossilkov@LIGO.ORG - 09:08, Wednesday 19 February 2025 (82904)

Despite the lockloss, I was able to utilise the log file saved in /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/ (the log file used as input to simulines.py) to regenerate the measurement files.

As you can imagine, the points where the data is incomplete are missing, but 95% of the sweep is present and the fitting all looks great.
So it is in some way reassuring that in case we lose lock during a measurement, data gets salvaged and processed just fine.

Report attached.

Non-image files attached to this comment
vladimir.bossilkov@LIGO.ORG - 12:03, Thursday 20 February 2025 (82933)CAL

How to salvage data from any failed attempt simulines injections:

  • simulines silently dumps log files into this directory: /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/{IFO}/ for IFO=L1,H1
  • Navigating there, you will find a log of the outputs of simulines from every time it has ever been run. The one you are interested in can be identified by its time, as the file name format is the same as the measurement and report directory time-name format.
  • Running the following will automagically populate .hdf5 files in the calibration measurement directories that the 'pydarm report' command searches in for new measurements:
    • './simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/{time-name}.log'
    • for a time-name resembling 20250215T193653Z
    • where './simuLines.py' is the simulines executable and can have some full path like the calibration wiki does: './ligo/groups/cal/src/simulines/simulines/simuLines.py'
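A rough sketch of how one could script this salvage step (the paths follow the bullets above; the timestamp matching and subprocess call are assumptions rather than a vetted tool):

import glob
import os
import subprocess

IFO = "H1"
LOG_DIR = f"/opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/{IFO}/"
SIMULINES = "./simuLines.py"  # or the full path given in the calibration wiki

def salvage(time_name):
    """Re-run simuLines on the saved log for a measurement, e.g. '20250215T193653Z'."""
    matches = glob.glob(os.path.join(LOG_DIR, f"*{time_name}*.log"))
    if not matches:
        raise FileNotFoundError(f"no simulines log matching {time_name} in {LOG_DIR}")
    # Re-running with -i <log> repopulates the .hdf5 measurement files that
    # 'pydarm report' looks for.
    subprocess.run([SIMULINES, "-i", matches[0]], check=True)

if __name__ == "__main__":
    salvage("20250215T193653Z")
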
H1 ISC
camilla.compton@LIGO.ORG - posted 09:26, Tuesday 18 February 2025 - last comment - 10:35, Tuesday 18 February 2025(82877)
MICH, PRCL, SRCL displacements in long vs short locks

The operator team have been finding 11Hz oscillation locklosses.  Attached is a spectrum of MICH, PRCL, and SRCL comparing one of our last long (2-day) locks to a more recent 6-hour lock. There is a debatable PRCL bump around 10-11Hz.
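For anyone wanting to reproduce this comparison, a sketch using gwpy is below; the channel names and GPS spans are placeholders and should be swapped for the actual lock stretches.

from gwpy.timeseries import TimeSeries

# Assumed channel names for the three length degrees of freedom
CHANNELS = ["H1:LSC-MICH_OUT_DQ", "H1:LSC-PRCL_OUT_DQ", "H1:LSC-SRCL_OUT_DQ"]
LONG_LOCK = (1423900000, 1423901024)   # placeholder GPS span from the 2-day lock
SHORT_LOCK = (1424000000, 1424001024)  # placeholder GPS span from the 6-hour lock

for chan in CHANNELS:
    asd_long = TimeSeries.get(chan, *LONG_LOCK).asd(fftlength=8, overlap=4)
    asd_short = TimeSeries.get(chan, *SHORT_LOCK).asd(fftlength=8, overlap=4)
    fig = asd_long.plot(label="long lock")
    ax = fig.gca()
    ax.plot(asd_short, label="short lock")
    ax.set_xscale("log")
    ax.set_yscale("log")
    ax.set_xlim(1, 100)   # zoom in around the 10-11 Hz region of interest
    ax.set_title(chan)
    ax.legend()
    fig.savefig(chan.replace(":", "_") + "_asd_comparison.png")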

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 10:35, Tuesday 18 February 2025 (82879)

I'm comparing plots that Oli made in this alog to a plot I added to this alog from early O4a where we were having frequent locklosses due to marginal stability in PRCL. The ring up looks very similar and I would guess that we should measure the PRCL OLG and adjust the gain. Just scrolling back the last few days on the summary pages, I don't visually see the excess noise around 11 Hz for a long time before the locklosses, like we saw in O4a, but that doesn't mean much since I could be fooled by the color scale.

H1 SQZ
camilla.compton@LIGO.ORG - posted 08:56, Tuesday 18 February 2025 (82873)
SQZ_ANG_SERVO strangeness

After Ryan S and I changed the SQZ ang servo setpoint H1:SQZ-ADF_OMC_TRANS_PHASE yesterday, the SQZ was unhappy overnight (plot) and the servo failed to work for a few hours; it should be keeping H1:SQZ-ADF_OMC_TRANS_SQZ_ANG at zero but, from the attached plot, clearly wasn't.

On the positive side, the SHG power has now increased to 130mW and is more stable since the LVEA temperatures are stable again.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:34, Tuesday 18 February 2025 (82872)
Ops Day Shift Start

TITLE: 02/18 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 2mph Gusts, 1mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.47 μm/s
QUICK SUMMARY: Lost lock at 1507 UTC (0707 PT). Maintenance day today. No active alarms.

H1 CDS
erik.vonreis@LIGO.ORG - posted 07:05, Tuesday 18 February 2025 (82871)
workstations updated

Workstations were updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

H1 General (ISC, SQZ)
oli.patane@LIGO.ORG - posted 22:02, Monday 17 February 2025 (82870)
Ops Eve Shift End

TITLE: 02/18 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently Observing and have been Locked for almost 1 hour.

Two locklosses during my shift, and they both had that weird oscillation in DARM right before the lockloss (see 82847). Relocking went okay as long as I ran an initial alignment.

First relocking:
At the start of the initial alignment during green arms, I did get the ALSX issue where it's locked but the WFS won't turn on. I Forced the auto-centering for the WFS and that was all it needed.
Once we got back up to NLN we were having issues with the filter cavity losing lock (similar to the issues from the rest of this weekend). It was losing lock at different places in the locking process (FIND_IR, GR_VCO_LOCKING, TRANSITION_IR_LOCKING, etc). I tried just running SQZ_MANAGER through DOWN and back up but that didn't work. I then tried moving FC1&2 a bit to get them back to where they were during our last lock but that also didn't work. I then just tried what Ryan S did yesterday (82840) - I paused SQZ_MANAGER in LOCK_CLF and adjusted the OPO temp. That worked well and we were then able to lock the filter cavity, start squeezing, and get back into Observing. Tagging sqz
Second relocking:
I just made sure to immediately run an initial alignment and there were no issues getting back to NLN and Observing.
LOG:

00:06 Lockloss
    - started an initial alignment
        - ALSX was locked (but oscillating weirdly) around 1 but the WFS wouldn't turn on - the auto-centering was off (we've been seeing this recently sometimes). I Forced the auto-centering and the WFS turned on and took ALSX up to 1.2 right away
    - Lockloss from TURN_ON_BS_STAGE2
    - Lockloss from PRMI_ASC

02:20 NOMINAL_LOW_NOISE
    - FC losing lock during different stages in the locking process (FIND_IR, GR_VCO_LOCKING, TRANSITION_IR_LOCKING, etc.)
    - Ran SQZ_MANAGER through DOWN and back up - didn't work
    - Adjusted FC1&2 a bit to get them back to where they were during our last lock - didn't work (did not revert these changes)
    - Did what Ryan S did yesterday (82840) - paused SQZ_MANAGER in LOCK_CLF and adjusted OPO temp. We were then able to lock the filter cavity, start squeezing, and get back into Observing
03:05 Observing
    03:36 Popped out of Observing to reset the sqz angle since we had a message about it and sqz was slowly getting worse
    03:36 Back into Observing

    03:47 Popped out of Observing to reset the sqz angle since we had a message about it and sqz was slowly getting worse
    03:48 Back into Observing

04:00 Lockloss
    - Ran initial alignment

05:16 NOMINAL_LOW_NOISE
05:19 Observing
                                                                                                            

Start Time | System | Name | Location | Laser Haz | Task | End Time
00:19 | PEM | Robert | LVEA | YES | Putting viewport cover back on | 00:29
Images attached to this report
H1 AOS
oli.patane@LIGO.ORG - posted 21:57, Monday 17 February 2025 (82869)
Prevalence of ETMX Glitch Locklosses throughout O4

In our ongoing quest to figure out what causes the ETMX glitch locklosses, I've rerun all O4 NLN locklosses through our most up-to-date lockloss code to see if we can pinpoint when it started and how/if it comes and goes.

To do this I made a histogram for all of O4 showing the number of locklosses we had each day in blue, and over that plotted the number of those locklosses that have ETMX glitches right before the lockloss (plot). Basically, it looks like it's been with us since the start of O4. There are stretches where we go a week or maybe almost a whole month without these locklosses, but otherwise they're still pretty regular and we see them every few days. In O4b, it looks like we saw them very often between the end of August and early October and then saw fewer of them in December, but that's also hard to quantify because we had fewer locklosses in December overall, which could have been due either to more time locked or to more time when the detector was down (same with November - that was when we had all the NPRO issues).

I also have this plot as a pdf, and here's the link to my lockloss tool in case anyone wants to peruse the early O4 ETM glitch locklosses. Also keep in mind that a few ETMX glitch locklosses will have been missed by the lockloss tool - our code relies on the glitch surpassing a threshold and then coming back down and staying below a second threshold for a set amount of time, but in a few cases the glitch leads right into the lockloss, so the signal never stays below that threshold long enough for the tool to tag it as an ETM glitch. This doesn't happen very often, but it's still important to note that there are more ETM glitch locklosses than we have plotted in this histogram.
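To make the tagging caveat concrete, here is a toy version of the threshold-with-hysteresis check described above (the thresholds, sample rate, and hold time are invented; the real lockloss tool is more involved):

import numpy as np

def tags_etm_glitch(signal, fs, upper, lower, min_hold_s):
    """Toy hysteresis check: True if the signal exceeds `upper` and then
    settles back below `lower` for at least `min_hold_s` seconds.

    A glitch that runs straight into the lockloss never satisfies the
    hold condition, so it goes untagged, which is the failure mode noted above.
    """
    hold_samples = int(min_hold_s * fs)
    above = np.flatnonzero(signal > upper)
    if above.size == 0:
        return False                        # no glitch at all
    quiet = signal[above[-1] + 1:] < lower  # samples after the last excursion
    run = 0
    for ok in quiet:
        run = run + 1 if ok else 0
        if run >= hold_samples:
            return True
    return False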

Images attached to this report
Non-image files attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 16:21, Monday 17 February 2025 - last comment - 21:12, Monday 17 February 2025(82860)
Lockloss

Lockloss @ 02/18 00:06 UTC after almost 5.5 hours locked

Comments related to this report
oli.patane@LIGO.ORG - 18:27, Monday 17 February 2025 (82862)

This lockloss also has the strange oscillation in many LSC channels + QUADs right before the lockloss that was noted in 82847

oli.patane@LIGO.ORG - 19:08, Monday 17 February 2025 (82864)

03:05 UTC OBSERVING

oli.patane@LIGO.ORG - 21:12, Monday 17 February 2025 (82868)

Looking at front room ndscopes+QUADs/DARM in the minute leading up to LL: ndscope1

Oscillation seen by QUADS, ASC-INP1_P_IN, and ASC-CHARD_P_OUT

NOT seen in power recycling gain or circulating power (like the LL that happens after this one - 82865)

 

Looking at QUADS/DARM and LSC signals in the second before the LL: ndscope2

Seen by all QUADS, DARM, and LSC signals

Images attached to this comment
H1 SQZ
camilla.compton@LIGO.ORG - posted 10:56, Monday 17 February 2025 - last comment - 08:55, Tuesday 18 February 2025(82853)
Trouble with SQZ FC this AM.

Ryan S, Camilla

As Corey noted in 82849, Ryan and I have had some issues with SQZ this morning. Corey adjusted the SHG temp to get enough green power to lock the OPO, but after that the SQZ and range were bad. The FR3 signal and FC WFS were very noisy.

Ryan tried to take SQZ out, checked the OPO temp (fine), reset the SQZ angle from 220deg back to a more sensible 190deg, and then put SQZ back in again, but he couldn't get the FC to lock. Followed steps in the SQZ wiki to touch up the FC1/2 alignment and got H1:SQZ-FC_TRANS_C_LF_OUTPUT to >120 (plot), but we were still losing the FC at TRANSITION_TO_IR_LOCKING. The FC also seemed unstable when locked on green. While we were troubleshooting, the IFO lost lock. Unsure if this is an ASC issue; FC_ASC trends attached (POS for Y and P were moving much more than usual), SQZ ASC trends (ZM4 PIT changes a lot).

After the lock loss, the SQZ_FC seemed to lock stably in green with H1:SQZ-FC_TRANS_C_LF_OUTPUT = 160. Plot. This is higher than usual and it's not clear what changed!

Ryan mentioned that something happened to Oli at the weekend where the range was bad but SQZ unlocked, re-locked, and came back good (plot), but this seemed to be the OPO PZT changing to a better place (we know it likes to be ~90 rather than in the 50s).

After relocking, everything was fine but the FC ASC wasn't turning on, as the trigger signal (~2.5) was below the threshold. Ryan decreased H1:SQZ-FC_ASC_TRIGGER_THRESH_ON from 3.0 to 2.0; this signal has been slowly decreasing, maybe because we decreased the OPO trans from 80uW to 60uW last week. Plot. Ryan accepted the change in SDF and then checked SQZ_MANAGER, SQZ_FC, and sqzparams to confirm this isn't set in the GRD.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 11:16, Monday 17 February 2025 (82854)

FC ASC lower trigger threshold accepted in SDF. It is not set anywhere by Guardian nor is this model's SAFE.snap table pushed during SDF_REVERT.

Images attached to this comment
ryan.short@LIGO.ORG - 12:59, Monday 17 February 2025 (82855)SQZ

After a while, Camilla and I again dropped H1 out of observing as even after thermalization, BNS range and squeezing weren't looking good. We decided to reset the SQZ ASC and angle in case they were set at a bad reference point. I took SQZ_MANAGER to 'RESET_SQZ_ASC_FDS' and adjusted the SQZ angle to optimize DARM and SQZ BLRMS. To reset the angle servo here, I adjusted SQZ-ADF_OMC_TRANS_PHASE to make SQZ-ADF_OMC_TRANS_SQZ_ANG oscillate around 0 (ended with a total change of -10deg), then requested SQZ_MANAGER back to 'FREQ_DEP_SQZ' to turn servos back on, and finally accepted on SDF (attached) to return to observing. It's been about 20 minutes since then, and so far H1 is observing with a much better steady range at around 160Mpc.

What happened that needed the ADF servo setpoint to be updated with a different SQZ angle? Investigation is ongoing.
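For context, the reset amounts to stepping the ADF demod phase until the reported SQZ angle error averages to zero. A rough pyepics sketch of that nudge is below; the step size, averaging, and sign convention are assumptions, and in practice this was done by hand while watching DARM and the SQZ BLRMS.

import time
from epics import caget, caput  # pyepics

PHASE = "H1:SQZ-ADF_OMC_TRANS_PHASE"
ANGLE = "H1:SQZ-ADF_OMC_TRANS_SQZ_ANG"

def avg_angle(n=20, dt=0.5):
    """Average the ADF-reported SQZ angle over n samples spaced dt seconds apart."""
    vals = []
    for _ in range(n):
        vals.append(caget(ANGLE))
        time.sleep(dt)
    return sum(vals) / len(vals)

def zero_adf_setpoint(step=1.0, tol=0.5, max_iters=20):
    """Nudge the demod phase until the SQZ angle error averages near zero."""
    for _ in range(max_iters):
        ang = avg_angle()
        if abs(ang) < tol:
            return caget(PHASE)
        # Sign convention is an assumption; flip `step` if the error grows.
        caput(PHASE, caget(PHASE) - step if ang > 0 else caget(PHASE) + step)
    raise RuntimeError("did not converge; check the servo state by hand")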

Images attached to this comment
camilla.compton@LIGO.ORG - 08:55, Tuesday 18 February 2025 (82874)

Attaching the plot from after Corey got the SQZ locked and it was bad; you can see it looks like one of the loops was oscillating (plot attached). Compare to a normal SQZ plot.

Images attached to this comment
H1 CDS (CDS)
corey.gray@LIGO.ORG - posted 03:07, Friday 14 February 2025 - last comment - 09:06, Tuesday 18 February 2025(82799)
H1 Wake Up Due To BS Camera No Longer Updating & Taking CAMERA_SERVO Out Of Nominal

BS Camera stopped updating just like in alogs:

This takes the Camera Servo guardian into a never-ending loop (and takes ISC_LOCK out of Nominal and H1 out of Observe).  See attached screenshot.

So, I had to wake up Dave so he could restart the computer & process for the BS Camera.  (Dave mentioned there is a new computer for this camera to be installed soon and it should help with this problem.)

As soon as Dave got the BS camera back, the CAMERA SERVO node got back to nominal, but I had accepted the SDF diffs for ASC which happened when this issue started, so I had to go back and ACCEPT the correct settings.  Then we automatically went back to Observing.

OK, back to trying to go back to sleep again!  LOL

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 14:58, Friday 14 February 2025 (82814)

Full procedure is:

Open BS (cam26) image viewer, verify it is a blue-screen (it was) and keep the viewer running

Verify we can ping h1cam26 (we could) and keep the ping running

ssh onto sw-lvea-aux from cdsmonitor using the command "network_automation shell sw-lvea-aux"

IOS commands: "enable", "configure terminal", "interface gigabitEthernet 0/35"

Power down h1cam26 with the "shutdown" IOS command, verify pings to h1cam26 stop (they did)

After about 10 seconds power the camera back up with the IOS command "no shutdown"

Wait for h1cam26 to start responding to pings (it did).

ssh onto h1digivideo2 as user root.

Delete the h1cam26 process (kill -9 <pid>), where pid given in file /tmp/H1-VID-CAM26_server.pid

Wait for monit to restart CAM26's process, verify image starts streaming on the viewer (it did).
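The server-process half of this procedure is simple enough to script. Below is a sketch assuming passwordless ssh to h1digivideo2; the switch-port power cycle (IOS shutdown / no shutdown) is interactive on the switch side and is not covered here.

import subprocess

CAM_HOST = "root@h1digivideo2"
PID_FILE = "/tmp/H1-VID-CAM26_server.pid"

def restart_cam26_server():
    """Kill the CAM26 server process and let monit restart it (sketch only)."""
    pid = subprocess.run(
        ["ssh", CAM_HOST, f"cat {PID_FILE}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    subprocess.run(["ssh", CAM_HOST, f"kill -9 {pid}"], check=True)
    print(f"Killed CAM26 server (pid {pid}); monit should restart it shortly.")

if __name__ == "__main__":
    restart_cam26_server()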

corey.gray@LIGO.ORG - 16:48, Sunday 16 February 2025 (82841)

FRS:  https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=33320

corey.gray@LIGO.ORG - 09:06, Tuesday 18 February 2025 (82876)

Forgot once again to note timing for this wake-up.  This wake-up was at 233amPDT (1033utc), and I was roughly done with this work in about 45min after phoning Dave for help.

H1 AOS
corey.gray@LIGO.ORG - posted 01:00, Friday 14 February 2025 - last comment - 08:58, Tuesday 18 February 2025(82798)
H1 Wake Up Call Due to ALS EY Diffs

ALS EY WFS F9's came up as (2) SDFs.  Accepted (see attached).

The last lockloss was most likely due to an EQ (Ecuador?).  I was already up, so I stayed up to proactively run an alignment and got almost done with SRC OFFLOADED, but H1 Manager took ALIGN_IFO to DOWN (!!!!) and started completely over---I guess I should have taken H1 Manager down?

At any rate, ran a manual alignment for SRC again & H1 made it back up all the way except for the ALSey diffs.  OK, back to bed.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 08:58, Tuesday 18 February 2025 (82875)

Here's the lockloss that preceded this 1253amPDT (853utc) Wake-up call (the one where I happened to still be up at 1115pmPDT); and it does not have an EQ tag, but I could have sworn I remembered hearing Verbal going to EQ Mode a few min before the lockloss---this is why I stayed up to run an alignment!  Since I was up, this is the one where I tried to help H1 by running an alignment before trying to go to bed, but ended up fighting H1_MANAGER with the alignment attempts.  Then I was awakened just before 1am for the SDF diffs noted above.

This wake-up call was only me getting out of bed for a few minutes to ACCEPT the ALS SDF diffs (which might have been due to me and my errant alignments).
