Reports until 12:13, Sunday 20 July 2025
H1 SUS (SUS)
corey.gray@LIGO.ORG - posted 12:13, Sunday 20 July 2025 - last comment - 14:24, Sunday 20 July 2025(85875)
SR3 Dither Pitch Offset Zeroed + H1SUSSR3 safe.snap Updated

Finally had the opportunity (thanks to an H1 lockloss during my shift) to address the SR3 Dither Pitch Offset that Oli noted last week (alog 85830) and that I first looked at during a lockloss at the end of my shift Friday evening (alog 85855). My changes to the Dither Pitch Offset did not hold then because the H1SUSSR3 safe.snap in SDF needed to be changed/accepted.

It's been ages since I've updated a safe.snap, so pardon the less-than-elegant steps I took to update the SR3 SDF here. Basically I

  1. Turned the OFFSET button OFF (then Accepted the SDF diff for the safe.snap) and then
  2. Took the OFFSET from 32.3 to 0 (then Accepted the SDF diff, once again, for the safe.snap). 

Both updates are screenshotted separately! Ha.  The new SR3 OPTICALIGN_P_OFFSET has been at its new value (457.9, which used to be 445.8) since Friday, and H1 was aligned to it. So, now that the SR3 pointing changed with the Dither Offset going to zero, I ran a new alignment.
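
For reference, a minimal channel-access sketch of the offset-zeroing step (item 2 above), assuming the standard CDS filter-module channel naming for the SR3 M1 dither pitch bank; the actual change was made from the MEDM screen, with the SDF diffs accepted afterward:

# Minimal sketch using pyepics; the OFFSET channel name below is an assumption
# based on standard CDS filter-module naming -- verify before use.
from epics import caget, caput

OFFSET_CH = "H1:SUS-SR3_M1_DITHER_P_OFFSET"

print("current offset:", caget(OFFSET_CH))   # expect ~32.3 before the change

# Step 2 above: take the offset value to zero.
caput(OFFSET_CH, 0.0)

# Step 1 (turning the OFFSET switch off) was done from the filter-module MEDM
# screen; both resulting SDF diffs were then accepted into safe.snap.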

Currently, H1 has locked DRMI, and the final step here will be updating observe.snap with the changes noted above.

ADDENDUM:  Currently stuck at PREP_DC_READOUT_TRANSITION, where the OMC can't lock.  I'm wondering if this is due to the SR3 change: the SR3 top mass had been at one spot since Friday evening, and now, with the Dither Offset zeroed, the SR3 top mass is back to the pointing we had for the last few weeks up until Friday night (see attached ndscope trend over the weekend).

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 14:24, Sunday 20 July 2025 (85878)

After the OMC power supply issues noted above, H1 made it to NLN, but it did in fact have SDF diffs in OBSERVE.snap for H1SUSSR3.  Those new settings were ACCEPTED in SDF (see attached), and this task is now complete.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:16, Sunday 20 July 2025 (85874)
Sun CP1 Fill

Sun Jul 20 10:08:56 2025 INFO: Fill completed in 8min 52secs

 

Images attached to this report
LHO General (CDS, FMP, PEM)
corey.gray@LIGO.ORG - posted 08:06, Sunday 20 July 2025 (85873)
Notes From Ops Shift Check Sheet

1) Dust Monitor Check Notifications for LVEA5 & LAB2

Ran the "check_dust_monitors_are_working" script the last two mornings and received notifications for the following:

2)  Access System "Flashing Doors"

3) LHO Control Room Screenshots & FOMs

LHO General
corey.gray@LIGO.ORG - posted 08:00, Sunday 20 July 2025 (85871)
Sun DAY Ops Transition

TITLE: 07/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 7mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY:

Whoa, big earthquake overnight, as Ryan C notes in his wake-up calls.  Crazy 20 hrs: high winds first, which then died down before the big EQ tagged in!  Seismic motion was elevated from the EQ 8 hrs ago and finally calmed down within the last hour.  H1's been locked for the last 2+ hrs.  Nice to see that the violins weren't rung up after the shaky night!

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 04:51, Sunday 20 July 2025 - last comment - 04:56, Sunday 20 July 2025(85869)
OPS OWL assistance

11:03 UTC guardian called again after not being able to find IR.

11:13 - 11:49 UTC I decided to run an initial alignment. Xarm IR struggled, so I trended the IMs and had to move IM2 a bunch in pitch to return it to its position from before the watchdog trip (see the trend sketch after these notes).

11:50 UTC Back to regular locking
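
For reference, a minimal sketch of the kind of trend lookup used to find IM2's pre-trip pitch pointing, using gwpy minute trends; the witness channel name is an assumption:

from gwpy.timeseries import TimeSeries

# Assumed IM2 M1 pitch witness channel; minute-trend mean via NDS2.
chan = "H1:SUS-IM2_M1_DAMP_P_INMON.mean,m-trend"
trend = TimeSeries.get(chan, "2025-07-19 12:00", "2025-07-20 12:00")

# Read off the pitch value from before the watchdog trip, then steer IM2 back.
trend.plot().savefig("im2_pitch_trend.png")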

Comments related to this report
ryan.crouch@LIGO.ORG - 04:56, Sunday 20 July 2025 (85870)

We found IR and DRMI locked after less than 30 seconds.

H1 General (SEI)
ryan.crouch@LIGO.ORG - posted 00:07, Sunday 20 July 2025 (85868)
OPS OWL assistance

12:01 GRD called; we got hit by 2 large, semi-close earthquakes from the eastern Russian peninsula, a 6.7 then a 7.4. A few ISIs and suspensions tripped, and it'll be a few hours until the ground motion comes down enough to relock. We were going through DRMI_ASC at the time of the earthquakes.

H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 22:14, Saturday 19 July 2025 (85867)
Ops Eve Shift Report

TITLE: 07/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Inherited a Locked IFO.
Dropped from Observing at 3:35:45 UTC for SQZ_FC locking Issues.
I followed the instructions for FC troubleshooting found here.
We went back into Observing at 3:47:32 UTC.
Wind started to pick up in speed.
Lockloss potentially from an Alaskan 4.7M Earthquake.

Locking Notes:
Initial alignment was run and completed.
 

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 20:30, Saturday 19 July 2025 (85866)
Mid Shift Ops Eve shift & Fire Watch report.

TITLE: 07/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 23mph Gusts, 17mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:
I have attached a series of pictures from the LIGO Hanford Corner Station roof.
Conditions are not smoky at all here. No fires or smoke can be seen.

Came back down from the roof and immediately heard these from Verbals:

GRB-Short E582309 02:16:06 UTC
SuperEvent S250720J 02:16:58
GRB-Short E582309 02:17:09
SuperEvent S250720J 02:21:27

I'm not sure why there are duplicates like this.
 

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 16:48, Saturday 19 July 2025 (85865)
Saturday Ops Eve Shift Start

TITLE: 07/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 58Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 22mph Gusts, 9mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

H1 has been locked for over 12 Hours!

A stand-down alerts failure happened earlier.
Ryan pointed out these instructions to me: 
https://cdswiki.ligo-wa.caltech.edu/wiki/Ryan%20Crouch?highlight=%28Ryan%29%7C%28Crouch%29

We were able to get it up and running again fairly quickly.

LHO General
corey.gray@LIGO.ORG - posted 16:31, Saturday 19 July 2025 (85860)
Sat DAY Ops Summary

TITLE: 07/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Another nice DAY shift with H1 being locked for more than the entire DAY shift (over 13hrs!). 

Since H1 was locked the entire shift, I did not get another chance at removing the SR3 Pit OFFSET (so it is still there, and SR3 is at the new Pit bias I put in yesterday; H1 was aligned to this last night). When we want to fix this, we'll need to take the SR3 Pit Offset to 0.0 and then run an alignment.

Attempted the Saturday Calibration, but it was most likely not successful (I ran a 2nd Calibration at the end of the shift, which was SUCCESSFUL!).
LOG:

H1 CAL (CAL)
corey.gray@LIGO.ORG - posted 16:29, Saturday 19 July 2025 (85864)
Attempt #2: Saturday H1 Calibration Measurement (broadband headless + simulines)

NOTE:  Saw that L1 had a lockloss, so I took the opportunity to run my 2nd Calibration of the day (WITHOUT any CTRL-Cs!!!)

Measurement NOTES:

Attached is a screenshot of the Calibration Monitor + a pdf of the PyDARM report (note that I only ran "pydarm report" rather than "pydarm report --skip-gds").

Images attached to this report
Non-image files attached to this report
H1 CAL (CAL)
corey.gray@LIGO.ORG - posted 14:25, Saturday 19 July 2025 (85862)
(Probably Errant) Saturday H1 Calibration Measurement (broadband headless + simulines)

Summary

Measurement NOTES:

Attached is a screenshot of the Calibration Monitor; unfortunately, I did not get to run a PyDARM report.  I'm assuming this is due to my CTRL-C from the headless measurement noted above, because at the end of this measurement there was also an SDF diff!  Luckily, Tony was here and was able to take care of the SDF diff.  The SDF was for PCAL Y (medm is SiteMap/CAL EY), and it was related to the In-Loop (OFS) PD (H1:CAL-PCALY_OFS_PD_OUT16) being railed at -7.8.  Tony fixed this by toggling the Loop Enable button (H1:CAL-PCALY_OPTICALFOLLOWERSERVOENABLE) to Off and then On.  This is all mentioned at the top of the PCal Known Issues wiki.
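
For reference, a minimal sketch of the loop-enable toggle described above (channel names come from this entry; the actual toggle was done via the MEDM button, and the 0/1 = Off/On convention is an assumption):

import time
from epics import caput, caget

ENABLE_CH = "H1:CAL-PCALY_OPTICALFOLLOWERSERVOENABLE"
OFS_PD_CH = "H1:CAL-PCALY_OFS_PD_OUT16"

caput(ENABLE_CH, 0)   # Loop Enable -> Off
time.sleep(1)
caput(ENABLE_CH, 1)   # Loop Enable -> On

print("OFS PD readback:", caget(OFS_PD_CH))   # should come off the -7.8 rail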

Once the SDF was cleared, H1 was taken back to Observing, but there was discussion about trying to run the calibration again since L1 was still relocking. We opted not to drop out of Observing for this since we had already been out of Observing for over 30 min.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 12:27, Saturday 19 July 2025 (85863)
SaturDAY Mid-Shift Status

Smooth sailing thus far with H1 locked for almost 9 hrs (H1 even rode through two M5+ earthquakes off the Guatemalan coast!).  Delayed the Saturday calibration to allow L1 to thermalize after their recent lockloss; will start the calibration in about 30 min.

LHO VE
david.barker@LIGO.ORG - posted 10:23, Saturday 19 July 2025 (85861)
Sat CP1 Fill

Sat Jul 19 10:09:44 2025 INFO: Fill completed in 9min 40secs

 

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 07:57, Saturday 19 July 2025 (85859)
Sat DAY Ops Transition

TITLE: 07/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 1mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY:

H1's been locked almost 4.5hrs with a decent night; microseism continues to drop and is below the 50th percentile and winds have been calm the last 7hrs.

H1 General
anthony.sanchez@LIGO.ORG - posted 22:03, Friday 18 July 2025 (85858)
Friday Eve Shift Report.

TITLE: 07/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
I inherited an unlocked IFO. After a few DRMI locking attempts, I was able to relock the IFO.
But the changes to SR3  H1:SUS-SR3_M1_DITHER_P_OUTPUT were reverted by SDF Revert after a DRMI lockloss.

H1 was locked at NLN at 2:26:39 UTC
And Observing at 2:18:19 UTC.

SQZ_MANAGER dropped from FREQ_DEP_SQZ and took H1 into commissioning at 2:28:38 UTC.
SQZ_FC was stuck between GR_SUS_LOCKING and DOWN.
H1:SUS-FC2_M1_OPTICALIGN_P_OFFSET and its yaw counterpart were moved to relock the FC.
We got back to Observing at 3:00:01 UTC.

LOG:
No Log.

H1 SUS (SUS)
edgard.bonilla@LIGO.ORG - posted 20:34, Friday 18 July 2025 (85857)
Changes to the HLTS_W_EST model to test the OSEM estimator on H1 SR3

Edgard, Ivey, Brian.

Relevant FRS ticket : 32526

We made modifications to the HLTS_W_EST and estimator library parts to add DQ channels that monitor the total drive request to the M1 OSEMs with and without the estimator damping. In passing, we made a few changes to the names of channels on the EST block (by modifying ESTIMATOR_PARTS.mdl) to make them a bit more readable and less redundant. These changes will affect only the H1 SR3/PR3 models.

The changes were committed to the userapps svn under revision 32426.

 

Oli mentioned that they will do a model restart to get these changes in on Tuesday, as long as we got the changes in before Monday.

The estimator MEDM screens haven't been updated yet, but I think Brian will get to it on Monday.

____________

This is a summary of the library part changes [see the attached pdf for screenshots of these changes in the library parts]:

SIXOSEM_T_STAGE_MASTER_W_EST.mdl

HLTS_MASTER_W_EST.mdl

ESTIMATOR_PARTS.mdl

 

 

Non-image files attached to this report
H1 DetChar (DetChar, PEM)
derek.davis@LIGO.ORG - posted 17:22, Friday 18 July 2025 - last comment - 11:20, Thursday 24 July 2025(85856)
20.2 Hz line appeared Jun 9, turns on and off

Prompted by me noticing on-off behaviors in the daily strain spectrogram for today at around 20.2 Hz, I've done some additional investigations into the source and behavior of this line: 

The 20.2 Hz line, which is currently prominent in DARM, first appeared in accelerometer and microphone data from the corner station on June 9. The first appearance of this line that I found was in the PSL mics, as shown in this spectrogram. This line then appeared in DARM in the first post-vent locks a few days later. The summary of work from June 9 does not show anything obvious to me that would be the source of this new noise.

This feature also turns off and on multiple times during the day. An example from today can be seen in this spectrogram. Most corner station microphones and accelerometers exhibit this feature, but it is most pronounced visually in the PSL microphone spectrograms. I was unable to identify any other non-PEM channels that showed the same on-off behavior, but this does reveal many change points that should aid in tracking down the source. Almost every day, this line exhibits abrupt on-off features at different times of the day and for varying durations. Based on my initial review, these change points appear to be more likely during the local daytime (although not at any specific time).  When the line first appeared, it was usually in the "off" state and then turned on for short periods. However, this has slowly changed, so that now the line is generally in the "on" state and turns off for brief periods. 
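
For anyone wanting to reproduce this kind of check, here is a minimal gwpy sketch for a spectrogram around 20.2 Hz; the PSL microphone channel name and the time span are illustrative assumptions:

from gwpy.timeseries import TimeSeries

# Assumed corner-station PSL microphone channel; adjust to the actual PEM channel.
chan = "H1:PEM-CS_MIC_PSL_CENTER_DQ"
data = TimeSeries.get(chan, "2025-07-18 00:00", "2025-07-18 12:00")

# 60 s strides, 30 s FFTs; the square root gives an ASD spectrogram.
spec = data.spectrogram(60, fftlength=30, overlap=15) ** (1/2.)
plot = spec.crop_frequencies(15, 25).plot(norm="log")
plot.savefig("psl_mic_20Hz_spectrogram.png")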

  

Images attached to this report
Comments related to this report
derek.davis@LIGO.ORG - 09:24, Monday 21 July 2025 (85887)

Looking into past alogs, I noticed that I reported this same issue last summer in alog 79948. Additional discussion about this line can be found in the detchar-requests repository (requires authentication). In this case, the line appeared in late spring and disappeared in early autumn of 2024. No source was identified before the line disappeared. 

Going back further, I also see the same feature appearing in late spring and disappearing in early autumn of 2023. The presence of the line is hence correlated with the outside temperature, likely related to some aspect of the air conditioning system that is only needed when it is (roughly) hotter outside than inside. This also means that we can expect this line to remain present in the data until autumn unless mitigation measures are taken.

timothy.ohanlon@LIGO.ORG - 11:20, Thursday 24 July 2025 (85959)

I looked briefly into the 20 Hz noise without much success. Comparing the floor accelerometers, the noise is louder in the EBAY than in the LVEA (although the EBAY accelerometer signal hasn't looked good since the vent). The next closest is HAM1, followed by BS. So the noise is around the -X/-Y corner of the LVEA, likely in the EBAY, Transition Area, or Optics Lab, because HAM6 sees less motion than HAM1 and the EBAY sees the most.

Images attached to this comment
H1 CAL
elenna.capote@LIGO.ORG - posted 17:14, Friday 18 July 2025 (85851)
Summary of Calibration Confusion So Far

For background, I attempted to push a new calibration on 7/3 to account for the change in the SRCL offset that we made on 6/26, but it failed because the broadband PCAL measurement showed a larger uncertainty than we had beforehand (see 85529). Since then, we have been running with the same calibration we have had since 6/10, which has a low error (~3%) but is based on a model that we know to be incorrect. Namely, the model created and pushed on 6/10 has a small, positive spring, and we now believe that DARM has no spring down to at least 10 Hz. We are especially confused because we expected the model change to be focused on the 10-30 Hz region, since this is the band where we expect significant change due to the SRCL offset, but the measurement shows a large (>5%) error at 100 Hz.

I have made a series of plots comparing a variety of PCAL broadband measurements from different points since 6/10, measuring PCAL with GDS CALIB STRAIN and CAL DELTA L.

Plot 1 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 6/11, after we pushed a new calibration modeled with a positive spring. The calibration at this point was very good; the calibration line uncertainties showed errors of 3% or less. However, this plot already shows something a bit confusing: a difference between CAL DELTA L and GDS CALIB STRAIN, where GDS CALIB STRAIN has a higher uncertainty around 70-200 Hz. We believe the application of the kappas should further reduce the uncertainty of GDS CALIB STRAIN.

Plot 2 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 6/26, after we changed the SRCL offset. The calibration report generated from that day indicates that the sensing function is flatter with the adjusted SRCL offset. Because the calibration still expects a spring, we were not surprised to see that the low-frequency uncertainty changed.

Plot 3 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 7/3, after we pushed a new calibration that was supposed to account for the flatter sensing function. However, we saw that the uncertainty increased at 100 Hz, which we did not expect. This measurement was run slightly early, during the "TDCF burn-in", so it may not be an accurate look at the effect of the new calibration.

Plot 4 shows CALIB STRAIN/PCAL and DELTA L/PCAL on 7/3, after we pushed the new calibration and had only been relocked for 10 minutes. The uncertainty was even larger than in the previous measurement. We were also very confused that CAL DELTA L changed significantly compared to plot 3. We're not sure if the kappas were significantly different from 1, which could also cause problems in GDS CALIB STRAIN when applied.
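
For context, the attached ratio plots are essentially transfer-function estimates between the calibrated strain channels and the PCAL excitation readback. A minimal sketch of one such estimate follows; the PCAL receiver channel name, the times, and the omitted PCAL counts-to-strain calibration are assumptions:

import numpy as np
from gwpy.timeseries import TimeSeries

start, end = "2025-07-19 21:20", "2025-07-19 21:35"
strain = TimeSeries.get("H1:GDS-CALIB_STRAIN", start, end)
pcal = TimeSeries.get("H1:CAL-PCALY_RX_PD_OUT_DQ", start, end)

# Averaged CSD over PSD gives the transfer function during the broadband injection.
fftlen, olap = 8, 4
tf = strain.csd(pcal, fftlength=fftlen, overlap=olap) / pcal.psd(fftlength=fftlen, overlap=olap)

# The PCAL counts-to-strain calibration still needs to be applied before |TF|
# can be read as a percent-level ratio like the attached plots.
plot = np.abs(tf).plot(xscale="log", yscale="log")
plot.savefig("gds_over_pcal.png")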

Images attached to this report