H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 09:59, Friday 31 May 2024 - last comment - 12:15, Friday 31 May 2024(78158)
Lockloss

Lockloss @ 05/31 16:45 UTC due to earthquake

We were out of Observing to touch up SQZ and A2L when we lost lock (78155). We were able to finish adjusting SQZ and running A2L for ETMX.

Comments related to this report
oli.patane@LIGO.ORG - 12:15, Friday 31 May 2024 (78161)

19:13 UTC Back to Observing

H1 General
oli.patane@LIGO.ORG - posted 09:36, Friday 31 May 2024 - last comment - 12:15, Friday 31 May 2024(78155)
Out of Observing

Out of Observing at 16:32 UTC to try to tune up A2L and SQZ. We may lose lock from incoming earthquakes in a few minutes.

Comments related to this report
sheila.dwyer@LIGO.ORG - 09:48, Friday 31 May 2024 (78156)

Our range hovered around 150 Mpc overnight; here's a screenshot of the DARM coherence checks from the operator LowRangeChecks.

This shows that some of our noise around 20 Hz is due to ASC, so we want to run A2L to see if we can improve that.

Want to change gain from 3.19 to 3.25, rounded to 2 decimal places. St.Div is 0.56. Change of -0.061
H1:SUS-ETMX_L2_DRIVEALIGN_P2L_SPOT_GAIN => 3.25

Want to change gain from 4.89 to 4.9, rounded to 2 decimal places. St.Div is 0.99. Change of -0.011
H1:SUS-ETMX_L2_DRIVEALIGN_Y2L_SPOT_GAIN => 4.9
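As an illustration only (not the actual A2L script), here's a minimal sketch of applying these spot-gain values with pyepics from a CDS workstation; the dictionary below just repeats the values quoted above:

from epics import caget, caput

# New A2L spot gains from the measurement above (normally written by the A2L script / guardian)
new_gains = {
    "H1:SUS-ETMX_L2_DRIVEALIGN_P2L_SPOT_GAIN": 3.25,
    "H1:SUS-ETMX_L2_DRIVEALIGN_Y2L_SPOT_GAIN": 4.9,
}

for pv, value in new_gains.items():
    old = caget(pv)                  # read back the current gain first
    caput(pv, value, wait=True)      # write the new gain
    print(f"{pv}: {old} -> {value}")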

By the time we started ETMY, the earth was shaking so much that the measurement wasn't good. I've put these ETMX values into the guardian.

 

 

Images attached to this comment
oli.patane@LIGO.ORG - 09:49, Friday 31 May 2024 (78157)
Images attached to this comment
oli.patane@LIGO.ORG - 12:15, Friday 31 May 2024 (78160)
Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 07:33, Friday 31 May 2024 (78154)
Ops Day Shift Start

TITLE: 05/31 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 7mph Gusts, 5mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

We're Observing and have been Locked for almost 15 hours. Everything looks nominal

LHO General
corey.gray@LIGO.ORG - posted 01:00, Friday 31 May 2024 (78150)
Thurs EVE Ops Summary

TITLE: 05/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

After locking H1 at the beginning of the shift, there was some squeezer work for about an hour, and then H1 was Observing for most of the rest of the shift (except for a quick drop noted below). The final minutes of the shift were capped off with a gravitational-wave candidate.
LOG:

H1 SQZ (SQZ)
corey.gray@LIGO.ORG - posted 23:51, Thursday 30 May 2024 (78153)
Squeezer On/On+Noisy/Off Comparisons

At around 0528 UTC, I noticed a slight step down in range. The telltale sign of this instance was a slight increase in DARM from 18-30 Hz, resulting in a drop in range of about 5 Mpc. Took this opportunity to drop H1 from Observing to turn OFF squeezing, run a DARM spectrum, and then return to Observing.

     Plot #1:

     Plot #2:  Range over the last 6hrs with the drop in range noted.

This dtt measurement is located at:  /ligo/home/corey.gray/Templates/dtt/DARM_05312024.xml

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 20:52, Thursday 30 May 2024 (78152)
Mid-Shift Status (Thurs Eve)

H1's been locked for 4 hrs (& observing for ~3 hrs). Range is hovering just under 150 Mpc. I'm mindful of any drops in range so that I can drop out of Observing, turn OFF squeezing, and then run a measurement for Sheila, just to see whether we can determine if squeezing is responsible for the range drops, but I've not had the opportunity so far tonight.

H1 ISC (ISC)
corey.gray@LIGO.ORG - posted 17:53, Thursday 30 May 2024 (78151)
SDF Diffs: LSC & LSC AUX

LSC's SDF diff for the MICH feedforward gain change (0.97 to 1.00) was confirmed by Sheila and accepted.  (1st attachment)

LSCAUX's SDF diffs for LSC LOCKIN_1_Freq (& TRamp) were a mystery to Sheila, but since the amplitude was 0 for this, they were ACCEPTED.  (2nd attachment)

Images attached to this report
H1 ISC (OpsInfo)
ryan.short@LIGO.ORG - posted 16:53, Thursday 30 May 2024 (78091)
O4 IR Frequency Values

Continuing the effort I started in alog77692, I've created histograms showing the various frequencies used in the FIND_IR main locking state for both COMM and DIFF. For COMM I used the actual VCO frequency (since the offset is zeroed early on) and for DIFF I used the offset value. The data starts from the beginning of O4.

It became clear as I gathered the data for these plots that, for both COMM and DIFF, there are two main groups where the frequency likes to be for IR locking. There were also occasional outliers, which look to be from a person finding the IR resonance by hand by moving the COMM or DIFF offset.

COMM:

Lower group offsets: {78893888.0, 78893760.0, 78893664.0, 78893568.0, 78893824.0, 78893856.0, 78893832.0, 78893864.0, 78893512.0, 78893712.0, 78893616.0, 78893880.0, 78893840.0, 78893808.0, 78893912.0, 78893816.0}
Upper group offsets: {78931328.0, 78931232.0, 78931432.0, 78931080.0, 78931368.0, 78931376.0, 78931184.0, 78931440.0, 78931280.0, 78931128.0, 78931032.0}

DIFF:

Lower group offsets: {385.0, 388.0, 390.0, 391.0, 394.0, 394.5, 395.0, 397.0, 398.3, 399.0, 400.0, 402.0, 403.0, 405.0, 406.0, 408.0, 409.0, 410.0, 411.0, 412.0, 413.0, 414.0, 415.0, 417.0, 418.0, 420.0, 421.0, 423.0, 424.0, 425.625, 426.0, 425.0, 427.0, 429.0, 430.0, 432.0, 435.0, 364.0, 496.0, 379.0, 381.0}
Upper group offsets: {3206.5, 3232.0, 3232.9, 3212.5, 3212.0, 3215.5, 3215.0, 3218.5, 3218.0, 3220.0, 3221.0, 3221.5, 3223.0, 3224.0, 3224.5, 3225.0, 3227.0, 3227.5, 3226.0, 3230.0, 3230.5, 3231.0, 3233.0, 3233.5, 3229.0, 3236.0, 3236.5, 3238.0, 3239.0, 3239.5, 3241.0, 3242.0, 3242.5, 3244.0, 3245.0, 3245.5, 3247.0, 3248.0, 3248.5, 3250.0, 3251.0, 3251.5, 3249.0, 3254.0, 3254.5, 3256.0, 3257.0, 3257.5, 3259.0, 3260.0, 3261.0, 3263.0, 3265.0, 3266.0, 3268.0, 3272.0, 3401.0, 3275.0, 3277.0, 3221.2, 3278.0, 3281.0, 3222.0, 3249.2, 3362.5, 3252.0, 3436.0, 3235.0, 3191.0}
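For reference, a minimal sketch (not the actual analysis code) of splitting recorded FIND_IR values into the lower/upper groups above and histogramming them; the example values and the split threshold are assumptions read off the gap between the two clusters:

import numpy as np

# A few example DIFF offsets from the lists above; in practice these come from
# the FIND_IR records since the start of O4.
diff_offsets = np.array([385.0, 410.0, 425.0, 3212.5, 3232.0, 3249.0])

threshold = 1000.0  # assumed split point, chosen to sit in the gap between the groups
lower = np.sort(diff_offsets[diff_offsets < threshold])
upper = np.sort(diff_offsets[diff_offsets >= threshold])

counts, edges = np.histogram(diff_offsets, bins=50)  # same style as the attached histograms
print("lower group:", lower)
print("upper group:", upper)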

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:18, Thursday 30 May 2024 (78138)
Ops Day Shift End

TITLE: 05/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Moved our SR3, SR2 alignment back to the "+Y" alignment (referencing alog 77694) that we had after the late April optical gain loss and subsequent SR Y movements. On Tuesday (alog 78096) they were moved to a "+P" position that didn't seem to help us much over the last few days, so today we tried to move to the "-P" location. We got past DRMI, but then it looked like there was clipping in the SRC, and we eventually lost lock trying to improve the buildups with SR3 movement. We decided to go back to a known good place, the "+Y". As soon as we got to low noise and tried to adjust the squeezer, we lost lock when moving ZM4 to its old position. Currently relocking.
LOG:

Start Time System Name Location Laser_Haz Task End Time
19:13 SAF - LVEA YES HAZARD 05:53
16:33 PEM Robert LVEA YES Viewport viewing 17:45
17:46 PEM Robert EX n ESD checks 19:16
19:30 PEM Robert LVEA - LVEA Sweep 19:50
20:17 VAC Janos, Isaiah Mech room n Working on purge pumps 23:12
20:24 VAC Gerardo LVEA - Retrieve small turbo pump 20:27
20:51 SQZ Terry Opt Lab local SHG work 22:45
21:42 PEM Robert LVEA YES Setup camera at viewport 21:56
LHO General
corey.gray@LIGO.ORG - posted 16:14, Thursday 30 May 2024 (78149)
Thurs EVE Ops Transition

TITLE: 05/30 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

H1 was actively locking as I walked in (TJ mentioned squeezer alignment knocked us out of the last lock), and this is with the OFI pointing reverted to a +Yaw spot. Got the note that, due to the SQZ alignment change, after NLN we should take a locked squeezer to the two additional states scan_alignment_fds + scan_sqzang.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:54, Thursday 30 May 2024 (78121)
Noise, optical gain, spot position, while we move the spot on the OFI

On Tuesday we moved the spot in the SRC to one of the spots (-P, second column) that TJ listed in 77694. This was partially motivated by some modeling that Alenna has done, which suggests that our original (now possibly burned) spot in the OFI was below the center, so we might benefit from moving to a spot above the burned spot to be closer to center in the OFI. Jenne measured the spot position on SR2 and SRM in 788119 by minimizing the coupling of angle dithers to DARM, rather than SRCL, which indicated that we actually moved lower on SRM. We'd like to check if we get the same result when measuring the A2L coefficient using SRCL rather than DARM.

The second attached screenshot shows how our kappa C (normalized optical gain) has changed as we've moved in the SRC: the initial recovery after the OFI damage is shown at ~38 days ago, where the optical gain dropped to 96%, with recovery to 98% after the large SR3 yaw move 35 days ago. Around 29 days ago, Jennie Wright updated the OMC QPD offsets to bring kappa C back to near 1. As she noted in 78143, this change in offsets wasn't due to the damage of the Faraday; we had run a similar QPD spot scan before the OFI incident, which indicated that we would have gained optical gain by moving to these same offsets even before the OFI was damaged. This implies that if we hadn't lost any optical gain due to the Faraday damage, we should now have a kappa C of 1.02 with the optimized QPD offsets. In the last few days, when SR3 was at the -P position from 77694, we had slightly more optical gain (kappa C of 1.01, meaning 2% better throughput).

In this -P position, the range was not as good as at the position we've been using since late April (+Y in 77694). The first attachment shows a low-frequency noise comparison. Interestingly, below 15 Hz the noise was better in the -P alignment, but worse from 20-45 Hz. Some of this extra noise is from the LSC FF, which seems to need to be retuned at this alignment. I made an attempt yesterday to rescale the MICH FF, but this wasn't very successful, as it seems we would need to refit the filter to get good feedforward subtraction. The LSC coherence, though, wouldn't explain all of this excess noise, especially not the 25 Hz peak.

Today, we decided to try the other pitch spot, to see if that would put us closer to the center of the OFI. TJ moved us to the spot called +P in 77694. As soon as the ASC started to come on in full lock, it was clear that this is not a good location, with huge power fluctuations in the PRC, POP18, and POP90. We were sitting in RF lock with this from 5/30 20:44-21:20 UTC. During this time we injected a 31 Hz line on SRM pitch and adjusted the P2L gain to minimize its appearance in both SRCL and RF DARM. This wasn't a very precise measurement, but for both SRCL and DARM the line was smallest at a P2L gain of -1.5 (compared to -1 or -2), indicating that this position is closer to centered on SRM than the other two positions.
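As an aside, a minimal sketch of the kind of line-amplitude estimate behind that comparison (we actually used DTT; the data array and sample rate here are placeholders): demodulate each signal at the injected 31 Hz and compare the recovered amplitude for each trial P2L gain.

import numpy as np

def line_amplitude(data, fs, f_line=31.0):
    """Estimate the amplitude of a sinusoidal line at f_line (Hz) in data
    sampled at fs (Hz) by demodulating against quadrature templates."""
    t = np.arange(len(data)) / fs
    i = np.mean(data * np.cos(2 * np.pi * f_line * t))
    q = np.mean(data * np.sin(2 * np.pi * f_line * t))
    return 2 * np.hypot(i, q)  # amplitude of the line

# Repeat for each trial P2L gain (-1, -1.5, -2) on both SRCL and RF DARM,
# and pick the gain that minimizes the recovered 31 Hz amplitude.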

We've now gone back to the position that we had been using since late April, as that has the best range.  We may want to continue looking for better spots next week, to see if we can fully recover optical gain to 1.02. 

 

Images attached to this report
H1 General (SUS)
camilla.compton@LIGO.ORG - posted 12:05, Thursday 30 May 2024 (78144)
Lockloss from Swapping from ITMX to ETMX ESD DARM control

Camilla, Sheila, Robert

We successfully swapped from EX to IX ESD DARM control for Robert's bias sweep by manually taking SUS_CHARGE to SWAP_TO_ITMX. On the swap back, we again manually took SUS_CHARGE to SWAP_BACK_ETMX; this caused a lockloss, as we forgot to return the ETMX bias to its normal value, so the BIAS MASTER_OUT_DCMON was around -487,000 rather than the nominal +170,000. Sorry.
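As a guard against repeating this, here's a minimal sketch (hypothetical, not part of SUS_CHARGE) of checking the ETMX bias monitor before requesting SWAP_BACK_ETMX; the monitor channel name below is a guess based on the value quoted above and should be confirmed against the MEDM screen:

from epics import caget

NOMINAL_BIAS = 170000  # nominal ETMX bias MASTER_OUT value quoted above (counts)
TOLERANCE = 10000      # assumed acceptable deviation (counts)

# Hypothetical channel name for the ETMX bias DC monitor
bias = caget("H1:SUS-ETMX_L3_LOCK_BIAS_MASTER_OUT_DCMON")
if abs(bias - NOMINAL_BIAS) > TOLERANCE:
    print(f"ETMX bias is {bias:.0f}, not near +{NOMINAL_BIAS}; ramp it back before swapping.")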

Unrelated to today's lockloss, but this should stop locklosses caused by in-lock charge measurements: commit 27861.
This SWAP_BACK_ETMX state has been causing locklosses on Tuesday mornings since we swapped to the new DARM. Today we found this was due to out-of-date L3_DRIVEALIGN_L2L filter settings (FM4, FM5 rather than FM7) hard-coded in RAN_ESD_EXC.py. We edited this and expect it will stop future locklosses from in-lock charge measurements. Sheila also found a ramping inconsistency in SUS_CHARGE SWAP_TO_ITMX that we had been surviving, and edited that too.
Changed the in-lock excitation frequencies to the new ones suggested in 76492 (which avoid the 14 Hz roll mode) that we had never adopted: ETMs 12 Hz, ITMX 13 Hz, ITMY 15 Hz. Now we've got recent data with all test masses. Tagging SUS.
H1 TCS (TCS)
ibrahim.abouelfettouh@LIGO.ORG - posted 11:51, Thursday 30 May 2024 (78145)
TCS Chiller Water Level Top-Off (Bi-Weekly, FAMIS #27790)

TCS Chiller Water Level Top-Off (Bi-Weekly, FAMIS 27790)

Closes FAMIS 27790

H1 ISC
jennifer.wright@LIGO.ORG - posted 11:21, Thursday 30 May 2024 (78143)
No need to tune OMC ASC QPD offsets every time alignment into OFI changes

Jennie W, Sheila

 

I compared the two OMC DCPD plots we made while rastering the QPD offsets on the 19th and 26th of April, i.e. before (#77294) and after (#77440) we changed the alignment through the OFI.

These show the same trend for offsets in all four degrees of freedom, i.e. pitch and yaw in OMC QPDs A and B.

This seems to imply that we do not need to change the QPD offsets if we change the positions of SR3 and SR2 to move the spot on the OFI.

Images attached to this report
H1 SQZ
camilla.compton@LIGO.ORG - posted 11:21, Thursday 30 May 2024 (78140)
Adjusted ZM2/3 alignment to look for OPOS clipping, none seen

Vicky, Sheila, Camilla

Adjusted ZM2/ZM3 alignment and see no clipping on this path (though VOPS). We moved pitch then yaw each by around 40 urad, via moving ZM2 and letting the FC ASC move ZM3, FC1, and FC2. I increased H1:SQZ-FC_ASC_INMATRIX_P,Y_RAMPING for INJ_ANG to speed up the ZM3 control. 40 urad is much more than the usual fluctuation (comparison attached).

If we had been clipping, we would have expected the power on the FC REFL WFS to change (Vicky showed they had drifted over the past ~40 days) or the FC WFS Q to change. Neither changed by more than the original drift; see the zoomed-out plot.

SFI2 temperature changes and injections were happening at the same time, plus some stray-beam watching by Robert. We're unsure of the reason for the CLF power decrease and WFS increase; they seem uncorrelated with the alignment change.

Images attached to this report
H1 SQZ
naoki.aritomi@LIGO.ORG - posted 11:14, Thursday 30 May 2024 (78125)
SQZ scatter noise investigation

Naoki, Andrei, Sheila

To investigate the scatter noise from SQZ at low frequency, we excited the ZM2 or ZM5 length at 1 Hz with the same amplitude. The attached figure shows the scatter shelf in DARM with the ZM2 or ZM5 excitation.

The scatter shelf from the ZM5 excitation is much larger than from ZM2, which means that the origin of the scattered light is different for these two scatter shelves. The scattered light with the ZM2 excitation should come from the FC, but the scattered light with the ZM5 excitation should come from between ZM2 and ZM5.
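For context, a minimal sketch of the standard fringe-wrapping estimate for where such a shelf should cut off; the excitation amplitude below is a made-up placeholder, not the amplitude we actually drove:

import numpy as np

wavelength = 1064e-9  # m, main interferometer laser wavelength
f0 = 1.0              # Hz, excitation frequency used in this test
A = 1e-6              # m, placeholder excitation amplitude (assumption)

v_max = 2 * np.pi * f0 * A        # peak velocity of the driven optic
f_shelf = 2 * v_max / wavelength  # maximum fringe frequency = upper edge of the scatter shelf
print(f"predicted shelf cutoff ~ {f_shelf:.1f} Hz")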

We also changed the SFI2 temperature. The scatter shelf does not change much with SFI2 at 25.2 or 35.2, but got worse with SFI2 at 20.2 for the ZM2 excitation.

Images attached to this report
H1 DetChar (DetChar)
ansel.neunzert@LIGO.ORG - posted 11:01, Thursday 30 May 2024 (78142)
9.5 Hz comb triplet likely started early Sept 2021 amid PSL upgrade work

Ansel and Sheila following up on 77990. This work relates to the 9.5 Hz comb triplet.

Channels involved

(Thanks to Sheila for guidance on channels and times here!)

Comb history

With this information in hand, it was possible to track the 9.5 Hz combs back to a likely start date on or near Sept 2, 2021. Details:

Checking the alogs around that time, there was quite a bit of PSL upgrade work ongoing, as summarized in 59832.

This work is ongoing; as before, details and work-in-progress notes can be found on gitlab. Anamaria and Matt H have supplied a number of times to look at LLO data (recall that these combs have counterparts at LLO), particularly for dark spectra of the ISS PDs. This is next on the to-do list.

What about the 11.9 Hz comb mentioned in the previous alog?

It's also in a lot of these channels! However, unlike the 9.5 Hz, I don't see a notable change point in early September. This is not really surprising, since the 11.9 Hz comb was noted in O3. I'm focusing on the 9.5 Hz combs first, which are more pervasive, but keeping an eye out for more 11.9 Hz clues along the way.

Anybody interested in looking at the spectra can find the full set of plots here (more will be added as I work): https://ldas-jobs.ligo-wa.caltech.edu/~ansel.neunzert/iss_channels_combs/ .

Images attached to this report
H1 SQZ (CDS)
camilla.compton@LIGO.ORG - posted 08:28, Wednesday 29 May 2024 - last comment - 12:21, Thursday 30 May 2024(78111)
SQZ ASC clear history/offload scripts ran when I logged in

As I logged in this morning, the SQZ ASC and FC ASC offload and graceful-clear-history scripts that I ran and left open yesterday (whoops) reran at 15:16 UTC. Before I could save the SDF diffs, we went through the DOWN SDF revert. There was nothing in SQZ ASC, but FC ASC got cleared. As squeezing hasn't been in a good alignment over the last 12 hours, this doesn't really matter, but if we were in Observing it would have knocked us out.

Reminder to close scripts once used. CDS, is there a way we can prevent this from happening?

Comments related to this report
camilla.compton@LIGO.ORG - 12:21, Thursday 30 May 2024 (78147)

Currently these scripts are called from medm via the command 'xterm -g 80x15 -hold -e python3 clear_FC_ASC.py &' or similar.

After speaking with Dave and Jonathan, there are two things to be done to avoid this in the future:

  • When logging out, un-select "Save session for future logins", and the xterms shouldn't be reopened on the next login.
  • We should not use the -hold option from the medm, as this keeps the window permanently open. If we want the xterm to stay open long enough to read any error text, but to eventually close itself, we could tack an extra sleep onto the end of the python script, e.g.:
    • In the medm's .xml file, remove '-hold'
    • Add the following to the end of our scripts, with a sleep timer long enough to read/record the terminal output:

print("done")
time.sleep(60)  # needs "import time" at the top of the script; keeps the window open long enough to read/record the output
sys.exit(0)     # needs "import sys" at the top of the script
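For example, the medm exec line would then look something like the command quoted above, minus -hold:

xterm -g 80x15 -e python3 clear_FC_ASC.py &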
