Displaying reports 461-480 of 85368.
Reports until 14:40, Thursday 09 October 2025
H1 SEI (OpsInfo)
thomas.shaffer@LIGO.ORG - posted 14:40, Thursday 09 October 2025 (87395)
Earthquake response

I updated Jim's "rasta plot" with a line marking where Jim thinks we should turn on the high ASC to help us ride through earthquakes. This threshold is a guess and will need more test data, so we will very likely adjust it in the near future. Operators, please transition to the high ASC before the earthquake arrives. You can transition back once the earthquake has sufficiently calmed down, perhaps even waiting until the SEI_ENV node goes out of Earthquake mode.

Interestingly, the M6.0 from Russia this morning would have been in the high ASC realm, but we look to have scraped by and survived.

I'm ready to test the Guardian version of this high ASC switch whenever we get the chance. We had a lock loss just before trying it today, so perhaps Monday or opportunistically over the weekend. In the meantime, operators will have to do the switching manually.

Images attached to this report
H1 SEI
jim.warner@LIGO.ORG - posted 12:03, Thursday 09 October 2025 (87393)
HAM2,3,6 coupling to DARM

Over the last several weeks, I have collected some HAM ISI to DARM coupling measurements. These are the only HAM ISIs that don't have st0 L4C feedforward, and overall they should have worse performance than the other HAMs. The attached PDFs show the coupling to DARM for the closest analogs to their associated cavity length, pitch, yaw, and vertical DOFs. It seems like HAM2 generally has the worst coupling, then HAM6, then HAM3. HAM2 is already planned to receive st0 L4Cs in the next vent, as well as lower-noise vertical CPS. HAM3 is only planned to receive the lower-noise CPS, for now. For HAM6, I have been trying ground to HEPI L4C feedforward, but so far I haven't found a stable filter and don't yet understand why that feedforward is not working at HAM6.
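This is not the actual measurement pipeline, just a generic sketch of how a coupling transfer function like these can be estimated: take the ratio of the cross-spectral density between a witness channel and DARM to the witness power spectral density, and use coherence to judge which frequency bins are trustworthy. All signals below are synthetic stand-ins.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 256  # Hz, sample rate for this synthetic example

# Synthetic stand-ins for an ISI motion witness and DARM:
# DARM = 0.5 * witness + independent sensor noise.
witness = rng.standard_normal(fs * 300)
darm = 0.5 * witness + 0.1 * rng.standard_normal(witness.size)

# Coupling TF estimate: H(f) = CSD(witness, darm) / PSD(witness)
f, Pxy = signal.csd(witness, darm, fs=fs, nperseg=fs * 8)
f, Pxx = signal.welch(witness, fs=fs, nperseg=fs * 8)
coupling = Pxy / Pxx

# Coherence tells you which bins of the estimate to trust.
f, coh = signal.coherence(witness, darm, fs=fs, nperseg=fs * 8)

print(np.median(np.abs(coupling)))  # ~0.5 for this synthetic data
```

With real data, only the high-coherence bins of `coupling` are meaningful; low-coherence bins are dominated by the independent noise.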

Non-image files attached to this report
H1 CAL
elenna.capote@LIGO.ORG - posted 11:50, Thursday 09 October 2025 - last comment - 09:31, Friday 10 October 2025(87390)
Some attempts to fix the calibration

Comparing the last two calibration reports, we noticed that there has been a significant change in the systematic error, 87295. It's not immediately obvious what the cause is. We are aware of two current problems: there is test mass charge, with kappa TST up by more than 3%, and we have lost another 1% of optical gain since the power outage.

One possible source is the SRC detuning changing (not sure how this could happen, but it might change).

Today, I tried to correct some of these issues.

Correcting actuation:

This is pretty straightforward: I measured the DARM OLG and adjusted the L3 DRIVEALIGN gain to bring the UGF back to about 70 Hz and kappa TST back to 1. This required about a 3.5% reduction in the drivealign gain, from 88.285 to 85.21. I confirmed that this did the right thing by comparing the DARM OLG to an old reference and watching kappa TST. I updated the guardian with this new gain, SDFed it, and changed the standard calibration pydarm ini file to have this new gain. I also remembered to update this gain in the CAL CS model.
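The gain correction itself is simple arithmetic: since the measured kappa TST multiplies the modeled actuation strength, dividing the drivealign gain by it restores kappa TST to roughly 1. The kappa value below is inferred from the two gains quoted above rather than stated directly in this entry.

```python
old_gain = 88.285            # previous L3 DRIVEALIGN gain (from this entry)
kappa_tst = 88.285 / 85.21   # implied ~1.036, i.e. actuation up ~3.6%

# To restore kappa_TST ~ 1, scale the gain down by the measured kappa:
new_gain = old_gain / kappa_tst
print(round(new_gain, 2))    # 85.21

percent_change = 100 * (1 - new_gain / old_gain)
print(round(percent_change, 1))  # ~3.5% reduction, as quoted above
```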

Correcting sensing:

Next, Camilla took a sqz data set using FIS at different SRCL offsets, 87387. We did the usual 3 SRCL offset steps, but then we were confused by the results, so we added in a fourth at -450 ct. Part of this measurement requires us to guess how much SRCL detuning each measurement has, so we spent a bunch of time iterating to make our gwinc fit match the data in this plot. I'm still not sure we did it right, but it could also be something else wrong with the model. The linear fit suggests we need about a -435 ct offset. We changed the offset following this result.
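As a sketch of that final step, the target offset falls out of a linear fit of inferred SRCL detuning versus digital offset, solving for the zero crossing. The detuning values below are invented purely to illustrate the fit; the real ones come from iterating the gwinc model against the SQZ data.

```python
import numpy as np

# Illustrative (made-up) inferred detunings at each digital SRCL offset;
# the real values come from fitting the gwinc model to the SQZ data.
offsets_ct = np.array([-450.0, -382.0, -200.0, 0.0])
detuning_deg = np.array([-0.05, 0.18, 0.80, 1.48])  # hypothetical

# Linear fit: detuning = m * offset + b; zero-detuning offset = -b / m
m, b = np.polyfit(offsets_ct, detuning_deg, 1)
offset_zero = -b / m
print(round(offset_zero))  # near -435 ct for these invented numbers
```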

Checking the result:

After these changes, Corey ran the usual calibration sweep. The broadband comparison showed some improvement in the calibration. However, the calibration report shows a significant spring in the sensing function. To compare how this looks relative to previous measurements, I plotted the last three sensing function measurements.

The calibration still had 1% uncertainty on 9/27. On 10/4, the calibration uncertainty increased. Today, we changed the SRCL offset following our SQZ measurement. This plot compares those three times, and includes the digital SRCL offset engaged at each time. I also took the ratio of each measurement with the measurement from 9/27 to highlight the differences. It seems the difference between the 9/27 and 10/4 calibrations cannot be attributed much to a change in the sensing. And clearly, this new SRCL offset makes the sensing function have an even larger spring than before.

Therefore, I concluded that this was a bad change to the offset, and I reverted. Unfortunately, it's too late today to get a new measurement. Since we have changed parameters, we would need a calibration measurement before we could generate a new model to push. Hopefully we can get a good measurement this Saturday. Whatever has changed about the calibration, I don't think it's from the SRCL offset. Also, correcting the L3 actuation strength was useful, but it doesn't account for the discrepancy we are seeing.

Images attached to this report
Non-image files attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 15:05, Thursday 09 October 2025 (87396)DetChar-Request

It turns out that changing the H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN was the WRONG thing to do. The point was to only update the drivealign gain to bring us back to the N/ct actuation strength of the model. We lost lock shortly after I updated the drivealign gains, so I didn't realize the error until just now when I checked the grafana page and saw that the monitoring lines were reporting 10% uncertainty!

Vlad and Louis helped me out. By going out of observing and changing the H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN back to the old value (88.285), I was able to bring the monitoring line uncertainty back down to its normal value (2%). I have undone the SDF in the CAL CS model to correct this.

The observing time section with this error is from 1444075529 to 1444081417.
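For anyone converting the segment bounds, a minimal GPS-to-UTC sketch, assuming the current 18 s GPS-UTC leap-second offset (correct for 2025):

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)
GPS_UTC_OFFSET = 18  # leap seconds as of 2025 (GPS runs ahead of UTC)

def gps_to_utc(gps):
    """Convert a GPS timestamp to a UTC datetime (sub-second ignored)."""
    return GPS_EPOCH + timedelta(seconds=gps - GPS_UTC_OFFSET)

start, end = 1444075529, 1444081417
print(gps_to_utc(start), "to", gps_to_utc(end))
print("duration:", end - start, "s")  # 5888 s, about 98 minutes
```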

Updating this alog after discussion with the cal team to include a detchar-request tag. Please veto the above time!

Images attached to this comment
elenna.capote@LIGO.ORG - 16:40, Thursday 09 October 2025 (87399)DetChar-Request

Request: veto time segment listed above.

elenna.capote@LIGO.ORG - 09:31, Friday 10 October 2025 (87407)

I also did revert the gain change in the pydarm_ini file, but forgot to mention it earlier.

H1 General
corey.gray@LIGO.ORG - posted 11:49, Thursday 09 October 2025 (87394)
Commissioning Lockloss at 1840utc

Just had a lockloss while Robert was out on the floor during commissioning time (ends a 20.5+hr lock).

H1 CAL (CAL)
corey.gray@LIGO.ORG - posted 10:50, Thursday 09 October 2025 - last comment - 11:11, Thursday 09 October 2025(87384)
Thurs H1 Calibration Measurement (broadband headless + simulines)

NOTE:  For this particular Thurs Calibration, Commissioning work was done first for almost 2-hrs and then this Calibration was run.

Measurement NOTES:

Non-image files attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 11:11, Thursday 09 October 2025 (87389)

Still having this weird error with the report generation. I regenerated, see the attachment.

I'm unhappy with this result, but that requires more detail to explain, alog incoming. In quick summary, I don't think I will be validating this report.

Non-image files attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:43, Thursday 09 October 2025 (87388)
Thu CP1 Fill

Thu Oct 09 10:08:21 2025 INFO: Fill completed in 8min 18secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 SQZ
camilla.compton@LIGO.ORG - posted 09:53, Thursday 09 October 2025 (87387)
SQZ FIS SRCL Offset Brontosaurus Plot

Elenna, Camilla, Matt

We took the data for the SRCL offset SQZ brontosaurus plots as in 86737. Plot saved in camilla.compton/Documents/sqz/templates/dtt/20251009_SRCLSQZdata.xml and attached.
 
Type     Time (UTC)            SRCL Offset   Angle      DTT Ref
NoSQZ    15:34:00 - 15:42:00   -382          N/A        ref 0
FIS SQZ  16:01:00 - 16:04:00   -382          (-)154.7   ref 1
FIS SQZ  16:06:30 - 16:09:30   -200          (-)210.6   ref 2
FIS SQZ  16:13:45 - 16:16:45   0             (+)135.6   ref 3
FIS SQZ  16:49:30 - 16:52:30   -450          (-)144.6   ref 4

Took the above data at an NLG of 21.9, measured earlier in 87385.

Images attached to this report
H1 SQZ
camilla.compton@LIGO.ORG - posted 08:43, Thursday 09 October 2025 - last comment - 11:36, Thursday 09 October 2025(87385)
No SQZ Time, Checked NLG

We went into no SQZ from 15:34UTC to 15:42UTC. I checked NLG as in 76542

OPO Setpoint   Amplified Max   Amplified Min   UnAmp       Dark       NLG
80             0.0134871       0.00017537      0.0005911   -2.57e-5   21.9
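For reference, the NLG in the table follows from the dark-offset-corrected ratio of the amplified max to the unamplified level (the convention I believe the referenced procedure uses), which reproduces the quoted value:

```python
amp_max = 0.0134871   # amplified max
unamp   = 0.0005911   # unamplified level
dark    = -2.57e-5    # dark offset

# Dark-offset-corrected ratio of amplified to unamplified power:
nlg = (amp_max - dark) / (unamp - dark)
print(round(nlg, 1))  # 21.9, matching the value in the table
```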
Comments related to this report
camilla.compton@LIGO.ORG - 11:36, Thursday 09 October 2025 (87391)

Elenna noticed this OPO temp change made SQZ worse on the yellow BLRMs. I then ran SQZ_OPO_LR's SCAN_OPOTEMP state, which moved the OPO temp further in the same direction. This unlocked SQZ (which it shouldn't have), but it did make the yellow BLRMs better.

LHO General
corey.gray@LIGO.ORG - posted 07:41, Thursday 09 October 2025 - last comment - 08:50, Thursday 09 October 2025(87382)
Thurs DAY Ops Transition

TITLE: 10/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 4mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.18 μm/s 
QUICK SUMMARY:

H1's been locked 16.5+hrs (even rode through an M6.0 Russia EQ an hr ago!).  

OVERNIGHT:  SQZ dropped H1 from Observing for 4min due to the PMC.  NO Wake up Calls!

ALSO:  Thurs Calibration from 830-noon (local time), where Calibration will be deferred to later in the morning, with Elenna's task and/or Robert's task starting things off.

Comments related to this report
corey.gray@LIGO.ORG - 08:50, Thursday 09 October 2025 (87386)PEM

Did the Dust Monitor Check; a new (to me at least) problem was with DR1 (Diode Room 1).

H1 General
ryan.crouch@LIGO.ORG - posted 22:00, Wednesday 08 October 2025 (87381)
OPS Wednesday EVE shift summary

TITLE: 10/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We stayed locked the whole shift, 7 hours. Calm evening.
LOG:                                                                                                                    

Start Time   System   Name           Location      Laser_Haz   Task                   Time End
22:52        SAF      Laser HAZARD   LVEA          YES         LVEA is Laser HAZARD   13:52
16:48        PSL      RyanS, Jason   Optics lab    YES         NPRO!                  00:43
22:36        ISS      Keita, JennyQ  Optics lab    YES         ISS array align        00:53
22:46        ISS      Rahul          Optics lab    YES         Join ISS array work    00:05
00:02        FAC      Tyler          X1 beamtube   N           Checks                 00:30
H1 SUS (SUS)
ryan.crouch@LIGO.ORG - posted 20:28, Wednesday 08 October 2025 (87380)
Weekly In-Lock SUS Charge Measurement FAMIS28426

Closes FAMIS28426, last checked in alog87272

ETMX's Veff error bars are pretty big, as expected. The values look fairly stable over time.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 16:26, Wednesday 08 October 2025 (87368)
Wed DAY Ops Summary

TITLE: 10/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

One ETMx glitch lockloss, and a fire observed on Rattlesnake.
LOG:

H1 SEI
thomas.shaffer@LIGO.ORG - posted 16:14, Wednesday 08 October 2025 (87378)
H1 ISI CPS Noise Spectra Check - Weekly

FAMIS27393

Nothing new, all looks OK.

Last week's task

Non-image files attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 16:04, Wednesday 08 October 2025 (87377)
OPS Wednesday EVE shift start

TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s 
QUICK SUMMARY:

H1 General
corey.gray@LIGO.ORG - posted 15:33, Wednesday 08 October 2025 (87376)
H1 Lockloss From Infamous ETMx Glitch (Recovery Was Fine)

At 2047utc, H1 went out of Observing due to the SQZ PMC. It was almost back, but then H1 had a lockloss at 2050utc, and this was due to an ETMx glitch. Recovery was fairly quick, but I needed to tweak ETMx and mostly TMSx. And then DRMI looked bad, so I went through CHECK MICH FRINGES + PRMI. Then H1 got back to OBSERVING automatically.

H1 IOO (ISC, PSL)
jennifer.wright@LIGO.ORG - posted 10:56, Wednesday 08 October 2025 - last comment - 18:13, Wednesday 08 October 2025(87373)
ISS measurements on Monday and Tuesday this week

Jennie W, Keita, Rahul

Monday:

Keita and I re-optimised the array input pointing using the tilt actuator on the PZT steering mirror and the translation stage.

After taking a vertical and horizontal coupling measurement, he realised that the DC values were very low when we optimised the pointing to improve the vertical coupling. Looking at the QPD, the cursor was in the bottom half, so we cannot use the QPD y-readout channel to work out the 'A' channel for either measurement (where the TF is B/A).

So for the A channel in the TF for the vertical coupling we had to use 
A = QPD_X/(cos(QPD_angle*pi/180))/sqrt(2)/Calib_X, where A is the time series for the TF, 'QPD_angle' is the angle between the horizontal dither axis and the QPD x-axis, and Calib_X is the calibration of motion on the QPD in the x-axis in V/mm (LHO alog #85897).

And for the A channel in the TF for the horizontal coupling we had to use 
A = QPD_X/(cos(QPD_angle*pi/180))/Calib_X.

The data is plotted here.
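A small sketch of the two formulas above, with the sqrt(2) applied only in the vertical case. The function name and the numbers passed in are hypothetical, just to exercise the formulas:

```python
import numpy as np

def a_channel(qpd_x_volts, qpd_angle_deg, calib_x_v_per_mm, vertical=True):
    """Reconstruct the 'A' time series for the TF from the QPD x readout,
    per the formulas in this entry; sqrt(2) applies only to the vertical
    dither case."""
    a = qpd_x_volts / np.cos(np.radians(qpd_angle_deg)) / calib_x_v_per_mm
    if vertical:
        a = a / np.sqrt(2)
    return a

# Hypothetical numbers just to exercise the function:
x = np.array([0.10, -0.05, 0.02])   # QPD_X time series, volts
result = a_channel(x, qpd_angle_deg=45.0, calib_x_v_per_mm=8.0)
print(result)  # at 45 deg the 1/cos and 1/sqrt(2) factors cancel: x / 8
```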


Yesterday Keita and I double-checked the beam size calculation I did on 26th June, when we reset the alignment to the ISS array from the laser after it got bumped (we think). The calculated beam radius on PD1 (the one with no transmissions through the mirrors) was 0.23 mm in the x direction and 0.20 mm in the y direction. The calculated beam size on the QPD was 0.20 mm in the x direction and 0.19 mm in the y direction. The waist should be close to this point (behind the plane of the array photodiodes), as the Rayleigh range is 9 cm in the x direction and 10 cm in the y direction.
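As a rough cross-check of the quoted Rayleigh ranges, z_R = pi*w0^2/lambda for a Gaussian beam. Assuming the 1064 nm PSL wavelength and treating the quoted spot sizes as waists gives values of order 10 cm; the exact numbers depend on the true waist sizes, which differ slightly from the spot sizes on the PDs.

```python
import math

wavelength = 1.064e-6  # m, assuming the 1064 nm PSL wavelength

def rayleigh_range(w0_mm):
    """z_R = pi * w0^2 / lambda for a Gaussian beam with waist w0."""
    w0 = w0_mm * 1e-3  # mm -> m
    return math.pi * w0**2 / wavelength

# Spot sizes quoted above, treated as waists for this estimate:
print(round(rayleigh_range(0.19) * 100, 1), "cm")  # ~10.7 cm
print(round(rayleigh_range(0.20) * 100, 1), "cm")  # ~11.8 cm
```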

This check is because our beam calibration, as reported in this entry, seems to be at least a factor of 2 off from Mayank and Shiva's measurements reported here (dcc LIGO-T2500077).

Since we already know the QPD was slightly off from our measurements on the 6th October, Keita and Rahul went in and retook the calibration measurements of volts on the QPD to mm on the QPD.

In the process, Keita noticed that the ON-TRAK amplifier for the QPD had the H indicator lit and so was saturating. He turned the input pump current down from 130 mA to 120 mA and the gain value on the amplifier from G3 (64 kOhm) to G2 (16 kOhm). The QPD was also recentred with the input pointing at the position where we had good vertical and horizontal coupling, as we had left it in the position we found on Monday where it was off in yaw. We had to do several iterations of alignment, switching between vertical and horizontal dither, and still could only find a place where the coupling of PDs 1-4 was optimised; PDs 5-8 have bad coupling at this point. At this position we also took calibration measurements where we scanned the beam and noted down the voltage on the QPD X, Y and SUM channels.

Keita notes that for the QPD to be linear the voltages need to be below +/- 7V.

I will attach the final set of measurements in a comment below.

We left the alignment in this state with respect to the bullseye QPD readout.

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:27, Wednesday 08 October 2025 (87375)

The coupling measurement from the last measurements we took on Tuesday is here, and the calibration of the motion on the QPD is here.

Images attached to this comment
jennifer.wright@LIGO.ORG - 18:13, Wednesday 08 October 2025 (87379)

I was calibrating the above data using the Calib_X and Calib_Y values instead of by sqrt(Calib_X^2 + Calib_Y^2).

Fixed this in the attached graph.

Also 3 of the PDs are starting to near the edge of their aligned range which can be seen looking at the spread of DC values on the array PDs in this graph.

Images attached to this comment