H1 PEM
oli.patane@LIGO.ORG - posted 10:17, Thursday 04 September 2025 (86733)
DustMon Monthly Trends FAMIS

Closes FAMIS#37256, last checked 86348

ndscope

Things to note:
Possible issues:
CS_DUST_LAB1_{300,500}NM have both been stuck at a constant value since 16 days ago
CS_DUST_DR1_300NM has been at zero for the past four days
CS_DUST_DR1_500NM has been at zero for the past six days

Not an issue:
CS_DUST_LVEA5_300NM has been at 0 for three months - since we turned it off after the vent
CS_DUST_LAB2_{300,500}NM both off as expected (comparing to last month when Ryan C didn't mention it as an issue)

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:14, Thursday 04 September 2025 (86732)
Thu CP1 Fill

Thu Sep 04 10:05:52 2025 INFO: Fill completed in 5min 48secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 08:56, Thursday 04 September 2025 (86728)
Wed afternoon power glitch detected by MSR UPS.

15:59:33 Wed 03sep2025 PDT power glitch. H1 stayed in lock throughout and there was no impact on range.

Images attached to this report
H1 General
corey.gray@LIGO.ORG - posted 08:25, Thursday 04 September 2025 - last comment - 09:31, Thursday 04 September 2025(86727)
H1 Lockloss (after 42hr03min lock)

H1 lockloss a few minutes after hitting the 42hr mark for this lock (42hrs03min).  At the time we were at the beginning of the 6hr commissioning period, about 10min into running the DARM Offset script for the OM2 heating set-up.

Comments related to this report
jennifer.wright@LIGO.ORG - 09:31, Thursday 04 September 2025 (86730)

Not sure that the DARM offset step caused this, as we were almost back at our nominal offset when we lost lock (step 6 out of 7, with H1:OMC-READOUT_X0_OFFSET = 9), which is ~27 mA; nominal is 40 mA.

We checked the lockloss ndscopes and couldn't see any smoking guns when the offset was changing from 8 to 9.

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 07:33, Thursday 04 September 2025 (86724)
Thurs DAY Ops Transition

TITLE: 09/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 2mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

Day 2 of being locked since Maintenance (41.5hrs)!  Commissioning starts in 12min.  All is quiet seismically (microseism is below the 50th percentile even!) and winds are low.

An hour ago (1350 UTC), H1 had its 3rd superevent in 10hrs.

LHO General
ryan.short@LIGO.ORG - posted 22:02, Wednesday 03 September 2025 (86723)
Ops Eve Shift Summary

TITLE: 09/04 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Another very quiet shift with H1 locked and observing throughout, plus one candidate event. H1 has now been locked for 32 hours.
LOG:

H1 SUS
ryan.short@LIGO.ORG - posted 18:03, Wednesday 03 September 2025 (86722)
In-Lock SUS Charge Measurement - Weekly

FAMIS 28421, last checked in alog86614

Once again, the coherence was too low for ITMX, so there is no new data point there. The most recent point is from the August 19th measurement.

Images attached to this report
H1 TCS
ryan.short@LIGO.ORG - posted 17:48, Wednesday 03 September 2025 - last comment - 10:00, Thursday 04 September 2025(86721)
TCS Monthly Trends

FAMIS 28464, last checked in alog86272

Nothing much out of the ordinary here that I can see aside from the fact that the ITMX SLED is getting awfully close to the 1mW lower power threshold and will likely hit it in the next month.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 10:00, Thursday 04 September 2025 (86731)

Two issues I'm seeing in these plots: 1) the ITMX HWS SLED is degrading faster than previous SLEDs, and 2) there was some weirdness with the ITMX CO2 flow meter a week and a half ago.

ITMX HWS SLED

The rate of degradation of these SLEDs is often on the order of 2mW/yr, which is why we can generally get about a year out of each SLED. Comparing the last ITMX SLED to the one we installed in May (alog84417), the old SLED decayed at a rate of 2.43mW/yr and the new one is at 4.2mW/yr. This is just from drawing some lines in ndscope, so it's pretty rough, but we can definitely say it is degrading about 2x as fast as the last SLED (attachment 1). The ITMY SLEDs are a bit more consistent: previous = 2.7mW/yr and new = 2.2mW/yr.

We saw on the ITMY SLED last time around that we were still able to see spherical power changes from lock to lock even with the SLED power reporting 0.5mW, though ring heater changes were tough to see (alog84408). Let's consider 0.5mW the limit at which we can still take these measurements, though really the limit is wherever we can no longer see anything; at the current rate this SLED would finish the run at ~0.3mW. It looks like we will need to swap this one before the end of the run.
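
As a sanity check of that extrapolation, here is the arithmetic as a minimal sketch; the present power and remaining run time are assumed numbers for illustration, and only the ~4.2mW/yr rate comes from the comparison above.

    # Back-of-the-envelope SLED end-of-run estimate (assumed inputs, illustration only)
    decay_rate_mW_per_yr = 4.2   # from the ndscope line fit above
    present_power_mW = 1.0       # assumed: roughly where the ITMX SLED sits now
    months_remaining = 2.0       # assumed: time left in the run

    end_of_run_mW = present_power_mW - decay_rate_mW_per_yr * (months_remaining / 12.0)
    print(f"Projected end-of-run power: {end_of_run_mW:.1f} mW")  # ~0.3 mW with these inputs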

ITMX CO2 Flow

In Ryan's alog I noticed that the usual variation in flow, as reported by the paddle wheel flow meter on the floor, became much more stable on Sunday Aug. 24th. Looking back over two years (attachment 2), it actually looks like the flow has been unusually unstable since returning from the spring vent, and whatever happened on the 24th brought us back to normal. Being a Sunday, there wasn't much going on (alog86540); we were in the middle of a 40+ hour lock. Zooming into the event doesn't show the laser doing anything during that time, as if it didn't actually see a change in flow. Not sure what's going on here.

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 16:32, Wednesday 03 September 2025 (86708)
Wed DAY Ops Summary

TITLE: 09/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

Today we had Commissioning time to make up for the time from Monday's holiday.  Also had the opportunity to take H1 to the new ASC High Gain mode to ride out a Magnitude 6.0 earthquake!  H1 is approaching 26.5hrs of lock.
LOG:

LHO General
ryan.short@LIGO.ORG - posted 16:03, Wednesday 03 September 2025 (86720)
Ops Eve Shift Start

TITLE: 09/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 3mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY: H1 has been locked for 26 hours. Sounds like commissioning and riding through earthquakes today all went well.

H1 SEI (OpsInfo, PEM, SEI)
corey.gray@LIGO.ORG - posted 15:57, Wednesday 03 September 2025 (86719)
Riding Through A Magnitude 6.0 Aleutian Islands EQ !!

(Elenna C [in spirit], Ibrahim A, Corey G, Tony S, Jim W)

Summary:  H1 survived a magnitude 6.0 EQ in Alaska via the new ASC High Gain button/state.  This preserved a 24+hr lock and probably saved us at minimum 2.5hrs of downtime: (1) ~90min for the earthquake seismic noise to die down and (2) 1-2hrs of relocking.  L1 went DOWN due to this EQ about 20min after we saw it rolling through LHO!

I had my first experience getting to ENGAGE the new RED "ASC Hi Gn" button (see Elenna's alog) for an earthquake!  This tool is still fairly new and "manual" (& cool!) for operators.  The main items for engaging at this point (for me, as a newbie) were waiting for SEI_ENV to go to EARTHQUAKE mode and watching for any colors on the Picket Fence.  While we were dealing with this earthquake, Jim walked in to observe and answer questions for us; Ibrahim also gave advice since he has engaged this EQ mode twice.  All the details of what happened are logged below.

How Everything Went "Down"!

2102 VerbalAlarm:  "EQ alert for M6.0 in Aleutians"  (it said "Incoming earthquake from Canada")

Images attached to this report
LHO FMCS (PEM)
anthony.sanchez@LIGO.ORG - posted 15:39, Wednesday 03 September 2025 (86718)
Famis Using Vibration Sensors To Gauge Health Of HVAC Fans Site Wide

Using Vibration Sensors To Gauge Health Of HVAC Fans Site Wide FAMIS 26592

H0:VAC-EX_FAN2_570_1 & 2 both seem to have gotten significantly noisier for about 27 hours, from 1800 UTC on Sep 1st to 2100 UTC on Sep 2nd.

Images attached to this report
H1 SQZ
camilla.compton@LIGO.ORG - posted 11:54, Wednesday 03 September 2025 (86717)
FC Injection, Shape is Changing day to day...

Sheila and I repeated yesterday's FC injections from her 86683.

Interestingly, the shape of the injection has changed, meaning the FC loop shape is changing; see quiet time today vs yesterday on the FC2_M3 plot. The coupling into DARM is also significantly less today. This isn't a surprise, as we see the FC causing noise in DARM only intermittently (86608), but we don't know the reason why yet. Sheila checked the FC OLG and it looked fine.

Images attached to this report
H1 TCS (TCS)
corey.gray@LIGO.ORG - posted 11:04, Wednesday 03 September 2025 (86714)
TCS Chiller Water Level Top-Off (Bi-Weekly, FAMIS #27823)

Addressed TCS Chillers (Wed [Sept3] 957-1040am local) & CLOSED FAMIS #27823:

Had a bit of a rigmarole since I hadn't done this for a while.  In my quick scan of the procedure, I didn't see how to read the "floating red ball" in the chillers.  I remembered using the bottom of the ball, but wasn't sure, so I consulted Camilla & TJ.  I ended up using the bottom for my initial measurements, but later learned we measure the top of the ball!  Either way, they were a little low, but I overfilled them since I had used the bottom of the ball for the readings.  It's also not clear what exactly the "Max level" is (the reason for the overfill).  Either way, the chillers had received water since June, so adding water was an OK thing!  :)  Attached is a photo of what I'm talking about; the red ball is about 4mm in diameter, so I updated the readings I initially entered when filling by adding 4mm to all water level readings.

Images attached to this report
H1 SQZ
anthony.sanchez@LIGO.ORG - posted 05:56, Wednesday 03 September 2025 - last comment - 11:16, Wednesday 03 September 2025(86706)
OWL Shift SQZ Issues

@ 2:43 am (Local time)  H1 called for assistance.
I noticed that it was a SQZ issue, specifically the SHG "PZT was out of range" error on the SQZ_SHG Guardian.
It was bouncing from Locking to Locked, then scanning, etc.
I tried INIT-ing the SQZ_SHG manager..... SQZ_MANAGER.
Then I took all SQZ GRD nodes to DOWN except SQZ_PMC & SQZ_SHG and tried to troubleshoot the SQZ_SHG directly.
Looking at the SQZ troubleshooting guide, there wasn't a section for SQZ_SHG troubleshooting, so I searched the alog, with no hits for "PZT Out of Range".
I tried to adjust the OPO temp to maximize H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT, but changing the OPO temp had no impact.
This is when I started to realize it might take me a while to figure out what was going on; since it was too late to call a SQZ expert, I decided I should just take us to NO-SQZ.

So I ran the noconda python switch_nom_sqz_states.py script with the "without" (no-squeezing) option.

Sadly I did accept the SDFs, and these too, because I thought they were causing an SDF issue. But it may have been that SQZ_ANG_ADJUST needed to be at ADJUST_SQZ_AND_ADF instead of DOWN for the SQZer to be considered ready for OBSERVING.
Made it back to OBSERVING at 11:30:44 UTC.
Now that we have made it back to OBSERVING, albeit without SQZing, I'm going over my troubleshooting with a finer-toothed comb.

I went looking for Sitemap > SQZ > SQZT0 > SHG. That last SHG button was a bit difficult to find when your eyes are still crossed.

I then took a screenshot of the SHG screen before any changes.
I then locked the SQZ_PMC and the SQZ_SHG. The SQZ_SHG was cycling between Locked and Unlocked. Dropped from Observing at 11:57:53 UTC.
Then, while trying to maximize H1:SQZ-SHG_GR_DC_POWERMON, I changed H1:SQZ-SHG_TEC_SETTEMP from 35.89 to 35.61.
After this I ran the noconda python switch_nom_sqz_states.py script with the "with" (squeezing) option to try and get the SQZ squoze again.
SQZ_MAN was having an issue with the SQZ_FC losing lock at Transition_IR_LOCKIN, so I re-touched up the OPO temp.

Success! The SQZer is SQUOZE!!
Now I need to accept all the SDFs again (and these too) because I didn't follow the directions when I ran the initial no-squeezing script.
Observing reached again at 12:51:32 UTC.
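
For reference, the two invocations of the script mentioned above, as I understand them from this log; this is only a sketch: the directory prefix is assumed, and the "with"/"without" arguments are inferred from the wording here, so check the script itself before relying on this.

    cd /opt/rtcds/userapps/release/sqz/h1/scripts    # assumed prefix; relative path from Camilla's comment below
    noconda python switch_nom_sqz_states.py without  # set nominal states for observing with no squeezing
    noconda python switch_nom_sqz_states.py with     # restore nominal states for squeezing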

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:09, Wednesday 03 September 2025 (86711)

I edited SQZ_ANG_ADJUST, which had a conditional (reading from sqzparams.use_sqz_angle_adjust) to set the nominal state and which stopped the script running correctly, so that it now just states the nominal state directly. There is now a note in sqzparams.py to change the nominal state in SQZ_ANG_ADJUST and sqz/h1/scripts/switch_nom_sqz_states.py if the flag is changed.
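
Roughly, the change was of this form (a sketch, not the actual SQZ_ANG_ADJUST.py contents; the state name is taken from Tony's log above):

    # Before (sketch): nominal state picked by a conditional on the sqzparams flag,
    # which switch_nom_sqz_states.py could not edit reliably.
    # import sqzparams
    # if sqzparams.use_sqz_angle_adjust:
    #     nominal = 'ADJUST_SQZ_AND_ADF'
    # else:
    #     nominal = 'DOWN'

    # After (sketch): state the nominal directly; a note in sqzparams.py reminds us to
    # update this line (and the script) by hand if the flag is ever changed.
    nominal = 'ADJUST_SQZ_AND_ADF'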

camilla.compton@LIGO.ORG - 11:16, Wednesday 03 September 2025 (86715)

Sheila, Camilla

This morning the SHG PZT was still around 5V, so we changed the Offset, Min and Max scan ranges to force it to lock at the peak closer to 50V; see attached for old vs new values and SDFs accepted. We checked it scanned over the correct range by setting the SQZ_SHG guardian to DOWN and manually scanning the SHG PZT.

Then I copied what Tony did last night and further optimized the SHG temperature to bring the power up from 96mW to 106mW, see attached. Thanks Tony!

Images attached to this comment
H1 SQZ
camilla.compton@LIGO.ORG - posted 15:21, Tuesday 02 September 2025 - last comment - 10:07, Thursday 04 September 2025(86692)
SQZ NLG Measurements at Different ISS AOM Setpoints

Tried to measure NLG (following 76542) at different ISS setpoints (by changing the SHG waveplate to adjust how much power is incident on the AOM), with the same OPO TRANS power. We did this because in 86363 we were confused that the NLG increased significantly when we realigned the pump fiber with the same OPO TRANS setpoint.

This was confusing and I'd like to repeat it next week. I initially thought we were getting pump depletion, so I decreased the seed power, but later noticed the ISS was just unable to keep up: the control voltage was dropping to -5 for the low values of ISS setpoint (see attached). It could hold in LOCKED_CLF_DUAL with 80uW OPO_TRANS fine, but not with the SEED beam.

OPO Setpoint | ISS Setpoint (locked on CLF DUAL) | ISS Setpoint (locked on SEED) | Amplified Max | Amplified Min | UnAmp | Dark | NLG | Notes
80 | 4.9 | 3.2 | 0.192924 | 0.002389 | 0.0079612 | -2.1e-5 | 24.2 |
80 | 2.9 | 2.9 | 0.045037 | | | -2.2e-5 | 39? (using UnAmp from row below) | Pump depletion? Reduced SEED power from 0.7 to 0.3 to keep it locked on SEED (still didn't work first time).
80 | 6.3 | 6.3 | 0.0430464 | 0.000546 | 0.0011063 | -2.2e-5 | 38? | Unamp signal decreased from SEED power change.
80 | 4.8 | 4.7 | 0.0454944 | 0.0005378 | 0.0018533 | -2.7e-5 | 24.2 |
80 | 2.9 | -5? | | | | | | Noticed ISS Controlmon at -5.
80 | 3.5 | 3.5 | 0.0452317 | 0.00054049 | 0.0018636 | -2.4e-5 | 24.0 |
80 | 4.95 | | 0.04523 | | | | 24.0 | Leaving here.
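
The NLG values in the table are consistent with the dark-corrected ratio of the amplified maximum to the unamplified level; a minimal sketch of that arithmetic (assuming this is the convention from 76542):

    def nlg(amp_max, unamp, dark):
        """Nonlinear gain: dark-corrected amplified-seed maximum over unamplified seed level."""
        return (amp_max - dark) / (unamp - dark)

    # First row of the table above:
    print(round(nlg(0.192924, 0.0079612, -2.1e-5), 1))  # -> 24.2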
 
Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:07, Thursday 04 September 2025 (86729)

Repeated while Corey was relocking today. We had one strange measurement with the ISS setpoint at 3.3V, where the un-amplified signal was much lower, but when later repeating at 3.1V we didn't see this.

OPO Setpoint | ISS Setpoint (locked on CLF DUAL) | ISS Setpoint (locked on SEED) | Amplified Max | Amplified Min | UnAmp | Dark | NLG | Notes
80 | 5.0 | 5.0 | 0.042961 | 0.00051437 | 0.00176935 | -2.1e-5 | 24.0 |
80 | 2.8 | | | | | | | ISS setpoint dropped to -5, so OPO TRANS not at 80uW
80 | 3.3 | 3.2 | 0.043273 | 0.0005247 | 0.00106713 | -2.1e-5 | 39.7 |
80 | 6.2 | 6.2 | 0.041031 | 0.00051677 | 0.0017658 | -2.1e-5 | 23.0 |
80 | 6.5 | 6.4 | 0.0409285 | 0.00051425 | 0.00175578 | -2.1e-5 | 23.0 |
80 | 3.6 | 3.5 | 0.0429806 | 0.00051216 | 0.00175932 | -2.7e-5 | 24.1 |
80 | 3.1 | 3.0 | 0.0431685 | 0.000516166 | 0.0017296 | -2.7e-5 | 24.5 |
80 | 2.8 | | | | | | | ISS setpoint dropped to -5, so OPO TRANS not at 80uW
80 | 4.9 | 4.8 | 0.0429243 | 0.0005134 | 0.0017627 | -2.7e-7 | 24.0 |
H1 OpsInfo
jennifer.wright@LIGO.ORG - posted 09:28, Tuesday 02 September 2025 - last comment - 11:12, Wednesday 03 September 2025(86623)
Instructions for running DARM OFFSET STEP to do output loss measurement with hot OM2.

Test for Thursday morning at 7.45 am, assuming we are thermalised.

conda activate labutils

python auto_darm_offset_step.py

Comments related to this report
jennifer.wright@LIGO.ORG - 11:12, Wednesday 03 September 2025 (86716)

Test for Thursday morning at 7:45 am, assuming we are thermalised. I rewrote the instructions above and took out the last part.

  • In NLN.
  • Turn off OMC ASC:
    • sitemap -> OMC -> OMC Control -> turn MASTER GAIN slider to 0 (it's blue and at the bottom centre of the screen).
  • Run DARM offset step:
    • Go to /ligo/gitcommon/labtutils/darm_offset_step
    • run:

conda activate labutils

python auto_darm_offset_step.py

  • Wait until program has finished ~15 mins.
  • Turn OMC ASC back on by putting master gain slider back to 0.020.
  • Heat up OM2 heater (~ 3 hours):
    • IFO out -> OM2 -> change H1:AWC-OM2_TSAMS_POWER_SET value to 4.6.
  • Open beam diverter.
    • Go to sitemap -> LSC -> Beam Diverter -> Corner -> Press top right button labelled 'OPEN' under Corner_POP section.

Commissioners will turn off the OMC ASC and close the beam diverter once heating has finished, then do the DARM offset step and other tests, before turning the ASC back on, opening the beam diverter, and cooling OM2 back down.
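
For reference, the MEDM steps above correspond roughly to the command-line sequence below. This is only a sketch: the OMC ASC master-gain channel name is an assumption (only H1:AWC-OM2_TSAMS_POWER_SET, the script path, and the conda commands come from the instructions above), so verify channel names on the MEDM screens before using it, and the beam diverter step is left out.

    caput H1:OMC-ASC_MASTER_GAIN 0           # assumed channel name: OMC ASC master gain slider to 0
    cd /ligo/gitcommon/labtutils/darm_offset_step
    conda activate labutils
    python auto_darm_offset_step.py          # ~15 min
    caput H1:OMC-ASC_MASTER_GAIN 0.020       # restore OMC ASC master gain (assumed channel name)
    caput H1:AWC-OM2_TSAMS_POWER_SET 4.6     # start heating OM2 (~3 hours)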

H1 ISC
elenna.capote@LIGO.ORG - posted 13:56, Wednesday 20 August 2025 - last comment - 10:51, Wednesday 03 September 2025(86482)
Recent measurements of ASC calibrations

Since the HAM1 vent, I have done a few different measurements of the ASC that provide information on how to calibrate WFS signals from counts to microradians. Here is a summary:

CHARD, INP1 and PRC2 results come from this alog

DHARD results come from this alog

SRM results come from this alog (if you are comparing values, I made a power normalization error in the linked alog)

BS results were taken but never alogged (shame on me)

All of these measurements were taken by notching all ASC loops at 8.125 Hz and injecting an 8.125 Hz line in the desired DoF. The OSEM witness channels provide the urad reference.

Unless otherwise specified, the witness channels are the bottom-stage OSEMs.
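
As a rough illustration of how each urad/ct number in the table below can be extracted from the line injection, here is a sketch of the standard transfer-function estimate at the line frequency; this is not the actual analysis code, and it assumes the witness time series is already calibrated into urad.

    import numpy as np
    from scipy.signal import csd, welch

    F_LINE = 8.125  # Hz, the injected ASC line

    def line_calibration(wfs_counts, witness_urad, fs):
        """|TF| from WFS error signal (counts) to OSEM witness (urad) at the line frequency."""
        nseg = int(fs * 8)  # 0.125 Hz resolution, so 8.125 Hz lands on a bin
        f, Puy = csd(wfs_counts, witness_urad, fs=fs, nperseg=nseg)
        _, Puu = welch(wfs_counts, fs=fs, nperseg=nseg)
        idx = np.argmin(np.abs(f - F_LINE))
        return np.abs(Puy[idx] / Puu[idx])  # urad per count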

DoF | Input Matrix | Calibration | Notes
CHARD P | -1 * REFL A 45 I + 0.6 * REFL B 45 I | 0.0161 urad [ETMY L2] / ct [REFL A 45 I]; 0.0109 urad [ETMY L2] / ct [REFL B 45 I] | measured as ETMY L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence near 1
CHARD Y | -1 * REFL A 45 I + 0.8 * REFL B 45 I | 0.0113 urad [ETMY L2] / ct [REFL A 45 I]; 0.00965 urad [ETMY L2] / ct [REFL B 45 I] | measured as ETMY L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence near 1
DHARD P | 0.5 * AS A 45 Q - 0.5 * AS B 45 Q | 0.00312 urad [ETMX L2] / ct [AS A 45 Q]; 0.00312 urad [ETMX L2] / ct [AS B 45 Q] | measured as ETMX L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence = 0.8, 10 averages
DHARD Y | 0.5 * AS A 45 Q - 0.5 * AS B 45 Q | 0.00612 urad [ITMY L2] / ct [AS A 45 Q]; 0.02 urad [ITMY L2] / ct [AS B 45 Q] | measured as ITMY L2 wit, must transform to L3 urad, can also convert to cavity angle; coherence = 0.5, 10 averages
PRC2 P (PR2) | 1 * POP X RF I | 0.00033 urad / ct | coherence 1
PRC2 Y (PR2) | 1 * POP X RF I | 0.000648 urad / ct | coherence 1
INP1 P (IM4) | 1.5 * REFL A 45 I + 1 * REFL B 45 I | 0.0104 urad / ct [REFL A 45 I]; 0.00988 urad / ct [REFL B 45 I] | coherence 1
INP1 Y (IM4) | 2 * REFL A 45 I + 1 * REFL B 45 I | 0.0141 urad / ct [REFL A 45 I]; 0.00608 urad / ct [REFL B 45 I] | coherence 1
MICH P (BS) | 1 * AS A 36 Q | 0.0161 urad [BS M2] / ct | measured as BS M2 WIT / AS A 36 Q, must transform into M3 urad; coherence near 1
MICH Y (BS) | 1 * AS A 36 Q | | data not taken
SRC1 P (SRM) | 1 * AS A 72 Q | 16.9 urad / ct | coherence near 1
SRC1 Y (SRM) | 1 * AS A 72 Q | 10.6 urad / ct | coherence near 1

Comments related to this report
elenna.capote@LIGO.ORG - 10:51, Wednesday 03 September 2025 (86713)

Here is data for MICH yaw and SRC2:

DoF | Input Matrix | Calibration | Notes
MICH Y | 1 * AS A 36 Q | 0.00248 urad [BS M2] / ct | measured as BS M2 WIT / AS A 36 Q, must transform into M3 urad
SRC2 P (SRM + SR2) | 1 * AS_C | 33.4 urad [SR2 M3] / ct; 44.7 urad [SRM M3] / ct | SRC2 drive matrix is a combination of SRM and SR2: -7.6 * SRM + 1 * SR2
SRC2 Y (SRM + SR2) | 1 * AS_C | 20.9 urad [SR2 M3] / ct; 48.8 urad [SRM M3] / ct | SRC2 drive matrix is a combination of SRM and SR2: 7.1 * SRM + 1 * SR2
