H1 General
anthony.sanchez@LIGO.ORG - posted 22:09, Friday 07 March 2025 (83237)
Friday Eve Shift Report.

TITLE: 03/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.26 μm/s
SHIFT SUMMARY:
Had a lockloss just before my shift.
Did some SQZr changes while relocking; see Corey's alog 83225 and my alog about this, 83236.
Once relocked, H1 has stayed locked and observing for 3 hours.

LOG:                                                                                                                                                                                                                                                                                             

Start Time System Name Location Lazer_Haz Task Time End
18:34 ISC Mayank, Siva Optics Lab Y ISS array hardware 01:50
01:26 ISS Matt Optics Lab Yes Taking pics of Fiber Lasers 01:32
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 20:21, Friday 07 March 2025 (83236)
Friday Ops Eve Shift Starting report.

TITLE: 03/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 15mph Gusts, 11mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.21 μm/s
QUICK SUMMARY:

00:14 UTC: H1 had a lockloss of unknown cause.
There was a request to load the SQZ_SHG Guardian, but a syntax error in the code caused the Guardian node to go into error. It was a simple fix; it just took me a few minutes to find.
SQZ_SHG Guardian node is running smoothly now.

Corey then took H1 into Initial_Alignment while I edited sqzparams.py.
Line 18 was taken from 75 to 80:
opo_grTrans_setpoint_uW = 80  #reload OPO guardian for this to take effect
I then took the SQZ_OPO_LR Guardian to LOCKED_CLF_DUAL_NO_ISS, then reloaded the Guardian code.
Once reloaded, I took SQZ_OPO_LR back to LOCKED_CLF_DUAL.
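
(For context: Guardian binds the values from sqzparams.py when a node's code is loaded, which is why editing the file alone does nothing until the node is reloaded. A minimal Python sketch of that behavior, assuming a sqzparams.py importable from the working directory; this is an illustration, not the Guardian internals:)

import importlib

import sqzparams  # module-level values are bound at import time

print(sqzparams.opo_grTrans_setpoint_uW)  # still the old value after editing the file
importlib.reload(sqzparams)               # what the Guardian LOAD request effectively does
print(sqzparams.opo_grTrans_setpoint_uW)  # now reflects the edit (80)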

I then tried to adjust the OPO_TEC_TEMP; this was a mistake. I could tell that something wasn't correct because I couldn't get SQZ-CLF_REFL_RF6_ABS_OUTPUT back to the same height it was at before while adjusting the OPO_TEC_TEMP, so I contacted Sheila.
She informed me that SQZ_CLF_LR should be locked when doing this.

Once it was locked, SQZ-CLF_REFL_RF6_ABS_OUTPUT went up and looked much better.
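
(For anyone repeating this: the tuning amounts to stepping the OPO crystal temperature, with SQZ_CLF_LR locked, while watching the CLF RF6 level. A rough ezca sketch of that loop; the setpoint channel name, scan range, and settling time are my assumptions, not the documented procedure:)

import time
from ezca import Ezca

ezca = Ezca()  # channel prefix (e.g. H1:) comes from the environment

t0 = ezca['SQZ-OPO_TEC_SETTEMP']   # assumed name for the OPO TEC setpoint
best_temp, best_rf6 = t0, -1e9
# Scan +/-0.1 degC around the current setpoint in 10 mK steps (made-up numbers).
for k in range(-10, 11):
    temp = t0 + 0.01 * k
    ezca['SQZ-OPO_TEC_SETTEMP'] = temp
    time.sleep(10)                  # let the TEC settle
    rf6 = ezca['SQZ-CLF_REFL_RF6_ABS_OUTPUT']
    if rf6 > best_rf6:
        best_temp, best_rf6 = temp, rf6

ezca['SQZ-OPO_TEC_SETTEMP'] = best_temp
print('best OPO temp %.3f with RF6 %.2f' % (best_temp, best_rf6))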

Relocking notes.
Initial_Alignment is finished and I'm trying to relock; I have not had success past DRMI yet, but the night is still young.

Update:
NOMINAL_LOW_NOISE reached at 2:50 UTC
Observing reached at 3:02 UTC

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 17:02, Friday 07 March 2025 (83225)
Fri Ops DAY Shift Summary

TITLE: 03/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Edit: H1 had a lockloss minutes before the end of the shift, so Tony and I started addressing the SQZ to-dos (see below in the log).

Ops To Do At Next Opportunity (next H1 lockloss, or when L1 goes down; see Sheila's alog 83227):

  1. Load SQZ_SHG (this can be done with H1 in any state)
  2. Restore opo_grTrans_setpoint_uW back to 80 (was set to 75 this morning), and re-tune the OPO TEC setpoint. This can be done with H1 in any state.

LOG:

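# SQZ_SHG guardian hardfault checker after today's edits (see Sheila's
# comment, alog 83227, below): the SQZ laser, PMC, and demod checks are
# commented out, leaving only the SHG phase shifter check; falling off
# the end returns None, i.e. no hard fault.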
def in_hardfault():
    #if ezca['SQZ-LASER_IR_DC_ERROR_FLAG'] != 0:
    #    notify('Squeezer laser PD error')
    #    return True
    #elif ezca['SQZ-PMC_TRANS_DC_ERROR_FLAG'] != 0:
    #    notify('Squeezer PMC not locked')
    #    return True
    if ezca['SQZ-SHG_TRANS_RF24_PHASE_ERROR_FLAG'] != 0:
        notify('SHG Trans phase shifter error')
        return True
    #elif ezca['SQZ-SHG_TRANS_RF24_DEMOD_ERROR_FLAG'] != 0:
    #    notify('SHG Trans Demod error')
    # see comment to alog 83224 for why this was commented out
    #    return True

H1 PSL (CSWG, ISC, Lockloss, SEI, SQZ, SYS)
jeffrey.kissel@LIGO.ORG - posted 16:37, Friday 07 March 2025 - last comment - 12:06, Saturday 08 March 2025(83230)
PMC Duty Cycle from July 1 2022 to July 1 2024
J. Kissel

I've been looking through the data captured about the PMC in the context of the two years of observatory use between July 2022 and July 2024 where we spanned a few "construction, commission, observe" cycles -- see LHO:83020. Remember the end goal is to answer the following question as quantitatively as possible: "does the PMC have a high enough duty cycle in the construction and commissioning phases that the SPI does *not* need to buy an independent laser?"

Conclusions: 
 - Since the marked change in duty cycle after the PMC lock-loss event on 2023-05-16, the duty-cycle of the PMC has been exceptionally high, either 91% during install/commissioning times or 99% during observing times. 
 - Most of the down time is from infrequent planned maintenance. 
 - Recovery time is *very* quick, unless the site loses power or hardware fails. 
 - The PMC does NOT lose lock when the IFO loses lock. 
 - The PMC does NOT lose lock just because we're vented and/or the IMC is unlocked. 
 - To-date, there are no plans to make any major changes to the PSL during the first one or two O4 to O5 breaks.
So, we shouldn't expect to lose the SPI seed light frequently, or even really at all, during the SPI install or during commissioning. And especially not during observing. 

This argues that we should NOT need an independent laser from a "will there even be light?" / "won't IFO construction and commissioning mean that we'll be losing light all the time?" duty-cycle standpoint.
Only the pathfinder itself, when fully functional with the IFO, will tell us whether we need the independent laser from a "consistent noise performance" standpoint.

Data and Historical Review
To refresh your memory, the major milestones that happened between 2022 and 2024 (derived from a two-year look through all aLOGs with the H1 PSL task):
- By Mar 2022, the PSL team had completed the full table revamp to install the 2x NeoLase high-power amps, and had addressed all the downstream adaptations.

- 2022-07-01 (Fri): The data set study starts.
- 2022-09-06 (Tue): IO/ISC EOM mount updated, LHO:64882
- 2022-11-08 (Tue): Full IFO Commissioning Resumes after Sep 2022 to Dec 2022 vent to make FDS Filter Cavity Functional (see E2000005, "A+ FC By Week" tab)
- 2023-03-02 (Thu): NPRO fails, LHO:67721
- 2023-03-06 (Mon): NPRO and PSL function recovered LHO:67790
- 2023-04-11 (Tue): PSL Beckhoff Updates LHO:68586
- 2023-05-02 (Tue): ISS AOM realignment LHO:69259
- 2023-05-04 (Thu): ISS Second Loop Guardian fix LHO:69334
- 2023-05-09 (Tue): "Something weird happened to the PMC, then it fixed itself" LHO:69447
- 2023-05-16 (Tue): Marked change in PSL PMC duty-cycle; nothing specific the PSL team did with the PMC, but the DC power supplies for the RF & ISC racks were replaced, LHO:69631, while Jason tuned up the FSS path, LHO:69637
- 2023-05-24 : O4, and what we'll later call O4A, starts; we run with 75W requested power from the PSL.
- 2023-06-02 (Fri): The PSL ISS AA chassis was replaced, but the PMC stayed locked through it, LHO:70089
- 2023-06-12 (Sun): PMC PDH Locking PD needs threshold adjustment, LHO:70352, for "never found out why" reason FRS:28260
- 2023-06-19 (Mon): PMC PDH Locking PD needs another threshold adjustment, LHO:70586, added to FRS:28260, but again reasons never found.
- 2023-06-21 (Wed): Decision made to reduce requested power into the IFO to 60W LHO:70648
- 2023-07-12 (Wed): Laser Interlock System maintenance kills PSL LHO:71273
- 2023-07-18 (Tue): Routine PMC / FSS tuneup, with quick PMC recovery LHO:71474
- 2023-08-06 (Sun): Site-wide power glitch takes down PSL LHO:72000
- 2023-09-22 (Fri): Site-wide power glitch takes down PSL LHO:73045
- 2023-10-17 (Tue): Routine PMC / FSS tuneup, with quick PMC recovery LHO:73513
- 2023-10-31 (Tue): Jeff does a mode scan sweeping the PMC FSR LHO:73905
- 2023-11-21 (Tue): Routine PMC / FSS tuneup, with quick PMC recovery LHO:74346
- 2024-01-16 : O4A stops, 3 months, focused on HAM567, no PSL work (see E2000005, "Mid-O4 Break 1" tab)
O4A to O4B break lock losses: 7
       2024-01-17 (Wed): Mid-vent, no IFO, no reported cause.
       2024-01-20 (Sat): Mid-vent, no IFO, no reported cause.
       2024-02-02 (Fri): Mid-vent, no IFO, no reported cause.
       2024-02-08 (Thu): Mid-vent, no IFO, no reported cause. During HAM6 close out, may be related to alarm system
       2024-02-27 (Tue): PSL FSS and PMC On-table Alignment LHO:76002.
       2024-02-29 (Thu): PSL Rotation Stage Calibration LHO:76046.
       2024-04-02 (Tue): PSL Beckhoff Upgrade LHO:76879.
- 2024-04-10 : O4 resumes as O4B start
O4B to 2024-07-01 lock losses: 1
       2024-05-28 (Tue): PSL PMC REFL tune-up LHO:78093.
- 2024-07-01 (Mon): The data set study ends.

- 2024-07-02 (Tue): The PMC was swapped just *after* this data set, LHO:78813, LHO:78814

By the numbers

Duty Cycle (uptime in days / total time in days)
     start_to_O4Astart: 0.8053
    O4Astart_to_O4Aend: 0.9450
    O4Aend_to_O4Bstart: 0.9181
       O4Bstart_to_end: 0.9954
(Uptime in days is the sum of the values of H1:PSL-PMC_RELOCK_DAY just before lock losses [boxed in red] in the attached trend for the given time period; see the sketch after these numbers.)

Lock Losses (number of times "days" goes to zero)
     start_to_O4Astart: 80
    O4Astart_to_O4Aend: 22
    O4Aend_to_O4Bstart: 7
       O4Bstart_to_end: 1
(The number of lock losses is merely the count of red boxes for the given time period.)

Lock Losses per calendar days
     start_to_O4Astart: 0.2442
    O4Astart_to_O4Aend: 0.0928
    O4Aend_to_O4Bstart: 0.0824
       O4Bstart_to_end: 0.0123
(This normalizes the lock losses over the duration of each time period to give a fairer assessment.)
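
(A short numpy sketch of this bookkeeping, assuming the H1:PSL-PMC_RELOCK_DAY trend has been loaded into an array, e.g. from the ndscope .mat export described in the comment below; the real analysis was done in the MATLAB script cited there.)

import numpy as np

def pmc_stats(relock_day, span_days):
    """relock_day: trend of H1:PSL-PMC_RELOCK_DAY over one period [days];
    span_days: calendar length of that period [days]."""
    # A lock loss is wherever the "days locked" counter resets to zero.
    drops = np.where(np.diff(relock_day) < -0.5)[0]
    uptime = relock_day[drops].sum()   # counter values just before each reset
    return {'duty_cycle': uptime / span_days,
            'locklosses': drops.size,
            'locklosses_per_day': drops.size / span_days}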

I also attach a histogram of lock durations for each time period, as another way to look at how the duty cycle dramatically changed around the start of O4A.
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:06, Saturday 08 March 2025 (83243)CDS, CSWG, SEI, SUS, SYS
The data used in the above aLOG was gathered by ndscope using the following template,
    /ligo/svncommon/SeiSVN/seismic/Common/SPI/Results/
        alssqzpowr_July2022toJul2024_trend.yaml


and then exported (by ndscope) to the following .mat file,
    /ligo/svncommon/SeiSVN/seismic/Common/SPI/Results/
        alssqzpowr_July2022toJul2024_trend.mat


and then processed with the following script to produce these plots
    /ligo/svncommon/SeiSVN/seismic/Common/SPI/Scripts/
        plotpmcuptime_20250224.m    rev 9866


ndscope has become quite an epically powerful data-gathering tool!

H1 TCS (TCS)
ibrahim.abouelfettouh@LIGO.ORG - posted 14:29, Friday 07 March 2025 (83234)
TCS Monthly Trends - FAMIS 28458

Closes FAMIS 28458. Last checked in alog 82659.

Images attached to this report
H1 PSL (PSL)
corey.gray@LIGO.ORG - posted 11:49, Friday 07 March 2025 (83229)
PSL Status Report (FAMIS #26370)

This is for FAMIS #26370.
Laser Status:
    NPRO output power is 1.853W
    AMP1 output power is 70.51W
    AMP2 output power is 140.0W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 30 days, 23 hr 28 minutes
    Reflected power = 22.36W
    Transmitted power = 106.5W
    PowerSum = 128.8W

FSS:
    It has been locked for 0 days 5 hr and 18 min
    TPD[V] = 0.7969V

ISS:
    The diffracted power is around 3.5%
    Last saturation event was 0 days 5 hours and 18 minutes ago

Possible Issues: None reported

H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 11:42, Friday 07 March 2025 (83228)
SEI ground seismometer mass position check - Monthly (#26499)

Monthly FAMIS Check (#26499)

T240 Centering Script Output:

Averaging Mass Centering channels for 10 [sec] ...
2025-03-07 11:28:08.262570
There are 15 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.975 [V]
ETMX T240 2 DOF Y/V = -1.02 [V]
ETMX T240 2 DOF Z/W = -0.554 [V]
ITMX T240 1 DOF X/U = -1.776 [V]
ITMX T240 1 DOF Y/V = 0.392 [V]
ITMX T240 1 DOF Z/W = 0.484 [V]
ITMX T240 3 DOF X/U = -1.843 [V]
ITMY T240 3 DOF X/U = -0.789 [V]
ITMY T240 3 DOF Z/W = -2.313 [V]
BS T240 1 DOF Y/V = -0.351 [V]
BS T240 3 DOF Y/V = -0.311 [V]
BS T240 3 DOF Z/W = -0.432 [V]
HAM8 1 DOF X/U = -0.317 [V]
HAM8 1 DOF Y/V = -0.434 [V]
HAM8 1 DOF Z/W = -0.712 [V]

All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = -0.006 [V]
ETMX T240 1 DOF Y/V = -0.014 [V]
ETMX T240 1 DOF Z/W = 0.009 [V]
ETMX T240 3 DOF X/U = 0.029 [V]
ETMX T240 3 DOF Y/V = -0.068 [V]
ETMX T240 3 DOF Z/W = -0.005 [V]
ETMY T240 1 DOF X/U = 0.082 [V]
ETMY T240 1 DOF Y/V = 0.171 [V]
ETMY T240 1 DOF Z/W = 0.24 [V]
ETMY T240 2 DOF X/U = -0.067 [V]
ETMY T240 2 DOF Y/V = 0.216 [V]
ETMY T240 2 DOF Z/W = 0.073 [V]
ETMY T240 3 DOF X/U = 0.26 [V]
ETMY T240 3 DOF Y/V = 0.114 [V]
ETMY T240 3 DOF Z/W = 0.146 [V]
ITMX T240 2 DOF X/U = 0.181 [V]
ITMX T240 2 DOF Y/V = 0.294 [V]
ITMX T240 2 DOF Z/W = 0.245 [V]
ITMX T240 3 DOF Y/V = 0.13 [V]
ITMX T240 3 DOF Z/W = 0.133 [V]
ITMY T240 1 DOF X/U = 0.082 [V]
ITMY T240 1 DOF Y/V = 0.123 [V]
ITMY T240 1 DOF Z/W = 0.04 [V]
ITMY T240 2 DOF X/U = 0.04 [V]
ITMY T240 2 DOF Y/V = 0.263 [V]
ITMY T240 2 DOF Z/W = 0.132 [V]
ITMY T240 3 DOF Y/V = 0.08 [V]
BS T240 1 DOF X/U = -0.161 [V]
BS T240 1 DOF Z/W = 0.156 [V]
BS T240 2 DOF X/U = -0.031 [V]
BS T240 2 DOF Y/V = 0.072 [V]
BS T240 2 DOF Z/W = -0.092 [V]
BS T240 3 DOF X/U = -0.139 [V]
Assessment complete.

STS Centering Script Output:

Averaging Mass Centering channels for 10 [sec] ...
2025-03-07 11:37:12.263138
There are 1 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -2.334 [V]
All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.503 [V]
STS A DOF Y/V = -0.757 [V]
STS A DOF Z/W = -0.603 [V]
STS B DOF X/U = 0.235 [V]
STS B DOF Y/V = 0.954 [V]
STS B DOF Z/W = -0.319 [V]
STS C DOF X/U = -0.862 [V]
STS C DOF Y/V = 0.801 [V]
STS C DOF Z/W = 0.681 [V]
STS EX DOF X/U = -0.041 [V]
STS EX DOF Y/V = -0.05 [V]
STS EX DOF Z/W = 0.102 [V]
STS EY DOF Y/V = -0.065 [V]
STS EY DOF Z/W = 1.318 [V]
STS FC DOF X/U = 0.195 [V]
STS FC DOF Y/V = -1.104 [V]
STS FC DOF Z/W = 0.659 [V]
Assessment complete.
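
(Both scripts reduce to the same test: average each proof-mass channel and flag it when the magnitude exceeds the threshold, 0.3 V for the T240s and 2.0 V for the STSs. A hedged sketch of that logic; the channel names below are placeholders, not the scripts' real channel list:)

from ezca import Ezca

ezca = Ezca()  # channel prefix (e.g. H1:) comes from the environment

def out_of_range(channels, threshold_V):
    """Return {label: value} for mass positions with |value| > threshold_V.
    (Single reads here; the real scripts average for 10 s.)"""
    flagged = {}
    for label, chan in channels.items():
        value = ezca[chan]
        if abs(value) > threshold_V:
            flagged[label] = value
    return flagged

# Placeholder channel name for illustration only:
t240_flags = out_of_range({'ETMX T240 2 X/U': 'ISI-ETMX_ST1_T240_2_MASS_X'}, 0.3)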

LHO VE
david.barker@LIGO.ORG - posted 10:22, Friday 07 March 2025 (83226)
Fri CP1 Fill

Fri Mar 07 10:09:56 2025 INFO: Fill completed in 9min 52secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 SQZ (OpsInfo, SQZ)
corey.gray@LIGO.ORG - posted 09:45, Friday 07 March 2025 - last comment - 11:09, Friday 07 March 2025(83224)
SQZ SHG TEC Set Temperature Increased (+ opo_grTrans_setpoint lowered to 75)

SUMMARY:  Back To OBSERVING, but got here after going in a few circles.

H1 had a lockloss before the shift, but when I arrived H1 was at NLN, BUT SQZ had issues. 

I opened up the SQZ Overview screen and could see that the SQZ_SHG guardian node was bonkos (I had the "all nodes" screen up the whole time; it was crazy because it was frantically moving through states trying to get to LOCKED, but could not), BUT I also saw.....

1)  DIAG_MAIN

DIAG_MAIN had notifications flashing which said:  "ISS Pump is off. See alog 70050."  So, this is where I immediately switched focus.

2)  Alog70050:  "What To Do If The SQZ ISS Saturates"

Immediately followed the instructions from alog 70050, which were pretty straightforward. (Remember: H1's not Observing, so I jumped on the alog instructions and wanted to get back to Observing ASAP.) I took opo_grTrans_setpoint_uW from 80 to 50 and tried to get SQZ back to FDS, but no go (SQZ Manager stuck....and SQZ_SHG still bonkos!).

At this point, I saw that there were several other sub-entries with updated instructions and notes.  So I went through them and took opo_grTrans_setpoint_uW to 60 (no FDS + SQZ_SHG still crazy), and finally set opo_grTrans_setpoint_uW = 75 (but still no FDS + SQZ_SHG still crazy).

At this point, I'm assuming DIAG_MAIN sent me on a wild goose chase. Soooooo, I focused on the erratic SQZ_SHG......

3)  Alog search:  "SQZ_SHG"   --->   H1:SQZ-SHG_TEC_SETTEMP Taken to 33.9

This did the trick! The search took me to early Feb 2025 alogs: (1) Ryan S's alog 82599, which sounded like what I had, and (2) Ibrahim's alog 82581, which laid out instructions for adjusting the SHG TEC Set Temperature (went from 33.7 to 33.9; see attachment #1). AND---during these adjustments the infamous SQZ_SHG finally LOCKED!!

After this it was easy and straightforward: taking the SQZ Manager to FDS and getting H1 back to Observing.

NOTE: I wanted to see the last time this Set Temperature was adjusted, and it was Feb 17, 2025. Doing an alog search on just "SHG" + tag SQZ took me to when it was last adjusted: by ME! During an OWL wake-up call, I adjusted this set point from ~35.1 to 33.7 on 02/17/2025 at 1521 UTC / 7:21 am PST (see attachment #2).

The only SDF to ACCEPT was H1:SQZ-SHG_TEC_SETTEMP = 33.9 (see attachment #3). BUT remember the other changes I made (when I erroneously thought I had to adjust the OPO TEC TEMP), which are not in SDF.

Hindsight is 20/20, but if I had addressed the "bonkos SQZ_SHG" via instructions from an alog search first, I would have saved some time!  :)

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 11:09, Friday 07 March 2025 (83227)

The SHG guardian was cycling through its states because of the hard fault checker, which checks for errors on the SQZ laser, PMC trans diode, SHG demod, and phase shifter.

The demod had an error because the RF level was too high; indeed, it was above the threshold during this time and then dropped back to normal, allowing Corey to lock the squeezer.

The second screenshot shows a recent time when the SHG scanned and locked successfully. In this case, as the PZT scans, the RF level goes up as expected when the cavity is close to resonance, and it also goes above the threshold of 0 dBm for a moment, causing the demod to have an error. This must not have happened at the moment when the guardian was checking this error, so the guardian allowed it to continue to lock.

It doesn't make sense to throw an error about this RF level when the cavity is scanning, so I've commented out the demod check from the hardfault checker. 

Also, looking at this hardfault checker, I noticed that it checks for a fault on the PMC trans PD. It would be preferable to have the OMC guardian do whatever checks it needs to do on the PMC, and to trust the SQZ manager to correctly not ask the SHG to lock when the PMC is unlocked. Indeed, the SQZ manager has a PMC checker when it asks the SHG to lock, so I've commented out this PMC checker in the SHG guardian. The same logic applies to the check on the squeezer laser, leaving us with only a check on the SHG phase shifter in the hardfault checker.

Editing to add: I wondered why Corey got the message about the pump ISS. DIAG_MAIN has two checks for the squeezer: first that SQZ_MANAGER is in the nominal state, and second that the pump ISS is on. I added an elif to the pump ISS check, so if the SQZ manager isn't in the nominal state, that will be the only message the operator sees. Ryan Short and I looked at SQZ_MANAGER, and indeed it seems that there isn't a check for the pump ISS in FREQ_DEP_SQZ.
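
(A sketch of that restructure, in the style of the Guardian snippet above; ezca and notify are provided by the DIAG_MAIN runtime, and the pump-ISS channel and threshold here are placeholders, not the actual code:)

if ezca['GRD-SQZ_MANAGER_STATE_S'] != 'FREQ_DEP_SQZ':
    notify('SQZ manager not in nominal state')
elif ezca['SQZ-OPO_ISS_DRIVEPOINT'] < ISS_ON_THRESHOLD:  # placeholder check
    notify('ISS Pump is off. See alog 70050.')
# With the elif, the pump-ISS message can no longer pile on top of the
# (more fundamental) "not in nominal state" message.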

The SQZ_SHG guardian and DIAG_MAIN will need to be reloaded at the next opportunity.

Images attached to this comment
Non-image files attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 07:46, Friday 07 March 2025 (83223)
Fri Ops Day Transition

TITLE: 03/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY:

H1 just made it to NLN after a 7.75 hr lock overnight (lockloss), but has a SQZ ISS Pump Off issue. Microseism is low and winds are as well.

H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 22:07, Thursday 06 March 2025 (83222)
Thursday Eve Shift Summary

TITLE: 03/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR:  ->Ryan S.<-
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 8mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.20 μm/s
SHIFT SUMMARY:
H1 was locked for 7 hours and 17 minutes, until a sudden lockloss of unknown cause struck at 5:24 UTC. Screenshots of the lockloss ndscopes are attached.

I took the last half hour of my shift to run an initial alignment before the start of Ryan's shift.
H1 is currently just past CARM_TO_TR.

LOG:                                                                                                                                                                                                                                                                                                                                                         

Start Time System Name Location Lazer_Haz Task Time End
00:32 WFS Keita Optics Lab Yes Parts for WFS 01:36

 

Images attached to this report
H1 SEI
anthony.sanchez@LIGO.ORG - posted 21:52, Thursday 06 March 2025 (83221)
BRS Drift Trends--Monthly

BRS Drift Trends -- Monthly FAMIS 26452

BRSs are not trending beyond their red thresholds.

Images attached to this report
H1 ISC (ISC)
keita.kawabe@LIGO.ORG - posted 18:26, Thursday 06 March 2025 - last comment - 16:33, Monday 10 March 2025(83220)
New in-vac POP WFS (HAM1) assembly is ready for testing

I assembled the 45MHz WFS unit in the optics lab. Assembly drawing: D1102002.

BOM:

I confirmed that the baseplate and the WFS  body are electrically isolated from each other.

There were many black spots on the WFS body (2nd pic) as well as on the aluminum foil used for wrapping (3rd pic). This seems to be a result of aluminum rubbing against aluminum. I cannot wipe it off, but it should be aluminum powder and not some organic material.

The QPD orientation is such that the tab on the can is at the 1:30 o'clock position as seen from the front (4th pic). You cannot tell from the picture, but there's a hole punched in the side of the can.

Clean SMP - dirty SMA cables are in a bag inside the other clean room in the optics lab. The DB25 interface cable is being made (or was made?) by Fil.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 12:28, Friday 07 March 2025 (83231)

This WFS Assembly (D1102002) has been given the dcc-generated Serial Number of:  S1300637 (with its electronics installed & sealed with 36MHz & 45MHz detection frequencies).  As Keita notes, this s/n is etched by hand on the WFS Body "part" (D1102004 s/n016).

Here is ICS information for this new POP WFS with the Assembly Load here:  ASSY-D1102002-S1300637

(NOTE: When this WFS is installed in HAM1, we should also move this "ICS WFS Assy Load" into the next Assy Load up: "ISC HAM1 Assembly" (ICS LINK: ASSY-D1000313-H1).)

daniel.sigg@LIGO.ORG - 16:33, Monday 10 March 2025 (83279)

Tested the in-vac POP_X sensor in the optics lab:

  1. Flashlight test: all 4 segments showed a DC response when a flashlight was pointed to the QPD.
  2. RF transfer functions from the test input to the individual segments; see attached plot. All traces look as expected.

All electronics tests passed! We are ready for installation.

Non-image files attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 16:41, Thursday 06 March 2025 (83219)
Thursday Eve Shift login

TITLE: 03/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 21mph Gusts, 16mph 3min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.25 μm/s
QUICK SUMMARY:

H1 has been locked and Observing for 2 hours and 23 minutes.
All systems are running well, though the range seems a bit low.

H1 General (DetChar)
oli.patane@LIGO.ORG - posted 16:38, Thursday 06 March 2025 (83218)
Ops Day Shift End

TITLE: 03/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing at 147 Mpc and have been Locked for 2.5 hours. Relocking after the lockloss during the calibration measurements was fully automatic and went relatively smoothly.
LOG:

20:35UTC Lockloss during calibration measurements

22:07 NOMINAL_LOW_NOISE
22:10 Observing

23:25 Three military jet planes flew overhead at a very low altitude (tagging Detchar)                                                                                                                                                                                                                                

Start Time System Name Location Lazer_Haz Task Time End
16:02 FAC Nelly Opt lab n Tech clean 16:14
19:21 SQZ Sheila, Mayank LVEA - SQZ n SQZ rack meas. 19:52
21:05 TOUR Sheila, Nately LVEA N Tour 21:29
21:53 ISC Matt, Siva, Mayank OpticsLab n ISS Array Alignment 22:48
23:55 VAC Jordan, Fifer reps Mids n Vacuum work 00:39
00:32 WFS Keita Optics Lab Yes Parts for WFS 02:32
H1 DetChar (DetChar-Request)
elenna.capote@LIGO.ORG - posted 16:23, Thursday 06 March 2025 - last comment - 15:03, Friday 07 March 2025(83217)
Possible Scattered Light ~28 Hz

I found evidence of possible scattered light while looking at some data from a lock yesterday. Attached is a whitened spectrogram of 30 minutes of data starting at GPS 1425273749. It looks like the peaks are around 28, 38, and 48 Hz, but they are broad and it's hard to tell the exact frequency and spacing. Sheila thinks this may have appeared after Tuesday maintenance. Tagging detchar request so some tests can be run to help us track down the source!
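
(For anyone reproducing the figure, a gwpy recipe along these lines gives a whitened spectrogram of the same stretch; the channel choice, FFT parameters, and color scale are my assumptions:)

from gwpy.timeseries import TimeSeries

start = 1425273749  # GPS start quoted above
data = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, start + 1800)

# 8 s FFTs in 10 s strides; dividing by the median "whitens" each frequency bin.
spec = data.spectrogram(10, fftlength=8, overlap=4) ** (1 / 2.)
plot = spec.ratio('median').plot(norm='log', vmin=0.5, vmax=10)
ax = plot.gca()
ax.set_yscale('log')
ax.set_ylim(10, 100)  # band around the ~28, 38, 48 Hz features
plot.show()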

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 12:42, Friday 07 March 2025 (83232)

Ryan Short and I have been looking through the summary pages to see what we could learn about this. 

Our range has been shaggy and low since Tuesday, and this does seem to line up well with Tuesday maintenance. Comparing the glitch rate before and after Tuesday isn't as easy; Iara Ota pointed me to the DMT omega glitch pages to make the comparison for Wednesday, when omicron wasn't working. The DMT omega pages don't show the problem very clearly, but the omicron-based ones do show more glitches of SNR 8 and higher since Tuesday maintenance; we can compare Monday to Thursday.

Hveto does flag something interesting: the ETMX optical lever vetoes a lot of these glitches. Both the pitch and yaw channels are picked by hveto, and they don't seem related to glitches in other channels. The oplev wasn't appearing in hveto before Tuesday.

derek.davis@LIGO.ORG - 13:06, Friday 07 March 2025 (83233)DetChar

In recent weeks (every day after Feb 26), there have been large jumps in the amplitude of ground motion between 10-30 Hz at ETMX during the night. A good example of this behavior is on March 1 (see the relevant summary page plot from this page). This jump in ground motion occurs around 3 UTC and then returns to the lower level after 16 UTC. The exact times of the jumps change from night to night, but the change in seismic state is quite abrupt, and seems to line up roughly with the time periods when this scattering appears. 

Images attached to this comment
sheila.dwyer@LIGO.ORG - 15:03, Friday 07 March 2025 (83235)

Ryan found this alog from Oli, 83093, about this noise. Looking back through the summary pages, it does seem that this started turning off and on Feb 20th; before the 20th, this BLRMS was constantly at the level of 200 nm/s.

Comparing the EX ground BLRMS to the optical lever spectra: whenever this ground noise is on, you can see it in the optical lever pitch and yaw, indicating that the optical lever is sensing ground motion. Feb 22nd is a nice example of the optical lever getting quieter when the ground does. However, at that time we don't yet see the glitches in DARM, and hveto doesn't pick up the optical lever channel until yesterday. I'm having a hard time telling when this ground motion started to line up with glitches in DARM; it does for the last two days.
