Reports until 18:22, Sunday 10 September 2023
H1 General (SQZ)
anthony.sanchez@LIGO.ORG - posted 18:22, Sunday 10 September 2023 - last comment - 23:08, Sunday 10 September 2023(72791)
Dropped out of Observing at 23:59 UTC due to SQZ issue.

Dropped out of Observing due to a squeezing issue; DIAG_MAIN told me to go look at alog 70050.
I took SQZ_MANAGER to NO_SQUEEZING and edited line 12 of the sqzparams.py Guardian code:
From:
opo_grTrans_setpoint_uW = 80 #OPO trans power that ISS will servo to. alog 70050.

to:
opo_grTrans_setpoint_uW = 50 #80 #OPO trans power that ISS will servo to. alog 70050.

Loaded SQZ_OPO_LR Guardian.
I then took SQZ_OPO_LR Guardian to LOCKED_CLF_DUAL_NO_ISS, then to LOCKED_CLF_DUAL.
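The sequence above (lower the setpoint, then re-step the OPO guardian so the ISS comes on with the new value) can be sketched as a toy simulation; `request_state` is a hypothetical stand-in, not the real Guardian client API:

```python
# Toy simulation of the recovery sequence above. request_state is a
# hypothetical stand-in for a Guardian state request; it only records
# the (node, state) pairs so the ordering can be checked.

def request_state(node, state, log):
    """Record a Guardian state request (illustrative stand-in)."""
    log.append((node, state))
    return state

def sqz_iss_recovery(setpoint_uW, log):
    # 1) Take squeezing out of the IFO.
    request_state("SQZ_MANAGER", "NO_SQUEEZING", log)
    # 2) The setpoint edit happens in sqzparams.py on site; modeled
    #    here as a plain dict.
    params = {"opo_grTrans_setpoint_uW": setpoint_uW}
    # 3) Relock the OPO without the ISS, then with it, so the ISS
    #    comes on with the new setpoint.
    request_state("SQZ_OPO_LR", "LOCKED_CLF_DUAL_NO_ISS", log)
    request_state("SQZ_OPO_LR", "LOCKED_CLF_DUAL", log)
    return params

log = []
params = sqz_iss_recovery(50, log)
```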

The instructions told me to maximize H1:SQZ-OPO_TEC_SETTEMP. After a few slider bumps it was maxed.

Then I saw the update at the bottom of that alog, which told me to change opo_grTrans_setpoint_uW to 60, which I then did.

I then took SQZ_MANAGER back up to FREQ_DEP_SQZ, accepted the SDF diffs, and took H1 back to Observing. It was happy with this for a brief moment, then dropped back out of Observing.

This is when Vicki checked in with me and my pleas for SQZr help.
She told me that this is likely due to the SQZr being tuned for opo_grTrans_setpoint_uW = 80. I will link this alog as a comment on alog 70050, which guided me to change it.

We have since dropped out of Observing two more times. Each time I simply took SQZ_MANAGER to DOWN, then back up to FREQ_DEP_SQZ, which allowed Observing to be reached again, but it hasn't held.
More troubleshooting is needed. Vicki said she will be logging on shortly to check it out remotely.

Comments related to this report
anthony.sanchez@LIGO.ORG - 18:29, Sunday 10 September 2023 (72793)

Screenshot of the logs attached.

Images attached to this comment
anthony.sanchez@LIGO.ORG - 18:48, Sunday 10 September 2023 (72794)

H1:SQZ-SHG_SERVO_IN1GAIN was changed to hopefully add more stability to the SQZr
SDF Diff accepted.

Images attached to this comment
anthony.sanchez@LIGO.ORG - 23:08, Sunday 10 September 2023 (72795)SQZ

While taking SQZ_MANAGER to NO_SQUEEZING in order to stay in Observing for longer than an hour, a lockloss happened.

Vicki gave me the following instructions to get the IFO into Observing with no SQZ.

Overall:

    bring SQZ_MANAGER --> DOWN --> NO SQUEEZING
    (top of guardian code, Line 36) set SQZ_MANAGER:

   nominal = 'NO_SQUEEZING'  (instead of 'FREQ_DEP_SQZ')
   
   REVERT the following SDF diffs, such that we run with:
   SHG_SERVO_IN1GAIN = -9
   FIBR_SERVO_COMGAIN = 15

anthony.sanchez@cdsws13: caget H1:SQZ-SHG_SERVO_IN1GAIN
H1:SQZ-SHG_SERVO_IN1GAIN       -9
anthony.sanchez@cdsws13: caget H1:SQZ-FIBR_SERVO_COMGAIN
H1:SQZ-FIBR_SERVO_COMGAIN      15
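The same check can be scripted; this is a hedged sketch in which `read_channel` stands in for an EPICS caget and reads from a simulated dict rather than the live channels:

```python
# Hedged sketch: verify the reverted gains against expected values, as
# the caget transcript above does by hand. read_channel is a stand-in
# for an EPICS caget; here it reads from a simulated readback dict.

EXPECTED = {
    "H1:SQZ-SHG_SERVO_IN1GAIN": -9,
    "H1:SQZ-FIBR_SERVO_COMGAIN": 15,
}

def check_reverts(read_channel):
    """Return the channels whose readback does not match EXPECTED."""
    return [ch for ch, val in EXPECTED.items() if read_channel(ch) != val]

# Simulated readback matching the transcript above:
readback = {"H1:SQZ-SHG_SERVO_IN1GAIN": -9, "H1:SQZ-FIBR_SERVO_COMGAIN": 15}
mismatches = check_reverts(readback.get)
```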


H1 General
anthony.sanchez@LIGO.ORG - posted 16:15, Sunday 10 September 2023 (72790)
Sunday Ops Eve Shift Start

TITLE: 09/10 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 2mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY:

Inherited a Locked IFO that has been Locked for 15 hours.
Everything looks great.

LHO General
ryan.short@LIGO.ORG - posted 16:00, Sunday 10 September 2023 (72789)
Ops Day Shift Summary

TITLE: 09/10 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Quiet shift today with H1 locked and observing throughout. Current lock stretch is up to 15 hours.

LOG:

No log for this shift.

LHO General
ryan.short@LIGO.ORG - posted 12:01, Sunday 10 September 2023 (72788)
Ops Day Mid Shift Report

State of H1: Observing at 149Mpc

Very quiet day so far with just a couple of small earthquakes passing through. H1 has been locked for 11 hours.

LHO VE
david.barker@LIGO.ORG - posted 10:10, Sunday 10 September 2023 (72787)
Sun CP1 Fill

Sun Sep 10 10:07:48 2023 INFO: Fill completed in 7min 44secs

Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 09:49, Sunday 10 September 2023 (72786)
PSL 10-Day Trends

FAMIS 19993

There was a brief temperature spike a little over 6 days ago, but this quickly came back to normal. No other major events in the past 10 days.

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 08:03, Sunday 10 September 2023 (72785)
Ops Day Shift Start

TITLE: 09/10 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 6mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked for 7 hours.

H1 General (GRD)
oli.patane@LIGO.ORG - posted 02:05, Sunday 10 September 2023 (72784)
LOCKLOSS_SHUTTER_CHECK keeping us from Observing

IFO_NOTIFY asked for assistance after reaching NLN and having ADS converge because the LOCKLOSS_SHUTTER_CHECK guardian node wasn't nominal (nominal is HIGH_ARM_POWER).

It looks like when the lockloss at 09/10 06:46 UTC happened, LOCKLOSS_SHUTTER_CHECK jumped from HIGH_ARM_POWER to CHECK_SHUTTER and got stuck there with the message "USERMSG 0: run shutter check, then manually take to low power". I requested CHECK_SHUTTER (even though it was already in that state), then went into MANUAL mode and requested LOW_ARM_POWER. Since the arm power was above threshold, it automatically moved up to HIGH_ARM_POWER as soon as I switched to AUTO mode, and we got into Observing. I've attached a txt file of the output log since my description of what I did might be a little hard to follow.
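As a toy model (not the real Guardian node code), the recovery logic described above looks like:

```python
# Toy model of the recovery described above: the node is stuck in
# CHECK_SHUTTER; after the manual LOW_ARM_POWER request, it promotes
# itself to HIGH_ARM_POWER once arm power is above threshold. The
# threshold value here is illustrative, not the site setting.

def recover(node_state, arm_power, threshold):
    if node_state == "CHECK_SHUTTER":
        node_state = "LOW_ARM_POWER"       # manual request in MANUAL mode
        if arm_power > threshold:
            node_state = "HIGH_ARM_POWER"  # automatic once back in AUTO
    return node_state
```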

Non-image files attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 00:14, Sunday 10 September 2023 (72783)
Saturday OPS Eve Shift End

TITLE: 09/10 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

Lockloss 00:52 UTC https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72780
Attempt to lock right away was met with a lock loss at PRMI.
Initial alignment started at 1:26 UTC after the PRMI lockloss.
Relocking started again at 1:49 UTC. 
Nominal Low Noise Reached at 2:35 UTC
Observing Reached at 2:48 UTC
GRB-Short E436485 standing down
6:47 UTC unknown Lockloss. https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72782

Current Status: Relocking, and in PRMI_ASC

H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 23:56, Saturday 09 September 2023 (72782)
Lockloss 1378363592

Unknown lockloss right before the end of the shift.

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1378363592

Looking at this plot makes me think this lockloss looks very much like a lockloss I had the other day, alog 72772.

Images attached to this report
H1 General (TCS)
anthony.sanchez@LIGO.ORG - posted 22:27, Saturday 09 September 2023 (72781)
Drop from OBSERVING

Tagging TCS because H1 dropped from Observing due to TCS changing settings and thus making SDF differences.
The TCS_ITMX_CO2 Guardian dropped down to FIND_LOCK_POINT for a minute for some reason.

Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 18:30, Saturday 09 September 2023 (72780)
Lockloss 1378342360

Unknown Lockloss at 00:52 UTC

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1378342360
Lockloss select plots attached.

Seismic System didn't have much motion.
I'll be looking more into this lockloss.


Images attached to this report
H1 PEM (DetChar)
robert.schofield@LIGO.ORG - posted 17:00, Saturday 09 September 2023 - last comment - 11:19, Monday 11 September 2023(72778)
HVAC couples at LVEA and EX but not EY: update on partial shutdown tests

Lance, Genevieve, Robert

Recently, we shut down specific components of the HVAC system in order to further understand the loss of about 10 Mpc to the HVAC system (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72308 ). We noted that shutdown of the EX water pump had shown that the 52 Hz DARM peak is produced by the chilled water pump at EX.  Based on coupling studies during commissioning time yesterday, the coupling of the water pump can be predicted from shaking injections in the area around the EX cryo-baffle, supporting the hypothesis that the water pump couples at the undamped cryo-baffle (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72769 ). Here we report on other results of the shutdown tests that we have been able to do so far.

CS Fans SF1, 2, 3, 4, 5, and 6 cost roughly 6 Mpc – coupling via input jitter noise and unknown coupling.

Figure 1 shows that the range increased by about 6 Mpc when only the CS turbines were shut down; no chillers or chilled water pumps were shut down. Figure 2, a comparison of DARM spectra before, during, and after the fan-only shutdown, shows that there were two major differences. First, a decrease in peaks associated with input jitter noise, particularly the 120 Hz peak. Second, a broadband reduction in noise between about 20 and 80 Hz. This is not consistent with input jitter noise and represents an unknown noise source that we haven't found yet.

There is a third difference that could be a coincidence. The 9.8 Hz ITM bounce modes are higher in the before and after traces of Figure 2. I was tempted to wonder if the broadband noise was upconversion from the 9.8 Hz peak. We also have harmonics of roughly 10 Hz in the spectrum every so often. I compared the BLRMS of 8.5-10 Hz to the BLRMS of 39-50 Hz but didn't see any obvious correlation, though I'm not sure this eliminates the possibility.

120 Hz peak in DARM due to periscope resonance matching new 120 Hz peak from HVAC, possibly due to a new leak in LVEA ducts.

Figure 3 shows that the 120 Hz peak in DARM went away when only SF1, 2, 3 and 4 were shut down. It also shows that the HVAC produces a broad peak between 115 and 120 Hz. I looked back, and the 120 Hz vibration peak from the HVAC appears to have started during HVAC work at the end of May and beginning of June. There was a period when flows were increased to a high level for a short time, which might have pushed apart a duct connection that is now whistling at 120 Hz. I think it would be worth checking for a leak in the ducts associated with SF1, 2, 3 and 4.

In addition to fixing a potential duct leak, we could mitigate the peak in DARM by moving the PSL periscope peak so that it doesn’t overlap with the HVAC peak. In the past I have moved PSL periscope resonances for similar reasons by attaching small weights.

EY HVAC does not contribute significantly to DARM noise

Figure 4 shows that an on/off/on/off/on/off/on series of EY fan, chiller, and water pump shutdowns does not seem to correlate with range.

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 11:19, Monday 11 September 2023 (72807)ISC, SYS
This data is the analysis of the 2023-Aug-18 data originally summarized in LHO:72331.
H1 General
anthony.sanchez@LIGO.ORG - posted 16:26, Saturday 09 September 2023 (72779)
Saturday OPS Eve Shift Start

TITLE: 09/09 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 44Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 19mph Gusts, 15mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Inherited a Locked IFO that has been locked for 16:29 hours.
Wind is a little elevated.

LHO General
austin.jennings@LIGO.ORG - posted 16:00, Saturday 09 September 2023 (72775)
Saturday Operator Summary

TITLE: 09/09 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

- Started out with a 6.0 EQ from Indonesia, but we were able to ride it out

- EX saturations @ 15:25

LOG:                                                         

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
17:22 | EPO | Tours | LExC/CR/Overpass | N | Second Saturday tours! | 21:52
LHO General
austin.jennings@LIGO.ORG - posted 12:08, Saturday 09 September 2023 (72777)
Mid Shift Report

H1 is still going strong, currently locked for 12 hours. Ground motion has settled since this morning's earthquake. Second Saturday tours are ongoing.

LHO VE
david.barker@LIGO.ORG - posted 10:11, Saturday 09 September 2023 (72776)
Sat CP1 Fill

Sat Sep 09 10:08:35 2023 INFO: Fill completed in 8min 31secs

Images attached to this report
LHO General
austin.jennings@LIGO.ORG - posted 08:03, Saturday 09 September 2023 (72774)
Ops Day Shift Start

TITLE: 09/09 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.10 μm/s 
QUICK SUMMARY:

- H1 is currently locked, just hit the 8 hour mark

- FMCS channels still appear to be down on the alert handler

- Literally as I was typing that seismic motion has calmed down since yesterday, we just got an alert for a 6.0 EQ from Indonesia en route...looks like I jinxed it

H1 General
anthony.sanchez@LIGO.ORG - posted 00:16, Saturday 09 September 2023 (72773)
Friday Ops Eve Shift End

TITLE: 09/09 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

00:27 UTC  Another Incoming 5.6M Earthquake from Indonesia.
Initial Alignment @ 1:23 UTC
Locking Started at @ 1:54 UTC

5.3M Earthquake from El Salvador at 3:30 UTC - Survived!

4 more 4.5-4.8M earthquakes roll through from Costa Rica , Panama, and Indonesia.

Lockloss 5:46 UTC https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72772
Back to NOMINAL_LOW_NOISE@6:53 UTC
Back to Observing @ 7:06 UTC

H1 SQZ
sheila.dwyer@LIGO.ORG - posted 16:22, Wednesday 31 May 2023 - last comment - 20:53, Tuesday 01 October 2024(70050)
what to do if the SQZ ISS saturates

I've set the setpoint for the OPO trans to 60 uW; this gives us better squeezing and a little bit higher range. However, the SHG output power sometimes fluctuates for reasons we don't understand, which causes the ISS to saturate and knocks us out of observing. Vicky and operators have fixed this several times; I'm adding instructions here so that we can hopefully leave the setpoint at 60 uW and operators will know how to fix the problem if it arises again.

If the ISS saturates, you will get a message on DIAG_MAIN; operators can then lower the setpoint to 50 uW.

1) Take sqz out of the IFO by requesting NO_SQUEEZING from SQZ_MANAGER.

2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor. In the sqzparams file you can set opo_grTrans_setpoint_uW to 50. Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, and after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint.

3) This change in the circulating power means that we need to adjust the OPO temperature to get the best SQZ. Open the OPO temp ndscope from the SQZ scopes drop-down menu on the SQZ overview (pink oval in screenshot). Then adjust the OPO temp setting (green oval) to maximize the CLF-REFL_RF6_ABS channel, the green one on the scope.

4) Go back to observing by requesting FREQ_DEP_SQZ from SQZ_MANAGER. You will have 2 SDF diffs to accept, as shown in the attached screenshot.
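Step 3's temperature adjustment is just a 1-D maximization; a minimal sketch, with an invented stand-in curve for CLF-REFL_RF6_ABS (the real peak temperature is found on the ndscope, not computed):

```python
# Minimal sketch of step 3's maximization. rf6_abs is an invented
# stand-in for H1:SQZ-CLF_REFL_RF6_ABS versus OPO temperature; the
# peak location (31.7 here) is arbitrary, not a site value.

def rf6_abs(temp):
    return 1.0 - (temp - 31.7) ** 2

def maximize_opo_temp(temps):
    """Pick the scanned temperature that maximizes the RF6 signal."""
    return max(temps, key=rf6_abs)

# Scan a small range of temperatures, as one would with the slider:
best = maximize_opo_temp([31.5 + 0.05 * i for i in range(9)])
```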

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 20:13, Monday 05 June 2023 (70162)

Update: in the SDF diffs, you will likely not see H1:SQZ-OPO_ISS_DRIVEPOINT change, just the one diff for OPO_TEC_SETTEMP. The *ISS_DRIVEPOINT channel is used for commissioning; the ISS stabilizes power to the un-monitored value that changes, H1:SQZ-OPO_ISS_SETPOINT.

Also, if the SQZ_OPO_LR guardian is stuck ramping in "ENGAGE_PUMP_ISS" (you'll see H1:SQZ-OPO_TRANS_LF_OUTPUT ramping), this is because the setpoint is too high to be reached, which is a sign to reduce opo_grTrans_setpoint_uW in sqzparams.py.
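In other words, the ramp converges only if the setpoint is deliverable; a minimal sketch of that reasoning, with illustrative numbers only:

```python
# Illustrative numbers only: the ISS ramp converges only if the
# requested OPO trans power is actually deliverable, so an unreachable
# setpoint should be stepped down until it is.

def iss_can_converge(setpoint_uW, max_deliverable_uW):
    return setpoint_uW <= max_deliverable_uW

def suggest_setpoint(setpoint_uW, max_deliverable_uW, step_uW=10):
    """Step the setpoint down until the ramp can converge."""
    while not iss_can_converge(setpoint_uW, max_deliverable_uW):
        setpoint_uW -= step_uW
    return setpoint_uW
```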

naoki.aritomi@LIGO.ORG - 16:56, Monday 07 August 2023 (72044)

Update for operators:

2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor. In the sqzparams file you can set opo_grTrans_setpoint_uW to 60 (updated from 50). Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, and after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint. Check that the OPO ISS control monitor (H1:SQZ-OPO_ISS_CONTROLMON) is around 3 by opening SQZ OVERVIEW -> SQZT0 -> AOM +80MHz -> Control monitor (attached screenshot). If the control monitor is not around 3, repeat 2) and adjust opo_grTrans_setpoint_uW to make it around 3.
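The CONTROLMON check above amounts to a small feedback loop on the setpoint; a hedged sketch with an invented linear plant model (the real setpoint-to-CONTROLMON relationship is measured on site, not computed):

```python
# Hedged sketch of the CONTROLMON check: step the setpoint until the
# control monitor reads near 3. controlmon is an invented linear model
# standing in for H1:SQZ-OPO_ISS_CONTROLMON.

def controlmon(setpoint_uW):
    return setpoint_uW / 20.0

def tune_setpoint(setpoint_uW, target=3.0, tol=0.25, step=5):
    for _ in range(20):  # bounded number of adjustments
        err = controlmon(setpoint_uW) - target
        if abs(err) <= tol:
            break
        setpoint_uW += -step if err > 0 else step
    return setpoint_uW
```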

Images attached to this comment
anthony.sanchez@LIGO.ORG - 18:27, Sunday 10 September 2023 (72792)

Vicki has asked me to note that the line in sqzparams.py should stay at 80, since the SQZr is tuned for 80 rather than 50 or 60.

Line 12: 
opo_grTrans_setpoint_uW = 80 #OPO trans power that ISS will servo to. alog 70050.

Relevant alog:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72791

anthony.sanchez@LIGO.ORG - 20:53, Tuesday 01 October 2024 (80414)

Latest update on how to deal with this SQZ error message with a bit more clarity:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80413
