Reports until 11:01, Monday 07 August 2023
H1 DetChar
oli.patane@LIGO.ORG - posted 11:01, Monday 07 August 2023 - last comment - 16:27, Monday 07 August 2023(72029)
Voltage Drops Due to Weather

Over the past 6 hours, we've seen multiple drops in voltage due to the weather here - it has been raining on and off, and there is thunder and lightning in the area. The attached plot shows what H0:FMC-EX_MAINS_CHAN_{1,2,3}_VOLTAGE are seeing.

The voltage drops are especially large between 14:55 and 16:47 UTC. Tagging DetChar
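
For reference, a minimal sketch (not the script actually used) of how these channels could be pulled and plotted with gwpy, assuming NDS/frame access; the time span is illustrative:

    # Minimal sketch: fetch and plot the three EX mains voltage channels.
    # Assumes NDS/frame access; the time span is illustrative.
    from gwpy.timeseries import TimeSeriesDict

    channels = [f'H0:FMC-EX_MAINS_CHAN_{n}_VOLTAGE' for n in (1, 2, 3)]
    data = TimeSeriesDict.get(channels, 'Aug 7 2023 12:00', 'Aug 7 2023 18:00')

    plot = data.plot()              # all three channels on one set of axes
    ax = plot.gca()
    ax.set_ylabel('Voltage [V]')
    ax.legend()
    plot.savefig('ex_mains_voltage.png')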

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 16:27, Monday 07 August 2023 (72038)

The last of these voltage glitches seen in the EX_MAINS voltage channels was at 18:46UTC.

We've had a lot of big drops in our range today, so I plotted the range against the EX MAINS voltage channels to see if there was any correlation between the glitches and the drops in range.

Highlighting the two-hour period on the voltage channels with the largest and most frequent voltage drops (attachment 1) shows that the range did start dropping more often around this time, but the large range drops still continued past it. Some of the voltage drops generally line up with drops in the range in the following minute, but there is no consistent correlation. Attachments 2, 3, and 4 show the range channel overlaid on the voltage channels over the period 12:00 UTC to 18:00 UTC. Attachments 5, 6, and 7 are zoomed-in looks at the potential range drops following a power glitch. Notice that in some cases another voltage glitch occurred a bit before but did not result in a drop in the detector's range.
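
For completeness, a sketch of the kind of overlay used for these attachments; the range channel name below is an assumption and should be verified before use:

    # Sketch: stack the BNS range and one EX mains voltage channel on a
    # shared time axis.  The range channel name is an assumption -- verify it.
    from gwpy.timeseries import TimeSeries
    from gwpy.plot import Plot

    start, end = 'Aug 7 2023 12:00', 'Aug 7 2023 18:00'
    range_chan = 'H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC'  # assumed name
    volts = TimeSeries.get('H0:FMC-EX_MAINS_CHAN_1_VOLTAGE', start, end)
    bns = TimeSeries.get(range_chan, start, end)

    # separate panels, shared x-axis, so voltage glitches and range drops
    # can be compared by eye
    plot = Plot(volts, bns, separate=True, sharex=True)
    plot.axes[0].set_ylabel('Voltage [V]')
    plot.axes[1].set_ylabel('Range [Mpc]')
    plot.savefig('range_vs_ex_mains.png')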

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:59, Monday 07 August 2023 (72028)
Mon CP1 Fill

Mon Aug 07 10:15:13 2023 INFO: Fill completed in 15min 9secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 ISC (PSL)
jenne.driggers@LIGO.ORG - posted 10:58, Monday 07 August 2023 - last comment - 10:46, Wednesday 04 October 2023(72027)
Cycled ISS Second Loop - increased diffracted power

The ISS Second Loop engaged this lock with a low-ish diffracted power (about 1.5%).  Oli had chatted with Jason about it, and Sheila noticed that perhaps it being low could be related to the number of glitches we've been seeing.  A concern is that if the control loop needs to go "below" zero percent (which it can't do), this could cause a lockloss.

I "fixed" it by selecting IMC_LOCK to LOCKED (which opens the ISS second loop), and then selecting ISS_ON to re-close the second loop and put us back in our nominal Observing configuration.  This set the diffracted power back much closer to 2.5%, which is where we want it to be.

Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:33, Friday 08 September 2023 (72762)CAL, IOO
This cycling of the ISS 2nd loop (a DC-coupled loop) dropped the power into the PRM (H1:IMC-PWR_IN_OUT16) from 57.6899 W to 57.2255 W over the course of ~1 minute, 2023-Aug-07 17:49:28 UTC to 17:50:39 UTC. It caught my attention because I saw a discrete drop in arm cavity power of ~2.5 kW while trending around looking for thermalization periods. 

This serves as another lovely example of the time-dependent correction factors (TDCFs) doing their job well, and indeed quite accurately. If we repeat the math we used back in O3 (see LHO:56118 for the derivation), we can model the optical gain change in two ways:
    - the relative change estimated from the power on the beam splitter (assuming the power recycling gain is constant and cancels out):
      relative change = (np.sqrt(57.6858) - np.sqrt(57.2255)) / np.sqrt(57.6858)
                      = 0.0039977
                      = 0.39977%

    - the relative change estimated by the TDCF system, via kappa_C:
      relative change = (0.97803 - 0.974355) / 0.97803
                      = 0.0037576
                      = 0.37576%

Indeed, the estimates agree quite well, especially given the noise / uncertainty in the TDCF (because we like to limit the height of the PCAL line that informs it). This gives me confidence that -- at least over these several-minute time scales -- kappa_C is accurate to within 0.1 to 0.2%. This is consistent with the uncertainty we estimate by converting the coherence between the PCAL excitation and DARM_ERR into uncertainty via Bendat & Piersol's unc = sqrt( (1-C) / (2NC) ).
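
For the record, the same arithmetic as a quick snippet (power and kappa_C numbers copied from above; the coherence and average count in the last block are illustrative placeholders, not measured values):

    # Reproduce the two optical-gain-change estimates quoted above, plus the
    # Bendat & Piersol line uncertainty.  Power and kappa_C values are copied
    # from this entry; C and N below are illustrative placeholders.
    import numpy as np

    # (1) from the input power, assuming constant power recycling gain
    p_before, p_after = 57.6858, 57.2255                      # W
    dg_pow = (np.sqrt(p_before) - np.sqrt(p_after)) / np.sqrt(p_before)
    print(f'from input power: {100 * dg_pow:.4f} %')          # ~0.40 %

    # (2) from the TDCF pipeline, via kappa_C
    kc_before, kc_after = 0.97803, 0.974355
    dg_kc = (kc_before - kc_after) / kc_before
    print(f'from kappa_C:     {100 * dg_kc:.4f} %')           # ~0.38 %

    # statistical uncertainty of the PCAL line estimate, unc = sqrt((1-C)/(2NC))
    C, N = 0.9999, 20                                         # placeholders
    print(f'line uncertainty: {100 * np.sqrt((1 - C) / (2 * N * C)):.2f} %')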

It's nice to have these "sanity check" warm and fuzzies that the TDCFs are doing their job; but it's also nice to have a detailed record of these weird random "what's that??" moments that come up when trending around looking for things.

I also note that there's no change in cavity pole frequency, as expected.
Images attached to this comment
camilla.compton@LIGO.ORG - 10:46, Wednesday 04 October 2023 (73266)TCS

When the circulating power dropped by ~2.5 kW, kappa_c trended down (plot attached). This implies that the lower circulating powers induced in the previous RH tests (73093) are not the reason kappa_c increases. We may also see a slight increase in high-frequency noise as the circulating power is turned up (plot attached).

Images attached to this comment
H1 AOS
mitchell.robinson@LIGO.ORG - posted 10:21, Monday 07 August 2023 (72024)
Monthly Dust Monitor Vacuum Pump Check

All dust monitor pumps are running smoothly. Temps are within the operating range.

H1 General
oli.patane@LIGO.ORG - posted 08:06, Monday 07 August 2023 (72022)
Ops DAY Shift Start

TITLE: 08/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 5mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

Detector is Observing and has been Locked for 8 hours.

I will change back last night's edit of IFO_NODE_LIST.py (72021)

H1 General
austin.jennings@LIGO.ORG - posted 00:32, Monday 07 August 2023 - last comment - 00:40, Monday 07 August 2023(72020)
Owl Observations CANCELED (IF we lose this current lock) - 8/7

Tonight H1 had issues holding lock, losing lock at various states in ISC LOCK. After running through some surface-level troubleshooting with commissioners and the CDS team, we were unable to find the root cause of the problem. We do have some theories, but investigating them would require multiple teams on site to help diagnose. Being that it is midnight our time, we have confirmed with the run coordinator and decided to cancel observations IF the interferometer loses its current lock. The automation and alerts are turned off.

Comments related to this report
austin.jennings@LIGO.ORG - 00:40, Monday 07 August 2023 (72021)OpsInfo

We have excluded ISC_LOCK from the intention bit to allow us to request ISC LOCK to DOWN in case we lose lock tonight (so we don't relock overnight). This needs to be changed back first thing in the morning! - Tagging OpsInfo

To do this, go to the IFO guardian node, hit edit, and go to IFO_NODE_LIST.py and DELETE line 32. Then save and hit LOAD.

LHO General
austin.jennings@LIGO.ORG - posted 00:24, Monday 07 August 2023 (72012)
Sunday Eve Shift Summary

TITLE: 08/06 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

- Arrived with H1 running an IA and trying to relock (following a hectic morning with the high voltage trip; all looks to have been recovered); the IA ran without issue

- Acquired NLN @ 0:07, OBSERVE @ 0:22

- Lockloss @ 1:34

- Had to intervene in relocking by moving the DIFF offset by a few steps

- At CARM TO TR there was a message on DIAG MAIN reading "NO IR in arms"; I couldn't find a reference on how to troubleshoot this, and it was stuck for about 15 minutes, so I killed the lock and tried again

- After an hour of trying to lock and doing another initial alignment, we could not even get past DRMI, so I began rallying the troops

- We first looked into some odd saturations for PRM/PR3/MC1/MC3 that occurred @ 23:32/4:14 UTC at DRMI LOCKED CHECK ASC

LOG:

No log for this shift.

H1 CDS
sheila.dwyer@LIGO.ORG - posted 23:11, Sunday 06 August 2023 - last comment - 16:28, Monday 07 August 2023(72019)
locking troubles, overflows on suspension computers

Austin, Sheila

Austin contacted me about intermittent locklosses at various stages of the acquisition sequence.  

He posted the attached verbal alarms log; a few interesting episodes in this log include: 

P_R_M  (Aug 6 23:32:38 UTC)
P_R_3  (Aug 6 23:32:38 UTC)
M_C_1  (Aug 6 23:32:38 UTC)
M_C_3  (Aug 6 23:32:38 UTC)

.....

P_R_2  (Aug 7 03:35:19 UTC)
S_R_2  (Aug 7 03:35:19 UTC)
M_C_2  (Aug 7 03:35:19 UTC)
T_M_S_X  (Aug 7 03:35:19 UTC)
T_M_S_Y  (Aug 7 03:35:19 UTC)
IFO_OUT  (Aug 7 03:35:19 UTC)

....

P_R_M  (Aug 7 04:14:22 UTC)
P_R_3  (Aug 7 04:14:22 UTC)
M_C_1  (Aug 7 04:14:22 UTC)
M_C_3  (Aug 7 04:14:22 UTC)

Verbal alarms looks at H1:FEC-(number)_ACCUM_OVERFLOW for these alarms.  The PR3 saturations seem suspicious because we send no ISC feedback to PR3; I looked at the OSEMs, drive requests, and individual channel overflows and see nothing at this time, but the FEC-ACCUM_OVERFLOW channel does show overflows at 4:14:19 UTC.  The suspensions reporting overflows at this time are all in HAM2, and all their models run on SUSH2A.  Also suspicious is the overflow reported from PR2, SR2, and MC2 at the same time; these are all the suspensions on SUSH34.  This is what makes me think the locking troubles may be due to some intermittent problem with CDS. 
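
As a rough sketch of how one could trend these overflow counters around the times above with gwpy (the DCUID list has to be filled in; NDS access is assumed):

    # Sketch: trend front-end accumulated-overflow counters around one of the
    # reported times.  Fill in the DCUIDs of the models of interest
    # (e.g. the SUSH2A and SUSH34 models); NDS access is assumed.
    from gwpy.timeseries import TimeSeriesDict

    fec_ids = []        # fill in DCUIDs, e.g. from the CDS overview
    channels = [f'H1:FEC-{n}_ACCUM_OVERFLOW' for n in fec_ids]

    start, end = 'Aug 7 2023 04:14:00', 'Aug 7 2023 04:15:00'
    data = TimeSeriesDict.get(channels, start, end)

    for name, ts in data.items():
        # a step up in the accumulated counter marks when overflows occurred
        stepped = ts.times[ts.value > ts.value[0]]
        print(name, 'first overflow at',
              stepped[0] if len(stepped) else 'none in this span')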

Images attached to this report
Non-image files attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 16:28, Monday 07 August 2023 (72035)SUS

Sheila, Dave, Austin, Rahul

Following up on the SUS saturation issue that Austin and Sheila faced on Sunday, I trended the ADC0 and ADC1 channels on H1SUSPRM and SUSH34.

For H1SUSPRM, I found two saturations (PRM_M3_WD_OSEMAC_BANDIM_UR_INMON), the first at 23:32 UTC and the second at 04:14 UTC - see the attached plot. I trended all the inmons and DAQ outputs for PRM and did not find any of the suspension channels saturating at those two times. I am attaching screenshots of the DAQ output channels for the M1, M2, and M3 stages for both times, i.e. 23:32 UTC and 04:14 UTC.

Similarly, SUSH34 also showed some saturations on channel 24 (which is MC2_M3_WD_OSEMAC_BANDIMUL_INMON) - see the two attached plots: one shows all the channels in ADC0 and the other focuses on channel 24. For MC2 I see saturations in the DAQ output for the M2 and M3 stages at times coincident with SUSH34. I will investigate MC2 further (however, Sheila mentioned that it is fairly common for the MC2 DAQ to saturate during locking).    

Since PR2 and SR2 are also on SUSH34, I trended them as well and did not find any saturations in the inmons or DAQ outputs.
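
A hedged sketch of the kind of rail check described here; the channel list is an example to be replaced with the actual INMON/DAQ channels, and the ±32768-count limit assumes a 16-bit ADC:

    # Sketch: flag channels that approach the 16-bit ADC rail (+/-32768 counts)
    # in a window around one of the reported saturation times.  Replace the
    # example channel list with the actual INMON / DAQ channels of interest.
    from gwpy.timeseries import TimeSeriesDict

    channels = []       # e.g. the PRM M3 and MC2 M3 OSEM channels named above
    RAIL = 32768        # 16-bit ADC limit, in counts (assumption)
    start, end = 'Aug 7 2023 04:13:00', 'Aug 7 2023 04:16:00'

    data = TimeSeriesDict.get(channels, start, end)
    for name, ts in data.items():
        peak = abs(ts.value).max()
        flag = '  <-- at/near rail' if peak >= 0.99 * RAIL else ''
        print(f'{name}: peak {peak:.0f} counts{flag}')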

                                                                              

Images attached to this comment
H1 DetChar (DetChar)
zoheyr.doctor@LIGO.ORG - posted 21:46, Sunday 06 August 2023 (72018)
DQ Shift Report LHO: 31 July 2023 00:00 UTC to 6 Aug 2023 23:59 UTC

Link to report here

LHO General
austin.jennings@LIGO.ORG - posted 20:02, Sunday 06 August 2023 (72017)
Mid Shift Eve Report

H1 is now relocking after a lockloss; the lock itself was very short, only 1.5 hours. Given that an IA was just completed and ground motion was low (and nothing obvious shows up in the lockloss tools), I'm confused as to why. Relocking has been a bit strange - we had a lockloss at SHUTTER ALS and at CARM TO TR (though I think this might have been from IR not being found properly). Nonetheless, H1 is now at LOWNOISE COIL DRIVERS and hopefully will be back up soon.

H1 General (Lockloss)
austin.jennings@LIGO.ORG - posted 18:37, Sunday 06 August 2023 (72016)
Lockloss @ 1:34

Lockloss @ 1:34, cause unknown, seismic motion is low.

H1 General
oli.patane@LIGO.ORG - posted 16:18, Sunday 06 August 2023 (72015)
Ops DAY Shift End

TITLE: 08/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Austin
SHIFT SUMMARY:

Currently working through an initial alignment

 

15:00UTC Detector down due to power glitch at 13:34UTC; Ryan S going to CER to turn High Voltage back on (72000)

15:15 Relocking
- Had issues with relocking because the OMC High Voltage PZT was not reset after the power outage (72003)
17:43 Continuing up
18:05 NOMINAL_LOW_NOISE
18:28 Observing

20:33 Earthquake mode activated
20:44 Back to CALM

22:28 Lockloss (72010)


LOG:                                                                                                                                                                                                                               

Start Time | System | Name   | Location        | Lazer_Haz | Task                               | End Time
13:54      | OPS    | Ryan S | HEADING TO SITE | -         | Investigating PSL issue            | 14:54
14:56      | PSL    | Ryan S | CER             | -         | Turning on PMC HV                  | 15:23
17:11      | OMC    | Oli    | LVEA            | n         | Turning HAM6 High Voltage back on  | 17:52
LHO General
austin.jennings@LIGO.ORG - posted 16:09, Sunday 06 August 2023 (72011)
Ops Eve Shift Start

TITLE: 08/06 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 7mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.04 μm/s
QUICK SUMMARY:

- H1 is currently running an IA following the lockloss and should hopefully be back up shortly

- CDS/SEI/DMs ok

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 15:29, Sunday 06 August 2023 - last comment - 16:15, Sunday 06 August 2023(72010)
Lockloss

Lockloss at 22:28UTC

Comments related to this report
oli.patane@LIGO.ORG - 16:15, Sunday 06 August 2023 (72014)

Looking quickly at peakmon, the lockloss occurred at the crosshair mark on the attachment. The spikes are relatively small but sharp.

Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 12:30, Sunday 06 August 2023 (72009)
Ops DAY MidShift Report

Everything is looking normal now; we've been Locked for 1hr 25mins.

H1 General
oli.patane@LIGO.ORG - posted 08:08, Sunday 06 August 2023 - last comment - 16:13, Sunday 06 August 2023(72001)
Ops DAY Shift Start

TITLE: 08/06 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.07 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY:

Ryan S is heading over to the CER to restart the High Voltage following the power glitch lockloss (72000). There is a major dustmon alarm for the PSL even though winds are low.

Comments related to this report
oli.patane@LIGO.ORG - 11:29, Sunday 06 August 2023 (72008)

18:28UTC Reached Observing

oli.patane@LIGO.ORG - 16:13, Sunday 06 August 2023 (72013)

I noticed that the dustmons for the PSL laser room spiked dramatically within 10 minutes of the power glitch even though wind was low at the time (attachment). The dust counts then went back down a few minutes after the NPRO and amps were restored in the PSL. We don't see how dust could jump after just a power glitch, and Austin suggested that it may just be some strange cross-coupling between the electronics in the PSL laser room and the dustmon in there.

Images attached to this comment
H1 CAL
anthony.sanchez@LIGO.ORG - posted 15:41, Saturday 29 July 2023 - last comment - 09:20, Thursday 05 October 2023(71812)
PCAL EY End Station Measurement

ENDY Station Measurement
During the Tuesday maintenance, the PCAL team (Julianna Lewis & Tony Sanchez) went to ENDY with the Working Standard Hanford, aka WSH (PS4), and took an End Station measurement.
The ENDY Station Measurement was carried out according to the procedure outlined in Document LIGO-T1500062-v15, Pcal End Station Power Sensor Responsivity Ratio Measurements: Procedures and Log, and was completed by 11 am.

The first thing we did was take a picture of the beam spot before anything was touched!

Martel:
We started by setting up a Martel voltage source to apply voltage to the PCAL chassis's Input 1 channel, and we recorded the times at which -4.000 V, -2.000 V, and 0.000 V signals were sent to the chassis. The analysis code that we run after we return uses the GPS times, grabs the data, and creates the Martel_Voltage_Test.png graph. We also did a measurement of the Martel's voltages in the PCAL lab to calculate the ADC conversion factor, which is included in the document.
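
As a loose illustration only (not the actual analysis code in the SVN), the ADC conversion factor amounts to the slope of a linear fit of the recorded ADC readback against the known Martel voltages:

    # Hypothetical illustration, not the SVN analysis code: the ADC conversion
    # factor is the slope of recorded ADC counts vs the applied Martel voltages.
    # The count values are placeholders to be replaced with the readback data.
    import numpy as np

    applied_volts = np.array([-4.000, -2.000, 0.000])    # Martel settings [V]
    adc_counts = np.array([0.0, 0.0, 0.0])               # placeholder readback

    slope, offset = np.polyfit(applied_volts, adc_counts, 1)
    print(f'ADC conversion factor ~ {slope:.6g} counts/V (offset {offset:.4g})')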

After the Martel measurement, the procedure walks us through the steps required to make a series of plots while the Working Standard (PS4) is in the Transmitter Module. These plots are shown in WS_at_TX.png.

Next, the WS goes in the Receiver Module; these plots are shown in WS_at_RX.png.

This is followed by TX_RX.png, which shows plots of the Transmitter module and the Receiver module operating without the WS in the beam path at all.
The last picture is of the Beam spot after we had finished the measurement.
All of this data is then used to generate LHO_ENDY_PD_ReportV2.pdf, which is attached and is a work in progress in the form of a living document.

All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_ENDY/

PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) Front-Back Responsivity Ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages.pdf
avg_voltages.pdf
raw_ratios.pdf
avg_ratios.pdf

All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LabData/PS4_PS5/


A surprise Back-Front PS4/PS5 Responsivity Ratio measurement appeared!!
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) Back-Front Responsivity Ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages2.pdf
avg_voltages2.pdf
raw_ratios2.pdf
avg_ratios2.pdf

This adventure has been brought to you by Julianna Lewis & Tony Sanchez.

Images attached to this report
Non-image files attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 10:53, Monday 07 August 2023 (72026)

Post PCAL meeting update:
Rick, Dripta, and I have spoken at length about the recent End Station report's RxPD Calibration (ct/W) plot, which makes it look like there is a drop in the calibration and thus that it has changed.
This is not the case, even though we see this drop on both arms from the last 3 End Station measurements.

There is an observed change in the plots of the Working Standard (PS4) / Gold Standard (PS5) responsivity ratio made in the PCAL lab as well, which is why we make an in-lab measurement of the Working Standard over the Gold Standard after every End Station measurement.
The timing of the change in May, the direction of the change, and the size of the change all indicate that there must be a change in either PS4 or PS5, which would have been seen in the RxPD Calibration plots.
We have not seen the same change in the responsivity ratio plots involving the Gold Standard (PS5) and any other integrating sphere.
This means that the observed change in the RxPD Calibration is very likely due to a change associated with the Working Standard (PS4).
 

Images attached to this comment
Non-image files attached to this comment
anthony.sanchez@LIGO.ORG - 09:20, Thursday 05 October 2023 (73286)
Non-image files attached to this comment