H1 General (CAL)
thomas.shaffer@LIGO.ORG - posted 09:03, Thursday 13 March 2025 - last comment - 10:54, Thursday 13 March 2025(83351)
Lock loss 1530UTC

1425916765

This happened at the end of the calibration measurement. This also happened last week (alog83210).

Comments related to this report
ryan.crouch@LIGO.ORG - 09:49, Thursday 13 March 2025 (83352)CAL

The last 3 calibration measurements have ended in locklosses: today's, Saturday's, and last Thursday's (these link to the lockloss tool webpages). The 3 locklosses look fairly similar, but looking at the ETMX signals, today's seemed a little stronger of a lockloss compared to Saturday's and last Thursday's.

Last Thursday's lockloss: alog83239

Images attached to this comment
camilla.compton@LIGO.ORG - 10:54, Thursday 13 March 2025 (83354)CAL, SUS

Looking at the same channels that Vlad did in 82878 and Ryan's ETMX channels above, the growing 42Hz wobble in DARM/ETMX seems to start as soon as the DAM1_EXC is ramped to its full amplitude, plot attached. The 42Hz is seen as an excitation into H1:SUS-ETMX_L2_CAL_EXC_OUT_DQ, plot attached. Maybe the ETM is not fully stable at that frequency?

Looking at a successful calibration sweep from Feb 22nd, 82973, there was no instability when the 42Hz oscillation was turned on for ETMX L2, plot attached.
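For anyone repeating this check, here is a minimal sketch (assuming gwpy/NDS data access; the time span, spectrogram settings, and output filename are illustrative, while the channel name and lockloss GPS time come from the parent report) of pulling the L2 excitation channel around the lockloss and looking for the growing feature near 42 Hz:

from gwpy.timeseries import TimeSeries

# Lockloss GPS time from the parent report
t_lockloss = 1425916765
chan = 'H1:SUS-ETMX_L2_CAL_EXC_OUT_DQ'

# Fetch the two minutes leading into the lockloss
data = TimeSeries.get(chan, t_lockloss - 120, t_lockloss + 5)

# ASD spectrogram; the growing 42 Hz line should stand out once
# the excitation is ramped to full amplitude
specgram = data.spectrogram(4, fftlength=2, overlap=1) ** (1/2.)
plot = specgram.plot(norm='log', yscale='log')
plot.gca().set_ylim(10, 100)
plot.savefig('etmx_l2_exc_42hz.png')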

Images attached to this comment
LHO General
tyler.guidry@LIGO.ORG - posted 08:33, Thursday 13 March 2025 (83348)
LSB Roof Replacement
Overhaul of the failed roofing at the LSB finished on Tuesday.


T. Guidry
Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 08:22, Thursday 13 March 2025 - last comment - 16:24, Thursday 13 March 2025(83345)
LVEA vacuum glitch coincident with H1 lockloss

We had a vacuum glitch in the LVEA at 01:57 Thu 13mar2025 PDT which was coincident with a lockloss. The glitch was below VACSTAT's alarm levels by an order of magnitude, so no VACSTAT alert was issued.

The glitch is seen in most LVEA gauges, and took about 10 minutes to pump down.

Attached plots show a sample of LVEA gauges, the VACSTAT channels for LY, and the ISC_LOCK lockloss.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 08:29, Thursday 13 March 2025 (83346)

Adding the lock loss tag and a link to the lock loss tool - 1425891454

There's no blatantly obvious cause. The wind was definitely picking up right before the lock loss, which can be seen in many of the ASC loops, but I'm not sure it was enough to cause the lock loss.

camilla.compton@LIGO.ORG - 09:58, Thursday 13 March 2025 (83353)

This may be a what-came-first scenario (the lockloss or the VAC spike), but looking at the past 3-hour trend of H1:CDS-VAC_STAT_LY_Y4_PT124B_VALUE (attached), it does not go above 3.48e-9. However, 3 seconds before the lockloss it jumps to 3.52e-9 (attached). This looks suspiciously like whatever caused the VAC spike came before the lockloss.
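A minimal sketch of that comparison (assuming gwpy/NDS access; the window lengths are illustrative, while the channel name is from above and the lockloss GPS time 1425891454 is from the parent comment):

from gwpy.timeseries import TimeSeries

t_lockloss = 1425891454
chan = 'H1:CDS-VAC_STAT_LY_Y4_PT124B_VALUE'

# Baseline: the previous few hours, excluding the last minute
trend = TimeSeries.get(chan, t_lockloss - 3 * 3600, t_lockloss - 60)
# The last few seconds before the lockloss
last = TimeSeries.get(chan, t_lockloss - 10, t_lockloss)

print("max over previous 3 h :", trend.max())
print("max in last 10 s      :", last.max())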

There is a LL issue open to add a tag for this: #227, as it was previously seen in 82907.

Images attached to this comment
janos.csizmazia@LIGO.ORG - 13:35, Thursday 13 March 2025 (83359)
The only plausible vacuum glitch without the laser hitting something first is an ion pump (IP) glitch. Gerardo is looking into this now.

Otherwise, I would say the laser hit something, which didn't cause the lockloss right away (only 3 seconds later) but obviously caused the pressure spike.

The rising wind is just too big of a coincidence to me.
jordan.vanosky@LIGO.ORG - 15:14, Thursday 13 March 2025 (83362)

The corner RGA caught a small change in H2 and N2 at the time of the vacuum glitch: AMU 2 delta = 5.4e-10 amp, AMU 28 delta = 1.61e-10 amp.

Attached is a screenshot of the RGA scan just before the vac glitch (1:57:02 AM 3/13/25); the second screenshot is from ~10 seconds after. The top trace is the 0-100 AMU scan at that time, and the bottom trace is the trend of typical gas components (AMU 2, 18, 28, 32, 40, 44) over ~50 minutes. The vertical line on the bottom trace corresponds to the time the RGA scan was collected.

The RGA is a Pfeiffer Prisma Plus, 0-100 AMU, with 10 ms dwell time and EM enabled at 1200 V multiplier voltage. Scans run as continuous 0-100 AMU sweeps.

Images attached to this comment
gerardo.moreno@LIGO.ORG - 16:24, Thursday 13 March 2025 (83364)VE

The main ion pumps reacted to the "pressure spike" after it was noted by other instruments such as the vacuum gauges; see the first attached plot.

The second plot shows the different gauges located at the corner station; the "pressure spike" appears to be noted first by two gauges, PT120 (on the dome of BSC2) and PT152 (at the relay tube). The amplitude of the "pressure spike" was very small, and its signature was only noted at the corner station, not at the Mids or Ends.

Two of the ion pumps at the filter cavity tube responded to the "pressure spike"; see the third attachment.

Also, the gauges located on the filter cavity tube noted the spike, including the "Relay Tube".

Images attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 07:34, Thursday 13 March 2025 - last comment - 08:51, Thursday 13 March 2025(83344)
Ops Day Shift Start

TITLE: 03/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 134Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 24mph Gusts, 18mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.28 μm/s
QUICK SUMMARY: Locked for 1.5 hours. Planned calibration and commissioning today. Elevated winds already present, some gusts above 40mph. The PSL 101 dust monitor has been in alarm.

 

Comments related to this report
thomas.shaffer@LIGO.ORG - 08:38, Thursday 13 March 2025 (83349)

Our range has been diving down to 130 Mpc or so and then recovering. It doesn't seem to be the PIs or the end station noise that we were looking at recently. Rumors in the control room are that it was the squeezer; stay tuned for another alog.

Images attached to this comment
camilla.compton@LIGO.ORG - 08:51, Thursday 13 March 2025 (83350)DetChar, SQZ

Sheila, TJ, Camilla. In 83330 we showed that the SQZ servo with the edited ADF PLL phase (83270) was working well, but it has been running away this morning, plot attached. We're not sure yet what changed to make it unstable, but this is similar to why we turned it off in February.

Tagging DetChar, as the times when the SQZ phase and IFO range jump a lot (2025/03/13 14:49 UTC and 15:10 UTC) may have glitches.

Images attached to this comment
H1 ISC (ISC)
keita.kawabe@LIGO.ORG - posted 23:25, Wednesday 12 March 2025 - last comment - 10:46, Thursday 13 March 2025(83343)
Old ZM2 optic transmission VS AOI (Rahul, Keita)

As a potential optic for the new TT for the in-vac POPX WFS in HAM1, we tested the transmission of what is supposed to be the old ZM2 mirror from O3 days as a function of AOI.

The results summarized below don't make sense, because the optic is least transmissive at 0 deg AOI and is not HR at 45 deg (3.3% transmission). It's supposed to be E1700103 according to the label, but with a handwritten note saying "verify??? doesnt look the same" or something like that. Anyway, if it is indeed E1700103, it should be R > 99.99% @ 1064 nm for both P and S at 45 +- 5 deg AOI.

This mirror is probably usable for POPX, in that the transmission is not disastrously large at the proposed AOI (~30 deg or so?), but this is a mystery optic as far as I'm concerned. If the mirror in ZM1 is better, we'll use it.

AOI (deg) | Transmitted power (mW) | Transmission (%)
0         | 0.05                   | 0.03
10        | 0.08                   | 0.04
15        | 0.09                   | 0.05
20        | 0.11                   | 0.061
25        | 0.16                   | 0.089
30        | 0.28                   | 0.16
35        | 0.44                   | 0.24
40        | 1.23                   | 0.68
45        | 5.94                   | 3.3

Other things to note:

We mounted the mirror in a Class A Siskiyou mirror holder on a dirty rotational stage with a clean sheet of foil in between, so we can easily change the AOI in YAW.

The apparatus was NPRO - BS (to throw away some power) - HWP - PBS cube transmission (to select P-pol) - steering - ZM2 optic transmission - power meter.

The beam height between the steering mirror and the ZM2 optic was leveled, then ZM2 was adjusted so the beam retroreflects within +-2mm over 19 inches; this became our AOI=0 reference point, with an uncertainty of ~ +-(2mm/19 inches)/2 rad ~ +-0.1 deg. We used the indicator on the rotational stage to determine the AOI, and the overall AOI error should have been smaller than 1 deg.

Before/after the measurement, the power impinging on the HR side of the mirror was 180.3mW/176.1mW. Since the transmitted power only had 2 digits of precision at best, I used 1.8e2 mW as the input power (a short sketch of this calculation follows these notes). The ambient light level was less than 10uW and is ignored here.

Another obvious candidate is ZM1 (again from O3 days). That mirror was not removed from its TT after deinstallation, though. We'll see if we can roughly determine the AOI while the mirror is still suspended, as it's a hassle to remove it from the suspension and there's a risk of chipping.
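As referenced above, a minimal sketch (not the analysis actually used) reproducing the transmission percentages in the table, up to rounding, from the measured powers, along with the AOI=0 uncertainty estimate:

import math

P_IN_MW = 1.8e2  # input power on the HR side, rounded from 180.3/176.1 mW

# (AOI in deg, transmitted power in mW) from the table above
measurements = [
    (0, 0.05), (10, 0.08), (15, 0.09), (20, 0.11), (25, 0.16),
    (30, 0.28), (35, 0.44), (40, 1.23), (45, 5.94),
]

for aoi, p_trans in measurements:
    print(f"AOI {aoi:2d} deg: T = {100.0 * p_trans / P_IN_MW:.3g} %")

# AOI=0 reference uncertainty: +-2 mm retroreflection offset over 19 inches,
# halved because the reflected beam angle changes by twice the AOI
print(math.degrees((2.0 / (19 * 25.4)) / 2))  # ~0.1 deg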

Comments related to this report
rahul.kumar@LIGO.ORG - 10:46, Thursday 13 March 2025 (83356)ISC, SUS

I found out that the old ZM2 (HTTS, HAM5) optic is E1000425 and not E1700103 (which is for the old HTTS ZM1, HAM6). For reference, please see Stuart's LLO alog 64077.

H1 General
oli.patane@LIGO.ORG - posted 22:01, Wednesday 12 March 2025 (83342)
Ops Eve Shift End

TITLE: 03/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Observing at 147Mpc and have been Locked for almost 16 hours. Nothing happened during my shift. This also means I didn't get to do any of the tests with the cameras or A2L changes.
LOG:

Observing my entire shift                                                                                                                                                                                                                                                                                           

Start Time System Name Location Lazer_Haz Task Time End
20:06 ISC/SUS Keita Opt Lab Yes WFS PM1 testing 23:32
21:06 ISC Mayank Opt Lab Yes ISS PD array round 2 00:50
H1 General
oli.patane@LIGO.ORG - posted 20:19, Wednesday 12 March 2025 (83341)
Ops EVE Midshift Status

Observing at 144Mpc and have been Locked for 14 hours. Our range has been moving around quite a bit unfortunately.

H1 SUS
oli.patane@LIGO.ORG - posted 16:46, Wednesday 12 March 2025 - last comment - 19:05, Wednesday 12 March 2025(83318)
Weekly In-Lock SUS Charge Measurement FAMIS

Closes FAMIS#28396, last checked (injections last made) 82537

The ITMX data initially wasn't analyzed due to some issue with grabbing the data yesterday, but I got it to work today by having it redownload the data, and this time there were no issues.

I'll be editing the process_single_charge_meas.py script to make sure that if there is an error with downloaded data, the script tries to redownload the data instead of reading in previously downloaded corrupted/incomplete data, failing, and just giving up on that measurement.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 19:05, Wednesday 12 March 2025 (83340)

Edits made to process_single_charge_meas.py so that, in the future, if the timeseries data that the script grabs and saves in rec_LHO_timeseries is somehow corrupted, running the script again will redownload the data whenever reading it from the file raises an error. Not sure why this issue happened, since the first time I ran the script it had no problem grabbing the data for the other quads. Hopefully this edit will at least keep it from repeatedly failing on a bad file when the data itself is available. This change has been committed to the SVN as revision 31024.
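A minimal sketch of that retry behavior (illustrative only; the real process_single_charge_meas.py has its own structure and file format, and the gwpy calls here are just one way to express it):

import os
from gwpy.timeseries import TimeSeries

def load_or_redownload(path, channel, start, end):
    """Use the cached timeseries if it reads cleanly; otherwise drop the
    corrupted/incomplete file and redownload instead of giving up."""
    try:
        return TimeSeries.read(path)
    except (IOError, ValueError) as err:
        print(f"Cached data unreadable ({err}); refetching {channel}")
        if os.path.exists(path):
            os.remove(path)
        data = TimeSeries.get(channel, start, end)
        data.write(path)
        return data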

LHO General
thomas.shaffer@LIGO.ORG - posted 16:30, Wednesday 12 March 2025 (83333)
Ops Day Shift End

TITLE: 03/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: We've been locked for 10 hours with the range hovering below 150Mpc for most of the time. At the beginning of the shift we had to address PI31 damping, since that mode was slowly ringing up and the damping was not bringing it down below the level at which it couples into DARM (alog83332).

More investigations on the ALS locking troubles from last night were happening offline today. There are a few things to try during Oli's shift if we lose lock:

  1. See if the auto exposure brings the centroids back to their old offsets during green locking - attachment 1
  2. Reset the green references if we can get back to 2W with full ASC on

LOG:

Start Time System Name Location Lazer_Haz Task Time End
15:53 FAC Kim Opt Lab n Tech clean 16:19
17:25 ISC Mayank, Matt, Siva Opt Lab Yes ISS PD array 20:15
18:43 FAC/SEI Tyler, Mitchell EY n Wind fence check 19:07
20:06 ISC/SUS Keita, Rahul Opt Lab Yes WFS PM1 testing

Rahul out @2320

Keita still in

20:27 PEM Richard, Robert EX n Check on HVAC fan1 20:48
21:06 ISC Mayank Opt Lab Yes ISS PD array round 2 00:06
Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:28, Wednesday 12 March 2025 (83339)
Ops Eve Shift Start

TITLE: 03/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 8mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.31 μm/s
QUICK SUMMARY:

Observing at 145Mpc and have been Locked for over 10 hours now.

H1 SUS (ISC)
thomas.shaffer@LIGO.ORG - posted 10:49, Wednesday 12 March 2025 - last comment - 12:50, Thursday 13 March 2025(83332)
pi31 slow ringup causing range drop

We noticed that the range was dropping even lower than our already low ~145Mpc, with a lot of low-frequency noise creeping up. Sheila suggested that we look at the PIs, and sure enough, PI31 had started to slowly creep up at the same time as the range degradation (see attached). The SUS_PI guardian node will turn on the damping when H1:SUS-PI_PROC_COMPUTE_MODE31_RMSMON goes above 3 and turn it back off when it gets below 3. For now we just tried changing that threshold to 0 to allow for continuous damping. This let the mode damp down completely and brought our range back up.

We should think more about damping this down lower and having two thresholds, an on and an off.
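A minimal sketch of that two-threshold (hysteresis) idea; the threshold values are placeholders, and this is not the actual SUS_PI guardian code:

# Damping engages above RMS_ON and stays engaged until the mode RMS
# (e.g. from H1:SUS-PI_PROC_COMPUTE_MODE31_RMSMON) falls below RMS_OFF.
RMS_ON = 3.0
RMS_OFF = 1.0

damping_on = False

def update_damping(rms):
    """Return True while damping should be engaged, with hysteresis."""
    global damping_on
    if not damping_on and rms > RMS_ON:
        damping_on = True
    elif damping_on and rms < RMS_OFF:
        damping_on = False
    return damping_on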

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 11:06, Wednesday 12 March 2025 (83334)

Changed the threshold to 1. At 0 the node would continuously search for a new phase thinking that it needed to damp it further.

sheila.dwyer@LIGO.ORG - 13:16, Wednesday 12 March 2025 (83335)

The first attached screenshot shows the spectrum around 10.4 kHz (edited to fix typo). The mode that rang up and caused this issue is at 10428.38 Hz, previously identified as being in the Y arm (Oli's slides).

The behavior seemed pretty similar to what we saw a few weeks ago, 82961, with broad nonstationary noise in DARM.

The downconversion doesn't seem to be caused by the damping: it started to have a noticeable impact on the range while the damping was off, and when the damping came on and reduced the amplitude of the mode, the range improved.

In the attached ndscope screenshot you can see the range we are using on the DCPD ADC. This is a 20-bit ADC, so it would saturate at 524k counts; when this PI was at its highest today it was using 10% of the range, and the DARM offset takes up about 20% of the ADC range.

The third attachment is the DARM spectrum at the time of this issue, as requested by Peter. As described in 83330, our range has been decreasing with thermalization in each lock; the spectrum shows the typical degradation we have seen in these last few days since the SQZ angle servo has been keeping the squeezing more stable. The excess noise seen when the PI was rung up has a similar spectrum to the excess noise we get after thermalization.

Also, for some information about these 10.4kHz PI modes, see G1901351

Images attached to this comment
sheila.dwyer@LIGO.ORG - 12:50, Thursday 13 March 2025 (83358)

Jeff pointed me to 82686 with the suggestion to check the channels that have different digital AA. 

The attached plot shows the DCPD sum and two alternatives: in red is H1:OMC-DCPD_16K_SUM1_OUT_DQ, which has a high pass to reduce the single-precision noise and no digital AA filters (and some extra lines); SUM2 has one digital AA filter and the high pass; and DCPD sum, the DARM channel, has two digital AA filters and no high pass. The broadband noise is the same for all of these, so digital aliasing doesn't seem to be involved in adding this broadband noise.
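A minimal sketch (assuming gwpy/NDS access) of that comparison; only the SUM1 channel name appears above, so the other two channel names and the time span here are assumptions:

from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

channels = [
    'H1:OMC-DCPD_SUM_OUT_DQ',        # assumed name for the DARM DCPD sum
    'H1:OMC-DCPD_16K_SUM1_OUT_DQ',   # high pass, no digital AA (named above)
    'H1:OMC-DCPD_16K_SUM2_OUT_DQ',   # assumed: high pass + 1 digital AA
]
start, end = 1425916000, 1425916060  # example 60 s stretch

data = TimeSeriesDict.get(channels, start, end)
asds = [data[c].asd(fftlength=8, overlap=4) for c in channels]

plot = Plot(*asds, xscale='log', yscale='log')
plot.gca().legend(channels)
plot.savefig('dcpd_sum_asd_comparison.png')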

Images attached to this comment
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 10:26, Wednesday 12 March 2025 (83330)
range trends with sqz angle servo on

The attached screenshot shows that since turning on the SQZ angle servo (83270), marked by the time when the CLF_RF6 phase starts to move around, the 350Hz squeezing (yellow BLRMS3) has been better, which is what this servo is intended to do.

You can also see that the higher-frequency squeezing has been worse and now seems correlated with the range dropping. Early in each lock there is a time when the high-frequency squeezing reaches 4.7dB, which corresponds to the times when the range is highest, around 156 Mpc. This is during thermalization; you can also see that "f_s" (the estimate of the SRC spring frequency based on the calibration lines) and kappa_c are also changing rapidly in this time, so the change in range could be explained by these changes more than by the impact on squeezing.

TJ has posted a sensitivity comparison with coherence here:  83327 which shows that we have some A2L coherence to fix. 

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:39, Tuesday 11 March 2025 - last comment - 12:12, Wednesday 12 March 2025(83306)
Ops Day Shift End

TITLE: 03/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: List of maintenance activities below; the most impactful to locking was the camera server change, which we ended up having to revert to resume locking (alog83308). We will try to implement the cameras used in locking on a different day, when we can dedicate time to setting the IFO to their new offsets.

Locking notes:

LOG:

Start Time System Name Location Lazer_Haz Task Time End
14:36 FAC Nelly Opt Lab n Tech clean 14:48
15:06 SUS Jeff CER n Move PR3 OpLev cable 16:01
15:06 FAC Kim EY n Tech clean 16:53
15:06 FAC Nelly EX n Tech clean 16:05
15:14 CDS Dave, Erik, Jeff CR n Model restarts (PR3, HTTS, IMs, SUSAUX) also SEIH16 15:35
15:17 CDS Fil, Erik, Marc CER n SEI rack work (HAM1) 18:21
15:18 FAC Tyler LVEA, Ends, Mids n 3IFO checks 17:34
15:20 VAC Ken EY MR n Compressor electrics 18:45
15:25 FAC Contractor EY n Well testing - Large van, water truck, SUV 16:22
15:29 IAS Jason, Ryan C LVEA n FARO 18:59
15:35 OPS Richard EY n Check on panels and Ken 16:51
15:35 VAC Janos, Travis, Jordan EY, EX, mids n Compressor cleanup 18:18
15:51 PCAL Rick, Francisco, Sivananda EX not yet PCAL meas 17:35
16:17 VAC Gerardo LVEA n Vac rack UPSs 17:07
16:18 ISC Camilla, job shadow LVEA n Mark table location 17:17
16:29 FAC Nelly FCES n Tech clean 17:14
16:32 VAC Norco CP2 (CS) n LN2 fill 18:19
16:49 CDS Jonathan, Tony MSR n Pulling digivideo0&1 18:03
16:54 FAC Kim, Nelly LVEA n Tech clean 18:16
16:55 SEI Jim, Mitchell Ends n Dropping off wind fence materials 17:58
17:10 VAC Gerardo, Jackie Ends, Mids n Vac rack UPSs 18:36
17:11 SYS Betsy LVEA n Check on HAM1 vent prep 17:11
17:30 VAC Janos, job shadow LVEA n Turbo test 18:45
17:31 SYS Betsy Garbing room n Check on covers and other 18:03
17:35 FAC Tyler, Eric EY n Replacing pump 18:23
17:41 ISC Mayank, Sivananda, Matt Opt Lab Local ISS PD array 20:12
17:46 VAC Travis, Jordan LVEA n HAM1 feedthrough check 18:18
18:25 - Daniel, Fil, Betsy LVEA n HAM1 vent prep 19:04
18:34 - Richard LVEA n Look at cleanrooms 18:42
18:38 VAC Gerardo, Jackie LVEA n Join on turbo testing 18:55
18:41 FAC Eric EY, MY n Glycol pumping 19:59
18:45 OPS Tony LVEA n Sweep 19:09
18:47 VAC Marc, Fil LVEA n BSC1 temperature sensor 18:55
18:58 VAC Janos LVEA n Turbo test end 19:04
19:47 PCAL Dripta PCAL Lab Yes Setting PS5//PS12 Measurement 20:00
20:31 CDS Fil, Jackie, Marc CER n Replacing power supply for susb123 20:33
20:34 FAC Jim, Mitch EndY N Wind fence investigation 21:36
22:19 ISC Mayank, Siva Opt Lab Local ISS PD array ongoing
Comments related to this report
gerardo.moreno@LIGO.ORG - 12:12, Wednesday 12 March 2025 (83336)VE

We had a cold cathode gauge trip during yesterday's maintenance period; H0:VAC-EX_X2_PT524B_PRESS_TORR is the channel for the cold cathode gauge's signal. The gauge pair, cold cathode and Pirani, is located next to gate valve 20 (GV20), not far from the P-Cal transmitter pier. These gauges are very sensitive to interference from communication devices such as two-way radios and mobile phones, so please try to keep communication devices away from all vacuum gauges, thank you.

Now we wait for the gauge to come back on its own.

Images attached to this comment