Reports until 15:14, Friday 10 November 2023
H1 SEI
camilla.compton@LIGO.ORG - posted 15:14, Friday 10 November 2023 - last comment - 16:27, Friday 10 November 2023(74140)
HAM1 HEPI started "ringing/singing" at 22:16UTC, damped vibrating part with foam

alog for Tony, Robert, Jim, Gerardo, Mitchell, Daniel

At 22:16UTC the HAM1 HEPI started "ringing". Robert heard this while in the LVEA as a 1000Hz "ringing" that he tracked to HAM1. Plot attached.

Gerardo, Mitchell and Robert investigated the HEPI pumps on the Mechanical Room mezzanine and didn't find anything wrong. Robert physically damped the vibrating part of HEPI with some foam around 22:40UTC and the "ringing" stopped, with readbacks returning to nominal background levels. It can be seen clearly in the H1:PEM-CS_ACC_HAM3_PR2_Y_MON plot as well as in the H1:HPI-HAM1_OUTF_H1_OUTPUT channels, plots attached. It must be down-converting to be visible in the 16Hz HEPI channels. The HAM1 vertical IPSINF channels also looked strange, plot attached.
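The "down converting" remark is consistent with aliasing: a tone well above the Nyquist frequency of a slowly sampled readback folds down to a low apparent frequency. A minimal sketch (the 1000 Hz and 16 Hz values come from this entry; the ideal-sampler assumption is mine):

```python
def alias_frequency(f_signal, f_sample):
    """Apparent frequency (Hz) of f_signal after ideal sampling at f_sample,
    assuming no anti-alias filtering in the readback path."""
    f_folded = f_signal % f_sample
    return min(f_folded, f_sample - f_folded)

# A 1000 Hz "ringing" viewed through a 16 Hz channel shows up at 8 Hz:
print(alias_frequency(1000.0, 16.0))  # -> 8.0
```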

Jim checked that the HEPI readbacks are now okay.

We don't know why it started. The current plan is that it's okay now, and more thorough checks will be done on Tuesday.

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 15:52, Friday 10 November 2023 (74144)
Images attached to this comment
jim.warner@LIGO.ORG - 16:27, Friday 10 November 2023 (74150)

Robert reports it at 1kHz, but it seems there are a number of features at 583, 874 and 882Hz. I can't tell if there are any higher, because HEPI is only a 2k model. The attached plot shows the H1 L4C ASDs: red is from a couple weeks ago, blue is when HAM1 was singing, pink is after Robert damped the hydraulic line. It seems like the HAM1 motion is back to what it was a couple weeks ago. Not sure what this was; I'll look at the chamber when I get a chance on Monday or Tuesday, unless it becomes an emergency before then...

The second set of ASDs compares all the sensors during the "singing" to the time in October. Red and light green are the October data, blue and brown are the choir; the top row are the H L4Cs, the bottom row are the V. The ringing is generally loudest in the H sensors, though H2 is quieter than the other 3 H sensors.
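A comparison like this can be reproduced offline with scipy's Welch estimator. Below is a minimal sketch using synthetic data in place of the real L4C time series (the 2048 Hz rate and the 874 Hz line come from this comment; the amplitudes are invented):

```python
import numpy as np
from scipy.signal import welch

fs = 2048  # Hz; HEPI L4Cs on a 2k model
rng = np.random.default_rng(0)
t = np.arange(0, 64, 1 / fs)

# Synthetic stand-ins: a quiet reference vs. data with a narrow 874 Hz line
quiet = rng.normal(0, 1e-9, t.size)
singing = quiet + 5e-9 * np.sin(2 * np.pi * 874 * t)

f, psd_quiet = welch(quiet, fs=fs, nperseg=8 * fs)
f, psd_sing = welch(singing, fs=fs, nperseg=8 * fs)
asd_quiet, asd_sing = np.sqrt(psd_quiet), np.sqrt(psd_sing)

# The "singing" line stands out by a large factor at 874 Hz
i = np.argmin(np.abs(f - 874))
print(asd_sing[i] / asd_quiet[i] > 10)  # -> True
```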

Images attached to this comment
H1 TCS
camilla.compton@LIGO.ORG - posted 14:05, Friday 10 November 2023 - last comment - 12:01, Tuesday 14 November 2023(74138)
Searched for home on CO2 power control waveplates and adjusted calibration for CO2X

Noticed that the CO2s weren't exactly outputting 0W at their NO_OUTPUT settings, plot attached, so I searched for home on both rotation stages. This brought them back much closer to zero. Daniel reminds us that "search for home" needs to be done after every Beckhoff reboot; unsure if we did it after Beckhoff came back on Tuesday.

I further adjusted the CO2X calibration, as it hadn't been getting close to 1.7W since we touched it on Tuesday (74044). TJ's bootstrapping was getting it closer to 1.7W, but it works best when we start with a close power.
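The bootstrapping idea (iteratively correcting the request using the measured output) can be sketched as below. The sin²-style power curve, numbers, and step size are illustrative assumptions, not the real CO2 rotation-stage calibration:

```python
import math

def delivered_power(angle_deg, p_max=57.0):
    """Toy waveplate-style power curve (illustrative, not the real CO2X one)."""
    return p_max * math.sin(math.radians(2 * angle_deg)) ** 2

def bootstrap(target_w, angle_deg, tol=0.01, max_iter=50):
    """Nudge the requested angle until the measured power matches the target.
    Converges only from a nearby starting angle, echoing the point above
    that bootstrapping works best when starting with a close power."""
    for _ in range(max_iter):
        p = delivered_power(angle_deg)
        if abs(p - target_w) < tol:
            return angle_deg, p
        slope = (delivered_power(angle_deg + 0.1) - p) / 0.1  # W/deg, numeric
        angle_deg += 0.5 * (target_w - p) / slope             # damped step
    return angle_deg, delivered_power(angle_deg)

angle, power = bootstrap(1.7, angle_deg=5.0)
print(abs(power - 1.7) < 0.01)  # -> True
```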

CO2Y rotation stage weirdness: on my final test, asking CO2Y to go to 1.7W, it jumped to -700 degrees! I then asked it to go back to minimum power, which it slowly did. Very strange. A better way to take it back might have been to ask it to "search for home", but I remembered that clicking "abort" often crashes Beckhoff! Searched for home after this. Plot attached.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:01, Tuesday 14 November 2023 (74195)

Before bootstrapping, CO2Y had only been getting to 1.5W injected with 1.7W requested. I adjusted the calibration (sdf attached) to bring this closer to 1.7W.

We've noticed before that the CO2Y power meter reading drops when the rotation stage stops moving; maybe the RS slides back after it's finished rotating, changing the power by ~0.03W. Plot attached. The CO2Y rotation stage is noisier than CO2X. We should check we have a spare RS on hand.

Images attached to this comment
H1 AOS (CAL)
louis.dartez@LIGO.ORG - posted 10:43, Friday 10 November 2023 (74136)
high Q resonance in MICH FF coupling into DARM
This is a follow up on LHO:74113.

Indeed there is a high Q feature in LSC_MICHFF in FM5 (10-12-23-EY) right at 17.7Hz that is coupling into DARM and conflicting with the LINE3 SUS line at 17.6Hz (see attached). 

It can also be seen in the LSC FF posted in LHO:73428. The resonant peak is about an order of magnitude higher than before the filter changes on October 12.

Options forward include: 
1. Revert the MICH FF until the 17.7Hz feature can be removed.
2. Increase the L3 SUS line height even more to accommodate it.



Addon: Camilla asked me what physically is causing the peak. From talking to JoeB, Vlad, and Gabriele: 
It's caused by the beamsplitter bounce mode V3. It's listed in https://awiki.ligo-wa.caltech.edu/wiki/Resonances/LHO. Oddly, it's listed there as being at 17.522 Hz, but the alog the record points to (via a wrong link), LHO:49643, pegged it right at 17.79Hz(!)

Joe & Vlad: We should notch the bounce mode in MICH to avoid driving it during the excitation, e.g. when retuning the MICH FF.
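A notch of the kind suggested here could be sketched with scipy; the 17.7 Hz frequency is from the alog, while the Q and the 2048 Hz rate are illustrative assumptions, not the actual MICH filter design:

```python
from scipy.signal import iirnotch, freqz

fs = 2048.0          # Hz, assumed sample rate
f0, Q = 17.7, 50.0   # bounce-mode frequency; Q chosen for illustration

b, a = iirnotch(f0, Q, fs=fs)

# Inspect the notch depth at f0 and the response well away from it
w, h = freqz(b, a, worN=[f0, 100.0], fs=fs)
print(abs(h[0]) < 0.01)   # deep attenuation at 17.7 Hz -> True
print(abs(h[1]) > 0.99)   # essentially unity gain at 100 Hz -> True
```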
Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:19, Friday 10 November 2023 (74135)
Fri CP1 Fill

Fri Nov 10 10:06:11 2023 INFO: Fill completed in 6min 7secs

Travis confirmed a good fill curbside.

Images attached to this report
H1 SEI
ibrahim.abouelfettouh@LIGO.ORG - posted 09:46, Friday 10 November 2023 (74134)
H1 ISI CPS Sensor Noise Spectra Check - Weekly FAMIS 25964

Famis 25964

BSC high freq noise is elevated for these sensor(s)!!!
    
ITMX_ST2_CPSINF_H3    
ITMX_ST2_CPSINF_V1    

But this is a trend going back several weeks already.

Non-image files attached to this report
H1 PEM
robert.schofield@LIGO.ORG - posted 08:26, Friday 10 November 2023 (74128)
HVAC tests yesterday

I made focused shutdowns yesterday of just one or a few fans at a time. The range and spectra were not strongly affected, and I did not find a particularly bad fan.

Nov. 9 UTC

CER ACs

Off 16:30

On 16:40

Off 16:50

On 17:00

Turbine shutdowns

SF4 off : 17:10

SF4 on: 17:21

SF3 and 4 off: 17:30

SF3 and 4 back on: 17:40

SF3 off: 17:50

SF3 back on: 18:00

SF1 and 4 off: 18:30

SF1 and 4 on: 18:35

SF1 and 4 off: 19:00

SF1 and 4 on: 19:18

SF1 and 3 off: 19:30

SF1 and 3 back on: 19:40

SF1 off: 19:50

SF1 back on: 20:00

SF1 and 4 off: 20:10

SF1 and 4 back on: 20:20

SF3 off: 22:50

SF3 on: 23:00

SF3 off: 3:41

SF3 on: 3:51

Nov 10 UTC

SF5 and 6 off: 0:00

SF5 and 6 back on: 0:10

H1 General
anthony.sanchez@LIGO.ORG - posted 08:07, Friday 10 November 2023 - last comment - 09:45, Friday 10 November 2023(74131)
Friday Ops day Shift

TITLE: 11/10 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 1mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.52 μm/s
QUICK SUMMARY:
H1 Is locked in Nominal_Low_Noise and Observing.
The ITMY Mode 5 & 6 violins are elevated, 10E-15 on DARM, but they are trending downward.

 

Comments related to this report
rahul.kumar@LIGO.ORG - 09:45, Friday 10 November 2023 (74133)SUS

The latest settings are still working fine; both IY05 and IY06 are going down, as shown in the attached plot (which shows both narrow and broad filters along with the drive output). However, it will take some time before they get down to their nominal level.

ITMY08 is also damping down nicely.

Images attached to this comment
LHO General
austin.jennings@LIGO.ORG - posted 00:00, Friday 10 November 2023 - last comment - 10:02, Friday 10 November 2023(74123)
Thursday Eve Shift Summary

TITLE: 11/10 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

- Lockloss @ 0:40 UTC, cause unknown

- Took the opportunity to reload the HAM2 ISI GDS TP while we were down

- Alignment looked horrible, so will be running an initial alignment

- H1 lost lock at 25 W, DCPD saturation

- Back to NLN/OBSERVE @ 3:55, attached are some ALS EY SDFs which I accepted

- Superevent S231110g

- 4:06 - inc 5.9 EQ from Indonesia

- Lockloss @ 5:27 - cause unknown

- Relocking was automated, reached NLN/OBSERVE @ 6:46 UTC

LOG:

No log for this shift.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 10:02, Friday 10 November 2023 (74132)

Looking at the DCPD signals during the 25W state before this evening's lockloss, during the lockloss, and during the next 25W relock: they looked typical (~8-12k) during the lock that was lost, but there was a weird glitch about halfway through the state's duration, and they were diverging when the final glitch and lockloss occurred. The relock after the 25W lockloss had higher DCPD signals, ~40k higher than usual, and the previous lock also had higher than usual DCPDs at this state, which were damped down over the long lock. So something caused them (particularly ITMY modes 5/6, and 8... the problem children) to ring up during the lock acquisition following the 25W lockloss; they didn't have enough time to damp down during that lock, and so after the 5:27 lockloss they were still high during acquisition.

Images attached to this comment
H1 General (Lockloss)
austin.jennings@LIGO.ORG - posted 21:32, Thursday 09 November 2023 (74130)
Lockloss @ 5:27

Lockloss @ 5:27 UTC, cause unknown, no saturations on verbal. Looks like ASC AS A saw motion first again.

Images attached to this report
LHO General
austin.jennings@LIGO.ORG - posted 20:00, Thursday 09 November 2023 (74129)
Mid Shift Report

Just got H1 back into observing as of 3:55 UTC. Reacquisition took a bit longer due to the alignment being poor and needing an initial alignment, followed by a lockloss at 25 W (which I believe was caused by rung-up violins). During the second locking reacquisition, I noticed that the violins were extremely high, so I held H1 at OMC WHITENING to allow the violins to damp before going into observing.

H1 General (Lockloss)
austin.jennings@LIGO.ORG - posted 16:48, Thursday 09 November 2023 (74127)
Lockloss @ 0:40

Lockloss @ 0:40 UTC, DCPD saturation right before. Looking at the scope, ASC-AS_A_DC saw the motion first.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:23, Thursday 09 November 2023 - last comment - 16:23, Thursday 09 November 2023(74125)
Ops DAY Shift End

TITLE: 11/10 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: We've been Locked for 31 hours and everything is good.
LOG:

16:00UTC Detector Observing and Locked

18:02 SQZ ISS lost lock and took us out of Observing; needed a few tries but it got itself back up and locked

18:05 Back to Observing

Comments related to this report
austin.jennings@LIGO.ORG - 16:23, Thursday 09 November 2023 (74126)
Start Time System Name Location Lazer_Haz Task Time End
16:06 PEM Robert CR, CER n HVAC Tests 20:06
17:20 FAC Karen OptLab, VacPrep n Tech clean 17:44
19:31 FAC Cindi WoodShop n Laundry 20:39
20:07   Camilla, Jeff MX n Running 20:39
22:37 SAF Travis, Danny OptLab n Safety checks 00:04
LHO General
austin.jennings@LIGO.ORG - posted 16:02, Thursday 09 November 2023 (74122)
Ops Eve Shift Start

TITLE: 11/10 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 163Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 8mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.49 μm/s
QUICK SUMMARY:

- H1 has been locked for 30 hours

- CDS/SEI DMs ok

H1 CDS (OpsInfo)
david.barker@LIGO.ORG - posted 14:43, Thursday 09 November 2023 (74120)
CDS alerts voice calling available again

While Twilio's texting service continues to be unavailable while we are awaiting approval for our usage, Twilio's phone calling service does still work.

I have made a hybrid version of locklossalert which uses the cell phone providers for SMS texting, and Twilio for phone calls.

Text and Phone-call alerts are now available again.

Note to operators: please check your alert settings; I had to revert to an earlier configuration after restarting the service.
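The hybrid routing described above might be sketched as a small dispatcher. The send functions below are injected placeholders with hypothetical names, since the real locklossalert internals aren't shown here:

```python
from dataclasses import dataclass

@dataclass
class Recipient:
    name: str
    phone: str
    wants_sms: bool = False
    wants_call: bool = False

def dispatch(message, recipients, send_sms, place_call):
    """Route each alert: texts via the cell-provider path, calls via Twilio.
    send_sms/place_call are caller-supplied backends (hypothetical)."""
    sent = []
    for r in recipients:
        if r.wants_sms:
            send_sms(r.phone, message)    # carrier SMS path
            sent.append((r.name, "sms"))
        if r.wants_call:
            place_call(r.phone, message)  # Twilio voice path
            sent.append((r.name, "call"))
    return sent

# Toy usage with stub backends
log = dispatch(
    "H1 lockloss",
    [Recipient("op1", "555-0100", wants_sms=True),
     Recipient("op2", "555-0101", wants_call=True)],
    send_sms=lambda num, msg: None,
    place_call=lambda num, msg: None,
)
print(log)  # -> [('op1', 'sms'), ('op2', 'call')]
```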

H1 CAL (DetChar)
louis.dartez@LIGO.ORG - posted 10:42, Thursday 09 November 2023 - last comment - 10:44, Friday 10 November 2023(74113)
Line at 17.7 Hz interferes with kappaTST line, mysteriously got louder on 10/13
JoeB, M. Wade, A. Viets, L. Dartez

Maddie pointed out an issue with the computation of kappa_TST starting 10/13. After looking into it a bit further, Aaron found a noisy peak at 17.7 Hz just above the kappa_TST line (which is very close, at 17.6 Hz). It turns out that the peak has been there for a while, but it got louder on 10/13 and has been interfering with the kappa_TST calculation since then.

For a quick reference, take a look at the top left plot on the calibration summary pages for October 12 and October 13. Taking a look at the DARM spectra for those days, JoeB noticed a glitch near 17.7Hz on 10/12 at about 23:30 UTC (darm_spect_oct12_2023.png). Interestingly, he noted that it looks like the 17.7Hz line, which was present before the glitch, got louder after the glitch (darm_spect_oct13_2023.png).

I've attached an ndscope screenshot of the moment the glitch happens (darm_17p7hz_glitch_kappaTST_uncertainty.png). Indeed, there is a glitch at around 23:30 on 10/12, and it is seen by the KappaTST line and the line's uncertainty. Interestingly, after the glitch the TST line uncertainty stayed high by about 0.6% compared to its value before the glitch occurred. This 0.6% increase pushed the mean KappaTST line uncertainty above 1%, which is also the threshold applied by the GDS pipeline to determine when to begin gating that metric (see comment LHO:72944 for more info on the threshold itself).
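The 1% gating rule described here can be illustrated with a toy threshold (this is just the logic as described, not the GDS pipeline code):

```python
import numpy as np

THRESHOLD = 0.01  # 1% mean line-uncertainty threshold, per this entry

def gate_kappa(kappa, uncertainty, threshold=THRESHOLD):
    """Return kappa with samples gated (NaN) where uncertainty exceeds threshold."""
    kappa = np.asarray(kappa, dtype=float).copy()
    kappa[np.asarray(uncertainty) > threshold] = np.nan
    return kappa

# Toy series: a ~0.6% step pushes the uncertainty over the 1% threshold,
# so the last two kappa samples are gated to NaN
unc = np.array([0.005, 0.006, 0.011, 0.012])
kappa = gate_kappa([1.00, 1.00, 1.01, 1.01], unc)
```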

It's not clear to us what caused the glitch or why the uncertainty stayed higher afterwards. I noticed that the glitch at 23:30 was preceded by a smaller glitch by a few hours. Oddly, the mean KappaTST uncertainty also increased (and stayed that way) then too. There are three distinct "steps" in the kappaTST uncertainty shown in the ndscope I attached. 

I'll note that I initially looked for changes to the 17.7Hz line before and after the TCS changes on 10/13 (LHO:73445) but did not find any evidence that the two are related.

==
Until we identify what is causing the 17.7Hz line and fix it, we'll need to do something to help the kappaTST estimation. I'd like to see if I can slightly increase the kappaTST line height in DARM to compensate for the presence of this noisy peak and improve the coherence of the L3 SUS line TF to DARM.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 11:54, Thursday 09 November 2023 (74117)

The TCS Ring Heater changes were reverted on 16 October (74116).

On Oct 12th we retuned MICH and SRCL LSC Feedforwards and moved the actuation from ETMX to ETMY PUM 73420.

gabriele.vajente@LIGO.ORG - 12:47, Thursday 09 November 2023 (74119)

The MICH FF always has a pole/zero pair at about 17.7 Hz. In the latest filter, the peak is a few dB higher than in previous iterations.
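The effect of such a pole/zero pair can be illustrated with scipy: the less damped the pole relative to the zero, the higher the resonant peak. The frequencies and Qs below are illustrative, not the actual FF filter coefficients:

```python
import numpy as np
from scipy import signal

def pole_zero_pair(f0, q_zero, q_pole):
    """Continuous-time pair: zero and pole both at f0, with different Qs."""
    w0 = 2 * np.pi * f0
    num = [1, w0 / q_zero, w0**2]  # s^2 + (w0/Qz) s + w0^2
    den = [1, w0 / q_pole, w0**2]  # s^2 + (w0/Qp) s + w0^2
    return signal.TransferFunction(num, den)

f = np.linspace(10, 30, 2001)
w = 2 * np.pi * f

# A lightly damped pole against a more damped zero gives a peak near f0;
# at f0 the gain is exactly Qp/Qz, so raising the pole Q raises the peak.
_, mag_low, _ = signal.bode(pole_zero_pair(17.7, 5, 100), w)
_, mag_high, _ = signal.bode(pole_zero_pair(17.7, 5, 300), w)

print(mag_high.max() > mag_low.max())  # -> True
```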

camilla.compton@LIGO.ORG - 15:12, Thursday 09 November 2023 (74121)

The change in H1:CAL-CS_TDEP_SUS_LINE3_UNCERTAINTY is when the MICH and SRCL feedforward filters were changed. See attached.

Images attached to this comment
louis.dartez@LIGO.ORG - 10:44, Friday 10 November 2023 (74137)
This is due to a 17.7Hz resonance in the new MICH FF. See LHO:74136.