H1 General
oli.patane@LIGO.ORG - posted 20:57, Saturday 23 September 2023 (73074)
Ops EVE Midshift Update

We've been Locked for 6.5 hours now and are Observing at 152Mpc.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:18, Saturday 23 September 2023 (73071)
Saturday OPS Day Shift End

TITLE: 09/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

19:02 UTC PI24 ring-up.
19:11 UTC A few seconds after I found the right setting to bring PI24 down, a lockloss happened.


Relocking notes:
Xarm Increase Flashes ran. Lockloss at LOCKING_ALS.


Relocking attempt 2:
20:19 UTC Nominal Low Noise reached
20:34 UTC Lockloss before the camera servos turned on.

Relocking attempt 3:
started at 20:36 UTC
Nominal Low Noise Reached at 21:20 UTC
Observing Reached at 21:34 UTC

LOG (testing the site-independent reservation system):

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
16:34 | test | test | test | test | test | 16:35
H1 General
oli.patane@LIGO.ORG - posted 16:18, Saturday 23 September 2023 (73072)
Ops EVE Shift Start

TITLE: 09/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 12mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

Detector has been Observing and Locked for almost 2 hours now. Winds look like they have spiked just past 20mph in the last half hour, and microseism is low.

The LVEA zone 5 temp looks like it peaked about an hour ago, but didn't go up as high or as sharply as yesterday's temperature peak.

H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 15:05, Saturday 23 September 2023 (73070)
Lockloss 1379536511

Unknown lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1379536511
This lockloss happened while waiting for the camera servo Guardian to get to CAMERA_SERVO_ON.

Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 13:39, Saturday 23 September 2023 - last comment - 15:50, Tuesday 26 September 2023(73069)
PI24 Induced Lockloss

Lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1379531482

19:02 UTC PI24 starts to ring up.
I let the PI Guardian try to sort it out at first. When I noticed it was cycling too quickly through the PI damping settings, I tried fighting it a little, and eventually turned off the PI Guardian and tried damping by hand. Unfortunately, my first guess at what would damp PI24 actually made it worse.
I struggled to find the right settings to damp it.
19:11 UTC A few seconds after I finally found the setting that was turning PI24 around, a lockloss happened.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 15:50, Tuesday 26 September 2023 (73126)OpsInfo

Naoki checked that PI24 is still at the correct frequency; his plot is attached.

We checked that the SUS_PI Guardian was doing what was expected, and it was. It stopped at 40deg PHASE, which seemed to be damping, but as the RMSMON started rising it continued to cycle (see attached). The issue appears to be that the Guardian couldn't find the exact frequency to damp the mode quicker than it was ringing up.

Naoki and I have edited SUS_PI to take steps of 45 degrees rather than 60 degrees to fix this. We've left the timer at 10 seconds between steps, but this may need to be reduced later. SUS_PI needs to be reloaded when next out of observe; tagging OpsInfo.
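For context, a minimal sketch of the phase-stepping strategy described above (the helper functions are hypothetical stand-ins, not the actual SUS_PI Guardian code):

```python
# Sketch of phase-step PI damping; read_rmsmon() and set_damping_phase()
# are hypothetical stand-ins for the real Guardian/EPICS interfaces.
import time

PHASE_STEP_DEG = 45   # reduced from 60 deg, as described above
STEP_WAIT_S = 10      # dwell time between steps; may be reduced later

def damp_pi_mode(read_rmsmon, set_damping_phase, threshold=1.0):
    phase = 0.0
    while read_rmsmon() > threshold:
        rms_before = read_rmsmon()
        set_damping_phase(phase)
        time.sleep(STEP_WAIT_S)
        if read_rmsmon() >= rms_before:
            # Not damping (or still ringing up): try the next phase step.
            phase = (phase + PHASE_STEP_DEG) % 360.0
        # else: hold this phase while the mode rings down
```

With 45-degree steps the sweep covers the full circle in 8 steps instead of 6, so the best phase tried is never more than 22.5 degrees from optimal.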

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:29, Saturday 23 September 2023 (73068)
Sat CP1 Fill

Sat Sep 23 10:11:53 2023 INFO: Fill completed in 11min 49secs

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 08:11, Saturday 23 September 2023 (73067)
Saturday OPS Day Shift Start

TITLE: 09/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 0mph Gusts, 0mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

Still have the following CDS lights:
H1:DAQ-DC0_H1IOPSUSAUXH2_CRC_SUM
H1:DAQ-DC0_H1SUSAUXH2_CRC_SUM
Dave put in an alog about this:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=73055
This has not been resolved; I believe Dave wants to resolve it but will likely have to wait until Tuesday.

H1 General
oli.patane@LIGO.ORG - posted 00:27, Saturday 23 September 2023 (73065)
Ops EVE Shift End

TITLE: 09/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Corey
SHIFT SUMMARY: The majority of my shift was quiet. We were out of Observing for less than two minutes due to the TCS ITMX CO2 laser unlocking (73061), and then at 05:39UTC we lost lock (73064) and had saturations in L2 of all ITMs/ETMs that delayed relocking. We are currently in INITIAL_ALIGNMENT with Corey helping us lock back up.

23:00UTC Detector Observing and Locked for 7.5 hours

23:40 Taken out of Observing by TCS ITMX CO2 laser unlocking (73061)
23:42 SDF Diffs cleared by themselves and we went back into Observing

05:39 Lockloss (73064)

LOG:

no log

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 22:42, Friday 22 September 2023 - last comment - 12:22, Tuesday 03 October 2023(73064)
Lockloss

Lockloss @ 09/23 05:39UTC

Comments related to this report
oli.patane@LIGO.ORG - 01:03, Saturday 23 September 2023 (73066)

Right after lockloss DIAG_MAIN was showing:

- PSL_ISS: Diffracted power is low

- OPLEV_SUMS: ETMX sums low

05:41UTC LOCKING_ARMS_GREEN; the detector couldn't see ALSY at all. I noticed the ETM/ITM L2 saturations (attachment1 - L2 ITMX MASTER_OUT as an example) on the saturation monitor (attachment2) and noticed the ETMY oplev moving around wildly (attachment3).

05:54 I took the detector to DOWN and could immediately see ALSY on the cameras and ndscope; the L2 saturations were all still there.

05:56 and 06:05 I went to LOCKING_ARMS_GREEN and GREEN_ARMS_MANUAL, respectively, but wasn't able to lock anything. I was able to reach both states without the saturations getting too bad, but ALSX and ALSY each eventually locked for a few seconds, then unlocked and dropped to basically 0 on the cameras and ndscope (attachment4). Sometime after this, ALSX and ALSY went to FAULT with the messages "PDH" and "ReflPD A".

06:07 Tried going to INITIAL_ALIGNMENT, but Verbal kept calling out all of the optics as saturating, ALIGN_IFO wouldn't move past SET_SUS_FOR_ALS_FPMI, and the oplev overview screen showed ETMY, ITMX, and ETMX moving all over the place (I was worried something would go really wrong if I left it in that configuration).

I waited a bit and then tried INITIAL_ALIGNMENT and LOCKING_ARMS_GREEN again, with the same results.

Naoki came on and thought the saturations might be due to the camera servos, so he cleared the camera servo history; all of the saturations disappeared (attachment5) and an INITIAL_ALIGNMENT is now running fine.

Not sure what caused the saturations in the camera servos, but it'll be worth looking into whether they are what ended up causing this lockloss.

Images attached to this comment
naoki.aritomi@LIGO.ORG - 13:50, Monday 25 September 2023 (73091)

As shown in the attached figure, the output of the camera servo for PIT suddenly became 1e20, and the lockloss happened 0.3s later, so this lockloss is likely due to the camera servo. I checked the input signal of the camera servo. The bottom left is the input of PIT1 and should be the same as the BS camera output (bottom right). The BS camera output itself looks OK, but the input of PIT1 went red-dashed (invalid data) and the output became 1e20. The situation is the same for PIT2 and PIT3. I am not sure why this happened.

Images attached to this comment
david.barker@LIGO.ORG - 13:41, Tuesday 26 September 2023 (73119)

Opened FRS29216. It looks like a transient NaN on the ETMY_Y camera centroid channel caused the outputs of the ASC CAM_PIT filter modules to latch at 1e20.
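As a toy illustration (not the actual front-end code) of why a single NaN can persist: once a NaN enters a recursive filter's history, every subsequent output is NaN until the history is cleared, and a downstream output limiter would then present this as a railed value like 1e20:

```python
# One-pole lowpass y[n] = b0*x[n] + a*y[n-1]; a single NaN input
# poisons the recursion permanently.
state = 0.0
for x in [1.0, 1.0, float("nan"), 1.0, 1.0]:
    state = 0.01 * x + 0.99 * state
    print(state)   # 0.01, 0.0199, nan, nan, nan ...
```

This would be consistent with the earlier observation that clearing the camera servo history removed the saturations.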

oli.patane@LIGO.ORG - 11:40, Wednesday 27 September 2023 (73135)

Camilla, Oli

We trended the ASC-CAM_PIT{1,2,3}_INMON/OUTPUT channels alongside ASC-AS_A_DC_NSUM_OUT_DQ (attachment1) and found that, according to ASC-AS_A_DC_NSUM_OUT_DQ, the lockloss started (cursor 1) ~0.37s before PIT{1,2,3}_INMON read NaN (cursor 2), so the lockloss was not caused by the camera servos.

It is possible that the NaN values are linked to the light dropping off of the PIT2 and PIT3 cameras right after the lockloss: when the cameras come back online ~0.2s later, both are at 0, and looking back over several locklosses these two cameras tend to drop to 0 between 0.35 and 0.55s after the lockloss starts. However, the PIT1 camera still registers light for another 0.8s after coming back online (typical for this camera).
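A sketch of this kind of trend comparison using gwpy (the GPS time below is a placeholder, not the real lockloss time; channel names are from the text above):

```python
# Find the first NaN sample in each camera-servo input and compare it
# with the AS port power drop.
import numpy as np
from gwpy.timeseries import TimeSeriesDict

t0 = 1379500000   # placeholder; substitute the lockloss GPS time
chans = ["H1:ASC-AS_A_DC_NSUM_OUT_DQ"] + [
    f"H1:ASC-CAM_PIT{n}_INMON" for n in (1, 2, 3)]
data = TimeSeriesDict.get(chans, t0 - 5, t0 + 5)

for name, ts in data.items():
    nans = np.flatnonzero(np.isnan(ts.value))
    if nans.size:
        print(f"{name}: first NaN at GPS {ts.times.value[nans[0]]:.3f}")
```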

 

Images attached to this comment
naoki.aritomi@LIGO.ORG - 12:22, Tuesday 03 October 2023 (73239)

Patrick updated the camera server to solve the issue in alog 73228.

H1 General
oli.patane@LIGO.ORG - posted 20:25, Friday 22 September 2023 (73063)
Ops EVE Midshift Update

Observing at 150Mpc and have been up for almost 12 hours now. We were taken out of Observing at 23:40UTC due to the ITMX TCS CO2 laser unlocking (73061), but were able to get back to Observing pretty quickly. Wind was up to 20mph for a while but has come back down, and microseism is trending downwards as well.

LHO FMCS (PEM)
oli.patane@LIGO.ORG - posted 19:42, Friday 22 September 2023 (73062)
HVAC Fan Vibrometers Check

Closes FAMIS#26253; last checked in 72917

Corner Station Fans (attachment1)
All fans are looking good and are at values consistent with the previous week.

Outbuilding Fans (attachment2)
Most fans are looking good and consistent, except for EX_FAN1_570_2. Starting 09/15 it suddenly became much noisier, and for the past week it has had frequent, quick, high jumps in vibration (attachment3).

Images attached to this report
H1 TCS
oli.patane@LIGO.ORG - posted 18:21, Friday 22 September 2023 (73061)
TCS ITMX CO2 Laser Unlocked + 1wk Update after Chiller Temp Offset Change

The TCS CO2 laser unlocked at ITMX at 09/22 23:40UTC and took us out of Observing for a minute and a half until it could relock itself. We went back into Observing at 23:42UTC. (Didn't contact Dave to clear the DAQ CRC errors since TCS locked back up so quickly)

A couple weeks ago I noted (72653) that the TCS ITMX CO2 laser had been losing lock consistently every ~2.8 days, leading to a steady decline in the laser head power (72627).


On 09/14 at 22:13UTC Camilla lowered the H1:TCS-ITMX_CO2_CHILLER_SET_POINT_OFFSET by 0.3degreesC (72887). In the week since then, it looks like (at least based on the 3 TCSX CO2 laser unlocks we've had) the average time between unlocks has gone up slightly, from every ~2.8 days to every ~3.4 days (attachment1 - T1 is when the offset was changed). Granted, that's only two intervals so far, but it does look like we've gained a bit of extra time between unlocks.
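For what it's worth, the interval estimate is just the mean spacing of consecutive unlock times; a quick sketch (the timestamps below are placeholders, not the actual unlock times):

```python
from datetime import datetime

def mean_interval_days(unlock_times):
    """Mean gap, in days, between consecutive unlock timestamps."""
    gaps = [(b - a).total_seconds() / 86400
            for a, b in zip(unlock_times, unlock_times[1:])]
    return sum(gaps) / len(gaps)

# Placeholder unlock times after the set-point change:
unlocks = [datetime(2023, 9, 16, 4), datetime(2023, 9, 19, 13),
           datetime(2023, 9, 22, 23)]
print(f"~{mean_interval_days(unlocks):.1f} days between unlocks")
```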


Unfortunately, the laser head power still looks to be decreasing at about the same rate as before the offset change.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 16:17, Friday 22 September 2023 (73059)
Friday Ops Shift End


TITLE: 09/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Acquired the IFO while it was relocking.

Have the following CDS lights:
H1:DAQ-DC0_H1IOPSUSAUXH2_CRC_SUM
H1:DAQ-DC0_H1SUSAUXH2_CRC_SUM
Dave put in an alog about this:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=73055
This has not been resolved and Dave has been waiting for us to fall out of Observing to reset this.

The DARM FOM was having trouble with GDS again. I've had to VNC into nuc30 twice today to fix it.

Got into NOMINAL_LOW_NOISE at 15:36 UTC

The SQZer ISS unlocked too many times and referenced alog 70050 again.
Camilla jumped in and started following the instructions found in 70050, adjusting H1:SQZ-OPO_TEC_SETTEMP.
Camilla's alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=73053

Before 16:07 UTC, Ryan Short adjusted the ISS diffracted power for the PSL.
We may have to readjust the ISS diffracted power as the PSL thermalizes over a few hours; this has not happened yet, as we have been in Observing the entire day.

Observing reached 16:10 UTC

Zone 5 LVEA temps increased by 3/4 of a degree over 8 hours but have since turned around and started to cool off. Please keep an eye on that Zone 5 temperature change.

Locked for 7.5 hours and Observing the entire time.

LOG:
                                                                        

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
14:09 | PSL | RyanS | Remote | - | Restarting PSL | 14:19
16:53 | FAC | Tyler | EY | N | Chiller and supply and return hose work | 17:10
20:11 | FAC | Tyler | Air Handler 1 | N | Checking AH 1 | 20:15
H1 General
oli.patane@LIGO.ORG - posted 16:13, Friday 22 September 2023 (73060)
Ops EVE Shift Start

TITLE: 09/22 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 11mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY:

Detector Observing and has been Locked for almost 8 hours.

LHO VE
david.barker@LIGO.ORG - posted 12:03, Friday 22 September 2023 - last comment - 12:10, Friday 22 September 2023(73056)
Fri CP1 Fill

Fri Sep 22 10:09:28 2023 INFO: Fill completed in 9min 24secs

Travis confirmed a good fill curbside.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 12:10, Friday 22 September 2023 (73057)

Note that the TCs only dropped to -150C and -144C for A, B respectively because the outside air temp was only 49F at 10am (due to fog?). We might have to increase the trip temp from -130C to -120C soon.
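For reference, a hedged sketch of the stop criterion implied here (the real autofill logic lives in CDS; the names and structure below are illustrative only):

```python
TRIP_TEMP_C = -130.0   # current trip temperature; may move to -120 C

def fill_complete(tc_a_c: float, tc_b_c: float,
                  trip_c: float = TRIP_TEMP_C) -> bool:
    """Declare the CP1 fill done once both TCs read colder than trip."""
    return tc_a_c <= trip_c and tc_b_c <= trip_c

print(fill_complete(-150.0, -144.0))  # True for the readings quoted above
```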

H1 CDS
david.barker@LIGO.ORG - posted 11:43, Friday 22 September 2023 (73055)
h1susauxh2 DAQ CRC error linked to first power glitch 03:41:57 PDT

Around the time of the first power glitch this morning both models on h1susauxh2 recorded a single DAQ CRC error by both DC0 and DC1.

The first attached plot shows the 3 mains phases for this glitch; the second shows one phase and one of the CRC errors (they all happened concurrently).

This glitch was more severe than usual, with two phases being significantly reduced in voltage for several cycles.

I will clear the CRCs next time H1 is out-of-observe.

Images attached to this report
H1 PSL (PSL, SYS)
ryan.short@LIGO.ORG - posted 07:53, Friday 22 September 2023 - last comment - 12:45, Friday 22 September 2023(73050)
Site Power Glitch and PSL Recovery

At 13:47 UTC, Corey put in an alog noting that H1 was unlocked and the PSL was down. I checked the control room screenshots page and saw that DIAG_MAIN was reporting the PMC high voltage was off, and Richard reported that there were two site power glitches early this morning, so I was immediately reminded of a very similar situation in early August where the high voltage supply for PMC locking was tripped off during a power glitch. I got in contact with Corey and Richard (who was on-site) and we coordinated turning the 375V power supply in the CER mezzanine back on. Richard also checked other HV supplies and reported they were on. Once the PMC HV was back on, I proceeded with recovering the PSL from home and ran into no issues. Corey is now relocking H1.

Since the PSL had just turned on, I needed to adjust the ISS RefSignal from -2.00V to -2.06V to keep the diffracted power between 2.0% and 2.5%. This will need to be adjusted again as the PSL thermalizes to keep the diffracted power where we want it; I'll check on this when I'm on-site later this morning.
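A minimal sketch of that adjustment loop (the reader/writer callables are hypothetical stand-ins for the EPICS channels, and the sign convention is an assumption, so check before use):

```python
def tune_refsignal(read_diff_pct, read_ref_v, write_ref_v,
                   lo=2.0, hi=2.5, step_v=0.01):
    """Nudge the ISS RefSignal by 10 mV until diffracted power is in band.

    Assumes a more-negative RefSignal raises diffracted power, matching
    the -2.00 V -> -2.06 V move described above; flip the signs if not.
    """
    diff = read_diff_pct()
    if diff < lo:
        write_ref_v(read_ref_v() - step_v)
    elif diff > hi:
        write_ref_v(read_ref_v() + step_v)
```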

Attached is a trend of the site power glitch at 10:42 UTC this morning and the PMC HV going down as a result.

Images attached to this report
Comments related to this report
matthew.heintze@LIGO.ORG - 09:13, Friday 22 September 2023 (73052)PSL

Just FYI: here at LLO, the PMC HV power supply was the most sensitive to a power glitch (it is what would trip off first, even if the PSL stayed up).

After a couple of attempts, we have since swapped it onto the UPS discussed in LLO alog 66808. To date we have had no issues since this particular UPS was installed; we even simulated power outages/power glitches and the PMC HV supply stayed on.

ryan.short@LIGO.ORG - 12:45, Friday 22 September 2023 (73058)PSL

Created FRS 29181 to track PMC HV trips and noted LLO's implementation of an online UPS.

H1 ISC (AWC, DetChar, ISC)
keita.kawabe@LIGO.ORG - posted 12:01, Tuesday 19 September 2023 - last comment - 12:01, Tuesday 03 October 2023(72967)
OM2 thermistors are connected back to Beckhoff (but not the heater driver voltage input, for which we're still using the voltage reference) (Daniel, Keita)

Detchar, please tell us if the 1.66Hz comb is back.

We changed the OM2 heater driver configuration from what was described in alog 72061.

We used a breakout board with jumpers to connect all OM2 thermistor readback pins (pin 9, 10, 11, 12, 22, 23, 24, 25) to Beckhoff at the back of the driver chassis. Nothing else (even the DB25 shell on the chassis) is connected to Beckhoff.

Heater voltage inputs (pin 6 for positive and 19 for negative) are connected to the portable voltage reference powered by a DC power supply to provide 7.15V.

BTW, somebody powered the OM2 heater off at some point in time, i.e. OM2 has been cold for some time but we don't know exactly how long.

When we went to the rack, half of the power supply terminal (which we normally use for 9V batteries) was disconnected (1st picture), and there was no power to the heater. Baffling. FYI, if it's not clear, the power terminal should look like the second picture.

Somebody must have snagged the cables hard enough to disconnect this, and didn't even bother to check.

Next time you do it, since reconnecting is NOT good enough, read alog 72286 to learn how to set the voltage reference to 7.15V and turn off the auto-turn-off function, then do it, and tell me you did it. I promise I will thank you.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 12:07, Tuesday 19 September 2023 (72969)AWC, ISC

Thermistors are working.

Images attached to this comment
daniel.sigg@LIGO.ORG - 12:22, Tuesday 19 September 2023 (72970)

There is an OSEM pitch shift of OM2 at the end of the maintenance period 3 weeks ago (Aug 29)

Images attached to this comment
jenne.driggers@LIGO.ORG - 14:35, Tuesday 19 September 2023 (72979)CAL

Having turned the heater back on will likely affect our calibration.  It's not a bad thing, but it is something to be aware of.

ryan.short@LIGO.ORG - 14:47, Tuesday 19 September 2023 (72980)CAL

Indeed it now seems that there is a ~5Mpc difference in the range calculations between the front-ends (SNSW) and GDS (SNSC) compared to our last observation time.

Images attached to this comment
ansel.neunzert@LIGO.ORG - 10:22, Wednesday 20 September 2023 (73000)

It looks like this has brought back the 1.66 Hz comb. Attached is an averaged spectrum for 6 hours of recent data (Sept 20 UTC 0:00 to 6:00); the comb is the peaked structure marked with yellow triangles around 280 Hz. (You can also see some peaks in the production Fscans from the previous day, but it's clearer here.)

Images attached to this comment
keita.kawabe@LIGO.ORG - 13:12, Wednesday 20 September 2023 (73006)

To see if one of the Beckhoff terminals for thermistors is kaput, I disconnected thermistor 2 (pins 9, 11, 22 and 24) from Beckhoff at the back of the heater driver chassis.
For a short while the Beckhoff cable itself was disconnected but the cable was connected back to the breakout board at the back of the driver chassis by 20:05:00 UTC.

Thermistor 1 is still connected. Heater driver input is still receiving voltage from the voltage reference.

ansel.neunzert@LIGO.ORG - 09:53, Thursday 21 September 2023 (73028)

I checked a 3-hour span starting at 04:00 UTC today (Sept 21) and found something unusual. There is a similar structure peaked near 280 Hz, but the frequency spacing is different. These peaks lie on integer multiples of 1.1086 Hz, not 1.6611 Hz. Plot attached.
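One way to check which spacing a set of peaks fits, given peak frequencies read off an averaged spectrum (the peak list below is an illustrative placeholder, not the measured one):

```python
import numpy as np

peaks_hz = np.array([278.26, 279.37, 280.48, 281.58])  # placeholder peaks
for spacing in (1.6611, 1.1086):
    n = np.round(peaks_hz / spacing)                # nearest harmonic number
    worst = np.abs(peaks_hz - n * spacing).max()    # worst-case offset
    print(f"{spacing} Hz comb: max offset {worst * 1e3:.0f} mHz")
```

A genuine comb puts every peak within a frequency bin or two of an exact integer multiple of the spacing.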

Images attached to this comment
keita.kawabe@LIGO.ORG - 14:32, Thursday 21 September 2023 (73037)DetChar-Request

Detchar, please see if there's any change in 1.66Hz comb.

At around 21:25 UTC, I disconnected OM2 thermistor 1 (pins 10, 12, 23, 25 of the cable at the back of the driver chassis) from Beckhoff and connected thermistor 2 (pins 9, 11, 22, 24).

Images attached to this comment
ansel.neunzert@LIGO.ORG - 11:19, Friday 22 September 2023 (73054)

Checked 6 hours of data starting at 04:00 UTC Sept 22. The comb structure persists with spacing 1.1086 Hz.

jeffrey.kissel@LIGO.ORG - 11:07, Tuesday 03 October 2023 (73235)DetChar
Electrical grounding of the Beckhoff systems has been modified as a result of this investigation -- see LHO:73233.
jeffrey.kissel@LIGO.ORG - 12:01, Tuesday 03 October 2023 (73236)CAL
Corroborating Daniel's statement that the OM2 heater power supply was disrupted on Tuesday Aug 29th 2023 (LHO:72970), I've zoomed in on the pitch *OSEM* signals for both
    (a) the suspected time of power disruption (first attachment), and
    (b) the time when the power and function of OM2 was restored (second attachment).

One can see that upon power restoration, resuming the HOT configuration of the TSAMS on 2023-09-19 (time (b) above), OM2 pitch *decreases* by 190 [urad] over the course of ~1 hour, with a characteristic "thermal time constant" exponential shape to the displacement evolution.

Then, heading back to 2023-08-29 (time (a) above), we can see a similarly shaped event that causes OM2 pitch to *increase* by 160 [urad] over the course of ~40 minutes. Then, 40 minutes in, IFO recovery from maintenance begins, and we see the OM2 pitch *sliders* adjusted to account for the new alignment, as had been done several times before with the OM2 ON vs. OFF state changes.

I take this to be consistent with:
    The OM2 TSAMS heater was inadvertently turned OFF and COLD on 2023-08-29 at 18:42 UTC (11:42 PDT), and 
    The OM2 TSAMS heater was restored turned ON and HOT on 2023-09-19 18:14 UTC (11:14 PDT).
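The exponential settling described above can be quantified by fitting a thermal step; a sketch with synthetic data (not the actual OM2 OSEM signals):

```python
import numpy as np
from scipy.optimize import curve_fit

def thermal_step(t, p0, dp, tau):
    """Pitch relaxing from p0 toward p0 + dp with time constant tau."""
    return p0 + dp * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 3600.0, 600)               # one hour, in seconds
pitch = thermal_step(t, 0.0, -190.0, 900.0)     # -190 urad step, tau = 15 min
pitch += np.random.normal(0.0, 2.0, t.size)     # OSEM-like readout noise
popt, _ = curve_fit(thermal_step, t, pitch, p0=(0.0, -100.0, 500.0))
print(f"fitted step = {popt[1]:.0f} urad, tau = {popt[2]:.0f} s")
```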
Images attached to this comment