H1 CDS (PEM)
david.barker@LIGO.ORG - posted 08:49, Wednesday 29 November 2023 (74471)
LVEA10 Dust monitor temperature readout issue

Around 12:12 PST Tuesday, coincidentally around the DAQ restart time, H1:PEM-CS_TEMP_LVEA10_DUSTMON_DEGF dropped from 68F to 25F. Tagging PEM.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 06:46, Wednesday 29 November 2023 - last comment - 08:20, Wednesday 29 November 2023(74468)
OWL Relocking Troubleshooting So Far

Around 10:30 UTC, H1_MANAGER requested help because the Initial Alignment had timed out.

    - By the time I had gotten on, it had just been able to lock green arms and had offloaded fine.

    - However, it couldn't get XARM_IR to lock because the IMC kept unlocking and kicking MC2.

11:08 I took the detector to DOWN

Restored all optics to 1385057770 (last second of the initial alignment before the 27-hour lock from two days ago)

    - (I then restored the X and Y arms to 1385290825, from when ALSX and ALSY had just locked fine, but ALSY needed to be adjusted a lot and still couldn't get high enough to catch, so I restored the arm optics back to the 1385057770 values.)

    - Had to touch up both arms in GREEN_ARMS_MANUAL.

13:03 Went back into INITIAL_ALIGNMENT

    - Locked both arms quickly, but ALSY kept drifting down while waiting for WFS, unlocking the Y arm (attachment 1).

13:15 Thinking that the issue might be with the specific alignment of the Y arm, I set ETMY and TMSY to the values they had while in INITIAL_ALIGNMENT and GREEN_ARMS_OFFLOADED a few hours ago (1385289601)

    - Y arm was bad again and would not have caught, so I adjusted it again.

14:26 Into MANUAL_INITIAL_ALIGNMENT

    - Same as before, both arms locked quickly, but then ALSY started drifting down until it unlocked again.

So now we're having a different issue from the one we started with. Referencing 49309, LASERDIODE{1,2}POWERMONITOR are both within range and tolerance (attachment 2). LASERDIODE2POWERMONITOR does look to be slowly drifting down, but only very slightly, and it shouldn't be the issue anyway since ALSY was locking nicely just a couple of hours ago.

Something partially unrelated: the ALSY spot on the camera is definitely a lot further over/cut off than usual, although it's possible that's just because I'm seeing it closer up than usual. Either way, it's causing the flashes from Y to bleed over into the X spot and produce little jumps in ALS-C_TRX_A_LF_OUT.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 08:20, Wednesday 29 November 2023 (74470)

Updates:

To fix the YARM drifting while locked, Jenne adjusted PR3 and fixed how the green arms looked on the camera. We got that locked and offloaded, but the PR3 value is currently reverted so that we could get IR X to catch; it will be moved back to its new location once ASC is on.

After fixing this, we went back to having the initial issue I was called for - XARM IR not being able to lock because the IMC keeps unlocking. This time, at least, MC2 is not constantly saturating. Jenne tried skipping XARM IR and locking YARM IR instead, but we had the same issue.

16:16 Jenne just got XARM_IR to lock by running the dark_offset script and they're now restarting an initial alignment.

LHO General
ryan.short@LIGO.ORG - posted 00:00, Wednesday 29 November 2023 (74466)
Ops Eve Shift Summary

TITLE: 11/29 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Two locklosses this shift, which prompted reverting the AS72 whitening gain change from maintenance day. Since then, things have been quiet, but the BNS range is lower tonight for some reason.

LOG:

No log for this shift.

H1 SUS
ryan.short@LIGO.ORG - posted 23:26, Tuesday 28 November 2023 (74467)
In-Lock SUS Charge Measurement - Weekly

FAMIS 26068, last checked in alog 74211

I had to use the cds-py39 conda environment to run the coefficients.py script, as usual.

Images attached to this report
H1 ISC (OpsInfo)
naoki.aritomi@LIGO.ORG - posted 21:29, Tuesday 28 November 2023 - last comment - 22:19, Tuesday 28 November 2023(74457)
Increase the AS72 whitening gain

Sheila, Naoki

We increased the AS72 A/B whitening gain from 12dB to 21dB to reduce the ADC noise. We accepted it in safe.snap as shown in the first attached figure. To compensate for the whitening gain increase, we decreased the SRC1 gain from 4 to 1.4 (4/1.4 ≈ 9dB) by changing the following guardian lines (a quick dB sanity check follows below):

line 3429, 3438 in ISC_LOCK guardian, ENGAGE_ASC_FOR_FULL_IFO state
line 1095, 1097 in ISC_DRMI guardian, ENGAGE_DRMI_ASC state
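
As a quick sanity check of that compensation (a back-of-the-envelope calculation, not code from either guardian), the SRC1 gain reduction expressed in dB comes out very close to the 9 dB of added whitening gain:

    from math import log10

    old_gain = 4.0                 # SRC1 gain before the change
    new_gain = 1.4                 # SRC1 gain after the change
    whitening_step_db = 21 - 12    # AS72 whitening gain increase in dB

    # Convert the linear gain reduction to dB and compare to the whitening step
    compensation_db = 20 * log10(old_gain / new_gain)
    print(f"SRC1 reduction: {compensation_db:.1f} dB, whitening increase: {whitening_step_db} dB")
    # -> SRC1 reduction: 9.1 dB, whitening increase: 9 dB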

Then we found that the IFO lost lock twice after DRMI ASC was engaged. We found that the dark offset of AS72 was large, which made the SRC1 error signal too large. So we ran the dark offset script at userapps/isc/h1/scripts/dark_offsets/dark_offsets_exe.py. After that, the IFO could go to NLN.
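
For context, a dark-offset script of this kind typically averages each dark (no-light) photodiode signal and writes the negative of that average into the corresponding offset field. The sketch below only illustrates the idea with pyepics and hypothetical channel names; it is not the actual dark_offsets_exe.py.

    import time
    from epics import caget, caput  # pyepics

    # Hypothetical example channels; the real script handles the AS72 segments
    channels = ["H1:ASC-X_EXAMPLE_SEG1", "H1:ASC-X_EXAMPLE_SEG2"]

    def measure_dark_level(chan, n_samples=100, dt=0.1):
        """Average a channel's input monitor over n_samples readings (beam dark)."""
        values = []
        for _ in range(n_samples):
            values.append(caget(chan + "_INMON"))
            time.sleep(dt)
        return sum(values) / len(values)

    for chan in channels:
        dark = measure_dark_level(chan)
        # Cancel the dark level by writing its negative into the offset field
        caput(chan + "_OFFSET", -dark)
        print(f"{chan}: dark level {dark:.3g}, offset set to {-dark:.3g}")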

We accepted a bunch of SDFs both in safe.snap and observe.snap as shown in the attached figures. 

Images attached to this report
Comments related to this report
naoki.aritomi@LIGO.ORG - 21:27, Tuesday 28 November 2023 (74464)

Since the BNS range was worse and the lock duration was only 1.5 hours with this whitening gain change, we reverted the whitening gain from 21dB to 12dB. We also reverted the whitening filter from 2 stages to 1 stage, which was done in 74231. Then we ran the dark offset script again and accepted a bunch of SDFs in safe.snap as shown in the attached figures. We need to accept them also in observe.snap. We also reverted the SRC1 gain in the ISC_LOCK and ISC_DRMI guardians.

Images attached to this comment
ryan.short@LIGO.ORG - 22:19, Tuesday 28 November 2023 (74465)

I accepted these same SDF diffs in the OBSERVE tables when we relocked. Screenshots attached.

Images attached to this comment
H1 General (ISC, Lockloss)
ryan.short@LIGO.ORG - posted 20:42, Tuesday 28 November 2023 (74463)
Lockloss @ 04:25 UTC

Lockloss @ 04:25 UTC - no obvious cause, but this is about the same duration into the lock as the last lockloss. We suspect this has to do with changes to the AS72 A/B whitening and SRC1 gains made earlier today (alog 74457). Naoki is reverting these changes and we will run the dark offsets script again before relocking.

LHO General
ryan.short@LIGO.ORG - posted 20:20, Tuesday 28 November 2023 (74462)
Ops Eve Mid Shift Report

State of H1: Observing at 135Mpc

H1 has been locked and observing for 1.5 hours. Range has been lower for both lock stretches since maintenance day and the power recycling gain is quite noisy; it's gotten worse over the past 10 minutes and some ASC control signals follow it (mainly CSOFT_P and CHARD_Y).

Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 18:02, Tuesday 28 November 2023 - last comment - 09:02, Thursday 30 November 2023(74460)
Lockloss @ 01:39 UTC

Lockloss @ 01:39 UTC - no obvious cause, online lockloss analysis failed.

Looks like PRCL saw the first motion by a very small margin.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 19:02, Tuesday 28 November 2023 (74461)

H1 back to observing as of 03:01 UTC.

oli.patane@LIGO.ORG - 09:02, Thursday 30 November 2023 (74491)
Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 16:09, Tuesday 28 November 2023 - last comment - 16:40, Tuesday 28 November 2023(74458)
Ops Eve Shift Start

TITLE: 11/29 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 1mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:

H1 is relocking following maintenance day, currently up to TRANSITION_FROM_ETMX. The PRG trace is noisier than usual, but otherwise things look okay.

Comments related to this report
ryan.short@LIGO.ORG - 16:40, Tuesday 28 November 2023 (74459)ISC, OpsInfo

H1 is back to observing as of 00:22 UTC.

I had to deal with one SDF diff in the CS_ISC table for the H1:ALS-C_DIFF_PLL_VCOCOMP channel. Trending it back, it seems to have been turned OFF last Tuesday during maintenance (ISC_LOCK in 'IDLE') and back ON earlier this afternoon at 20:46 UTC, during the "Beckhoff work" section of Ibrahim's day shift log. Prior to it being switched OFF last week, it had been ON for about 8.5 years, or pre-O1. So, I ACCEPTED this diff.
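
For reference, this kind of look-back can also be done offline with a minute-trend fetch; a minimal sketch using gwpy, with placeholder GPS times rather than the actual span trended here:

    from gwpy.timeseries import TimeSeries

    # Placeholder GPS span; in practice cover the last week or two
    start, end = 1384000000, 1385300000

    # The minute-trend mean of the switch channel shows when it was ON (1) or OFF (0)
    trend = TimeSeries.get("H1:ALS-C_DIFF_PLL_VCOCOMP.mean,m-trend", start, end)
    plot = trend.plot()
    plot.gca().set_ylabel("VCOCOMP state")
    plot.savefig("vcocomp_trend.png")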

Images attached to this comment
LHO General (FMP)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:01, Tuesday 28 November 2023 (74456)
OPS Day Shift Summary

TITLE: 11/28 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

  1. 16:35 UTC - Dave noticed a flashing EY GPS timing error on the CDS overview. The PPS A GPS clock was flagging an error due to a timing difference past tolerance. The error disappeared after 16:40 UTC.
  2. 17:00 UTC SNEWS alert test T456241 caught on both Diag main and verbal
  3. 18:11 UTC HAM7 Watchdog trip, likely due to SQZ rack work that Fil is doing (pulling cables).
  4. 18:46 UTC IOC server temp change: While Erik was updating the IOC LVEA temperature, he realized that the entire IOC server needed to be rebooted.
    1. IOC server reboot happened at 18:46 UTC and connection was reestablished at 18:49 UTC.
    2. IOC reboot temporarily took down:
      1. Remote power
      2. Check violins
      3. Wi-Fi
      4. Ops GraceDB Standown - did not restart as expected, TJ turned it back on
      5. Picket Fence - did not restart as expected, prompting further investigation by Erik
    3. Wi-Fi was separately turned on during reboot in order to maintain control room work.
    4. 19:09 UTC Erik updating IOC temp. Done at 19:16 UTC
  5. Two GRB Short Alerts
    1. 19:25 UTC
    2. 19:37 UTC
  6. 19:00 UTC (ish) Temp excursion in CER and SUP Rack 1 - still within “nominal/tolerance” but visibly on the come-up. 
    1. 20:10 UTC Eric investigated and found that the lead unit had a fault and so he reset the system. Temperatures visibly on the come-down, both in CER and SUP.
  7. 19:46 UTC - during Beckhoff work, 
    1. Beckhoff restart caused a connection error with guardian ALSY node. Guardian 1 seems to have lost connection, despite Beckhoff showing everything as normal. There are 7 SPM diffs, all pertaining to H1:ALSY (and nothing else). 
    2. Apparently it fixed itself at 19:46 UTC and then the same issue happened at 20:27 UTC
    3. Error message reads as: "CONNECTION ERRORS, see SPM DIFFS for dead channels". See screenshot.
    4. The workstations have no problems and everything else seems normal.
    5. Dave and Daniel suggested doing a Guardian reboot, though Dave is looking into it and seeing if it will/won’t help.
    6. 20:45 UTC: This is currently stopping us from locking
    7. 20:51 (ish) UTC - TJ fixed the issue by bringing the guardian node to STOP and then EXEC, instead of reloading
      1. Apparently, when connections are lost and reconnected, a complete STOP (rather than a PAUSE/RELOAD) is sometimes necessary to bring the connection back.
      2. No Guardian reboot necessary
  8. DAQ Restart: NUC23 isn’t loading and is yielding an error message - Ryan C couldn’t get into it during the previous maintenance Tuesday.
    1. Ryan C is power cycling it and then attempting to connect
    2. Worked fine after restart
  9. Locking at 20:53 UTC 
    1. Starting with an initial alignment - Initial Alignment went fine
      1. Had to touch ALS_Y Yaw to get it caught
  10. Lock acquisition 1 at 21:48 UTC: lockloss at Turning_BS_Stage2 due to DRMI losing lock
    1. Caused by DRMI_ASC issue that Sheila caught and is working on.
  11. Lock acquisition 2 at 21:52 UTC: Sheila and Naoki - DRMI_ASC won’t work until they fix something so guardian only taken to DRMI_Locked_Prep_ASC
    1. Sheila said it was potentially fixed -
    2. Lockloss at the same stage again 21:57 UTC
  12. Lock acquisition 3 at 21:59 UTC
    1. IMC put to DOWN at 22:00 UTC while Sheila and Naoki investigate the DRMI ASC issues
    2. Back to lock acquisition at 22:17 UTC
    3. Paused at DRMI_LOCKED_PREP_ASC - Sheila and Naoki noticed ASC SRC1 Pitch and Yaw offset(s) were off by orders of magnitude but they seem to have been fixed during their investigation. See Naoki’s alog 74457.
    4. 22:28 UTC Continuing lock - it worked!
    5. NLN Achieved at 23:04 UTC
    6. LOCKLOSS at 23:14 UTC while ASC clearing SDF Diffs.
      1. TJ noticed noisy power recycling gain
  13. 14:54 UTC: FMCS Air Handler 3B (Reheat) - FMCS alarm handler went red and then 3 mins later went back to normal (tagged FMP)
  14. Lock acquisition 4 at 23:15 UTC
    1. Power recycling gain is being noisy (again).

LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
16:04 | FAC | Kim and Karen | EX, EY | N | Technical cleaning | 17:23
16:07 | SUS | Randy, Chris, Mitchell | EY, EX | N | EX cleanroom sock install | 18:32
16:08 | VAC | Jordan | MY, EY | N | Turbo pump tests | 17:01
16:11 | FAC | Cindi | FCES | N | Technical cleaning | 17:14
16:15 | VAC | Gerardo and Jordan | FCES | N | Valve install | 17:15
16:48 | CDS | Fernando and Marc | LVEA | N | SQZ 4 Beckhoff modifications | 19:44
16:55 | SQZ/CDS | Fil | CER/SQZ Racks | N | Pulling cables | 19:51
17:13 | | Richard | LVEA | N | Electrical walkthrough/escorting people | 17:43
17:14 | FAC | Cindi | Mech room | N | Cardboard collection | 17:44
17:43 | VAC | Ken and Gerardo | FCTE | N | Valve install | 20:13
17:49 | VAC | Travis | EX | N | Turbo Station Cooling Lines Upgrade | 20:15
17:50 | VAC | Jordan | MY, EY | N | Turbo pump tests | 18:46
18:00 | FAC | Karen and Cindi | LVEA | N | Technical cleaning + High bay check | 19:29
18:10 | VAC | Norco | CP8 EX | N | LN2 Fill | 20:05
18:13 | FAC | Ken | LVEA | N | Electrical work | 20:07
18:19 | | Richard | M-Station/Wandering/FCES | N | Smoke detector check | 19:56
18:47 | CDS | Erik | | N | IOC Server Reboot (and temp change) | 18:57
18:51 | VAC | Jordan | FCTE | N | Valve install assistance | 19:51
19:02 | TCS | Camilla | LVEA | N | TCS Setup | 20:11
19:22 | FAC | Karen | Receiving | N | Bringing car out | 19:54
19:28 | FAC | Eric | CER/Sup | N | Investigating temperature excursion | 19:51
19:45 | CDS | Fernando | | N | Rebooting with modifications | 19:56
19:48 | FAC | Mitchell and Eric | CER | N | Checking CER disconnects | 20:15
20:04 | CDS | Jonathan | | N | DAQ Restart | 20:23
20:06 | VAC | Travis | EX | N | Sensor correction correction | 20:17
20:14 | VAC | Gerardo | FCTE | N | Valve Opening | 20:24
Images attached to this report
H1 General (DetChar)
camilla.compton@LIGO.ORG - posted 15:28, Tuesday 28 November 2023 (74455)
Increased Glitches during DARM Range Fuzzy Time

Arianna, Camilla

There are many glitches making up the “fuzzy” range time from 07:00 UTC on 23rd November 2023. We identified glitches using ndscope of the DARM BLRMs (plot attached showing the yellow, green, and blue BLRMs getting worse at the t-cursor), and then used the ldvw.ligo.caltech.edu Q-transform to plot omega scans of the glitches. Original troubleshooting is in alog 74377.
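
An equivalent way to reproduce one of these omega scans offline (a sketch using gwpy instead of the ldvw web tool; the GPS time below is a placeholder, not one of the actual glitch times) is:

    from gwpy.timeseries import TimeSeries

    # Placeholder glitch time; take real times from the DARM BLRMs ndscope
    glitch_gps = 1385000000
    data = TimeSeries.get("H1:GDS-CALIB_STRAIN", glitch_gps - 16, glitch_gps + 16)

    # Q-transform a short window around the glitch and plot the spectrogram
    qspec = data.q_transform(outseg=(glitch_gps - 0.5, glitch_gps + 0.5))
    plot = qspec.plot()
    ax = plot.gca()
    ax.set_yscale("log")
    ax.set_ylabel("Frequency [Hz]")
    plot.savefig("omega_scan.png")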

Attached is a pdf containing the glitches seen between 07:00UTC and the lockloss at 10:10UTC. There are many more glitches than usual and lots of different types of glitches.

Images attached to this report
Non-image files attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 14:34, Tuesday 28 November 2023 - last comment - 08:10, Wednesday 29 November 2023(74454)
CDS Maintenance Summary: Tuesday 28th November 2023

WP11540 Remove SDF safe->OBSERVE exceptions for h1seiproc and h1brs

TJ, Jim:

TJ made changes to guardian to remove the exception that kept h1seiproc and h1brs from transitioning to OBSERVE.snap. For h1brs, OBSERVE==safe; h1seiproc has a separate OBSERVE.snap, which Jim will verify is correct.

WP11546 New FMCS STAT code base

Erik:

Erik rewrote the FMCS STAT IOC to use a new softIOC Python module. The code was restarted.
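
For reference, a minimal soft IOC in this style, assuming the module in question is something like the pythonSoftIOC (softioc) package, looks roughly like the sketch below; the record name and update loop are placeholders, not the actual FMCS STAT code:

    import cothread
    from softioc import softioc, builder

    builder.SetDeviceName("H0:FMC-EXAMPLE")      # placeholder channel prefix

    # A single analog-input record holding a status value
    stat = builder.aIn("STAT", initial_value=0)

    builder.LoadDatabase()
    softioc.iocInit()

    def update():
        # Placeholder loop: poll the real FMCS status source and publish it
        while True:
            stat.set(stat.get() + 1)
            cothread.Sleep(10)

    cothread.Spawn(update)
    softioc.interactive_ioc(globals())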

New Guardian node

Camilla, Dave:

Camilla started a new Guardian node called SQZ_ANG_ADJUST. I updated the H1EPICS_GRD.ini file. A DAQ+EDC restart was required.
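
For anyone unfamiliar with what adding a node involves: a Guardian node is just a Python module defining states and edges. A bare-bones skeleton (generic, not the actual SQZ_ANG_ADJUST code) looks something like this:

    from guardian import GuardState

    # State the node should request and consider nominal
    request = 'IDLE'
    nominal = 'IDLE'

    class INIT(GuardState):
        def main(self):
            return True

    class IDLE(GuardState):
        def run(self):
            # A real node would read/write EPICS channels via the ezca interface here
            return True

    # Allowed transitions between states
    edges = [('INIT', 'IDLE')]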

DAQ Restart:

Jonathan, Erik, Dave:

The DAQ was restarted to include the new GRD channels into the EDC. This was a good restart with no issues.

cdsioc0 reboot

Erik, Dave:

Erik updated and rebooted cdsioc0. There were some issues getting Picket Fence running again, which Erik resolved. We were reminded that I was running a temporary HWS ETMY IOC in a tmux session; Erik switched this over to a systemd service via puppet.

Comments related to this report
david.barker@LIGO.ORG - 08:10, Wednesday 29 November 2023 (74469)

Tue28Nov2023
LOC TIME HOSTNAME     MODEL/REBOOT
12:12:04 h1daqdc0     [DAQ] <<< 0-leg restart
12:12:13 h1daqfw0     [DAQ]
12:12:13 h1daqtw0     [DAQ]
12:12:14 h1daqnds0    [DAQ]
12:12:22 h1daqgds0    [DAQ]


12:12:35 h1susauxb123 h1edc[DAQ] <<< EDC restart for GRD


12:20:44 h1daqdc1     [DAQ] <<< 1-leg restart
12:20:56 h1daqfw1     [DAQ]
12:20:57 h1daqtw1     [DAQ]
12:20:58 h1daqnds1    [DAQ]
12:21:07 h1daqgds1    [DAQ]
 

LHO VE
jordan.vanosky@LIGO.ORG - posted 13:08, Tuesday 28 November 2023 - last comment - 11:28, Tuesday 05 December 2023(74453)
Functionality Test Performed on EY/MY Turbo Pumps

Jordan

We ran the functionality test on the main turbo pumps at MY and EY during Tuesday maintenance (11/28/23). The scroll pump is started to take the pressure down to the low 10^-2 Torr range, at which point the turbo pump is started; the system reaches the low 10^-8 Torr range after a few minutes. The turbo pump system is then left ON for about 1 hour, after which it goes through a shutdown sequence.
 

MY Turbo:

Bearing Life:100%

Turbo Hours: 208

Scroll Pump Hours: 74

EY Turbo:

The scroll pump made a grinding sound after getting to ~5E-2 Torr, so I closed all valves and stopped the test. The scroll pump only has 200 hours on it, so it will be disassembled to find the source of the noise. I have swapped the scroll pump with a new ISP250 but did not have time to run the turbo test. I will resume next Tuesday and add a comment to this alog with the EY results.

Closing WP 11544 and FAMIS 24917

Comments related to this report
jordan.vanosky@LIGO.ORG - 11:28, Tuesday 05 December 2023 (74606)

After swapping the scroll pump, I ran the functionality test on the EY main turbo pump during Tuesday maintenance; no issues were encountered during this test.

Turbo Hours: 1275

Scroll Pump Hours: 72

Bearing life: 100%

Closing WP 11553 and FAMIS 24941

H1 CDS (SEI)
filiberto.clara@LIGO.ORG - posted 15:01, Tuesday 21 November 2023 - last comment - 12:54, Tuesday 28 November 2023(74345)
HAM4 ISI Coil Driver Fan

WP 11533

Checked the HAM4 ISI Coil Driver Chassis. Issue reported last week of noisy fan. Fan is spinning and noise reported last week has not returned. Will leave WP open another week.

Comments related to this report
filiberto.clara@LIGO.ORG - 12:54, Tuesday 28 November 2023 (74452)

Second week of monitoring fan. No issues, closing work permit.
