LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 13:28, Wednesday 29 November 2023 - last comment - 15:40, Wednesday 29 November 2023(74478)
OPS Day Midshift Update

Midshift Update: Still troubleshooting (but finding issues and chasing leads)

Here are the main stories, troubleshooting ideas and control room activities so far...

 

TIME: 18:08 UTC - 19:00 UTC

Codename: Ham 1 Watchdog Trip

The HAM1 watchdog tripped due to Robert working on the damping, which had started "audibly singing". We untripped it. At 18:41 UTC, Jim suggested going down to HAM1 to investigate the watchdog trip and the singing. See Jenne's alog 74476 for more. This is not the cause of our locking issues.

 

TIME: 18:09 UTC - 18:33 UTC

Codename: No Name Code

There was an ALS_DIFF naming error in a piece of code that TJ pushed this morning. Jenne found it, we edited the code, reloaded the ALS_DIFF guardian (via stop and exec), and it was fine. This was initially thought to be part of the watchdog trip, but it was not. This is not the cause of our locking issues.

 

TIME: 17:30 (ish) UTC - 21:30 UTC

Codename: Fuzzy Temp

The temperature excursion that was caught and apparently fixed yesterday has not actually been fixed. While the temperatures have definitely returned to "nominal" and are within tolerance, the SUP temperature monitor is reporting extremely noisy temperatures that have plateaued about 1.5 degrees higher than before the excursion. In addition, the readings are extremely fuzzy. I went down to the CER with Eric and we confirmed that both thermostats are reading correctly, eliminating that potential cause. Per Fil and Robert's investigation, there are a few reasons this may be happening:

  1. It could be the temperature sensor itself causing excess noise and producing a partly erroneous temperature reading.
  2. It could be a faithful reading of a nearby source whose temperature is moving up and down quickly, causing this fluctuation (the noisiness generally oscillates with a ~15-minute period). This would mean it is a controls and/or external CER machine issue and there is nothing wrong with the temperature readout itself.

The proposed plan was to swap the cables between the more stable CER temperature readout in the same room and the fuzzy SUP readout, to determine whether the issue was upstream (Beckhoff error) or downstream (temperature sensor). The cables were switched, and upon trending the channels we found that the noisy/fuzzy SUP readout (now plugged into the CER channel) became stable, and vice versa. This means the noise follows the equipment being plugged in, i.e. the temperature sensor and/or its cable.
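
For the record, a trend like this cable-swap check can be reproduced with gwpy; the channel names and GPS time below are placeholders for illustration, not the actual CER/SUP channels.

from gwpy.timeseries import TimeSeriesDict

swap_gps = 1385310000                              # approximate swap time (example value)
data = TimeSeriesDict.get(
    ['H0:FMC-CS_CER_TEMP_DEGF', 'H0:FMC-CS_SUP_TEMP_DEGF'],   # placeholder channel names
    swap_gps - 3600, swap_gps + 3600)              # one hour either side of the swap
plot = data.plot()                                 # the noisy trace should jump between channels at the swap
plot.show()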

Fil swapped the sensor out but the fluctuation did not change. Robert had the idea that a nearby air conditioner (AC5) turning on and off could be causing the temperature fluctuations. He turned the AC off at 20:30 UTC and we waited to see the temperature response. We found that the AC was indeed the cause of the fluctuation (Screenshot 4).

This tells us that the AC behavior changed during yesterday's maintenance, making it noisier. The noisiness was only noticed after the temperature excursion, and only appeared to change after the excursion was fixed.

Unfortunately, this means the issue is contained to faulty equipment rather than faulty controls, so this is not the cause of our locking issues.

See screenshots (1 → 4) to get an idea of the overall pre-switch noise and the post-switch confirmation.

TIME: 16:30 UTC - Ongoing

Codename: Mitosis

There is a perceived "cell splitting" jitter in the AS AIR camera during PRMI's ASC engagement, which takes place after PRMI is locked. Given enough time, this jitter causes swift locklosses in this state, and it is definitely worse in the presence of ASC actuation.

Jenne found no issues or glitches in the PRC optics (lower and higher stages) (Screenshot 5). She did find a 1.18 Hz ring-up when PRMI is locked, and when that gets bad there are glitches in POP18. The glitching, and the 1.18 Hz ringing, seem to go away when she lowered the LSC MICH locking gain from the nominal 3.2 down to 2.5 (Screenshot 6).
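
For reference, a one-off gain change like this can be made from a Python shell with pyepics; the channel name below is an assumption for illustration only (the actual change was made from the control-room tools).

from epics import caput

caput('H1:LSC-MICH_GAIN', 2.5)   # assumed channel name; lowered from the nominal 3.2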

Coil drivers: checked during troubleshooting to see if these might have caused or exacerbated the locking issues; Rahul confirmed they did not.

One idea was that the SUS PRM M3 stage might be glitching, but we needed to see whether the glitch persists without the feedback that a locked PRMI would have; it was confirmed not to be glitching. Sheila just checked the same thing for the BS and the ITMs. We are left with less of an idea of what's going on now. The jittering in the AS AIR camera is, however, fixable by lowering the MICH gain as above; this was not changed in the guardian.

So this “Mitosis” issue is somewhat resolved (or at least bandaged as we investigate more).

Ideas of leads to chase are:

 

Stay tuned.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 15:40, Wednesday 29 November 2023 (74482)

It seems that the "mitosis" is BS optical lever glitching, which shouldn't prevent us from locking if we can get past DRMI (and wouldn't be responsible for the high noise and locklosses overnight).

74480

H1 ISC (DetChar)
gabriele.vajente@LIGO.ORG - posted 11:38, Wednesday 29 November 2023 (74474)
Non-linear ESD noise

This is a follow up on 74315, just another way to see that the low frequency (1-4 Hz) ESD actuation is modulating the DARM noise at higher frequency (15-25 Hz).

Looking at a DARM spectrogram, the noise in the low frequency range (<30 Hz) is non-stationary. This is not a new observation, and the effect is even more visible when looking at a whitened spectrogram.

One can compute the band-limited noise in DARM by summing over frequency bins between 16 and 30 Hz in the whitened spectrogram (i.e. averaged over 5 s windows), and compute the total RMS of the ESD signal in the same windows (dominated by the low-frequency part). A scatter plot of ESD RMS vs. DARM noise shows a clear correlation.
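
A rough sketch of this comparison using gwpy; the channel names, GPS span and window choices are illustrative assumptions, not the exact ones used in the analysis.

import numpy as np
import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeries

start, end = 1385250000, 1385253600                    # example 1-hour GPS span
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
esd = TimeSeries.get('H1:SUS-ETMX_L3_MASTER_OUT_UR_DQ', start, end)   # assumed ESD drive channel

spec = darm.spectrogram(5) ** (1/2.)                   # ASD spectrogram, 5 s windows
whitened = spec.ratio('median')                        # whiten by the median spectrum

band = whitened.crop_frequencies(16, 30)               # keep the 16-30 Hz bins
bln = band.value.sum(axis=1)                           # band-limited noise per window

nsamp = int(5 * esd.sample_rate.value)                 # ESD RMS in the same 5 s windows
esd_rms = np.array([esd.value[i*nsamp:(i+1)*nsamp].std() for i in range(len(bln))])

plt.scatter(esd_rms, bln, s=2)
plt.xlabel('ESD RMS (5 s windows)')
plt.ylabel('DARM band-limited noise, 16-30 Hz (whitened)')
plt.show()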

Similarly, one can compute the bicoherence of the triplet (ESD, ESD, DARM), so that the 2D plot shows how ESD(f_1) * ESD(f_2) contributes to DARM(f_1+f_2). There is strong bicoherence of the low-frequency ESD signal with DARM at frequencies above 10 Hz.
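
A minimal bicoherence sketch for such a triplet (x is the ESD drive, y is DARM, both at the same sample rate); normalization conventions vary, and this is one common choice rather than the exact code used here.

import numpy as np
from scipy.signal import stft

def bicoherence(x, y, fs, nperseg=512):
    f, _, X = stft(x, fs=fs, nperseg=nperseg)
    _, _, Y = stft(y, fs=fs, nperseg=nperseg)
    n = len(f)
    num = np.zeros((n, n), dtype=complex)
    d1 = np.zeros((n, n))
    d2 = np.zeros((n, n))
    for k in range(X.shape[1]):                  # average over STFT segments
        for i in range(n):
            for j in range(n - i):               # keep f_i + f_j inside the band
                prod = X[i, k] * X[j, k]
                num[i, j] += prod * np.conj(Y[i + j, k])
                d1[i, j] += np.abs(prod) ** 2
                d2[i, j] += np.abs(Y[i + j, k]) ** 2
    b = np.abs(num) / np.sqrt(d1 * d2 + 1e-30)   # b[i, j]: ESD(f_i)*ESD(f_j) coupling into DARM(f_i+f_j)
    return f, b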

One can select all times when the ESD RMS is below 0.06 (in the units of the scatter plot) and average DARM during those times. A comparison with an average over all times gives an idea of how much DARM could be improved by reducing the ESD drive (if the non-linear noise behavior is indeed due to the ESD, as we believe).
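
Continuing the sketch above (reusing spec and esd_rms from it), the conditional averaging could look like the following; the 0.06 threshold is in the same scatter-plot units.

quiet = esd_rms < 0.06                          # windows with low ESD drive
asd_quiet = spec.value[quiet].mean(axis=0)      # DARM ASD averaged over quiet windows
asd_all = spec.value.mean(axis=0)               # DARM ASD averaged over all windows

plt.loglog(spec.frequencies.value, asd_all, label='all times')
plt.loglog(spec.frequencies.value, asd_quiet, label='ESD RMS < 0.06')
plt.xlabel('Frequency (Hz)')
plt.ylabel('DARM ASD (arb.)')
plt.legend()
plt.show()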

Images attached to this report
H1 SEI
jenne.driggers@LIGO.ORG - posted 10:23, Wednesday 29 November 2023 (74476)
HAM1 HEPI trip

[Ibrahim, Jenne]

Not sure yet why, but HAM1 HEPI tripped while we were on Find IR. I plotted the actuator trip from the watchdog screen, and maybe there were some glitches several seconds before the trip. I'm not sure if that's meaningful; perhaps it's just Robert walking gently around the input arm, but perhaps this is related to... something? Attached is the plot from the watchdog screen.

Also, Robert just came back to the control room to report that while he was out there, the HAM1 HEPI started 'singing' again. He noted that the damping had been removed, since apparently we thought the singing had been fixed, but it now seems to be intermittent. Robert crawled under HAM1 and put the damping material back in place, so it's possible that that's the cause of these small glitches and the trip.

 

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:08, Wednesday 29 November 2023 (74475)
Wed CP1 Fill

Wed Nov 29 10:04:23 2023 INFO: Fill completed in 4min 20secs

Jordan confirmed a good fill curbside. Note TC-B started above -1.0C so we got another decade in the y-axis log scaling.

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 09:31, Wednesday 29 November 2023 - last comment - 09:33, Wednesday 29 November 2023(74472)
OPS Day Shift Start

TITLE: 11/29 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 0mph Gusts, 0mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.54 μm/s
QUICK SUMMARY:

IFO: still troubleshooting issues from last night. Oli and Jenne were on TS when I arrived. Both described the problems they were facing (some outlined in Oli's alog 74468). Most recently, Jenne was thinking that because there was still signal when the IMC was off, the dark offsets script should be run. Once this finished, we went to (manual) initial alignment in order to see how the IFO was doing.

ALS locked, but only by moving PR3 (again, since it was moved by Jenne during Oli's shift). We're trying to lock the X arm on IR now by reverting PR3 Y to its initial value. So far, it seems that X arm IR and ALS don't lock with the same configuration. Continuing to troubleshoot.

On TS with Jenne still.

Other:

Comments related to this report
camilla.compton@LIGO.ORG - 09:33, Wednesday 29 November 2023 (74473)SQZ

Comparing Tuesday AM locks to last night, the range BLRMS <60 Hz are much worse (see attached), explaining the ~10-15 Mpc lower range. DARM is attached at 2h30 into both locks, at 2023/11/28 10:45:32 UTC (range 158 Mpc) and 2023/11/29 08:35:25 UTC (range 148 Mpc); the decreased sensitivity seen <70 Hz remains for the whole lock. Microseism was increased at this time; maybe this is the only cause?
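
A comparison like this can be reproduced with gwpy; the sketch below uses the times quoted above, but the stretch length, FFT settings and use of the calibrated strain channel are illustrative assumptions.

from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps

t1 = to_gps('2023-11-28 10:45:32')
t2 = to_gps('2023-11-29 08:35:25')
asd1 = TimeSeries.get('H1:GDS-CALIB_STRAIN', t1, t1 + 600).asd(fftlength=8)
asd2 = TimeSeries.get('H1:GDS-CALIB_STRAIN', t2, t2 + 600).asd(fftlength=8)

plot = asd1.plot(label='2023-11-28 (158 Mpc)')
ax = plot.gca()
ax.plot(asd2, label='2023-11-29 (148 Mpc)')
ax.set_xlim(10, 1000)
ax.legend()
plot.show()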

SQZ is also slightly worse everywhere, with BLRMS showing ~3.8 dB rather than 4.2 dB; we should look at whether we need any re-tuning of the OPO temperature once we are back in NLN, tagging SQZ. We especially have more of our 4.6 kHz mode-mismatch bump. Unsure why it would have changed over Tuesday maintenance, but there were some Beckhoff restarts and a HAM7 WD trip 74456.

Images attached to this comment
H1 CDS (PEM)
david.barker@LIGO.ORG - posted 08:49, Wednesday 29 November 2023 (74471)
LVEA10 Dust monitor temperature readout issue

Around 12:12 PST Tuesday, coincidentally around the DAQ restart time, H1:PEM-CS_TEMP_LVEA10_DUSTMON_DEGF dropped from 68F to 25F. Tagging PEM.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 06:46, Wednesday 29 November 2023 - last comment - 08:20, Wednesday 29 November 2023(74468)
OWL Relocking Troubleshooting So Far

Around 10:30UTC H1_MANAGER requested help due to the Initial Alignment having timed out.

    - By the time I had gotten on, it had just been able to lock green arms and had offloaded fine.

    - However, it couldn't get XARM_IR to lock because the IMC kept unlocking and kicking MC2

11:08 I took the detector to DOWN

Restored all optics to 1385057770 (last second of the initial alignment before the 27-hour lock from two days ago)

    (- I then restored the X and Y arms to 1385290825 (from when ALSX and ALSY had just been locked fine), but ALSY needed to be adjusted a lot and still wasn't able to get high enough to catch so I restored the arm optics back to the 1385057770 time)

    - Had to touch up both arms in GREEN_ARMS_MANUAL.

13:03 Went back into INITIAL_ALIGNMENT

    - Locked both arms quickly, but ALSY keeps drifting down while waiting for WFS, unlocking Y arm (attachment1)

13:15 Thinking that the issue might be in the specific alignment of the Y arm, I put the values of ETMY and TMSY to the values that they were while in INITIAL_ALIGNMENT and GREEN_ARMS_OFFLOADED from a few hours ago (1385289601)

    - Y arm was bad again and would not have caught, so I adjusted it again.

14:26 Into MANUAL_INITIAL_ALIGNMENT

    - Same as before, both arms locked quickly, but then ALSY started drifting down until it unlocked again.

So now we're having a different issue from what it was initially. Referencing 49309, LASERDIODE{1,2}POWERMONITOR are both within range and tolerance (attachment 2). LASERDIODE2POWERMONITOR does look to be slowly drifting down, but only very slightly, and it shouldn't be the issue anyway since ALSY was locking nicely just a couple of hours ago.

Something partially unrelated is that the ALSY spot on the camera is definitely a lot further over/cut off than it usually is, although it's possible that it's just because I'm seeing it closer up than usual. However, it's causing the flashes from Y to bleed over to the X spot and cause little jumps in ALS-C_TRX_A_LF_OUT.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 08:20, Wednesday 29 November 2023 (74470)

Updates:

To fix the drifting Y arm when locked, Jenne adjusted PR3 and fixed how the green arms looked on the camera. We got that locked and offloaded, but the PR3 value is currently reverted so that we could get IR X to catch; it will be moved back to its new location once ASC is on.

After fixing this, we went back to having the initial issue I was called for: XARM IR not being able to lock due to the IMC continuously unlocking. This time, at least, MC2 is not constantly saturating. Jenne tried forgoing locking XARM IR and tried locking YARM IR instead, but we are having the same issue.

16:16 Jenne just got XARM_IR to lock by running the dark_offset script, and they're now restarting an initial alignment.

LHO General
ryan.short@LIGO.ORG - posted 00:00, Wednesday 29 November 2023 (74466)
Ops Eve Shift Summary

TITLE: 11/29 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Two locklosses this shift which prompted a reversion of the changed AS72 whitening gain from maintenance day. Since then, things have been quiet, but BNS range is lower tonight for some reason.

LOG:

No log for this shift.

H1 SUS
ryan.short@LIGO.ORG - posted 23:26, Tuesday 28 November 2023 (74467)
In-Lock SUS Charge Measurement - Weekly

FAMIS 26068, last checked in alog 74211

I had to use the cds-py39 conda environment to run the coefficients.py script, as usual.

Images attached to this report
H1 ISC (OpsInfo)
naoki.aritomi@LIGO.ORG - posted 21:29, Tuesday 28 November 2023 - last comment - 22:19, Tuesday 28 November 2023(74457)
Increase the AS72 whitening gain

Sheila, Naoki

We increased the AS72 A/B whitening gain from 12 dB to 21 dB to reduce the ADC noise. We accepted it in safe.snap as shown in the first attached figure. To compensate for the whitening gain increase, we decreased the SRC1 gain from 4 to 1.4 (4/1.4 ≈ 9 dB) by changing the following guardian lines (a quick consistency check of this compensation is sketched below the list).

line 3429, 3438 in ISC_LOCK guardian, ENGAGE_ASC_FOR_FULL_IFO state
line 1095, 1097 in ISC_DRMI guardian, ENGAGE_DRMI_ASC state
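
A quick sanity check of the compensation quoted above (a sketch, not site code): the 12 dB to 21 dB whitening change is +9 dB, and lowering the SRC1 gain from 4 to 1.4 removes almost exactly the same factor.

import numpy as np

whitening_increase_db = 21 - 12                 # +9 dB whitening gain increase
src1_compensation_db = 20 * np.log10(4 / 1.4)   # ~9.1 dB removed from the SRC1 gain
print(whitening_increase_db, round(src1_compensation_db, 2))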

Then we found that the IFO lost lock twice after the DRMI ASC was engaged. We found that the dark offset of AS72 was large, which caused too large an SRC1 error signal, so we ran the dark offset script in userapps/isc/h1/scripts/dark_offsets/dark_offsets_exe.py. After that, the IFO could go to NLN.

We accepted a bunch of SDFs both in safe.snap and observe.snap as shown in the attached figures. 

Images attached to this report
Comments related to this report
naoki.aritomi@LIGO.ORG - 21:27, Tuesday 28 November 2023 (74464)

Since the BNS range was worse and the lock duration was only 1.5 hours with this whitening gain change, we reverted the whitening gain from 21 dB to 12 dB. We also reverted the whitening filter from 2 stages to 1 stage, which was done in 74231. Then we ran the dark offset script again and accepted a bunch of SDFs in safe.snap as shown in the attached figures. We need to accept them also in observe.snap. We also reverted the SRC1 gain in the ISC_LOCK and ISC_DRMI guardians.

Images attached to this comment
ryan.short@LIGO.ORG - 22:19, Tuesday 28 November 2023 (74465)

I accepted these same SDF diffs in the OBSERVE tables when we relocked. Screenshots attached.

Images attached to this comment
H1 General (ISC, Lockloss)
ryan.short@LIGO.ORG - posted 20:42, Tuesday 28 November 2023 (74463)
Lockloss @ 04:25 UTC

Lockloss @ 04:25 UTC - no obvious cause, but this is about the same duration into the lock as the last lockloss. We suspect this has to do with changes to the AS72 A/B whitening and SRC1 gains made earlier today (alog 74457). Naoki is reverting these changes and we will run the dark offsets script again before relocking.

LHO General
ryan.short@LIGO.ORG - posted 20:20, Tuesday 28 November 2023 (74462)
Ops Eve Mid Shift Report

State of H1: Observing at 135Mpc

H1 has been locked and observing for 1.5 hours. Range has been lower for both lock stretches since maintenance day and the power recycling gain is quite noisy; it's gotten worse over the past 10 minutes and some ASC control signals follow it (mainly CSOFT_P and CHARD_Y).

Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 18:02, Tuesday 28 November 2023 - last comment - 09:02, Thursday 30 November 2023(74460)
Lockloss @ 01:39 UTC

Lockloss @ 01:39 UTC - no obvious cause, online lockloss analysis failed.

Looks like PRCL saw the first motion by a very small margin.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 19:02, Tuesday 28 November 2023 (74461)

H1 back to observing as of 03:01 UTC.

oli.patane@LIGO.ORG - 09:02, Thursday 30 November 2023 (74491)
Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 16:09, Tuesday 28 November 2023 - last comment - 16:40, Tuesday 28 November 2023(74458)
Ops Eve Shift Start

TITLE: 11/29 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 1mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:

H1 is relocking following maintenance day, currently up to TRANSITION_FROM_ETMX. The PRG trace is noisier than usual, but otherwise things look okay.

Comments related to this report
ryan.short@LIGO.ORG - 16:40, Tuesday 28 November 2023 (74459)ISC, OpsInfo

H1 is back to observing as of 00:22 UTC.

I had to deal with one SDF diff in the CS_ISC table for the H1:ALS-C_DIFF_PLL_VCOCOMP channel. Trending it back, it seems to have been turned OFF last Tuesday during maintenance (ISC_LOCK in 'IDLE') and back ON earlier this afternoon at 20:46 UTC, during the "Beckhoff work" section of Ibrahim's day shift log. Prior to it being switched OFF last week, it had been ON for about 8.5 years, or pre-O1. So, I ACCEPTED this diff.
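
A long trend like this can be pulled with gwpy minute trends; the sketch below is illustrative (the start date and availability of minute trends over ~8.5 years are assumptions).

from gwpy.timeseries import TimeSeries
from gwpy.time import to_gps

vcocomp = TimeSeries.get('H1:ALS-C_DIFF_PLL_VCOCOMP.mean,m-trend',
                         to_gps('2015-06-01'), to_gps('2023-11-29'))
vcocomp.plot().show()   # the OFF/ON steps from last Tuesday and this afternoon should stand out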

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 14:34, Tuesday 28 November 2023 - last comment - 08:10, Wednesday 29 November 2023(74454)
CDS Maintenance Summary: Tuesday 28th November 2023

WP11540 Remove SDF safe->OBSERVE exceptions for h1seiproc and h1brs

TJ, Jim:

TJ made changes to guardian to remove the exception whereby h1seiproc and h1brs do not transition to OBSERVE.snap. For h1brs, OBSERVE == safe; h1seiproc has a separate OBSERVE.snap which Jim will verify is correct.

WP11546 New FMCS STAT code base

Erik:

Erik rewrote the FMCS STAT IOC to use a new softIOC Python module. The code was restarted.
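
The alog does not name the module; assuming it is along the lines of the pythonSoftIOC (softioc) package, a minimal status-style IOC could look like the sketch below. The PV prefix, record name, and reader function are hypothetical.

from softioc import softioc, builder
import cothread

def read_fmcs_status():
    # placeholder for whatever actually polls the FMCS server
    return 1

builder.SetDeviceName('H0:FMC-CS')            # hypothetical PV prefix
stat = builder.aIn('STAT', initial_value=0)   # hypothetical status record

builder.LoadDatabase()
softioc.iocInit()

def poll():
    while True:
        stat.set(read_fmcs_status())          # update the PV every 10 s
        cothread.Sleep(10)

cothread.Spawn(poll)
softioc.interactive_ioc(globals())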

New Guardian node

Camilla, Dave:

Camilla started a new Guardian node called SQZ_ANG_ADJUST. I updated the H1EPICS_GRD.ini file. A DAQ+EDC restart was required.

DAQ Restart:

Jonathan, Erik, Dave:

The DAQ was restarted to include the new GRD channels into the EDC. This was a good restart with no issues.

cdsioc0 reboot

Erik, Dave:

Erik updated and rebooted cdsioc0. There were some issues getting Picket Fence running again, which Erik resolved. We were also reminded that I was running a temporary HWS ETMY IOC in a tmux session; Erik switched this over to a systemd service via puppet.

Comments related to this report
david.barker@LIGO.ORG - 08:10, Wednesday 29 November 2023 (74469)

Tue28Nov2023
LOC TIME HOSTNAME     MODEL/REBOOT
12:12:04 h1daqdc0     [DAQ] <<< 0-leg restart
12:12:13 h1daqfw0     [DAQ]
12:12:13 h1daqtw0     [DAQ]
12:12:14 h1daqnds0    [DAQ]
12:12:22 h1daqgds0    [DAQ]


12:12:35 h1susauxb123 h1edc[DAQ] <<< EDC restart for GRD


12:20:44 h1daqdc1     [DAQ] <<< 1-leg restart
12:20:56 h1daqfw1     [DAQ]
12:20:57 h1daqtw1     [DAQ]
12:20:58 h1daqnds1    [DAQ]
12:21:07 h1daqgds1    [DAQ]
 
