H1 ISC
marc.pirello@LIGO.ORG - posted 11:42, Tuesday 23 May 2023 - last comment - 14:32, Tuesday 23 May 2023(69833)
ISC Rack Power Adjustment 2

F. Mera, M. Pirello

Continuing with WP11193

7 - Installed segregated +24V run from VDC-C2 U34 RHS to SQZ-C1 for the OMC IO Chassis.  This conductively isolates the OMC IO Chassis +24V power from the Beckhoff SQZ +24V power.

8 - Replaced VDC-C2 U24 RHS -18V Kepco which supplies -18V to ISC-C1 & ISC-C2.

H1 VDC Rack Drawing D2300167

This concludes WP11193

Comments related to this report
jeffrey.kissel@LIGO.ORG - 14:32, Tuesday 23 May 2023 (69852)CAL, CDS, DetChar, ISC
Tagging DetChar and CAL, for the improved electrical isolation of the h1omc0 IO chassis (which houses the isolated 524 kHz ADC card that's reading out the OMC DCPDs; the gravitational wave PDs) that comes from Marc / Fernando's execution of:
    7 - Installed segregated +24V run from VDC-C2 U34 RHS to SQZ-C1 for the OMC IO Chassis.  This conductively isolates the OMC IO Chassis +24V power from the Beckhoff SQZ +24V power. 

We don't have "smoking gun" evidence from "before" the change, but hopefully after this day the number or amplitude of lines in the detector sensitivity will be reduced -- so be aware, CW group!

Note that this is one of the last official parts of segregating the OMC IO chassis, a la (H1 only, thus far) ECR E2200441 and IIET Ticket 25756.
Some of the motivating history is in that FRS ticket, as well as its predecessor IIET Ticket 17846, where we identified mixing of unsynchronized FPGA clocks on all the ADC and DAC cards in the h1lsc0 chassis -- which cites Robert's initial findings in LHO:58313.
H1 CDS (ISC, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 11:03, Tuesday 23 May 2023 - last comment - 14:01, Tuesday 23 May 2023(69826)
Prep for Corner Station to End Station Dolphin Card Replacement -- SEI / SUS Put in their 'Ready for potential computer crash' mode
J. Kissel, E. von Reis, T. Shaffer,
WP 11219

In prep for the suddenly needed replacement of the corner-station to end-station "CDS RFM" dolphin card, which has a non-zero risk of crashing all end-station models, we've 
    - Accepted the ETMX / TMSX, ETMY / TMSY alignment sliders in the SDF system
    - Brought the ETMX / TMSX, ETMY / TMSY SUS guardians to DAMPED
    - Brought the EX and EY SEI guardian managers to ISI_DAMPED_HEPI_OFFLINE
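For reference, guardian requests like these can also be scripted; below is a minimal sketch using the standard H1:GRD-<node>_REQUEST EPICS channels and pyepics. The node names are assumptions for illustration and should be checked against the site's guardian node list.

    # Hedged sketch: request SUS / SEI guardian states ahead of work that may
    # crash the end-station front ends. Node names are assumptions -- verify
    # against the actual guardian node list before use.
    from epics import caput  # pyepics

    SUS_NODES = ['SUS_ETMX', 'SUS_TMSX', 'SUS_ETMY', 'SUS_TMSY']   # hypothetical names
    SEI_NODES = ['SEI_ETMX', 'SEI_ETMY']                           # hypothetical names

    for node in SUS_NODES:
        caput(f'H1:GRD-{node}_REQUEST', 'DAMPED')

    for node in SEI_NODES:
        caput(f'H1:GRD-{node}_REQUEST', 'ISI_DAMPED_HEPI_OFFLINE')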
Comments related to this report
jeffrey.kissel@LIGO.ORG - 11:34, Tuesday 23 May 2023 (69832)
Good thing! As feared, EX and EY SEI, SUS, and ISC end-station computers crashed as a result of this work.
jeffrey.kissel@LIGO.ORG - 14:01, Tuesday 23 May 2023 (69849)ISC, SEI
See discussion of the Dolphin crash in LHO:69843.

Also see an unexpected consequence of the crash -- the ISIs lost settings for the newish blend filter fader system -- in LHO:69835.
H1 CDS (CAL, DetChar, ISC, SQZ)
jeffrey.kissel@LIGO.ORG - posted 10:57, Tuesday 23 May 2023 (69831)
Prep for more LSC, ASC, SQZ, OMC Rack Power Supply Work -- h1lsc0, h1asc0, and h1omc0 Computers / IO Chassis turned OFF
D. Barker, J. Kissel, T. Shaffer

In prep for round 2 of Power Supply Work for the LSC, ASC, SQZ, OMC Rack (i.e. ISC-C1 and SQZ-C1 racks)  (WP 11193, LHO:69652, LHO:69631), I've asked Dave to power down the h1asc0, h1lsc0, and h1omc0 IO chassis, safely removing those IO chassis / front-end computers from the Dolphin network fabric before shutting them down in an orderly fashion.

The h1omc0 IO chassis power, in the SQZ-C1 rack, is being replaced with its own segregated supply, so it *definitely* needed to be brought down and out in an orderly fashion. 

On the other hand, the h1lsc0 and h1asc0 IO chassis, in the ISC-C1 rack, are powered by +/-24 V. Marc's work with ISC-C1 was replacing / improving / upgrading the +/-18 V power supply, which powers "only" the AA and AI chassis in ISC-C1 (see D1001427). So taking down h1lsc0 and h1asc0 was merely a precautionary measure: this kind of rack work tends to glitch the electrical ground of the rack, which glitches the IO chassis timing system (or does other nasty things to the IO chassis), which kills the IO chassis, and if the chassis / computer are not brought out of the Dolphin network fabric first, it can glitch the whole collection of corner station computers. So I took the precaution.
LHO VE
david.barker@LIGO.ORG - posted 10:53, Tuesday 23 May 2023 (69830)
Tue CP1 Fill

Tue May 23 10:02:42 2023 INFO: Fill completed in 2min 42secs

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:15, Tuesday 23 May 2023 (69828)
Started ZPOOL scrub on h1daqframes-0

WP11208

Dan, Erik, Jonathan, Dave:

I've started the ZPOOL scrub of FW0's file system. We are monitoring to see if it slows file access like it did last scrub run in April. If it causes issues we will stop it, otherwise we will let it run. It is expected to take 4 days.

H1 CAL (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 09:27, Tuesday 23 May 2023 (69825)
Calibration Lines Change -- CAL_AWG_LINES Guardian brought to IDLE for the Foreseeable Future, Lines OFF
J. Kissel, T. Shaffer

We've not yet done a quantitative study of how much the CAL_AWG_LINES guardian's eight "thermalization characterization" calibration lines (PCALY driven at freq=[8.925, 11.575, 15.275, 24.5] Hz, and DARM1_EXC driven at freq=[8.825, 11.475, 15.175, 24.4] Hz) pollute parts of the detector sensitivity, but given that we "should have enough data" from the past few weeks to fulfill the current plan for handling the systematic error that the detuned SRC brings into the calibration during thermalization (see, e.g., LHO:69796), we've turned off the CAL_AWG_LINES guardian calibration line excitations.

Here's what "turning off" looks like:
    - Requested the CAL_AWG_LINES guardian to IDLE. (Because this guardian is not robust against computer crashes [LHO:69688], this made the guardian go into error, so we hit "RELOAD" to re-initialize it and confirmed that the PCALY and DARM1_EXC test points had disappeared.) Now it sits in IDLE with none of its lines driven.
    - We've changed the NOMINAL_STATE to "IDLE" (rather than LINES_ON) 
    - Committed the CAL_AWG_LINES code changes to cal/h1/guardian/CAL_AWG_LINES.py, now at rev 25692.
    - We've removed the following lines from the ISC_LOCK guardian, and reloaded the guardian as of 2023-05-23 16:24 UTC 
         ISC_LOCK    Actual Code                             Context Notes
         Line #                
         43          CAL_AWG_LINES                           # In the list of managed nodes
         5475        nodes['CAL_AWG_LINES'] = 'LINES_ON'     # In ISC_LOCK's LOWNOISE_LENGTH_CONTROL state, in the "run" portion, making sure the lines were on on the way up during every lock acquisition sequence
         5899        nodes['CAL_AWG_LINES'] = 'IDLE'         # In ISC_LOCK's NLN_CAL_MEAS state, in the "main" portion, turning these OFF during calibration measurements
         5923        nodes['CAL_AWG_LINES'] = 'LINES_ON'     # In ISC_LOCK's NLN_CAL_MEAS state, in the "run" state, turning them back ON when returning from NLN_CAL_MEAS to NOMINAL_LOW_NOISE
    - Set the CAL_AWG_LINES guardian's MANAGER to "AUTO" by hitting the H1:GRD-CAL_AWG_LINES_MODE button.
    - Committed the ISC_LOCK guardian code in userapps/isc/h1/guardian/ to the svn (the revision just before my "removal" commit was rev 25693; the removal commit is rev 25699).

We have *not* stopped or disabled the node, just in case we *don't* have enough data and want to bring it back online occasionally.
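For readers less familiar with Guardian, the removed lines follow the standard managed-node pattern, in which ISC_LOCK requests states of a subordinate node through a NodeManager. Below is a minimal, illustrative sketch of that pattern; the structure is assumed, not copied from the actual ISC_LOCK.py.

    # Illustrative sketch of the Guardian managed-node pattern used by the
    # removed lines; structure is assumed, not copied from ISC_LOCK.py.
    from guardian import GuardState, NodeManager

    # CAL_AWG_LINES is no longer in this list after the change described above
    nodes = NodeManager(['CAL_AWG_LINES'])

    class LOWNOISE_LENGTH_CONTROL(GuardState):
        def run(self):
            # formerly: keep the calibration lines on during every acquisition
            # nodes['CAL_AWG_LINES'] = 'LINES_ON'
            return True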
H1 SUS (IOO, ISC)
jeffrey.kissel@LIGO.ORG - posted 08:50, Tuesday 23 May 2023 (69823)
Prep for more LSC, ASC, SQZ, OMC Rack Power Supply Work -- Offloaded MC WFS, Saved SUS MC OPTICALIGN Alignment Slider Values and PSL Input PZT Steering Slider Values in SDF
J. Kissel, T. Shaffer

In order to aid recovery later today, after round two of changing out the power supplies for the ISC racks (WP 11193, LHO:69652, LHO:69631), TJ and I offloaded the MC WFS to the suspensions' alignment sliders and the input PZT sliders (while the IMC is locked, use the IMC_LOCK guardian and request MCWFS_OFFLOADED).
Then, to make sure it sticks across computer reboots, we accepted them in each computer's safe.snap SDF file. 
    - Because alignment sliders are not monitored in SUS SDF systems, for the IMC SUS, MC1, MC2, and MC3, this means changing the SDF view to FULL_TABLE, and accepting OPTICALIGN*OFFSET values,
    - for the input PZTs, which live in the h1ascimc front-end model, we merely accepted them in the SETTINGS_DIFFS list, because these *are* monitored.

Neither the SUS nor the PZT offset changes were that large, but it'll help.
H1 CDS
david.barker@LIGO.ORG - posted 08:27, Tuesday 23 May 2023 - last comment - 10:04, Tuesday 23 May 2023(69821)
h1seih16 powered down for 2nd ADC replacement

WP11211

FRS27187

Fil, Dave:

h1seih16 is fenced from Dolphin and powered down. Fil is replacing the 2nd ADC.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:04, Tuesday 23 May 2023 (69827)GRD, SEI
J. Kissel (for J. Warner)

In prep for this card replacement, Jim brought the HPI HAM1 manager guardian to READY, and the SEI_HAM6 manager guardian to ISI_DAMPED_HEPI_OFFLINE. Suspensions on these tables were left as they were, in their nominal state (ALIGNED) running happily.

Post replacement, he restored these systems to their nominal state, ROBUST_ISOLATED for HAM1 and ISOLATED for HAM6.

We've now also brought the SDF files to which the settings are compared back to OBSERVE.snap.
LHO General
thomas.shaffer@LIGO.ORG - posted 08:14, Tuesday 23 May 2023 (69820)
Ops Day Shift Start

TITLE: 05/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 3mph Gusts, 1mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.13 μm/s
QUICK SUMMARY: Ryan almost had the IFO recovered after the dolphin trip last night, but was stuck getting the OMC locked. There was little to no light on the QPDs, but we could find the carrier, though with lower power.

Maintenance work started; maybe 30 seconds after I transitioned SEI_ENV to Maintenance mode, we lost lock.

H1 General
ryan.crouch@LIGO.ORG - posted 08:04, Tuesday 23 May 2023 (69815)
OPS Tuesday OWL shift summary

TITLE: 05/23 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY:

Lock#1:

After waiting out EQ mode, I had to adjust the Y arm by hand; the Y arm unlocked during CHECK_IR and killed the lock.

Lock#2:

Constant SRM saturations after engaging DRMI_ASC; the DRMI lockloss tripped the HAM5 ISI again.

Lock #3:

Lost it at CHECK_IR while trying to find COMM by hand.

Lock#4:

Decided to do an initial alignment; the init align went into error (GREEN_ARMS), and reloading the node fixed it. SRC align caused SRM saturations.

PRMI locked, but the AS AIR spot looked terrible: same SRM saturations and a HAM4/5 ISI trip during DRMI. Trending the OM OSEMs and the SR3 oplev, they don't seem to have moved much over the past day (SR3 was a little different in pitch, so I tapped its slider to adjust).

Lock#5:

Went into a manual initial alignment for the SRC: went to SRC_ALIGN, which saturated, then went to SR2 align and offloaded it, then went to PREP_SRY and adjusted SRM quite a lot in P (~200 microradians) and Y (~150 microradians) while watching the AS AIR image and the INIT_ALIGN IR ndscope. Offloaded SRC, then went back to DOWN.

The OMC was having a lot of trouble locking itself, maybe due to the high violins? I tried to find the carrier by hand, unsuccessfully; I swept through a few times and never found it. I also tried clearing the history as suggested by this doc and following Jenne's alog; neither was successful. With the method from Jenne's alog, without whitening, I couldn't get any coherence, so I was probably locked on a sideband? I searched and searched for the carrier. A few more attempts later I almost found it, around an offset of 18: I had 26 coherence indices, but a "TF phase is weird" message was in the grdlog from the TUNE_OFFSETS state, the code wasn't able to reconcile it, and it sadly cycled back to DOWN. The peaks all seemed very small -- an alignment issue? But I'm not sure where. I rechecked all the OMs; OM2 was off by 40 microradians, and I tried to adjust it, but it didn't help.

Handing off to TJ

LOG:

H1 CDS
erik.vonreis@LIGO.ORG - posted 06:58, Tuesday 23 May 2023 (69819)
Workstations and Wall Displays updated

Workstations and Wall Displays were updated and rebooted.

H1 General
ryan.crouch@LIGO.ORG - posted 04:03, Tuesday 23 May 2023 - last comment - 12:10, Tuesday 23 May 2023(69814)
OPS Tuesday OWL midshift update
Comments related to this report
ryan.crouch@LIGO.ORG - 05:14, Tuesday 23 May 2023 (69818)

The OMC is having trouble locking; the log keeps giving the same error message: "Didn't find enough peaks, resetting and trying again". The OMC TRANS camera flashes appear very off-center as well (bottom right). I was unsuccessful trying to scan and lock by hand. I also tried to do a graceful clear history, as suggested by the troubleshooting doc, since the OMC-ASC values were all stuck at high values; clearing the history reset them, but then they went right back to being high a few minutes later.

Images attached to this comment
betsy.weaver@LIGO.ORG - 08:47, Tuesday 23 May 2023 (69822)

It looks like the SRM came back up after the computer crash with an old SDF-saved alignment (as they all might, since we do not monitor these values and therefore do not "Accept" them in SDF very often). So the eventual resetting of it via large-ish slider moves last night restored it to a more recent locking alignment.

I have accepted these values in SDF for better luck next time, but we're chewing on how to ensure we are at the best alignment starting place after this type of computer reboot.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 12:10, Tuesday 23 May 2023 (69836)

Patrick Thomas wrote a script that operators can use to restore sliders to their values at a specific time in the past; there is a link to it and its GUI on the IFO align screen. We should probably add this to any instructions we have for operators on how to come back from a computer problem.
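For operators curious what such a restore looks like under the hood, here is a minimal sketch (not Patrick's script) that fetches a slider's archived value at a chosen past time and writes it back. The channel names and restore time are illustrative assumptions, and gwpy / pyepics are assumed to be available on the workstation.

    # Hedged sketch: restore alignment sliders to their values at a past time.
    # Channel names and the restore time below are illustrative assumptions.
    from gwpy.timeseries import TimeSeries
    from gwpy.time import to_gps
    from epics import caput  # pyepics

    channels = [
        'H1:SUS-SRM_M1_OPTICALIGN_P_OFFSET',   # hypothetical example channels
        'H1:SUS-SRM_M1_OPTICALIGN_Y_OFFSET',
    ]
    t0 = to_gps('2023-05-22 18:00:00')          # a known-good time chosen by the operator

    for chan in channels:
        old = TimeSeries.get(chan, t0, t0 + 1)  # fetch one second of archived data
        caput(chan, float(old.value[-1]))       # write the old value back to the slider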

H1 General
corey.gray@LIGO.ORG - posted 01:51, Tuesday 23 May 2023 (69810)
H1 Recovery Status

The Dolphin crash at around 8pm PT primarily affected H1's corner station SUS & SEI (but not every chamber). After Dave restored the CDS side of things, we worked on bringing back the SUS and then the SEI (with Rahul and Jim). Some notes:

ALIGNMENT #1:

INIT_ALIGN Errors:

INPUT ALIGN:

Once INPUT ALIGN was completed, I continued with ALIGN_IFO to complete the Initial Alignment (I did not use INIT_ALIGN because it wasn't working for me---due to the ERRORs).

LOCKING:

Since I was worried the alignment was not run correctly due to the INIT ALIGN error, I decided to run another alignment letting ALIGN IFO use INIT ALIGN.

ALIGNMENT #2:

Mostly a straightforward alignment, BUT for SRC ALIGN: there were SRM saturations via Verbal---not a good sign. These saturations would continue even after offloading. So something had to be amiss---I must have missed something from the crash.

Since it was almost midnight, I went for another lock. Sadly, although H1 made it through DRMI, it continued to have SRM saturations, and at the lockloss the HAM4 & HAM5 ISIs tripped again. This is when I handed off to Ryan.

Additional NOTE:

Here is the INIT_ALIGN error:

2023-05-23_06:26:02.156247Z INIT_ALIGN [LOCKING_GREEN_ARMS.main] timer['y_locked_but_no_wfs'] = 0
2023-05-23_06:27:02.154391Z INIT_ALIGN [LOCKING_GREEN_ARMS.run] timer['x_not_locked'] done
2023-05-23_06:27:02.156131Z INIT_ALIGN [LOCKING_GREEN_ARMS.run] timer['y_not_locked'] done
2023-05-23_06:28:41.334397Z INIT_ALIGN W: Traceback (most recent call last):
2023-05-23_06:28:41.334397Z   File "/usr/lib/python3/dist-packages/guardian/worker.py", line 494, in run
2023-05-23_06:28:41.334397Z     retval = statefunc()
2023-05-23_06:28:41.334397Z   File "/usr/lib/python3/dist-packages/guardian/state.py", line 246, in __call__
2023-05-23_06:28:41.334397Z     main_return = self.func.__call__(state_obj, *args, **kwargs)
2023-05-23_06:28:41.334397Z   File "/usr/lib/python3/dist-packages/guardian/state.py", line 246, in __call__
2023-05-23_06:28:41.334397Z     main_return = self.func.__call__(state_obj, *args, **kwargs)
2023-05-23_06:28:41.334397Z   File "/opt/rtcds/userapps/release/isc/h1/guardian/INIT_ALIGN.py", line 130, in run
2023-05-23_06:28:41.334397Z     nodes[f'ALS_{xy}ARM'] = 'INITIAL_ALIGNMENT_OFFLOADED'
2023-05-23_06:28:41.334397Z NameError: name 'xy' is not defined
2023-05-23_06:28:41.335859Z INIT_ALIGN [LOCKING_GREEN_ARMS.run] USERMSG 0: USER CODE ERROR (see log)
2023-05-23_06:28:41.388535Z INIT_ALIGN ERROR in state LOCKING_GREEN_ARMS: see log for more info (LOAD to reset)
2023-05-23_06:28:54.822757Z INIT_ALIGN LOAD REQUEST
2023-05-23_06:28:54.827322Z INIT_ALIGN RELOAD requested.  reloading system data...
2023-05-23_06:28:54.872241Z INIT_ALIGN module path: /opt/rtcds/userapps/release/isc/h1/guardian/INIT_ALIGN.py
2023-05-23_06:28:54.872241Z INIT_ALIGN user code: /opt/rtcds/userapps/release/isc/h1/guardian/ISC_library.py
2023-05-23_06:28:54.872552Z INIT_ALIGN user code: /opt/rtcds/userapps/release/isc/h1/guardian/lscparams.py
2023-05-23_06:28:54.872552Z INIT_ALIGN user code: /opt/rtcds/userapps/release/sys/h1/guardian/timeout_utils.py
2023-05-23_06:28:55.722360Z INIT_ALIGN RELOAD complete
2023-05-23_06:28:55.723769Z INIT_ALIGN W: RELOADING @ LOCKING_GREEN_ARMS.run
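For reference, the NameError above says that the f-string in INIT_ALIGN.py's LOCKING_GREEN_ARMS run method references a variable xy that was never defined in that scope. A minimal sketch of that failure mode and the obvious fix is below; the surrounding structure is assumed for illustration, not copied from INIT_ALIGN.py.

    # Sketch of the failure mode (structure assumed, not the real INIT_ALIGN.py).
    nodes = {}  # stand-in for the Guardian node manager

    def run_broken():
        for arm in ('X', 'Y'):
            # 'xy' was never bound in this scope -> NameError at runtime
            nodes[f'ALS_{xy}ARM'] = 'INITIAL_ALIGNMENT_OFFLOADED'

    def run_fixed():
        for xy in ('X', 'Y'):
            nodes[f'ALS_{xy}ARM'] = 'INITIAL_ALIGNMENT_OFFLOADED'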

H1 ISC
elenna.capote@LIGO.ORG - posted 06:59, Monday 22 May 2023 - last comment - 10:21, Sunday 28 May 2023(69785)
MICH/SRCL FF retuned, not in guardian

Gabriele and I have fit some new filters for the LSC feedforward based on measurements taken recently in a thermalized interferometer. These fits look reasonable and we think they are ready for testing. I am hesitant to put them in the guardian because the last time we tried this we made the noise worse and also caused a lockloss because of some unexpected high frequency behavior. We don't think that will happen again, but I want to be able to supervise the test.

If someone enterprising wants to check it out, the new filters are FM9 of the MICH FF and SRCL FF banks. They should be engaged with a gain of 1. It would be nice to see how they perform at the start of the lock and after thermalization.
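For anyone who wants to try this from a Guardian shell or script, a minimal sketch is below. The filter-bank names (LSC-MICHFF, LSC-SRCLFF1) are assumptions and should be confirmed against the h1lsc model before use.

    # Hedged sketch: engage the newly fit LSC feedforward filters (FM9) with unity gain.
    # Filter-bank names below are assumptions -- confirm against the h1lsc model.
    from ezca import Ezca

    ezca = Ezca()  # picks up the site IFO from the environment

    for bank in ('LSC-MICHFF', 'LSC-SRCLFF1'):
        ezca.switch(bank, 'FM9', 'ON')   # turn on the retuned filter module
        ezca[bank + '_GAIN'] = 1.0       # engage with a gain of 1, per the note above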

Comments related to this report
elenna.capote@LIGO.ORG - 00:17, Tuesday 23 May 2023 (69813)

Corey tried out these new filters at the start of a recent lock. I have attached a set of plots made by Evan. It appears the noise in DARM from 10-40 Hz is worse, while there is some improvement from 60-100 Hz. The improvement seems to come from the slight reduction in SRCL coherence. My understanding is that the reference traces are the old feedforward, while the live traces are the new feedforward.

The injections we used to fit the feedforward were done after 6 hours of lock, once the thermalization had settled. This could be why the feedforward seems to be worse early on. I'd like to see this feedforward tried one more time later on, maybe around hour 6 if possible. If the feedforward still worsens the noise, there is some other coupling present that we do not understand.

One more note: comparing the new SRCL FF fit to the current settings, it appears the biggest change is the gain. We could maintain the same SRCL FF filter, but try adjusting the overall gain by a small amount and see if that improves the subtraction.

Images attached to this comment
gabriele.vajente@LIGO.ORG - 02:51, Tuesday 23 May 2023 (69817)

Concerning the evolution of MICH coupling during thermalization: this plot looks at a 7-hour-long lock from some days ago, with the old FF filters. I used NonSENS to estimate the optimal TF to subtract MICH on top of the running LSC FF, so this measures the residual MICH coupling to DARM after the reduction due to the FF. It's clear that the residual coupling gets lower with thermalization; this is expected, since those FF filters were tuned with a hot, thermalized IFO.

The bottom line is that we cannot have a MICH FF filter that works for both the cold and the hot IFO. The same is true for a NonSENS online subtraction.
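For context, a simple frequency-domain version of this kind of residual-coupling estimate (a Wiener-style transfer-function estimate, not NonSENS itself) can be sketched as follows; the channel data, sample rate, and segment length are assumptions.

    # Hedged sketch: estimate the residual witness -> DARM coupling as
    # CSD(witness, DARM) / PSD(witness). Not NonSENS; illustrative only.
    from scipy.signal import csd, welch

    def residual_coupling(witness, darm, fs, nperseg=2**16):
        f, Pxy = csd(witness, darm, fs=fs, nperseg=nperseg)
        _, Pxx = welch(witness, fs=fs, nperseg=nperseg)
        return f, Pxy / Pxx

    # Comparing this estimate early in a lock and after ~6 hours shows how the
    # residual MICH -> DARM coupling evolves with thermalization.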

Images attached to this comment
H1 ISC
gabriele.vajente@LIGO.ORG - posted 15:53, Friday 28 April 2023 - last comment - 01:20, Tuesday 23 May 2023(69166)
Quadratic coupling of frequency noise in DARM

We've known for a while that when we inject laser frequency noise, we see significant downconversion in DARM, together with the expected linear coupling. In particular, when injecting band-limited laser frequency noise in the kHz region, we see noise in the 100s of Hz region getting worse.

Over the past few days I did a few noise and line injections to characterize this coupling. A more detailed analysis will follow, but here are some observations:

The first two plots attached show the effect of injecting noise that mimics the shape of the 4.3 kHz bump visible in CARM. The first one shows the spectra; the second shows the ratio to the quiet time. It's clear that the 100 Hz bump in DARM scales quadratically. It also seems that the downconversion effect is not large enough to be worrisome at this level.

The third plot shows what happens when I injected several lines at high frequency. The red dots show the lines I injected, while the green Xs show all the pairwise differences of those frequencies. All excess lines in DARM and REFL ERR are marked with a green X, so we see that the coupling is quadratic and does not involve any other line.

The last plot shows that line injections at frequencies outside the 4 - 4.5 kHz band don't produce significant downconversion in DARM.

In a follow-up analysis, I plan to model the quadratic coupling from REFL to DARM and build a noise projection.
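As a small illustration of the bookkeeping behind the third plot: for a set of injected line frequencies, a quadratic (second-order) coupling produces excess lines at the pairwise difference frequencies. The frequencies in the sketch below are placeholders, not the actual injected lines.

    # Hedged sketch: compute the pairwise difference frequencies of injected lines,
    # i.e. where quadratic down-conversion products are expected in DARM / REFL ERR.
    import itertools

    injected = [4100.0, 4200.0, 4310.0, 4450.0]   # Hz, placeholder values

    diff_freqs = sorted({abs(f1 - f2)
                         for f1, f2 in itertools.combinations(injected, 2)})
    print(diff_freqs)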

Images attached to this report
Comments related to this report
craig.cahillane@LIGO.ORG - 01:20, Tuesday 23 May 2023 (69816)
4.1 kHz is the OMC dither line.
LHO alog 44874

Keita did some nice work in the past on the downconversion from frequency noise near the OMC dither. At that time we determined it was not limiting DARM. It is worth doing again.