H1 ISC
sheila.dwyer@LIGO.ORG - posted 12:51, Tuesday 27 August 2024 (79740)
added more DOFs back to DRMI ASC

TJ, Sheila

TJ is having difficulty locking in the high winds this afternoon.  We've been running without DRMI ASC other than the BS, so that operators have to manually adjust PRM and SRM sometimes.  This afternoon we added back in INP1, PR1 and PRC2 loops, but left SRC ASC off.  This seems to be helping with POP18, although the operators may still need to adjust SRM by hand.

LHO General
thomas.shaffer@LIGO.ORG - posted 12:27, Tuesday 27 August 2024 (79739)
Ops Day Mid-Shift Report

Maintenance activities completed around 30 min ago. I ran an initial alignment at 1100 PST, then started locking at 1206 PST. The wind is really starting to pick up and we just lost lock at the state Resonance. We will keep trying, but this might be a tough relock.

H1 PSL
ryan.short@LIGO.ORG - posted 12:18, Tuesday 27 August 2024 (79738)
PSL Rotation Stage Calibration

Since it hasn't been done since before the vent, and the output of the PMC has continued to slowly drop, before locking today I calibrated the PSL rotation stage following the steps in alog79596.

The measurement file, new calibration fit, and screenshot of accepting SDFs are all attached.

             Power in (W)   D       B (Minimum power angle)   C (Minimum power)
Old Values   97.882          1.990   -24.818                    0.000
New Values   94.642          1.990   -24.794                    0.000

Images attached to this report
Non-image files attached to this report
H1 AOS (INJ)
keith.riles@LIGO.ORG - posted 12:12, Tuesday 27 August 2024 - last comment - 16:26, Tuesday 27 August 2024(79737)
CW hardware injections seem to be disabled
Now that CIT's ldas-pcdev2 machine is back up following last week's data center troubles, my CW hardware injection monitoring shows no apparent signals since the return to observing mode on Saturday. Is that by design, or did the injections restart fall through a crack? Thanks.
Comments related to this report
david.barker@LIGO.ORG - 14:39, Tuesday 27 August 2024 (79741)

I just checked: the CW hardware injection signal is running with no problems. The h1hwinj1 server had crashed on 25jul2024; I rebooted it on 29jul2024 and verified it started OK. The attachment shows a month trend of the signal; the restart can be seen on the left.

Images attached to this comment
keith.riles@LIGO.ORG - 16:26, Tuesday 27 August 2024 (79746)
My bad. There was indeed another disruption to the monitoring from the pcdev2 shutdown, which I hadn't noticed. Thanks for the quick follow-up, and sorry for the noise.
H1 PSL
jason.oberling@LIGO.ORG - posted 11:35, Tuesday 27 August 2024 (79736)
PSL FSS RefCav Tune-up (WP 12057)

J. Oberling, R. Short

The FSS RefCav TPD had been trending down again, and Ryan did not get enough improvement from a remote alignment tweak last week.  So today we went into the enclosure to do an on-table alignment of the FSS RefCav beam path.  As usual, we began with a power budget:

No, we're not getting magic amplification between the FSS In and AOM In power measurements; that kind of small discrepancy is normal with the small stick power meter head we use for this alignment (they are also very AOI dependent, so it's entirely possible we had the head better aligned to the beam for the AOM In measurement).  Immediately we see the single-pass diffraction efficiency is lower than it normally is (generally hangs out in the lower 70% range these days), so we began by adjusting the AOM alignment to improve it.  This also means we have to then adjust M21 to improve the double-pass diffraction efficiency.  After the adjustments we had:

This is the lowest we've seen the diffraction efficiencies of this AOM, and suggests maybe it's time to swap the AOM for a new one (pretty sure this is the same AOM installed with the PSL in 2012, would have to go back through the alog to see if it's been swapped since install).  As usual, we had to do some small tweaks to the FSS EOM (provides the 21.5 MHz PDH sidebands for RefCav locking), and had to tweak the beam alignment into the RefCav using the input iris.  Once done, the RefCav locked without issue and our picomotor mounts were used to maximize the RefCav TPD; we began with a TPD of 0.57 V and ended with a TPD of 0.82 V.  We've had it higher in the past, but we have less power out of the PMC than usual (due to our slowly increasing PMC Refl that we have yet to figure out the cause of), and less power out of the PMC means less power available for the FSS RefCav which means lower maximum RefCav TPD.  To end we realigned the beam onto the RefCav RFPD and took a visibility measurement:

This finished the FSS RefCav tune-up, so we left the enclosure.  We left the ISS OFF while the enclosure returns to thermal equilibrium; it will be turned back ON once that equilibrium is reached, and Ryan will do a rotation stage calibration.  This closes LHO WP 12057.

H1 CDS
david.barker@LIGO.ORG - posted 11:03, Tuesday 27 August 2024 - last comment - 09:00, Wednesday 28 August 2024(79735)
CDS Maintenance Summary: Tuesday 27th August 2024

WP12061 Upgrade RCG h1susex, remove LIGO-DAC delays

EJ, Erik, Jonathan, Dave, Daniel, Marc:

h1susex RCG was upgraded to a custom 5.3.0 build specifically for the IOP to remove a delay in the new LIGO 28AO32 DAC. We compiled all of the user models with this RCG as well.

Our first restart was a simple model restart, but we got Dolphin errors. So our second restart was to fence h1susex from the Dolphin fabric and power cycle h1susex via IPMI. After this the models started with no issues.

No DAQ restart was required.

Code path is /opt/rtcds/rtscore/advLigoRTS-5.3.0_dacfix

WP12063 Alert System

Dave:

The locklossalert system code was modified to permit an alert window which spans over midnight, needed for the new owl shift hours.

Note that the business/every day filter is applied after the minute-in-day filter, so a window starting Friday evening and extending into Saturday morning will cut off at midnight if business days only is selected.
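For illustration, a minimal sketch of how a minute-in-day window that wraps past midnight can be checked; the names and structure are illustrative only, not the actual locklossalert code:

def in_window(minute_of_day, start, end):
    """True if minute_of_day (0-1439) falls in [start, end), where end < start
    means the window wraps past midnight."""
    if start <= end:
        return start <= minute_of_day < end
    return minute_of_day >= start or minute_of_day < end

# Example: a 22:00 -> 06:00 owl window
assert in_window(23 * 60, 22 * 60, 6 * 60)       # 23:00 is inside
assert not in_window(12 * 60, 22 * 60, 6 * 60)   # noon is outside

Because the business/every day check runs after this minute-in-day test, the wrapped portion past midnight can still be dropped, which is the cut-off behavior noted above.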

One other change:

For everyone subscribed to Guardian alerts, if for any reason the Guardian EPICS record cannot be read (node down, h1guardian1 down) the alert will now default to SEND.

Guardian Reboot

TJ:

h1guardian1 was rebooted at 08:21 PDT. All nodes except TEST came back automatically. TJ worked on TEST and got it going.

MSR Rack Cleanup

Jonathan:

Jonathan removed two test switches from the MSR racks which are no longer needed.

Comments related to this report
david.barker@LIGO.ORG - 14:43, Tuesday 27 August 2024 (79742)

I updated the o4 script which reports where we are in O4 and reminds us of important dates (break start/end, vent start/end).
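For context, a rough sketch of the sort of reminder such a script could print; the dates below are placeholders, not the official O4 calendar or the script's actual output:

from datetime import date

O4_START = date(2023, 5, 24)              # placeholder start date
MILESTONES = {
    'vent start': date(2024, 7, 1),       # placeholder
    'vent end':   date(2024, 8, 20),      # placeholder
}

today = date.today()
print(f"Day {(today - O4_START).days} of O4")
for name, day in sorted(MILESTONES.items(), key=lambda kv: kv[1]):
    delta = (day - today).days
    when = f"in {delta} days" if delta >= 0 else f"{-delta} days ago"
    print(f"{name}: {day.isoformat()} ({when})")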

 

Images attached to this comment
david.barker@LIGO.ORG - 09:00, Wednesday 28 August 2024 (79760)

Tue27Aug2024
LOC TIME HOSTNAME     MODEL/REBOOT
08:50:45 h1susex      h1iopsusex  <<< 1st try, restart model
08:50:59 h1susex      h1susetmx   
08:51:13 h1susex      h1sustmsx   
08:51:27 h1susex      h1susetmxpi 
08:57:11 h1susex      h1iopsusex  <<< 2nd try, reboot
08:57:24 h1susex      h1susetmx   
08:57:37 h1susex      h1sustmsx   
08:57:50 h1susex      h1susetmxpi 
 

H1 CDS
jonathan.hanks@LIGO.ORG - posted 09:58, Tuesday 27 August 2024 (79734)
WP 12058 Removed two temporarily installed switches from the h1-daq-1 rack.
I removed two switches from the h1-daq-1 rack that had been put in for testing; they are no longer needed.

I also took a moment to sort and clean up a number of SFP transceivers that had been left out on the MSR work area.
LHO VE
david.barker@LIGO.ORG - posted 08:12, Tuesday 27 August 2024 (79732)
Tue CP1 Fill

Tue Aug 27 08:08:21 2024 INFO: Fill completed in 8min 17secs

Jordan confirmed a good fill curbside.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:35, Tuesday 27 August 2024 (79731)
Ops Day Shift Start

TITLE: 08/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 3mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY: Locked for 2.5 hours; it looks like an automated relock happened before that. PEM magnetic injections are running now. 4 hours of maintenance are planned today.

 

H1 CDS
erik.vonreis@LIGO.ORG - posted 07:17, Tuesday 27 August 2024 (79730)
Workstations updated

Workstations were updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

H1 General (SUS)
oli.patane@LIGO.ORG - posted 22:02, Monday 26 August 2024 (79729)
Ops Eve Shift End

TITLE: 08/27 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Observing and have been Locked for 5.5 hours. Quiet evening with nothing to report. Feedforward measurements were run, and I also just tuned squeezing up.

A little comparison of the damping for the new PI over the past 5 days (ndscope attached)
Since starting back up last week, we have been seeing a new PI (79665), and it has caused a few locklosses. On Friday afternoon (08/24 ~00:00 UTC - crosshairs on the ndscope) it was decided that the easiest way to try and control it for now would be to change our input power from 60W to 61W. This, along with some changes to the damping gain, has helped keep it under control. Comparing the PI's behavior to the length of our lock stretches, we only see the ringups during the first 1.5-2 hours of each lock, so once we're thermalized past a certain point, they stop happening. The changes to damping (I'm not sure what was done besides raising the damping gain) also mean that they get caught and damped quickly.
 

LOG:

23:30 Observing and Locked for 2 minutes
04:27 Dropped Observing to take FF measurements
04:46 Feedforward measurements done, running SQZ alignment
04:55 Back to Observing

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:38, Monday 26 August 2024 - last comment - 17:25, Monday 26 August 2024(79726)
Ops Eve Shift Start

TITLE: 08/26 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY: Observing and just got to NLN a few minutes ago. Once we're thermalized I'll be taking LSC FF measurements.

 

Comments related to this report
oli.patane@LIGO.ORG - 17:25, Monday 26 August 2024 (79728)

An SDF diff turning the PRCLFF gain off was accepted to get into Observing.

Images attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 16:34, Monday 26 August 2024 (79720)
Ops Day Shift End

TITLE: 08/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: The shift was mostly taken up by commissioning time. There was recovery from a CDS crash this morning, but recovery was pretty straightforward. Of the two lock loss reacquisitions, one needed an initial alignment and the other did not; both needed only minor interventions.
LOG:

Start Time  System  Name           Location   Laser Haz  Task                          End Time
23:58       sys     h1             lho        YES        LVEA is laser HAZARD          18:24
16:11       FAC     Tyler          MY         n          Fire system check             19:11
16:11       FAC     RDO (Tyler)    Fire tank  n          Fixing big green              20:51
16:12       FAC     Karen          MY         n          Tech clean                    17:28
16:48       FAC     Kim            MX         n          Tech clean                    17:18
17:41       -       Betsy, Travis  FCTE       n          Moving hepa fan               19:26
20:37       FAC     Tyler          CS         n          Towing water tank around CS   20:47
H1 OpsInfo
thomas.shaffer@LIGO.ORG - posted 15:59, Monday 26 August 2024 (79725)
Restarted SUS charge nodes and PEM mag injection node

I restarted the 4 SUS in-lock charge nodes and the PEM magnetic injection node to reestablish their connections to the front ends, which can be lost after front end crashes like the one this morning (alog79708). We wanted to be prepared for these to run tomorrow morning if the IFO is locked.

H1 TCS
ryan.short@LIGO.ORG - posted 15:10, Monday 26 August 2024 (79723)
TCS Chiller Water Level Top-Off

FAMIS 27796

H1 ISC
ryan.short@LIGO.ORG - posted 14:31, Monday 26 August 2024 - last comment - 16:24, Tuesday 27 August 2024(79721)
OMC Locking with Higher DARM Offset

Before the vent, we had lowered the DARM offset used at the end of the DARM_OFFSET state for locking the OMC since we had seen the PRG fall off and cause a lockloss with the nominal offset of 9e-05 (see alog79082 for details). When locking this afternoon, I raised the offset from 6e-05 back to 9e-05 after running through DARM_OFFSET, and seeing that the PRG didn't plummet and cause a lockloss, we continued locking. The OMC locked on the first try, something that hasn't been the case recently, so having more carrier light there seems to help. I compared OMC scans from this lock against the last lock, which used the lower DARM offset; attachment 1 shows the scan with the higher offset and attachment 2 with the lower offset. According to the OMC-DCPD_SUM channel, we get ~10mW more carrier light on the OMC when locking with this higher DARM offset.

I've put this DARM offset of 9e-05 back into ISC_LOCK and loaded it. We can watch over the next couple of lock acquisitions to see if the problem with the PRG dropping off resurfaces.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 14:37, Monday 26 August 2024 (79722)OpsInfo

Tagging OpsInfo

If you see the power recycling gain start falling soon after DARM_OFFSET, you can turn off the LSC-DARM1 filter module offset, lower it, and turn it back on until the PRG stays steady, then proceed with OMC locking.
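For reference, a minimal sketch of that adjustment using pyepics; the channel names are assumed for illustration, and in practice the OFFSET button is toggled from the LSC-DARM1 filter-module MEDM screen rather than scripted:

from epics import caget, caput
import time

DARM1 = 'H1:LSC-DARM1'                  # assumed filter-module channel prefix

def set_darm1_offset(value, ramp=5.0):
    """Write a new DARM1 offset with a ramp time (TRAMP/OFFSET names assumed)."""
    caput(DARM1 + '_TRAMP', ramp)
    caput(DARM1 + '_OFFSET', value)
    time.sleep(ramp)

# Toggle the OFFSET button off on the MEDM screen, lower the offset, re-enable
# the button, and watch the PRG; repeat until it stays steady.
current = caget(DARM1 + '_OFFSET')
set_darm1_offset(0.8 * current)         # illustrative step size, not a prescription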

ryan.short@LIGO.ORG - 16:24, Tuesday 27 August 2024 (79745)ISC, OpsInfo

Now that we've locked several times successfully since yesterday with this higher DARM offset, I've also rearranged the state order in ISC_LOCK so that the DARM offset is applied before any ASC, letting the OMC work on locking while ASC converges (this is how the order used to be before the DARM offset issues started).

See attached for the new state progression around this point in locking.

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 08:23, Monday 26 August 2024 - last comment - 17:09, Monday 26 August 2024(79708)
EY Dolphin Crash

TJ, Jonathan, EJ, Dave:

Around 01:30 this morning we had a Dolphin crash of all the frontends at EY (h1susey, h1seiey, h1iscey). h1susauxey is not on the Dolphin network and so was not impacted.

We could not ping these machines, but were able to get some diagnostics from their IPMI management ports.

At 07:57 we powered down h1[sus,sei,isc]ey for about a minute and then powered them back on.

We checked that the IX Dolphin switch at EY was responsive on the network.

All the systems came back with no issues. SWWD and model WDs were cleared. TJ is recovering H1.

Comments related to this report
jonathan.hanks@LIGO.ORG - 08:29, Monday 26 August 2024 (79709)
Screenshots of the console retrieved via IPMI.  h1iscey had a similar screen to h1seiey, same crash dump.

h1iscey, h1seiey - crash in the dolphin driver.
h1susey - kernel panic, with a note that a LIGO real time module had been unloaded.
Images attached to this comment
david.barker@LIGO.ORG - 08:27, Monday 26 August 2024 (79710)

Crash time: 01:43:47 PDT

david.barker@LIGO.ORG - 08:51, Monday 26 August 2024 (79711)
Images attached to this comment
david.barker@LIGO.ORG - 12:01, Monday 26 August 2024 (79717)

Reboot/Restart LOG:

Mon26Aug2024
LOC TIME HOSTNAME     MODEL/REBOOT
07:59:27 h1susey      ***REBOOT***
07:59:30 h1seiey      ***REBOOT***
08:00:04 h1iscey      ***REBOOT***
08:01:04 h1seiey      h1iopseiey  
08:01:17 h1seiey      h1hpietmy   
08:01:30 h1seiey      h1isietmy   
08:01:32 h1susey      h1iopsusey  
08:01:45 h1susey      h1susetmy   
08:01:47 h1iscey      h1iopiscey  
08:01:58 h1susey      h1sustmsy   
08:02:00 h1iscey      h1pemey     
08:02:11 h1susey      h1susetmypi 
08:02:13 h1iscey      h1iscey     
08:02:26 h1iscey      h1caley     
08:02:39 h1iscey      h1alsey     
 
 

david.barker@LIGO.ORG - 17:09, Monday 26 August 2024 (79727)

FYI: There was a pending filter module change for h1susetmypi which got installed when this model was restarted this morning.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 16:30, Friday 23 August 2024 - last comment - 15:31, Monday 26 August 2024(79670)
operator requests for the weekend

We can edit this list as needed. 

We are having trouble locking this afternoon because of the wind, but we have some requests for the operators this weekend if they are able to relock. 

We've changed the nominal locking power to 61W, in the hopes that this might avoid the PI or let us pass through it quickly.

When we first get to NLN, please take the IFO to NLN_CAL_MEAS and run a calibration sweep.  If we stay locked for ~1.5 hours, please re-run, and if we are ever locked for more than 3 hours please re-run again.

After the calibration has run, we would like to check the camera set points since we have seen earlier today that they are not optimal and that might be related to our PI problem.  We already updated the offset for PIT1, but we'd like to check the others.   We modified the script from 76695 to engage +20 dB filters in all these servos to speed up the process.   Each DOF should take a little more than 15 minutes.  We'd like these run with the POP beam diverter open, so we can see the impact on POP18 and POP90.  The operators can run this by going to /ligo/gitcommon/labutils/beam_spot_raster and typing python camera_servo_offset_stepper.py 1 -s now  (and once 1 completes, run 2 and 3 if there's time.)

I've added 3 ndscope templates that you can use to watch what these scripts do.  The templates are in userapps/asc/h1/templates/ndscope/move_camera_offsets_{BS, DOF2_ETMX, DOF3_ETMY}.yml.  We'd like to see if any of these can increase the arm powers, or POP18.  If a better value is found, it can be set to the default by updating lscparams.py lines 457 to 465.

 

 

Comments related to this report
elenna.capote@LIGO.ORG - 17:00, Friday 23 August 2024 (79675)

The injections for LSC feedforward can also be taken after the tasks Sheila mentions here. Do these measurements after at least 2-3 hours of lock.

The templates used for these measurements are found in "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward"

Steps:

  • go to NLN_CAL_MEAS
  • open LSC feedforward medm: sitemap>LSC overview > IFO FF
  • this is where the filter banks we mention are located
  • top bank is PRCLFF, next is MICHFF and then SRCLFF1 and SRCLFF2. only SRCLFF1 is in use
  • start with the MICH preshaping measurement:
    • Ramp MICHFF gain to 0
    • turn off all filters except the high pass in FM10
    • turn off input to MICHFF filter bank
    • ramp MICHFF gain back to 1
    • run template MICHFF_excitation_ETMYpum.xml for 30 averages and save
    • undo set up by: ramp gain to zero, turn on filters, turn on input, ramp gain back to 1
  • next, measure the MICH feedforward by running MICH_excitation.xml for 30 averages and save
    •  do this with the MICH feedforward on because we are measuring a residual
  • next, measure the SRCL preshaping
    • follow the steps for the MICH set up, but do them in the SRCLFF1 bank. leave on FM10, the high pass
    • measure using SRCLFF_excitation_ETMYpum.xml for 30 averages and save
    • reset the filter bank following the MICH steps
  • next, measure SRCL feedforward
    • first, ramp the SRCLFF gain to 1
    • measure using SRCL_excitation.xml for 30 averages and save
    • ramp SRCL FF gain back to nominal value after measurement is done
  • next, measure PRCL feedforward using PRCL_excitation.xml for 30 averages and save
  • no PRCL preshaping measurement is required
elenna.capote@LIGO.ORG - 15:31, Monday 26 August 2024 (79724)

Putting in another request for LSC feedforward measurements. Please disregard the above instructions and instead follow these:

  • go to NLN_CAL_MEAS
  • open LSC feedforward medm: sitemap>LSC overview > IFO FF
  • open the template "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/MICH_excitation.xml"
    • ramp MICHFF gain to zero
    • the template is set to run accumulative. no need to change this!
    • run the template for at least 30 averages, and then stop. since it is on accumulative, you will need to watch the template and stop it yourself.
    • save the file as is; no need to change the name
    • ramp MICHFF gain back to nominal
  • repeat this process using template "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/SRCL_excitation.xml"
    • this time, ramp SRCLFF1 gain to zero
    • take measurement following steps detailed above
    • ramp gain back to nominal
  • repeat again using "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/PRCL_excitation.xml"
    • ramp PRCLFF gain to zero if not already zero
    • take measurements as above
    • ramp back to nominal

There is no need to take any other measurements at this time! I have copied the exact filenames from the folder. Do not change the filename when you save.
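For anyone scripting the bookkeeping around these, a minimal sketch of the gain ramping for the MICH case, assuming pyepics; the channel prefix and TRAMP/GAIN names are guesses for illustration, and the template itself is still run and saved by hand in diaggui:

from epics import caget, caput
import time

def ramp_gain(prefix, value, ramp=10.0):
    """Ramp a filter-module gain to 'value' over 'ramp' seconds (channel names assumed)."""
    caput(prefix + '_TRAMP', ramp)
    caput(prefix + '_GAIN', value)
    time.sleep(ramp)

michff = 'H1:LSC-MICHFF'               # assumed channel prefix
nominal = caget(michff + '_GAIN')      # remember the nominal gain

ramp_gain(michff, 0)                   # feedforward off for the measurement
# ... run MICH_excitation.xml in diaggui for >= 30 averages, stop, and save ...
ramp_gain(michff, nominal)             # restore the nominal gain

The same pattern would apply to the SRCLFF1 and PRCLFF banks for the other two templates.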
