LHO General
thomas.shaffer@LIGO.ORG - posted 16:34, Monday 26 August 2024 (79720)
Ops Day Shift End

TITLE: 08/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: The shift was mostly taken up by commissioning time. There was recovery from a CDS crash this morning, but it was fairly straightforward. Of the reacquisitions from the two lock losses, one needed an initial alignment and the other did not; both needed minor interventions.
LOG:

Start Time  System  Name  Location  Laser_Haz  Task  End Time
23:58 sys h1 lho YES LVEA is laser HAZARD 18:24
16:11 FAC Tyler MY n Fire system check 19:11
16:11 FAC RDO (Tyler) Fire tank n Fixing big green 20:51
16:12 FAC Karen MY n Tech clean 17:28
16:48 FAC Kim MX n Tech clean 17:18
17:41 - Betsy, Travis FCTE n Moving hepa fan 19:26
20:37 FAC Tyler CS n Towing water tank around CS 20:47
H1 OpsInfo
thomas.shaffer@LIGO.ORG - posted 15:59, Monday 26 August 2024 (79725)
Restarted SUS charge nodes and PEM mag injection node

I restarted the 4 SUS in-lock charge nodes and the PEM magnetic injection node to reestablish their connections to the front ends, which can be lost after front-end crashes like the one this morning (alog 79708). We wanted these to be ready to run tomorrow morning if the IFO is locked.

H1 TCS
ryan.short@LIGO.ORG - posted 15:10, Monday 26 August 2024 (79723)
TCS Chiller Water Level Top-Off

FAMIS 27796

H1 ISC
ryan.short@LIGO.ORG - posted 14:31, Monday 26 August 2024 - last comment - 16:24, Tuesday 27 August 2024(79721)
OMC Locking with Higher DARM Offset

Before the vent, we had lowered the DARM offset used at the end of the DARM_OFFSET state for locking the OMC, since we had seen the PRG fall off and cause a lockloss with the nominal offset of 9e-05 (see alog 79082 for details). When locking this afternoon, I raised the offset from 6e-05 back to 9e-05 after running through DARM_OFFSET; since the PRG didn't plummet and cause a lockloss, we continued locking. The OMC locked on the first try, something that hasn't been the case recently, so having more carrier light there seems to help. I compared OMC scans from this lock against the last lock, which used the lower DARM offset; attachment 1 shows the scan with the higher offset and attachment 2 the scan with the lower offset. According to the OMC-DCPD_SUM channel, we get ~10mW more carrier light on the OMC when locking with this higher DARM offset.

I've put this DARM offset of 9e-05 back into ISC_LOCK and loaded it. We can watch over the next couple of lock acquisitions to see if the problem with the PRG dropping off resurfaces.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 14:37, Monday 26 August 2024 (79722) OpsInfo

Tagging OpsInfo

If you see the power recycling gain start falling soon after DARM_OFFSET, you can turn off the LSC-DARM1 filter module offset, lower it, and turn it back on, repeating until the PRG stays steady, then proceed with OMC locking.
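A minimal sketch of that stepping procedure, assuming ezca's filter-module switch interface; the step values and settle time below are illustrative choices, not part of the documented procedure.

```python
# Hedged sketch: walk the LSC-DARM1 offset down until the PRG holds steady.
import time
from ezca import Ezca

ezca = Ezca()  # assumes the H1: prefix is picked up from the site environment

def step_darm_offset(new_offset, settle=30):
    """Toggle the DARM1 offset off, change its value, and re-engage it."""
    ezca.switch('LSC-DARM1', 'OFFSET', 'OFF')
    ezca['LSC-DARM1_OFFSET'] = new_offset
    ezca.switch('LSC-DARM1', 'OFFSET', 'ON')
    time.sleep(settle)  # watch the PRG here before deciding to step again

# Walk down from the nominal 9e-05 toward 6e-05, stopping once the power
# recycling gain stays steady, then proceed with OMC locking.
for offset in (8e-05, 7e-05, 6e-05):
    step_darm_offset(offset)
```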

ryan.short@LIGO.ORG - 16:24, Tuesday 27 August 2024 (79745) ISC, OpsInfo

Now that we've locked several times successfully since yesterday with this higher DARM offset, I've also rearranged the state order in ISC_LOCK so that the DARM offset is applied before any ASC so that the OMC can work on locking while ASC converges (this is how the order used to be before the DARM offset issues started).

See attached for the new state progression around this point in locking.

Images attached to this comment
H1 ISC
naoki.aritomi@LIGO.ORG - posted 14:01, Monday 26 August 2024 (79719)
OMC fringe wrapping measurement after OFI replacement

I measured OMC fringe wrapping after the OFI replacement. First I ran the dtt template from 78942, which caused a lockloss. On the next try, I reduced the excitation amplitude from 3421 to 600, which corresponds to 2.4 um pp of OMC L motion (H1:SUS-OMC_M1_DAMP_L_IN1_DQ). The attachment shows that the OMC scatter is worse than it was in April (blue: April vs purple: today). We may need to adjust the OFI temperature.

Images attached to this report
H1 PSL
anthony.sanchez@LIGO.ORG - posted 13:50, Monday 26 August 2024 (79718)
PSL Status Report - Weekly

FAMIS 26290


Laser Status:
    NPRO output power is 1.831W (nominal ~2W)
    AMP1 output power is 64.55W (nominal ~70W)
    AMP2 output power is 137.1W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 6 days, 23 hr 49 minutes
    Reflected power = 21.13W
    Transmitted power = 105.6W
    PowerSum = 126.7W

FSS:
    It has been locked for 0 days 1 hr and 37 min
    TPD[V] = 0.6479V

ISS:
    The diffracted power is around 1.9%
    Last saturation event was 0 days 1 hours and 37 minutes ago


Possible Issues:
    AMP1 power is low
    PMC reflected power is high
    FSS TPD is low
    ISS diffracted power is low

H1 PSL
ryan.short@LIGO.ORG - posted 10:53, Monday 26 August 2024 (79716)
PSL 10-Day Trends

FAMIS 21189

Nothing much of note this week. PMC REFL is still increasing slowly while PMC TRANS is decreasing.

The FSS TPD signal is still low, and since I wasn't able to increase it much last week, we plan to go into the enclosure and tune up the FSS path on-table soon to fix it.

Images attached to this report
H1 CAL
thomas.shaffer@LIGO.ORG - posted 10:50, Monday 26 August 2024 (79715)
PCALY SDF diffs

There were a few PCAL SDF diffs on CALEY when we made it back to low noise today. It looked like these values were loaded into EPICS from the safe.snap and disagreed with what was saved in the observe.snap file. I confirmed with Francisco that this was the case, and he had me verify that they agreed with alog 77386. I then also saved these new values in the safe.snap file. Interestingly, these channels are not monitored in the safe.snap.

Images attached to this report
H1 ISC
thomas.shaffer@LIGO.ORG - posted 10:45, Monday 26 August 2024 - last comment - 09:08, Friday 30 August 2024(79714)
Ran A2L for all quads

I ran A2L while we were still thermalizing and might run it again later. There was no change for ETMY, but the ITMs had large changes. I've accepted these in SDF; I reverted the tramps that the picture shows I accepted. I didn't notice much of a change in DARM or in the DARM BLRMS.


DOF       Initial    Final     Diff
ETMX P     3.12       3.13      0.01
ETMX Y     4.79       4.87      0.08
ETMY P     4.48       4.48      0.00
ETMY Y     1.13       1.13      0.00
ITMX P    -1.07      -0.98      0.09
ITMX Y     2.72       2.87      0.15
ITMY P    -0.47      -0.37      0.10
ITMY Y    -2.30      -2.48     -0.18
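For reference, a minimal sketch of applying gains like these by hand, assuming the quad A2L gains live in the L2 DRIVEALIGN P2L/Y2L paths; the channel naming is my assumption, not taken from the A2L script.

```python
# Hedged sketch: write the "Final" A2L gains from the table above via ezca.
from ezca import Ezca

ezca = Ezca()  # assumes the H1: prefix comes from the site environment

new_gains = {                          # values copied from the Final column above
    ('ETMX', 'P2L'): 3.13,  ('ETMX', 'Y2L'): 4.87,
    ('ETMY', 'P2L'): 4.48,  ('ETMY', 'Y2L'): 1.13,
    ('ITMX', 'P2L'): -0.98, ('ITMX', 'Y2L'): 2.87,
    ('ITMY', 'P2L'): -0.37, ('ITMY', 'Y2L'): -2.48,
}

for (optic, dof), gain in new_gains.items():
    # SUS-<OPTIC>_L2_DRIVEALIGN_<DOF>_GAIN naming is an assumption here
    ezca[f'SUS-{optic}_L2_DRIVEALIGN_{dof}_GAIN'] = gain
```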
 

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 15:06, Wednesday 28 August 2024 (79776)

I am not sure how the A2L is run these days, but there is some DARM coherence with DHARD Y that makes me think we should recheck the Y2L gains. See the attached screenshot from today's lock.

As a reminder, the work that Gabriele and I did last April found that the DHARD Y coupling had two distinct frequency regimes: a steep low-frequency coupling that depended heavily on the AS A WFS yaw offset, and a much flatter coupling above ~30 Hz that depended much more strongly on the Y2L gain of ITMY (this was, I think, before we started adjusting all the A2L gains on the test masses). Relevant alogs: 76407 and 76363.

Based on this coherence, the Y2L gains at least deserve another look. Is it possible to track a DHARD Y injection during the test?
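A minimal sketch of this kind of coherence check using gwpy; the channel names and GPS span below are placeholders, not taken from this alog.

```python
# Hedged sketch: coherence between calibrated DARM and the DHARD Y control signal.
from gwpy.timeseries import TimeSeries

start, end = 1408600000, 1408600600  # placeholder 10-minute span during a lock

darm  = TimeSeries.fetch('H1:GDS-CALIB_STRAIN', start, end)
dhard = TimeSeries.fetch('H1:ASC-DHARD_Y_OUT_DQ', start, end)

# 8 s FFTs give 0.125 Hz resolution; look at the band where the coupling lives
coh = darm.coherence(dhard, fftlength=8, overlap=4)
print(coh.crop(10, 60))
```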

Images attached to this comment
thomas.shaffer@LIGO.ORG - 09:08, Friday 30 August 2024 (79813)

Since I converted this script to run on all TMs and DOFs simultaneously, its performance hasn't been stellar. We've only run it a handful of times, but we definitely need to change something. One difference between the old version and the new one is the frequencies of the injected lines. Right now they range from 23 to 31.5 Hz, but perhaps these need to be moved. In June, Sheila and I ran it, then swapped the highest and lowest frequencies to see if it made a difference (alog 78495); in that one test it didn't seem to matter.

Sheila and I are talking about AS WFS offset and DHARD injection testing to try to understand this coupling a bit better. Planning is in progress.

LHO VE
david.barker@LIGO.ORG - posted 10:01, Monday 26 August 2024 (79712)
Mon CP1 Fill

Mon Aug 26 08:02:15 2024 INFO: Fill completed in 2min 13secs

Short but a good fill. Gerardo is reducing the LLCV.

Jordan cleared an ice buildup at the end of the discharge pipe.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 08:23, Monday 26 August 2024 - last comment - 17:09, Monday 26 August 2024(79708)
EY Dolphin Crash

TJ, Jonathan, EJ, Dave:

Around 01:30 this morning we had a Dolphin crash of all the front ends at EY (h1susey, h1seiey, h1iscey). h1susauxey is not on the Dolphin network and so was not impacted.

We could not ping these machines, but were able to get some diagnostics from their IPMI management ports.

At 07:57 we powered down h1[sus,sei,isc]ey for about a minute and then powered them back on.

We checked that the IX Dolphin switch at EY was responsive on the network.

All the systems came back with no issues. SWWD and model WDs were cleared. TJ is recovering H1.

Comments related to this report
jonathan.hanks@LIGO.ORG - 08:29, Monday 26 August 2024 (79709)
Screen shots of the console retrieved via ipmi.  h1iscey had a similar screen to h1seiey, same crash dump.

h1iscey, h1seiey - crash in the dolphin driver.
h1susey - kernel panic, with a note that a LIGO real time module had been unloaded.
Images attached to this comment
david.barker@LIGO.ORG - 08:27, Monday 26 August 2024 (79710)

Crash time: 01:43:47 PDT

david.barker@LIGO.ORG - 08:51, Monday 26 August 2024 (79711)
Images attached to this comment
david.barker@LIGO.ORG - 12:01, Monday 26 August 2024 (79717)

Reboot/Restart LOG:

Mon26Aug2024
LOC TIME HOSTNAME     MODEL/REBOOT
07:59:27 h1susey      ***REBOOT***
07:59:30 h1seiey      ***REBOOT***
08:00:04 h1iscey      ***REBOOT***
08:01:04 h1seiey      h1iopseiey  
08:01:17 h1seiey      h1hpietmy   
08:01:30 h1seiey      h1isietmy   
08:01:32 h1susey      h1iopsusey  
08:01:45 h1susey      h1susetmy   
08:01:47 h1iscey      h1iopiscey  
08:01:58 h1susey      h1sustmsy   
08:02:00 h1iscey      h1pemey     
08:02:11 h1susey      h1susetmypi 
08:02:13 h1iscey      h1iscey     
08:02:26 h1iscey      h1caley     
08:02:39 h1iscey      h1alsey     
 
 

david.barker@LIGO.ORG - 17:09, Monday 26 August 2024 (79727)

FYI: There was a pending filter module change for h1susetmypi which got installed when this model was restarted this morning.

LHO General
thomas.shaffer@LIGO.ORG - posted 07:36, Monday 26 August 2024 - last comment - 08:17, Monday 26 August 2024(79705)
Ops Day Shift Start

TITLE: 08/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 0mph Gusts, 0mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY: Looks like we lost the SUSEY and ISCEY front ends. This created connection errors and put many Guardians, including IFO_NOTIFY, into an error state. Contacting CDS team now.

Comments related to this report
thomas.shaffer@LIGO.ORG - 08:17, Monday 26 August 2024 (79706)

Dave and Jonathan have fixed the CDS FE issues, and we are now starting recovery. I also found the HAM5 ISI tripped, as well as SRM and the OFI; it looks like this happened about 4 hours ago, a few hours after the FEs crashed. No idea why they tripped yet.

LHO General
corey.gray@LIGO.ORG - posted 16:29, Saturday 24 August 2024 - last comment - 08:17, Monday 26 August 2024(79683)
Sat DAY Ops Summary

TITLE: 08/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

Have had the BS camera die on us a couple of times in the last 24 hours (this requires contacting Dave to restart the camera's computer, at least until this camera can be moved to another computer), so the beginning of the shift was spent restoring this and also figuring out why H1 had a lockloss BUT did not drop the input power down from 61W to 2W.

After the camera issue was fixed, ran an alignment with no issues, and then took H1 all the way back to NLN, also with no issues.

Then completed taking calibration measurements after 1.5 hrs and 3 hrs. Oli also ran LSC feedforward measurements.

Then there was ANOTHER BS Camera Computer Crash!  Dave brought us back fast.  

Now back to OBSERVING!
LOG:

Comments related to this report
ryan.short@LIGO.ORG - 08:17, Monday 26 August 2024 (79707) ISC, OpsInfo

Looking into why the input power didn't come back down after a lockloss at ADS_TO_CAMERAS: it seems the proper decorators that check for a lockloss are missing from the run method (but are there in main). This means that while ISC_LOCK was waiting for the camera servos to turn on, it didn't notice that the IFO had lost lock, and therefore didn't run through the LOCKLOSS or DOWN states, which would have reset the input power.

I've added the decorators to the run method of ADS_TO_CAMERAS, so this shouldn't happen again.
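A minimal sketch of the pattern being described here, with illustrative decorator and channel names rather than the actual ISC_LOCK code:

```python
# Hedged sketch: a Guardian state where the lockloss check must decorate run()
# as well as main(). Decorator and channel names below are illustrative only;
# 'ezca' is provided to Guardian modules at runtime.
from guardian import GuardState

def assert_locked(func):
    """Stand-in for the real lockloss-checking decorator: request a jump to
    LOCKLOSS if the (hypothetical) lock-indicator channel drops."""
    def wrapper(self):
        if ezca['GRD-ISC_LOCK_OK'] == 0:   # hypothetical lock indicator
            return 'LOCKLOSS'
        return func(self)
    return wrapper

class ADS_TO_CAMERAS(GuardState):

    @assert_locked
    def main(self):
        ezca['CAM-SERVO_ON'] = 1           # placeholder for the camera-servo setup

    @assert_locked                         # this was the piece missing from run()
    def run(self):
        return ezca['CAM-SERVO_CONVERGED'] == 1   # placeholder completion check
```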

 

H1 ISC
sheila.dwyer@LIGO.ORG - posted 16:30, Friday 23 August 2024 - last comment - 15:31, Monday 26 August 2024(79670)
operator requests for the weekend

We can edit this list as needed. 

We are having trouble locking this afternoon because of the wind, but we have some requests for the operators this weekend if they are able to relock. 

We've changed the nominal locking power to 61W, in the hopes that this might avoid the PI or let us pass through it quickly.

When we first get to NLN, please take the IFO to NLN_CAL_MEAS and run a calibration sweep.  If we stayed locked for ~1.5 hours, please re-run, and if we are ever locked for more than 3 hours please re-run again. 

After the calibration has run, we would like to check the camera set points, since we saw earlier today that they are not optimal and that might be related to our PI problem. We already updated the offset for PIT1, but we'd like to check the others. We modified the script from 76695 to engage +20 dB filters in all these servos to speed up the process. Each DOF should take a little more than 15 minutes. We'd like these run with the POP beam diverter open so we can see the impact on POP18 and POP90. The operators can run this by going to /ligo/gitcommon/labutils/beam_spot_raster and typing python camera_servo_offset_stepper.py 1 -s now (and once 1 completes, run 2 and 3 if there's time).

I've added 3 ndscope templates that you can use to watch what these scripts do. The templates are in userapps/asc/h1/templates/ndscope/move_camera_offsets_{BS, DOF2_ETMX, DOF3_ETMY}.yml. We'd like to see if any of these can increase the arm powers or POP18. If a better value is found, it can be set as the default by updating lscparams.py lines 457 to 465.

 

 

Comments related to this report
elenna.capote@LIGO.ORG - 17:00, Friday 23 August 2024 (79675)

The injections for LSC feedforward can also be taken after the tasks Sheila mentions here. Do these measurements after at least 2-3 hours of lock.

The templates used for these measurements are found in "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward"

Steps:

  • go to NLN_CAL_MEAS
  • open LSC feedforward medm: sitemap>LSC overview > IFO FF
  • this is where the filter banks we mention are located
  • top bank is PRCLFF, next is MICHFF and then SRCLFF1 and SRCLFF2. only SRCLFF1 is in use
  • start with the MICH preshaping measurement (a set-up sketch follows this list):
    • Ramp MICHFF gain to 0
    • turn off all filters except the high pass in FM10
    • turn off input to MICHFF filter bank
    • ramp MICHFF gain back to 1
    • run template MICHFF_excitation_ETMYpum.xml for 30 averages and save
    • undo set up by: ramp gain to zero, turn on filters, turn on input, ramp gain back to 1
  • next, measure the MICH feedforward by running MICH_excitation.xml for 30 averages and save
    •  do this with the MICH feedforward on because we are measuring a residual
  • next, measure the SRCL preshaping
    • follow the steps for the MICH set up, but do them in the SRCLFF1 bank. leave on FM10, the high pass
    • measure using SRCLFF_excitation_ETMYpum.xml for 30 averages and save
    • reset the filter bank following the MICH steps
  • next, measure SRCL feedforward
    • first, ramp the SRCLFF gain to 1
    • measure using SRCL_excitation.xml for 30 averages and save
    • ramp SRCL FF gain back to nominal value after measurement is done
  • next, measure PRCL feedforward using PRCL_excitation.xml for 30 averages and save
  • no PRCL preshaping measurement is required
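As referenced above, a minimal sketch of the MICHFF preshaping set-up and teardown, assuming the standard GAIN/TRAMP filter-module fields and ezca's switch interface; the ramp time is my choice.

```python
# Hedged sketch: configure the MICHFF bank for the preshaping measurement, then undo it.
import time
from ezca import Ezca

ezca = Ezca()  # assumes the H1: prefix from the site environment
BANK = 'LSC-MICHFF'

def ramp_gain(value, ramp=5):
    ezca[BANK + '_TRAMP'] = ramp
    ezca[BANK + '_GAIN'] = value
    time.sleep(ramp + 1)

# Set up: gain to 0, all filters off except the FM10 high pass, input off, gain back to 1
ramp_gain(0)
ezca.switch(BANK, 'FM1', 'FM2', 'FM3', 'FM4', 'FM5', 'FM6', 'FM7', 'FM8', 'FM9', 'OFF')
ezca.switch(BANK, 'FM10', 'ON')
ezca.switch(BANK, 'INPUT', 'OFF')
ramp_gain(1)

# ... run MICHFF_excitation_ETMYpum.xml for 30 averages, then undo the set-up.
# (All FM1-FM9 are re-engaged here for simplicity; in practice restore only the
# filters that were on beforehand.)
ramp_gain(0)
ezca.switch(BANK, 'FM1', 'FM2', 'FM3', 'FM4', 'FM5', 'FM6', 'FM7', 'FM8', 'FM9', 'ON')
ezca.switch(BANK, 'INPUT', 'ON')
ramp_gain(1)
```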
elenna.capote@LIGO.ORG - 15:31, Monday 26 August 2024 (79724)

Putting in another request for LSC feedforward measurements. Please disregard the above instructions and instead follow these:

  • go to NLN_CAL_MEAS
  • open LSC feedforward medm: sitemap>LSC overview > IFO FF
  • open the template "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/MICH_excitation.xml"
    • ramp MICHFF gain to zero
    • the template is set to run accumulative. no need to change this!
    • run the template for at least 30 averages, and then stop. Since it is set to accumulative, you will need to watch the template and stop it yourself.
    • save the file as is; no need to change the name
    • ramp MICHFF gain back to nominal
  • repeat this process using template "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/SRCL_excitation.xml"
    • this time, ramp SRCLFF1 gain to zero
    • take measurement following steps detailed above
    • ramp gain back to nominal
  • repeat again using "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/PRCL_excitation.xml"
    • ramp PRCLFF gain to zero if not already zero
    • take measurements as above
    • ramp back to nominal

There is no need to take any other measurements at this time! I have copied the exact filenames from the folder. Do not change the filename when you save.
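A minimal sketch of the gain handling these steps describe, assuming the standard GAIN/TRAMP filter-module fields; the ramp time and the operator prompt are my choices.

```python
# Hedged sketch: bracket each feedforward measurement by ramping the relevant
# gain to zero and restoring it afterwards.
import time
from ezca import Ezca

ezca = Ezca()  # assumes the H1: prefix from the site environment

def ramp_gain(bank, value, ramp=5):
    ezca[f'LSC-{bank}_TRAMP'] = ramp
    ezca[f'LSC-{bank}_GAIN'] = value
    time.sleep(ramp + 1)

for bank in ('MICHFF', 'SRCLFF1', 'PRCLFF'):
    nominal = ezca[f'LSC-{bank}_GAIN']   # record the nominal gain first
    ramp_gain(bank, 0)
    input(f'{bank} gain is 0 -- run the excitation template, save, then press Enter')
    ramp_gain(bank, nominal)             # restore the nominal gain
```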

H1 PSL (DetChar)
ryan.short@LIGO.ORG - posted 14:22, Monday 19 August 2024 - last comment - 10:36, Monday 26 August 2024(79593)
PSL Control Box 1 Moved to Separate Power Supply (WP 12051)

R. Short, F. Clara

In the ongoing effort to mitigate the 9.5Hz comb recently found to be coming from the PSL flow meters (alog 79533), this afternoon Fil put the PSL control box in the LVEA PSL racks onto its own separate 24V bench-top power supply. Once I shut down the PSL in a controlled manner (in order: ISS, FSS, PMC, AMP2, AMP1, NPRO, chiller), Fil switched CB1 to the separate power supply, then I brought the system back up without any issues. I'm leaving the ISS off while the system warms back up, and I'm leaving WP 12051 open until enough data has been collected to say whether this separate supply helps with the 9.5Hz comb.

Please be aware of the new power supply cable running from under the table outside the PSL enclosure to behind the racks; Fil placed a cone here to warn of the potential trip hazard.

Comments related to this report
ansel.neunzert@LIGO.ORG - 10:36, Monday 26 August 2024 (79713) DetChar

This looks promising! I have attached a comparison of high-resolution daily spectra from July (orange) and yesterday (black), zoomed in on a strong peak of the 9.5 Hz comb triplet. Note that the markers tag the approximate average peak positions of the combs from O4a, so they are a couple of bins off from the actual positions of the July peaks.

Images attached to this comment