Displaying reports 11281-11300 of 84703.
Reports until 13:11, Tuesday 30 January 2024
H1 CAL
louis.dartez@LIGO.ORG - posted 13:11, Tuesday 30 January 2024 (75538)
pydarm reports from start of O4a to late June repaired and prepped for uncertainty budget generation
This work is following up on LHO:73735. 

I've written a script that retroactively fixes the pyDARM parameter issues discussed in LHO:73735. The script lives here on the CDS workstations: /ligo/home/louis.dartez/projects/20240124_script_fix_bad_inis_from_alog_LHO73735/fix_bad_site_inis.py.

Since the pyDARM parameter INI files were initially written in error, anyone trying to process old data corresponding to the affected reports' times would be using the wrong IFO models. As such, I've used the script above to fix the affected reports in situ. The original reports have been copied into /ligo/groups/cal/H1/reports/archive/reports_preserved_from_fix_for_LHO73735.
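The actual script isn't reproduced in this alog, but the preserve-then-fix-in-situ pattern it describes (archive the original report, then rewrite the bad INI parameters in place) could be sketched roughly like this. All paths, section names, and parameter values below are hypothetical placeholders, not the real fix:

```python
import configparser
import shutil
from pathlib import Path

# Hypothetical relative paths; the real script and archive live under
# /ligo as noted in the alog text above.
REPORTS_DIR = Path("reports")
ARCHIVE_DIR = Path("reports_preserved")

def fix_report_ini(report: Path, fixes: dict):
    """Back up a report directory, then rewrite bad INI parameters in situ.

    fixes maps (section, option) -> new value; all placeholders here.
    """
    # Preserve the original report before touching it.
    dest = ARCHIVE_DIR / report.name
    if not dest.exists():
        shutil.copytree(report, dest)
    # Rewrite the offending parameters in place.
    ini = report / "pydarm_H1.ini"
    cfg = configparser.ConfigParser()
    cfg.read(ini)
    for (section, option), value in fixes.items():
        cfg.set(section, option, value)
    with open(ini, "w") as f:
        cfg.write(f)
```

The key design point is ordering: the archive copy happens before any write, so a failed or interrupted fix never leaves the only copy of a report in a half-edited state.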
H1 CDS
david.barker@LIGO.ORG - posted 11:57, Tuesday 30 January 2024 - last comment - 12:07, Tuesday 30 January 2024(75625)
CDS Maintenance Summary: Tuesday 30th January 2024

WP11646 New h1sqz model

Daniel, Dave:

A new h1sqz model was installed. DAQ restart was required

WP11651 Add New SQZ_PMC Guardian node to DAQ

Vicky, Camilla, Daniel, Dave:

The new SQZ_PMC GRD node was added to H1EPICS_GRD.ini. DAQ + EDC restart required

DAQ Restart

Dave:

The DAQ was restarted for the above changes. Sequence was 0-leg, EDC, 1-leg.

No major problems with the restart; both GDS daqds had to be restarted a second time for channel list synchronization.

Comments related to this report
david.barker@LIGO.ORG - 12:03, Tuesday 30 January 2024 (75626)

DAQ Changes:

Key: <channame> <datatype (4=float)> <datarate>

Fast Channels Removed

none

Fast Channels Added

< H1:SQZ-PMC_REFL_LF_OUT_DQ 4 2048
< H1:SQZ-PMC_REFL_RF35_I_NORM_DQ 4 16384
< H1:SQZ-PMC_REFL_RF35_Q_NORM_DQ 4 2048
< H1:SQZ-PMC_SERVO_CTRL_OUT_DQ 4 16384
< H1:SQZ-PMC_SERVO_ERR_OUT_DQ 4 16384
< H1:SQZ-PMC_SERVO_SLOW_OUT_DQ 4 2048
< H1:SQZ-PMC_TRANS_LF_OUT_DQ 4 16384

Slow Channels Removed

> H1:SQZ-FIBR_PD_AWHITEN_SET1 4 16
> H1:SQZ-FIBR_PD_AWHITEN_SET2 4 16
> H1:SQZ-FIBR_PD_AWHITEN_SET3 4 16
> H1:SQZ-FIBR_PD_LF_MASK 4 16

Slow Channels Added

< H1:GRD-SQZ_PMC_ACTIVE 4 16
< H1:GRD-SQZ_PMC_ARCHIVE_ID 4 16
< H1:GRD-SQZ_PMC_CONNECT 4 16
< H1:GRD-SQZ_PMC_ERROR 4 16
< H1:GRD-SQZ_PMC_EXECTIME 4 16
< H1:GRD-SQZ_PMC_INTENT 4 16
< H1:GRD-SQZ_PMC_LOAD_STATUS 4 16
< H1:GRD-SQZ_PMC_MODE 4 16
< H1:GRD-SQZ_PMC_NOMINAL_N 4 16
< H1:GRD-SQZ_PMC_NOTIFICATION 4 16
< H1:GRD-SQZ_PMC_OK 4 16
< H1:GRD-SQZ_PMC_OP 4 16
< H1:GRD-SQZ_PMC_PV_TOTAL 4 16
< H1:GRD-SQZ_PMC_READY 4 16
< H1:GRD-SQZ_PMC_REQUEST_N 4 16
< H1:GRD-SQZ_PMC_SPM_CHANGED 4 16
< H1:GRD-SQZ_PMC_SPM_MONITOR 4 16
< H1:GRD-SQZ_PMC_SPM_TOTAL 4 16
< H1:GRD-SQZ_PMC_STALLED 4 16
< H1:GRD-SQZ_PMC_STATE_N 4 16
< H1:GRD-SQZ_PMC_STATUS 4 16
< H1:GRD-SQZ_PMC_TARGET_N 4 16
< H1:GRD-SQZ_PMC_VERSION 4 16
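As a side note, the channel diff above follows the key at the top of this comment (`<channame> <datatype> <datarate>`), with `<` marking added channels and `>` marking removed ones. A small illustrative parser for lines in that format (just a sketch, not a CDS tool):

```python
from typing import NamedTuple

class ChanDiff(NamedTuple):
    name: str
    datatype: int   # 4 = float, per the key above
    rate: int       # samples per second
    added: bool     # '<' lines are added, '>' lines are removed

def parse_daq_diff(text: str) -> list[ChanDiff]:
    """Parse '<'/'>' channel-list diff lines like the ones in this report."""
    out = []
    for line in text.splitlines():
        parts = line.split()
        # Skip headers, blank lines, and "none" entries.
        if len(parts) == 4 and parts[0] in ("<", ">"):
            out.append(ChanDiff(parts[1], int(parts[2]), int(parts[3]),
                                added=(parts[0] == "<")))
    return out
```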
 

david.barker@LIGO.ORG - 12:07, Tuesday 30 January 2024 (75627)

Restart/Reboot log:---------------------------------------------------------------------------------------

Tue30Jan2024
LOC TIME HOSTNAME     MODEL/REBOOT
08:57:21 h1susb123    h1iopsusb123  <<< Recovery from Monday Dolphin Glitch
08:57:35 h1susb123    h1susitmy   
08:57:49 h1susb123    h1susbs     
08:58:03 h1susb123    h1susitmx   
08:58:17 h1susb123    h1susitmpi  
09:00:02 h1sush2a     h1iopsush2a 
09:00:16 h1sush2a     h1susmc1    
09:00:30 h1sush2a     h1susmc3    
09:00:44 h1sush2a     h1susprm    
09:00:47 h1sush34     h1iopsush34 
09:00:58 h1sush2a     h1suspr3    
09:01:01 h1sush34     h1susmc2    
09:01:15 h1sush34     h1suspr2    
09:01:29 h1sush34     h1sussr2    
09:02:37 h1sush56     h1iopsush56 
09:02:56 h1sush56     h1sussrm    
09:03:10 h1sush56     h1sussr3    
09:03:24 h1sush56     h1susifoout 
09:03:38 h1sush56     h1sussqzout 


09:37:30 h1lsc0       h1sqz       <<< New sqz model


09:40:45 h1daqdc0     [DAQ] <<< 0-leg restart
09:40:58 h1daqfw0     [DAQ]
09:40:58 h1daqtw0     [DAQ]
09:40:59 h1daqnds0    [DAQ]
09:41:07 h1daqgds0    [DAQ]


09:41:33 h1susauxb123 h1edc[DAQ] <<< EDC restart for GRD node


09:42:06 h1daqgds0    [DAQ] <<< 2nd gds0 restart


09:44:58 h1daqdc1     [DAQ] <<< 1-leg restart
09:45:07 h1daqfw1     [DAQ]
09:45:08 h1daqtw1     [DAQ]
09:45:09 h1daqnds1    [DAQ]
09:45:18 h1daqgds1    [DAQ]
09:45:51 h1daqgds1    [DAQ] <<< 2nd gds1 restart
 

H1 General
anthony.sanchez@LIGO.ORG - posted 11:35, Tuesday 30 January 2024 (75623)
Tuesday Ops Mid shift and Ops Swap

TITLE: 01/30 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Tony
INCOMING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 5mph 5min avg
    Primary useism: 0.09 μm/s
    Secondary useism: 0.46 μm/s
QUICK SUMMARY:

Recovering from the Dolphin timing crash described in alog 75612.
We took all the SUS suspensions to SAFE as soon as we could.
Dave restarted all the IOPs.
Rahul recovered all the SUS from SAFE.
The CDS issues seemed to be resolved by 17:28 UTC.

Daniel started working on Beckhoff restarts and ran into issues at EY.

The SQZ model was restarted, which did affect FW0 & FW1, so there is a brief time of no data being written to the frames.

Moving Forklift in the LVEA

18:00 UTC Beckhoff slow controls just went down across the site, including the PSL.
18:04 UTC Beckhoff back online.

 
PSL_ENV_LASERRM_ACN_TEMP_DEGF alarm 18:33 UTC
Trended the channel, looks fine.

Beam Splitter ISI tripped.
Beam Splitter ISI Reset.
Beam Splitter SUS is in Damped.

The LVEA is now LASER SAFE for the HAM3 door removal.

 

H1 ISC (ISC, SUS)
keita.kawabe@LIGO.ORG - posted 11:25, Tuesday 30 January 2024 - last comment - 14:32, Tuesday 30 January 2024(75620)
HAM6 work Jan/29/2024: Alignment of the laser into the new OMC, day 5 (Camilla, Preet, Rahul, Sheila, Betsy, Keita)

day 1 (alog 75548), day 2 (alog 75557), day 3 (alog 75575), day 4 (alog 75601)

Recovery from reboots

After some things were rebooted, we found that the suspension slider offsets for all OMs, SRM and ZM5 had reverted back to old-ish numbers. Camilla and I manually restored them.

Camilla saw that the beam was not quite back to the old position on one of the HAM7 QPDs, but it wasn't that bad.

The beam was already on ASC-AS_C but not centered, so I centered it using SRM.

Following that, I had to make some minor tweaks to OM1/2/3 to quickly recenter the OMC QPDs.

All values are yesterday/today:

             SRM                OM1       OM2       OM3
PIT slider   1944.6/1904.6      20/90     0/-80     -590/-550
YAW slider   -2940.6/-2944.6    650/610   760/760   60/-74
DAC max      didn't care/57k (18-bit DAC)   11k/7.1k   7k/4.5k   9k/6.7k

HAM6 irises are centered

Preet and Sheila recentered the two irises on HAM6. From this point on, these irises are a fiducial for the IFO beam.

OMC trans video beam and OMCR beam dump will be done later this week

LVEA was transitioned to laser safe for HAM3 door removal. We'll continue suspension work in HAM6.

Comments related to this report
sheila.dwyer@LIGO.ORG - 14:32, Tuesday 30 January 2024 (75632)

Here are photos of the beams on the irises.  

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 11:06, Tuesday 30 January 2024 (75621)
Corner station Dolphin Glitch, Mon 29jan2024 18:45:55 PST

Jonathan, Erik, Tony, Rahul, Richard, Dave:

Last night we had another one of the corner station glitches which took down all the models on h1susb123, h1sush2a, h1sush34 and h1sush56.

The glitch happened at 18:45:55 PST Mon 29th Jan 2024 during the O4a,b break.

This has happened several times before, details in FRS30324, with a regularity of every 80 to 100 days.

After the glitch, the dolphin IPC continued to function, which meant that the SWWDs did not trip the associated Seismic systems.

DMESG logs did not show anything.

I ran a complete set of Dolphin DIS_DIAGS (previous run was 24th Jan 2024), which only showed the issues with OAF0 and SEIH16 from last Friday's crashes. No problems with any SUS were seen.

Recovery from the crash was:

H1 SQZ
daniel.sigg@LIGO.ORG - posted 10:49, Tuesday 30 January 2024 (75619)
Squeezer PMC model updates

The H1SQZ model has been updated to fix the PMC_TRANS & FIBR_PD readbacks, and to add fast DQ channels for the new PMC readbacks.

The EtherCAT system has been updated to fix the PMC_REFL PD type.

A new squeezer guardian node has been added for the PMC which required a DAQ restart to add the new channels.

LHO VE
david.barker@LIGO.ORG - posted 10:17, Tuesday 30 January 2024 (75618)
Tue CP1 Fill

Tue Jan 30 10:12:06 2024 INFO: Fill completed in 12min 2secs

Jordan confirmed a good fill curbside; he removed some ice buildup around the discharge line vent.

Images attached to this report
H1 General (CDS)
anthony.sanchez@LIGO.ORG - posted 08:19, Tuesday 30 January 2024 (75613)
Tuesday Ops Shift Start

TITLE: 01/30 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 4mph Gusts, 3mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.42 μm/s
QUICK SUMMARY:
There is a fairly large number of red items on the CDS overview, which is likely due to the timing glitch on the SUS front ends, ALOG 75612.

 

Images attached to this report
H1 CDS
erik.vonreis@LIGO.ORG - posted 07:06, Tuesday 30 January 2024 - last comment - 09:49, Tuesday 30 January 2024(75612)
Timing glitch on SUS front ends

There was a timing glitch that affected many models running on corner station front-ends, and the IOPs on h1susb123, h1sush34, h1sush2a, h1sush56.

The timing error is on the order of tens of milliseconds, which is consistent with a Dolphin glitch.  Non-Dolphin front-ends and end station front ends were not affected.

Error was at about Jan 30, 02:45 UTC.  Models are running, but the affected IOPs will need to be restarted.

Comments related to this report
rahul.kumar@LIGO.ORG - 09:49, Tuesday 30 January 2024 (75617)

Dave, Tony, Rahul

SUS and seismic have been recovered after Dave gave us the thumbs up.

H1 CDS
erik.vonreis@LIGO.ORG - posted 06:51, Tuesday 30 January 2024 (75611)
workstations updated

Workstations were updated and rebooted.  This was an OS package update.  Conda packages were not updated.

H1 ISC (ISC, SUS)
keita.kawabe@LIGO.ORG - posted 21:34, Monday 29 January 2024 - last comment - 09:10, Tuesday 13 February 2024(75601)
HAM6 work Jan/29/2024: Alignment of the laser into the new OMC, day 4 (Camilla, Vicky, Julian, Rahul, Sheila, Betsy, Keita)

day 1 (alog 75548), day 2 (alog 75557), day 3 (alog 75575)

HAM7 irises were good

Sheila/Camilla checked the iris position on HAM7 and it was good.

ASC-AS_C whitening gain was increased by 18dB, dark offset was reset

I didn't like that the ASC-AS_C input was so small, so I increased the whitening gain by 18 dB (from the nominal 18 dB to 36 dB) and reset the dark offset.

Recentered the beam on ASC-AS_C. One strange thing was that ASC-AS_C_NSUM would become MUCH bigger (like a factor of 10) when SRM is misaligned. I was worried that I was looking at a ghost beam. Camilla measured the beam power to be ~1mW out of HAM7 and ~0.7mW into HAM6. When ASC-AS_C was centered, ASC-AS_C_NSUM_OUT became ~0.008 give or take. Taking the 18 dB of extra whitening (i.e. a factor of ~8) into account, ASC-AS_C_NSUM~0.008 means about 1mW into HAM6, which was in the right ballpark, so I convinced myself that the beam on AS_C was good.
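A quick numeric cross-check of that estimate (my own sketch, not part of the alog): the whitening gain increase from 18 dB to 36 dB quoted earlier is a voltage factor of 10^(18/20) ≈ 8, so dividing the observed NSUM by that factor recovers the unwhitened reading:

```python
# Cross-check the whitening-gain arithmetic from the entry above.
def db_to_linear(db: float) -> float:
    """Convert a dB voltage gain to a linear amplitude factor."""
    return 10 ** (db / 20)

extra_gain = db_to_linear(18)     # ~7.9, the "factor of ~8" in the text
nsum = 0.008                      # observed ASC-AS_C_NSUM with extra gain
unwhitened = nsum / extra_gain    # ~0.001, consistent with ~1 mW into HAM6
```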

HAM6 irises and beam height on OM, the beam was still very low on OM2

At this point OMC QPDs are reasonably centered, so Sheila and Camilla checked the beam position on irises in HAM6.

The beam was OK on the first iris but was a bit low (~2mm) on the iris closer to the OM1.

The beam position on the OMs at this point, as well as the slider values and max DAC output, are listed below (see Camilla's pictures, too). Note that the YAW position in the table is the position of the incoming beam measured at some (unknown) distance from the mirror, not on the mirror itself.

                          OM1             OM2            OM3
Beam height (nominal 4")  1/8" too high   1/4" too low   1/16" too low
YAW position              1/8" to +X      1/32" to -X    cannot measure
PIT slider                430             20             610
YAW slider                600             1300           -231
Max DAC output            11k             7k             9k

The beam was clearing the input and output hole on the shroud, was cleanly hitting the small OMCR steering mirror by the cage, and was already going to the OMCR diode.

They confirmed that the OMC trans video beam was visible on the viewer card when the OMC flashes and that it was hitting the steering mirror (but we need a viewport simulator to see if the beam will clear the viewport aperture).

Bringing the beam up higher on OM2

Unfortunately the beam was still very low (~1/4"), however I was able to use OM1 alignment slider to bring the beam up on OM2 and use OM2/3 alignment sliders to still center the OMC QPDs. After this was done, OM2 PIT offset became large but OM1/OM3 offsets became low-ish. This was a very good sign as it's infinitely easier to mechanically tilt OM2 than OM1/OM3 due to superior mechanical design.

Anyway, by doing this, the beam height on OM2 went up by about 1/8" (see Rahul's pictures). It's still too low by 1/8", but bringing the beam up more would mean that the OM3 DAC output becomes large without mechanical relief, which I didn't want to do, so I decided to stay here.

                 OM1              OM2            OM3
Beam height      1/8" too high*   1/8" too low   1/16" too low
PIT slider       20               2710           -590
YAW slider       650              660            60
Max DAC output   7.2k             21k            7.1k

* didn't measure; no reason to suspect that it changed

Mechanically relieving the OM2 PIT offset

Julian set the OM2 PIT slider gain to 0.75 (from 1), Rahul turned the balance mass screw on the upper mass of OM2 to compensate. We repeated the same thing four times (slider gain 0.75->0.5->0.25->0, each step followed by Rahul's mechanical adjustment). We had to adjust OM2 Y slider in the process to bring the beam back to the center of the OMC QPDs, but overall, this was a really easy process (did I mention that tip-tilt adjustment is not an easy thing to do?).
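The iterative offload procedure above, in pseudocode form (the real steps were manual slider-gain changes and in-chamber mechanical adjustments; the function hooks here are just illustrative placeholders):

```python
# Hedged sketch of the iterative OM2 PIT offload described above.
def offload_pit_offset(set_slider_gain, adjust_balance_mass, recenter_qpds):
    """Step the PIT slider gain to zero, compensating mechanically each step."""
    for gain in (0.75, 0.5, 0.25, 0.0):
        set_slider_gain(gain)     # reduce how much the slider offset acts
        adjust_balance_mass()     # turn the balance mass screw to compensate
        recenter_qpds()           # touch up the YAW slider to recenter OMC QPDs
```

Stepping the gain down gradually, instead of zeroing it in one go, keeps the beam on the OMC QPDs at every stage so each mechanical adjustment can be verified before the next step.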

We ended up with this (we haven't measured the beam height again as OM2 was the only thing that moved, so the height numbers are from the previous table just for convenience).

                 OM1         OM2        OM3
Beam height*     1/8" high   1/8" low   1/16" low
PIT slider       20          0          -590
YAW slider       650         760        60
Max DAC output   7.2k        5k         7.1k

* didn't measure; no reason to suspect that they changed

I declared that this is a good place to stay. Rahul fixed the balance mass on OM2 upper mass.

Rahul also fixed the balance mass on OMCS.

Fast shutter path, WFS path, ASAIR path, OMCR path

We closed the fast shutter; the reflected beam goes to the high power beam dump.

We opened the fast shutter and checked the WFS path. The beam was already hitting one quadrant of WFSB but was entirely missing WFSA. The beam was a bit low on the lens on the WFS sled, so I used two fixed 1" steering mirrors upstream of the WFS sled to move the beam up on the lens and keep the path reasonably level. See Rahul's pictures for the beam height. After this, both WFSs saw the light, and at this point we used pico to center both. We weren't able to see the beam reflected by the WFS but assume that it still hit the black glass.

We tried to see the ASAIR beam but couldn't. Since the beam is hitting the center of ASC-AS_C, we assume that the ASAIR beam will still hit the black glass.

The OMCR beam was already hitting the OMCR photodiode, but the beam was REALLY close to the beam dump that's supposed to catch a ghost beam. We temporarily moved the dump so the beam is about 5mm from the edge of the glass, but this might be too far. I'll find out how close it's supposed to be from a past alog.

Couldn't check if PZT1 is working.

We tried to see if OMC length error signal makes sense when scanning the OMC length, but whenever the OMC is close to resonance there was a huge transient in the DCPD SUM as well as the length signal, probably the intensity noise (due to acoustics or jitter or something) is too much. We'll measure the capacitance of the PZT from outside.

Current status of LVEA

Laser hazard, HAM5 GV is closed, HAM6 and HAM7 curtains are closed.

Remaining tasks

Comments related to this report
camilla.compton@LIGO.ORG - 08:43, Tuesday 30 January 2024 (75614)

The attached photos were taken before the beam height on OM2 was adjusted, i.e. at Keita's stage "HAM6 irises and beam height on OM, the beam was still very low on OM2".

Photos of OM1 Pit and OM1 Yaw, OM2 Pit and OM2 Yaw; no photo of OM3. Position of the beam on Iris 1, Iris 2 (and the backside of Iris 2). And photos of HAM7 with curtains split for the SQZ beam, and HAM6 with iris 1.

Images attached to this comment
rahul.kumar@LIGO.ORG - 10:08, Tuesday 30 January 2024 (75615)

Attached below are the pictures showing the beam height on OM2 (pitch and yaw position) and WFS in HAM6 chamber after we made the adjustments.

Also shown is the lens before the WFS; on this, the yaw looks OK and the pitch is slightly low, but Keita is happy with this.

On a different note:-

OM1, OM3, OMC BOSEM flag position looks fine, OM2 will need some adjustment (once we are laser safe).

The latest slider values for OM1-3 have been attached below.

Images attached to this comment
keita.kawabe@LIGO.ORG - 09:43, Tuesday 30 January 2024 (75616)

The last photo in alog 65101 from Sept. 2022 shows the distance between the OMCR beam dump and the main OMCR beam back then. We will get close to this photo.

https://alog.ligo-wa.caltech.edu/aLOG/uploads/65101_20220923181903_BD_clearance.jpg

julian.gurs@LIGO.ORG - 17:09, Wednesday 31 January 2024 (75653)
Possible clipping at a beam dump: in the picture "beforemoving" it can be seen how the beam hits the outer left edge of the IR card, and this side "touches" the beam dump.
We moved the beam dump (rather than the beam) because the beam is centered on the lens.
Images attached to this comment
corey.gray@LIGO.ORG - 09:10, Tuesday 13 February 2024 (75840)EPO

Tagging EPO for alignment of HAM6 work

LHO VE
janos.csizmazia@LIGO.ORG - posted 16:56, Monday 29 January 2024 - last comment - 21:05, Monday 29 January 2024(75609)
1-29 vent vacuum diary
Today's activities:
- All bolts (but 4 of them) on the HAM3 Y+ door have been broken loose, in preparation for the door removal - most likely tomorrow, during laser safe conditions
- The cable trays on BSC8 have been removed, as they would be in the way for the 12" and 16.5" conflat feedthru removals - this activity will start after the LVEA is laser safe
- The troubleshooting of the EX RGA ion pump has been done - see here in detail: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=75605
- The corner purge air was measured and found satisfactory (~-46 deg C); see details in comment
Comments related to this report
jordan.vanosky@LIGO.ORG - 21:05, Monday 29 January 2024 (75610)

Purge air dew point measured at the OMC tube, -46degC, prior to today's in-chamber activities.

Images attached to this comment
H1 SQZ
camilla.compton@LIGO.ORG - posted 15:31, Monday 29 January 2024 - last comment - 11:29, Tuesday 30 January 2024(75606)
Edited the LOCKED_SEED_DITHER lockloss threshold

Vicky, Sheila, Camilla

While in HAM6, Sheila noticed they'd lost their SQZ beam. Vicky found that the SQZ_OPO_LR guardian had unlocked but not realized it, as the variable lockloss threshold was too high. When Vicky relocked it, the beam came back fine. Plot attached showing H1:SQZ-SHUTTER_I_TRIGGER_OUT_DQ increasing but not getting over the incorrect lockloss threshold of 2800.

We edited the SQZ_OPO_LR LOCKING_SEED_DITHER state to wait 15 seconds after the boosts are turned on, for the OPO to be fully locked, before moving to LOCKED_SEED_DITHER and calculating the lockloss threshold (np.round(cdu.avg(1,'H1:SQZ-SHUTTER_I_TRIGGER_OUT_DQ')*1.025)), rather than taking the average while the seed is still finishing locking. If the seed dither unlocks, SQZ_MANAGER will relock it.
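The logic of the change, sketched in plain Python (guardian's state machinery and the real cdsutils calls are not reproduced; the callable below just stands in for the 1-second channel average):

```python
import time

def set_lockloss_threshold(avg_channel_value, settle_time=15.0, margin=1.025):
    """Wait for the OPO boosts to settle, then capture the lockloss threshold.

    avg_channel_value: callable returning a 1-second average of
    H1:SQZ-SHUTTER_I_TRIGGER_OUT_DQ (a stand-in for cdsutils.avg).
    """
    time.sleep(settle_time)   # let the lock finish settling before averaging
    # Threshold is the settled average plus a 2.5% margin, as in the alog.
    return round(avg_channel_value() * margin)
```

The point of the delay is that averaging too early, while the seed is still finishing locking, captures an inflated value and yields a threshold the real lockloss signal never crosses.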

Strangely, the HAM6 QPDs' (e.g. OMC_A) NSUM went negative (rather than going to 0) when this happened, making Tony and the CDS team check that there were no issues. We're unsure why it did this.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 11:29, Tuesday 30 January 2024 (75622)

The OPO was still not locking correctly in SEED_DITHER this morning, sometimes incorrectly thinking it was locked and other times taking over 30 seconds to lock, thus setting its lockloss threshold incorrectly.

We should troubleshoot this when we are next in laser hazard. 
