H1 ISC
camilla.compton@LIGO.ORG - posted 09:48, Thursday 07 March 2024 - last comment - 16:53, Friday 08 March 2024(76180)
Record of Guardian Changes

ISC_LOCK.py changes since 2024-01-07

Comments related to this report
camilla.compton@LIGO.ORG - 10:09, Thursday 07 March 2024 (76181)

No changes in ALIGN_IFO, ALS_ARM (ALS_XARM/YARM), ALS_COMM, ALS_DIFF

ISC_DRMI changes since 2023-12-21

  • self.useINP1 / PRC1 / PRC2 / SRC1 / SRC2 all changed from True to False (76172)

ALS_DIFF changes since 2023-11-07

  • In FINE_TUNE_IR, the ALS-C_DIFF_PLL_CTRL_OFFSET ThreshLow was reduced from 0.05 to 0.04

IMC_LOCK changes since 2023-11-21

  • Notification in OFFLINE state if IMC WFS need to be centered
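
For context, a notification like this is typically just a check in the Guardian state's run method. A minimal sketch of what such an OFFLINE-state check might look like is below; the WFS channel name and threshold are placeholders, not the actual IMC_LOCK code.

    from guardian import GuardState

    class OFFLINE(GuardState):
        def run(self):
            # ezca and notify() are provided by the guardian framework at runtime.
            # Placeholder channel/threshold: nag the operator if the IMC WFS DC
            # spot has drifted too far from centre.
            if abs(ezca['IMC-WFS_A_DC_PIT_OUT16']) > 0.3:
                notify('IMC WFS need to be centered')
            return True
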
jenne.driggers@LIGO.ORG - 16:51, Friday 08 March 2024 (76218)

Some notes on sqz guardian changes that were made, then reverted: 76154

jenne.driggers@LIGO.ORG - 16:53, Friday 08 March 2024 (76219)

The change to use TRY as the DARM normalization was reverted back to the formerly-nominal TRX.

When we were using TRY, we had also made some changes in PREP_DC_READOUT_TRANSITION, but those have now been reverted.

H1 CDS
david.barker@LIGO.ORG - posted 08:46, Thursday 07 March 2024 - last comment - 08:04, Friday 08 March 2024(76177)
Overnight results of h1digivideo3 camera stability after 10GE upgrade yesterday

Jonathan, Patrick, Dave:

Around 14:00 yesterday (Wed 06mar2024) we upgraded the camera network link to h1digivideo3 from 1GE copper (TPE) to 10GE fiber. After running overnight, all of the cameras on this server are still running and, with the exception of ETMY, no further VALID=0 events have been seen (2 day trend attached, upgrade at the 18h mark).

ETMY continues to have an hourly VALID=0 event which flashes the camera client image to a blue screen for a few seconds. This happens at roughly the 20-minute mark of each hour, and the timing slowly advances through the hour.

The ETMY periodicity is not changed by h1digivideo3 reboots, suggesting it is in the camera itself. To test this, I power cycled the ETMY camera at 08:34 this morning.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 11:58, Thursday 07 March 2024 (76187)

Power cycling the ETMY camera (h1cam27) appears to have fixed the hourly blue-screen flashing.

david.barker@LIGO.ORG - 08:04, Friday 08 March 2024 (76206)

We have had no camera drop-outs or VALID=0 issues over the past 24 hours. It looks like h1digivideo3's problems have been resolved, so I'm closing out FRS30615.

LHO General
corey.gray@LIGO.ORG - posted 07:59, Thursday 07 March 2024 - last comment - 09:07, Thursday 07 March 2024(76173)
Thurs DAY Ops Transition

TITLE: 03/07 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s
QUICK SUMMARY:

Arrived with Sheila & Gabriele running a Manual Initial Alignment.  Randy and Eric are moving items around the site as well.  Low winds currently and a couple of earthquakes overnight (5 & 7hrs ago).  No alarms or major redness on the CDS Overview.

Comments related to this report
corey.gray@LIGO.ORG - 08:10, Thursday 07 March 2024 (76175)PSL

Whoops, missed this:  Sheila let me know that on DIAG MAIN there is a notification of "Check PSL Chiller."  (RyanC messaged Jason)

[Additionally there is a notification for SEI_CONF "being stuck", but Sheila mentioned work by Jim yesterday (i.e. EX BRS).]

ryan.short@LIGO.ORG - 09:07, Thursday 07 March 2024 (76179)PSL

I looked at the PSL chiller this morning and the water level was hovering right above the minimum fill line (this is the most common reason for the "check PSL chiller" message on DIAG_MAIN), so I added 150 mL of lab water.

H1 ISC
jennifer.wright@LIGO.ORG - posted 19:55, Wednesday 06 March 2024 - last comment - 22:07, Wednesday 06 March 2024(76166)
Alignment Notes - Evening

Jenne, Elenna, Jennie W., Camilla, Ryan S., Austin, Matt, Gabriele, Georgia.

Going to offload DRMI ASC and try to offload things by hand as build-ups are decaying.

Got to Offload DRMI ASC.

Elenna changing PRM to improve build-ups (LSC-POPAIR_B_RF90_I_NORM and LSC-POPAIR_B_RF18_I_NORM).

PR2 helped.

SRM and SR2.

All of these needed changes in pitch.

DHARD hit some limits at CARM offset reduction and we lost lock. First image.

DHARD YAW and DHARD PITCH rung up.

Another lockloss (image 2) from state 309, but the lockloss state number is being reported incorrectly.

We think it's losing lock around DHARD WFS.

SRM yaw changed. Improves things.

Noticing some glitches on DHARD Pitch out - not sure what is causing them. Third image.

Changing CHARD alignment to improve ASC.

Lost lock. Possibly from state 305 (DHARD WFS)

PRM pitch being changed helps with build-ups (OFFLOAD DRMI ASC state).

Then stepping the guardian manually up the states making tweaks.

Tried to go from CARM offset reduction to CARM 5 picometers; the LSC channels rung up at 60 Hz and we lost lock.

One of the LSC loops might have too much gain.

Elenna measuring TFs to check the loops, but the DARM measurement is not giving good coherence, so she is increasing the excitation gain.

Power normalisation of some loops uses IM4_TRANS_QPD, and this is very mis-centred so it may be adding noise (image 4).

Lockloss due to excitation?

Reached OFFLOAD DRMI ASC again and Elenna moving IM3.

Moved IM4 and less clipping on IM4_TRANS.

We lost lock.

DARM is normalised by X_ARM_TRANS, but BRS X has been taken out of the loop, so Jenne changed the normalisation to use the Y arm in the ISC_LOCK guardian.

Jenne aligning BS and we are in FIND IR.

Lost lock again.

Elenna went to the MANUAL Initial Alignment state.

Had to undo the changes Jenne made to IM3 and IM4.

Gabriele is going to measure DARM with white noise measurement.

This shows the UGF is 15 Hz instead of what we think it should be (50 Hz).

Increased DARM gain by 50%.
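
As a quick sanity check of these numbers: if the DARM open-loop gain falls roughly as 1/f near the UGF, scaling the loop gain by a factor k moves the UGF from f0 to about k*f0, so a 50% increase only takes 15 Hz to roughly 22 Hz; reaching 50 Hz would need more like a factor of 3.3.

    # Rough arithmetic, assuming a ~1/f open-loop slope near the UGF
    f_measured = 15.0                    # Hz, from the white-noise measurement
    f_target = 50.0                      # Hz, where we expect the UGF to be
    k_needed = f_target / f_measured     # ~3.3x gain increase required
    f_after_50pct = 1.5 * f_measured     # ~22.5 Hz after a 50% gain increase
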

Lockloss.

The problem comes with the DHARD gain increase during the DHARD WFS state (305). Georgia is commenting this out in lines 2337 and 2338 of ISC_LOCK.

Keep losing lock from the LOCKING_ARMS_GREEN state.

When we lose lock the power normalisation for DARM should reset to 0, but it did not, and so was causing noise on the ALS DIFF input. This is probably because the DOWN state of the guardian is not set to do it.

Elenna updated the prep-for-locking state to set this value to 0, even when using the Y arm for the power normalisation instead of X.
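
A minimal sketch of the kind of change described (not the actual ISC_LOCK code): zero the DARM power-normalisation value in a DOWN-like or prep state so a stale value from the previous lock cannot pollute the ALS DIFF input. The channel name below is a hypothetical placeholder.

    from guardian import GuardState

    class DOWN(GuardState):
        def main(self):
            # ezca is provided by the guardian framework at runtime.
            # Hypothetical channel name; the real normalisation element differs.
            ezca['LSC-TR_DARM_NORM_GAIN'] = 0
            return True
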

We fell out of lock at CARM TO TR.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 20:04, Wednesday 06 March 2024 (76171)

Adding some overall thoughts/conclusions:

It seems like there is some alignment *somewhere* that is still bad, but we can't figure out what it is. Earlier today we could reach "Prep asc for full ifo", and things seem to have degraded to where we cannot pass DHARD WFS. It seems like part of the issue is matching the alignment of the arms, since engaging DHARD WFS is such a problem. However, we are struggling to correct that alignment during the carm reduction in a way that maintains the lock. Also, some earlier attempts involved us trying to fix some of the input pointing that could be clipping, but the changes that Jenne made to IM3 and IM4 were "bad" in the sense that the INPUT ALIGN state no longer worked after that change.

We are also worried about the DARM unity gain in "darm to rf". It seems low to us (~15 Hz) but we actually don't know how high it should be (it should be closer to 60 Hz by the time carm is on resonance). It's also worrying to see the POP alignment degrade through the carm reduction process, but that's the normal process- we don't have PRC or SRC ASC engaged during the process normally either. We can achieve decent DRMI alignment by hand before the carm offset reduction phase.

It would be helpful to think of some way we can "offload" beneficial alignment steps from lock to lock as we retry carm offset reduction so we don't need to start from scratch every lock process. I think we should still also be concerned with whatever is going on at EX (BRS, ISI, etc). We get saturation warnings for EX that don't match our actions so maybe things aren't great down there.

Adding: Georgia moved the SRM before engaging DHARD and it helped significantly, especially with the glitchiness in the DHARD control signal.

Important to note: we commented out the gain increase for DHARD in the guardian. After the SRM move, Georgia tried increasing the gains again by hand. We immediately lost lock. So the higher DHARD gain is no good and is still commented out in the guardian.

elenna.capote@LIGO.ORG - 22:07, Wednesday 06 March 2024 (76172)

Further notes:

In a fit of frustration I re-engaged the DRMI ASC loops in the guardian except for PRC1 (PRM) and SRC2 (SR2) since those rely on QPDs whose offsets we do not trust. This made life easier, and before engaging DHARD WFS, I adjusted the SR2 alignment to improve the arm matching, and then the PRM alignment to increase the RF18 buildup (SRM is controlled by SRC1 and was offloaded in the DRMI ASC state). I also further walked DHARD in pitch and yaw using the move_arms_dev.py script. This improved the DHARD engagement further and the DHARD alignment converged decently. However, before I could move to the next state, I watched the green arm signals. The ALS Y signal drops out slightly before lockloss. I don't think there is any feedback on this signal, but maybe this is a sign that the arms aren't doing so well after all in this alignment, even though DHARD converges.
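
(The move_arms_dev.py script itself is not reproduced here; as a rough sketch of the idea, a differential walk of the ETMs in pitch via the top-mass optic-align offsets might look like the following. The real DHARD basis also involves the ITMs with specific coefficients, and the step size and channel suffix here are assumptions.)

    # Sketch only: step ETMX and ETMY pitch in opposite directions.
    import epics

    def walk_diff_pitch(step_urad):
        """Apply a differential pitch step to the ETM optic-align offsets."""
        for optic, sign in (('ETMX', +1.0), ('ETMY', -1.0)):
            ch = 'H1:SUS-%s_M0_OPTICALIGN_P_OFFSET' % optic
            epics.caput(ch, epics.caget(ch) + sign * step_urad)

    walk_diff_pitch(0.5)   # e.g. a 0.5 urad differential step
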

My current method: take the IFO to "DRMI_LOCKED_CHECK_ASC" and move PRM and SR2 to improve the POP build ups and camera image. Wait for other ASC signals to reconverge. Then, go to DRMI ASC alignment offload. From there, go to "DARM_TO_RF" and check the arm alignment.

I decided to revert the ITMs to the position they were in the last time we achieved "prep asc for full ifo" (see screenshot). I trended back the ITM oplevs. It appears that the most movement has occurred in ITMY yaw (4.8 versus 1.5). Perhaps this is one reason why our attempts to engage DHARD are failing. After this change I reran manual initial alignment to get the beamsplitter back to a good place.

Even with that change, the ALS Y arm buildup is still less than one (at locking arms green). This seems wrong, but nothing we have done makes it better. We did have a better buildup for ALS Y early yesterday, which all our alignment efforts since seem to have degraded.

Ok, this did not work. Engaging DHARD WFS still pulls the Y arm ALS buildup off and then causes a lockloss. I am leaving it in DOWN. Please check ITM alignment before beginning locking attempts in the morning.

Images attached to this comment
H1 SQZ (SQZ)
nutsinee.kijbunchoo@LIGO.ORG - posted 16:59, Wednesday 06 March 2024 (76169)
Low vs High CLF -- No significant squeeze degradation observed

Camilla Naoki Daniel Georgia Nutsinee

This morning we had difficulty seeing squeezing in the HD. The problem was a combination of a railing CLF ISS and alignment. We had to reduce the power sent to the CLF fiber from 1.2 mW back to 0.7 mW (the value prior to the PMC alignment work). The SHG ISS was working fine. We moved FC1, ZM5 and ZM4 (mostly FC1) and eventually saw some squeezing. The OPO IR camera was a useful indication of which DOF we should move these optics in. We optimized the OPO crystal temperature. NLG was 17.39. We tried squeezing at CLF launch powers of 0.07, 0.1, and 0.7 mW. The common mode board gain and squeeze angle were optimized every time the power changed. No significant change in squeezing level was observed. If we lower the CLF or LO gain, a coherence between HD DIFF and LO/CLF around 1-2 kHz can be observed. I suspect this is due to vacuum pump noise on the floor. Reducing the LO gain will degrade the overall squeeze level. Reducing the CLF level will make the acoustic noise worse.

Images attached to this report
LHO VE
janos.csizmazia@LIGO.ORG - posted 16:03, Wednesday 06 March 2024 - last comment - 08:54, Thursday 07 March 2024(76168)
3-6 vent vacuum diary
The pressures:
HAM7: ~2.9E-7 Torr
HAM8: ~3.3E-7 Torr
Corner: ~4.9E-8 Torr
EX: ~5.4E-9 Torr

Today's activities:
- The X-manifold turbo-station water lines have been updated - now the controller and pump lines are separated

- HAM8 RGA scan is done, details in the comments
- HAM8 IP was valved in
- HAM7 Annulus Ion Pump has railed; it is supposedly caused by a leaky plug on the septum plate, or just the AIP controller - either way, it will be found out soon
- Relay tube - HAM7 - HAM8 further schedule: This volume will be valved in to the main volume around the end of the week - RV1; FCV1; FCV2; FCV3; FCV4; FCV8; FCV9 will be opened; preferably after the HAM7 AIP issue is solved
Comments related to this report
jordan.vanosky@LIGO.ORG - 08:54, Thursday 07 March 2024 (76178)

HAM8 scans collected at T240018

The RGA tree was baked at 150C for 5 days following the replacement of the leaking calibrated leak with a blank.

Non-image files attached to this comment
LHO General
austin.jennings@LIGO.ORG - posted 16:01, Wednesday 06 March 2024 (76145)
Wednesday Shift Summary

TITLE: 03/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY:

Locking troubleshooting continues. The commissioning team was able to get to PREP ASC before losing lock, so good progress is being made. Alog on today's locking extravaganza here.

Last shift from me, peace out yall :)
LOG:                                                                                                                  

Start Time System Name Location Lazer_Haz Task Time End
23:29 SAF LASER SAFE LVEA - The LVEA is LASER SAFE 03:14
16:00 FAC Ken High Bay N Lighting ??
16:09 FAC Karen/Kim LVEA N Tech clean (Kim out 1646) 17:26
17:05 VAC Jordan HAM 8 N RGA prep 17:33
17:15   Hanford Fire Site   Tumbleweed burn ??
17:47 SEI Jim Remote N Restart BRS X ??
18:25 FAC Karen MY N Technical cleaning 19:11
18:48 VAC Travis/Janos LVEA N Turbo pump water line work 20:32
19:12 ISC Sheila/Matt/Trent ISCT1 YEYE (LOCAL) Beam dump installation 19:46
20:33 VAC Gerardo, Jordan LVEA N Climbing on HAM6 for turbopump 21:33
21:59 VAC Jordan HAM 8 N RGA scans 23:25
22:08 VAC Janos HAM 7 N Annulus ion pump work 22:20
22:46 VAC Travis LVEA N Disconnect pump cart 23:12
H1 ISC
gabriele.vajente@LIGO.ORG - posted 15:47, Wednesday 06 March 2024 - last comment - 15:59, Wednesday 06 March 2024(76149)
Alignment notes

Elenna, Sheila, Gabriele, Jennie W et al.

NB:   LHO alog 62110 shows PRG gain plots.

[Jennie W took this log over from Gabriele]

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 15:59, Wednesday 06 March 2024 (76167)OpsInfo

Matt and I checked on the POPAIR checker (added 70882). It was acting correctly, moving to PRMI when RF18 (not RF90) doesn't flash above 95 in the first minute.

Matt lowered the threshold so that it now moves to PRMI if RF18 does not flash above 80 within the first 120 seconds. This threshold should be rechecked once we are locking reliably.
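
For reference, a hedged sketch of the kind of check this describes (not the real ISC_DRMI code) is below; it uses the Guardian timer mechanism and the LSC-POPAIR_B_RF18_I_NORM signal mentioned above, with the readback suffix assumed.

    from guardian import GuardState

    FLASH_THRESHOLD = 80    # counts; was 95
    FLASH_WINDOW = 120      # seconds; was 60

    class ACQUIRE_DRMI(GuardState):
        def main(self):
            # ezca is provided by the guardian framework at runtime.
            self.timer['popair_check'] = FLASH_WINDOW
            self.seen_flash = False

        def run(self):
            if ezca['LSC-POPAIR_B_RF18_I_NORM_MON'] > FLASH_THRESHOLD:   # suffix assumed
                self.seen_flash = True
            if self.timer['popair_check'] and not self.seen_flash:
                return 'PRMI'   # no good DRMI flashes seen, fall back to PRMI locking
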

H1 CDS
jonathan.hanks@LIGO.ORG - posted 15:45, Wednesday 06 March 2024 - last comment - 15:48, Wednesday 06 March 2024(76164)
WP 11752 investigating issues with sending dual data streams to h1dmt1
We had issues yesterday with h1dmt1 receiving both gds broadcast streams.  Last night John Zweizig and I left it working with only one stream active.

Today John and I did two tests.

1. We switched which ethernet port was deactivated. During this test we still saw issues with retransmits during periods with both broadcasts being received.  When we disabled the other port that receives the broadcasts things settled down.

2. We rebooted the machine to see if there was some state that was bad and just needed a reboot.  This seems to have done the trick.  Presently we do not know what exactly was cleared up.  We checked the network settings (including buffer sizes, tweaks, ...), software package versions, physical links, ... between h1dmt1 and h1dmt2 and could not find a difference that would explain this behavior.

h1dmt1 is running, and the control room should have access to all the DMT products that are needed (range, darm, ...).
Comments related to this report
jonathan.hanks@LIGO.ORG - 15:48, Wednesday 06 March 2024 (76165)
The CDS view into this is facilitated through two PVs, H1:DAQ-GDS0_BCAST_RETR and H1:DAQ-GDS1_BCAST_RETR.  These PVs show the broadcast retransmit requests that h1daqgds0 and h1daqgds1 receive each second.  We get a stream of these, usually in sets of 3 (1 for each dmt machine).  When the pattern changes and/or we start receiving many more requests, that is a sign that there are problems with the broadcast into the DMT.
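
A quick way to watch this pattern from a workstation (a sketch using pyepics; any camonitor-style tool works just as well):

    import time
    import epics

    pvs = ['H1:DAQ-GDS0_BCAST_RETR', 'H1:DAQ-GDS1_BCAST_RETR']
    while True:
        vals = {pv: epics.caget(pv) for pv in pvs}
        print(time.strftime('%H:%M:%S'), vals)   # expect small, regular counts
        time.sleep(1)                            # the PVs update once per second
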
H1 CDS
david.barker@LIGO.ORG - posted 15:01, Wednesday 06 March 2024 - last comment - 17:15, Wednesday 06 March 2024(76160)
h1digivideo3 video network port upgrade

WP11753

Jonathan, Dave:

As part of the investigation into recent camera image instabilities we upgraded h1digivideo3's 106 VLAN network port from 1GE copper to 10GE fiber.  A Solarflare PCIe card was installed in h1digivideo3 at 14:05 this afternoon.

The new fiber is connected to a spare 10GE port on sw-msr-h1aux (ethernet 3/1/2).

On h1digivideo3 we have left the original copper connection to eno2, the new fiber port is enp1s0f0np0.

Currently the EPICS_LOAD_MON channel being trended by the DAQ is still H1:CDS-MONITOR_H1DIGIVIDEO3_NET_RX_ENO2_MBIT; the new channel is H1:CDS-MONITOR_H1DIGIVIDEO3_NET_RX_ENP1S0F0NP0_MBIT, which Jonathan is working on remapping to ENO2 so we don't need a DAQ restart.

Comments related to this report
jonathan.hanks@LIGO.ORG - 17:15, Wednesday 06 March 2024 (76170)
I updated the load_mon_epics on digivideo3 to make the eno2 channel hold the traffic data from ENP1S0F0NP0 so we can keep a consistent trend of the traffic from the cameras.
H1 ISC (OpsInfo)
jennifer.wright@LIGO.ORG - posted 14:44, Wednesday 06 March 2024 - last comment - 15:03, Wednesday 06 March 2024(76158)
Reset OMC ASC matrices and gains to those set on 4th March

As we went through SDF revert yesterday when locking, all the OMC SDF settings were reset. I turned off the QPD A offsets and reset the input and output ASC matrices to those we had found allowed us to lock the OMC with ASC and no saturation of the OMC suspension.

First picture is the reference picture from yesterday before the revert. I have not set back the changes to DCPD offsets as I think these were to make sure DCPD SUM output did not dip to a negative number due to dark noise.

Second picture is the OMC model SDFs now.

The guardian has been set to not go through SDF revert when we lose lock; however, the guardian has the ASC gains hard-coded, so we may want to replace these with the new values in the OMC guardian once we verify they are correct by manually locking the OMC after the full IFO is locked.

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:53, Wednesday 06 March 2024 (76159)

I accepted these OMC model values above in SDF. Picture attached.

Images attached to this comment
ryan.short@LIGO.ORG - 15:03, Wednesday 06 March 2024 (76161)

The new POS_X and ANG_Y gain values (accepted in previous comment's screenshot) have been updated in the OMC_LOCK Guardian's ASC_QPD_ON state (where they are hard-coded in). Changes loaded and updated in svn.
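
As a reminder of why this matters: the gains live in the guardian code itself, so SDF-accepted values have to be copied there by hand. A hedged sketch of what the hard-coded section of ASC_QPD_ON might look like follows; the channel names follow the usual OMC ASC pattern and the values are placeholders, not the real ones.

    from guardian import GuardState

    class ASC_QPD_ON(GuardState):
        def main(self):
            # ezca is provided by the guardian framework at runtime.
            # Placeholder values: these must match what was accepted in SDF.
            ezca['OMC-ASC_POS_X_GAIN'] = 1.0
            ezca['OMC-ASC_ANG_Y_GAIN'] = 1.0
            return True
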

H1 ISC (OpsInfo)
ryan.short@LIGO.ORG - posted 10:37, Wednesday 06 March 2024 - last comment - 16:49, Friday 08 March 2024(76154)
Changes to ISC_LOCK Guardian for O4b Commissioning

Per commissioner request, I've made two changes to the early main locking steps as set in ISC_LOCK:

  1. By default, ISC_LOCK now goes through CHECK_SDF rather than SDF_REVERT so settings are preserved lock to lock
  2. The timer for moving the ALS arm nodes to INCREASE_FLASHES during LOCKING_ARMS_GREEN has been increased from 2 minutes to 20

Changes have been loaded and committed to svn.
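
As a rough illustration of the second change (not the literal ISC_LOCK diff), the timer edit amounts to something like the sketch below; the timer name is a placeholder and the node-request call is omitted. The CHECK_SDF change is simply a different requested state in the early locking path and is not shown.

    from guardian import GuardState

    class LOCKING_ARMS_GREEN(GuardState):
        def main(self):
            self.timer['increase_flashes'] = 20 * 60   # seconds; was 2 * 60

        def run(self):
            if self.timer['increase_flashes']:
                # after 20 minutes without lock, send the ALS arm nodes to
                # INCREASE_FLASHES (node-request call omitted in this sketch)
                pass
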

Comments related to this report
ryan.short@LIGO.ORG - 15:43, Wednesday 06 March 2024 (76163)OpsInfo, SQZ

I've also commented out SQZ_MANAGER from the list of managed nodes in ISC_LOCK. This allows SQZ to work independently without main IFO locking telling SQZ_MANAGER what to do for now.

EDIT: We later learned that lines 214-215 of ISC_LOCK needed to be commented out as well since this is a request of SQZ_MANAGER in the DOWN state.

ryan.short@LIGO.ORG - 16:11, Thursday 07 March 2024 (76195)

Furthering this effort as main IFO locking is progressing, I've commented out the first couple of lines in the LOWNOISE_LENGTH_CONTROL state which interact with SQZ_MANAGER, which at this point is not managed.

victoriaa.xu@LIGO.ORG - 16:49, Friday 08 March 2024 (76217)ISC, SQZ

Naoki, Vicky - We have undone these guardian changes for the break (brought back SQZ_MANAGER in the list of managed nodes, restored the first few lines of LOWNOISE_LENGTH_CONTROL, and restored lines 214-215 requesting SQZ to DOWN).

SQZ_MANAGER is back to being managed in ISC_LOCK as usual. We will see the lock sequence through a few times and get it running smoothly, and update on that after relocking.

H1 ISC
matthewrichard.todd@LIGO.ORG - posted 18:08, Tuesday 05 March 2024 - last comment - 15:18, Wednesday 06 March 2024(76137)
Updating OMCscan to reflect transition to OMC001

Matthew, Jennie W, Gabriele

In the initialization of the OMCscan code (which gets OMC scan data, analyzes it and then plots it), I updated several values to reflect the transition from OMC003 to OMC001 so that OMC analyses are done accurately. For example, several small changes include:

The values were obtained from T1500060 Table 19, which reports the OMC optical test results for OMC001. Note: the conversion from nm/V to MHz/V is found from the relation delta(f)/f = delta(L)/L, where delta(L) is 2*(PZT response in nm/V), L is the round-trip cavity length, and f is the laser frequency (1064 nm converted to MHz).
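
Worked out numerically (a sketch; the ~1.13 m round-trip length is an assumed nominal value for the OMC, not taken from this entry), the 12.7 nm/V PZT2 figure quoted in the comments below gives a few MHz per volt, consistent with the FSR-based estimate of ~6.3 MHz/V:

    c = 299792458.0               # m/s
    f_laser = c / 1064e-9         # Hz, carrier frequency
    pzt_nm_per_V = 12.7           # nm/V, PZT2 value quoted in the comments below
    L_rt = 1.132                  # m, OMC round-trip length (assumed nominal value)

    dL_per_V = 2 * pzt_nm_per_V * 1e-9        # m/V (factor of 2 per the note above)
    df_per_V = f_laser * dL_per_V / L_rt      # Hz/V
    print(df_per_V / 1e6, 'MHz/V')            # ~6.3 MHz/V
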

Comments related to this report
koji.arai@LIGO.ORG - 18:37, Tuesday 05 March 2024 (76141)

Are we sure that the previous OMC used OMC's "PZT2" (12.7nm/V) for the scan, not OMC's "PZT1" (11.3nm/V)?
I mean: there is a possibility that the indication of PZT2 on the screen may not mean PZT2 on the OMC.

Also the response of the PZT is nonlinear and hysteretic.

I'd rather believe the frequency calibration using the cavity peaks (e.g. FSR/Modulation freqs) than the table top calibration of the PZTs.

matthewrichard.todd@LIGO.ORG - 18:57, Tuesday 05 March 2024 (76142)

Good suggestion!

Computing the PZT response from the FSRs we get around 6.3 MHz/V.

And on your note about certainty of using PZT2 response, I am not sure.

Images attached to this comment
jennifer.wright@LIGO.ORG - 15:18, Wednesday 06 March 2024 (76162)

I think we usually used the channel PZT2 to perform scans with OMC 003. But yeah, I am not sure if this corresponds to PZT2 on the real OMC. We just use the PZT calibration in the scan analysis to get an initial guess for the calibration, but the final calibrated scan does indeed find the carrier 00 and 45 MHz 00 peaks to fit the non-linearity of the PZT.
