Reports until 16:03, Wednesday 06 March 2024
LHO VE
janos.csizmazia@LIGO.ORG - posted 16:03, Wednesday 06 March 2024 - last comment - 08:54, Thursday 07 March 2024(76168)
3-6 vent vacuum diary
The pressures:
HAM7: ~2.9E-7 Torr
HAM8: ~3.3E-7 Torr
Corner: ~4.9E-8 Torr
EX: ~5.4E-9 Torr

Today's activities:
- The X-manifold turbo-station water lines have been updated - now the controller and pump lines are separated

- HAM8 RGA scan is done, details in the comments
- HAM8 IP was valved in
- HAM7 Annulus Ion Pump has railed; this is presumably caused by a leaky plug on the septum plate, or by the AIP controller itself - either way, it will be tracked down soon
- Relay tube - HAM7 - HAM8 further schedule: this volume will be valved in to the main volume around the end of the week (RV1, FCV1, FCV2, FCV3, FCV4, FCV8, and FCV9 will be opened), preferably after the HAM7 AIP issue is solved
Comments related to this report
jordan.vanosky@LIGO.ORG - 08:54, Thursday 07 March 2024 (76178)

HAM8 scans collected at T240018

RGA tree was baked at 150C for 5 days following the replacement of the leaking calibrated leak with a blank.

Non-image files attached to this comment
LHO General
austin.jennings@LIGO.ORG - posted 16:01, Wednesday 06 March 2024 (76145)
Wednesday Shift Summary

TITLE: 03/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY:

Locking troubleshooting continues. The commissioning team was able to get to PREP ASC before losing lock, so good progress is being made. Alog on today's locking extravaganza here.

Last shift from me, peace out yall :)
LOG:                                                                                                                  

Start Time System Name Location Lazer_Haz Task Time End
23:29 SAF LASER SAFE LVEA - The LVEA is LASER SAFE 03:14
16:00 FAC Ken High Bay N Lighting ??
16:09 FAC Karen/Kim LVEA N Tech clean (Kim out 1646) 17:26
17:05 VAC Jordan HAM 8 N RGA prep 17:33
17:15   Hanford Fire Site   Tumbleweed burn ??
17:47 SEI Jim Remote N Restart BRS X ??
18:25 FAC Karen MY N Technical cleaning 19:11
18:48 VAC Travis/Janos LVEA N Turbo pump water line work 20:32
19:12 ISC Sheila/Matt/Trent ISCT1 YEYE (LOCAL) Beam dump installation 19:46
20:33 VAC Gerardo, Jordan LVEA N Climbing on HAM6 for turbopump 21:33
21:59 VAC Jordan HAM 8 N RGA scans 23:25
22:08 VAC Janos HAM 7 N Annulus ion pump work 22:20
22:46 VAC Travis LVEA N Disconnect pump cart 23:12
H1 ISC
gabriele.vajente@LIGO.ORG - posted 15:47, Wednesday 06 March 2024 - last comment - 15:59, Wednesday 06 March 2024(76149)
Alignment notes

Elenna, Sheila, Gabriele, Jennie W et al.

 

NB:   LHO alog 62110 shows PRG gain plots.

[Jennie W took this log over from Gabriele]

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 15:59, Wednesday 06 March 2024 (76167)OpsInfo

Matt and I checked on the POPAIR checker (added 70882). It was acting correctly, moving to PRMI when RF18 (not RF90) doesn't flash above 95 in the first minute.

Matt lowered the threshold to flashes of RF18 not above 80 in the first 120 seconds. This threshold should be rechecked once we are locking reliably. 
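For reference, the fallback logic described above can be sketched as follows. This is a hypothetical illustration, not the actual ISC_LOCK guardian code: the function name and the (time, peak) sample format are made up, but the rule matches what is described - give up on DRMI and drop to PRMI if POPAIR_B_RF18 never flashes above the threshold within the allowed window.

```python
def should_fall_back_to_prmi(rf18_flashes, threshold=80.0, timeout=120.0):
    """Hypothetical sketch of the POPAIR checker.

    rf18_flashes: iterable of (seconds_since_attempt_start, peak_value)
    samples of POPAIR_B_RF18 during the DRMI locking attempt.
    """
    for t, peak in rf18_flashes:
        if t <= timeout and peak > threshold:
            return False  # saw a good flash in time; keep trying DRMI
    return True           # no good flash within the window; drop to PRMI
```

With the new settings, a single flash of 90 at t=10 s would keep us in DRMI, while a flash of 95 arriving only at t=130 s would not.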

H1 CDS
jonathan.hanks@LIGO.ORG - posted 15:45, Wednesday 06 March 2024 - last comment - 15:48, Wednesday 06 March 2024(76164)
WP 11752 investigating issues with sending dual data streams to h1dmt1
We had issues yesterday with h1dmt1 receiving both gds broadcast streams.  Last night John Zweizig and I left it working with only one stream active.

Today John and I did two tests.

1. We switched which ethernet port was deactivated. During this test we still saw issues with retransmits during periods with both broadcasts being received.  When we disabled the other port that receives the broadcasts things settled down.

2. We rebooted the machine to see if there was some state that was bad and just needed a reboot.  This seems to have done the trick.  Presently we do not know what exactly was cleared up.  We checked the network settings (including buffer sizes, tweaks, ...), software package versions, physical links, ... between h1dmt1 and h1dmt2 and could not find a difference that would explain this behavior.

h1dmt1 is running, and the control room should have access to all the DMT products that are needed (range, darm, ...).
Comments related to this report
jonathan.hanks@LIGO.ORG - 15:48, Wednesday 06 March 2024 (76165)
The CDS view into this is facilitated through two PVs, H1:DAQ-GDS0_BCAST_RETR and H1:DAQ-GDS1_BCAST_RETR.  These PVs show the broadcast retransmit requests that h1daqgds0 and h1daqgds1 receive each second.  We get a stream of these, usually in sets of 3 (1 for each DMT machine).  When the pattern changes and/or we start receiving many more requests, that is a sign that there are problems with the broadcast into the DMT.
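A simple trend check on those PVs might look like the sketch below. This is an illustrative heuristic, not a tool CDS actually runs: it assumes the per-second retransmit counts have already been fetched (e.g. via channel access or NDS) into a plain list, and flags any stretch whose total is far above the healthy baseline of occasional sets of 3.

```python
def broadcast_looks_unhealthy(retr_counts, window=60, threshold=30):
    """Hypothetical check on trended H1:DAQ-GDS*_BCAST_RETR samples.

    retr_counts: per-second retransmit-request counts (integers).
    Returns True if any `window`-second stretch contains more than
    `threshold` requests in total.
    """
    n = len(retr_counts)
    for start in range(max(n - window + 1, 1)):
        if sum(retr_counts[start:start + window]) > threshold:
            return True
    return False
```

A healthy minute with one set-of-3 burst passes; a minute of sustained retransmit requests trips the check.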
H1 CDS
david.barker@LIGO.ORG - posted 15:01, Wednesday 06 March 2024 - last comment - 17:15, Wednesday 06 March 2024(76160)
h1digivideo3 video network port upgrade

WP11753

Jonathan, Dave:

As part of the investigation into recent camera image instabilities we upgraded h1digivideo3's 106 VLAN network port from 1GE copper to 10GE fiber.  A Solarflare PCIe card was installed in h1digivideo3 at 14:05 this afternoon.

The new fiber is connected to a spare 10GE port on sw-msr-h1aux (ethernet 3/1/2).

On h1digivideo3 we have left the original copper connection to eno2, the new fiber port is enp1s0f0np0.

Currently the EPICS_LOAD_MON channel being trended by the DAQ is still H1:CDS-MONITOR_H1DIGIVIDEO3_NET_RX_ENO2_MBIT; the new channel is H1:CDS-MONITOR_H1DIGIVIDEO3_NET_RX_ENP1S0F0NP0_MBIT. Jonathan is working on remapping the new channel to ENO2 so we don't need a DAQ restart.

Comments related to this report
jonathan.hanks@LIGO.ORG - 17:15, Wednesday 06 March 2024 (76170)
I updated the load_mon_epics on digivideo3 to make the eno2 channel hold the traffic data from ENP1S0F0NP0 so we can keep a consistent trend of the traffic from the cameras.
H1 ISC (OpsInfo)
jennifer.wright@LIGO.ORG - posted 14:44, Wednesday 06 March 2024 - last comment - 15:03, Wednesday 06 March 2024(76158)
Reset OMC ASC matrices and gains to those set on 4th March

As we went through SDF revert when locking yesterday, all the OMC SDF settings were reset. I turned off the QPD A offsets and reset the input and output ASC matrices to those we had found allowed us to lock the OMC with ASC and no saturation of the OMC suspension.

First picture is the reference from yesterday before the revert. I have not set back the changes to the DCPD offsets, as I think these were to make sure the DCPD SUM output did not dip to a negative number due to dark noise.

Second picture is the OMC model SDFs now.

The guardian has been set to not go through SDF revert when we lose lock; however, the guardian has the ASC gains hard-coded in, so we may want to replace these with the new values in the OMC guardian once we verify they are correct by manually locking the OMC once the full IFO is locked.

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:53, Wednesday 06 March 2024 (76159)

I accepted these OMC model values above in SDF. Picture attached.

Images attached to this comment
ryan.short@LIGO.ORG - 15:03, Wednesday 06 March 2024 (76161)

The new POS_X and ANG_Y gain values (accepted in previous comment's screenshot) have been updated in the OMC_LOCK Guardian's ASC_QPD_ON state (where they are hard-coded in). Changes loaded and updated in svn.

H1 OpsInfo (ISC)
jennifer.wright@LIGO.ORG - posted 14:36, Wednesday 06 March 2024 (76157)
Accepted Alignment Sliders in SDF

As we aligned into the OMC before locking efforts started and we think this was a good OMC alignment I accepted these alignment slider values for OM1,2,3 and OMC in SDF, picture attached.

Images attached to this report
LHO General
austin.jennings@LIGO.ORG - posted 11:57, Wednesday 06 March 2024 (76155)
Mid Shift Report

Locking woes continue. We reran the baffle align scripts for all 4 quads this morning and, after a few touch-ups, ALS now seems to be in a good spot. Commissioners are now trying to get through DRMI locking, as the flashes on our POP signals are not great. A more detailed alog to follow.

H1 ISC (OpsInfo)
ryan.short@LIGO.ORG - posted 10:37, Wednesday 06 March 2024 - last comment - 16:49, Friday 08 March 2024(76154)
Changes to ISC_LOCK Guardian for O4b Commissioning

Per commissioner request, I've made two changes to the early main locking steps as set in ISC_LOCK:

  1. By default, ISC_LOCK now goes through CHECK_SDF rather than SDF_REVERT so settings are preserved lock to lock
  2. The timer for moving the ALS arm nodes to INCREASE_FLASHES during LOCKING_ARMS_GREEN has been increased from 2 minutes to 20

Changes have been loaded and committed to svn.

Comments related to this report
ryan.short@LIGO.ORG - 15:43, Wednesday 06 March 2024 (76163)OpsInfo, SQZ

I've also commented out SQZ_MANAGER from the list of managed nodes in ISC_LOCK. This allows SQZ to work independently without main IFO locking telling SQZ_MANAGER what to do for now.

EDIT: We later learned that lines 214-215 of ISC_LOCK needed to be commented out as well since this is a request of SQZ_MANAGER in the DOWN state.

ryan.short@LIGO.ORG - 16:11, Thursday 07 March 2024 (76195)

Furthering this effort as main IFO locking is progressing, I've commented out the first couple lines in the LOWNOISE_LENGTH_CONTROL state which interacts with SQZ_MANAGER, which at this point is not managed.

victoriaa.xu@LIGO.ORG - 16:49, Friday 08 March 2024 (76217)ISC, SQZ

Naoki, Vicky - We have undone these guardian changes for the break (brought back SQZ_MANAGER in the list of managed nodes, restored the first few lines of LOWNOISE_LENGTH_CONTROL, and restored lines 214-215 requesting SQZ to down).

SQZ_MANAGER is back to being managed in ISC_LOCK as usual. We will see the lock sequence through a few times and get it running smoothly, and update on that after relocking.

LHO VE
david.barker@LIGO.ORG - posted 10:17, Wednesday 06 March 2024 (76152)
Wed CP1 Fill

Wed Mar 06 10:14:59 2024 INFO: Fill completed in 14min 55secs

 

Images attached to this report
H1 TCS
oli.patane@LIGO.ORG - posted 09:19, Wednesday 06 March 2024 (76151)
TCS Chiller Water Level Top-Off FAMIS

Previously done 75898 - closes FAMIS#27784

Both TCSX and TCSY were at their max fill level, so I did not add any water. There was no leaking.

H1 ISC
georgia.mansell@LIGO.ORG - posted 20:12, Tuesday 05 March 2024 - last comment - 10:53, Thursday 07 March 2024(76138)
Locking this evening

Elenna, Jenne, Gabriele, Matt, Trent, Louis, Georgia

- Jenne and I checked the fast shutter and ran the fast_shutter test (SYS -> AS port protection -> run)

- After the initial alignment (alog here) we got through green arm locking up to CHECK_MICH_FRINGES. When we tried to do PRMI, there was a timer that kept taking it back to MICH fringes since the POP90 buildups were so poor. On the AS_AIR camera the alignment looks bad in yaw.

- We edited ISC_LOCK line 1353 to lower the threshold for PRMI flashes on POPAIR_B_RF90 from 20 to 4.

- We ran the dark offsets script with the IMC offline and the ALS shuttered (sdf screenshots here), and started locking again.

- The Y arm was finicky, we struggled to get the flashes above 0.6... We lowered the PD threshold in the ALS Y PDH (H1:ALS-Y_REFL_LOCK_TRANSPDNORMTHRESH) from 0.7 to 0.6.

- We also lowered the threshold in ALS_ARM gen_INCREASE_FLASHES from 0.9 to 0.6.... feels like cheating, maybe the alignment onto ISCT1 is off? The camera alignment for ALS_Y also looks bad (see screenshot)

- Even though we did an initial alignment earlier today, the PRMI flashes are bad and the MICH fringes don't look great. PRMI_ASC caused a couple of locklosses, so we took the guardian to PRMI_LOCKED and tried aligning it by hand. Elenna aligned PRM and BS, but the buildups and camera indicated bad pitch alignment.

- We re-ran initial alignment, and then the guardian took us straight up to PRMI_ASC, DRMI also locked easily! POPAIR_RF18 was a little noisy.

- We started the CARM offset reduction sequence but lost lock at DHARD_WFS. The two arms were very different (higher buildup for the Y arm); when DHARD came on, the arms were pulled closer together, but we lost RF9 and then lost lock.

- We decided to stop at DARM_TO_RF, step the X arm (X hard) closer using the move_arm_dev script and fix PRM and SRM alignment to increase the buildups.

- While we stepped the arm and adjusted PRM, the green arms lost lock. We think we might be aligning the whole IFO to some bad ITM alignments (how accurately aligned are they after the baffle PD alignment script?)

We decided to call it a night at this point.

 

 

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 08:49, Wednesday 06 March 2024 (76148)

A few other things that we found during the initial alignment process:

  • Beckhoff reboot from yesterday morning set some settings incorrectly.  These show up with a change date in SDF of 1989.  The ALSX PLL common compensation was off, preventing the PLL from locking. 
  • The polarization controller box was on, making ALS noisy.
  • AS_C whitening was left at 36 dB and has been reset to 18 dB; this was before the dark offset was rerun.
trent.gayer@LIGO.ORG - 09:19, Wednesday 06 March 2024 (76150)

We changed the threshold in ALS_ARM gen_INCREASE_FLASHES back to 0.9

We are thinking of increasing the guardian timer for INCREASE_FLASHES but have not done so yet.

sheila.dwyer@LIGO.ORG - 10:32, Wednesday 06 March 2024 (76153)

We also reverted this change, since DRMI flashes are back to their usual level:

We edited ISC_LOCK line 1353 to lower the threshold for PRMI flashes on POPAIR_B_RF90 from 20 to 4.

sheila.dwyer@LIGO.ORG - 10:53, Thursday 07 March 2024 (76176)

Sheila, Gabriele, Corey

  • Based on the alog from last night, we decided to start with an initial alignment, using the camera set points that were set yesterday based on the baffle PD alignment.  This went smoothly, and allowed us to lock to PREP_ASC_FOR_FULL_IFO several times.
  • We undid some guardian changes from yesterday, we removed all DRMI ASC except MICH, and put the DHARD gain increase in DHARD_WFS back in.
  • We realized that we have been losing lock because the ALS DIFF guardian thought we had lost lock (when we skip shuttering ALS, this checker doesn't get disabled).  I've added IDLE states to both ALS_DIFF and ALS_COMM; for now we manually select these when we get past the ALS steps of the CARM offset reduction.
  • After that, we are able to sit and walk alignments for longer in PREP_ASC_FOR_FULL_IFO.  We lost lock as we moved CHARD P in the negative direction which was improving the PRG, but in the last seconds of the lock POP90 was increasing.  Our next plan is to turn off MICH ASC while we walk alignment.

Summary of guardian changes that are useful to make when we are recovering from a vent, for future reference:

  • bypass shuttering of ALS in CARM offset reduction by changing weights of connections; this allows us to reset the green references when we lock.
  • Greatly extend the timer for INCREASE_FLASHES
  • change connection weights to go through CHECK_SDF rather than SDF revert (if this is what we want)
  • we are thinking about the check for giving up on DRMI and moving to PRMI; the logic we have worked well during O4a, so we are hesitant to change it.  A workaround, if it switches when we don't want that, is to pause ISC_LOCK and let the DRMI guardian lock DRMI.

Some guardian changes made yesterday to think about maybe reverting:

  • got rid of DHARD gain reduction at DHARD WFS
  • added SRC1 back to DRMI ASC
  • change of ALS normalization to the Y arm.
H1 ISC
matthewrichard.todd@LIGO.ORG - posted 18:08, Tuesday 05 March 2024 - last comment - 15:18, Wednesday 06 March 2024(76137)
Updating OMCscan to reflect transition to OMC001

Matthew, Jennie W, Gabriele

 

In the initialization of the OMCscan code (which gets OMC scan data, analyzes it, and then plots it), I updated several values to reflect the transition from OMC003 to OMC001, so that OMC analyses are done accurately. For example, several small changes include:

The values were obtained from T1500060 Table 19, which reports the OMC optical test results for OMC001. Note: the conversion from nm/V to MHz/V follows from the relation delta(f)/f = delta(L)/L, where delta(L) is 2*(PZT response in nm/V), L is the round-trip cavity length, and f is the optical frequency of the 1064 nm laser converted to MHz.
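The conversion above can be worked through numerically. The 12.7 nm/V PZT2 response is the T1500060 Table 19 value quoted in this entry; the round-trip cavity length of ~1.132 m is an assumption based on the aLIGO OMC design, so treat the result as illustrative.

```python
c = 299_792_458.0              # speed of light, m/s
f_laser = c / 1064e-9          # optical frequency of the 1064 nm laser, Hz

pzt_response = 12.7e-9         # OMC001 PZT2 response, m/V (T1500060 Table 19)
L_round_trip = 1.132           # ASSUMED round-trip cavity length, m

delta_L_per_V = 2 * pzt_response                        # delta(L) per volt, m/V
delta_f_per_V = f_laser * delta_L_per_V / L_round_trip  # delta(f) per volt, Hz/V

print(delta_f_per_V / 1e6)     # roughly 6.3 MHz/V
```

Under these assumptions the table-top number lands near the ~6.3 MHz/V that the FSR-based estimate in the comments gives.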

Comments related to this report
koji.arai@LIGO.ORG - 18:37, Tuesday 05 March 2024 (76141)

Are we sure that the previous OMC used OMC's "PZT2" (12.7nm/V) for the scan, not OMC's "PZT1" (11.3nm/V)?
I mean: there is a possibility that the indication of PZT2 on the screen may not mean PZT2 on the OMC.

Also the response of the PZT is nonlinear and hysteretic.

I'd rather believe the frequency calibration using the cavity peaks (e.g. FSR/modulation freqs) than the table-top calibration of the PZTs.

matthewrichard.todd@LIGO.ORG - 18:57, Tuesday 05 March 2024 (76142)

Good suggestion!

Computing the PZT response from the FSRs we get around 6.3 MHz/V.

And on your note about certainty of using PZT2 response, I am not sure.

 

Images attached to this comment
jennifer.wright@LIGO.ORG - 15:18, Wednesday 06 March 2024 (76162)

I think we usually used the channel PZT2 to perform scans with OMC 003, but yeah, I am not sure if this corresponds to PZT2 on the real OMC. We only use the PZT calibration in the scan analysis to get an initial guess; the final calibrated scan does indeed find the carrier 00 and 45 MHz 00 peaks to fit the non-linearity of the PZT.

H1 SQZ
naoki.aritomi@LIGO.ORG - posted 15:54, Friday 01 March 2024 - last comment - 11:16, Friday 08 March 2024(76078)
LOCK_PMC state in SQZ_MANAGER guardian

I made a LOCK_PMC state in the SQZ_MANAGER guardian. The LOCK_PMC state sits between the LOCK_TTFSS and LOCK_SHG states. It was copied from the LOCK_SHG state, but I commented out the PZT checker because the PMC_PZT_OK function is not currently defined.

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 14:34, Wednesday 06 March 2024 (76156)

Camilla, Nutsinee

SQZ_MANAGER now takes charge of the PMC guardian. For now we commented out FC-related activities in the LOCK_OPO_AND_FC state of SQZ_MANAGER and the LOCKING_HD state in SQZ_LO_LR. Look for "NK March 6" for any changes made today in the SQZ_MANAGER and SQZ_LO_LR guardians. These should be reverted when we have the filter cavity back. For now we can lock the PMC all the way to LO with SQZ_MANAGER, and they will automatically relock themselves when the IFO kills the PSL.

nutsinee.kijbunchoo@LIGO.ORG - 11:16, Friday 08 March 2024 (76212)

We have the filter cavity back. Changes have been reverted.
