H1 General
ryan.crouch@LIGO.ORG - posted 06:56, Wednesday 19 February 2025 - last comment - 07:04, Wednesday 19 February 2025(82899)
OPS OWL assistance

H1 called for help again at 14:50 UTC due to the TO_NLN timer expiring. By the time I logged in we were ready to go into Observing.

14:53 UTC Observing

Comments related to this report
ryan.crouch@LIGO.ORG - 07:04, Wednesday 19 February 2025 (82900)

There's been a lockloss from a high state (state 558) between the last two acquisitions.

H1 General (CAL, SQZ)
ryan.crouch@LIGO.ORG - posted 01:28, Wednesday 19 February 2025 - last comment - 15:33, Tuesday 11 March 2025(82898)
OPS OWL report SDF diffs

To get into Observing I had to accept some SDF diffs for SQZ and PCALY. There was also still a PEM CS excitation point open. There was a notification about a PCALY OFS servo malfunction, so I looked at it: it was railed at -7.83, so I toggled it off and back on, which brought it back to a good value. I also did not receive a call; a voicemail just appeared.

09:21 UTC observing

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 11:18, Wednesday 19 February 2025 (82909)SQZ

H1:SQZ-LO_SERVO_IN1GAIN was left at -15 by accident; reverted it to -12 and saved in SDF.

francisco.llamas@LIGO.ORG - 15:33, Tuesday 11 March 2025 (83310)

DriptaB, FranciscoL

SDF diffs for PCALY were incorrect. The timing of these changes matches the h1iscey reboot done the same day (82902). Today, around 19:00 UTC (almost three weeks later), we used EPICS values from the Pcal calibration update done in September (80220) to revert the changes. Saved changes in OBSERVE and SAFE.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:07, Tuesday 18 February 2025 (82897)
OPS Eve Shift Summary

TITLE: 02/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

IFO is in MAINTENANCE and LOCKING_ALS

Locking has been near-impossible due to microseism and opportunistic earthquakes.

We've essentially been dealing with the same issues the whole shift. IFO alignment is actually quite good, since we have caught DRMI over 10 times automatically and very quickly, but we end up losing lock due to clear instability caused by EQs and exacerbated by secondary microseism above 1 micron/s.

Three of this shift's EQs have been over magnitude 5.0, and we have lost lock around the arrival of the S or R waves each time, with 2 of those locklosses at TRANSITION_FROM_ETMX, around an hour into acquisition, resulting in the long lock acquisitions we're experiencing. Other than TRANSITION, we've lost lock at or before DRMI, with ASC and LSC signals oscillating heavily.

As I type this, we just lost lock for the 3rd time at around LOWNOISE_ESD_ETMX, so there is potentially something wrong with this state specifically, though the CPS signals do show noise at the time of the lockloss. The good news is I haven't been experiencing ALS issues at all this shift.

LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
19:34 | SAF | Laser Haz | LVEA | YES | LVEA is laser HAZARD!!! | 06:13
15:57 | FAC | Kim, Nelly | EY, EX, FCES | n | Tech clean | 18:40
17:27 | CDS | Jonathan, Fil, Austin | MSR, CER | n | Move temp switch and camera server | 20:10
17:28 | VAC | Ken | EY | n | Disconnect compressor electrical | 19:49
17:35 | VAC | Travis | EX | n | Compressor work | 18:33
17:46 | VAC | Gerardo, Jordan | MY | n | CP4 check | 19:16
17:49 | SQZ | Sheila, Camilla | LVEA | YES | SQZ table meas. | 19:39
18:19 | SUS | Jason, Ryan S | LVEA | YES | PR3 OpLev recentering at VP | 21:05
18:19 | SUS | Matt, TJ | CR | n | ETMX TFs for rubbing | 21:21
18:37 | FAC | Tyler | Opt Lab | n | Check on lab | 19:03
18:39 | CDS | Erik | CER | n | Check on switch | 19:16
18:41 | FAC | Kim | LVEA | yes | Tech clean | 19:51
18:56 | VAC | Travis | Opt Lab | n | Moving around flow bench | 19:03
18:58 | PEM | Robert | LVEA | yes | Check on potential view ports | 19:51
19:04 | FAC | Tyler | Mids | n | Check on 3IFO | 19:25
20:21 | VAC | Gerardo | LVEA | yes | Checking cable length on top of HAM6 | 20:50
20:27 | CDS | Fil | EY | n | Updating drawings | 22:14
20:46 | PEM | Robert | LVEA | yes | Viewport checks | 21:26
20:51 | VAC | Janos | EX | n | Mech room work | 21:25
21:18 | OPS | TJ | LVEA | - | Sweep | 21:26
21:19 | FAC | Chris | X-arm | n | Check filters | 22:54
22:44 | PEM | Robert | LVEA | yes | Setup tests | 00:06
22:46 | SQZ | Sheila, Camilla, Matt | LVEA | yes | SQZ meas at racks | 00:06
00:19 | CDS | Dave, Marc | EY | N | Timing card issue fix | 01:19
H1 CDS
david.barker@LIGO.ORG - posted 21:39, Tuesday 18 February 2025 (82896)
Dummy IOC running to green up the EDC

Jonathan, Patrick, Dave:

Following the move of the MC1, MC3, PRM and PR3 cameras from the old h1digivideo1 server to h1digivideo4 (running the new EPICS IOC), two channels per camera were no longer present. This meant the EDC has been running with 8 disconnected channels.

To "green up" the EDC until such time as we can restart the DAQ, I am running a dummy IOC on cdsws33 which serves these channels.

These cameras are numbered sequentially, CAM[11-14], and the eight channels in question are:

H1:VID-CAM11_AUTO 
H1:VID-CAM11_XY
H1:VID-CAM12_AUTO
H1:VID-CAM12_XY
H1:VID-CAM13_AUTO
H1:VID-CAM13_XY
H1:VID-CAM14_AUTO 
H1:VID-CAM14_XY   
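For reference, a minimal sketch of what such a placeholder Channel Access server could look like in Python with pcaspy; the actual dummy IOC on cdsws33 may well be an EPICS softIoc with a .db file instead, and the record parameters and default values below are assumptions, not the real configuration.

# Hypothetical placeholder CA server for the eight disconnected camera channels.
# Assumes pcaspy is installed and the host is on the H1 EPICS network.
from pcaspy import SimpleServer, Driver

# The eight channels listed above, served with harmless static defaults.
pvdb = {
    f'VID-CAM{n}_{suffix}': {'prec': 0, 'value': 0}
    for n in range(11, 15)          # CAM11 through CAM14
    for suffix in ('AUTO', 'XY')
}

class DummyDriver(Driver):
    """Static driver: every PV simply holds its default value."""
    def __init__(self):
        super().__init__()

if __name__ == '__main__':
    server = SimpleServer()
    server.createPV('H1:', pvdb)    # full names become H1:VID-CAM11_AUTO, etc.
    driver = DummyDriver()
    while True:
        server.process(0.1)         # service Channel Access requests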
 

H1 CDS
david.barker@LIGO.ORG - posted 21:34, Tuesday 18 February 2025 - last comment - 11:57, Wednesday 19 February 2025(82895)
EY timing error tracked to bad timing fanout port

Daniel, Patrick, Jonathan, Erik, Fil, Ibrahim, Marc, Dave:

Starting around lunchtime, and getting more frequent after 2pm, the Timing system was showing errors with EY's fanout port_5 (the sixth port). This port sends timing to h1iscey's IO Chassis timing card.

At EY, Marc and I replaced the SFPs in the fanout port_5 and in h1iscey's timing card. At this point we could not get port_5 to sync. We tried replacing the timing card itself, but no sync was possible using the new SFPs. Installing the original SFPs restored the sync, but the timing problem was still there. Moving to the unused port_6 (seventh port) of the fanout fixed the problem. We put the original timing card back into the IO Chassis, so at this point all the hardware was original and the fanout SFP had simply been moved from port_5 to port_6.

 

Comments related to this report
david.barker@LIGO.ORG - 11:57, Wednesday 19 February 2025 (82910)
H1 General
ibrahim.abouelfettouh@LIGO.ORG - posted 19:27, Tuesday 18 February 2025 (82893)
OPS Midshift Update

We're having locking troubles due to Microseism and EQs.

After failing to get past DRMI and experiencing ~10 locklosses, we finally did it and got all the way to powering up. Then a 5.6 EQ hit and we lost lock at TRANSITION_ETMX, with clear seismic signs in the few seconds leading up to the lockloss (similar to the last one). Lock reacquisition has been as rocky as pre-EQ. Meanwhile, microseism has continued to increase, with traces now passing the 1 micron/s threshold, making lock reacquisition harder yet.

We've just gotten back to PRMI post EQ so hoping this will be a successful reacquisition.

H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 17:01, Tuesday 18 February 2025 - last comment - 21:04, Tuesday 18 February 2025(82892)
Something in SQZ Oscillating at 14kHz

Sheila, Matt, Daniel, Camilla.

We went to the SQZ racks to check the loops; all (LO, OPO, CLF) looked stable, though the OPO looked peaky. Once we opened the beam diverter and were squeezing, there was a large oscillation in ndscope, which the SR785 showed to be at 14kHz.

Issues seen:

We think it could be the FSS, the PMC, or the laser going multi-mode. We can see that EOMRMSMON gets worse when this noise appears. We will continue to investigate tomorrow.

Tagging OpsInfo: if you see this oscillation (i.e. SQZ is injected but DARM looks terrible, like NO SQZ), try taking SQZ out and in again. If this doesn't help, go to Observing with NO SQZ tonight, using Ryan's wiki instructions.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 21:04, Tuesday 18 February 2025 (82894)

1st photo, CLF OLG

2nd photo, LO OLG

3rd photo, OPO OLG

4th photo, spectrum of LO loop I Mon.  We also looked at OPO and SHG demod spectra and saw 14kHz in those as well.

Images attached to this comment
H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 16:32, Tuesday 18 February 2025 - last comment - 12:20, Monday 24 February 2025(82891)
SQZ_ANG_SERVO set to False

Due to so much SQZ strangeness over the weekend, Sheila set use_sqz_ang_servo in sqzparams.py to False, and I changed the SQZ_ANG_ADJUST nominal state to DOWN and reloaded SQZ_MANAGER and SQZ_ANG_ADJUST.
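For context, the change amounts to roughly the following; the file layout and reload mechanism are assumptions here, only the flag name and the nominal-state change are from this entry.

# In sqzparams.py (read by the SQZ Guardian code): disable the angle servo.
use_sqz_ang_servo = False

# In the SQZ_ANG_ADJUST Guardian module: park the node by making DOWN nominal.
nominal = 'DOWN'

# SQZ_MANAGER and SQZ_ANG_ADJUST then need their code reloaded (e.g. via each
# node's LOAD request) for the changes to take effect.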

We set the H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG to a normal good value of 190. If the operator thinks the SQZ is bad and wants to change this to maximize the range AFTER we've been locked 2+ hours, they can. Tagging OpsInfo.

Comments related to this report
camilla.compton@LIGO.ORG - 12:35, Thursday 20 February 2025 (82937)

Daniel, Sheila, Camilla

This morning we set the SQZ angle to 90deg and scanned to 290deg using 'ezcastep H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG -s '3' '+2,100''. Plot attached.
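A rough Python equivalent of that scan using pyepics is sketched below, assuming the '-s 3' argument sets a 3-second pause per step and that we also read back the ADF error signal named in the next paragraph; this is an illustration, not the command that was actually run.

# Hypothetical re-implementation of the ezcastep phase scan with pyepics.
import time
from epics import caget, caput

PHASE_PV  = 'H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG'
ERRSIG_PV = 'H1:SQZ-ADF_OMC_TRANS_SQZ_ANG'

results = []
caput(PHASE_PV, 90)                  # scan start quoted above
for _ in range(100):                 # 100 steps of +2 deg -> 90 to 290 deg
    time.sleep(3)                    # settle time between steps
    phase = caget(PHASE_PV)
    results.append((phase, caget(ERRSIG_PV)))
    caput(PHASE_PV, phase + 2)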

You can see that the place with the best SQZ isn't a good linear range for H1:SQZ-ADF_OMC_TRANS_SQZ_ANG, which is why the SQZ angle servo has been going unstable. We are leaving the SQZ angle servo off.

Daniel noted that we expect the ADF I and Q channels to rotate around zero, which they aren't doing, so we should check that the math calculating these is what we expect. We struggled to find the SQZ-ADF_VCXO model block; it's in the h1oaf model (so that the model runs faster).

Images attached to this comment
camilla.compton@LIGO.ORG - 12:20, Monday 24 February 2025 (83010)

Today Mayank and I scanned the ADF phase via 'ezcastep H1:SQZ-ADF_VCXO_PLL_PHASE -s '2'  '+2,180''. You can see in the attached plot that the I and Q signals show sine/cosine functions as expected. We think we may be able to adjust this phase to improve the linearity of H1:SQZ-ADF_OMC_TRANS_SQZ_ANG around the best SQZ, so that we can again use the SQZ ANG servo. We started testing this (plot attached), but found that the SQZ was very frequency dependent and the alignment needed changing (83009), so we ran out of time.

Images attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 16:30, Tuesday 18 February 2025 (82890)
Ops Day Shift End

TITLE: 02/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Maintenance day. Notable tasks from today's maintenance are below. We recently had another lock loss; we have started initial alignment and will start main locking shortly.

LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
19:34 | SAF | Laser Haz | LVEA | YES | LVEA is laser HAZARD!!! | 06:13
15:33 | FAC | Kim, Nelly | Opt Lab | n | Tech clean | 15:57
15:51 | VAC | Ken | EX | n | Compressor electrical | 17:19
15:52 | VAC | Janos, contractor | EX | n | Compressor work | 17:35
15:57 | FAC | Kim, Nelly | EY, EX, FCES | n | Tech clean | 18:40
15:59 | FAC | Eric | Mech room | n | Heater 3a replacement | 17:27
16:58 | - | Christina, truck | VPW | n | Large truck delivery to VPW | 16:59
17:27 | CDS | Jonathan, Fil, Austin | MSR, CER | n | Move temp switch and camera server | 20:10
17:28 | VAC | Ken | EY | n | Disconnect compressor electrical | 19:49
17:35 | VAC | Travis | EX | n | Compressor work | 18:33
17:46 | VAC | Gerardo, Jordan | MY | n | CP4 check | 19:16
17:49 | SQZ | Sheila, Camilla | LVEA | YES | SQZ table meas. | 19:39
17:50 | SUS | Jason | LVEA | yes | PR3 OpLev alignment | 18:04
18:19 | SUS | Jason, Ryan S | LVEA | YES | PR3 OpLev recentering at VP | 21:05
18:19 | SUS | Matt, TJ | CR | n | ETMX TFs for rubbing | 21:21
18:37 | FAC | Tyler | Opt Lab | n | Check on lab | 19:03
18:39 | CDS | Erik | CER | n | Check on switch | 19:16
18:41 | FAC | Kim | LVEA | yes | Tech clean | 19:51
18:56 | VAC | Travis | Opt Lab | n | Moving around flow bench | 19:03
18:58 | PEM | Robert | LVEA | yes | Check on potential view ports | 19:51
19:04 | FAC | Tyler | Mids | n | Check on 3IFO | 19:25
20:21 | VAC | Gerardo | LVEA | yes | Checking cable length on top of HAM6 | 20:50
20:27 | CDS | Fil | EY | n | Updating drawings | 22:14
20:46 | PEM | Robert | LVEA | yes | Viewport checks | 21:26
20:51 | VAC | Janos | EX | n | Mech room work | 21:25
21:18 | OPS | TJ | LVEA | - | Sweep | 21:26
21:19 | FAC | Chris | X-arm | n | Check filters | 22:54
22:44 | PEM | Robert | LVEA | yes | Setup tests | 00:06
22:46 | SQZ | Sheila, Camilla, Matt | LVEA | yes | SQZ meas at racks | 00:06
00:19 | CDS | Dave, Marc | EY | N | Timing card issue fix | 01:19
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:28, Tuesday 18 February 2025 (82889)
OPS Eve Shift Start

TITLE: 02/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT_USEISM
    Wind: 9mph Gusts, 5mph 3min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.77 μm/s
QUICK SUMMARY:

IFO is in INITIAL_ALIGNMENT and MAINTENANCE

We're still recovering from maintenance today with a few key issues:

Given all that, we shall begin locking when initial alignment is done.

LHO VE
janos.csizmazia@LIGO.ORG - posted 14:56, Tuesday 18 February 2025 - last comment - 11:36, Tuesday 25 March 2025(82886)
New EX clean air supply is installed
The new clean air supply (compressor, tank, dryer and a series of extra filters), which was received at the end of 2024, was installed in the EX mechanical room as a replacement for the old system. The installation work was carried out over the last few weeks; the last step - the startup by Rogers Machinery - happened today.
This new system delivers 69 cfm of air using three 7.5 HP motors (22.5 HP in total). In comparison, the old system delivered 50 cfm using five 5 HP motors (25 HP in total). Moreover, the new system (unlike the old one) has an automatic dew point monitor and a complete pair of redundant dryer towers.
So this new system is a major improvement. The reason for the 69 cfm limit (and not more) is that cooling of the compressor units in the MER remains feasible; moreover, the filters and the airline do not need any upgrades, as they can still accommodate the airflow.
On paper, both the new and old systems are able to produce air with a dew point of at least -40 deg F. During the startup, however, the new system did much better than this - it was ~-70 deg F (and dropping) - as you can see in the attached photo.

Last but not least, huge congratulations to the Vacuum team for the installation: this was the first time the installation of a clean air system was carried out by LIGO staff, so this is indeed a huge achievement. Also, big thanks to Chris, who cleaned some parts for the compressor, to Tyler, who helped a lot with the heavy lifting, and to Richard & Ken, who did the electrical wiring.

From next week, we will repeat this same installation at the EY station.
Images attached to this report
Comments related to this report
travis.sadecki@LIGO.ORG - 11:36, Tuesday 25 March 2025 (83549)

Connection to the purge air header was completed today.  Next up, acceptance testing.

Images attached to this comment
H1 SUS
matthewrichard.todd@LIGO.ORG - posted 14:29, Tuesday 18 February 2025 - last comment - 14:58, Tuesday 18 February 2025(82884)
Checking SUS TFs for rubbing

[ Matt, TJ ]

We looked at the transfer functions of the suspensions of the Test Masses, SR3, and PRM (top stage of each) to look for signs of rubbing. This is a limited number of transfer functions; however, there does seem to be some peak shifting in ITMY P.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 14:58, Tuesday 18 February 2025 (82887)

PRM LPY, and BS LPY. I had to turn off the BS OpLev damping, which is mentioned in the Jeff K TF instructions wiki.

Images attached to this comment
H1 General
thomas.shaffer@LIGO.ORG - posted 14:27, Tuesday 18 February 2025 (82885)
LVEA Swept

I swept the LVEA after maintenance activities today. I unplugged one power strip in the CER, and heard a UPS in one of the vacuum racks going off. I silenced the alarm and talked to Gerardo and Richard, but we should keep an ear out.

H1 AOS
jonathan.hanks@LIGO.ORG - posted 14:02, Tuesday 18 February 2025 (82882)
WP 12333 Installed sw-lvea-aux1 in the CER, moved 4 cameras to the switch and to h1digivideo4
As per WP 12333 we installed a PoE switch (sw-lvea-aux1) in the CER, moved four cameras onto the switch, and migrated them to h1digivideo4. This is to expand and test our infrastructure to host more digital cameras.

Jonathan H. and Austin F. moved the switch from its test location in the MSR to the CER.  As part of the work we moved sw-lvea-aux up 1 space in the rack to make more room and installed a cable management bar beneath the switches to provide some strain relief.

Fil C. set up the DC power for the switch. We had wanted to also convert sw-lvea-aux to full DC power, but Fil would like us to split the load between the two switches more evenly before removing the single AC power supply from sw-lvea-aux.

Patrick worked to migrate the cameras from h1digivideo1 to h1digivideo4.
H1 AOS (OpsInfo)
jason.oberling@LIGO.ORG - posted 13:29, Tuesday 18 February 2025 (82883)
PR3 Optical Lever

J. Oberling, R. Short

After the recent PRC alignment work (moving PR3 to better center the beam on PR2) the OpLev beam had fallen off of its QPD, so today I attempted to re-center the QPD.  Using only the translation stages, I could not re-center the QPD since the horizontal translation stage hit its max travel limit well before the beam was back on the QPD (SUM counts went from ~700 to ~850, well short of the usual ~22k).  I enlisted Ryan S. to help and we set out to adjust the OpLev beam alignment from the transmitter.  Turns out that the beam gets clipped before we can get it centered on the QPD.  We first tried to move the translation stage away from its limit but could only get the beam on 2 of the QPD segments (segments 1 and 3) before the SUM counts started dropping; at no point could we get the beam onto QPD segments 2 and 4 in this configuration.  We then moved the horizontal stage to its limit and tried to align the beam onto the QPD; this worked a little better, but still not great.  The best SUM counts we could get and have a "centered" QPD was ~17k, roughly 75% of the total SUM counts for this OpLev, so still some significant clipping.  We decided to unclip the beam to make the OpLev more trustworthy (we've had questions about this OpLev in the past).  Max SUM counts were ~22.7k, and this gives a Yaw reading of roughly -12.5 µrad; any closer to zero and the beam begins to clip again.  We left the OpLev in this configuration and will investigate the clipping at a later date.

Tagging OpsInfo: Around -12.5 µrad is the current "yaw zero" for the PR3 OpLev until we can figure out and alleviate the clipping.

As for the clipping, my best guess at this point is that we're clipping somewhere on/in the ~4' long tube that runs between the exit viewport for the OpLev receiver and the OpLev's QPD. We'll have to climb on top of HAM3 to check this, which we will do at a later date.

This closes WPs 12336 and 12341.

Edit: Fixed typo in yaw reading in bolded sentence (12.7 to 12.5).

H1 SQZ
camilla.compton@LIGO.ORG - posted 12:33, Tuesday 18 February 2025 (82881)
SHG EOM, AOM and fiber pointing realigned, powers measured.

Sheila, Camilla

Sheila and Vicky moved the AOM and realigned it in 80830.

Summary: Today we measured the powers on SQZT0, see attached photo. We aimed to understand where we were losing our green light, as we had to turn down the OPO setpoint last week (82787). Adjusting the SHG EOM, AOM and fiber pointing dramatically increased the available green light. The OPO setpoint is back to 80uW with plenty of extra green available.

After taking the attached power measurements we could see that the SHG conversion was fine: 130mW of green out of the SHG for 360mW of IR in.

We were losing a lot of power through the EOM: 96mW in to 76mW out (we expect ~100%). We could see this clipping in yaw by taking a photo. Sheila adjusted the EOM alignment, increasing the power out to 95mW.

Then the AOM and fiber were aligned following 72081:
1) Set the ISS drive point at 0V to make only the 0th order beam and check for ~90% AOM throughput with a power meter. Started with 95mW in, 50mW out. Improved to 95mW in, 80mW out = 84%.
2) Set the ISS drive point at 5V and align the AOM to maximize the 1st order beam (which is to the left of the 0th order beam, looking from the laser side of SQZT0). After the AOM alignment, the 1st order beam was 27.5 mW and the 0th order beam was 49 mW. We measured the AOM throughput again, including both 0th and 1st order beams: 77.5mW/93mW = 83%.
3) Set the ISS drive point at 5V and align the fiber by maximizing H1:SQZ-OPO_REFL_DC_POWER. I needed to adjust the SHG waveplate to reduce the power to stop the PD saturating.
After this alignment, the SHG output is 42.7 mW, the pump going to the fiber is 20.5 mW, and the rejected power is 2.7 mW. The ISS can be locked with OPO trans of 80uW while the ISS control monitor is 7.0. Also checked the OPO temperature.
Note that the power does change depending on the SHG PZT voltage, possibly caused by an alignment change from the PZT.
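As a quick cross-check, the throughput and conversion numbers quoted above can be recomputed directly from the measured powers; all values below are taken from this entry.

# Recompute the efficiencies quoted above from the measured powers (mW).
shg_conversion = 130 / 360      # green out of SHG per IR in        ~36%
eom_before     = 76 / 96        # EOM throughput before realignment ~79%
eom_after      = 95 / 96        # after Sheila's realignment        ~99%
aom_0th_before = 50 / 95        # 0th-order AOM throughput at 0V    ~53%
aom_0th_after  = 80 / 95        # after alignment                   ~84%
aom_5v_total   = 77.5 / 93      # 0th + 1st order at 5V drive       ~83%
print(f"SHG: {shg_conversion:.0%}, AOM (0V): {aom_0th_after:.0%}, AOM (5V): {aom_5v_total:.0%}")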

As an ISS control monitor value of 7.0 is a little high with 26.5mW of SHG launched (OPO unlocked), I further turned the SHG launch power down to 20mW and adjusted the HAM7 rejected light. The ISS control monitor was left at 5.4, which is closer to the middle of the 0-10V range.

Non-image files attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:44, Tuesday 18 February 2025 (82880)
Tue CP1 Fill

Tue Feb 18 10:08:06 2025 INFO: Fill completed in 8min 3secs

Gerardo confirmed a good fill curbside. TCmins [-51C, -48C] OAT (+2C, 36F) DeltaTempTime 10:08:06.

Missing data in the plot is due to the DAQ restart at that time.

Images attached to this report
LHO VE
jordan.vanosky@LIGO.ORG - posted 15:04, Friday 07 February 2025 - last comment - 16:14, Tuesday 18 February 2025(82687)
Corner RGA Non Responsive

The corner RGA (located on output arm between HAM4 and HAM5) lost connection to the control room computer and stopped collecting data around 6pm yesterday (2/6/25). The software gave an error stating "Driver Error: Run could not be stopped".

I could not ping the unit from the terminal, but Erik confirmed the port is still open on the network switch, so it seems to be an issue with the RGA electronics. Other RGAs connected to this computer can still be accessed.

I restarted the software and attempted to reconnect to the RGA, but no luck. I will have to wait until next Tuesday maintenance to troubleshoot. This unit had been collecting data for the past ~6 months without issue. I will perform a hardware reset at the next opportunity to try and bring the unit back online, otherwise we have a new PrismaPro we can replace this unit with during the next vent.

Comments related to this report
jordan.vanosky@LIGO.ORG - 16:14, Tuesday 18 February 2025 (82888)

2/18/25

Today, Erik was able to reconfigure the IPs of the RGAs. We are able to ping all three RGAs currently connected to the network, and corner monitoring scans have resumed. Filaments are turned on for all three RGAs: Corner, HAM6, and EX. Reminder: both HAM6 and EX have 10 l/s pumps on the RGA volume.
