Reports until 07:41, Friday 18 July 2025
LHO General
corey.gray@LIGO.ORG - posted 07:41, Friday 18 July 2025 - last comment - 10:40, Friday 18 July 2025(85841)
Fri Ops Day Transition

TITLE: 07/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 1mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY:

H1 has been locked for 1.25 hrs. There were 4 locklosses overnight, each with decent recovery times. Microseism is trending down over the last 24 hrs; the forecast is for high winds (red flag warning) in the afternoon.

Comments related to this report
elenna.capote@LIGO.ORG - 10:25, Friday 18 July 2025 (85846)

The lockloss that occurred at 2025-07-18 08:00 UTC (not tagged as a glitch) was preceded by what appears to be a large kick in the yaw ASC. The first attachment shows the CSOFT Y and CHARD Y signals a few seconds before the lockloss. This is also apparent in the test mass L2 signals in the second attachment.

Images attached to this comment
corey.gray@LIGO.ORG - 08:06, Friday 18 July 2025 (85843)

Of the 4 locklosses overnight, we had (1) ETMx Glitch lockloss.

elenna.capote@LIGO.ORG - 10:40, Friday 18 July 2025 (85847)

The other two locklosses last night seem by eye to have the same behavior (2025-07-18 12:07:24 UTC and 2025-07-18 04:20:15 UTC). Within ~100 ms of the lockloss time, there is something glitchy in the DARM error signal, where the error signal drops sharply. It looks like the glitchy behavior starts in DARM IN1 slightly before ETMX L3 starts behaving oddly, but that's hard to tell since I'm just looking at ndscopes.

Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 22:01, Thursday 17 July 2025 (85840)
Ops Eve Shift End

TITLE: 07/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

Relocking and in MOVE_SPOTS. Relocking after the first lockloss of my shift was hands-off, and this relock has been hands-off so far too, so it's been easy. I did not get a chance to offload the SR3 dither offset onto the sliders (85830), but that can easily be done later, especially since it's already been like that for over 5 years!

We had GRB-Short E581624 come in while we were Observing earlier

LOG:

23:30UTC Observing and have been Locked for 3 hours
    00:14 GRB-Short E581624
02:02 Lockloss
03:34 NOMINAL_LOW_NOISE
    03:36 Observing
04:20 Lockloss

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:21, Thursday 17 July 2025 (85839)
Lockloss

Lockloss at 2025-07-18 04:20 UTC after 46 minutes locked

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 19:03, Thursday 17 July 2025 - last comment - 20:38, Thursday 17 July 2025(85837)
Lockloss

Lockloss at 2025-07-18 02:02 UTC after 5.5 hours locked

Comments related to this report
oli.patane@LIGO.ORG - 20:38, Thursday 17 July 2025 (85838)

03:36 UTC Observing

During relocking I reloaded the h1asc model filters in so Elenna's new filters could be added (diffs).

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 17:25, Thursday 17 July 2025 (85836)
H1 in STANDDOWN

H1 is in stand-down which gave me the opportunity to test the notification on the CDS Overview MEDM

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:32, Thursday 17 July 2025 (85826)
Ops Day Shift End

TITLE: 07/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Locked for 3 hours. We had one lock loss whose cause I'm still not sure of; relocking was straightforward with an initial alignment.
LOG:

H1 General
oli.patane@LIGO.ORG - posted 16:28, Thursday 17 July 2025 (85834)
Ops Eve Shift Start

TITLE: 07/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 16mph Gusts, 10mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.20 μm/s
QUICK SUMMARY:

Observing at 145Mpc and have been Locked for almost 3 hours. Everything is looking good.

H1 ISC
oli.patane@LIGO.ORG - posted 15:49, Thursday 17 July 2025 (85830)
SR3 M1 DITHER P output has been sending out 32 for 5+ years

TJ, Elenna, Oli, Sheila

I noticed a while ago that SR3 M1 DITHER P has been consistently outputting 32 (overview). It seems the OFFSET has been on for most of the last 5+ years (ndscope1). Over 5 years ago it stopped moving around (ndscope2) and has stayed at 32 almost all the time since, probably from when the SR3_CAGE_SERVO stopped being used. I spoke to Sheila; she knew about it and said it is just a remnant of the cage servo that no one bothered to offload into the optic alignment sliders, and that we can offload it to the sliders so it isn't sitting there anymore. We will probably do this at the next lockloss.
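As a rough illustration of what "offloading" an offset means, here is a minimal sketch. The channel names and the unity dither-to-slider gain are illustrative assumptions, not the real H1 calibration (the two paths generally have different gains that must be accounted for), and the FakeEzca class is just a stand-in for the EPICS interface:

```python
# Hypothetical sketch: move a constant DITHER offset into the alignment
# slider, then zero the dither path so the optic does not move.
# Channel names and the gain of 1.0 are ASSUMPTIONS for illustration.

def offload_offset(ezca, dither_off_ch, slider_ch, dither_to_slider_gain):
    """Fold a static dither offset into the alignment slider."""
    dither = ezca[dither_off_ch]
    # Add the equivalent amount to the slider...
    ezca[slider_ch] += dither * dither_to_slider_gain
    # ...then remove it from the dither path.
    ezca[dither_off_ch] = 0.0
    return ezca[slider_ch]

# Dict-backed stand-in for the real ezca object, for illustration only:
class FakeEzca(dict):
    pass

ez = FakeEzca({'SUS-SR3_M1_DITHER_P_OFFSET': 32.0,
               'SUS-SR3_M1_OPTICALIGN_P_OFFSET': 100.0})
offload_offset(ez, 'SUS-SR3_M1_DITHER_P_OFFSET',
               'SUS-SR3_M1_OPTICALIGN_P_OFFSET', 1.0)
```

The net drive to the optic is unchanged: the slider picks up exactly what the dither path gives up.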

Images attached to this report
H1 SEI (SEI)
ibrahim.abouelfettouh@LIGO.ORG - posted 14:19, Thursday 17 July 2025 (85829)
H1 ISI CPS Sensor Noise Spectra Check - Weekly FAMIS 26052

Closes FAMIS 26052. Last checked in alog 85627.

"BSC high freq noise is elevated for these sensor(s)!!!

ITMY_ST1_CPSINF_H3"

By eye, BS_ST2 had elevated counts compared to last week. Otherwise, signals are either comparable or lower.

Non-image files attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 14:15, Thursday 17 July 2025 - last comment - 14:20, Thursday 17 July 2025(85828)
Back to Observing 2035 UTC

Locking was straightforward; we needed to run an initial alignment, but it was all hands-off. There were a bunch of SDFs from commissioning today that we knew would be there.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 14:20, Thursday 17 July 2025 (85831)

Briefly dropped observing to make a sqz adjustment from 2106-2108UTC

H1 CDS
david.barker@LIGO.ORG - posted 12:25, Thursday 17 July 2025 (85825)
HWS camera control software disabled, HWS cameras are permanently active.

Camilla, TJ, Dave:

On 26jun2025 I started my HWS camera control code on the four HWS machines [h1hwsmsr, h1hwsmsr1, h1hwsex and h1hwsey]. The code disables the HWS cameras when H1 is in observation mode and re-enables them on lock-loss.

Data since that time has shown no clear correlation between HWS camera status and the 2Hz DARM comb. The comb appears to have persisted until the 4th July, at which point it diminished.

Camilla and TJ decided we should try turning off the camera control code and keep the HWS cameras enabled.

Following the 11:43 lock loss this morning (at which point the cameras were enabled by the code), I stopped the camera control python code on all four machines between 11:54 and 11:57 PDT. Note that Ctrl-C inside the tmux sessions did not work; I had to "kill -9" the python processes.

Attached camera control MEDM from 12:03 shows the cam-ctrl agents have stopped, with their GPS times not updating (denoted by purple boxes).

For now I have removed the CCTL (Cam Control) box from the CDS overview, but I am keeping the cam_ctrl_ioc running and the channels remain in the DAQ.

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 11:58, Thursday 17 July 2025 - last comment - 15:38, Thursday 17 July 2025(85823)
Designed new CSOFT P low pass

Looking at the measurement of CSOFT from this alog, and remembering that CSOFT P tends to be highly coherent with DARM from 10-15 Hz, I decided to make some adjustments to the CSOFT P lowpass filter that would both suppress noise around 10 Hz and maybe buy back some phase at 1 Hz. I designed a 7 Hz low pass filter with only 40 dB of attenuation. The current lowpass filter design has 60 dB of attenuation, which seems like overkill. I adjusted the low pass to be elliptic, low Q, which gives us back about 5 degrees of phase but reduces the gain at 10 Hz and above by a factor of 10. I didn't get a chance to try it in lock, but I changed the guardian to engage this filter on the way up to NLN (line 3341 of ISC_LOCK). There will be an SDF diff. The new filter is in FM7, the old filter is in FM9.
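For reference, a filter in this spirit can be sketched with scipy. The order, ripple, and sample rate below are guesses for illustration only, not the coefficients actually loaded into the CSOFT P bank:

```python
# Sketch of a 7 Hz elliptic low pass with ~40 dB stopband attenuation.
# N=4, rp=1 dB ripple, and fs=512 Hz are ASSUMPTIONS, not the real design.
import numpy as np
from scipy import signal

fs = 512.0  # assumed model rate
b, a = signal.ellip(4, 1, 40, 7.0, btype='low', fs=fs)

# Compare the response near DC to the response at 10 Hz
w, h = signal.freqz(b, a, worN=[0.1, 10.0], fs=fs)
atten_db = 20 * np.log10(abs(h[1]) / abs(h[0]))
```

The elliptic design trades stopband depth for a sharp transition, which is what buys back phase near 1 Hz while still suppressing the 10 Hz region.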

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 15:38, Thursday 17 July 2025 (85832)

This has reduced the coherence of CSOFT P with DARM. The blue reference trace shows the coherence from last night, and the red live trace from our relock today with the new low pass.

Images attached to this comment
H1 PSL (IOO)
jennifer.wright@LIGO.ORG - posted 11:56, Thursday 17 July 2025 - last comment - 15:32, Monday 21 July 2025(85795)
ISS array work - horizontal scan

Jennie, Rahul

On Tuesday Rahul and I took the measurements for the horizontal coupling in the ISS array currently on the optical table.

The QPD read 9500 e-7 W.

The X position was 5.26 V, the Y position was -4.98 V.

PD   DC Voltage [mV] pk-pk   AC Voltage [mV] pk-pk
1    600                     420
2    600                     380
3    600                     380
4    600                     420
5    800                     540
6    800                     500
7    600                     540
8    800                     540

After thinking about this data I realise we need to retake it, as we should record the mean value for the DC-coupled measurements. This was with a 78 V signal applied from the PZT driver, an input dither signal of 2 Vpp at 100 Hz on the oscilloscope, and, I think, 150 mA pump current on the laser.

Comments related to this report
jennifer.wright@LIGO.ORG - 16:14, Friday 18 July 2025 (85853)

Rahul, Jennie W


Yesterday we went back into the lab and retook the DC and AC measurements with the horizontal dither on, this time using the 'mean' setting and without changing the overall input pointing from the above measurement.

PD   DC Voltage [V] mean   AC Voltage [V] mean
1    -4.08                 -0.172
2    -3.81                  0.0289
3    -3.46                  0.159
4    -3.71                  0.17
5    -3.57                 -0.0161
6    -3.5                   0.00453
7    -2.91                  0.187
8    -3.36                  0.0912

QPD direction   Mean Voltage [V]   Pk-Pk Voltage [V]
X                5.28              2.20
Y               -4.98              0.8

QPD sum is roughly 5V.

 

Next time we need to plug in the second axis of the PZT driver so as to take the dither coupling measurement in the vertical direction.

jennifer.wright@LIGO.ORG - 15:12, Monday 21 July 2025 (85890)

horizontal dither calibration = 10.57 V/mm

dither Vpk-pk on QPD x-direction = 2.2V

dither Vpk-pk on QPD y-direction = 0.8V

dither motion in horizontal direction in V on QPD = sqrt(2.2^2 + 0.8^2)

motion in mm on QPD that corresponds to dither of input mirror = sqrt(2.2^2 + 0.8^2) / 10.57 = 0.222 mm

Code is here for calibration of horizontal beam motion to QPD motion plus calibration of dither measurements.
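The arithmetic above can be checked in a couple of lines (values taken from this entry):

```python
# Reproduce the beam-motion calculation: quadrature sum of the pk-pk
# dither seen on the QPD X and Y axes, converted to mm with the
# 10.57 V/mm horizontal dither calibration quoted above.
import math

calib_v_per_mm = 10.57     # horizontal dither calibration [V/mm]
vpp_x, vpp_y = 2.2, 0.8    # dither pk-pk on QPD X and Y [V]

motion_mm = math.sqrt(vpp_x**2 + vpp_y**2) / calib_v_per_mm
```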

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 15:32, Monday 21 July 2025 (85891)

To work out the relative intensity noise:

RIN = change in power/ power

= ( change in current/ current) / responsivity of PD

= (change in voltage/voltage) / (responsivity * load resistance)

 

Therefore to minimise RIN we want to minimise change in voltage / voltage for each PD.

To get the least coupling to array input alignment we work out

relative RIN coupling = (delta V/ V) / beam motion at QPD

 

This works because the QPD is designed to be in the same plane as the PD array.

 

PD   DC Voltage [V] mean   AC Voltage [mV] pk-pk   Beam Motion at QPD [mm]   Relative Coupling [1/m]
1    -4.08                 420                     0.222                     465
2    -3.81                 380                     0.222                     450
3    -3.46                 380                     0.222                     496
4    -3.71                 420                     0.222                     511
5    -3.57                 540                     0.222                     683
6    -3.5                  500                     0.222                     645
7    -2.91                 540                     0.222                     838
8    -3.36                 540                     0.222                     726

 

These are all a factor of ~50 higher than those measured by Mayank and Shiva; after discussion with Keita, either we need higher-resolution measurements or we need to further optimise the input alignment to the array to minimise the coupling.
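The relative-coupling column follows from the RIN expression above, coupling [1/m] = (ΔV/V) / (beam motion in metres). A quick reproduction using the table values:

```python
# Recompute the relative-coupling column: (AC pk-pk / |DC mean|) divided
# by the 0.222 mm beam motion at the QPD, giving units of 1/m.
dc_v  = [4.08, 3.81, 3.46, 3.71, 3.57, 3.50, 2.91, 3.36]   # |DC mean| [V]
ac_mv = [420, 380, 380, 420, 540, 500, 540, 540]           # AC pk-pk [mV]
motion_m = 0.222e-3                                        # beam motion [m]

coupling = [(ac / 1000.0 / dc) / motion_m
            for dc, ac in zip(dc_v, ac_mv)]
```

The results agree with the tabulated values to well under a percent (small differences come from rounding the beam motion to 0.222 mm).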

H1 SQZ
sheila.dwyer@LIGO.ORG - posted 11:14, Thursday 17 July 2025 - last comment - 08:39, Monday 21 July 2025(85820)
SQZ angle ADF servo back on in guardian

Sheila, Camilla

We ran a couple of squeezing angle scans to check the settings of the ADF servo. 

One thing that we realized is that the ADF Q demod signal is divided by H1:SQZ-ADF_OMC_TRANS_Q_NORM rather than multiplied, which is what we had thought. We changed the coefficient from 0.18 to 5.8. The first png attachment shows that this transforms the blue ellipse into the orange one. It would be a bit better if we first adjusted the demod phase to maximize the Q signal, so that the ellipse would be aligned along the axis and the rescaled version would be more like a circle. However, you can see in the right-side plot that this gives us a reasonably linear readback of sqz angle as we change the RF6 demod angle (which is actually cabled up to RF3 phase) around 150 degrees, where our best squeezing is.

Camilla turned the servo back on in sqzparams. 

For future reference, a slightly better way to do this would be to move the demod phase to maximize Q, do a scan, and set H1:SQZ-ADF_OMC_TRANS_Q_NORM to the ratio (max of Q)/(max of I). Then you can do a smaller scan around the point with the best squeezing, and in sqzparams set sqz_ang_adjust_ang to the readback angle that you think is best.
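The normalization step above can be illustrated with toy data: an I/Q ellipse with unequal axes is rescaled into a circle by dividing Q by the ratio max(Q)/max(I). The amplitudes below are made up for illustration and are not the measured ADF signals:

```python
# Toy sketch of the Q-normalisation: generate an ellipse in the I/Q
# plane, compute q_norm = max(Q)/max(I), and rescale Q so the trace
# becomes a circle. Amplitudes are ASSUMPTIONS for illustration.
import math

amp_i, amp_q = 1.0, 5.8    # unequal demod amplitudes (illustrative)
thetas = [2 * math.pi * k / 360 for k in range(360)]
i_sig = [amp_i * math.cos(t) for t in thetas]
q_sig = [amp_q * math.sin(t) for t in thetas]

q_norm = max(q_sig) / max(i_sig)        # ratio used to rescale Q
q_scaled = [q / q_norm for q in q_sig]

# After rescaling, every point lies (very nearly) on a unit circle,
# so atan2(Q_scaled, I) gives a linear angle readback.
radii = [math.hypot(i, q) for i, q in zip(i_sig, q_scaled)]
```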

 

Images attached to this report
Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:27, Thursday 17 July 2025 (85827)OpsInfo

This didn't work at the start of today's lock as the ADF frequency had been left near 10 kHz. Once I put the ADF back to 322 Hz it seemed to work fine.

For operators, this means that if the squeezing looks bad, running SCAN_SQZANG_FDS alone won't change the SQZ angle. You would need to:

  • Request SQZ_MANAGER to SCAN_SQZANG_FDS
  • Once it's done, if sqz has improved, adjust H1:SQZ-ADF_OMC_TRANS_PHASE to put H1:SQZ-ADF_OMC_TRANS_SQZ_ANG around zero.
    • see the attached screenshot showing the channels to change and ndscope, this is from sitemap > sqz > sqz manager > ADF
  • Request SQZ_MANAGER to FREQ_DEP_SQZ

If the servo is running away, try the above instructions. If that doesn't work, the servo can be turned off by editing use_sqz_angle_adjust = False in sqz/h1/guardian/sqzparams.py. Please alog and tag SQZ.

Images attached to this comment
camilla.compton@LIGO.ORG - 08:39, Monday 21 July 2025 (85884)

Since we've had this servo running, the range has been higher and sqz more stable, see attached.

Images attached to this comment
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:33, Wednesday 16 July 2025 - last comment - 08:38, Friday 18 July 2025(85805)
Lockloss

Lockloss at 2025-07-17 04:11 UTC due to a power issue with ETMX and TMSX. Currently in contact with Dave and Fil is on his way in.

ETMX M0 and R0 watchdogs tripped

ETMX and TMSX OSEMs are in FAULT

ETMX ESD off

ETMX HWWD notified that it would trip soon, so SEI_ETMX was preemptively put into ISI_OFFLINE_HEPI_ON to keep ISI from getting messed up when it trips

 

Comments related to this report
david.barker@LIGO.ORG - 21:46, Wednesday 16 July 2025 (85806)

H1SUSETMX ADC channels zeroed out at 21:11:39. SWWDs did not trip because there is no RMS on the OSEM signals, but the HWWD completed its 20 minute countdown and powered down the three ISI coil drivers at 21:32. This indicates ETMX's top stage OSEMs have lost power.

I've opened WP12692 to cover Fil going to EX to investigate.

david.barker@LIGO.ORG - 23:43, Wednesday 16 July 2025 (85807)

During the recovery, the +24VDC power supply for the SUS IO Chassis glitched, which stopped all the h1susex and h1susauxex models. To recover, I first did a straightforward reboot of h1susauxex (no Dolphin); it came back with no issues.

Rebooting h1susex was more involved; remember that the EX Dolphin switch was damaged by the 06 April 2025 power outage and has no network control. The procedure I used to reboot h1susex was:

  • caput H1:IOP-SEIEX_REMOTE_IPC_PAUSE 1
  • caput H1:IOP-ISCEX_REMOTE_IPC_PAUSE 1
  • caput H1:IOP-SUSEX_REMOTE_IPC_PAUSE 1
  • on h1susex:
  • rtcds stop --all
  • sudo systemctl reboot

When h1susex came back, I verified all the IO Chassis cards were present (they were all there)

I unpaused the SEI and ISC IPC by writing a 0 to their IPC_PAUSE channels.

The HWWD came back in nominal state.

I reset the SUS SWWD DACKILLs and unbypassed the SEI SWWD.

DIAG_RESET to clear all the IPC errors (it did so) and clear DAQ CRCs (they cleared).

Handed systems over to control room (Oli and Ryan S).

david.barker@LIGO.ORG - 23:50, Wednesday 16 July 2025 (85808)

From Fil:

-18VDC Power supply had failed and was replaced.

Power supply is in rack VDD-2, location U25-U28, right-hand supply, label [SUS-C1 C2]

old supply (removed) S1202024

new supply (installed) S1300288

 

david.barker@LIGO.ORG - 08:47, Thursday 17 July 2025 (85814)

Last night's HWWD sequence is shown below. Reminder that at +40mins the SUS part of the HWWD trips, which sets bit2 of the STAT. This opens internal relay switches, but since we don't route the SUS drives through the HWWD unit (too noisy) this has no effect on operations. The delay between 22:52 and 23:20 is because h1iopsusex was down between 23:01 and 23:20.

Images attached to this comment
david.barker@LIGO.ORG - 09:23, Thursday 17 July 2025 (85816)
filiberto.clara@LIGO.ORG - 16:34, Thursday 17 July 2025 (85835)

Fan motor seized on failed power supply.

david.barker@LIGO.ORG - 08:38, Friday 18 July 2025 (85844)

Wed16Jul2025
LOC TIME HOSTNAME     MODEL/REBOOT
23:15:13 h1susauxex   h1iopsusauxex
23:15:26 h1susauxex   h1susauxex  
23:20:21 h1susex      h1iopsusex  
23:20:34 h1susex      h1susetmx   
23:20:47 h1susex      h1sustmsx   
23:21:00 h1susex      h1susetmxpi 
 

LHO General
thomas.shaffer@LIGO.ORG - posted 07:34, Tuesday 15 July 2025 - last comment - 11:56, Thursday 17 July 2025(85760)
Ops Day Shift Start

TITLE: 07/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 15mph Gusts, 9mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.14 μm/s
QUICK SUMMARY: We've only been at low noise for 20 min, but magnetic injections are running now. Maintenance day today.

Comments related to this report
thomas.shaffer@LIGO.ORG - 11:56, Thursday 17 July 2025 (85824)

Maintenance was delayed by 1 hour this day due to a Fermi GRB notice (E581020) that we received on site a few minutes before 1500 UTC. We were not in Observing at the time, but we were in a low noise state. I brought us back into observing at 1510 UTC, where we stayed until one hour after the initial GRB notice.

H1 SEI (ISC)
elenna.capote@LIGO.ORG - posted 10:45, Monday 14 July 2025 - last comment - 15:46, Thursday 17 July 2025(85740)
Trying High Bandwidth control for earthquake

Jim, Elenna

We had a 6.6 earthquake begin rolling in from Panama, so Jim and I tried to take the ASC arm control loops to the high bandwidth state. I also turned off the LSC feedforward which drives the ETMY PUM.

This obviously creates a lot of noise in DARM, but we are curious to see if it helps us ride out a large earthquake.

This consists of several ASC gain and filter changes, plus turning off the LSC feedforward.

Some of these things can be done by hand, but others, like transitioning filters and gains together, have to be done with guardian code to ensure they happen at the same time. I copied and pasted lines of code into a guardian shell.

These are the lines of code that will do everything I mentioned above:

# Raise the CHARD Y gain and switch off low-noise filters
ezca.get_LIGOFilter('ASC-CHARD_Y').ramp_gain(300, ramp_time=10, wait=False)
ezca.switch('ASC-CHARD_Y', 'FM3', 'FM8', 'FM9', 'OFF')

# Switch off DHARD Y low-noise filters
ezca.switch('ASC-DHARD_Y', 'FM1', 'FM3', 'FM4', 'FM5', 'FM8', 'OFF')

# Move CHARD P to the high-bandwidth configuration
ezca.switch('ASC-CHARD_P', 'FM9', 'ON')
ezca.switch('ASC-CHARD_P', 'FM3', 'FM8', 'OFF')
ezca['ASC-CHARD_P_GAIN'] = 80

# Raise the soft-loop gains
ezca.get_LIGOFilter('ASC-DSOFT_Y').ramp_gain(30, ramp_time=5, wait=False)
ezca.get_LIGOFilter('ASC-DSOFT_P').ramp_gain(10, ramp_time=5, wait=False)

ezca.switch('ASC-DHARD_P', 'FM4', 'FM8', 'OFF')

# Turn off the LSC feedforward paths
ezca['LSC-PRCLFF_GAIN'] = 0
ezca['LSC-MICHFF_GAIN'] = 0
ezca['LSC-SRCLFF1_GAIN'] = 0

I saved this as a script called "lownoise_asc_revert.py" in my home directory. This is a bit of a misnomer since it also reverts the LSC feedforward.

We are still locked so far, but we are waiting to see how this goes (R wave just arrived).

Comments related to this report
elenna.capote@LIGO.ORG - 10:48, Monday 14 July 2025 (85741)

We lost lock when the ground motion got to be about 2.5 micron/s.

elenna.capote@LIGO.ORG - 10:58, Monday 14 July 2025 (85742)

This was a "large earthquake" aka within the yellow band on the EQ response zone plot. Since these earthquakes are highly likely to cause lockloss, Jim and I are thinking we could try this high bandwidth control reversion for these earthquakes to see if this can help us survive the earthquake. This would take us out of observing and kill the range (we were at about 70 Mpc before the lockloss), but we could then go back to lownoise once the earthquake passes.

Jim also thinks he can make some adjustments to other seismic controls, but I'll let him explain how that would work since he is the expert.

elenna.capote@LIGO.ORG - 11:12, Monday 14 July 2025 (85743)

I refined the script to include some sleeps, and wrote another script to revert the reversion, putting the ASC and LSC feedforward back in the nominal low noise state.

Both scripts are attached. They should be tested!

Non-image files attached to this comment
jim.warner@LIGO.ORG - 15:46, Thursday 17 July 2025 (85833)

I have added buttons to run modified versions of Elenna's scripts to my seismic overview ISI_CONFIG screen, the smaller red and blue buttons that say "ASC Hi Gn" and "ASC Low noise" in the upper left, but I don't think we are ready to suggest anyone use them yet. I added a bit of code to ask if you are sure, so they shouldn't be very easy to launch accidentally. I'm trying to compile some data to estimate how much run time we lose or could gain before investing much more effort in automating this.

Images attached to this comment