H1 ISC
elenna.capote@LIGO.ORG - posted 11:57, Wednesday 18 June 2025 (85156)
DC6 P does not contribute noise to DARM

I ran a noise budget injection into DC6 (the centering loop for the POP WFS), using a broadband excitation from 10-100 Hz. Based on the results in the DTT template (attached), there is no measurable contribution to noise in DARM when injecting about 100x above ambient in the DC6 P control (bottom left plot; the ref trace shows the quiet time, the live trace shows the injection). We can include this channel in the ASC noise budget, but our code won't even generate a trace since the reference DARM and injection DARM shown here are exactly the same (top left plot; the blue reference trace shows the quiet time, the red live trace shows the injection time).

At a later time I will check DC6 Y.
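For reference, here is a minimal sketch of the projection arithmetic behind this kind of broadband injection, assuming you already have the quiet-time and injection-time amplitude spectral densities as numpy arrays on a common frequency vector. This is not the actual ASC noise budget code; the function and variable names are placeholders.

# Minimal sketch of a broadband noise-budget projection (not the ASC noise
# budget code). Inputs are hypothetical ASDs for DARM and the DC6_P control
# signal, from a quiet reference time and from the injection time.
import numpy as np

def project_ambient(darm_inj, darm_quiet, ctrl_inj, ctrl_quiet):
    """Projected ambient contribution of the control signal to DARM."""
    # Injection-induced excess in DARM and in the control channel,
    # subtracted in quadrature
    darm_excess = np.sqrt(np.clip(darm_inj**2 - darm_quiet**2, 0.0, None))
    ctrl_excess = np.sqrt(np.clip(ctrl_inj**2 - ctrl_quiet**2, 1e-30, None))
    # Coupling magnitude is only trustworthy where the injection dominates
    # (here the drive was ~100x above ambient)
    coupling = darm_excess / ctrl_excess
    # Scale the quiet-time control spectrum by the coupling
    return coupling * ctrl_quiet

With the reference and injection DARM traces identical, as seen here, darm_excess is consistent with zero, so the projected contribution vanishes and the noise budget code produces no trace.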

Images attached to this report
H1 General
ibrahim.abouelfettouh@LIGO.ORG - posted 11:47, Wednesday 18 June 2025 - last comment - 13:46, Wednesday 18 June 2025(85155)
A2L All Run

Ran the A2L script for all suspensions during commissioning today. Screenshot attached.

 

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 13:46, Wednesday 18 June 2025 (85157)

These values have been added to ISC_LOCK.py (screenshot of specific gains attached).
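For context, below is a minimal sketch of how A2L gains like these are typically written to the suspension L2 DRIVEALIGN P2L/Y2L filter banks from Guardian code, where the ezca EPICS interface is available (as in ISC_LOCK.py). The optics, gain values, and exact channel pattern are illustrative assumptions, not today's measured numbers; the screenshot has the real values.

# Illustrative sketch only: writing A2L gains to the L2 DRIVEALIGN banks.
# Assumes Guardian's `ezca` object is in scope (as in ISC_LOCK.py).
a2l_gains = {
    'ETMX': {'P2L': 2.8, 'Y2L': 4.9},   # placeholder values
    'ETMY': {'P2L': 1.5, 'Y2L': 1.2},   # placeholder values
}
for optic, dofs in a2l_gains.items():
    for dof, gain in dofs.items():
        ezca['SUS-%s_L2_DRIVEALIGN_%s_GAIN' % (optic, dof)] = gain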

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:23, Wednesday 18 June 2025 (85154)
Wed CP1 fill

Wed Jun 18 10:11:15 2025 INFO: Fill completed in 11min 11secs

 

Images attached to this report
H1 SEI
anthony.sanchez@LIGO.ORG - posted 09:49, Wednesday 18 June 2025 (85153)
H1 ISI CPS Noise Spectra Check - Weekly

H1 ISI CPS Noise Spectra Check - FAMIS 26048

New and improved: the H1 ISI CPS noise spectra check now includes HAM1!

HAM1 currently has some very loud VAC equipment attached to it and running, which may be why HAM1 looks so terrible relative to the rest of the HAMs.

Non-image files attached to this report
H1 AOS
camilla.compton@LIGO.ORG - posted 09:39, Wednesday 18 June 2025 - last comment - 14:13, Wednesday 18 June 2025(85149)
SQZ ASC and FC beamspot control back on

Sheila, Camilla

We can see large SQZ changes dependent on the OPO PZT value; we've seen this before. Some alignment changes from this PZT should be adjusted for by the FC AS and FC beamspot control. The FC beamspot control has been off since the vent, but we've turned it on again in the hope of reducing this dependency.

Yesterday we needed to turn the ASC on to improve high freq sqz 85147, and since we've started using the THERMALIZATION guardian 85083 to slowly adjust the SRCL offset, our squeezing and ASC error signals are reduced slightly (see below). We have turned the SQZ ASC back on as we expect this new guardian will stop the ASC from running away.

Now that we have the THERMALIZATION guardian working, the ADF-measured sqz angle change has reduced (see below), and we want to try turning SQZ_ANG_SERVO back on, which will take a little tuning of settings. You can see in this plot that when the OPO PZT changed, the servo would have adjusted the sqz angle too.

Also touched the SHG launch waveplates to decrease the rejected power in H1:SQZ-SHG_FIBR_REJECTED_DC_POWER.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:13, Wednesday 18 June 2025 (85159)

SQZ ASC was running away at the start of the lock so I've turned SQZ ASC off again.

I tried re-measuring the sensing matrix. The result was different from that measured in September (80373) with the YAW sensor swapped (see output of /sqz/h1/scripts/ASC/python AS42_sensing_matrix_cal.py below), but when I tried it later the sensing matrix seemed to be different again. I expect you need to start in good squeezing for it to work well, which we were not when I tried it.

Plan to remeasure the sensing matrix more carefully (as in 80373) and then try a new one again.

Also tried the SQZ_ANG_ADJUST servo using the ADF today; as the ASC was running away this was confusing, so it was left off.

Using ZM4 and ZM6.
PIT Sensing Matrix is:
[[-0.0006984 -0.0011916]
 [ 0.00143    0.001    ]]
PIT Input Matrix is:
[[  994.44305222  1184.97834103]
 [-1422.05356468  -694.51902767]]
YAW Sensing Matrix is:
[[-0.0031535  0.       ]
 [ 0.        -0.00165  ]]
YAW Input Matrix is:
[[-317.10797527   -0.        ]
 [  -0.         -606.06060606]]
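As a sanity check, the printed input matrices are just the matrix inverses of the measured sensing matrices; for example, inverting the PIT sensing matrix above reproduces the PIT input matrix:

# Quick check: the input matrix is the inverse of the measured sensing
# matrix (PIT numbers copied from the output above).
import numpy as np

pit_sensing = np.array([[-0.0006984, -0.0011916],
                        [ 0.00143,    0.001    ]])
print(np.linalg.inv(pit_sensing))
# [[  994.44305222  1184.97834103]
#  [-1422.05356468  -694.51902767]]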
H1 CDS
david.barker@LIGO.ORG - posted 09:07, Wednesday 18 June 2025 (85152)
h1guardian1 rebooted to fix AWG issue

Jonathan, Erik, Dave, Ibrahim, Tony:

Following the front end restarts yesterday, there has been a spate of Guardian AWG connection issues, e.g. alog 85130.

Erik recommended that we reboot h1guardian1 at the next opportunity to force new front end connections for all Guardian nodes, rather than restart each node as the problem arises.

Following a lock loss this morning, the control room gave us the go-ahead to reboot h1guardian1 at 08:49 Wed 18 Jun 2025 PDT. Like TJ's last reboot 12 days ago, h1guardian1 came back and restarted all of its nodes quickly and without any problems.

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 09:06, Wednesday 18 June 2025 (85151)
Lockloss 14:56

Unknown cause lockloss

However our two guesses are:

H1 CDS
david.barker@LIGO.ORG - posted 08:59, Wednesday 18 June 2025 (85150)
New HAM1 vacuum gauge PT100_MOD2 added to MEDM and FOM

Reminder that for the past month HAM1 pressure was being reported by a temporary "H1" version of the old cold-cathode PT100A (called H1:VAC-LY_X0_PT100B_PRESS_TORR), which uses an ADC in h0vacly and calculates the pressure from the raw voltage signal.

Yesterday Patrick installed the h0vaclx full Beckhoff readout of this gauge via an ethernet connection. The channel name for this gauge from now onwards is H0:VAC-LX_X0_PT100_MOD2_PRESS_TORR

For now I'm showing both channels on the Vacuum MEDM and FOMs: H1:VAC-LY_X0_PT100B_PRESS_TORR on top, H0:VAC-LX_X0_PT100_MOD2_PRESS_TORR below.

Note that the H1 channel reads a bigger number; the voltage signal increased slightly when Gerardo plugged the ethernet cable into h0vaclx yesterday.
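For reference, gauges read out this way typically use a log-linear voltage-to-pressure calibration; a minimal sketch is below, with placeholder constants (the actual calibration used in h0vacly may differ).

# Sketch of a log-linear voltage-to-pressure conversion of the sort used for
# gauges read through a raw ADC channel. The slope/offset constants here are
# placeholders, not the actual h0vacly calibration.
def volts_to_torr(voltage, slope=1.0, offset=10.0):
    """Convert gauge output voltage to pressure in Torr (placeholder cal)."""
    return 10.0 ** ((voltage - offset) / slope)

# A small voltage change maps to a noticeable pressure change, consistent
# with the H1 channel reading a slightly bigger number after the cable swap.
print(volts_to_torr(2.30), volts_to_torr(2.35))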

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:33, Wednesday 18 June 2025 (85148)
OPS Day Shift Start

TITLE: 06/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING as of 11:57 UTC

Looks like we were able to recover fully automatically from a 5.8 earthquake in Mexico last night.

 

H1 General (GRD)
oli.patane@LIGO.ORG - posted 22:04, Tuesday 17 June 2025 (85147)
Ops Eve Shift End

TITLE: 06/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Currently Observing at 145 Mpc and have been Locked for almost 2 hours. Everything looking good.

Two hours into our previous lock, our range was slowly dropping and we could see that SQZ wasn't looking very good. Trending back the optic alignments, Camilla saw that ZM4 and ZM6 weren't where they were supposed to be because the ASC hadn't been offloaded before all the updates earlier today (ndscope1). To fix this, we popped out of Observing and turned on the SQZ ASC for three minutes, which made the squeezing better (ndscope2)! This better squeezing has persisted into this next lock stretch.

After the 01:30 lockloss (85145), I sat in DOWN for a couple minutes while restarting a few Guardian nodes, since awg was restarted earlier today and this should help with the guardian node errors (like in 85130). These are the ones I could think of that probably all use awg: ALS_{X,Y}ARM, ESD_EXC_{I,E}TM{X,Y}, SUS_CHARGE, PEM_MAG_INJ. Tagging Guardian.
 

LOG:

23:30 Observing and have been Locked for over 1 hour
00:24 Popped out of Observing quickly to run the SQZ ASC for a few minutes
00:27 Back into Observing
01:30 Lockloss
    - Sat in DOWN for a couple minutes while I restarted some Guardian nodes
    - We ended up in CHECK_MICH_FRINGES but that didn't help, so I started an IA
03:05 NOMINAL_LOW_NOISE
03:08 Observing                                                                                                          

Start Time System Name Location Lazer_Haz Task Time End
15:22 SAF LASER SAFE LVEA SAFE LVEA is LASER SAFE ദ്ദി( •_•) 15:37
20:49 ISS Keita, Rahul Optics lab LOCAL ISS array work (Rahul out 23:45) 00:28
Images attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 18:49, Tuesday 17 June 2025 - last comment - 20:44, Tuesday 17 June 2025(85145)
Lockloss

Lockloss @ 06/18 01:30 UTC

Comments related to this report
oli.patane@LIGO.ORG - 20:44, Tuesday 17 June 2025 (85146)

03:08 Observing

H1 CDS
david.barker@LIGO.ORG - posted 17:20, Tuesday 17 June 2025 - last comment - 17:28, Tuesday 17 June 2025(85141)
DAQ restart of PSLDBB and EDC restart for Vacuum LX PT100 and digivideo CAM37 additions (and some removals)

The DAQ was restarted for a second time today at 12:41 (0-leg) and 12:50 (1-leg) for three reasons:

1) h1psldbb had a bad DAQ status following model change at 09:15 to remove an IPC sender

2) H0EPICS_VACLX.ini has new PT100 HAM1 gauge channels

3) H1EPICS_DIGVIDEO.ini has new CAM37 ISCT1-REFL channels (camera was added recently)

I regenerated H1EPICS_DIGVIDEO.ini using generate_camera_daq_ini.py, which correctly added the CAM37 channels as a new server machine, but reverted CAMs [21, 23, 24, 26] back to "old" server settings. Remember these cameras were moved to a new server, but lack of masking forced them back to the old server (digivideo2); at that point the EDC was expecting a new set of channels, so as a workaround I built a dummy IOC to simulate these channels to keep the EDC happy. So, to make a long story short, these cameras are back with their original PV set and digivideo_dummy_ioc is no longer needed. To test this I shut down the IOC with no loss of EDC connections, and verified that the Guardian CDS_COPY "from" list was still functioning.

 

Comments related to this report
david.barker@LIGO.ORG - 17:28, Tuesday 17 June 2025 (85142)

Detailed list of DAQ channel changes (9097 additions, 91 removals)

Non-image files attached to this comment
H1 OpsInfo (ISC)
jennifer.wright@LIGO.ORG - posted 16:57, Tuesday 17 June 2025 (85136)
Instructions for running DARM offset step to do output loss meas

This test calculates how much loss we have between ASC-AS_C (the anti-symmetric port) and the DCPDs at the output of the OMC (where strain is derived from).

  1. This test is set up to run in NLN.
  2. Turn off the OMC ASC: go to sitemap -> OMC -> OMC Control, note down the MASTER GAIN value in the circled section of the attached photo, then change this value to 0.
  3. Go to /ligo/gitcommon/labutils/darm_offset_step and run:

conda activate labutils

python auto_darm_offset_step.py

The script turns on one notch in the DARM LSC loop, then changes the PCAL line heights so all the power is in just two frequencies.

At the end it reverses both these changes.

It can be stopped using Ctrl-C, but this will not restore the PCAL line heights to their default values, so you may have to use the time machine.

After setting up the PCAL, the script steps the DARM offset level in 120 s steps; I think it takes about 21 minutes to run.

After the script finishes, please put the OMC ASC MASTER GAIN back to its original value.
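For anyone curious what the stepping loop amounts to, here is a minimal sketch (not the real auto_darm_offset_step.py), assuming pyepics and placeholder channel/offset values:

# Minimal sketch only -- NOT the real auto_darm_offset_step.py. The channel
# name and offset values are placeholders; use the actual script in
# /ligo/gitcommon/labutils/darm_offset_step for real measurements.
import time
from epics import caget, caput

OFFSET_CHAN = 'H1:OMC-READOUT_X0_OFFSET'   # assumed/placeholder channel name
offsets = [8e-6, 10e-6, 12e-6, 14e-6]      # placeholder offset values

nominal = caget(OFFSET_CHAN)               # remember the starting offset
try:
    for value in offsets:
        caput(OFFSET_CHAN, value)
        time.sleep(120)                    # dwell 120 s so each step is resolvable
finally:
    caput(OFFSET_CHAN, nominal)            # restore the offset even if interrupted

Unlike the real script (see the Ctrl-C caveat above), this sketch only restores the starting offset in a finally block; it does not touch the PCAL lines or the DARM notch at all.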

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:39, Tuesday 17 June 2025 (85140)
Ops Eve Shift Start

TITLE: 06/17 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 142Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 4mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY:

We are Observing and have been Locked for almost 1.5 hours. SQZ doesn't look good, so I might pop out of Observing in another hour to run the SQZ scan align. Also, if we lose lock I will restart a couple of guardian nodes that use AWG, since that was restarted earlier today and we've already had one awg error pop up in ALS_YARM (85130).

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:38, Tuesday 17 June 2025 (85137)
OPS Day Shift Summary

TITLE: 06/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 22:26 UTC

TUES Maintenance Activities:

We had a very productive maintenance day during which the following was completed

LVEA was swept - alog 85128

*Only listing completed WP’s - close your permits if work is done.

Lock Reacquisition:

Lock recovery was quite rough due to a few issues

  1. ITMY Oscillation - While locking ALSY, we noticed that it was oscillating at ~3 Hz. No alarms had sounded other than the ISI/SUS trips expected from the CDS/maintenance work earlier in the morning, which had since been successfully reverted. Ryan C discovered that the L1 BOSEMs were also oscillating independently of the ALS WFS (which was the initial hypothesis). Then we found a worrying increase in the IY reaction chain gain (an apparent increase of a few tens of thousands!). This appears to have started when the CDS upgrade was done, even though the right values were saved into the SDF SAFE files. CDS advised that any signals from this time are not to be trusted, but clearly something must have moved, since the 3 Hz signal was still ringing (albeit ringing down for the last 3 hrs). We managed to find the appropriate filter banks, IY_R0_L2DAMP_R_GAIN and IY_R0_L2DAMP_P_GAIN, identified them as the culprits, and turned them off. After the involvement of Ryan C, Tony, Sheila, Elenna, Keita, Jonathan, Dave, EJ, Oli and myself, we came to the conclusion that we would test turning the gains back on (nominally 1.5 and -2.5, not 10k) with a ramp. We did this and there were no issues. Weird. So essentially CDS is saying that there was no gain, and power cycling the gain fixed the IY motion, which was seen in IY L2 and the IY L1 OSEMs, as well as in ALSY during ALS. Elenna still thinks something is weird with the IY reaction chain so is investigating. Picture of Tony's IY power spectrum below. (alog 85135)
  2. ALS Y fault during scan-align. ALSY faulted during scan-align, causing a guardian node error; I had to MANUAL out to continue locking. This is now a known issue whereby ALS scan-align (not specific to X or Y) fails due to an AWG issue. The SUS_CHARGE and ALS guardians will be restarted by Oli once we have our next lockloss. (alog 85130)
  3. IMC locking issues. The CDS work today threw the IMC out of alignment, so we had to move the MC1, MC2, MC3 and PZT sliders back to values from a time when we were successfully auto-locking.
  4. SEI, PSL and SUS were all transitioned to their SAFE states (Safe State Transition Wiki) and then transitioned back out (Safe State De-transition Wiki).
  5. SDF Screenshots Linked

LOG:

Start Time System Name Location Lazer_Haz Task Time End
15:22 SAF LASER SAFE LVEA SAFE LVEA is LASER SAFE ദ്ദി( •_•) 15:37
15:07 FAC Kim, Nellie LVEA N Technical Cleaning 16:12
15:40 VAC Jordan, Gerardo LVEA N Anchor Drill work 17:31
15:42 OPS' Camilla LVEA N Laser SAFE transition, ISCT1 check 16:14
15:42 3:00 Christina LVEA N Walkabout 17:08
16:05 FAC Randy LVEA N Inventory 20:49
16:12 FAC Kim EX N Technical Cleaning 17:58
16:13 FAC Nellie EX N Technical Cleaning 17:15
16:19 PSL Jason LVEA N PSL Inventory + Cable pull 17:08
16:23 3IFO Betsy, Mitchell, LIGO India Delegation LVEA N Tour, 3IFO Walkabout + Rick 18:20
16:28 VAC Richard LVEA N Check on vac 18:48
16:30 AOS Jennie Optics Lab Local ISS Array 17:21
16:42 ISS Keita Optics lab Local Joining Jennie W in the optics lab 19:15
16:43 CDS Erik Remote N Restarting EndX IOC 16:53
16:57 EE Fil LVEA & Both Ends N Looking for scopes & talking to Richard 18:55
17:08 FAC Christina End stations N Getting Pictures of hardware 18:25
17:13 ISC Sheila, Matt, Camilla LVEA N ISCT1 Laser Panels 17:41
17:27 Bee Tyler LVEA N Being a Busy Bee keeper and intercepting Betsy 17:35
17:27 ISS Rahul Optics Lab Local Helping Keita with ISS work 19:15
17:28 ISS Jennie W Optics Lab Local Working on ISS with Keita 19:15
17:42 ISC Camilla ISCT table N ISCT table work 17:58
17:44 PEM Ryan C LVEA n Checking Dust mons 18:18
17:45 VAC Gerardo Mid & EY N Gauge replacements 18:56
17:58 FAC Nellie, Kim FCES N Technical Cleaning 18:32
18:20 3IFO Mitchell, Jim, 3IFO team member LVEA N 3IFO inventory check 18:23
18:25 3IFO Brijesh, Jim LVEA N 3IFO Inventory 18:43
18:34 TCS Camilla LVEA N TCS Part Search 18:45
18:57 VAC Gerardo LVEA N Vac Gauge Reconnection 19:06
19:58 OPS Tony LVEA N Sweep 20:49
20:49 ISS Keita, Rahul Optics lab LOCAL ISS array work 22:55
20:58 FAC Chris EX, MX N Air handler checks 22:12
21:11 PSL Rick, 3IFO Delegation PSL Anteroom N PSL Anteroom Tour 22:06
21:21 ISS Jennie W Optics lab Local Working on ISS system 22:51
Images attached to this report
H1 SUS (CDS)
erik.vonreis@LIGO.ORG - posted 16:12, Tuesday 17 June 2025 - last comment - 17:50, Tuesday 17 June 2025(85135)
Excessive movement in ITMY may have been caused by bad IPC table in OMC0

Trend of H1:IOP-SUS_ITMY_WD_OSEM1_RMSOUT shows increased motion, including two peaks, during the 10 minutes post-RCG upgrade when OMC0 (see alog 85120) was clobbering IPCs.

The attached screenshot has cursors at the approximate start and end of OMC0 clobbering IPCs.  RMS remained high until guardian was started 30 minutes later, after which ITMY continued to ring until guardian was again restarted.

We will attempt to trace the clobbered IPCs to see if they plausibly could have driven ITMY.

Images attached to this report
Comments related to this report
erik.vonreis@LIGO.ORG - 17:37, Tuesday 17 June 2025 (85143)

The attached list shows the mapping from OMC0 IPCs to the IPCs that were clobbered during the ten minutes OMC0 was running on the wrong IPC table.

 

Non-image files attached to this comment
erik.vonreis@LIGO.ORG - 17:50, Tuesday 17 June 2025 (85144)

ITMX, which received the same clobbered channel as ITMY, also showed a spike in movement during the same period, but was properly stilled by guardian.

 

 

Images attached to this comment