LHO VE
david.barker@LIGO.ORG - posted 10:24, Wednesday 01 May 2024 (77541)
Wed CP1 Fill

Wed May 01 10:13:10 2024 INFO: Fill completed in 13min 6secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 10:13, Wednesday 01 May 2024 (77538)
Lockloss at 17:12 UTC during commissioning

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1398618745

PRM and SR3 saturations before the lockloss. This was 2.5 hours into the lock, so we still haven't been able to take a thermalized calibration measurement. The only invasive, potentially lockloss-causing work going on was some PEM tests by Robert.

LHO General
thomas.shaffer@LIGO.ORG - posted 08:00, Wednesday 01 May 2024 - last comment - 13:12, Wednesday 01 May 2024(77536)
Ops Day Shift Start

TITLE: 05/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.13 μm/s
QUICK SUMMARY: Just got back into Observing after a lock loss, fully auto relock. Planned commissioning today from 9-12 PT

Comments related to this report
camilla.compton@LIGO.ORG - 08:34, Wednesday 01 May 2024 (77537)SQZ

After yesterday's SQZ realigning 77530 we're nearly back to the squeezing levels we had before the output arm alignment shift 77427: 4.3dB above 1kHz, 3.3dB ~350Hz.  Plot attached.

Images attached to this comment
ryan.crouch@LIGO.ORG - 13:12, Wednesday 01 May 2024 (77548)

I made a small change to the verbal_alarms tests in test.py to ignore the TEST node being in error, since I plan on creating another state to test/learn some camera stuff and don't want to annoy everyone in the control room with it going into error. Verbal alarms could use a restart to reflect these changes.
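For reference, a minimal sketch of the kind of exclusion involved, with hypothetical helper names (the actual test.py structure will differ):

```python
# Hypothetical sketch: exclude the TEST guardian node when flagging
# nodes in error, so development work doesn't trigger alarms.
IGNORED_NODES = {'TEST'}

def nodes_in_error(node_status):
    """Return guardian nodes in error, excluding nodes used for development.

    node_status: dict mapping node name -> status string, e.g. {'ISC_LOCK': 'OK'}.
    """
    return [name for name, status in node_status.items()
            if status == 'ERROR' and name not in IGNORED_NODES]
```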

H1 General
oli.patane@LIGO.ORG - posted 00:05, Wednesday 01 May 2024 (77535)
Ops EVE Shift End

TITLE: 05/01 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: We are Observing and have been Locked for 1.5 hours. Overall a quiet night with two almost fully automated relocks. Reminder that the LVEA is still Laser Hazard.
LOG:

LVEA is still in LASER HAZARD

22:49 Detector Locked for 52 mins and just got into Observing
22:50 Lockloss after 1m 14s Observing

22:56 Lockloss at LOCKING_ALS
23:08 Lockloss from TRANSITION_DRMI_TO_3F
23:49 NOMINAL_LOW_NOISE
23:52 Observing

03:35 Superevent S240501an

04:34 Lockloss
04:49 Lockloss from TURN_ON_BS_STAGE2
05:38 NOMINAL_LOW_NOISE
05:40 Observing

Start Time System Name Location Laser_Haz Task Time End
23:07 VAC Jordan, Janos EY n Turn off pump 23:26
23:46   Betsy, Brian O'Reilly MY n Approved activity 00:46
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:51, Tuesday 30 April 2024 - last comment - 22:41, Tuesday 30 April 2024(77533)
Lockloss

Lockloss 05/01 04:34UTC

Comments related to this report
oli.patane@LIGO.ORG - 22:41, Tuesday 30 April 2024 (77534)

05:40 Observing

H1 General
oli.patane@LIGO.ORG - posted 20:40, Tuesday 30 April 2024 (77532)
Ops Eve Midshift Status

Have been Locked and Observing for almost 4 hours, and just got a notification about superevent S240501an

LHO General
ryan.short@LIGO.ORG - posted 16:00, Tuesday 30 April 2024 (77521)
Ops Day Shift Summary

TITLE: 04/30 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Busy maintenance day, but the initial alignment and lock acquisition that followed were straightforward. Lost lock shortly after starting to observe.

The LVEA remains Laser HAZARD for more potential viewport tests during commissioning time this week.

LOG:

Start Time System Name Location Laser_Haz Task Time End
14:30 FAC Chris, Tyler MX, MY - HVAC fan greasing 15:35
15:00 CAL Jeff LVEA - HAM6 racks 15:45
15:01 FAC Kim, Karen FCES - Technical cleaning 15:33
15:03 FAC Mitchell MX - Grabbing case 15:18
15:10 VAC Gerardo MX, EX - Turbopump checks 17:40
15:11 SAF LASER HAZARD LVEA YES LVEA is LASER HAZARD Ongoing
15:13 SAF TJ LVEA YES Transition to HAZARD 15:19
15:15 FAC Eric MER - Heating coil maintenance 17:22
15:16 ISC Sheila, Jennie, Minhyo CR - Single bounce measurements 20:40
15:18 VAC Jordan, Janos All - CP tank repairs 17:05
15:18 VAC - EY - Dewar jacket pumping 17:05
15:22 ISC Jenne +1 LVEA - Tour 15:50
15:26 IAS Jason, RyanC LVEA - FARO surveying 18:46
15:28 SAF Travis All - Capital inventory 16:56
15:33 FAC Karen EY - Technical cleaning 16:30
15:33 FAC Kim EX - Technical cleaning 17:21
15:34 SEI Jim, Mitchell, TJ LVEA - Craning 3IFO storage container 17:15
15:35 FAC Tyler OSB Roof - Inspection 16:00
15:37 PEM Robert, Anamaria LVEA YES Beam hunting (VIEWPORTS OFF) 20:09
15:51 FAC Ken FCTE, LVEA - Cable tray installation 19:02
15:51 SAF Fil All - Interlock checks 18:35
16:01 FAC Tyler EX - Checking for bees 17:22
16:02 CAL Jeff LVEA - Getting SR785 data 16:07
16:41 SQZ Camilla, Naoki, Andrei LVEA - SQZT0 YES Adjust SHG pump AOM/fiber 18:10
17:21 FAC Karen, Kim LVEA - Tech clean 18:45
17:22 FAC Tyler OSB roof n Checking roof status 17:28
17:40 VAC Gerardo LVEA - Capital inventory 18:52
17:44 ISC Sheila LVEA - Turning off sidebands 17:50
17:52 SEI Jim FCES - Troubleshooting HAM8 GS13 18:56
18:15 IAS Tyler LVEA - FARO surveying 18:40
18:49 SUS Jason LVEA YES Troubleshooting ITMX oplev 18:59
19:00 ISC Sheila LVEA - Turning sidebands back on 19:04
19:37 SQZ Terry, Kar Meng Opt Lab LOCAL SHG work Ongoing
19:52 VAC Gerardo MY - Capital inventory 21:09
20:12 FAC Ken FCTE - Electrical work 21:12
20:20 PEM Robert, Anamaria LVEA - Cleanup & sweep 21:39
20:00 CDS Dave EX, EY, CER - Looking for laptops 20:54
H1 General
oli.patane@LIGO.ORG - posted 15:53, Tuesday 30 April 2024 - last comment - 16:53, Tuesday 30 April 2024(77528)
Lockloss

Lockloss at 04/30 22:50 UTC from unknown cause

Comments related to this report
oli.patane@LIGO.ORG - 16:53, Tuesday 30 April 2024 (77531)

23:52UTC Observing

H1 General
oli.patane@LIGO.ORG - posted 15:50, Tuesday 30 April 2024 (77527)
Ops EVE Shift Start

TITLE: 04/30 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 6mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:

Have been Locked for 52 minutes and just got to Observing

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 15:45, Tuesday 30 April 2024 (77525)
X-Mid and X-End FAMIS Task for Turbo Pumps

Functionality test was done on the following stations:

X-Mid turbo pump station;
     Scroll pump hours: 215.2
     Turbo pump hours: 125
     Crash bearing life is at 100%

X-End turbo pump station;
     Scroll pump hours: 7143.2
     Turbo pump hours: 1093
     Crash bearing life is at 100%

FAMIS tasks 24850 and 24874.

H1 ISC
jennifer.wright@LIGO.ORG - posted 15:45, Tuesday 30 April 2024 - last comment - 04:52, Friday 10 May 2024(77520)
OMC Scans to investigate possible OFI bad throughput

Jennie W, Sheila

 

Today we took OMC scans to help diagnose what is going on with our alignment through the OFI - that is, what the mode-matching is at the old alignment (as of Monday the 22nd) and at our new alignment (as of this morning).

 

Sheila turned off the sidebands before the test, and we initially had the ETMs and ITMX misaligned for the single-bounce configuration.

 

Old alignment: SR3 M1 YAW OFFSET = -125 microradians

SR3 M1 PIT OFFSET = -437 microradians

SR2 M1 YAW OFFSET = -421 microradians

SR2 M1 PIT OFFSET = -64 microradians

 

Due to PEM measurements we switched from single bounce off ITMY to single bounce off ITMX.

 

Locked time = 1 minute from GPS 1398534847

Unlocked time = 1 minute from GPS 1398534984

Scan = 200 s starting at 1398535070 GPS

 

New alignment: SR3 M1 YAW OFFSET =  120.2 microradians

SR3 M1 PIT OFFSET = 437.9 microradians

SR2 M1 YAW OFFSET = 2061.7 microradians

SR2 M1 PIT OFFSET = -5.5 microradians
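
For reference, offsets like these are typically written through EPICS; a sketch using pyepics, with channel names assumed to follow the usual H1 suspension convention (verify against the actual MEDM screens before use):

```python
from epics import caput  # pyepics

# Assumed channel names (H1:SUS-<OPTIC>_M1_OPTICALIGN_<P|Y>_OFFSET convention);
# these are illustrative, not confirmed from this entry.
new_alignment = {
    'H1:SUS-SR3_M1_OPTICALIGN_Y_OFFSET': 120.2,   # urad
    'H1:SUS-SR3_M1_OPTICALIGN_P_OFFSET': 437.9,   # urad
    'H1:SUS-SR2_M1_OPTICALIGN_Y_OFFSET': 2061.7,  # urad
    'H1:SUS-SR2_M1_OPTICALIGN_P_OFFSET': -5.5,    # urad
}
for channel, value in new_alignment.items():
    caput(channel, value)
```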

 

Locked time = 1 minute from 1398538330 GPS

Unlocked time = 1 minute from 1398538461 GPS

Scan = 200 s starting at 1398537927 GPS

Dark time with IMC offline and fast shutter closed = 1398538774 GPS
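
For anyone repeating the analysis, the data for these windows can be fetched with gwpy; a sketch assuming the OMC DCPD sum channel name (substitute whatever channel the analysis actually uses):

```python
from gwpy.timeseries import TimeSeries

CHANNEL = 'H1:OMC-DCPD_SUM_OUT_DQ'  # assumed channel name

# New-alignment segments from the GPS times above.
locked   = TimeSeries.get(CHANNEL, 1398538330, 1398538330 + 60)   # 1 min locked
unlocked = TimeSeries.get(CHANNEL, 1398538461, 1398538461 + 60)   # 1 min unlocked
scan     = TimeSeries.get(CHANNEL, 1398537927, 1398537927 + 200)  # 200 s scan
```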

 

Mode mis-match measurements pending...

 

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 21:40, Monday 06 May 2024 (77673)

The loss through the OMC appears to have increased after whatever happened to the output path on April 22nd.

I again used Sheila's OMC loss calculation code, as we previously did in this entry.

Power on refl diode when cavity is off resonance: 29.698 mW

Incident power on OMC breadboard (before QPD pickoff): 30.143 mW

Power on refl diode on resonance: 5.153 mW

Measured efficiency ((DCPD current / responsivity if QE=1) / incident power on OMC breadboard): 56.5 %

Assumed QE: 100 %

Power in transmission (for this QE): 17.029 mW

HOM content inferred: 14.415 %

Cavity transmission inferred: 66.501 %

Predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 56.494 %

OMC efficiency for 00 mode (including pick-off BS, cavity transmission, and QE): 66.009 %

Round-trip loss: 3495 ppm

Finesse: 335.598
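
The headline ratios can be checked directly from the measured powers; a sketch of the arithmetic (the HOM and cavity-transmission numbers come from Sheila's code and are taken as inputs here, and the per-coupler transmission is an assumed round number):

```python
import math

# Measured/inferred values from this analysis.
p_incident = 30.143e-3  # W, incident on OMC breadboard before QPD pickoff
p_trans    = 17.029e-3  # W, transmitted power assuming QE = 1
hom        = 0.14415    # inferred higher-order-mode content
finesse    = 335.598

eff = p_trans / p_incident   # measured efficiency -> 0.565 (56.5 %)
eff_00 = eff / (1 - hom)     # 00-mode efficiency  -> 0.660 (66.0 %)

# Finesse relates to total round-trip power loss: F ~ 2*pi / (T_in + T_out + L_rt).
# With an assumed ~7600 ppm transmission per coupler, the implied round-trip
# loss lands near the quoted 3495 ppm.
total_rt = 2 * math.pi / finesse  # ~ 18722 ppm total round-trip loss
l_rt = total_rt - 2 * 7600e-6     # ~ 3522 ppm excess loss

print(f"eff={eff:.3%}  eff_00={eff_00:.3%}  L_rt={l_rt*1e6:.0f} ppm")
```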

We compare these values to those found from our scans on April 16th, and it seems the HOM content has increased substantially, the incident power has decreased, and the measured and predicted cavity efficiency have decreased by 3%.

It would be good to cross-check these figures against the other methods of checking the losses, such as the DARM offset step and the mode mis-match I still need to calculate from the mode scan taken on the same day.

jennifer.wright@LIGO.ORG - 04:52, Friday 10 May 2024 (77749)

I forgot to run the same analysis for the locked and unlocked measurements we got at the old (pre April 23rd) alignment of SR2 and SR3.

Power on refl diode when cavity is off resonance: 25.306 mW

Incident power on OMC breadboard (before QPD pickoff): 25.685 mW

Power on refl diode on resonance: 5.658 mW

Measured efficiency ((DCPD current / responsivity if QE=1) / incident power on OMC breadboard): 54.1 %

Assumed QE: 100 %

Power in transmission (for this QE): 13.885 mW

HOM content inferred: 19.870 %

Cavity transmission inferred: 67.970 %

Predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 54.061 %

OMC efficiency for 00 mode (including pick-off BS, cavity transmission, and QE): 67.467 %

Round-trip loss: 3289 ppm

Finesse: 339.266

H1 ISC (AOS, SUS)
anamaria.effler@LIGO.ORG - posted 15:35, Tuesday 30 April 2024 (77522)
ITMX and ITMY oplev shifts after incursion

Robert, Jason, Anamaria

Today we used the oplevs to determine the CP alignment for both ITMX and ITMY. We had to move the sender around to find all the beams, so here I record the shift in the beam location in case someone later wants to look at drifts. The ITMs were realigned slightly after our test, so it's not good to 1 urad, but the oplevs drift around by more than that anyway. I also had to choose times after lockloss and before relocking, so that ASC would not interfere.

        ITMX P   ITMX Y   ITMY P   ITMY Y
Before      -8       -1      -20        1
After      -10        5        6        2

(all values in urad)

Since people don't open the receiver box or the sender box very often, we note:
a) The beams on the oplevs are as large as the QPD which is not great, but depends what they're used for.
b) We found the ITMX oplev beam to be clipping on the nozzle baffle. Jason helped us shift the QPD and then we realigned to this new, more central location.
c) Replacing the sender cover is very difficult to do without causing a shift in the oplev beam, though we were quite careful not to touch the telescope while doing so. This is why we were not able to center them better.
d) I suppose the last few urad should be done with the QPD stages, but one has to be careful to check from time to time that it doesn't walk out of the aperture over time (which is smaller since the installation of nozzle baffles).

Images attached to this report
H1 ISC (SQZ)
camilla.compton@LIGO.ORG - posted 14:52, Tuesday 30 April 2024 - last comment - 16:42, Tuesday 30 April 2024(77518)
Translating SQZ beam for better AS_C alignment and OMC throughput

Naoki, Sheila, Camilla

After the output alignment shift reported in 77427, we tried to revert SQZ to the last successful SQZ-OMC scan in 77515, but when we increased throughput, the AS_C alignment was bad. Today we translated the beam to get good throughput on AS_A/B and AS_C, and a centered beam on AS_C, with the OMC DC centering loop running. This was a move of ZM4 by -1000 urad and ZM5 by -650 urad in yaw, as reported by the DAMP_Y_INMON channels.

The attachment shows the previous April 16th OMC-SQZ scan compared to today's translation, and how we left the ZMs:

We're not sure how this change will affect in-lock SQZ, as Vicky had to make big changes in 77400, and we may have seen that good OMC scan alignment isn't good SQZ alignment.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 16:42, Tuesday 30 April 2024 (77530)

Naoki, Sheila, Camilla

This alignment didn't give us good in-vac SQZ.

We tried taking the SQZ sliders back to before the output arm alignment shift, when we had these PSAMS (2024/04/17 23:38 UTC), with the addition of the -1000 urad and -650 urad offsets on ZM4 and ZM5. This made SQZ worse.

We moved ZM6 to maximize RF3 and RF6, then went to anti-squeezing and ran SCAN_ALIGNMENT_FDS twice. SDFs attached.

Naoki adjusted the SQZ angle to squeezing and reran SCAN_SQZANG; this got us back to 4+ dB of SQZ and over 150 Mpc of range. There is still room for improvement, but this is much better than last week.

Images attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 14:27, Tuesday 30 April 2024 - last comment - 15:50, Tuesday 30 April 2024(77517)
re-ran A2L script

This morning I came into the control room as the magnetic injections were finishing, and ran the a2l script while the charge measurements ran.

I did this because after Jennie Wright ran A2L yesterday afternoon it was not well tuned: 77495

When Jennie ran it yesterday, the coherence was over the threshold for all 8 gains, so all 8 were adjusted. This morning I ran it and the coherence was below threshold for ITMX P2L and ETMY Y2L, so those were not updated. I increased their amplitudes in the run_all_a2L.sh script, but as I was re-running it the IFO unlocked due to the charge measurements.
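For context, a schematic sketch of this coherence-gated update logic (names and the threshold value are illustrative, not the actual run_all_a2L.sh internals):

```python
import numpy as np
from scipy.signal import coherence

# Illustrative threshold; the value the script actually uses is not stated here.
COHERENCE_THRESHOLD = 0.5

def line_is_coherent(darm, drive, fs, line_freq):
    """Check whether an injected angular line is coherent with DARM.

    darm, drive: time series sampled at fs (Hz); line_freq: injection frequency (Hz).
    """
    f, coh = coherence(darm, drive, fs=fs, nperseg=int(10 * fs))
    return coh[np.argmin(np.abs(f - line_freq))] >= COHERENCE_THRESHOLD

# Gains are only updated for DOFs whose line passes this check, which is why
# ITMX P2L and ETMY Y2L were skipped in this run.
```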

The attached screenshots show how it changed the optics it did run for; this is a large difference from the values that Jennie got yesterday. Jennie W ran the script 20 hours into the lock, while I ran it 37 hours into the same lock, so both runs were with a well-thermalized IFO.

It seems there may be two problems we are facing with tuning A2L: first, our process doesn't always work well to minimize A2L; and second, the values may be changing over the course of a lock.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 15:50, Tuesday 30 April 2024 (77526)

Ran again right at the start of the lock this time, with higher amplitudes.

The coherence improved but is still bad.

We will have to look into this tomorrow.

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 10:22, Sunday 28 April 2024 - last comment - 15:40, Tuesday 30 April 2024(77473)
Restarted picket-fence, it had stopped at 03:37:04 UTC Sun 28 April 2024

Another picket fence issue at 03:37 UTC, I think the third in the past few weeks. I restarted the process on nuc5.

Comments related to this report
david.barker@LIGO.ORG - 11:29, Monday 29 April 2024 (77485)
edgard.bonilla@LIGO.ORG - 15:40, Tuesday 30 April 2024 (77524)

Thank you for this summary. I will go dig in the code to see what is happening.

On first glance: the array of data should never be empty on an update, but somehow it happens sometimes. I will add a failsafe (or an assert) so that the code can handle this exception and hopefully be more robust.
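
A minimal sketch of such a failsafe, assuming a generic update callback (the real picket-fence code structure may differ):

```python
import logging

def on_update(data):
    """Handle a new block of seismometer data without crashing on empty input."""
    if data is None or len(data) == 0:
        # Failsafe: updates should never be empty, but this has been seen in
        # practice; log and skip rather than let the monitor die.
        logging.warning("picket fence: received empty data array, skipping update")
        return
    handle_block(data)  # hypothetical downstream processing

def handle_block(data):
    """Placeholder for the normal per-update processing."""
    pass
```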
 

H1 ISC
sheila.dwyer@LIGO.ORG - posted 07:34, Saturday 27 April 2024 - last comment - 15:38, Tuesday 30 April 2024(77460)
Step in Clean range around 5:20 this morning

There was a drop in clean range around 5:20 this morning, which doesn't seem to be a repeat of the output arm shift we had earlier this week.

Attached is a trend of the DARM BLRMS (based on DARM error, no calibration corrections or cleaning) and a comparison of the cleaned and corrected range against the CAL-DELTAL range; only the cleaned channel sees the drop. This made me concerned that we had a drop in optical gain, but the trend of the time-dependent calibration correction factors doesn't show anything happening at this time.

I don't know what would cause a sudden change in the clean range, since I don't think that the jitter subtraction would cause a sudden change like this.

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 15:38, Tuesday 30 April 2024 (77523)CAL

I'm still not at all sure why this drop happened, particularly since Sheila notes that there were no changes in the calibration TDCFs. However, the drop is seen in all of the GDS-CALIB_STRAIN channels, not just the _CLEAN channel. This indicates that the range drop comes from earlier in the process than the line subtraction or the cleaning.

The crosshair in the attached plot is the same time as the crosshair in Sheila's plot, just to make it a little easier to orient and compare the plots.

Blue: Range from CAL-DELTAL_EXTERNAL

Green: Range from GDS-CALIB_STRAIN

Red: Range from GDS-CALIB_STRAIN_NOLINES

Orange: Range from GDS-CALIB_STRAIN_CLEAN
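
A sketch of how one of these range curves can be reproduced with gwpy (the time window is approximate, and the summary-page range algorithm may differ in detail):

```python
from gwpy.timeseries import TimeSeries
from gwpy.astro import inspiral_range

# Roughly 12:00-13:00 UTC on April 27 2024, around the ~5:20 PT drop.
strain = TimeSeries.get('H1:GDS-CALIB_STRAIN_CLEAN', 1398254418, 1398258018)

# Estimate a BNS inspiral range from the strain PSD.
psd = strain.psd(fftlength=8, overlap=4)
bns_range = inspiral_range(psd, snr=8)  # Mpc
```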

Images attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 09:39, Friday 26 April 2024 - last comment - 16:05, Tuesday 30 April 2024(77440)
OMC alignment scan started

Started a scan of OMC alignment with no squeezing: April 26th 2024, 16:36:47 UTC; ended at 17:30 UTC.

Comments related to this report
jennifer.wright@LIGO.ORG - 16:05, Tuesday 30 April 2024 (77529)

I used this Jupyter notebook from Gabriele, updated with the scan times below, found at: /ligo/home/jennifer.wright/Documents/OMC_ALignment/OMC_Alignment_2024_04_26_2.ipynb

 

start 1398184622 GPS

end 1398187822 GPS

 

The new offsets are found relative to the old offsets by tuning the red lines in this plot to match the peak of the 410-411 Hz band-limited RMS (BLRMS) of the OMC DCPDs as a function of the changing OMC ASC loop offsets. This works because the 410 Hz calibration line height scales with the optical gain on the OMC DCPDs.
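
The 410-411 Hz BLRMS itself is straightforward to reproduce with gwpy; a sketch over the scan window, assuming the DCPD sum channel name (the notebook may read a different channel):

```python
from gwpy.timeseries import TimeSeries

# Assumed channel name; scan window from the GPS times above.
dcpd = TimeSeries.get('H1:OMC-DCPD_SUM_OUT_DQ', 1398184622, 1398187822)

# Band-limit to the 410-411 Hz calibration line and trend the RMS.
blrms = dcpd.bandpass(410, 411).rms(stride=10)  # 10 s RMS stride
```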

If one looks at the GDS-CALIB_STRAIN BLRMS at 800-900 Hz, this dependence cannot be seen (the current value is favoured in these plots); since these BLRMS are related to shot noise, this could mean the squeezer is not currently well matched to the OMC, as suggested by Gabriele here.

 

Images attached to this comment