LHO VE
david.barker@LIGO.ORG - posted 10:29, Thursday 09 January 2025 (82203)
Thu CP1 Fill

Thu Jan 09 10:03:57 2025 INFO: Fill completed in 3min 55secs

Jordan confirmed a good fill curbside. TCmins [-90C, -88C] OAT (1C, 34F)

Images attached to this report
H1 CAL
thomas.shaffer@LIGO.ORG - posted 09:01, Thursday 09 January 2025 (82201)
Calibration Sweep 1630 UTC

Broadband and simulines measurements ran without the squeezer today. A small earthquake also started to roll through at 1651 UTC during this measurement.

Simulines start:

PST: 2025-01-09 08:36:47.309046 PST
UTC: 2025-01-09 16:36:47.309046 UTC
GPS: 1420475825.309046
 

Simulines end:

PST: 2025-01-09 09:00:29.590371 PST
UTC: 2025-01-09 17:00:29.590371 UTC
GPS: 1420477247.590371
 

Files:

2025-01-09 17:00:29,514 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,524 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,528 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,535 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250109T163648Z.hdf5
 

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 08:31, Thursday 09 January 2025 (82199)
Out of Observing 1630UTC for planned commissioning and calibration

This should conclude at 2000 UTC (1200 PT).

H1 CDS
david.barker@LIGO.ORG - posted 08:27, Thursday 09 January 2025 (82198)
VACSTAT: BSC3 glitch

VACSTAT detected a single BSC3 glitch at 03:18 this morning. This glitch is consistent with sensor noise.

I attempted to restart vacstat_ioc.service on cdsioc0 at 07:44 to reset the glitch, which failed with a leap-seconds error.

I updated the tzdata package on cdsioc0 and was then able to start vacstat_ioc.service.

After the restart, I disabled PT110 HAM6 which is currently not running.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:36, Thursday 09 January 2025 - last comment - 08:31, Thursday 09 January 2025(82193)
Ops Day Shift Start

TITLE: 01/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 124Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 9mph Gusts, 5mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.63 μm/s
QUICK SUMMARY: Locked for 16 hours, useism up slightly from yesterday. The range has been trending down and we've been going in and out of Observing since Tony had to intervene. Looks like something in the squeezer is unlocking. It just unlocked again as I'm typing this up. Investigation ongoing.

Comments related to this report
thomas.shaffer@LIGO.ORG - 07:41, Thursday 09 January 2025 (82194)SQZ

Since the squeezer Guardian reported that it has been unlocking from the OPO LR node, I adjusted the OPO temperature hoping that it would help. The log of that node is incredibly long and it is very difficult to find what actually happened. From SQZ_OPO_LR: "2025-01-09_15:34:33.836024Z SQZ_OPO_LR [LOCKED_CLF_DUAL.run] USERMSG 0: Disabling pump iss after 10 lockloss couter. Going to through LOCKED_CLF_DUAL_NO_ISS to turn on again."

I also should have mentioned that there is calibration and commissioning planned for today starting at 1630UTC (0830 PT).

thomas.shaffer@LIGO.ORG - 07:51, Thursday 09 January 2025 (82195)

The OPO temperature adjustment did help our range, but it just dropped again. The SQZ team will be contacted.

camilla.compton@LIGO.ORG - 08:31, Thursday 09 January 2025 (82196)

The green SHG pump power needed to get the same 80uW from the OPO has increased from 14mW to 21mW since the OPO crystal swap on Tuesday. The CONTROLMON was at the bottom of its range trying to inject this level of light, hence the OPO locking issues (see attached plot).

In the past we've gotten back to Observing by following 70050 and lowering the OPO trans set point. But since we haven't yet maxed out the power in the SHG-to-OPO path, we instead tuned this up with H1:SYS-MOTION_C_PICO_I motor 2. If the OPO is having issues and H1:SQZ-OPO_ISS_CONTROLMON is too low, following 70050 is still a valid temporary fix.

It's surprising that the required pump light has increased this quickly, faster than after the last spot move (see attached plot), but our OPO REFL is still low compared to before, meaning the new crystal spot still has lower losses than the last one. We might need to realign the SHG pump fiber to increase the amount of light we can inject.

We can work on improving Guardian log messages.

Images attached to this comment
H1 General (SQZ)
anthony.sanchez@LIGO.ORG - posted 05:07, Thursday 09 January 2025 (82192)
Early Morning SQZr issues

TITLE: 01/09 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 6mph Gusts, 4mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.53 μm/s
QUICK SUMMARY:
When I was first woken up by H1, we were in Nominal_Low_Noise but not observing.
I noticed that SQZ_MANAGER had an issue.
I was very much not awake yet, so I just watched the SQZ_MANAGER log and the SQZ_MANAGER states and observed the following:
SQZ_MANAGER couldn't get all the way up to FREQ_DEP_SQZ[100]; it would hit FC_WAIT_FDS[64] and then drop down to FDS_READY_IFO[50] again.
I then started to read the error messages we were getting: some beam diverter messages that would flash up quickly.
I then started to read some alogs about recent SQZ issues.
I saw this alog from Camilla, which links to Oli's, but I wasn't sure if I was having the exact same issue.

But it did kind of point me in a direction.
I then checked the SQZ_LO_LR and SQZ_OPO_LR Guardian nodes and logs and INIT'ed SQZ_LO_LR. This did not solve the issue, but I tried other things from Oli's alog, like requesting RESET_SQZ_ASC and RESET_SQZ_ANG. I stepped away after hitting INIT on SQZ_MANAGER to wash the sleep from my eyes, and it was relocked when I came back.

LHO General
ryan.short@LIGO.ORG - posted 22:01, Wednesday 08 January 2025 (82191)
Ops Eve Shift Summary

TITLE: 01/09 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Very quiet shift, with H1 staying locked the whole time and one BBH candidate event. H1 has now been locked for almost 7 hours.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:31, Wednesday 08 January 2025 (82179)
Ops Day Shift End

TITLE: 01/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: One lock loss with an automated relock, and some SQZ tuning on the prior lock. Other than that it was a quiet shift.
LOG:

Start Time | System | Name | Location | Lazer_Haz | Task | End Time
22:08 | OPS | LVEA | LVEA | N | LASER SAFE | 15:07
16:26 | FAC | Kim | Opt Lab | n | Tech clean | 16:49
21:40 | VAC | Gerardo, Jordan | EY | n | Ion pump work in mech room | 22:32
21:40 | - | Betsy | Mech room, OSB Rec | n | ISS array crate moves | 21:59
LHO General
ryan.short@LIGO.ORG - posted 16:21, Wednesday 08 January 2025 (82188)
Ops Eve Shift Start

TITLE: 01/09 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 4mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.48 μm/s
QUICK SUMMARY: H1 has been locked for just over an hour.

X1 SUS (SUS)
corey.gray@LIGO.ORG - posted 16:21, Wednesday 08 January 2025 - last comment - 08:36, Thursday 09 January 2025(82189)
Possible Laser Oddities for the LIGO SUS Fiber Puller

For reference here is a link with a summary of somewhat recent status for the LIGO SUS Fiber Puller:  alog #80406

Fiber pulling was on a bit of a hiatus with a busy Fall/Winter of Ops shifts and travel. This week I returned to the Fiber Puller lab to pull as many fibers as possible (I have some free days this week and next). I thought I would note today's work because of some odd behavior.

Yesterday I ended up spending most of the time making new Fiber Stock (we were down to 2, and about 12 more were made). I also packaged up Fiber Cartridge hardware returned from LLO and stored here to get re-Class-B-ed. I ended the day yesterday (Jan 7) pulling 2025's first fiber: S2500002. This fiber was totally fine, with no issues.

Today, I did some more lab clean-up but then around lunch I worked on pulling another fiber. 

Issue with RESET Position

The first odd occurrence was with the Fiber Puller/Labview app. When I RESET the system to take a fresh new Fiber Stock, the Fiber Puller/Labview moved the Upper & Lower Stages to a spot where the Upper Stage/Fiber Cartridge Bracket was too "high" wrt the Fixed Lower Bracket---so I was not able to install a new Fiber Stock. After closing the Labview app and a couple of "dry pulls", I was FINALLY able to get the system to the correct RESET position. (Not sure what the issue was here!)

Laser Power Looks Low

Now that the Fiber Stock was set to be POLISHED & PULLED, I worked on getting the Fiber Stock aligned (this is a step unique to our setup here at LHO). As I put the CO2 beam on the Fiber Stock, the first thing I noticed was very dim light on the stock (as seen via both cameras of the Fiber Puller). This was at a laser power of 55.5% (of the 200W "2018-new" laser installed for the Fiber Puller last summer). The alignment should not have changed for the system, so I wondered if there were issues with the laser settings. I noticed I had the laser controller at the "ANV" setting---it should be "MANUAL"---but nothing I tried improved the laser power.

Miraculously, the laser power (as seen with the cameras) returned to what I saw yesterday for S2500002, so I figured it was a glitch and moved on to a POLISH. Within a few minutes, while keeping an eye on the laser power on the stock via the cameras, the laser spot once again became dim. I did nothing and just let the POLISH finish (about 15 min). Fearing there was a power-related issue, I decided to run the Pull (which takes less than a minute), but at a higher power (75% vs the 65% which had been the norm the last few months).

S2500003 pulled fine and passed.

I thought we were good, but when I set up for S2500004, I noticed more low and variable laser power. I worked on this fiber later in the afternoon, after lunch. This one started with power that looked good to me (qualitatively), so I continued with set-up. But in a case of déjà vu, the laser power noticeably dipped as seen via the cameras. I decided to continue with this fiber, but since S2500003 had a low-ish violin fundamental frequency (~498Hz), I figured I would return to a power of 65% for the Pull, hoping the laser power was hot enough for the pull and not so low as to cause the fiber to break during it (something which happened with our old and dying 100W laser that we replaced in 2024).

Pulled the fiber and it all looked fine, profiled the fiber, and then ran an analysis on the fiber:  it FAILED.

This is fine.  Fibers fail.  Since Aug we have pulled 29 fibers and this was the 5th FAIL we have had. 

This is where I ended the day, but I'm logging this work only for some early suspicions with the laser.  Stay Tuned For More!

Comments related to this report
corey.gray@LIGO.ORG - 08:36, Thursday 09 January 2025 (82200)

Overall comments I wanted to add:

  • Want to have repeatability with the operation of this system, so if anything is odd it should be noted.  For the most part we are hands off on the hardware of the Fiber Puller--the alignment should rarely need to be changed for many fiber pulls, and there is definitely no need to touch any of the mechanics of the machine.  Our main interface with the machine is to load in a Fiber Stock, possibly make small adjustments of the upper translation stage, and that's about it for the machine.  The rest of the handling is via the Labview program, laser controller, and switches which are outside of the Fiber Puller enclosure.
  • The RESET state issue is worth noting because there had been minimal changes to the system since the afternoon before. The switches which control the positions of the upper & lower moving stages had not been touched. The Fiber Puller eventually reached a correct RESET state, but only after several other "dry" pulls (and Labview restarts/checks).
  • The CO2 lasers for the Fiber Puller have some variability in laser power (which we see in the camera videos), and this is normal. Yesterday was notable only because the laser power was very noticeably low. The previous 100W laser (which was a bit older, manufactured in 2010) started to show symptoms of low power in 2021-22, which ultimately led to us decommissioning it and swapping it for the newer 200W laser (manufactured in 2018). I'm only noting the low-power observation as a data point for possible early end-of-life symptoms.
LHO VE
jordan.vanosky@LIGO.ORG - posted 16:11, Wednesday 08 January 2025 (82183)
Removal of Pressure Gauge on FC-CI C1 Cross

During yesterday's maintenance period, we removed the Inficon BCG450-SE pressure gauge that was on the C1 cross in the FC. This gauge has not been in use since the initial pumpdown of the filter cavity tube and was used in conjunction with the SS500 pump cart.

The +X RAV was closed, along with the redundant RAV before the Inficon gauge. The gauge volume was vented, and the gauge was replaced with an o-ring angle valve and a KF40-to-CF adapter. Newly installed joints were pumped down and leak tested with the HLD, and showed no He signal above the leak detector background, <1E-11 Torr-l/s. After the leak check was complete, the +X RAV was opened to the main FC volume; the redundant RAV remains closed as a pump-out port.

This gauge is now a spare for the HAM6 gauge replacement to follow.

Images attached to this report
H1 TCS
ryan.short@LIGO.ORG - posted 14:55, Wednesday 08 January 2025 (82186)
TCS Monthly Trends

FAMIS 28456

Both CO2 lasers have generally been increasing in output power on each relock. The increased CO2-Y laser temperature lines up clearly with decreasing chiller flow rate on a few occasions in the past month.

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 13:46, Wednesday 08 January 2025 (82185)
Lock loss 2130 UTC

1420407046

ETMX glitches in the output drive <1s before lock loss.

Images attached to this report
H1 TCS
camilla.compton@LIGO.ORG - posted 15:50, Tuesday 07 January 2025 - last comment - 12:19, Tuesday 21 January 2025(82151)
RIN of CO2 Lasers with PWM

In 81863 we looked at the RIN of the CO2 lasers with the laser on vs off. Today we repeated the measurement with PWM (from the Synrad UC-2000); the noise level is considerably higher when PWM is on. Note that this isn't really the RIN, as we're not dividing by the DC power.

Data saved to camilla.compton/Documents/tcs/templates/dtt/20250107_CO2_RIN.xml and 20250107_CO2Y_RIN.xml

The CO2Y plots also show the dark noise as there is no photo detector plugged into CO2Y_ISS_IN.

For some of the CO2X data, the CO2 guardian was relocking (sweeping PZT), noted above.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:18, Wednesday 08 January 2025 (82178)

These channels are calibrated in volts as the detector would output them, before the gain of the PD's amplifier D1201111.

DC levels of power on the ISS PDs at the different powers are:

  • CO2Y ISS_OUT_DC:
    • 100% CW power = 4.86mV
    • 50% PWM power = 3.95mV
  • CO2X ISS_IN_DC:
    • 100% CW power = 7.06mV
    • 50% PWM power = 4.94mV
    • 25% PWM power = 2.39mV
    • 95% PWM power = 6.83mV
  • CO2X ISS_OUT_DC:
    • 100% CW power = 8.00mV
    • 50% PWM power = 5.45mV
    • 25% PWM power = 2.60mV
    • 95% PWM power = 7.56mV

With PWM on, the noise level increases by around a factor of 10. Unsure why the AC channel sees this increase less than the DC channels.

Images attached to this comment
camilla.compton@LIGO.ORG - 13:12, Wednesday 08 January 2025 (82182)

Gabriele, Camilla

  • DC value:
    • Expect ~250mW on each ISS PD (50W out of the laser x 99/1 BS x 50/50 BS), but the DC levels show 30-50mW on the PDs:
      • Measured 5 to 8mV / 66Ohm (measured 565) / 0.0025A/W (spec sheet) = 30-50mW. A factor of 5-10 below expected, but the PD could be saturating.
      • At 25% laser power, 2.60mV measured / 66Ohm (measured 565) / 0.0025A/W (spec sheet) = 16mW. Expect ~62mW.
    • This is a factor of ~5-10 from what we expect, but we need to measure the power on the PD with a power meter and check for edge clipping with an iris to confirm what power is actually incident on the PDs (see the sketch after this list).
  • Electronics noise (labeled as "dark noise" in plot):
    • <10e-10 for AC, <10e-8 for DC. This noise dominates the dark-noise and 100% CW signals for the DC channel.
  • Dark noise (labeled as "laser off" in plots):
    • Around 1e-9 V/rtHz for the AC output.
    • 1e-9V / 66Ohm (measured 565) / 0.0025A/W (spec sheet) = 6e-9 W/rtHz. This is similar to the 541 measurement of 4e-7 V/rtHz / 66Ohm / 0.0025A/W / 255 DC gain = 9.5e-9 W/rtHz.
  • RIN with laser on at 50% PWM:
    • Looking at the AC channel (CO2Y): 7e-9V / 4e-3V (DC level) = 2e-6 /rtHz. Want 1e-7 /rtHz, so a factor of 20 too high.
    • Looking at the DC channel (CO2Y): 2e-7V / 4e-3V (DC level) = 5e-5 /rtHz. Want 1e-7 /rtHz, so a factor of 500 too high.
    • Similar results when using the CO2X data.
    • We don't understand why there is a difference between the AC and DC channels; both channels should be showing the same signal: 82181. We can try to sample the channels at a higher rate to see if something strange is happening.
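
A minimal sketch of the arithmetic above; the 66 Ohm transimpedance and 0.0025 A/W responsivity are the quoted values (not independent measurements), and the helper names are just for illustration:

```python
# Minimal sketch of the back-of-envelope numbers above; the transimpedance and
# responsivity are the values quoted in this comment.
R_TRANS = 66.0    # Ohm, measured transimpedance quoted above
RESP = 0.0025     # A/W, PD responsivity from the spec sheet

def pd_power(v_dc):
    """Incident power [W] inferred from a DC photodiode voltage [V]."""
    return v_dc / R_TRANS / RESP

def rin(asd_v, v_dc):
    """Relative intensity noise [1/rtHz] from a voltage ASD [V/rtHz] and DC level [V]."""
    return asd_v / v_dc

print(pd_power(2.60e-3))   # ~0.016 W at 25% laser power (expect ~62 mW)
print(rin(7e-9, 4e-3))     # ~2e-6 /rtHz, AC channel at 50% PWM (want 1e-7)
print(rin(2e-7, 4e-3))     # ~5e-5 /rtHz, DC channel at 50% PWM
```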
camilla.compton@LIGO.ORG - 15:34, Wednesday 08 January 2025 (82187)

Erik showed me that we can look at higher frequencies using the H1:IOP-OAF_L0_MADC{2,3}_TP_CH{10-13} channels at 65kHz. We can repeat the 50% PWM measurement next week.

Note that these channels are before the filtering (listed in 81868); we could apply the filters back in with python... but it's mainly gains and some shaping of the AC channel below 20Hz and above 10kHz, and a DC roll-off is expected around 100kHz as in D1201111. A sketch of pulling these channels is included after the list below. DC values:

  • CO2X IN DC: 5800
  • CO2X OUT DC: 6600
  • CO2Y IN DC: Unplugged
  • CO2Y OUT DC: 3900
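
A minimal sketch of how these 65 kHz testpoint channels could be pulled and turned into spectra, assuming gwpy/NDS2 access and that the testpoints are available for the requested span; the time span and FFT length are illustrative only, and the channel name is one from the list above:

```python
# Minimal sketch, assuming gwpy/NDS2 access; the time span and FFT length are
# illustrative only, and the testpoints must be available for the span requested.
from gwpy.timeseries import TimeSeries

CHAN = 'H1:IOP-OAF_L0_MADC2_TP_CH10'   # one of the 65 kHz testpoint channels above
START, END = 1420475825, 1420475885    # hypothetical 60 s stretch

data = TimeSeries.fetch(CHAN, START, END)
asd = data.asd(fftlength=4, overlap=2)        # amplitude spectral density
print(data.sample_rate, asd.frequencies[-1])  # expect ~65536 Hz rate, ~32 kHz Nyquist
```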
Images attached to this comment
camilla.compton@LIGO.ORG - 12:19, Tuesday 21 January 2025 (82382)

Matt Todd and I measured the powers before the CO2X ISS PDs today:

  • Before BS: 425mW
  • ISS Out path: 230mW (7.7e-3 on ITMX_CO2_ISS_IN_DC)
  • (Inferred) ISS In path: 195mW (8.5e-3 on ITMX_CO2_ISS_OUT_DC)

In 82182, using the PD specs, we estimated we were measuring 30-50mW. We didn't have time to check for clipping with an iris or to check alignment; both beams looked small, but the PDs are also small (~1mm x 1mm).

LHO VE (VE)
marc.pirello@LIGO.ORG - posted 15:14, Tuesday 07 January 2025 - last comment - 17:32, Wednesday 08 January 2025(82162)
EY Ion Pump 17 Cable Check

WP12271

We investigated the SHV cable connectivity between the ion pump controller at EY and the ion pump at Y2-8, checking for faults. We did not detect any obvious faults, but we did detect the prior splice, which was installed back in 2016. We used the FieldFox in TDR mode (low-pass mode) with a 1.2 meter launch cable. I have attached scans of these findings:

The first photo is a shot from the controller side: there is a large impedance change 19 meters from the connection point, which is the same location as the splice made in 2016, and the feature looks like we would expect a splice to look.
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=28847

The second photo is a shot from the ion pump back to the end station. There is a very small bump 20 meters from the ion pump that is real but very, very tiny; it could be a kinked cable, but not enough to cause issues.

Otherwise, the cable looks fine.  Future testing is warranted.

M. Pirello, G. Moreno, J. Vanosky

Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 17:32, Wednesday 08 January 2025 (82190)VE

After the cable was deemed good, we took a look at the controller and found some "events" recorded in its log (see attached photo); one such event pertains to the glitch noticed here. All recorded events were kept on the controller. We are going to look further into this.

Images attached to this comment
H1 ISC
camilla.compton@LIGO.ORG - posted 16:59, Monday 21 October 2024 - last comment - 11:06, Thursday 09 January 2025(80804)
Automatic DARM Offset Steps

Elenna, Camilla

Ran the automatic DARM offset sweep via Elenna's instructions (took <15 minutes):

cd /ligo/gitcommon/labutils/darm_offset_step/
conda activate labutils
python auto_darm_offset_step.py

DARM offset moves recorded to /ligo/gitcommon/labutils/darm_offset_step/data/darm_offset_steps_2024_Oct_21_23_37_44_UTC.txt

Comments related to this report
camilla.compton@LIGO.ORG - 17:02, Monday 21 October 2024 (80805)

I reverted the tramp SDF diffs afterwards, see attached. Maybe the script should be adjusted to do this automatically.

Images attached to this comment
elenna.capote@LIGO.ORG - 17:28, Monday 21 October 2024 (80806)

I just ran Craig's script to analyze these results. The script fits a contrast defect of 0.742 mW using the 255.0 Hz data and 0.771 mW using the 410.3 Hz data. These values are lower than the ~1 mW previously measured on July 11 (alog 79045), which matches up nicely with our reduced frequency noise since the OFI repair (alog 80596).

I attached the plots that the code generates.

This result then estimates that the homodyne angle is about 7 degrees.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 11:06, Thursday 09 January 2025 (82204)

Last year I added some code to plot_darm_optical_gain_vs_dcpd_sum.py to calculate the losses through HAM 6.

Sheila and I have started looking at those again post-OFI replacement.

I'm just attaching the plot of power at the anti-symmetric port (i.e. into HAM 6) vs. power after the OMC as measured by the DCPDs.

The plot is found in /ligo/gitcommon/labutils/darm_offset_step/figures/ and is also on the last page of the pdf Elenna linked above.

From this plot we can see that the power into HAM 6 (P_AS) is related to the power at the output DCPDs as follows:

P_AS = 1.220*P_DCPD + 656.818 mW

where the second term is light that will be rejected by the OMC, plus light that gets through the OMC but is insensitive to DARM length changes.

The throughput between the anti-symmetric port and the DCPDs is 1/1.220 = 0.820, so 18% of the TM00 light that we want at the DCPDs is lost through HAM 6 (a quick worked version of this is sketched below).
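
A minimal worked version of that throughput estimate, using the fit coefficients quoted above (the example DCPD power is hypothetical, for illustration only):

```python
# Minimal sketch using the linear fit quoted above: P_AS = 1.220 * P_DCPD + 656.818 mW.
SLOPE = 1.220        # dP_AS/dP_DCPD from the fit (dimensionless)
OFFSET_MW = 656.818  # mW of DARM-insensitive / OMC-rejected light

throughput = 1.0 / SLOPE   # fraction of the DARM-sensitive light reaching the DCPDs
loss = 1.0 - throughput    # fraction lost between the AS port and the DCPDs
print(f"throughput = {throughput:.3f}, loss = {loss:.1%}")  # ~0.820, ~18%

p_dcpd_mw = 40.0  # hypothetical DCPD sum power in mW, for illustration only
print(f"P_AS ~ {SLOPE * p_dcpd_mw + OFFSET_MW:.1f} mW at P_DCPD = {p_dcpd_mw} mW")
```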

 

Non-image files attached to this comment