Displaying reports 19121-19140 of 87449.
Reports until 12:44, Tuesday 13 June 2023
H1 CDS (CDS, VE)
patrick.thomas@LIGO.ORG - posted 12:44, Tuesday 13 June 2023 (70415)
Did not get to clearing error on h0vacly
Ran out of time working on end Y HEPI pump controller upgrade and did not get to WP 11250.
LHO VE (VE)
travis.sadecki@LIGO.ORG - posted 12:25, Tuesday 13 June 2023 (70414)
MX and EX Turbo functionality test

FAMIS tasks 24864 and 24840

Procedure checklist for both stations completed.  No issues were identified at this time.

MX: Scroll pump hours: 199.3

       Crash bearings: 100%

EX: Scroll pump hours: 6306.3

       Crash bearings: 100%

H1 General
camilla.compton@LIGO.ORG - posted 12:03, Tuesday 13 June 2023 - last comment - 17:13, Tuesday 13 June 2023(70411)
OPS Day Mid-shift Summary

Wrapping up Tuesday Maintenance. Main activities were:

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:13, Tuesday 13 June 2023 (70429)FMP
The temperature swing shown in this aLOG is relatively small (~ +/-0.4 deg F), entirely explained by the maintenance day activities on the HVAC system, and recovered to 67 +/- 0.2 deg F by 17:00 PDT. See LHO:70428.
H1 ISC
sheila.dwyer@LIGO.ORG - posted 10:34, Tuesday 13 June 2023 - last comment - 15:15, Thursday 18 July 2024(70409)
OMC loss measurement

Daniel, Sheila

We turned off the 9 MHz, 45 MHz, and 117 MHz sidebands in order to do an OMC loss measurement.  We used a single bounce beam off of ITMX, with 10W input from the PSL. We spent some time trying to improve the alignment before making OMC scans. 

locked: 1370711576  (OMC REFL avg 3.51mW, OMC DCPD sum 15.23mA)

unlocked: 1370711782 (OMC REFL avg 24.73 mW, OMC DCPD sum 0.078 mA)

OMC scan start: 1370712036 duration 100 seconds (2nd order modes are roughly 8% of the 00 mode).

shutter blocked: 1370712337 (OMC REFL avg -0.030 mW, OMC DCPD sum 8e-4 mA). 

Jennie Wright plans to analyze this data to estimate OMC losses. 

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 16:40, Thursday 15 June 2023 (70502)

Here are the plots of ASC-AS_C_NSUM, OMC-QPD_A_NSUM, OMC-QPD_B_NSUM and OMC-REFL_A_LF, during these measurements. ASC-AS_C_NSUM shows between 22.8 and 32.1mW, OMC-QPD_A_NSUM 23.4mW, OMC-QPD_B_NSUM 23.0mW, and OMC-REFL_A_LF 24.8mW. According to Keita OMC-REFL_A_DC has an incorrect calibration and shows 25.2mW. The average of the 2 QPDs would be 23.2mW, which is about 6.5% lower than 24.8mW.
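As a quick cross-check of the arithmetic in this comment (all values copied from the text above), a minimal sketch:

```python
# Cross-check: average the two OMC QPD readings and compare against
# OMC-REFL_A_LF, as described in the comment above.
qpd_a = 23.4  # mW, OMC-QPD_A_NSUM
qpd_b = 23.0  # mW, OMC-QPD_B_NSUM
refl = 24.8   # mW, OMC-REFL_A_LF

qpd_avg = (qpd_a + qpd_b) / 2            # 23.2 mW
discrepancy = (refl - qpd_avg) / refl    # fractional difference

print(f"QPD average: {qpd_avg:.1f} mW")          # -> 23.2 mW
print(f"Discrepancy vs REFL: {discrepancy:.1%}")  # -> ~6.5%
```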

The second screenshot shows a time when the IMC was unlocked. The DC offsets are in the 10s of uW at most.

Images attached to this comment
jennifer.wright@LIGO.ORG - 06:57, Thursday 06 July 2023 (71099)

Using data from the scan, I adapted the labutils OMCscan class to plot the fitted scan, and adapted labutils/fit_two_peaks.py to fit a sum of two Lorentzians in order to distinguish the carrier 20/02 modes.

The first graph is the OMC scan plot; the second is the curve fit for the second-order carrier modes.

We expect the HOM spacing to be 0.588 MHz as per this entry and DCC T1500060 Table 25.

The spacing for the modes measured is 0.592 MHz.

From the heights of the two peaks, this suggests the mode mismatch into the OMC is (C02+C20)/C00 = (0.83+1.158)/15.32 ≈ 13.0%.

From the locked/unlocked powers on the OMC REFL PD, the visibility on resonance is 1 - (3.51+0.03)/(24.73+0.03) = 85.7%.

If the total loss is 14.3%, this implies that the other, non-mode-matching losses are roughly 1.3%.
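A minimal sketch of the arithmetic above (peak heights and REFL powers copied from this entry; the second-order peak heights are normalized by the 00 peak alone, which reproduces the quoted ~1.3% residual loss):

```python
# Visibility and mode-mismatch arithmetic from the measured powers.
p_locked = 3.51      # mW, OMC REFL with the OMC locked
p_unlocked = 24.73   # mW, OMC REFL with the OMC unlocked
dark = 0.03          # mW, magnitude of the shutter-blocked offset

visibility = 1 - (p_locked + dark) / (p_unlocked + dark)
total_loss = 1 - visibility

c20, c02, c00 = 0.83, 1.158, 15.32   # fitted peak heights from the scan
mode_mismatch = (c20 + c02) / c00

other_loss = total_loss - mode_mismatch
print(f"visibility {visibility:.1%}, mode mismatch {mode_mismatch:.1%}, "
      f"other losses {other_loss:.1%}")
# -> visibility 85.7%, mode mismatch 13.0%, other losses 1.3%
```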

 


To run the OMC scan code, go to /ligo/gitcommon/labutils/omc_scan/ and run

python OMCscan_nosidebands.py 1370712036 100 "Sidebands off, 10W input" "single bounce" --verbose --make_plot -o 2

in the labutils conda environment, on git branch dev.

To do the double-peak fitting, run:

python fit_two_peaks_no_sidebands.py

in the labutils conda environment, on git branch dev.
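For illustration, here is a self-contained sketch of the kind of two-Lorentzian fit that fit_two_peaks_no_sidebands.py performs (this is not the actual labutils code; the synthetic peak heights 0.83 and 1.158 simply mimic the fitted C20/C02 values quoted above):

```python
# Fit a sum of two Lorentzians to a synthetic scan segment, the same
# functional form used to separate overlapping second-order carrier peaks.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, x0, w):
    """Single Lorentzian with peak height a, center x0, HWHM w."""
    return a * w**2 / ((x - x0)**2 + w**2)

def two_lorentzians(x, a1, x01, w1, a2, x02, w2):
    return lorentzian(x, a1, x01, w1) + lorentzian(x, a2, x02, w2)

# Synthetic stand-in for a PZT sweep around the second-order modes
x = np.linspace(-1, 1, 2000)
truth = (0.83, -0.15, 0.02, 1.158, 0.15, 0.02)
y = two_lorentzians(x, *truth) + np.random.default_rng(0).normal(0, 0.005, x.size)

p0 = (1.0, -0.1, 0.05, 1.0, 0.1, 0.05)   # rough initial guess
popt, _ = curve_fit(two_lorentzians, x, y, p0=p0)
print("fitted peak heights:", popt[0], popt[3])
```

The fitted heights popt[0] and popt[3] are what feed the mode-mismatch estimate in the parent entry.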

Images attached to this comment
Non-image files attached to this comment
daniel.sigg@LIGO.ORG - 09:26, Tuesday 18 July 2023 (71453)

These scans were done with OM2 cold.

jennifer.wright@LIGO.ORG - 15:15, Thursday 18 July 2024 (79211)

For comparison with new OMC measurements, I used Sheila's code to process the visibility, but updated it to use nds2utils instead of gwpy, as I was having trouble getting data with gwpy.

The code is attached and should be run in the nds2utils conda environment on the CDS workstations.

Power on refl diode when cavity is off resonance: 24.757 mW

Incident power on OMC breadboard (before QPD pickoff): 25.239 mW

Power on refl diode on resonance: 3.525 mW

Measured efficiency ((DCPD current / responsivity at QE = 1) / incident power on OMC breadboard): 70.4 %

Assumed QE: 100 %

Power in transmission (for this QE): 17.760 mW

HOM content inferred: 13.472 %

Cavity transmission inferred: 82.111 %

Predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 70.367 %

OMC efficiency for the 00 mode (including pickoff BS, cavity transmission, and QE): 81.323 %

Round-trip loss: 1605 ppm

Finesse: 371.769
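A sketch cross-checking these numbers under stated assumptions: equal input/output mirror transmissions, the small-loss (high-finesse) approximation, and R_inputBS backed out from the quoted predicted efficiency (it is not given directly in this comment):

```python
# Cross-check of the efficiency chain and the finesse/loss relation,
# using only numbers quoted in the comment above.
import math

incident = 25.239            # mW on the OMC breadboard
transmitted = 17.760         # mW inferred from DCPD current at QE = 1
mode_matching = 1 - 0.13472  # from the HOM content
cavity_transmission = 0.82111

measured_eff = transmitted / incident                          # ~70.4 %
# Back out the pickoff-BS factor from the quoted predicted efficiency
# (assumption: QE = 1, so it is the only remaining factor).
r_input_bs = 0.70367 / (mode_matching * cavity_transmission)   # ~0.99

# For a high-finesse cavity, total round-trip power loss ~ 2*pi/F;
# the quoted 1605 ppm is taken as the excess beyond the two mirror
# transmissions, assumed equal.
finesse = 371.769
total_rt = 2 * math.pi / finesse          # ~16900 ppm
excess_loss = 1605e-6
t_mirror = (total_rt - excess_loss) / 2   # per-mirror transmission

# On-resonance transmission of a two-mirror cavity, small-loss limit
t_cav = (2 * t_mirror / total_rt) ** 2
print(f"measured efficiency {measured_eff:.1%}, "
      f"cavity transmission {t_cav:.1%}, pickoff factor {r_input_bs:.3f}")
```

The recomputed cavity transmission (~81.9%) agrees with the quoted 82.1% to within rounding of the inputs.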

Non-image files attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:23, Tuesday 13 June 2023 (70407)
Tue CP1 Fill

Tue Jun 13 10:10:41 2023 INFO: Fill completed in 10min 40secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 CDS
jonathan.hanks@LIGO.ORG - posted 10:07, Tuesday 13 June 2023 (70406)
Puppet updates as part of WP 11245
As one of the last steps of WP 11245 we formally added the new raw trend archive directory to the nds server's puppet config.  Today I ran puppet in verbose and noop mode on all the daqd systems to see what differences there were (we expected a difference on nds0).  I then reviewed the differences and updated the puppet config to match what is in production.  This was a good exercise: we found a few things that we had changed in production but had not put in puppet.

 * nds0 - added the new archive raw trend directory.
 * nds1 - found that we had not added the last archive raw trend directory to puppet, so added that.
 * tw0/1 - put into puppet a smaller circular buffer size that we had added to the systems to deal with larger channel lists.
 * gds1 - updated the firewall settings to match gds0; we had done some experimentation in the past to troubleshoot items.

Now puppet matches reality and is a faithful record of the daqd configs.
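The review step described above can be sketched as the following hypothetical invocation (assuming the standard puppet agent CLI on each daqd host):

```shell
# Dry run: compile the catalog and report each change puppet *would* make,
# without applying anything to the live system.
sudo puppet agent --test --noop --verbose
```

Differences reported this way can then either be folded back into the puppet config (when production is right) or applied for real (when puppet is right).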

In addition, I removed the mount of /opt/cdscfg from both the daqd systems and the daqd puppet config.  It is brought in for front-end needs, but is not required for daqd and just causes issues when we switch boot servers.
H1 CDS
david.barker@LIGO.ORG - posted 10:05, Tuesday 13 June 2023 - last comment - 14:10, Tuesday 13 June 2023(70405)
WP11256 EY Beckhoff HEPI Pump Ctrl Install

The EY Beckhoff controller has the temporary name h1hpipumpctrley1 (10.105.0.64/24) and I have enabled port 14 of the EY VEA vacuum rack switch (sw-ey-aux) for this computer.

Comments related to this report
david.barker@LIGO.ORG - 11:22, Tuesday 13 June 2023 (70412)

For archive: here are the original EY HEPI Pump Controller Settings.

Images attached to this comment
david.barker@LIGO.ORG - 14:10, Tuesday 13 June 2023 (70421)

and here are the CS settings for completeness

 

Images attached to this comment
LHO FMCS
bubba.gateley@LIGO.ORG - posted 10:04, Tuesday 13 June 2023 - last comment - 17:11, Tuesday 13 June 2023(70404)
AHU-2 Cooling Coil 3 Strainer Cleaned
This morning Tyler, Chris, and I cleaned the strainer on coil 3 of the chilled water system. This strainer was not as bad as the coil 4 strainer, which we cleaned 2 weeks ago, but it did need cleaning, and cleaning it did improve the flow through the cooling coil. We plan to clean the other 4 strainers as time permits on Tuesdays.
We also found the condensate drain plugged on AHU 1 again; we are working to unplug that drain now. The F/B damper on AHU 1 was completely closed, so I have manually opened it to 70% for the time being to see if we can reduce some of the condensate from the coils. Coils 1 & 2 may need cleaning sooner rather than later if the condensate does not start going away soon. 
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:11, Tuesday 13 June 2023 (70428)DetChar, FMP
AHU 2 Cooling Coil 3's strainer cleaning has had minimal impact on the LVEA temperature value and its fluctuations. Excellent!
(Note: we've been calling these Fans, but I found out in talking with Bubba and Tyler today that this is a liquid strainer that filters the line feeding the cooling coils. The *numbering* is still legit: they cleaned the AHU2 Cooling Coil 4 strainer on May 30 2023, and today they cleaned the AHU2 Cooling Coil 3 strainer.)

Separately, the change to have AHU 1 Damper at 60% is OK (Bubba had adjusted from 70% he mentions in his above aLOG to 60% shortly after posting; comments on its effect in the timeline below). 

After unplugging the drain of the air handler, and holding the damper open at 60% in "manual" mode (as opposed to servo controlled "auto" mode), the AHU 1 cooling coils 1 and 2 have now restored to much better/cooler, normal temperatures (~47 deg F) that we had lost by May 30th 2023 (the fateful fire-alarm chaos day; see timeline in LHO:70284). Excellent!

LVEA temperature excursions in Zones 1A (this BSC2 / Beam Splitter), Zone 4 (Output Arm), and Zone 5 (Input Arm) never exceeded 0.4 deg F outside of 67 deg F range, and restored to normal ~67 deg F temperatures with small diurnal fluctuations within 7 hours. Excellent!

Slowly but surely, I think these maintenance activities are good, and not only restoring expected system behavior, but making it better. We now have a much tighter collection of temperatures in the LVEA, cooling coils are operating at a nice low temperature, and more zone heaters are coming alive such that we have the expected "constant heat, constant cool in order to keep the LVEA temperature nice and controlled" behavior.

I now have much more confidence that we can continue to do this kind of maintenance and not have it impact the IFO.

Timeline of today's work all within today's Tuesday Maintenance:
See Today's trend compared against a 7 day and 21 day trend.

Jun 13 2023 08:26 PDT 
    Bubba and Tyler start work, bringing down Air Handler 2, valving out AHU2's cooling coil 3's strainer, and cleaning it.
    Understandably, all zones' temperatures start to rise from 67 +/- 0.1 deg F, but only max out at 67.4 +/- 0.1 deg F.

    Also while out there, this is when they find that air handler 1 (AHU 1) is flooded "again."

Jun 13 2023 09:03 PDT 
    Within a half hour, they're done with AHU2 strainer cleaning, AHU1 drain de-clogging, turn AHU2 back on, and temperatures begin to drop accordingly.
    
    Upon restoration, though, back at the control room workstation, they still see AHU 1's cooling coil temperatures high (~61 deg F), as they have been since May 30 2023.
    Indeed, also, the damper for AHU 1 is closed at 0%, as it has been doing diurnally since May 30 2023 -- the HVAC servo opens up AHU 1's damper each day (around when outside temperatures exceed 70 deg F), and then gradually ramps closed again by nightfall's temperature drop.
    Prior to May 30 2023, AHU 1 damper stayed around ~35% open throughout the day and night.

Jun 13 2023 09:03 PDT
    Bubba intuits that there's too much condensation gathering in AHU 1 because its damper is closed too often, and that condensation doesn't get drained out because its drain is clogged. As a mitigation attempt, instead of having the HVAC servo drive the damper open percentage, he switches over to Manual mode and holds it at 70% as described in his aLOG.

    This drops both cooling coil's 1 and 2's temperature down from 61 to ~47 deg F. Awesome.
    But, this starts to scare the wimpy scientists (me) who are paying too close attention too quickly, because they see the temperatures in the LVEA drop below the 67 deg F set point.

Jun 13 2023 10:02 PDT
    Zone 4 (output arm) zone heater turns on, and begins to bring the Zone 4 temperature back in line.

Jun 13 2023 11:00 PDT
    Bubba makes a further adjustment of the AHU 1 damper percent open from 70% to 60% open to try to decrease the cooling in the LVEA.

Jun 13 2023 12:07 PDT
    Zone 1A's zone heater collection turns on (human controlled? servo controlled?) for the first time, much like Zones 4 and 5 heaters came on for the first time after AHU2's cooling coil 4's strainer was cleaned, ramping up to ~70% in 5 minutes by 12:15 PDT.

    This really starts to turn around the LVEA temperatures for the better; the temperatures bottom out, turn back up, overshoot a bit, and settle to yesterday's mid-day value. 

Jun 13 2023 17:00 PDT
   Temperatures are all restored to ~67 +/- 0.2 deg F, and again, they never exceeded +/- 0.4 deg F.
   The "natural experiment" reveals that the impulse response of Zone 1A is about 5 hours, as long as the zone heater for that zone comes on!
Images attached to this comment
H1 PSL
jason.oberling@LIGO.ORG - posted 09:22, Tuesday 13 June 2023 (70402)
PSL FSS RefCav Remote Alignment Tweak

This morning I remotely tweaked the beam alignment into the FSS RefCav, as the TPD has been drifting down since last Thursday (6/8).  Once done I ended up with a TPD of ~0.91 V.  One note: while I was finishing up the alignment tweak, the TPD jumped from ~0.88 V to ~0.90 V; this jump coincided with the IMC_In power increasing from 2 W to 10 W (see attachment).  After the last on-table beam alignment the TPD was at 0.96 V, so I wasn't able to get it back to where it was then.  This is an indication that an on-table alignment may be needed.  I'll monitor the TPD throughout the week and, if it starts dropping again, will make plans to do the on-table alignment.

Images attached to this report
H1 AOS
daniel.sigg@LIGO.ORG - posted 09:18, Tuesday 13 June 2023 - last comment - 10:48, Tuesday 13 June 2023(70401)
Atomic Clock Synchronized

After the jump here, we resynchronized the atomic clock with GPS.

Images attached to this report
Comments related to this report
jonathan.hanks@LIGO.ORG - 10:48, Tuesday 13 June 2023 (70410)
The fault codes listed correspond to:

0x16 - reboot alert
0x07 - CBT signal degradation.

So this looks like a reminder that we have had the clock running for a long time and it is getting older.
H1 DAQ
david.barker@LIGO.ORG - posted 08:17, Tuesday 13 June 2023 (70397)
WP11255 Started zpool scrub h1daqframes-0 08:09 PDT
H1 General
camilla.compton@LIGO.ORG - posted 08:04, Tuesday 13 June 2023 - last comment - 08:51, Tuesday 13 June 2023(70396)
OPS Day Shift Start

TITLE: 06/13 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY: Starting Maintenance day. Ryan has taken the SEI_CONF to Maintenance, IFO has stayed locked so far.

Dust monitors, VAC, SUS, SEI, CDS okay

Comments related to this report
camilla.compton@LIGO.ORG - 08:51, Tuesday 13 June 2023 (70399)
15:08 UTC Unlocked the IFO by turning off H1:IMC-REFL_SERVO_IN1EN (sitemap > IOO > IMC Overview > MC Servo > turned off INPUT 1).
This is Jeff's method for a "nice" lockloss, i.e. one that doesn't ring up the suspensions.
H1 General
ryan.crouch@LIGO.ORG - posted 07:59, Tuesday 13 June 2023 (70392)
OPS Tuesday OWL shift summary

TITLE: 06/13 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY:

Lock #1

Lock #2

Both arms went through increasing flashes; the X arm took a while (10 mins).

Couldn't catch DRMI or PRMI, and it was clear from AS AIR that something was badly aligned, so I ran an initial alignment. DRMI still struggled a bit despite good flashes and took about 5 minutes to lock.

NLN at 14:51

 

LOG:                                                                                                                                                        

Start Time System Name Location Lazer_Haz Task Time End
13:10 CDS Erik Remote N Restarting NUCs OPSLogin0 13:22
14:24 FAC Tyler MidY N Slowly move snorkel lift to MidY Ongoing
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 06:20, Tuesday 13 June 2023 - last comment - 07:47, Tuesday 13 June 2023(70394)
Lockloss @ 13:16UTC

Lockloss at 13:16 UTC; we were getting hit by a 5.4 from Papua New Guinea, and seismic motion was starting to increase. Verbal did not say if SEI_CONF went to EQ mode.

Comments related to this report
ryan.crouch@LIGO.ORG - 07:47, Tuesday 13 June 2023 (70395)

Lockloss ndscopes seem to show a csoft ringup?

Images attached to this comment
H1 CDS
erik.vonreis@LIGO.ORG - posted 06:17, Tuesday 13 June 2023 (70393)
Workstations updated

Control room workstations were updated and rebooted, including opslogin0 (nomachine).  Only operating systems were updated.  Conda environments were not affected.

H1 General (Lockloss)
austin.jennings@LIGO.ORG - posted 17:14, Monday 12 June 2023 - last comment - 10:24, Tuesday 13 June 2023(70386)
Lockloss @ 0:07

LOCKLOSS @ 0:07, had an SRM saturation right before the lockloss. Seeing some movement in INP1 P - scope attached.

Images attached to this report
Comments related to this report
bricemichael.williams@LIGO.ORG - 10:24, Tuesday 13 June 2023 (70408)
You can see MICH and SRM increase in the minute preceding the lockloss, which agrees with the increase in wind to 20 mph; see attached.
Images attached to this comment