Reports until 13:39, Tuesday 13 June 2023
H1 CDS
david.barker@LIGO.ORG - posted 13:39, Tuesday 13 June 2023 - last comment - 14:21, Tuesday 13 June 2023(70419)
CDS Maintenance Summary: Tuesday 13th June 2023

WP11256 Upgrade EY HEPI Pump Controller to Beckhoff

Patrick, Fil, Jim, Dave:

Patrick and Fil installed the new Beckhoff HEPI Pump Controller at EY alongside the original "Ben purple box". The new unit has the temporary name h1hpipumpctrley1; Fil ran a new ethernet cable to it from sw-ey-aux port 14.

Patrick and Fil got the new unit booted and on the network. Its EPICS IOC is running, and since there is no overlap in channel names between the old and new EPICS databases, both can run at the same time.

The new unit was not put into production today; we are still using the original unit to control the EY pumps.
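The "no overlap in channel names" check can be sketched as below. This is a hypothetical illustration only (file paths, and the exact record syntax matched, are assumptions, not the tool actually used):

```python
import re

def record_names(db_path):
    """Collect EPICS record names from a .db file: record(type, "NAME")."""
    pat = re.compile(r'record\s*\(\s*\w+\s*,\s*"([^"]+)"\s*\)')
    with open(db_path) as fh:
        return {m.group(1) for m in pat.finditer(fh.read())}

def overlap(old_db, new_db):
    """Channel names present in BOTH databases (must be empty to coexist)."""
    return record_names(old_db) & record_names(new_db)
```

If the returned set is empty, the two IOCs can serve their channels simultaneously without CA name clashes.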

WP11245 TW0 raw minute trend files offload

Jonathan, Dave:

To complete last week's offload of raw minute trend files from the TW0 SSD to the h1daqframes-0 HDD, Jonathan put the hand-edited daqdrc on h1daqnds0 into puppet. We verified the configuration when the DAQ was restarted today.

WP11254 Atomic Clock Re-synchronize with timing system

Daniel:

Daniel resync'ed the atomic clock. Please see his alog for details.

WP11257 Remove obsolete filter-modules-with-control parts from h1als[ex,ey]

Daniel, Dave:

Daniel downgraded the H1:LSC-X_ARM_DRIVE and H1:LSC-Y_ARM_DRIVE filter modules on h1als[ex,ey] to standard filter modules. This fixes the issue of the fm-w-ctrl having hardcoded Cin and Cmask attempting to turn on non-existent FM3 and FM5 filters.

Because the MASK PVs for these FMs were removed from the INI files, a DAQ restart was required.
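The restart is forced because the frame writers build their channel lists from the INI files. A hedged sketch of how such a change could be detected (assuming the common one-section-per-channel INI layout; this is not the actual CDS tooling):

```python
import configparser

def channel_diff(old_ini, new_ini):
    """Channels removed from / added to a DAQ INI (one section per channel)."""
    old, new = configparser.ConfigParser(), configparser.ConfigParser()
    old.read(old_ini)
    new.read(new_ini)
    removed = set(old.sections()) - set(new.sections())
    added = set(new.sections()) - set(old.sections())
    return removed, added   # either non-empty => DAQ restart needed
```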

DAQ Restart

Dave, Jonathan:

The DAQ was restarted for the above h1als[ex,ey] model changes. Both GDS0 and GDS1 required a second restart to sync their channel lists; other than that it was a good restart, and the DAQ configuration is fully captured in puppet.

Comments related to this report
david.barker@LIGO.ORG - 13:50, Tuesday 13 June 2023 (70420)

Tue13Jun2023
LOC TIME HOSTNAME     MODEL/REBOOT
12:31:30 h1iscex      h1alsex     
12:32:00 h1iscey      h1alsey     


12:33:45 h1daqdc0     [DAQ] 0-leg
12:33:55 h1daqfw0     [DAQ]
12:33:55 h1daqtw0     [DAQ]
12:33:56 h1daqnds0    [DAQ]
12:34:03 h1daqgds0    [DAQ]
12:35:11 h1daqgds0    [DAQ] gds0 2nd restart


12:36:53 h1daqdc1     [DAQ] 1-leg
12:37:02 h1daqfw1     [DAQ]
12:37:03 h1daqnds1    [DAQ]
12:37:03 h1daqtw1     [DAQ]
12:37:11 h1daqgds1    [DAQ]
12:38:06 h1daqgds1    [DAQ] gds1 2nd restart
 

david.barker@LIGO.ORG - 14:21, Tuesday 13 June 2023 (70422)

DAQ Frame File Channel List Change

Two slow channels removed from the DAQ Frame today (name, size-bytes, data-rate-Hz)

H1:LSC-X_ARM_DRIVE_MASK 4 16

H1:LSC-Y_ARM_DRIVE_MASK 4 16
 

H1 General
betsy.weaver@LIGO.ORG - posted 13:01, Tuesday 13 June 2023 (70416)
Post TUES MAINTENANCE Sweep of VEAs

Per the checklist T1500386, I walked through the LVEA. Others covered the other VEAs: Jim did the FCES, Robert EY, and Tony EX.

 

H1 CDS (CDS, VE)
patrick.thomas@LIGO.ORG - posted 12:44, Tuesday 13 June 2023 (70415)
Did not get to clearing error on h0vacly
Ran out of time working on end Y HEPI pump controller upgrade and did not get to WP 11250.
LHO VE (VE)
travis.sadecki@LIGO.ORG - posted 12:25, Tuesday 13 June 2023 (70414)
MX and EX Turbo functionality test

FAMIS tasks 24864 and 24840

Procedure checklist for both stations completed.  No issues were identified at this time.

MX: Scroll pump hours: 199.3

       Crash bearings: 100%

EX: Scroll pump hours: 6306.3

       Crash bearings: 100%

H1 General
camilla.compton@LIGO.ORG - posted 12:03, Tuesday 13 June 2023 - last comment - 17:13, Tuesday 13 June 2023(70411)
OPS Day Mid-shift Summary

Wrapping up Tuesday Maintenance. Main activities were:

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:13, Tuesday 13 June 2023 (70429)FMP
The temperature swing shown in this aLOG is relatively small (~ +/-0.4 deg F), entirely explained by the maintenance day activities on the HVAC system, and recovered to 67 +/- 0.2 deg F by 17:00 PDT. See LHO:70428.
H1 ISC
sheila.dwyer@LIGO.ORG - posted 10:34, Tuesday 13 June 2023 - last comment - 15:15, Thursday 18 July 2024(70409)
OMC loss measurement

Daniel, Sheila

We turned off the 9 MHz, 45 MHz, and 117 MHz sidebands in order to do an OMC loss measurement.  We used a single-bounce beam off of ITMX, with 10 W input from the PSL. We spent some time trying to improve the alignment before making OMC scans. 

locked: 1370711576  (OMC REFL avg 3.51mW, OMC DCPD sum 15.23mA)

unlocked: 1370711782 (OMC REFL avg 24.73 mW, OMC DCPD sum 0.078 mA)

OMC scan start: 1370712036 duration 100 seconds (2nd order modes are roughly 8% of the 00 mode).

shutter blocked: 1370712337 (OMC REFL avg -0.030 mW, OMC DCPD sum 8e-4 mA). 

Jennie Wright plans to analyze this data to estimate OMC losses. 

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 16:40, Thursday 15 June 2023 (70502)

Here are the plots of ASC-AS_C_NSUM, OMC-QPD_A_NSUM, OMC-QPD_B_NSUM and OMC-REFL_A_LF, during these measurements. ASC-AS_C_NSUM shows between 22.8 and 32.1mW, OMC-QPD_A_NSUM 23.4mW, OMC-QPD_B_NSUM 23.0mW, and OMC-REFL_A_LF 24.8mW. According to Keita OMC-REFL_A_DC has an incorrect calibration and shows 25.2mW. The average of the 2 QPDs would be 23.2mW, which is about 6.5% lower than 24.8mW.

The second screenshot shows a time when the IMC was unlocked. The DC offsets are in the 10s of uW at most.

Images attached to this comment
jennifer.wright@LIGO.ORG - 06:57, Thursday 06 July 2023 (71099)

Using data from the scan, I adapted the labutils OMCscan class to plot the fitted scan, and adapted labutils/fit_two_peaks.py to fit a sum of two Lorentzian functions in order to distinguish the carrier 20/02 modes.

The first graph is the OMC scan plot; the second is the curve fit for the second-order carrier modes.

We expect the HOM spacing to be 0.588 MHz as per this entry and DCC T1500060 Table 25.

The spacing for the modes measured is 0.592 MHz.
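As an illustration only, a two-Lorentzian fit of the kind described can be done with scipy on synthetic data. The parameterization and numbers below are assumptions for the sketch, not the labutils code:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(f, a1, f1, a2, f2, w):
    """Sum of two Lorentzians sharing one half-width w."""
    return a1 / (1 + ((f - f1) / w) ** 2) + a2 / (1 + ((f - f2) / w) ** 2)

# Synthetic scan segment: two peaks 0.592 MHz apart (made-up amplitudes/width)
f = np.linspace(-2.0, 2.0, 2000)
true_params = (0.83, -0.300, 1.158, 0.292, 0.05)
y = two_lorentzians(f, *true_params)
y += np.random.default_rng(0).normal(0.0, 0.001, f.size)

popt, _ = curve_fit(two_lorentzians, f, y, p0=(1.0, -0.25, 1.0, 0.25, 0.1))
spacing = abs(popt[3] - popt[1])   # recovers the 0.592 MHz separation
```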

From the heights of the two peaks, this suggests a mode mismatch to the OMC of (C02+C20)/(C00+C02+C20) = (0.83+1.158)/(15.32+0.83+1.158) ≈ 11.5%.

From the locked/unlocked powers on the OMC REFL PD, the visibility on resonance is 1-(3.51+0.03)/(24.73+0.03) = 85.7%.

If the total loss is 14.3%, this implies that the other non mode-matching losses are roughly 1.3%.
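Re-computing the quoted ratios directly from the numbers in this entry (a hedged check; rounding is mine):

```python
# Peak heights from the two-Lorentzian fit quoted above (arbitrary units)
c00, c20, c02 = 15.32, 0.83, 1.158
mode_mismatch = (c20 + c02) / (c00 + c20 + c02)   # the ratio evaluates to ~0.115

# Locked/unlocked powers on OMC REFL (mW), with the 0.03 mW dark offset
p_locked, p_unlocked, dark = 3.51, 24.73, 0.03
visibility = 1 - (p_locked + dark) / (p_unlocked + dark)   # ~0.857
```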

To run the OMC scan code go to 

/ligo/gitcommon/labutils/omc_scan/ and run 

python OMCscan_nosidebands.py 1370712036 100 "Sidebands off, 10W input" "single bounce" --verbose --make_plot -o 2
in the labutils conda environment and on git branch dev.

To do the double peak fitting run:

python fit_two_peaks_no_sidebands.py  
in the labutils conda environment and on git branch dev.

Images attached to this comment
Non-image files attached to this comment
daniel.sigg@LIGO.ORG - 09:26, Tuesday 18 July 2023 (71453)

These scans were done with OM2 cold.

jennifer.wright@LIGO.ORG - 15:15, Thursday 18 July 2024 (79211)

For comparison with new OMC measurements I used Sheila's code to process the visibility, but updated it to use nds2utils instead of gwpy, as I was having trouble getting data with gwpy.

The code is attached and should be run in the nds2utils conda environment on the CDS workstations.

Power on refl diode when cavity is off resonance: 24.757 mW

Incident power on OMC breadboard (before QPD pickoff): 25.239 mW

Power on refl diode on resonance: 3.525 mW

Measured efficiency (DCPD current / responsivity at QE=1) / incident power on OMC breadboard: 70.4 %

Assumed QE: 100 %

Power in transmission (for this QE): 17.760 mW

HOM content inferred: 13.472 %

Cavity transmission inferred: 82.111 %

Predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 70.367 %

OMC efficiency for 00 mode (including pickoff BS, cavity transmission, and QE): 81.323 %

Round-trip loss: 1605 ppm

Finesse: 371.769
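The headline efficiency above can be re-derived from the quoted powers. A minimal sketch, using only the values stated in this entry (the QE = 1 assumption is theirs):

```python
# Powers quoted above, all in mW; QE = 1 assumed as stated in the entry
p_incident = 25.239   # incident on OMC breadboard, before QPD pickoff
p_trans = 17.760      # inferred in transmission at QE = 1
efficiency = p_trans / p_incident          # ~0.704, matching the 70.4 %

p_res, p_offres = 3.525, 24.757            # REFL diode on/off resonance
refl_visibility = 1 - p_res / p_offres     # ~0.858
```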

Non-image files attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:23, Tuesday 13 June 2023 (70407)
Tue CP1 Fill

Tue Jun 13 10:10:41 2023 INFO: Fill completed in 10min 40secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 CDS
jonathan.hanks@LIGO.ORG - posted 10:07, Tuesday 13 June 2023 (70406)
Puppet updates as part of WP 11245
As one of the last steps of WP 11245, we formally added the new raw trend archive directory to the nds server's puppet config.  Today I ran puppet in verbose, no-op mode on all the daqd systems to see what differences there were (we expected a difference on nds0).  I then reviewed the differences and updated the puppet config to match what is in production.  This was a good exercise; we found a few things that we had changed in production but had not put in puppet.

 * nds0 - added the new archive raw trend directory.
 * nds1 - found that we had not added the last archive raw trend directory to puppet, so added that.
 * tw0/1 - put into puppet a smaller circular buffer size that we had added to the systems to deal with larger channel lists.
 * gds1 - updated the firewall settings to match gds0; we had done some experimentation in the past to troubleshoot items.

Now puppet matches reality and is a faithful record of the daqd configs.

In addition, I removed the mount of /opt/cdscfg from both the daqd systems and the daqd puppet config.  It is brought in due to front-end needs in puppet, but is not required for daqd and just causes issues when we switch boot servers.
H1 CDS
david.barker@LIGO.ORG - posted 10:05, Tuesday 13 June 2023 - last comment - 14:10, Tuesday 13 June 2023(70405)
WP11256 EY Beckhoff HEPI Pump Ctrl Install

The EY Beckhoff controller has the temporary name h1hpipumpctrley1 (10.105.0.64/24) and I have enabled port 14 of the EY VEA vacuum rack switch (sw-ey-aux) for this computer.

Comments related to this report
david.barker@LIGO.ORG - 11:22, Tuesday 13 June 2023 (70412)

For archive: here are the original EY HEPI Pump Controller Settings.

Images attached to this comment
david.barker@LIGO.ORG - 14:10, Tuesday 13 June 2023 (70421)

and here are the CS settings for completeness

 

Images attached to this comment
LHO FMCS
bubba.gateley@LIGO.ORG - posted 10:04, Tuesday 13 June 2023 - last comment - 17:11, Tuesday 13 June 2023(70404)
AHU-2 Cooling Coil 3 Strainer Cleaned
This morning Tyler, Chris and I cleaned the strainer on coil 3 for the chilled water system. This strainer was not as bad as coil 4's, which we cleaned 2 weeks ago, but it did need cleaning, and cleaning it did improve the flow of the cooling coil. We plan to clean the other 4 strainers as time permits on Tuesdays.
We also found the condensate drain plugged on AHU 1 again. We are working to unplug that drain now. The F/B damper on AHU 1 was closed completely, so I have manually opened it to 70% for the time being to see if we can reduce some of the condensate from the coils. Coils 1 & 2 may need cleaning sooner rather than later if the condensate does not start going away soon. 
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:11, Tuesday 13 June 2023 (70428)DetChar, FMP
AHU 2 Cooling Coil 3's strainer cleaning has had minimal impact on the LVEA temperature value and its fluctuations. Excellent!
(Note, we've been calling these Fans, but I found out in talking with Bubba and Tyler today that this is a liquid strainer that filters the line feeding the cooling coils. The *numbering* is still legit: they cleaned the AHU2 Cooling Coil 4 strainer on May 30 2023, and today they cleaned the AHU2 Cooling Coil 3 strainer.)

Separately, the change to have AHU 1 Damper at 60% is OK (Bubba had adjusted from 70% he mentions in his above aLOG to 60% shortly after posting; comments on its effect in the timeline below). 

After unplugging the drain of the air handler, and holding the damper open at 60% in "manual" mode (as opposed to servo-controlled "auto" mode), AHU 1's cooling coils 1 and 2 have now been restored to the much better/cooler normal temperatures (~47 deg F) that we had lost by May 30 2023 (the fateful fire-alarm chaos day; see timeline in LHO:70284). Excellent!

LVEA temperature excursions in Zones 1A (this BSC2 / Beam Splitter), Zone 4 (Output Arm), and Zone 5 (Input Arm) never exceeded 0.4 deg F outside of 67 deg F range, and restored to normal ~67 deg F temperatures with small diurnal fluctuations within 7 hours. Excellent!

Slowly but surely, I think these maintenance activities are good, and not only restoring expected system behavior, but making it better. We now have a much tighter collection of temperatures in the LVEA, cooling coils are operating at a nice low temperature, and more zone heaters are coming alive such that we have the expected "constant heat, constant cool in order to keep the LVEA temperature nice and controlled" behavior.

I now have much more confidence that we can continue to do this kind of maintenance and not have it impact the IFO.

Timeline of today's work all within today's Tuesday Maintenance:
See Today's trend compared against a 7 day and 21 day trend.

Jun 13 2023 08:26 PDT 
    Bubba and Tyler start work, bringing down Air Handler 2, valving out AHU2's cooling coil 3's strainer and clean it.
    Understandably, all zone temperatures start to rise from 67 +/- 0.1 deg F, but only max out at 67.4 +/- 0.1 deg F.

    Also while out there, this is when they find that air handler 1 AHU 1 is flooded "again."

Jun 13 2023 09:03 PDT 
    Within a half hour, they're done with AHU2 strainer cleaning, AHU1 drain de-clogging, turn AHU2 back on, and temperatures begin to drop accordingly.
    
    Upon restoration, though, back at the control room workstation, they still see AHU 1's cooling coil temperatures high (~61 deg F), as they have been since May 30 2023.
    Indeed, the damper for AHU 1 is also closed at 0%, as it has been diurnally since May 30 2023 -- the HVAC servo opens up AHU 1's damper each day (around when outside temperatures exceed 70 deg F), and then gradually ramps it closed again by nightfall's temperature drop.
    Prior to May 30 2023, AHU 1 damper stayed around ~35% open throughout the day and night.

Jun 13 2023 09:03 PDT
    Bubba intuits that there's too much condensation gathering in AHU 1 because its damper is closed too often, and that condensation doesn't get drained out because its drain is clogged. As a mitigation attempt, instead of having the HVAC servo drive the damper open percentage, he switches over to Manual mode and holds it at 70% as described in his aLOG.

    This drops both cooling coil's 1 and 2's temperature down from 61 to ~47 deg F. Awesome.
    But, this starts to scare the wimpy scientists (me) who are paying too close attention too quickly, because they see the temperatures in the LVEA drop below the 67 deg F set point.

Jun 13 2023 10:02 PDT
    Zone 4's (output arm) zone heater turns on, and begins to bring the Zone 4 temperature back in line.

Jun 13 2023 11:00 PDT
    Bubba makes a further adjustment of the AHU 1 damper percent open from 70% to 60% open to try to decrease the cooling in the LVEA.

Jun 13 2023 12:07 PDT
    Zone 1A's zone heater collection turns on (human controlled? servo controlled?) for the first time, much like the Zone 4 and 5 heaters came on for the first time after AHU2's cooling coil 4 strainer was cleaned, ramping up to ~70% in 5 minutes, by 12:15 PDT.

    This really starts to turn the LVEA temperatures around for the better; the temperatures bottom out, turn back up, overshoot a bit, and settle to yesterday's mid-day value. 

Jun 13 2023 17:00 PDT
   Temperatures are all restored to ~67 +/- 0.2 deg F, and again, they never exceeded +/- 0.4 deg F.
   The "natural experiment" reveals that the impulse response of Zone 1A is about 5 hours, as long as the zone heater for that zone comes on!
Images attached to this comment
H1 PSL
jason.oberling@LIGO.ORG - posted 09:22, Tuesday 13 June 2023 (70402)
PSL FSS RefCav Remote Alignment Tweak

This morning I remotely tweaked the beam alignment into the FSS RefCav, as the TPD has been drifting down since last Thursday (6/8).  Once done, I ended up with a TPD of ~0.91 V.  One note: while I was finishing up the alignment tweak, the TPD jumped from ~0.88 V to ~0.90 V; this jump coincided with the IMC_In power increasing from 2 W to 10 W (see attachment).  After the last on-table beam alignment the TPD was at 0.96 V, so I wasn't able to get it back to where it was then; this is an indication that an on-table alignment may be needed.  I'll monitor the TPD throughout the week, and if it starts dropping again I will make plans to do the on-table alignment.

Images attached to this report
H1 AOS
daniel.sigg@LIGO.ORG - posted 09:18, Tuesday 13 June 2023 - last comment - 10:48, Tuesday 13 June 2023(70401)
Atomic Clock Synchronized

After the jump here, we resynchronized the atomic clock with GPS.

Images attached to this report
Comments related to this report
jonathan.hanks@LIGO.ORG - 10:48, Tuesday 13 June 2023 (70410)
The fault codes listed correspond to:

0x16 - reboot alert
0x07 - CBT signal degradation.

So this looks like a reminder that we have had the clock running for a long time and it is getting old.
H1 DAQ
david.barker@LIGO.ORG - posted 08:17, Tuesday 13 June 2023 (70397)
WP11255 Started zpool scrub h1daqframes-0 08:09 PDT
H1 General
camilla.compton@LIGO.ORG - posted 08:04, Tuesday 13 June 2023 - last comment - 08:51, Tuesday 13 June 2023(70396)
OPS Day Shift Start

TITLE: 06/13 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY: Starting Maintenance day. Ryan has taken the SEI_CONF to Maintenance, IFO has stayed locked so far.

Dust monitors, VAC, SUS, SEI, CDS okay

Comments related to this report
camilla.compton@LIGO.ORG - 08:51, Tuesday 13 June 2023 (70399)
15:08 UTC Unlocked the IFO by turning off H1:IMC-REFL_SERVO_IN1EN: sitemap > IOO > IMC Overview > MC Servo > turned off INPUT 1.
This is Jeff's method for a "nice" lockloss, i.e. one that doesn't ring up the suspensions.
H1 General (Lockloss)
austin.jennings@LIGO.ORG - posted 17:14, Monday 12 June 2023 - last comment - 10:24, Tuesday 13 June 2023(70386)
Lockloss @ 0:07

LOCKLOSS @ 0:07, had an SRM saturation right before the lockloss. Seeing some movement in INP1 P - scope attached.

Images attached to this report
Comments related to this report
bricemichael.williams@LIGO.ORG - 10:24, Tuesday 13 June 2023 (70408)
You can see MICH and SRM increase in the minute preceding the lockloss, which agrees with the increase in wind to 20 mph; see attached.
Images attached to this comment
H1 General
austin.jennings@LIGO.ORG - posted 19:07, Sunday 11 June 2023 - last comment - 14:05, Tuesday 13 June 2023(70349)
Lockloss @ 2:02 UTC

LOCKLOSS @ 2:02, had an SRM saturation right before the lockloss. Seeing some movement in INP1 P and some LSC instability as well.

Images attached to this report
Comments related to this report
bricemichael.williams@LIGO.ORG - 14:05, Tuesday 13 June 2023 (70418)
Looking at the wind and the SRM, the wind seems to be gusting between 10-17 mph at the same frequency as the oscillations in the SRM, starting 1m 21s before the lockloss. 
Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 16:36, Friday 09 June 2023 - last comment - 13:01, Tuesday 13 June 2023(70317)
unresponsive filters not under local model control

Following up on EJ's alog, I have extended the code to check whether an unresponsive filter is under local control by the model. I ran it this afternoon while H1 was in observe, running in 'nice' mode so as to not hammer the front ends with CA requests (delays between requests).

If we disregard locally controlled filters, the number of unresponsive filters is reduced to 3 (see attached). The top one runs on h1lsc, the other two on h1lscaux.

Here is the full list, with locally controlled filters marked with *

Total number of filtermodules = 13367
num unresponsive 14

h1lsc {'LSC-EXTRA_AI_2': 'FM2  '}

h1lscaux {'LSC-LOCKIN_1_DEMOD_9_I': 'FM1  ',
          'LSC-LOCKIN_1_DEMOD_9_Q': 'FM1  '}

h1sqz {'SQZ-RLF_VCXO_SERVO': 'FM2* '}

h1alsex {'LSC-X_ARM_DRIVE': 'FM3* FM5* '}

h1alsey {'LSC-Y_ARM_DRIVE': 'FM3* FM5* '}

h1sussqzin {'SUS-ZM1_M2_COILOUTF_LL': 'FM1* FM6* ',
            'SUS-ZM1_M2_COILOUTF_LR': 'FM1* FM6* ',
            'SUS-ZM1_M2_COILOUTF_UL': 'FM1* FM6* ',
            'SUS-ZM1_M2_COILOUTF_UR': 'FM1* FM6* ',
            'SUS-ZM3_M2_COILOUTF_LL': 'FM1* FM6* ',
            'SUS-ZM3_M2_COILOUTF_LR': 'FM1* FM6* ',
            'SUS-ZM3_M2_COILOUTF_UL': 'FM1* FM6* ',
            'SUS-ZM3_M2_COILOUTF_UR': 'FM1* FM6* '}
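A minimal sketch of the bookkeeping described above, with made-up data structures (the real checker queries the front ends over Channel Access; names and shapes here are assumptions):

```python
def unresponsive(filters, locally_controlled):
    """filters: {module: {fm_name: engaged?}} from a hypothetical CA sweep;
    locally_controlled: {module: set of FMs driven by the front-end model}.
    Returns the stuck filters that are NOT under local model control."""
    out = {}
    for module, fms in filters.items():
        stuck = sorted(fm for fm, engaged in fms.items()
                       if not engaged
                       and fm not in locally_controlled.get(module, ()))
        if stuck:
            out[module] = stuck
    return out
```

Disregarding the locally controlled entries is what reduces the list of 14 down to the 3 genuinely unresponsive filters.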
Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 14:38, Monday 12 June 2023 (70378)

Turned these filters off:

h1lsc {'LSC-EXTRA_AI_2': 'FM2  '}
h1lscaux {'LSC-LOCKIN_1_DEMOD_9_I': 'FM1  ',
          'LSC-LOCKIN_1_DEMOD_9_Q': 'FM1  '}

Added gain of 1 filters to:

h1sqz {'SQZ-RLF_VCXO_SERVO': 'FM2* '}
h1sussqzin {'SUS-ZM1_M2_COILOUTF_LL': 'FM1* FM6* ',
            'SUS-ZM1_M2_COILOUTF_LR': 'FM1* FM6* ',
            'SUS-ZM1_M2_COILOUTF_UL': 'FM1* FM6* ',
            'SUS-ZM1_M2_COILOUTF_UR': 'FM1* FM6* ',
            'SUS-ZM3_M2_COILOUTF_LL': 'FM1* FM6* ',
            'SUS-ZM3_M2_COILOUTF_LR': 'FM1* FM6* ',
            'SUS-ZM3_M2_COILOUTF_UL': 'FM1* FM6* ',
            'SUS-ZM3_M2_COILOUTF_UR': 'FM1* FM6* '}

 

This should have no effect on anything.

daniel.sigg@LIGO.ORG - 13:01, Tuesday 13 June 2023 (70417)

Removed the controls from these filter modules:

h1alsex {'LSC-X_ARM_DRIVE': 'FM3* FM5* '}
h1alsey {'LSC-Y_ARM_DRIVE': 'FM3* FM5* '}

Turned these filters off after a model restart.
