LHO VE
david.barker@LIGO.ORG - posted 10:32, Tuesday 10 December 2024 - last comment - 10:45, Tuesday 10 December 2024(81725)
Tue CP1 Fill

Tue Dec 10 10:10:41 2024 INFO: Fill completed in 10min 38secs

 

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:45, Tuesday 10 December 2024 (81726)

Today's TC mins only just exceeded the -70C trip level (-75C, -74C). For tomorrow's fill I've increased the trip to -65C.

H1 TCS (PEM)
camilla.compton@LIGO.ORG - posted 09:21, Tuesday 10 December 2024 (81723)
CO2s turned off while IFO locked at 15:41UTC: SQZ changed in YAW, need to repeat to see coupling to Crab Pulsar at 59Hz

Camilla, Robert, WP12232

While looking for coupling of the CO2 chillers into DARM at the Crab pulsar's 59Hz (81246), and also the CO2s' effect on SQZ (72244), this morning we turned off the CO2s and stayed locked for ~40 minutes, which is longer than we expected. The cause of the lockloss is unclear, as maintenance activities had already started.

We aimed to see if the 59Hz peak's coupling into DARM changed (this needs ~20+ minutes of data), but the accelerometers showed that by increasing the CO2 power dumped into the water-cooled beam dump, the load on the chillers changed and the 59Hz peak moved by ~0.1Hz, which means it would be very difficult to see in DARM.

In the attached plot, you can see that the SQZ (especially at high frequency) got worse without the CO2s. The SQZ ASC also changed the alignment, mainly in YAW; does that mean the CO2s need some alignment improvement in YAW? Looking at the HWS signals, the optic substrate absorption changed as expected and started to level out, although the simulation didn't expect it to level out yet.

In future, to repeat this test, we could use a normal beam dump to dump the 1.7W before the periscope, so that the load on the chiller isn't changed but no CO2 is injected and anything that could backscatter light after the periscope is blocked. Alternatively, we could shake the table. Still, the VPs themselves could be a source of scatter, as they are only AR coated at 10.6um, meaning they reflect up to 15% of 1064nm per surface (DCC D1100439).

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 07:30, Tuesday 10 December 2024 - last comment - 08:01, Tuesday 10 December 2024(81720)
OPS Tuesday day shift start

TITLE: 12/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.42 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 08:01, Tuesday 10 December 2024 (81721)

I ran the range comparison for both hours of the bad range this morning vs. an hour of good range earlier in the same lock. The excess noise looks to be largely above ~30Hz.

Non-image files attached to this comment
H1 CDS
erik.vonreis@LIGO.ORG - posted 06:18, Tuesday 10 December 2024 (81719)
Workstations updated

Workstations were upgraded and rebooted. This was an OS package upgrade; conda packages were not upgraded.

H1 General
oli.patane@LIGO.ORG - posted 22:03, Monday 09 December 2024 (81718)
Ops Eve Shift End

TITLE: 12/10 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Currently Observing at 158 Mpc and have been Locked for 4 hours. The range is okay now, but we did go through a patch of low range that looks very similar to the low range from the last lock. For both this lock and the previous lock it looks like the low range started around the same time after locking, 1 hour and 20 minutes after the lock started (ndscope). I couldn't find anything in the squeezing or jitter diaggui.
LOG:

00:26 Running an initial alignment
01:26 Initial alignment done, relocking

02:06 NOMINAL_LOW_NOISE    
02:09 Observing  

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
21:59 | CDS | Erik, Jonathan | MSR | N | VM Server Setup | 23:58
00:07 | CDS | Jonathan, Erik | LVEA | remote | Moving virtual machines | 00:39
00:08 | PCAL | Tony | PCAL | y(local) | Getting stuff ready for tomorrow | 00:29
00:22 | PEM | Robert | LVEA | YES | Putting viewport covers back on | 01:22
Images attached to this report
H1 SEI
oli.patane@LIGO.ORG - posted 18:29, Monday 09 December 2024 (81715)
ISI CPS Noise Spectra Check Weekly FAMIS

Closes FAMIS#26020, last checked 81615

Non-image files attached to this report
H1 General
oli.patane@LIGO.ORG - posted 17:55, Monday 09 December 2024 (81713)
Ops Eve Shift Start

TITLE: 12/10 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.37 μm/s
QUICK SUMMARY:

Late shift start log, but we have been relocking and are currently at REDUCE_RF45_MODULATION_DEPTH

H1 CDS
jonathan.hanks@LIGO.ORG - posted 17:12, Monday 09 December 2024 (81712)
WP 12239 Initial work, installing new VM hypervisor and starting to transfer VMs off of cdsproxmox
Erik, Jonathan, and Tony,

As per WP 12239 we installed a new hypervisor machine, cdsproxmox2.  Erik and Tony racked a spare system in the server3 rack.  We configured the sw-msr-server3 switch to support the new hypervisor using ports 47 & 48.

We installed the same Proxmox VE version that runs the current cluster via a bootable ISO image. The install went very smoothly. We did a simple local install with ZFS raidz on 3x 2TB disks. We were able to quickly add it to the cluster using the cluster management page and the 'join cluster' command.

Some notes of extra steps we needed to do:

 * Add the proper proxmox apt-source (the no-subscription version)
 * Install the ifupdown2 package (after setting up the apt-source)
 * Then we could enable the 2nd ethernet port and create the VMs' main ethernet bridge (a sketch of the resulting interfaces stanzas is below)
   * eno2 was configured to autostart, no IP address
   * vmbr1 was configured to autostart, no IP address, vlan aware, and associated with eno2
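
For reference, here is a minimal sketch of what those ifupdown2 stanzas typically look like in /etc/network/interfaces on a Proxmox host. This is an illustration based on the usual Proxmox conventions, not a copy of the cdsproxmox2 config; the VLAN range in particular is just the common default:

    auto eno2
    iface eno2 inet manual

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094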

Then we could migrate some VMs. So far we have moved the LDAP servers, a development machine, and our conda mirror (in progress); these can be moved without disturbing the control room.

The point of this work is to provide enough physical hardware to be able to turn off one of the other hypervisors for maintenance work.  Cdsproxmox has disk issues.

This work will continue tomorrow.
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:39, Monday 09 December 2024 - last comment - 10:31, Tuesday 10 December 2024(81711)
OPS Day Shift Summary

TITLE: 12/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is LOCKING and in ENVIRONMENT due to EARTHQUAKE.

We had a few good hours of locking today but as with yesterday, earthquakes have been rampant. Here are details:

LOG:

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
23:03 | HAZ | LVEA IS LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 06:09
17:08 | FAC | Karen | MY | N | Technical Cleaning | 17:42
17:09 | PEM | Robert | LVEA | YES | Viewport setup | 18:09
19:54 | PEM | Robert | LVEA | YES | Recording lock acquisition from viewport | 20:13
21:59 | CDS | Erik, Jonathan | MSR | N | VM Server Setup | 23:58
22:34 | FAC | Tyler | EX | N | Cryopump check | 21:34
22:48 | EE | Fil | Receiving Rollup | N | Item transport | 23:48
00:07 | CDS | Jonathan, Erik | LVEA | remote | Moving virtual machines | 00:35
00:08 | PCAL | Tony | PCAL | y(local) | Getting stuff ready for tomorrow | 00:29
00:22 | PEM | Robert | LVEA | YES | Putting viewport covers back on | 01:22
Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:31, Tuesday 10 December 2024 (81724)SQZ

Attached is DARM for the no SQZ test time (~10 minute averages). It seems like the noise stopped before the test started. We are seeing worse DARM around 20Hz with SQZ.

Images attached to this comment
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 15:45, Monday 09 December 2024 - last comment - 18:10, Monday 09 December 2024(81709)
Lockloss

Lockloss @ 12/09 23:12UTC due to large earthquake from Nevada. Sitting in DOWN until ground motion goes down

Comments related to this report
oli.patane@LIGO.ORG - 18:10, Monday 09 December 2024 (81714)

12/10 2:09UTC Back to Observing

H1 TCS (DetChar, DetChar-Request, ISC)
thomas.shaffer@LIGO.ORG - posted 14:33, Monday 09 December 2024 - last comment - 11:04, Tuesday 10 December 2024(81705)
Turned off CO2s for 2 min during poor range

Camilla C, TJ S

This morning we had another period where our range was fluctuating by almost 40Mpc, previously seen on Dec 1 (alog81587) and further back in May (alog78089). Camilla and I decided to turn off both TCS CO2s for a short period just to completely rule them out, since previously there was a correlation between these range dips and a TCS ISS channel. We saw no positive change in DARM during this short test, but we didn't want to go too long and lose lock. The CO2s were requested to have no output power from 16:12:30-16:14:30 UTC.

In the past times that we have seen this range loss, the H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ channel and the H1:SQZ-FC_LSC_DOF2_OUT_DQ channel had noise that correlated with the loss, but the ISS channel showed nothing different this time (attachment 2). We were also in a state of no squeezing at the time, so it's possible that this is a completely different type of range loss.

DetChar, could we run Lasso or check on HVETO for a period during the morning lock with our noisy range?

Images attached to this report
Comments related to this report
jane.glanzer@LIGO.ORG - 11:04, Tuesday 10 December 2024 (81730)DetChar

Here is a link to a lasso run during this time period. The two channels with the highest coefficients are a midstation channel H1:PEM-MY_RELHUM_ROOF_WEATHER.mean and a HEPI pump channel H1:HPI-PUMP_L0_CONTROL_VOUT.mean. 

H1 General (SQZ)
oli.patane@LIGO.ORG - posted 22:44, Sunday 08 December 2024 - last comment - 14:38, Monday 09 December 2024(81690)
Ops Eve Shift End

TITLE: 12/09 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing without squeezing and have been Locked for 1 hour.
All throughout the day we weren't able to lock because of many earthquakes, but in the last hour I was finally able to relock the IFO without losing lock to ground motion.
When we got to NLN we couldn't go into Observing because the squeezer PMC wouldn't lock - the message said "Cannot lock PMC. Check SQZ laser power". Last time this happened we were able to get it to lock just by re-requesting LOCKED for the PMC guardian, but this time it didn't work. I called Camilla, and we saw that although the PZT thought it was scanning to try and lock, the PZT volts were not changing and were just sitting out of range, around -12 volts, and that 10 hours ago something had happened to the PZT volts where they suddenly glitched down past -200, which is much lower than they ever go.
We weren't able to get it working again, so we used Ryan Short's script to change the nominal SQZ states. I accepted the SDFs, and we went to Observing at 06:31 UTC. Here is Camilla's alog with an ndscope. Tagging SQZ.
LOG:

22:32 Started an initial alignment
22:58 Initial alignment done, relocking
    - Lockloss from LOCKING_ARMS_GREEN
    - Lockloss from OFFLOAD_DRMI_ASC
    - Lockloss from ENGAGE_ASC_FOR_FULL_IFO
    - Lockloss from RESONANCE due to large EQ hitting
00:23 Put IFO in DOWN to wait out earthquake
- HAM2, HAM3, HAM5 ISI tripped
- IM2 tripped

02:58 Started an initial alignment
03:22 Initial alignment done, relocking
    - Lockloss from PREP_DC_READOUT_TRANSITION due to earthquake
03:43 Sitting in DOWN while earthquake passes

04:48 Trying to relock
05:46 NOMINAL_LOW_NOISE
    - Issues getting PMC to lock - see above
06:31 Observing without squeezing

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:06, Monday 09 December 2024 (81696)SQZ

As the SQZ_ANG_ADJUST guardian had a conditional for its nominal state depending on sqzparams.use_sqz_angle_adjus, the observation-without-squeezing code doesn't change it, so we edited ADJUST_SQZ_ANG.py ourselves.

Now that we've been observing for months using this state, I'm removing the conditional so it is always nominal = 'ADJUST_SQZ_ANG_ADF', and adding a note to change this in sqzparams.

I will ask Ryan to change the permissions so we can have SQZ_ANG_ADJUST included in his switch_nom_sqz_states.py script.

ryan.short@LIGO.ORG - 14:38, Monday 09 December 2024 (81706)SQZ

I've fixed the group permissions on the switch_nom_sqz_states.py script to be 'controls' and added the SQZ_ANG_ADJUST node to be updated along with the others with a nominal up state of 'ADJUST_SQZ_ANG_ADF'.

I also added several models to the sqz_models list in the script that will allow for SDF diffs when going to the no-SQZ configuration. Since the IFO will be observing in a non-nominal configuration, the intent is to not accept SDF diffs associated with this temporary change. The full list of models that will now be excluded is as follows:

  • sqz
  • syscssqz
  • ascsqzifo
  • ascsqzfc
  • susfc2
H1 SQZ (CDS, OpsInfo)
camilla.compton@LIGO.ORG - posted 22:41, Sunday 08 December 2024 - last comment - 15:26, Monday 09 December 2024(81689)
SQZ High Voltage seems to be off. Oli has taken the IFO to observing without SQZ.

It appears that the SQZ HV (PMC/SHG/OPO PZT, ZM2,4,5 PSAMS) went down at 12/08 20:33UTC. Trends attached. 

Oli found that the SQZ PMC wasn't locking. We tried enabling the PZT manually, and although it thought it was scanning, the PZT volts remained at -12; Oli found it had been like this since 12/08 20:33UTC. We then realized that all the PSAMS volts and other PZTs were off. Oli has taken the IFO into observing without SQZ for tonight so the CDS/EE team can look at this tomorrow.

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 08:40, Monday 09 December 2024 (81693)VE

This was likely caused by a reported HAM7 interlock trip. Tagging Vacuum.

Looking at HAM7 pressure doesn't seem to show anything at the time of trip though.

Images attached to this comment
camilla.compton@LIGO.ORG - 10:14, Monday 09 December 2024 (81699)

Richard untripped the HV in the Mech room mezzanine at 2024/12/09 16:12:14 UTC this morning. The ZM2/4/5 PSAMS railed; we brought them back in 81697 by clearing the servos' histories.

patrick.thomas@LIGO.ORG - 15:26, Monday 09 December 2024 (81708)
I logged into h0vacly and looked up the status of PT153. See "Latched Device Error Details" in the attached screenshot. According to the manual, this appears to translate to "Electronics failure, Non-volatile memory failure, invalid internal communication".
Images attached to this comment
H1 SUS
oli.patane@LIGO.ORG - posted 13:56, Sunday 08 December 2024 - last comment - 19:29, Monday 09 December 2024(81668)
Suspension watchdog trips during the large earthquake

Back in March (76269) Jeff and I updated all the suspension watchdogs (besides OFIS, OPOS, and HXDS, since those were already up to date) to use better BLRMS filtering and to output in um. We set the suspension watchdog thresholds to values between 100 and 300 µm, but these values were set somewhat arbitrarily since there was previously no way to see how far the stages move in different scenarios. We had upped a few of the thresholds after having some suspensions trip when they probably shouldn't have, and this is a continuation of that.
During the large earthquake that hit us on December 5th, 2024 18:46 UTC, all ISI watchdogs tripped as well as some of the stages on several suspensions. After a cursory look, all suspensions that tripped only had either the bottom or bottom+penultimate stage trip, meaning that with the exception of the single suspensions, the others' M1 stage damping should have stayed on.

We wanted to go through and check whether the trips may have just been because of the movement from the ISIs tripping. If that is the case, we want to raise the suspension watchdog thresholds for those stages so that these suspensions don't trip every single time their ISI trips, especially if the amount that they are moving is still not very large.
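
For context, here is a rough sketch of the kind of band-limited RMS check these watchdogs perform. This is not the front-end implementation; the data rate, band edges, window, and threshold below are assumptions for illustration only:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    FS = 256            # Hz, assumed OSEM data rate for this sketch
    THRESHOLD_UM = 150  # um, example watchdog threshold

    def blrms(osem_um, f_lo=0.1, f_hi=10.0, fs=FS, window_s=1.0):
        """Bandpass the OSEM displacement signal (um), then take a sliding RMS."""
        sos = butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(sos, osem_um)
        n = int(window_s * fs)
        return np.sqrt(np.convolve(band**2, np.ones(n) / n, mode='same'))

    def would_trip(osem_um):
        # The watchdog trips if the BLRMS exceeds the threshold at any point
        return bool(np.any(blrms(osem_um) > THRESHOLD_UM))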

Suspension stages that tripped:

Triples:
- MC3 M3
- PR3 M2, M3
- SRM M2, M3
- SR2 M2, M3

Singles:
- IM1 M1
- OFI M1
- TMSX M1

MC3 (M3) (ndscope1)

When the earthquake hit and we lost lock, all stages were moving due to the earthquake, but once HAM2 ISI tripped 5 seconds after the lockloss, the rate at which the OSEMs were moving quickly accelerated, so the excess motion looks to mainly be due to the ISI trip (ndscope2).

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 150 | 86 | 150 (unchanged)
M2 | 150 | 136 | 175
M3 | 150 | 159 | 200

 

PR3 (M2, M3) (ndscope3)

Looks to be the same issue as with MC3.

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 150 | 72 | 150 (unchanged)
M2 | 150 | 162 | 200
M3 | 150 | 151 | 200

 

SRM (M2, M3) (ndscope4)

Once again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM5 saturated and the ISI watchdogs tripped.

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 150 | 84 | 150 (unchanged)
M2 | 150 | 165 | 200
M3 | 150 | 174 | 225

 

 SR2 (M2, M3) (ndscope5)

Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM4 saturated and the ISI watchdogs tripped.

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 150 | 102 | 150 (unchanged)
M2 | 150 | 182 | 225
M3 | 150 | 171 | 225

 

IM1 (M1) (ndscope6)

Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM2 saturated and the ISI watchdogs tripped.

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 150 | 175 | 225

 

OFI (M1) (ndscope7)

Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM5 saturated and the ISI watchdogs tripped.

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 150 | 209 | 250

 

TMSX (M1) (ndscope8)

This one seems a bit questionable - it looks like some of the OSEMs were already moving quite a bit before the ISI tripped, and there isn't as clear a point where they started moving more once the ISI had tripped (ndscope9). I will still be raising the suspension trip threshold for this one, just because it doesn't need to be raised very much and stays within a reasonable range.

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 100 | 185 | 225
Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 19:09, Sunday 08 December 2024 (81686)

We just had an earthquake come through and trip some of the ISIs, including HAM2, and with that IM2 tripped (ndscope1). I checked to see if the movement in IM2 was caused by the ISI trip, and sure enough it was (ndscope2). I will be raising the suspension watchdog threshold for IM2 up to 200.

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 150 | 152 | 200
Images attached to this comment
oli.patane@LIGO.ORG - 22:45, Sunday 08 December 2024 (81691)
Images attached to this comment
oli.patane@LIGO.ORG - 19:29, Monday 09 December 2024 (81716)

Yet another earthquake! The earthquake that hit us December 9th 23:10 UTC tripped almost all of our ISIs, and we had three suspension stages trip as well, so here's another round of trying to figure out whether they tripped because of the earthquake or because of the ISI trips. The three suspensions that tripped are different from the ones I updated the thresholds for earlier in this alog.

I will not be making these changes right now since that would knock us out of Observing, but the next time we are down I will make the changes to the watchdog thresholds for these three suspensions.

Suspension stages that tripped:

- MC2 M3

- PRM M3

- PR2 M3

MC2 (M3) (ndscope1)

It's hard to tell what caused the M3 trip for this one (ndscope2), but I will up the threshold for M3 anyway since, even if the trip was directly caused by the earthquake, the ISI tripping definitely wouldn't have helped!

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 150 | 88 | 150 (unchanged)
M2 | 150 | 133 | 175
M3 | 150 | 163 | 200

 

PRM (M3) (ndscope3)

For this one it's pretty clear that it was because of the ISI tripping (ndscope4).

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 150 | 44 | 150 (unchanged)
M2 | 150 | 122 | 175
M3 | 150 | 153 | 200

 

PR2 (M3) (ndscope5)

Again, for this one it's pretty clear that it was because of the ISI tripping (ndscope6).

Stage | Original WD threshold | Max BLRMS reached after lockloss | New WD threshold
M1 | 150 | 108 | 150 (unchanged)
M2 | 150 | 129 | 175
M3 | 150 | 158 | 200
Images attached to this comment
H1 OpsInfo (ISC)
oli.patane@LIGO.ORG - posted 16:27, Saturday 02 November 2024 - last comment - 20:26, Monday 09 December 2024(81015)
Inspiral range integrand and DARM comparison tool for low range checks

Using the darm_integral_compare.py script from the NoiseBudget repo (NoiseBudget/aligoNB/production_code/H1/darm_integral_compare.py) as a starting point, I made a version that is simplified and easy to run for when our range is low and we want to compare range vs frequency with a previous time.

It takes two start times, supplied by the user, and for each time it grabs the DARM data between the start time and an end time of start time + 5400 seconds (1.5 hours). Using this data it calculates the inspiral range integrand and returns two plots (pdf): one showing the range integrand plotted against frequency for each set of data (png1), and a second showing DARM for each set of data, along with a trace showing the cumulative difference in range between the two sets as a function of frequency (png2). These are saved both as pngs and in a pdf in the script's folder.

This script can be found at gitcommon/ops_tools/rangeComparison/range_compare.py. To run it you just need to supply the GPS times for the two stretches of time that you want to compare, although there is also an optional argument you can use if you want the length of data taken to differ from the default 5400 seconds. The command used to generate the PDF and PNGs attached to this alog was as follows: python3 range_compare.py --span 5000
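
For reference, here is a minimal sketch of the kind of comparison the script makes. This is not the script itself; the channel name, PSD settings, and the unnormalized f**(-7/3)/PSD integrand are illustrative assumptions:

    import numpy as np
    from gwpy.timeseries import TimeSeries

    CHANNEL = 'H1:GDS-CALIB_STRAIN'  # assumed strain channel for this sketch
    SPAN = 5400                      # seconds, the script's default span

    def range_integrand(t0, span=SPAN):
        data = TimeSeries.get(CHANNEL, t0, t0 + span)
        psd = data.psd(fftlength=8, overlap=4)  # Welch estimate
        f = psd.frequencies.value
        keep = f > 10
        # BNS inspiral range integrand scales as f**(-7/3) / S(f) (constants omitted)
        return f[keep], f[keep]**(-7.0 / 3.0) / psd.value[keep]

    f1, d1 = range_integrand(1414349158)
    f2, d2 = range_integrand(1414586877)
    cumulative_diff = np.cumsum(d1 - d2)  # cumulative difference vs frequency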

Images attached to this report
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 20:26, Monday 09 December 2024 (81717)

I apparently didn't do a very good job of telling you how to run this and forgot to put the example times in the command, so here's a clearer (actually complete) explanation.

To find the script, go to:

cd /ligo/gitcommon/ops_tools/rangeComparison/

and then to run the script:

python3 range_compare.py [time1] [time2]

where time1 and time2 are the GPS start times for the two stretches of time that you want to compare. The default time span it will run with for each time is 5400 seconds after the start time, but this can be changed by using the --span argument followed by the number of seconds you want. For example, the plots from the original alog were made by running the command python3 range_compare.py --span 5000 1414349158 1414586877

H1 SQZ (OpsInfo)
ryan.short@LIGO.ORG - posted 13:27, Tuesday 03 September 2024 - last comment - 15:55, Monday 09 December 2024(79887)
Script to Switch Nominal SQZ Guardian States for Observing w/out SQZ

I've written a script, /opt/rtcds/userapps/release/sqz/h1/scripts/switch_nom_sqz_states.py, which will change the nominal states of the SQZ Guardian nodes to be used when it's decided that H1 will observe without SQZ, a process adapted from the ObservationWithOrWithoutSqueezing wiki. This is intended to save operators time so the switch to a configuration where H1 can observe without SQZ is quick and brings us to observing promptly.

The script first commits any uncommitted changes in the SQZ Guardians to svn with an "as found" message before the nominal states are changed. Once changed, the nodes are then loaded and SQZ_MANAGER is requested to NO_SQUEEZING, which should make sure all nodes are in the state they should be.

This script also adds the 'syscssqz' model to the EXCLUDE_LIST in the DIAG_SDF node when going to the no-SQZ configuration. This is to facilitate any SDF diffs that may appear as a result of SQZ misbehaving and will allow H1 to go to observing while ignoring any diffs in this model. More models to exclude can be added in the script as desired.
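
For illustration, here is a minimal sketch of the "as found" svn commit and the final request to SQZ_MANAGER. This is not the script itself; the guardian directory path and the GRD request channel name follow the usual conventions but are assumptions here:

    import pysvn
    from epics import caput  # pyepics

    SQZ_GUARDIAN_DIR = '/opt/rtcds/userapps/release/sqz/h1/guardian'  # assumed path

    # Commit any local guardian edits "as found" before changing nominal states
    client = pysvn.Client()
    modified = [s.path for s in client.status(SQZ_GUARDIAN_DIR)
                if s.text_status == pysvn.wc_status_kind.modified]
    if modified:
        client.checkin(modified, 'SQZ guardian code as found before no-SQZ switch')

    # ... edit the nominal states of the SQZ nodes and reload them here ...

    # Request SQZ_MANAGER to NO_SQUEEZING (assumed GRD channel naming convention)
    caput('H1:GRD-SQZ_MANAGER_REQUEST', 'NO_SQUEEZING')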

Since the pysvn package used by the script to commit to svn is a Debian package not found in conda, all conda environments must be deactivated for this script to work. Hence, when running this script, utilize the 'noconda' bash script wrapper found in userscripts. Calling this script to change to the configuration where H1 will observe without SQZ would look as follows:

noconda python switch_nom_sqz_states.py without

This script can also be used to go back to the configuration where H1 is observing with SQZ; simply replace the 'without' argument when running it with 'with' and the nominal SQZ Guardian states will be changed back to normal and the SQZ models will be removed from DIAG_SDF's exclude list.

Comments related to this report
ryan.short@LIGO.ORG - 15:55, Monday 09 December 2024 (81710)OpsInfo

(Repost from alog81706 to document updates to this script)

I've updated this script so that it now also changes the nominal state for the SQZ_ANG_ADJUST node (previously this was left out due to it being set by a parameter in sqzparams).

I also added several models to the sqz_models list in the script that will allow for SDF diffs when going to the no-SQZ configuration. Since the IFO will be observing in a non-nominal configuration, the intent is to not accept SDF diffs associated with this temporary change. The full list of models that will now be excluded is as follows:

  • sqz
  • syscssqz
  • ascsqzifo
  • ascsqzfc
  • susfc2