Reports until 10:38, Tuesday 22 August 2023
H1 SQZ
victoriaa.xu@LIGO.ORG - posted 10:38, Tuesday 22 August 2023 - last comment - 13:57, Monday 28 August 2023(72371)
Aligned opo pump fiber polarization

Naoki noticed the pump fiber rejected PD (in ham7, rejects pump fiber light that comes out in the wrong polarization) was saturated, so today I re-aligned the pump fiber polarization using sqzt0 pico's (described recently also in 71761).

I'm not sure why the pump fiber needs to have its input polarization readjusted so often; I checked the CLF fiber and FCGS fiber, and they both seemed relatively well-aligned despite not having been adjusted in a while. Especially this time, it seems the fiber polarization got misaligned more quickly than before.

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 13:57, Monday 28 August 2023 (72492)

Austin had to reset the sqz pump ISS again on Sunday (72474), lowering the generated sqz level by dropping OPO trans from 80uW (recent nominal) to 65uW; the sqz level correspondingly went down. Naoki and I have re-aligned the pump fiber polarization and brought the squeezer back to the nominal 80uW generated sqz level.

It's strange that we had to re-align the pump fiber polarization, again. The pump fiber polarization seems to be misaligning more quickly recently, see trends. This time, the fiber polarization misaligned to saturation in 1-2 days (last time was several days, before that we never saw it misaligned to saturation). It also needed both the L/2 and L/4 waveplates to re-align it. We should definitely monitor this situation and see if we can understand/fix why it's happening.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:13, Tuesday 22 August 2023 (72370)
Tue CP1 Fill

Tue Aug 22 10:08:15 2023 INFO: Fill completed in 8min 11secs

Note: TCs did not saturate at -200C; the outside temperatures are lower today.

Images attached to this report
H1 General (SUS)
ryan.crouch@LIGO.ORG - posted 08:04, Tuesday 22 August 2023 (72368)
OPS Tuesday day shift start

TITLE: 08/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventative Maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 7mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY: We're going to try to stay locked for a while this maintenance period; the BRS at EndX has been turned off for the PCAL team working down there.

LHO General
ryan.short@LIGO.ORG - posted 00:00, Tuesday 22 August 2023 (72362)
Ops Eve Shift Summary

TITLE: 08/21 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: H1 was able to relock and start observing an hour into the shift, then remained observing for the rest of the evening. H1 has now been locked for 7 hours.

The dust monitors for PSL101, EX VEA, and Optics Lab have been alarming on and off throughout the evening. Trends of counts attached.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
01:22 CAL Tony PCal Lab - Preparing equipment 01:41
Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 20:35, Monday 21 August 2023 (72367)
Ops Eve Mid Shift Report

State of H1: Observing at 152Mpc

H1 was able to relock automatically following the lockloss at the start of the shift. H1 has now been locked for 3.5 hours.

H1 PSL
ryan.short@LIGO.ORG - posted 18:25, Monday 21 August 2023 (72366)
PSL 10-Day Trends

FAMIS 19990

The PMC and FSS alignment from last Tuesday (alog 72222) caused the expected drop in PMC_REFL power and increase in PMC_TRANS. The FSS_TPD signal didn't improve much with that tweak, but over the past 3 days it's risen by ~50mV. The ISS diffracted power is continuing to jump around as we've seen it do recently.

Images attached to this report
H1 General (Lockloss, SQZ)
ryan.short@LIGO.ORG - posted 17:12, Monday 21 August 2023 - last comment - 17:37, Monday 21 August 2023(72363)
Lockloss @ 22:59 UTC

Lockloss @ 22:59 UTC

Immediately prior to the lockloss (at 22:59:42.89 to be precise), the SQZ_MANAGER guardian reported the notifications "SQZ ASC AS42 not on?? Please RESET_SQZ_ASC" and "FC-IR UNLOCKED!"

ISC_LOCK jumped to LOCKLOSS at 22:59:43.58, so I suspect the filter cavity unlocking had something to do with this lockloss. However, I'm not sure if we've seen a lockloss from just the FC unlocking in the past, so I hesitate to say this was the definitive cause or just a case of guardian loop timing showing a red herring.

Comments related to this report
camilla.compton@LIGO.ORG - 17:17, Monday 21 August 2023 (72364)

This might be the same as alog 71659, where ISC_LOCK was just slower than SQZ in detecting the lockloss. If this is the case it could be that ISC_LOCK's lockloss thresholds need updating. 

ryan.short@LIGO.ORG - 17:37, Monday 21 August 2023 (72365)

Comparing the 2kHz ASC-AS_A channel against SQZ-LO_SERVO_ERR_OUT, AS_A sees a bump before the hit to SQZ-LO_SERVO, so the previously mentioned guardian log messages had indeed misled me. ISC_LOCK doesn't jump to LOCKLOSS until almost a full second after this.

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 16:11, Monday 21 August 2023 (72359)
Ops Eve Shift Start

TITLE: 08/21 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 9mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY: H1 just lost lock, cause still being determined.

LHO General
austin.jennings@LIGO.ORG - posted 16:03, Monday 21 August 2023 (72354)
Monday Operator Summary

TITLE: 08/21 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

- Couple of yellow dust monitor alerts from the Optics Lab

- Lockloss @ 22:59 (15 seconds before the end of my shift :(). Leaving a locking IFO to Ryan S.
LOG:

Start Time System Name Location Lazer_Haz Task Time End
16:07 FAC Cindy MY N Tech clean 17:28
16:42 FAC Karen VAC/OLab N Tech clean 17:13
17:20 FAC Ken MY/X N Work on lighting ??
18:53 FAC Cindy Receiving N Move cardboard ??
20:18 VAC Travis MY N Check turbo pumps 20:43
21:07 FAC Tyler MX/MY N Checks ??
21:09 EE Fil/Tony PCAL Lab Y (LOCAL) EE work 22:20
H1 SUS (CDS, SUS, SYS)
jeffrey.kissel@LIGO.ORG - posted 13:32, Monday 21 August 2023 (72356)
HAMs 3 thru 7, BSC 123 SUS Computer Clock Cycle Usage Thus Far during O4 :: All User Model SUS That Are To Be On the Move Run Below 25% Usage
J. Kissel

As I continue game-planning the integration of O5 suspensions into the existing system (see progress thus far in G2301306), one of the questions is the percentage of computer usage needed on a given computer core running different numbers and amounts of suspensions. Given the sheer amount of new SUS coming in, coupled with the limited number of computers and IO chassis, we'll need to play a similar computer core re-assignment game to the one we've played before previous observing runs -- (for O4 see G2101636, and implementation aLOGs LHO:59651 for HAM56 and LHO:59669 for HAM7. For O3 see LHO:38827).

Summary of findings shown in the attached trends of the CPU_METER_MAX values for SUSB123 and SUSH34 and SUSH56 and SUSH7 since May 24 2023 16:00 UTC: 

    :: All 65 kHz IOP models, be they SUSH34, SUSB123, SUSH56, or SUSH7, are consistently using 8 usec / 15 usec ~ 53% of capacity.

    On SUSB123, 
    :: The ITM 16 kHz user models consume the most clock cycle time, running with maximums of ~26 usec / 61 usec ~42%. (I presume the abnormally large computation comes from violin mode damping)
    :: The BS 16 kHz user model only consumes 14 / 61 ~ 23% of its core.

    On SUSH34,
    :: All three HSTSs, MC2, PR2, and SR2, run at about 13 / 61 usec ~ 21% of core capacity. MC2 had "run the longest" at 14, and I presume that's from the fast longitudinal global control filtering (where PR2 and SR2 only receive slow angular drift control), but the restart after reboot from the dolphin crash on July 30 2023 (see LHO:71829) seems to have brought it back in symmetry with the other two at 13.

    On SUSH56 and SUSH7,
    :: all SUSH56 and SUSH7 16 kHz user models are using at most only 15 / 61 usec ~ 25% (it's SUSSRM running one globally controlled HSTS, and SUSSQZIN running three ZMs [two HPDS and one HDDS] and the OPO). Some, like SUSSQZOUT, which is only running damping loops on ZM4, ZM5, and ZM6 top masses (two HPDSs and one HSDS, respectively), are using as little as 8 / 61 ~ 13%.
    Their 65 kHz IOPs are running at ~50% capacity.
    :: The SUSAUXH7 model is running at 4 kHz, and consumes only 4 / 244 ~ 2% of its core on SUSH7. I'll note that no one has populated the coil driver monitor filter banks with any filters, but even if it were chock full I doubt it would come close to "breaking the bank."
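The percentages above all come from the same arithmetic: a model's clock-cycle budget is the inverse of its rate, and usage is the measured CPU_METER_MAX divided by that budget. A minimal sketch of that bookkeeping (illustrative only, not site code; the inputs are the figures quoted above):

```python
# Sketch: estimate front-end core usage from the model rate and the
# measured CPU_METER_MAX value, as done informally in the text above.

def usage_fraction(cpu_meter_max_usec: float, model_rate_hz: float) -> float:
    """Fraction of the clock-cycle budget consumed by one model."""
    budget_usec = 1e6 / model_rate_hz  # e.g. 16384 Hz -> ~61 usec per cycle
    return cpu_meter_max_usec / budget_usec

# Figures quoted above: ITM user models at 16 kHz, IOPs at 65 kHz
print(round(usage_fraction(26, 16384), 2))  # ITM user model -> 0.43
print(round(usage_fraction(8, 65536), 2))   # 65 kHz IOP -> 0.52
```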


This bodes well for re-organizing models for O5; we're not computationally limited in doing so for any suspensions on the move, even if we keep running things at 16 kHz. Further, we can consider combining even more suspensions' controls into single models if we expect the SUS's filter computation to be light [e.g. only damping like in SUSSQZOUT].
Images attached to this report
LHO General
austin.jennings@LIGO.ORG - posted 12:59, Monday 21 August 2023 (72355)
Mid Shift Report

H1 has been locked for 20 hours, all systems appear stable and seismic motion is low.

LHO VE
david.barker@LIGO.ORG - posted 11:02, Monday 21 August 2023 (72353)
Mon CP1 Fill

Mon Aug 21 10:08:31 2023 INFO: Fill completed in 8min 27secs

Travis confirmed a good fill curbside.

Images attached to this report
H1 PEM (DetChar)
robert.schofield@LIGO.ORG - posted 19:22, Friday 18 August 2023 - last comment - 11:13, Monday 11 September 2023(72331)
DARM 52 Hz peak from chilled water pump at EX: HVAC shutdown times

Genevieve, Lance, Robert

To further understand the roughly 10Mpc lost to the HVAC (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72308), we made several focused shutdowns today. These manipulations were made during observing (with times recorded) because such HVAC changes happen automatically during observing, and also because we were reducing noise rather than increasing it. The times of these manipulations are given below.

One early outcome is that the peak at 52 Hz in DARM is produced by the chilled water pump at EX (see figure). We went out and looked to see if the vibration isolation was shorted; it was not, though there are design flaws (the water pipes aren't isolated). We switched from CHWP-2 to CHWP-1 to see if the particular pump was extra noisy. CHWP-1 produced a similar peak in DARM at its own frequency. The peak in accelerometers is also similar in amplitude to the one from the water pump at EY. One possibility is that the coupling at EX is greater because of the undamped cryobaffle at EX.

 

Friday HVAC shutdowns; all times Aug. 18 UTC

15:26 CS SF1, 2, 3, 4 off

15:30:30 CS SF5 and 6 off

15:36 CS SF5 and 6 on

15:40 CS SF1, 2, 3, 4 back on

 

16:02 EY AH2 (only fan on) shut down

16:10 EY AH2 on

16:20 EY AH2 off

16:28 EY AH2 on

16:45 EY AH2 and chiller off

16:56:30 EY AH2 and chiller on

 

17:19:30 EX chiller only off, pump stays on

17:27 EX water pump CHWP-2 goes off

17:32 EX CHWP-2 back on; chiller back on right after

 

19:34:38 EX chiller off, CHWP-2 pump stays on for a while

19:45 EX chiller back on

 

20:20 EX started switch from chiller 2 to chiller 1 - slow going

21:00 EX Finally switched

21:03 EX Switched back to original, chiller 1 to chiller 2

 

Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:29, Monday 21 August 2023 (72350)DetChar, FMP, ISC, OpsInfo
Turning Robert's reference to LHO:72308 into a hyperlink for ease of navigation.

Check out LHO:72297 for a bigger-picture representation of where the 52 Hz peak sits in the broader DARM sensitivity. From the time stamps in Elenna's plots, they were taken at 15:27 UTC, just after the corner station (CS) "SFs 1, 2, 3, 4" were turned off.

SF stands for "Supply Fans," i.e. those air handler unit (AHU) fans that push the cool air into the LVEA. Recall, there are two fans per air handler unit, for the two air handler units (AHU1 and AHU2) that feed the LVEA in the corner station.

The channels that you can use to track the corner station's LVEA HVAC system are outlined more in LHO:70284, but in short, you can check the status of the supply fans via the channels
    H0:FMC-CS_LVA_AH_AIRFLOW_1   Supply Fan (SF) 1
    H0:FMC-CS_LVA_AH_AIRFLOW_2   Supply Fan (SF) 2
    H0:FMC-CS_LVA_AH_AIRFLOW_3   Supply Fan (SF) 3
    H0:FMC-CS_LVA_AH_AIRFLOW_4   Supply Fan (SF) 4
jeffrey.kissel@LIGO.ORG - 13:13, Monday 28 August 2023 (72486)DetChar, ISC, SYS
My Bad: -- Elenna's aLOG is showing the sensitivity on 2023-Aug-17, and the Robert logging of times listed above are for 2023-Aug-18. 

Elenna's demonstration is during Robert's site-wide HVAC shut down 2023-Aug-17 -- see LHO:72308 (i.e. not just the corner as I'd errantly claimed above.).
jeffrey.kissel@LIGO.ORG - 11:13, Monday 11 September 2023 (72805)FMP, ISC, OpsInfo
For these 2023-Aug-18 times mentioned in this LHO aLOG 72331, check out the subsequent analysis of impact in LHO:72778.
H1 ISC (PEM)
elenna.capote@LIGO.ORG - posted 11:38, Thursday 17 August 2023 - last comment - 13:15, Monday 28 August 2023(72297)
DARM with and without HVAC

Robert did an HVAC off test. Here is a comparison of GDS CALIB STRAIN NOLINES from earlier on in this lock and during the test. I picked both times off the range plot from a time with no glitches.

Improvement from removal of 120 Hz jitter peak, apparent reduction of 52 Hz peak, and broadband noise reduction at low frequency (scatter noise?).

I have attached a second plot showing the low frequency (1-10 Hz) spectrum of OMC DCPD SUM, showing no appreciable change in the low frequency portion of DARM from this test.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 14:57, Thursday 17 August 2023 (72302)DetChar, FMP, OpsInfo, PEM
Reminders from the summary pages as to why we got so much BNS range improvement from removing the 52 Hz and 120 Hz features shown in Elenna's ASD comparison.
Pulled from https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230817/lock/range/.

Range integrand shows ~15 and ~5 Mpc/rtHz reduction at the 52 and 120 Hz features.

BNS range time series shows brief ~15 Mpc improvement at 15:30 UTC during Robert's HVAC OFF tests.
Images attached to this comment
elenna.capote@LIGO.ORG - 11:50, Friday 18 August 2023 (72321)

Here is a spectrum of the MICH, PRCL, and SRCL error signals at the time of this test. The most visible change is the reduction of the 120 Hz jitter peak also seen in DARM. There might be some reduction in noisy peaks around 10-40 Hz in the signals, but the effect is small enough it would be useful to repeat this test to see if we can trust that improvement.

Note: the spectra have strange shapes, I think related to some whitening or calibration effect that I haven't accounted for in making these plots. I know we have properly calibrated versions of the LSC spectra somewhere, but I am not sure where. For now these serve as a relative comparison.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:46, Monday 21 August 2023 (72352)DetChar, FMP, PEM
According to Robert's follow-up / debrief aLOG (LHO:72331) and the time stamps in the bottom left corner of Elenna's DTT plots, she is using the time 2023-08-17 15:27 UTC, which corresponds to when Robert turned off all four supply fans (SF1, SF2, SF3, and SF4) in the corner station (CS) air handler units (AHU) 1 and 2 that supply the LVEA, around 2023-08-17 15:26 UTC.
jeffrey.kissel@LIGO.ORG - 13:15, Monday 28 August 2023 (72487)DetChar, PEM, SYS
My Bad: -- Elenna's aLOG is showing the sensitivity on 2023-Aug-17, and the Robert LHO:72331 logging of times listed above are for 2023-Aug-18. 

Elenna's demonstration is during Robert's site-wide HVAC shut down 2023-Aug-17 -- see LHO:72308 (i.e. not just the corner as I'd errantly claimed above.).
H1 SUS (DetChar, INJ, ISC, OpsInfo)
jeffrey.kissel@LIGO.ORG - posted 09:52, Tuesday 15 August 2023 - last comment - 14:26, Monday 23 October 2023(72221)
ETMX M0 Longitudinal Damping has been fed to TMTS M1 Unfiltered Since Sep 28 2021; Now OFF.
J. Kissel, J. Driggers

I was brainstorming why LOWNOISE_LENGTH_CONTROL would be ringing up a Transmon M1 to M2 wire violin mode (modeled to be at 104.2 Hz for a "production" TMTS; see table 3.11 of T1300876) for the first time on Aug 4 2023 (see current investigation recapped in LHO:72214), and I remembered "TMS tracking..."

In short: we found that ETMX M0 L OSEM damping error signal has been fed directly to TMSX M1 L path global control path, without filtering, since Sep 28 2021. Yuck!

On Aug 30 2021, I resolved the discrepancies between L1 and H1 end-station SUS front-end models -- see LHO:59772. Included in that work, I cleaned up the Tidal path, cleaned up the "R0 tracking" path (where QUAD L2 gets fed to QUAD R0), and installed the "TMS tracking" path as per ECR E2000186 / LLO:53224. In short, "TMS tracking" couples the ETM M0 longitudinal OSEM error signal to the TMS M1 longitudinal "input to the drivealign bank" global control path, with the intent of matching the velocity of the two top masses to reduce scattered light.

On Aug 31 2021, the model changes were installed during an upgrade to the RCG -- see LHO:59797, and we've confirmed that I turned both TMSX and TMSY paths OFF, "to be commissioned later, when we have an IFO, if we need it" at
    Tuesday -- Aug 31 2021 21:22 UTC (14:22 PDT) 

However, 28 days later,
    Tuesday -- Sept 28 2021 22:16 UTC (15:16 PDT)
the TMSX filter bank got turned back on, and must have been blindly SDF saved as such -- with no filter in place -- after an EX IO chassis upgrade -- see LHO:60058. At the time, RCG 4.2.0 still had the infamous "turn on a new filter with its input ON, output ON, and a gain of 1.0" feature, which has since been resolved with RCG 5.1.1. So ... maybe, somehow, even though the filter was already installed on Aug 31 2021, the IO chassis upgrade rebuild, reinstall, and restart of the h1sustmsx.mdl front end model re-registered the filter as new? Unclear. Regardless, this direct ETMX M0 L to TMSX M1 L path has been on, without filtering, since Sep 28 2021. Yuck!

Jenne confirms the early 2021 timeline in the first attachment here.
She also confirms, via a ~2 year trend of the H1:SUS-TMSY_M1_FF_L filter bank's SWSTAT, that no filter module has *ever* been turned on, confirming that there's *never* been filtering.

Whether this *is* the source of 102.1288 Hz problems and that that frequency is the TMSX transmon violin mode is still unclear. Brief investigations thus far include
    - Jenne briefly gathered ASDs of ETMX M0 L (H1:SUS-ETMX_M0_DAMP_L_IN_DQ) and TMSX M1 L OSEMs' error signal (H1:SUS-TMSX_M1_DAMP_L_IN1_DQ) around the time of Oli's LOWNOISE_LENGTH_CONTROL time, but found that at 100 Hz, the OSEMs are limited by their own sensor noise and don't see anything.
    - She also looked through the MASTER_OUT DAC requests (), in hopes that the requested control signal would show something more or different, but found nothing suspicious around 100 Hz there either.
    - We HAVE NOT, but could look at H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ since this FF control filter should be the only control signal going through that path. I'll post a comment with this.

Regardless, having this path on with no filter is clearly wrong, so we've turned off the input, output, and gain, and accepted the filter as OFF, OFF, and OFF in the SDF system (for TMSX, the safe.snap is the same as the observe.snap).
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:39, Tuesday 15 August 2023 (72226)
No obvious blast in the (errant) path between ETMX M0 L and TMSX M1 L, the control channel H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ, during the turn on of the LSC FF.

Attached is a screenshot highlighting one recent lock acquisition, after the addition / separation / clean up of calibration line turns ons (LHO:72205):
    - H1:GRD-ISC_LOCK_STATE_N -- the state number of the main lock acquisition guardian,
    - H1:LSC-SRCLFF1_GAIN, H1:LSC-PRCLFF_GAIN, H1:MICHFF_GAIN -- EPICs records showing the timing of when the LSC feed forward is turned on
    - The raw ETMX M0 L damping signal, H1:SUS-ETMX_M0_DAMP_L_IN1_DQ -- stored at 256 Hz
    - The same signal, mapped (errantly) as a control signal to TMSX M1 L -- also stored at 256 Hz
    - The TMSX M1 L OSEMs H1:SUS-TMSX_M1_DAMP_L_IN1_DQ, which are too limited by their own self noise to see any of this action -- but also only stored at 256 Hz.

In the middle of the TRANSITION_FROM_ETMX (state 557), DARM control is switching from ETMX to some other collection of DARM actuators. That's when you see the ETMX M0 L (and equivalent TMSX_M1_DRIVEALIGN) channels go from relatively noisy to quiet.

Then, at the very end of the state, or the start of the next state, LOW_NOISE_ETMX_ESD (state 558), DARM control returns to ETMX, and the main chain top mass, ETMX M0 gets noisy again. 

Then, several seconds later, in LOWNOISE_LENGTH_CONTROL (state 560), the LSC feed forward gets turned on. 

So, while there are control request changes to the TMS, at least according to channels stored at 256 Hz, we don't see any obvious kicks / impulses to the TMS during this transition. 
This decreases my confidence that something was kicking up a TMS violin mode, but not substantially.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:33, Wednesday 16 August 2023 (72275)DetChar, DetChar-Request
@DetChar -- 
This errant TMS tracking has been on throughout O4 until yesterday.

The last substantial nominal low noise segment before this change (with errant, bad TMS tracking) was on  
     2023-08-15       04:41:02 to 15:30:32 UTC
                      1376109680 - 1376148650
the first substantial nominal low noise segment after this change 
     2023-08-16       05:26:08 - present
                      1376198786 - 1376238848 

Apologies for the typo in the main aLOG above, but *the* channels to understand the state of the filter bank that's been turned off are 
    H1:SUS-TMSX_M1_FF_L_SWSTAT
    H1:SUS-TMSX_M1_FF_L_GAIN

if you want to use that for an automated way of determining whether the TMS tracking is on vs. off.

If the SWSTAT channel has a value of 37888 and the GAIN channel has a gain of 1.0, then the errant connection between ETMX M0 L and TMSX M1 L was ON. Those channels now have values of 32768 and 0.0, respectively, indicating that it's OFF. (Remember, for a standard filter module a SWSTAT value of 37888 is a bitword representation for "Input, Output, and Decimation switches ON." A SWSTAT value of 32768 is the same bitword representation for just "Decimation ON.")
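For an automated check, those two readbacks can be folded into a single boolean. A minimal sketch, assuming the bit assignments implied by the two SWSTAT values quoted above (input = bit 10, output = bit 12, decimation = bit 15); illustrative only, not CDS code:

```python
# Bit positions inferred from 37888 (= 32768 + 4096 + 1024, "input,
# output, decimation ON") vs 32768 ("decimation only") -- an assumption
# based on the values quoted above, not on CDS documentation.
INPUT_BIT = 1 << 10       # 1024
OUTPUT_BIT = 1 << 12      # 4096
DECIMATION_BIT = 1 << 15  # 32768

def tms_tracking_on(swstat: int, gain: float) -> bool:
    """True if both input and output switches are set and the gain is nonzero."""
    return bool(swstat & INPUT_BIT) and bool(swstat & OUTPUT_BIT) and gain != 0.0

print(tms_tracking_on(37888, 1.0))  # errant configuration -> True
print(tms_tracking_on(32768, 0.0))  # current configuration -> False
```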

Over the next few weeks, can you build up an assessment of how the IFO has performed a few weeks before vs. few weeks after?
     I'm thinking, in particular, of scattered light arches and glitch rates (also from scattered light), but I would happily entertain any other metrics you think are interesting given the context.

     The major difference being that TMSX is no longer "following" ETMX, so there's a *change* in the relative velocity between the chains. No claim yet that this is a *better* change or a worse one, but there's definitely a change. As you know, the creation of this scattered-light-impacting relative velocity between the ETM and TMS is related to the low frequency seismic input motion to the chamber, specifically in the 0.05 to 5 Hz region. *That* seismic input evolves and is non-stationary over a few-weeks time scale (wind, earthquakes, microseism, etc.), so I'm guessing that you'll need that much "after" data to make a fair comparison against the "before" data. Looking at the channels called out in the lower bit of the aLOG I'm sure will be a helpful part of the investigation.

I chose "a few weeks" simply because the IFO configuration has otherwise been pretty stable "before" (e.g., we're in the "representative normal for O4" 60 W configuration rather than the early O4 75 W configuration), but I leave it to y'all's expertise and the data to figure out a fair comparison (maybe only one week, a few days, or even just the single "before" vs. "after" is enough to see a difference).
ansel.neunzert@LIGO.ORG - 14:31, Monday 21 August 2023 (72357)

detchar-request git issue for tracking purposes.

jane.glanzer@LIGO.ORG - 09:12, Thursday 05 October 2023 (73271)DetChar
Jane, Debasmita

We took a look at the Omicron and Gravity Spy triggers before and after this tracking was turned off. The time segments chosen for this analysis were:

TMSX tracking on: 2023-07-29 19:00:00 UTC - 2023-08-15 15:30:00 UTC, ~277 hours observing time
TMSX tracking off: 2023-08-16 05:30:00 UTC - 2023-08-31 00:00:00 UTC, ~277 hours observing time

For the analysis, the Omicron parameters chosen were SNR > 7.5 and a frequency between 10 Hz and 1024 Hz. The Gravity Spy glitches were required to have a confidence > 90%. 

The first pdf contains glitch rate plots. In the first plot, we have the Omicron glitch rate comparison before and after the change. The second and third plots show the comparison of the Omicron glitch rates before and after the change as a function of SNR and frequency. The fourth plot shows the Gravity Spy classifications of the glitches. What we can see from these plots is that when the errant tracking was on, the overall glitch rate was higher (~29 per hour when on, ~15 per hour when off). It was particularly high in the 7.5-50 SNR range and the 10 Hz - 50 Hz range, which is typically where we observe scattering. The Gravity Spy plot shows that scattered light is the most common glitch type whether the tracking is on or off, but its rate drops after the tracking is turned off.
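For reference, the Omicron cuts described above amount to a simple filter over trigger records. A minimal sketch using hypothetical field names (the real Omicron/Gravity Spy table schemas may differ):

```python
# Illustrative only: apply the cuts quoted above (SNR > 7.5,
# frequency between 10 Hz and 1024 Hz) to a list of trigger records.
# The dict keys "snr" and "frequency" are assumptions for this sketch.

def select_triggers(triggers):
    """Keep triggers passing the SNR and frequency cuts used in the analysis."""
    return [t for t in triggers
            if t["snr"] > 7.5 and 10.0 <= t["frequency"] <= 1024.0]

sample = [
    {"snr": 12.0, "frequency": 35.0},    # in the scattering band: kept
    {"snr": 6.0,  "frequency": 35.0},    # below SNR threshold: dropped
    {"snr": 9.0,  "frequency": 2000.0},  # above the frequency band: dropped
]
print(len(select_triggers(sample)))  # -> 1
```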

We also looked to see if these scattering glitches were coincident in "H1:GDS-CALIB_STRAIN" and "H1:ASC-X_TR_A_NSUM_OUT_DQ", which is shown in the last pdf. From the few examples we looked at, there does seem to be some excess noise in the transmitted monitor channel when the tracking was on. If necessary, we can look into more examples of this. 
Non-image files attached to this comment
debasmita.nandi@LIGO.ORG - 14:26, Monday 23 October 2023 (73674)
Debasmita, Jane

We have plotted the ground motion trends in the following frequency bands and DOFs

1. Earthquake band (0.03 Hz--0.1 Hz) ground motion at ETMX-X, ETMX-Z and ETMX-X tilt-subtracted
2. Wind speed (0.03 Hz--0.1 Hz) at ETMX
3. Micro-seismic band (0.1 Hz--0.3 Hz) ground motion at ETMX-X

We have also calculated the mean and median of the ground motion trends for two weeks before and after the tracking was turned off. It seems that while motion in all the other bands remained almost the same, the microseismic band ground motion (0.1-0.3 Hz) increased significantly (from a mean value of 75.73 nm/s to 115.82 nm/s) after the TMS-X tracking was turned off. Even so, it produced less scattering than before, when the TMS-X tracking was on. 

The plots and the table are attached here.
Non-image files attached to this comment
H1 CAL (DetChar, DetChar-Request, ISC)
jeffrey.kissel@LIGO.ORG - posted 15:26, Tuesday 01 August 2023 - last comment - 14:49, Thursday 07 September 2023(71881)
More Oscillators to PCAL, New Oscillators to DARM Added; CAL_AWG_LINES Function Now Replaced by Newly Installed FE Oscillators
J. Kissel, D. Barker

As of today Dave helped me install the new front-end, EPICs controlled oscillators discussed in LHO:71746. Then, after crafting a few new MEDM screens (see comments below), I've turned ON some of those oscillators in order to replace the unstable function of the CAL_AWG_LINES guardian.

So, there're no "new" calibration lines (not since we turned CAL_AWG_LINES back ON last week at 2023-07-25 22:21:15 UTC -- see LHO:71706) -- but they're now driven by front-end, EPICs controlled oscillators rather than by guardian using the python bindings for awg (which was unstable across computer crashes, and other connection interruptions).

This is true as of the first observation segment today: 2023-08-01 22:02 UTC
However, due to a mishap with me misunderstanding the state of the PCALY SDF system (see LHO:71879), I accidentally overwrote the PCALXY comparison line at 284.01 Hz, and we went into observe. Thus, 
The short observation segment between 22:02 - 22:11 UTC is out of nominal configuration, because there's no PCALY line contributing to the PCALXY comparison.
 
This was rectified by the second observation segment starting on 2023-08-01 22:16 UTC.

Also, because of these changes the subtraction team should switch their witness channel for the DARM_EXC frequencies to H1:LSC-CAL_LINE_SUM_DQ.
The PCALY witness channel remains the same, H1:CAL-PCALY_EXC_SUM_DQ, as the newly used oscillators sum in to the same channel.
Below, I define which oscillator number is assigned to which frequency.

Here's the latest list of calibration lines:
Freq (Hz)   Actuator                   Purpose                      Channel that defines Freq             Changes Since Last Update (LHO:69736)     
8.825       DARM (via ETMX L1,L2,L3)   Live DARM OLGTFs             H1:LSC-DARMOSC1_OSC_FREQ              Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
8.925       PCALY                      Live Sensing Function        H1:CAL-PCALY_PCALOSC5_OSC_FREQ        Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
11.475      DARM (via ETMX L1,L2,L3)   Live DARM OLGTFs             H1:LSC-DARMOSC2_OSC_FREQ              Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
11.575      PCALY                      Live Sensing Function        H1:CAL-PCALY_PCALOSC6_OSC_FREQ        Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
15.175      DARM (via ETMX L1,L2,L3)   Live DARM OLGTFs             H1:LSC-DARMOSC3_OSC_FREQ              Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
15.275      PCALY                      Live Sensing Function        H1:CAL-PCALY_PCALOSC7_OSC_FREQ        Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
24.400      DARM (via ETMX L1,L2,L3)   Live DARM OLGTFs             H1:LSC-DARMOSC4_OSC_FREQ              Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
24.500      PCALY                      Live Sensing Function        H1:CAL-PCALY_PCALOSC8_OSC_FREQ        Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG
15.6        ETMX UIM (L1) SUS          \kappa_UIM excitation        H1:SUS-ETMY_L1_CAL_LINE_FREQ          No change
16.4        ETMX PUM (L2) SUS          \kappa_PUM excitation        H1:SUS-ETMY_L2_CAL_LINE_FREQ          No change
17.1        PCALY                      actuator kappa reference     H1:CAL-PCALY_PCALOSC1_OSC_FREQ        No change
17.6        ETMX TST (L3) SUS          \kappa_TST excitation        H1:SUS-ETMY_L3_CAL_LINE_FREQ          No change
33.43       PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC4_OSC_FREQ        No change
53.67         |                            |                        H1:CAL-PCALX_PCALOSC5_OSC_FREQ        No change
77.73         |                            |                        H1:CAL-PCALX_PCALOSC6_OSC_FREQ        No change
102.13        |                            |                        H1:CAL-PCALX_PCALOSC7_OSC_FREQ        No change
283.91        V                            V                        H1:CAL-PCALX_PCALOSC8_OSC_FREQ        No change
284.01      PCALY                      PCALXY comparison            H1:CAL-PCALY_PCALOSC4_OSC_FREQ        Off briefly between 2023-08-01 22:02 - 22:11 UTC, back on as of 22:16 UTC
410.3       PCALY                      f_cc and kappa_C             H1:CAL-PCALY_PCALOSC2_OSC_FREQ        No Change
1083.7      PCALY                      f_cc and kappa_C monitor     H1:CAL-PCALY_PCALOSC3_OSC_FREQ        No Change
n*500+1.3   PCALX                      Systematic error lines       H1:CAL-PCALX_PCALOSC1_OSC_FREQ        No Change (n=[2,3,4,5,6,7,8])
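The pairings and the comb in the table above can be sanity-checked with a short script. This is an illustrative sketch only: the frequencies are copied from the table, and nothing is read live from the EPICS channels.

```python
# Sanity-check sketch of the calibration-line schedule above.
# All numbers are copied from the table; no EPICS access here.

# DARM OLGTF / Live Sensing Function pairs: (DARM line, PCALY line)
darm_pcal_pairs = [
    (8.825, 8.925),    # DARMOSC1 / PCALOSC5
    (11.475, 11.575),  # DARMOSC2 / PCALOSC6
    (15.175, 15.275),  # DARMOSC3 / PCALOSC7
    (24.400, 24.500),  # DARMOSC4 / PCALOSC8
]

# Each PCALY sensing line sits 0.1 Hz above its DARM partner
for darm_f, pcal_f in darm_pcal_pairs:
    assert abs((pcal_f - darm_f) - 0.1) < 1e-9

# PCALX systematic-error comb: n*500 + 1.3 Hz for n = 2..8
comb = [n * 500 + 1.3 for n in range(2, 9)]
print(comb)  # seven lines, 1001.3 Hz up to 4001.3 Hz
```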
Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:31, Tuesday 01 August 2023 (71884)GRD
As part of deprecating CAL_AWG_LINES, I've updated the ISC_LOCK guardian to use the new main switches for the DARM_EXC lines for the transitions between NOMINAL_LOW_NOISE and NLN_CAL_MEAS. 

That main switch channel is H1:LSC-DARMOSC_SUM_ON, which enables excitations to flow through to the DARM error point when set to 1.0 (and blocks them when set to 0.0).

I've committed the new version of ISC_LOCK to the userapps repo, rev 26039.
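The switch logic described here is simple enough to sketch. Below, `ezca` is stood in for by a plain dict and the function names are hypothetical; the real Guardian code lives in ISC_LOCK in the userapps repo (rev 26039) and may differ.

```python
# Hedged sketch of how a Guardian-style transition might flip the
# DARM oscillator main switch described above. 'ezca' here is a plain
# dict standing in for Guardian's EPICS interface; the actual ISC_LOCK
# implementation may differ.

DARM_OSC_SWITCH = 'LSC-DARMOSC_SUM_ON'

def enable_darm_exc_lines(ezca):
    """Let excitations flow through to the DARM error point (switch -> 1.0)."""
    ezca[DARM_OSC_SWITCH] = 1.0

def disable_darm_exc_lines(ezca):
    """Block excitations at the DARM error point (switch -> 0.0)."""
    ezca[DARM_OSC_SWITCH] = 0.0

ezca = {}
enable_darm_exc_lines(ezca)   # e.g. on entering NLN_CAL_MEAS
print(ezca[DARM_OSC_SWITCH])  # 1.0
disable_darm_exc_lines(ezca)  # e.g. on returning to NOMINAL_LOW_NOISE
```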
jeffrey.kissel@LIGO.ORG - 15:36, Tuesday 01 August 2023 (71885)
Here's the updated 
/opt/rtcds/userapps/release/lsc/common/medm/
    LSC_OVERVIEW.adl
    LSC_DARM_EXC_OSC_OVERVIEW.adl
    LSC_CUST_DARMOSC_SUM_MTRX.adl

The new DARM oscillators screen (LSC_DARM_EXC_OSC_OVERVIEW.adl) is linked in the top-middle of the LSC_OVERVIEW.adl. The only sub screen on the LSC_DARM_EXC_OSC_OVERVIEW.adl is the summation matrix (LSC_CUST_DARMOSC_SUM_MTRX.adl).

I have not yet gotten to adding all the new PCAL oscillators to their MEDM screens, but I'll do so in the fullness of time.
Images attached to this comment
ansel.neunzert@LIGO.ORG - 15:40, Monday 21 August 2023 (72358)

detchar-request git issue for tracking purposes.

jeffrey.kissel@LIGO.ORG - 08:35, Tuesday 29 August 2023 (72503)
I found a bug in the
    /opt/rtcds/userapps/release/lsc/common/medm/
        LSC_DARM_EXC_OSC_OVERVIEW.adl
where DARMOSC1's TRAMP field was errantly displayed as all 10 oscillators' TRAMPs; a residual from the copy pasta I made during the screen generation.

Fixed it. Now committed to the above location as of rev 26170.
jeffrey.kissel@LIGO.ORG - 16:51, Tuesday 29 August 2023 (72532)
Finally got around to updating the PCAL screens. Check out 
    /opt/rtcds/userapps/release/cal/common/medm/
        PCAL_END_EXC.adl
        CAL_PCAL_OSC_SUM_MATRIX.adl
as of userapps repo rev 26179.

See attached screenshots.
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 14:49, Thursday 07 September 2023 (72742)
This design is the result of ECR E2300227 and IIET:28681.
H1 TCS (OpsInfo)
camilla.compton@LIGO.ORG - posted 09:05, Monday 09 January 2023 - last comment - 16:21, Monday 21 August 2023(66700)
CO2 turned off by ISC_LOCK in DOWN

I un-commented the lines in ISC_LOCK so that both CO2_PWR guardians will again go to NO_OUTPUT in the DOWN, initial alignment, and manual initial alignment states. This undoes the change we made in alog 66535. The CO2s were on all weekend; I turned them off around 8:15am. We can re-think this change later in the week.

Ryan S locked the IFO with no issues on Friday while the CO2s remained on, but we had only been unlocked for 2 hours. Comparing Ryan's lock to alog 66520, the circulating arm powers still increase by 40 kW in the first 2 hours of lock with the CO2s left on while locking. I predicted this increase would shrink, but it didn't.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 16:21, Monday 21 August 2023 (72360)

In the previous plots I only looked at the 2-hour transient rather than the longer 5-hour transient. Attached are plots comparing the lock with the CO2s left on to the preceding lock, where the CO2s had been off for 3.5 hours before relock. The IFO power looks more stable when the CO2s are turned off between locks, which is the opposite of what's expected in 66523.

As the IFO power was fluctuating between locks, it would be best to get more than one relock with the CO2s on before drawing conclusions. 

Images attached to this comment
H1 AOS
betsy.weaver@LIGO.ORG - posted 17:12, Wednesday 20 September 2017 - last comment - 16:29, Monday 21 August 2023(38727)
Today's in-chamber vent report

Vent progress today:

HAM2 (Hugh, Kissel, Jim, Rick, Sheila)

HAM2 ISI dampers are installed, minus the 3 TMDs, which are currently being tuned by Jeff K and will be reinstalled ~in the morning.  Thanks to Sheila The Bolt Breaker for crumpling and contorting her skeleton to reach the hard-to-reach ISI guts in that one corner.

Rick assisted with unplugging and moving the ISS array out of the way for the damper install.  He also hunted more appropriate cable relief hardware which he'd like to install in the next few days when the ISS goes back into place.

BSC3 (Betsy, Travis)

ITMX was thoroughly inspected to check for any mechanical issues* which may have caused the IFO to need it maxed in pitch after the May cleaning vent.  All looked fine both with zero pitch offset and with maxed pitch offset.  We have zeroed the biases on the main and reaction chains and will reset both chains' mechanical pointing to the optical lever when we reinstall the new ITM optic. 

While looking at it, we also mapped the optical lever beams heading out of the viewport down the manifold (Travis hoofed it down there with a head lamp, target and a pen while I steered the suspension around with a CDS laptop).  I'll post results of that separately - at first glance the CP-HR beam and ITM-AR beam look much closer together than we found months ago on the ITMy, good.

Started work to remove the sleeve, vibration absorber blocks, wedges, structure cross braces, and flooring panels (to get the sleeve off).  Face shields were added to the exposed QUAD optic surfaces.  (The suspension is still suspended so violin measurements can be done in the next day or so after we get the LSAT on the structure for measurement equipment support.)

HAM3

Bubba, Mark, and Tyler removed the doors from HAM3 today.

Jim locked the HAM3 ISI.

 

* All 4 main chain pitch adjuster rods were found tightly locked in place.  All chain cabling was as we left it - nominal, not touching any place it shouldn't.  The ACB assembly was still nominal at only ~1mm away from the QUAD structure, but not touching. All magnet flags looked reasonably centered at all stages.  All EQ stops looked well positioned and not touching.  Nothing was found that could have fallen from higher up and landed on a stage to cause a pitch change.  (Recall the suspension had a clean bill of health via TFs a few times over the last month, so we didn't expect to find anything really.)

 

Comments related to this report
jeffrey.kissel@LIGO.ORG - 18:15, Thursday 21 September 2017 (38739)DetChar, SEI
J. Kissel

I second the many thanks to Sheila for getting in all of the damping mechanisms for the "Corner 1" blade and V2 GS13 can. I attach some great pictures of her crumpling skills. 

Also attached are pictures of the blade tip and flexure dampers post-install. While the flexure damper arrangement isn't amazing, I'm quite confident viton is forgiving enough that this will still do the job. I'm also confident that although it looks messy, the flexure damper still won't move around.

Note: because it was already a difficult ask just to install these components, I made the executive decision *not* to B&K hammer the blades as we'd done for Corners 2 and 3 (in LHO aLOG 38716). The cabling would likely have been a nightmare to keep from touching the innards of the chamber, let alone trying to get a good strike with the hammer while not interfering with the table / measurement. We've seen enough post-blade-tip-damper-install blade responses to know that the fundamental blade mode moves from 153 Hz to 139 Hz, and the Q of the TMDs is low enough that it'll cover any spread we see. (However, this is an exception; we will continue to hammer every other HAM blade.)

And for the record, at the close of business this day, all of ISIHAM2's dampers (save the to-be-tuned TMDs) on all corners have been installed:
    - Blade-Tip Dampers
    - Flexure Dampers
    - Horz. and Vert. GS13 Can Dampers

Also, also, I grabbed a publicity photo of the back of MC3 while I was there.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 16:29, Monday 21 August 2023 (72361)EPO
Retroactively tagging EPO for the great pictures, now that we have the tag!