Reports until 08:56, Wednesday 09 August 2023
H1 General
anthony.sanchez@LIGO.ORG - posted 08:56, Wednesday 09 August 2023 (72093)
Early Departure from OBSERVING for Commissioning

Since L1 is not OBSERVING due to logging activity, H1 has intentionally dropped out of OBSERVING early for PEM injections and other commissioning.
 

H1 General
anthony.sanchez@LIGO.ORG - posted 08:03, Wednesday 09 August 2023 (72091)
Wednesday Ops Day Shift Start

TITLE: 08/09 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s 
QUICK SUMMARY:

H1 IFO is currently locked at NOMINAL_LOW_NOISE and OBSERVING for 4 Hours at 140.6 Mpc.
Livingston is down due to logging noise.
Everything here at H1 looks great.

H1 General
oli.patane@LIGO.ORG - posted 00:11, Wednesday 09 August 2023 (72090)
Ops EVE Shift End

TITLE: 08/09 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

Pretty quiet shift overall, besides SQZ_FC causing us to leave Observing for a bit (72083), the group of saturation messages (72087), and of course the lockloss from 25 minutes ago (72089). Relocking is going smoothly.

 

23:00 UTC Detector in Observing and Locked for 1hr 5min

23:42:12 Out of Observing due to SQZ FC_IR unlocking
00:05:12 Back into Observing

00:43:52 Into Commissioning to fix SQZ_FC value
00:44:47 Back to Observing

06:46:49 Lockloss


LOG:                                                                                                                                                                                 

Start Time System Name Location Laser_Haz Task Time End
18:31 ITMY Keita CTRL Rm N ITMY Single Bounce measurement 19:06
18:35 SQZ Sheila LVEA N LASER HAZARD 19:57
19:26 SEI EARTHQUAKE Samoa N 6.2 MAG Followed by a 5.9 MAG Hit at 18:51 19:56
19:34 PEM Lance LVEA Yes PEM Equipment Setup 20:34
20:02 CAL Jeff CER mezz N Pictures 20:18
20:18 OPS Tony LVEA N LVEA sweep 20:43
21:03 SQZ Sheila LVEA SQZT YES LASER HAZ - Adjusting SQZ Setup 22:13
21:07 LVEA LASER HAZARD LVEA YES LASER HAZARD - SQZ Table 22:13
21:34 TCS Tony MER N TCS chillers fill 21:43
23:17 PCAL Tony PCal Lab y(local) Putting stuff away 23:25
00:08 SQZ Naoki CR n SQZ tests from FC unlock 00:08
H1 General
oli.patane@LIGO.ORG - posted 23:49, Tuesday 08 August 2023 (72089)
Lockloss

Lockloss at 06:46 UTC from unknown causes. Nothing seismic going on at least, and nothing jumps out at me from a quick glance over the lockloss select ndscopes.

H1 CAL
louis.dartez@LIGO.ORG - posted 23:08, Tuesday 08 August 2023 - last comment - 23:22, Tuesday 08 August 2023(72075)
calibration improved after inclusion of ETMX L3 3.2kHz pole filter module in CAL-CS
I used Austin's PCALY to DeltaL external broadband measurement (LHO:72047), along with TJ's from late July (LHO:71760), to check the status of the LHO calibration before and after the return of the 3.2 kHz "HFPole" ETMX-L3 filter module (LHO:72043).



The attached DTT screenshot shows PCALY2DELTAL_EXTERNAL (left column, uncorrected for TDCFs) and PCALY2GDS-CALIB_STRAIN (right column). The red traces show the two transfer functions after the filter change in LHO:72043 overlaid with the same taken before the change (grey traces). Looking at the right column, the PCALY_RX_PD_OUT / GDS-CALIB_STRAIN systematic error shows a significant improvement now. 

The current calibration status at GDS-CALIB_STRAIN is better up to 450 Hz, so we should certainly keep the new ETMX L3 filter changes. The error in the 60-100 Hz region has improved from 6% to ~1.2%.

However, there is still more work to be done: we have nearly 4% error at low frequencies and no good explanation for the 1% deviation above 50 Hz.
Images attached to this report
Comments related to this report
ling.sun@LIGO.ORG - 23:22, Tuesday 08 August 2023 (72088)

The low freq ~4% error seems to agree with what's modeled in the uncertainty estimates (e.g., see attached) -- that should be a known error.

Images attached to this comment
H1 General (CDS, SUS)
oli.patane@LIGO.ORG - posted 20:25, Tuesday 08 August 2023 (72087)
Ops EVE MidShift Report

A few hours ago we were pushed out of Observing due to SQZ FC_IR unlocking (72083), but once that was solved we were able to go back into Observing. Later we did purposefully go into Commissioning for less than a minute for a SQZ related fix.

At 2:06:35 UTC we got notifications on verbals for PR2, SR2, and MC2 all at once. Similar notifications showed up during the CDS issues from a couple of days ago (72019).

H1 General (SQZ)
oli.patane@LIGO.ORG - posted 17:07, Tuesday 08 August 2023 - last comment - 10:36, Wednesday 09 August 2023(72083)
Out of Observing due to SQZ FC-IR Unlocked

23:42:12 UTC Pushed out of Observing by SQZ_MANAGER

We got multiple SDF diffs appearing (attachments 1 and 2), but they kept changing, so the ones in the attached screenshots are the earliest we could capture. Attachment 3 shows the SQZ_MANAGER log from that time and attachment 4 shows the SQZ_FC log.

Naoki and Vicky were able to relock the squeezer by manually moving the beam spot, but are running some tests before we go back into Observing.

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 17:16, Tuesday 08 August 2023 (72084)OpsInfo, SQZ

Naoki, Vicky

Manually re-aligning the FC2 P/Y sliders brought the squeezer back. Steps taken just now, which seemed to work (tagging OpsInfo). Screenshot of some relevant MEDMs attached.

  • Paused SQZ_FC Guardian manually
  • sitemap > SQZ > SQZ Overview > FC SERVO (medms)
    --> manually locked FC VCO GREEN by engaging (closing) the servo (red switch) on the common mode board. Can also increase the common gain slider from e.g. 5 --> 15/20 if the alignment has trouble holding. (GRD will reset everything, so adjust gains however needed to hold a lock.)
    --> could see a weak, flashing TEM00 spot on the FC GREEN TRANS camera, so adjusted the FC2 P/Y sliders to maximize the FC GREEN TRANS spot on the camera. This also maximized the FC GREEN transmission flashes, e.g. on H1:SQZ-FC_TRANS_C_LF_OUTPUT (see the monitoring sketch below).
  • Trends screenshot shows trends when squeezer dropped. FC2 was aligned, and you can see flashes on the bright green FC transmission PD (H1:SQZ-FC_TRANS_C_LF_OUTPUT) increase. For alignment, you can use green transmission flashes on the PD, or watch the spot on the camera, which got brighter to what you see in the screenshot (note also the location of the beam spot on the camera; this is where the cavity should be). 
  • Un-paused the SQZ_FC guardian. Then from SQZ_MANAGER, re-selected FREQ_DEP_SQZ. The squeezer came back fine after this.
  • Accepted FC2 SUS SDF diffs after squeezer was back to FDS.

Will investigate why the alignment had to be manually walked (we haven't done this in 1-2 months; ASC typically takes care of FC alignment fine). Likely a similar issue to the one Corey had last night (LHO:72050); if this happens again, we can try aligning FC2 in this way.
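
If this comes up again, here is a minimal monitoring sketch (not a site tool; it assumes pyepics and EPICS gateway access, with the channel name quoted above) for watching the FC green transmission flashes numerically while walking the FC2 sliders:

import time
import epics  # pyepics

CHAN = "H1:SQZ-FC_TRANS_C_LF_OUTPUT"

def watch_fc_flashes(duration=30.0, interval=0.1):
    # Print a new running maximum of the FC green trans PD whenever it grows;
    # brighter flashes while moving FC2 P/Y generally mean better alignment.
    best = None
    t_end = time.time() + duration
    while time.time() < t_end:
        val = epics.caget(CHAN)
        if val is not None and (best is None or val > best):
            best = val
            print("new max flash: %.3f" % best)
        time.sleep(interval)
    return best

watch_fc_flashes()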

Images attached to this comment
oli.patane@LIGO.ORG - 17:08, Tuesday 08 August 2023 (72085)

00:05:12 Returned to Observing

naoki.aritomi@LIGO.ORG - 10:36, Wednesday 09 August 2023 (72086)

We found that SQZ ASC has been OFF for the past 6 days (first attached figure). We took SQZ data 6 days ago in alog 71902. During this measurement we turned off SQZ ASC for the alignment of anti-squeezing. We changed the SQZ ASC flag in sqzparams to False and set it back to True after the measurement, but we may have forgotten to reload the SQZ_MANAGER guardian. We asked Oli to reload the SQZ_MANAGER guardian when a lockloss happens. We went to commissioning, manually turned on the SQZ ASC switch, and went back to observing within a minute.

We also found that the SQZ ASC switch (H1:SQZ-ASC_WFS_SWITCH) and FC ASC switch (H1:SQZ-FC_ASC_SWITCH) were not monitored. We monitored them as shown in the second and third attached figures.

Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 16:06, Tuesday 08 August 2023 (72082)
Ops EVE Shift Start

TITLE: 08/08 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 15mph Gusts, 11mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY:

Taking over for Tony. Observing and Locked for 1hr 12min.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:04, Tuesday 08 August 2023 (72080)
Tuesday Ops Day Shift END

TITLE: 08/08 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Tony
INCOMING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 9mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.08 μm/s 
QUICK SUMMARY:

Tuesday Aug 8th Maintenance Day Activities

Apollo & pest control contractors were on site.

Craning West bay -- Done
HAM7 & 5 work with Fil & Rahul -- Done
PCAL Beam Spot move -- completed by 18:00 UTC
CAL looking through LVEA for cables -- Done by 18:00
Fil CP2 interlock -- Done by 18:13 UTC
Keita checking OM2 heater voltage -- Done 18:25
Travis VAC Walkabout MX and EX Turbos -- completed by 18:37 UTC
Liquid Nitrogen Fill at EX, driving away at 18:42 UTC

DAQ Restart started ~18:30, finished at 18:50 UTC
Keita ITMY Single Bounce measurement -- interrupted by Earthquake

LVEA LASER HAZARD at 18:52 UTC
Large EQ 18:51 UTC
LVEA IS LASER SAFE 19:58 UTC

Locking Started 20:57 UTC
LVEA is LASER HAZARD 21:01 UTC
Stopped at ADS_TO_CAMERAS at 21:39 UTC to give Sheila time to finish her SQZ table adjustments.
H1 in NOMINAL_LOW_NOISE at 21:53 UTC
LVEA is LASER SAFE 22:00 UTC
H1 reached Observing at 22:07 UTC

Activity LOG:                                                                                                                                                 

Start Time System Name Location Laser_Haz Task Time End
07:52 ops corey remote n H1 remote 12:25
14:22 als corey remote n ALS_X in FAULT state 15:23
15:02 FAC Karen EY N Technical cleaning 16:26
15:07 FAC Christina, Nicole, Ernest EX, EY, FCES YES (EX) Inventory 18:02
15:07 FAC Cindi HAM SHAQ N Technical cleaning 16:22
15:14 CAL Jeff, Luis S. LVEA, MER, CER N Checking chassis and cables 18:02
15:17 CAL Tony, Julianna, Rick EX YES PCal end station measurement 17:47
15:22 VAC Jordan, Janos EY N FTIR samples 17:22
15:34 EE Fil LVEA - HAM7 N Cabling on chamber 17:17
15:52 VAC Travis MX, EX N Turbopump maintenance 18:37
16:25 PEM Robert LVEA N Set up PEM injection equipment 16:58
16:31 FAC Cindi, Karen LVEA N Technical cleaning 18:31
16:51 VAC Gerardo LVEA N Checking CP1 17:17
16:57 ISC Keita LVEA N Check OM2 heater voltage 18:26
17:18 VAC Gerardo, Fil LVEA - LX N CP2 interlock work 18:13
17:22 VAC Jordan, Janos EX Mech Rm N FTIR samples 17:51
17:30 EPO Louis & friends LVEA N Tour 18:15
17:38 PEM Robert LVEA, EX N LVEA IS LASER HAZARD 18:48
17:51 VAC Jordan, Janos FCES N Dropping off parts 18:07
18:09 VAC Randy FCES N moving Leak detectors 18:44
18:17 VAC Janos LVEA N Moving parts 18:48
18:18 PEM Robert EX& EY N Ground Injections setup 18:46
18:25 VAC Gerardo EY N Power Cycling Alarm channels for EY VAC  18:58
18:31 ITMY Keita CTRL Rm N ITMY Single Bounce measurement 19:06
18:35 SQZ Sheila LVEA N LASER HAZARD 19:57
18:44 PEM Lance LVEA N PEM Parts cleanup 18:45
18:46 CDS Dave Remote N Daq Restarts finish 18:47
19:26 SEI EARTHQUAKE Samoa N 6.2 MAG Followed by a 5.9 MAG Hit at 18:51 19:56
19:34 PEM Lance LVEA Yes PEM Equipment Setup 20:34
20:02 CAL Jeff CER mezz N Pictures 20:18
20:18 OPS Tony LVEA N LVEA sweep 20:43
21:03 SQZ Sheila LVEA SQZT YES LASER HAZ - Adjusting SQZ Setup 22:13
21:07 LVEA LASER HAZARD LVEA YES LASER HAZARD - SQZ Table 22:13
21:34 TCS Tony MER N TCS chillers fill 21:43


 

H1 SQZ
naoki.aritomi@LIGO.ORG - posted 16:04, Tuesday 08 August 2023 - last comment - 12:50, Tuesday 26 September 2023(72081)
Pump AOM and fiber alignment

Sheila, Naoki, Vicky

Last week we aligned the pump AOM and fiber in alog 71875, but the alignment procedure was not correct. Today we realigned them with the correct procedure.

Procedure for pump AOM and fiber alignment:

1) Set the ISS drivepoint to 0 V so that only the 0th order beam is present, and check with a power meter that the AOM throughput is at least 90%. The measured AOM throughput was 33.8 mW / 36 mW = 94%.

2) Set the ISS drivepoint to 5 V and align the AOM to maximize the 1st order beam. After the AOM alignment, the 1st order beam was 11 mW and the 0th order beam was 23 mW. We measured the AOM throughput again, including both the 0th and 1st order beams: 36 mW / 38 mW = 95%.

3) Set the ISS drivepoint to 5 V and align the fiber by maximizing H1:SQZ-OPO_REFL_DC_POWER.

4) Adjust the OPO temperature by maximizing H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT.

After this alignment, the SHG output is 42.7 mW, the pump going to the fiber is 20.5 mW, and the rejected power is 2.7 mW. The ISS can be locked with an OPO trans of 80 while the ISS control monitor is 4.2, which is in the stable region.
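
For reference, the throughput arithmetic in steps 1) and 2) is just output power over input power, checked against the ~90% criterion -- a trivial sketch using the numbers quoted above:

def aom_throughput_ok(p_out_mw, p_in_mw, threshold=0.90):
    # return the throughput ratio and whether it meets the criterion
    ratio = p_out_mw / p_in_mw
    return ratio, ratio >= threshold

print(aom_throughput_ok(33.8, 36.0))  # step 1, 0th order only: ~0.94, passes
print(aom_throughput_ok(36.0, 38.0))  # step 2, 0th + 1st order: ~0.95, passes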

Comments related to this report
naoki.aritomi@LIGO.ORG - 12:50, Tuesday 26 September 2023 (73114)

Regarding 2), we maximized the +1st order beam, which is on the left side of the 0th order beam when looking from the laser side of SQZT0.

H1 DetChar (CAL, DetChar)
derek.davis@LIGO.ORG - posted 15:01, Tuesday 08 August 2023 - last comment - 21:49, Tuesday 29 August 2023(72064)
Excess noise near 102.13 Hz calibration line

Benoit, Ansel, Derek

Benoit noticed that for recent locks, the 102.13 Hz calibration line is much louder than typical for the first few hours of the lock. An example of this behavior is shown in the attached spectrogram of H1 strain data on August 5 - this is the first day this behavior appeared. Ansel noted that this feature includes a comb-like structure around the line that is only present in the H1:GDS-CALIB_STRAIN_NOLINES channel and not H1:GDS-CALIB_STRAIN (see spectra for CALIB_STRAIN and CALIB_STRAIN_NOLINES on Aug 5). This issue is also visible in the PCAL trends for the 102.13 Hz line.

We are not sure if the excess noise near 102.13 Hz is from the calibration line itself or another noise source that is near the line. However, the behavior has been present for every lock since 12:30 UTC on August 5 2023. 

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:21, Wednesday 09 August 2023 (72094)CAL
FYI, 
$ gpstime Aug 05 2023 12:30 UTC
    PDT: 2023-08-05 05:30:00.000000 PDT
    UTC: 2023-08-05 12:30:00.000000 UTC
    GPS: 1375273818.000000
so... this behavior seems to have started at 5:30 AM local time on a Saturday. Therefore it is *very* unlikely that the start of this issue is intentional or driven by a human change.
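
(For anyone scripting this kind of check: the same UTC-to-GPS conversion can be done in Python with gwpy's time utilities -- a minimal sketch, assuming gwpy is installed; naive timestamps are interpreted as UTC.)

from gwpy.time import to_gps, from_gps

gps = to_gps("2023-08-05 12:30:00")   # interpreted as UTC
print(int(gps))        # 1375273818
print(from_gps(gps))   # 2023-08-05 12:30:00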

The investigation continues....
making sure to tag CAL.
jeffrey.kissel@LIGO.ORG - 09:42, Wednesday 09 August 2023 (72095)
Other facts and recent events:
- Attached are 2 screenshots that show the actual *digital* excitation is not changing with time in any way.
    :: 2023-08-08_H1PCALEX_OSC7_102p13Hz_Line_3mo_trend.png shows the EPICS channel version of the output of the specific oscillator -- PCALX's OSC7 -- which drives the 102.13 Hz line. The minute trend shows the max, min, and mean of the output, and there's no change in amplitude.
    :: 2023-08-08_H1PCALEX_EXC_SUM_3mo_trend.png shows a trend of the total excitation sum from PCALX. This also shows *no* change in amplitude over time.

Both trends show the Aug 02 2023 amplitude-change kerfuffle I caused, which Corey found and a bit later rectified -- see LHO:71894 and subsequent comments -- but that was done, over with, and solved, definitely by Aug 03 2023 UTC, and it is unrelated to the start of this problem.

It's also well after I installed new oscillators and rebooted the PCALX, PCALY, and OMC models on Aug 01 2023 (see LHO:71881).
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:44, Wednesday 09 August 2023 (72100)
The front-end version of the calibration's systematic error at 102.13 Hz also shows the long, time-dependent issue -- this will allow us to trend the issue against other channels.

Folks in the calibration group have found that the online monitoring system for the
    -  overall DARM response function systematic error
    - (absolute reference) / (Calibrated Data Product) [m/m] 
    - ( \eta_R ) ^ (-1) 
    - (C / 1+G)_pcal / (C / 1+G)_strain
    - CAL-DELTAL_REF_PCAL_DQ / GDS-CALIB_STRAIN
(all different ways of saying the same thing; see T1900169) in calibration at each PCAL calibration line frequency -- the "grafana" pages -- are showing *huge* amounts of systematic error during these times when the amplitude of the line is super loud.

Though this metric is super useful because it's dreadfully obvious that things are going wrong -- this metric is not in any normal frame structure, so you can't compare it against other channels to find out what's causing the systematic error.

However -- remember -- we commissioned a front-end version of this monitoring during ER15 -- see LHO:69285.

That means the channels 
    H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ     << the frequency of the monitor
    H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM        << the magnitude of the systematic error
    H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_PHA_DEG        << the phase of the systematic error

tell you (what's supposed to be***) equivalent information.

*** One might say that "what's supposed to be equivalent" really means "roughly equivalent" for the following reasons:
    (1) because we're human, the one system is displaying the systematic error \eta_R, and the other is displaying the inverse ( \eta_R ) ^ (-1) 
    (2) Because this is early-days in the front-end system, it uses the "less complete" calibrated channel CAL-DELTAL_EXTERNAL_DQ rather than the "fully correct" channel GDS-CALIB_STRAIN

But because the problem is so dreadfully obvious in these metrics, even though they're only *roughly* equivalent, you can see the same thing.
In the attached screenshot, I show both metrics for the most recent observation stretch, between 10:15 and 14:00 UTC on 2023-Aug-09.

Let's use this front-end metric to narrow down the problem via trending.
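
As a starting point for that trending, here's a minimal sketch that pulls the LINE8 front-end monitor channels (assuming gwpy and NDS access; the times are the observing stretch quoted above):

from gwpy.timeseries import TimeSeriesDict

channels = [
    "H1:CAL-CS_TDEP_PCAL_LINE8_COMPARISON_OSC_FREQ",
    "H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM",
    "H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_PHA_DEG",
]
data = TimeSeriesDict.get(channels, "2023-08-09 10:15", "2023-08-09 14:00")

# Plot the magnitude of the front-end systematic error estimate.
# Remember point (1) above: one system reports eta_R and the other its
# inverse, so an inversion may be needed before comparing to the grafana pages.
plot = data["H1:CAL-CS_TDEP_PCAL_LINE8_SYSERROR_MAG_MPM"].plot()
plot.show()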
Images attached to this comment
jeffrey.kissel@LIGO.ORG - 10:58, Wednesday 09 August 2023 (72102)CAL, DetChar
There appears to be no change in the PCALX analog excitation monitors either.

Attached is a trend of some key channels in the optical follower servo -- the analog feedback system that serves as intensity stabilization and excitation power linearization for the PCAL's laser light that gets transmitted to the test mass -- the actuator of which is an acousto-optic modulator (an AOM). There seem to be no major differences in the max, min, and mean of these signals before vs. after these problems started on Aug 05 2023.

H1:CAL-PCALX_OFS_PD_OUT_DQ
H1:CAL-PCALX_OFS_AOM_DRIVE_MON_OUT_DQ
Images attached to this comment
madeline.wade@LIGO.ORG - 11:34, Wednesday 09 August 2023 (72104)

I believe this is caused by the presence of another line very close to the 102.13 Hz pcal line. This second line is present at the start of a lock stretch but seems to go away as the lock stretch continues. I have attached a plot showing a zoom-in on an ASD around 102.1-102.2 Hz right at the start of a lock stretch (orange), where the second peak is evident, and well into a lock stretch (blue), where the PCAL line is still present but the second peak just below it in frequency is gone. This ASD is computed using an hour of data for each curve, so we can get the resolution needed to separate these two peaks.

I don't know the origin of this second line. However, a quick fix to the issue could be moving the PCAL line over by about a Hz. The second attached plot shows that the spectrum looks pretty clean from 101-102 Hz, so somewhere in there would probably be okay as a new location for the PCAL line.
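
For anyone reproducing this, a minimal sketch of the high-resolution ASD (assuming gwpy and NDS access; the GPS start times below are placeholders, not the exact stretches used for the attached plot):

from gwpy.timeseries import TimeSeries
import matplotlib.pyplot as plt

def hour_asd(start):
    # One hour of strain, single 3600 s FFT -> ~0.28 mHz resolution,
    # enough to separate peaks at 102.12833 Hz and 102.13000 Hz.
    data = TimeSeries.get("H1:GDS-CALIB_STRAIN", start, start + 3600)
    return data.asd(fftlength=3600, overlap=0)

early = hour_asd(1375300000)   # placeholder: shortly after lock acquisition
late = hour_asd(1375320000)    # placeholder: well into the same lock

for asd, label in ((early, "start of lock"), (late, "later in lock")):
    plt.plot(asd.frequencies.value, asd.value, label=label)
plt.xlim(102.1, 102.2)
plt.yscale("log")
plt.legend()
plt.show()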

Images attached to this comment
ansel.neunzert@LIGO.ORG - 11:47, Wednesday 09 August 2023 (72105)

Since it looks like the additional noise is at 102.12833 Hz, I did a quick check in Fscan data from Aug 5 for channels where there is high coherence with DELTAL_EXTERNAL at 102.12833 Hz but *not* at 102.13000 Hz (a sketch of this kind of coherence check follows the list below). This narrows it down to just a few channels:

  • H1:PEM-EX_MAG_EBAY_SEIRACK_{Z,Y}_DQ
  • H1:PEM-EX_ADC_0_09_OUT_DQ
  • H1:ASC-OMC_A_YAW_OUT_DQ. Note that other ASC-OMC channels (Fscan tracks A,B and PIT,YAW) see high coherence at both frequencies.

(lines git issue opened as we work on this.)
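
A rough sketch of this kind of narrowband coherence check (assuming gwpy and NDS access; the channel pairing, GPS times, and FFT length here are illustrative, not the actual Fscan configuration):

from gwpy.timeseries import TimeSeries

start, end = 1375273818, 1375273818 + 3600   # placeholder hour on Aug 5

darm = TimeSeries.get("H1:CAL-DELTAL_EXTERNAL_DQ", start, end)
aux = TimeSeries.get("H1:ASC-OMC_A_YAW_OUT_DQ", start, end)

# match sample rates before computing coherence
darm = darm.resample(aux.sample_rate.value)

# 600 s FFTs give ~1.7 mHz resolution, so 102.12833 Hz and 102.13000 Hz
# fall in different frequency bins
coh = darm.coherence(aux, fftlength=600, overlap=300)
for f in (102.12833, 102.13000):
    idx = int(round(f / coh.df.value))
    print(f, coh.value[idx])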

jeffrey.kissel@LIGO.ORG - 14:45, Wednesday 09 August 2023 (72110)
As a result of Ansel's discovery and conversation on the CAL call today, I've moved the calibration line frequency from 102.13 Hz to 104.23 Hz. See LHO:72108.
derek.davis@LIGO.ORG - 13:28, Friday 11 August 2023 (72157)

This line may have appeared in the previous lock the day before (Aug 4). The daily spectrogram for Aug 4 shows a line near 100 Hz starting at 21:00 UTC. 

Images attached to this comment
elenna.capote@LIGO.ORG - 16:49, Friday 11 August 2023 (72163)

Looking at alogs leading up to the time Derek notes above, I noticed that Gabriele retuned and tested new LSC FF. This change may be related to this new peak. Remembering some issues we had recently where DHARD filter impulses were ringing up violin modes, I checked the new LSC FF filters and how they are engaged in the guardian. Some of them have no ramp time, and the filter bank is turned on immediately along with the filters in the guardian. I have no idea why that would cause a peak at 102 Hz, but I updated those filters to have a 3 second ramp.

oli.patane@LIGO.ORG - 16:51, Friday 11 August 2023 (72164)

Reloaded the H1LSC model to load in Elenna's filter changes

ansel.neunzert@LIGO.ORG - 13:36, Monday 14 August 2023 (72197)

Now that the calibration line has been moved, the comb-like structure at the calibration line frequency is no longer present (checked in the CLEAN channel).

We can also see the shape of the 102.12833 Hz line much more clearly without the overlapping calibration line. I have attached a plot for reference on the width and shape.

Images attached to this comment
camilla.compton@LIGO.ORG - 16:33, Monday 14 August 2023 (72203)ISC

As discussed in today's commissioning meeting, I checked TMSX and ETMX movement for a kick during locking and couldn't see anything suspicious. I did find some increased motion/noise every 8 Hz in TMSX 1 s into ENGAGE_SOFT_LOOPS, when ISC_LOCK isn't explicitly doing anything; plot attached. However, this noise was present prior to Aug 4th (July 30th attached).

TMS is suspicious because Betsy found that the TMSs have violin modes at ~103-104 Hz.

Jeff draws attention to 38295, showing modes of the quad blade springs above 110 Hz, and 24917, showing quad top wire modes above 300 Hz.

Elenna notes that with the calibration lines off (as we are experimenting with for the current lock), we can see this 102 Hz peak at the ISC_LOCK state ENGAGE_ASC_FOR_FULL_IFO. We were mistaken.

Images attached to this comment
elenna.capote@LIGO.ORG - 21:49, Tuesday 29 August 2023 (72544)

To preserve documentation: this problem has now been solved, with more details in 72537, 72319, and 72262.

The cause of this peak was a spurious, narrow 102 Hz feature in the SRCL feedforward that we didn't catch when the filter was made. This has been fixed, and the cause of the mistake has been documented in the first alog listed above so we hopefully don't repeat this error.

H1 TCS
anthony.sanchez@LIGO.ORG - posted 14:57, Tuesday 08 August 2023 (72077)
TCS Chiller Water Level Top-Off - FAMIS 21127

FAMIS 21127

No water was added to either the TCSX or TCSY chillers; there was no water in the cup in the corner.

                TCS X    TCS Y
Previous Level  30.0     10.0
New Level       30.0     10.0
Water added     0 mL     0 mL
H1 CDS
david.barker@LIGO.ORG - posted 13:35, Tuesday 08 August 2023 - last comment - 08:23, Wednesday 09 August 2023(72067)
CDS Maintenance Summary: Tuesday 8th August 2023

WP11359 Consolidated power control IOC software

Erik:

Erik installed new IOC code for the control of Pulizzi and Tripplite network controlled power switches. No DAQ restart was needed.

WP11358 Add new end station Tripplite EPICS channels to the DAQ

Dave:

I modified H1EPICS_CDSACPWR.ini to add the new end station Tripplite control boxes. I found that this file was very much out of date, and only had the MSR Pulizzi channels. I added the missing seven units to the file. DAQ + EDC restart was needed.

WP11357 Add missing ZM4,5 SUSAUX channels to DAQ

Jeff, Rahul, Fil, Marc, Dave:

Jeff discovered that the ZM4,5 M1 VOLTMON channels were missing from the SUSAUX model. In this case these are read by h1susauxh56. It was found that the existing ZM6 channels were actually reading ZM5's channels. h1susauxh56.mdl was modified to:

Change the ADC channels being read by ZM6 from 20-23 to 24-27

Add ZM4 reading ADC channels 16-19

Add ZM5 reading ADC channels 20-23

DAQ Restart was needed.

DAQ Restart

Dave:

Another messy DAQ restart. The sequence was:

  1. Restart the 1-leg
  2. Restart the EDC to include the power control channels
  3. gds1 needed a second restart
  4. When fw1 was back up and had written its first full frame:
  5. Restart the 0-leg
  6. gds0 needed a second restart
  7. After fw0 had written its first full frame, fw1 spontaneously crashed!

This was an interesting data point: last week I restarted the DAQ in the opposite order (0-leg then 1-leg), and as fw1 was coming back, fw0 spontaneously crashed, which in that case resulted in some full frame files not being written by either fw.

Could it be that starting the second fw impacts the first fw's disk access speed (perhaps LDAS gap checker switching between file systems)?

As we have found to always be the case, once the errant fw has crashed once, it does not crash again.

 

Comments related to this report
david.barker@LIGO.ORG - 13:45, Tuesday 08 August 2023 (72068)

DAQ missing full frames GPS times (no overlap between fw0 and fw1 lists) (missing because of crash highlighted)

Scanning directory: 13755...
FW0 Missing Frames [1375554944, 1375555008]
FW1 Missing Frames [1375554752, 1375554816, 1375555072, 1375555136, 1375555200]
 

david.barker@LIGO.ORG - 15:01, Tuesday 08 August 2023 (72078)

This morning new filtermodules were added to h1susauxh56 to read out the ZM4,5 quadrants. The RCG starts new filtermodules in an inactive state, namely with INPUT=OFF, OUTPUT=OFF, GAIN=0.0. It can be a bit time consuming to activate the MEDM switches by hand.

I wrote a script to activate new filtermodules, called activate_new_filtermodules. It takes the filtermodule name as its argument.

Here is an example using a h1pemmx filtermodule:

david.barker@opslogin0: activate_new_filtermodule PEM-MX_CHAN_12
H1:PEM-MX_CHAN_12_SW1 => 4
H1:PEM-MX_CHAN_12 => ON: INPUT
H1:PEM-MX_CHAN_12_SW2 => 1024
H1:PEM-MX_CHAN_12 => ON: OUTPUT
H1:PEM-MX_CHAN_12_GAIN => 1.0
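
For reference, a hedged re-creation of what such a script might look like (this is not the actual activate_new_filtermodule code, just a sketch using pyepics; the switch values are the ones shown in the output above):

import sys
import epics  # pyepics

def activate(fm, ifo="H1"):
    # Turn on INPUT and OUTPUT and set unity gain for a freshly added
    # (all-off) filtermodule.
    base = "%s:%s" % (ifo, fm)
    epics.caput(base + "_SW1", 4)      # SW1 = 4    -> ON: INPUT
    epics.caput(base + "_SW2", 1024)   # SW2 = 1024 -> ON: OUTPUT
    epics.caput(base + "_GAIN", 1.0)
    print("%s: INPUT on, OUTPUT on, GAIN = 1.0" % base)

if __name__ == "__main__":
    activate(sys.argv[1])   # e.g. PEM-MX_CHAN_12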

 

david.barker@LIGO.ORG - 15:12, Tuesday 08 August 2023 (72079)

------------------- DAQ CHANGES: ------------------------

REMOVED:

No channels removed from the DAQ frame

ADDED:

+8 fast channels added (all at 256Hz)

< H1:SUS-ZM4_M1_VOLTMON_LL_OUT_DQ 4 256
< H1:SUS-ZM4_M1_VOLTMON_LR_OUT_DQ 4 256
< H1:SUS-ZM4_M1_VOLTMON_UL_OUT_DQ 4 256
< H1:SUS-ZM4_M1_VOLTMON_UR_OUT_DQ 4 256
< H1:SUS-ZM5_M1_VOLTMON_LL_OUT_DQ 4 256
< H1:SUS-ZM5_M1_VOLTMON_LR_OUT_DQ 4 256
< H1:SUS-ZM5_M1_VOLTMON_UL_OUT_DQ 4 256
< H1:SUS-ZM5_M1_VOLTMON_UR_OUT_DQ 4 256
 

+112 slow channels added

david.barker@LIGO.ORG - 08:23, Wednesday 09 August 2023 (72092)

Tue08Aug2023
LOC TIME HOSTNAME     MODEL/REBOOT
11:31:41 h1susauxh56  h1susauxh56 <<< Correct ZM6, add ZM4 and ZM5


11:32:59 h1daqdc1     [DAQ] <<< 1-leg restart
11:33:10 h1daqfw1     [DAQ]
11:33:11 h1daqnds1    [DAQ]
11:33:11 h1daqtw1     [DAQ]
11:33:19 h1daqgds1    [DAQ]


11:33:53 h1susauxb123 h1edc[DAQ] <<< EDC restart for CDSACPWR


11:34:25 h1daqgds1    [DAQ] <<< 2nd gds1 restart needed


11:36:06 h1daqdc0     [DAQ] <<< 0-leg restart
11:36:17 h1daqfw0     [DAQ]
11:36:17 h1daqtw0     [DAQ]
11:36:18 h1daqnds0    [DAQ]
11:36:25 h1daqgds0    [DAQ]
11:37:14 h1daqgds0    [DAQ] <<< 2nd gds0 restart needed


11:39:10 h1daqfw1     [DAQ] <<< FW1 crash!!
 

H1 SUS (CDS, SUS)
rahul.kumar@LIGO.ORG - posted 12:26, Tuesday 08 August 2023 - last comment - 14:24, Tuesday 08 August 2023(72057)
Performed h1susauxh56 model change to add ZM4, ZM5 M1 VOLTMON readback and correct it for ZM6

Dave, Rahul

This morning we made changes to the h1susauxh56 model, adding the missing ZM4, ZM5 M1 VOLTMON readbacks and correcting the channel mapping for ZM6 - I am attaching a screenshot of the latest model, which was successfully installed and restarted. This was an oversight from the O4 installation of the HXDS and was discovered by Jeff (while he was going through the O5 SUS electronics layout). The details of the wiring diagram are shown in D2000202_v12 for ZM4, ZM5 (see page 6, look at ADC5) and D1002740_V10 (page 6, ADC5). The details of the new ADC channels (8 new fast channels added) for each suspension in the h1susauxh56 model are given below,

ZM4 ADC5 CH16-CH19 
ZM5 ADC5 CH20-CH23
ZM6 ADC5 CH24-CH27 (changed from CH21-CH24) 

h1susauxh56.mdl has been committed to the SVN (located at /opt/rtcds/userapps/release/sus/h1/models) and Dave has successfully performed a DAQ restart.

The VOLTMON channels for ZM4 and ZM5 were showing zero readings, which meant we had to switch ON the filters and apply a gain of 1.0 to all four quadrants for both suspensions. The filters were located at

cd /opt/rtcds/lho/h1/medm/h1susauxh56

and using the command "medm -x H1SUSAUXH56_ZM4_M1_VOLTMON_LL.adl" we switched ON the filters and applied the gain.

I am attaching a screenshot of ZM4 and ZM5 and the VOLTMON readouts on both the suspensions are fully functional.

WP 11357 is now complete and I am closing it.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 14:24, Tuesday 08 August 2023 (72074)

The new VOLTMON channels for ZM4 and ZM5 have been initialized, accepted, and monitored in the susauxh56 SDF table, screenshot attached.

Images attached to this comment
H1 CAL
louis.dartez@LIGO.ORG - posted 17:00, Monday 07 August 2023 - last comment - 12:08, Thursday 10 August 2023(72043)
3.2 kHz HF pole filter module restored in CAL-CS ETMX L3 bank
Taking advantage of the fact that we're not locked, I put the missing ETMX "HFPole" filter module (LHO:72030) back in the H1CAL-CS_DARM_ANALOG_ETMX_L3 filterbank. From inspecting the filter archive, it looks like the "HFPole" ETMX filter module was removed on 4/25/2023. This is around the time we were rolling out the cmd-dev infrastructure for the calibration group.

The plan is to follow up with a broadband measurement later tonight or at the earliest opportunity to establish whether or not to keep this filter in place.


The zpk string I used is zpk([], [3226.75],1,"n"). The value 3226.75 was calculated by summing the poles for all four ESD quadrants from LHO:46773 as per LHO:27150.
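
As a quick sanity check on what this filter does in-band, the response of a single real pole at 3226.75 Hz can be evaluated numerically -- a minimal sketch, assuming the foton "n" convention of unity gain at DC (if the normalization differs, only the overall scale changes, not the shape):

import numpy as np

f_pole = 3226.75  # Hz, from the zpk string above

def hf_pole(f):
    # single real pole: H(f) = 1 / (1 + i f / f_pole)
    return 1.0 / (1.0 + 1j * np.asarray(f) / f_pole)

for f in (100.0, 450.0, 1000.0, 3226.75):
    h = hf_pole(f)
    print("%8.2f Hz: |H| = %.4f, phase = %+7.2f deg"
          % (f, abs(h), np.degrees(np.angle(h))))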

I've attached screenshots of the ETMX filterbank and the GDS TP window.

GDS table diff

324c324
< # DESIGN   CS_DARM_ANALOG_ETMX_L3 2 zpk([],[3226.75],1,"n")
---
> # DESIGN   CS_DARM_ANALOG_ETMX_L3 2 zpk([],[],9.787382864894167e-13,"n")
343c343
< CS_DARM_ANALOG_ETMX_L3 2 21 1      0      0 HFPole     4.158812836234200838170239e-01  -0.1682374327531596   0.0000000000000000   1.0000000000000000   0.0000000000000000
---
> CS_DARM_ANALOG_ETMX_L3 2 21 1      0      0 TEST_Npct_50W 9.787382864894166725851836e-13   0.0000000000000000   0.0000000000000000   0.0000000000000000   0.0000000000000000

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:17, Monday 07 August 2023 (72045)
The above aLOG covers another *solution* to the ongoing studies of the ~5-10% systematic error in the calibration -- namely, what's unique to LHO and *left over* after the flaw in the GDS filters that was fixed in LHO:71787.

The filter was loaded by 2023-08-07 17:15 UTC.
louis.dartez@LIGO.ORG - 15:39, Tuesday 08 August 2023 (72076)
This change has been added to the LHO record of calibration pipeline changes for O4, DCC:T2300297
jeffrey.kissel@LIGO.ORG - 12:08, Thursday 10 August 2023 (72135)
Correction to the timing of this filter update -- The filter was loaded by 2023-08-07 17:15 PDT -- i.e. 2023-08-08 00:15 UTC