H1 General
andrei.danilin@LIGO.ORG - posted 11:01, Wednesday 26 June 2024 (78679)
Bouncing modes in SQZ-FC_LSC_DOF2

Andrei, Sheila

We've observed that when the IFO switches its state to OBSERVING (H1:GRD-IFO_OK), so-called bounce modes of the mirror suspensions appear at frequencies around 27.041 Hz in the H1:SQZ-FC_LSC_DOF2 channel. These modes can be present in measurements for over 10 minutes. We have checked the occurrences of these events over a few days, which will hopefully help us determine the exact suspension responsible for this behavior (Bouncemodes.png, /ligo/home/andrei.danilin/Documents/Oscillations.xml).
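As a rough illustration (not the method used for the attached files), such stretches could be flagged with a band-limited RMS around the 27 Hz bounce-mode band using gwpy; the time window below is illustrative and the stored channel name may carry a suffix such as _OUT_DQ:

```python
import numpy as np
from gwpy.timeseries import TimeSeries

# Illustrative two-hour window; not one of the events from this report.
start, end = "2024-06-26 08:00", "2024-06-26 10:00"

# Stored channel name may differ (e.g. an _OUT_DQ suffix); adjust as needed.
data = TimeSeries.get("H1:SQZ-FC_LSC_DOF2", start, end)

band = data.bandpass(26.5, 27.5)   # isolate the ~27.041 Hz bounce-mode band
blrms = band.rms(10)               # 10 s band-limited RMS

# Flag stretches where the band-limited RMS is well above its typical level.
threshold = 3 * np.median(blrms.value)
elevated = blrms.value > threshold
print(f"{elevated.sum() * 10} s above threshold out of {len(elevated) * 10} s")
```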

 

Images attached to this report
H1 SYS
sheila.dwyer@LIGO.ORG - posted 10:56, Wednesday 26 June 2024 (78667)
looking for fast shutter checks for AS_C clipping in DRMI

TJ, Sheila

The fast shutter guardian runs a check in DRMI on AS_A and AS_B: it checks that the power on those PDs is within the thresholds set by the 'SYS-MOTION_C_SHUTTER_G_TRIGGER_LOW' and 'SYS-MOTION_C_SHUTTER_G_TRIGGER_HIGH' channels. This test passed when we had the pressure spikes, because the beam to those diodes wasn't clipped.

TJ and I have been discussing adding an additional check to try to catch the type of problem we had in our recent pressure spikes, where there was clipping on AS_C. The table below shows values with DRMI locked, to see if we can detect the problem before fully locking the IFO. It seems that a check on the power on AS_C alone would cause false alarms because of the variability of DRMI build-ups. A check on the ratio of AS_C to AS_A or AS_B seems more viable, but would still risk false alarms: the difference between the lowest "good" ratio and the highest "bad" ratio is only 3.5% for AS_C/AS_A, and only 7% for AS_C/AS_B. A minimal sketch of such a ratio check follows the table.

Time (UTC)        | AS_C NSUM (W in HAM6) | AS_A NSUM | AS_B NSUM | AS_C/AS_A | AS_C/AS_B
6/6 20:46 (bad)   | 0.046                 | 3908      | 4240      | 1.17e-5   | 1.08e-5
6/7 2:30 (bad)    | 0.049                 | 4162      | 4515      | 1.17e-5   | 1.08e-5
6/7 3:20 (bad)    | 0.048                 | 4145      | 4488      | 1.15e-5   | 1.07e-5
6/6 12:08 (good)  | 0.052                 | 4240      | 4433      | 1.22e-5   | 1.16e-5
6/6 7:10 (good)   | 0.0506                | 4098      | 4267      | 1.23e-5   | 1.18e-5
6/6 5:21 (good)   | 0.053                 | 4361      | 4575      | 1.21e-5   | 1.16e-5
6/6 00:56 (good)  | 0.048                 | 3943      | 4105      | 1.21e-5   | 1.17e-5
6/5 23:44 (good)  | 0.053                 | 4319      | 4513      | 1.23e-5   | 1.17e-5
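A minimal sketch of the kind of ratio check being discussed, assuming EPICS access via pyepics; the channel names and the 1.19e-5 threshold are illustrative placeholders, not settled values:

```python
from epics import caget

# Placeholder readback names for the AS_C and AS_A normalized-sum powers.
as_c = caget("H1:ASC-AS_C_NSUM_OUT")
as_a = caget("H1:ASC-AS_A_NSUM_OUT")
ratio = as_c / as_a

# Splits the gap between the highest "bad" ratio (1.17e-5) and the lowest
# "good" ratio (1.21e-5) in the table above; the margin is only a few
# percent, which is the false-alarm concern.
RATIO_LOW = 1.19e-5
if ratio < RATIO_LOW:
    print(f"possible AS_C clipping: AS_C/AS_A = {ratio:.3g}")
```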

 

LHO VE
david.barker@LIGO.ORG - posted 10:52, Wednesday 26 June 2024 (78678)
Wed CP1 Fill

Wed Jun 26 10:09:07 2024 INFO: Fill completed in 9min 3secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:44, Wednesday 26 June 2024 - last comment - 10:51, Wednesday 26 June 2024(78676)
LIGO 28AO32 h1susex DAC Test MEDM

I've written a python program which generates a DAC_TEST MEDM. It is accessible from the SITEMAP via the WD pull-down (the CDS pull-down is full).

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:51, Wednesday 26 June 2024 (78677)

Currently the first 8 DAC channels are routed to the ADC on h1pemex (first 8 channels). This is achieved using a custom DB25 cable, which connects to one of the 4 DAC interface plates on the back of the IO Chassis and splits these channels into 8 BNC cables connected to the first 8 channels of the PEM AA chassis.

Note that this is a standard PEM AA chassis, with a gain of 10.

By moving this cable between the IO Chassis interface/header connectors, the new DAC's channels can be tested 8 at a time.

The h1susetmx DAC_MTRX permits any signal to be routed to these ADC channels.

H1 CDS
david.barker@LIGO.ORG - posted 10:38, Wednesday 26 June 2024 (78671)
CDS Maintenance Summary: Tuesday 25th June 2024

WP11928 Install New LIGO-DAC in h1susex

Richard, Marc, Fil, Erik, EJ, Dave:

A new LIGO 28bit DAC card was installed in h1susex. The current 18 and 20 bit DAC drives can be teed to the new DAC via a selection matrix. The first 8 DAC channels are being read back by an ADC in h1iscex, by the h1pemex model.

h1susex was moved from the production RCG 5.1.4 boot server (h1vmboot1) to the new RCG 5.3.0 boot server (h1vmboot0).

h1susauxex was powered down during this install.

The h1iopsusex model was modified to add the new DAC. h1susetmx was modified to drive the new DAC with any combination of the four production DACs it is driving (two are 18bit, two are 20bit).

A DAQ restart was required. Due to a DAQ configuration issue the 0-leg was offline for about an hour.

WP11937 New Cal models

Louis, Joe, Dave:

New h1cal[cs, ex, ey] were installed. A DAQ restart was needed.

WP11944 New Seiproc model

Jim, Dave:

A new h1seiproc model was installed. A DAQ restart was required.

DAQ Restart

Jonathan, Erik, Dave:

As mentioned above this was a messy restart because the new boot server rewrote the DAQ master file to only include the h1susex system. Our brief h1pemmx test yesterday did not pick this up since we did not do a DAQ restart.

The issue was found on the 0-leg, so it did not impact the control room while we identified and resolved the problem. The 1-leg restart went ahead with no issues.

After the DAQ was back up and running, we noticed that FW0 frame write times were sometimes 10-20 seconds longer than FW1's, and sometimes about the same. This situation persisted for about an hour and then resolved itself.

3IFO Dewpoint Sensor for HAM3

Bubba, Fil, Dave:

Fil investigated the HAM3 3IFO dewpoint sensor. It looks like a failed sensor.

Roof Camera

Fil, Marc, Dave:

Fil and Marc went onto the roof and restored the camera which attaches to the viewing platform. This camera is now working again.

h1sush2a DAC high quarter error

Erik, EJ, Dave:

We discovered that all the h1sush2a DACs had been in high-quarter-fifo error since 9th April 2024 (last day of O4 break). This frontend controls MC1, MC3, PRM, PR3.

We restarted the models and the DACs are now good.

See Erik's alog for details.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:34, Wednesday 26 June 2024 (78675)
h1susex IO Chassis running with Beckhoff Timing Error

The Beckhoff timing system is reporting an error at the EX station following the new DAC install yesterday, which required a timing card with an updated firmware version. This error is actually just an informational message acknowledging the firmware version difference on the timing card attached to the fourth port (port3) of the EX timing fanout.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:30, Wednesday 26 June 2024 (78672)
CDS Overview Shows Front Ends Built With Other RCG

While h1susex is running on RCG 5.3.0, I have modified the CDS overview to show this so we don't forget that these models need a special build.

I've used the sliver between the filter-status and the ipc-status to show this.

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 10:00, Wednesday 26 June 2024 - last comment - 10:33, Wednesday 26 June 2024(78669)
Morning IA issues

This morning XARM was having some issues related to the PLL beatnote (seen yesterday in alog 78660). This caused the XARM to keep jumping from unlocked -> increase_flashes -> fault, as seen in the log. After a bit it was able to fix itself, and the GREEN_ARMS section of IA finished in about 30 minutes; the rest of IA went fine and only took 15 minutes. The longer GREEN_ARMS time caused the H1_MANAGER IA timer of 40 minutes to expire, and IA got stuck in INIT_ALIGN_FINISHED and SRC_ALIGN_OFFLOADED for ALIGN_IFO.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 10:33, Wednesday 26 June 2024 (78674)

ALS_XARM was going into FAULT(state 14) every time INCREASE_FLASHES was tried due to issues with PLL, PDH, VCO, and the beat note strength. Trending some channels, it looks like the fault was corrected when the laser head crystal frequency went down and ALS-X_FIBR_DEMON_RFMON went back to above -10dB.

Images attached to this comment
H1 SUS (CDS, SUS)
erik.vonreis@LIGO.ORG - posted 09:39, Wednesday 26 June 2024 (78668)
28 bit DAC loopback channel-to-channel connection and gain for ESD drive

Five SUSETMX ESD channels are connected to the new 28 bit DAC and looped back into ADC4 on PEMEX.

The connections are made using a mux matrix named DAC_MTRX in the ETMX model.

L3_ESD_DC -> ADC channel 0

L3_ESD_UR -> ADC channel 1

L3_ESD_LR -> ADC channel 2

L3_ESD_UL -> ADC channel 3

L3_ESD_LL -> ADC channel 5

 

The 20-bit digital DAC output going to the ESD is sent through the mux matrix and then scaled with a gain of 51.  This gain takes into account the 8 extra bits on the 28 bit DAC, the 10x gain in the AA chassis, and the 2x voltage range of the A2D compared to the DAC. 

The gain results in close to max A2D range for a railed 20-bit DAC.
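As a sanity check, the factor of 51 can be reproduced from the pieces listed above; a sketch only, where the full-scale values (±10 V over 2**27 counts for the 28-bit DAC, ±20 V over 2**15 counts for the A2D) are assumptions:

```python
# Back-of-envelope check of the factor of 51, under assumed full-scale values:
# 28-bit DAC: +/-10 V over 2**27 counts; A2D: +/-20 V over 2**15 counts
# (the 2x voltage range noted above); AA chassis gain: 10.
dac20_rail = 2**19                       # railed 20-bit DAC request, in counts
dac28_volts_per_count = 10 / 2**27
aa_gain = 10
adc_counts_per_volt = 2**15 / 20

# Counts-to-counts gain of the analog path, excluding the digital gain.
analog_path = dac28_volts_per_count * aa_gain * adc_counts_per_volt

# Digital gain needed so a railed 20-bit request lands near the A2D rail (2**15 counts).
digital_gain = (2**15 / dac20_rail) / analog_path
print(round(digital_gain, 1))            # ~51.2, consistent with the 51 quoted above
```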

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 09:05, Wednesday 26 June 2024 - last comment - 10:15, Wednesday 26 June 2024(78666)
Lockloss

Lockloss @ 06/26 16:04 UTC unknown cause

Comments related to this report
oli.patane@LIGO.ORG - 10:15, Wednesday 26 June 2024 (78670)

17:14 Observing

 

H1 General
oli.patane@LIGO.ORG - posted 07:31, Wednesday 26 June 2024 - last comment - 08:47, Wednesday 26 June 2024(78662)
Ops Day Shift Start

TITLE: 06/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Relocking and at FIND_IR

Comments related to this report
ryan.crouch@LIGO.ORG - 08:45, Wednesday 26 June 2024 (78663)PSL

Overnight there were notifications for the PSL chiller water being low. I went out to check and it was close to the MIN mark, so I added 100 mL of water to bring the level to just under MAX.

oli.patane@LIGO.ORG - 08:47, Wednesday 26 June 2024 (78664)

15:46UTC Observing

LHO General
ryan.short@LIGO.ORG - posted 01:07, Wednesday 26 June 2024 (78661)
Ops Eve Shift Summary

TITLE: 06/26 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: One lockloss this shift with a longer recovery due to some unknown early state locklosses, but otherwise a quiet shift. H1 has been locked for 3 hours.

LOG: No log for this shift.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 21:56, Tuesday 25 June 2024 - last comment - 09:48, Thursday 11 July 2024(78652)
OM2 impact on low frequency sensitivity and optical gain

The first attachment shows spectra (GDS CALIB STRAIN clean, so with calibration corrections and jitter cleaning updated and SRCL FF retuned) with OM2 hot vs cold this week, without squeezing injected. The shot noise is slightly worse with OM2 hot, while the noise from 20-50 Hz does seem slightly better with OM2 hot. This is not as large a low-frequency improvement as was seen in December. The next attachment shows the same no-squeezing times, but with coherences of PRCL and SRCL with CAL DELTAL. MICH is not plotted since its coherence was low in both cases. This suggests that some of the low-frequency noise with OM2 cold could be due to PRCL coherence.
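For anyone wanting to reproduce the coherence comparison, a rough gwpy sketch; the GPS span is illustrative and the channel names are the usual test points, so they may differ from what was actually used here:

```python
from gwpy.timeseries import TimeSeries

# Illustrative 10-minute GPS span, not necessarily one of the times used here.
start, end = 1403500000, 1403500600

# Assumed channel names for DARM and PRCL; swap in SRCL to make the other plot.
darm = TimeSeries.get("H1:CAL-DELTAL_EXTERNAL_DQ", start, end)
prcl = TimeSeries.get("H1:LSC-PRCL_OUT_DQ", start, end)

coh = darm.coherence(prcl, fftlength=10, overlap=5)   # 0.1 Hz resolution
plot = coh.plot()
ax = plot.gca()
ax.set_xscale("log")
ax.set_xlim(10, 100)
ax.set_ylabel("Coherence with CAL DELTAL")
plot.savefig("prcl_darm_coherence.png")
```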

The optical gain is 0.3% worse with OM2 hot than it was cold (3rd attachment); before the OMC swap we saw a 2% decrease in optical gain when heating OM2, in December 74916 and last July 71087. This seems to suggest that there has been a change in the OMC mode-matching situation since the last time we did this test.

The last attachment shows our sensitivity (GDS CALIB STRAIN CLEAN) with squeezing injected. The worse range with OM2 hot can largely be attributed to worse squeezing; the time shown here was right after the PSAMs change this morning 78636, which seems to have improved the range to roughly 155 Mpc with cleaning. It's possible that more PSAMs tuning would improve the squeezing further.

Times used for these comparisons (from Camilla):

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 11:43, Friday 28 June 2024 (78722)

Side point about some confusion caused by a glitch:

The first attachment shows something that caused me some confusion; I'm sharing what the confusion was in case this comes up again. It is a spectrum of the hot no-sqz time listed above, comparing the spectrum produced by dtt with 50 averages, 50% overlap, and BW 0.1 Hz (which requires 4 minutes and 15 seconds of data) to a spectrum produced by the noise budget code at the same time. The noise budget uses a default resolution of 0.1 Hz and 50% overlap, and the number of averages is set by the duration of data we give it, which is most often 10 minutes. The second screenshot shows that there was a glitch 4 minutes and 40 seconds into this data stretch, so the spectrum produced by the noise budget shows elevated noise compared to the one produced by dtt. The third attachment shows the same spectra comparison, where the noise budget span is set to 280 seconds so the glitch is not included, and the two spectra agree.
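For reference, the duration quoted for the dtt settings follows from the usual overlapping-segment bookkeeping; a quick check assuming 50% overlap throughout:

```python
# Quick check of the "4 minutes and 15 seconds" figure, assuming the standard
# Welch layout of overlapping segments.
bw = 0.1                     # Hz -> 10 s per FFT segment
t_fft = 1 / bw
n_avg = 50
overlap = 0.5

duration = t_fft * (n_avg * (1 - overlap) + overlap)
print(duration)              # 255.0 s = 4 min 15 s
```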

Comparison of sensitivity with OM2 hot and cold without squeezing:

The next two attachments show spectra comparisons for no-sqz times with OM2 hot and cold (same times as above); the first shows a comparison of the DARM spectrum, and the second shows the range accumulating as a function of frequency. In both plots, the bottom panel shows the difference in accumulated range, so this curve has a positive slope where the sensitivity with OM2 hot is better than with OM2 cold, and a negative slope where OM2 hot is worse. The small improvement in sensitivity between 20-35 Hz improves the range by almost 5 Mpc; then there is a new broad peak at 33 Hz with OM2 hot which comes and goes; and there is again a benefit of about 4 Mpc due to the small improvement in sensitivity from 40-50 Hz.
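An accumulated-range-vs-frequency curve can be approximated from a PSD with only the f**(-7/3)/S(f) inspiral weighting; the sketch below leaves out masses, cosmology, and the overall normalization, so only the shape and relative differences between curves are meaningful, and the GPS span is illustrative:

```python
import numpy as np
from gwpy.timeseries import TimeSeries

# Illustrative GPS span; pick one of the no-sqz times above to reproduce the plots.
start, end = 1403500000, 1403500600
strain = TimeSeries.get("H1:GDS-CALIB_STRAIN_CLEAN", start, end)
psd = strain.psd(fftlength=10, overlap=5)

f = psd.frequencies.value
sel = (f >= 10) & (f <= 2000)

# Inspiral SNR^2 accumulates as f**(-7/3)/S(f); range scales as the square root.
integrand = f[sel] ** (-7.0 / 3.0) / psd.value[sel]
cumulative = np.sqrt(np.cumsum(integrand) / np.sum(integrand))
# 'cumulative' is the fraction of the range accumulated below each frequency;
# differencing two such curves (hot vs cold) gives the bottom-panel behavior described above.
```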

From 90-200 Hz the sensitivity is slightly worse with OM2 hot. The coupled cavity pole dropped from 440 Hz to 424 Hz while OM2 warmed up; we can try tuning the offsets in AS72 to improve this, as Jennie and Keita did a few weeks ago: 78415

Comparison of with squeezing:

Our range has been mostly lower than 160 Mpc with OM2 hot, which was also true in the few days before we heated it up. I've picked a time when the range just hit 160 Mpc after thermalization, 27/6/2024 13:44 UTC, to make the comparison of our best sensitivities with OM2 hot vs cold. This is a time without the 33 Hz peak; we gain roughly 7 Mpc from 30-55 Hz (spectra and accumulated range comparisons) and lose nearly all of that benefit from 55-200 Hz. We hope that we may be able to gain back some mid-frequency sensitivity by optimizing the PSAMs for OM2 hot and by adjusting SRM alignment. This is why we are staying with this configuration for now, hoping to have some more time to evaluate whether we can improve the squeezing enough here.

There is a BRUCO running for the 160Mpc time with OM2 hot, started with the command:

python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1403531058 --length=400 --outfs=4096 --fres=0.1 --dir=/home/sheila.dwyer/public_html/brucos/GDS_CLEAN_1403531058 --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 --excluded=/home/elenna.capote/bruco-excluded/lho_excluded_O3_and_oaf.txt

It should appear here when finished: https://ldas-jobs.ligo.caltech.edu/~sheila.dwyer/brucos/GDS_CLEAN_1403531058/

 

 

Images attached to this comment
gerardo.moreno@LIGO.ORG - 15:59, Wednesday 10 July 2024 (78829)VE

(Jenne, Jordan, Gerardo)

On Monday June 24, I noticed an increase in pressure at the HAM6 pressure gauge only. Jordan and I tried to correlate the pressure rise with other events but found nothing; we looked at RGA data, but nothing was found there either. Then Jenne pointed us to the OM2 thermistor.

I looked at the event in question, and one other event related to changing the temperature of OM2; the last time the temperature was modified was back on October 10, 2022.

Two events attached.

Images attached to this comment
camilla.compton@LIGO.ORG - 09:48, Thursday 11 July 2024 (79026)

Some more analysis on pressure vs OM2 temperature in alog 78886: this recent pressure rise was smaller than the first time we heated OM2 after the start of O4 pumpdown.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 19:14, Tuesday 25 June 2024 - last comment - 22:15, Tuesday 25 June 2024(78659)
Lockloss @ 01:50 UTC

Lockloss @ 01:50 UTC - link to lockloss tool

No immediately obvious cause. We had an incoming EQ alert at the same time, so I thought an S-wave could've hit and caused the lockloss, but no noticeable ground motion was seen at the time.

Comments related to this report
ryan.short@LIGO.ORG - 22:15, Tuesday 25 June 2024 (78660)

H1 back to observing at 05:00 UTC.

Lots of unexplained locklosses at random pre-DRMI states made this reacquisition longer; I'm not sure what the issue was. Otherwise, PRM needed adjustment to eventually lock DRMI.

I also twice had an issue with the ALS X PLL showing the "Beat note strength" error message, which prevented it from locking and would put ALS_XARM into FAULT. The beatnote was around -11 dBm and the lower limit was set to -10 dBm, so I simply lowered the threshold to -12 dBm and the PLL locked just fine (I later reverted the SDF diff to start observing). Trending the ALS X PLL beatnote back, it does seem to sometimes get this low, but not very often (attached). It's already made its way back up to -5 dBm about an hour later.
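The check behind that FAULT amounts to comparing the RF monitor against a lower limit; a minimal sketch, assuming pyepics access and a placeholder channel name (not the actual guardian readback):

```python
# Minimal sketch of the beatnote-strength check that trips the FAULT.
from epics import caget

beatnote_dbm = caget("H1:ALS-X_FIBR_DEMOD_RFMON")   # hypothetical RF-monitor channel name
limit_dbm = -10.0                                   # lower limit before it was relaxed to -12 dBm

if beatnote_dbm < limit_dbm:
    print(f"beatnote {beatnote_dbm:.1f} dBm below the {limit_dbm:.1f} dBm limit -> ALS_XARM FAULT")
```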

Images attached to this comment
H1 SQZ (DetChar, DetChar-Request)
camilla.compton@LIGO.ORG - posted 09:04, Thursday 20 June 2024 - last comment - 10:34, Wednesday 26 June 2024(78549)
SQZ Laser is Glitchy since May 20th, CLF ISS Glitchy since O4b start

Vicky, Begum, Camilla: Vicky and Begum noted that the CLF ISS and the SQZ laser are glitchy.

Vicky's plot (attached) shows that the CLF ISS glitches started with O4b.

The timeline below shows that the SQZ laser glitches started May 20th and aren't related to TTFSS swaps. DetChar-Request: do you see these glitches in DARM since May 20th?

  • Pre-April 25: no glitches
  • 25 April:  TTFSS issues, Removed Chassis S2300258, Installed Chassis S2300259 (alog 77418); also caused SQZ rack glitch: 77424 / FRS 31061
  • 25 April - 7 May: no glitches apart from on Tuesday April 30th
  • 7 May: Reinstalled repaired TTFSS. Removed Chassis S2300259, Installed Chassis S2300258 (77688)
  • 7 May - May 20th: Glitches stay small
  • May 20th 22:00UTC (Monday): First glitch shown on detchar summary page /20240520/sqz/glitches/
    • Day shift summary 77941 included a PR2 spot move, but we don't expect the glitches to be IFO-alignment dependent.
  • Lots of glitches since May 20th. These glitches aren't constant and seem to get better and worse on different days; in general they have gotten worse since May 20th.

Summary page screenshots from: before the glitches started, the first glitch on May 20th (see top-left plot, 22:00 UTC), and the bad glitches since then.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:25, Monday 24 June 2024 (78626)

Missed point:

  • 9 May: TTFSS issues, Removed Chassis S2300258, Installed Chassis S2300259 (77734); we later saw the issues were from fiber polarization, not the chassis.
andrei.danilin@LIGO.ORG - 14:08, Monday 24 June 2024 (78627)

In addition to the previous report, I should note that glitches started on May 9th and recurred several times even before May 25th.
The glitches are usually accompanied by increased noise in the H1:SQZ-FIBR_EOMRMS_OUT_DQ and H1:SQZ-FIBR_MIXER_OUT_DQ channels.

Images attached to this comment
andrei.danilin@LIGO.ORG - 10:34, Wednesday 26 June 2024 (78673)

Andrei, Camilla

Camilla swapped the TTFSS fiber box 78641 on June 25th in hopes that this would resolve the glitch issue.

However, it made no difference: see the attached figure from 20:40 UTC onward, which is when the TTFSS box was swapped.

Images attached to this comment