H1 SUS (SUS)
anthony.sanchez@LIGO.ORG - posted 14:32, Wednesday 09 October 2024 (80571)
Weekly In-Lock SUS Charge Measurements

FAMIS 28374 Weekly In-Lock SUS Charge Measurements

 

The nominal set of plots that the FAMIS task requests is plotted as usual.
ETMX looks fine
ETMY also looks fine
ITMX has a point on the plot that skews the scale so much that we can't really see any recent relative motion in the measurement.
The same thing happened with ITMY.

I'm not sure if this is actually useful, so here are the same plots with only 4 months of data instead, to get those really low points off the plots.

4 Month ETMX
4 Month ETMY
4 Month ITMX
4 Month ITMY

Maybe this isn't useful to anyone but I was mildly interested.

Images attached to this report
H1 General (ISC, OpsInfo)
oli.patane@LIGO.ORG - posted 14:03, Wednesday 09 October 2024 (80569)
filter_bank_changes script is now executable in medm

The filter_bank_changes script, which shows how filters and on/off switches within a filter bank have changed over a period of time (example output, original alog), is now executable within MEDM screens by doing right click > Execute > filter_bank_changes. It works similarly to ndscope and fmscan in that you have to select a channel on the MEDM screen. The channel can be any of the channels within the filter bank (ex. _SWSTAT, _INMON, _Name00, etc).
If you'd like to run it outside of an MEDM screen, the script can be found in /ligo/gitcommon/filterBankChanges/ and can be run with ./filterbankchanges.sh CHANNEL_NAME, where CHANNEL_NAME should end in _SWSTAT or one of the other aforementioned filter bank channel endings.
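For example (the channel name here is only an illustration of the expected form, not a specific recommendation):

./filterbankchanges.sh H1:SUS-ETMX_L2_LOCK_L_SWSTAT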

Images attached to this report
H1 PSL
sheila.dwyer@LIGO.ORG - posted 13:43, Wednesday 09 October 2024 (80566)
NPRO controller swapped, IFO relocked and glitches still present

Rick, Keita, Sheila, Daniel, remote help from Matt Heintze

This morning we swapped the NPRO laser controller S2200009 out for S2200008. 

Settings before we started: laser diode A set temperature 18.3 C, laser diode B set temperature 15.99 C, laser diode injection current 2.134 A, laser crystal set temperature 26.04 C, laser crystal actual temperature 26.10 C.

We went to the diode room and followed the procedure Ryan Short outlined to me to shut down the amplifiers: turning off amp2, then amp1, then closing the shutter. We then went one step beyond his instructions and shut off the NPRO. We swapped the controller along with all the cables, including the interlock.

When we powered on S2200008, laser diode A temperature was set to 17.14 C and B to 22.96 C. We adjusted the pots on the front panel until they matched the values we had written down from the original controller, and turned the knob on the front panel for the injection current to 0. Rick and I went to the laser diode room and turned the NPRO back on; Keita confirmed that this turned on the controller. We noticed that the laser diode 1 and 2 set temperatures were what we had set them to be by adjusting the pots for A and B, but the actual temperature readbacks weren't matching; we confirmed with photos that with the original controller the set and actual temperatures matched (I will attach a photo of this state). At the rack Keita turned the injection current up to about 100 mA; this didn't change the temperature readbacks.

We had a discussion with Matt Heintze, who agreed this is probably a readback issue and that it was safe to keep going. We think this is because we haven't adjusted the pots on the LIGO daughter board following T1900456. Keita slowly turned up the injection current knob while Rick and I watched from the diode room; the NPRO power came back to 1.8 W, which was the same as before. The laser diode actual power readback for diode 1 says 3.6 W, while it was 5 W with the original controller, and diode 2 says 2.48 W where it also said 5 W with the original controller. Since the power output is the same, we think these are also just readback problems due to not following T1900456. We opened the shutter, turned on the amplifiers after a few minutes, and set the power watchdogs back on, as well as the pump diode watchdog.

The PMC and FSS locked without issue. Daniel found that the squeezer laser frequency had to increase by 850 MHz to get the squeezer TTFSS locked, so we moved the NPRO temperature up on the FSS by about 0.3 K. After this the squeezer and ALS lasers locked easily, and Oli relocked the IFO.

The first attached photo is the NPRO screen on the beckhoff machine before the controller swap, the second photo is after.

While Oli was relocking we saw that there are still glitches, both of the attached screenshots were taken while the IFO was in the engage ASC state.

Seeing this, and based on Camilla's observation (80561) that the locklosses started on the same day that we redistributed gains to increase range in the IMC servo board (79998), we decided to revert that change, which was intended to help us ride out earthquakes. Daniel points out that this makes some sense: it might help to ride out the earthquake if we limit the range available to the CARM servo. We hope that this will help us ride out the glitches better without losing lock.

Rick went back to the diode room and reset the watchdogs after the laser had been running for about 2 hours.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 13:14, Wednesday 09 October 2024 (80567)
Back to Observing

After going down to swap the NPRO controller (80566), we are back to Observing

H1 DAQ
daniel.sigg@LIGO.ORG - posted 12:24, Wednesday 09 October 2024 (80565)
Atomic clock reset

As part of yesterday's maintenance, the atomic clock has been resynchronized with GPS. The tolerance has been reduced to 1000ns again. Will see how long it lasts this time.

LHO VE
david.barker@LIGO.ORG - posted 10:54, Wednesday 09 October 2024 (80564)
Wed CP1 Fill

Wed Oct 09 10:07:43 2024 INFO: Fill completed in 7min 40secs

Gerardo confirmed a good fill curbside. Note new fill time of 10am daily.

Images attached to this report
H1 SUS (SUS)
ryan.crouch@LIGO.ORG - posted 10:52, Wednesday 09 October 2024 (80536)
OPLEV charge measurements, ETMX, ETMY

I ran the OPLEV charge measurements for both of the ETMs yesterday morning.

ETMX's charge looks to be stagnant.

ETMY's charge looks to be on a small upward trend; the charge is above 50 V on LL_{P,Y} and UL_Y.

Images attached to this report
H1 General (CDS, Lockloss, PSL)
oli.patane@LIGO.ORG - posted 10:11, Wednesday 09 October 2024 (80563)
H1 DOWN for Corrective Maintenance

At 17:03 UTC I purposely broke lock so we could start corrective maintenance for the PSL. Not sure how long we will be down. We'll also be taking this opportunity to fix the camera issue that has been taking us out of Observing every hour since 8:25 UTC this morning (80556).

H1 ISC (PSL)
camilla.compton@LIGO.ORG - posted 09:27, Wednesday 09 October 2024 - last comment - 16:13, Friday 11 October 2024(80561)
First PSL/IMC caused NLN locklosses on 13th September

I added a tag to the lockloss tool to tag "IMC" if the IMC loses lock within 50 ms of the AS_A channel seeing the lockloss: MR !139. This will work for all future locklosses.

I then ran just the refined-time plugin on all NLN locklosses from the end of the emergency break (2024-08-21) until now in my personal lockloss account. The first lockloss from NLN tagged "IMC" was 2024-09-13 1410300819 (plot here); they then got more and more frequent: 2024-09-19, 2024-09-21, then almost every day since 2024-09-23.

Oli found that we've been seeing PSL/FSS glitches since before June (80520; they didn't check before that). The first lockloss tagged FSS_OSCILLATION (where FSS_FAST_MON > 3 in the 5 seconds before lockloss) was Tuesday September 17th (80366).
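As a rough illustration of the two checks described above, here is a minimal sketch (not the actual lockloss-tool plugin code; the full FSS channel name is an assumption):

from gwpy.timeseries import TimeSeries

def is_imc_lockloss(as_a_lockloss_gps, imc_lockloss_gps, window=0.05):
    # tag "IMC" if the IMC lost lock within 50 ms of AS_A seeing the lockloss
    return abs(imc_lockloss_gps - as_a_lockloss_gps) <= window

def is_fss_oscillation(lockloss_gps, threshold=3.0):
    # tag FSS_OSCILLATION if FSS_FAST_MON exceeds 3 in the 5 s before lockloss
    # (full channel name assumed here)
    data = TimeSeries.get('H1:PSL-FSS_FAST_MON_OUT_DQ', lockloss_gps - 5, lockloss_gps)
    return abs(data.value).max() > threshold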

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:51, Wednesday 09 October 2024 (80562)

Keita and Sheila discussed that on September 12th the IMC gain distribution was changed by 7 dB (same output but gain moved from input of board to output): 79998. We think this shouldn't affect the PSL but it could possibly have exacerbated the PSL glitching issues. We could try to revert this change if the PSL power chassis swap doesn't help.

camilla.compton@LIGO.ORG - 16:13, Friday 11 October 2024 (80623)Lockloss

On Wednesday 9th Oct the IMC gain redistribution was reverted: 80566. It seems like this change has helped reduce the locklosses from a glitching PSL/FSS but hasn't solved it completely.

  • In the 48 hours since the change we've had one lockloss tagged IMC: 1412609367
  • Before then (see testing lockloss website), we had 28 NLN locklosses tagged with IMC, on average ~2 per day.  

 

H1 General
oli.patane@LIGO.ORG - posted 07:47, Wednesday 09 October 2024 - last comment - 09:05, Wednesday 09 October 2024(80556)
Ops Day Shift Start

TITLE: 10/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY:

Observing and have been Locked for 7 hours. Overnight we went in and out of Observing many times; not sure why yet, but it looks like it wasn't because of squeezing at least.

Comments related to this report
camilla.compton@LIGO.ORG - 08:03, Wednesday 09 October 2024 (80557)TCS

One of these observing drops was for 90 seconds as the CO2Y laser unlocked and relocked at a slightly lower power. It is a known issue that the laser is coming to the end of its life, and we plan to swap it out in the coming ~weeks (79560).

Images attached to this comment
oli.patane@LIGO.ORG - 08:11, Wednesday 09 October 2024 (80558)

Looks like the beamsplitter camera got stuck, causing CAMERA_SERVO to turn the servos off and then back on, which worked. Since Dave had just had to restart that camera an hour earlier because of it getting stuck (80555), I wonder if it came back in a weird state?

ISC_LOCK:

2024-10-09_08:24:57.089769Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_08:25:10.719794Z ISC_LOCK [NOMINAL_LOW_NOISE.run] Unstalling CAMERA_SERVO
2024-10-09_09:25:01.088160Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_10:25:04.090580Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_11:25:07.094947Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_12:25:10.092275Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_13:07:53.531208Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: TCS_ITMY_CO2_PWR: has notification
2024-10-09_13:25:13.087490Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
2024-10-09_14:25:16.088011Z ISC_LOCK [NOMINAL_LOW_NOISE.run] USERMSG 0: CAMERA_SERVO: has notification
 

CAMERA_SERVO example:

2024-10-09_14:25:16.020827Z CAMERA_SERVO [CAMERA_SERVO_ON.run] USERMSG 0: ASC-CAM_PIT1_INMON is stuck! Going back to ADS
2024-10-09_14:25:16.021486Z CAMERA_SERVO [CAMERA_SERVO_ON.run] USERMSG 1: ASC-CAM_YAW1_INMON is stuck! Going back to ADS
2024-10-09_14:25:16.078955Z CAMERA_SERVO JUMP target: TURN_CAMERA_SERVO_OFF
2024-10-09_14:25:16.079941Z CAMERA_SERVO [CAMERA_SERVO_ON.exit]
2024-10-09_14:25:16.080398Z CAMERA_SERVO STALLED
2024-10-09_14:25:16.144160Z CAMERA_SERVO JUMP: CAMERA_SERVO_ON->TURN_CAMERA_SERVO_OFF
2024-10-09_14:25:16.144402Z CAMERA_SERVO calculating path: TURN_CAMERA_SERVO_OFF->CAMERA_SERVO_ON
2024-10-09_14:25:16.144767Z CAMERA_SERVO new target: DITHER_ON
2024-10-09_14:25:16.146978Z CAMERA_SERVO executing state: TURN_CAMERA_SERVO_OFF (500)
2024-10-09_14:25:16.148768Z CAMERA_SERVO [TURN_CAMERA_SERVO_OFF.main] ezca: H1:ASC-CAM_PIT1_TRAMP => 0
2024-10-09_14:25:16.149295Z CAMERA_SERVO [TURN_CAMERA_SERVO_OFF.main] ezca: H1:ASC-CAM_PIT1_GAIN => 0
...
2024-10-09_14:25:29.432120Z CAMERA_SERVO [TURN_CAMERA_SERVO_OFF.main] ezca: H1:ASC-CAM_YAW3_SW1 => 4
2024-10-09_14:25:29.683381Z CAMERA_SERVO [TURN_CAMERA_SERVO_OFF.main] ezca: H1:ASC-CAM_YAW3 => OFF: INPUT
2024-10-09_14:25:29.757840Z CAMERA_SERVO REQUEST: CAMERA_SERVO_ON
2024-10-09_14:25:29.757840Z CAMERA_SERVO STALL cleared
2024-10-09_14:25:29.757840Z CAMERA_SERVO calculating path: TURN_CAMERA_SERVO_OFF->CAMERA_SERVO_ON
2024-10-09_14:25:29.824193Z CAMERA_SERVO EDGE: TURN_CAMERA_SERVO_OFF->DITHER_ON
2024-10-09_14:25:29.824193Z CAMERA_SERVO calculating path: DITHER_ON->CAMERA_SERVO_ON
2024-10-09_14:25:29.824193Z CAMERA_SERVO new target: TURN_ON_CAMERA_FIXED_OFFSETS
2024-10-09_14:25:29.824193Z CAMERA_SERVO executing state: DITHER_ON (300)
2024-10-09_14:25:59.577353Z CAMERA_SERVO [DITHER_ON.run] ezca: H1:ASC-ADS_YAW5_DOF_GAIN => 20
2024-10-09_14:25:59.581068Z CAMERA_SERVO [DITHER_ON.run] timer['wait'] = 0
2024-10-09_14:25:59.767573Z CAMERA_SERVO EDGE: DITHER_ON->TURN_ON_CAMERA_FIXED_OFFSETS
2024-10-09_14:25:59.768055Z CAMERA_SERVO calculating path: TURN_ON_CAMERA_FIXED_OFFSETS->CAMERA_SERVO_ON
2024-10-09_14:25:59.768154Z CAMERA_SERVO new target: CAMERA_SERVO_ON
2024-10-09_14:25:59.768948Z CAMERA_SERVO executing state: TURN_ON_CAMERA_FIXED_OFFSETS (405)
2024-10-09_14:25:59.769677Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.enter]
2024-10-09_14:25:59.770639Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.main] timer['wait'] done
...
2024-10-09_14:26:33.755960Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.run] ADS clk off: ASC-ADS_YAW5_OSC_CLKGAIN to 0
2024-10-09_14:26:33.761759Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.run] ezca: H1:ASC-ADS_YAW5_OSC_CLKGAIN => 0.0
2024-10-09_14:26:33.763017Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.run] ezca: H1:ASC-ADS_YAW5_DEMOD_I_GAIN => 0
2024-10-09_14:26:37.898356Z CAMERA_SERVO [TURN_ON_CAMERA_FIXED_OFFSETS.run] timer['wait'] = 4

jonathan.hanks@LIGO.ORG - 08:44, Wednesday 09 October 2024 (80559)
From the h1digivideo2 cam26 server logs (/tmp/H1-VID-CAM26_gst.log), at 25 minutes past the error we see a timeout on the camera.

2024-10-09 12:25:16,806 UTC - INFO - startCamera done
2024-10-09 13:25:05,745 UTC - ERROR - Duplicate messages in the previous second: 0 ... psnap.snapCamera failed: timeout
2024-10-09 13:25:05,745 UTC - ERROR - Attempting to stop camera
2024-10-09 13:25:05,748 UTC - ERROR - Attempting to start camera
2024-10-09 13:25:05,750 UTC - INFO - Opened camera
2024-10-09 13:25:24,404 UTC - INFO - Duplicate messages in the previous second: 0 ... Opened camera
2024-10-09 13:25:24,427 UTC - INFO - setupCamera done
2024-10-09 13:25:24,441 UTC - INFO - startCamera done
2024-10-09 14:25:08,817 UTC - ERROR - Duplicate messages in the previous second: 0 ... psnap.snapCamera failed: timeout
2024-10-09 14:25:08,817 UTC - ERROR - Attempting to stop camera
2024-10-09 14:25:08,827 UTC - ERROR - Attempting to start camera
2024-10-09 14:25:08,828 UTC - INFO - Opened camera
2024-10-09 14:25:31,865 UTC - INFO - Opened camera
2024-10-09 14:25:31,888 UTC - INFO - Duplicate messages in the previous second: 0 ... setupCamera done
2024-10-09 14:25:31,904 UTC - INFO - startCamera done
2024-10-09 15:07:36,397 UTC - ERROR - Duplicate messages in the previous second: 0 ... psnap.snapCamera failed: could not grab frame
2024-10-09 15:07:36,397 UTC - ERROR - Attempting to stop camera
2024-10-09 15:07:36,446 UTC - ERROR - Attempting to start camera
2024-10-09 15:07:36,447 UTC - INFO - Opened camera
2024-10-09 15:07:36,463 UTC - INFO - setupCamera done
2024-10-09 15:07:36,476 UTC - INFO - startCamera done
2024-10-09 15:07:36,477 UTC - ERROR - setupReturn = True
2024-10-09 15:07:36,477 UTC - ERROR - Failed to get frame - try number: 1
2024-10-09 15:25:11,892 UTC - ERROR - Duplicate messages in the previous second: 0 ... psnap.snapCamera failed: timeout
2024-10-09 15:25:11,893 UTC - ERROR - Attempting to stop camera
2024-10-09 15:25:11,898 UTC - ERROR - Attempting to start camera
2024-10-09 15:25:11,900 UTC - INFO - Opened camera
2024-10-09 15:25:39,541 UTC - INFO - Opened camera
2024-10-09 15:25:39,564 UTC - INFO - setupCamera done
2024-10-09 15:25:39,580 UTC - INFO - startCamera done
oli.patane@LIGO.ORG - 09:05, Wednesday 09 October 2024 (80560)DetChar

Tagging DetChar - These glitches that happen once an hour around 11 Hz are because of this camera issue (the glitches are caused by the camera ASC turning off and ADS turning on). We are still having this issue and are planning on trying a correction at the next lockloss.

Images attached to this comment
H1 General (CDS)
anthony.sanchez@LIGO.ORG - posted 00:41, Wednesday 09 October 2024 (80555)
H1 stuck in ADS_TO_CAMERAS

TITLE: 10/09 Owl Shift: 0500-1430 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 2mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:


H1 called me at midnight to tell me that it was stuck in ADS_TO_CAMERAS.
I read the DIAG main messages and it mentioned the CAMERA_SERVO Guardian was the issue. I opened that up to find that the ASC-YAW1 camera was stuck...
This is not all that helpful of an error message by itself.
So I checked the alogs for something like "ASC-CAM stuck" and found Corey's alog!:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=79685
Then I checked the BS camera (cam26) on h1digivideo2 and noticed that the centroids were not updating. Yup, same problem! So I tried to restart the service and to stop & start it, none of which worked. Then I called Dave, who logged in and fixed it for me.

We are now Observing. Thanks Dave!

 

 

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Tuesday 08 October 2024 (80554)
OPS Eve Shift Summary

TITLE: 10/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

IFO is LOCKING at CHECK_IR

We're having locking issues, primarily due to an IMC that's taking a while to lock. There seems to be some glitchiness here too; sometimes when we lose lock the power stays at 10 W. We've gotten as high as PRMI/MICH_FRINGES but keep losing lock.

The last lock was fully auto (and DRMI locked immediately) with the one exception that IMC took 20 mins to lock.

EQ-Mode Worthy EQ coming from Canada (5.0 mag)

Lockloss 1: alog 80552

Lockloss 2: alog 80553
LOG:

None

 

H1 PSL (PSL)
ibrahim.abouelfettouh@LIGO.ORG - posted 21:00, Tuesday 08 October 2024 (80553)
Lockloss 03:52 UTC

FSS/PSL-caused lockloss - IMC and ASC lost lock within 20 ms. FSS glitchy behavior started 2 mins before the lockloss; at a threshold, we lost lock. Screenshots attached.

Issues have been happening since mid-Sept.

 

Images attached to this report
H1 SQZ
camilla.compton@LIGO.ORG - posted 10:09, Monday 07 October 2024 - last comment - 12:25, Thursday 10 October 2024(80506)
SQZ ASC turned on using AS42 on ZM4/6

Sheila, Camilla.

New SQZ ASC using AS42 signals with feedback to ZM4 and ZM6 tested and implemented. We still need to watch that this can keep a good SQZ alignment during thermalization. In O4a we used a SQZ ASC with ZM5/6; we have not had a SQZ ASC for the majority of O4b.

Prep to improve SQZ:

Testing ASC from 80373:

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 17:16, Monday 07 October 2024 (80519)OpsInfo

In the first 20 minutes of the lock, the SQZ ASC appears to be working well, plot.

Note to operator team: if the squeezing gets really bad, you should be able to use the SQZ Overview > IFO ASC (black linked button) > "!graceful clear history" script to turn off the SQZ ASC. Then change use_ifo_as42_asc to False in /opt/rtcds/userapps/release/sqz/h1/guardian/sqzparams.py, go through NO_SQUEEZING then FREQ_DEP_SQUEEZING in SQZ_MANAGER, and accept the SDFs for not using SQZ ASC. If SQZ still looks bad, put the ZM4/6 OSEMs (H1:SUS-ZM4/6_M1_DAMP_P/Y_INMON) back to where they were when squeezing was last good and, if needed, run scan SQZ alignment and scan SQZ angle with SQZ_MANAGER.
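For reference, the sqzparams.py change described above amounts to something like this (sketch of the relevant line only):

# in /opt/rtcds/userapps/release/sqz/h1/guardian/sqzparams.py
use_ifo_as42_asc = False   # set back to True to re-enable the AS42 SQZ ASC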

Images attached to this comment
camilla.compton@LIGO.ORG - 09:37, Tuesday 08 October 2024 (80534)

Sheila moved the  "0.01:0" integrators from the ASC_POS/ANG_P/Y filters into the ZM4/5/6_M1_LOCK_P/Y filter banks.

This will allow us to more easily adjust the ASC gains and to use the guardian ZM offload states. We turned them on for ZM4/6, edited OFFLOAD_SQZ_ASC to offload for ZM4, 5, 6, and tested by putting an offset on ZM4. We put ZM4/6 back to the positions they were in in lock via the OSEMs. SDFs for filters accepted. I removed the "!offload AS42" button from the SQZ > IFO ASC screen (linked to sqz/h1/scripts/ASC/offload_IFO_AS42_ASC.py) as it caused a lockloss yesterday.

Images attached to this comment
camilla.compton@LIGO.ORG - 14:10, Wednesday 09 October 2024 (80570)

Oli tested the SQZ_MANAGER OFFLOAD_SQZ_ASC guardian state today and it worked well. We still need to make the state requestable.

camilla.compton@LIGO.ORG - 12:25, Thursday 10 October 2024 (80594)

The ASC now turns off before SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS. It will check if the ASC is on via H1:SQZ-ASC_WFS_SWITCH and turn the ASC off before scanning alignment or angle (a rough sketch of this check is below).

We changed the paths so that, to get from SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS back to squeezing, the guardian will go through SQZ_ASC_FDS/FIS to turn the ASC back on afterwards.
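A minimal guardian-style sketch of the check described above (not the actual SQZ_MANAGER code; it assumes a value of 1 on the switch channel means the ASC is on, and in practice turning the ASC off involves more steps than shown here):

# roughly, at the start of the scan states:
if ezca['SQZ-ASC_WFS_SWITCH'] == 1:
    # ASC is on; turn it off before scanning alignment or angle
    ezca['SQZ-ASC_WFS_SWITCH'] = 0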

H1 DetChar (DetChar, Lockloss)
bricemichael.williams@LIGO.ORG - posted 11:33, Thursday 12 September 2024 - last comment - 16:04, Wednesday 30 October 2024(80001)
Lockloss Channel Comparisons

-Brice, Sheila, Camilla

We are looking to see if there are any aux channels that are affected by certain types of locklosses. Understanding whether a threshold is reached in the last few seconds prior to a lockloss can help determine the type of lockloss and which channels are affected more than others.

We have gathered a list of lockloss times (using https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi) with:

  1. only Observe and Refined tags (plots, histogram)
  2. only Observe, Refined, and Windy tags (plots, histogram)
  3. only Observe, Refined, and Earthquake tags (plots, histogram)
  4. Observe, Refined, and Microseism tags (note: all of these also have an EQ tag, and all but the last 2 have an anthropogenic tag) (plots, histogram)

(issue: the plots for the first 3 lockloss types wouldn't upload to this aLog. Created a dcc for them: G2401806)

We wrote a Python script to pull the data of various auxiliary channels for the 15 seconds before a lockloss. A graph is created for each channel, a trace for each lockloss time is stacked on each of the graphs, and the graphs are saved to a png file. All the graphs have been shifted so that the time of lockloss is at t=0.

Histograms for each channel are created that compare the maximum displacement from zero for each lockloss time. There is also a stacked histogram based on 12 quiet-microseism times (all taken from between 0900-0930 UTC on 4.12.24). The histograms are created using only the last second of data before lockloss, are normalized by dividing by the number of lockloss times, and are saved to a separate png file from the plots.

These channels are provided via a list inside the Python file and can be easily adjusted to fit a user's needs. We used the following channels (a rough sketch of the data-pull step is shown after the list):

channels = ['H1:ASC-AS_A_DC_NSUM_OUT_DQ','H1:ASC-DHARD_P_IN1_DQ','H1:ASC-DHARD_Y_IN1_DQ','H1:ASC-MICH_P_IN1_DQ', 'H1:ASC-MICH_Y_IN1_DQ','H1:ASC-SRC1_P_IN1_DQ','H1:ASC-SRC1_Y_IN1_DQ','H1:ASC-SRC2_P_IN1_DQ','H1:ASC-SRC2_Y_IN1_DQ', 'H1:ASC-PRC2_P_IN1_DQ','H1:ASC-PRC2_Y_IN1_DQ','H1:ASC-INP1_P_IN1_DQ','H1:ASC-INP1_Y_IN1_DQ','H1:ASC-DC1_P_IN1_DQ', 'H1:ASC-DC1_Y_IN1_DQ','H1:ASC-DC2_P_IN1_DQ','H1:ASC-DC2_Y_IN1_DQ','H1:ASC-DC3_P_IN1_DQ','H1:ASC-DC3_Y_IN1_DQ', 'H1:ASC-DC4_P_IN1_DQ','H1:ASC-DC4_Y_IN1_DQ']
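As a rough illustration of the data-pull and plotting step (a minimal sketch assuming gwpy is available; the channel subset and lockloss time below are placeholders, and this is not the actual analysis script):

from gwpy.timeseries import TimeSeriesDict
import matplotlib.pyplot as plt

channels = ['H1:ASC-DHARD_P_IN1_DQ', 'H1:ASC-DC2_P_IN1_DQ']  # subset, for illustration
lockloss_times = [1412609367]  # GPS times from the lockloss tool

fig, axes = plt.subplots(len(channels), 1, sharex=True)
for t0 in lockloss_times:
    # pull the 15 seconds of data before each lockloss
    data = TimeSeriesDict.get(channels, t0 - 15, t0)
    for ax, chan in zip(axes, channels):
        ts = data[chan]
        # shift times so the lockloss is at t = 0 and stack this lockloss on the graph
        ax.plot(ts.times.value - t0, ts.value)
for ax, chan in zip(axes, channels):
    ax.set_ylabel(chan)
axes[-1].set_xlabel('Time from lockloss [s]')
fig.savefig('lockloss_channels.png')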
Images attached to this report
Comments related to this report
bricemichael.williams@LIGO.ORG - 17:03, Wednesday 25 September 2024 (80294)DetChar, Lockloss

After talking with Camilla and Sheila, I adjusted the histogram plots. I excluded the last 0.1 sec before lockloss from the analysis. This is because, in the original post's plots, the H1:ASC-AS_A_NSUM_OUT_DQ channel has most of the last-second (blue) histogram at a value of 1.3x10^5, indicating that the last second of data is capturing the lockloss itself causing a runaway in the channels. I also combined the ground-motion locklosses (EQ, Windy, and Microseism) into one set of plots (45 locklosses) and left the Observe (and Refined) tagged locklosses as another set of plots (15 locklosses). Both groups of plots have 2 stacked histograms for each channel (a sketch of the weighting appears after the list):

  1. Blue:
    • the max displacement from zero between one second before and 0.1 second before lockloss, for each lockloss. 
    • The data is one second before until 0.1 second before lockloss, for each lockloss
    • the histogram is the max displacement from zero for each lockloss
    • The counts are weighted as 1/(number of locklosses in this data set) (i.e: the total number of counts in the histogram)
  2. Red:
    • I took all the data points from eight seconds before until 2 seconds before lockloss for each lockloss.
    • I then down-sampled the data points from 256 Hz to 16Hz sampling rate by taking every 16th data point.
    • The histogram is the displacement from zero of these down-sampled points
    • The counts are weighted as 1/(number of down-sampled data points for each lockloss) (i.e.: the total number of counts in the histogram)
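A minimal sketch of how the two weighted histograms might be built for one channel (assuming each lockloss's data is already in hand as a numpy array sampled at 256 Hz and ending at the lockloss time; an illustration, not the actual code):

import numpy as np
import matplotlib.pyplot as plt

fs = 256  # sample rate of these ASC channels [Hz]

def blue_values(segments):
    # max |displacement| between 1 s and 0.1 s before lockloss, one value per lockloss
    return [np.abs(seg[-int(1.0 * fs):-int(0.1 * fs)]).max() for seg in segments]

def red_values(segments):
    # every 16th point (256 Hz -> 16 Hz) from 8 s to 2 s before lockloss, pooled over locklosses
    pooled = []
    for seg in segments:
        pooled.extend(np.abs(seg[-8 * fs:-2 * fs:16]))
    return pooled

def plot_hists(segments, bins=20):
    blue, red = blue_values(segments), red_values(segments)
    # weight each histogram so its counts sum to 1
    plt.hist(red, bins=bins, weights=np.full(len(red), 1.0 / len(red)), color='r', alpha=0.5)
    plt.hist(blue, bins=bins, weights=np.full(len(blue), 1.0 / len(blue)), color='b', alpha=0.5)
    plt.savefig('histograms.png')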

Take notice of the histogram for the H1:ASC-DC2_P_IN1_DQ channel for the ground-motion locklosses. In the last second before lockloss (blue), we can see a bimodal distribution with the right grouping centered around 0.10. The numbers above the blue bars are the percentage of the counts in that bin: about 33.33% is in the grouping around 0.10. This is in contrast to the distribution for the Observe/Refined locklosses, where the entire (blue) distribution is under 0.02. This could indicate that a threshold could be placed on this channel for lockloss tagging. More analysis will be required before that (I am going to next look at times without locklosses for comparison).

 

Images attached to this comment
bricemichael.williams@LIGO.ORG - 14:17, Wednesday 09 October 2024 (80568)DetChar, Lockloss

I started looking at the DC2 channel and the REFL_B channel, to see if there is a threshold in REFL_B that can be used for a new lockloss tag. I plotted the last eight seconds before lockloss for the various lockloss times. This time I split the times into different graphs based on whether the DC2 max displacement from zero in the last second before lockloss was above 0.06 (based on the histogram in the previous comment): Greater = the max displacement is greater than 0.06, Less = the max displacement is less than 0.06. However, I discovered that some of the locklosses that are above 0.06 for the DC2 channel are failing the logic test in the code: they get considered as having a max displacement less than 0.06 and get plotted on the lower plots. I wonder if this is also happening in the histograms, but that would only mean we are underestimating the number of locklosses above the threshold. This could be suppressing possible bimodal distributions for other histograms as well. (Looking into debugging this.)

I split the locklosses into 5 groups of 8 and 1 group of 5 to make it easier to distinguish between the lines in the plots.

Based on the plots, I think a threshold for H1:ASC-REFL_B_DC_PIT_OUT_DQ would be 0.06 in the last 3 seconds prior to lockloss.
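For reference, one way the pass/fail split on the DC2 channel might be expressed (a minimal sketch with a hypothetical dc2_segments dictionary; not the actual code that had the logic issue):

import numpy as np

dc2_segments = {}  # hypothetical: lockloss GPS time -> last-8-s DC2 data array (256 Hz)
threshold = 0.06
fs = 256  # Hz

greater, less = [], []
for t0, seg in dc2_segments.items():
    # max |displacement| in the last second before lockloss, excluding the final 0.1 s
    max_disp = np.abs(seg[-int(1.0 * fs):-int(0.1 * fs)]).max()
    (greater if max_disp > threshold else less).append(t0)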

 

 

Images attached to this comment
bricemichael.williams@LIGO.ORG - 11:30, Tuesday 15 October 2024 (80678)DetChar, Lockloss

Fixed the logic issue for splitting the plots into pass/fail of the 0.06 threshold, as seen in the plot.

The histograms were unaffected by the issue.

Images attached to this comment
bricemichael.williams@LIGO.ORG - 16:04, Wednesday 30 October 2024 (80949)

Added code to the GitLab.
