H1 General
ryan.crouch@LIGO.ORG - posted 16:30, Monday 04 November 2024 (81054)
OPS Monday EVE shift start

TITLE: 11/05 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 32mph Gusts, 18mph 3min avg
    Primary useism: 0.09 μm/s
    Secondary useism: 0.51 μm/s
QUICK SUMMARY:

LHO General (PSL)
ryan.short@LIGO.ORG - posted 16:30, Monday 04 November 2024 (81049)
Ops Day Shift Summary

TITLE: 11/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Environment - Wind
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: H1 has been down all shift due to high microseism for the first half of the day and high winds for the second. Meanwhile, we've been able to continue glitch investigations with the PSL/IMC from alog80990. We sat with the IMC locked and ISS off for 2 hours starting at 21:37 and saw no fast glitches (trend in first attachment, white background). After turning the ISS back on, things were calm for a while before the IMC lost lock at 23:54 (second attachment, black background). There were three more IMC locklosses within 10 minutes of this, but things have been steady again since.

LOG:

Start Time | System | Name            | Location | Laser_Haz | Task                                 | Time End
21:41      | SAF    | Laser           | LVEA     | YES       | LVEA is laser HAZARD                 | Ongoing
16:25      | FAC    | Karen           | MY       | n         | Technical cleaning                   | 17:49
18:18      | FAC    | Kim             | MX       | n         | Technical cleaning                   | 19:44
18:18      | CDS    | Fil             | CER      | -         | Power cycling PSL chassis            | 19:14
18:58      | PEM    | Robert, Lance   | LVEA     | -         | Check on magnetometer in PSL racks   | 19:03
19:44      | CDS    | Erik, Fil, Dave | CER      | -         | Swapping ADC card                    | 20:41
20:41      | PEM    | Robert, Lance   | LVEA     | -         | Installing magnetometer in PSL racks | 20:58
21:18      | TCS    | Camilla         | MER      | n         | Fixing water container               | 21:45
21:21      | CDS    | Fil             | MY       | n         | Picking up equipment                 | 22:21
Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 13:29, Monday 04 November 2024 - last comment - 22:16, Monday 04 November 2024(81051)
Calibrated H1/L1 PRCL and MICH coupling comparisons

This alog is a continuation of previous efforts to correctly calibrate and compare the LSC couplings between Hanford and Livingston. Getting these calibrations right has been a real pain, and there may still be an error in these results.

These couplings are measured by taking the transfer function between DARM and the LSC CAL CS CTRL signal. All couplings were measured without feedforward cancelling the coupling under test. The Hanford measurements were taken during the March 2024 commissioning break. The Livingston MICH measurement is from Aug 29 2023 and the PRCL measurement is from June 23 2023.

As an additional note, during the Hanford MICH measurement, both MICH and SRCL feedforward were off. However, for the Livingston MICH measurement, SRCL feedforward was on. For the PRCL measurements at both sites, both MICH and SRCL feedforward were engaged.
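
For anyone reproducing this offline, here is a minimal sketch of the CSD/PSD transfer function estimate (the actual measurements used DTT templates; the CTRL channel name and GPS span below are placeholders, not values from this work):

# Sketch: coupling TF between an LSC control signal and DARM, as CSD/PSD.
import numpy as np
from scipy.signal import csd, welch
from gwpy.timeseries import TimeSeries

start, end = 1394000000, 1394000600  # hypothetical GPS times
darm = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', start, end)
ctrl = TimeSeries.get('H1:LSC-MICH_OUT_DQ', start, end)  # placeholder CTRL channel

fs = 2048  # common sample rate for the spectral estimates
x = ctrl.resample(fs).value
y = darm.resample(fs).value

f, Pxy = csd(x, y, fs=fs, nperseg=8 * fs)  # cross-spectral density
_, Pxx = welch(x, fs=fs, nperseg=8 * fs)   # CTRL power spectral density
_, Pyy = welch(y, fs=fs, nperseg=8 * fs)   # DARM power spectral density

tf = Pxy / Pxx                             # coupling TF, uncalibrated units
coherence = np.abs(Pxy)**2 / (Pxx * Pyy)   # trust only high-coherence bins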

Results:

The first plot attached shows a calibrated comparison between the MICH, PRCL and SRCL couplings at LHO.

The second plot shows a calibrated comparison between the Hanford and Livingston MICH couplings. I also included a line indicating 1/280, an estimated coupling level for MICH based on an arm cavity finesse of 440. Both sites have flat coupling between about 20-60 Hz. There is a shallow rise in the coupling above 60 Hz; I am not sure if that's real or an artifact of incorrect calibration. The Hanford coupling below 20 Hz has a steeper response, which looks like some cross-coupling, perhaps with SRCL (it looks about 1/f^2 to me). Maybe this is present because SRCL feedforward was off.
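
For reference, the 1/280 level follows from the usual finesse scaling for the Michelson-to-DARM coupling (my reading of where this number comes from, not stated explicitly above):

\[ \frac{\pi}{2\mathcal{F}} = \frac{\pi}{2 \times 440} \approx \frac{1}{280} \]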

The third plot shows a calibrated comparison between the Hanford and Livingston PRCL couplings. I have no sense of what this coupling should look like. If the calibration here is correct, it indicates that the PRCL coupling at Hanford is about an order of magnitude higher than at Livingston. Whatever coupling is present has a different shape at the two sites, so I don't really know what to make of this.

Calibration notes:

The Hanford measurements used H1:CAL-DELTAL_EXTERNAL_DQ and the DARM calibration from March 2024 (/ligo/groups/cal/H1/reports/20240311T214031Z/deltal_external_calib_dtt.txt).

The Livingston measurement used L1:OAF-CAL_DARM_DQ and a DARM calibration that Dana Jones used in her previous work (74787, saved in /ligo/home/dana.jones/Documents/cal_MICH_to_DARM/L1_DARM_calibration_to_meters.txt).

LHO MICH calibration: I updated the CAL CS filters to correctly match the current drive filters. However, I made the measurement on March 11, before catching some errors in the filters: I had incorrectly applied a 200:1 filter, and multiplied by sqrt(1/2) when I should have multiplied by sqrt(2) (76261). Therefore, my calibration includes a 1:200 filter and a factor of 2 to compensate for these mistakes. Additionally, my calibration includes a 1e-6 gain to convert from um to m, and an inverted whitening filter [100, 100:1, 1]. This is all saved in a DTT template: /ligo/home/elenna.capote/LSC_calibration/MICH_DARM_cal.xml

LLO MICH calibration: I started with Dana Jones' template (74787), and copied it over into my directory: /ligo/home/elenna.capote/LSC_calibration/LLO_MICH.xml. I inverted the whitening filter using [100,100,100,100,100:1,1,1,1,1] and applied a gain of 1e-6 to convert um to m.

LHO PRCL calibration: I inverted the whitening using [100,100:1,1] and converted from um to m with 1e-6.

LLO PRCL calibration: I inverted the whitening using [10,10,10:1,1,1] and converted from um to m with 1e-6.
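
To make the whitening-inversion step concrete, here is a small sketch of applying a foton-style [zeros:poles] filter (frequencies in Hz) to an exported transfer function; the helper name and gain normalization are my assumptions, not taken from the templates:

# Sketch: apply an 'inverse whitening' zpk filter and um -> m conversion.
# [100,100:1,1] is read as zeros at 100 Hz (x2), poles at 1 Hz (x2).
import numpy as np

def zpk_response(zeros_hz, poles_hz, f_hz):
    """Analog zpk frequency response with real poles/zeros given in Hz.
    Unity leading coefficient; foton's gain normalization may differ."""
    s = 2j * np.pi * f_hz
    num = np.prod([s + 2 * np.pi * z for z in zeros_hz], axis=0)
    den = np.prod([s + 2 * np.pi * p for p in poles_hz], axis=0)
    return num / den

f = np.logspace(0, 3, 1000)                       # 1 Hz - 1 kHz grid
inv_whiten = zpk_response([100, 100], [1, 1], f)  # the [100,100:1,1] filter
# tf_raw: exported DARM/CTRL transfer function on the same frequency grid
# tf_cal = tf_raw * inv_whiten * 1e-6             # undo whitening, um -> m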

I exported the calibrated traces to plot myself. Plotting code and plots saved in /ligo/home/elenna.capote/LSC_calibration

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 22:16, Monday 04 November 2024 (81060)

Evan Hall has a nice plot of PRCL coupling from O1 in his thesis, Figure 2.16 on page 37; I have attached a screen grab of it. It appears as if the PRCL coupling now in O4 is lower than in this measurement (from, I assume, O1): eyeballing about 4e-4 m/m at 20 Hz now in O4 compared to about 2e-3 m/m at 20 Hz in Evan's plot.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 12:45, Monday 04 November 2024 (81050)
Mon CP1 Fill

Mon Nov 04 10:10:29 2024 INFO: Fill completed in 10min 26secs

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:59, Monday 04 November 2024 - last comment - 09:28, Tuesday 05 November 2024(81048)
Power cycle of h1psl0

WP12186

Richard, Fil, Erik, Dave:

We performed a complete power cycle of h1psl0. Note, this is not on the Dolphin fabric, so no fencing was needed. The procedure was as follows:

The system was power cycled at 10:11 PDT. When the IOP model started, it reported a timing error: the duotone signal (ADC0_31) was a flat line at about 8000 counts with only a few counts of noise.

Erik thought the timing card had not powered up correctly, so we did a second round of power cycles at 10:30 and this time the duotone was correct.

NOTE: the second ADC failed its AUTOCAL on both restarts. This is the PSL FSS ADC.

If we continue to have FSS issues, the next step is to replace the h1pslfss model's ADC and 16bit DAC cards.

[   45.517590] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 0 : Took 181 ms : ADC AUTOCAL PASS
[   45.705599] h1ioppsl0: ERROR - GSC_16AI64SSA : devNum 1 : Took 181 ms : ADC AUTOCAL FAIL
[   45.889643] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 2 : Took 181 ms : ADC AUTOCAL PASS
[   46.076046] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 3 : Took 181 ms : ADC AUTOCAL PASS

Comments related to this report
david.barker@LIGO.ORG - 13:00, Monday 04 November 2024 (81052)

We decided to go ahead and replace the h1pslfss model's ADC and DAC cards: the ADC because of the continuous AUTOCAL failures, and the DAC to replace an aging card which might be glitching.

11:30 Powered the system down, replaced the second ADC and second DAC cards (see attached IO chassis drawing).

When the system was powered up we had good news and bad news. The good news: ADC1 AUTOCAL passed, after the previous card had been failing continually since at least Nov 2023. The bad news: we once again had no duotone signal in the ADC0_31 channel; again it was a DC signal, with amplitude 8115±5 counts.

11:50 Powered down for a fourth time today, replaced the timing card and ADC0's interface card (see drawing).

12:15 Powered the system back up; this time everything looks good. ADC1 AUTOCAL passed again, and the duotone looks correct.

Note that the new timing card's duotone crossing time is 7.1 µs, while the old card's crossing was 7.6 µs.

Images attached to this comment
david.barker@LIGO.ORG - 13:04, Monday 04 November 2024 (81053)

Here is a summary of the four power cycles of h1psl0 we did today:

Restart | ADC1 AUTOCAL    | Timing Card Duotone
10:11   | FAIL            | BAD
10:30   | FAIL            | GOOD
11:30   | (new card) PASS | BAD
12:15   | (new card) PASS | (new cards) GOOD


david.barker@LIGO.ORG - 15:18, Monday 04 November 2024 (81055)

Card Serial Numbers

Card          | New (installed)    | Old (removed)
64AI64 ADC    | 211109-24          | 110203-18
16AO16 DAC    | 230803-05 (G22209) | 100922-11
Timing Card   | S2101141           | S2101091
ADC Interface | S2101456           | S1102563


david.barker@LIGO.ORG - 09:28, Tuesday 05 November 2024 (81067)

Detailed timeline:

Mon04Nov2024
LOC TIME HOSTNAME     MODEL/REBOOT
10:20:14 h1psl0       ***REBOOT***
10:21:15 h1psl0       h1ioppsl0   
10:21:28 h1psl0       h1psliss    
10:21:41 h1psl0       h1pslfss    
10:21:54 h1psl0       h1pslpmc    
10:22:07 h1psl0       h1psldbb    
10:33:20 h1psl0       ***REBOOT***
10:34:21 h1psl0       h1ioppsl0   
10:34:34 h1psl0       h1psliss    
10:34:47 h1psl0       h1pslfss    
10:35:00 h1psl0       h1pslpmc    
10:35:13 h1psl0       h1psldbb    
11:43:20 h1psl0       ***REBOOT***
11:44:21 h1psl0       h1ioppsl0   
11:44:34 h1psl0       h1psliss    
11:44:47 h1psl0       h1pslfss    
11:45:00 h1psl0       h1pslpmc    
11:45:13 h1psl0       h1psldbb    
12:15:47 h1psl0       ***REBOOT***
12:16:48 h1psl0       h1ioppsl0   
12:17:01 h1psl0       h1psliss    
12:17:14 h1psl0       h1pslfss    
12:17:27 h1psl0       h1pslpmc    
12:17:40 h1psl0       h1psldbb    

H1 PSL (OpsInfo)
ryan.short@LIGO.ORG - posted 10:33, Monday 04 November 2024 (81047)
PSL_FSS Guardian Update

C. Compton, R. Short

I've updated the PSL_FSS Guardian so that it won't jump into the FSS_OSCILLATING state whenever there's just a quick glitch in the FSS_FAST_MON_OUTPUT channel. Whenever the FSS fastmon channel is over its threshold of 0.4, the FSS_OSCILLATION channel jumps to 1, and the PSL_FSS Guardian responds by dropping the common and fast gains to -10 dB, then stepping them back up to what they were before. This is meant to catch a servo oscillation in the FSS, but it has also been triggering on FSS glitches. Since moving the gains around makes it difficult for the IMC to lock, the Guardian should now only do this when there's an actual oscillation, which should save time relocking the IMC.
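
For illustration, a schematic of the intended glitch-vs-oscillation logic (a sketch only, not the production Guardian code; the persistence window, polling cadence, and full channel name are assumptions):

# Sketch: distinguish a sustained FSS oscillation from a single fast glitch
# by requiring the fastmon excursion to persist before reacting.
import time

OSC_THRESHOLD = 0.4   # fastmon threshold quoted above
HOLD_SECONDS = 0.5    # hypothetical persistence window
POLL_SECONDS = 0.05   # hypothetical polling cadence

def fss_is_oscillating(read):
    """`read()` returns |H1:PSL-FSS_FAST_MON_OUTPUT| (channel name assumed)."""
    t0 = time.monotonic()
    while time.monotonic() - t0 < HOLD_SECONDS:
        if read() < OSC_THRESHOLD:
            return False   # excursion died away: just a glitch, leave gains alone
        time.sleep(POLL_SECONDS)
    return True            # sustained excursion: drop gains to -10 dB, then restore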

H1 PSL
ryan.short@LIGO.ORG - posted 08:28, Monday 04 November 2024 (81045)
PSL 10-Day Trends

FAMIS 31058

Trends are all over the place in the last 10 days due to several incursions, but I mostly tried to focus on how things have looked since we brought the PSL back up fully last Tuesday (alog 80929). Generally things have been fairly stable, and at least for now, I don't see the PMC reflected power slowly increasing anymore.

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 07:39, Monday 04 November 2024 (81044)
Ops Day Shift Start

TITLE: 11/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 28mph Gusts, 19mph 3min avg
    Primary useism: 0.07 μm/s
    Secondary useism: 0.56 μm/s
QUICK SUMMARY: H1 has been sitting in PREP_FOR_LOCKING since 08:05 UTC. Since the secondary microseism has been slowly decreasing, I'll try locking H1 and we'll see how it goes.

H1 General
ryan.crouch@LIGO.ORG - posted 22:01, Sunday 03 November 2024 - last comment - 06:48, Monday 04 November 2024(81036)
OPS Sunday eve shift summary

TITLE: 11/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism, LOCK_ACQUISITION
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

We spent most of the day with the secondary microseism mostly above the 90th percentile; it started to decrease ~8 hours ago but is still about half above the 90th percentile. The elevated microseism today is from both the storm near Greenland and the one near the Aleutians, as seen in the high phase difference between each arm and the corner station (bottom plot).

We keep losing lock at the beginning of LOWNOISE_ESD_ETMX. I see some ASC ringups on the top of NUC29 (CHARD_P, I think?), no FSS glitches before the locklosses, and no tags on the lockloss tool besides WINDY. It's always a few seconds into the run method. I was able to avoid this by slowly stepping through some of the higher states (PREP_ASC, ENGAGE_ASC, MAX_POWER, LOWNOISE_ASC). I'm not sure which of these is the most relevant to avoiding the ringup; probably not the ASC states, as when I paused there but forgot MAX_POWER I still saw the ringup.

LOG: No log.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 06:48, Monday 04 November 2024 (81043)

For the OWL shift, I received the 12:03am PT notification, but I had a phone issue.

H1 General
oli.patane@LIGO.ORG - posted 16:39, Sunday 03 November 2024 (81041)
Ops Day Shift End

TITLE: 11/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently still trying to relock, and at POWER_25W. All day we've been working on getting back up, but the secondary microseism is so high. It is slowly coming down though. At one point, we were in OMC_WHITENING for over an hour trying to damp violins so we could go into NLN,  but we lost lock before we could get there.
LOG:

15:30 In DOWN due to very high secondary useism
15:38 Started relocking with an initial alignment
    - When we got to PRC, I had to do the Pausing-PSL_FSS-Until-IMC-Is-Locked thing again (see 81022)
    - 16:04 Initial alignment done, relocking
    - Lockloss from ACQUIRE_DRMI_1F, OFFLOAD_DRMI_ASC, ENGAGE_ASC_FOR_FULL_IFO, TRANSITION_FROM_ETMX
    - 18:07 Sitting in DOWN for a bit
    - 18:45 Started relocking
    - More locklosses
    - 21:48 Lost lock after sitting in OMC_WHITENING damping violins for over an hour
    - More trying to lock and losing lock                                                                                                                                               

Start Time | System | Name   | Location | Laser_Haz | Task                         | Time End
17:37      | PEM    | Robert | CER      | n         | Improving ground measurement | 17:44
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 16:34, Sunday 03 November 2024 - last comment - 10:25, Monday 04 November 2024(81037)
Looking at locklosses from over the weekend

Most of the locklosses over this weekend have the IMC tag, and those do show the IMC losing lock at the same time as AS_A. Since I don't have any further insight into those, I instead want to point out a few locklosses whose causes are different from what we've been seeing lately.

2024-11-02 04:17UTC

2024-11-02 17:13 UTC

2024-11-02 18:38:30.844238 UTC

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:25, Monday 04 November 2024 (81046)

I checked some normal NLN times, and the H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ channel does not normally drop below 0.31 while we are in NLN (plot). At Oli's lockloss times above, it drops to 0.28 when we lose lock. Maybe we can edit the PSL glitch scripts (80902) to check this channel.
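
A minimal sketch of such a check (the GPS time below is a placeholder; 0.31 is the quiet-NLN floor quoted above):

# Sketch: flag a lockloss if the ISS AOM driver monitor dipped below its
# quiet-NLN floor of ~0.31 in the seconds around the lockloss time.
from gwpy.timeseries import TimeSeries

THRESHOLD = 0.31
t_lockloss = 1414800000  # hypothetical GPS time

mon = TimeSeries.get('H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ',
                     t_lockloss - 10, t_lockloss + 2)
low = mon.value.min()
if low < THRESHOLD:
    print(f"AOM driver mon dipped to {low:.3f} -> candidate PSL/ISS tag")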

Images attached to this comment
H1 General
ryan.crouch@LIGO.ORG - posted 16:30, Sunday 03 November 2024 (81039)
OPS Sunday EVE shift start

TITLE: 11/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 22mph Gusts, 18mph 3min avg
    Primary useism: 0.13 μm/s
    Secondary useism: 0.60 μm/s
QUICK SUMMARY:

H1 TCS
oli.patane@LIGO.ORG - posted 16:13, Sunday 03 November 2024 (81040)
TCS Chiller Water Level Top-Off FAMIS

Closes FAMIS#27801

Last checked a week ago by Camilla, but since the water swap was done recently, I am doing the next check only a week later.

CO2X

CO2Y

There was no water in the leak cup.

LHO VE
david.barker@LIGO.ORG - posted 10:54, Sunday 03 November 2024 (81035)
Sun CP1 Fill

Sun Nov 03 10:13:13 2024 INFO: Fill completed in 13min 10secs


Images attached to this report
H1 ISC (ISC)
marc.pirello@LIGO.ORG - posted 15:12, Tuesday 17 September 2024 - last comment - 20:25, Sunday 03 November 2024(80147)
SUS-ETMX Ligo DAC 32 (LD32) testing at EX (continued)

Building on last week's work, we installed a second PI AI chassis (S1500301) in order to keep the PI signals separate from the ESD driver signals. The original PI AI chassis is S1500299.

We routed LD32 Bank 0 through the first PI AI chassis to the ESD drive L3, while keeping the old ESD driver signal driving the PI through the new PI AI chassis.

We routed the LD32 Bank 1 to the L2 & L1 suspension drive.

We did not route LD32 Bank 2 or Bank 3 to any suspensions.  The M0 and R0 signals are still being driven by the 18 bit DACs.

The testing did not go as smoothly as planned: a watchdog on DAC slot 5 (the L1 & L2 drive 20-bit DAC) continuously tripped the ESD reset line. We solved this by attaching that open DAC port (slot 5) to the PI AI chassis to clear the WD error.

Looks like we made it to observing.

F. Clara, R. McCarthy, F. Mera, M. Pirello, D. Sigg

Comments related to this report
jenne.driggers@LIGO.ORG - 17:54, Tuesday 17 September 2024 (80157)DetChar-Request

Part of the implication of this alog is that the new LIGO DAC is currently installed and in use for the DARM actuator suspension (the L3 stage of ETMX).  Louis and the calibration team have taken the changes into account (see, eg, alog 80155). 

The vision as I understand it is to use this new DAC for at least a few weeks, with the goal of collecting some information on how it affects our data quality.  Are there new lines?  Fewer lines?  A change in glitch rate?  I don't know that anyone has reached out to DetChar to flag that this change was coming, but now that it's in place, it would be helpful (after we've had some data collected) for some DetChar studies to take place, to help improve the design of this new DAC (that I believe is a candidate for installation everywhere for O5).

tabata.ferreira@LIGO.ORG - 20:25, Sunday 03 November 2024 (81042)DetChar

Analysis of glitch rate:

We selected Omicron transients during observing time across all frequencies and divided the analysis into two cases: (1) rates calculated using glitches with SNR>6.5, and (2) rates calculated using glitches with SNR>5. The daily glitch rate for transients with SNR greater than 6.5 is shown in Figure 1, with no significant difference observed before and after September 17th. In contrast, Figure 2, which includes all Omicron transients with SNR>5, shows a higher daily glitch rate after September 17th.

The rate was calculated by dividing the number of glitches per day by the daily observing time in hours.
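
For illustration, a sketch of that computation (the trigger table and observing-hours series are hypothetical inputs, not the actual analysis code):

# Sketch: daily glitch rate = (triggers above an SNR cut per day) /
# (observing hours that day).
import pandas as pd

def daily_rates(triggers: pd.DataFrame, obs_hours: pd.Series, snr_min: float) -> pd.Series:
    sel = triggers[triggers['snr'] > snr_min]
    # GPS epoch is 1980-01-06; leap seconds ignored for day binning
    day = pd.to_datetime(sel['time'], unit='s', origin='1980-01-06').dt.date
    counts = sel.groupby(day).size()
    return counts / obs_hours.reindex(counts.index)  # glitches per observing hour

# rate_snr65 = daily_rates(trigs, hours, snr_min=6.5)
# rate_snr5  = daily_rates(trigs, hours, snr_min=5.0)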

Images attached to this comment