H1 SUS (SUS)
ryan.crouch@LIGO.ORG - posted 11:34, Friday 16 August 2024 (79574)
ETM OPLEV charge measurement

I ran the OPLEV charge measurements for both the ETMs this morning.

On ETMX the charge still looks to be decreasing towards zero on all DOF/quads.

On ETMY the charge looks fairly stable, hovering just above 0 and 50 on all the DOFs/quads, where it has been for the past few measurements.

Images attached to this report
LHO FMCS (PEM)
oli.patane@LIGO.ORG - posted 09:15, Friday 16 August 2024 (79571)
HVAC Fan Vibrometers Check FAMIS

Closes FAMIS#26323, last checked 79458

Corner Station Fans (attachment1)
- All fans are looking normal and within range.

Outbuilding Fans (attachment2)
- All fans are looking normal and within range.

Images attached to this report
H1 PSL
thomas.shaffer@LIGO.ORG - posted 08:50, Friday 16 August 2024 (79569)
Added 175mL to PSL chiller

We got a "Check PSL chiller" verbal alarm this morning, so I did exactly that. The water level was about halfway between max and min, but not in alarm. I added 175mL to get it back to max. The filter on the wall is starting to not look as pristine as it once was, but I'm not sure at what point it needs to be replaced. All else looks good.

LHO VE
david.barker@LIGO.ORG - posted 08:34, Friday 16 August 2024 (79568)
Fri CP1 Fill

Fri Aug 16 08:07:59 2024 INFO: Fill completed in 7min 55secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 07:34, Friday 16 August 2024 (79566)
Ops Day Shift Start

TITLE: 08/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 19mph Gusts, 15mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Going to be turning ion pumps on today and starting some basic alignment work.

H1 CDS
david.barker@LIGO.ORG - posted 16:44, Thursday 15 August 2024 - last comment - 10:48, Friday 16 August 2024(79561)
Corner Station Dolphin Glitch, h1omc0 fail

We had a failure of h1omc0 at 16:37:24 PDT which precipitated a Dolphin crash of the usual corner station system.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 17:02, Thursday 15 August 2024 (79562)

System recovered by:

Fencing h1omc0 from Dolphin

Complete power cycle of h1omc0 (front end computer and IO Chassis)

Bypass SWWD for BSC1,2,3 and HAM2,3,4,5,6

Restart all models on h1susb123, h1sush2a, h1sush34 and h1sush56

Reset all SWWDs for these chambers

Recover SUS models following restart.
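
For reference, a minimal pyepics sketch of how the SWWD bypass/reset steps above could be scripted. The channel names below are hypothetical placeholders (the real SWWD bypass/reset PVs live on the SUS/SEI MEDM screens), so this is an illustration of the pattern rather than the actual recovery procedure.

from epics import caput

# Chambers whose SWWDs were bypassed during the recovery (from the list above)
CHAMBERS = ["BSC1", "BSC2", "BSC3", "HAM2", "HAM3", "HAM4", "HAM5", "HAM6"]

def set_swwd_bypass(chamber, bypass):
    """Write the (hypothetical) SWWD bypass channel for one chamber."""
    # Placeholder PV name -- read the real bypass/reset PVs off the MEDM
    # screens before doing anything like this for real.
    caput(f"H1:SYS-SWWD_{chamber}_BYPASS", 1 if bypass else 0)

# Bypass before the model restarts, then reset afterwards
for chamber in CHAMBERS:
    set_swwd_bypass(chamber, True)
# ... restart models on h1susb123, h1sush2a, h1sush34, h1sush56 ...
for chamber in CHAMBERS:
    set_swwd_bypass(chamber, False)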

david.barker@LIGO.ORG - 17:04, Thursday 15 August 2024 (79563)

Cause of h1omc0 crash: Low Noise ADC Channel Hop

oli.patane@LIGO.ORG - 17:18, Thursday 15 August 2024 (79564)OpsInfo

Right as this happened, LSC-CPSFF got much noisier, but no motion was seen by peakmon or by HAM2 GND-STS in the Z direction (ndscope). After everything was back up, it was still noisy. Probably nothing weird, but I still wanted to mention it.
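
For anyone who wants to reproduce this check offline, here is a minimal gwpy sketch of the same comparison (the actual look was done with ndscope). The _OUT_DQ channel suffix and the PDT-to-UTC/GPS conversion of the crash time are assumptions.

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

# h1omc0 failed at 16:37:24 PDT on 2024-08-15 (23:37:24 UTC)
crash = to_gps("2024-08-15 23:37:24")

# Grab a stretch of LSC-CPSFF before the crash and after the recovery
before = TimeSeries.get("H1:LSC-CPSFF_OUT_DQ", crash - 900, crash - 60)
after = TimeSeries.get("H1:LSC-CPSFF_OUT_DQ", crash + 1200, crash + 2040)

# Compare the spectra to quantify "much noisier"
plot = before.asd(fftlength=8).plot(label="before crash")
ax = plot.gca()
ax.plot(after.asd(fftlength=8), label="after recovery")
ax.legend()
plot.savefig("cpsff_before_after.png")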

Also, I put the IMC in OFFLINE for the night since it decided to now have trouble locking and was showing a bunch of fringes. Tagging Ops, aka tomorrow morning's me.

Images attached to this comment
david.barker@LIGO.ORG - 08:20, Friday 16 August 2024 (79567)

FRS31855 Opened for this issue

LOGS:

2024-08-15T16:37:24-07:00 h1omc0.cds.ligo-wa.caltech.edu kernel: [11098181.717510] rts_cpu_isolator: LIGO code is done, calling regular shutdown code
2024-08-15T16:37:24-07:00 h1omc0.cds.ligo-wa.caltech.edu kernel: [11098181.718821] h1iopomc0: ERROR - A channel hop error has been detected, waiting for an exit signal.
2024-08-15T16:37:25-07:00 h1omc0.cds.ligo-wa.caltech.edu kernel: [11098181.817798] h1omcpi: ERROR - An ADC timeout error has been detected, waiting for an exit signal.
2024-08-15T16:37:25-07:00 h1omc0.cds.ligo-wa.caltech.edu kernel: [11098181.817971] h1omc: ERROR - An ADC timeout error has been detected, waiting for an exit signal.
2024-08-15T16:37:25-07:00 h1omc0.cds.ligo-wa.caltech.edu rts_awgtpman_exec[28137]: aIOP cycle timeout
 

david.barker@LIGO.ORG - 10:48, Friday 16 August 2024 (79572)

Reboot/Restart Log:

Thu15Aug2024
LOC TIME HOSTNAME     MODEL/REBOOT
16:49:17 h1omc0       ***REBOOT***
16:50:45 h1omc0       h1iopomc0   
16:50:58 h1omc0       h1omc       
16:51:11 h1omc0       h1omcpi     
16:53:56 h1sush2a     h1iopsush2a 
16:53:59 h1susb123    h1iopsusb123
16:54:03 h1sush34     h1iopsush34 
16:54:10 h1sush2a     h1susmc1    
16:54:13 h1susb123    h1susitmy   
16:54:13 h1sush56     h1iopsush56 
16:54:17 h1sush34     h1susmc2    
16:54:24 h1sush2a     h1susmc3    
16:54:27 h1susb123    h1susbs     
16:54:27 h1sush56     h1sussrm    
16:54:31 h1sush34     h1suspr2    
16:54:38 h1sush2a     h1susprm    
16:54:41 h1susb123    h1susitmx   
16:54:41 h1sush56     h1sussr3    
16:54:45 h1sush34     h1sussr2    
16:54:52 h1sush2a     h1suspr3    
16:54:55 h1susb123    h1susitmpi  
16:54:55 h1sush56     h1susifoout 
16:55:09 h1sush56     h1sussqzout 
 

LHO General
thomas.shaffer@LIGO.ORG - posted 16:33, Thursday 15 August 2024 (79559)
Ops Day Shift End

TITLE: 08/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
SHIFT SUMMARY:

After the HAM6 cameras were reinstalled and the high voltage was turned back on, we were ready for light in the vacuum. The mode cleaner locked within about 30 seconds of requesting IMC_LOCK to Locked! Tomorrow we will bring back our seismic systems as best we can and then try some single bounce to ensure that we can get some type of light into HAM6.

Ion pumps 1, 2, 3, and 14 are in a known error state; Gerardo warned this was intentional.

LOG:                                                                                                                                                                                                      

Start Time System Name Location Lazer_Haz Task Time End
14:34 FAC Karen Opt lab n Tech clean 14:50
14:58 FAC Tyler LVEA n Grab a tool 15:06
15:07 FAC Karen LVEA n Tech clean 15:19
15:11 FAC Kim LVEA n Tech clean 15:21
16:25 FAC Kim MX n Tech clean 17:11
16:34 VAC Gerardo LVEA n Vac checks at HAM5,6,7 17:39
17:44 ISC Camilla, Oli LVEA n Reinstall cameras on HAM6 19:06
18:14 SYS Betsy Opt Lab n Betsy things 19:15
19:00 CDS Marc, Fernando LVEA n Turn on high voltage 19:15
19:27 - Sam, tour (5) LVEA n Tour of LVEA 20:16
20:53 SAF TJ LVEA YES Transition to regular laser safe 21:18
22:02 TCS Camilla, Marc Opt Lab Local Cheeta 22:57
H1 ISC (DetChar)
oli.patane@LIGO.ORG - posted 16:24, Thursday 15 August 2024 - last comment - 17:19, Thursday 15 August 2024(79548)
Looking for correlations between the OFI Incident(TM) and earthquakes

We have been exploring any weird behavior during the locklosses preceding the OFI burns to try to narrow down possible causes. We recently learned from a scientist experienced with KTP optics that movement of a high-powered beam passing through the KTP could damage the optic, which created the theory that this could have happened due to earthquakes.

There were a couple of decently-sized earthquakes before the incidents:

April Incident (seismic/LL summary alog79132)

- April 18th - nearby earthquake from Canada - we lost lock from this (lockloss ndscope)

- April 20th - nearby earthquake from Richland - stayed locked

- April 23rd - drop in output power noticed - the lockloss right before this had NOT been caused by an earthquake (previous lockloss ndscope)

July Incident

- July 11th - nearby earthquake from Canada (alog79023) (lockloss ndscope, zoomed out lockloss ndscope)

- July 12th - noticed similarities to the April incident

We used ndscope to compare ground motion to DARM, AS_A, AS_C, and the IFO state. In looking over the ndscopes, we don't see anything that would make us think that these earthquakes changed anything in the output arm.

So yes, we did have two decently sized earthquakes (+a local one) before the IFO burns took place, but we also have earthquakes hitting us all the time, many with higher ground velocities. Overall, we did not see anything strange during these earthquake locklosses in the AS_A and AS_C channels that would lead us to think that the earthquakes played a part in the OFI issues.
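
For completeness, here is a rough gwpy sketch of the kind of comparison we did in ndscope, pulling ground motion and the AS port signals around one of the earthquake locklosses. The channel names and the example time window are assumptions for illustration only.

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeriesDict

# Window around the April 18 earthquake lockloss (placeholder time)
t0 = to_gps("2024-04-18 12:00:00")

channels = [
    "H1:ISI-GND_STS_HAM2_Z_DQ",    # assumed ground seismometer channel
    "H1:ASC-AS_A_DC_NSUM_OUT_DQ",  # assumed AS_A sum channel
    "H1:ASC-AS_C_DC_NSUM_OUT_DQ",  # assumed AS_C sum channel
]
data = TimeSeriesDict.get(channels, t0 - 1800, t0 + 1800)

# Stack the traces the way an ndscope template would
plot = data.plot(separate=True, sharex=True)
plot.savefig("eq_lockloss_overview.png")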

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 17:19, Thursday 15 August 2024 (79565)

People who worked on this:

Kiet Pham, Sushant Sharma-Chaudhary, Camilla, TJ, Oli

H1 TCS
thomas.shaffer@LIGO.ORG - posted 15:24, Thursday 15 August 2024 - last comment - 12:24, Wednesday 21 August 2024(79560)
CO2Y only outputting 24W after vent return

I turned the CO2s back on today and CO2X came back to its usual 53W, but CO2Y came back at 24W. We've seen in the past that it will jump up a handful of watts overnight after a break, so maybe we will see something similar here. If not, we will have to investigate this loss. Trending the output of this laser shows it has definitely been dropping over the last year, but we should be higher than 24W.
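
For the trend mentioned above, a hedged gwpy sketch pulling a year of minute trends of the laser output is below; the channel name is a guess and should be checked against the TCS MEDM screens before use.

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

# Hypothetical CO2Y output-power readback; confirm the real channel first
chan = "H1:TCS-ITMY_CO2_LASERPOWER_POWER_OUTPUT.mean,m-trend"

power = TimeSeries.get(chan, to_gps("2023-08-15"), to_gps("2024-08-15"))
plot = power.plot(ylabel="CO2Y output power [W]")
plot.savefig("co2y_power_trend.png")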

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 08:56, Friday 16 August 2024 (79570)

Sure enough, the power increased overnight and we are back to 34W. This is still low, but in line with the power loss that we've been seeing. Camilla is looking into the spare situation and we might swap it in the future.

Images attached to this comment
camilla.compton@LIGO.ORG - 12:24, Wednesday 21 August 2024 (79622)

We searched for home on both CO2 lasers, took them back to minimum power, and then asked the CO2_PWR guardian to go to NOM_ANNULAR_POWER. This gave 1.73W on CO2X and 1.71W on CO2Y (1.4W before bootstrapping).

We recalibrated following /opt/rtcds/userapps/release/tcs/h1/scripts/RS_calibration/README.txt (last done in February, 76008); both were 1.7W after bootstrapping.
Ryan S has updated the PSL calibration to just use Python (/psl/h1/scripts/RotationStage/CalibRotStage.py), so we should change to this method next time.
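
For context, the rotation stage sets the delivered power with a waveplate ahead of a polarizer, so the calibration is essentially a fit of measured power versus stage angle to a sin-squared curve. Below is a minimal, illustrative sketch of that fit, assuming the standard half-wave-plate model; the angles/powers are placeholders, and the production procedure remains the README / CalibRotStage.py referenced above.

import numpy as np
from scipy.optimize import curve_fit

def power_model(theta_deg, p_min, p_max, theta0_deg):
    """Transmitted power vs rotation-stage angle (half-wave plate + polarizer)."""
    phase = np.radians(2.0 * (theta_deg - theta0_deg))
    return p_min + (p_max - p_min) * np.sin(phase) ** 2

# In the real calibration these come from stepping the stage and reading the
# power meter; here they are synthetic placeholders just to exercise the fit.
angles = np.linspace(0.0, 90.0, 10)
powers = power_model(angles, 0.02, 1.7, 12.0)

popt, _ = curve_fit(power_model, angles, powers, p0=[0.0, 2.0, 10.0])
print("fit: P_min=%.3f W, P_max=%.3f W, theta0=%.2f deg" % tuple(popt))
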
Images attached to this comment
H1 ISC
marc.pirello@LIGO.ORG - posted 13:35, Thursday 15 August 2024 (79557)
High Voltage Reactivated at Corner

Per WP12042 high voltage was reactivated at the corner station via M1300464v13.

1 - ESD ITMs' supply enabled
2 - HAM6 Fast Shutter and OMC PZT supply enabled
3 - Ring Heaters ITMs & SR3 enabled in TCS racks
4 - HAM6 Fast Shutter at Rack verified HV enabled
5 - MER SQZ PZT & Piezo driven PSAMs supply enabled

M. Pirello, F. Mera

H1 ISC
camilla.compton@LIGO.ORG - posted 12:17, Thursday 15 August 2024 (79556)
HAM6 cameras/housings reinstalled

WP12040. Oli and I reinstalled the HAM6 cameras/housings we removed before the vent in 79213. Daniel plugged them in.

LHO VE
thomas.shaffer@LIGO.ORG - posted 10:12, Thursday 15 August 2024 - last comment - 11:38, Thursday 15 August 2024(79552)
Vacuum channel alarm levels increased for new instrument air compressor

The new compressor runs at a higher pressure, so the alarms needed to move with that. I changed the minor and major high alarms for H0:VAC-MR_INSTAIR_PT199_PRESS_PSIG to 127 and 130, respectively, in EPICS via the vacuum computer in the back of the control room. These alarm levels cannot be changed outside of the vacuum network. I did not see this channel in the VAC SDF.

Patrick will check if these values are hard coded into Beckhoff and will adjust accordingly.
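
For the record, this kind of change amounts to writing the standard EPICS alarm fields on the PV. A minimal pyepics sketch (assuming a machine on the vacuum network with write access; not the exact commands used) would look like:

from epics import caget, caput

pv = "H0:VAC-MR_INSTAIR_PT199_PRESS_PSIG"

caput(pv + ".HIGH", 127)   # minor high alarm level
caput(pv + ".HIHI", 130)   # major high alarm level

print("HIGH =", caget(pv + ".HIGH"), "HIHI =", caget(pv + ".HIHI"))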

Comments related to this report
david.barker@LIGO.ORG - 11:00, Thursday 15 August 2024 (79554)

Currently the slow-controls SDF is not able to monitor non-VAL fields (e.g. alarm fields). A future release will have this feature.

patrick.thomas@LIGO.ORG - 11:38, Thursday 15 August 2024 (79555)
I made the following changes to the PLC code generation scripts (not pushed to the running code) to enable setting the alarm levels for each instrument air pressure EPICS channel separately. I set the alarm levels to what they currently are. Should any of these be changed?
https://git.ligo.org/cds/ifo/beckhoff/lho-vacuum/-/commit/8d17cfe3284147b4e398262091ec9d9f2ddbb6ab

If there are no objections, I will plan to push this change when we make the changes to add and rename filter cavity ion pump controllers.
H1 DetChar (CAL, DetChar)
derek.davis@LIGO.ORG - posted 13:23, Tuesday 15 August 2023 - last comment - 14:56, Thursday 15 August 2024(72239)
Issue with calibration line subtraction at beginning of locks

Arianna, Derek

We noticed that the calibration line subtraction pipeline is often not correctly subtracting the calibration lines at the start of lock segments.

A particularly egregious case is on July 13, where the calibration lines are still present in the NOLINES data for ~4 minutes into the observing mode segment but then quickly switch to being subtracted. This on-to-off behavior can be seen in this spectrogram of H1 NOLINES data from July 13, where red lines at the calibration frequencies disappear at 1:10 UTC. Comparing the H1 spectra at 1:06 UTC and 1:16 UTC shows that the H1:GDS-CALIB_STRAIN spectrum is unchanged, while the H1:GDS-CALIB_STRAIN_NOLINES spectrum has calibration lines at 1:06 UTC and none at 1:16 UTC. This demonstrates that the problem is with the subtraction rather than with the calibration lines themselves.

This problem is still ongoing for recent locks. The most recent lock on August 15 has the same problem. The calibration lines "turn off" at 4:46 UTC, as seen in the attached spectrogram.

The on-to-off behavior of the subtraction is particularly problematic for data analysis pipelines as the quickly changing line amplitude can result in the data being over- or under-whitened.   
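
For anyone who wants to reproduce these spectra, a minimal gwpy sketch comparing STRAIN and STRAIN_NOLINES at the two times is below; the exact GPS times, data duration, and frequency band are assumptions.

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

def asd_at(channel, start, duration=120, fftlength=16):
    """ASD of `channel` over `duration` seconds starting at `start`."""
    data = TimeSeries.get(channel, start, start + duration)
    return data.asd(fftlength=fftlength)

for label, t in [("0106", to_gps("2023-07-13 01:06:00")),
                 ("0116", to_gps("2023-07-13 01:16:00"))]:
    strain = asd_at("H1:GDS-CALIB_STRAIN", t)
    nolines = asd_at("H1:GDS-CALIB_STRAIN_NOLINES", t)
    plot = strain.plot(label="GDS-CALIB_STRAIN")
    ax = plot.gca()
    ax.plot(nolines, label="GDS-CALIB_STRAIN_NOLINES")
    ax.set_xlim(10, 500)   # band containing the calibration lines
    ax.legend()
    plot.savefig(f"calib_line_subtraction_{label}.png")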

Images attached to this report
Comments related to this report
aaron.viets@LIGO.ORG - 14:56, Thursday 15 August 2024 (79558)CAL

A while back, I investigated this and found that the reason for occasional early-lock subtraction problems is that, at the end of the previous lock, the lines were turned off but the TFs were still being calculated. Then, at the beginning of the next lock, it takes several minutes (due to the 512-s running median) to update the TFs with accurate values. There were two problems that contributed to this issue. I added some more gating in the pipeline to use the line amplitude channels to gate the TF calculation. This fix was included in gstlal-calibration-1.5.3, as well as the version currently in production (1.5.4). However, there have been some instances in which those channels were indicating that the lines were on when they were actually off at the end of the previous lock, which means this code change, by itself, does not fix the problem in all cases. The version that is currently in testing, gstlal-calibration-1.5.7, includes some other fixes for line subtraction, which may or may not improve this problem.

A more reliable way to solve this issue would be to ensure that the line amplitude channels we are using always carry accurate real-time information. Specifically, we would need to prevent the lines from turning off long before these channels indicate that they have. The names of these channels are:

{IFO}:SUS-ETMX_L{1,2,3}_CAL_LINE_CLKGAIN
{IFO}:CAL-PCAL{X,Y}_PCALOSC{1,2,3,4,5,6,7,8,9}_OSC_SINGAIN
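
To make the gating idea concrete, here is an illustrative (non-gstlal) sketch of using those line-amplitude channels to gate the 512-s running-median TF update. The update cadence and variable names are assumptions, not the actual gstlal-calibration implementation.

from collections import deque
import numpy as np

MEDIAN_LENGTH_S = 512   # length of the running median used for the TFs
UPDATE_CADENCE_S = 1    # assumed cadence of new TF estimates

history = deque(maxlen=MEDIAN_LENGTH_S // UPDATE_CADENCE_S)

def update_tf(tf_estimate, line_amplitudes):
    """Fold a new TF estimate into the running median only while the
    CLKGAIN/SINGAIN channels report the calibration lines as on."""
    lines_on = all(abs(a) > 0 for a in line_amplitudes)
    if lines_on:
        history.append(np.asarray(tf_estimate, dtype=float))
    # If the lines are (reported) off, hold the last good median rather than
    # letting line-free data poison the estimate for the start of the next lock.
    if history:
        return np.median(np.stack(history), axis=0)
    return np.asarray(tf_estimate, dtype=float)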
