LHO General
ryan.short@LIGO.ORG - posted 22:23, Wednesday 30 July 2025 (86112)
Ops Eve Shift Summary

TITLE: 07/31 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: While last night's shift went on without any observing time, tonight was the exact opposite with H1 locked and observing for the duration with steady range. Many aftershocks of yesterday's large earthquake with magnitudes over 5.0 rolled through continuously this evening, but H1 rode through all of these, occasionally entering earthquake mode during some of the larger motion. H1 has now been locked for over 15 hours.

On an operations note, based on TJ's suggestion from his time on last night's owl shift, I've edited the H1_MANAGER node to start an initial alignment instead of immediately calling for assistance if IR is not found during the 'FIND_IR' state. It will call for assistance, however, if an alignment has already been run and IR is still not found. Changes have been loaded and committed to svn.
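
For illustration, a minimal sketch of the new fallback logic, with hypothetical function names (the actual change lives in the H1_MANAGER guardian code):

    # Sketch of the FIND_IR fallback: try one initial alignment before paging.
    alignment_already_run = False

    def on_find_ir_failure(run_initial_alignment, call_for_assistance):
        """Hypothetical handler for when IR is not found in FIND_IR."""
        global alignment_already_run
        if not alignment_already_run:
            alignment_already_run = True
            run_initial_alignment()   # run an alignment, then retry locking
        else:
            call_for_assistance()     # alignment already tried; page for help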

H1 General
ryan.crouch@LIGO.ORG - posted 16:31, Wednesday 30 July 2025 (86105)
OPS Wednesday DAY shift summary

TITLE: 07/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: We've been in and out of earthquake mode all day from the constant aftershocks of yesterday's M8.8 earthquake off the coast of eastern Russia. We've been locked for almost 9.5 hours.
LOG:                                                                                                                               

Start Time  System  Name              Location      Laser_Haz  Task                      Time End
13:45?      FAC     CBEX survey crew  MidY          N          Survey work               20:45
15:20       FAC     Nellie            Optics lab    N          Tech clean                15:34
16:32       FAC     Chris             Vac prep lab  N          Painting                  18:32
18:55       ISC     Jennie            Optics lab    LOCAL      ISS array                 19:52
20:13       FAC     Chris             Vac prep lab  N          Painting                  21:30
21:13       ISC     Keita             Optics lab    N          Quick checks              21:30
21:15       EE      Marc              MidY          N          Grab some parts           21:56
21:20       FAC     Eric              EndY          N          Chiller yard alarm reset  21:39
21:51       AOS     Mitch             MidX          N          Look for a pelican case   22:10

I increased the damping gain on ITMY modes 5 and 6 from +0.01 to +0.015; based on the short and long monitors, mode 5 now damps 60-70% faster and mode 6 100-110% faster.
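
As a rough illustration of where such percentages come from (synthetic ring-downs, not the actual mode monitor data), one can fit an exponential decay to each ring-down and compare the fitted time constants:

    import numpy as np

    def decay_time_constant(t, amplitude):
        """Fit log(amplitude) = log(A0) - t/tau and return tau."""
        slope, _ = np.polyfit(t, np.log(amplitude), 1)
        return -1.0 / slope

    # Fake ring-downs: tau = 600 s (old gain) vs tau = 360 s (new gain).
    t = np.linspace(0, 3600, 500)
    tau_old = decay_time_constant(t, np.exp(-t / 600.0))
    tau_new = decay_time_constant(t, np.exp(-t / 360.0))
    print(f"damping is {tau_old / tau_new - 1:.0%} faster")  # ~67% faster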

X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 16:24, Wednesday 30 July 2025 (86110)
BBSS BRD Mounts Received and Fit-Checked

Ibrahim

Got the newly cleaned BRD mounts (D1900570) for the BBSS and fit-checked them on the suspension. They fit just fine, with the only note being that the bottom loop wires (the small ends after the clamp) had to be moved out of the way. Of the two I fit-checked, three of the four holes went in very easily and the other fit mostly easily (no extra force needed going in, it just wasn't totally smooth turning in). Pictures attached.

The plan now is to get with Rahul and install the actual BRDs, then re-take the Bounce and Roll mode measurements to confirm that they are damping as expected. 

Images attached to this report
H1 ISC (Lockloss, SUS)
oli.patane@LIGO.ORG - posted 16:10, Wednesday 30 July 2025 (86108)
ETMX glitch locklosses vs ETMX bias - O3b vs O4a

Camilla, Oli

We were wondering what the bias had been during O3 as compared to the number of ETMX glitch locklosses that occurred during O3. We referenced Iain Morton's presentation, slide 17. Ignoring the 'SAME' lockloss tags, since we know those were caused by the PSL glitching, we found that only 5% of the locklosses from O3b had an ETM_GLITCH tag, while 15% of the locklosses from O4a (plus a bit of O4b) had an ETM_GLITCH tag. Since the method in use at the time for finding ETMX glitches wasn't very good at catching the smaller glitches, both percentages may actually be a bit higher, but either way, the percentage of locklosses from ETMX glitches during O3b was much lower. We decided to look at what the ETMX bias was during O3b and how that compares to the bias we've been using throughout O4 (until it was changed on Monday, 86027).

For all of O3 and up until February 28, 2023 (67698), we had been at a bias of around -450, and then after that date we changed to a bias of around +130, which is what we've had for all of O4 up until a couple of days ago.
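
For the interested reader, a minimal sketch (with made-up tag lists, not the real lockloss database) of the bookkeeping behind these percentages:

    def etm_glitch_fraction(locklosses):
        """locklosses: list of tag sets, one per lockloss."""
        kept = [tags for tags in locklosses if "SAME" not in tags]
        glitched = sum(1 for tags in kept if "ETM_GLITCH" in tags)
        return glitched / len(kept)

    # Toy data: 'SAME'-tagged locklosses are discarded before counting.
    o3b = [{"ETM_GLITCH"}] * 5 + [set()] * 95 + [{"SAME"}] * 20
    o4a = [{"ETM_GLITCH"}] * 15 + [set()] * 85
    print(f"O3b: {etm_glitch_fraction(o3b):.0%}, "
          f"O4a: {etm_glitch_fraction(o4a):.0%}")   # O3b: 5%, O4a: 15%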


Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:02, Wednesday 30 July 2025 (86109)
Ops Eve Shift Start

TITLE: 07/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 9mph 3min avg
    Primary useism: 0.09 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY: H1 has been locked for almost 9 hours.

H1 General (SQZ)
ryan.crouch@LIGO.ORG - posted 11:28, Wednesday 30 July 2025 - last comment - 12:36, Wednesday 30 July 2025(86104)
OPS Wednesday DAY midshift update

STATE of H1: Observing at 145Mpc

We've been locked for 4:21.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:36, Wednesday 30 July 2025 (86106)

After Ryan adjusted the OPO temperature, there seemed to be a ~30 minute to 1 hour time constant for the SQZ_ANG_ADJUST servo to bring the squeezing angle to the optimum for best squeezing; see attached. After this time the range arguably improved, although some of the improvement could be attributed to less earthquake ground motion. This time constant is slow, but the servo seems to be generally working well and has not run away since being implemented in 85820.
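
As a back-of-the-envelope illustration (invented gain and update period, not the real servo parameters), a slow integrating servo that applies a fractional correction g per update of period T settles with a time constant of roughly T/g:

    import math

    g = 0.01       # hypothetical fractional correction per update
    T = 30.0       # hypothetical update period, seconds
    print(f"time constant ~ {T / g / 60:.0f} min")   # ~50 min

    # Simulate: the error decays as (1 - g)^n, i.e. exp(-t/tau) for small g.
    error = 1.0
    for n in range(1, 10000):
        error *= (1 - g)
        if error < math.exp(-1):
            print(f"1/e settling after ~{n * T / 60:.0f} min")
            break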

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:52, Wednesday 30 July 2025 (86103)
Wed CP1 Fill

Wed Jul 30 10:19:53 2025 INFO: Fill completed in 19min 49secs

Gerardo confirmed a good fill curbside.

Images attached to this report
LHO General
tyler.guidry@LIGO.ORG - posted 10:39, Wednesday 30 July 2025 - last comment - 13:41, Monday 04 August 2025(86102)
Unbeelievable Summer at LHO
The site has been abuzz as of late. For the past few months, western honeybees have turned LIGO from interferometer to apiary. The first swarm, near the LSB lift station, was reported in early May. Since then, the Facilities group and others have been combing the site, and some 15 colonies have been captured and extracted with the help of Phillip Johnson from The Bee Team (seen in buzzworthy photos below).

The bees have been largely indiscriminate about hive habitat selection. To date we have removed them from spools, BTE interiors, irrigation boxes, and conexes. It's not clear where the bees came from, or how they got here. The nearest location to the site that I could find that would keep bees, Brainstorm Cellars, is some 6 miles away. This is farther than a swarm will migrate. Stranger yet, the habitat between us and the nearest possible location is, to my understanding, not favorable for hive building, which would rule out leapfrogging to us. Nevertheless, the bees are here and thriving. The queens are prolific egg layers, the forager bees are caked in pollen, and the extractions of more established colonies are chock-full with tens of pounds of delicious capped honey. Bee temperament is, across the board, very mild. Even during our most invasive extractions they seem largely unbothered.

All that to say, I hope to see the increase of bees across the site taper off strongly, especially as we inch closer to fall. In the meantime, M. Landry has recently reached out to the WSU Honeybees and Pollinators Program to help us understand how we came to find this abnormal explosion of bees. That meeting has yet to take place.


C. Soike, M. Robinson, T. Guidry
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:41, Monday 04 August 2025 (86176)
This work demands we dust off the ol' LHO Paper Plate Award!!

"This buzz-worthy award goes to Tyler Guidry, Phillip Johnson, Mitch Robison, Chris Soike, and Kim Stewart for LHO's Un-bee-lievble Summer (LHO aLOG 86102)."

Congratulations!
Images attached to this comment
H1 General (CAL)
ryan.crouch@LIGO.ORG - posted 07:30, Wednesday 30 July 2025 (86098)
OPS Wednesday DAY shift start

TITLE: 07/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154 Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

H1 General
thomas.shaffer@LIGO.ORG - posted 01:25, Wednesday 30 July 2025 - last comment - 17:03, Wednesday 30 July 2025(86095)
Ops Owl Update

Looks like the first attempt at locking after the ground motion settled enough failed at FIND_IR. I was notified, but we should change this to start an initial alignment first.

Comments related to this report
thomas.shaffer@LIGO.ORG - 04:27, Wednesday 30 July 2025 (86096)

These aftershocks just keep coming!

Had a decent attempt at DRMI and PRMI before this latest one rolled through, but it just wouldn't catch. After another initial alignment, where I had to touch SRM again to get SRY to lock, we're ready for another lock attempt. The ground is still moving a bit though, so I'll try to be patient.

thomas.shaffer@LIGO.ORG - 06:32, Wednesday 30 July 2025 (86097)

DRMI has locked a few times, but we can't seem to make it much farther now. While the ground motion seems to have finally gone away, the FSS isn't locking. Looks like the TPD voltage is lower than where we want it, but maybe that's just because it's unlocked?

After futzing with it for 40 minutes, it eventually locked once I gave the autolocker a break for ~10 minutes. I had actually forgotten to turn it back on while I was digging through alogs, but that did the trick!

Another seismon warning of an incoming aftershock.

DRMI just locked again! Looks like it's moving on well now so I'll set H1_MANAGER back up.

Images attached to this comment
jason.oberling@LIGO.ORG - 17:03, Wednesday 30 July 2025 (86111)OpsInfo, PSL

Re: FSS issues, Ryan S. and I have been having a chat about this.  From what we can tell, the FSS was oscillating pretty wildly during this period; see the attached trends that Ryan put together of different FSS signals during this time.  We do not know what caused this oscillation.  The FSS autolocker is bouncing between states 2 and 3 (left center plot), an indication of the loop oscillating, which is also seen in the PC_MON (upper center plot) and FAST_MON (middle center plot) signals.  The autolocker was doing its own gain changing during this time as well (upper right plot), but this isn't enough to clear a really bad loop oscillation.

The usual cure for this is to manually lower the FSS Fast and Common gains to the lowest slider value of -10 and wait for the oscillation to clear (it usually clears pretty quickly), then slowly raise them back to their normal values (usually in 1 dB increments).  There is an FSS guardian node that does this, but we have specifically stopped it from doing so during RefCav lock acquisition, as it has a tendency to delay RefCav locking (the autolocker and the guardian node aren't very friendly with each other).

In the future, should this happen again, try manually lowering the FSS gain sliders (found on the FSS MEDM screen under the PSL tab on the Sitemap) to their minimum and wait a little bit.  If this doesn't clear the oscillation, contact either me or Ryan S. for further assistance.
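
For reference, a minimal sketch of that manual procedure (channel names and timing here are assumptions; the FSS guardian node is the real implementation):

    import time

    FAST_CH = "PSL-FSS_FASTGAIN"      # assumed channel names
    COMMON_CH = "PSL-FSS_COMMONGAIN"

    def recover_fss(ezca, fast_nominal, common_nominal, dwell=5.0):
        """Drop both gains to the -10 slider minimum, wait for the
        oscillation to clear, then ramp back up in 1 dB steps."""
        ezca[FAST_CH] = -10.0
        ezca[COMMON_CH] = -10.0
        time.sleep(30.0)              # wait for the oscillation to clear
        gain = -10.0
        while gain < max(fast_nominal, common_nominal):
            gain += 1.0               # 1 dB increments
            ezca[FAST_CH] = min(gain, fast_nominal)
            ezca[COMMON_CH] = min(gain, common_nominal)
            time.sleep(dwell)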

Images attached to this comment
Working Groups DetChar (DetChar)
shivaraj.kandhasamy@LIGO.ORG - posted 23:00, Tuesday 29 July 2025 (86094)
DQ Shift report for the week from 2025-07-21 to 2025-07-27

Below is the summary of the DQ shift for the week from 2025-07-21 to 2025-07-27

The full DQ shift report with day-by-day details is available at https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20250721

LHO General
ryan.short@LIGO.ORG - posted 22:07, Tuesday 29 July 2025 (86093)
Ops Eve Shift Summary

TITLE: 07/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: TJ
SHIFT SUMMARY: No observing time this shift due to the M8.8 (revised) earthquake off the Russian coast this afternoon and about a dozen M5.0+ aftershocks since. I waited until around 04:50 UTC to start untripping suspensions and seismic platforms to give the Earth time to calm down so they wouldn't just immediately trip again; everything came back up without issue, although IFO alignment looks poor. Since the ground is still shaking and the seismic environment is still in earthquake mode, H1 won't be locking for a while. However, I did set up the auto owl shift so that H1_MANAGER will start the process (and likely also need to run an alignment) once we're out of earthquake mode, even if that won't be for another hour or two.

H1 General (ISC)
anthony.sanchez@LIGO.ORG - posted 17:18, Tuesday 29 July 2025 (86092)
Daily DRMI Locking attempts since the Vent

I have attached a PDF that contains 54 plots with minute trend data of the daily DRMI locking process since the vent.
The DRMI locking process is defined here as the channel H1:GRD-ISC_LOCK_STATE_N being between states 18 and 101.
All times are in UTC.

These plots were made by sc_iterator.py, which relies on statecounter 3.0.
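
A minimal sketch (synthetic data, not the actual statecounter code) of the state-range bookkeeping involved:

    import numpy as np

    def drmi_stretches(states, lo=18, hi=101):
        """Return (start, end) index pairs where lo <= state <= hi."""
        in_band = (states >= lo) & (states <= hi)
        edges = np.diff(in_band.astype(int))
        starts = list(np.where(edges == 1)[0] + 1)
        ends = list(np.where(edges == -1)[0] + 1)
        if in_band[0]:
            starts.insert(0, 0)
        if in_band[-1]:
            ends.append(len(states))
        return list(zip(starts, ends))

    # Toy guardian state record: two locking attempts, one reaching 600.
    states = np.array([10, 20, 50, 100, 105, 600, 15, 30, 101, 10])
    print(drmi_stretches(states))   # [(1, 4), (7, 9)]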


Non-image files attached to this report
H1 CAL (CDS)
francisco.llamas@LIGO.ORG - posted 17:11, Tuesday 29 July 2025 (86091)
EX Pcal cabling...

FranciscoL, TonyS

Summary: We connected a BNC from the "Out Mon" of D1300599 to "In 2" of D1400423. The channel H1:CAL-PCALX_OFS_AOM_DRIVE_MON_OUT16 now plots what the AOM is doing. Problem solved.


Earlier today I went to track down where in the chain of connections the InLoopOut signal was "lost" (a claim I made in 85753). All of the InLoopOut signals were working all the way out to the TCS rack. However, after a closer look at all the schematics for the Pcal and TCS boards, I noticed that I had been tracking the wrong channel. In other words, the InLoopOut channel was not the problematic channel -- that would otherwise be a super urgent matter, as InLoopOut is the signal coming straight from the OFS PD -- but rather the AOM Mon channel (for the interested reader, look at D1300226V13 --> page 3 --> X1-TCS-C1-4 --> second (top to bottom) 9-pin D-Sub male output, to find the channel). By the time I realized my mistake, the maintenance period was long over, so I left it as a pending task for our next visit to EX.

The world gifted us very high seismic activity, so Tony and I finally fixed the problem by connecting the BNC that was missing according to the D1300226 schematic, as stated in the summary. Attached are screenshots of the ndscope before (earlier today, as I was looking for the problem) and after plugging the BNC into the chassis, in that order. The change is seen in the green trace on the second (top to bottom) plot.

Tagging CDS so we are all aware that the problem was solved and is now concluded.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 15:28, Tuesday 29 July 2025 - last comment - 08:29, Wednesday 30 July 2025(86080)
CDS Maintenance Summary: Tuesday 29th July 2025

WP12709 Replace EX Dolphin IX S600 Switch

Jonathan, Erik, EJ, Dave:

The Dolphin IX switch at EX was damaged by the 06apr2025 power glitch; it continued to function as a switch, but its network interface stopped working. This meant we couldn't fence a particular EX front end from the Dolphin fabric by disabling its switch port via the network interface; instead we were using the IPC_PAUSE function of the IOP models. Also, because the RCG needs to talk to a switch over the network on startup, Erik configured the EX front ends to control an unused port on the EY switch.

This morning Erik replaced the broken h1rfmfecex0 with a good spare. The temporary control-EY-switch-because-EX-is-broken change was removed.

Before this work the EX SWWD was bypassed on h1iopseiex and h1cdsrfm was powered down.

During the startup of the new switch, the IOP models for SUS and ISC were time-glitched, putting them into a DACKILL state. All models on h1susex and h1iscex were restarted to recover from this.

Several minutes later h1susex spontaneously crashed, requiring a reboot. Everything has been stable from that point onwards.

WP12687 Add STANDDOWN EPICS channels to DAQ

Dave:

I added a new H1EPICS_STANDDOWN.ini to the DAQ, it was installed as part of today's DAQ restart.

WP12719 Add two FCES Ion Pumps and One Gauge To Vacuum Controls

Gerardo, Janos, Patrick, Dave

Patrick modified the h0vacly Beckhoff to read out two new FCES Ion Pumps and a new Gauge.

The new H0EPICS_VACLY.ini was added to the DAQ, requiring an EDC+DAQ restart.

WP12689 Add SUS SR3/PR3 Fast Channels To DAQ

Jeff, Oli, Brian, Edgard, Dave:

New h1sussr3 and h1suspr3 models (HLTS suspensions) were installed this morning. Each model added two 512Hz fast channels to the DAQ. Renaming of subsystem parts resulted in the renaming of many fast and slow DAQ channels. A summary of the changes:

In sus/common/models three files were changed (svn version numbers shown):

HLTS_MASTER_W_EST.mdl production=r31259 new=32426

SIXOSEM_T_STAGE_MASTER_W_EST.mdl  production=r31287 new=32426

ESTIMATOR_PARTS.mdl production=r31241 new=32426

HLTS_MASTER_W_EST.mdl:

Only change is to the DAQ_Channels list: added two channels, M1_ADD_[P,Y]_TOTAL

SIXOSEM_T_STAGE_MASTER_W_EST.mdl:

At top level, change the names of the two ESTIMATOR_HXTS_M1_ONLY blocks:

PIT -> EST_P

YAW -> EST_Y

Inside the ADD block:

Add two testpoints P_TOTAL, Y_TOTAL (referenced by HLTS mdl)

ESTIMATOR_PARTS.mdl:

Rename block EST -> FUSION

Rename filtermodule DAMP_EST -> DAMP_FUSION

Rename epicspart DAMP_SIGMON -> OUT_DRIVEMON

Rename testpoint DAMP_SIG -> OUT_DRIVE

DAQ_Channels list changed according to the above renames.

DAQ Changes:

This results in a large number of DAQ changes for SR3 and PR3. For each model:

+496 slow chans, -496 slow chans (rename of 496 channels).

+64 fast chans, -62 fast chans (add 2 chans, rename 62 chans).
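
As a toy illustration of how renames show up as paired add/remove counts in a channel-list diff (channel names below are made up for the example):

    # Channels only in the new list are "added", only in the old list are
    # "removed"; a matched removal/addition pair is a rename.
    old = {"H1:SUS-SR3_M1_DAMP_EST_P_OUT_DQ",
           "H1:SUS-SR3_M1_TEST_L_IN_DQ"}
    new = {"H1:SUS-SR3_M1_DAMP_FUSION_P_OUT_DQ",   # rename of DAMP_EST
           "H1:SUS-SR3_M1_TEST_L_IN_DQ",
           "H1:SUS-SR3_M1_ADD_P_TOTAL_DQ"}          # genuinely new channel

    added, removed = sorted(new - old), sorted(old - new)
    print(f"+{len(added)} chans, -{len(removed)} chans")   # +2 chans, -1 chans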

DAQ Restart

Jonathan, Dave:

The DAQ was restarted for several changes:

New SR3 and PR3 INI, fast and slow channel renames, addition of 512Hz fast channels.

New H0EPICS_VACLY.ini, adding Ion Pumps and Gauge to EDC.

New H1EPICS_STANDDOWN.ini, adding ifo standdown channels to EDC.

This was a full EDC DAQ restart. Procedure was:

stop TW0 and TW1, then restart EDC

restart DAQ 0-leg

restart DAQ 1-leg

As usual, GDS1 needed a second restart; unusually, FW1 spontaneously restarted itself after having run for 55 minutes, an uncommon late restart.

Jonathan tested new FW2 code which sets the run number in one place and propagates it to the various frame types.

Comments related to this report
david.barker@LIGO.ORG - 08:29, Wednesday 30 July 2025 (86101)

Detailed DAQ changes in attached file

Non-image files attached to this comment
david.barker@LIGO.ORG - 08:20, Wednesday 30 July 2025 (86100)

Tue29Jul2025
LOC TIME HOSTNAME     MODEL/REBOOT
09:02:53 h1susex      h1iopsusex  <<< Restarts following EX Dolphin IXS600 switch replacement
09:02:57 h1iscex      h1iopiscex  
09:03:07 h1susex      h1susetmx   
09:03:11 h1iscex      h1pemex     
09:03:21 h1susex      h1sustmsx   
09:03:25 h1iscex      h1iscex     
09:03:35 h1susex      h1susetmxpi 
09:03:39 h1iscex      h1calex     
09:03:53 h1iscex      h1alsex     


09:11:30 h1susex      h1iopsusex  <<< h1susex crash
09:11:43 h1susex      h1susetmx   
09:11:56 h1susex      h1sustmsx   
09:12:09 h1susex      h1susetmxpi 


12:15:20 h1sush2a     h1suspr3    <<< New models, EST rename and 2 fast chans added
12:16:28 h1sush56     h1sussr3    


12:19:29 h1susauxb123 h1edc[DAQ] <<< EDC for new VAC-LY and STANDDOWN
12:21:00 h1daqdc0     [DAQ] <<< 0-leg
12:21:08 h1daqfw0     [DAQ]
12:21:08 h1daqtw0     [DAQ]
12:21:10 h1daqnds0    [DAQ]
12:21:17 h1daqgds0    [DAQ]
12:24:10 h1daqdc1     [DAQ] <<< 1-leg
12:24:17 h1daqfw1     [DAQ]
12:24:17 h1daqtw1     [DAQ]
12:24:20 h1daqnds1    [DAQ]
12:24:27 h1daqgds1    [DAQ]
12:25:18 h1daqgds1    [DAQ] <<< GDS1 2nd restart


13:19:29 h1daqfw1     [DAQ] <<< spontaneous restart


16:17:57 h1susex      h1iopsusex  <<< Replace TMSX 18bit-DAC
16:18:10 h1susex      h1susetmx   
16:18:23 h1susex      h1sustmsx   
16:18:36 h1susex      h1susetmxpi 
 
