H1 General (SUS)
anthony.sanchez@LIGO.ORG - posted 22:08, Monday 18 November 2024 (81346)
Monday Ops Eve Shift End

TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Shift consisted of 3 locks and 2 IMC locklosses.
Locking has been fairly straightforward once the IMC decides to get and stay locked.
Survived a 5.6M Tonga quake and a few PI ring-ups.

Tagging SUS because ITMY Mode 5 is ringing up.  I have turned OFF the gain to it, as the current nominal state has been making ITMY Mode 5 worse for the last few locks.

LOG:

Start Time System Name Location Laser_Haz Task Time End
00:24 EPO Corey +1 Overpass, Roof N Tour 00:24
01:06 PCAL Rick S PCAL Lab Y Looking for parts 02:16

Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 20:50, Monday 18 November 2024 (81345)
Monday Eve Lockloss #2

4:20 UTC Lockloss https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1416025235

This Alog was brought to you by your favorite IMC Lockloss tag.

Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 19:24, Monday 18 November 2024 (81344)
Another IMC lockloss from Nominal Low Noise.

TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 16mph Gusts, 9mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.67 μm/s
QUICK SUMMARY:
1:57:53 UTC  IMC Lockloss
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/?event=1416016690

Fought with the IMC a little to get it to lock by requesting Down, Init, and Offline a few times. But once it finally locked, relocking went fast.
2:55:01 UTC  Nominal LowNoise

Incoming 5.6M EQ.


3:04 UTC Observing Reached

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 16:39, Monday 18 November 2024 (81342)
Monday Ops Eve Shift Start

TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.70 μm/s
QUICK SUMMARY:
Despite the useism being elevated, H1 has been locked for 20 minutes.
The plan is to continue Observing all night.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:26, Monday 18 November 2024 (81341)
OPS Day Shift Summary

TITLE: 11/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 00:17 UTC

A rough day for locking.

First, we had planned commissioning from 8:30 AM to 11:30 AM, during which commissioners continued investigating the IMC.

Sheila went into the LVEA to realign onto the POP and ALS beatnotes following a PR3 move that gave higher buildups, which was successful. For this, we stayed at CHECK_VIOLINS_BEFORE_POWERUP (losing lock twice from there due to the IMC losing lock).

We then got to locking and were able to acquire NLN and go into OBSERVING for 30 minutes (starting 21:44 UTC). The IMC glitch then caused a Lockloss (alog 81338).

After this LL, we were unable to lock DRMI despite having just done an initial alignment. We tried touching up PRM and SRM during PRMI and DRMI respectively, succeeding in the former and failing in the latter.

I decided to do an initial alignment, which ran fully automatically. We were able to lock DRMI, but a lockloss at CARM_5_PICOMETERS, again due to the IMC glitch, brought us down. We then came back up to NLN fully automatically, finally getting to OBSERVING.

Overall, high microseism and IMC glitches made locking difficult. We still got there in the end though.
LOG:

Start Time System Name Location Laser_Haz Task Time End
21:41 SAF Laser LVEA YES LVEA is laser HAZARD 08:21
16:36 FAC Karen Optics Lab N Technical Cleaning 16:36
16:53 FAC Kim MX, EX N Technical Cleaning 17:57
16:58 FAC Karen MY N Technical cleaning 18:57
17:38 VAC Janos EY N Compressor measurement 17:59
18:28 ISC Sheila, Oli LVEA Y Realigning ALS Beatnote 19:28
22:33 CDS Patrick MSR N Timing removed from the Beckhoff system 22:41
00:24 EPO Corey +1 Overpass, Roof N Tour 00:24
H1 CDS
patrick.thomas@LIGO.ORG - posted 15:06, Monday 18 November 2024 (81339)
Removed timing fiber from Beckhoff spare computer in MSR
Dave, Fernando, Patrick

Fernando and I powered off the computer, unplugged the power and other cables from the back of it to slide it forward in the rack, took the top off the chassis, disconnected the blue timing fiber from the PCI board, capped the fiber ends, pulled the fiber out of the chassis, slid the computer back into the rack, and plugged the cables back in, except for the KVM switch cable, which is not used anyway. We left the computer on.

This was to stop the timing overview from turning red when the computer is powered on.

We did not put in a work permit, but cleared it with Daniel and informed the operator.
H1 ISC (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 14:18, Monday 18 November 2024 (81338)
Lockloss 22:15 UTC

The IMC caused a lockloss 30 mins into observing.

H1 CDS
david.barker@LIGO.ORG - posted 11:13, Monday 18 November 2024 (81335)
EY CNS-II GPS receiver glitched again this morning 05:47 PDT for 12 mins

We had another instance of the -800 ns timing glitch of the EY CNS-II GPS receiver this morning, from 05:47:50 to 06:00:04 PDT (duration 12 min 11 sec).

A reminder of this receiver's history:

At EX and EY we had no CNS-II issues for the whole of O4 until the past few months.

The EY 1PPS, nominally with a -250 +/- 50 ns difference to the timing system, went into a -800 ns diff state on 30th September, then again on 1st October and again on 18th October.

We replaced its power supply on Tue 22nd October 2024, after which there were no glitches for 27 days, until this morning.

Images attached to this report
H1 SEI (SEI)
ryan.crouch@LIGO.ORG - posted 10:44, Monday 18 November 2024 (81331)
H1 ISI CPS Noise Spectra Check - Weekly FAMIS

Closes FAMIS26017, last checked in alog81183.

HAMs:

HAM sensor noise at 7-9 Hz seems to be reduced for most chambers (HAMs and both stages of the BSCs).

BSCs:

ITMY_ST1_CPSINF_V2 looks reduced above 30 Hz.

For all the BSC ST2s specifically, sensor noise at 6-9 Hz is reduced.

Non-image files attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:30, Monday 18 November 2024 (81332)
Mon CP1 Fill

Mon Nov 18 10:13:07 2024 INFO: Fill completed in 13min 4secs

Images attached to this report
H1 ISC
camilla.compton@LIGO.ORG - posted 10:03, Monday 18 November 2024 - last comment - 21:14, Tuesday 17 December 2024(81329)
LSC POP found to be clipping, PR3 being moved to improve this.

Sheila, Ibrahim,  Jenne, Vicky, Camilla

After Ibrahim told Sheila that DRMI has been struggling to lock, she checked the POP signals and found that something's been drifting and POP now appears to be clipping; see POP_A_LF trending down. We use this PD in full lock, so we don't want any clipping.

We had similar issues last year that gave us stability problems. Those issues didn't come with "IMC" locklosses, so we think this isn't the main issue we're having now, but it may be affecting stability.

Trends showing the POP clipping last year, and now. Last year we moved PR3 to remove this clipping while looking at the coherence between ASC-POP_A_NSUM (one of the QPDs on the sled in HAM3) and LSC-POP_A (the LSC sensor in HAM1): 74578, 74580, 74581.

Similar coherence plots to 74580, for now, show that the coherence is bad:

Sheila is moving the PRC cavity now, which is improving the POPAIR signals. The attached plot is the ASC-POP_A_NSUM to LSC-POP_A coherence with DRMI only locked, before and during the move; see the improvement. Sheila is checking that she's in the best position now.
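
For reference, a sketch of how this kind of coherence check can be made with gwpy (the exact DQ channel names and GPS span below are assumptions, not taken from the attached plots):

    # Sketch: ASC-POP_A_NSUM to LSC-POP_A coherence with gwpy.
    # Channel names and times are illustrative assumptions.
    from gwpy.timeseries import TimeSeries

    start, end = 1415980000, 1415980600  # ~10 min of DRMI-locked data (example span)

    qpd = TimeSeries.get('H1:ASC-POP_A_NSUM_OUT_DQ', start, end)
    lsc = TimeSeries.get('H1:LSC-POP_A_LF_OUT_DQ', start, end)

    # Resample to a common rate before estimating coherence
    rate = min(qpd.sample_rate.value, lsc.sample_rate.value)
    qpd, lsc = qpd.resample(rate), lsc.resample(rate)

    # High low-frequency coherence means both diodes see the same power
    # fluctuations; clipping in one path shows up as lost coherence.
    coh = qpd.coherence(lsc, fftlength=8, overlap=4)
    plot = coh.plot(ylabel='Coherence')
    plot.show()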

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 10:20, Monday 18 November 2024 (81330)

We have been holding 2W with ASC on before powering up, and using the guardian state PR2_SPOT move, which lets us move the sliders on PR3 and moves PR2, IM4 and PRM sliders to follow PR3. 

Moving PR3 by -1.7 urad (slider counts) increased the power on LSC POP and POPAIR 18, but slightly misaligned the ALS COMM beatnote.  We continued moving PR3 to see how wide the plateau is on LSC POP; we moved it another -3 urad without seeing the power drop on LSC POP, but the ALS paths started to have trouble staying locked, so we stopped there.  (POP18 was still improving, but I went to ISCT1 after the OFI vent and adjusted alignment onto that diode 79883, so it isn't a nice reference.) I moved PR3 yaw back to 96 urad on the yaw slider (we started at 100 urad), a location where we were near the top for POP and POPAIR 18, so in total we started with PR3 yaw at 100 on the slider and ended at 96 on the slider.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 21:14, Tuesday 17 December 2024 (81885)

Please see Tony's comment about DRMI lock acquisition times here:  81879

When we moved PR3, we increased the level of light on the POPAIR B diode.  That does two things: first, it means the LSC controls were triggered at a lower percentage of the full buildups, which could cause difficulty locking DRMI (we want to maintain trigger thresholds similar to what was documented in 44348). Also, the alignment check that Tony describes won't work well, because we didn't update the threshold the guardian uses to decide that the alignment is poor and we need to do PRMI.

From Tony's histograms I'm not sure which of these is the main impact on DRMI locking times, whether it's the triggering or the alignment check.  We updated the trigger levels today, but not the threshold for the alignment check.

In the future we should check all these levels against 44348 again when we have a change in power on POPAIR.

H1 ISC
camilla.compton@LIGO.ORG - posted 09:15, Monday 18 November 2024 (81328)
Microseism level seemed correlated with IMC issues this weekend, but not further back.

Ibrahim, Camilla

Over the weekend we had a long ~13 hour lock and 2 days with minimal IMC issues; this was during a time when the microseism was low. Plot attached.  We see similarly long locks around November 6th/7th when the microseism was low, but we always have fewer issues locking when the microseism is low. Plot attached.

We checked back to the quiet IMC time shown in 80951, between Oct 11th and 16th, but the microseism was actually relatively elevated during this time. Plot attached.

In both plots, the regions between the t-cursors show quieter IMC glitch times.
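
As a hedged illustration, a minute-trend of the secondary-microseism band can be pulled with gwpy like this (the BLRMS channel name and date span are assumptions based on common LHO conventions, not the channels used for the attached plots):

    # Sketch: trend the 0.1-0.3 Hz ground-motion BLRMS around these locks.
    from gwpy.timeseries import TimeSeries

    # Minute trend of an assumed LHO ground-motion BLRMS channel
    useism = TimeSeries.get(
        'H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M.mean,m-trend',
        '2024-11-04', '2024-11-18')

    plot = useism.plot(ylabel='Ground motion [nm/s]')
    plot.show()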

Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 09:09, Monday 18 November 2024 (81327)
PSL 10-Day Trends

FAMIS 31060

Enclosure incursion last Tuesday (alog81247); enclosure temps came back as expected, but power out of AMP2 has been lower by about 1W since then.

PSL Beckhoff was restarted last Thursday (alog81281), which apparently brought the chiller back with a slightly lower flow rate. At this time, there was also a slight jump up in temperature for AMP2, DB2, CB2, and the diode room (?).

PMC reflected power has also been on the rise again, but generally steady for the past ~4 days.

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:37, Monday 18 November 2024 (81326)
OPS Day Shift Start

TITLE: 11/18 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 15mph Gusts, 11mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.58 μm/s
QUICK SUMMARY:

IFO is LOCKING at ACQUIRE DRMI

When I arrived, IFO had just found PRMI. It seems this is the first lock acquisition since an auto-initial alignment following the last lockloss at 14:23 UTC. Microseism is high and increasing.

H1 General
thomas.shaffer@LIGO.ORG - posted 04:07, Monday 18 November 2024 (81325)
Ops Owl Update

The IFO timed out after a long period of IMC locking struggles, then it got stuck trying to lock ALSY during initial alignment. A minor tweak to ETMY fixed the latter issue, but then SRM tripped during the SRC alignment step. I adjusted SRM to fix this. We then made it up to just after the power-up before losing lock again (141596598, IMC tag).

I'll let it try a few more times by itself before stepping in again.

H1 TCS
corey.gray@LIGO.ORG - posted 17:47, Wednesday 13 November 2024 - last comment - 10:52, Monday 18 November 2024(81271)
TCSy Laser Unlocks & Relocks, Dropping H1 From Observing Briefly

H1 was dropped out of OBSERVING due to the TCS ITMY CO2 laser unlocking at 0118 UTC.  The TCS_ITMY_CO2 guardian relocked it within 2 minutes.

It was hard to see the reason why at first (there were no SDF diffs), but eventually I saw a User Message via GRD IFO (on Ops Overview) pointing to something wrong with TCS_ITMY_CO2.  Oli was also here, and they mentioned seeing this, along with Camilla, on Oct 9th (alog)---this was the known issue of the TCSY laser nearing the end of its life.  It was replaced a few weeks later on Oct 22nd (alog).

Here are some of the lines from the LOG:

2024-11-13_19:43:31.583249Z TCS_ITMY_CO2 executing state: LASER_UP (10)
2024-11-14_01:18:56.880404Z TCS_ITMY_CO2 [LASER_UP.run] laser unlocked. jumping to find new locking point

...
2024-11-14_01:20:12.130794Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] ezca: H1:TCS-ITMY_CO2_PZT_SET_POINT_OFFSET => 35.109375
2024-11-14_01:20:12.196990Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] ezca: H1:TCS-ITMY_CO2_PZT_SET_POINT_OFFSET => 35.0
2024-11-14_01:20:12.297890Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] timer['wait'] done
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 EDGE: RESET_PZT_VOLTAGE->ENGAGE_CHILLER_SERVO
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 calculating path: ENGAGE_CHILLER_SERVO->LASER_UP
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 new target: LASER_UP

Comments related to this report
camilla.compton@LIGO.ORG - 10:52, Monday 18 November 2024 (81334)

CO2Y has only unlocked/relocked once since we power cycled the chassis on Thursday 14th (t-cursor in attached plot).

Images attached to this comment
corey.gray@LIGO.ORG - 17:48, Wednesday 13 November 2024 (81272)TCS

0142:  ~30 min later we had another OBSERVING drop due to a CO2Y laser unlock.

thomas.shaffer@LIGO.ORG - 09:21, Thursday 14 November 2024 (81276)

While it is normal for the CO2 lasers to unlock from time to time, whether from running out of PZT range or just generically losing lock, this has been happening more frequently than normal. The PZT doesn't seem to be running out of range, but it does seem to be running away for some reason. Looking back, it has been unlocking itself ~2 times a day, but we haven't noticed since we haven't had a locked IFO for long enough lately.

We aren't really sure why this would be the case; chiller and laser signals all look as they usually do. Just to try the classic "turn it off and on again", Camilla went out to the LVEA and power cycled the control chassis. We'll keep an eye on it today, and if it happens again and we have time to look further into it, we'll see what else we can do.

Images attached to this comment
H1 ISC
camilla.compton@LIGO.ORG - posted 15:38, Wednesday 13 November 2024 - last comment - 13:35, Monday 18 November 2024(81250)
IMC TRANS power changes up to 1% once we get to NLN and are thermalizing

While looking at locklosses today, Vicky and I noticed that after we reach NLN, the MC2/IM4 TRANS power increases by 0.1 to 0.5%.

Daniel helped look at this, and we expect the ISS to adjust to keep the power out of the IMC constant, but the power after the IMC on IM4 TRANS (not centered) changes by ~1% too. Everything downstream of the ISS AOM sees this change (plot); something is seeing a slow ~1 hour thermalization.

The same signals at LLO show a similar amount of noise (slightly more in MC2_TRANS) but no thermalization drift; LLO, however, has a lot less IFO thermalization.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 13:35, Monday 18 November 2024 (81337)

Elenna and Craig noted this in 68370 too. 

H1 PSL (ISC, Lockloss, OpsInfo)
camilla.compton@LIGO.ORG - posted 11:12, Monday 11 November 2024 - last comment - 15:06, Tuesday 19 November 2024(81193)
Overview of PSL Story so far

After discussions with control room team: Jason, Ryan S, Sheila, Tony, Vicky, Elenna, Camilla 

Conclusions: The NPRO glitches aren't new; something changed to make us less able to survive them in lock. The NPRO was swapped, so it isn't the issue. Looking at the timing of these "IMC" locklosses, they are caused by something in the IMC or upstream 81155.

Tagging OpsInfo: Premade templates for looking at locklosses are in /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/PSL_lockloss_search_fast_channels.yaml and will come up with the commands 'lockloss select' or 'lockloss show 1415370858'.

Comments related to this report
camilla.compton@LIGO.ORG - 15:45, Monday 18 November 2024 (81340)ISC
  • November 12th:
    • Took TFs 81247
    • Tightened loose ISS AOM RF cable 81247
    • Tested and characterized in-service TTFSS, SN LHO01: tuned notches and replaced a bad PA85 op amp 81247
  • November 13th:
    • IMC stayed locked without IFO 81262
    • FSS OLG and gain change 81254
    • IMC OLG and gain change 81259
  • November 14th:
    • PSL Beckhoff power cycle 81281
    • Replace 35MHz with Marconi: 81277
    • Tightened loose ISS AOM cable: 81280
  • November 18th:
    • LSC POP un-clipped 81329
    • Issues found with IMC locked checker in ISC_LOCK's READY 81336 (maybe keeping us in DOWN longer than needed)

Updated the list of things that have been checked above, and attached a plot where I've split the IMC-only tagged locklosses (orange) from those tagged IMC and FSS_OSCILLATION (yellow). The non-IMC ones (blue) are the normal locklosses, and (mostly) the only kind we saw before September.

Images attached to this comment
camilla.compton@LIGO.ORG - 15:06, Tuesday 19 November 2024 (81365)ISC
  • November 14th:
    • PMC OLG measured 81283
  • November 19th:
    • Scopes setup at PSL racks 81359
    • PMC+FSS only test shows glitches 81356
    • Reverted EX 28bit LIGO-DAC change back to 20-bit 81350
H1 ISC
oli.patane@LIGO.ORG - posted 09:34, Friday 08 November 2024 - last comment - 11:20, Monday 18 November 2024(81142)
Edit to ISC_LOCK to make sure the IMC is locked before leaving READY

Camilla, Oli

Recently, because of the PSL/IMC issues, we've been having a lot of times where the IFO (according to verbals) seemingly goes into READY and then immediately goes to DOWN because the IMC is not locked. Camilla and I checked this out today, and it turns out that these locklosses are actually from LOCKING_ARMS_GREEN: the checker in READY that is supposed to make sure the IMC is locked was actually written as nodes['IMC_LOCK'] == 'LOCKED' (line 947), which just checks that the requested state for IMC_LOCK is LOCKED; it doesn't actually make sure the IMC is locked. So READY returns True, ISC_LOCK continues to LOCKING_ARMS_GREEN, and we immediately lose lock, because LOCKING_ARMS_GREEN actually does make sure the IMC is locked. This all happens so fast that verbals doesn't have time to announce LOCKING_ARMS_GREEN before we are taken to DOWN.

To (hopefully) solve this problem, we changed nodes['IMC_LOCK'] == 'LOCKED' to nodes['IMC_LOCK'].arrived and nodes['IMC_LOCK'].done, which should make sure that we stay in READY until the IMC is fully locked. ISC_LOCK has been reloaded with these changes.
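
As a sketch of the change in Guardian terms (schematic only: nodes is the module-level NodeManager that the real ISC_LOCK code defines, and only the IMC_LOCK check itself is taken from the description above):

    from guardian import GuardState

    class READY(GuardState):
        def run(self):
            # Old (buggy) check: comparing the node to a string tests only
            # the *requested* state, so it is True as soon as LOCKED has
            # been requested, whether or not the IMC is actually there:
            #     return nodes['IMC_LOCK'] == 'LOCKED'
            #
            # New check: wait until IMC_LOCK has actually arrived at, and
            # completed, its requested state before leaving READY:
            return nodes['IMC_LOCK'].arrived and nodes['IMC_LOCK'].done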

Comments related to this report
thomas.shaffer@LIGO.ORG - 11:20, Monday 18 November 2024 (81336)OpsInfo

The reason it has been doing this is that there is a return True in the main method of ISC_LOCK's READY state. When a state returns True in its main method, Guardian skips the run method.

I've loaded the removal of this in ISC_LOCK.
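
A minimal illustration of that Guardian behavior (a hypothetical state, not the actual ISC_LOCK code):

    from guardian import GuardState

    class READY_SKETCH(GuardState):
        def main(self):
            # Before the fix, a bare `return True` here completed the state
            # immediately, so Guardian never polled run() and the IMC check
            # there was silently skipped.
            pass  # fix: the `return True` was removed

        def run(self):
            # Polled repeatedly once main() finishes without returning True;
            # the state completes only when this returns True.
            return nodes['IMC_LOCK'].arrived and nodes['IMC_LOCK'].done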
