H1 General
anthony.sanchez@LIGO.ORG - posted 17:01, Saturday 09 November 2024 (81168)
Saturday Ops Day Shift End

TITLE: 11/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Calibration
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
The morning was plagued with IMC Fault issues (alog).

After holding ISC_LOCK in DOWN and holding IMC in LOCKED (specifically to not be in "ISS_ON") to wait for the IMC to calm down, and then waiting out a 5.8M earthquake from Panama, I was able to get a lock all the way to Observing.

I was able to pass a locked IFO to Ibrahim.


LOG:                                                                                                                                                                                                                                                                                               

Start Time System Name Location Lazer_Haz Task Time End
21:41 SAF Laser LVEA YES LVEA is laser HAZARD 08:21
15:58 PEM Robert CS Mech room  -> Chiller Yard N Cable crossing road to End X from Mech room to Chiller yard 23:58
17:27 PEM Robert LVEA Yes Setting up chiller pad & turning off the lights. 22:06
20:30 Tours Mike, Fred & co Control Room N Saturday tours 23:58

 

H1 CAL
ibrahim.abouelfettouh@LIGO.ORG - posted 16:52, Saturday 09 November 2024 (81169)
Calibration Sweep 11/09

Calibration Sweep done with new ini file Louis asked us to use for Simulines: /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_20241005_lowerPcal_higherPUM.ini

Broadband Start Time: 1415232600

Broadband End Time: 1415204090

Simulines Start Time: 1415233192

Simulines End Time: 1415234617
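As a quick sanity check, the sweep length follows directly from the GPS stamps above (plain integer arithmetic, no site tools assumed):

```python
# Simulines sweep duration from the GPS stamps quoted above.
simulines_start = 1415233192  # GPS seconds
simulines_end = 1415234617
duration = simulines_end - simulines_start
print(duration, "seconds")  # 1425 s, roughly 24 minutes
```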

Files Saved:

2024-11-10 00:42:42,493 | INFO | Commencing data processing.
2024-11-10 00:42:42,493 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2024-11-10 00:43:19,012 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20241110T001937Z.hdf5
2024-11-10 00:43:19,020 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20241110T001937Z.hdf5
2024-11-10 00:43:19,025 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20241110T001937Z.hdf5
2024-11-10 00:43:19,030 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20241110T001937Z.hdf5
2024-11-10 00:43:19,034 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20241110T001937Z.hdf5
ICE default IO error handler doing an exit(), pid = 3252963, errno = 32

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:02, Saturday 09 November 2024 (81167)
OPS Eve Shift Start

TITLE: 11/10 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 122Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 0mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.43 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING as of 21:24 UTC

Since we are thermalized, I am about to go into CALIBRATION for our calibration sweep of the day.

H1 General (SEI)
anthony.sanchez@LIGO.ORG - posted 13:45, Saturday 09 November 2024 (81165)
Mid Shift report.

TITLE: 11/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 2mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.44 μm/s
QUICK SUMMARY:

I set ISC_LOCK to IDLE for a few minutes while waiting for the 5.8M earthquake to stop shaking us.

I've been trying to get this IFO to lock since I walked in this morning.
The IMC is unlocking and going into Fault at random times during the locking process and during initial alignment.
15:47UTC
15:56:31 UTC
And multiple other times, including when I had taken ISC_LOCK to IDLE.
I called Elenna to try and confirm where this glitch was coming from, and whether it was the reason I was losing all my locks.



Observing reached at 21:24 UTC
 

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:07, Saturday 09 November 2024 (81163)
Sat CP1 Fill

Sat Nov 09 10:05:20 2024 INFO: Fill completed in 5min 17secs

 

Images attached to this report
LHO FMCS
david.barker@LIGO.ORG - posted 10:03, Saturday 09 November 2024 - last comment - 08:50, Sunday 10 November 2024(81162)
Mid Y Chiller Supply Water Temperature Increase

Note to FMCS: the MY chilled supply water temperature increased, starting around 9am Fri 08 Nov 2024 PST.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 08:50, Sunday 10 November 2024 (81174)

Sunday trend attached. MY VEA temps are holding steady around 65F. The second pump started around noon Fri, but this did not bring the H2O_SUP temp back to nominal.

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 08:03, Saturday 09 November 2024 - last comment - 16:50, Saturday 09 November 2024(81161)
Saturday Ops Day Shift Start

TITLE: 11/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 4mph Gusts, 3mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.40 μm/s
QUICK SUMMARY:
Walking in, I was greeted with ISC_LOCK in ENGAGE_SOFT_LOOPS.
Unfortunately we lost lock at TRANSITION_FROM_ETMX and again at DRMI.
Starting an Initial_Alignment.

Notes on OWL:
The IFO called Ryan early in the morning: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=81160.
Over the OWL shift there were a number of locks and locklosses; I'll follow up with a comment to this alog with more information about each lockloss.

Comments related to this report
anthony.sanchez@LIGO.ORG - 11:19, Saturday 09 November 2024 (81164)Lockloss

Relocking notes:
Relocking has been difficult, as the IMC seems to be unlocking at random intervals.
Called Dr. Capote for backup.
Holding in IDLE until the IMC becomes stable.
IMC has been stable for 5 minutes now.
Starting the Initial_Alignment.



 

anthony.sanchez@LIGO.ORG - 16:50, Saturday 09 November 2024 (81166)Lockloss, PSL

I wanted to get to this earlier this morning; here is a breakdown of the locklosses last night:

Lockloss ID 1415198551 https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1415198551
Multiple screenshots are attached for this lockloss, including one of PSL-FSS_FAST_MON zoomed way in; all the normal ones are there too.
This one has that weird PSL/IMC issue and was tagged as such on the lockloss page.

Lockloss ID 1415193456: an unknown lockloss. This one does have some PSL-FSS_FAST_MON motion beforehand, but it doesn't seem as pronounced. It's also not tagged as IMC on the lockloss page.
Multiple screenshots are attached for this lockloss, just to delineate it from the other locklosses.

 

Lockloss ID 1415184378 was an earthquake; I'm not attaching screenshots for this one.
 

Lockloss ID 1415176335 https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1415176335
Multiple screenshots are attached for this lockloss, including one of PSL-FSS_FAST_MON zoomed way in; all the normal ones are there too.
This one has that weird PSL/IMC issue and was tagged as such on the lockloss page, just like the first one.

Images attached to this comment
H1 General
ryan.crouch@LIGO.ORG - posted 02:44, Saturday 09 November 2024 (81160)
OPS OWL assistance

H1 called for assistance after the 2-hour "NLN" locking timer expired. We were at LOWNOISE_COIL_DRIVERS, reached NLN shortly after, and went into Observing at 10:41 UTC. I didn't have to touch anything.

H1 General
anthony.sanchez@LIGO.ORG - posted 22:00, Friday 08 November 2024 (81159)
Friday Eve shift End

TITLE: 11/09 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

I was passed a Locked IFO:
Lockloss potentially caused by PSL issues: https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1415141124

Relocked by 23:38 UTC
Lockloss potentially caused by PSL issues: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=81153

Relocked by 3:50 UTC after an Initial Alignment
Unknown Lockloss https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=81156

H1 is currently locked and has been Observing for 2 hours.
Everything is running smoothly now.


LOG:
Josh -> H2 building to retrieve laptop.

 

H1 AOS
joshua.freed@LIGO.ORG - posted 20:35, Friday 08 November 2024 (81158)
PRM OSEM Noise Injections

J. Freed,

PRM shows very little coupling, but a strong unknown ~9.6 Hz noise was introduced.

Yesterday we did damping loop injections on all 6 BOSEMs on the PRM M1. This is a continuation of the work done previously for ITMX, ITMY, PR2, and PR3

As with PR3, data at gains of 300 and 600 were collected (300 is labeled as low_noise).

The plots, code, and flagged frequencies are located at /ligo/home/joshua.freed/20241031/scrpts, while the diaggui files are at /ligo/home/joshua.freed/20241031/data. This time, the 600-gain data was also saved as a reference in the diaggui files (see below), in 20241107_H1SUSPRM_M1_OSEMNoise_T3_LN.xml

I used part of Elenna's code (80862) to produce some of my plots. The code is under scripts in test.py
 
PRM_LN_OSEM_darm.png shows PRM's noise contributions to DARM with 300 gain on the injections; PRM_OSEM_darm.png shows the same at 600 gain. The 300-gain plot shows double the contribution of the 600-gain plot around 9 Hz. This may be because the code normalizes the contributions based on the injections: since this strange 9 Hz noise seems to be coming from a source other than the injections, it got halved in the 600 plot relative to the 300.
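One way to see that factor of two: if the plotted contribution is the measured response divided by the injection amplitude, then a line that does not scale with the injection gets halved when the gain doubles. A toy illustration (all numbers are made up, not measured values):

```python
# Toy model: projected contribution = response / injection gain.
# A real coupling scales with the gain; an unrelated line (like the ~9.6 Hz
# feature) does not, so its normalized contribution halves at double gain.
gains = [300, 600]
coupled = [1.0 * g for g in gains]  # response that scales with the injection
line = [5.0, 5.0]                   # fixed, gain-independent response

for g, c, l in zip(gains, coupled, line):
    print(g, c / g, l / g)  # coupled: constant; line: halved at 600 vs 300
```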
 
The interferometer went down during the last injection, so there is no data for T3 at 600 gain.
 
PRM_OSEM_darm_NoSum.png shows that the majority of the noise contributions to DARM were caused by T2 at 600 gain.
 
GPS times of injections (reference numbers in diaggui files):
Background time: 1415041155 (ref0 DARM, ref1 LF_out, ref2 RT_out, ref3 SD_out, ref4 T1_out, ref5 T2_out, ref6 T3_out)
LFL time: 1415041497 (ref7 DARM, ref8 LF_out)
LF time:  1415041737 (ref9 DARM, ref10 LF_out)
RTL time: 1415041886 (ref11 DARM, ref12 RT_out)
RT time:  1415041999 (ref13 DARM, ref14 RT_out)
SDL time: 1415042143 (ref15 DARM, ref16 SD_out)
SD time:  1415042269 (ref17 DARM, ref18 SD_out)
T1L time: 1415042417 (ref19 DARM, ref20 T1_out)
T1 time:  1415042536 (ref21 DARM, ref22 T1_out)
T2L time: 1415042696 (ref23 DARM, ref24 T2_out)
T2 time:  1415042815 (ref25 DARM, ref26 T2_out)
T3L time: 1415042980 (ref27 DARM, ref28 T3_out)
T3 time:  ---------------
 
P.S. The injections were not the sole cause of the interferometer going down, but they may have contributed to it.
Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 19:01, Friday 08 November 2024 (81156)
Lockloss after 1 Hour 57 minutes

Unknown Lockloss
This lockloss does not have the same PSL-FSS signal signature that the previous locklosses tonight have had.

Vickie and I both think this was a different type of lockloss than what we have seen earlier tonight.

Images attached to this report
H1 General (Lockloss, PSL)
anthony.sanchez@LIGO.ORG - posted 16:55, Friday 08 November 2024 - last comment - 18:35, Friday 08 November 2024(81153)
Lockloss after 14 minutes of NLN

Another lockloss
Looks like this lockloss may also have been an IMC/PSL-FSS issue.
 

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 18:35, Friday 08 November 2024 (81155)Lockloss

Daniel, Tony, Vicky

edit: see PSL lockloss trends for the previous lockloss, 81157. Both this and the previous lockloss look PSL-related (i.e. upstream of the IMC, which does see power fluctuations earlier than suggested by the IMC_TRANS channel). It might be worth retrying the IMC stay-locked tests with ISS_ON vs. ISS_OFF, now that the PSL laser is not obviously mode hopping.

This lockloss looks strange: see 20-ms trends, 1-second trends, 5-second trends. It looks like a very fast lockloss (< 1 ms) coincident with various FSS/ISS/TPD glitches; is this different from what we saw before? Here, the AS port loses light within < 1 ms (as witnessed on Keita's new lockloss monitor, AS_A, and LSC-REFL, which sees a corresponding power increase). The lockloss is within 1 ms of changes in PSL-FSS TPD, FAST_MON, PC_MON, and ISS AOM, SECONDLOOP, etc.

Weirdly, IMC-TRANS_IN1_DQ (a fast 16k channel) does not see a power change until 5 ms later, which I don't understand. It looks like DARM loses lock (which should be why the AS port and LSC-REFL both change inversely), while the power on IMC-TRANS doesn't change, despite the PSL FSS and ISS loops all seeing glitches.

Daniel suggested there could be some additional analog filtering on IMC-TRANS that slows this channel down (even though it is recorded at 16k?). We're not sure why there is such a delay, or whether this IMC-TRANS channel is a reliable timing metric for what is happening. The ~5 ms seems far too long for the storage time given the IMC's ~8.8 kHz cavity pole (1/(2*pi*8.8kHz) ~ 18 microseconds).
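The storage-time estimate above is just the single-pole time constant; a quick check (the 8.8 kHz cavity pole is taken from the entry above):

```python
import math

f_pole = 8.8e3  # Hz, approximate IMC cavity pole quoted above
tau = 1 / (2 * math.pi * f_pole)  # single-pole time constant
print(f"{tau * 1e6:.1f} microseconds")  # ~18 us, far shorter than the ~5 ms delay
```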

Daniel helped add some fast PSL-ISS_SECONDLOOP channels to Sheila's scopes, and I've added Keita's lockloss monitor channel too (81080, H1:PEM-CS_ASC_5_19_2K_OUT_DQ), then saved this scope at /ligo/home/victoriaa.xu/ndscope/PSL/PSL_lockloss_search_fast_iss.yaml

Images attached to this comment
H1 General (Lockloss, PSL)
anthony.sanchez@LIGO.ORG - posted 15:14, Friday 08 November 2024 - last comment - 19:43, Friday 08 November 2024(81152)
Friday Eve shift Early start

TITLE: 11/08 Eve Shift: 0030-0600 UTC (1430-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 1mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.43 μm/s
QUICK SUMMARY:
H1 has been locked for 1 hour.
All systems running well.
Lockloss potentially caused by PSL issues.

The first channel that shows motion is H1:PSL-FSS_FAST_MON_OUT_DQ.

Relocking now; currently at ENGAGE_SOFT_LOOPS.

 

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 19:43, Friday 08 November 2024 (81157)

Trends on this lockloss are coincident with PSL FSS / ISS / TPD glitches - so I think the PSL tag is appropriate here.

We added MC2_TRANS to the scope (top right subplot), and it shows power changes earlier than IMC_TRANS_{IN1_OUT}_DQ channels.

I think this means that IMC power changes do happen within 1 ms of the FSS glitches starting, which wasn't clear from the IMC_TRANS channels we've been using (where the 16k IN1_DQ and 2k OUT_DQ channels both showed >1 ms delays in the IMC power changing).

Images attached to this comment
H1 SEI
anthony.sanchez@LIGO.ORG - posted 14:40, Friday 08 November 2024 (81150)
Trend the BRSX and BRSY Drift

FAMIS 26448 : Trend the BRSX & BRSY Drift

The minute trends of the driftmon for BRSX have 2 spikes, but both of those are from Jim adjusting the BRSX on Sep 19th and Sep 24th.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 14:38, Friday 08 November 2024 (81151)
Ops Day Shift End

TITLE: 11/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Putting my alog in early since I'm leaving early. Currently observing at 160Mpc and have been locked for 50 minutes. Secondary useism is high-ish, but we just went back to CALM from USEISM. The relock attempt from when I came in this morning wasn't too bad, and relocking after the lockloss was also not bad, aside from SRM tripping during SRC align in initial alignment and not wanting to lock SRC.
LOG:

15:30 In PREP_FOR_LOCKING
    - stuck?
15:42 DOWN and then started relocking
    16:28 Lockloss from MAX_POWER
    16:43 Lockloss from ACQUIRE_DRMI_1F
    16:45 Started an initial alignment
    17:04 Initial alignment done
17:55 NOMINAL_LOW_NOISE
17:59 Observing

20:17 Lockloss due to earthquake
    20:25 Started an initial alignment
        - SRM WD tripped during SRC aligning
            - I raised the WD trip level for SRM M3 to 200 from 150 since we get this tripping quite a bit
            - Tried SR2 align and AS centering again, didn't work
            - Left IA
    20:56 Left IA, relocking
21:46 NOMINAL_LOW_NOISE
22:05 Observing

                                                                                                                                                                                                                                                                                                

Start Time System Name Location Lazer_Haz Task Time End
21:41 SAF Laser LVEA YES LVEA is laser HAZARD 08:21
17:33 PEM Robert CER n Moving cable for grounding study 17:39
17:37 FAC Kim H2 n Tech clean 17:54
20:38 PEM Robert LVEA Y Setting up scattering measurement 20:58
H1 ISC (ISC)
keita.kawabe@LIGO.ORG - posted 18:45, Thursday 07 November 2024 - last comment - 18:01, Friday 08 November 2024(81130)
Fast shutter bouncing happens with an inconvenient timing

Summary:

The attachment shows a lockloss at around 11:31 PST today (tGPS~1415043099). It seems that the fast shutter, after it shut, bounced down and momentarily unblocked the AS beam at around the time the power peaked.

For this specific lock loss, the energy deposited into HAM6 was about 17 J, and the energy that got past the fast shutter is estimated to be ~12 J because of the bouncing.

The bouncing motion has been known to exist for some time (e.g. alogs 79104 and 79397; the latter has an in-air slow-mo video showing the bouncing), and it seems as if the self-damping is not working. Could this be an electronics issue, a mechanical adjustment issue, or something else?

Also, if we ever open HAM6 again (before this fast shutter is decommissioned), it might be a good idea to raise the shutter unit (shim?) so the beam is still blocked when the mirror reaches its lowest position while bouncing up and down.

Details:

The top panel shows the newly installed lockloss power monitor (blue) and ASC-AS_A_DC_NSUM which monitors the power downstream of the fast shutter (orange).

The shutter was triggered when the power was ~3 W at around t~0.36, and the ASC-AS_C level drops by a factor of ~1E4 immediately (the FS mirror spec is T<1000 ppm; it seems to be ~100 ppm in reality).

However, 50 ms later at t~0.41, the shutter bounced back down and stayed open for about 15 ms. Unfortunately this roughly coincided with the time when the power coming into HAM6 reached its maximum of ~760 W.

Green is a rough projection of the power that went to the OMC (i.e. a "what AS_A_DC_NSUM would have looked like if it didn't rail" trace). This was made by simply multiplying the power monitor by AS_A_DC_NSUM>0.1 (1 if true, 0 if false), ignoring the 2nd, 3rd, and 4th bounces.
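The masking step described above can be sketched as follows. Everything here is synthetic stand-in data (the real inputs are the HAM6 power monitor and ASC-AS_A_DC_NSUM time series), so the waveforms and numbers are illustrative only:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000)  # seconds
dt = t[1] - t[0]
# Incoming power: stand-in pulse peaking near t=0.41 s at ~760 W.
p_mon = 760.0 * np.exp(-((t - 0.41) / 0.05) ** 2)
# Downstream monitor: near zero while the shutter blocks the beam.
as_a = np.where((t > 0.36) & (t < 0.56), 1e-4, 1.0)
shutter_open = (as_a > 0.1).astype(float)  # 1 if open, 0 if closed
p_past_fs = p_mon * shutter_open           # projected power past the shutter
e_in = p_mon.sum() * dt                    # total energy into HAM6 [J]
e_past = p_past_fs.sum() * dt              # energy that got past the FS [J]
```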

All in all, for this specific lock loss, the energy coming into HAM6 was 16~17 J, and the energy that got past the FS was about 11~12 J because of the timing of the bounce vs. the power. The OMC seems to be protected by the PZT, though; see the 2nd attachment with a wider time range.

The time scale of the lock loss spike itself doesn't seem that different from the L1 lock loss in LLO alog 73514 where the power coming to HAM6 peaked tens of ms after AS_A/B/C power appreciably increased.

The OMC DCPDs might be OK since they didn't record a crazy high current (though the IN1 should have been constantly railing once we started losing lock, which makes the assessment difficult), and since we've been running with the bouncy FS and the DCPDs have been fine so far. Nevertheless, we need to study this more.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 12:29, Friday 08 November 2024 (81148)

Two lock losses, one from last night (1415099762, 2024-11-08 11:15:44 UTC, 03:15:44 PST) and another one that just happened (1415132263, 2024/11/08 20:17:25 UTC) look OK.

The shutter bounced ~50ms after the trigger but the power went down before that.

Images attached to this comment
keita.kawabe@LIGO.ORG - 18:01, Friday 08 November 2024 (81154)

Two more lock losses from today (1415141124, 2024-11-08 22:45:06 UTC and 1415145139, 2024-11-08 23:52:00 UTC) look OK.

In these plots, shutter open/close is judged by p(monitor)/p(AS_A) < some_threshold (open if true).
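A hypothetical version of that discriminator (the channel arrays and threshold value are assumptions for illustration, not the actual analysis code):

```python
import numpy as np

def shutter_is_open(p_monitor, p_as_a, threshold=10.0):
    """Judge the shutter open when p(monitor)/p(AS_A) < threshold,
    i.e. the downstream monitor sees roughly the incoming power."""
    ratio = np.asarray(p_monitor, dtype=float) / np.asarray(p_as_a, dtype=float)
    return ratio < threshold

# Example: downstream sees ~all the power (open) vs. almost none (closed).
state = shutter_is_open([1.0, 1.0], [0.9, 1e-4])
print(state)  # [ True False]
```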

Images attached to this comment