H1 General
ryan.crouch@LIGO.ORG - posted 08:01, Monday 23 October 2023 - last comment - 09:28, Monday 23 October 2023(73663)
OPS Monday day shift start

TITLE: 10/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 163Mpc
OUTGOING OPERATOR: Camilla (OWL)
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 7mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.33 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 08:59, Monday 23 October 2023 (73667)

Dropped out of observing from 15:56:44 to 15:58:03UTC due to the TCS_ITMX_CO2 laser losing lock and relocking itself.

ryan.crouch@LIGO.ORG - 09:28, Monday 23 October 2023 (73668)

Spike seen by ITMY_{X,Z}_BLRMS_10_30 (mostly Z) at 16:14UTC

Images attached to this comment
H1 ISC (DetChar)
gabriele.vajente@LIGO.ORG - posted 07:21, Monday 23 October 2023 (73662)
New SRCLFF improved DARM RMS a bit and improved 20-50 Hz noise a bit too

The new high pass filter for the SRCL FF (73622) has improved the DARM RMS between 2 and 4 Hz a bit, although it's not clear why there is a new ~4 Hz peak. Now the largest contributions to the DARM RMS are the lines at 1.3 Hz, which we know are caused by the ETMX ISI (73625), and the peak at 3.44 Hz, which is coherent with CHARD_Y.

The lower DARM RMS in the 2-4 Hz region seems to have improved the non-stationary noise above 10 Hz a little bit; see the spectrogram and the scatter plot of DARM RMS in the 20-50 Hz region vs. RMS in the 2.5-4 Hz region.
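As a hedged illustration (not the code that produced the attached plots), a band-limited RMS comparison like the one described above could be put together with gwpy; the channel name and GPS span below are assumptions.

from gwpy.timeseries import TimeSeries
import matplotlib.pyplot as plt

start, end = 1382000000, 1382003600          # hypothetical GPS span
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)   # assumed DARM channel

# 1-second band-limited RMS in the two bands discussed above
rms_low = darm.bandpass(2.5, 4.0).rms(1)     # 2.5-4 Hz
rms_high = darm.bandpass(20.0, 50.0).rms(1)  # 20-50 Hz

# Scatter plot: noise above 20 Hz vs the low-frequency RMS
plt.scatter(rms_low.value, rms_high.value, s=2)
plt.xlabel('DARM RMS, 2.5-4 Hz')
plt.ylabel('DARM RMS, 20-50 Hz')
plt.savefig('darm_blrms_scatter.png')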

It's worth trying to further improve the DARM RMS by targeting these contributions.

Looking at the DARM bicoherence, there are three main components to the noise modulation above 20 Hz: 1.3 Hz (ETMX ISI), 2.7-2.8 Hz and 3.4 Hz (CHARD_Y). Not sure what's causing the DARM motion below 3 Hz, but there is a lot of coherence with ETMX M0 and R0 damping signals.
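As another hedged sketch (the channel names are assumptions, not necessarily those used for the attached plots), the coherence between DARM and CHARD_Y in the few-Hz band can be checked with gwpy:

from gwpy.timeseries import TimeSeriesDict

chans = ['H1:GDS-CALIB_STRAIN', 'H1:ASC-CHARD_Y_OUT_DQ']   # assumed channel names
data = TimeSeriesDict.get(chans, 1382000000, 1382003600)   # hypothetical GPS span
coh = data[chans[0]].coherence(data[chans[1]], fftlength=64, overlap=32)
print(coh.crop(1, 5))   # inspect the 1-5 Hz region discussed above

The same pattern, looping over the ETMX M0/R0 damping channels, could be used to check the coherences mentioned above.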

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 00:00, Monday 23 October 2023 (73660)
Sun EVE Ops Summary

TITLE: 10/22 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
INCOMING OPERATOR: Camilla (OWL)
SHIFT SUMMARY:

n/a
LOG:

2332-2351 Attempted broadband + simulines calibration measurements per Jenne's request while L1 was down, but had errors (as noted in the previous alog).

H1 CAL (CAL)
corey.gray@LIGO.ORG - posted 17:01, Sunday 22 October 2023 - last comment - 16:19, Monday 23 October 2023(73661)
Failed Attempt At: H1 Calibration Measurement Run (broadband + simulines)

Below are the steps for a (failed) attempt at running a broadband + simulines calibration measurement with H1 locked for 3+ hrs and L1 still in recovery from a power outage:

2023-10-22 23:42:28,941 | INFO | Drive, on L3_SUSETMX_iEXC2DARMTF, at frequency: 544.99, is now running for 23 seconds.
2023-10-22 23:42:35,388 | INFO | Drive, on PCALY2DARMTF, at frequency: 8.69, is finished. GPS start and end time stamps: 1382053346, 1382053370
2023-10-22 23:42:35,389 | INFO | Scanning frequency 8.99 in Scan : PCALY2DARMTF on PID: 106437
2023-10-22 23:42:35,389 | INFO | Drive, on PCALY2DARMTF, at frequency: 8.99, is now running for 31 seconds.
^C2023-10-22 23:42:40,347 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L1_CAL_EXC
2023-10-22 23:42:40,347 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L3_CAL_EXC
2023-10-22 23:42:40,347 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L2_CAL_EXC
2023-10-22 23:42:40,347 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2023-10-22 23:42:40,348 | ERROR | Ramping Down Excitation on channel H1:LSC-DARM1_EXC
2023-10-22 23:42:40,348 | ERROR | Aborting main thread and Data recording, if any. Cleaning up temporary file structure.
2023-10-22 23:42:40,348 | ERROR | Ramping Down Excitation on channel H1:CAL-PCALY_SWEPT_SINE_EXC
ICE default IO error handler doing an exit(), pid = 106373, errno = 32
PDT: 2023-10-22 16:42:44.751376 PDT
UTC: 2023-10-22 23:42:44.751376 UTC
GPS: 1382053382.751376

Here were the last few lines at the end of the broadband measurement:

notification: new test result
notification: new test result
notification: new test result
notification: end of measurement
notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20231022T233325Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20231022T233325Z.xml saved
diag> quit
EXIT KERNEL

INFO | bb measurement complete.
INFO | bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20231022T233325Z.xml
INFO | all measurements complete.
ICE default IO error handler doing an exit(), pid = 99711, errno = 32
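For reference, the PDT/UTC/GPS stamps printed at the end of the simulines output above can be cross-checked with gwpy's time utilities (a small sketch, not part of the measurement scripts):

from gwpy.time import tconvert, to_gps

print(tconvert(1382053382))           # GPS -> UTC datetime
print(to_gps('2023-10-22 23:42:44'))  # UTC -> GPS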

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 16:19, Monday 23 October 2023 (73677)CAL

This may be operator error: I showed Vlad my entry about my issue with these measurements, and he noticed there is a "^C" in the simulines measurement.

It's possible that I did a CTRL-C (intending to copy text) as I was drafting my alog while the measurement was running. (I wonder if I did this for the broadband measurement too...after only ~2 min for both!) :(
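As a quick, hedged check (the log filename below is hypothetical), a saved copy of the terminal session can be scanned for the interrupt and the ramp-down errors it triggers:

import re

logfile = 'simulines_20231022.log'    # hypothetical capture of the terminal session
with open(logfile, errors='replace') as f:
    for n, line in enumerate(f, 1):
        if '^C' in line or re.search(r'ERROR \| Ramping Down', line):
            print(f'{n}: {line.rstrip()}')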

LHO General
corey.gray@LIGO.ORG - posted 16:15, Sunday 22 October 2023 (73659)
Sun EVE Ops Transition

TITLE: 10/22 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 2mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.21 μm/s
QUICK SUMMARY:

H1's been locked 3 hrs, and I got the rundown of what Ryan needed to do with the squeezer (with Sheila/Naoki's help earlier). Useism is a bit lower than last night and winds are low.

H1 General (SQZ)
ryan.crouch@LIGO.ORG - posted 16:06, Sunday 22 October 2023 (73658)
OPS Sunday day shift summary

TITLE: 10/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: One lockloss, automated relock, but there was a small SQZ issue.

Lockloss at 18:59UTC

The only issue I had relocking was that the SQZ_LO_LR guardian reported "no 3MHz on the OMC, could be a bad alignment". I talked to Sheila, did a !graceful_clear_history on the SQZ_ASC_IFO screen, and that fixed the issue. Naoki also recommended adjusting the OPO temperature again like yesterday, which I also did and accepted in SDF.

LOG:

H1 General
ryan.crouch@LIGO.ORG - posted 12:01, Sunday 22 October 2023 - last comment - 13:28, Sunday 22 October 2023(73656)
Lockloss 18:59UTC

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1382036394

Comments related to this report
ryan.crouch@LIGO.ORG - 13:28, Sunday 22 October 2023 (73657)

Back into observing at 20:28UTC

LHO VE
david.barker@LIGO.ORG - posted 10:14, Sunday 22 October 2023 (73655)
Sun CP1 Fill

Sun Oct 22 10:11:26 2023 INFO: Fill completed in 11min 22secs

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 08:02, Sunday 22 October 2023 (73654)
OPS Sunday day shift start

TITLE: 10/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 164Mpc
OUTGOING OPERATOR: Camilla (OWL)
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:

LHO General
corey.gray@LIGO.ORG - posted 23:59, Saturday 21 October 2023 (73651)
Sat EVE Ops Summary

TITLE: 10/21 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Camilla (OWL)
SHIFT SUMMARY:

Locked the entire shift with the Lock Clock at just over 8 hrs. There was one M4.9 EQ from Mexico which H1 rode through.
LOG:

LHO General (PEM)
corey.gray@LIGO.ORG - posted 20:36, Saturday 21 October 2023 (73653)
HVAC Fan Vibrometers FAMIS Check (FAMIS 26258)

All looks well for the last week for all site HVAC fans (see attached).

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 19:59, Saturday 21 October 2023 (73652)
Mid-Shift Status

H1's been locked for more than half the shift thus far & has even ridden through another Mexico EQ. The range has been increasing the entire shift, and it looks like we are finally touching 160 Mpc. The violin mode (~500 Hz) is now below 10^-17.

LHO General
corey.gray@LIGO.ORG - posted 16:09, Saturday 21 October 2023 (73650)
Sat EVE Ops Transition

TITLE: 10/21 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.34 μm/s
QUICK SUMMARY:

Literally just took H1 to Observing (at 2303 UTC, after the Camera Servo completed at 2301 UTC) after chatting with Ryan. (Looks like I totally lucked out, with RyanC getting H1 to NLN after all the work he did along with Camilla & Dave during the Owl shift following last night's Dolphin crash!)

Violins look a little elevated since I last remember (currently just under 10^-16).

For this current lock, Ryan mentioned possibly needing to be mindful of PR3 drifting (because PRMI did not look good during acquisition). Mainly want to watch H1 get past an hour and get back to a longer lock!

H1 AOS
ryan.crouch@LIGO.ORG - posted 16:07, Saturday 21 October 2023 - last comment - 10:22, Monday 23 October 2023(73648)
OPS Saturday day shift summary

TITLE: 10/21 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Spent most of the shift continuing to try to recover alignment after the Dolphin glitch at 8:05UTC. Currently waiting for ADS to converge to go into observing.

I had lots of issues to start in the morning: ALS_X would lose lock for no apparent reason or its WFS would oscillate too hard and kill its lock, then DIFF_IR kept being "not really found", then the PRMI buildups were very unstable and we couldn't get good enough flashes to get DRMI. I tried an IA, and then moving SR3 and PR3 based on their OPLEVs to where they were before the Dolphin glitch; adjusting SR3 made PRMI better and PR3 made it worse.

I ended up trying a full slider restore to a time when we were locking, for the lock that was killed by the computer crashes (10/19 18:09UTC); still no luck getting PRMI or DRMI. I was considering doing another IA or restoring the sliders to after Camilla's IA this morning; after calling and talking with Jenne, I did an initial alignment and not another restore. After this IA we went back up to NLN without any interventions; I stepped slowly through the states just in case. The violins had rung up a bit, but they damped down pretty quickly and weren't much of a hindrance.

Once I got to NLN, the SQZ manager had the notification "SQZ ASC AS42 not on?", so I went to RESET_SQZ_ASC then back to FREQ_DEP_SQZ, following Vicky's instructions in alog 71083. While waiting for the ADS to converge, I went through the large amount of SDF diffs for BOS and OAF from 8:50UTC, which looked to be from when Dave restarted their models; I reverted almost all of them after talking to Jenne (they were all COEFF), except for 1 in BOS. Naoki also hopped on to optimize the squeezing before we went back into observing; I optimized the OPO temperature when he was done, accepted it in SDF, then went into observing at 20:49UTC.

Lockloss at 21:22UTC

Lost lock during PRMI_ASC from the Yarm losing lock. PRMI kept locking then losing it seconds later, and the flashes were super low. Tried CHECK_MICH after a few PRMIs. I trended PRM, PR3, SR3, and the BS to see if anything had moved since the LL; I ended up adjusting PR3 in yaw by 0.2 (155.5 -> 155.3) since it looked like it had drifted a little bit, and then the flashes looked much better. Not sure why it moved. I also had to tap PRM a little bit in pitch (2 microradians, -1502.4 -> -1504.4), and it caught PRMI then DRMI. Guardian brought us up the rest of the way with no problem.
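A hedged sketch of this kind of trend check (the channel names and times below are assumptions for illustration, not necessarily what was trended here):

from gwpy.timeseries import TimeSeriesDict

chans = ['H1:SUS-PR3_M1_OPTICALIGN_Y_OFFSET',   # assumed top-mass slider channels
         'H1:SUS-PR3_M1_OPTICALIGN_P_OFFSET']
before = TimeSeriesDict.get(chans, 'Oct 21 2023 20:00', 'Oct 21 2023 20:10')
after = TimeSeriesDict.get(chans, 'Oct 21 2023 21:30', 'Oct 21 2023 21:40')
for c in chans:
    print(c, before[c].value.mean(), '->', after[c].value.mean())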

LOG:

No log

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 10:22, Monday 23 October 2023 (73670)

The intention bit: H1:ODC-OBSERVATORY_MODE was set to LOCK_ACQUISITION (state 21) during my locking attempts when I should have set it to CORRECTIVE MAINTENANCE (state 53).
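For reference, a minimal hedged sketch of setting that channel from a Python session with pyepics (the state numbers are the ones quoted above; write access to the channel is assumed):

from epics import caput, caget

caput('H1:ODC-OBSERVATORY_MODE', 53)      # CORRECTIVE MAINTENANCE
print(caget('H1:ODC-OBSERVATORY_MODE'))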

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 14:24, Saturday 21 October 2023 (73649)
Lockloss 21:22UTC

No obvious cause, https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1381958566

H1 General
ryan.crouch@LIGO.ORG - posted 10:40, Saturday 21 October 2023 - last comment - 13:16, Saturday 21 October 2023(73637)
OPS Saturday day shift update

Kept losing lock at the green arms; the ALS error signals didn't look great, so I started an initial alignment at 15:12UTC, which finished at 15:33UTC. I kept losing it at ALS from XARM_WFS_DOF1_P, so I turned it off (I had to do this every lock attempt), and then DIFF_IR kept reporting "IR not actually found", so I had to move it around by hand. During PRMI I misaligned PRM to check on the BS, and it looked great on AS_AIR; Guardian took us to CHECK_MICH while I was in the middle of checking the BS.

Definitely something is misaligned. I couldn't get PRMI for a while; finally I tried adjusting SR3 in yaw to match its OPLEV from before the glitch, and that got PRMI to finally lock, but it was unstable and lost it soon after, so something else is still badly aligned (the PIT signal looked good for SR3 based on the OPLEVs). The PR3 OPLEV looks a little different as well, so I adjusted it back to where it was.

I have to slowly step up in states to not lose lock going through ALS and IR; even with the DOF dance, the Xarm just loses it for seemingly no reason. I tried requesting LOCK_SLOW_NO_WFS, let it settle in LOCKING_GREEN_ARMS, then requested LOCKING_ALS, and I did not have to turn off any WFS for the Xarm. In CHECK_IR, DIFF_IR was not really found, as it hasn't been on every attempt. As I was trying to find DIFF_IR, we went to EQ mode and lost lock from the quick ground motion increase (5.0 from Mexico?). I'm going to go make a cup of coffee while the ground motion calms down.

Comments related to this report
ryan.crouch@LIGO.ORG - 11:07, Saturday 21 October 2023 (73643)

After getting zero flashes on PRMI once we were back up, I decided at 18:03UTC to try restoring all the sliders to when we were locking before the previous lock (10/19 18:19UTC). The arm flashes are increasing now.

Things seemed worse after the IA that I ran? During the IA everything looked good, though; the signals were all reasonable and AS_AIR looked correct after each section. If this restore-all doesn't help, I'm going to restore things to how they were after Camilla finished her IA earlier in the morning (~14:00UTC).

ryan.crouch@LIGO.ORG - 11:30, Saturday 21 October 2023 (73644)

Going through each state manually and slowly; no Xarm issues this time, and IR seems to be looking good too. AS_AIR looks bad at DRMI. Lockloss at CHECK_MICH.

ryan.crouch@LIGO.ORG - 12:04, Saturday 21 October 2023 (73646)

After talking with Jenne, I'm running another initial alignment, and I did not restore the optics to after Camilla's IA. We're still at the 10/19 18:19UTC alignment that I restored to.

ryan.crouch@LIGO.ORG - 13:16, Saturday 21 October 2023 (73647)

Reacquired NLN at 20:16; waiting for ADS to converge to go into observing.

H1 AOS
david.barker@LIGO.ORG - posted 01:39, Saturday 21 October 2023 - last comment - 11:30, Saturday 21 October 2023(73629)
Several corner station front ends down

Investigating a dolphin crash of several corner station front ends.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 02:06, Saturday 21 October 2023 (73630)

Opened FRS 29441. This crash appeared to cause the lockloss at 8:05UTC.

david.barker@LIGO.ORG - 02:19, Saturday 21 October 2023 (73631)

For the IOPs with DACKILLs, I first verified that their IO chassis could be seen, which was the case.

I restarted all the models (to clear IOP DACKILLS) on h1oaf0, h1lsc0, h1susb123, h1sush2a, h1sush34, h1sush56. I put all the corresponding SEI IOP SWWD into bypass mode before restarting the SUS IOPs.

All the models restarted with no issues; no reboots (with their associated Dolphin fencing) were needed.
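A hedged sketch of scripting that kind of ordered restart (model names taken from the restart log in the later comment; the SEI SWWD bypasses mentioned above are set separately and are not shown), assuming the standard rtcds utility is available on the front end:

import subprocess

# Models on h1oaf0, in the order they were restarted
models = ['h1iopoaf0', 'h1pemcs', 'h1tcscs', 'h1susprocpi', 'h1seiproc',
          'h1oaf', 'h1calcs', 'h1susproc', 'h1calinj', 'h1bos']
for m in models:
    subprocess.run(['rtcds', 'restart', m], check=True)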

In the morning I'll see if I can find which front end caused the Dolphin crash.

david.barker@LIGO.ORG - 02:20, Saturday 21 October 2023 (73632)

The EDC is slow in reconnecting to susmc3 and susprm channels, but it looks like it will get there in time.

david.barker@LIGO.ORG - 02:26, Saturday 21 October 2023 (73633)
Images attached to this comment
david.barker@LIGO.ORG - 11:30, Saturday 21 October 2023 (73645)

Sat21Oct2023
LOC TIME HOSTNAME     MODEL/REBOOT
01:48:20 h1oaf0       h1iopoaf0   
01:48:34 h1oaf0       h1pemcs     
01:48:48 h1oaf0       h1tcscs     
01:49:02 h1oaf0       h1susprocpi 
01:49:16 h1oaf0       h1seiproc   
01:49:30 h1oaf0       h1oaf       
01:49:44 h1oaf0       h1calcs     
01:49:58 h1oaf0       h1susproc   
01:50:12 h1oaf0       h1calinj    
01:50:26 h1oaf0       h1bos       
01:52:02 h1lsc0       h1ioplsc0   
01:52:16 h1lsc0       h1lsc       
01:52:30 h1lsc0       h1lscaux    
01:52:44 h1lsc0       h1sqz       
01:52:58 h1lsc0       h1ascsqzfc  
01:55:25 h1susb123    h1iopsusb123
01:55:39 h1susb123    h1susitmy   
01:55:53 h1susb123    h1susbs     
01:56:07 h1susb123    h1susitmx   
01:56:21 h1susb123    h1susitmpi  
01:58:57 h1sush2a     h1iopsush2a 
01:59:11 h1sush2a     h1susmc1    
01:59:25 h1sush2a     h1susmc3    
01:59:39 h1sush2a     h1susprm    
01:59:53 h1sush2a     h1suspr3    
02:01:57 h1sush34     h1iopsush34 
02:02:11 h1sush34     h1susmc2    
02:02:25 h1sush34     h1suspr2    
02:02:39 h1sush34     h1sussr2    
02:04:13 h1sush56     h1iopsush56 
02:04:27 h1sush56     h1sussrm    
02:04:41 h1sush56     h1sussr3    
02:04:55 h1sush56     h1susifoout 
02:05:09 h1sush56     h1sussqzout 
 
 
