LHO General (DetChar)
austin.jennings@LIGO.ORG - posted 16:00, Thursday 04 January 2024 (75168)
Thursday Operator Summary

TITLE: 01/04 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

- 16:17 - GRB short E463149

- 16:51 - Superevent S240104b 

- 17:11 - 18:10 - Fire pump testing began - Tagging DetChar in case this shows up as noise on their end

- EX saturation @ 22:02

- 22:35 - went into COMMISSIONING for opportunistic DARM measurements while LLO is down due to microseism

- ISC went through MICH FRINGES a few times without being able to move up, which is my cue to begin running an IA, which is currently ongoing

- The mystery rise and steady hold of primary microseism is still apparent; coupled with a now-rising secondary microseism, I have moved the SEI CONF guardian to USEISM to help make locking less cumbersome

LOG:

Start Time System Name Location Lazer_Haz Task Time End
16:50 FAC Johnson Controls Site N Fire panel work ??
17:03 FAC Karen Wood shop N Tech clean 17:20
18:26 FAC Kim H2 N Tech clean 18:30
19:33 VAC Janos MX N Vac checks 19:45
21:26 FAC Randy EX Mech N Inventory 21:56
22:34 VAC Gerardo MX N Check purge air ??
23:09 CDS Jonathan EY N Finish set up of HWS computer 23:59
H1 AOS
louis.dartez@LIGO.ORG - posted 15:36, Thursday 04 January 2024 (75175)
lockloss when transitioning to New DARM
Sheila, Louis

We tried transitioning slowly to the new DARM configuration today in order to debug the transition from NEW_DARM -> [nominal] DARM. We've had plenty of success recently transitioning into the new DARM state using the ETMY_NLN and NEW_DARM ISC_LOCK guardian states. However, today we encountered several errors when doing so. Syntax errors cropped up from a few lines in ETMY_NLN (that neither of us recognized) and kept us from moving beyond the main() method. We ran the run() instructions by hand (with the GRD state back in NLN), fixed the syntax errors in ETMY_NLN, and then continued by hand to the NEW_DARM configuration (while the guardian state stayed in NLN), but lost lock when swapping the gains on SUS-ETM{Y,X}_L3_LOCK_L. We confirmed that all the expected DARM loop filters were installed and engaged before moving to ETMX (NEW_DARM) from ETMY. It's not clear to us yet why we lost lock this time around after having virtually no issues switching to the new DARM state recently.
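
For reference, here is a minimal sketch (not the actual ISC_LOCK/NEW_DARM guardian code) of the kind of by-hand gain handoff described above; the channel names are from the log, while the ramp time and target gain are illustrative placeholders:

# Sketch only: swap DARM L3 actuation from ETMY to ETMX by hand via EPICS.
import time
from ezca import Ezca

ezca = Ezca(ifo='H1')

RAMP = 10  # seconds; placeholder ramp time

# Matching ramp times on both suspensions so the handoff is smooth
ezca['SUS-ETMY_L3_LOCK_L_TRAMP'] = RAMP
ezca['SUS-ETMX_L3_LOCK_L_TRAMP'] = RAMP

# Ramp the old actuator down and the new one up together, so the total
# DARM loop gain stays roughly constant during the swap.
ezca['SUS-ETMY_L3_LOCK_L_GAIN'] = 0.0
ezca['SUS-ETMX_L3_LOCK_L_GAIN'] = 1.0   # placeholder final gain

time.sleep(RAMP + 1)  # let the ramps finish before the next step
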
H1 General (Lockloss)
austin.jennings@LIGO.ORG - posted 14:59, Thursday 04 January 2024 (75174)
Lockloss @ 22:57

Lockloss @ 22:57 - caused by a commissioning measurement.

LHO General
austin.jennings@LIGO.ORG - posted 12:31, Thursday 04 January 2024 (75172)
Mid Shift Report

H1 is still locked, currently at 44.5 hours. All systems appear stable, though primary microseism is currently on the rise - the cause is unknown.

LHO VE
david.barker@LIGO.ORG - posted 10:24, Thursday 04 January 2024 (75170)
Thu CP1 Fill

Thu Jan 04 10:07:02 2024 INFO: Fill completed in 6min 58secs

Gerardo confirmed a good fill.

Images attached to this report
H1 SEI (SEI)
anthony.sanchez@LIGO.ORG - posted 09:47, Thursday 04 January 2024 (75169)
H1 ISI CPS Noise Spectra Check - Weekly FAMIS 25972

FAMIS 25972

BSC high freq noise is elevated for these sensor(s)!!!
    
ITMX_ST2_CPSINF_H1    
ITMX_ST2_CPSINF_V1

The primary microseism is currently elevated in a strange way, which may explain a slight increase in the noise spectra across all chambers. Other than that, I did not see anything that looked wildly different from the last ISI CPS noise spectra check referenced in alog 75088.

Images attached to this report
Non-image files attached to this report
LHO General
austin.jennings@LIGO.ORG - posted 08:03, Thursday 04 January 2024 (75167)
Ops Day Shift Start

TITLE: 01/04 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.11 μm/s
    Secondary useism: 0.36 μm/s 
QUICK SUMMARY:

- H1 just hit a 40 hour lock and appears stable

- CDS/DMs ok

- EQ band looks to be slowly on the rise but still within a region in which we can operate

LHO General
corey.gray@LIGO.ORG - posted 23:58, Wednesday 03 January 2024 (75161)
Wed EVE Ops Summary

TITLE: 01/04 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Nice shift with H1 locked for 32hrs and H1/L1 double coincident the entire shift.  Warnings of earthquakes, but nothing noticeable in the control room.  Winds slowly tapered down as the shift went on and all else is well.

LHO General
corey.gray@LIGO.ORG - posted 23:53, Wednesday 03 January 2024 (75160)
Wed EVE Ops Transition

TITLE: 01/04 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 4mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.58 μm/s
QUICK SUMMARY:

H1's just passed the 24hr lock mark w/ a range centered at 160Mpc.  We are running without our pair of large central TV monitors (staged to be replaced)!  Note: no commissioning until Friday.

LHO General
corey.gray@LIGO.ORG - posted 20:02, Wednesday 03 January 2024 (75166)
Mid-Shift Status

Smooth running shift with no issues to report and H1 rocking steady slightly above 160Mpc & 28hrs of lock!

X1 SUS (SUS)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:18, Wednesday 03 January 2024 - last comment - 14:04, Thursday 04 January 2024(75163)
BBSS Test Stand M1 Updates and Transfer Functions

Ibrahim, Oli, Betsy, Arnaud, Fil

Context: In December ('23) we were having issues confirming that the damping, OSEMs, electronics, and model were working (or rather, figuring out which wasn't working).

I have more thorough details elsewhere but in short:

Eventually, we were able to go through Jeff and Oli's alog 74142. Here is what was found:

All "crude push around offsets" in the test bank yielded positive drives in the damp channels. These are the ndscope screenshots. Different offsets were needed to make the offset change more apparent in the motion (such as with L). A minimum of 1,000 was arbitrarily chosen and was usually enough.
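
For anyone repeating the check, here is a sketch of one push-around offset test; the suspension prefix, filter-bank names, offset size, and settle time are assumptions patterned on the usual SUS conventions, not copied from the BBSS model:

# Sketch only: apply a test-bank offset to one DOF and confirm the
# corresponding damp-channel input moves in the positive direction.
import time
from ezca import Ezca

ezca = Ezca(ifo='X1')            # BBSS test stand channels (assumed prefix)
SUS = 'SUS-BS_M1'                # hypothetical suspension/stage prefix

dof = 'L'
offset = 1000                    # counts; 1,000 was usually enough per the log

before = ezca['%s_DAMP_%s_INMON' % (SUS, dof)]

ezca['%s_TEST_%s_OFFSET' % (SUS, dof)] = offset
ezca.switch('%s_TEST_%s' % (SUS, dof), 'OFFSET', 'ON')
time.sleep(30)                   # let the stage respond

after = ezca['%s_DAMP_%s_INMON' % (SUS, dof)]
print('%s moved by %+.1f counts (expect positive drive)' % (dof, after - before))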

Transfer Functions: where it gets interesting... (DTT Screenshots)

In these DTTs, the reference traces (black) are the transfer functions without damping, while the red traces are with damping.

All "translation" degrees of freedom (L, V, T) showed correct damping, peak location, and resonance.

All "rotation" degrees of freedom (P, R, Y) showed completely incorrect damping, usually with peaks shifted to the right (higher frequency).
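
As a side note, the damped-vs-undamped peak comparison can also be checked offline; below is a minimal sketch using scipy's Welch/CSD transfer function estimate, with synthetic data standing in for the excitation and readback time series (in practice these would come from the DTT exports or NDS):

# Sketch only: Welch-based transfer function estimate, H(f) = Pxy(f) / Pxx(f),
# for comparing peak locations in damped vs. undamped measurements.
import numpy as np
from scipy import signal

def tf_estimate(drive, response, fs, nperseg=4096):
    f, Pxy = signal.csd(drive, response, fs=fs, nperseg=nperseg)
    _, Pxx = signal.welch(drive, fs=fs, nperseg=nperseg)
    return f, Pxy / Pxx

# Synthetic placeholder data standing in for the real channels
fs = 256.0
rng = np.random.default_rng(0)
drive = rng.standard_normal(int(600 * fs))
response = 0.5 * np.roll(drive, 10)      # placeholder "plant"

f, H = tf_estimate(drive, response, fs)
print('peak magnitude %.2f at %.2f Hz' % (np.abs(H).max(), f[np.argmax(np.abs(H))]))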

In trying to figure out why this is, we asked:

(In)conclusion:

It seems that whenever the OSEMs push in the same direction, everything goes as planned, which is why all the translation damping works. When we ask the OSEMs to push in opposing directions with respect to one another, though, they seem to freak out. This seems to be the prime "discovery" from finally getting the transfer functions.

This is the "for now" update - will keep trying to find out why until expertise becomes available.

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 14:04, Thursday 04 January 2024 (75173)

Rahul, Ibrahim, Austin

Context: After hopping off the TS call, where we decided to try the Pitch TF again with a reduced damping gain, I met with Rahul and Austin in the control room, and we decided to check some more basic OSEM health first.

Something I forgot: When Oli and I were in the control room taking the transfer functions the first time around, I noticed that for the rotational degrees of freedom (P, Y, R), the OSEM outputs were railing immediately (both visibly in number and on the overflow page). I wondered whether I should re-do the TFs without the saturations, by empirically testing the gain until it doesn't overflow. I ultimately kept the nominal -1G in order to report the initial "this is how bad it is" results. This will become relevant later.

Rahul was concerned that the OSEM spectra for the OSEMs that are in M1 were too noisy, so we took some spectra measurements of the OSEMs themselves to see if this was the case ... and it was. These are the screenshots below. We tried them with and without damping to see if damping works, and it doesn't seem like the damping is working exactly as it should be. Additionally, the <10Hz noise is 1-2 orders of magnitude too high according to Rahul. This is a way more "up the chain" (down the chain?) issue and could result in the weirdness we're seeing at the TF level. Why is this?

  1. Environmental reasons such as dangling ribbon cable or cleanroom turbulence
    1. When Rahul and I were previously taking AOSEM noise spectra, we ran into a similar issue where the free-hanging ribbon cables were impeding the cleanliness of the results. I will go in to check if this is the case, and if it is, I will secure all cables and re-take the measurements.
    2. Since the cleanroom fans cannot be turned off for long, Rahul suggested turning them off briefly to see if the results improve. I will consider this option if the spectra are still noisy after everything else. The BBSS is currently surrounded by a protective cover, so it should be a bit more impervious to such turbulence, meaning this may not be relevant.
  2. Rubbing/Touching
    1. It is likely that the (now relevant) saturations seen in the rotational modes may have something to do with touching or railing. As Rahul described it to me, if the OSEM is only rubbing when damping is on then it will have a sort of critical failure as it tries to damp something it is touching, causing it to rail almost immediately.
    2. Rahul and I went into the damping configurations just to see how low the damping needs to be in order for the DAC outputs not to overflow (see the sketch after this list):
      1. R had to be reduced by 10X
      2. P had to be reduced by 100X
      3. Y had to be reduced by 10X
    3. So again, we can see that the clear culprits are the rotational DoFs, but this time we can see that P in particular is misbehaving. Why?
      1. P is the only DoF which uses 3 OSEMs (F1, F2, F3), so this could just be a result of generally exacerbated poor OSEM noise behavior. This would potentially explain why the undamped P reading was terrible compared to the other ones.
  3. Unhealthy OSEMs
    1. Rahul also suggested that the OSEMs could just be unhealthy and that we could take Open Light BOSEM Electronic Noise Spectra.
    2. After some discussion, we concluded that this is probably not the case because of the clean translational TF results (which showed that a combination of all the OSEMs was working well, without clear culprits), so it's less likely that all of them are bad. This would be the last-resort check though.
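
Below is a sketch of how the gain-reduction test from item 2 could be scripted; the suspension prefix, coil names, and saturation threshold are assumptions (on the test stand this was judged by eye from the coil outputs and the overflow screen):

# Sketch only: back a rotational damping gain off by factors of 10 until the
# coil drives no longer saturate.
import time
from ezca import Ezca

ezca = Ezca(ifo='X1')
SUS = 'SUS-BS_M1'                               # placeholder prefix
COILS = ['F1', 'F2', 'F3', 'LF', 'RT', 'SD']    # assumed top-mass OSEM names

def drive_saturated(limit=120000):
    """Crude saturation check against an assumed DAC-counts limit."""
    return any(abs(ezca['%s_COILOUTF_%s_OUTMON' % (SUS, c)]) > limit
               for c in COILS)

for dof in ['R', 'P', 'Y']:
    gain = -1.0                                 # nominal damping gain per the log
    ezca['%s_DAMP_%s_GAIN' % (SUS, dof)] = gain
    time.sleep(10)
    while drive_saturated():
        gain /= 10.0                            # back off by 10x at a time
        ezca['%s_DAMP_%s_GAIN' % (SUS, dof)] = gain
        time.sleep(10)
    print('%s damping holds without overflow at gain %g' % (dof, gain))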

The Plan

Following these quick checks - once I'm out of the Staging Building:

  • I will retake noise spectra measurements (same measurements as in these screenshots)
  • I will maintain course on taking new undamped P TFs, followed by damped P, R, Y TFs with empirically determined gains, as discussed with Calum, Betsy, and Gabriele.

Minor tasks also include:

  • Re-checking electronics with Fil given new results
  • Checking with Erik/CDS why the GDS overflow page failed (I only have the state word to check for overflows now)
  • Re-checking magnet polarity (though again, the offset test kind of shows the minus math is right in the coil output page). This can be done quite easily though if we do end up concluding that some OSEMs are unhealthy and need to be swapped/checked for OL Electronic Noise Spectra.

Updates incoming.

Images attached to this comment
LHO FMCS
austin.jennings@LIGO.ORG - posted 16:00, Wednesday 03 January 2024 (75142)
Wednesday Shift Summary

TITLE: 01/03 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

- Commissioning period took place from 20:00 - 23:00 UTC

- EX saturation @ 22:58

- 15:02 - Temperature alert for the chillers at MX

LOG:

Start Time System Name Location Lazer_Haz Task Time End
17:30 FAC Karen OLab/Vac prep N Tech clean 17:55
18:33 FAC Karen MY N Tech clean 19:33
19:08 FAC Kim MX N Tech clean 21:08
21:09 CDS Erik EX/EY N Swap HWS server 23:09
22:10 VAC Janos MX N VAC checks 22:35
Images attached to this report
H1 ISC
jenne.driggers@LIGO.ORG - posted 15:17, Wednesday 03 January 2024 (75157)
Half percent improvement in optical gain via OMC alignment

One of the things we had on our to-do list with the cold OM2 was to check if there was a different OMC alignment that would improve our optical gain.  I moved the OMC QPD offsets around a bit, and I can certainly make the optical gain worse.  I think I found a place where we've got about 0.5% more optical gain (kappa_c went from 1.010 to 1.015-ish), so I've accepted those QPD offsets in both our Observe and safe.snap files (see the observe.snap screenshot attached).

The second attachment shows that, while I didn't raster, I went both directions in pitch and yaw on both the A and B QPDs, and there weren't any dramatically better places.  The one peak where the optical gain poked up as high as 1.018 seems to just be a fluctuation.  We've been sitting in the same alignment (according to both the QPD offsets as well as the OM3 and OMC OSEMs), and haven't seen that again. Despite the fluctuations, our average now seems to be consistently above where it was before today's commissioning period began.
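
For context, the offset walk was done by hand, but a scripted version would look something like the sketch below; the QPD offset channel, step size, and settling time are assumptions, and kappa_C is read from the CAL-CS TDEP output (assumed channel name):

# Sketch only: step one OMC ASC QPD offset and keep whichever value gives the
# highest time-averaged optical gain (kappa_C).
import time
import numpy as np
from ezca import Ezca

ezca = Ezca(ifo='H1')

KAPPA_C = 'CAL-CS_TDEP_KAPPA_C_OUTPUT'   # assumed kappa_C readback
QPD_OFF = 'ASC-OMC_A_PIT_OFFSET'         # hypothetical QPD offset channel

def averaged_kappa_c(duration=60, dt=1.0):
    """Average kappa_C over `duration` seconds to ride out fluctuations."""
    samples = []
    for _ in range(int(duration / dt)):
        samples.append(ezca[KAPPA_C])
        time.sleep(dt)
    return float(np.mean(samples))

best_k, best_off = averaged_kappa_c(), ezca[QPD_OFF]
for step in (+0.02, -0.02):              # placeholder step size
    ezca[QPD_OFF] = best_off + step
    time.sleep(120)                      # let the OMC ASC loops settle
    k = averaged_kappa_c()
    if k > best_k:
        best_k, best_off = k, ezca[QPD_OFF]

ezca[QPD_OFF] = best_off                 # leave the best offset in place
print('best kappa_C %.4f at offset %.4f' % (best_k, best_off))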

Images attached to this report
H1 CDS (TCS)
david.barker@LIGO.ORG - posted 13:22, Wednesday 03 January 2024 - last comment - 15:37, Wednesday 03 January 2024(75152)
HWS ITMX computer replacement completed

WP11598 Upgrade HWS computer hardware

Jonathan, Erik, TJ, Camilla, Dave:

Yesterday Jonathan and Erik replaced the original h1hwsmsr computer with a spare V1 computer. They moved the bootdisk and the /data RAID disks over to the new computer, and restored the /data NFS file system for the ITMY HWS code (h1hwsmsr1). At the time the new computer was not connecting to the ITMX HWS camera.

This morning Camilla worked on initializing the camera connection, and we were then able to control and see images from the camera.

This afternoon at 1pm, during the commissioning period, we stopped the temporary HWS ITMX IOC on cdsioc0 and Camilla started the actual HWS ITMX code on h1hwsmsr. We verified that the code is running correctly, images are being taken, and settings were restored from SDF.

During the few minutes between stopping the dummy IOC and starting the actual IOC both the EDC and h1tcshwssdf SDF had disconnected channels, which then reconnected over the subsequent minutes after channel restoration.

Comments related to this report
david.barker@LIGO.ORG - 13:22, Wednesday 03 January 2024 (75153)

Erik is building a new h1hwsex computer and will install it at EX in the next hour.

erik.vonreis@LIGO.ORG - 15:18, Wednesday 03 January 2024 (75158)

h1hwsex had crashed and only needed a reboot.

I installed the "new" V1 server as h1hwsey at EY.  It's physically connected and running, but is not on the network.  It requires some more in-person work, which we'll do Friday or earlier when out of observe.

david.barker@LIGO.ORG - 15:37, Wednesday 03 January 2024 (75159)

I've restarted the camera control software on h1hwsex (ETMX) and h1hwsmsr (ITMX). All the HWS cameras are now off (external trigger mode).

LHO General
corey.gray@LIGO.ORG - posted 16:54, Tuesday 02 January 2024 - last comment - 16:53, Wednesday 03 January 2024(75135)
Tues EVE Ops Transition

TITLE: 01/03 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.46 μm/s
QUICK SUMMARY:

Austin handed off an H1 nearly at NLN.  I eventually took H1 to Observe at 0011 UTC, but since 0037 we have been getting bumped out of OBSERVING due to a gain change (i.e. H1:CAL-INJ_CW_GAIN) in the CALINJ SDF.

Attached is the last 20+ min of these gain changes from 1.0 to 0.0 every ~1min.  Have Louis on the phone now.

Images attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 16:53, Wednesday 03 January 2024 (75165)SEI

Forgot to mention that Jim chatted with me regarding the HAM3 ISI, and that if the glitching issues return, I should phone him if it is before ~8pm.  Luckily, we have not had to deal with this for our current 24+hr lock!

H1 CDS
david.barker@LIGO.ORG - posted 09:44, Monday 01 January 2024 - last comment - 16:09, Wednesday 03 January 2024(75106)
h1hwsmsr (ITMX) offline, ITMX HWS camera is ON

At 23:37 Sun 31 Dec 2023 PST the h1hwsmsr computer crashed. At this time the EDC disconnect count went to 88, the Slow Controls SDF (h1tcshwssdf) discon_chans count went to 15, and GRD DIAG_MAIN could not connect to an HWS channel.

The main impact on the IFO is that the ITMX HWS camera cannot be controlled and is stuck in the ON state (taking images at 7Hz).

Time line for camera control:

23:22 Sun 31 Dec 2023 PST Lock Loss, ITMX and ITMY cams = ON
23:37 Sun 31 Dec 2023 PST h1hwsmsr computer crash, no ITMX cam control
04:37 Mon 01 Jan 2024 PST H1 lock, ITMY cam = OFF, ITMX stuck ON

 

Comments related to this report
ryan.short@LIGO.ORG - 10:20, Monday 01 January 2024 (75108)DetChar, OpsInfo

Tagging DetChar in case the 7Hz comb reappears since the ITMX HWS camera was left on for the observing stretch starting this morning at 12:41 UTC.

I also removed ITMX from the "hws_loc" list in the HWS test in DIAG_MAIN and restarted the node at 18:08 UTC so that DIAG_MAIN could run again and clear the SPM diff (tagging OpsInfo). This did not take H1 out of observing.
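
For illustration only (this is not the actual DIAG_MAIN source), the edit amounts to dropping ITMX from the list the HWS test iterates over, something along these lines; the status channel name is hypothetical and ezca is the guardian-provided EPICS interface:

# Sketch only: DIAG_MAIN-style generator test over a configurable optic list.
hws_loc = ['ITMY']            # was ['ITMX', 'ITMY']; ITMX removed 01 Jan 2024

def HWS_CHECK():
    for optic in hws_loc:
        # hypothetical per-optic status channel
        if ezca['TCS-%s_HWS_STATUS' % optic] != 0:
            yield 'HWS %s code not running' % optic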

david.barker@LIGO.ORG - 10:54, Monday 01 January 2024 (75110)

Similar to what I did on 23 Dec 2023 when we lost h1hwsex, I have created a temporary HWS ITMX dummy IOC which is running under a tmux session on cdsioc0 as user=ioc. All of its channels are zero except for the 15 being monitored by h1tcshwssdf which are set to the corresponding OBSERVE.snap values.
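
A minimal sketch of this kind of stand-in soft IOC, written with pcaspy, is below; the channel names and values are placeholders rather than the actual 15 monitored channels:

# Sketch only: serve a few static HWS channels so EDC/SDF reconnect.
from pcaspy import SimpleServer, Driver

prefix = 'H1:TCS-ITMX_HWS_'              # placeholder channel prefix
pvdb = {
    'CODE_STATUS':      {'value': 0},    # placeholder names/values; in
    'SPHERICAL_POWER':  {'prec': 3},     # practice set to OBSERVE.snap values
}

class DummyDriver(Driver):
    """No hardware behind these PVs; just hold the static values."""
    pass

if __name__ == '__main__':
    server = SimpleServer()
    server.createPV(prefix, pvdb)
    driver = DummyDriver()
    while True:
        server.process(0.1)              # run the channel access server loop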

EDC and SDF are back to being GREEN.

camilla.compton@LIGO.ORG - 11:33, Wednesday 03 January 2024 (75145)DetChar, TCS

The H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ channel 74900 shows the 7Hz comb has been present since 07:37 UTC 01 Jan 2024, when the h1hwsmsr computer crashed. Plan to restart the code that turns the camera off during locks 74951 during commissioning today.

In 75124, Jonathan, Erik, and Dave replaced the computer, and today we were again able to communicate with the camera (needed to use the alias init_hws_cam='/opt/EDTpdv/initcam -f /opt/EDTpdv/camera_config/dalsa_1m60.cfg').  At 18:25-18:40 UTC we adjusted the rate from 7Hz to 5Hz, then off, and left it back at 7Hz.  We plan to stop Dave's dummy IOC and restart the code later today.  Once this is successful, the CDS team will look at replacing h1hwsex 75004 and h1hwsey 73906. Erik has WP 11598.

camilla.compton@LIGO.ORG - 16:09, Wednesday 03 January 2024 (75162)

From 23:35 UTC these combs are gone, see 75159.

H1 General (Lockloss, VE)
anthony.sanchez@LIGO.ORG - posted 14:27, Tuesday 26 December 2023 - last comment - 11:46, Thursday 04 January 2024(75043)
Ops Mid Day report

STATE of H1: Observing at 154Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 5mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.94 μm/s
QUICK SUMMARY:

In-lock SUS charge measurements likely caused the lockloss this morning. There was a PI message at the same time, but the PI didn't seem high enough to break the lock.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1387641172

Relocking started before 16:00 UTC.
16:45 UTC NOMINAL_LOW_NOISE reached and OBSERVING at 16:51 UTC

SQZ manager dropped us into commissioning at 17:22 UTC
Back to Observing at 17:24 UTC

N2 truck arrived at Y end around 17:34.
I missed the time that the N2 truck left. I believe it was shortly after 1900 UTC

The temps in the VPW ranged from 66F to 73F. I'm not sure what the correct range should be, but the sensor that read 73F was nearest the warm server exhaust on that side of the room.

Comments related to this report
gerardo.moreno@LIGO.ORG - 17:22, Tuesday 26 December 2023 (75046)DetChar-Request, PEM, VE

Attached is a plot of today's noisy events related to the LN2 delivery to the tank for CP7.  Since the IFO was locked, I am flagging the respective groups.

Non-image files attached to this comment
camilla.compton@LIGO.ORG - 16:45, Wednesday 03 January 2024 (75164)

We lost lock during the SETUP step of ESD_EXC_ETMX; plot attached. ETMX_L3_DRIVEALIGN_L2L (bottom right) had an output, but I don't think it should have, as the feedback is on ITMX at this point and the excitation hadn't started yet. We should check this before next Tuesday.

ESD_EXC_ETMX log before lockloss:

2023-12-26_15:52:32.263999Z ESD_EXC_ETMX [SETUP.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_TRAMP => 1
2023-12-26_15:52:32.264996Z ESD_EXC_ETMX [SETUP.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 1
2023-12-26_15:52:33.394157Z ESD_EXC_ETMX [SETUP.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_SW1S => 0
2023-12-26_15:52:33.645345Z ESD_EXC_ETMX [SETUP.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L => ONLY ON: OUTPUT, DECIMATION

Images attached to this comment
camilla.compton@LIGO.ORG - 11:46, Thursday 04 January 2024 (75171)

Looking at a successful ETMX SETUP state, you can see that there is still a small window where ETMX_L3_DRIVEALIGN_L2L has an output; the IFO just survives the glitch. This happens when the gain is changed from 0 to 1 (to allow the excitation through) but before the INPUT is turned off. I've swapped the order of these two lines and added a 1-second sleep between them to make sure that the input is turned off before the gain is ramped to 1 (a sketch of the reordering is below). The edit has been saved and will reload during the next commissioning period.
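
A sketch of the reordering (guardian-style python; ezca is provided by the guardian infrastructure, and the real code lives in the ESD_EXC_ETMX node's SETUP state):

# Sketch only: turn the DRIVEALIGN input off first, pause, then ramp the gain,
# so no DARM feedback leaks through L2L while the gain comes up.
import time

ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_TRAMP'] = 1
ezca.switch('SUS-ETMX_L3_DRIVEALIGN_L2L', 'INPUT', 'OFF')   # input off first
time.sleep(1)                                               # new 1 s pause
ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN'] = 1                 # then ramp the gain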

Images attached to this comment