Reports until 10:27, Sunday 08 December 2024
LHO VE
david.barker@LIGO.ORG - posted 10:27, Sunday 08 December 2024 (81677)
Sun CP1 Fill

Sun Dec 08 10:12:22 2024 Fill completed in 12min 19secs

Note to VAC: Texts were sent for this fill, but no emails

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:23, Sunday 08 December 2024 - last comment - 16:51, Sunday 08 December 2024(81676)
alarms issue 01:00 Sunday 08dec2024

Jonathan, Dave:

The alarms service on cdslogin stopped reporting around 1am this morning. Symptoms were that the status file was not being updated (causing the alarm block on the CDS Overview MEDM to turn PURPLE) and that the report file was not being updated. Presumably no alarms would have been sent from this time onwards.

At 08:10 I restarted alarms.service on cdslogin. A new report file was created but not written to, and the /tmp/alarm_status.txt file was not changed (still frozen at 01:00), but I did get a startup text. Then, 14 minutes later, the files started being written. I raised a test alarm and got a text, but no email.

At 09:38, after not getting a keepalive email at 09:00 or any SSH login emails, I rebooted cdslogin. Same behavior as at 08:10: report file created but not written to, tmp file not created, startup text sent successfully. After 14 minutes alarms started running and writing to the file system; test alarms were texted, but no emails at all.
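
For reference, the "frozen status file" symptom above can be caught automatically by comparing the status file's modification time against a threshold. The snippet below is only an illustrative sketch (the path matches the one mentioned above, but the threshold and script are assumptions, not the production alarms code):

# Illustrative staleness check for the alarms status file.
# NOT the production alarms code: threshold is an assumed value.
import os
import time

STATUS_FILE = "/tmp/alarm_status.txt"   # file the alarms service should keep updating
MAX_AGE_S = 600                          # flag if not touched for 10 minutes (assumed threshold)

def status_is_stale(path=STATUS_FILE, max_age=MAX_AGE_S):
    """Return True if the status file is missing or has not been updated recently."""
    try:
        age = time.time() - os.path.getmtime(path)
    except FileNotFoundError:
        return True
    return age > max_age

if __name__ == "__main__":
    if status_is_stale():
        print("ALARM STATUS STALE - alarms service may be frozen")
    else:
        print("alarms status file is fresh")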

Jonathan is going to check on bepex.

Comments related to this report
david.barker@LIGO.ORG - 10:53, Sunday 08 December 2024 (81678)

Jonathan rebooted bepex, which fixed the no-email problem with alarms and alerts. I raised a test alarm and alert to myself and got both texts and emails.

david.barker@LIGO.ORG - 11:01, Sunday 08 December 2024 (81680)
david.barker@LIGO.ORG - 16:51, Sunday 08 December 2024 (81685)

Alarms got stuck again around noon today, presumably due to a recurring bepex issue. I have edited the code to skip trying to use bepex and only use twilio for texts. alarms.service was restarted on cdslogin at 16:48 PST.
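
For context, a texts-only notification path through Twilio (with the bepex/email leg removed) looks roughly like the sketch below. This is not the deployed alarms code: credentials, numbers, and function names are placeholders.

# Illustrative sketch of a texts-only notification path using the Twilio REST API.
# NOT the deployed alarms code: credentials, numbers, and names are placeholders.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def send_alarm_text(body, to_number, from_number="+15095550100"):
    """Send an alarm text via Twilio; no bepex/email fallback."""
    message = client.messages.create(body=body, from_=from_number, to=to_number)
    return message.sid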

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:32, Sunday 08 December 2024 - last comment - 10:23, Sunday 08 December 2024(81674)
OPS Day Shift Start

TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 20mph Gusts, 16mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.71 μm/s
QUICK SUMMARY:

IFO is in ENVIRONMENT and LOCKING. IFO stayed in DOWN all last night due to high winds and microseism.

Since last night, the microseism has leveled off and even gone down a bit. The wind hasn't changed much. Attempting to lock to see how far we get.

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 10:23, Sunday 08 December 2024 (81675)

OBSERVING as of 18:12 UTC.

Got very close to NLN earlier (LASER_NOISE_SUPPRESSION) but lost lock due to 3 back-to-back mid-magnitude (M4s and M5s) EQs, whose effect was exacerbated by the very high microseism.

H1 General (ISC)
oli.patane@LIGO.ORG - posted 18:42, Saturday 07 December 2024 (81673)
Ops Eve Shift End

TITLE: 12/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: TJ/Oli
SHIFT SUMMARY: Currently unlocked. We've been sitting in DOWN for the past hour while the secondary microseism stays really high. We also currently have a smallish earthquake coming through. I will set the ifo to start trying to relock again.

Earlier while trying to relock, we were having issues with the ALSX crystal frequency. When this is a consistent issue we have to fix it by going out to the end station to adjust the crystal temperature. I trended the ALSX channels alongside the EX VEA temperatures, and it looks like a couple of the temperatures went down (section D down by almost one degree F) right around when we started having crystal frequency issues. The wind was also blowing into the VEA, which we know because the dust counts were high then. I believe it's possible that the wind was cooling down the air in the part of the VEA near the ALS box and changing the temperature of the crystal enough to affect the beatnote. I only have this one screenshot right now (ndscope), but I had trended back a few months and seen a possible correlation between when we get into the CHECK_CRYSTAL_FREQUENCY state for ALSX, the temperature inside the EX VEA, and the dust counts indicating wind entering the VEA. It's hard to know for sure, especially because the air/wind outside is now much colder than it was a couple of months ago, but it would be interesting to know the location of the D section and look for these correlations more closely. tagging ISC
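
A correlation check like this can also be scripted rather than trended by hand; a minimal sketch using gwpy is below. The channel names are placeholders (not verified H1 channel names) and would need to be replaced with the real ALS-X beatnote, EX VEA temperature, and EX dust monitor channels.

# Sketch: trend ALS-X beatnote, EX VEA temperature, and dust channels together.
# Channel names below are placeholders - substitute the real H1 channel names.
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

channels = [
    "H1:ALS-X_EXAMPLE_BEATNOTE_FREQ",   # placeholder for the ALS-X beatnote channel
    "H1:PEM-EX_EXAMPLE_VEA_TEMP_D",     # placeholder for the EX VEA section D temperature
    "H1:PEM-EX_EXAMPLE_DUST_COUNT",     # placeholder for an EX dust monitor channel
]

# Fetch a few hours around the relocking attempts (times in UTC).
data = TimeSeriesDict.get(channels, "2024-12-07 22:00", "2024-12-08 03:00")

# Stack the traces on a shared time axis to eyeball the correlation.
plot = Plot(*data.values(), separate=True, sharex=True)
plot.savefig("alsx_vs_ex_temp.png")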
LOG:

22:15 started an initial alignment
22:40 initial alignment done, relocking
    - ALSX beatnote issue - CHECK_CRYSTAL_FREQUENCY
        - toggled force/no force
        - finally caught with no force
    - ALSX beatnote issue again
        - toggled force/no force and enable/disable
    00:01 Put ifo in DOWN since we can't get past DRMI due to the high microseism
    00:29 tried relocking
    01:06 back to DOWN
02:38 Trying relocking again                                                                                                                              

Start Time System Name Location Lazer_Haz Task Time End
23:03 PEM Robert LVEA YES Finish setting up for Monday 23:48
Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:27, Saturday 07 December 2024 (81672)
OPS Day Shift Summary

TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is DOWN due to MICROSEISM/ENVIRONMENT since 22:09 UTC

The first 6-7 hrs of the shift were very calm and we were in OBSERVING for the majority of the time.

The plan is to stay in DOWN and intermittently try to lock, but the last few attempts have resulted in 6 pre-DRMI locklosses with 0 DRMI acquisitions. Overall, microseism is just very high.

LOG:                                                                                                                                                                       

Start Time System Name Location Lazer_Haz Task Time End
23:03 PEM Robert LVEA YES Finish setting up for Monday 23:48
23:03 HAZ LVEA IS LASER HAZARD LVEA YES LVEA IS LASER HAZARD 06:09
H1 General
oli.patane@LIGO.ORG - posted 16:21, Saturday 07 December 2024 (81671)
Ops Eve Shift Start

TITLE: 12/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT_USEISM
    Wind: 15mph Gusts, 9mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.78 μm/s
QUICK SUMMARY:

Currently in DOWN and trying to wait out the microseism a bit. Thankfully the wind has gone back down.

H1 ISC (SUS)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:12, Saturday 07 December 2024 (81670)
Investigating SRM M3 WD Trips During Initial Alignment Part 2

Trying to gather more info about the nature of these M3 SRM WD trips in light of OWL Ops being called (at least twice in recent weeks) to press one button.

Relevant Alogs:

Part 1 of this investigation: 81476

Tony OWL Call: alog 81661

TJ OWL Call: alog 81455

TJ OWL Call: alog 81325

It's mentioned in a few other OPS alogs, but those contain no new info.

Next Steps:

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 14:11, Saturday 07 December 2024 (81667)
Lockloss

Lockloss @ 12/07 22:09 UTC. Possibly due to a gust of wind, since at EY the wind jumped from the lower 20s to almost 30mph in the same minute as the lockloss. A possible contributor could also be the secondary microseism - it has been rising quickly over the last several hours and is now up to 2 um/s.

H1 CAL (CAL)
ibrahim.abouelfettouh@LIGO.ORG - posted 14:03, Saturday 07 December 2024 (81666)
Calibration Sweep 12/07

Calibration sweep done using the usual wiki.

Broadband Start Time: 1417641408

Broadband End Time: 1417641702

Simulines Start Time: 1417641868

Simulines End Time: 1417643246

Files Saved:

2024-12-07 21:47:09,491 | INFO | Commencing data processing.
2024-12-07 21:47:09,491 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2024-12-07 21:47:46,184 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,191 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,196 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,200 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,205 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20241207T212404Z.hdf5
ICE default IO error handler doing an exit(), pid = 2104592, errno = 32
PST: 2024-12-07 13:47:46.270025 PST
UTC: 2024-12-07 21:47:46.270025 UTC
GPS: 1417643284.270025
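
For anyone re-deriving the timestamps above, the recorded GPS times can be converted to and from UTC with gwpy, for example:

# Convert the GPS times recorded above to UTC datetimes (and back) using gwpy.
from gwpy.time import from_gps, to_gps

print(from_gps(1417641408))           # broadband start time as a UTC datetime
print(from_gps(1417643246))           # simulines end time as a UTC datetime
print(to_gps("2024-12-07 21:47:46"))  # UTC string back to GPS seconds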

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:12, Saturday 07 December 2024 (81665)
Sat CP1 Fill

Sat Dec 07 10:09:18 2024 INFO: Fill completed in 9min 15secs

 

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:52, Saturday 07 December 2024 (81664)
OPS Day Shift Start

TITLE: 12/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 2mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.44 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING as of 11:47 UTC (4 hrs)

There was one lockloss last night and a known issue where SUS SRM WD trips during initial alignment. OWL was called (alog 81661) to untrip it.

H1 General (SUS)
anthony.sanchez@LIGO.ORG - posted 03:03, Saturday 07 December 2024 - last comment - 15:26, Saturday 07 December 2024(81661)
SRM Watchdog trip

TITLE: 12/07 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Aligning
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 5mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.28 μm/s
QUICK SUMMARY:

IFO stuck in initial alignment because the SRM watchdog (H1:SUS-SRM_M3_WDMON_STATE) tripped.
The watchdog tripped while we were in initial alignment, not before, and was not due to ground motion.

I logged in and discovered the trip, reset the watchdog, and reselected myself for Remote OWL notifications.
 

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 03:49, Saturday 07 December 2024 (81662)SUS

SUS SDF drive align L2L gain change accepted.

Images attached to this comment
ibrahim.abouelfettouh@LIGO.ORG - 15:26, Saturday 07 December 2024 (81669)

Just commenting that this is not a new issue. TJ and I were investigating it earlier and had early thoughts that SRM was catching on the wrong mode during SRC alignment in ALIGN_IFO, either during the re-alignment of SRM (pre-SRC align) or after the re-misalignment of SRM. This results in the guardian thinking that SRC is aligned when it actually is not, which leads to saturations and trips. Again, we think this is the case as of 11/25 but are still investigating. I have an alog about it here: 81476.

H1 General
oli.patane@LIGO.ORG - posted 22:00, Friday 06 December 2024 (81660)
Ops Eve Shift End

TITLE: 12/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing at 160 Mpc and have been Locked for 16.5 hours. Quiet shift where nothing happened and we were just Observing the entire time.
LOG:

00:30 Observing and have been Locked for 11 hours

H1 General
oli.patane@LIGO.ORG - posted 20:03, Friday 06 December 2024 (81659)
Ops EVE Midshift Status

Currently observing at 155 Mpc and have been Locked for 14.5 hours. Quiet evening with nothing to report

X1 DTS
joshua.freed@LIGO.ORG - posted 17:09, Friday 06 December 2024 (81658)
Initial Noise results for Double Mixer

J. Freed, 

 

Update on Double Mixer progress (81593). Proceeded through step 2a of the double mixer test plan T2400327. Initial results suggest a possible noise improvement compared to the other options in the area of interest for SPI.

DM_PN1.pdf shows the phase noise test run in step 2a of the Double Mixer test plan. While not a true 1-to-1 comparison of the phase noise performance of the double mixer against the other options (step 2b is for that), it shows that adding the double mixer into this system (peaks aside) improves phase noise performance by a factor of 2-5 from 100 Hz to 20 kHz. Of note, there is a large peak centered around 4096 Hz, a band that is of interest to SPI. As there was only a cursory attempt to properly phase match the signals in the internals of the double mixer for this initial test, the 4096 Hz sideband was not properly removed.

A possible cause of this phase mismatch is that the 90-degree phase splitter inside the double mixer (ZMSCQ-2-90B) produces a phase delay of about 89.82 degrees at 80 MHz rather than the 90 degrees we are expecting.
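
As a rough sanity check on how much a 0.18 degree error matters: for an ideal image-reject (quadrature) pair with perfect amplitude balance, the unwanted-sideband power suppression is limited to roughly tan^2(theta/2), where theta is the phase error from 90 degrees. This is a textbook estimate, not a model of the actual double-mixer topology in T2400327:

# Back-of-envelope unwanted-sideband suppression for a quadrature pair with a
# small phase error and perfect amplitude balance (textbook image-rejection
# estimate, not a model of the actual double-mixer hardware).
import math

phase_error_deg = 90.0 - 89.82           # reported splitter error at 80 MHz
theta = math.radians(phase_error_deg)
suppression = math.tan(theta / 2) ** 2   # residual sideband power / carrier power
print(f"{10 * math.log10(suppression):.1f} dB")  # roughly -56 dB for 0.18 degrees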

Possible fixes

The very low frequency region (<0.05 Hz) contains DC noise caused by the external phase mismatch between the double mixer and the reference source for the phase noise measurements. It is not an indication of double mixer drift; drift has not yet been investigated.

Non-image files attached to this report
H1 PEM (DetChar, PEM, TCS)
robert.schofield@LIGO.ORG - posted 18:06, Thursday 14 November 2024 - last comment - 10:19, Thursday 19 December 2024(81246)
TCS-Y chiller is likely hurting Crab sensitivity

Ansel reported that a peak in DARM that interfered with the sensitivity to the Crab pulsar followed a similar time-frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don't usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.

Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).

I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air. 

Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound. 

Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.

For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.

Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:12, Monday 25 November 2024 (81472)DetChar, TCS

This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y but this did make the chiller wobbly so we placed thinner foam under CO2X.

Images attached to this comment
keith.riles@LIGO.ORG - 08:10, Thursday 28 November 2024 (81525)DetChar
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion.

Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
Images attached to this comment
camilla.compton@LIGO.ORG - 15:02, Tuesday 03 December 2024 (81598)DetChar, TCS

This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much: we had the chillers off for a long period on 25th October (80882) when we flushed the chiller line, and the issue was seen before this date.

Opened FRS 32812.

There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).

camilla.compton@LIGO.ORG - 11:27, Thursday 05 December 2024 (81634)TCS

Between 19:11 and 19:21 UTC, Robert and I swapped the foam from under the CO2Y chiller (it was flattened and not providing any damping anymore) to new, thicker foam and 4 layers of rubber. Photos attached.

Images attached to this comment
keith.riles@LIGO.ORG - 06:04, Saturday 07 December 2024 (81663)
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
Images attached to this comment
thomas.shaffer@LIGO.ORG - 15:53, Tuesday 10 December 2024 (81745)TCS

I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.

These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0gpm, so our only change was a lower flow rate. The X chiller has been at 3.7gpm already, and the manual states that these chillers shouldn't be run below 3.8gpm, though this was a small note in the manual and could be easily missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.

Two questions came from this:

  1. Why are we running so close to the 3.8gpm minimum?
  2. Why is the flow rate for the X chiller so low?

The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.

Images attached to this comment
keith.riles@LIGO.ORG - 07:52, Friday 13 December 2024 (81806)
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? 

Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.



Images attached to this comment
camilla.compton@LIGO.ORG - 11:34, Tuesday 17 December 2024 (81866)TCS

TJ adjusted the CO2 chiller flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 GPM. Plot attached.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 14:16, Tuesday 17 December 2024 (81875)

The flow of the TCSY chiller was further reduced to 3.3gpm. This should push the chiller peak lower in frequency and further away from the Crab nebula.

keith.riles@LIGO.ORG - 10:19, Thursday 19 December 2024 (81902)
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before 1st flow reduction), December 16 (before most recent flow reduction) and December 18 (after most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.

Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18 
Images attached to this comment