Reports until 00:00, Friday 22 September 2023
LHO General
austin.jennings@LIGO.ORG - posted 00:00, Friday 22 September 2023 (73043)
Thursday Eve Shift Summary

TITLE: 09/22 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

- H1 had issues today aligning our Y arm; more info in Ryan's alog here

- TJ and Tony did some troubleshooting by walking the Y arm suspensions in combination with turning off any egregious output signals coming from the ALS Y WFS once they turned on; this seemed to do the trick

- Upon relocking, the good news is that ALS Y caught fairly quickly, but I am noticing that ALS Y WFS DOF 1 Y is still swinging a lot more than the rest of the WFS

- Back to NLN @ 1:05 UTC/OBSERVE @ 1:23

LOG:
No log for this shift.


LHO General
austin.jennings@LIGO.ORG - posted 20:00, Thursday 21 September 2023 (73044)
Mid Shift Eve Report

Following issues during the day shift trying to get a stable Y arm on ALS, H1 looks to have fully recovered and is back in observing. Systems appear stable and ground motion is low.

H1 SEI (ISC)
jim.warner@LIGO.ORG - posted 16:43, Thursday 21 September 2023 - last comment - 16:46, Thursday 21 September 2023(73038)
Intermittent ITMY ISI 1.32hz peak from a misbehaving St1 H1 L4C

I noticed in the most recent CPS FAMIS task that there is a very sharp peak in a number of ITMY ISI sensors. It's visible in a number of dofs, but trolling through the spectrograms on the summary pages it seems like it's not always visible, so I never noticed it on the wall FOMs. This peak seems to be very coherent with at least DARM and CHARD when it is visible, so I will try to figure out a configuration that will let us avoid it. It hasn't been visible today, so I haven't been able to try anything, but I suspect the fix is a blend that doesn't use the sensor I suspect is failing. This makes 3 "bad" L4Cs in the IFO, all St1 H1 L4Cs; I don't have an explanation, but that is very suspicious.

As far as which sensor, I suspect it is the St1 H1 L4C. Looking at transfer functions among the various mostly co-aligned sensors, the H1 L4C consistently has different behaviour from the other sensors, particularly around and below the L4C pendulum frequency.

First image shows the corner ITMY ground sensor and the ITMY St1 Y T240. Red is the ground STS, blue is the ITMY St1 Y, green is the ITMX St1 Y. The peak at 1.32 Hz on ITMY is more than a factor of 10 above the ground motion, but this was at 9 UTC on the 20th; ASDs don't show the peak right now.

Looking at some of the ground and ISI motion BLRMS channels, this peak has been coming and going over the last couple of weeks. Second image compares the ground, ITMY St1, BS St1, and ITMX St1 Y 1-3 Hz BLRMS over the last couple of weeks. It certainly seems like about the time that the ITMY St2 H2 CPS started glitching, the ITMY St1 H1 L4C started to intermittently get noisy. A reasonable explanation is that the L4C is failing in some way internally and the multiple trips from the CPS glitching made that worse.

Third TF plot shows the co-located passive sensor TFs for the L4Cs; the dashed lines are the H1 L4C to H1 CPS and X1 T240 TFs, the other traces are the H2 and H3 L4Cs. Red, blue, and green should all be pretty similar and look something like the typical 1 Hz seismometer response, but the dashed red trace looks like something has changed the low-frequency response of the H1 L4C. There's a notch in the blue and green traces, I think because the peak in the H1 sensor is so loud it's all the other sensors can see. The brown, light blue, and pink traces are the local L4C to CPS TFs. Kind of similar, kind of different: the low-frequency response of the H1 sensor differs from the other 2 sensor pairs, but the peak doesn't show up as much in the other sensors, maybe because of CPS noise or something. Need to think about that.

Fourth ASD plot shows that the peak shows up most strongly in the H1 L4C and is less visible in the other L4Cs.

Last TF plot shows the local CPS to T240 TFs. The peak is equally visible in all these sensor pairs, and the transfer functions all look generally the same.
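As an aside on method: the co-located sensor TFs above can be estimated from time series with Welch cross-spectra, H = Pab / Paa. A minimal sketch with synthetic data (the sample rate and signals here are invented for illustration, not actual ISI channels):

```python
import numpy as np
from scipy import signal

fs = 256  # illustrative sample rate in Hz
rng = np.random.default_rng(0)

# Synthetic stand-ins for two co-located sensor time series:
# common ground motion plus small independent sensor noise.
ground = rng.standard_normal(fs * 600)
sensor_a = ground + 0.01 * rng.standard_normal(ground.size)
sensor_b = ground + 0.01 * rng.standard_normal(ground.size)

# Estimate the a->b transfer function via cross- and auto-spectra.
f, Paa = signal.welch(sensor_a, fs=fs, nperseg=4096)
_, Pab = signal.csd(sensor_a, sensor_b, fs=fs, nperseg=4096)
tf = Pab / Paa

# For two healthy co-located sensors the magnitude should sit near 1;
# a failing sensor like the suspect H1 L4C would show a deviating shape.
print(np.median(np.abs(tf)))
```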

Probably the best shot for fixing this at the moment is switching to blends that don't use the H1 L4C, but the peak needs to be visible to test that, and it hasn't been cooperating so far today. The channel H1:ISI-ITMY_ST1_FFB_LOG_Y_1_3 has been the easiest witness to trend so far; the peak is active when that channel hits ~1.
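For trending that witness automatically, a sketch of flagging the active stretches from a trend of the channel (the trend values below are invented; real data would come from NDS/frames):

```python
import numpy as np

# Synthetic stand-in for a minute-trend of the witness channel
# H1:ISI-ITMY_ST1_FFB_LOG_Y_1_3 (real data would come from NDS/frames).
trend = np.array([0.2, 0.3, 1.4, 1.8, 0.9, 0.4, 1.1, 0.2])

threshold = 1.0  # peak considered "active" when the channel hits ~1
active = trend >= threshold

# Find contiguous active stretches as (start_index, end_index) pairs.
padded = np.concatenate(([0], active.astype(int), [0]))
edges = np.flatnonzero(np.diff(padded))
segments = [(int(a), int(b)) for a, b in zip(edges[::2], edges[1::2] - 1)]
print(segments)  # -> [(2, 3), (6, 6)]
```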

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 16:46, Thursday 21 September 2023 (73042)

Attaching a dtt measurement to show the peak is visible in the St2 gs13s, is coherent with DARM and CHARD P, and lines up with peaks in both.

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 16:36, Thursday 21 September 2023 (73040)
Ops Day Shift Summary

TITLE: 09/21 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Austin
SHIFT SUMMARY: H1 recovered after this morning's h1sush7 crash and was observing for most of the morning. Since then, we've struggled to align the Y arm well enough to lock. Recovery is ongoing.

LOG:

Start Time System Name Location Laser_Haz Task End Time
15:37 CDS Dave, Fil, Rahul MER/Remote - Troubleshooting IOPSUSH7 crash 15:48
17:42 FAC Kim H2 - Technical cleaning 17:58
17:46 VAC Janos MY - Vacuum checks 18:00
18:02 SEI Jim Remote - Diagnosing ITMY ISI 19:30
19:30 FIT Camilla MY - Jogging 19:58
21:06 ISC Keita LVEA - OM2 heater thermistor 21:23
21:10 AOS Randy LVEA - Walkabout/checks 21:27
LHO General
austin.jennings@LIGO.ORG - posted 16:01, Thursday 21 September 2023 (73041)
Ops Eve Shift Start

TITLE: 09/21 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 7mph 5min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.22 μm/s 
QUICK SUMMARY:

- IFO is currently DOWN while troubleshooting some alignment issues stemming from this morning's failed IO chassis

H1 SQZ (ISC)
victoriaa.xu@LIGO.ORG - posted 13:52, Thursday 21 September 2023 - last comment - 14:58, Thursday 21 September 2023(73034)
Some recent hot/cold OM2 comparisons with sqz/no-sqz

To continue trying to understand how OM2 is improving range, I compared some recent sqz / no-sqz times with hot/cold OM2. So far, I still see the optical gain reduction with hot OM2 (i.e. cold OM2 is better >kHz), and the improvement in low-frequency noise with hot OM2 (~30-300 Hz?).

****Caveat regarding the calibration: I am simply plotting GDS-CALIB_STRAIN_NOLINES for these various times, but this is not the most accurate for the traces before September, as I did not use any corrections to GDS for those times.

I'm interested in this because OM2 was re-heated this week LHO:72967 after having been off for some time, possibly since ~8/29. If we assume OM2 was cold on the bad sqz night following 9/10, LHO:72796, and OM2 was hot & thermalized for today's 9/21 no-sqz time due to IO chassis power issues impacting the HAM7 ISI LHO:73026 (reasonable to assume OM2 is hot; the remaining thermistor 1 shows a plateau in temperature today) -- then we have some long quiet stretches to compare. For the sqz subtraction (i.e., sqz dB as a function of frequency), last done with hot OM2 on 8/2 for many sqz angles 72565, these long quiet sqz/no-sqz stretches are super useful. I'm still working on more IFO modelling to be more confident in the quantum noise parameters as a function of OM2.

For reference, Camilla helped find the following sqz quiet times, building on the OM2 times used before in LHO:71533. A full dictionary of times used for plotting is attached.
'cold OM2 4': {  # bad sqz night @ https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72796
    'no sqz': {
        'gps start': 1378445718,
        'gps stop' : 1378446918
    },
    'FDS': {
        'gps start': 1378440000,
        'gps stop' : 1378441800
    }},
'hot OM2 4': {  # OM2 had been cold for some time, see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72967
    # these hot sqz/no-sqz times are from 9/21 when the HAM7 ISI tripped for several hours, https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=73020
    'no sqz': {
        'gps start': 1379340154,
        'gps stop' : 1379345854
    },
    'FDS': {
        'gps start': 1379324600,
        'gps stop' : 1379328200
    }},
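For reference, the sqz subtraction mentioned above (sqz dB as a function of frequency) boils down to a ratio of ASDs between the sqz and no-sqz stretches. A minimal sketch with synthetic spectra (the frequencies and ASD values here are made up for illustration, not real H1 data):

```python
import numpy as np

# Illustrative ASDs over a few frequency bins (made-up numbers, not H1 data).
freqs = np.array([100.0, 200.0, 500.0, 1000.0])
asd_no_sqz = np.array([1.0e-23, 8.0e-24, 6.0e-24, 5.0e-24])
asd_sqz = np.array([8.0e-24, 6.3e-24, 4.8e-24, 4.0e-24])

# Squeezing level in dB: positive numbers mean noise reduction with sqz on.
sqz_db = -20.0 * np.log10(asd_sqz / asd_no_sqz)
print(np.round(sqz_db, 2))
```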

Images attached to this report
Non-image files attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 14:58, Thursday 21 September 2023 (73036)

I think it's okay to assume OM2 was cold for the sqz data I pulled from 9/10-9/11. I checked trends of AS port alignments over the past 2 months, which include intentional OM2 cycling, and zoomed in on the past 1 month, which had an un-monitored temperature cycle.

From the same times Daniel found in 72970, that is on Tuesday 8/29, we see about the same alignment shifts that Elenna saw in 70886 from changing OM2 temperature.

Looking at the trends, the alignment shifts with OM2 temps seem reasonably reproducible; I put a green box around trends which seemed the most correlated (i.e. OM1, 2, 3 and OMC SUS alignments). 

Images attached to this comment
H1 General (Lockloss, PSL)
ryan.short@LIGO.ORG - posted 11:05, Thursday 21 September 2023 - last comment - 13:36, Thursday 21 September 2023(73033)
Lockloss @ 17:26 UTC

Lockloss @ 17:26 UTC - ISS secondloop looks to have saturated.

IMC_LOCK guardian noted at 17:26:05 "1st loop is saturated, and open" and moved to OPEN_ISS to open the secondloop. Upon entering the OPEN_ISS state, IMC_LOCK jumped to DOWN (I'm unsure exactly why) and after turning off the secondloop boosts, the lockloss occurred. I'll continue looking into this.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 13:36, Thursday 21 September 2023 (73035)

The secondloop saturation may have been a red herring. Looking at the faster ASC-AS_A_DC channel, there's movement 290ms before the ISS turns off.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:37, Thursday 21 September 2023 (73031)
Thu CP1 Fill

Thu Sep 21 10:23:37 2023 INFO: Fill completed in 8min 33secs

Travis confirmed a good fill curbside.

The code did not run the fill at the scheduled time of 10:00:00 due to a time bug. I rescheduled it for 10:15:00, at which time it ran correctly.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 09:34, Thursday 21 September 2023 - last comment - 10:17, Thursday 21 September 2023(73026)
h1sush7 models down at 03:48:52 PDT due to IO Chassis power issue

Ryan, Rahul, Fil, Dave:

All h1sush7 models stopped running at:

PDT: 2023-09-21 03:48:52.000000 PDT
UTC: 2023-09-21 10:48:52.000000 UTC
GPS: 1379328550.000000
 

From this time onwards H1 was still in lock with a depressed range of ~130 Mpc, but was out of OBSERVE. The h1seih7 SWWDs were tripped.

Recovery process was:

Stop models, fence h1sush7 from the Dolphin fabric, reboot h1sush7.

When h1sush7 came back, I verified that the IO Chassis could not be seen. I then fenced and powered the front end down.

Fil went onto the mech room mezzanine and verified that the SUS side of the Kepco dual power supply had tripped (the SEI side was OK). He powered the IO chassis back on, I powered the h1sush7 computer, and everything came back correctly.

I untripped the SWWDs to get h1seih7 driving again, Ryan and Rahul recovered the SUS models.

 

Comments related to this report
david.barker@LIGO.ORG - 09:37, Thursday 21 September 2023 (73027)

Opened  FRS29160 for this issue.

This is the third time this has happened:

06 Dec 2021 FRS21524
31 Dec 2022 FRS26345
21 Sep 2023 FRS29160

Fil has opened a workpermit to replace this Kepco power supply next Tuesday.

ryan.short@LIGO.ORG - 10:17, Thursday 21 September 2023 (73030)ISC, SQZ

Total H1 observing time lost this morning was 4h 58m, from 10:49 UTC to 15:47 UTC. During this time there was no squeezing and H1's range averaged around 135 Mpc.

Images attached to this comment
H1 SEI (SEI)
anthony.sanchez@LIGO.ORG - posted 09:14, Thursday 21 September 2023 - last comment - 10:53, Thursday 21 September 2023(73025)
FAMIS HEPI PUMP Trends

Famis 26458
HEPI Pump Trends for the last 45 days are attached.
H1:HPI-PUMP_L0_CONTROL_VOUT has seen a change of 8 [units] in the last 3 days. I'm not sure if a movement of 8 is unreasonable or not. Also, what are the units of these channels?
 

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 10:53, Thursday 21 September 2023 (73032)

The increase in drive is probably related to Tyler's change to the MR heater coil settings on the 18th. I think the units on the LO VOUT channel are just drive counts, but I'm not sure.

H1 AOS
david.barker@LIGO.ORG - posted 16:39, Wednesday 20 September 2023 - last comment - 09:50, Thursday 21 September 2023(73013)
DAQ EDC missing 6 HWS ITMY channels

H1EPICS_HWS.ini has 88 ITMY channels, all are connecting except the following 6:

H1:TCS-ITMY_HWS_BEAM_POS_X
H1:TCS-ITMY_HWS_BEAM_POS_Y
H1:TCS-ITMY_HWS_CO2_POS_X
H1:TCS-ITMY_HWS_CO2_POS_Y
H1:TCS-ITMY_HWS_RH_POS_X
H1:TCS-ITMY_HWS_RH_POS_Y
 

Comments related to this report
camilla.compton@LIGO.ORG - 17:08, Wednesday 20 September 2023 (73014)

Explanation of the issue in 73012. As we are now in observing and these are not important channels, we will fix tomorrow.

david.barker@LIGO.ORG - 17:10, Wednesday 20 September 2023 (73015)

Camilla confirms it is OK to run without these channels overnight, they will be added back tomorrow when H1 is out of observe.

david.barker@LIGO.ORG - 09:50, Thursday 21 September 2023 (73029)

Camilla installed the new HWS ITMY code at 08:22 while H1 was out of observe. EDC is now green.

H1 ISC (AWC, DetChar, ISC)
keita.kawabe@LIGO.ORG - posted 12:01, Tuesday 19 September 2023 - last comment - 12:01, Tuesday 03 October 2023(72967)
OM2 thermistors are connected back to Beckhoff (but not the heater driver voltage input, for which we're still using the voltage reference) (Daniel, Keita)

Detchar, please tell us if the 1.66Hz comb is back.

We changed the OM2 heater driver configuration from what was described in alog 72061.

We used a breakout board with jumpers to connect all OM2 thermistor readback pins (pin 9, 10, 11, 12, 22, 23, 24, 25) to Beckhoff at the back of the driver chassis. Nothing else (even the DB25 shell on the chassis) is connected to Beckhoff.

Heater voltage inputs (pin 6 for positive and 19 for negative) are connected to the portable voltage reference powered by a DC power supply to provide 7.15V.

BTW, somebody powered the OM2 heater off at some point in time, i.e. OM2 has been cold for some time but we don't know exactly how long.

When we went to the rack, half of the power supply terminal (which we normally use for 9V batteries) was disconnected (1st picture), and there was no power to the heater. Baffling. FYI, if it's not clear, the power terminal should look like the second picture.

Somebody must have snagged the cables hard enough to disconnect it, and didn't even bother to check.

Next time you do this, since just reconnecting is NOT good enough, read alog 72286 to learn how to set the voltage reference to 7.15V and turn off the auto-turn-off function, then do it and tell me you did it. I promise I will thank you.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 12:07, Tuesday 19 September 2023 (72969)AWC, ISC

Thermistors are working.

Images attached to this comment
daniel.sigg@LIGO.ORG - 12:22, Tuesday 19 September 2023 (72970)

There is an OSEM pitch shift of OM2 at the end of the maintenance period 3 weeks ago (Aug 29)

Images attached to this comment
jenne.driggers@LIGO.ORG - 14:35, Tuesday 19 September 2023 (72979)CAL

Having turned the heater back on will likely affect our calibration.  It's not a bad thing, but it is something to be aware of.

ryan.short@LIGO.ORG - 14:47, Tuesday 19 September 2023 (72980)CAL

Indeed it now seems that there is a ~5Mpc difference in the range calculations between the front-ends (SNSW) and GDS (SNSC) compared to our last observation time.

Images attached to this comment
ansel.neunzert@LIGO.ORG - 10:22, Wednesday 20 September 2023 (73000)

It looks like this has brought back the 1.66 Hz comb. Attached is an averaged spectrum for 6 hours of recent data (Sept 20 UTC 0:00 to 6:00); the comb is the peaked structure marked with yellow triangles around 280 Hz. (You can also see some peaks in the production Fscans from the previous day, but it's clearer here.)

Images attached to this comment
keita.kawabe@LIGO.ORG - 13:12, Wednesday 20 September 2023 (73006)

To see if one of the Beckhoff terminals for thermistors is kaput, I disconnected thermistor 2 (pins 9, 11, 22 and 24) from Beckhoff at the back of the heater driver chassis.
For a short while the Beckhoff cable itself was disconnected but the cable was connected back to the breakout board at the back of the driver chassis by 20:05:00 UTC.

Thermistor 1 is still connected. Heater driver input is still receiving voltage from the voltage reference.

 

ansel.neunzert@LIGO.ORG - 09:53, Thursday 21 September 2023 (73028)

I checked a 3-hour span starting at 04:00 UTC today (Sept 21) and found something unusual. There is a similar structure peaked near 280 Hz, but the frequency spacing is different. These peaks lie on integer multiples of 1.1086 Hz, not 1.6611 Hz. Plot attached.
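A quick way to check which fundamental spacing a set of comb peaks lies on is to test how far each peak sits from the nearest integer multiple of a candidate spacing. A minimal sketch (the peak frequencies here are fabricated for illustration, not the measured comb):

```python
import numpy as np

def comb_residual(peaks_hz, spacing_hz):
    """Mean distance of each peak from the nearest integer multiple of spacing."""
    ratios = peaks_hz / spacing_hz
    return float(np.mean(np.abs(ratios - np.round(ratios)) * spacing_hz))

# Made-up peaks placed on exact multiples of 1.1086 Hz near 280 Hz.
peaks = 1.1086 * np.array([250, 251, 252, 253, 254])

# The true spacing should give a much smaller residual than the wrong one.
print(comb_residual(peaks, 1.1086), comb_residual(peaks, 1.6611))
```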

Images attached to this comment
keita.kawabe@LIGO.ORG - 14:32, Thursday 21 September 2023 (73037)DetChar-Request

Detchar, please see if there's any change in 1.66Hz comb.

At around 21:25 UTC, I disconnected OM2 thermistor 1 (pins 10, 12, 23, 25 of the cable at the back of the driver chassis) from Beckhoff and connected thermistor 2 (pins 9, 11, 22, 24).

Images attached to this comment
ansel.neunzert@LIGO.ORG - 11:19, Friday 22 September 2023 (73054)

Checked 6 hours of data starting at 04:00 UTC Sept 22. The comb structure persists with spacing 1.1086 Hz.

jeffrey.kissel@LIGO.ORG - 11:07, Tuesday 03 October 2023 (73235)DetChar
Electrical grounding of the beckhoff systems has been modified as a result of this investigation -- see LHO:73233.
jeffrey.kissel@LIGO.ORG - 12:01, Tuesday 03 October 2023 (73236)CAL
Corroborating Daniel's statement that the OM2 heater power supply was disrupted on Tuesday Aug 29th 2023 (LHO:72970), I've zoomed in on the pitch *OSEM* signals for both 
    (a) the suspected time of power disruption (first attachment), and 
    (b) the time when the power and function of OM2 was restored (second attachment).

One can see that upon power restoration and resuming the HOT configuration of TSAMS on 2023-09-19 (time (b) above), OM2 pitch *decreases* by 190 [urad] over the course of ~1 hour, with a characteristic "thermal time constant" exponential shape to the displacement evolution.

Then, heading back to 2023-08-29, we can see a similarly shaped event that causes OM2 pitch to *increase* by 160 [urad] over the course of ~40 minutes (time (a) above). At 40 minutes, IFO recovery from maintenance begins, and we see the OM2 pitch *sliders* adjusted to account for the new alignment, as had been done several times before with the OM2 ON vs. OFF state changes.

I take this to be consistent with:
    The OM2 TSAMS heater was inadvertently turned OFF (going COLD) on 2023-08-29 at 18:42 UTC (11:42 PDT), and 
    The OM2 TSAMS heater was restored, turned ON (going HOT), on 2023-09-19 at 18:14 UTC (11:14 PDT).
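The "thermal time constant" exponential shape noted above can be quantified by fitting pitch(t) = p0 + A*(1 - exp(-t/tau)) to the OSEM trend. A minimal sketch with synthetic data standing in for the pitch signal (the amplitude and time constant here are illustrative, not fitted OM2 values):

```python
import numpy as np
from scipy.optimize import curve_fit

def thermal_step(t, p0, amp, tau):
    """First-order thermal response: offset plus saturating exponential."""
    return p0 + amp * (1.0 - np.exp(-t / tau))

# Synthetic pitch trend: ~190 urad swing with a ~15 minute time constant.
t = np.linspace(0, 3600, 600)  # one hour, in seconds
true = thermal_step(t, 0.0, -190.0, 900.0)
rng = np.random.default_rng(1)
data = true + rng.normal(0, 2.0, t.size)

# Fit recovers [p0, amp, tau] from the noisy trend.
popt, _ = curve_fit(thermal_step, t, data, p0=[0.0, -100.0, 600.0])
print(np.round(popt, 1))
```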
Images attached to this comment
H1 AOS
jason.oberling@LIGO.ORG - posted 12:54, Tuesday 12 September 2023 - last comment - 15:45, Thursday 21 September 2023(72836)
ITMx Optical Lever Armored Fiber and Cooler Installed (FRS 4544 and WP 11422)

J. Oberling, A. Jennings

With a relatively light maintenance window and a chance for some extended Laser Hazard time, we finally completed the installation of an old OpLev ECR, FRS 4544.  Austin and I removed the existing 10m single-mode fiber and installed a new, 3m armored single-mode fiber, and placed the laser in a "thermal isolation enclosure" (i.e. a Coleman cooler).

To start, I confirmed that there was power available for the new setup; the OpLev lasers are powered via a DC power supply in the CER.  I read no voltage at the cable near the ITMx OpLev, so with Fil's help we traced the cable to its other end, found it unplugged, and plugged it in.  I confirmed we had the expected voltage, which we did, so we moved on with the installation.  We had to wait for the ITMx front end computer to come back up (had tripped as part of other work), so while we waited Austin completed the transition to Laser Hazard.  We took a picture (1st attachment) of the ITMx OpLev data (SUM counts, pitch and yaw readings), then powered down the laser.  We placed the laser in the cooler and plugged in the new power supply; laser turned on as expected.  We then installed a Lexan viewport cover and removed the cover from the ITMx OpLev transmitter pylon.  The old 10m fiber was removed, and we found 2 areas where the fiber had been crushed due to over-zealous strain relief with cable ties (attachments 2-4; this is why we originally moved to armored fibers); I'm honestly somewhat surprised any light was being passed through this fiber.  We installed the armored fiber, being careful not to touch the green camera housing and to not overly bend the fiber or jostle the transmitter assembly, and turned on the laser.  Unfortunately we had very little signal (~1k SUM counts) at the receiver, and the pitch and yaw readings were pretty different.  We very briefly removed the Lexan cover (pulled it out just enough to clear the beam) and the SUM counts jumped up to ~7k; we then put the Lexan back in place; we also tried increasing the laser power, but saw no change in SUM counts (laser already maxed out).  This was an indication that we did manage to change the transmitter alignment during the fiber swap, even though we were careful not to jostle anything (it can happen, and it did), and that the Lexan cover greatly changes the beam alignment.  
So we loosened the locking screws for the pitch and yaw adjustments and very carefully adjusted the pitch and yaw of the launcher to increase the SUM counts (which also had the effect of centering the beam on the receiver). The most we could get was ~8.3k SUM counts with the Lexan briefly removed, which then dropped to ~7k once we re-installed the transmitter cover and completely removed the Lexan (no viewport exposure with the transmitter cover re-installed). We made sure not to bump anything when re-installing the transmitter cover, yet the SUM counts dropped and the alignment changed (the pitch/yaw readings changed, mostly yaw by ~10 µrad). Maybe this pylon is a little more loose than the others? That's a WAG, as the pylon seems pretty secure.

I can't explain why the SUM counts are so much lower; it could be the difference between the new and old fibers, or we could have really changed the alignment so we're now catching a ghost beam (but I doubt this, we barely moved anything). Honestly I'm a little stumped. Given more time on a future maintenance day we could remove the receiver cover and check the beam at the QPD, but as of now we have what appears to be a good signal that responds to pitch and yaw alignment changes, so we moved on. We re-centered the receiver QPD, and now have the readings shown in the 5th attachment; ITMx did not move, it stayed in its Aligned state the entire time. This is all the result of our work on the OpLev. We'll keep an eye on this OpLev over the coming days, especially watching the SUM counts and pitch/yaw readings (looking for drift and making sure the laser is happy in its new home; it is the oldest installed OpLev laser at the moment). The last few attachments are pictures of the new cooler and the fiber run into the transmitter assembly. This completes LHO WP 11422 and FRS 4544.

Images attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 15:45, Thursday 21 September 2023 (73039)SUS

The ITMX OPLEV sum has been dropping over the past week. It was around 7000 counts last week and has since gone down to around 4000 counts - please see the attached screenshot.

Images attached to this comment