Reports until 16:12, Wednesday 31 May 2023
H1 General
camilla.compton@LIGO.ORG - posted 16:12, Wednesday 31 May 2023 (70051)
OPS Evening Shift Start

TITLE: 05/31 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 7mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY: IFO has been locked for 3h20m. Just finishing some commissioning measurements before we go back to observing.

Ryan has new violin damping settings for IY5/6; I'll watch these tonight. 70039

Dust monitors, SUS, SEI, and VAC are okay. CDS just has commissioning-related open test points and SDF diffs. 

H1 General
ryan.crouch@LIGO.ORG - posted 16:03, Wednesday 31 May 2023 (70039)
OPS Wednesday day shift summary

TITLE: 05/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
SHIFT SUMMARY:

Lock #1:

Picket Fence seismometer data is back

Quick lockloss @ 18:03 UTC, possibly from a fast LSC ringup at 10 Hz. The lockloss tool still seems to be having trouble; this lockloss isn't showing up in it.

Lock#2:

No luck on DRMI or PRMI; AS_AIR looked like something was badly misaligned, and I couldn't get any flashes on the fast buildups, so I went into an initial alignment.

I had to intervene at green arms, where I tapped ETMY in yaw by less than a microradian; it caught after increase flashes finished, though it struggled in ENGAGE_WFS. After SRC alignment, AS_AIR was still very yawed and ASC_AS_A on the IR_INIT_ALIGN ndscope (bottom right plot) was only around 3 instead of 4-5, so I adjusted SRM in yaw (~110 microradians) to maximize peak-to-trough of the ASC_AS_A signal and to get a more symmetrical AS_AIR spot.

DRMI locked itself almost immediately after the IA finished.

PI31 successfully damped during INJECT_SQUEEZING

NLN @ 19:48UTC, In Observing @ 19:56UTC

For IY5/6, I'm first trying a sign flip to a gain of -0.01. It looks promising based on the drive converging and on the narrow and broad monitors; based on DARM spectra (0.01 Hz BW), this setting brings down IY5/6 by ~10% per hour over an hour of damping. I also stepped up the gain on IY4 from 0.08 to 0.4 in a few steps and it damps much faster. I tried stepping up IY8's gain as well but didn't see any appreciable difference.
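As a back-of-envelope aside (my own arithmetic, not part of the damping procedure): a ~10% amplitude reduction per hour corresponds to an exponential decay time constant of about 9.5 hours. A minimal sketch:

```python
import math

def decay_time_constant_hours(ratio_per_hour):
    """Time constant tau for A(t) = A0 * exp(-t / tau), given the
    fraction of the mode amplitude remaining after one hour of damping."""
    return -1.0 / math.log(ratio_per_hour)

# ~10% reduction per hour means 90% of the amplitude remains each hour
tau = decay_time_constant_hours(0.9)  # ~9.5 hours
```

So at this gain it would take roughly a day of lock for the mode to ring down by an order of magnitude, consistent with wanting faster settings where available.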

Out of Observing at 21:01UTC for planned commissioning work: SQZ (angle tuning, NoSQZ reference, tuning for SRCL offset), SRCL offsets, PRCL OLG...

LOG:                                                                                                                                                                                                                                  

Start Time System Name Location Lazer_Haz Task Time End
15:13 FAC Kim Optics lab LOCAL Technical cleaning 15:35
15:33 FAC Tyler Mids N work on strainers for HVAC 18:09
15:39 CDS Johnathan MSR N DMT work, replacing h1dmtlogin 16:48
16:21 VAC Janos MidX N Check on pump 16:55
17:53 FAC Kim MidX N Technical cleaning 18:46
20:26 FAC Bubba MidX/Y N Equipment search 21:26
21:00 LSC, SQZ,... Jenne, Elenna, Sheila CR N Commissioning work Ongoing
21:40 FAC Christina Receiving N Roll up doors 22:00
21:28 FAC Richard Carpenters shop N   21:40
21:54 VAC Janos, Jordan MidX N Hepta pump check, Mech room 22:14
22:19 FAC Richard FCES N Search for equipment 22:30
H1 DetChar (ISC)
derek.davis@LIGO.ORG - posted 11:50, Wednesday 31 May 2023 - last comment - 15:33, Wednesday 31 May 2023(70044)
Ring up at 11 Hz near end of locks

Elenna, Derek

Since at least May 28, there has been a consistent ring-up near 11 Hz starting 1-2 hours before lock loss for locks lasting longer than a few hours. I've attached a spectrogram and glitchgram from May 28 showing the ring up at 11 Hz. I also include glitchgrams from May 27 (that doesn't show this ring up) and May 30 (which does show this).  The most recent lock on May 31 also shows evidence of this ring up, as shown in the attached live glitchgram.

 

Elenna notes that this could be related to PRCL no longer having an appropriate unity gain frequency (UGF) after the most recent ring heater change. 

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 11:54, Wednesday 31 May 2023 (70045)ISC

I have attached one particular scope with the LSC trends right before lockloss. During the engineering run, we ran a UGF servo for PRCL for several locks to check how the digital gain needed to be adjusted. I think that trend is now incorrect because of the change to the ring heater. I think the best thing to do would be to recommission the PRCL UGF servo for the next few locks. This will give us enough information to determine the new gain trend for PRCL. Then, we can put that information in the thermalization guardian and turn off the servo.

The UGF servo will require the injection of a line that will appear in DARM around 50 Hz.

Images attached to this comment
gabriele.vajente@LIGO.ORG - 14:24, Wednesday 31 May 2023 (70047)

The PRCL error signal shows a glitchy peak at 12.5 Hz that increases slowly over time into a lock.

This is very likely a low-gain peaking in the PRCL loop that gets worse with time.

Looking at the error signal at the beginning of the lock and about 5 hours in, there seems to be a loss of gain of a factor of about 3, and a large peak/bump at 12 Hz.

Images attached to this comment
andrew.lundgren@LIGO.ORG - 15:33, Wednesday 31 May 2023 (70049)DetChar, ISC
It's not just PRCL, SRCL also gets the same feature and it seems to actually be bigger. MICH just barely sees the feature. I'm attaching spectrograms of SRCL and PRCL, made with the same settings for comparison. The third plot is REFL_SERVO, which has an increase of a factor of two over the lock across the spectrum.
Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 11:27, Wednesday 31 May 2023 - last comment - 17:44, Wednesday 31 May 2023(70042)
Input power reduced by 1W

We have recently reduced the ETMX ring heater by 0.1 W to avoid the 80 kHz PI. As a result, our buildups have increased slightly and we have been operating closer to 440 kW. We have also had much shorter locks than we had in the past when operating around 430 kW. Looking at the power and lock trends, our longest locks occur when the circulating power is slightly lower. This small variation in circulating power can result from slightly different input powers every lock, depending on how the ISS second loop closes. Jenne and I think this is enough evidence to drop the PSL input power by 1 W. We will now operate at 75 W input from the PSL, which should bring us closer to the 430 kW operating power that works best for us.
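As a rough consistency check (my own arithmetic, not from the alog): taking the quoted numbers at face value, 75 W in and ~430 kW circulating imply an overall power build-up factor of roughly 5700.

```python
# Back-of-envelope only: the quoted PSL input power and target
# circulating arm power imply an overall power build-up factor.
input_power_w = 75.0          # new requested PSL input power, W
circulating_power_w = 430e3   # target circulating power, W

buildup = circulating_power_w / input_power_w  # ~5733
```

At that build-up, the 1 W input reduction corresponds to the few-kW circulating-power change being targeted here.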

The requested power level for NLN in lscparams is now 75W. I have loaded the ISC_LOCK, IMC_LOCK and LASER_PWR guardians accordingly.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:44, Wednesday 31 May 2023 (70058)CAL
Tagging CAL.
This power decrease should be covered by TDCFs, but let's be sure.
H1 ISC
jenne.driggers@LIGO.ORG - posted 11:26, Wednesday 31 May 2023 (70043)
LSC noise cleaning retrained, not yet implemented

With yet more help from Gabriele, we've got a freshly trained LSC cleaning, using the same training data as alog 70023.

The 'secret tricks' were to actually sample the 16384 Hz GDS filter at 16384 Hz (rather than my mistakenly using 2048 Hz), and then to train without using the ASC modulations at first to get good starting estimates for filters, then fine tune the training with the modulations added back in. 

Attached is the offline subtraction that we should be able to achieve.  Note that this LSC training will be in competition around 120 Hz with the Jitter training from alog 70023, so I'll likely have to implement one of them, collect a few mins of data, and retrain on that data.

I'll try this out during our commissioning window this afternoon. 

Images attached to this report
H1 CDS
jonathan.hanks@LIGO.ORG - posted 11:13, Wednesday 31 May 2023 (70041)
WP11235, replaced h1dmtlogin
Today I replaced h1dmtlogin.  Ryan C restarted some of the FOMs after the hardware swap to verify that things were still working.

I have pulled the old hardware out of the racks.
LHO VE
david.barker@LIGO.ORG - posted 10:22, Wednesday 31 May 2023 (70040)
Wed CP1 Fill

Wed May 31 10:07:40 2023 INFO: Fill completed in 7min 39secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 08:15, Wednesday 31 May 2023 (70038)
OPS Wednesday day shift start

TITLE: 05/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 128Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

Taking over from Ibrahim

LHO FMCS
bubba.gateley@LIGO.ORG - posted 08:09, Wednesday 31 May 2023 - last comment - 17:14, Thursday 08 June 2023(70037)
HVAC in LVEA
This will be a catch-up Alog as it dates back to last week. It started with trying to adjust the airflows (per Robert S.) and the rising temps in the LVEA, so I started investigating and found a large amount of condensate on the floor of AHU-1. We (Tyler, Randy, Chris and I) cleaned up all of the water with wet/dry vacuums and then adjusted mechanical linkages on Fan 1 & 2 to achieve the desired airflows. 
At that time, it was discovered that cooling coil 4 was much warmer than the other coils. I was not sure what other adjustments had been made by others so I continued to monitor. I took Friday off but was checking coil 4 and talking with Richard throughout the day. Towards the end of the day coil 4 was still much warmer than normal and allowing warm air into the LVEA. I decided to turn Fan 4 completely off to see if I could get the temps to stabilize. This strategy helped. 

Yesterday, being maintenance day, I took the opportunity to investigate a little more into why coil 4 was not cooling. After checking functionality of all the valves, I decided to open the strainer and found that it was very full of debris. I cleaned the strainer, reinstalled and turned Fan 4 back on. The coil is now running at normal temperature and the LVEA is coming back to a stable temperature. 

Because of the rising temps last week, many of the zone temperatures were lowered to try and help with the warmer areas. These setpoints have all been returned to the original desired temps. 

A work permit will be put in soon to inspect and clean all remaining strainers on the next Tuesday maintenance period.
Comments related to this report
camilla.compton@LIGO.ORG - 17:16, Wednesday 31 May 2023 (70056)

Adding plots of LVEA temperature over the last 10, 16 and 30 days. With cross-hairs showing dates temperatures changed. This plot is available with command: ndscope /opt/rtcds/userapps/release/cds/h1/scripts/fom_startup/nuc32/VEA_temperatures.yaml

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 17:14, Thursday 08 June 2023 (70284)CDS, DetChar, FMP, ISC, OpsInfo, SYS
Adding some quantitative time and information to this aLOG series.

First, I cover the HVAC system and the channels that I'm using for metrics of the LVEA HVAC system.
The LVEA has 2 giant cooling actuators, or "air handlers" that live in the Mechanical Room, +X -Y of the beam splitter.
These air handlers add together to cool the LVEA as a common actuator.
There are separate, local *heaters* in each zone of the LVEA that serve as "differential" actuators. 

The system is depicted pretty well in the MEDM overview screen found under sitemap > "FMCS OVERVIEW" > "Air Handler 1,2"

The channels that are useful metrics of the system are listed below with their function and units
External, outside the building temperature
    H1:PEM-CS_TEMP_ROOF_WEATHER_DEGF       Temperature on the LVEA building roof, in deg F (look for corresponding "DEGC" channels for deg C)

LVEA temperature sensors << our primary "canary in the coal mine" indicator that the suspensions will likely be misaligned
    H0:FMC-CS_LVEA_ZONE1A_DEGF    Temperature, in deg F, around BSC2 (think H1 Beamsplitter and ITMs)    
    H0:FMC-CS_LVEA_ZONE1B_DEGF    ", ", around 3-IFO Area (think old H2 beamsplitter)
    H0:FMC-CS_LVEA_ZONE4_DEGF     ", ", Output arm (think SR3)
    H0:FMC-CS_LVEA_ZONE5_DEGF     ", ", Input arm (think PR3)

Local, corresponding heater units
    H0:FMC-CS_LVEA_HEATER_ZONE1A_PC    percentage of heating power being applied to the zone
    H0:FMC-CS_LVEA_HEATER_ZONE1B_PC
    H0:FMC-CS_LVEA_HEATER_ZONE4_PC
    H0:FMC-CS_LVEA_HEATER_ZONE5_PC

Common HVAC Air Handler 1 -- often referred to as just "AHU1"
    These channels are human-controllable via the HVAC control system
    H1:FMC-CS_LVEA_AH_DAMPER_1_PC            "Percentage of open" Intake air reduction "damper" valve (0% full closed, 100% full open)
    H0:FMC-CS_LVEA_AH_COOLTEMP_1_DEGF        Temperature of the "Cooling Coil"
    H0:FMC-CS_LVEA_AH_COOLTEMP_2_DEGF
    H0:FMC-CS_LVEA_AH_AIRFLOW_1              Output Air flow into the LVEA in cubic feet per minute (cfm, or CFM)
    H0:FMC-CS_LVEA_AH_AIRFLOW_2

Common HVAC Air Handler 2 -- often referred to as just "AHU2"
    These channels are human-controllable via the HVAC control system
    H0:FMC-CS_LVEA_AH_DAMPER_2_PC
    H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF
    H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF
    H0:FMC-CS_LVEA_AH_AIRFLOW_3
    H0:FMC-CS_LVEA_AH_AIRFLOW_4
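Most temperatures below are quoted in deg F (the *_DEGF channels); as noted above, corresponding *_DEGC channels exist. For quick reference, the standard conversion (my own helper sketch, not an existing site script):

```python
def degf_to_degc(temp_f):
    """Convert a *_DEGF channel reading to its *_DEGC equivalent."""
    return (temp_f - 32.0) * 5.0 / 9.0

# e.g. the "good" LVEA cluster near 67 deg F is about 19.4 deg C
```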

The story starts earlier than the adjustment to the air-handler units to reduce the overall cooling airflow into the LVEA. Given that IFO problems with temperature started in late April 2023, here's the timeline of every event.

Apr 26 2023 10:56 PDT (Wednesday, mid-morning)
    Air handler 2's damper has a step-change in behavior, going from diurnal fluctuations between 50% and 80% open, to between 60% and fully 100% open.
    This only causes a minor "glitch" change in the LVEA temperatures, mostly in the diurnal fluctuations getting "reset," changing the diurnal pattern but no overall average temperature change.

May 02 2023 07:57 PDT (Tuesday, first-thing, minutes before the official start of maintenance day)
    Air handler 2's damper, and the corresponding cooling temperatures 3 & 4, glitch for 30 minutes. For the most part, the damper returns to the normal 50-80% open behavior.
    This only causes a minor change in the LVEA temperatures, again in the diurnal fluctuations changing their pattern but no overall average temperature change.

May 10 2023 13:46 PDT (Wednesday afternoon)
aLOG: LHO:69516
Air handler unit 2, AHU2 fails and turns off.
Channels showing this:
    H1:FMC-CS_LVEA_AH_DAMPER_2_PC         Goes fully closed to 0%
    H0:FMC-CS_LVEA_AH_AIRFLOW_3           Goes to zero, as airflow ceases
    H0:FMC-CS_LVEA_AH_AIRFLOW_4           Goes to zero, "
    H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF     Goes up from stable at 50-60 deg F to really high at ~65 deg F
    H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF

    This causes a major mean-value excursion in LVEA temperature, but after the air handler function is restored, all zones come "back to normal" by May 11 2023 12:38 PDT, ~24 hours later.

May 11 2023 12:38 PDT
With Air Handler 1 restored, the LVEA temperatures return to normal, but the airflow is now much larger, with Fan 1 producing ~12500 cfm of flow where it used to put out only ~6000 cfm.
Channels that show this:
    H0:FMC-CS_LVEA_AH_AIRFLOW_1
    
    This doesn't change the LVEA temperature, but it means there's a lot more air flowing into the LVEA. Maybe this is what caused Robert to pay attention?
    
    
May 17 2023 09:42 PDT (Saturday morning)
The damper for Air Handler Unit 2 starts to open up to a constant 100%, and has stayed like this since, only occasionally coming off the rail to 80% in the first few days.
Channels showing this
    H1:FMC-CS_LVEA_AH_DAMPER_2_PC

    This causes a marked change in Zone 4, a 0.5 deg F decrease in temperature. That's not a lot, but worth calling out. The diurnal pattern of Zone 4 changes as well. Further, the cooling coils for AHU2 also start to run warmer, increasing from (diurnal fluctuations around) 51 deg F to (diurnal fluctuations around) 53 deg F.

Then we get to where Bubba says, "It started with trying to adjust the airflows (per Robert S.)"

May 24 2023 08:48 PDT (Tuesday maintenance)
In reference to LHO:69894, where Bubba says:

"Fans 1 & 2 [of Air Handler Unit 1] supplying the LVEA were readjusted to [reduced] airflows for the Observation Run. 
 The **total** CFM for AHU 1 was reduced from ~ 26,000 to ~ 11,200 CFM"

With a bit more detail,
 - Fan 1 (H0:FMC-CS_LVEA_AH_AIRFLOW_1) is reduced from ~12500 cfm back to its level prior to the May 11 2023 fix of Air Handler 1, ~6000 cfm.
 - Fan 2 (H0:FMC-CS_LVEA_AH_AIRFLOW_2) is reduced dramatically from ~11000 cfm, the value it had held for a long time, down to ~4500 cfm.
Thus Bubba's statement that the total went from (12500+11000 =) ~23500 cfm to (6000+4500 =) ~10500 cfm.

As a result of this, we land on Bubba's statement, "At that time, it was discovered that cooling coil 4 was much warmer than the other coils" (from the May 17 2023 change in damper behavior), because you see both coil 3 and coil 4 from Air Handler 2 (H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF and H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF) at the higher, ~53 deg F level.
   
   This dramatic airflow change causes the LVEA temperatures to merge high for a bit until May 24 18:38 PDT, when Zone 4 (think SR3), the output arm, gets warmer to meet the other zones.

May 24 2023 22:30 PDT to May 25 2023 06:52 PDT (Late Night Tuesday to Early Morning Wednesday)
Air Handler 1's cooling coils are slowly stepping up in temperature (by the control system? by a human?) from 46 deg F to as much as 54 deg F.
   
   Somehow, this slowly starts to bring all temperatures *down* in the LVEA in all zones together until the next morning at May 25 2023 06:45 PDT.

May 25 2023 06:52 PDT (Early morning Wednesday)
Air handler 1's cooling coils 1 and 2 get restored back to 46 deg F, and the heater in Zone 1A kicks on for 2.5 hours. (By the control system? By a human?)

   This brings all temperatures *up* in the LVEA to a relatively high value of 69 deg F until another change on May 26 2023 15:56 PDT.

May 26 2023 15:56 PDT (Friday Afternoon)
Air Handler 2 Fan 4's air flow drops to zero, and cooling coil 3 and 4 temperature drops from 54 deg F to 49 deg F.

These are Bubba's actions when he says "Towards the end of the day [Friday Afternoon, May 26th] coil 4 was still much warmer than normal and allowing warm air into the LVEA. I decided to turn Fan 4 completely off to see if I could get the temps to stabilize. This strategy helped."

   This brings the temperatures collectively *down* in the LVEA from ~69 deg F to ~67 deg F, closer to the "normal" values for the LVEA, though Zone 5 (think PR3) and Zone 1B (think 3IFO) are lower than their "normal" values.

May 30 2023 09:50 PDT - 11:51 PDT (Tuesday, Maintenance Day)
All hell breaks loose with the HVAC as the Fire Alarm system overrides the HVAC controls -- only really mentioned in the operator log during maintenance that day, see LHO:70003 -- and it drives TJ to investigate drifts in temperature (LHO:70000), thinking "it's happening again!!"

Meanwhile, Bubba takes the action he mentions:
" Yesterday, being maintenance day, I took the opportunity to investigate a little more into why coil 4 was not cooling. After checking functionality of all the valves, I decided to open the strainer and found that it was very full of debris. I cleaned the strainer, reinstalled and turned Fan 4 back on. The coil is now running at normal temperature and the LVEA is coming back to a stable temperature. "

    However, upon restoration of the HVAC settings, Air Handler 2's "Fan 4 is turned off" setting from May 26 2023 (Friday Afternoon) gets lost, and Fan 4 comes back on at full blast at 12000 cfm. Further, Air Handler 1's cooling coils come back on much hotter than they've ever come on before, changing from 45 deg F to 54 to 65 deg F.

    Once AHU2 Fan 4 comes back on, and air flow to the LVEA re-increases, the temperatures in the LVEA begin to drop again, tanking down as low as 66 deg F.

May 31 2023 06:23 PDT (Early Wednesday Morning)
    Zone 4 and Zone 5 heaters come into play (by a human? or by the control system?), turning on for the first time with Zone 4 at 40 percent and Zone 5 left to oscillate diurnally between 20% and 40%.

    This brings all LVEA temperatures back up to a really tight cluster around 67 deg F, for the first time in months. This is great; we should hold here if we can.

June 04 2023 18:34 PDT (5 days later, Sunday Evening)
For reasons I can't determine (no other visible change in the metrics I've been using), Zone 1B (think 3IFO) jumps up in temperature 0.5 deg F from 67.2 deg F to 67.7 deg F.

    This is a pretty inconsequential change in temperature that shouldn't affect suspensions.

June 06 2023 09:53 PDT (2 days later, Tuesday Maintenance Day)
    The fire alarm maintenance team is back at it, and this causes the intake damper of Air Handler 2 (H0:FMC-CS_LVEA_AH_DAMPER_2_PC) to drop to zero.

    This causes the entire LVEA temperature to rise again from its stable 67 deg period starting on May 31 2023 06:23 PDT to zones going as high as 69 deg F.

June 06 2023 11:52 PDT (same day)
    The operations team realizes the fire alarm closed the AHU2 damper again, calls in facilities, and re-opens it. The temperatures come back down, but overshoot because the Zone 1B and Zone 4 configuration got lost.

June 06 2023 17:02 PDT (same day, but later in the evening)
    Someone, or something, turns the Zone 1B and Zone 4 heater configuration back on.

    Temperatures return to the "good" May 31 2023 06:23 PDT configuration.
Images attached to this comment
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 08:03, Wednesday 31 May 2023 (70036)
OPS Owl Shift Summary

TITLE: 05/31 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 126Mpc
SHIFT SUMMARY:

  1. Lockloss at 12:13 UTC - DRMI Unlocked due to Myanmar quake.
    1. Guardian immediately went into Earthquake mode after this - it’s coming through on Picket Fence now. Seems like it was just the 5.8 Myanmar. SEI just went back to WINDY/CALM as of 12:23 UTC.
  2. Lock 2: 
    1. ALS X and Y both at increase flashes
      1. Had to touch both X and Y
        1. Lost lock during Locking_ALS - had to relock
        2. Lost lock during Locking_ALS again
        3. Lost lock a few times post-EQ but would always catch immediately without needing to increase flashes post-manual movement.
        4. Lost lock as soon as IR was found 
    2. POP signals are terrible even in PRMI, so I will misalign the PRM and adjust the BS
    3. I couldn’t make the POP signals any better by moving the usual culprits, so I will take an initial alignment (the EQ combined with the temp change informs this decision).
      1. Init_Align went smoothly
      2. ALS X looks much more stable
    4. DRMI Locked with 0 help (post initial alignment) - never had this happen!
    5. Steep EQ coming in put SEI_CONF in EARTHQUAKE Automatically - at max power as this is hitting (rode through Japan EQ in Earthquake mode)
    6. Back to NLN at 14:34 UTC
  3. IFO is in NLN and OBSERVING as of 14:47 UTC

Other:

  1. Picket Fence only gathering/showing data from one seismometer. Tried restarting and troubleshooting but the issue persisted. I think it was an issue on the seismometers' side because they all came back online around the same time (I also restarted it again). This has now happened 3 times (where it gets stuck on one seismometer and then fixes itself).
  2. In response to the ISS pump power not reaching its threshold: with guidance from Vicky, we turned down the threshold on pump_iss_setpoint from 60uW to 50uW. I saved the change in the code, and before I could reload its guardian it reached the nominal "observing" mode, so I think it worked, since this is where it was getting stuck.
    • From the log, the power would go close to 60 and then reset, which was preventing us from observing so this should fix it. Thanks Vicky!
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 04:00, Wednesday 31 May 2023 (70035)
OPS OWL Mid-Shift Update
  1. Lockloss at 8:10 UTC due to DRMI unlocked - locking now
  2. Lock 1:
    • Had to go to increase flashes (both)
      1. Had to touch X significantly 
    • Took a while to get Find_IR
    • DRMI unlocked
      1. Need to go through PRMI
      2. Put SQZ_MANAGER in DOWN as per Vicky's request
      3. Touched BS when PRMI locked to overlay the beams 
        • As with yesterday (5/30), had to go through PRMI, at which point I assisted guardian (after letting it try and fail for a short time) by touching PRM to get PRMI locked. While attempting to acquire DRMI (1F), I touched the BS while looking at the AS AIR camera to align the "peanut" beams on top of one another.
      4. Got NLN at 9:33 UTC but got stuck at ENGAGE_PUMP_ISS while trying to get LOCKED_CLF_DUAL in order to achieve FREQ_DEP_SQZ. Fixed itself somehow after 27 minutes of stalling.
  3. IFO STATUS: In NLN and OBSERVING as of 9:59 UTC.
  4. Other: NUC 30 DARM FOM working but near-constantly glitching.
H1 General
camilla.compton@LIGO.ORG - posted 00:09, Wednesday 31 May 2023 (70025)
OPS Evening Shift Summary

TITLE: 05/31 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing
SHIFT SUMMARY:
LOG:

Lock#1:

Adjusted some violin settings; note that we now have no good damping settings for IY5/6 and are leaving them with a gain of zero: alog 70020
Lockloss at 01:09 UTC after 2h44m at NLN. From the lockloss tool we can see a ~10-14 Hz yaw wiggle (LPY plot) in the seconds before lockloss; it can also be seen in the ASC control signals: 1369530572

Lock #2:

Super long relock acquisition, as I failed to notice that SRM didn't correctly lock during INITIAL_ALIGNMENT as TJ warned; see 70027 and 70013. Thanks Jenne for helping.
Temperatures in the LVEA are still changing; see the attached plot of the last 36 hours to go with TJ's 70000. (I adjusted the y-axis scale by 1 degree.)
Back to NLN at 05:07 UTC and Observing (after waiting for the ASD to converge) at 05:20 UTC.
PI31 was successfully automatically damped at the start of the lock (see attached).
The only SDF diffs were for adjusting the SQZ FC_ASC threshold and OPO TEC temperature set point; details in alog 70031.
Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 00:07, Wednesday 31 May 2023 (70034)
OPS Owl Shift Start

TITLE: 05/31 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 130Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING (and has been for 2 hours)

H1 SEI
camilla.compton@LIGO.ORG - posted 23:33, Tuesday 30 May 2023 (70033)
H1 ISI CPS Noise Spectra Check - Weekly - FAMIS 19660

Closes FAMIS 19660, last done in 69690.

BSC CPS:

EX ST1 noise much lower than last week.

HAM CPS:

All look nominal. They seem to have lower low-frequency noise and sharper 12 Hz peaks compared to last week.

Images attached to this report
H1 SEI
camilla.compton@LIGO.ORG - posted 23:20, Tuesday 30 May 2023 (70032)
BRS Drift Trends - Monthly - FAMIS #17559

Closes FAMIS 17559, last done 69145.

Trends attached, both look normal. 

Images attached to this report
LHO VE
jordan.vanosky@LIGO.ORG - posted 13:38, Tuesday 30 May 2023 - last comment - 13:25, Wednesday 31 May 2023(70014)
CP6 (Mid-X) Dewar Jacket Pumping

We set up an ISP250 to pump on the dewar jacket for CP6 (Mid-X). The pump was turned on at 1:23 PM. The pump was placed on two pieces of foam, see attached picture.

 

Will add a comment to this report when the pump is turned off today.

Images attached to this report
Comments related to this report
janos.csizmazia@LIGO.ORG - 13:25, Wednesday 31 May 2023 (70046)
In the end we didn't turn this pump off at MX. Instead, we let it run at CP6 until 5/31 (Wednesday) morning, then moved it to CP5. Currently, it is still pumping.
H1 ISC
jennifer.wright@LIGO.ORG - posted 10:54, Monday 29 May 2023 - last comment - 14:19, Wednesday 31 May 2023(69985)
Feed Forward Tests with new filters after thermalisation

Jennie W., Ryan S., Evan H.

We tested the new filters Elenna & Gabriele designed in this alog. Previous efforts to design these recently are in this alog.

These were previously tested by Ryan C. here.

Executive Summary

The aim was to do a test > 6 hours in as the filters were optimised for a thermalised IFO.

The new FF filters make DARM sensitivity worse.

SRCLFF coupling to DARM looks better with the new filters between 12 and 56Hz roughly.

MICHFF coupling to DARM looks worse below 100Hz with new filters.

Tuning the gains of MICHFF up and down made the coupling worse.


Test 1

We took a background measurement at 15:32:23 UTC.

This was with the old filters for MICHFF and SRCLFF1 - labelled as 4-23-23 in filter banks.

Ended before 15:33:39 UTC.

SRCLFF1 gain = 1, MICHFF gain = 1

Ref 0 - CAL-DELTAL_EXTERNAL_DQ Power Spectrum

Ref 1 LSC-MICH_OUT_DQ/CAL-DELTAL_EXTERNAL_DQ Coherence

Ref 2 LSC-SRCL_OUT_DQ/ CAL-DELTAL_EXTERNAL_DQ Coherence

 

Test 2

Went out of OBS, changed the ramp time of both banks to 5 s. Turned the gain to 0. Turned off 4-23-23, turned on 5-20-23. Turned the gain back to 1.

We did the MICH filter first, it was turned on by 15:37:50 UTC.

Then the SRCL filter by 15:38:29 UTC.

Both banks have a gain 1 nominally.

SRCLFF1 gain = 1, MICHFF gain = 1

Measurement start = 15:38:56 UTC

Measurement end = 15:40:00 UTC

Ref 3 CAL-DELTAL_EXTERNAL_DQ Power Spectrum

Ref 4 LSC-MICH_OUT_DQ/CAL-DELTAL_EXTERNAL_DQ Coherence

Ref 5 LSC-SRCL_OUT_DQ/ CAL-DELTAL_EXTERNAL_DQ Coherence

 

Test 3

For the next test I misread the data, decided SRCL was worse with the new filter, and so tuned the gain a bit.

SRCLFF1 gain = 1.1, MICHFF gain = 1

Measurement start = 15:42:59 UTC

Measurement end = 15:44:10 UTC

Ref 6 CAL-DELTAL_EXTERNAL_DQ Power Spectrum

Ref 7 LSC-SRCL_OUT_DQ/ CAL-DELTAL_EXTERNAL_DQ Coherence

Afterwards I realised that SRCL was better with the new FF filters, so we put the SRCLFF gain back to one at 15:49:39 UTC and took some measurements while tuning the MICH gain which is broadly worse with the new filters.

 

Test 4

MICHFF gain changed by 15:51:06 UTC

SRCLFF1 gain = 1, MICHFF gain = 0.9

Measurement start = 15:51:58 UTC

Measurement end = 15:53:15 UTC

Ref 8 CAL-DELTAL_EXTERNAL_DQ Power Spectrum

Ref 9 LSC-MICH_OUT_DQ/CAL-DELTAL_EXTERNAL_DQ Coherence

 

Test 5

SRCLFF1 gain = 1, MICHFF gain = 1.1

Ref 10 CAL-DELTAL_EXTERNAL_DQ Power Spectrum

Ref 11 LSC-MICH_OUT_DQ/CAL-DELTAL_EXTERNAL_DQ Coherence

Template saved in /ligo/home/ryan.short/FF_testing.xml

Old filters back in by 16:00:50 UTC

My version - corresponding to attached plots is saved in jennifer.wright/git/Feedfoward/2023-05-29_DARM_FOM_LSC_FF_check.xml.

Red measurement is the old FF filters, blue is the new with a gain of 1, yellow is with a higher gain in SRCLFF1, purple is with a lower gain in MICHFF and green is with a higher gain in MICHFF.

The first image shows DARM ASD, SRCL coupling, and MICH coupling; the second shows DARM ASD for all measurements; the third shows DARM ASD for old filters vs. new filters with no gain changes.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:19, Wednesday 31 May 2023 (70048)

Today we ran this test again, only changing the SRCL FF, since there was some evidence from this test that it helped. It appears that, yes, the new SRCL FF is a small improvement. I also checked the PRCL and MICH coherence during this test. In the attachment, all reference traces are "before", with the old LSC FF. The live (red) traces are all with just the new SRCL FF activated.

I zoomed in on the region where I saw improvement in DARM. This test was performed with squeezing off and there was no apparent change in DARM above 200 Hz.

Images attached to this comment