H1 AOS
sheila.dwyer@LIGO.ORG - posted 13:51, Wednesday 24 May 2023 (69890)
A few minutes out of observe for SQZ green ISS recovery

Vicky, Sheila, TJ

At 17:48 UTC the green ISS railed and was turned off, which caused a drop in OPO green power and BNS range for a couple of hours, as shown in the attached screenshot. This happened because the output power of the SHG was dropping; the SHG output power seems very sensitive to LVEA temperature and drifts for reasons we don't understand.

At around 19:37 UTC Vicky logged in, we took squeezing out of the IFO, and we reset the AS42 offsets. Vicky adjusted H1:SQZ-OPO_ISS_DRIVEPOINT to see what the OPO trans value would be when the drivepoint was around 2.5 V; she did this to get the highest OPO trans power we could while keeping some headroom so as not to rail the ISS. This gave an OPO trans value of 55 uW, so we set opo_grTrans_setpoint_uW in sqzparams to 55, reloaded the SQZ_OPO_LR guardian, requested LOCKED_CLF_DUAL_NO_ISS, then set the request back to LOCKED_CLF_DUAL.

Since the OPO circulating power is now lower, we need a higher OPO TEC setpoint to compensate for the reduced local heating. On the SQZ overview there is an OPO temp ndscope template from Vicky that I used to adjust the temperature to maximize CLF_REFL6_ABS. After this we requested FREQ_DEP_SQZ from SQZ_MANAGER and attempted to adjust the SQZ angle to improve the BLRMS. I moved the SQZ angle by 10 degrees and saw no change, so I set it back to the original angle and we went back to observe after Vicky accepted the new offsets in SDF. We may still need to adjust the SQZ angle, but that will probably require looking at the DARM spectrum more carefully: since we have low nonlinear gain and also technical noise limiting us, the impact of being mistuned in SQZ angle is smaller than normal.
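For reference, a minimal sketch of the kind of temperature optimization that template guides, done by hand here with assumed channel names (my assumptions, not verified against the actual SQZ model):

    # Hypothetical sketch: step the OPO TEC setpoint, record CLF_REFL6_ABS at
    # each step, then sit at the temperature that maximizes it.
    import time
    import numpy as np

    def scan_opo_temp(ezca, center, span=0.02, steps=11, settle=20):
        """Scan assumed channel SQZ-OPO_TEC_SETTEMP around `center` and park
        at the value maximizing assumed channel SQZ-CLF_REFL_RF6_ABS_OUTPUT."""
        best = (center, -np.inf)
        for T in np.linspace(center - span, center + span, steps):
            ezca['SQZ-OPO_TEC_SETTEMP'] = T
            time.sleep(settle)  # let the crystal thermalize before reading
            val = ezca['SQZ-CLF_REFL_RF6_ABS_OUTPUT']
            if val > best[1]:
                best = (T, val)
        ezca['SQZ-OPO_TEC_SETTEMP'] = best[0]
        return best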

This recovered some range for H1. To diagnose this issue more quickly in the future, TJ says he can add a check for the ISS status in DIAG_MAIN: H1:SQZ-OPO_ISS_OUTPUTRAMP should be 0; if it is 1, the ISS is off and something needs to be done.
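A sketch of what that test could look like, assuming the usual DIAG_MAIN pattern of generator-function tests with ezca in scope that yield warning strings (the test name is mine):

    # Hypothetical DIAG_MAIN-style test: warn when the SQZ green ISS has
    # railed and turned itself off (OUTPUTRAMP = 1 means the ISS is off).
    def SQZ_GR_ISS():
        if ezca['SQZ-OPO_ISS_OUTPUTRAMP'] != 0:
            yield 'SQZ green ISS is off -- OPO green power and range may degrade'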

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 13:16, Wednesday 24 May 2023 (69892)
Ops Day Mid Shift Report

We've been locked for over four hours and observing for most of that. We went out of observing from 19:38 to 19:49 UTC to fix a squeezer issue.

PI29 has not shown up yet; hopefully this uninvited guest stays out of O4.

H1 PEM (SEI, VE)
thomas.shaffer@LIGO.ORG - posted 13:11, Wednesday 24 May 2023 - last comment - 18:59, Wednesday 24 May 2023(69891)
Moving dewar from staging to CP1 and back seen on HAM2 STS

A crew of people moved a ~200 lb empty dewar from the staging building to CP1, forklifting it to the barricade sign and then rolling the dewar on its wheels the rest of the way to CP1. After the fill, they pushed the filled dewar back to the forklift and drove it back to the staging building. This could be seen on the HAM2 STS 10-30 Hz BLRMS channels, as shown in the attachment.
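For context, a minimal sketch of a 10-30 Hz band-limited RMS like those channels (an offline approximation of mine, assuming a raw STS timeseries x sampled at fs; the front-end BLRMS filters differ in detail):

    # Bandpass to 10-30 Hz, then take the RMS over fixed-length segments.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def blrms(x, fs, f_lo=10.0, f_hi=30.0, seg=1.0):
        """Return the [f_lo, f_hi] Hz band-limited RMS of x over `seg`-second segments."""
        sos = butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
        y = sosfiltfilt(sos, x)
        n = int(seg * fs)
        nseg = len(y) // n
        return np.sqrt(np.mean(y[:nseg * n].reshape(nseg, n) ** 2, axis=1))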

Images attached to this report
Comments related to this report
betsy.weaver@LIGO.ORG - 18:59, Wednesday 24 May 2023 (69900)

Note this activity was cleared by Keita.

LHO VE
david.barker@LIGO.ORG - posted 11:31, Wednesday 24 May 2023 (69888)
Wed CP1 Fill

Wed May 24 10:04:22 2023 INFO: Fill completed in 4min 21secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 11:30, Wednesday 24 May 2023 (69887)
Green'ed up h1pemcs DAQ rate on overview, summary of GDS_TP screens

The "DAQ Overflow" bit for h1pemcs on the CDS Overview was is in a minor EPICS alarm state (yellow) because its rate exceeds 4000kB (is at 4001kB with no testpoints open). Jonathan confirmed the true alarm limit is now 8000kB, and the EPICS alarm level will be updated in a future RCG. In the meantime I have changed the CDS Overview to set this block RED only if the rate exceeds 8000kB, which has green'ed up h1pemcs.

(h1pemcs exceeded this limit when the HAM7 accelerometers were added a few weeks back)

I reviewed the IOP GDS_TP screens; all are green except for six. Three have ADC AUTO_CAL failures and three have ADC overflows (H1 was in observation mode at the time). See attachment.

Images attached to this report
H1 CAL (CAL)
richard.savage@LIGO.ORG - posted 11:00, Wednesday 24 May 2023 (69886)
PCal X/Y calibration comparison, calculation of chi_XY now working

The front-end calculation of the Pcal X/Y calibration comparison factor, chi_XY (H1:CAL-CS_TDEP_PCAL_X_OVER_Y_REL_MAG), now seems to be working as designed after correcting an error in one of the demod frequencies yesterday afternoon.

The first attached plot shows the calculated value during a ~2-hour-long lock stretch last night. Ideally, the value should be unity. The mean value over this two-hour interval is close to 1.0, with variations of about ±0.0030. These variations may result from non-optimal filtering in the on-line calculation. Investigating and optimizing this front-end calculation and understanding variations in the chi_XY parameter are the subject of a SURF summer student project scheduled to start in mid-June.

The second attached image is a screenshot of a diaggui (DTT) session used to make a rough calculation of the parameter by hand.  The calculated value during the same lock stretch (starting at 09:00 UTC on 5/24/2023) is 1.00046.
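For a rough sense of the by-hand calculation, a sketch only (the input data and comparison-line frequency are assumed inputs, and the front-end applies filtering and calibration corrections this ignores):

    # Single-bin demod estimate of chi_XY in the spirit of the DTT cross-check.
    import numpy as np

    def demod_mag(x, fs, f_line):
        """Magnitude of timeseries x demodulated at f_line Hz."""
        t = np.arange(len(x)) / fs
        return 2 * np.abs(np.mean(x * np.exp(-2j * np.pi * f_line * t)))

    def chi_xy(pcal_x, pcal_y, fs, f_line):
        """Pcal X/Y comparison factor; ideally unity."""
        return demod_mag(pcal_x, fs, f_line) / demod_mag(pcal_y, fs, f_line)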

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:49, Wednesday 24 May 2023 (69885)
h1seih16 2nd ADC continues to report timing errors

FRS27187

Unfortunately, after the replacement of the 2nd ADC in h1seih16 yesterday, the timing errors persist. The TIM error bit latches on the MEDM, so we don't know the true error rate. Corey cleared the error at 22:30 last night and it recurred at 22:50.

We believe that this is a very transient error, one error in the 64k reads per second, and only a few errors per day. We also think the error slows the readout of the second ADC rather than preventing it, meaning that the ISI and HPI models are not impacted by this.

Next steps:

In the meantime, to get better statistics on the current error rate, I've written a script to DIAG_RESET h1iopseih16 when the ADC1_TIM error is raised.
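A sketch of what such a watchdog could look like (pyepics; the EPICS channel names and bit mask below are placeholders, not the real h1iopseih16 values):

    # Poll the latched timing-error bit and issue a DIAG_RESET so each new
    # error is counted rather than hidden behind the latch.
    import time
    from epics import caget, caput

    STATE_WORD = 'H1:FEC-0_STATE_WORD'  # placeholder: real FEC number differs
    DIAG_RESET = 'H1:FEC-0_DIAG_RESET'  # placeholder
    TIM_BIT = 1 << 2                    # placeholder mask for the ADC TIM error

    count = 0
    while True:
        if int(caget(STATE_WORD)) & TIM_BIT:
            count += 1
            print(time.ctime(), 'ADC timing error #%d -- issuing DIAG_RESET' % count)
            caput(DIAG_RESET, 1)
        time.sleep(1)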


H1 CDS (CDS, SEI)
erik.vonreis@LIGO.ORG - posted 09:46, Wednesday 24 May 2023 (69883)
Intermittent timing error on seih16 does not impact user models

The intermittent timing error on H1SEIH16 ADC2 is a delay of microseconds.  The real-time system is able to recover from delays of that length without affecting control loops or data.

The problem can be fixed when opportunity arises and doesn't need to be fixed immediately.

LHO General
thomas.shaffer@LIGO.ORG - posted 08:23, Wednesday 24 May 2023 (69882)
Ops Day Shift Start

TITLE: 05/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY: Walked in with Ryan trying to get DRMI to lock. It looked like something in the SRC was off, so we reran the SRC alignment part of initial alignment, but it wouldn't lock. I moved SRM by 133 urad in pitch and ~75 urad in yaw, and then it locked and finished offloading. DRMI then locked without issue and we are moving forward with locking.


H1 General
ryan.crouch@LIGO.ORG - posted 08:04, Wednesday 24 May 2023 - last comment - 17:52, Wednesday 31 May 2023(69871)
OPS Wednesday owl shift summary

TITLE: 05/24 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY:

Lock#1

Couldn't lock PRMI, lockloss

Lock#2

Went right into an initial alignment; OM1 & OM2 were saturated during PRC in initial alignment

DRMI's lock didn't seem great on the buildups, but the spot looked OK

OMC locked first try on its own

NLN @ 08:44. There were 30 ASC SDF diffs, but they were all from the camera servos; I waited for ADS to converge so the camera servos could turn on (~16 min), which cleared these diffs

Observing mode @ 09:02 while we thermalize

Out of observing at 11:05 UTC for a new FF filter test/measurement and then a calibration suite; the new FF filter was applied at 11:07 UTC

I used Corey's template (/ligo/home/corey.gray/Templates/dtt/DARM_05232023.xml), for which I had to enable "read data from tape" for it to run. The measurement started at 11:10 UTC and finished at 11:12 UTC. My DTT session then immediately glitched and crashed before I could save it, great... I restarted the measurement at 11:15 UTC; it finished at 11:16 UTC. It wouldn't let me save (error: Unable to open output file), but I added it as ref 27 on the previously mentioned xml file in Corey's directory, though this might not have saved because of that issue. I grabbed a screenshot of it.

I switched back to the old filters to take the calibration suite at 11:21 UTC. I wasn't sure whether we wanted the new filters on or off for this; apologies if we did want them on.

Lockloss @ 11:25, possibly from PI29, though on NUC25's scope guardian appeared to be successfully damping it? It was tapering down when the DCPDs saturated and we lost lock, and it also coincided with a ground motion spike from that 5.5 in NZ. I then stepped the EX ring heaters down to 1.2 using the console commands Sheila provided in her alog.

Lock#3:

The Y arm's green power was drastically lower after the lockloss and looked clipped on the camera. INCREASE_FLASHES ran twice and wasn't able to get it above 50%; I stepped in and still wasn't able to get it even to 80%. I gave guardian another shot after this, and INCREASE_FLASHES ran another two times without getting it, then I tried again to lock and was unsuccessful. Lockloss at LOCKING_ALS after some more rounds of adjusting and trying to lock.

Lock#4-11

ALS locklosses

Lock#12

Beatnotes aren't great (-19 & -20), so I'm starting another initial alignment. Lots of SRM saturations during SRC align; trending SRM's OSEMs, there doesn't appear to be any unusual motion in the past 20 hours.

After a suggestion from Betsy and Jenne, I checked PR3 and it seems to have drifted a bit; I moved PR3 by -0.8 microradians in yaw, which increased the COMM beatnote. I was able to lock ALS after this but got no flashes on PRMI, so I started another initial alignment, but it didn't do the SRC align correctly again. TJ is going back to try and fix this.


Handing off to TJ

LOG:


Start Time | System | Name | Location | Lazer_Haz | Task | Time End
14:23 | FAC | Betsy | FCES | N | Closeout checks | 14:38
14:56 | EE | Ken | Carpenter shop | N | | 15:56
Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 10:34, Wednesday 24 May 2023 (69884)

I've attached a screenshot of the DARM spectrum and its coherences with MICH and SRCL during the feedforward test. This was done 2.5 hours after power-up; the second attached screenshot shows where we were on the thermalization transient.

It seems that the MICH FF is worse, while SRCL is better, similar to the test done at the start of a lock here: 69813

If anyone wants to use this template for a future feedforward check, they can find it at /ligo/home/sheila.dwyer/LSC/DARM_FOM_LSC_FF_check.xml

Images attached to this comment
victoriaa.xu@LIGO.ORG - 18:58, Thursday 25 May 2023 (69915)ISC, TCS

I think this recent attempt at 80 kHz PI damping, which was our first try with this Monday's guardian changes 69800, might have been somewhat successful?

From the screenshot, the DTT shows that when 80 kHz PI damping started, the HOMs were in the same place that has recently caused locklosses (compare the pink/blue vs. black traces). And we see its aliased-down 14.76 kHz peak, which visibly shifts down by a few Hz over several averages. Maybe this is a result of our PI damping; we've seen the mode move around before from driving it (68165). And playing the DTT forward in time, you can see the PI doesn't run away like it normally does. It seems likely to me that the 80 kHz mode didn't cause this lockloss.

From the ndscopes in the screenshot: the first scope shows a recent 80 kHz PI lockloss from Monday, where after the guardian starts damping, the mode's RMS grows by ~1e4 in 5 minutes. Then on Wednesday, from when the guardian first starts damping, the mode only grew by ~100x in ~8 minutes! Between Monday and Wednesday, we started damping with a stronger PI ESD drive in the guardian 69800 (10x DAMP_GAIN, from 5,000 --> 50,000), still driving the coils differentially (it has been like this since 69759).
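A rough back-of-the-envelope comparison of those two ring-ups (my arithmetic, treating each as exponential growth):

    # Treat each ring-up as RMS(t) = RMS(0) * exp(t / tau) and solve for tau.
    import numpy as np

    for label, growth, minutes in [('Monday', 1e4, 5), ('Wednesday', 1e2, 8)]:
        tau = minutes * 60 / np.log(growth)  # e-folding time in seconds
        print(f'{label}: tau ~ {tau:.0f} s')
    # Monday: tau ~ 33 s; Wednesday: tau ~ 104 s. The 10x stronger drive
    # roughly tripled the ring-up time constant.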

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 18:02, Friday 26 May 2023 (69951)
The ALS issues reported in this aLOG were symptomatic of the true problem: the HAM-ISIs were all drifting off in Yaw (RZ) slowly, but surely, for weeks -- see LHO:69934.
jeffrey.kissel@LIGO.ORG - 17:52, Wednesday 31 May 2023 (70060)CAL
Tagging CAL.

It's not super explicit, but this is the aLOG when the ETMX (EX) ring heater power was reduced from 1.3 W to 1.2 W on both segments.

It was later revealed that this increased the level of 1064 nm main laser power in the arm cavities, from ~435 kW thermalized to ~440 kW thermalized LHO:70042.

This may have changed the optical gain, cavity pole frequency, and the SRCL cavity detuning. The first two should be measured and corrected for by the TDCF system, but we should confirm.
We should measure more sensing functions and/or turn on the CAL_AWG_LINES low-frequency calibration lines to confirm whether anything has changed in the SRC detuning.
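For reference, the sensing-function model whose parameters those TDCFs track is usually written schematically as

    C(f) \propto \frac{\kappa_C}{1 + i f / f_{cc}} \times \frac{f^2}{f^2 + f_s^2 - i f f_s / Q_s}

where \kappa_C is the optical gain, f_{cc} the coupled-cavity pole, and f_s, Q_s the SRC detuning spring frequency and quality factor. A detuning change lives in the second factor, which only matters at low frequency, hence the low-frequency calibration lines.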
H1 INJ (DetChar, INJ)
keith.riles@LIGO.ORG - posted 07:48, Wednesday 24 May 2023 - last comment - 07:53, Wednesday 24 May 2023(69877)
Recovery of CW hardware injections
After a week of CW hardware injections, it's now feasible to recover some of the stronger signals with confidence. The bottom line is that amplitudes and phases look about right across the full band of injections (up to 2991 Hz).

Below are examples for specific "strong" injections (meaning marginally detectable with a day's integration):

Pulsar 6 - near 145 Hz
Pulsar 1 - near 849 Hz
Pulsar 4 - near 1387 Hz

For each injection, four graphs are shown:
- Cumulative F-statistic (predicted and recovered) vs day, starting May 17
- Cumulative recovered h0 amplitude with the true value as a horizontal line
- Cumulative recovered phase angle with true value
- Cumulative recovered polarization angle with true value

A full set of results can be found here.  
Notes:
1) The true signal parameters can be found here. 
2) The predicted F-statistic tends to be unduly optimistic in non-stationary bands
3) The last two signal injections are in binary sources, which this signal recovery infrastructure doesn't currently support (future upgrade planned). 
4) The phase recovery graphs show both a solid line for the true value and a dashed line which can be an attractor when the polarization angle is poorly recovered
5) A new set of injection recoveries based on data from May 24 onward will supersede these ER15 results after today
Comments related to this report
keith.riles@LIGO.ORG - 07:50, Wednesday 24 May 2023 (69878)
Pulsar 6 recoveries
Images attached to this comment
keith.riles@LIGO.ORG - 07:52, Wednesday 24 May 2023 (69879)
Pulsar 1 recoveries
Images attached to this comment
keith.riles@LIGO.ORG - 07:53, Wednesday 24 May 2023 (69880)
Pulsar 4 recoveries
Images attached to this comment
H1 General
ryan.crouch@LIGO.ORG - posted 05:37, Wednesday 24 May 2023 - last comment - 18:06, Friday 26 May 2023(69873)
OPS Wednesday OWL mid shift update pt2

We lost lock from what seems to have been PI29 at 11:25 UTC (I stepped the EX ring heaters down to 1.2 following Sheila's alog after that) and I haven't been able to lock ALS since; the green arms were a big struggle, especially the Y arm. After a bunch of rounds of adjusting, letting INCREASE_FLASHES run, and a few ALS locklosses, I started another initial alignment at 12:32 UTC.

Comments related to this report
ryan.crouch@LIGO.ORG - 06:06, Wednesday 24 May 2023 (69875)

After the IA I still can't lock ALS; it looks like the IMC keeps losing lock while ALS_COMM is in HANDOFF_PART2, which then stalls

ryan.crouch@LIGO.ORG - 07:03, Wednesday 24 May 2023 (69876)

PR3 had drifted a bit; I touched it by 0.8 microradians in yaw and ALS was then able to lock

jeffrey.kissel@LIGO.ORG - 18:06, Friday 26 May 2023 (69953)
In the end the PR3 alignment was a temporary fix for the true problem. The true problem was that the HAM-ISIs were all drifting off in Yaw (RZ) slowly, but surely, for weeks -- see LHO:69934.
H1 General
ryan.crouch@LIGO.ORG - posted 04:01, Wednesday 24 May 2023 (69872)
OPS Wednesday OWL mid shift update

After failing to find DRMI and a lockloss at FIND_IR early on, I ran an initial alignment and was then able to reacquire NLN without any interventions at 08:44 UTC

In observing at 09:02 UTC once the ADS signals had converged and the camera servos turned on. I let everything thermalize for about 2 hours, until we reached >430 kW in the arms. I'm going to take us out of observing shortly to test/measure a new FF filter for MICH and SRCL; after that I'm going to run a calibration suite and hopefully then go back into observing for the rest of the morning.

Verbal reported an EQ from NZ (5.5) whose R-waves will hit us in half an hour.

H1 General (DetChar)
betsy.weaver@LIGO.ORG - posted 15:25, Tuesday 23 May 2023 - last comment - 08:48, Tuesday 30 May 2023(69829)
VEA walk through sweeps to set conditions for O4-run state

Betsy, Jason, Sheila, Fil, Marc, Travis, Adrian

This morning, a few of us made some walk-throughs of the Corner and End station VEAs to check and turn off items utilized during non-Observing run times, per T1500386 (bold items were the ones that needed attention this time around). All of the following were checked off.

• Make sure no one is in the LVEA

• Cranes in their "parking spots" & their lights are OFF

• Monitors/work stations are turned OFF (except VAC computers) - Powered down ITMX camera setup computer

• Phones unplugged (wall-warts & RJ11 plugs) & batteries pulled from handsets (Phone locations here)

• Confirm no mechanical shorts onto HEPI.

• Cleanrooms OFF

• PSL in Science Mode - a bit of an audible hum in the LVEA after everything was turned off; it kind of seemed like it was more in the South bay, maybe fans or HVAC still need to be checked in this area

• ISC Table fans OFF

• Confirm wifi access points are unplugged (instructions)

• Electronics racks (i.e. make sure no test equipment is connected to a rack unless there is a work permit for it) - A few unconnected cables hanging in the PSL, ISC, and SQZ racks, but all determined to be not an issue (some used for temp needs). Added termination plugs to unused RF plugs in the SQZ racks, and 1 in the PSL rack. Lots of PEM BNC cables still run to various areas from the PEM racks. O-scope connected and powered on near the West bay corner for PEM coil use. Adrian/Robert confirm all PEM is in a nominal run configuration. Will spend another Tuesday with folks to finish cleaning up and stowing cables.

Temp dust mon at HAM2 unplugged.

End stations - HWS camera power supplies are under the IIET upgrade WIP, so they are temporarily plugged into wall power.

EX weather station equipment (and PS) in rack on VEA floor removed by Fil.

HAM6 RGA ion pump controller sitting on a cart at the end of HAM6 chamber will be left on, but RGA/fan was turned off.

• Forklift NOT connected to charger

• Unplug unused power supplies/extension cords - Unplugged some

• Lights OFF (for end stations check lights via webcams)

• Unplug power supplies for Valcom Paging System and 48V DC H1PSL Phone in Communications Room 163. Also, there is a mouse in this room 163. The animal kind.

• ALOG the LVEA has been swept.

Comments related to this report
betsy.weaver@LIGO.ORG - 15:30, Tuesday 23 May 2023 (69854)DetChar

LVEA not as silent as we remember -

After all of the sweeping to unplug items, etc., the LVEA was not as quiet as many of us recall during O3. There is a quietish high-pitched hum (like a fan) somewhere, but after Jason, TJ, Gerardo, and I listened for a bit, we couldn't specifically tell where it was coming from. It isn't a fan from an ISC table, nor is it the equipment in the PSL area emergency egress closet. Maaaybe it's the SQZ racks between HAM4 and HAM5, but you can also hear it when walking from there to the PSL; the SQZ racks are all new this time around, however. Or, Gerardo suggests checking whether it is the dust monitor pump in the mech room, which is a bit loud. There is a temp power supply under the HAM4 HWS table, but it has a slightly different, quieter hum.

Richard also reminded us that the LVEA VAC rack along the Y-manifold area now has the back door removed and may be noisier than before.  Will look into this next Tues.

adrian.helmling-cornell@LIGO.ORG - 15:57, Tuesday 23 May 2023 (69858)

We never got a picture of the gustmeter instrumentation when I set it up, and it came up in the pre-O4 sweep. We are leaving the EY gustmeter setup in place, plugged in and taking data near the emergency door in the EY VEA. A picture of the current setup is attached. The gustmeter on ADC channel 12 has failed at some point.

Images attached to this comment
betsy.weaver@LIGO.ORG - 08:06, Wednesday 24 May 2023 (69881)

The FCES was swept this morning before the run start. All is well there, with the above items checked.

betsy.weaver@LIGO.ORG - 08:48, Tuesday 30 May 2023 (69999)

A noise source was identified in alog 69927, namely a loud dust mon pump.
