H1 SEI
jim.warner@LIGO.ORG - posted 15:32, Wednesday 23 August 2023 - last comment - 11:58, Thursday 24 August 2023(72393)
Tried LLO HAM1 Z ff, works well, will leave it on overnight

I stole the filter that Huyen used for her HAM1 feedforward test and tried it on the LHO HAM1, with a gain tweak of 17% (i.e. I multiplied by a gain of 1.17 vs LLO) to match my measurements, and it seems to work well. My own hand fits had been plagued by excess low-frequency noise or gain peaking where I tried to roll my filters off. First plot shows the on/off ASDs for the HAM2 STS and the HAM1 Z HEPI L4Cs. I don't know if I quite believe the improvement below 1 Hz is due to the feedforward, but Huyen said she got improvement over roughly 1-70 Hz, and that seems to be the case here as well. I'm leaving this filter on overnight to get good low-frequency data.
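A minimal sketch of how the on/off ASD comparison can be reproduced offline; scipy stands in for whatever DTT template was actually used, and the arrays below are synthetic placeholders for the real L4C data:

import numpy as np
from scipy.signal import welch

fs = 256.0  # Hz; assumed sample rate for the downsampled L4C data

def asd(x, fs, nperseg=4096):
    """One-sided amplitude spectral density via Welch's method."""
    freq, pxx = welch(x, fs=fs, nperseg=nperseg)
    return freq, np.sqrt(pxx)

# Synthetic stand-ins for feedforward-off/on stretches of L4C data:
rng = np.random.default_rng(0)
l4c_off = rng.normal(size=int(600 * fs))
l4c_on = 0.2 * rng.normal(size=int(600 * fs))
freq, a_off = asd(l4c_off, fs)
_, a_on = asd(l4c_on, fs)
suppression = a_off / a_on  # > 1 where the feedforward helps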

Second image compares one of my filters (red) with the LLO filter (green). One thing I still don't understand is why my filters have often caused broad low-frequency noise while the LLO filter doesn't seem to. We don't currently have a noise model for HEPI that allows modeling the feedforward performance; that would help a lot.

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 11:58, Thursday 24 August 2023 (72406)

Attaching long spectra comparing HAM1 with the LLO feedforward. It still looks pretty good; we should run with this. Live traces are with the feedforward on, refs are with it off, so the improvement in the HAM1 HEPI L4Cs is brown to bright green: something like a factor of 20 improvement at 15 Hz, which is the frequency where HAM1 briefly catches up to the performance of HAM2 and HAM4 with this feedforward running.

Second plot compares CHARD pitch ASDs during these same times. The improvement here is less dramatic, but there is still a factor of almost 2 around 8-9 Hz. I suspect the low-frequency (0.1 Hz and below) differences are due to wind. I looked at CAL DELTAL, but it seems that squeezing was off or something during my reference time; there's a lot of extra signal above 10 Hz.

Images attached to this comment
H1 CAL
ryan.crouch@LIGO.ORG - posted 15:11, Wednesday 23 August 2023 (72392)
CAL BB and Simulines measurement

I ran a calibration measurement at the behest of Louis this afternoon, starting with the broadband:

/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20230823T213349Z.xml

Then the simulines:

GPS start: 1376862014.581911

GPS end: 1376863342.763444

2023-08-23 22:02:04,350 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20230823T213958Z.hdf5
2023-08-23 22:02:04,374 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20230823T213958Z.hdf5
2023-08-23 22:02:04,387 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20230823T213958Z.hdf5
2023-08-23 22:02:04,402 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20230823T213958Z.hdf5
2023-08-23 22:02:04,417 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20230823T213958Z.hdf5
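As a sanity check on the span, the GPS stamps above bracket roughly 22 minutes of sweeps; a minimal sketch with gwpy, assuming it's available:

from gwpy.time import tconvert

start, end = 1376862014.581911, 1376863342.763444
print(end - start)           # ~1328 s, i.e. about 22 minutes of sweeps
print(tconvert(int(start)))  # 2023-08-23 21:39:56 UTC, consistent with the 213958Z filenames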


Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:29, Wednesday 23 August 2023 (72389)
Wed CP1 Fill

Wed Aug 23 10:11:14 2023 INFO: Fill completed in 11min 10secs

Images attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 08:34, Wednesday 23 August 2023 - last comment - 16:36, Wednesday 23 August 2023(72387)
Lockloss at 15:33UTC

DCPD saturation then lockloss

Comments related to this report
ryan.crouch@LIGO.ORG - 12:29, Wednesday 23 August 2023 (72391)

A few signals saw a kick less than a second before the lockloss, and it looks like CSOFT rang up

Images attached to this comment
oli.patane@LIGO.ORG - 16:36, Wednesday 23 August 2023 (72398)

If you get a refined time for the lockloss by looking at ASC-AS_A_DC_NSUM_OUT_DQ, you find that the power fell off the PD at 15:33:23.95 UTC (1376840021.95) (see attachment - the PD channel is bottom right), so the kicks in the ASC channels happened right after the lockloss. That said, the jumps in the ASC channels right after the lockloss all look fairly small, as opposed to the giant jumps we usually see.
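A hedged sketch of the refinement described above, using gwpy; it assumes NDS2 access, and the 10% threshold is an illustrative choice rather than anything from this alog:

import numpy as np
from gwpy.timeseries import TimeSeries

t0 = 1376840021.95  # refined lockloss GPS time quoted above
data = TimeSeries.get('H1:ASC-AS_A_DC_NSUM_OUT_DQ', t0 - 5, t0 + 5)

# First sample where AS port power drops below 10% of its pre-lockloss level:
baseline = np.median(data.crop(t0 - 5, t0 - 1).value)
drop = np.argmax(data.value < 0.1 * baseline)
print(data.times[drop])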

Images attached to this comment
H1 General
ryan.crouch@LIGO.ORG - posted 08:03, Wednesday 23 August 2023 (72386)
OPS Wednesday day shift start

TITLE: 08/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT
    Wind: 3mph Gusts, 2mph 5min avg
    Primary useism: 0.07 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY:

H1 General
oli.patane@LIGO.ORG - posted 00:06, Wednesday 23 August 2023 (72385)
Ops EVE Shift End

TITLE: 08/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Uneventful and quiet night. Detector in Observing and Locked for 9hrs 40mins. Dust is finally settling down in the PSL, and our range is creeping back up as compared to the start of my shift.

23:00UTC Start of shift - detector in Observing and Locked for 1.5 hours

23:03 Superevent S230822bm


LOG:

Start Time System Name Location Laser_Haz Task Time End
23:23 CDS Jonathon, Ajay   n Turn on DMT 23:35
H1 SUS
oli.patane@LIGO.ORG - posted 21:05, Tuesday 22 August 2023 (72383)
In-Lock SUS Charge Measurements - FAMIS25080

Previously done in 72264

Closes FAMIS#25080

No In-Lock SUS Charge measurements were made this morning because the SUS_CHARGE Guardian had been in the DOWN state since last Tuesday, 8/15 15:29 UTC. TJ had fixed some errors in the SUS_CHARGE code (72219) and had placed it in DOWN for Tuesday maintenance, but it was left there afterwards.

At 3:56UTC I put the Guardian back into the correct WAITING state by selecting INJECTIONS_COMPLETE, as done in 71067.

H1 General (PEM)
oli.patane@LIGO.ORG - posted 20:32, Tuesday 22 August 2023 (72382)
Ops EVE MidShift Report

We are still in Observing and have been Locked for 5hrs 40mins now.

All the dust monitors that have had high dust counts since Tuesday maintenance are coming down, with the exception of both the >300um and >500um counts in the PSL laser room; the >300um count is at its second highest in roughly 7 years (the highest was two weeks ago). This rise in dust looks like it could be related to the wind at the corner station, so I'll keep an eye on it and hopefully it'll come down as the wind dies down.

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 16:00, Tuesday 22 August 2023 (72369)
OPS Tuesday day shift summary

TITLE: 08/22 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

Lock#1:

The Y arm locked at a low power (~75%, lower than the X arm) and was oscillating, with no flashes at DRMI or PRMI, followed by a CHECK_MICH lockloss. There were a few more locklosses at FIND_IR and LOCKING_ALS, so I did an initial alignment. It went well until we got to SRC, where it tried for a while and eventually gave me a "find by hand" message; I tapped SRM in pitch by maybe 10 microradians to maximize the signal, which was enough to push it onwards and finish the initial alignment.

The Y arm looked better after the initial alignment but was still oscillating. During these troubles the Norco liquid nitrogen truck was filling CP1, but we weren't seeing any extra motion on the corner station accelerometers that we looked at.

Reached NLN at 21:25 UTC. I had some SUS SDF diffs pop up; they were all TRAMPs, so I accepted them on SUSTMSX & SUSETMX. The 102 Hz peak was noticeable at the start of the lock.

In Observing at 21:45 UTC; our range is lower this lock.

LOG:                                                                            

Start Time System Name Location Laser_Haz Task Time End
14:57 PCAL Tony, Rick EndX Y PCAL measurement 18:56
14:59 FAC Chris Mech room N Fan room1 15:19
15:00 FAC Karen EndY N Tech clean 16:23
15:01 FAC Cindy FCES N Tech clean 16:20
15:04 EE Ken MidX N Light fixtures 21:54
15:17 FAC Mitch West bay N Parts dropoff 19:55
15:29 FAC Chris Bake lab N Check out fume hood for removal 15:44
15:37 DAQ Dave remote n restart nds1 19:24
15:49 FAC Chris MidX N Bring Ken a scissor lift 16:43
15:49 PSL Jason PSL Anteroom LOCAL Inventory 17:48
15:53 SQZ Vicky Remote N SQZ maintenance 17:50
16:09 EE Fil LVEA N Wiring 17:24
16:28 VAC Janos, Travis EndY, LVEA N Measurements 18:31
16:36 EE Marc CER N Check out power supplies with lasers 16:51
16:38 FAC Cindy LVEA N Tech clean 18:41
16:39 SEI Jim CR N HAM8 transfer functions, HAM7 18:23
16:55 FAC Mitch, Ibrahim Staging, Enclosure N Move laser blinds 19:24
16:50 FAC Karen LVEA N Tech clean 18:20
16:58 FAC Chris LVEA N FAMIS checks 17:23
17:24 FAC Chris FCES N FAMIS checks 17:47
17:26 EE Fil EndY, then EndX N ESD upgrade checks 19:05
18:29 FAC Karen EndY N Drop off supplies 18:56
18:41 FAC Cindy High bay N Tech clean 18:58
21:05 VAC Janos, Travis EndX N Investigate annulus pump 21:30
21:09 PCAL Tony PCAL lab LOCAL PCAL post end station work 23:09
Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:00, Tuesday 22 August 2023 (72380)
Ops EVE Shift Start

TITLE: 08/22 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 23mph Gusts, 19mph 5min avg
    Primary useism: 0.09 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

Detector is in Observing and has been Locked for an hour and a half. Decently windy outside.

H1 CDS
david.barker@LIGO.ORG - posted 14:49, Tuesday 22 August 2023 (72379)
CDS Maintenance Summary: Tuesday 22nd August 2023

WP11361 TW1 raw minute trend file offload

Jonathan, Dave:

Last week the files were copied from the almost full SSD on h1daqtw1 to archive spinning media on h1ldasgw1. This morning I completed the transfer by reconfiguring nds1 to use the archive location, and then deleted the files from h1daqtw1's SSD RAID, reducing its usage from 97% to 4%. Jonathan made the nds1 configuration change permanent in puppet.

cdslogin reboot

Jonathan, Dave:

Jonathan rebooted cdslogin at 13:48. The CDS alarm and alert systems came back online with no problems.

H1 CDS (PEM)
filiberto.clara@LIGO.ORG - posted 13:32, Tuesday 22 August 2023 - last comment - 18:24, Tuesday 22 August 2023(72377)
PEM ESD VMON Cables EY/EX

WP 11383
alog 66730

Cable clean-up of the VMON channels for monitoring the ESD 18V and 48V power lines. Signals tied to the power lines through BNC bulkheads on the junction box (top of rack). Voltage divider boxes are installed in series. Signal routed through PEM test panel to AA chassis. See attached pictures. Will work with PEM group to see how we want to isolate the divider boxes and route cables.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 18:24, Tuesday 22 August 2023 (72381)CDS, PEM, SUS
Got a debrief from Fil on this -- his sentences can be interpreted as a bulleted list describing the *existing* system, not anything that he installed today. The only thing he did today was install a *new* BNC cable run from the VEAs' SUS-R1 to the CER TCS-C1 racks, without connecting it to anything; both ends are terminated with 50 Ohm terminators.

I can expand on his words a bit to give the reader a better feel for the existing system:

    "Cable clean-up of the VMON channels for monitoring the ESD 18V and 48V power lines."
He's investigated moving forward with clean-up of the voltage monitor channels that the PEM team uses to monitor the dedicated ESD low-voltage low-noise driver's +/-18V and +/-48V DC power input (see why under the last sentence below). 

    "Signals tied to the power lines through BNC bulkheads on the junction box (top of rack).
Signal routed through PEM test panel to AA chassis."
These voltage monitors are a BNC pick off from the power reduction "junction box" on the top of the end-station VEA SUS-R1 (EX/EY) field racks.
The LVLN ESD driver's power is *not* supplied from the "standard" D1002189 power strips that power the rest of the chassis in SUS-R1.
    - The dedicated / segregated power supplies live in the ante-room areas of the end-stations, in either DC power racks VDC-(X or Y)(C1 or C2). I don't have info on exactly where these live, so I can't be more specific.

The BNC monitor cables are sent to the PEM ADC via an overly complicated path:
    (a) From the junction box to the 16ch BNC bulkhead PEM test patch panel (D1300779), in U41 of SUS-R1 (EX or EY)
    (b) That SUS-R1 patch panel is connected via a long BNC cable run to an identical patch panel at U33 of the end-station CER TCS-C1 (XC1 or YC1) racks.
    (c) Then more BNCs connect the TCS-(XC1/YC1) U33 patch panel to the 32ch 10x gain PEM AA chassis, D1001421, and on into the PEM/TCS IO chassis.

Of perhaps interesting note, it's unclear and inconsistent what the voltage monitor signals are actually monitoring. At both end stations, there are only 2 monitor channels. Fil *thinks* that these are connected to only the positive legs of each voltage. At EX, it's clear from the junction box spigot labeling that they're connected to +18V and +48V. However, at EY, the spigots are labeled +18V and -18V. This deserves clarification.

    "Voltage divider boxes are installed in series."
Out of the field of view of the pictures, between the junction box and the BNC patch panel in SUS-R1, (a), there are metal project (aka Pomona) boxes wrapped in Ziploc baggies. These project boxes contain a voltage divider to scale the voltage down to something suitable for ingestion into one of our ADCs.
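For scale, here is a minimal sketch of the divider arithmetic. The actual resistor values in these boxes aren't documented in this entry, so the 40k/10k pair below is purely illustrative, as is the assumed +/-10V ADC input range:

def divider_vout(vin, r_top, r_bottom):
    """Output voltage of an unloaded resistive divider."""
    return vin * r_bottom / (r_top + r_bottom)

# A hypothetical 40k/10k divider scales by 1/5:
print(divider_vout(48.0, 40e3, 10e3))  # 9.6 V -> fits an assumed +/-10V ADC input
print(divider_vout(18.0, 40e3, 10e3))  # 3.6 V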

    "Will work with PEM group to see how we want to isolate the divider boxes and route cables.
We can do better than metal project boxes in ziploc baggies to electrically isolate them from where they rest on the rack or each other. A simple upgrade would be to just use plastic project boxes.

The idea on the table with the new cable run that Fil installed today would be to go directly from the project boxes into the AA chassis, rather than connecting through the patch panel. This has the advantage of re-freeing up the patch panel for temporary tests, and making a rather important voltage monitor more permanent.

Attached is Fil's whiteboard sketch of the system, highlighting in blue the new long BNC cable run he pulled today between VEA SUS-R1 and CER TCS-C1.

Using Omnigraffle, the pictures Fil attaches in the main aLOG, and my knowledge of the system, I enhanced his whiteboard sketch into the attached cartoon.
Images attached to this comment
Non-image files attached to this comment
H1 OpsInfo
thomas.shaffer@LIGO.ORG - posted 13:04, Tuesday 22 August 2023 (72376)
Added fault handling to ALS increase flashes state

While Ryan and I were testing out some ALS crystal frequency code, we noticed that if an ALS arm node goes into the Increase_Flashes state, it won't leave even if there are faults. I think I remember not putting in our usual fault checking decorators here because we want to make sure that we restore the offsets before leaving the state, and not just jump straight to the Fault state.

I ended up adding the general fault decorator to the main method, since this runs before we set any of the offsets. I then added a conditional in run to look for faults, then notify and wait. I chose to wait rather than restore and jump to FAULT because the Beckhoff automation will often fix itself after some time, and we don't want to be jumping in and out of this state. If we never get out of this state, IFO_NOTIFY will alert an operator and they can intervene. This has been loaded into both ALS nodes.
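For readers unfamiliar with the guardian pattern being described, a schematic sketch is below; the helper names (fault_present, the notify stub) are stand-ins, not the real ALS library code:

def fault_present():
    """Stand-in for the ALS/Beckhoff fault check in the real library."""
    return False

def notify(msg):
    """Stand-in for guardian's operator notification."""
    print(msg)

def check_fault(func):
    """Decorator on main(): bail to FAULT before any offsets are set."""
    def wrapper(self):
        if fault_present():
            return 'FAULT'
        return func(self)
    return wrapper

class INCREASE_FLASHES:
    @check_fault
    def main(self):
        pass  # restore/apply the flash offsets here

    def run(self):
        if fault_present():
            # Hold and notify rather than jumping to FAULT: Beckhoff often
            # recovers on its own, and IFO_NOTIFY escalates if we stall here.
            notify('ALS fault; waiting for it to clear')
            return False
        return True  # done; guardian may advance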

H1 CDS (SEI)
filiberto.clara@LIGO.ORG - posted 12:58, Tuesday 22 August 2023 (72375)
HAM7 Synchronization Cable

WP 11349

Spare port on the CPS Fanout Chassis (SUS-R3) was used to synchronize the HAM7 CPS field chassis. Field cable was terminated with 100 ohm resistors across pins 1&6, 2&7. Temporary timing cable from HAM5 field box was disconnected.

H1 General
ryan.crouch@LIGO.ORG - posted 12:12, Tuesday 22 August 2023 - last comment - 14:45, Tuesday 22 August 2023(72374)
OPS Tuesday day shift midshift update

Maintenance activities have finished and we are going to start relocking and do a quick ALS automation code test while relocking green arms.

Comments related to this report
ryan.crouch@LIGO.ORG - 14:45, Tuesday 22 August 2023 (72378)

Reacquired Observing at 21:45

H1 SUS (GRD)
thomas.shaffer@LIGO.ORG - posted 10:46, Tuesday 22 August 2023 (72373)
Added R0 alignment offset switching to SUS.py

Both Jeff and Stuart Aston separately noticed that the quad R0 alignment offsets are not engaged or disengaged via the guardian. This seems like an oversight and has created a bit of confusion (LLO FRS27877). I've updated engage_align_offsets to also switch the R0 offsets if R0 is present in the levels() list, as shown in the diff below. I tested this on a quad (ETMY) and a non-quad (PRM); both worked as expected.

def engage_align_offsets(onoff):
+    stages = [susobj.levels()[0]]
+    # Do R0 at the same time for quads
+    if 'R0' in susobj.levels():
+        stages += ['R0']
    log('Ramping ALIGNMENT offsets %s' % (onoff))
-    susobj.alignOffsetSwitchWrite(onoff, levels=[susobj.levels()[0]])
+    susobj.alignOffsetSwitchWrite(onoff, levels=stages)


I reloaded all suspension nodes and committed this change to the svn. I'll pass this on to Stuart to do at LLO when they get the chance.

H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 11:48, Monday 14 August 2023 - last comment - 10:36, Thursday 14 September 2023(72195)
Unmonitored syscssqz channels that have been taking IFO out of observing

Naoki and I unmonitored H1:SQZ-FIBR_SERVO_COMGAIN and H1:SQZ-FIBR_SERVO_FASTGAIN in the syscssqz observe.snap. They have been regularly taking us out of observing (72171) by changing value when the TTFSS isn't really unlocking; see 71652. If the TTFSS really unlocks, there will be other SDF diffs and the SQZ guardians will unlock.

We still plan to investigate this further tomorrow. We can use these channels to monitor whether it keeps happening.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 10:42, Tuesday 15 August 2023 (72227)

Daniel, Sheila

We looked at one of these incidents to see what information we could get from the Beckhoff error checking. The attached screenshot shows that when this happened on August 12th at 12:35 UTC, the Beckhoff error code for the TTFSS was 2^20; counting down the automated error screen (second attachment), the 20th error is "Beatnote out of range of frequency comparator". We looked at the beatnote error EPICS channel, which does seem to be well within the tolerances. Daniel thinks that the error is happening faster than it can be recorded by EPICS. He proposes that we go into the Beckhoff code and add a condition that the error condition has to be met for 0.1 s before throwing the error.
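As a sketch of the bookkeeping here (Python standing in for the actual TwinCAT/Beckhoff logic, with the bit-numbering convention assumed from the screen's counting):

# Decode which error bit is set in the reported code:
error_code = 2**20
bits = [b for b in range(32) if error_code & (1 << b)]
print(bits)  # [20] -> the beatnote/frequency-comparator error (assumed numbering)

# Daniel's proposed fix, sketched: only latch the error once the condition
# has held continuously for 0.1 s, so sub-EPICS-rate glitches don't trip it.
def debounce(condition_now, held_since, now, hold=0.1):
    """Return (latched, new_held_since); a toy helper, not real TwinCAT code."""
    if not condition_now:
        return False, None
    if held_since is None:
        held_since = now
    return (now - held_since) >= hold, held_since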

Images attached to this comment
camilla.compton@LIGO.ORG - 10:17, Friday 18 August 2023 (72317)

In the last 5 days these channels would have taken us out of observing 13 times if they were still monitored, plot attached. Worryingly, 9 of those were in the last 14 hours; see attached.

Maybe something has changed in SQZ to make the TTFSS more sensitive. The IFO has been locked for 35 hours, and during long locks we sometimes get close to the edges of our PZT ranges due to temperature drifts.
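A hedged sketch of one way to make that count with gwpy (assuming NDS2 access, and treating every discrete value change of the gain channel as a would-be SDF diff, which is approximate):

import numpy as np
from gwpy.timeseries import TimeSeries

# 5-day window quoted above; the channel is slow EPICS data
gain = TimeSeries.get('H1:SQZ-FIBR_SERVO_COMGAIN', '2023-08-13', '2023-08-18')
changes = np.count_nonzero(np.diff(gain.value))
print(f'{changes} value changes over the 5 days')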

Images attached to this comment
victoriaa.xu@LIGO.ORG - 12:25, Tuesday 22 August 2023 (72372)SQZ

I wonder if the TTFSS 1611 PD is saturated as power from the PSL fiber has drifted. Trending the RF MON and DC volts from the TTFSS PD, it looks like in the past 2-3 months the green beatnote's demod RF MON has increased (its RF max is 7), while the bottom gray DC volts signal from the PD has flattened out around -2.3 V. It also looks like the RF MON got noisier as the PD DC volts saturated.

This PD should see the 160 MHz beatnote between the PSL (via fiber) and SQZ laser (free space). From LHO:44546, it looks like this PD "normally" would have about 360uW on it, with 180uW from each arm. If we trust the PD calibrations, then the current PD values report ~600uW total DC power on the 1611 PD (red), with 40uW transmitted from the PSL fiber (green trend). Pick-offs for the remaining sqz laser free-space path (i.e. sqz laser seed/LO PDs) don't see power changes, so it's unlikely the saturations are coming from upstream sqz laser alignment. Not sure if there are some PD calibration issues going on here. In any case, all the fiber PDs seem to be off from their nominal values, consistent with their drifts over the past few months.

I adjusted the TTFSS waveplates on the PSL fiber path to bring the FIBR PDs closer to their nominal values, and at least so we're not saturating the 1611. The TTFSS and squeezer locks seem to have come back fine. We can see if this helps the SDF issues at all.

Images attached to this comment
camilla.compton@LIGO.ORG - 10:36, Thursday 14 September 2023 (72881)

These were re-monitored in 72679 after Daniel adjusted the SQZ Laser Diode Nominal Current, stopping this issue.
