H1 CAL
matthewrichard.todd@LIGO.ORG - posted 16:57, Friday 24 January 2025 (82453)
Measuring the DARM loop OLG using pydarm

[Matthew Louis Sheila]

This alog was motivated by trying to understand how CHETA intensity noise will affect the ESD, for which we are interested in the open loop gain of DARM (to be explained more in a future alog).

Measuring the open loop gain of DARM

A sample script can be found at the bottom, along with these notebook-style instructions.

First, activate the appropriate conda environment:

conda activate /ligo/groups/cal/conda/pydarm

Then open an ipython shell and enter the following commands:

from pydarm.cmd import Report
import numpy as np

# load the most recent calibration report
r = Report.find("last")

# create the frequency array over which you want the OLG
freqs = np.geomspace(0.1, 1e5, 7000)
olg = r.model.compute_darm_olg(freqs)
olg_gain, olg_phase = np.abs(olg), np.angle(olg)

To write the results to a file, you can use numpy:

filename = ""  # /path/of/savefile.txt
comments = ""  # make sure you put the date in and the report string
data = np.column_stack([freqs, olg_gain, olg_phase])
np.savetxt(filename, data, header=comments, delimiter=',', fmt='%.10e')
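
If you want a quick sanity check of the saved file, the snippet below reads it back and makes a Bode-style plot. This is only a sketch; it assumes matplotlib is available in the same pydarm conda environment.

import numpy as np
import matplotlib.pyplot as plt

# columns are frequency [Hz], |OLG|, phase [rad]; comma separated with a '#' header
f, mag, phase = np.loadtxt(filename, delimiter=',', comments='#', unpack=True)

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.loglog(f, mag)
ax_mag.set_ylabel('|OLG|')
ax_ph.semilogx(f, np.degrees(phase))
ax_ph.set_ylabel('phase [deg]')
ax_ph.set_xlabel('frequency [Hz]')
plt.show()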
H1 AOS (DetChar, DetChar-Request)
louis.dartez@LIGO.ORG - posted 16:50, Friday 24 January 2025 - last comment - 13:25, Monday 27 January 2025(82446)
AA filter engaged in DCPD path, and calibration updated
Today we re-engaged the 16k Digital AA Filter in the A and B DCPD paths, then re-updated the calibration on the front end and in the gstlal-calibration (GDS) pipeline before returning to Observing mode.

### IFO Changes ###

* We engaged FM10 in H1OMC-DCPD_A0 and H1OMC-DCPD_B0 (omc_dcpd_filterbanks.png). We used the command in LHO:82440 to engage the filters and step the OMC Lock demod phase (H1:OMC-LSC_PHASEROT) from 56 to -21 degrees (a 77-degree change). The 77-degree shift is necessary to compensate for the fact that the additional 16k AA filter in the DCPD path introduces a 77-degree phase shift at 4190 Hz, the frequency of the dither line to which the OMC Lock servo is locked (omc_lock_servo.png). All of these changes (the FM10 toggles and the new OMC demod phase value) have been saved in the OBSERVE and SAFE SDFs.

* It was noted in the control room that the range was quite low (153 Mpc), and we remembered that we might want to tune the squeezer again as Camilla had done yesterday (LHO:82421). We have not done this.

* Preliminary analysis of data taken with this newly installed 16k AA filter engaged suggests that the filter is helping (LHO:82420).


### Calibration Changes ###

We pushed a new calibration to the front end and the GDS pipeline based on the measurements in 20250123T211118Z. In brief, here are a few things we learned/did:

- The inverse optical gain (1/Hc) filter changes are not being exported to the front end at all. This is a bug.
- We included the following delays in the actuation path:
    uim_delay = 23.03e-6   [s]
    pum_delay = 0  [s]
    tst_delay = 20.21e-6   [s]
    
    These values are stored in the pydarm_H1.ini file (see the quick phase check after this list).

- The pyDARM parameter set also contains a value of 198.664 for tst_drive_align_gain, which is in line with CALCS (H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN) and the ETMX path in-loop (H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN).

- There is still a 5% error at 30Hz that is not fully understood yet. Broadband pcal2darm comparison plots will be posted in a comment.
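
Regarding the actuation-path delays above: as a quick back-of-the-envelope check, a pure time delay tau contributes a phase lag of 360*f*tau degrees at frequency f. A minimal sketch (assuming only that these values act as pure delays in the actuation path):

import numpy as np

delays = {'uim': 23.03e-6, 'pum': 0.0, 'tst': 20.21e-6}  # [s], from pydarm_H1.ini
freqs = np.array([10.0, 30.0, 100.0])                    # [Hz], representative band

for stage, tau in delays.items():
    for f in freqs:
        print(f"{stage} delay at {f:5.0f} Hz: {360.0 * f * tau:.3f} deg of phase lag")

At 100 Hz the TST delay amounts to roughly 0.7 degrees, i.e. sub-degree effects in the band where the residual ~5% / ~2-degree errors are being chased.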



Images attached to this report
Comments related to this report
louis.dartez@LIGO.ORG - 13:25, Monday 27 January 2025 (82489)
I'm attaching a PCALY2DARM comparison to show where the calibration is now compared against what it was before the cal-related work started. At present (dark blue) we have a 5% magnitude error near 30 Hz and roughly a 2-degree maximum error in phase. The pink trace shows a broadband comparison of PCALY to GDS-CALIB_STRAIN on Saturday, 1/25. This is roughly 24 hrs after the cal work was done, and I plotted it to show that the calibration seems to be holding steady. The bright green trace is the same measurement taken on 1/18, before the recent work to integrate the additional 16k AA filter in the DCPD path began. All in all, we've now updated the calibration to compensate for the new 16k AA filter and have left it better than we found it.

More discussion related to the cause of the large error near 30Hz is to come.
Images attached to this comment
LHO FMCS (PEM)
ryan.crouch@LIGO.ORG - posted 16:34, Friday 24 January 2025 (82452)
HVAC Fan Vibrometers Check FAMIS

Closes FAMIS26356. Last checked in alog82332.

I didn't see anything of note on either of the scopes.

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:30, Friday 24 January 2025 (82449)
Ops Day Shift Summary

TITLE: 01/25 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: A few hours this morning were spent working on calibration, then a lockloss caused another couple hours of reacquisition time this afternoon. H1 has been observing for almost 1.5 hours.

LOG:

Start Time | System | Name | Location | Laser_Haz | Task | Time End
17:16 | SAFETY | LASER HAZ (⌐■-■) | LVEA | YES | LVEA is Laser HAZARD | Ongoing
15:58 | FAC | Mitchell | LVEA | - | Checking scissor lifts | 16:19
16:19 | FAC | Kim | Opt Lab | N | Technical cleaning | 16:45
18:41 | ISC | Keita, Jennie, Mayank, Sivananda | Opt Lab | YES (local) | ISS array work | 20:24
H1 PSL
ryan.short@LIGO.ORG - posted 16:04, Friday 24 January 2025 (82451)
PSL Status Report - Weekly

FAMIS 26352

Laser Status:
    NPRO output power is 1.85W
    AMP1 output power is 70.23W
    AMP2 output power is 137.2W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 3 days, 4 hr 31 minutes
    Reflected power = 26.16W
    Transmitted power = 102.5W
    PowerSum = 128.6W

FSS:
    It has been locked for 0 days 1 hr and 44 min
    TPD[V] = 0.6629V

ISS:
    The diffracted power is around 3.6%
    Last saturation event was 0 days 3 hours and 19 minutes ago


Possible Issues:
    PMC reflected power is high
    FSS TPD is low

RefCav alignment will likely need to be fixed on-table next Tuesday (I can try touching it up with picos if there's some TOO downtime this weekend, but I don't expect to get much improvement). PMC Refl being high is nothing new.

H1 General
ryan.crouch@LIGO.ORG - posted 16:01, Friday 24 January 2025 - last comment - 16:59, Friday 24 January 2025(82450)
OPS Friday EVE shift start

TITLE: 01/25 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 16mph Gusts, 11mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 16:59, Friday 24 January 2025 (82454)SQZ

I dropped Observing from 00:49 to 00:56 to adjust the SQZer: I brought H1:SQZ-ADF_OMC_TRANS_PHASE back to -136 (alog82421), and after the servo was done I adjusted the OPO temperature. I accepted the new phase in SDF.

Images attached to this comment
H1 ISC (PEM)
jennifer.wright@LIGO.ORG - posted 15:54, Friday 24 January 2025 (82447)
Moving PR2 spot analysis

Sheila, Jennie W, Ryan S

Summary: The camera servos got turned off accidentally last time we moved PR3. Worth another try at this measurement.

This is an analysis of why we lost lock the other day while doing the PR2 spot move in lock, which involves moving the PR3 yaw alignment and pico-ing to stay on the POP and POPAIR PDs; see the attached image.

When we first started altering the yaw of PR3, at the first cursor, the circulating power in the arms started to increase, and around 17:22:02 UTC the circulating power began to go down, as did LSC-POP_A. About 30 minutes after this the circulating power began to recover as we stopped changing the PR3 position and the pico-motor position. We are not sure why this happened. After this period we started moving PR3 yaw down again, and the circulating power and POP_A power decreased, and then we lost lock.

Over the period when we were not actively changing the alignment, PR2 was still moving. So we checked the camera servos to see if they move PR2 (they don't), but we discovered that the camera servos had been switched off by the camera guardian; see the attached image.

We realised that this happened because the PR2_SPOT_MOVE guardian state that we had ISC_LOCK in has an index less than 577, which tripped the condition in the CAMERA_SERVO guardian that treats low-numbered ISC_LOCK states as unlocked.

The CAMERA_SERVO guardian went to state 500, as shown in the final row of the ndscope at the first cursor. The node then stalled there because the PR2_SPOT_MOVE state in ISC_LOCK does not contain a call to the unstall-nodes function, so instead of switching on the ADS servos and then working its way back to the CAMERA_SERVO_ON state as in its state graph, it stayed put.

We altered the CAMERA_SERVO guardian to eliminate the turning off of the camera servos when it thinks the IFO is unlocked (i.e. in a low-numbered state), since this should be handled by ISC_LOCK, which manages it.
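
For illustration only, here is a hypothetical sketch (not the actual CAMERA_SERVO code; state and channel names are illustrative) of the kind of Guardian condition involved, checking the ISC_LOCK state index against a threshold and switching the camera servos off when the IFO looks unlocked:

from guardian import GuardState

ISC_LOCK_LOCKED_THRESHOLD = 577   # PR2_SPOT_MOVE has a lower index than this

class CAMERA_SERVO_ON(GuardState):
    def run(self):
        isc_lock_index = ezca['GRD-ISC_LOCK_STATE_N']
        # Old behaviour (now removed): treat any low-numbered ISC_LOCK state
        # as "IFO unlocked" and jump to a state that switches the servos off.
        # if isc_lock_index < ISC_LOCK_LOCKED_THRESHOLD:
        #     return 'TURN_CAMERA_SERVOS_OFF'
        # New behaviour: leave unlock handling to ISC_LOCK, which manages this node.
        return True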

We still need to think about why the overall circulating power got better and then worse several times during these changes, and why precisely we lost lock.

Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 13:37, Friday 24 January 2025 - last comment - 15:16, Friday 24 January 2025(82445)
Lockloss @ 20:42 UTC

Lockloss @ 20:42 UTC - link to lockloss tool

No obvious cause, but the wind had recently picked up and looks like there was an ETMX glitch immediately before the lockloss.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 15:16, Friday 24 January 2025 (82448)

H1 back to observing at 23:10 UTC. Longer acquisition due to lots of low-state locklosses with seemingly no explanation (e.g. ALS dropping out unexpectedly for both arms). Eventually the issues resolved themselves and relocking proceeded automatically.

H1 General
ryan.short@LIGO.ORG - posted 12:20, Friday 24 January 2025 (82443)
H1 Out of Observing for Calibration Fixes

H1 dropped observing from 17:16 to 20:14 UTC for fixes to the calibration. Log entry to come from Louis/Evan with specifically what was done.

H1 DetChar
dishari.malakar@LIGO.ORG - posted 10:43, Friday 24 January 2025 (82442)
DQ shift report for 30 Dec 2024 - 5 Jan 2025

Summary of the report:

 

Full report: link.

LHO VE
david.barker@LIGO.ORG - posted 10:27, Friday 24 January 2025 (82441)
Fri CP1 Fill

Fri Jan 24 10:14:33 2025 INFO: Fill completed in 14min 29secs

Jordan confirmed a good fill curbside. TCmins [-91C, -90C] OAT (4C, 39F), deltaTempTime 10:14:22

Images attached to this report
H1 CAL
matthewrichard.todd@LIGO.ORG - posted 09:48, Friday 24 January 2025 (82440)
Calibration back to using new anti-aliasing filters in DCPD channels
[Evan Louis Matthew]

This morning, after Evan and Louis fixed the anti-aliasing filters, we used our 'lockloss-less' recipe to re-engage the filters while smoothly changing the demod phase. This required a slight adjustment to the arguments in the command from LHO:82430 (listed below).

The new filters seem to be working as expected and are not causing yesterday's calibration error. This transition was done without lockloss.

This is a placeholder alog to state that the calibration and the IFO have been reverted to their nominal state from this morning. I'll follow up and edit this entry with additional details later tonight or tomorrow morning.

To bring the filters back on and step the phase rotator back to the modified angle over the same ramp time as the filters' ramps:
cdsutils switch H1:OMC-DCPD_A0 FM10 ON; cdsutils switch H1:OMC-DCPD_B0 FM10 ON; cdsutils step -s 0.065 H1:OMC-LSC_PHASEROT -- -1.0,77
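For clarity, my reading of the step arguments (worth double-checking against the cdsutils help): -s 0.065 sets the time between steps, and -1.0,77 requests 77 steps of -1.0 degree each, so the phase rotator moves the full 77 degrees from 56 to -21 over roughly 77 x 0.065 ≈ 5 s, i.e. on the same timescale as the filter ramps.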
LHO General (Lockloss)
ryan.short@LIGO.ORG - posted 07:41, Friday 24 January 2025 - last comment - 08:49, Friday 24 January 2025(82438)
Ops Day Shift Start

TITLE: 01/24 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY: H1 lost lock just a half hour ago at 15:09 with no obvious cause (link to lockloss tool) and is relocking; it has just reached DRMI.

Comments related to this report
ryan.short@LIGO.ORG - 08:49, Friday 24 January 2025 (82439)ISC

H1 back to observing at 16:45 UTC. Had to help PRM during DRMI locking, but otherwise this was an automatic relock.

I updated OMC-LSC_PHASEROT from -21 to 56, as TJ pointed out in his alog from last night, and accepted it in both SAFE and OBSERVE SDF tables (screenshots attached). Since the OMC was already locked by the time I did this, I just used the command from alog82430, which worked and did not cause a lockloss. The incorrect phase value is possibly why the calibration overnight looked strange.

Images attached to this comment
H1 General
thomas.shaffer@LIGO.ORG - posted 03:17, Friday 24 January 2025 - last comment - 04:06, Friday 24 January 2025(82436)
Ops Owl Update

During relocking, H1 couldn't get DRMI or PRMI to lock, so it went to the Check_Mich_Fringes state, but we lost lock a few seconds into it. The LASER_PWR node was still moving up to the 10W that we use for Check_Mich at the time that the 2W request came in, but this was ignored while it was moving; I'm not entirely sure why. So, as with many lock losses, our IMC lost lock, and since we were at 10W it couldn't relock. The IMC eventually relocked 2.5 hours later, having been down long enough to trigger a call to me. By the time I logged in, it had already started an initial alignment at 10W. I requested 2W for the PRC alignment step and then it finished off initial alignment on its own.

All of the states in LASER_PWR that do the adjusting are "protected" guardian states, meaning that they have to return True before the node is allowed to move on. I can't remember exactly why, but I think this was because it would confuse the rotation stage if you made a power request while another one was in progress. I would have expected that once this state was done it would have moved on to the 2W adjusting state, but it looks like it ignored that request entirely. I'll add this to my todo list to fix.
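
As an aside, here is a hypothetical sketch of what I mean by a "protected" adjusting state (names and channels are illustrative, not the real LASER_PWR code); my understanding is that this is done by making the state non-requestable and non-redirectable, so Guardian won't act on a new power request until run() returns True:

from guardian import GuardState

class POWER_10W(GuardState):
    request = False    # can't be requested directly
    redirect = False   # can't be interrupted until run() returns True

    def main(self):
        ezca['PSL-POWER_REQUEST'] = 10.0   # illustrative channel name

    def run(self):
        # hold here until the rotation stage has finished moving
        return abs(ezca['PSL-POWER_ACTUAL'] - 10.0) < 0.1   # illustrative channel name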

Comments related to this report
thomas.shaffer@LIGO.ORG - 04:06, Friday 24 January 2025 (82437)CAL

I had to accept an SDF diff for the OMC-LSC_PHASEROT of -21. I'm actually thinking that this is the incorrect value and it should be the previous 56 value, but I don't want to risk losing lock.

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 22:08, Thursday 23 January 2025 (82435)
Thursday Eve shift End

TITLE: 01/24 Eve Shift: 21:00-0600 UTC (1300-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
After we got to Observing and everyone went home, nothing really happened after my last alog. It's been quiet ever since everyone left.

Since the lock clock was interrupted, as mentioned in my last alog, I should remind everyone:
This lock started at 19:41:24 UTC
Thus H1 has been locked for 10+ Hours.

oh also:
CALCS has some pending configuration changes according to the CDS Overview screen.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 18:41, Thursday 23 January 2025 (82434)
Thursday mid shift report

TITLE: 01/24 Eve Shift: 21:00-0600 UTC (1300-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.19 μm/s
QUICK SUMMARY:
H1 has been locked for 7 hours as of  02:41:24 UTC
H1 is currently Observing.
 

End of Commissioning and CtrlZ:
Calibration Team has been working all day on the OMC phase and gains.
Camilla touched up the SQZr settings and temp.
Robert covered the Viewports & we are almost ready to get back to Observing.

The calibration team now has to revert all of their changes.

GDS has been restarted a few times.

1:42 UTC DCPD AA filters turned off and phase changed. No lockloss!!! YAY!!
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82433

After some final tweaks and some SDF accepts from Louis's changes, we got back to OBSERVING without a lockloss at 2:09:32 UTC!

 

This lock started at 19:41:24 UTC.
At 00:33 UTC I noticed that the Lock Clock FOM had crashed, and thus I relaunched the lockclock.
When it returned, it only had 30 minutes on it, even though we had not lost lock and had been locked for many hours.
All of the lock clocks read the same 30 minutes, on the Calibration_Monitor and CDS_Overview screens.
According to this FOM Screenshot of Nuc28 we had been locked for 4 hours and 19 minutes at 00:01 UTC:
https://lhocds.ligo-wa.caltech.edu/cr_screens/archive/png/2025/01/23/16/nuc28-1.png
The lock clock crashed again; it may have coincided with a restart of GDS? Sorry, Louis.
After talking with Dave, this turned out to be a hand-edited puppet file issue.
When Dave started working on https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82429, puppet started to overwrite the file that he had changed.

H1 ISC (CAL, ISC)
jeffrey.kissel@LIGO.ORG - posted 12:09, Tuesday 21 January 2025 - last comment - 13:02, Friday 24 January 2025(82375)
Digital Anti-Aliasing Options for 524kHz OMC DCPD Path
J. Kissel, E. Goetz, L. Dartez

As mentioned briefly in LHO:82329 -- after discovering that there is a significant amount of aliasing from the 524 kHz version of the OMC DCPD signals when down-sampled to 16 kHz -- Louis and Evan tried a version of the (test, pick-off, A1, A2, B1, and B2) DCPD signal paths with two copies each of the existing 524 kHz to 65 kHz and 65 kHz to 16 kHz AA filters, as opposed to one. In this aLOG, I'll refer to these filters as "Dec65k" and "Dec16k," or for short in the plots attached, "65k" and "16k."

Just restating the conclusion from LHO:82329 :: Having two copies of these filters -- and thus a factor of 10x more suppression in the 8 to 32 kHz region and 100x more suppression in the 32 to 232 kHz region -- seems to dramatically reduce the amount of aliasing.

Recall these filters were designed with lots of compromises in mind -- see all the details in G2202011.

Upon discussion of applying this "why don't we just add MOAR FIRE" 2xDec65k and 2xDec16k option to the primary signal path, there were concerns about
    - DARM open loop gain phase margin, and
    - computational turn-around time for the h1iopomc0 front-end process.

I attach two plots to help facilitate that discussion,
    (1st attachment) Bode plot of various combinations of the Dec65k and Dec16k filters.
    (2nd attachment) Plot of the CPU timing meter over the weekend, during which these filters were installed and ON in the 4x test banks on the same computer.

For (1st) :: Here we show the high-frequency suppression above 1000 Hz and the phase loss around 100 Hz for several simple combinations of filtering. The weekend configuration of two copies of the 65k and 16k filters is shown in BLACK; the nominal configuration of one copy is shown in RED. In short -- all these combinations incur less than 5 deg phase loss around the DARM UGF. Louis is going to do some modeling to show the impact of these combinations on the DARM loop stability via plots of open loop gain and loop suppression. We anecdotally remember that the phase margin is "pretty tight," sub-30 [deg], but we'll wait for the plots.

For (2nd) :: With the weekend configuration of filters -- eight more filters (the copies of the 65k and 16k, copied into each of the A1, A2, B1, B2 banks) installed and running -- the extremes of CPU clock-cycle turnaround time did increase, from "never above 13 [usec]" to "occasionally hitting 14 [usec]," out of the ideal 1/2^16 = 15.26 [usec], which is rounded up on the GDSTP MEDM screen to an even 16 [usec]. This is to say that "we can probably run with 4 more filters in the A0 and B0 banks," though that may necessarily limit how much filtering can be in the A1, A2, B1, B2 banks for future testing. Also, no one has really looked at what happens to the gravitational wave channel when the timing of the CPU changes, or gets near the ideal clock-cycle time; namely the basic question: "Are there glitches in the GW data when the CPU runs longer than normal?"
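
To make the "two copies" intuition concrete, here is an illustrative scipy sketch using a generic 8th-order elliptic low-pass as a stand-in for the real Dec16k design (the actual filters live in foton; see G2202011). Cascading two copies squares the transfer function, so the stopband suppression doubles in dB and the low-frequency phase loss doubles:

import numpy as np
import scipy.signal as sig

fs = 2**16   # 65536 Hz intermediate rate (stand-in, not the real design)
# stand-in AA filter: 8th-order elliptic low-pass, 0.1 dB ripple, 60 dB stopband
sos = sig.ellip(8, 0.1, 60, 7000, btype='low', output='sos', fs=fs)

for f in (100.0, 10e3):
    _, h1 = sig.sosfreqz(sos, worN=[f], fs=fs)
    h2 = h1**2   # two copies in series
    print(f"{f:8.0f} Hz:  1x |H| = {abs(h1[0]):.3g}, phase = {np.degrees(np.angle(h1[0])):7.2f} deg;"
          f"  2x |H| = {abs(h2[0]):.3g}, phase = {np.degrees(np.angle(h2[0])):7.2f} deg")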
Images attached to this report
Comments related to this report
erik.vonreis@LIGO.ORG - 13:28, Thursday 23 January 2025 (82424)

Unless a DAC, ADC, or IPC timing error occurs, a long IOP cycle time will not affect the data. The models have some buffering, so they can even suffer an occasional long cycle time beyond the maximum without affecting data.

The h1iopomc0 average cycle time is about 8 us (see the IO Info button on the GDS TP screen), so it can probably run with a consistent max cycle time well beyond 15 us without affecting data.

jeffrey.kissel@LIGO.ORG - 13:02, Friday 24 January 2025 (82444)
Here, in the 1st attachment, is a two-week trend of the H1IOPOMC0 front-end (DCUID 179) CPU timing activity during this time period's flurry of activity installing, turning on, and using lots of different combinations of (relatively low-Q, low-order, low-SOS-count) filters. While the minute trend of the primary "CPU_METER" channel is creeping up, the "CPU_AVG" has only incremented up once, to the 8 [usec] that Erik quotes above.

FYI, these channels are displayed on MEDM in the IOP's GDS_TP screen: follow the link to "IO Info" and look at the "CPU PROCESSING TIMES" section at the top middle. See the second attachment.
Images attached to this comment