Reports until 15:40, Saturday 25 January 2025
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 15:40, Saturday 25 January 2025 - last comment - 17:23, Saturday 25 January 2025(82462)
Lockloss @ 23:11 UTC

Lockloss @ 23:11 UTC - link to lockloss tool

As usual, no obvious cause for this lockloss, but there's evidence of an ETMX glitch right before it. Ends lock stretch at almost 11 hours.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 17:23, Saturday 25 January 2025 (82466)

01:23 UTC Observing

H1 CAL
ryan.short@LIGO.ORG - posted 12:02, Saturday 25 January 2025 (82461)
Broadband and Simulines Calibration Sweeps

Following instructions from the TakingCalibrationMeasurements wiki, this morning I ran the usual broadband PCal and Simulines sweeps.

Broadband PCal: 19:30:44 to 19:35:54 UTC

Simulines: 19:36:46 to 20:00:29 UTC

File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250125T193648Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250125T193648Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250125T193648Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250125T193648Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250125T193648Z.hdf5

H1 was out of observing from 19:30 to 20:01 UTC.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:31, Saturday 25 January 2025 (82460)
Sat CP1 Fill

Sat Jan 25 10:10:10 2025 INFO: Fill completed in 10min 6secs

TCmins [-94C,-93C] OAT (1C,33F), deltaTempTime 10:10:12

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 07:34, Saturday 25 January 2025 (82459)
Ops Day Shift Start

TITLE: 01/25 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY: Two locklosses overnight, but looks like H1 was able to recover well enough each time. H1 has now been locked for just over 3 hours, and calibration sweeps are scheduled for 19:30 UTC.

H1 General (SQZ)
ryan.crouch@LIGO.ORG - posted 22:01, Friday 24 January 2025 (82457)
OPS Friday EVE shift summary

TITLE: 01/25 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Over a few hours I've seen the range slowly degrade as the SQZ angle slowly increases and REFL_RF6_ABS_OUTPUT decreases (tagging SQZ). Lockloss at the end of the shift; currently relocking at DRMI.
LOG: No log

00:56 Observing

05:15 UTC lockloss (6 hour lock)

We keep losing it at locking ALS; something makes the arm start oscillating, then the WFS start working hard to fix it and we lose lock. ALS_Y can survive it sometimes. The BS is wicked misaligned now; CHECK_MICH moved it 2 urads in pitch and 1 urad in yaw.

Images attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 21:17, Friday 24 January 2025 (82458)
05:15 UTC lockloss

05:15 UTC lockloss

H1 TCS
matthewrichard.todd@LIGO.ORG - posted 17:31, Friday 24 January 2025 - last comment - 15:06, Tuesday 04 February 2025(82456)
CHETA RIN effect on ESD

[Matthew Camilla Louis Sheila]

There is concern that the CO2 laser proposed for the CHETA design has enough intensity noise to saturate the ESD preventing lock.

By calibrating the displacement noise projected from CHETA RIN data into counts at the L3_ADC, using the DARM loop OLG and the transfer function between DeltaL_ctrl and L3_ADC, we can get a rough estimate of whether we expect the CHETA noise to saturate the ESDs. This is done by taking the RMS of the CHETA noise in cts/rtHz at the ESD and comparing it to 25% of the saturation level (2^19 counts).

Figure 1 is the loop model for mapping the displacement noise (CHETA RIN) to ESD counts, Figure 2 is a plot of the DARM OLG, and Figure 3 is a plot of the transfer function from DeltaL_ctrl to L3_ADC.
Figure 4 is the projection of CHETA RIN to ADC counts, showing that we do not expect CHETA to saturate the ESDs.

Next steps are to see if we expect CHETA noise to saturate the DCPDs at different power-up stages.
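A rough numerical sketch of the saturation check described above (the ASD here is a made-up 1/f stand-in for the calibrated CHETA projection; the 25% threshold and 2^19 count level are the ones quoted in this entry):

```python
import numpy as np

# Hypothetical noise ASD already calibrated to counts/rtHz at the L3 (ESD)
# drive; the real analysis propagates measured CHETA RIN through the DARM
# OLG and the DeltaL_ctrl -> L3_ADC transfer function.
freqs = np.linspace(10.0, 5000.0, 5000)      # Hz
asd_counts = 2e3 / freqs                     # counts/rtHz (made-up 1/f shape)

# Broadband RMS in counts: square root of the band-integrated PSD.
psd = asd_counts**2
rms_counts = np.sqrt(np.sum(psd[:-1] * np.diff(freqs)))

# Compare against 25% of the DAC saturation level (2**19 counts).
saturation = 2**19
threshold = 0.25 * saturation
print(f"RMS = {rms_counts:.0f} counts "
      f"({100 * rms_counts / saturation:.2f}% of saturation)")
print("saturation expected" if rms_counts > threshold
      else "no saturation expected")
```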

Non-image files attached to this report
Comments related to this report
matthewrichard.todd@LIGO.ORG - 15:22, Friday 31 January 2025 (82558)

[M. Todd, J. Kissel]

After measuring very low values for the coherence of the channels used to estimate the transfer function from H1:CAL-DELTAL_CTRL to H1:CAL-CS_DARM_FE_ETMX_L3_ESDOUTF_UL_OUT, it seemed better to go to the calibration model and use pydarm to do this calibration instead.

After making this change, the results are slightly more convincing, saying that we still do not expect CHETA to saturate the ESDs on the ETMX test mass; however, it is around 3% of the saturation level (2**19). The python code used for this analysis as well as the accompanying plots are listed below.

We do, however, worry about CHETA possibly saturating the coils on the L2 stage, as the noise is around 19% of the saturation level (2**19). This will require some more thought/testing. HOWEVER, this is during the Nominal Low Noise state (pydarm models NLN); during lock acquisition, with the additional lowpass filters for the actuation stages, we expect to have some more wiggle room.

Much of this code utilizes the pydarm model constructed from pydarm_report_H1.ini found at /ligo/groups/cal/H1/reports/20250123T211118Z/pydarm_H1.ini

The details about how to use the pydarm model and the transfer functions it contains can be found in Jeff's alog


Figures:

  1.  CHETA noise calibrated from m/rtHz to cts/rtHz and RMS to compare to saturation limit of ESDs (L3stage)
  2.  CHETA noise calibrated from m/rtHz to cts/rtHz and RMS to compare to saturation limit of L2 coils (L2stage)
  3.  CHETA noise calibrated from m/rtHz to cts/rtHz and RMS to compare to saturation limit of L1 coils (L1stage)
  4.  Transfer Function - deltaL_ctrl to L3_ESD_cts
  5.  Transfer Function - deltaL_ctrl to L2_coils_cts
  6.  Transfer Function - deltaL_ctrl to L1_coils_cts
  7.  Transfer Function - darm_err to darm_ctrl
  8.  Transfer Function - deltaL to darm_err
  9.  Open Loop Gain measurement of darm - pydarm

Code:

Calibrating cheta darm to esd cts

Non-image files attached to this comment
matthewrichard.todd@LIGO.ORG - 15:06, Tuesday 04 February 2025 (82633)

[Edit!]

After noticing an error in the way that the RMS was calculated, I've fixed the code and updated the plots. Fortunately, this does not affect anything upstream (transfer functions and asd calibrations, etc.) but it does inform a better estimate of whether we expect CHETA to saturate the ESDs.

Updated plots:

    1) cheta_in_esds.pdf
    2) cheta_in_coilsL2.pdf
    3) cheta_in_coilsL1.pdf

For more details/summary, refer to alog 82631
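For reference (this entry doesn't say exactly what the RMS error was): the broadband RMS of an ASD is the square root of the band-integrated PSD, and the cumulative RMS curve usually overlaid on such plots integrates from the high-frequency end down. A minimal sketch with a made-up ASD:

```python
import numpy as np

freqs = np.linspace(10.0, 1000.0, 991)   # Hz
asd = 1e-3 / freqs                       # made-up ASD in cts/rtHz

psd = asd**2
df = np.diff(freqs)

# Total RMS over the band: sqrt of the integrated PSD (not of the ASD).
rms_total = np.sqrt(np.sum(psd[:-1] * df))

# Cumulative RMS from the high-frequency end down, as usually plotted:
# cum_rms[i] is the RMS contributed by all frequencies above freqs[i].
cum_rms = np.sqrt(np.cumsum((psd[:-1] * df)[::-1])[::-1])
print(f"total RMS = {rms_total:.3e} counts")
```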

Non-image files attached to this comment
H1 DetChar (CAL, DetChar, ISC)
evan.goetz@LIGO.ORG - posted 17:19, Friday 24 January 2025 (82455)
Anti-aliasing filters doing a good job to remove high frequency artifacts
I reported in LHO aLOG 82376 that there were aliasing artifacts as well as a large number of line artifacts not predicted by the offline anti-aliasing analysis. Digging deeper into these unknown artifacts, they may instead be due to the fact that DTT normally holds data as single precision. The DCPD data has a large DC component, so if the data is processed with "Remove mean" unchecked, then there may be artifacts owing to the large dynamic range. My hunch was confirmed when I tried two things:

1. Re-checking the "Remove mean" option: many of the unknown additional line artifacts disappeared.
2. Using the diaggui_test program, which holds data as double precision:
 2a. Using "Remove mean", the unknown additional line artifacts disappeared.
 2b. Disabling "Remove mean", the unknown additional line artifacts were still gone.

Check out LHO aLOG 78559 for other experiences.
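A toy illustration of the dynamic-range effect (synthetic numbers, not DCPD data): with a large enough DC offset, single precision rounds a small spectral line away entirely, unless the mean is removed while the data are still held in double precision.

```python
import numpy as np

fs = 4096.0
t = np.arange(0, 4, 1 / fs)
dc = 1.0e6                                      # large DC component
x = dc + 1e-2 * np.sin(2 * np.pi * 100.0 * t)   # tiny 100 Hz line on top

def line_amp(d):
    """Single-sided FFT amplitude at 100 Hz."""
    spec = 2 * np.abs(np.fft.rfft(d)) / len(d)
    f = np.fft.rfftfreq(len(d), 1 / fs)
    return spec[np.argmin(np.abs(f - 100.0))]

a_double = line_amp(x)                     # double precision: line visible (~0.01)
# float32 spacing near 1e6 is ~0.06, so every sample rounds to exactly 1e6
# and the 0.01-amplitude line vanishes:
a_single = line_amp(x.astype(np.float32))
# removing the mean first keeps the values small, so float32 is then fine:
a_demeaned = line_amp((x - x.mean()).astype(np.float32))
```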

Attached is a plot comparing the 524 kHz data of H1:OMC-DCPD_A0_OUT (red) with the 16 kHz H1:OMC-DCPD_A_OUT_DQ (blue). We don't see any significant differences now that the additional anti-aliasing filters have suppressed high frequency artifacts. The second figure is the updated version of the PSD comparison from before the additional AA filtering, made in DTT using the "Remove mean" option for the 16k data. Additional artifacts consistent with the expected contribution from aliasing are visible, but the rest of the spectrum now seems more-or-less in agreement.
Images attached to this report
H1 CAL
matthewrichard.todd@LIGO.ORG - posted 16:57, Friday 24 January 2025 (82453)
Measuring the DARM loop OLG using pydarm

[Matthew Louis Sheila]

This alog was motivated by trying to understand how CHETA intensity noise will affect the ESD, where we are interested in the open loop gain of DARM (explained more in future alog).

Measuring the open loop gain of DARM

A sample script can be found at the bottom as well as these notebook style instructions.

First you will need to activate the appropriate conda environment

conda activate /ligo/groups/cal/conda/pydarm

Then enter into an ipython shell, then enter the following commands

from pydarm.cmd import Report
import numpy as np
r = Report.find("last")

# create frequency array over which you want the olg
freqs = np.geomspace( 0.1, 1e5, 7000)
olg = r.model.compute_darm_olg(freqs)
olg_gain, olg_phase = np.abs(olg), np.angle(olg)

To write to a file, you can use the numpy command

filename = ""  # /path/of/savefile.txt
comments = ""  # make sure you put the date in and the report string
data = np.column_stack([freqs, olg_gain, olg_phase])
np.savetxt(filename, data, header=comments, delimiter=',', fmt='%.10e')
H1 AOS (DetChar, DetChar-Request)
louis.dartez@LIGO.ORG - posted 16:50, Friday 24 January 2025 - last comment - 13:25, Monday 27 January 2025(82446)
AA filter engaged in DCPD path, and calibration updated
Today we re-engaged the 16k Digital AA Filter in the A and B DCPD paths then re-updated the calibration on the front end and in the gstlal-calibration (GDS) pipeline before returning to Observing mode.

### IFO Changes ###

* We engaged FM10 in H1OMC-DCPD_A0 and H1OMC-DCPD_B0 (omc_dcpd_filterbanks.png). We used the command in LHO:82440 to engage the filters and step the OMC Lock demod phase (H1:OMC-LSC_PHASEROT) from 56 to -21 degrees (a 77 degree change). The 77 degree shift compensates for the fact that the additional 16k AA filter in the DCPD path introduces a 77 degree phase shift at 4190Hz, the frequency of the dither line to which the OMC Lock servo is locked (omc_lock_servo.png). All of these changes (the FM10 toggles and the new OMC demod phase value) have been saved in the OBSERVE and SAFE SDFs.
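The required demod-phase step can be estimated directly from the AA filter's phase response at the dither frequency. A sketch with a stand-in elliptic lowpass (the real Dec filter designs are in G2202011; the 77 degrees quoted above is the measured value for the actual filter chain):

```python
import numpy as np
from scipy import signal

fs = 524288.0                 # 524 kHz IOP rate
f_dither = 4190.0             # OMC dither line frequency

# Stand-in AA filter, NOT the real design: an 8th-order elliptic lowpass.
sos = signal.ellip(8, 0.5, 80, 7000.0, btype='low', fs=fs, output='sos')

# Unwrapped phase from DC up to the dither frequency.
f = np.linspace(1.0, f_dither, 500)
_, h = signal.sosfreqz(sos, worN=f, fs=fs)
phase_deg = np.degrees(np.unwrap(np.angle(h)))[-1]
print(f"stand-in filter phase at {f_dither:.0f} Hz: {phase_deg:.1f} deg")
# H1:OMC-LSC_PHASEROT would be stepped by this amount to compensate.
```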

* It was noted in the control room that the range was quite low (153Mpc) and we remembered that we might want to tune the squeezer again as Camilla had done yesterday (LHO:82421). We have not done this.

* Preliminary analysis of data taken with this newly installed 16k AA filter engaged suggests that the filter is helping (LHO:82420).


### Calibration Changes ###

We pushed a new calibration to the front end and the GDS pipeline based on the measurements in 20250123T211118Z. In brief, here are a few things we learned/did:

- The inverse optical gain (1/Hc) filter changes are not being exported to the front end at all. This is a bug.
- We included the following delays in the actuation path:
    uim_delay = 23.03e-6   [s]
    pum_delay = 0  [s]
    tst_delay = 20.21e-6   [s]
    
    These values are stored in the pydarm_H1.ini file.

- The pyDARM parameter set also contains a value of 198.664 for tst_drive_align_gain, which is in line with CALCS (H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN) and the ETMX path in-loop (H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN).

- There is still a 5% error at 30Hz that is not fully understood yet. Broadband pcal2darm comparison plots will be posted in a comment.



Images attached to this report
Comments related to this report
louis.dartez@LIGO.ORG - 13:25, Monday 27 January 2025 (82489)
I'm attaching a PCALY2DARM comparison to show where the calibration is now compared against what it was before the cal-related work started. At present (dark blue) we have a 5% error in magnitude near 30Hz and roughly a 2 degree maximum error in phase. The pink trace shows a broadband of PCALY to GDS-CALIB_STRAIN on Saturday, 1/25. This is roughly 24hrs after the cal work was done; I plotted it to show that the calibration seems to be holding steady. The bright green trace is the same measurement taken on 1/18, before the recent work to integrate the additional 16k AA filter in the DCPD path began. All in all, we've now updated the calibration to compensate for the new 16k AA filter and have left the calibration better than we found it.

More discussion related to the cause of the large error near 30Hz is to come.
Images attached to this comment
LHO FMCS (PEM)
ryan.crouch@LIGO.ORG - posted 16:34, Friday 24 January 2025 (82452)
HVAC Fan Vibrometers Check FAMIS

Closes FAMIS26356 Last checked in alog82332

I didn't see anything of note on either of the scopes.

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:30, Friday 24 January 2025 (82449)
Ops Day Shift Summary

TITLE: 01/25 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Few hours this morning spent working on calibration, then a lockloss caused another couple hours of reacquisition time this afternoon. H1 has been observing for almost 1.5 hours.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
17:16 SAFETY LASER HAZ (⌐■-■) LVEA YES LVEA is Laser HAZARD Ongoing
15:58 FAC Mitchell LVEA - Checking scissor lifts 16:19
16:19 FAC Kim Opt Lab N Technical cleaning 16:45
18:41 ISC Keita, Jennie, Mayank, Sivananda Opt Lab YES (local) ISS array work 20:24
H1 PSL
ryan.short@LIGO.ORG - posted 16:04, Friday 24 January 2025 (82451)
PSL Status Report - Weekly

FAMIS 26352

Laser Status:
    NPRO output power is 1.85W
    AMP1 output power is 70.23W
    AMP2 output power is 137.2W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 3 days, 4 hr 31 minutes
    Reflected power = 26.16W
    Transmitted power = 102.5W
    PowerSum = 128.6W

FSS:
    It has been locked for 0 days 1 hr and 44 min
    TPD[V] = 0.6629V

ISS:
    The diffracted power is around 3.6%
    Last saturation event was 0 days 3 hours and 19 minutes ago


Possible Issues:
    PMC reflected power is high
    FSS TPD is low

RefCav alignment will likely need to be fixed on-table next Tuesday (I can try touching it up with picos if there's some TOO downtime this weekend, but I don't expect to get much improvement). PMC Refl being high is nothing new.

H1 General
ryan.crouch@LIGO.ORG - posted 16:01, Friday 24 January 2025 - last comment - 16:59, Friday 24 January 2025(82450)
OPS Friday EVE shift start

TITLE: 01/25 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 16mph Gusts, 11mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 16:59, Friday 24 January 2025 (82454)SQZ

I dropped Observing from 00:49 - 00:56 to adjust the SQZer, I brought H1:SQZ-ADF_OMC_TRANS_PHASE back to -136 alog82421 and then after the servo was done I adjusted the OPO temperature. I accepted the new phase in SDF.

Images attached to this comment
H1 ISC (PEM)
jennifer.wright@LIGO.ORG - posted 15:54, Friday 24 January 2025 (82447)
Moving PR2 spot analysis

Sheila, Jennie W, Ryan S

Summary: The camera servos got turned off accidentally the last time we moved PR3. Worth another try at this measurement.

Analysis of why we lost lock the other day while doing the in-lock PR2 spot move, moving the PR3 yaw alignment and pico-ing to stay on the POP and POPAIR PDs; see image.

When we first started altering the yaw of PR3 at the first cursor, the circulating power in the arms started to get higher, and around 17:22:02 UTC the circulating power began to go down, as did LSC-POP_A. About 30 mins after this the circulating power began to recover as we stopped changing the PR3 position and the pico-motor position. We are not sure why this happened. After this period we started moving PR3 yaw down again, the circulating power and POP_A power decreased, and then we lost lock.

Over the periods when we were not actively changing the alignment, PR2 was still moving. So we checked the camera servos to see if they move PR2 (they don't), but we discovered that the camera servos had been switched off by the camera guardian, see image.

We realised this happened because the PR2_SPOT_MOVE guardian state that we had ISC_LOCK in has a state number less than 577, which tripped this condition in the CAMERA_SERVO guardian.

The CAMERA_SERVO guardian went to state 500, as shown in the final ndscope row at the first cursor. This guardian node then stalled there, instead of switching on the ADS servos and trying to get back to the CAMERA_SERVO_ON state as in its state graph, because the PR2_SPOT_MOVE state does not contain a call to the unstall-nodes function in ISC_LOCK.

We altered the CAMERA_SERVO guardian to eliminate the turning off of the camera servos when it thinks the IFO is unlocked (i.e. in a low-numbered state), as this should be handled by ISC_LOCK, which manages it.
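A hypothetical plain-Python sketch of the logic change (the actual Guardian code differs; the 577 threshold is the one quoted above):

```python
LOWEST_CAMERA_SERVO_STATE = 577   # ISC_LOCK state number from this entry

def camera_servos_allowed(isc_lock_state_number, legacy=False):
    """Should CAMERA_SERVO leave the camera servos running?"""
    if legacy:
        # Old behavior: any ISC_LOCK state numbered below 577 was treated
        # as "unlocked" and switched the camera servos off -- which also
        # caught the PR2_SPOT_MOVE state.
        return isc_lock_state_number >= LOWEST_CAMERA_SERVO_STATE
    # New behavior: don't turn the servos off based on the state number;
    # ISC_LOCK, which manages CAMERA_SERVO, handles unlocked states.
    return True
```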

Still need to think about why our overall circulating power got better then worse several times during these changes and why precisely we lost lock.

Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 13:37, Friday 24 January 2025 - last comment - 15:16, Friday 24 January 2025(82445)
Lockloss @ 20:42 UTC

Lockloss @ 20:42 UTC - link to lockloss tool

No obvious cause, but the wind had recently picked up and looks like there was an ETMX glitch immediately before the lockloss.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 15:16, Friday 24 January 2025 (82448)

H1 back to observing at 23:10 UTC. Longer acquisition due to lots of low-state locklosses with seemingly no explanation (e.g. ALS dropping out unexpectedly for both arms). Eventually issues resolved themselves and relocking went automatically.

H1 General
ryan.short@LIGO.ORG - posted 12:20, Friday 24 January 2025 (82443)
H1 Out of Observing for Calibration Fixes

H1 dropped observing from 17:16 to 20:14 UTC for fixes to the calibration. Log entry to come from Louis/Evan with specifically what was done.

H1 ISC (CAL, ISC)
jeffrey.kissel@LIGO.ORG - posted 12:09, Tuesday 21 January 2025 - last comment - 13:02, Friday 24 January 2025(82375)
Digital Anti-Aliasing Options for 524kHz OMC DCPD Path
J. Kissel, E. Goetz, L. Dartez

As mentioned briefly in LHO:82329 -- after discovering that there is a significant amount of aliasing in the 524 kHz version of the OMC DCPD signals when down-sampled to 16 kHz -- Louis and Evan tried versions of the (test, pick-off, A1, A2, B1, and B2) DCPD signal paths with two copies each of the existing 524 kHz to 65 kHz and 65 kHz to 16 kHz AA filters, as opposed to one. In this aLOG, I'll refer to these filters as "Dec65k" and "Dec16k," or for short in the plots attached "65k" and "16k."

Just restating the conclusion from LHO:82329 :: Having two copies of these filters -- and thus a factor of 10x more suppression in the 8 to 32 kHz region and 100x more suppression in the 32 to 232 kHz region -- seems to dramatically reduce the amount of aliasing.

Recall these filters were designed with lots of compromises in mind -- see all the details in G2202011.

Upon discussion of applying this "why don't we just add MOAR FIRE" 2xDec65k and 2xDec16k option to the primary signal path, there were concerns about
    - DARM open loop gain phase margin, and
    - Computational turn-around time for the h1iopomc0 front-end process.

I attach two plots to help facilitate that discussion,
    (1st attachment) Bode plot of various combinations of the Dec65k and Dec16k filters.
    (2nd attachment) Plot of the CPU timing meter over the weekend, during which these filters were installed and ON in the 4x test banks on the same computer.

For (1st) :: Here we show the high-frequency suppression above 1000 Hz and the phase loss around 100 Hz for several simple combinations of filtering. The weekend configuration of two copies of the 65k and 16k filters is shown in BLACK; the nominal configuration of one copy is shown in RED. In short -- all these combinations incur less than 5 deg of phase loss around the DARM UGF. Louis is going to do some modeling to show the impact of these combinations on the DARM loop stability via plots of open loop gain and loop suppression. We anecdotally remember that the phase margin is "pretty tight," sub-30 [deg], but we'll wait for the plots.
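The "two copies" arithmetic can be sanity-checked numerically: cascading a filter squares its transfer function, so the stopband suppression doubles in dB and the low-frequency phase loss doubles in degrees. A sketch with a stand-in decimation filter (not the actual Dec65k/Dec16k designs):

```python
import numpy as np
from scipy import signal

fs = 65536.0
# Stand-in lowpass, NOT the real Dec filter: 6th-order Chebyshev type II.
sos_1x = signal.cheby2(6, 80, 20e3, fs=fs, output='sos')   # one copy
sos_2x = np.vstack([sos_1x, sos_1x])                       # two copies in series

f = np.array([100.0, 30e3])   # near the DARM UGF, and in the stopband
_, h1 = signal.sosfreqz(sos_1x, worN=f, fs=fs)
_, h2 = signal.sosfreqz(sos_2x, worN=f, fs=fs)

db = lambda h: 20 * np.log10(np.abs(h))
print(f"stopband suppression: {db(h1)[1]:.1f} dB -> {db(h2)[1]:.1f} dB")
print(f"phase loss at 100 Hz: {np.degrees(np.angle(h1[0])):.3f} deg -> "
      f"{np.degrees(np.angle(h2[0])):.3f} deg")
```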

For (2nd) :: With the weekend configuration of filters -- eight more filters (the copies of the 65k and 16k, copied 4 times across the A1, A2, B1, B2 banks) installed and running -- the extremes of CPU clock cycle turnaround time did increase, from "never above 13 [usec]" to "occasionally hitting 14 [usec]" out of the ideal 1/2^16 = 15.26 [usec], which is rounded up on the GDS TP MEDM screen to an even 16 [usec]. This is to say that "we can probably run with 4 more filters in the A0 and B0 banks," though that may limit how much filtering can be in the A1, A2, B1, B2 banks for future testing. Also, no one has really looked at what happens to the gravitational wave channel when the timing of the CPU changes, or gets near the ideal clock-cycle time -- namely the basic question "Are there glitches in the GW data when the CPU runs longer than normal?"
Images attached to this report
Comments related to this report
erik.vonreis@LIGO.ORG - 13:28, Thursday 23 January 2025 (82424)

Unless a DAC, ADC, or IPC timing error occurs, a long IOP cycle time will not affect the data.  The models have some buffering, so they can even suffer an occasional long cycle time beyond the maximum without affecting data.

h1iopomc0's average cycle time is about 8 us (see the IO Info button on the GDS TP screen), so it can probably run with a consistent max cycle time well beyond 15 us without affecting data.

jeffrey.kissel@LIGO.ORG - 13:02, Friday 24 January 2025 (82444)
Here, in the 1st attachment, is a two-week trend of the H1IOPOMC0 front-end (DCUID 179) CPU timing activity during this time period's flurry of activity installing, turning on, and using lots of different combinations of (relatively low-Q, low-order, low-SOS-count) filters. While the minute trend of the primary "CPU_METER" channel is creeping up, the "CPU_AVG" has only incremented up once, to the 8 [usec] that Erik quotes above.

FYI these channels can be found displayed on MEDM in the IOP's GDS_TP screen, following the link to "IO Info" and looking at the "CPU PROCESSING TIMES" section at the top middle. See second attachment.
Images attached to this comment