Reports until 15:37, Thursday 26 September 2024
H1 TCS
sheila.dwyer@LIGO.ORG - posted 15:37, Thursday 26 September 2024 (80320)
increased ETMY ring heater

To try to avoid the 10kHz PI (MODE 24), I've increased the power on both segments of the ETMY ring heater from 1W to 1.1W.  I did this after the calibration measurements, when the PI was already ringing up, and we've now lost lock from the PI.  (For recent history, see 80299.)

H1 AOS
robert.schofield@LIGO.ORG - posted 15:20, Thursday 26 September 2024 (80319)
PR2 scraper baffle reflection at 19mW, was 17mW

I measured the power of the beam coming out of the HAM3 illuminator viewport and found it to be 19mW, as compared to the 17mW measured in this alog 78878. The beam is the part of the PR2 to PR3 beam that is clipped by the aperture of the scraper baffle and reflected off the baffle. We had minimized it from 47mW to 17mW for the referenced alog, and wanted to see if it was clipping more - not by much.

H1 CAL
anthony.sanchez@LIGO.ORG - posted 15:05, Thursday 26 September 2024 (80317)
Calibration Sweep Complete

pydarm measure --run-headless bb
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240926T212332Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240926T212332Z.xml saved
diag> quit
EXIT KERNEL

2024-09-26 14:28:43,409 bb measurement complete.
2024-09-26 14:28:43,409 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240926T212332Z.xml
2024-09-26 14:28:43,410 all measurements complete.
anthony.sanchez@cdsws29:

21:30 UTC gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1_asc.ini;gpstime
PDT: 2024-09-26 14:30:05.062848 PDT
UTC: 2024-09-26 21:30:05.062848 UTC
GPS: 1411421423.062848

2024-09-26 21:53:35,296 | INFO | Finished gathering data. Data ends at 1411422832.0
2024-09-26 21:53:36,077 | INFO | It is SAFE TO RETURN TO OBSERVING now, whilst data is processed.
2024-09-26 21:53:36,077 | INFO | Commencing data processing.
2024-09-26 21:53:36,077 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.

2024-09-26 22:01:43,741 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,754 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,764 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,774 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,784 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240926T213006Z.hdf5
ICE default IO error handler doing an exit(), pid = 2864258, errno = 32
PDT: 2024-09-26 15:01:43.864814 PDT
UTC: 2024-09-26 22:01:43.864814 UTC
GPS: 1411423321.864814
anthony.sanchez@cdsws29:

Images attached to this report
H1 General (CDS)
anthony.sanchez@LIGO.ORG - posted 13:46, Thursday 26 September 2024 (80313)
SDF Diffs Accepted

After the EtherCAT reboot we had a few SDF Diffs that needed to be accepted.
Screenshot.

Images attached to this report
H1 ISC (CDS, ISC)
keita.kawabe@LIGO.ORG - posted 12:24, Thursday 26 September 2024 - last comment - 14:54, Thursday 26 September 2024(80309)
OMC whitening switching issue (Tony, TJ, JoeB, Sheila, Fil, Patrick, Daniel, Keita among others)

This morning Tony and TJ had a hard time locking the OMC.

We found that the OMC DCPD A and B outputs were very asymmetric only when there was a fast transient (1st attachment), but not when the OMC length was slowly brought close to resonance (2nd attachment), which suggested a whitening problem.

The transfer function from DCPD_A to DCPD_B suggested that the switchable hardware whitening was ON for DCPD_A and OFF for DCPD_B when it was supposed to be OFF for both. The 3rd attachment shows the transfer function from DCPD_A to B, and the 4th attachment shows the anti-whitening filter shape.

Switching ON the anti-whitening only for DCPD_A made the frequency response flat. Trying to switch the analog whitening ON and OFF by toggling H1:OMC-DCPD_A_GAINTOGGLE didn't change the hardware whitening status; it's totally stuck.
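
For reference, this kind of A-to-B cross check can be reproduced offline with gwpy; the sketch below is illustrative only, with placeholder channel names (of the usual H1:OMC-DCPD_*_OUT_DQ form), GPS span, and FFT settings rather than the exact ones used here.

# Sketch only: estimate the DCPD A -> B transfer function to check whether one
# channel has an extra whitening stage engaged.  Channel names, GPS span, and
# FFT parameters are placeholders.
import numpy as np
from gwpy.timeseries import TimeSeriesDict

chans = ['H1:OMC-DCPD_A_OUT_DQ', 'H1:OMC-DCPD_B_OUT_DQ']
start, end = 1411400000, 1411400060   # placeholder span with light on the DCPDs
data = TimeSeriesDict.get(chans, start, end)

# With both whitening stages in the same state this TF should be flat; a
# residual whitening-shaped zero/pole pair points at a stuck analog switch.
tf = data[chans[0]].transfer_function(data[chans[1]], fftlength=4, overlap=2)

plot = np.abs(tf).plot()
ax = plot.gca()
ax.set_xscale('log')
ax.set_yscale('log')
plot.show()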

We tried to lock the IFO using only DCPD_B, but the IFO unlocked for some reason.

After the IFO lost lock, people on the floor found that the problem is in the whitening chassis, not the BIO. It's not clear if we can fix the board in the chassis (preferable) or have to swap the whitening chassis (less preferable, as the calibration group would need to measure the analog TF and generate a compensation filter).

We'll update as we make progress.

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 12:57, Thursday 26 September 2024 (80310)

Fernando, Fil, Daniel

DCPD whitening chassis fixed.

We diagnosed a broken photocoupler in the DCPD whitening chassis. Since the photocoupler is located on the front interface board, we elected to swap this board with the one from the spare. This means the whitening transfer function should not have changed. Since we switched the front interface board together with the front panel, the serial number of the chassis has (temporarily) changed to that of the spare.

The in-vacuum DCPD amplifiers were powered off for 30-60 minutes while the repair took place. So, they need some time to thermalize.

filiberto.clara@LIGO.ORG - 13:32, Thursday 26 September 2024 (80312)

Unit installed is S2300003. The front panel and front interface board were removed/borrowed from S2300004.

louis.dartez@LIGO.ORG - 14:54, Thursday 26 September 2024 (80316)CAL
N.B. S2300004 and S2300002 have been characterized and fit already. See LHO:71763 and LHO:78072 for the S2300004 and S2300002 zpk fits, respectively.

Should the OMC DCPD Whitening chassis need to be fully swapped, we already have the information we need to install the corresponding compensation filters in the front end and in the pyDARM model to accommodate that change. This, of course, rides on the expectation that the electronics have not materially changed in their response in the interim.

H1 General
anthony.sanchez@LIGO.ORG - posted 11:27, Thursday 26 September 2024 (80307)
Thurs Mid Shift update

TITLE: 09/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY:

Locking notes:
Ran an initial Alignment.
After Initial Alignment I started the Locking process, and bounced to PRMI 3 times even though I had good DRMI flashes.
On the 3rd time of jumping to PRMI, I manualed back to Align_recycling mirrors, then requested NLN. That took us to DRMI, then I touched up SRM in yaw.
 
OMC issue in Prep_DC_READOUT_TRANSITION.
Tridents looking good, but the TUNE_OFFSETS OMC_LOCK guardian state tunes away from the desired value.
TJ tried locking by hand...
OMC DCPDs are not balanced? (I'm not sure what they mean by this.)
Sheila suggests that it might be a whitening issue and Keita confirms this.
We are trying to switch which OMC DCPD whitening we are using. We are now using DCPD B and plan to switch back to DCPD A later on. An effort was made to keep the configuration the same for the CAL team, so swapping the whitening chassis was not the best option. We are stuck in the "high" state, which makes the output of the OMC DCPDs high, but the high state is where we need it to be for NLN.
Sheila and Keita did the DCPD switching to get around the whitening issues, only to have a lockloss in the next ISC_LOCK state, DARM_TO_DC_READOUT. :(

Relocking again got to PRMI....

17:43 UTC EtherCAT failure; Fernando, Fil, and Sigg are working on resolving the issue.

Current H1 status is Down for Corrective maintenance.

H1 CAL
louis.dartez@LIGO.ORG - posted 11:16, Thursday 26 September 2024 - last comment - 10:18, Monday 25 August 2025(80291)
Procedural issues in LHO Calibration this week
The past few weeks have seen rocky performance out of the calibration pipeline and its IFO-tracking capabilities. Much, but not all, of this is due to [my] user error.

Tuesday's bad calibration state is a result of my mishandling of the recent drivealign L2L gain changes for the ETMX TST stage (LHO:78403,
LHO:78425, LHO:78555, LHO:79841).

The current practice adopted by LHO with respect to these gain changes is the following:

1. Identify that KAPPA_TST has drifted from 1 by some appreciable amount (1.5-3%), presumably due to ESD charging effects.
2. Calculate the necessary DRIVEALIGN gain adjustment to cancel out the change in ESD actuation strength. This is done in the DRIVEALIGN bank so that it's downstream enough to only affect the control signal being sent to the ESD. It's also placed downstream of the calibration TST excitation point.
3. Adjust the DRIVEALIGN gain by the calculated amount (if kappaTST has drifted +1% then this would correspond to a -1% change in the DRIVEALIGN gain).
3a. Do not propagate the new drivealign gain to CAL-CS.
3b. Do not propagate the new drivealign gain to the pyDARM ini model.

After step 3 above it should be as if the IFO is back to the state it was in when the last calibration update took place. I.e. no ESD charging has taken place (since it's being canceled out by the DRIVEALIGN gain adjustments). 
It's also worth noting that after these adjustments the SUS-ETMX drivealign gain and the CAL-CS ETMX drivealign gain will no longer be copies of each other (see image below).

The reasoning behind 3a and 3b above is that by using these adjustments to counteract IFO changes (in this case ESD drift) from when it was last calibrated, operators and commissioners in the control room could comfortably take care of performing these changes without having to invoke the entire calibration pipeline. The other approach, adopted by LLO, is to propagate the gain changes to both CAL-CS and pyDARM each time it is done and follow up with a fresh calibration push. This approach leaves less to 'be remembered' as CAL-CS, SUS, and pyDARM will always be in sync but comes at the cost of having to turn a larger crank each time there is a change.
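
As a concrete illustration of the arithmetic in steps 1-3 above (a sketch only, not the site adjustment script referenced in LHO:79841):

# Illustration of steps 1-3 above; not the actual site tool.
def adjusted_drivealign_gain(current_gain, kappa_tst_avg):
    """Return the new SUS-ETMX L3 DRIVEALIGN L2L gain that cancels the ESD
    drift reported by KAPPA_TST: a +1% kappa drift calls for a -1% gain change.
    """
    drift = kappa_tst_avg - 1.0          # e.g. +0.015 for a +1.5% drift
    return current_gain * (1.0 - drift)

# Example: kappa has drifted up by 1.5%, so the gain comes down by 1.5%.
print(adjusted_drivealign_gain(184.65, 1.015))   # ~181.88
# Per 3a/3b, the result is written only to H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN;
# CAL-CS and the pyDARM ini are deliberately left alone.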

Somewhere along the way I updated the TST drivealign gain parameter in the pyDARM model even though I shouldn't have. At this point, I don't recall if I was confused because the two sites operate differently or if I was just running a test and left this parameter changed in the model template file by accident and subsequently forgot about it. In any case, the drivealign gain parameter change made its way through along with the actuation delay adjustments I made to compensate for both the new ETMX DACs and for residual phase delays that haven't been properly compensated for recently (LHO:80270). This happened in commit 0e8fad of the H1 ifo repo. I should have caught this when inspecting the diff before pushing the commit but I didn't. I have since reverted this change (H1 ifo commit 41c516).

During the maintenance period on Tuesday, I took advantage of the fact that the IFO was down to update the calibration pipeline to account for all of the residual delays in the actuation path we hadn't been properly compensating for (LHO:80270). This is something that I've done several times before; a combination of the fact that the calibration pipeline has been working so well in O4 and that the phase delay changes I was instituting were minor contributed to my expectation that we would come back online to a better calibrated instrument. This was wrong. What I'd actually done was install a calibration configuration in which the CAL-CS drivealign gain and the pyDARM model's drivealign gain parameter were different. This is bad because pyDARM generates FIR filters that are used by the downstream GDS pipeline; those filters are embedded with knowledge of what's in CAL-CS by way of the parameters in the model file. In short, CAL-CS was doing one thing and GDS was correcting for another. 

-- 

Where do we stand?

At the next available opportunity, we will be taking another calibration measurement suite and using it to reset the calibration one more time, now that we know what went wrong and how to fix it. I've uploaded a comparison of a few broadband pcal measurements (image link). The blue curve is the current state of the calibration error. The red curve was the calibration state during the high-profile event earlier this week. The brown curve is from last week's Thursday calibration measurement suite, taken as part of the regularly scheduled measurements.

--
Moving forward, I and others in the Cal group will need to adhere more strictly to the procedures we've already had in place: 
1. double check that any changes include only what we intend at each step
2. commit all changes to any report in place immediately and include a useful log message (we also need to fix our internal tools to handle the report git repos properly)
3. only update the calibration while there is a thermalized IFO that can be used to confirm that things come back properly, or, if done while the IFO is down, require Cal group sign-off before going to observing
Images attached to this report
Comments related to this report
vladimir.bossilkov@LIGO.ORG - 15:30, Tuesday 19 August 2025 (86460)CAL

Posting here for historical reference.
The correction for the incorrect calibration was propagated in an email thread between myself, Joseph Betzwieser, Aaron Zimmerman, and Colm Talbot.

I had produced a calibration uncertainty with the necessary correction to account for the effects of this issue, attached here as a text file and as an image showing how it compares against our ideal model (blue dashed) and the readouts of the calibration monitoring lines at the time (red pentagons).

Ultimately the PE team used the inverse of what I post here, since as a result of this incident it was discovered that PE had been ingesting uncertainty in an inverted fashion up to this point.

Images attached to this comment
Non-image files attached to this comment
joseph.betzwieser@LIGO.ORG - 10:29, Thursday 21 August 2025 (86494)
I am also posting the original correction transfer function (the blue dashed line in Vlad's comment's plot) here from Vlad for completeness.

It was created by calculating the modeled response of the interferometer that we intended to use at the time (R_corrected), divided by the response of the interferometer that was running live at the time (R_original), corrected for the online corrections (i.e. time-dependent correction factors such as Kappa_C, Kappa_TST, etc).

So to correct, one would take the calibrated data stream at the time:
bad_h(t) = R_original(t) * DARM_error(t)

and correct it via:
corrected_h(t) = R_original(t) * DARM_error(t) * R_corrected / R_original(t)
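
A minimal sketch of applying such a correction offline is below. It assumes the correction TF is tabulated as columns of frequency, real part, and imaginary part (which may not match the attached file's actual format) and uses a plain FFT rather than any production tooling.

# Sketch only: apply corrected_h = (R_corrected / R_original) * bad_h in the
# frequency domain.  The column layout of the TF file is an assumption.
import numpy as np

def apply_correction(bad_h, sample_rate, tf_file):
    freq, tf_real, tf_imag = np.loadtxt(tf_file, unpack=True)[:3]

    spec = np.fft.rfft(bad_h)
    f = np.fft.rfftfreq(len(bad_h), d=1.0 / sample_rate)

    # Interpolate the tabulated correction onto the FFT frequency grid.
    # (Interpolating magnitude and phase separately would be more careful;
    # this is just a sketch.)
    corr = np.interp(f, freq, tf_real) + 1j * np.interp(f, freq, tf_imag)
    return np.fft.irfft(spec * corr, n=len(bad_h))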
Non-image files attached to this comment
joseph.betzwieser@LIGO.ORG - 10:18, Monday 25 August 2025 (86547)
So our understanding of what was wrong with the calibration around September 25th, 2024 00:00 UTC has improved significantly since then.  We had 4 issues in total.

1) The above mentioned drivealign gain mismatch issue between model, h1calcs, the interferometer and GDS calibration pipeline.

2) The ETMX L1 stage rolloff change that was not in our model (see LHO alog 82804)

3) LHO was not applying the measured SRC detuning to the front end calibration pipeline - we started pushing it in February 2025 (see LHO alog 83088).

4) The fact that pydarm doesn't automatically hit the load filters button for newly updated filters means sometimes humans forget to push that button (see for example LHO alog 85974).  Turns out that night the optical gain filter in the H1:CAL-DARM_ERR filter bank had not been updated.  Oddly enough, the cavity pole frequency filter bank had been updated, so I'm guessing its individual load button was pressed but the optical gain bank's was not.

In the filter archive (/opt/rtcds/lho/h1/chans/filter_archive/h1calcs/), specifically H1CALCS_1411242933.txt has an inverse optical gain filter of 2.9083e-07, which is the same value as the previous file's gain.  However, the model optical gains did change (3438377 in the 20240330T211519Z report, and 3554208 in the bad report that was pushed, 20240919T153719Z).  The EPICS records for the kappa generation were updated, so we had a mismatch between the kappa_C value that was calculated and the optical gain it was applied to - similar to the actuation issue we had.  The filter should have changed by a factor of 0.9674 (3438377/3554208).

This resulted in the monitoring lines showing ~3.5% error at the 410.3 Hz line during this bad calibration period. It also explains why there's a mismatch between the monitoring lines and the correction TFs we provided that night at high frequency.  Normally the ratio between PCAL and GDS is 1.0 at 410.3 Hz, since the PCAL line itself is used to calculate kappa_C at that frequency and thus matches the sensing at that frequency to the line. See the grafana calibration monitoring line page.
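
The size of that error follows directly from the two optical gains quoted above; a quick numerical check (nothing more than the arithmetic already described):

# Arithmetic check of the optical-gain mismatch described above.
og_old = 3438377           # 20240330T211519Z report
og_new = 3554208           # 20240919T153719Z report (the one that was pushed)

inv_og_old = 1 / og_old    # ~2.908e-07, the value left loaded in H1:CAL-DARM_ERR
inv_og_new = 1 / og_new    # ~2.814e-07, the value that should have been loaded

print(inv_og_new / inv_og_old)   # ~0.9674: factor the filter should have changed by
print(og_new / og_old - 1)       # ~0.034: scale error, consistent with the ~3.5%
                                 # seen at the 410.3 Hz monitoring line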

I've combined all this information to create an improved TF correction factor and uncertainty plot, as well as more normal calibration uncertainty budgets.
So the "calibration_uncertainty_H1_1411261218.png" is a normal uncertainty budget plot, with a correction TF from the above fixes applied.
The "calibration_uncertainty_H1_1411261218.txt" is the associated text file with the same data.

"H1_uncertainty_systematic_correction.txt" is the TF correction factor that I applied, calculated with the above fixes.
Lastly, "H1_uncertainty_systematic_correction_sensing_L1rolloff_drivealign.pdf", is the same style plot Vlad made earlier, again with the above fixes.

I'll note the calibration uncertainty plot and text file was created on the LHO cluster, with /home/cal/conda/pydarm conda environment, using command: 
IFO=H1 INFLUX_USERNAME=lhocalib INFLUX_PASSWORD=calibrator CAL_ROOT=/home/cal/archive/H1/ CAL_DATA_ROOT=/home/cal/svncommon/aligocalibration/trunk/ python3 -m pydarm uncertainty 1411261218 -o ~/public_html/O4b/GW240925C00/ --scald-config ~cal/monitoring/scald_config.yml -s 1234 -c /home/joseph.betzwieser/H1_uncertainty_systematic_correction.txt
I had to modify the code slightly to expand out the plotting range - it was much larger than the calibration group usually assumes.

All these issues were fixed in the C01 version of the regenerated calibration frames.
Images attached to this comment
Non-image files attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 10:55, Thursday 26 September 2024 - last comment - 13:13, Thursday 26 September 2024(80306)
Slow controls system down for investigation, alarms bypassed

Daniel has the Beckhoff slow controls system offline for investigation.

I've bypassed the following alarms:

Bypass will expire:
Thu Sep 26 10:54:08 PM PDT 2024
For channel(s):
    H1:PEM-C_CER_RACK1_TEMPERATURE
    H1:PEM-C_MSR_RACK1_TEMPERATURE
    H1:PEM-C_MSR_RACK2_TEMPERATURE
    H1:PEM-C_SUP_RACK1_TEMPERATURE

Comments related to this report
david.barker@LIGO.ORG - 13:13, Thursday 26 September 2024 (80311)

all alarms are active again.

H1 CDS
david.barker@LIGO.ORG - posted 09:49, Thursday 26 September 2024 (80305)
New h1lsc filter loaded

I loaded the new H1LSC.txt filter file into h1lsc. This has added Elenna's "new0926" filter to PRCLFF. This filter is currently turned off and has not been switched on recently.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 08:26, Thursday 26 September 2024 (80303)
Thu CP1 Fill

Thu Sep 26 08:14:57 2024 INFO: Fill completed in 14min 52secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 07:56, Thursday 26 September 2024 (80301)
Thursday OPS Day Shift Start

TITLE: 09/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:

When I arrived, the IFO was trying to lock itself after a lockloss from NLN.
Unknown Lockloss

While getting the screenshots and typing this up, the IFO went to PRMI twice, so I decided to run an initial alignment after that big gusty wind storm yesterday.

Images attached to this report
X1 SUS (SUS)
ryan.crouch@LIGO.ORG - posted 07:51, Thursday 26 September 2024 - last comment - 08:21, Thursday 26 September 2024(80296)
Comparison of all fully assembled A+ HRTS transfer functions

A follow-up to alog 78711 with all 11 of the 12 assemblies completed so far.

I made 3 comparison plots: all suspensions (the legend on the 1st page maps measurement date to the corresponding sus s/n), both of the suspended versions, and all 9 of the freestanding versions (there is 1 left to be finished that we're waiting on a part rework/repair for).

Non-image files attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 08:21, Thursday 26 September 2024 (80302)

We are still working on fine-tuning the results of two HRTS suspensions, measured on 08_30_2100 and 09_05_1800 (dark green and purple lines in the plot shown here), especially for the V and R dofs. The magnitude (pages 03 and 04 here) is lower than the rest of the batch, and there is also some cross coupling from the R dof.

H1 AOS
jason.oberling@LIGO.ORG - posted 13:44, Tuesday 24 September 2024 - last comment - 11:18, Thursday 26 September 2024(80271)
SR3 Optical Lever

J. Oberling, O. Patane

Today we started to re-center the SR3 optical lever after SR3 alignment was reverted to its pre-April alignment.  That's not quite how it went down, however...

We started by hooking up the motor driver and moving the QPD around (via the crossed translation stages it is attached to), and could not see any improvement in the OpLev signal.  While moving the horizontal translation stage it suddenly stopped and started making a loud grinding noise, like it had hit its limit, or some other limit.  Not liking the sound of that, we set about figuring out fall protection to climb on top of HAM4 to investigate.  While the fall protection was getting figured out we took a look at the laser and found it dead.  No light, no life, all dead.  So we grabbed a spare laser from the Optics Lab and installed it (did not turn it on yet).

Once the fall protection was figured out I climbed on top of HAM4 and opened the OpLev receiver.  I couldn't visually see anything wrong with the stage.  It was near the center of its travel range, and nothing else looked like it was hung up.  I removed the QPD plate and the vertically mounted translation stage to get a better view of the stuck stage, and could still see nothing wrong.  Oli tried moving the stage with the driver and it was still making the loud noise, and the stage was not moving.  So it was well and truly stuck.  We grabbed one of the two spare translation stages from the EE shop (where Fernando was testing the remote OpLev recentering setup), tested it to make sure it worked (it did!), and installed it in the SR3 OpLev receiver.  The whole receiver was reassembled and the laser was turned on.  Oli slowly turned up the laser power while I watched for the beam, and once it was bright enough Oli then moved the translation stages to roughly center it on the QPD.

Something interesting: as Oli was turning up the laser power it would occasionally flash bright and then return to the brightness it was at before the flash.  They got it bright enough to see a SUM count of ~3k, and then re-centered the OpLev.  At this point I closed up the receiver and came down from the chamber.  I turned the laser power up to return the SUM counts to the ~20k it was at before the SR3 alignment shift and saw the SUM counts jump just like the beam would flash.  This happened early in the power adjustment (for example: started at ~3k SUM, adjusted up and saw a flash to ~15k, then back down to ~6k) but leveled off once the power was higher (I saw no jumps once the SUM counts were above 15k or so).  Maybe some oddness with a low injection current for the laser diode?  Not sure.  The OpLev is currently reading ~20k SUM counts and looks OK, but we'll keep an eye out to see if it remains stable or starts behaving oddly.

The SR3 optical lever is now fixed and working again.

New laser SN is 197-3, old laser SN is 104-1.  SN of the new translation stage is 10371.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 11:18, Thursday 26 September 2024 (80308)

Forgot to add: once the translation stage became stuck, the driver was still recording movement (the counts would change when we tried to move the stage) even though the stage was clearly not moving.  So the motor encoder for the stage was working while the stage itself was stuck.

H1 DetChar (DetChar, DetChar-Request)
gabriele.vajente@LIGO.ORG - posted 11:27, Wednesday 18 September 2024 - last comment - 17:47, Wednesday 02 October 2024(80165)
Scattered light at multiples of 11.6 Hz

Looking at data from a couple of days ago, there is evidence of some transient bumps at multiples of 11.6 Hz. Those are visible in the summary pages too around hour 12 of this plot.

Taking a spectrogram of data starting at GPS 1410607604, one can see at least two times where there is excess noise at low frequency. This is easier to see in a spectrogram whitened to the median. Comparing the DARM spectra in periods with and without this noise, one can identify the bumps at roughly multiples of 11.6 Hz.
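
For anyone reproducing this, a median-normalized spectrogram of this kind can be made with gwpy; a minimal sketch is below (the channel choice, duration, and FFT settings are placeholders, not the exact parameters used for the attached plots).

# Sketch of a median-normalised spectrogram around the noisy period.
from gwpy.timeseries import TimeSeries

start = 1410607604
data = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, start + 3600)

# PSD spectrogram -> ASD, then normalise each frequency bin by its median.
spec = data.spectrogram(30, fftlength=8, overlap=4) ** (1 / 2.)
whitened = spec.ratio('median')

plot = whitened.plot(norm='log', vmin=0.5, vmax=10)
ax = plot.gca()
ax.set_yscale('log')
ax.set_ylim(10, 100)
plot.colorbar(label='ASD relative to median')
plot.show()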

Maybe somebody from DetChar can run LASSO on the BLRMS between 20 and 30 Hz to find out if this noise is correlated with some environmental or other changes.

Images attached to this report
Comments related to this report
jane.glanzer@LIGO.ORG - 14:29, Thursday 26 September 2024 (80314)DetChar

I took a look at this noise, and I have some slides attached to this comment. I will try to roughly summarize what I found. 

I first started by taking some 20-30 Hz BLRMS around the noise. Unfortunately, the noise is pretty quiet, so I don't think lasso will be super useful here. Even taking a BLRMS for a longer period around the noise didn't produce much. I can re-visit this (maybe take a narrower BLRMS?), but as a separate check I looked at spectra of the ISI, HPI, SUS, and PEM channels to see if there was excess noise anywhere in particular. I figured maybe this could at least narrow down a station where there is more noise at these frequencies.

What I found was:

  1. Didn't see excess noise in the EY or EX channels at ~11.6 Hz or at the second/third harmonics.
  2. Many CS channels had some excess noise around 11.6 Hz, less at the second/third harmonics.
  3. However, of the CS channels that DID have excess noise around 11.6 Hz and 23.2 Hz, HAM8 area popped up the most. Specifically these channels: H1:PEM-FCES_ACC_BEAMTUBE_FCTUBE_X_DQ, H1:ISI-HAM8_BLND_GS13Z_IN1_DQ, H1:ISI-HAM8_BLND_GS13X_IN1_DQ.
  4. HAM3 also popped up, and the Hveto results for this day had some glitches witnessed by H1:HPI-HAM3_BLND_L4C_RZ_IN1_DQ.
  5. Potential scatter areas: something near either HAM8 or HAM3?
Non-image files attached to this comment
jane.glanzer@LIGO.ORG - 12:33, Wednesday 02 October 2024 (80429)DetChar

I was able to run lasso on a narrower strain blrms (suggested by Gabriele), which made the noise more obvious. Specifically, I used a 21 Hz - 25 Hz blrms of auxiliary channels (CS/EX/EY HPI, ISI, PEM & SUS channels) to try to model a strain blrms of the same frequency via lasso. In the pdf attached, the first slide shows the fit from running lasso. The r^2 value was pretty low, but the lasso fit does pick up some peaks in the auxiliary channels that do line up with the strain noise. In the following slides, I made time series plots of the channels that lasso found to be contributing the most to the re-creation of the strain. The results are a bit hard to interpret though. There seem to be roughly 5 peaks in the aux channel blrms, but only 2 major ones in the strain blrms. The top contributing aux channels are also not really from one area, so I can't say that this narrowed down a potential location. However, two HAM8 channels were among the top contributors (H1:ISI_HAM8_BLND_GS_X/Y). It is hard to say if that is significant or not, since I am only looking at about an hour's worth of data.
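
For context, a bare-bones version of this blrms-plus-lasso approach looks roughly like the sketch below (gwpy + scikit-learn). The channel list, band, stride, and regularization are placeholders; this is not the DetChar tooling actually used.

# Sketch of the lasso-on-BLRMS idea described above (not the actual tooling).
import numpy as np
from gwpy.timeseries import TimeSeriesDict
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

start, end = 1410607604, 1410611204        # placeholder ~1 hour span
target = 'H1:GDS-CALIB_STRAIN'
aux = ['H1:ISI-HAM8_BLND_GS13X_IN1_DQ',    # example witnesses from the slides
       'H1:ISI-HAM8_BLND_GS13Z_IN1_DQ',
       'H1:HPI-HAM3_BLND_L4C_RZ_IN1_DQ']

data = TimeSeriesDict.get([target] + aux, start, end)

def blrms(ts, flow=21, fhigh=25, stride=60):
    """Band-limited RMS: bandpass, then RMS over stride-second blocks."""
    return ts.bandpass(flow, fhigh).rms(stride)

y = blrms(data[target]).value
X = np.column_stack([blrms(data[c]).value for c in aux])

# Lasso picks a sparse set of aux blrms that best reconstruct the strain blrms;
# the largest coefficients point at candidate witness channels.
model = Lasso(alpha=0.01)
model.fit(StandardScaler().fit_transform(X), y / y.mean())
print(dict(zip(aux, model.coef_)))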

I did a rough check on the summary pages to see if this noise happened on more than one day, but at this moment I didn't find other days with this behavior. If I do come across it happening again (or if someone else notices it), I can run lasso again.

Non-image files attached to this comment
adrian.helmling-cornell@LIGO.ORG - 17:47, Wednesday 02 October 2024 (80437)DetChar

I find that the noise bursts are temporally correlated with vibrational transients seen in H1:PEM-CS_ACC_IOT2_IMC_Y_DQ. Attached are some slides which show (1) scattered light noise in H1:GDS-CALIB_STRAIN_CLEAN from 1000-1400 on September 17, (2) and (3) the scattered light incidents compared to a timeseries of the accelerometer, and (4) a spectrogram of the accelerometer data.

Non-image files attached to this comment
H1 General (CAL, ISC)
anthony.sanchez@LIGO.ORG - posted 15:06, Saturday 31 August 2024 - last comment - 09:23, Thursday 26 September 2024(79841)
ETMX Drive align L2L Gain changed

anthony.sanchez@cdsws29: python3 /ligo/home/francisco.llamas/COMMISSIONING/commissioning/k2d/KappaToDrivealign.py

Fetching from 1409164474 to 1409177074

Opening new connection to h1daqnds1... connected
    [h1daqnds1] set ALLOW_DATA_ON_TAPE='False'
Checking channels list against NDS2 database... done
Downloading data: |█████████████████████████████████████████████████████████████████████████████████████| 12601.0/12601.0 (100%) ETA 00:00

Warning: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN changed.


Average H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT is -2.3121% from 1.
Accept changes of    
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN from 187.379211 to 191.711514 and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN from 184.649994 to 188.919195
Proceed? [yes/no]
yes
Changing
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 191.7115136134197
anthony.sanchez@cdsws29:

Comments related to this report
louis.dartez@LIGO.ORG - 16:20, Saturday 31 August 2024 (79845)
I'm not sure if the value set by this script is correct. 

KAPPA_TST was 0.976879 (-2.3121%) at the time this script looked at it. The L2L DRIVEALIGN GAIN in H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN was 184.65 at the time of our last calibration update. This is the time at which KAPPA_TST was set to 1. So to offset the drift in the TST actuation strength, we should change the drivealign gain to 184.65 * 1.023121 = 188.919. This script chose to update the gain to 191.711514 instead; this is 187.379211 * 1.023121, with 187.379211 being the gain value at the time the script was run. At that time, the drivealign gain was already accounting for a 1.47% drift in the actuation strength (this has so far not been properly compensated for in pyDARM and may be contributing to the error we're currently seeing...more on that later this weekend in another post).
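
For reference, the two candidate adjustments differ only in which gain the -2.3121% kappa drift is applied to; with the numbers quoted in this thread (the small difference from the logged 191.711514 comes from the rounded kappa value):

kappa_tst = 0.976879              # average at the time the script ran (-2.3121% from 1)
scale = 1.0 - (kappa_tst - 1.0)   # = 1.023121

gain_at_last_cal = 184.65         # drivealign gain when the kappas were last reset
gain_now = 187.379211             # drivealign gain when the script ran

print(gain_at_last_cal * scale)   # ~188.92, the value proposed in this comment
print(gain_now * scale)           # ~191.71, the value the script actually set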

So I think this script should be basing corrections as percentages applied with respect to the drivealign gain value at the time when the kappas were last set (i.e. just after the last front-end calibration update), *not* at the current time.

Also, the output from that script claims that it also updated H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN, but I trended it and it hadn't been changed. Those print statements should be cleaned up.
louis.dartez@LIGO.ORG - 09:23, Thursday 26 September 2024 (80304)
To close out this discussion, it turns out that the drivealign adjustment script is doing the correct thing. Each time the drivealign gain is adjusted to counteract the effect of ESD charging, the percent change reported by KAPPA_TST should be applied to the drivealign gain at that time, rather than to what the gain was when the kappa calculations were last updated.