H1 ISC
camilla.compton@LIGO.ORG - posted 16:19, Thursday 11 July 2024 (79037)
Laser Noise aligoNB taken: Jitter, Frequency, Intensity

Jennie, Sheila, Camilla

Last done in full in March (76623, 76323); jitter taken in June (78554). Committed to ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/H1/couplings and pushed to the aligoNB git.

Followed the instructions in 70642 (new dB, see below), 74681 and 74788. We left CARM control on REFL B only for all three of these injection sets so that Sheila can create the 78969 projection plots.

 

Adjusting 70642 to switch CARM control from REFL A+B to REFL B only:

Notes on plugging in the CARM CMB excitation for the frequency injection: in PSL rack ISC-R4, plug the BNC from row 18 labeled AO-OUT-2 into the D0901881 Common Mode Servo on row 15, EXC on Excitation A.

 

Images attached to this report
Non-image files attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:18, Thursday 11 July 2024 (79046)
Ops Eve Shift Start

TITLE: 07/11 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 12mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Currently relocking and at MOVE_SPOTS. Everything looking good

LHO General
thomas.shaffer@LIGO.ORG - posted 16:15, Thursday 11 July 2024 (79023)
Ops Day Shift End

TITLE: 07/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: We had a large and close 6.5M earthquake off the coast of Vancouver Island stop us from observing for most of the shift. This earthquake tripped all of the ISIs, but the only suspensions that tripped were IM1, SR3, and SRM. Recovery was fully automatic(!), except for untripping watchdogs and some stops to allow for SEI testing. The delayed commissioning was then coordinated with LLO to start at 2030 UTC (1:30pm PT). A lock loss happened 1.5 hours into commissioning, the cause seemingly one of those ETMX wiggles again. Relocking has been fully auto so far, with an initial alignment.
LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
15:47 | ISC | Jeff | CER | - | Take pictures of racks | 15:58
16:09 | - | Sabrina, Carlos, Milly | EX | n | Property survey in desert | 18:12
17:23 | CDS | Marc | EX | n | Swap cables at rack | 17:48
18:32 | ISC | Keita | LVEA | n | Replug AS_C cable | 18:33
18:38 | PCAL | Francisco, Cervane, Shango, Dan | PCAL Lab | local | PCAL meas. | 21:36
20:56 | ISC | Sheila | PSL racks | n | Plug in freq. noise excitation | 21:06
21:26 | ISC | Sheila, Camilla | PSL racks | n | Unplug noise excitation | 21:31
21:36 | PCAL | Francisco | PCAL lab | local | Flip a switch | 21:51
H1 ISC (ISC)
jennifer.wright@LIGO.ORG - posted 15:55, Thursday 11 July 2024 - last comment - 15:26, Friday 19 July 2024(79045)
DARM Offset step with hot OM2

We were only about two and a half hours into lock when I did this test, due to our earthquake lockloss this morning.

I ran the

python auto_darm_offset_step.py

in /ligo/gitcommon/labutils/darm_offset_step

Starting at GPS 1404768828

See attached image.

Analysis to follow.

Returned DARM offset H1:OMC-READOUT_X0_OFFSET to 10.941038 (nominal) at 2024 Jul 11 21:47:58 UTC (GPS 1404769696)

DARM offset moves recorded to 
data/darm_offset_steps_2024_Jul_11_21_33_30_UTC.txt
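
For reference, a minimal sketch (not the auto_darm_offset_step.py script itself) of stepping an offset channel with pyepics and restoring the nominal value afterwards; the step factors and dwell time below are made up for illustration.

import time
import epics  # pyepics

OFFSET_CHAN = 'H1:OMC-READOUT_X0_OFFSET'   # channel quoted in this entry
NOMINAL = epics.caget(OFFSET_CHAN)          # record the nominal before stepping

# hypothetical step list and dwell time, for illustration only
steps = [NOMINAL * f for f in (0.8, 0.9, 1.1, 1.2)]
dwell = 60  # seconds per step

try:
    for value in steps:
        epics.caput(OFFSET_CHAN, value)
        time.sleep(dwell)                   # let the IFO settle / collect data
finally:
    epics.caput(OFFSET_CHAN, NOMINAL)       # always restore the nominal offset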

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:25, Friday 12 July 2024 (79080)

Here are the optical gain vs. DCPD power and DARM offset vs. optical gain plots, as calculated by ligo/gitcommon/labutils/darm_offset_step/plot_darm_optical_gain_vs_dcpd_sum.py

The contrast defect is calculated from the height of the 410 Hz PCAL line in the output DCPDs at each offset step, and is 1.014 +/- 0.033 mW.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 15:58, Monday 15 July 2024 (79130)

I added an additional plotting step to the code and it now makes this plot, which shows how the power at AS_C changes with the DARM offset power at the DCPDs. The slope of this graph tells us what fraction of the power is lost between the input to HAM6 (AS_C) and the DCPDs.

P_AS = 1.770*P_DCPD + 606.5 mW

where the second term is light that will be rejected by the OMC, plus light that gets through the OMC but is insensitive to DARM length changes.

The loss term between the anti-symmetric port and the DCPDs is 1/1.77 = 0.565.
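
For reference, a minimal numpy sketch of the straight-line fit behind these numbers, with hypothetical data standing in for the measured powers; the slope gives the AS-port-to-DCPD throughput.

import numpy as np

# hypothetical measured powers at each DARM offset step (mW)
p_dcpd = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
p_as   = np.array([642.0, 660.0, 677.0, 695.0, 713.0])

slope, intercept = np.polyfit(p_dcpd, p_as, 1)
throughput = 1.0 / slope   # fraction of AS-port light reaching the DCPDs

print(f"P_AS = {slope:.3f} * P_DCPD + {intercept:.1f} mW")
print(f"AS port -> DCPD throughput = {throughput:.3f}")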

Non-image files attached to this comment
H1 OpsInfo
thomas.shaffer@LIGO.ORG - posted 15:43, Thursday 11 July 2024 (79044)
Minor changes to H1_MANAGER

I tested out two changes to H1_MANAGER today:

H1 ISC
thomas.shaffer@LIGO.ORG - posted 15:33, Thursday 11 July 2024 (79040)
Range integrand plots for our recent range swings during thermalization

Over the past week or so, after we first get locked our range starts out a bit lower than previously, then as we thermalize it climbs back to our usual 155 Mpc (example). Sheila has some scripts that can compare the range integrands at different points in time (alog 76935). I ran the script comparing 30 minutes into the July 11 0550 UTC lock with 3.5 hours into the same lock, after we had thermalized. These point to 20-50 Hz or so as the largest area of change during that thermalization time. This is roughly what we see with our DARM BLRMS as well. Based on this frequency range we think that the PRCL FF and A2L could be improved to help this; the former was updated today (alog 79035), but we lost lock before the A2L could be run.
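
For reference, a minimal sketch (not Sheila's script) of comparing range integrands at two points in a lock, assuming gwpy's inspiral_range_psd and the calibrated strain channel H1:GDS-CALIB_STRAIN; the GPS spans and FFT settings below are placeholders.

from gwpy.timeseries import TimeSeries
from gwpy.astro import inspiral_range_psd

CHAN = 'H1:GDS-CALIB_STRAIN'                       # assumed calibrated strain channel
spans = {'30 min in': (1404716128, 1404716728),    # placeholder GPS spans
         '3.5 hr in': (1404726928, 1404727528)}

integrands = {}
for label, (t0, t1) in spans.items():
    psd = TimeSeries.get(CHAN, t0, t1).psd(fftlength=16, overlap=8)
    integrands[label] = inspiral_range_psd(psd)    # BNS range integrand, Mpc^2/Hz

# the ratio highlights which bands drive the range change during thermalization
ratio = integrands['3.5 hr in'] / integrands['30 min in']
plot = ratio.plot()
ax = plot.gca()
ax.set_xscale('log')
ax.set_xlim(10, 1000)
plot.show()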

Images attached to this report
X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 15:29, Thursday 11 July 2024 - last comment - 15:04, Monday 15 July 2024(79042)
BBSS M1 BOSEM Count Drift Over Last Week - Temperature Driven Suspension Sag

Ibrahim, Rahul

BOSEM counts have been visibly drifting over the last few days since I centered them last week. Attached are two screenshots:

  1. Screenshot 1 shows the 48hr shift of the BOSEM counts as the temperature is varying
  2. Screenshot 2 shows the full 8 day drift since I centered the OSEMs.

I think this can easily be explained by temperature-driven suspension sag (TDSS - new acronym?) due to the blades. (Initially, Rahul suggested the P-adjuster might be loose and moving, but I think the cyclic nature of the 8-day trend disproves this.)

I tried to find a way to get the temperature in the Staging Building, but Richard said there's no active data being taken, so I'll take one of the available thermometer/temperature sensors and place it in the cleanroom next time I'm in there, just to have the data.

On average, the OSEM counts for RT and LF, the vertical-facing OSEMs, have sagged by about 25 microns. F1, which is above the center of mass, is also seeing a long-term drift. Why?

More importantly, how does this validate/invalidate our OSEM results given that some were taken hours after others and that they were centered days before the TFs were taken?

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 15:04, Monday 15 July 2024 (79137)

Ibrahim

Taking new trends today shows that while the suspension sag "breathes" and comes back and forth as the temperature fluctuates on a daily basis, the F1 OSEM counts are continuing to trend downwards despite temperature not changing peak to peak over the last few days.
This F1 OSEM has gone down an additional 670 cts in the last 4 days (screenshot 1). Screenshot 2 shows the OSEM counts over the last 11 days. What does this tell us?

What I don't think it is:

  1. It somewhat disproves the idea that the F1 OSEM drift was just due to the temperatures going up, since they have not leveled out as the temperatures have - unless for some reason something is heating up more than usual
  2. A suggestion was that the local cleanroom temperature closer to the walls was hotter but this would have an effect on all OSEMs on this face (F2 and F3), but those OSEMs are not trending downwards in counts.
  3. It is likely not an issue with the OSEM itself, since the diagnostic pictures (alog 79079) do show a perceivable shift where there wasn't one during centering, meaning the pitch has definitely changed, which would necessarily show up on the F1 OSEM.

What it still might be:

  1. The temperature causes the Top Stage and Top Mass blades to sag. These blades are located in front of one another, and while the blades are matched, they are not identical. An unlucky matching could mean that either the back top stage blade or two of the back top mass blades are sagging more than the others, causing a pitch instability. Worth checking.
  2. It is not temperature related at all; instead the sagging is revealing that we still have the hysteresis issue we thought we fixed 2 weeks ago. This OSEM has been drifting in counts ever since it was centered, but the temperature has also been changing drastically in that time (50°F difference between highs and lows last week).

Next Steps:

  • I'm going to set up temperature probes in the cleanroom to see if there is indeed some weird differential temperature effect specifically in the cleanroom. Tyler and Eric have confirmed that the Staging Building temperature only really fluctuates between 70 and 72°F, so I'll attempt to reproduce this. This should give more detail about the effect of temperature on the OSEM drift.
  • Use the individual OSEM counts and their basis-DOF matrix transformation values to determine whether some blades are sagging more than others, i.e. check whether the other OSEMs are also seeing the drift (see the sketch after this list).
    • Ultimately, we could re-do the blade position tests to definitively measure the blade height changes at different temperatures. I will look into the feasibility of this.
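
As a rough illustration of the second bullet: project the OSEM drifts into the damping DOF basis with a sensing matrix, so that a differential blade sag shows up as pitch rather than common vertical motion. The matrix and drift values below are hypothetical placeholders, not the real BBSS OSEM2EUL numbers.

import numpy as np

# OSEM order: F1, F2, F3, LF, RT, SD ; DOF order: L, T, V, R, P, Y
# hypothetical sensing (OSEM -> Euler) matrix, NOT the real BBSS OSEM2EUL values
OSEM2EUL = np.array([
    [ 0.33,  0.33,  0.33,  0.0,   0.0,   0.0 ],   # L
    [ 0.0,   0.0,   0.0,   0.0,   0.0,   1.0 ],   # T
    [ 0.0,   0.0,   0.0,   0.5,   0.5,   0.0 ],   # V
    [ 0.0,   0.0,   0.0,  -1.2,   1.2,   0.0 ],   # R
    [ 1.5,  -0.75, -0.75,  0.0,   0.0,   0.0 ],   # P
    [ 0.0,   1.1,  -1.1,   0.0,   0.0,   0.0 ],   # Y
])

# hypothetical OSEM drifts in microns, relative to the centred values
osem_drift = np.array([-30.0, -2.0, -1.0, -25.0, -26.0, 0.5])

dof_drift = OSEM2EUL @ osem_drift
for name, val in zip(['L', 'T', 'V', 'R', 'P', 'Y'], dof_drift):
    print(f"{name}: {val:+.1f}")
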
Images attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:02, Thursday 11 July 2024 (79039)
new notch filters in quads

TJ, Jennie W and I were preparing to test A2L decoupling at 12 Hz, to compare to our results at 20-30 Hz. For this reason we added a 12 Hz notch to ISCINF P and Y for all 4 quads. We had an unexplained lockloss while we were editing the notch filter, so we've loaded these in preparation for testing A2L decoupling next time we have a chance.
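
The notch itself was made in foton; purely as an illustration of the shape, here is a minimal scipy sketch of a 12 Hz notch (the sample rate and Q below are assumptions).

import numpy as np
from scipy import signal

fs = 16384.0          # assumed model sample rate
f0, Q = 12.0, 10.0    # notch frequency and an assumed quality factor

b, a = signal.iirnotch(f0, Q, fs=fs)

# check the depth of the notch at 12 Hz
freqs = np.linspace(1, 100, 1000)
_, resp = signal.freqz(b, a, worN=freqs, fs=fs)
idx = np.argmin(np.abs(freqs - f0))
print(f"Attenuation at {f0} Hz: {20 * np.log10(np.abs(resp[idx])):.1f} dB")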

H1 General
thomas.shaffer@LIGO.ORG - posted 15:01, Thursday 11 July 2024 - last comment - 17:26, Thursday 11 July 2024(79038)
Lock loss 2154 UTC

Lock loss 1404770064

Lost lock during commissioning time, but we were between measurements so it was caused by something else. Looking at the lock loss tool ndscopes, ETMX has that movement we've been seeing a lot of just before the lock loss.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 17:26, Thursday 11 July 2024 (79051)CAL

07/12 00:13 UTC Observing

There were changes to the PCAL ramp times (PCALX, PCALY) made at 21:48 UTC. At that time we were locked and commissioning.

I have reverted those changes.

Images attached to this comment
H1 ISC
camilla.compton@LIGO.ORG - posted 14:40, Thursday 11 July 2024 (79035)
PRCL Feedforward turned on

We turned on the PRCL FF measured in 78940; the injection shows improvement (plot) and the range appears to have improved by 1-2 Mpc.

In 78969 Sheila shows that PRCL noise was coupling directly to DARM rather than through SRCL/MICH.

Images attached to this report
X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 14:27, Thursday 11 July 2024 (79036)
BBSS TF F3 OSEM Instabilities: Mechanical Issue or Sensor Issue

Ibrahim, Rahul

This alog follows alog 79032 and is an in-depth investigation of the F3 OSEM's perceived instability.

From the last alog:

"The nicest-sounding conclusion here is that something is wrong with the F3 OSEM, because it is the only OSEM and/or flag involved in L, P, Y (the less coherent measurements) but not in the others; F3 fluctuates and reacts much more erratically than the others, and in Y the F3 OSEM has a greater proportion of the actuation than in P and a higher magnitude than in L, so if there were something wrong with F3, we'd see it loudest in Y. This is exactly where we see the loudest ring-up."

I have attached a gif which shows the free-hanging F3 OSEM perceivably moving more than the others. I have also attached an ndscope visualization of this movement, clearly showing that F3 is actuating harder/swinging wider than F1 and F2 (screenshot 1). This was seen to a higher degree during the TF excitations, and my current guess is that this is exactly what we're seeing in the 1.5-6 Hz noisiness that is persistent to varying degrees in all of our TFs. Note that this does not need to be a sensor issue; it could be a mechanical issue whereby an instability rings up modes in this frequency range and this OSEM is just showing it to us in the modes that it rings up/actuates against the most, i.e. P, L and Y.

Investigation:

The first thing I did was take the BOSEM noise spectra, using F1 and F2 as stable controls. While slightly noisy, there was no perceivable discrepancy between the spectra (screenshot 2). There are some peaks and troughs around the problem 1.5-6 Hz area, though I doubt these are too related. In this case, we may have a mechanical instability on our hands.

The next thing I did was trend the F1 and F3 OSEMs to see if one is perceivably louder than the other, but they were quite close in amplitude and the same in frequency (0.4 Hz) (screenshot 3). I used the micron counts here.

The last and most interesting thing I did was take another look at the F3, F2 and F1 trends of the INMON counts (screenshot 1), and indeed they show that the F3 oscillation does take place at around 2 Hz, which is where our ring-up is loudest across the board. Combined with the clean spectra, this further indicates that there is a mechanical issue at these frequencies (1.5-6 Hz).
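
For reference, a minimal gwpy/matplotlib sketch of the kind of spectrum comparison used above; the test-stand channel names and GPS span are hypothetical placeholders.

from gwpy.timeseries import TimeSeriesDict
import matplotlib.pyplot as plt

# hypothetical test-stand channel names and GPS span, for illustration only
chans = ['X1:SUS-BBSS_M1_DAMP_F1_IN1_DQ',
         'X1:SUS-BBSS_M1_DAMP_F2_IN1_DQ',
         'X1:SUS-BBSS_M1_DAMP_F3_IN1_DQ']
start, end = 1404700000, 1404700600

data = TimeSeriesDict.get(chans, start, end)

fig, ax = plt.subplots()
for name, ts in data.items():
    asd = ts.asd(fftlength=32, overlap=16)
    ax.loglog(asd.frequencies.value, asd.value, label=name)
ax.axvspan(1.5, 6, alpha=0.2)        # band where the ring-up shows up
ax.set_xlabel('Frequency [Hz]')
ax.set_ylabel('ASD [counts/rtHz]')
ax.legend(fontsize='small')
plt.show()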

Rahul suggested that maybe the pitch adjuster was unlocked and causing some differential pitch as the OSEMs try to catch up; this may be the case, so I will go check it soon. The pitch adjuster may also affect another issue we are having, OSEM count drift (a separate alog, coming soon to a workstation near you).

Conclusion:

There must be an issue, not with the sensor systems, but mechanically. Given our recent history of hysteresis, this may be present at a less perceivable level. Another potential culprit is rising Staging Building temperatures differentially affecting the blades (Rahul's thought, since there was a measured 2°F change between yesterday and 3 days ago). Will figure out next steps pending discussion with the team.

Images attached to this report
X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 13:38, Thursday 11 July 2024 - last comment - 21:56, Thursday 11 July 2024(79032)
BBSS Transfer Functions and First Look Observations

Ibrahim, Oli

Attached are the most recent (07-10-2024) BBSS transfer functions, following the most recent RAL visit and rebuild. The Diaggui screenshots show the first round of measurements from 01-05-2024 as a reference. The PDF shows these results with respect to expectations from the dynamical model. Here is what we think so far:

Thoughts:

The nicest-sounding conclusion here is that something is wrong with the F3 OSEM, because it is the only OSEM and/or flag involved in L, P, Y (the less coherent measurements) but not in the others; F3 fluctuates and reacts much more erratically than the others, and in Y the F3 OSEM has a greater proportion of the actuation than in P and a higher magnitude than in L, so if there were something wrong with F3, we'd see it loudest in Y. This is exactly where we see the loudest ring-up. I will take spectra and upload them in another alog. This would account for all issues but the F1, LF and RT OSEM drift, which I will plot and share in a separate alog.

Images attached to this report
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 21:56, Thursday 11 July 2024 (79055)

We have also now made a transfer function comparison between the dynamical model, the first build (2024/01/05), and following the recent rebuild (2024/07/10). These plots were generated by running $(sussvn)/trunk/BBSS/Common/MatlabTools/plotallbbss_tfs_M1.m for cases 1 and 3 in the table. I've attached the results as a pdf, but the .fig files can also be found in the results directory, $(sussvn)/trunk/BBSS/Common/Results/allbbss_2024-Jan05vJuly10_X1SUSBS_M1/. These results have been committed to svn.

Non-image files attached to this comment
H1 ISC
jim.warner@LIGO.ORG - posted 13:32, Thursday 11 July 2024 - last comment - 15:30, Thursday 11 July 2024(79033)
HAM1 asc FF turned off 1404765055 for tuning

Turned off the HAM1 ASC feedforward from the command line with:

caput H1:HPI-HAM1_TTL4C_FF_INF_RX_GAIN 0 & caput H1:HPI-HAM1_TTL4C_FF_INF_RY_GAIN 0 & caput H1:HPI-HAM1_TTL4C_FF_INF_X_GAIN 0 & caput H1:HPI-HAM1_TTL4C_FF_INF_Z_GAIN 0 &

 

Comments related to this report
jim.warner@LIGO.ORG - 13:42, Thursday 11 July 2024 (79034)

Turned back on just after 1404765655.

caput H1:HPI-HAM1_TTL4C_FF_INF_RX_GAIN 1 & caput H1:HPI-HAM1_TTL4C_FF_INF_RY_GAIN 1 & caput H1:HPI-HAM1_TTL4C_FF_INF_X_GAIN 1 & caput H1:HPI-HAM1_TTL4C_FF_INF_Z_GAIN 1 &

jim.warner@LIGO.ORG - 15:30, Thursday 11 July 2024 (79041)

I've run Gabriele's script AM1_FF_CHARD_P_2024_04_12.ipynb from this alog on the window with the HAM1 ASC FF off. I don't have a good feel for when the script produces good or bad filters, so I wrote them to a copy of the seiproc foton file in my directory and plotted the current filters against the new filters. There are a lot of these, and many are very small in magnitude, so I'm not sure some of them are doing anything, but none of the new filters are radically different from the old ones. I'll install the new filters in seiproc in FM7 for all the filter banks with a date stamp of 711, but won't turn them on yet. We can try next week maybe, unless Gabriele or Elenna have a better plan.
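
For reference, a rough sketch of the kind of before/after comparison I mean, done with scipy rather than foton; the second-order sections below are hypothetical stand-ins, not the actual seiproc filters.

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 512.0   # assumed SEI model sample rate

# hypothetical filters standing in for the current and newly fitted FF filters
old_sos = signal.zpk2sos(*signal.butter(2, [0.5, 10.0], 'bandpass', fs=fs, output='zpk'))
new_sos = signal.zpk2sos(*signal.butter(2, [0.7, 12.0], 'bandpass', fs=fs, output='zpk'))

freqs = np.logspace(-1, np.log10(fs / 2), 500)
for label, sos in [('current', old_sos), ('new (7/11 fit)', new_sos)]:
    w, h = signal.sosfreqz(sos, worN=freqs, fs=fs)
    plt.loglog(w, np.abs(h), label=label)

plt.xlabel('Frequency [Hz]')
plt.ylabel('Magnitude')
plt.legend()
plt.show()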

Images attached to this comment
H1 SEI
jim.warner@LIGO.ORG - posted 13:05, Thursday 11 July 2024 (79031)
HAM3 3dl4c feed forward works, but needs tuning

On Tuesday, I added some spare vertical L4Cs under HAM3 to try the 3DL4C feedforward that we've used on HAM1. While the earthquake was ringing down this morning, I tried turning on some feedforward filters I came up with using Gabriele's interactive fitting python tool (SEI log). The feedforward to HEPI works and doesn't seem to affect the HEPI-to-ISI feedforward. There is some gain peaking at 1-3 Hz, so I will take a look at touching up the filter in that band, then try again.

The first attached image shows some trends for the test. The top two traces are the Z L4C and position loop output during the test. The third trace is the gain for the feedforward path; it's on when the gain is -1, off when the gain is 0. The bottom traces are the 1-3 Hz and 3-10 Hz log BLRMS for the ISI Z GS13s. The HEPI L4Cs and ISI GS13s both see reduced motion when the feedforward is turned on, but there is some increased 1-3 Hz motion on the ISI.

The second image shows some performance measurements; the top plot shows ASDs during the on (live traces) and off (refs) measurements. This was done while the 6.5 EQ in Canada was still ringing down, so the low-frequency part of the ASDs is kind of confusing, but there is clear improvement in both the HEPI BLND L4C and ISI GS13s above 3 Hz. The bottom plot shows transfer functions from the 3DL4C to the HEPI L4C and ISI GS13; these do a better job of accounting for the ground motion from the earthquake. Red (3DL4C to HEPI L4C TF) and blue (3DL4C to ISI GS13 TF) are with the 3DL4C feedforward on; green (3DL4C to HEPI L4C TF) and brown (3DL4C to ISI GS13 TF) are with it off. Sensor correction was off at this time, but the HEPI-to-ISI feedforward was on. It seems like the 3DL4C feedforward makes the HEPI and ISI motion worse by a factor of 3 or 4 at 1-2 Hz, but reduces the HEPI motion by factors of 5-10x from 4 to 50 Hz. The ISI motion isn't improved as much, maybe because the feedforward to HEPI is affecting the HEPI-to-ISI feedforward. I might try this on HAM4 or HAM5 next.
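
For reference, a minimal sketch of how a transfer function like these can be estimated from two witness channels (CSD over PSD) with scipy; the data below are synthetic stand-ins for the 3DL4C and HEPI L4C signals.

import numpy as np
from scipy import signal

fs = 256.0                      # assumed sample rate
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(0)

# synthetic "ground" signal; the fake HEPI L4C sees a filtered copy plus sensor noise
ground = rng.standard_normal(t.size)
b, a = signal.butter(2, 10.0, fs=fs)           # stand-in plant response
hepi_l4c = signal.lfilter(b, a, ground) + 0.1 * rng.standard_normal(t.size)

# transfer function estimate: CSD(x, y) / PSD(x)
f, pxy = signal.csd(ground, hepi_l4c, fs=fs, nperseg=4096)
_, pxx = signal.welch(ground, fs=fs, nperseg=4096)
tf = pxy / pxx

# |tf| recovers the stand-in plant magnitude where coherence is good
print(np.abs(tf[np.argmin(np.abs(f - 5.0))]))   # ~1 below the 10 Hz corner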

 

Images attached to this report
H1 SEI
jim.warner@LIGO.ORG - posted 11:41, Thursday 11 July 2024 (79028)
HAM suspension trips this morning caused by ISI trips, changing blends to mitigate

A couple of HAM triple suspensions tripped this morning while the 6.5 EQ off of Vancouver Island was rolling by. Looking at trends for SRM, M3 tripped because the ISI tripped and caused the optic to saturate the M3 OSEMs. The ISI trip happened after the peak of the ground motion, when some of the CPSs saturated due to the large low-frequency motion. I think we could have avoided this by switching to higher blends when SEI_ENV went to its LARGE_EQ state. TJ added this to the guardian, but it looks like HAM7 and HAM8 might not be stable with those blends. I'll have to do some measurements on those two chambers to see what is causing those blends to be unstable, when I have time.

The first attached trend is a short window around the time of the ISI trip. The M3 OSEMs don't see much motion until the ISI trips on the CPS, and SRM doesn't trip until a bit later, when the ISI starts saturating the GS13s because of the trip.

The second image shows the full timeline. The middle row shows that the peak of the earthquake has more or less passed, but the ISI CPSs are still moving quite a lot. The GS13s on the bottom row don't saturate until after the ISI trips on the CPS.

Images attached to this report
H1 CAL
louis.dartez@LIGO.ORG - posted 07:10, Thursday 11 July 2024 - last comment - 22:48, Friday 12 July 2024(79019)
testing patched simulines version during next calibration measurement
We're running a patched version of simuLines during the next calibration measurement run. The patch (attached) was provided by Erik to try to get around what we think are awg issues introduced (or exacerbated) by the recent awg server updates (mentioned in LHO:78757).

Operators: there is nothing special to do; just follow the normal routine, as I applied the patch changes in place. Depending on the results of this test, I will either roll them back or work with Vlad to make them permanent (at least for LHO).
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 17:16, Thursday 11 July 2024 (79048)

Simulines was run right after getting back to NOMINAL_LOW_NOISE. The script ran all the way until after "Commencing data processing", where it then gave:

Traceback (most recent call last):
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 712, in
    run(args.inputFile, args.outPath, args.record)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 205, in run
    digestedObj[scan] = digestData(results[scan], data)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 621, in digestData
    coh = np.float64( cohArray[index] )
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/series.py", line 609, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/array.py", line 199, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/astropy/units/quantity.py", line 1302, in __getitem__
    out = super().__getitem__(key)
IndexError: index 3074 is out of bounds for axis 0 with size 0
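
For what it's worth, the crash is an empty coherence array being indexed; a defensive guard along these lines (not the fix that was actually applied) would turn it into a warning rather than a traceback:

import logging
import numpy as np

def safe_coherence(cohArray, index):
    """Return cohArray[index], or NaN with a warning if the array is empty/too short."""
    if len(cohArray) <= index:
        logging.warning("coherence array has %d samples, wanted index %d; returning NaN",
                        len(cohArray), index)
        return np.float64('nan')
    return np.float64(cohArray[index])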

erik.vonreis@LIGO.ORG - 17:18, Thursday 11 July 2024 (79049)

All five excitations looked good on ndscope during the run.

erik.vonreis@LIGO.ORG - 17:22, Thursday 11 July 2024 (79050)

Also applied the following patch to simuLines.py before the run. The purpose is to extend the sine definition so that discontinuities don't happen if a stop command is executed late. If stop commands are all executed on time (the expected behavior), then this change will have no effect.

 

diff --git a/simuLines.py b/simuLines.py
index 6925cb5..cd2ccc3 100755
--- a/simuLines.py
+++ b/simuLines.py
@@ -468,7 +468,7 @@ def SignalInjection(resultobj, freqAmp):
     
     #TODO: does this command take time to send, that is needed to add to timeWindowStart and fullDuration?
     #Testing: Yes. Some fraction of a second. adding 0.1 seconds to assure smooth rampDown
-    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 1)
+    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 10)
     
     def signal_handler(signal, frame):
         '''

 

vladimir.bossilkov@LIGO.ORG - 07:33, Friday 12 July 2024 (79059)

Here's what I did:

  • Cloned simulines in my home directory
  • Copied the currently used ini file to that directory, overwriting default file [cp /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini /ligo/home/vladimir.bossilkov/gitProjects/simulines/simulines/settings_h1.ini]
  • reran simulines on the log file [./simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/20240711T234232Z.log]

No special environment was used. Output:
2024-07-12 14:28:43,692 | WARNING | It is assumed you are parising a log file. Reconstruction of hdf5 files will use current INI file.
2024-07-12 14:28:43,692 | WARNING | If you used a different INI file for the injection you are reconstructing, you need to replace the default INI file.
2024-07-12 14:28:43,692 | WARNING | Fetching data more than a couple of months old might try to fetch from tape. Please use the NDS2_CLIENT_ALLOW_DATA_ON_TAPE=1 environment variable.
2024-07-12 14:28:43,692 | INFO | If you alter the scan parameters (ramp times, cycles run, min seconds per scan, averages), rerun the INI settings generator. DO NOT hand modify the ini file.
2024-07-12 14:28:43,693 | INFO | Parsing Log file for injection start and end timestamps
2024-07-12 14:28:43,701 | INFO | Commencing data processing.
2024-07-12 14:28:55,745 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240711T234232Z.hdf5
2024-07-12 14:29:11,685 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240711T234232Z.hdf5
2024-07-12 14:29:20,343 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240711T234232Z.hdf5
2024-07-12 14:29:29,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240711T234232Z.hdf5
2024-07-12 14:29:38,634 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240711T234232Z.hdf5


Seems good to me. Were you guys accidentally using some conda environment when running simulines yesterday? When running this I was in "cds-testing" (which is the default?!). I have had this error in the past due to borked environments [in particular scipy, which is the underlying code responsible for the coherence], which is why I implemented the log parsing function.
The fact that the crash was on the coherence and not the preceding transfer function calculation rings the alarm bell that scipy is the issue. We experienced this once at LLO with a single bad conda environment that was corrected, though I stubbornly ran with a very old environment for a long time afterwards to make sure that error didn't come up.

I ran this remotely so I can't look at the PDF if I run 'pydarm report'.
I'll be in touch over TeamSpeak to get that resolved.

ryan.crouch@LIGO.ORG - 08:00, Friday 12 July 2024 (79061)

Attaching the calibration report

Non-image files attached to this comment
vladimir.bossilkov@LIGO.ORG - 08:06, Friday 12 July 2024 (79062)

There are a number of WAY-out-there data points in this report.

Did you guys also forget to turn off the calibration lines when you ran it?

Not marking this report as valid.

louis.dartez@LIGO.ORG - 08:34, Friday 12 July 2024 (79065)
Right, there was no expectation of this dataset being valid. The IFO was not thermalized and the cal lines remained on.

The goal of this exercise was to demonstrate that the patched simulines version at LHO can successfully drive calibration measurements, and to that end the exercise was successful. LHO has recovered simulines functionality and we can lay to rest, for now, the scary notion of regressing back to our 3-hour-long measurement scheme.
erik.vonreis@LIGO.ORG - 22:48, Friday 12 July 2024 (79089)

The run was probably done in the 'cds' environment. At LHO, 'cds' and 'cds-testing' are currently identical. I don't know the situation at LLO, but LLO typically runs with an older environment than LHO.

Since it's hard to stay with fixed versions on conda-forge, it's likely several packages are newer at LHO vs. LLO cds environments.

H1 SUS (SUS)
rahul.kumar@LIGO.ORG - posted 11:18, Wednesday 10 July 2024 - last comment - 15:35, Thursday 11 July 2024(79003)
New settings for damping rung up violin mode ITMY mode 05 and 06

The following settings seem to be working for now, and I will commit them to lscparams after a couple of IFO lock stretches.

New settings (ITMY05/06): FM5 FM6 FM7 FM10, gain +0.01 (new phase -90 degrees); might increase the gain later depending on how slow the damping is.

Nominal settings (ITMY05/06): FM6 FM8 FM10 Gain +0.02 (phase -30deg)

Given below are the settings I tried this morning, which did not work:

1. no phase, 0.01 gain - increase
2. -30 phase, -0.01 gain - increase
3. +30 phase, 0.01 gain - increase
4. -30 phase, 0.01 gain - IY05 decreasing (both filters) IY06 increasing (both filters)
5. -60 phase, 0.01 gain - IY05 decreasing (both filters) IY06 increasing (only in narrow filter)

---

After talking to TJ, I have set the gain to zero in lscparams and saved it, but not loaded it since we are OBSERVING. Will load it once there is a target of opportunity.

Images attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 16:30, Wednesday 10 July 2024 (79011)

The DARM spectra attached below show that both modes are slowly decreasing; next I will try bumping the gain up to 0.02.

Images attached to this comment
rahul.kumar@LIGO.ORG - 15:35, Thursday 11 July 2024 (79043)SUS

ITMY 05/06 - FM5 FM6 FM7 FM10 Gain +0.02 has been saved in lscparams and violin mode Guardian has been loaded for the next lock.

LHO VE
david.barker@LIGO.ORG - posted 13:04, Tuesday 09 July 2024 - last comment - 11:43, Thursday 11 July 2024(78967)
CDS Maintenance Summary: Tuesday 9th July 2024

WP11970 h1susex 28AO32 DAC

Fil, Marc, Erik:

Fil connected the upper set of 16 DAC channels to the first 16 ADC channels and verified there were no bad channels in this block. At this point there were two bad channels; chan4 (5th chan) and chan11 (12th chan).

Later Marc and Erik powered the system down and replaced the interface card, its main ribbon cable back to the DAC and the first header plate including its ribbon to the interface card. What was not replaced was the DAC card itself and the top two header plates (Fil had shown the upper 16 channels had no issues). At this point there were no bad channels, showing the problem was most probably in the interface card.

No DAQ restart was required.

WP11969 h1iopomc0 addition of matrix and filters

Jeff, Erik, Dave:

We installed a new h1iopomc0 model on h1omc0. This added a mux matrix and filters to the model, which in turn added slow channels to the DAQ INI file. DAQ restart was required.

WP11972 HEPI HAM3

Jim, Dave:

A new h1hpiham3 model was installed. The new model wired up some ADC channels. No DAQ restart was required.

DAQ Restart

Erik, Jeff, Dave:

The DAQ was restarted soon after the new h1iopomc0 model was installed. We held off the DAQ restart until the new filters were populated, to verify that the IOP did not run out of processing time; it didn't, going from 9 µs to 12 µs.

The DAQ restart had several issues:

both GDS needed a second restart for channel configuration

FW1 spontaneously restarted itself after running for 9.5 minutes.

WP11965 DTS login machine OS upgrade

Erik:

Erik upgraded x1dtslogin. When it was back in operation the DTS environment channels were restored to CDS by restarting dts_tunnel.service and dts_env.service on cdsioc0.

Comments related to this report
david.barker@LIGO.ORG - 13:29, Tuesday 09 July 2024 (78970)

Tue09Jul2024
LOC TIME HOSTNAME     MODEL/REBOOT
09:50:32 h1omc0       h1iopomc0   <<< Jeff's new IOP model
09:50:46 h1omc0       h1omc       
09:51:00 h1omc0       h1omcpi     


09:52:18 h1seih23     h1hpiham3   <<< Jim's new HEPI model


10:10:55 h1daqdc0     [DAQ] <<< 0-leg restart for h1iopomc0 model
10:11:08 h1daqfw0     [DAQ]
10:11:09 h1daqnds0    [DAQ]
10:11:09 h1daqtw0     [DAQ]
10:11:17 h1daqgds0    [DAQ]
10:11:48 h1daqgds0    [DAQ] <<< 2nd restart needed


10:14:02 h1daqdc1     [DAQ] <<< 1-leg restart
10:14:15 h1daqfw1     [DAQ]
10:14:15 h1daqtw1     [DAQ]
10:14:16 h1daqnds1    [DAQ]
10:14:24 h1daqgds1    [DAQ]
10:14:57 h1daqgds1    [DAQ] <<< 2nd restart needed


10:23:07 h1daqfw1     [DAQ] <<< FW1 spontaneous restart


11:54:35 h1susex      h1iopsusex  <<< 28AO32 DAC work in IO Chassis
11:54:48 h1susex      h1susetmx   
11:55:01 h1susex      h1sustmsx   
11:55:14 h1susex      h1susetmxpi 
 

marc.pirello@LIGO.ORG - 13:51, Tuesday 09 July 2024 (78973)

Power Spectrum of channels 0 through 15.  No common mode issues detected. 

Channels 3 & 9 are elevated below 10 Hz.

It is unclear if these are due to the PEM ADC or the output of the DAC.  More testing is needed.

 

Images attached to this comment
marc.pirello@LIGO.ORG - 10:06, Wednesday 10 July 2024 (79000)

New plot of the first 16 channels, with offsets added to center the output at zero. When the offsets were turned on, the 6 Hz lines went away; I believe these were due to uninitialized DAC channels. This plot also contains the empty upper 16 channels on the PEM ADC chassis as a noise comparison, with nothing attached to the ADC. Channel 3 is still noisy below 10 Hz.

Images attached to this comment
marc.pirello@LIGO.ORG - 11:43, Thursday 11 July 2024 (79030)

New plot of the second 16 channels (ports C & D), with offsets added to center the output at zero. This plot also contains the empty lower 16 channels on the PEM ADC chassis as a noise comparison, with nothing attached to the ADC. Channel 3 is still noisy below 10 Hz, signifying this to be an ADC issue, not necessarily a DAC issue. These plots seem to imply that the DAC noise density while driving zero volts is well below the ADC noise floor in this frequency range.

Images attached to this comment