LHO General (ISC, OpsInfo, SYS)
ryan.short@LIGO.ORG - posted 01:12, Saturday 13 July 2024 (79090)
Ops Eve Shift Summary

TITLE: 07/13 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
SHIFT SUMMARY:  H1 has been down since the lockloss at 16:44 UTC earlier today.

Since midshift, Sheila was working with me on diagnosing locking issues. After locking DRMI, we turned on the DRMI ASC loops one-by-one until we found that turning on the SRC1_P&Y loops caused the buildups and camera image on AS AIR to get worse, so we've commented out using those loops in the ISC_DRMI Guardian. Locking DRMI after that and running DRMI ASC all at once made DRMI look much better and we didn't see the oscillations and saturations that I was seeing earlier. We also commented out the use of SRC1 in ENGAGE_AS_FOR_FULL_IFO since Sheila and Keita saw earlier in the day that it was not minimizing POP90 well at that point.
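Schematically, the change amounts to skipping the SRC1 step when the DRMI ASC loops are engaged. Below is a minimal sketch of the kind of Guardian code involved, assuming the loops are engaged by writing gains through ezca; the loop list, channel names, and gain values are placeholders, not the actual ISC_DRMI source.

# Hypothetical sketch only -- not the real ISC_DRMI Guardian code. In Guardian,
# the 'ezca' object is provided by the framework, and ASC loops are typically
# engaged by writing gains to their filter-module channels.
for dof in ['MICH_P', 'MICH_Y', 'PRC2_P', 'PRC2_Y']:
    ezca['ASC-%s_GAIN' % dof] = 1.0
# SRC1 loops commented out 2024-07-12: engaging them degraded the DRMI
# buildups and the AS AIR camera image.
# for dof in ['SRC1_P', 'SRC1_Y']:
#     ezca['ASC-%s_GAIN' % dof] = 1.0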

Once we made it past DRMI, we then encountered our next issue at CHECK_AS_SHUTTERS. The FAST_SHUTTER Guardian ran through its shutter test, then jumped to SHUTTER_FAILURE, where it reported "Fast shutter failed tests! Do not power up!" and the AS port protection screen showed the error messages "Protection in fault" and "Power interlock is on". Unsure of what these errors meant, and having been unsuccessful in running the fast shutter test again manually, I called Fil for support. He had me power cycle the fast shutter driver chassis in the ISC rack by HAM6, which had been showing gibberish on its LCD screen (a previously seen issue); after the power cycle the screen looked better and showed 255V DC charge. This didn't fix the failing test, so I also power cycled the shutter logic chassis at the bottom of the rack, but that didn't solve it either.

Later, Keita discovered that the shutter test was failing because when the shutter closes, the signal on ASC-AS_B_DC_NSUM was not low enough, so Guardian didn't think it was closed even though it was. By just pressing the open and close buttons manually on the fast shutter screen, the shutter was able to run through its test and the Beckhoff errors cleared. Nothing was fundamentally changed here to fix this, aside from adding some log statements to the FAST_SHUTTER Guardian, but at least it sounds less like an electronics issue at this point. Keita commented that this perhaps isn't an alignment issue, unless some alignment in HAM6 changed recently.
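For context, the Guardian decides "closed" from a power threshold on that signal. A minimal sketch of that kind of check follows; the channel readback suffix and the threshold value are assumptions for illustration, not the actual FAST_SHUTTER code.

# Illustrative only -- not the real FAST_SHUTTER Guardian logic. 'ezca' is
# provided by the Guardian framework; threshold and channel suffix are assumed.
CLOSED_THRESHOLD = 0.1  # placeholder value

def shutter_looks_closed():
    # Shutter is declared closed only if the AS_B DC sum drops below threshold.
    return ezca['ASC-AS_B_DC_NSUM_OUTPUT'] < CLOSED_THRESHOLD

# If shutter_looks_closed() stays False after the shutter fires, the test is
# declared a failure ("Fast shutter failed tests! Do not power up!") and the
# Guardian jumps to SHUTTER_FAILURE, even if the shutter physically closed.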

Even though the issue with the fast shutter seems to be dealt with (though untested during a lock acquisition), there are still IFO locking issues that Sheila and I were unable to fully address this evening because of the fast shutter work, including the 0.5 Hz oscillation and lockloss when going to full power, and the strange image on the AS AIR camera; these will need to be addressed in the morning. I'm leaving H1 DOWN for the night until more thorough investigations can be done.

H1 SYS
sheila.dwyer@LIGO.ORG - posted 22:25, Friday 12 July 2024 (79087)
fast shutter did shut in our last lockloss

Ryan S and Filiberto (remotely) are troubleshooting why the fast shutter isn't working now. I just had a look at our last high-power lockloss, from MAX power at around 6:45 local time. In that lockloss the fast shutter did function and protected the AS port.

We've had a strange-looking AS air camera all afternoon; a screenshot of the camera with DRMI locked is attached. Hopefully the shutter issue is just the controller needing to be reset. In the past, when the shutter wires were in the beam path towards OM1, the AS camera image did look strange.

Images attached to this report
H1 General
ryan.short@LIGO.ORG - posted 20:39, Friday 12 July 2024 (79086)
Ops Eve Mid Shift Report

H1 has been unable to relock so far this shift. After running an initial alignment at the start of the evening, we were able to reach MAX_POWER fairly consistently before a 0.5 Hz oscillation started in ASC MICH_P (and, smaller, in DHARD_P and some SRC_P), which would cause a lockloss. After one of these locklosses, when relocking DRMI, there would be many SRM and SR2 saturation callouts after DRMI ASC turned on, and DRMI would lose lock. I dropped back down to run another initial alignment, paying closer attention this time, and everything went smoothly aside from SRC alignment, where no matter what SRM and SR2 moves I made, we could not acquire SRY. Trending the OSEMs, the optics in the SRC are in generally the same place as they are when locking normally, so I'm unsure at this point where the misalignment is (and I'm hesitant to move SR3 at this stage). The AS AIR camera has looked quite bad the whole time as well.

Starting the locking sequence to see what things look like now.

H1 General
ryan.crouch@LIGO.ORG - posted 16:33, Friday 12 July 2024 (79076)
OPS Friday day shift summary

TITLE: 07/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Currently finishing up an initial alignment (IA) to align SRM.

Lock1:

Lock2:

LOG:                                                                                                           

Start Time | System | Name  | Location | Lazer_Haz | Task                       | Time End
18:30      | ASC    | Keita | HAM6     | N         | Install ADC ASC component  | 18:36
LHO General
ryan.short@LIGO.ORG - posted 16:06, Friday 12 July 2024 (79084)
Ops Eve Shift Start

TITLE: 07/12 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 8mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY: Sounds like H1 has been having some locking troubles today, which I'll be aware of as we relock. Currently H1 is in the process of locking PRMI.

X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 15:57, Friday 12 July 2024 (79083)
BBSS New Transfer Functions with Temperature Sag Offsets Applied

Ibrahim

Attached is the new set of BBSS TF measurement screenshots with slightly adjusted offsets applied. These offsets account for the pitch drift we're seeing in F1 and for any other non-centered OSEM behavior due to temperature sagging. These are also the first TFs since moving the EQ stops back and re-confirming that nothing is rubbing or touching. See alog 79079 for a summary of what was done right before these were taken. Screenshot 1 shows the new offsets that are currently in. Screenshot 2 shows the old offsets that were from the OLV script - if the temperature stops varying to this degree, it may be worth reverting these.

The red traces are results as of today (07-12-2024).

The green results are from earlier this week (07-10-2024); they were presented in alog 79032 and had the issues mentioned in alog 79042 and alog 79036.

The blue results are the results from 01-05-2024, which we were using as a reference for expected coherence.

The new measurements are saved under a new file in the Data folder: /ligo/svncommon/SusSVN/sus/trunk/BBSS/X1/BS/SAGM1/Data/2024-07-12_1400_tfs. I've svn committed this Data file so it should be visible on both the X1 and normal workstations.

Interpretations:

The new transfer functions look much better than the ones from two days ago, meaning that moving the EQ stops back and adjusting for the temperature-driven sag was extremely fruitful. We can clearly see cleaner results, with the noise in the previously problematic 1.5-6 Hz region almost eliminated, and the results are markedly more coherent, comparable to the pre-rebuild state from 01-05. Now we can begin interpreting these results seriously with respect to the dynamic model, which I will leave for next week.

Next Steps (Next Week):

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:36, Friday 12 July 2024 (79082)
DARM offset causing locklosses again

Keita, Sheila, Ryan C, Ibrahim

Two previous alogs about this problem: 78332 78258

We tried to open the SRC1 loop and align SRM manually; we were able to decrease POP90 that way, but when we again turned on the DARM offset we saw the same drop in powers that precedes locklosses (see screenshot; the powers recover where we manually turned off the DARM offset).

We were able to move past this for today by lowering the DARM offset used from 9e-5 to 6e-5 in ISC_LOCK DARM_OFFSET.  Then to get the OMC guardian to lock we had to lower the threshold on the peak height in OMC_LOCK FIND_CARRIER line 328 to 7.  Then we had to lower the check in OMC_LOCKED from 10 to 7, OMC_LOCK line 512.
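Schematically, the three changes amount to the following; the structure below is illustrative only, with the values taken from the text, and is not the actual ISC_LOCK.py / OMC_LOCK.py code.

# Illustrative summary of today's edits -- not the real Guardian source.
DARM_OFFSET = 6e-5             # ISC_LOCK DARM_OFFSET state; was 9e-5
FIND_CARRIER_PEAK_HEIGHT = 7   # OMC_LOCK FIND_CARRIER, line ~328; lowered to 7
OMC_LOCKED_CHECK = 7           # OMC_LOCK OMC_LOCKED, line ~512; was 10
# With the smaller DARM offset there is presumably less carrier light on the
# OMC, so both the carrier-peak detection and the locked-power check needed
# lower thresholds to pass.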

We manually reengaged the SRC1 ASC loops, and are now powering up.

 

Images attached to this report
H1 CAL
jeffrey.kissel@LIGO.ORG - posted 15:27, Friday 12 July 2024 - last comment - 10:33, Wednesday 17 July 2024(79081)
PCALX :: Long-term High-Frequency Sensing Function TF vs. PCAL Spot Position Systematics Study
J. Kissel, L. Dartez, F. Llamas

As Louis and I were wracking our brains for all the changes to the IFO that might have impacted the IFO's calibration in 2024, in order to update / maintain T2300297, we realized that Francisco's work -- changing the PCAL "inner beam" spot position of PCALX to explore and validate models of systematic errors in the PCAL system -- would potentially impact the measurements that are constantly running on PCALX: the so-called "roaming lines," a long-duration, repeating sweep of the sensing function between 1 and 5 kHz. Because this long-duration sweep relies on nominal low noise data, the speed at which the sweep frequency advances depends on the IFO's duty cycle and is thus irregular; sometimes the sweep completes in 6 days, other times it takes 10 days or more.

To facilitate this research, I plot the trend of the PCALX oscillator frequency that's used to define the sweep frequency over time and overlay Francisco et al.'s PCALX spot moves.

And just so you can join us in our worry: while Francisco's work (see latest in LHO:78964) is showing 0.03%-level changes in the comparison between PCALX and PCALY at 283.91 and 284.01 Hz respectively (via plots like this), the roaming lines measurement is PCALX only and the sweep runs from 1 to 5 kHz. It's been known for a long time that PCAL spot position changes of ~5 [mm] at RX, or ~2.5 [mm] scale at the test mass, can cause 1-to-10%-level errors in reported displacement values at 3, 4, 5 kHz (largest at the highest frequencies) -- see, for example, figures 2.30 through 2.35 in T2300381.

Stay tuned!

This aLOG does not suggest we've done the legwork to quantify that we've seen a problem; it is just the start of such a study.
Images attached to this report
Comments related to this report
richard.savage@LIGO.ORG - 10:33, Wednesday 17 July 2024 (79191)CAL

Jeff wrote:

"...while Francisco's work (see latest in LHO:78964) is showing 0.03%-level changes..."

I think he meant to write 0.3%-level rather than 0.03%.

X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 13:17, Friday 12 July 2024 (79079)
BBSS Post-TF Diagnostic Check-up

Ibrahim

Today, I went into the staging building to see if there were any visible or otherwise apparent reasons for our L, P, Y M1 Transfer Function incoherence.

Here's what I did:

Here's what I found:

Here's what I'm going to do:

I'm going to set an offset in order to "fake" center the BOSEMs and then take data, but I expect that with a pitch instability the results won't be too coherent either. We'll see. We're at least now assured that there won't be any rubbing or contact.

What I'm going to do later:

We will fix the perceivable Roll issue on Monday when there are more hands on deck. I'll also consider re-centering the OSEMs at noon and measuring the temperature at that time so we have some sort of informed reference.

Images attached to this report
H1 PSL
ryan.crouch@LIGO.ORG - posted 10:58, Friday 12 July 2024 (79075)
PSL Status Report - Weekly


Laser Status:
    NPRO output power is 1.821W (nominal ~2W)
    AMP1 output power is 65.27W (nominal ~70W)
    AMP2 output power is 138.5W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 2 days, 23 hr 37 minutes
    Reflected power = 17.94W
    Transmitted power = 107.8W
    PowerSum = 125.8W

FSS:
    It has been locked for 0 days 0 hr and 49 min
    TPD[V] = 0.7949V

ISS:
    The diffracted power is around 3.2%
    Last saturation event was 0 days 0 hours and 49 minutes ago


Possible Issues:
    ISS diffracted power is high; it seems like it's been high since the early morning of this past maintenance day.

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 09:52, Friday 12 July 2024 - last comment - 09:05, Monday 15 July 2024(79073)
Lockloss at 16:44 UTC

16:44 UTC lockloss. PRCL was oscillating at ~3.6 Hz.

Comments related to this report
ryan.crouch@LIGO.ORG - 12:55, Friday 12 July 2024 (79078)

We've lost it at PREP_DC_READOUT twice in a row, during different points of the OMC locking process. The lockloss tool tags ADS_EXCURSION.

camilla.compton@LIGO.ORG - 09:05, Monday 15 July 2024 (79111)

We turned on a PRCL FF the day before: 79035. But this 3.6 Hz PRCL wobble is normal; it was constant throughout the lock (plot) and present in locks before the feedforward was installed (example).

This lockloss looked very normal, with AS_A and then the IMC losing lock, as usual (plot).

Images attached to this comment
H1 ISC
ryan.crouch@LIGO.ORG - posted 09:43, Friday 12 July 2024 - last comment - 10:35, Friday 12 July 2024(79069)
Range checks

Ryan C, Sheila D

15:32 to 16:01 UTC we dropped Observing to do some range checks/investigations.

Sheila and I did some range checks following the wiki. Running the coherence check showed high coherence with CHARD, and the SQZer BLRMS did not look as good as in previous locks, particularly in the 10-20, 20-34, and 60-100 Hz bands. Since LLO was down, we decided to drop out of observing to run the SQZ alignment and angle scans, and I concurrently ran the A2L_min script (TJ's alog 78552). After these finished we also took 5 minutes of NO_SQZ time starting at 15:49 UTC, which showed that the extra noise is not coming from the SQZer. We gained a few Mpc from the SQZ scan and A2L_min script.
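The coherence check is conceptually just DARM-vs-auxiliary-channel coherence; a minimal sketch of that kind of check with gwpy is below. The channel names, GPS times, and FFT parameters are examples for illustration, not the wiki script itself.

from gwpy.timeseries import TimeSeries

# Example only: DARM/CHARD coherence over a stretch of lock.
start, end = 1404800000, 1404800600   # placeholder GPS times
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
chard = TimeSeries.get('H1:ASC-CHARD_P_OUT_DQ', start, end)
darm = darm.resample(chard.sample_rate.value)   # match rates before coherence
coh = darm.coherence(chard, fftlength=8, overlap=4)
# High coherence in a band suggests that loop may be limiting the range there.
print(coh.crop(10, 100).max())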

Coherence comparison before and after SQZ and A2L checks

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 10:19, Friday 12 July 2024 (79074)

Quickly checked temperature.

The LVEA diurnal temperature swing has been larger than usual over the past 4 days or so (e.g. zone 1B is swinging 0.33 C peak-to-peak instead of 0.15), but I don't see a correlation with the range drop.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 10:35, Friday 12 July 2024 (79072)

Attached is a spectrum comparison between a 159 Mpc time from last night and a 149 Mpc time this morning. The issue is between 30-70 Hz, where Ryan's coherence plot doesn't show much.

The second attachment shows that today's poor range time is similar to what happened after Tuesday maintenance (78954). 

We tried the no-squeezing time because we've seen in the past that the squeezer was adding noise around this frequency range: 78033, 77969, 77888. Those past times were when we had moved the spot position on PR2, and we also had the intermittent squeezing problems that we seem not to have this week. We reverted the PR2 spot moves because of this suspicion (77895, 78012) and the issue seemed to go away, but we thought it might simply be that the intermittent squeezer issue happened to get better.

The third attached screenshot shows similar trends for the times when we moved PR3 in May to move the spot on PR2 (77949); that was a ~3 times larger move than the one we did last week (78878).

Images attached to this comment
H1 ISC (ISC)
jennifer.wright@LIGO.ORG - posted 15:55, Thursday 11 July 2024 - last comment - 15:26, Friday 19 July 2024(79045)
DARM Offset step with hot OM2

We were only about two and a half hours into lock when I did this test, due to our earthquake lockloss this morning.

I ran the

python auto_darm_offset_step.py

in /ligo/gitcommon/labutils/darm_offset_step

Starting at GPS 1404768828

See attached image.

Analysis to follow.

Returned DARM offset H1:OMC-READOUT_X0_OFFSET to 10.941038 (nominal) at 2024 Jul 11 21:47:58 UTC (GPS 1404769696)

DARM offset moves recorded to 
data/darm_offset_steps_2024_Jul_11_21_33_30_UTC.txt

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:25, Friday 12 July 2024 (79080)

Here is the calculated Optical gain vs dcpd power and DARM offset vs optical gain as calculated by ligo/gitcommon/labutils/darm_offset_step/plot_darm_optical_gain_vs_dcpd_sum.py

The contrast defect is calculated from the height of the 410 Hz PCAL line at each offset step in the output DCPD, and is 1.014 +/- 0.033 mW.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 15:58, Monday 15 July 2024 (79130)

I added an additional plotting step to the code, and it now makes this plot, which shows how the power at AS_C changes with the DARM offset power at the DCPDs. The slope of this graph tells us what fraction of the power is lost between the input to HAM6 (AS_C) and the DCPDs.

P_AS = 1.770*P_DCPD + 606.5mW

Where the second term is light that will be rejected by the OMC and that which gets through the OMC but is insensitive to DARM length changes.

The loss term between the anti-symmetric port and the DCPDs is 1/1.77 = 0.565
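As a sanity check of the quoted numbers, here is the form of the fit and the throughput arithmetic; the arrays are placeholders generated from the quoted relation, standing in for the recorded offset-step data.

import numpy as np

# Placeholder data built from P_AS = 1.770*P_DCPD + 606.5 mW, just to show
# the form of the fit; real values come from the offset-step records.
p_dcpd_mW = np.array([20.0, 30.0, 40.0, 50.0])
p_as_mW = 1.770 * p_dcpd_mW + 606.5
slope, intercept = np.polyfit(p_dcpd_mW, p_as_mW, 1)
print(slope, intercept)   # ~1.770, ~606.5
print(1.0 / slope)        # ~0.565: fraction of the DARM-sensitive AS-port
                          # light that survives from AS_C to the DCPDs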

Non-image files attached to this comment
H1 CAL
louis.dartez@LIGO.ORG - posted 07:10, Thursday 11 July 2024 - last comment - 22:48, Friday 12 July 2024(79019)
testing patched simulines version during next calibration measurement
We're running a patched version of simuLines during the next calibration measurement run. The patch (attached) was provided by Erik to try to get around what we think are awg issues introduced (or exacerbated) by the recent awg server updates (mentioned in LHO:78757).

Operators: there is nothing special to do; just follow the normal routine, as I applied the patch changes in place. Depending on the results of this test, I will either roll them back or work with Vlad to make them permanent (at least for LHO).
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 17:16, Thursday 11 July 2024 (79048)

Simulines was run right after getting back to NOMINAL_LOW_NOISE. The script ran all the way until after "Commencing data processing", where it then gave:

Traceback (most recent call last):
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 712, in
    run(args.inputFile, args.outPath, args.record)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 205, in run
    digestedObj[scan] = digestData(results[scan], data)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 621, in digestData
    coh = np.float64( cohArray[index] )
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/series.py", line 609, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/array.py", line 199, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/astropy/units/quantity.py", line 1302, in __getitem__
    out = super().__getitem__(key)
IndexError: index 3074 is out of bounds for axis 0 with size 0

erik.vonreis@LIGO.ORG - 17:18, Thursday 11 July 2024 (79049)

All five excitations looked good on ndscope during the run.

erik.vonreis@LIGO.ORG - 17:22, Thursday 11 July 2024 (79050)

Also applied the following patch to simuLines.py before the run.  The purpose being to extend the sine definition so that discontinuities don't happen if a stop command is executed late.  If stop commands are all executed on time (the expected behavior), then this change will have no effect.

 

diff --git a/simuLines.py b/simuLines.py
index 6925cb5..cd2ccc3 100755
--- a/simuLines.py
+++ b/simuLines.py
@@ -468,7 +468,7 @@ def SignalInjection(resultobj, freqAmp):
     
     #TODO: does this command take time to send, that is needed to add to timeWindowStart and fullDuration?
     #Testing: Yes. Some fraction of a second. adding 0.1 seconds to assure smooth rampDown
-    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 1)
+    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 10)
     
     def signal_handler(signal, frame):
         '''

 

vladimir.bossilkov@LIGO.ORG - 07:33, Friday 12 July 2024 (79059)

Here's what I did:

  • Cloned simulines in my home directory
  • Copied the currently used ini file to that directory, overwriting default file [cp /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini /ligo/home/vladimir.bossilkov/gitProjects/simulines/simulines/settings_h1.ini]
  • reran simulines on the log file [./simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/20240711T234232Z.log]

No special environment was used. Output:
2024-07-12 14:28:43,692 | WARNING | It is assumed you are parising a log file. Reconstruction of hdf5 files will use current INI file.
2024-07-12 14:28:43,692 | WARNING | If you used a different INI file for the injection you are reconstructing, you need to replace the default INI file.
2024-07-12 14:28:43,692 | WARNING | Fetching data more than a couple of months old might try to fetch from tape. Please use the NDS2_CLIENT_ALLOW_DATA_ON_TAPE=1 environment variable.
2024-07-12 14:28:43,692 | INFO | If you alter the scan parameters (ramp times, cycles run, min seconds per scan, averages), rerun the INI settings generator. DO NOT hand modify the ini file.
2024-07-12 14:28:43,693 | INFO | Parsing Log file for injection start and end timestamps
2024-07-12 14:28:43,701 | INFO | Commencing data processing.
2024-07-12 14:28:55,745 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240711T234232Z.hdf5
2024-07-12 14:29:11,685 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240711T234232Z.hdf5
2024-07-12 14:29:20,343 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240711T234232Z.hdf5
2024-07-12 14:29:29,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240711T234232Z.hdf5
2024-07-12 14:29:38,634 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240711T234232Z.hdf5


Seems good to me. Were you guys accidentally using some conda environment when running simulines yesterday? When running this I was in "cds-testing" (which is the default?!). I have had this error in the past due to borked environments [in particular scipy, which is the underlying code responsible for the coherence calculation], which is why I implemented the log-parsing function.
The fact that the crash was on coherence and not the preceding transfer function calculation rings the alarm bell that scipy is the issue. We experienced this once at LLO with a single bad conda environment that was corrected, though I stubbornly ran with a very old environment for a long time afterwards to make sure that error didn't come up.
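For what it's worth, the traceback is consistent with the coherence array simply coming back empty; a minimal illustration of that failure mode (plain numpy, not the simuLines/gwpy code itself):

import numpy as np

# If a broken scipy/gwpy build hands back an empty coherence array, the
# later per-frequency index lookup fails exactly like the traceback above.
cohArray = np.array([])
try:
    coh = np.float64(cohArray[3074])
except IndexError as err:
    print(err)   # index 3074 is out of bounds for axis 0 with size 0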

I ran this remotely so I can't look at the PDF if I run 'pydarm report'.
I'll be in touch over TeamSpeak to get that resolved.

ryan.crouch@LIGO.ORG - 08:00, Friday 12 July 2024 (79061)

Attaching the calibration report

Non-image files attached to this comment
vladimir.bossilkov@LIGO.ORG - 08:06, Friday 12 July 2024 (79062)

There's a number of WAY out there data points in this report.

Did you guys also forget to turn off the calibration lines when you ran it?

Not marking this report as valid.

louis.dartez@LIGO.ORG - 08:34, Friday 12 July 2024 (79065)
Right, there was no expectation of this dataset being valid; the IFO was not thermalized and the cal lines remained on.

The goal of this exercise was to demonstrate that the patched simulines version at LHO can successfully drive calibration measurements. And to that end the exercise was successful. LHO has recovered simulines functionality and we can lay to rest the scary notion of regressing back to our 3hr-long measurement scheme for now.
erik.vonreis@LIGO.ORG - 22:48, Friday 12 July 2024 (79089)

The run was probably done in the 'cds' environment. At LHO, 'cds' and 'cds-testing' are currently identical. I don't know the situation at LLO, but LLO typically runs with an older environment than LHO.

Since it's hard to stay with fixed versions on conda-forge, it's likely several packages are newer at LHO vs. LLO cds environments.

H1 AOS (ISC, VE)
keita.kawabe@LIGO.ORG - posted 13:05, Tuesday 09 July 2024 - last comment - 11:49, Friday 12 July 2024(78966)
We cannot assess the energy deposited in HAM6 during pressure spike incidents (yet)

We cannot make a reasonable assessment of energy deposited in HAM6 when we had the pressure spikes (the spikes themselves are reported in alogs 78346, 78310 and 78323, Sheila's analysis is in alog 78432), or even during regular lock losses.

This is because all of the relevant sensors saturate badly, and ASC-AS_C is the worst in this respect because of its heavy whitening. This happens each and every time lock is lost; it is a limitation of our configuration. I made a temporary change to partly mitigate this, in the hope that we might obtain useful knowledge for regular lock losses (but I'm not entirely hopeful), which will be explained later.

Anyway, look at the 1st attachment, which is the trend around the pressure spike incident at 10 W (the other spikes were at 60 W, so this is the mildest of them all). You cannot see the pressure spike itself because it takes some time for the puffs of gas molecules to reach the Pirani gauge.

Important points to take:

This is understandable. Look at the second attachment for a very rough power budget and electronics description of all of these sensors. QPDs (AS_C and the OMC QPDs) have 1 kOhm raw transimpedance and 0.4:40 whitening that is not switchable, on top of two stages of 1:10 that are switchable. WFSs (AS_A and AS_B) have 0.5 kOhm transimpedance with a factor-of-10 gain that is switchable, and they don't have whitening.

This happens with regular lock losses, and even  with 2W RF lock losses (third attachment), so it's hard to make a good assessment of the power deposited for anything. At the moment, we have to accept that we don't know.

We can use AS_B or AS_A data, even though they're railed, to place a lower bound on the power and thus the energy. That's what I'll do later.


(Added later)

After TJ locked the IFO, we saw a strange noise bump from ~20 to ~80 or so Hz. Since nobody had any idea, and since my ASC SUM connection to the PEM rack is an analog connection from the ISC rack that also has the DCPD interface chassis, I ran to the LVEA and disconnected it.

Seems like that wasn't it (it didn't get any better right after the disconnection), but I'm leaving it disconnected for now. I'll connect it back when I can.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 13:24, Tuesday 09 July 2024 (78968)

In the hope of making a better assessment of regular lock losses, I made the following changes.

  • With Richard's help, I T-ed the ASC-AS_C analog SUM output on the back of the QPD interface chassis in ISC R5 rack (1st picture) and connected it to H1:PEM-CS_ADC_5_19_2k_OUT_DQ.
    • The SUM output doesn't have any whitening nor any DC amplification, it is just the analog average (SEG1+2+3+4)/4 where each SEG has 1kOhm transimpedance gain, and AS_C only receives ~400ppm of the power coming into HAM6. This will be the signal that rails/saturates later than other sensors.
    • The other end of the T goes to fast shutter logic chassis input in the same rack. The "out" signal of that chassis is T-ed and goes to the shutter driver as well as shutter interface in the same rack.
    • Physical connection goes from the QPD interface in the ISC rack on the floor to the channel B03 of the PEM DQ patch panel on the floor, then to CH20 of the PEM patch panel in the CER.
  • I flipped the x10 gain switch for AS_B to "low", which means there's no DC amplification for AS_B. So we have that much headroom.
    • I set the dark offset for all quadrants.
    • There was no "+20dB" in the AS_B DC filters, so I made that and loaded the filter (2nd attachment).
    • TJ took care of SDF for me.

My gut feeling is that these things still rail, but we'll see. I'll probably revert these on Tuesday next week.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 13:50, Tuesday 09 July 2024 (78974)

SDF screenshot of accepted values.

Images attached to this comment
keita.kawabe@LIGO.ORG - 15:17, Tuesday 09 July 2024 (78977)

Low voltage operation of the fast shutter: It still bounces.

Before we started locking the IFO, I used the available light coming from the IMC and closed/opened the fast shutter using the "Close" and "Open" buttons on the MEDM screen. Since this doesn't involve the trigger voltage crossing the threshold, it only seems to drive the low-voltage output of the shutter driver, which is used to hold the shutter in the closed position for a prolonged time.

In the attached, the first marker shows the time the shutter started moving, witnessed by GS-13.

About 19 ms after the shutter started moving, it was fully shut. About 25 ms after the shutter closed, it started opening, was open or half-open for about 10 ms, and then closed for good.

Nothing was even close to railing. I repeated the same thing three times and it was like this every time.

Apparently the mirror is bouncing down or maybe moving sideways. During the last vent we didn't take a picture of the beam on the fast shutter mirror, but it's hard to imagine that it's close to the end of the mirror's travel.

I thought it's not supposed to do that. See the second movie in G1902365: even though that movie captures the HV action, not the LV, the shutter is supposed to stay in the closed position.

Images attached to this comment
keita.kawabe@LIGO.ORG - 11:37, Thursday 11 July 2024 (79029)

ASC-AS_C analog sum signal at the back of the QPD interface chassis was put back on at around 18:30 UTC on Jul/11.

keita.kawabe@LIGO.ORG - 11:49, Friday 12 July 2024 (79077)

Unfortunately, I forgot that the input range of some of these PEM ADCs is +-2 V, so the signal still railed even when the analog SUM output of ASC-AS_C didn't (2 V happens to be the trigger threshold of the fast shutter); this was still not good enough.

I installed a 1/11 resistive divider (nominally 909 Ohm - 9.1 kOhm) on the ASC-AS_C analog SUM output of the chassis (not on the input of the PEM patch panel) at around 18:30 UTC on Jul/12 2024 while the IFO was out of lock.
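A quick check of the divider ratio, assuming the 9.1 k is the series element and the 909 Ohm the shunt to ground (that assignment is an assumption; the nominal values are from above):

# Nominal divider values; series/shunt assignment assumed for illustration.
r_series, r_shunt = 9.1e3, 909.0
ratio = r_shunt / (r_series + r_shunt)
print(ratio)        # ~0.091, i.e. ~1/11
print(2.0 / ratio)  # ~22: the +/-2 V PEM ADC range now corresponds to
                    # roughly +/-22 V at the chassis SUM output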

H1 ISC (CAL, CDS)
jeffrey.kissel@LIGO.ORG - posted 10:34, Tuesday 09 July 2024 - last comment - 22:43, Friday 12 July 2024(78958)
Testing CPU Turn Around Time for OMC DCPD 524 kHz IOP model :: Unused High Frequency Notches Removed to Save Computation Time
J. Kissel, E. von Reis, D. Sigg

Circa March 2023, Daniel installed some never-used first attempts at further filtering the high-frequency noise present in the 524 kHz OMC DCPD channels (never aLOGged because they were never used, but I call them out after finding the work in LHO:68098).

Now, because I'd like to characterize the existing, in-use digital AA filtering, am running into some unknown noise (LHO:78516), and am hoping to install 2 to 4 more parallel filter banks that would also be quite full of filters (LHO:78956), there is worry that there won't be enough computation time in the 524 kHz system.

Remember, that "the 524 kHz system" is actually a modified "standard" 65 kHz system, which is reading out 8 samples from the 524 kHz ADC each 65 kHz clock cycle and computing everything at 65 kHz. Thus, in principle, the max turn around time is
    1 / (2^16 Hz) = (1 / 65536) [sec] = 1.5258789e-5 [sec] = 15.3 [usec]

However, we *think* the practical limit is somewhat less than this. I don't think I understand those limitations well enough to say definitively and/or to quote a limit quantitatively, but I think they're related to the copy of the OMC DCPD channels that is demodulated at high frequency to create PI channels shipped to the end stations -- in other words, the IPC sending demands a bit of computational time, and if there isn't enough turnaround time left in the IOP to write to the IPC network, then the end-station SUS PI models throw an IPC timing error.

Anyways -- this morning, I looked at the 524 kHz system's computation time as-is, before doing anything (via the channel H1:FEC-179_CPU_METER), and it was sitting at 9 [usec] (out of the ideal 15 [usec]), occasionally popping up to 10 [usec].

But -- this led me to remember that -- regardless of whether the filter is turned ON -- the front-end computes the output of the filter -- sucking up computation time.
So, I've removed these unused prototype notch filters from the DCPD A0 and B0 filter banks.
In addition, I've also removed the old "V2A" filter from a previous version of the digital compensation for the OMC DCPD transimpedance amplifier response.

Removing the notch filters drops the computation time from the "9 [usec], occasionally popping up to 10 [usec]" level quoted above.
See attached time series of the CPU meter.

These filters are, of course, available in the filter_archive, under the latest previous archived file before today's work:
    /opt/rtcds/lho/h1/chans/filter_archive/h1iopomc0/
        H1IOPOMC0_1401558760.txt
but for ease of use, I copy them here.

FM3 :: Notches1
    ellip("BandStop",3,0.5,30,12800,13200)
    notch(10216,50,30)
    ellip("BandStop",3,0.5,30,10380,10465)
    ellip("BandStop",3,0.5,30,12900,13100)

FM5 :: Notches2
    ellip("BandStop",3,0.5,30,8100,8200)
    notch(9101,50,30)
    notch(9337,200,30)
    notch(9463,50,20)
    ellip("BandStop",3,0.5,30,9750,9950)

FM8 :: Notches3
    ellip("BandStop",5,0.5,40,14384,18384)
    ellip("BandStop",5,0.5,40,30768,34768)

FM6 :: V2A
    zpk([5.699+i*22.223;5.699-i*22.223;32.73],[2.549;2.117;6.555],0.00501187,"n")gain(0.971763)

I've also posted a plot of the magnitude of these notch filters -- mostly just to demonstrate how many second order sections these filters had, sucking up computation time.
Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 09:42, Friday 12 July 2024 (79071)

Keita, Sheila

We were looking for explanations for the drops in range we've seen since Tuesday. Attached is a plot of the CPU meter; it seems that this jumped up shortly after Jeff's plot was made. It is still below 13 usec and doesn't look correlated with our range problems.

Images attached to this comment
erik.vonreis@LIGO.ORG - 22:43, Friday 12 July 2024 (79088)

Variation of CPU time in that range shouldn't by itself have any effect on the control loops running on that model until it reaches a sustained time above 15 us, or an individual cycle time somewhat more than 15 us depending on the model.

The effects of a model that runs too long are DAC buffer starvation (i.e. the IOP didn't keep up with the DAC clocks) or IPC communication between models arriving too late.

Both of these errors would appear immediately on the CDS overview MEDM.
