Reports until 09:57, Thursday 11 July 2024
H1 SYS (INS, SEI, VE)
jeffrey.kissel@LIGO.ORG - posted 09:57, Thursday 11 July 2024 (79027)
Pictures of WHAM3 D5 Feedthru and HAM2 Table Optical Lever (Oplev) Transceiver
J. Kissel, S. Koehlenbeck, M. Robinson, J. Warner

The LHO install team (Jim, Mitch) -- who have experience installing in-chamber fiber optic systems -- have reviewed the two options put forth by the SPI team for optical fiber routing from some feedthrus to the future location of the SPI follower breadboard on the -X side wall of the HAM3 ISI, using Eddie's mock-ups in D2400103. Both options (WHAM-D8 or WHAM-D5) are "evil" in several (but different) ways, but we think the lesser of the two is running the fibers from D5 (the currently blank flange underneath the input arm beam tube on the -X side of HAM3; see D1002874).

One of the primary evils with D5 is that access to it is *very* crowded with HEPI hydraulics piping, cable trays, and various other stuff.

In support of this path forward, here are some pictures of the situation.

Images 5645, 5646, 5647, 5648, 5649, 5650, 5651 show various views looking at HAM3 D5.

Images 5652, 5653, 5654 show the HAM2 optical lever (oplev) transceiver, which is a part of the officially defunct HAM Table Oplev system which -- if removed -- would help clear a major access interference point.
Images attached to this report
H1 CDS
erik.vonreis@LIGO.ORG - posted 09:36, Thursday 11 July 2024 (79025)
Conda package update

Conda packages on the workstations were updated.

There are two bug fixes in this update:

foton 4.1.2: magnitude can now be positive when creating a root using the Mag-Q style in the 's' plane.

diaggui 4.1.2: Excitation channel names weren't nested properly on the excitations tab.  This has been fixed.

LHO General (SEI, SUS)
thomas.shaffer@LIGO.ORG - posted 08:21, Thursday 11 July 2024 (79024)
Lock loss 1510 UTC from M6.5 earthquake off the coast of Vancouver Island

M6.5 off the coast of Vancouver Island. One picket fence station gave us warning before it hit. We were in the process of transitioning to earthquake mode when we lost lock. All ISIs tripped, and so far some suspensions as well.

H1 CDS
erik.vonreis@LIGO.ORG - posted 07:43, Thursday 11 July 2024 (79021)
Sine wave definition patched
LHO General
thomas.shaffer@LIGO.ORG - posted 07:29, Thursday 11 July 2024 (79020)
Ops Day Shift Start

TITLE: 07/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 2mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY: Locked for 9 hours. Planned calibration and commissioning today from 0830-1200 PT (1530-1930 UTC).

H1 CAL
louis.dartez@LIGO.ORG - posted 07:10, Thursday 11 July 2024 - last comment - 22:48, Friday 12 July 2024(79019)
testing patched simulines version during next calibration measurement
We're running a patched version of simuLines during the next calibration measurement run. The patch (attached) was provided by Erik to try to get around what we think are awg issues introduced (or exacerbated) by the recent awg server updates (mentioned in LHO:78757).

Operators: there is nothing special to do; just follow the normal routine, as I applied the patch changes in place. Depending on the results of this test, I will either roll them back or work with Vlad to make them permanent (at least for LHO).
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 17:16, Thursday 11 July 2024 (79048)

Simulines was run right after getting back to NOMINAL_LOW_NOISE. The script ran all the way until after 'Commencing data processing', where it then gave:

Traceback (most recent call last):
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 712, in
    run(args.inputFile, args.outPath, args.record)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 205, in run
    digestedObj[scan] = digestData(results[scan], data)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 621, in digestData
    coh = np.float64( cohArray[index] )
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/series.py", line 609, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/array.py", line 199, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/astropy/units/quantity.py", line 1302, in __getitem__
    out = super().__getitem__(key)
IndexError: index 3074 is out of bounds for axis 0 with size 0
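
A minimal sketch of what this error implies (illustrative only; the empty array stands in for whatever gwpy handed back as cohArray, and this is not the simuLines code itself):

import numpy as np

# If the coherence computation returns an empty array (i.e. no data
# made it into the calculation), indexing it the way digestData does
# raises exactly the error above.
cohArray = np.array([])  # stands in for an empty gwpy series
try:
    coh = np.float64(cohArray[3074])
except IndexError as err:
    print(err)  # index 3074 is out of bounds for axis 0 with size 0

So the bad index is a symptom: the coherence array came back with size 0, meaning the upstream calculation produced nothing, not that the index itself was wrong.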

erik.vonreis@LIGO.ORG - 17:18, Thursday 11 July 2024 (79049)

All five excitations looked good on ndscope during the run.

erik.vonreis@LIGO.ORG - 17:22, Thursday 11 July 2024 (79050)

Also applied the following patch to simuLines.py before the run. The purpose is to extend the sine definition so that discontinuities don't happen if a stop command is executed late. If stop commands are all executed on time (the expected behavior), then this change has no effect.

 

diff --git a/simuLines.py b/simuLines.py
index 6925cb5..cd2ccc3 100755
--- a/simuLines.py
+++ b/simuLines.py
@@ -468,7 +468,7 @@ def SignalInjection(resultobj, freqAmp):
     
     #TODO: does this command take time to send, that is needed to add to timeWindowStart and fullDuration?
     #Testing: Yes. Some fraction of a second. adding 0.1 seconds to assure smooth rampDown
-    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 1)
+    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 10)
     
     def signal_handler(signal, frame):
         '''

 

vladimir.bossilkov@LIGO.ORG - 07:33, Friday 12 July 2024 (79059)

Here's what I did:

  • Cloned simulines in my home directory
  • Copied the currently used ini file to that directory, overwriting default file [cp /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini /ligo/home/vladimir.bossilkov/gitProjects/simulines/simulines/settings_h1.ini]
  • reran simulines on the log file [./simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/20240711T234232Z.log]

No special environment was used. Output:
2024-07-12 14:28:43,692 | WARNING | It is assumed you are parising a log file. Reconstruction of hdf5 files will use current INI file.
2024-07-12 14:28:43,692 | WARNING | If you used a different INI file for the injection you are reconstructing, you need to replace the default INI file.
2024-07-12 14:28:43,692 | WARNING | Fetching data more than a couple of months old might try to fetch from tape. Please use the NDS2_CLIENT_ALLOW_DATA_ON_TAPE=1 environment variable.
2024-07-12 14:28:43,692 | INFO | If you alter the scan parameters (ramp times, cycles run, min seconds per scan, averages), rerun the INI settings generator. DO NOT hand modify the ini file.
2024-07-12 14:28:43,693 | INFO | Parsing Log file for injection start and end timestamps
2024-07-12 14:28:43,701 | INFO | Commencing data processing.
2024-07-12 14:28:55,745 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240711T234232Z.hdf5
2024-07-12 14:29:11,685 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240711T234232Z.hdf5
2024-07-12 14:29:20,343 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240711T234232Z.hdf5
2024-07-12 14:29:29,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240711T234232Z.hdf5
2024-07-12 14:29:38,634 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240711T234232Z.hdf5


Seems good to me. Were you guys accidentally using some conda environment when running simulines yesterday? When running this I was in "cds-testing" (which is the default?!). I have had this error in the past due to borked environments [in particular scipy, which is the underlying code responsible for the coherence], which is why I implemented the log parsing function.
The fact that the crash was on the coherence and not the preceding transfer function calculation rings the alarm bell that scipy is the issue. We experienced this once at LLO with a single bad conda environment that was corrected, though I stubbornly and religiously ran with a very old environment for a long time to make sure that error didn't come up.

I ran this remotely, so I can't look at the PDF if I run 'pydarm report'.
I'll be in touch over TeamSpeak to get that resolved.
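
For vetting an environment, here is a hedged sketch of the kind of scipy coherence calculation that sits underneath all this (the signals and parameters are made up for illustration; this is not the simulines code):

import numpy as np
from scipy import signal

# Two noisy copies of the same sine should show coherence ~1 at the
# drive frequency; a borked scipy stack has previously returned
# garbage (or empty arrays) from calculations like this.
fs = 16384
t = np.arange(0, 16, 1 / fs)
x = np.sin(2 * np.pi * 33.0 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 33.0 * t) + 0.1 * np.random.randn(t.size)
f, coh = signal.coherence(x, y, fs=fs, nperseg=4 * fs)
print(coh[np.argmin(np.abs(f - 33.0))])  # expect ~1.0 in a healthy environment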

ryan.crouch@LIGO.ORG - 08:00, Friday 12 July 2024 (79061)

Attaching the calibration report

Non-image files attached to this comment
vladimir.bossilkov@LIGO.ORG - 08:06, Friday 12 July 2024 (79062)

There are a number of WAY-out-there data points in this report.

Did you guys also forget to turn off the calibration lines when you ran it?

Not marking this report as valid.

louis.dartez@LIGO.ORG - 08:34, Friday 12 July 2024 (79065)
Right, there was no expectation of this dataset being valid. The IFO was not thermalized and the cal lines remained on.

The goal of this exercise was to demonstrate that the patched simulines version at LHO can successfully drive calibration measurements. And to that end the exercise was successful. LHO has recovered simulines functionality and we can lay to rest the scary notion of regressing back to our 3hr-long measurement scheme for now.
erik.vonreis@LIGO.ORG - 22:48, Friday 12 July 2024 (79089)

The run was probably done in the 'cds' environment. At LHO, 'cds' and 'cds-testing' are currently identical. I don't know the situation at LLO, but LLO typically runs with an older environment than LHO.

Since it's hard to stay with fixed versions on conda-forge, it's likely that several packages are newer in the LHO cds environment than in LLO's.

H1 General (ISC)
oli.patane@LIGO.ORG - posted 01:17, Thursday 11 July 2024 (79018)
Ops Eve Shift End

TITLE: 07/11 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Observing at 154Mpc and have been Locked for 2.5 hours. The wind looks to be dying down, but I did get a weather warning that the wind is supposed to pick up, so hopefully it's not too bad.

One lockloss during my shift (79012).  When I first started relocking, ALSX was having issues giving high enough flashes, and I was seeing the fuzzy noise in the channel that indicates an issue with the crystal. While this was happening, ALS_XARM wasn't going into the FAULT or CHECK_CRYSTAL_FREQ states, so I just started adjusting ETMX and TMSX and as the flashes got higher, the fuzzy noise and crystal issues stopped.

Relocking took forever because of the high wind and earthquake, but once we finally got past FIND_IR, we were able to go the rest of the way up without issue. There had been a user message on the SQZ PMC guardian at 07/11 01:13 UTC (25 mins after LL) that said, 'PMC_REFL_RF35 demod error', and then it kept trying and failing to lock the PMC, resulting in another message: 'Cannot lock PMC. Check SQZ laser power'. I contacted Daniel about this error once we were on our way up to NOMINAL_LOW_NOISE, but all he had to do was re-request the PMC to LOCKED, and it unstalled and locked without issue.

LOG:

23:00 Observing and locked for 14.5 hours

00:50 Lockloss
- CPSFF is oscillating and big but peakmon is only at 70 so I'll start an initial alignment
- Some small issues locking ALSX 
    - The flashes were ~0.5 and the fuzzy noise that indicates something wrong with the crystal was happening every few seconds (attachment)
    - ALS_XARM wasn't going to the CHECK_CRYSTAL_FREQ or FAULT states
    - I just aligned ALSX by hand and as flashes got better, it stopped having the bad fuzzy noise
01:26 IA done, relocking
    - Lockloss from FIND_IR x 3
02:20 Took us to DOWN to run a manual initial alignment for the INPUT_ALIGN X and Y arms
02:27 Back to DOWN and waiting out earthquake

03:53 Initial alignment
04:29 Started relocking
05:18 NOMINAL_LOW_NOISE
05:21 Observing                                                                                                                                                                        

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
00:18 | PCAL | Rick, Francisco | PCAL Lab | y(local) | PCALin | 00:57
H1 General
oli.patane@LIGO.ORG - posted 21:23, Wednesday 10 July 2024 (79014)
Ops Eve Midshift Status

Just started an initial alignment.

After the lockloss (79012), the wind was above 30mph and stayed that high for the next hour, and I believe that might be why, while trying to relock in FIND_IR, we would get XARM up but then lose lock before being able to try getting YARM up. In the past we have been able to relock with winds this high, but I guess not today. While trying to troubleshoot this, we were hit by a large earthquake from the Philippines (at 02:26 UTC), and just a few minutes ago the ground calmed down enough for me to start another initial alignment.

Something strange that may or may not be related to the issues getting through FIND_IR is that after the lockloss and initial alignment, the noise in the DCPDs started growing (attachment). This growth in noise seems to turn 'on' and 'off' as the detector was trying to get past FIND_IR.

It seems to have started increasing again while locking the green arms in my current initial alignment.

Images attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 17:55, Wednesday 10 July 2024 - last comment - 22:22, Wednesday 10 July 2024(79012)
Lockloss

Lockloss @ 07/11 00:50 UTC from sudden ground motion and also probably the sudden spike in wind.

Comments related to this report
oli.patane@LIGO.ORG - 22:22, Wednesday 10 July 2024 (79016)

05:21 UTC Observing

New OMC filter module configuration (new filter turned on) accepted in sdf

Images attached to this comment
X1 SUS (CDS)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:26, Wednesday 10 July 2024 (79010)
BBSS Transfer Functions IOP Model Issue and Fix

Ibrahim, Erik, Jeff

The third installment in our new rebuilt BBSS TF saga. See alog 79005 for context.

  1. I went to Erik (CDS) for help with fixing the IOP model issue and our many errors. His first suggestion was to restart.
  2. Erik and I restarted it, broke it in the process (crashing the model and resetting all filter values), then restarted it again.
    1. We had a weird issue where our tripped WD wouldn't untrip despite low values - I think this is still a model issue since it only happened when we turned up the gain on the damping loops (just as a test of the IOP fix)
    2. The DK light is still red, but this is (almost certainly) due to a different model we're not using having a WD tripped (D1 under the QUAD model).
  3. We couldn't untrip the WD, so I just left the station for about 30 minutes, tried untripping again, and it worked! I believe what may be happening is that the thresholds are set in different units than the value being read? Definitely some weirdness with the values. Thus, we should check these out once we have what we need TF-wise.
  4. I began to take TFs and they actually worked! The coherence looks good and while the values are different when overlaid with our first article Jan 05 ones, they are similar in noise.

Now, since our model and BBSS are both reliably working, Oli and I will take TFs and post them in the next/a new alog.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:25, Wednesday 10 July 2024 (79007)
Ops Day Shift End

TITLE: 07/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Locked for almost 15 hours. Rahul found some good damping settings for ITMY Mode 5 violin and it is damping very nicely. There are a few pending changes that should be loaded on a lock loss or out of observing time.

Pending changes:

    The H1LSC model has an unloaded filter, which is due to Camilla's PRCL FF that she will test tomorrow morning.

    lscparams.py has the ITMY mode 5 violin mode updated but needs to be loaded into the guardian

    IMC_LOCK needs a reload to take in notification changes

    OMC DCPD Test bank needs the new anti whitening filters engaged for A1,B1,A2,B2

LOG:

Start Time System Name Location Lazer_Haz Task Time End
16:25 PCAL/FAC Tony, Karen PCAL lab - Tech clean and PCAL lab work 16:58
17:54 PCAL Rick, Francisco, 2 visitors PCAL lab local Lab tour 20:19
18:29 FAC Chris EY n Pickup equipment in fan room 18:30
X1 SUS (CDS)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:12, Wednesday 10 July 2024 - last comment - 22:09, Wednesday 10 July 2024(79005)
BBSS Transfer Functions and Interpretation following Rebuild

Following the TFs in alog 79004, we have been diagnosing the noisy TFs and have come to the simple realization that the DAC is not working and that we were simply not actuating, due to a broken IOP model DAC. All the noise was simply the natural dynamic modes as the suspension was hanging in the clean room. We're getting in contact with the CDS team to fix the issue. Stay tuned.

Oli's quick partial investigation: Oli plotted multiple channels that the excitation goes through to see if we could narrow down causes (L, T, V, and R). Here are the results, with screenshots of the ndscopes to help compare.

L (January 5th, July 3rd)
The July measurement:
- No difference in excitation amplitude or measurement presets
- Excitation amplitude is almost 1/2 the January amplitude
- Same for coil driver INMONs 
- OSEM readings are close to January
- DAMP INMON is 1/2 the amplitude of January
 
T (January 5th, July 3rd)
The July measurement:
- No difference in excitation amplitude or measurement presets
- Amplitude read during excitation matches January measurement until OSEMs read the flag motion back in
 
V (January 5th, July 3rd)
The July measurement:
- Excitation amplitude - January: 1075, July: 1000, everything else the same
- excitation amplitude looks good, about the same size
- I have no idea what's going on with the OSEM readings. Seems like they get pushed away from the LED so more light shines through, and then are held there??
 
R (January 5th, July 3rd)
The July measurement:
- Excitation amplitude - January: 200, July: 175, everything else the same
- Same issue as V wrt OSEM readings
 
This showed that the issue was further down the line, and gave the hint that the drives may not have been making it through the IOP model DAC.
 
Jeff Investigation
 
Jeff and I (Ibrahim) realized that the IOP Model DAC output on the screen was reading 0 despite actuations going down the chain to the user model output. We put in a DC offset and the chain remained at 0.
We then found that the GDS_TP IOP states were red in TIM, ADC, DAC, DK. We looked further and found that the FIFO status was also in the red (screenshots) and as such, we need to fix the IOP Model and re-take our TFs. Stay tuned.
Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 22:09, Wednesday 10 July 2024 (79015)

Viewing the January vs July attachments requires a caltech login, so here are the attachments for those without a caltech login:

L: January 5th, July 3rd

T: January 5th, July 3rd

V: January 5th, July 3rd

R: January 5th, July 3rd

Images attached to this comment
H1 ISC
thomas.shaffer@LIGO.ORG - posted 16:08, Wednesday 10 July 2024 (79008)
Further potential fast shutter checks with the AS_{A,B,C} PDs, but at Prep for DC Readout

Following up from alog78667 where Sheila compared AS_{A,B,C} powers and some of their ratios during our bad alignments and our good ones, I've done the same thing but at the ISC_LOCK state of Prep_DC_Readout_Transition (500). Just like before, the difference between the lowest good AS_C/AS_A ratio and the highest bad ratio is still pretty low. About the same actually.

Time | AS_C NSUM (W in HAM6) | AS_A NSUM | AS_B NSUM | AS_C/AS_A | AS_C/AS_B
6/6 20:53 UTC (bad) | 0.0724 | 6156 | 6670 | 1.176e-5 | 1.086e-5
6/7 2:42 UTC (bad) | 0.0759 | 6544 | 7063 | 1.160e-5 | 1.075e-5
6/7 3:29 UTC (bad) | 0.0763 | 6566 | 7082 | 1.162e-5 | 1.077e-5
6/6 12:19 (good) | 0.0862 | 7109 | 7335 | 1.213e-5 | 1.175e-5
6/6 7:20 UTC (good) | 0.0793 | 6540 | 6792 | 1.213e-5 | 1.168e-5
6/6 01:11 UTC (good) | 0.0797 | 6569 | 6781 | 1.213e-5 | 1.175e-5
6/5 10:59 UTC (good) | 0.0765 | 6292 | 6547 | 1.216e-5 | 1.169e-5
7/10 08:15 UTC (good) | 0.0790 | 6392 | 6771 | 1.236e-5 | 1.167e-5
7/9 20:51 UTC (good but with lower range) | 0.0789 | 6383 | 6754 | 1.236e-5 | 1.168e-5
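
To make concrete how thin that margin is, here is a hypothetical sketch of the threshold check these numbers imply (the cut value and function are made up for illustration, not an implemented guardian test):

# Highest "bad" AS_C/AS_A ratio above: 1.176e-5; lowest "good": 1.213e-5.
# A cut placed between them has only ~1.5% of margin on either side.
def alignment_looks_good(as_c_nsum, as_a_nsum, threshold=1.195e-5):
    return (as_c_nsum / as_a_nsum) > threshold

print(alignment_looks_good(0.0724, 6156))  # 6/6 20:53 UTC (bad)  -> False
print(alignment_looks_good(0.0862, 7109))  # 6/6 12:19 (good)     -> True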
H1 General
oli.patane@LIGO.ORG - posted 16:07, Wednesday 10 July 2024 (79009)
Ops Eve Shift Start

TITLE: 07/10 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 18mph Gusts, 12mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

Observing at 150Mpc and have been Locked for 14.5 hours. Winds going up a bit but not too bad.

H1 SUS (SUS)
rahul.kumar@LIGO.ORG - posted 11:18, Wednesday 10 July 2024 - last comment - 15:35, Thursday 11 July 2024(79003)
New settings for damping rung-up violin modes ITMY 05 and 06

The following settings seem to be working for now, and I will commit them to lscparams after a couple of IFO lock stretches.

New settings (ITMY05/06):- FM5 FM6 FM7 FM10 Gain +0.01 (new phase -90 degree), might increase the gain later on depending upon how slow the damping is.

Nominal settings (ITMY05/06): FM6 FM8 FM10 Gain +0.02 (phase -30deg)

Given below are the settings I tried this morning that did not work:

1. no phase, 0.01 gain - increase
2. -30 phase, -0.01 gain - increase
3. +30 phase, 0.01 gain - increase
4. -30 phase, 0.01 gain - IY05 decreasing (both filters) IY06 increasing (both filters)
5. -60 phase, 0.01 gain - IY05 decreasing (both filters) IY06 increasing (only in narrow filter)

---

After talking to TJ, I have set the gain to zero on lscparams and saved it but not loaded it since we are OBSERVING. Will load it once there is a target or opportunity.

Images attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 16:30, Wednesday 10 July 2024 (79011)

The DARM spectra attached below show that both modes are slowly decreasing; next I will try bumping the gain up to 0.02.

Images attached to this comment
rahul.kumar@LIGO.ORG - 15:35, Thursday 11 July 2024 (79043)SUS

ITMY 05/06 - FM5 FM6 FM7 FM10 Gain +0.02 has been saved in lscparams, and the violin mode Guardian has been loaded for the next lock.

H1 AOS (ISC, VE)
keita.kawabe@LIGO.ORG - posted 13:05, Tuesday 09 July 2024 - last comment - 11:49, Friday 12 July 2024(78966)
We cannot assess the energy deposited in HAM6 during pressure spike incidents (yet)

We cannot make a reasonable assessment of energy deposited in HAM6 when we had the pressure spikes (the spikes themselves are reported in alogs 78346, 78310 and 78323, Sheila's analysis is in alog 78432), or even during regular lock losses.

This is because all of the relevant sensors saturate badly, and ASC-AS_C is the worst in this respect because of heavy whitening. This happens each and every time the lock is lost; it is a limitation of our configuration. I made a temporary change to partly mitigate this, in the hope that we might obtain useful knowledge for regular lock losses (but I'm not entirely hopeful), which will be explained later.

Anyway, look at the 1st attachment, which is the trend around the pressure spike incident at 10W (the other spikes were at 60W, so this is the mildest of all). You cannot see the pressure spike because it takes some time for the puffs of gas molecules to reach the Pirani.

Important points to take:

This is understandable. Look at the second attachment for a very rough power budget and electronics description of all of these sensors. QPDs (AS_C  and OMC QPDs) have 1kOhm raw transimpedance, 0.4:40 whitening that is not switchable on top of two stages of 1:10 that are switchable. WFSs (AS_A and AS_B) have 0.5k transimpedance with a factor of 10 gain that is switchable, and they don't have whitening.

This happens with regular lock losses, and even  with 2W RF lock losses (third attachment), so it's hard to make a good assessment of the power deposited for anything. At the moment, we have to accept that we don't know.

We can use the AS_B or AS_A data, even though they're railed, to set a lower bound on the power, and thus the energy. That's what I'll do later.


(Added later)

After TJ locked the IFO, we saw a strange noise bump from ~20 to ~80 or so Hz. Since nobody had any idea, and since my ASC SUM connection to the PEM rack is an analog connection from the ISC rack that also has the DCPD interface chassis, I ran to the LVEA and disconnected it.

Seems like that wasn't it (it didn't get any better right after the disconnection), but I'm leaving it disconnected for now. I'll connect it back when I can.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 13:24, Tuesday 09 July 2024 (78968)

In the hope of making a better assessment of regular lock losses, I made the following changes.

  • With Richard's help, I T-ed the ASC-AS_C analog SUM output on the back of the QPD interface chassis in ISC R5 rack (1st picture) and connected it to H1:PEM-CS_ADC_5_19_2k_OUT_DQ.
    • The SUM output doesn't have any whitening or DC amplification; it is just the analog average (SEG1+2+3+4)/4, where each SEG has 1kOhm transimpedance gain, and AS_C only receives ~400ppm of the power coming into HAM6. This will be the signal that rails/saturates later than the other sensors.
    • The other end of the T goes to fast shutter logic chassis input in the same rack. The "out" signal of that chassis is T-ed and goes to the shutter driver as well as shutter interface in the same rack.
    • Physical connection goes from the QPD interface in the ISC rack on the floor to the channel B03 of the PEM DQ patch panel on the floor, then to CH20 of the PEM patch panel in the CER.
  • I flipped the x10 gain switch for AS_B to "low", which means there's no DC amplification for AS_B. So we have that much headroom.
    • I set the dark offset for all quadrants.
    • There was no "+20dB" in the AS_B DC filters, so I made that and loaded the filter (2nd attachment).
    • TJ took care of SDF for me.

My gut feeling is that these things still rail, but we'll see. I'll probably revert these on Tuesday next week.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 13:50, Tuesday 09 July 2024 (78974)

SDF screenshot of accepted values.

Images attached to this comment
keita.kawabe@LIGO.ORG - 15:17, Tuesday 09 July 2024 (78977)

Low voltage operation of the fast shutter: It still bounces.

Before we started locking the IFO, I used available light coming from the IMC and closed/opened the fast shutter using the "Close" and "Open" buttons on the MEDM screen. Since this doesn't involve the trigger voltage crossing the threshold, this only seems to drive the low voltage output of the shutter driver, which is used to hold the shutter in the closed position for a prolonged time.

In the attached, the first marker shows the time the shutter started moving, witnessed by GS-13.

About 19ms after it started moving, the shutter was fully shut. About 25ms after the shutter closed, it started opening, stayed open or half-open for about 10ms, and then closed for good.

Nothing was even close to railing. I repeated the same thing three times and it was like this every time.

Apparently the mirror is bouncing down or maybe moving sideways. During the last vent we didn't take a picture of the beam on the fast shutter mirror, but it's hard to imagine that it's close to the end of the mirror's travel.

I thought it's not supposed to do that. See the second movie in G1902365: even though the movie captures the HV action, not the LV, the shutter is supposed to stay in the closed position.

Images attached to this comment
keita.kawabe@LIGO.ORG - 11:37, Thursday 11 July 2024 (79029)

ASC-AS_C analog sum signal at the back of the QPD interface chassis was put back on at around 18:30 UTC on Jul/11.

keita.kawabe@LIGO.ORG - 11:49, Friday 12 July 2024 (79077)

Unfortunately, I forgot that the input range of some of these PEM ADCs is +-2V, so the signal still railed when the analog output of ASC-AS_C SUM didn't (2V happens to be the trigger threshold of the fast shutter); this was still not good enough.

I installed a 1/11 resistive divider (nominally 909 Ohm - 9.1k) on the ASC-AS_C analog SUM output on the chassis (not on the input of the PEM patch panel) at around 18:30 UTC on Jul/12 2024 while the IFO was out of lock.
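
As a quick check of the divider ratio (assuming the signal is tapped across the 909 Ohm leg):

909 / (909 + 9100) = 0.0908 ≈ 1/11

so the +-2V input range of the PEM ADC should now correspond to roughly +-22V at the chassis output.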

H1 SQZ (SQZ)
nutsinee.kijbunchoo@LIGO.ORG - posted 04:10, Monday 08 July 2024 - last comment - 01:18, Thursday 11 July 2024(78933)
Twin Sisters Rock SHG explained

Daniel, Nutsinee

 

I was intrigued by the Twin Sisters Rock feature in the brand new spare SHG Terry and Karmeng put together, so I did a little digging. A quick summary: we believe the Twin Sisters Rock feature was caused by the phase mismatch between the red and the green inside the SHG cavity. We suspect the dichroic coating of the SHG back mirror, since the coating specifications cannot be found. However, the phase mismatch doesn't necessarily explain the Mount Saint Helens shape of the 6-year-old SHG we are currently using.

The equation used to fit the data came from Gonzalez, Nieh, and Steier 1973, equation 1. It looks similar to the usual sinc^2 function we know and love, except that it takes the phase mismatch between 532 and 1064 and the air dispersion into account in a cos^2 term. That paper also shows that the experimental data doesn't necessarily match the model (Fig. 3). The problem is that we have to relate the deltaK*l term to temperature. This was done using Kato and Takaoka 2002. That left us with K_0. The only unknown-ish term in K_0 is the nonlinear coefficient d_eff. We used d_eff = 10pm/V as suggested by Leonardi et al 2018. After I wasn't able to get a sensible result, Daniel pointed out that the units in K_0 don't add up. We replaced K_0 with equation 1 from Arie et al 1997.

 

The function is also very sensitive to the poling length of the crystal. Since we don't have the exact number for the poling length, we picked one such that deltaK is 0 at 34.6 C (the optimal phase matching condition). The poling length we got was 8.99608um. I believe the crystal poling length according to Raicol is 9um.

 

The function in the end looks like this. I was able to fit Karmeng's Twin Sisters Rock using sensible parameters (d_eff = 10pm/V, circulating 1064 power of 2W, Boyd-Kleinman focusing parameter = 0.9, and a phase shift of Pi/2). Other variations of the spare SHG plots can be explained by shifting this phi variable.
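
For those who don't want to dig through the zip file, here is a minimal sketch of the model's shape (a simplification for illustration only: the K_0 normalization, Boyd-Kleinman factor, and the deltaK(T) mapping from Kato and Takaoka are omitted or folded into constants):

import numpy as np

# Dual-pass SHG conversion vs. phase mismatch, after Gonzalez, Nieh,
# and Steier 1973 eq. 1: a sinc^2 envelope times a cos^2 term carrying
# the red/green phase shift phi picked up in air and on the back
# mirror. Note np.sinc(x) = sin(pi*x)/(pi*x), hence the 2*pi below.
def shg_power(dkl, p_in=2.0, k0=1.0, phi=np.pi / 2):
    envelope = np.sinc(dkl / (2 * np.pi)) ** 2  # sinc^2(deltaK*l/2)
    return k0 * p_in**2 * envelope * np.cos(dkl / 2 + phi) ** 2

# phi = 0 gives the familiar single sinc^2 peak at deltaK*l = 0;
# phi = pi/2 suppresses the central peak and leaves two symmetric
# lobes -- the Twin Sisters shape.
dkl = np.linspace(-6 * np.pi, 6 * np.pi, 2001)
p_green = shg_power(dkl)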

 

We suspect the coating of the SHG back mirror might be to blame. None of the SHG plots found in SURF reports (Nathan Zhao, Andre Medina) look symmetric like they should. 

 

The zip file of the python code is also attached.


If the coating is not to blame, I wonder if it's time we think about redesigning an SHG that is more tolerant of assembly errors.

Images attached to this report
Non-image files attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 09:22, Monday 08 July 2024 (78939)

For people who are too lazy to read the paper: the dispersion that matters is between the first pass and the second pass through the crystal. In a simple dual-pass system, green light is generated in phase with the red light on the first pass; the two frequencies then propagate in air towards the rear mirror, are reflected with potentially different phases, and propagate back to the crystal. According to Gonzalez et al., the dispersion in air for 1064/532 is 27.4°/cm (double passed). Our SHG is about 5 cm long with a 1 cm crystal, so the in-air path is of order 4 cm. You need a 90° phase shift to explain the double peak. However, the phase difference of our mirror reflectivity is unknown.
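
Filling in the arithmetic implied here:

4 cm x 27.4°/cm ≈ 110°

i.e. the double-passed air dispersion alone is already of the same order as the ~90° needed, before any unknown phase contribution from the mirror coating.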

nutsinee.kijbunchoo@LIGO.ORG - 01:18, Thursday 11 July 2024 (79017)SQZ

Daniel Nutsinee

We further fit a single-pass measurement with the following parameters:

d_eff = 9pm/V

input power = 60mW

Boyd-Kleinman factor = 0.35 (corresponds to w0 = 70um, which according to Karmeng is what he sent into the cavity)

The measured power was so low that a dark-noise term is required to offset the model to fit the data correctly. This value is 0.8 uW.

A factor of 1/2 has been added inside the sinc and cos terms to convert from double pass to single pass. This agrees with Sheila's equation 3.14.

We also added a factor of 4 to the original function used to fit the double-peak measurement, to take the double pass into account (4 times the single-pass power, since a coherent second pass doubles the generated field amplitude). A newly acquired Boyd-Kleinman factor is 0.56 (corresponds to w0 = 55um, a beam waist dictated by the known cavity parameters). I recalculated the circulating power taking into account the loss measurement from the escape efficiency. The model suggests 1.7W; I managed to fit using 1.5W. We don't know the modulation depth used to generate the locking signal, so this number can be wiggled around a bit. The factor of 4 can also be wiggled around a bit, as we don't really know the green loss as the light emerges from the cavity.

 

According to Karmeng, the data was taken through a green bandpass filter with 9% green loss. The data has been multiplied by a factor of 1.09 to compensate for this loss.

In summary, our model fits both the Twin Sisters and the single pass measurements.

Images attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 21:56, Tuesday 25 June 2024 - last comment - 09:48, Thursday 11 July 2024(78652)
OM2 impact on low frequency sensitivity and optical gain

The first attachment shows spectra (GDS CALIB STRAIN clean, so with calibration corrections and jitter cleaning updated and SRCL FF retuned) with OM2 hot vs cold this week, without squeezing injected. The shot noise is slightly worse with OM2 hot, while the noise from 20-50Hz does seem slightly better with OM2 hot. This is not as large a low-frequency improvement as was seen in December. The next attachment shows the same no-squeezing times, but with coherences between PRCL and SRCL and CAL DELTAL. MICH is not plotted since its coherence was low in both cases. This suggests that some of the low frequency noise with OM2 cold could be due to PRCL coherence.

The optical gain is 0.3% worse with OM2 hot than it was cold (3rd attachment); before the OMC swap we saw a 2% decrease in optical gain when heating OM2, in December (74916) and last July (71087). This seems to suggest that the OMC mode matching situation has changed since the last time we did this test.

The last attachment shows our sensitivity (GDS CALIB STRAIN CLEAN) with squeezing injected. The worse range with OM2 hot can largely be attributed to worse squeezing; the time shown here was right after the PSAMS change this morning (78636), which seems to have improved the range to roughly 155Mpc with cleaning. It's possible that more PSAMS tuning would improve the squeezing further.

Times used for these comparisons (from Camilla):

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 11:43, Friday 28 June 2024 (78722)

Side point about some confusion caused by a glitch:

The first attachment shows something that caused me some confusion; I'm sharing what the confusion was in case this comes up again. It is a spectrum of the hot no-sqz time listed above, comparing the spectrum produced by dtt with 50 averages, 50% overlap, and BW 0.1 Hz (which requires 4 minutes and 15 seconds of data) to a spectrum produced by the noise budget code at the same time. The noise budget uses a default resolution of 0.1Hz and 50% overlap, and the number of averages is set by the duration of data we give it, which is most often 10 minutes. The second screenshot shows that there was a glitch 4 minutes and 40 seconds into this data stretch, so the spectrum produced by the noise budget shows elevated noise compared to the one produced by dtt. The third attachment shows the same spectra comparison, where the noise budget span is set to 280 seconds so the glitch is not included, and the two spectra agree.
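
The duration bookkeeping, assuming a segment length of 1/BW = 10 seconds:

10 s + 49 x (1 - 0.5) x 10 s = 255 s = 4 minutes 15 seconds

so the dtt span ends just before the glitch at 4 minutes 40 seconds, while the noise budget's default 10-minute span includes it.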

Comparison of sensitivity with OM2 hot and cold without squeezing:

The next two attachments show spectra comparisons for no-sqz times with OM2 hot and cold (same times as above); the first shows a comparison of the DARM spectrum, and the second shows the range accumulating as a function of frequency. In both plots, the bottom panel shows the difference in accumulated range, so this curve has a positive slope where the sensitivity with OM2 hot is better than OM2 cold, and a negative slope where OM2 hot is worse. The small improvement in sensitivity between 20-35 Hz improves the range by almost 5Mpc; then there is a new broad peak at 33Hz with OM2 hot which comes and goes; and again a benefit of about 4Mpc due to the small improvement in sensitivity from 40-50 Hz.

From 90-200 Hz the sensitivity is slightly worse with OM2 hot. The coupled cavity pole dropped from 440Hz to 424Hz while OM2 warmed up; we can try tuning the offsets in AS72 to improve this, as Jennie and Keita did a few weeks ago: 78415

Comparison of with squeezing:

Our range has been mostly lower than 160 Mpc with OM2 hot, which was also true in the few days before we heated it up. I've picked a time when the range just hit 160Mpc after thermalization, 27/6/2024 13:44 UTC, to compare our best sensitivities with OM2 hot vs cold. This is a time without the 33Hz peak; we gain roughly 7 Mpc from 30-55 Hz (spectra and accumulated range comparisons) and lose nearly all of that benefit from 55-200 Hz. We hope that we may be able to gain back some mid-frequency sensitivity by optimizing the PSAMS for OM2 hot, and by adjusting SRM alignment. This is why we are staying with this configuration for now, hoping to have some more time to evaluate whether we can improve the squeezing enough here.

There is a BRUCO running for the 160Mpc time with OM2 hot, started with the command:

python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1403531058 --length=400 --outfs=4096 --fres=0.1 --dir=/home/sheila.dwyer/public_html/brucos/GDS_CLEAN_1403531058 --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 --excluded=/home/elenna.capote/bruco-excluded/lho_excluded_O3_and_oaf.txt

It should appear here when finished: https://ldas-jobs.ligo.caltech.edu/~sheila.dwyer/brucos/GDS_CLEAN_1403531058/

 

 

Images attached to this comment
gerardo.moreno@LIGO.ORG - 15:59, Wednesday 10 July 2024 (78829)VE

(Jenne, Jordan, Gerardo)

On Monday June 24, I noticed an increase in pressure at the HAM6 pressure gauge only. Jordan and I tried to correlate the rise in pressure with other events but found nothing; we looked at RGA data, but nothing was found. Then Jenne pointed us to the OM2 thermistor.

I looked at the event in question, and one other event related to changing the temperature of OM2; the last time the temperature was modified was back on October 10, 2022.

Two events attached.

Images attached to this comment
camilla.compton@LIGO.ORG - 09:48, Thursday 11 July 2024 (79026)

Some more analysis on pressure vs OM2 temperature in alog 78886: this recent pressure rise was smaller than the first time we heated OM2 after the start of O4 pumpdown.
