H1 CDS
david.barker@LIGO.ORG - posted 15:57, Tuesday 12 March 2024 (76304)
CDS Maintenance Summary: Tuesday 12th March 2024

WP11766 Shorten SWWD SUS-EY trip time 
Dave:
The h1iopsusey model was changed to reduce the SWWD SUS timer from 900s to 600s. This means that the SUS DACKILL time to trip is shortened from 20mins to 15mins and is no longer coincident with the HWWD tripping. 

This IOP restart required the restart of all the models on h1susey, and was done in conjunction with the restart of h1susetmy.

After all the models had been running again for a few minutes, h1susey locked up. It was not on the network (no ping, ssh). We fenced it from Dolphin and power cycled it via its IPMI management port.

WP11743 SUS DACKILL Removal
Jeff, Oli, Jim, Dave:
Today we removed the SUS DACKILL from the end station models h1susetmx and h1susetmy. The Dolphin IPC receiving models h1isietmx and h1isietmy were modified to replicate a good IPC receiver.
In addition, the h1susetmxpi and h1susetmypi models also needed replicated receivers.


Unfortunately I got the ERR replication value wrong; it should be 0, not 1. The receiver models were restarted with the correct values.

The first round of restarts required a DAQ restart; the subsequent changes only needed model restarts.

The TMS models were modified later, requiring a second DAQ restart.

WP11761 sw-fces-cds0 Firmware Upgrade
Jonathan:
The firmware on the FCES switch sw-fces-cds0 was upgraded.

WP11765 DMT Upgrade and Reboots
Dan:
This action was canceled for today and will be scheduled for a later maintenance period.

DAQ Restart
Jonathan, Erik, Dave:

The DAQ was restarted partially once and completely twice for the above model changes.

The first restart (partial 0-leg only) at 12:30 highlighted that the PI models were showing IPC Rx errors due to a removed SUS SHMEM channel.

The second full restart at 13:06/13:12 was in support of the SUS-ETM, SUS-ETM-PI and ISI-ETM model changes.

The third full restart at 14:47/14:52 was in support of the SUS-TMS model changes.

There were no major issues with the DAQ restarts, just the usual second restart of GDS for channel list synchronization.

Restart/Reboot Log

Tue12Mar2024
LOC TIME HOSTNAME     MODEL/REBOOT
12:14:27 h1seiex      h1isietmx <<< SUS-ETM and ISI-ETM model restarts
12:14:58 h1susex      h1susetmx
12:15:32 h1seiey      h1isietmy
12:16:47 h1susey      h1iopsusey <<< EY IOP and all Models restarts
12:17:04 h1susey      h1susetmy
12:17:18 h1susey      h1sustmsy
12:17:32 h1susey      h1susetmypi


12:24:21 h1susey      ***REBOOT*** <<< h1susey locked up, needed a reboot
12:26:25 h1susey      h1iopsusey
12:26:38 h1susey      h1susetmy
12:26:51 h1susey      h1sustmsy
12:27:04 h1susey      h1susetmypi


12:30:40 h1daqdc0     [DAQ] <<< Partial 0-leg restart
12:30:49 h1daqfw0     [DAQ]
12:30:49 h1daqtw0     [DAQ]
12:30:50 h1daqnds0    [DAQ]
12:30:58 h1daqgds0    [DAQ]
12:34:33 h1daqgds0    [DAQ]


12:50:53 h1susex      h1susetmxpi <<< PI model change, remove IPC Rx
12:55:45 h1susey      h1susetmypi


13:06:50 h1daqdc0     [DAQ] <<< Full DAQ restart for susetm, susetmpi, isietm models
13:07:01 h1daqfw0     [DAQ]
13:07:01 h1daqtw0     [DAQ]
13:07:02 h1daqnds0    [DAQ]
13:07:10 h1daqgds0    [DAQ]
13:08:42 h1susey      h1susetmypi <<< Problem is etmypi build (install too quick), rebuild and install.
13:12:08 h1daqdc1     [DAQ]
13:12:20 h1daqfw1     [DAQ]
13:12:20 h1daqtw1     [DAQ]
13:12:22 h1daqnds1    [DAQ]
13:12:30 h1daqgds1    [DAQ]
13:12:56 h1daqgds1    [DAQ]


13:49:38 h1seiex      h1isietmx <<< Fix the replicated IPC ERR=0
13:55:49 h1seiey      h1isietmy
13:58:35 h1susex      h1susetmxpi
14:01:46 h1susey      h1susetmypi


14:43:40 h1susex      h1sustmsx <<< New TMS models
14:44:06 h1susey      h1sustmsy


14:47:04 h1daqdc0     [DAQ]  <<< DAQ restart for TMS model changes
14:47:15 h1daqfw0     [DAQ]
14:47:16 h1daqnds0    [DAQ]
14:47:16 h1daqtw0     [DAQ]
14:47:24 h1daqgds0    [DAQ]
14:47:50 h1daqgds0    [DAQ]
14:52:21 h1daqdc1     [DAQ]
14:52:33 h1daqfw1     [DAQ]
14:52:33 h1daqtw1     [DAQ]
14:52:34 h1daqnds1    [DAQ]
14:52:42 h1daqgds1    [DAQ]

H1 PSL
sheila.dwyer@LIGO.ORG - posted 15:21, Tuesday 12 March 2024 - last comment - 15:33, Wednesday 13 March 2024(76303)
attempt to pico ISS second loop, undid changes

Sheila, Camilla, Jennie W, Keita remote

Stefan and Daniel suspect that our excess noise around 100 Hz might be due to intensity noise, and we did have a large shift in alignment of the beam transmitted through IM4 (76291) 76241.  I moved pico 1, first to center the beam on the ISS QPD, which made the power on the ISS array PDs drop, and didn't improve the spectrum of the ISS Sum inner or outer channels (we did this with the loop open).  We reverted this, then looked at the QPD position in O4a when we had 60W input power, went to 60W input power and pico'd to bring the beam to the same location as O4a.  This also didn't improve the spectrum, so we brought the pico back to where we started.

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 16:13, Tuesday 12 March 2024 (76306)

Sheila asked me to compare the ISS second loop outputs from our recent NLN lock last night to those during O4a. The spectra don't show much difference in the noise (the top left plot shows the Jan 14th lock in cyan and yellow, and last night's in pink and brown).

The other three plots show the coherence of the ISS output with all three GS13s on the HAM2 optical table. There only seems to be coherence at the power lines.

So maybe the ISS is not causing intensity noise to couple into DARM differently pre and post vent.

Images 2 and 3 attached show the ndscope for all channels I used around the two times I took spectra.

Time 1 = 2024-01-14 22:05:27 UTC

Time 2 = 2024-03-12 12:53:24 UTC

Channels for ISS:

H1:PSL-ISS_SECONDLOOP_PDSUMINNER_OUT_DQ

H1:PSL-ISS_SECONDLOOP_PDSUMOUTER_OUT_DQ

Channels for GS13s:

H1:ISI-HAM2_BLND_GS13X_IN1_DQ

H1:ISI-HAM2_BLND_GS13Y_IN1_DQ

H1:ISI-HAM2_BLND_GS13Z_IN1_DQ
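
For reference, a minimal sketch (not the script actually used) of how the same coherence check could be reproduced with gwpy, using the channels above and Time 2:

    from gwpy.timeseries import TimeSeriesDict

    # Channels and start time taken from this comment; 10 minutes of data.
    channels = [
        "H1:PSL-ISS_SECONDLOOP_PDSUMINNER_OUT_DQ",
        "H1:ISI-HAM2_BLND_GS13X_IN1_DQ",
        "H1:ISI-HAM2_BLND_GS13Y_IN1_DQ",
        "H1:ISI-HAM2_BLND_GS13Z_IN1_DQ",
    ]
    data = TimeSeriesDict.get(channels, "2024-03-12 12:53:24", "2024-03-12 13:03:24")

    # Resample everything to a common rate before computing coherence.
    iss = data[channels[0]].resample(256)
    for name in channels[1:]:
        coh = iss.coherence(data[name].resample(256), fftlength=16, overlap=8)
        print(name, "max coherence below 100 Hz:", float(coh.crop(0, 100).max()))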

Images attached to this comment
camilla.compton@LIGO.ORG - 15:33, Wednesday 13 March 2024 (76356)

I looked at the pointing of the IMs and MCs from the start of a lock on 16th January to now. As previously found, nothing seems to have changed much; IM1 P changed by ~100 counts, more than the other IMs, but it's unclear what these counts are calibrated to.

Images attached to this comment
H1 SQZ (SQZ)
nutsinee.kijbunchoo@LIGO.ORG - posted 13:45, Tuesday 12 March 2024 (76297)
New CLF VCO installed

Daniel, Naoki, Nutsinee

We installed a new VCO that allows us to increase the gain in the CLF loop (the UGF was 4 kHz with 20 degrees of phase margin with the old VCXO). The box sits in the left SQZ rack at U35. Below is the old VCO. The CLF is now running at a ~13 kHz UGF with 40 degrees of phase margin. The VCXO interface isn't working quite right, but that doesn't stop us from locking the CLF. The "ON" switch is the excitation switch at the moment. The VCO control currently has the wrong sign and can't be turned on, but it can be tuned by hand. The CLF common mode board is now operating at -16 dB common path gain with all three boosts on (no compensation). All relevant changes have been accepted in SDF.

 

A quick look at the CLF noise (QMON) at 10 kHz tells us that we are now a factor of 5 better than last year. The Vpp calibration is 5V/rad.

Unrelated to this work, we commented out the OPO common gain in the guardian and have SDF take care of the gain value instead. We also accepted the PMC trans nominal value.

Images attached to this report
H1 SEI
jim.warner@LIGO.ORG - posted 13:42, Tuesday 12 March 2024 (76300)
All ISI CPS calibration filters updated.

I updated the CPS cal filters for all of the ISIs this morning. This was to fix a slight, long-standing error in the filters, as described in FRS 30428. It shouldn't really have any impact on the IFO. LLO was done a while ago, so we can close the ticket.

H1 SQZ
daniel.sigg@LIGO.ORG - posted 13:32, Tuesday 12 March 2024 - last comment - 13:55, Tuesday 12 March 2024(76298)
SQZ CLF VCO

A new VCO for the CLF was installed. This VCO has a higher modulation bandwidth compared to the previous VCXO at the cost of more low frequency phase noise.

The SSB phase noise according to the data sheets:

Frequency   New VCO     Old VCXO
100 Hz                  -103 dBc
1 kHz       -110 dBc    -133 dBc
10 kHz      -132 dBc    -148 dBc
100 kHz     -150 dBc    -158 dBc
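
For a rough sense of scale, here is a short sketch converting these single-sideband numbers to phase noise amplitude, using the standard small-angle approximation S_phi(f) ~ 2*10^(L(f)/10) rad^2/Hz (datasheet values from the table above, not measurements):

    import numpy as np

    # SSB phase noise L(f) in dBc/Hz, from the table above.
    new_vco  = {1e3: -110, 10e3: -132, 100e3: -150}
    old_vcxo = {100: -103, 1e3: -133, 10e3: -148, 100e3: -158}

    def phase_asd(l_dbc):
        """Phase noise amplitude spectral density in rad/sqrt(Hz)."""
        return np.sqrt(2 * 10 ** (l_dbc / 10))

    for f, l in new_vco.items():
        print(f"new VCO  {f:>8.0f} Hz offset: {phase_asd(l):.1e} rad/rtHz")
    for f, l in old_vcxo.items():
        print(f"old VCXO {f:>8.0f} Hz offset: {phase_asd(l):.1e} rad/rtHz")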
Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 13:55, Tuesday 12 March 2024 (76301)

Sorry I made a duplicate alog. More details here

H1 General
camilla.compton@LIGO.ORG - posted 12:51, Tuesday 12 March 2024 (76296)
LVEA *visually* swept

Lights turned off. 

Noted but left alone, as commissioning work is still ongoing.

H1 CDS (AOS)
filiberto.clara@LIGO.ORG - posted 12:33, Tuesday 12 March 2024 (76295)
HAM5 Feedthru Cabling - DCPD/OFI TEC

Cabling on the HAM5 D3 flange was redressed. Cables routed behind the cross beam. This required cables for the HAM5 DCPD and the OFI TEC/THERMISTORS to be disconnected. D. Sigg disabled the servo for the OFI TEC. Servo enabled after cable work was completed.

Images attached to this report
H1 ISC (ISC)
craig.cahillane@LIGO.ORG - posted 10:26, Tuesday 12 March 2024 (76291)
Input alignment changed to align onto IM4 Trans
A few days ago Jenne mentioned that our IM4 Trans NSUM diode was reading lower than before, because the beam was badly pitched down on the QPD.
IM4 TRANS is used in our PRG calculation, so we would like to make sure it's a trustworthy readback of our input power.

This morning I adjusted IM1, IM2, and IM3 such that the alignment onto IM4 Trans is repaired.
This brought the IM4 Trans NSUM estimated power incident on PRM from 1.8 to 1.9 W.

We are now back to IM4 TRANS / IMC PWR IN levels similar to those at the beginning of the run:
Current IM4 TRANS / IMC PWR IN ratio = 0.96
Ratio on August 8, 2023 = 0.95
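
As an aside, a hedged sketch of how this ratio can be spot-checked from the slow (OUT16) readbacks; the time window below is a placeholder, not the one actually used:

    from gwpy.timeseries import TimeSeriesDict

    # Placeholder interval; pick any quiet stretch after the IM moves.
    start, end = "2024-03-12 17:00:00", "2024-03-12 17:01:00"
    chans = ["H1:IMC-IM4_TRANS_NSUM_OUT16", "H1:IMC-PWR_IN_OUT16"]
    data = TimeSeriesDict.get(chans, start, end)

    ratio = data[chans[0]].mean() / data[chans[1]].mean()
    print("IM4 TRANS / IMC PWR IN =", float(ratio))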

I was never able to find an alignment which 
1) maximized our detected power and
2) properly brought our YAW on IM4 TRANS back to zero.  
Right now, IM4 TRANS YAW is sitting at 0.4.

Because of the max power/YAW discrepancy, I walked the input alignment significantly in a bunch of directions (IM1 + IM3, IM2 + IM3) to check for clipping on the IFI or the post-IM4 path.
I did not find anything obvious, so I suspect we are okay where we sit.
Still, we should be prepared to have to massively adjust the input alignment using IM4 when locking later today.
If we need to, we can revert to the IM alignments from 8 am this morning, and simply pico on the IM4 TRANS path like we did in alog 64446.


alogs:
August 2022 Picoing onto IM4 Trans alog 64446
July 2022 IM4 TRANS calibration alog 63812


EDIT: After this, because we were more worried about locking, I restored the sliders to their original locations, then further restored all of the IM OSEMs to close to their original locations from before the vent.
Somehow, we are still falling off IM4 TRANS.
Additionally, Daniel advised looking at the ISS QPD; it seems we are not falling off that one yet, but we are significantly yawed, not pitched.
I believe that this actually makes sense, since the ISS QPD is rotated 90 degrees compared to the other QPDs.
I'm now in the process of checking the IMC OSEMs.

EDIT 2: Mode cleaner OSEMs seem close to their original values from before the vent. It's very hard to tell what changed about our input alignment if the IMC and IM OSEMs are at the same values.
In any case, I've left it at the same slider values we had this morning, with the loose plan of slowly aligning IM4 TRANS during full lock with the input alignment loops on.
Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:20, Tuesday 12 March 2024 (76292)
Tue CP1 Fill

Tue Mar 12 10:15:18 2024 INFO: Fill completed in 15min 14secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 CDS
jonathan.hanks@LIGO.ORG - posted 10:17, Tuesday 12 March 2024 (76290)
WP 11761 Upgrade the firmware on sw-fces-cds0 to a recent version
Today I updated the firmware of sw-fces-cds0 to match the other ICX 7150 units. Due to the age of the installed firmware, it was a two-step process which involved two restarts of the switch.

This was done completely remotely. After the upgrade I reviewed the configuration, made some changes to how the spanning tree was configured, and recorded the config. The first restart of the switch took about 3 1/2 minutes, the second about 5 1/4 minutes. Adjusting the spanning tree caused some minor outages while the spanning tree was rediscovered.

The command sequence for the upgrade was:

copy tftp system-manifest 10.20.0.85 fastiron_08080f/FI08080f_Manifest.txt all-images-primary
boot system flash pri
copy tftp system-manifest 10.20.0.85 fastiron_0895g/FI08095g_Manifest.txt primary
boot system flash pri
copy tftp system-manifest 10.20.0.85 fastiron_0895g/FI08095g_Manifest.txt secondary

H1 ISC
georgia.mansell@LIGO.ORG - posted 09:58, Tuesday 12 March 2024 (76287)
ISCT1 peek

TJ, Trent, Matt, Georgia, Craig

This morning we went to ISCT1 with the goal of touching up the ALS DIFF beat note (currently sitting at around -13 dBm) and investigating the clipping on the ALS Y transmission path, since there is visible clipping on the ALSY camera and low-ish transmitted power on ALS-C_TRY.

Second and third attached photos show locations where we might be clipping.

Fourth and fifth attached photos show overviews of the ALS Y and Diff paths.

Sixth photo shows what the beam splitting prism looks like at the moment.

Images attached to this report
H1 PEM
ryan.short@LIGO.ORG - posted 09:20, Tuesday 12 March 2024 (76286)
Magnetic Injections Ran this Morning

It appears that after a reboot of the h1guardian computer a week or so ago, the PEM_MAG_INJ node was requested to INJECTIONS_COMPLETE, meaning that at 14:20 UTC (07:20 PDT) this morning while H1 was locked at low noise, the injection suite ran for the following 20 minutes. This seems to have gone without issue, and the log lives in the normal injection directory, /ligo/www/www/exports/pem/WeeklyMagneticInjection/logs/

The magnetic injections do NOT appear to correspond with the lockloss at 14:45 UTC (the suite had finished 4-5 minutes prior). It's also worth noting that the in-lock SUS charge measurements did not run this morning, which would normally start at 07:45 local time, so that isn't the cause of the lockloss either.

I'll leave the magnetic injection suite set to run automatically on Tuesday mornings unless people have objections to it (it will not run unless the IFO is locked at low noise anyway).

H1 AOS
louis.dartez@LIGO.ORG - posted 00:38, Tuesday 12 March 2024 - last comment - 10:39, Tuesday 12 March 2024(76282)
Testing New DARM configuration
E. Hall, S. Pandey, S. Dwyer, L. Dartez

We took another look at the new DARM configuration this evening. Evan had a suggestion for a new test: move into NLN_ETMY and pause there, before switching back to ETMX in the new configuration (but after all the new filters have been put into place), to take measurements of the new ETMX OLG before activating it. I took us into NLN_ETMY using the guardian, then manually ran the instructions in NEW_DARM to set up the ETMX locking filters but paused before switching arms (so we were still locked with ETMY, notably without the MICH feedforward, which remained inactive for the rest of our testing).

We measured H1:SUS-ETMX_L3_LOCK_L_IN1 / H1:SUS-ETMX_L3_LOCK_L_IN2 and H1:SUS-ETMY_L3_LOCK_L_IN1 / H1:SUS-ETMY_L3_LOCK_L_IN2 (DTT template paths are below) to isolate the OLG for the new DARM path on the x-arm. Some post-processing of the data is needed to do this. We'll follow up with that.


Finishing the Transition to the new state:
After the measurements above (but before we had a chance to look at the data) we tried slowly bringing up ETMX, starting with an L3 LOCK gain of 0.01 (and an ETMY L3 LOCK gain of 1). We immediately lost lock. This indicates that there is something severely wrong with the new DARM configuration as it's currently installed. Most likely, this is not something as subtle as an instability. We'll need to walk through these steps more carefully next time.


DTT Templates:
H1:SUS-ETMX_L3_LOCK_L_IN1 / H1:SUS-ETMX_L3_LOCK_L_IN2: /ligo/home/louis.dartez/projects/20240311_new_darm_investigations/SUSETMX_L3_SS_new_darm.xml
H1:SUS-ETMY_L3_LOCK_L_IN1 / H1:SUS-ETMY_L3_LOCK_L_IN2: /ligo/home/louis.dartez/projects/20240311_new_darm_investigations/SUSETMY_L3_SS_new_darm.xml

Relevant logs:
- Old Matlab model revived (LHO:75584)
- Unsuccessful attempts in early 2024 (LHO:75308)
- Times spent successfully in new DARM state (LHO:75631)
- Calibration attempts in new DARM state (LHO:74977)
Comments related to this report
louis.dartez@LIGO.ORG - 00:39, Tuesday 12 March 2024 (76283)
We went home after we lost lock but I forgot to request NLN on the way back up. The IFO relocked itself into NLN_ETMY and got stuck there. I tried going to DARM recover but overlooked that it'd try to go through new DARM to get there. This resulted in another lockloss, after which I requested NLN.
louis.dartez@LIGO.ORG - 10:39, Tuesday 12 March 2024 (76293)
It turns out that last night's lockloss wasn't all bad; the IFO didn't lose lock while in the New DARM state (state 711)...it actually went down from state 712 which is RETURN_TO_NLN_ETMY. RETURN_TO_NLN_ETMY isn't particularly well-tested nor necessarily expected to work. The good news here is that we successfully transitioned into the NEW_DARM state last night.

This is great news! But we're not out of the woods yet. The transition resulted in a pretty hard kick (see NEW_DARM_worked.png) that we'd like to mitigate in future tests. We also only sat at NEW_DARM for about 52 seconds because I requested RECOVER_DARM, which took us in and out of NEW_DARM before losing lock. 

Lockloss Link: 1394259161


For next steps, we have integrators on the L1 and L2 LOCK banks that we're thinking of separating from their current filters and ramping them on after the initial transition. It's not clear yet if this will work. 
Images attached to this comment
H1 ISC
gabriele.vajente@LIGO.ORG - posted 20:08, Monday 11 March 2024 - last comment - 13:56, Tuesday 12 March 2024(76278)
Quiet time and coherences

Quiet time between

PDT: 2024-03-11 19:11:02.319941 PDT
UTC: 2024-03-12 02:11:02.319941 UTC
GPS: 1394244680.319941

and

PDT: 2024-03-11 19:21:42.276196 PDT
UTC: 2024-03-12 02:21:42.276196 UTC
GPS: 1394245320.276196
 

Used this time to run BruCo: https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_1394244680_STRAIN_CLEAN/

Notable coherences:

OMC-REFL_A_LF shows low-ish but broadband coherence with DARM above 30 Hz, which is suggestive of the excess noise we see now w.r.t. O4a. Evan suggests that this could be 45 MHz sideband amplitude noise, which dominates the OMC reflection and has a small transmission through the OMC.
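
A minimal sketch of how that coherence could be re-checked outside of BruCo for the quiet segment above (the OMC REFL channel name with its _OUT_DQ suffix is an assumption):

    from gwpy.timeseries import TimeSeries

    # Quiet-time segment from this entry.
    start, end = 1394244680, 1394245320

    darm = TimeSeries.get("H1:GDS-CALIB_STRAIN_CLEAN", start, end)
    refl = TimeSeries.get("H1:OMC-REFL_A_LF_OUT_DQ", start, end)  # assumed channel name

    # Put both on a common rate, then look at the broadband coherence above 30 Hz.
    fs = 2048
    coh = darm.resample(fs).coherence(refl.resample(fs), fftlength=8, overlap=4)
    print("mean coherence 30-300 Hz:", float(coh.crop(30, 300).mean()))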

Images attached to this report
Comments related to this report
matthewrichard.todd@LIGO.ORG - 11:24, Tuesday 12 March 2024 (76294)

comparing OMC_REFL with CLF to now

Attached is a plot of the power spectrum of OMC-REFL during:

 -- blue: REFL 45, O4a (January 15th 02:25 UTC)
 -- green: REFL 45, now (March 12th 12:30 UTC)
 -- brown: REFL 45, with CLF closed (March 12th 00:24 UTC)
 -- red: REFL 45, with CLF open (March 12th 01:54 UTC)


Clearly, the noise in OMC-REFL 45 was better during O4a than it is now. It also seems that the noise level does not change with the modulation depth (only looking at a 2 minute stretch); however, the source of the noise difference between CLF open and closed is not known. The noise level also seems to vary over the recent locks, which we cannot explain yet.

 

Images attached to this comment
elenna.capote@LIGO.ORG - 13:56, Tuesday 12 March 2024 (76302)

I ran a quick bruco on the OMC-REFL_A_LF channel itself, https://ldas-jobs.ligo-wa.caltech.edu/~elenna.capote/brucos/OMC_REFL/

H1 CAL (AOS)
louis.dartez@LIGO.ORG - posted 19:21, Monday 11 March 2024 - last comment - 10:33, Wednesday 13 March 2024(76271)
Updated Calibration for CAL_DELTAL_EXTERNAL
J. Kissel, L. Dartez

Jeff ran the calibration measurement suite. We processed it according to the instructions here. I then updated the CAL_DELTAL_EXTERNAL calibration using the new report at /ligo/groups/cal/H1/reports/20240311T214031Z.


Images attached to this report
Non-image files attached to this report
Comments related to this report
louis.dartez@LIGO.ORG - 14:36, Tuesday 12 March 2024 (76299)
Attaching the cal report. 


Optical gain:

2024-03-12: 3.322e+06 [DARM ERROR counts / meter]
2023-10-27: 3.34e+06 [DARM ERROR counts / meter]
KappaC at the end of O4a: 1.006
Optical gain at end of O4a: 3.336e6 [DARM ERROR counts / meter]

So the current optical gain differs from what we had at the end of O4a by about 0.4%.
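
The quoted fractional change follows directly from the numbers above:

    # Fractional change in optical gain relative to the end of O4a.
    new, old = 3.322e6, 3.336e6
    print(f"{abs(new - old) / old:.2%}")  # ~0.42%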
Images attached to this comment
Non-image files attached to this comment
jeffrey.kissel@LIGO.ORG - 10:33, Wednesday 13 March 2024 (76330)
The calibration from this report has now been added to the LDAS cluster archive such that it shows up in the official infrastructure.

Its location is
    https://ldas-jobs.ligo-wa.caltech.edu/~cal/?report=20240311T214031Z

It was tagged as "valid" and "exported" as follows:
    On a local control room workstation (or on whichever computer system the report was created)
        $ cd /ligo/groups/cal/H1/reports
        $ touch 20240311T214031Z/tags/exported
        $ touch 20240311T214031Z/tags/valid
        $ arx commit 20240311T214031Z
H1 ISC
matthewrichard.todd@LIGO.ORG - posted 18:58, Monday 11 March 2024 - last comment - 09:58, Tuesday 12 March 2024(76274)
OMC Suspension input matrix optimization for noise reduction

Matt, Stefan, Jennie W

Overall: We adjusted the OMC input matrix resulting in a factor of ten reduction in the OMC suspension drive.


The procedure is as follows:
1) Start by calculating the sensing matrix which maps OM3 and OMC degrees of freedom to QPD A and B changes for one of the suspension degrees of freedom (pitch or yaw).

2) Calculate the inverse of the sensing matrix, which will give you a possible input matrix, mapping QPD A and B changes to OM3 and OMC changes, which we use for feedback. The first row of the matrix maps only to OM3, for example, which we chose to be our primary degree of freedom with the higher bandwidth. The trouble we were finding before is that the inverse of our sensing matrix yields a strong degeneracy between the two degrees of freedom, so we push our second row in the direction that cancels most of the noise in the error signal and also reduces the degeneracy between the error signals.

This procedure can then be repeated for the other degree of freedom (pitch or yaw).
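
A minimal numpy sketch of the matrix step described above (the sensing matrix entries here are placeholders, not the measured values):

    import numpy as np

    # Hypothetical 2x2 sensing matrix for one DOF (pitch or yaw):
    # rows = (QPD A, QPD B) response, columns = (OM3, OMC) drives.
    S = np.array([[1.0, 0.9],
                  [0.8, 1.0]])

    # Inverting the sensing matrix gives a candidate input matrix mapping
    # (QPD A, QPD B) error signals back to (OM3, OMC) feedback.
    M = np.linalg.inv(S)

    # A large condition number means the two error signals are nearly
    # degenerate; in that case the second row is pushed by hand in the
    # direction that cancels the noise rather than taken straight from
    # the inverse.
    print("condition number:", np.linalg.cond(S))
    print("candidate input matrix:\n", M)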

The results of the noise reduction can be seen in the time domain as well.

Comments related to this report
jennifer.wright@LIGO.ORG - 09:58, Tuesday 12 March 2024 (76288)
stefan.ballmer@LIGO.ORG - 19:03, Monday 11 March 2024 (76275)

Plots:

- Signal direction in QPD1-QPD2 basis

- SDF snapshot

- Resulting reduction in suspension drive for similar bandwidth

- Time series of coil drive

Images attached to this comment
H1 ISC (ISC)
craig.cahillane@LIGO.ORG - posted 18:01, Monday 11 March 2024 - last comment - 10:02, Tuesday 12 March 2024(76272)
Mod depth up down test, cold OM2, 54.3 W input - March 11, 2024
Jennie, Craig

Today we reran the mod-depth up/down tests. We are not fully trusting these numbers because we suspect we are clipping a bit on IM4_TRANS, which might artificially raise all of our estimated PRGs, including those for RF9 and RF45.
Additionally, there was obvious thermalization still happening during the measurement. Still, the first-pass measurement should be useful at the ~10% level.
We should rerun after we recenter on IM4.

PRGs
9 MHz PRG = 85.6
45 MHz PRG = 36.6
Carrier PRG = 50.4

REFL ratios
9 MHz reflection ratio = 0.309
45 MHz reflection ratio = 0.268
Carrier reflection ratio = 0.055

Table of relative powers
Channel                         9 MHz    45 MHz   Carrier
H1:IMC-PWR_IN_OUT16             0.013    0.015    0.972
H1:IMC-IM4_TRANS_NSUM_OUT16     0.013    0.015    0.972
H1:LSC-REFL_A_LF_OUT16          0.065    0.065    0.870
H1:LSC-REFL_B_LF_OUT16          0.063    0.062    0.874
H1:LSC-POP_A_LF_OUT16           0.022    0.011    0.967
H1:ASC-POP_A_NSUM_OUT16         0.021    0.011    0.968
H1:ASC-POP_B_NSUM_OUT16         0.021    0.011    0.968
H1:ASC-AS_C_NSUM_OUT16          0.181    0.521    0.298
H1:ASC-OMC_A_NSUM_OUT16         0.174    0.637    0.189
H1:ASC-OMC_B_NSUM_OUT16         0.180    0.565    0.255
H1:ASC-X_TR_A_NSUM_OUT16        0.008    0.009    0.983
H1:ASC-X_TR_B_NSUM_OUT16        0.008    0.009    0.983
H1:ASC-Y_TR_A_NSUM_OUT16        0.009    0.009    0.983
H1:ASC-Y_TR_B_NSUM_OUT16        0.009    0.009    0.983
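
As a rough cross-check of the decomposition, the input-power rows are consistent with first-order phase-modulation sidebands. A small sketch (the modulation depths below are placeholders chosen only to roughly reproduce the IM4_TRANS row, not measured values):

    import numpy as np
    from scipy.special import jv

    # Placeholder modulation depths in radians (not measured values).
    gamma9, gamma45 = 0.16, 0.17

    # Carrier and first-order sideband power fractions for dual phase modulation.
    p_carrier = jv(0, gamma9)**2 * jv(0, gamma45)**2
    p_9  = 2 * jv(1, gamma9)**2  * jv(0, gamma45)**2
    p_45 = 2 * jv(1, gamma45)**2 * jv(0, gamma9)**2

    print(f"9 MHz: {p_9:.3f}   45 MHz: {p_45:.3f}   carrier: {p_carrier:.3f}")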
EDIT: We forgot where the POP beam diverter was, so I'm also posting some MEDMs and Guardian code snippets of where and how to reopen that beam diverter. We have left the POP beam diverter open for now.
Images attached to this report
Non-image files attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 10:02, Tuesday 12 March 2024 (76289)

For the 9 MHz PRG of 85.6 the nominal level on POPAIR_B_RF18 I phase PD is 1577 counts.

For the 45 MHz PRG of 36.6 the nominal level on POPAIR_B_RF90 I phase PD is 399 counts.

H1 AOS (SEI)
michael.ross@LIGO.ORG - posted 15:17, Friday 09 February 2024 - last comment - 09:15, Tuesday 12 March 2024(75802)
First BRS Remote Mass Adjusting

We successfully balanced the BRS with the remote mass adjuster for the first time this morning. The process was relatively painless. We used a Windows machine to drive the picomotor and a separate laptop to monitor the BRS readouts. The BRS is still equilibrating and will drift more into range over the next few days.


Total movement today: -60k steps
Coupling/decoupling move: 1.25k steps
Maximum: ±140k steps
Be careful: ±100k steps

Log:
-2.5k steps
-2.5k steps
+1.25k steps
-1.25k steps
-2.5k steps
+1.25k steps
-1.25k steps
-10k steps
+1.25k steps
-1.25k steps
-10k steps
+1.25k steps
-1.25k steps
-5k steps
+1.25k steps
-1.25k steps
-2.5k steps
+1.25k steps
-1.25k steps
-2.5k steps
+1.25k steps
-1.25k steps
-5k steps
+1.25k steps
-1.25k steps
-10k steps
+1.25k steps
-1.25k steps
-2k steps
+1.25k steps
-1.25k steps
-1.25k steps
+1.25k steps
-1.25k steps
-2.5k steps
+1.25k steps
-1.25k steps
-2.5k steps
+1.25k steps

 

 

 

Comments related to this report
anthony.sanchez@LIGO.ORG - 09:15, Tuesday 12 March 2024 (76284)

Finally adding the pictures to this alog.
And a link to the Google doc for making the BRS changes.
https://docs.google.com/document/d/1XBH-TVwQ3JC8rjLGXUg-LDZKzHxaTuKBC3_-B_fWWAk/edit#heading=h.7b9gxgqfvsr0

Images attached to this comment