H1 SQZ
victoriaa.xu@LIGO.ORG - posted 16:36, Tuesday 24 October 2023 - last comment - 11:05, Monday 26 February 2024(73696)
SQZ-OMC mode scan with hot OM2

Kevin, Sheila, Evan, Vicky

Summary: SQZ-OMC mode scans with hot OM2, and PSAMS 120/120 vs. 200/200. From this data, we should get single-bounce SQZ-OMC mode-matching with hot OM2, check SQZ readout losses (AS port throughput), and measure OMC losses via cavity visibility when locked/unlocked to the squeezer beam. With hot OM2, in sqz single bounce, SQZ-OMC mode-matching looks a bit better with PSAMS 120/120 than 200/200.

We'll ask Jennie W. to help us fit these SQZ-OMC mode scans. She can fit the double peak in the 2x HOM to give an accurate measure of SQZ-OMC mode-matching with hot OM2 at these two PSAMS settings. Here we just naively calculate the mismatch from the relative power in TEM20 (TEM20/(TEM00 + TEM10/01 + TEM20)), and the total power not in TEM00 (i.e. 1 - TEM00/(TEM00 + TEM10/01 + TEM20)), to get the following estimates of SQZ-OMC mode matching (a short sketch of this arithmetic follows the estimates):

PSAMS 120/120, scan: 10/24/23 19:46:53 UTC + 200 seconds.
   --> mismatch ~ TEM20/peak_sums ~ 2%.      Total incl. mismatch + misalignment: 1-tem00/peak_sums ~ 8%.
PSAMS 200/200, scan: 10/24/23 19:04:57 UTC + 200 seconds.
   --> mismatch ~ TEM20/peak_sums ~ 5%.      Total incl. mismatch + misalignment: 1-tem00/peak_sums ~ 12%.
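
As a reference, a minimal sketch of this naive peak-height arithmetic (the heights below are placeholders, not the measured values):

# Naive SQZ-OMC matching estimate from mode-scan peak heights.
tem00 = 0.90   # relative height of the TEM00 peak (placeholder)
tem01 = 0.06   # TEM01/10 peaks combined (placeholder)
tem20 = 0.02   # TEM20 second-order peak (placeholder)

peak_sum = tem00 + tem01 + tem20
mismatch = tem20 / peak_sum         # power in the 2nd-order mode only
total_non00 = 1 - tem00 / peak_sum  # mismatch + misalignment together

print(f"mismatch ~ {mismatch:.1%}, total non-TEM00 ~ {total_non00:.1%}")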

We will follow up with analysis of OMC loss measurements based on cavity visibility, more accurate SQZ-OMC mode mismatches from these scans, and checks of the single-bounce SQZ power through the AS port.

---------------------------------------------------------------------------

Notes:

---------------------------------------------------------------------------

Some relevant alogs, as we try to piece together the SQZ-IFO, IFO-OMC, and SQZ-OMC mode matchings:

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:38, Thursday 22 February 2024 (75931)

Thanks to Vicky for helping me update the code to work for SQZ measurements. I had some trouble fitting these in the past because the fitting code was not subtracting off the dark current. This doesn't matter so much for mode scans using the PSL, since that beam puts much higher power through the OMC than the SQZ beam (16 mA on the DCPDs vs. 0.5 mA).
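
To illustrate why the dark-current subtraction matters at these powers, a minimal sketch (the data values and off-resonance interval are made up; this is not the actual fitting code):

import numpy as np

def subtract_dark(dcpd_sum, dark_slice):
    # Estimate the dark offset from a stretch of the scan where the OMC is far
    # off resonance, then remove it so peak heights scale with optical power.
    dark = np.median(dcpd_sum[dark_slice])
    return dcpd_sum - dark

# Hypothetical scan samples in mA: a fixed dark offset is negligible against a
# 16 mA PSL TEM00 peak, but not against ~0.01 mA SQZ higher-order-mode peaks.
dcpd = np.array([0.02, 0.03, 0.52, 0.03, 0.02])
print(subtract_dark(dcpd, slice(0, 2)))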

For the first measurement, taken on 24th October 2023 with hot OM2 and PSAMS (ZM4 at 120 V, ZM5 at 120 V):

I used 70s of data taken starting at 1382212031.

See attached plots of the mode scan with identified peaks, and the carrier 02 peaks fitted as a sum of lorentzians.

The blue line shows the data zoomed in to the C02 peak. The red line shows the sum of lorentzians using the fitted parameters: both centre frequencies, both amplitudes, and the half-width at half-maximum of an individual peak.

The purple line shows the lorentzian sum evaluated with the initial fitting parameters.
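
For illustration, a minimal sketch of such a two-lorentzian fit (scipy-based, with synthetic data standing in for the calibrated scan; widths and noise level are made up):

import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(f, f1, f2, a1, a2, hwhm):
    # Sum of two lorentzians sharing one half-width at half-maximum,
    # as described above for the split carrier 02 peak.
    lor = lambda f0, a: a * hwhm**2 / ((f - f0)**2 + hwhm**2)
    return lor(f1, a1) + lor(f2, a2)

# Synthetic stand-in for the data zoomed in around the C02 peak.
rng = np.random.default_rng(0)
freq = np.linspace(148.8, 150.0, 400)  # MHz
power = two_lorentzians(freq, 149.153, 149.665, 0.0071, 0.0062, 0.03)
power += 1e-4 * rng.standard_normal(freq.size)

p0 = [149.1, 149.7, 0.007, 0.006, 0.05]  # guesses: centres, amplitudes, HWHM
popt, pcov = curve_fit(two_lorentzians, freq, power, p0=p0)
print("mode splitting ~ %.3f MHz" % abs(popt[1] - popt[0]))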

 

The fitted mode spacing is 149.665 - 149.153 MHz = 0.512 MHz, which is less than the expected HOM spacing of 0.588 MHz from this entry, which uses the original measurements by Koji in Table 25.

The mode-mismatch is (0.0062 + 0.0071) / (0.0062 + 0.0071 + 0.45) = 2.9 % for the 02 modes, with the lower-frequency mode (horizontal, I think) being higher in magnitude.


Code to run the mode scans is OMCScan_nosidebands6.py and the fitting code is in fit_two_peaks_no_sidebands6.py, located in the labutils/omcscan git repository on the /dev branch, run using the labutils conda environment (at labutils gitlab).

Run OMCscan_nosidebands6.py with

python OMCscan_nosidebands6.py 1382212031 70 "PSAMS 120/120, SQZ-OMC 1st scan" "single bounce" --verbose -m -p 0.008 -o 2

It is also necessary to hard-code the C02 mode as the 5th largest peak and the 01 as the 3rd largest in order to get a good fit, since the sidebands are off.

Inside OMCscan_nosidebands6.py

find the method:

def identify_C02(self):

then change the lines shown after:

#set frequency to be that of third largest peak.

to read:

third_larg = np.argsort(self.peak_heights)[-3]  # third largest is 01

fourth_larg = np.argsort(self.peak_heights)[-5]  # fifth largest is 02

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 11:05, Monday 26 February 2024 (75933)

For the second measurement, taken on 24th October 2023 with hot OM2 and PSAMS (ZM4 at 200 V, ZM5 at 200 V):

I used 80s of data taken starting at 1382209515.

See attached plots of the mode scan with identified peaks, and the carrier 02 peaks fitted as a sum of lorentzians.

The blue line shows the data zoomed in to the C02 peak. The red line shows the sum of lorentzians using the fitted parameters: both centre frequencies, both amplitudes, and the half-width at half-maximum of an individual peak.

The purple line shows the lorentzian sum evaluated with the initial fitting parameters.

 

The fitted mode spacing is 149.757 - 149.204 MHz = 0.552 MHz, which is less than the expected HOM spacing of 0.588 MHz from this entry, which uses the original measurements by Koji in Table 25.

The mode-mismatch is (0.019 + 0.016) / (0.016 + 0.019 + 0.42) = 0.054 = 5.4 % for the 02 modes, with the lower-frequency mode (horizontal, I think) being higher in magnitude.


Code to run the mode scans is OMCScan_nosidebands7.py and the fitting code is in fit_two_peaks_no_sidebands7.py, located in the labutils/omcscan git repository on the /dev branch, run using the labutils conda environment (at labutils gitlab).

Run OMCscan_nosidebands7.py with

python OMCscan_nosidebands7.py 1382209515 80 "PSAMS 200/200, SQZ-OMC 2nd scan" "single bounce" --verbose -m -o 2

It is also necessary to hard-code the C02 mode as the 4th largest peak and the 01 as the 3rd largest in order to get a good fit, since the sidebands are off.

Inside OMCscan_nosidebands7.py

find the method:

def identify_C02(self):

then change the lines shown after:

#set frequency to be that of third largest peak.

to read:

third_larg = np.argsort(self.peak_heights)[-3]  # third largest is 01

fourth_larg = np.argsort(self.peak_heights)[-4]  # fourth largest is 02

Non-image files attached to this comment
H1 TCS
thomas.shaffer@LIGO.ORG - posted 16:27, Tuesday 24 October 2023 - last comment - 09:04, Wednesday 25 October 2023(73704)
Swapped TCSX chiller for new chiller

Summary - Camilla and I removed the running TCSX chiller and replaced it with a new ThermoFlex 1400 SN#1153600201231003.

We've been having some issues with the TCSX laser relocking itself during Observing (ex: alog73331). This seems to be due to inconsistent water temperature from two of our three chillers, causing the temperature of the laser to shift slightly. We've swapped out SN#0110193301120813 for the new chiller and sent the spare chiller off for service. We input the settings from the CO2Y chiller, and the new chiller fired right up and seems to be outputting the correct flow and pressure. We'll keep an eye on it over the next few days.

Comments related to this report
thomas.shaffer@LIGO.ORG - 09:04, Wednesday 25 October 2023 (73729)

Flow with the new chiller seems much more stable, and the laser is operating at a slightly lower temperature (23.4C vs 24.5C) and a slightly higher output power (41.9W vs 40.9W). In the attached shot I'm unsure why the flow rate, as seen by the flow meter under the BSC, starts a bit higher and then settles over the course of ~4 hours back to where it is now. I'd expect the system to settle much faster than that. For reference, alog72267 is the last time we swapped the chillers, and SN813 didn't show this.

When packing up our old spare chiller (SN#822) to be sent in for service, we found a small piece of metal, about the size of a BB, in the outlet quick disconnect. This clearly wasn't helping the flow, but this chiller also had a refrigerant leak. Definitely time for it to be sent in.

Images attached to this comment
H1 General (TCS)
anthony.sanchez@LIGO.ORG - posted 16:15, Tuesday 24 October 2023 - last comment - 16:37, Tuesday 24 October 2023(73714)
Tuesday Maintenance Day Eve Shift Start

TITLE: 10/24 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 15mph Gusts, 12mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY:
Full day of maintenance, so locking will start here after the H1:TCS-ITMX_HWS_LIVE_AQUISITION_GPSTIM SPM channel issue is resolved.
Tagging TCS.

I will be watching EY and EX temps, and OPO SDF diffs when getting to Observing.

I also see that H1:DAQ-H1EDC_CHAN_NOCON is 88, which suggests a few things are still not working correctly.
Tagging CDS.

 

Comments related to this report
camilla.compton@LIGO.ORG - 16:37, Tuesday 24 October 2023 (73715)

The TCS-ITMX_HWS channels-not-connecting errors (88 of them) were because I hadn't yet restarted the ITMX HWS code that creates these channels after Jonathan's work on h1hwsmsr (73699). I've now restarted the code and it's working well. This doesn't affect locking.

H1 General
oli.patane@LIGO.ORG - posted 16:08, Tuesday 24 October 2023 (73713)
Ops DAY Shift End

TITLE: 10/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Maintenance is over and we are starting the process of getting back up.

Things to keep track of:

- EY heater tripped off at some point and caused temps to drop, should be good now but keep close eye on EY temps.

- There will be an OPO TEMP SDF, but shouldn't be any other sqz diffs
LOG:

15:00UTC Detector Locked for 28.5hrs, all SUS Charge measurements ran
23:00 Starting to move back towards locking    

Start Time System Name Location Laser_Haz Task Time End
15:03 TCS Camilla CR N CO2 blast on and off 15:10
15:04 FAC Karen EY n Tech clean 16:35
15:05 FAC Kim EX n Tech clean 16:21
15:07 FAC Randy EX/EY n Cleanroom moving stuff 21:10
15:11 VAC Jordan, Travis FCETube n Closing gate valves and replacing ion pump 19:27
15:15 ISC TJ CR n Moving Common CARM Gain 15:27
15:24 SQZ Vicky Remote n SQZ adjustments 15:53
15:28 PSL Jason CR n PMC alignment 15:38
15:30 VAC Janos FCEncl n Getting stuff 15:45
15:51 EE Fil LVEA n New electronics install 15:52
15:53 SQZ Sheila, Vicky CR n OMC Scans (misaligning ITMX, ETMX) 22:28
15:58 ISI Arnaud remote n ETMX tests 17:23
16:01 FAC Cindi FCES n Tech clean 17:53
16:07 TCS Camilla, TJ Mech Room n Replacing TCSX CO2 Chiller 16:56
16:18 ISI Jim CR n ISI tests (ETMY misaligned) 17:24
16:21 FAC Kim LVEA n Tech clean 18:40
16:43 TCS Camilla LVEA n Into LVEA for TCS 16:56
16:51 CDS Jonathan CER n Cloning HWS MSR disk 19:05
16:52 FAC Karen LVEA n Tech clean 18:22
17:00 HWS TJ, Camilla EX VEA YES Swap HWS 22:54
17:10 PSL Jason, Austin Optics Lab n Oplev work 18:58
17:14 FAC Chris, Eric Sitewide n Grease supply fans 17:59
17:17 DMT Dave remote no add channels to dmt epics, DAQ restart required 20:31
17:17 DAQ Dave remote no Add zotvac0 epics channels to DAQ. DAQ restart required 20:31
17:18 DAQ Dave remote no DAQ RESTART. Needed for EDC changes and new h1susprocpi model 20:31
17:24 ISI Jim CR n ETMX/ETMY tests 22:19
17:25 FAC Mitch EX/EY n Dust mon/HEPI pump checks 18:06
17:54 FAC Cindi LVEA Receiving n Getting cardboard 22:26
18:19 ISC Keita LVEA n OM2 19:17
18:28 SUS Dave remote no Restart Vlad's new h1susprocpi model, DAQ restart required 18:29
18:29 FAC Tyler, Eric LVEA n Walking floor 18:52
18:29 SUS Dave remote no Install Vlad's new h1susprocpi model, DAQ restart is required 20:31
20:03 FAC Christina OSB n Moving pallet w/ forklift 22:13
20:09 FAC Tyler, Eric FCES n Tour 20:31
20:11 ISC Keita LVEA n OM2 work 20:53
20:14 VAC Jordan, Travis, Janos FCETube n Pump swap 21:28
20:24   TJ LVEA n Looking for parts 20:49
20:32 FAC Tyler, Eric EY n Look at EY temp excursion 21:29
20:41 TCS Jason, Randy, Austin LVEA n Putting 3IFO TCSX away 20:59
21:14 HWS TJ, Camilla Optics Lab, LVEA n Looking for parts 21:35
21:30 VAC Janos EX/EY n Working with pumps 22:11
21:53 CDS Jonathan CUR n Network switching 22:54
22:05 ISC Keita CER n Measuring Beckhoff voltage 22:29
22:12 VAC Janos LVEA n Pumps 22:18
LHO VE
jordan.vanosky@LIGO.ORG - posted 16:04, Tuesday 24 October 2023 (73712)
Installation of FCT Ion Pump (C1 Cross)

Jordan, Mitchell

Today, we were able to install one of the new 150 l/s ion pumps on the FC C1 cross. The Ion Pump/Tee/Angle valve assembly was pre-built in the staging building and then pumped down and leak checked. The assembly was stored under vacuum and then brought to the Filter Cavity Enclosure.

We closed FCV-3 (BSC3), FCV-5, FCV-6, and FCV-9 (HAM8) to isolate the C1 cross. Then closed the -Z axis GV on the C6 cross to isolate the ion pump port. We then vented the ion pump assembly with N2 and removed the angle valve/6" zero-length reducer on the cross, and installed the ion pump on the 6" CF port. A genie lift was used to lift/hold the ion pump while the connections were made.

Once installed, we used the leak detector/small turbo to pump down the assembly, and then helium leak tested the one CF connection that was made. There was no detectable signal above the helium background of 9.5E-11 Torr-l/s.

The ion pump was powered on locally, and quickly dropped to ~8E-8 Torr, using a new IPCMini controller. The -Z gate valve remains closed, and the rest of the FCT gate valves were reopened once the ion pump was leak checked and powered on. We will continue with the installation of the rest of the pumps in the LVEA in the following weeks.

Images attached to this report
H1 AOS
louis.dartez@LIGO.ORG - posted 15:35, Tuesday 24 October 2023 (73711)
LVEA swept
LVEA swept.
LHO VE (VE)
travis.sadecki@LIGO.ORG - posted 14:59, Tuesday 24 October 2023 (73709)
GV8 Annulus Ion Pump replaced

The GV8 annulus ion pump was replaced today with a rebuilt, Galaxy-style pump body and a rebuilt old-style (fanless) MiniVac controller.  A controller swap took place last week, but upon powering on this controller with the new pump, I noticed that the polarity of the controller was positive.  So, I powered it down and swapped it with another rebuilt controller with negative polarity.  

The annulus system was pumped down to the mid e-5 torr range via local turbo and aux cart, at which time I powered on the ion pump.  Within a couple of minutes, the display on the controller showed 1-light of ion current (good).  The local turbo and aux cart were disconnected and the system restored to nominal Observing-run mode.  

H1 CDS
david.barker@LIGO.ORG - posted 14:16, Tuesday 24 October 2023 - last comment - 14:44, Tuesday 24 October 2023(73705)
CDS Maintenance Summary: Tuesday 24th October 2023

WP11492 SUSPROC PI PLL model changes

Vladimir, Naoki, Erik, Dave

A new h1susprocpi model was installed. A DAQ restart was required

WP11485 h1hwsmsr disk replacement

Jonathan.

The failed 2TB HDD disk which is part of the /data raid was replaced with a 2TB SSD drive. At time of writing it is 70% done with the new disk rebuild.

Jonathan also cloned the boot disk. I verified that /data is being backed up to LDAS on a daily basis at 5am.

WP11478 Add HOFT and NOLINES H1 effective range to EPICS and DAQ.

Jonathan, Dave:

On Keith's suggestion, the HOFT and NOLINES versions of the H1 effective range were added to the dmt2epics IOC on h1fescript0. The MPC and GPS channels were also added to the H1EPICS_DMT.ini for inclusion into the DAQ. A DAQ restart was required.

WP11488 Deactivate opslogin

Jonathan

The old opslogin machine (not to be confused with opslogin0) was powered down.

WP11479 Add zotvac0 epics monitor channels to DAQ

Dave:

H1EPICS_CDSMON was modified to add zotvac0's channels. A DAQ restart was required.

DAQ Restart

Jonathan, Dave:

The DAQ was restarted for the above changes. This was a very messy restart.

0-leg was restarted. h1gds0 needed a second restart to sync its channel list.

EDC on h1susauxb123 was restarted

Eight minutes later fw0 spontaneously restarted itself. At this point the h1susauxb123 front end locked up, and all models and the EDC crashed.

Jonathan connected a local console to h1susauxb123, but no errors were printed and the keyboard was unresponsive. h1susauxb123 was rebooted.

After the EDC came back online, the DAQ 1-leg was restarted.

h1gds1 needed a second restart to sync up its disks.

Comments related to this report
david.barker@LIGO.ORG - 14:30, Tuesday 24 October 2023 (73707)

Tue24Oct2023
LOC TIME HOSTNAME     MODEL/REBOOT
13:01:42 h1oaf0       h1susprocpi <<< model restart


13:02:34 h1daqdc0     [DAQ]  <<< 0-leg restart
13:02:43 h1daqfw0     [DAQ]
13:02:43 h1daqtw0     [DAQ]
13:02:44 h1daqnds0    [DAQ]
13:02:51 h1daqgds0    [DAQ]
13:03:21 h1susauxb123 h1edc[DAQ] <<< EDC restart
13:03:52 h1daqgds0    [DAQ] <<< 2nd gds0 restart


13:09:26 h1daqfw0     [DAQ]  <<< spontaneous FW0 restart (crash of h1susauxb123 at this point)


13:22:34 h1susauxb123 ***REBOOT*** <<< reboot h1susauxb123, start EDC
13:23:20 h1susauxb123 h1edc[DAQ]
13:23:36 h1susauxb123 h1iopsusauxb123
13:23:49 h1susauxb123 h1susauxb123


13:26:01 h1daqdc1     [DAQ] <<< 1-leg restart
13:26:11 h1daqfw1     [DAQ]
13:26:11 h1daqtw1     [DAQ]
13:26:12 h1daqnds1    [DAQ]
13:26:20 h1daqgds1    [DAQ]
13:26:48 h1daqgds1    [DAQ] <<< 2nd GDS1 restart
 

david.barker@LIGO.ORG - 14:44, Tuesday 24 October 2023 (73708)

DMT2EPICS configuration file, HOFT and NOLINES added:

{
    "prefix": "H1:",
    "entries": [
        {
            "engine": "dmt",
            "config": {
                "url": "https://marble.ligo-wa.caltech.edu/dmtview/SenseMonitor_CAL_H1/H1SNSW%20EFFECTIVE%20RANGE%20%28MPC%29/data.txt",
                "pv-name": "CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC",
                "pv-gps": "CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC_GPS",
                "disconnected-value": -1.0,
                "period": 30.0
            }
        },
        {
            "engine": "dmt",
            "config": {
                "url": "https://marble.ligo-wa.caltech.edu/dmtview/SenseMonitor_Clean_H1/H1SNSC%20EFFECTIVE%20RANGE%20%28MPC%29/data.txt",
                "pv-name": "CDS-SENSMON_CLEAN_SNSC_EFFECTIVE_RANGE_MPC",
                "pv-gps": "CDS-SENSMON_CLEAN_SNSC_EFFECTIVE_RANGE_MPC_GPS",
                "disconnected-value": -1.0,
                "period": 30.0
            }
        },
        {
            "engine": "dmt",
            "config": {
                "url": "https://marble.ligo-wa.caltech.edu/dmtview/SenseMonitor_hoft_H1/H1SNSH%20EFFECTIVE%20RANGE%20%28MPC%29/data.txt",
                "pv-name": "CDS-SENSMON_HOFT_SNSH_EFFECTIVE_RANGE_MPC",
                "pv-gps": "CDS-SENSMON_HOFT_SNSH_EFFECTIVE_RANGE_MPC_GPS",
                "disconnected-value": -1.0,
                "period": 30.0
            }
        },
        {
            "engine": "dmt",
            "config": {
                "url": "https://marble.ligo-wa.caltech.edu/dmtview/SenseMonitor_Nolines_H1/H1SNSL%20EFFECTIVE%20RANGE%20%28MPC%29/data.txt",
                "pv-name": "CDS-SENSMON_NOLINES_SNSL_EFFECTIVE_RANGE_MPC",
                "pv-gps": "CDS-SENSMON_NOLINES_SNSL_EFFECTIVE_RANGE_MPC_GPS",
                "disconnected-value": -1.0,
                "period": 30.0
            }
        }
    ]
}
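
Not the actual dmt2epics implementation, but a sketch of the polling loop each entry above describes: fetch the DMT data.txt every `period` seconds, publish the value and its GPS time to the two PVs, and fall back to `disconnected-value` on failure. The "<gps> <value>" layout of data.txt is an assumption, and pyepics is used for the channel-access writes:

import time
import requests
from epics import caput  # pyepics

def poll_once(cfg, prefix="H1:"):
    # One polling cycle for a single "config" entry from the file above.
    try:
        txt = requests.get(cfg["url"], timeout=10).text
        gps, value = txt.split()[:2]  # assumed layout: "<gps> <value>"
        caput(prefix + cfg["pv-name"], float(value))
        caput(prefix + cfg["pv-gps"], float(gps))
    except Exception:
        # DMT unreachable or unparsable: publish the sentinel value instead.
        caput(prefix + cfg["pv-name"], cfg["disconnected-value"])

cfg = {
    "url": ("https://marble.ligo-wa.caltech.edu/dmtview/SenseMonitor_hoft_H1/"
            "H1SNSH%20EFFECTIVE%20RANGE%20%28MPC%29/data.txt"),
    "pv-name": "CDS-SENSMON_HOFT_SNSH_EFFECTIVE_RANGE_MPC",
    "pv-gps": "CDS-SENSMON_HOFT_SNSH_EFFECTIVE_RANGE_MPC_GPS",
    "disconnected-value": -1.0,
    "period": 30.0,
}
while True:
    poll_once(cfg)
    time.sleep(cfg["period"])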
 

H1 DetChar
gabriele.vajente@LIGO.ORG - posted 12:19, Tuesday 24 October 2023 (73701)
Brute force coherences

Here's a BruCo report for GDS-CALIB_STRAIN_NOLINES from last night's lock: https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_1382152121_STRAIN/

Some highlights:

Large frequency bands are completely devoid of coherence.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 12:19, Tuesday 24 October 2023 - last comment - 12:25, Tuesday 24 October 2023(73700)
Tue CP1 Fill

Tue Oct 24 10:05:21 2023 INFO: Fill completed in 5min 18secs

 

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 12:25, Tuesday 24 October 2023 (73702)

Because CP1's dewar was being filled with LN2 this morning, Gerardo requested soon after the fill that the LLCV valve be closed down from 37.5% to 29.0%. The attached trend shows LN2 sputtering in the discharge line, which was stopped by throttling the valve.

Images attached to this comment
H1 SEI
arnaud.pele@LIGO.ORG - posted 12:05, Tuesday 24 October 2023 - last comment - 07:20, Wednesday 25 October 2023(73688)
ETMX ISI - 1.23Hz hunting

Summary - Jim fixed it with a little shake.

We cycled through the various ISI state controls. The 1 Hz line would start appearing when stage 1 is isolated. In any state below this point, we can clearly see the difference in the 'sensor response' of the local H1 L4C wrt the T240 or CPS. While Jim was taking an OLG measurement, the ISI tripped, which seems to have changed the response of the H1 L4C back to the nominal response; see the first pdf attached, showing the L4C/T240 local TF before (p1) vs after (p2) the ISI trip. The 2nd pdf attached shows the ISI spectra in the isolated state before (p1) vs after (p2) the shake, with no other changes.
We've had sticky L4Cs in the past and solved them in a similar way (see alog 38939), but the symptoms were much more obvious than the slight change in resonant frequency we are seeing here.

Timeline of tests :

16:04 - 16:12 UTC ETMX ISI/HEPI OFFLINE
16:15 - 16:30 UTC HEPI ON / ETMX ISI OFFLINE
16:31 - 16:37 UTC HEPI ON /ETMX DAMPED
16:38 - 16:44 UTC HEPI ON / ETMX ST1 ISO - We can see the giant ~1Hz line
16:45 - 16:48 UTC HEPI ON/ ETMX ST1 ISO - H1 L4C gain 0.5 - sym filter off - ~1Hz line goes down
16:50 - 16:52 UTC HEPI ON / ETMX ST1/ST2 ISO - H1 L4C gain 0.5 - sym filter off
16:55 - 17:07 UTC Back to nominal
17:25 - Jim changing stage 1 Rz blend to noL4C
17:38 - Jim changing stage 1 Y blend to noL4C
18:00 - Jim measuring Rz olgf

Ref alog 73625

Non-image files attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 16:43, Tuesday 24 October 2023 (73716)

The ITMY peak is also coming from a "stuck" H1 L4C. This problem is kind of subtle: the L4C only seems to misbehave when the isolation loops are on and the accelerations on the L4C are low. Because tripping ETMX seemed to fix that L4C, I tried it on ITMY. To do this, I put the ISI in damped, ramped on an offset on the ST1 H1 actuator, then turned it off with a zero ramp time. I did this for 2k, 3k and 4k counts of drive. All of these caused a bigger signal than the ETMX trip earlier, but the ITMY H1 L4C is still "stuck". The attached ASDs compare the corner 1 and corner 2 L4C-to-T240 local TFs before and after the whacks: no difference before vs after (brown and pink are before, red and blue are after).
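
For reference, a minimal sketch of the kind of L4C-to-T240 local transfer-function estimate being compared here (the channel names and GPS start time are hypothetical placeholders; gwpy and scipy are assumed):

from gwpy.timeseries import TimeSeries
from scipy.signal import csd, welch

start = 1382150000  # hypothetical GPS start time
# Hypothetical ST1 corner-1 channel names; substitute the real L4C/T240 ones.
l4c = TimeSeries.get('H1:ISI-ITMY_ST1_L4CINF_H1_OUT_DQ', start, start + 600)
t240 = TimeSeries.get('H1:ISI-ITMY_ST1_T240INF_H1_OUT_DQ', start, start + 600)

fs = l4c.sample_rate.value
f, Pxy = csd(t240.value, l4c.value, fs=fs, nperseg=int(120 * fs))
f, Pxx = welch(t240.value, fs=fs, nperseg=int(120 * fs))
tf = Pxy / Pxx  # T240 -> L4C transfer-function estimate
# A healthy L4C gives a repeatable |tf| here; a "stuck" one shows a shifted
# response near 1 Hz, as in the before/after plots attached.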

But changing the ITMY Y and RZ ST1 blends to 250 mHz blends that don't use the H1 L4C makes the 1.something Hz peak on ITMY go away. This also worked on ETMX. I've set both ISIs to not use their H1 L4Cs; we'll watch for a while and re-evaluate next week. At this point, only ITMX is still using its H1 L4C.

Images attached to this comment
gabriele.vajente@LIGO.ORG - 07:20, Wednesday 25 October 2023 (73726)

The two lines around 1.3 Hz are gone from DARM.

Unfortunately it's hard to tell if this improved the noise above 10 Hz, because there are a lot of scattered-light events (4 Hz cryobaffle?).

Images attached to this comment
H1 CDS
jonathan.hanks@LIGO.ORG - posted 12:02, Tuesday 24 October 2023 (73699)
WP11485 work on the h1hwsmsr system
I replaced two disks on the h1hwsmsr system.

 * cloned a failing boot disk
 * replaced a failed raid drive that holds the image repository

I used clonezilla to replicate the boot disk. Then, after booting into the newly cloned disk, I replaced the failed raid disk. The data is now being automatically replicated onto the new disk; it should take a few hours.
H1 ISC
gabriele.vajente@LIGO.ORG - posted 11:39, Tuesday 24 October 2023 - last comment - 12:45, Wednesday 25 October 2023(73698)
Resonant gains in DARM2

I prepared a new filter module with resonant gains at 2.8 Hz and 3.4 Hz. This can be tried tomorrow (turned on during some commissioning time) to reduce the DARM RMS and see if it helps the non-stationary noise.

The new FM is loaded into DARM2 FM1 "RG2.8-3.4". It is not engaged now.
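
For context, a sketch of an RGain-style resonant gain section like the ones presumably in this FM (the actual foton parameters, pole Q and height, are not given in this entry; the values below are placeholders):

import numpy as np
from scipy import signal

fs = 16384  # assumed loop model rate

def resonant_gain(f0, height, q_pole=20.0):
    # Complex pole pair at f0 with Q = q_pole and a zero pair at the same f0
    # with Q reduced by `height`: |H(f0)| ~ height, while |H| -> 1 away from
    # f0, so the loop gain is boosted only in a narrow band around f0.
    w0 = 2 * np.pi * f0
    b = [1.0, w0 * height / q_pole, w0**2]  # zeros (Q = q_pole / height)
    a = [1.0, w0 / q_pole, w0**2]           # poles
    return signal.bilinear(b, a, fs=fs)

# Cascade two sections for an "RG2.8-3.4"-style module.
b1, a1 = resonant_gain(2.8, height=10)
b2, a2 = resonant_gain(3.4, height=10)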

 

For future reference, here are the FMs that are used during lock acquisition:

DARM1: FM1 FM2 FM3 FM4 FM7 FM9 FM10

DARM2: FM3 FM4 FM5 FM6 FM7 FM8 FM9 FM10

leaving DARM1 FM5,6,8 and DARM2 FM1,2 unused

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 12:45, Wednesday 25 October 2023 (73737)

Camilla and I turned on DARM2 FM1 from 19:43:37 to 20:05:50 UTC October 25.

H1 ISC
vladimir.bossilkov@LIGO.ORG - posted 10:09, Tuesday 24 October 2023 - last comment - 13:31, Tuesday 24 October 2023(73692)
h1susprocpi model changes

WP #11489 and #11492 (not marked as Tuesday maintenance, as LHO apparently doesn't track that).

I have altered the h1susprocpi model in the following ways:

These changes affected libraries:

The changes will take effect when the model is restarted on h1oaf0 and the DAQ is restarted, as the new filter block adds channels.

Comments related to this report
vladimir.bossilkov@LIGO.ORG - 13:31, Tuesday 24 October 2023 (73703)

I have initialised all new filterbank settings from this change and set them to monitored in SDF.

I have specifically set a limit of 10 on every MODE's PLL AMP_FINAL filterbank, replicating the old hard-coded functionality.

H1 DetChar
gabriele.vajente@LIGO.ORG - posted 08:50, Wednesday 18 October 2023 - last comment - 18:35, Friday 27 October 2023(73546)
Low Frequency Noise (<50 Hz)

Using two periods of quiet time during the last couple of days (1381575618 + 3600s, 1381550418 + 3600s) I computed the usual coherences:

https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_STRAIN_1381550418/
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_STRAIN_1381575618/

The most interesting observation is that, for the first time as far as I can remember, there is no coherence above threshold with any channels for wide bands in the low frequency range, notably between 20 and 30 Hz, and also for many bands above 50 Hz. I'll assume for now that most of the noise above ~50 Hz is explained by thermal noise and quantum noise, and focus on the low frequency range (<50 Hz).

Looking at the PSDs for the two hour-long times, the noise below 50 Hz seems to be quite repeatable and closely follows a 1/f^4 slope. Looking at a spectrogram (especially when whitened with the median), one can see that there is still some non-stationary noise, although not very large. So it seems to me that the noise below ~50 Hz is made up of some stationary 1/f^4 unknown noise (not coherent with any of the 4000+ auxiliary channels we record) and some non-stationary noise. This is not hard evidence, but an interesting observation.

Concerning the non-stationary noise, I think there is evidence that it's correlated with the DARM low-frequency RMS. I computed the GDS-CALIB RMS between 20 and 50 Hz (whitened to the median so that the frequency bins are weighted equally even though the PSD has a steep slope), and the LSC_DARM_IN1 RMS between 2.5 and 3.5 Hz (I tried a few different bands and this is the best). There is a clear correlation between the two RMSs, as shown in a scatter plot, where every dot is the RMS computed over 5 seconds of data, using a spectrogram.
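
A minimal sketch of this band-limited RMS comparison (gwpy-based; the GPS start time and the _DQ suffix on the DARM channel are assumptions):

import numpy as np
from gwpy.timeseries import TimeSeries

t0 = 1382150000  # hypothetical start of a quiet hour
hoft = TimeSeries.get('H1:GDS-CALIB_STRAIN', t0, t0 + 3600)
darm = TimeSeries.get('H1:LSC-DARM_IN1_DQ', t0, t0 + 3600)

def band_rms(ts, flo, fhi, stride=5, whiten=False):
    # Amplitude spectrogram with `stride`-second segments; optionally whiten
    # each frequency bin by its median over time, so the steep PSD slope does
    # not let the lowest bins dominate the RMS.
    sg = ts.spectrogram(stride) ** 0.5
    data = sg.crop_frequencies(flo, fhi).value
    if whiten:
        data = data / np.median(data, axis=0)
    return np.sqrt(np.mean(data**2, axis=1))

rms_hf = band_rms(hoft, 20, 50, whiten=True)  # non-stationary noise proxy
rms_lf = band_rms(darm, 2.5, 3.5)             # DARM low-frequency RMS
# scatter(rms_lf, rms_hf): each dot is one 5 s stride of the spectrogram.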

 

 

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 11:01, Wednesday 18 October 2023 (73554)

DARM low frequency (< 4 Hz) is highly coherent with ETMX M0 and R0 L damping signals. This might just be recoil from the LSC drive, but it might be worth trying to reduce the L damping gain and see if DARM RMS improves

 

Images attached to this comment
gabriele.vajente@LIGO.ORG - 13:04, Wednesday 18 October 2023 (73560)

Bicoherence is also showing that the noise between 15 and 30 Hz is modulated according to the main peaks visible in DARM at low frequency.

Images attached to this comment
elenna.capote@LIGO.ORG - 20:53, Wednesday 18 October 2023 (73579)

We might be circling back to the point where we need to reconsider/remeasure our DAC noise. Linking two different (and disagreeing) projections from the last time we thought about this: the DAC noise has the correct slope. However, Craig's projection and the noisemon measurement did not agree, something we never resolved.

Projection from Craig: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=68489

Measurement from noisemons: https://alog.ligo-wa.caltech.edu/aLOG/uploads/68382_20230403203223_lho_pum_dac_noisebudget.pdf

christopher.wipf@LIGO.ORG - 11:15, Friday 20 October 2023 (73620)

I updated the noisemon projections for PUM DAC noise, and fixed an error in their calibration for the noise budget. They now agree reasonably well with the estimates Craig made by switching coil driver states. From this we can conclude that PUM DAC noise is not close to being a limiting noise in DARM at present.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 09:51, Tuesday 24 October 2023 (73691)CDS, CSWG, ISC, OpsInfo, SUS
To Chris' point above -- we note that the PUMs are using 20-bit DACs, and we are NOT using any "DAC dither" (see the aLOGs motivating why we do *not* use them at LHO: LHO:68428 and LHO:65807; namely, in the little testing that we've done we've seen no improvement, so we decided they weren't worth the extra complexity and maintenance).
christopher.wipf@LIGO.ORG - 15:25, Tuesday 24 October 2023 (73710)

If at some point there’s a need to test DAC dithers again, please look at either (1) noisemon coherence with the DAC request signal, or (2) noisemon spectra with a bandstop in the DAC request to reveal the DAC noise floor.  Without one of those measures, the noisemons are usually not informative, because the DAC noise is buried under the DAC request.
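
As an illustration of check (1), a sketch of the noisemon-to-request coherence (the L2 channel names and GPS time are hypothetical; gwpy and scipy are assumed):

from gwpy.timeseries import TimeSeries
from scipy.signal import coherence

t0 = 1382150000  # hypothetical GPS start
# Hypothetical PUM noisemon and DAC-request channels for one quadrant.
nm = TimeSeries.get('H1:SUS-ETMX_L2_NOISEMON_LL_OUT_DQ', t0, t0 + 600)
req = TimeSeries.get('H1:SUS-ETMX_L2_MASTER_OUT_LL_DQ', t0, t0 + 600)

fs = nm.sample_rate.value
f, coh = coherence(nm.value, req.value, fs=fs, nperseg=int(8 * fs))
# Where coh ~ 1 the noisemon just reproduces the request and says nothing
# about DAC noise; only where coh drops well below 1 can the DAC noise
# floor show through.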

christopher.wipf@LIGO.ORG - 18:35, Friday 27 October 2023 (73784)

Attached is a revised PUM DAC noisemon projection, with one more calibration fix that increases the noise estimate below 20 Hz (although it remains below DARM).

Images attached to this comment