Reports until 14:16, Tuesday 24 October 2023
H1 CDS
david.barker@LIGO.ORG - posted 14:16, Tuesday 24 October 2023 - last comment - 14:44, Tuesday 24 October 2023(73705)
CDS Maintenance Summary: Tuesday 24th October 2023

WP11492 SUSPROC PI PLL model changes

Vladimir, Naoki, Erik, Dave

A new h1susprocpi model was installed. A DAQ restart was required.

WP11485 h1hwsmsr disk replacement

Jonathan.

The failed 2TB HDD, part of the /data RAID, was replaced with a 2TB SSD. At the time of writing, the rebuild onto the new disk is 70% complete.

Jonathan also cloned the boot disk. I verified that /data is being backed up to LDAS on a daily basis at 5am.

WP11478 Add HOFT and NOLINES H1 effective range to EPICS and DAQ.

Jonathan, Dave:

On Keith's suggestion, the HOFT and NOLINES versions of the H1 effective range were added to the dmt2epics IOC on h1fescript0. The MPC and GPS channels were also added to the H1EPICS_DMT.ini for inclusion into the DAQ. A DAQ restart was required.
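The dmt2epics pattern is essentially a periodic poll of a DMT web endpoint with a fallback value on failure. Here is a minimal sketch of that pattern (not the actual IOC code; the two-column "gps value" layout of data.txt and the helper names are assumptions for illustration):

```python
# Sketch of the dmt2epics polling pattern: fetch a SenseMonitor data.txt
# URL, parse the latest (gps, value) pair, and fall back to a
# disconnected value on failure.  NOT the actual IOC code; the data.txt
# format assumed here (whitespace-separated "gps value" rows) is a guess.
from urllib.request import urlopen

DISCONNECTED = -1.0  # matches "disconnected-value" in the configuration


def parse_latest(text):
    """Return the last (gps, value) pair found in a data.txt payload."""
    for line in reversed(text.strip().splitlines()):
        parts = line.split()
        if len(parts) >= 2:
            try:
                return int(float(parts[0])), float(parts[1])
            except ValueError:
                continue
    return None, DISCONNECTED


def poll(url, timeout=10.0):
    """One polling cycle: (gps, range_mpc), or disconnected on error."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return parse_latest(resp.read().decode())
    except OSError:
        return None, DISCONNECTED
```

The real IOC would then push the value and GPS stamp into the two EPICS PVs named in the configuration, repeating every `period` seconds.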

WP11488 Deactivate opslogin

Jonathan

The old opslogin machine (not to be confused with opslogin0) was powered down.

WP11479 Add zotvac0 epics monitor channels to DAQ

Dave:

H1EPICS_CDSMON was modified to add zotvac0's channels. A DAQ restart was required.

DAQ Restart

Jonathan, Dave:

The DAQ was restarted for the above changes. This was a very messy restart.

0-leg was restarted. h1gds0 needed a second restart to sync its channel list.

The EDC on h1susauxb123 was restarted.

8 minutes later fw0 spontaneously restarted itself. At this point the h1susauxb123 front end locked up, and all models and the EDC crashed.

Jonathan connected a local console to h1susauxb123, but no errors were printed and the keyboard was unresponsive. h1susauxb123 was rebooted.

After the EDC came back online, the DAQ 1-leg was restarted.

h1gds1 needed a second restart to sync up its disks.

Comments related to this report
david.barker@LIGO.ORG - 14:30, Tuesday 24 October 2023 (73707)

Tue24Oct2023
LOC TIME HOSTNAME     MODEL/REBOOT
13:01:42 h1oaf0       h1susprocpi <<< model restart


13:02:34 h1daqdc0     [DAQ]  <<< 0-leg restart
13:02:43 h1daqfw0     [DAQ]
13:02:43 h1daqtw0     [DAQ]
13:02:44 h1daqnds0    [DAQ]
13:02:51 h1daqgds0    [DAQ]
13:03:21 h1susauxb123 h1edc[DAQ] <<< EDC restart
13:03:52 h1daqgds0    [DAQ] <<< 2nd gds0 restart


13:09:26 h1daqfw0     [DAQ]  <<< spontaneous FW0 restart (crash of h1susauxb123 at this point)


13:22:34 h1susauxb123 ***REBOOT*** <<< reboot h1susauxb123, start EDC
13:23:20 h1susauxb123 h1edc[DAQ]
13:23:36 h1susauxb123 h1iopsusauxb123
13:23:49 h1susauxb123 h1susauxb123


13:26:01 h1daqdc1     [DAQ] <<< 1-leg restart
13:26:11 h1daqfw1     [DAQ]
13:26:11 h1daqtw1     [DAQ]
13:26:12 h1daqnds1    [DAQ]
13:26:20 h1daqgds1    [DAQ]
13:26:48 h1daqgds1    [DAQ] <<< 2nd GDS1 restart
 

david.barker@LIGO.ORG - 14:44, Tuesday 24 October 2023 (73708)

DMT2EPICS configuration file, HOFT and NOLINES added:

{
    "prefix": "H1:",
    "entries": [
        {
            "engine": "dmt",
            "config": {
                "url": "https://marble.ligo-wa.caltech.edu/dmtview/SenseMonitor_CAL_H1/H1SNSW%20EFFECTIVE%20RANGE%20%28MPC%29/data.txt",
                "pv-name": "CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC",
                "pv-gps": "CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC_GPS",
                "disconnected-value": -1.0,
                "period": 30.0
            }
        },
        {
            "engine": "dmt",
            "config": {
                "url": "https://marble.ligo-wa.caltech.edu/dmtview/SenseMonitor_Clean_H1/H1SNSC%20EFFECTIVE%20RANGE%20%28MPC%29/data.txt",
                "pv-name": "CDS-SENSMON_CLEAN_SNSC_EFFECTIVE_RANGE_MPC",
                "pv-gps": "CDS-SENSMON_CLEAN_SNSC_EFFECTIVE_RANGE_MPC_GPS",
                "disconnected-value": -1.0,
                "period": 30.0
            }
        },
        {
            "engine": "dmt",
            "config": {
                "url": "https://marble.ligo-wa.caltech.edu/dmtview/SenseMonitor_hoft_H1/H1SNSH%20EFFECTIVE%20RANGE%20%28MPC%29/data.txt",
                "pv-name": "CDS-SENSMON_HOFT_SNSH_EFFECTIVE_RANGE_MPC",
                "pv-gps": "CDS-SENSMON_HOFT_SNSH_EFFECTIVE_RANGE_MPC_GPS",
                "disconnected-value": -1.0,
                "period": 30.0
            }
        },
        {
            "engine": "dmt",
            "config": {
                "url": "https://marble.ligo-wa.caltech.edu/dmtview/SenseMonitor_Nolines_H1/H1SNSL%20EFFECTIVE%20RANGE%20%28MPC%29/data.txt",
                "pv-name": "CDS-SENSMON_NOLINES_SNSL_EFFECTIVE_RANGE_MPC",
                "pv-gps": "CDS-SENSMON_NOLINES_SNSL_EFFECTIVE_RANGE_MPC_GPS",
                "disconnected-value": -1.0,
                "period": 30.0
            }
        }
    ]
}
 

H1 DetChar
gabriele.vajente@LIGO.ORG - posted 12:19, Tuesday 24 October 2023 (73701)
Brute force coherences

Here's a BruCo report for GDS-CALIB_STRAIN_NOLINES from last night's lock: https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_1382152121_STRAIN/

Some highlights:

Large frequency bands are completely devoid of coherence.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 12:19, Tuesday 24 October 2023 - last comment - 12:25, Tuesday 24 October 2023(73700)
Tue CP1 Fill

Tue Oct 24 10:05:21 2023 INFO: Fill completed in 5min 18secs

 

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 12:25, Tuesday 24 October 2023 (73702)

Because CP1's dewar was being filled with LN2 this morning, soon after the fill Gerardo requested the LLCV be closed down from 37.5% to 29.0%. The attached trend shows LN2 sputtering in the discharge line, which the valve throttling stopped.

Images attached to this comment
H1 SEI
arnaud.pele@LIGO.ORG - posted 12:05, Tuesday 24 October 2023 - last comment - 07:20, Wednesday 25 October 2023(73688)
ETMX ISI - 1.23Hz hunting

Summary - Jim fixed it with a little shake.

We cycled through the various ISI state controls. The 1 Hz line would start appearing when stage 1 is isolated. In any state below this point, we can clearly see the difference in the 'sensor response' of the local H1 L4C with respect to the T240 or CPS. While Jim was taking an OLG measurement, the ISI tripped, which seems to have changed the response of the H1 L4C back to nominal; see the first attached PDF, showing the local L4C/T240 TF before (p1) vs after (p2) the ISI trip. The second attached PDF shows the ISI spectra in the isolated state before (p1) vs after (p2) the shake, with no other changes.
We've had sticky L4Cs in the past and solved them in a similar way (see alog 38939), but the symptoms were much more obvious than the slight change in resonant frequency we are seeing here.
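The L4C/T240 comparison above can be sketched as a standard transfer-function estimate, TF = CSD(T240, L4C) / PSD(T240). This is a minimal illustration with assumed numpy channel arrays and sample rate, not the measurement code actually used:

```python
# Sketch of the local L4C/T240 transfer-function estimate used to spot a
# "stuck" L4C.  Inputs are assumed to be time series from the two
# colocated sensors; the function name and defaults are illustrative.
import numpy as np
from scipy.signal import csd, welch


def local_tf(t240, l4c, fs, nperseg=4096):
    """Estimate the L4C-per-T240 transfer function H = Pxy / Pxx."""
    f, pxy = csd(t240, l4c, fs=fs, nperseg=nperseg)
    _, pxx = welch(t240, fs=fs, nperseg=nperseg)
    return f, pxy / pxx

# A healthy L4C tracks the T240 through the band of interest; a stuck one
# shows a shifted response, e.g. the ~1.23 Hz feature hunted here.
```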

Timeline of tests :

16:04 - 16:12 UTC ETMX ISI/HEPI OFFLINE
16:15 - 16:30 UTC HEPI ON / ETMX ISI OFFLINE
16:31 - 16:37 UTC HEPI ON /ETMX DAMPED
16:38 - 16:44 UTC HEPI ON / ETMX ST1 ISO - We can see the giant ~1Hz line
16:45 - 16:48 UTC HEPI ON/ ETMX ST1 ISO - H1 L4C gain 0.5 - sym filter off - ~1Hz line goes down
16:50 - 16:52 UTC HEPI ON / ETMX ST1/ST2 ISO - H1 L4C gain 0.5 - sym filter off
16:55 - 17:07 UTC Back to nominal
17:25 - Jim changing stage 1 Rz blend to noL4C
17:38 - Jim changing stage 1 Y blend to noL4C
18:00 - Jim measuring Rz olgf

Ref alog 73625

Non-image files attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 16:43, Tuesday 24 October 2023 (73716)

The ITMY peak is also coming from a "stuck" H1 L4C. This problem is somewhat subtle: the L4C only seems to misbehave when the isolation loops are on and the accelerations on the L4C are low. Because tripping ETMX seemed to fix that L4C, I tried it on ITMY. To do this, I put the ISI in damped, ramped on an offset on the ST1 H1 actuator, then turned it off with a zero ramp time. I did this for 2k, 3k and 4k counts of drive. All of these caused a bigger signal than the ETMX trip earlier, but the ITMY H1 L4C is still "stuck". The attached ASDs compare the corner 1 and corner 2 L4C-to-T240 local TFs before and after the whacks: no difference before vs after (brown and pink are before, red and blue are after).

But changing the ITMY Y and RZ ST1 blends to 250 mHz blends that don't use the H1 L4C makes the 1.something Hz peak on ITMY go away. This also worked on ETMX. I've set both ISIs to not use their H1 L4Cs; we'll watch for a while and re-evaluate next week. At this point, only ITMX is still using its H1 L4C.

Images attached to this comment
gabriele.vajente@LIGO.ORG - 07:20, Wednesday 25 October 2023 (73726)

The two lines around 1.3 Hz are gone from DARM.

Unfortunately it's hard to tell if this improved the noise above 10 Hz, because there are a lot of scattered-light events (4 Hz cryobaffle?).

Images attached to this comment
H1 CDS
jonathan.hanks@LIGO.ORG - posted 12:02, Tuesday 24 October 2023 (73699)
WP11485 work on the h1hwsmsr system
I replaced two disks on the h1hwsmsr system.

 * cloned a failing boot disk
 * replaced a failed raid drive that holds the image repository

I used clonezilla to replicate the boot disk. After booting into the newly cloned disk, I replaced the failed RAID disk. The data is now being automatically replicated onto the new disk; it should take a few hours.
H1 ISC
gabriele.vajente@LIGO.ORG - posted 11:39, Tuesday 24 October 2023 - last comment - 12:45, Wednesday 25 October 2023(73698)
Resonant gains in DARM2

I prepared a new filter module with resonant gains at 2.8 Hz and 3.4 Hz. This can be tried tomorrow during some commissioning time, to reduce the DARM RMS and see if it helps the non-stationary noise.

The new FM is loaded into DARM2 FM1 "RG2.8-3.4". It is not engaged now.

 

For future reference, here are the FMs that are used during lock acquisition:

DARM1: FM1 FM2 FM3 FM4 FM7 FM9 FM10

DARM2: FM3 FM4 FM5 FM6 FM7 FM8 FM9 FM10

leaving DARM1 FM5,6,8 and DARM2 FM1,2 unused

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 12:45, Wednesday 25 October 2023 (73737)

Camilla and I turned on DARM2 FM1 from 19:43:37 to 20:05:50 UTC on October 25.

H1 AOS
vladimir.bossilkov@LIGO.ORG - posted 11:26, Tuesday 24 October 2023 (73697)
PI MODE28 PLL filter changes and output ETMX Signal filter changes

I have made alterations to the signal path for the 80.3 kHz PI ("MODE28").

In the PLL block I have made all the requisite changes to exactly replicate my LLO implementation.
This should get applied to all PLLs, but in the interest of not making changes to signal paths that LHO currently depends on to stay locked, I don't want to change the other MODEs' feedback settings.

I have altered the H1:SUS-ETMX_PI_UPCONV_UC3_SIG bank's bandpass filter to roll off steeply.
This is motivated by the work in this LLO aLog; I have implemented the exact same filters here.
Again, I only made this change in the signal flow of MODE28, so as not to perturb other systems that already function and are depended upon.

H1 ISC
vladimir.bossilkov@LIGO.ORG - posted 11:12, Tuesday 24 October 2023 (73695)
SUS_PI guardian MODE24 trigger threshold changed; and RMS change initialisation changed.

Vlad, Naoki

Yesterday I observed PI MODE24 ring up on a large number of occasions. This happened because the PI guardian falsely perceived that the PI amplitude was rising, inferred that the phase was "bad", and changed the phase to one that actually was bad.

In the time it took to find the "good" phase again, the amplitude rang up. The saving grace was the hard-coded limit on the PLL outputs (which I am re-implementing as a soft-coded limit): while the PI phase was slewing from bad to terrible and back to good, the actuation was not growing exponentially.
I have increased the RMS threshold at which the guardian infers that a mode is "going up" from 3 to 4.

Another issue is that the first time this triggers for any mode, it reads out the RMS over the last 5 seconds and checks whether it is larger than at the last check. But these "last" values were initialised to 0, so the guardian would always change the phase (to a now "bad" one).
Our fix is to initialise these stored values to each mode's triggering threshold. This at least prevents the guaranteed phase change on the first trigger.
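The initialisation fix can be sketched as follows. This is a simplified, hypothetical structure (class and variable names invented for illustration), not the actual SUS_PI guardian code:

```python
# Sketch of the guardian initialisation fix: seeding the stored "last RMS"
# with each mode's trigger threshold prevents the guaranteed first-trigger
# phase change that an initial value of 0 caused.  Hypothetical structure,
# not the real SUS_PI guardian; threshold values are illustrative.
TRIGGER_THRESHOLDS = {24: 4.0, 28: 4.0}


class PiDamper:
    def __init__(self, thresholds):
        self.thresholds = dict(thresholds)
        # Old behaviour: last_rms = {mode: 0.0 for mode in thresholds},
        # so any first reading looked like "rising" -> phase flip.
        self.last_rms = dict(thresholds)

    def should_flip_phase(self, mode, rms_5s):
        """Flip phase only if the mode RMS is above threshold AND rising."""
        rising = rms_5s > self.last_rms[mode]
        self.last_rms[mode] = rms_5s
        return rms_5s > self.thresholds[mode] and rising
```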

H1 General
mitchell.robinson@LIGO.ORG - posted 11:11, Tuesday 24 October 2023 (73694)
Monthly Dust Monitor Vacuum Pump Check

Dust monitor pumps are running smoothly.

H1 SQZ
daniel.sigg@LIGO.ORG - posted 10:23, Tuesday 24 October 2023 (73693)
SQZ PMC Electronics

Fil, Daniel

We installed a common mode board, a dual demodulator chassis and replaced the single delay line phase shifter with a dual. This delay line is used by the filter cavity green locking. We also installed the new feedthrough panels on SQZT0.

H1 ISC
vladimir.bossilkov@LIGO.ORG - posted 10:09, Tuesday 24 October 2023 - last comment - 13:31, Tuesday 24 October 2023(73692)
h1susprocpi model changes

WP #11489 and #11492 (not marked as Tuesday Maintenance, as LHO apparently doesn't track that).

I have altered the h1susprocpi model in the following ways:

These changes affected libraries:

Changes will take effect when the model is restarted on h1oaf0 and the DAQ is restarted, as the new filter block adds channels.

Comments related to this report
vladimir.bossilkov@LIGO.ORG - 13:31, Tuesday 24 October 2023 (73703)

I have initialised and set to monitored SDF all new filterbank settings from this change.

I have specifically set a limit of 10 on every MODE's PLL AMP_FINAL filter bank, replicating the old hard-coded functionality.

H1 CDS
jonathan.hanks@LIGO.ORG - posted 09:24, Tuesday 24 October 2023 (73690)
WP 11488 Opslogin turned off
As per WP 11488 the opslogin system was turned off. This was an older Debian 9 system that was kept around when we transitioned to Debian 10+ in case we needed access to older tools. At this point the system is not being used, and it causes confusion when people log in to opslogin instead of opslogin0 (the current system to use).
H1 ISC
vladimir.bossilkov@LIGO.ORG - posted 09:08, Tuesday 24 October 2023 (73689)
PI broadband BLRMS fix

The band-limited RMS (BLRMS) channels (H1:OMC-BLRMS_32_BAND#_BP for #=1 to 7) that monitor DARM (actually OMC DCPD A) at frequencies from 2.5 kHz to 32 kHz are used by the lockloss monitoring tools to assess whether a lockloss was due to PI.

These were not working correctly, so I poked my head in this morning.

The BLRMS used elliptic filters, which have a flat response down to DC, and we were seeing a large DC value in the outputs that was "saturating" any high-frequency signal we might care about.

These BLRMS have been altered to use Butterworth filters instead of elliptic. Butterworth bandpass filters have the highly valuable property of actually being AC coupled, so that should do the trick.
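The difference is easy to check numerically. This is a sketch, not the installed filters: the sample rate, band edges, and filter orders are illustrative assumptions. An elliptic bandpass has a finite stopband floor all the way to DC, while a Butterworth bandpass rolls off monotonically, so its DC gain is essentially zero:

```python
# Compare near-DC gain of an elliptic vs Butterworth bandpass.
# All parameters here (fs, band, order, ripple) are illustrative
# assumptions, not the filters actually installed in the BLRMS.
from scipy.signal import butter, ellip, sosfreqz

fs = 65536.0             # illustrative sample rate
band = [2500.0, 5000.0]  # one illustrative BLRMS band

# 4th-order designs; second-order sections for numerical robustness
sos_e = ellip(4, 1, 60, band, btype="bandpass", output="sos", fs=fs)
sos_b = butter(4, band, btype="bandpass", output="sos", fs=fs)

# evaluate near DC and in the middle of the passband
freqs = [1e-3, 3536.0]
_, h_e = sosfreqz(sos_e, worN=freqs, fs=fs)
_, h_b = sosfreqz(sos_b, worN=freqs, fs=fs)

print(f"near-DC gain, elliptic:    {abs(h_e[0]):.1e}")  # finite stopband floor
print(f"near-DC gain, Butterworth: {abs(h_b[0]):.1e}")  # effectively zero
```

A large low-frequency input thus leaks through the elliptic stopband into the RMS output but is strongly suppressed by the Butterworth design.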

Logbook Admin General
jonathan.hanks@LIGO.ORG - posted 09:01, Tuesday 24 October 2023 (73687)
alog rebooted today 9am localtime
The alog was rebooted as part of regular maintenance.  This is a test of the system.
H1 ISC
thomas.shaffer@LIGO.ORG - posted 08:31, Tuesday 24 October 2023 - last comment - 08:55, Tuesday 24 October 2023(73683)
Stepped CARM gain

Following instructions from Sheila, I stepped the CARM gains (H1:LSC-REFL_SERVO_IN{1, 2}GAIN) by 1 about every minute starting at 15:11:30 UTC. I started at a gain of 8 and we lost lock as soon as I got to 20. We lost lock at 15:24 UTC.
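The stepping procedure can be sketched as below. This is a hypothetical helper, not the script actually used; it takes an injectable write function so it can be dry-run without EPICS (with pyepics installed, one would pass `epics.caput` as `write`):

```python
# Sketch of stepping both CARM gain channels by 1 roughly every minute.
# Hypothetical helper for illustration, not the procedure script used.
import time

CHANNELS = ("H1:LSC-REFL_SERVO_IN1GAIN", "H1:LSC-REFL_SERVO_IN2GAIN")


def step_carm_gain(start, stop, write, wait=60.0, sleep=time.sleep):
    """Step gains from `start` to `stop` inclusive, pausing between steps."""
    history = []
    for gain in range(start, stop + 1):
        for chan in CHANNELS:
            write(chan, gain)
        history.append(gain)
        if gain != stop:
            sleep(wait)
    return history


# Dry run: record the writes instead of touching the IFO.
writes = []
steps = step_carm_gain(8, 20,
                       write=lambda c, v: writes.append((c, v)),
                       sleep=lambda s: None)
```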

Comments related to this report
camilla.compton@LIGO.ORG - 08:55, Tuesday 24 October 2023 (73685)

See the SQZ BLRMS and DARM plot in 73682 for the high-frequency noise decreasing at 15:17 UTC, with a CARM gain of ~10.

H1 DetChar
gabriele.vajente@LIGO.ORG - posted 08:50, Wednesday 18 October 2023 - last comment - 18:35, Friday 27 October 2023(73546)
Low Frequency Noise (<50 Hz)

Using two periods of quiet time during the last couple of days (1381575618 + 3600s, 1381550418 + 3600s) I computed the usual coherences:

https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_STRAIN_1381550418/
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_STRAIN_1381575618/

The most interesting observation is that, for the first time as far as I can remember, there is no coherence above threshold with any channels for wide bands in the low frequency range, notably between 20 and 30 Hz, and also for many bands above 50 Hz. I'll assume for now that most of the noise above ~50 Hz is explained by thermal noise and quantum noise, and focus on the low frequency range (<50 Hz).

Looking at the PSDs for the two hour-long times, the noise below 50 Hz seems to be quite repeatable, and follows closely a 1/f^4 slope. Looking at a spectrogram (especially when whitened with the median), one can see that there is still some non-stationary noise, although not very large. So it seems to me that the noise below ~50 Hz is made up of some stationary 1/f^4 unknown noise (not coherent with any of the 4000+ auxiliary channels we record) and some non-stationary noise. This is not hard evidence, but an interesting observation.

Concerning the non-stationary noise, I think there is evidence that it's correlated with the DARM low frequency RMS. I computed the GDS-CALIB RMS between 20 and 50 Hz (whitened to the median to weight equally the frequency bins even though the PSD has a steep slope), and the LSC_DARM_IN1 RMS between 2.5 and 3.5 Hz (I tried a few different bands and this is the best). There is a clear correlation between the two RMS, as shown in a scatter plot, where every dot is the RMS computed over 5 seconds of data, using a spectrogram.
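The band-limited RMS time series described above can be sketched as follows. This is a minimal illustration with assumed array inputs, not the analysis code used: compute the RMS over 5 s spectrogram segments in a chosen band for each channel, then correlate the two RMS series:

```python
# Sketch of the per-segment band-limited RMS used for the scatter plot.
# Assumed numpy time-series inputs; not the actual analysis code.
import numpy as np
from scipy.signal import spectrogram


def band_rms(x, fs, f_lo, f_hi, seg_sec=5.0):
    """RMS of x per 5 s spectrogram segment, restricted to [f_lo, f_hi]."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=int(seg_sec * fs))
    sel = (f >= f_lo) & (f <= f_hi)
    df = f[1] - f[0]
    # integrate the PSD over the band to get the mean-square, then sqrt
    return t, np.sqrt(sxx[sel].sum(axis=0) * df)

# The scatter-plot correlation between, e.g., the 20-50 Hz strain RMS and
# the 2.5-3.5 Hz DARM_IN1 RMS would then be np.corrcoef(rms1, rms2)[0, 1].
```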

 

 

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 11:01, Wednesday 18 October 2023 (73554)

DARM at low frequency (< 4 Hz) is highly coherent with the ETMX M0 and R0 L damping signals. This might just be recoil from the LSC drive, but it might be worth trying to reduce the L damping gain and see if the DARM RMS improves.

 

Images attached to this comment
gabriele.vajente@LIGO.ORG - 13:04, Wednesday 18 October 2023 (73560)

Bicoherence is also showing that the noise between 15 and 30 Hz is modulated according to the main peaks visible in DARM at low frequency.

Images attached to this comment
elenna.capote@LIGO.ORG - 20:53, Wednesday 18 October 2023 (73579)

We might be circling back to the point where we need to reconsider/remeasure our DAC noise. Linking two different (and disagreeing) projections from the last time we thought about this; the DAC noise has the correct slope. However, Craig's projection and the noisemon measurement did not agree, something we never resolved.

Projection from Craig: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=68489

Measurement from noisemons: https://alog.ligo-wa.caltech.edu/aLOG/uploads/68382_20230403203223_lho_pum_dac_noisebudget.pdf

christopher.wipf@LIGO.ORG - 11:15, Friday 20 October 2023 (73620)

I updated the noisemon projections for PUM DAC noise, and fixed an error in their calibration for the noise budget. They now agree reasonably well with the estimates Craig made by switching coil driver states. From this we can conclude that PUM DAC noise is not close to being a limiting noise in DARM at present.

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 09:51, Tuesday 24 October 2023 (73691)CDS, CSWG, ISC, OpsInfo, SUS
To Chris' point above -- we note that the PUMs are using 20-bit DACs, and we are NOT using any "DAC Dither" (see the aLOGs motivating why we do *not* use them at LHO: 68428 and 65807; namely, that [in the little testing we've done] we've seen no improvement, so we decided they weren't worth the extra complexity and maintenance).
christopher.wipf@LIGO.ORG - 15:25, Tuesday 24 October 2023 (73710)

If at some point there’s a need to test DAC dithers again, please look at either (1) noisemon coherence with the DAC request signal, or (2) noisemon spectra with a bandstop in the DAC request to reveal the DAC noise floor.  Without one of those measures, the noisemons are usually not informative, because the DAC noise is buried under the DAC request.
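Check (1) above is a standard magnitude-squared coherence. A minimal sketch with assumed array inputs (not any site tooling) follows; bins with coherence near 1 are request-dominated, and only where the coherence drops can the noisemon spectrum reveal the DAC noise floor:

```python
# Sketch of check (1): coherence between the DAC request and the noisemon.
# Assumed numpy time-series inputs; function name is illustrative.
import numpy as np
from scipy.signal import coherence


def noisemon_coherence(dac_request, noisemon, fs, nperseg=4096):
    """Magnitude-squared coherence between DAC request and noisemon."""
    return coherence(dac_request, noisemon, fs=fs, nperseg=nperseg)

# Where the coherence ~ 1, the noisemon just re-measures the request;
# a bandstop in the request (check 2) opens a window on the DAC noise.
```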

christopher.wipf@LIGO.ORG - 18:35, Friday 27 October 2023 (73784)

Attached is a revised PUM DAC noisemon projection, with one more calibration fix that increases the noise estimate below 20 Hz (although it remains below DARM).

Images attached to this comment
H1 TCS
camilla.compton@LIGO.ORG - posted 09:34, Tuesday 10 October 2023 - last comment - 08:55, Tuesday 24 October 2023(73357)
CO2X turned up before IFO Unlocked this morning.

I turned up CO2X 15:05 to 15:12 UTC to see if we could put noise into DARM, plot attached.

We got DARM glitches when we raised and lowered the annular mask; see the t-cursors attached. This is surprising. We should check that this comes from the CO2 laser being blocked while the mask (on a flipper mount) passes through the beam, rather than from any electronics issue. We can do this by turning off the CO2 laser and then raising and lowering the mask.

Still want to redo 72981 line driving once we understand which PD to use for readback.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 08:55, Tuesday 24 October 2023 (73686)

In 73682 we showed the glitches were caused by the CO2 beam being quickly blocked, rather than by any electronics issue.
