LHO General
thomas.shaffer@LIGO.ORG - posted 16:25, Thursday 14 November 2024 (81278)
Ops Day Shift End

TITLE: 11/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY: The entire shift was dedicated to troubleshooting the laser glitching issue. We also had high winds and high useism, so it was good timing. The wind has died down and the troubleshooting has ended for the day so we are starting initial alignment.
LOG:

Start Time System Name Location Laser_Haz Task Time End
21:41 SAF Laser LVEA YES LVEA is laser HAZARD 08:21
16:22 ISC Sheila LVEA yes Unplugging cable for IMC feedback 16:42
16:47 TCS Camilla LVEA yes Power cycle TCSY chassis 17:04
17:44 EE Fernando MSR n Continuing work on the ISC backup computer 22:38
18:05 PSL Patrick, Ryan, Fil LVEA yes Power cycle PSL Beckhoff computer 18:26
18:39 PSL/CDS Sheila, Fil, Marc, Richard LVEA yes 35MHz swap or wiring 19:09
19:18 PSL Ryan LVEA yes Reset noise eater 19:49
19:58 TCS TJ, Camilla LVEA YES Checking TCSY cables 20:58
19:58 PSL Vicky LVEA YES Setting up PSL scope & poking cables 20:58
22:43 PSL Jason, Sheila LVEA yes PMC meas. 00:20
22:44 PSL Ryan LVEA yes PMC meas 23:14
22:44 PSL Vicky LVEA yes Setting up sr785 00:21
H1 PSL (ISC)
elenna.capote@LIGO.ORG - posted 15:54, Thursday 14 November 2024 (81283)
PMC OLG, Getting SR785 data at PSL racks

Vicky and Jason measured the PMC OLG, and I grabbed the data from the SR785 for them. The plots are attached; the second measurement is a zoomed-in version of the first.

Looks like the feature above 5 kHz is around the same frequency as the peak we are seeing in the intensity and frequency noise (alogs 80603, 81230).

These are the steps I took to get the data:

> cd /ligo/gitcommon/psl_measurements

> conda activate psl

> python code/SRmeasure.py -i 10.22.10.30 -a 10 --getdata -f data/name

This will save your data in the data folder as "name_[datetime string].txt"
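
Since the exact datetime string depends on when you run the script, a quick way to locate the file it just wrote is a minimal sketch like the one below (it only assumes the data/ folder and the "name_*.txt" pattern described above; adjust the glob to whatever name you passed with -f):

import glob, os

# newest file matching the pattern written out by SRmeasure.py
candidates = glob.glob("data/name_*.txt")
latest = max(candidates, key=os.path.getmtime)
print("most recent measurement file:", latest)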

To confirm connection before running, try

> ping 10.22.10.30

you should get something like

PING 10.22.10.30 (10.22.10.30) 56(84) bytes of data.
64 bytes from 10.22.10.30: icmp_seq=1 ttl=64 time=1.26 ms
64 bytes from 10.22.10.30: icmp_seq=1 ttl=64 time=1.54 ms (DUP!)
64 bytes from 10.22.10.30: icmp_seq=2 ttl=64 time=0.748 ms
64 bytes from 10.22.10.30: icmp_seq=2 ttl=64 time=1.03 ms (DUP!)
64 bytes from 10.22.10.30: icmp_seq=3 ttl=64 time=0.730 ms
64 bytes from 10.22.10.30: icmp_seq=3 ttl=64 time=1.02 ms (DUP!)
^C
--- 10.22.10.30 ping statistics ---
3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.730/1.054/1.538/0.282 ms
----

Press Ctrl-C to exit. If you don't get these messages, you are probably not plugged in properly.

To plot the data (assuming that you are measuring transfer functions), use

> python code/quick_tf_plot.py data/name_[datetime str].txt

Craig has lots of great options in this code to make nice labels, save the plot in certain places, etc., if you want to get fancy. He also has other scripts that will plot spectra, multiple spectra, or multiple TFs.
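
If you just want a quick look without Craig's script, a minimal standalone sketch is below. It assumes the exported text file has three columns (frequency in Hz, magnitude in dB, phase in degrees); check the header of your file first, since the actual SR785 export format may differ, and the filename here is only a placeholder:

import numpy as np
import matplotlib.pyplot as plt

# assumed columns: frequency [Hz], magnitude [dB], phase [deg]
freq, mag_db, phase_deg = np.loadtxt("data/name_2024-11-14.txt", unpack=True)

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(freq, mag_db)
ax_mag.set_ylabel("Magnitude [dB]")
ax_mag.set_title("PMC OLG (SR785)")
ax_ph.semilogx(freq, phase_deg)
ax_ph.set_ylabel("Phase [deg]")
ax_ph.set_xlabel("Frequency [Hz]")
plt.show()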

If you want to measure from the control room, there are YAML templates that run different types of measurements, such as CARM or IMC OLGs.

Non-image files attached to this report
H1 CAL
louis.dartez@LIGO.ORG - posted 14:41, Thursday 14 November 2024 - last comment - 19:40, Thursday 14 November 2024(81282)
Calibration pipeline 60 Hz line (and harmonics) subtraction turned off
We restarted the calibration pipeline with a new configuration ini such that it no longer subtracts the 60 Hz line (or its harmonics up to 300 Hz).  The configuration change is recorded in this commit: https://git.ligo.org/Calibration/ifo/H1/-/commit/53f2e892a38cfb18815912c33b1f1b8385cfff62

I restarted the pipeline at around 9:35 am PST, around the same time this was done at LLO (LLO:74051).

The IFO was down at the time, so I left a request with the H1 operators to contact both me and Joe B. when H1 is back at NLN but before going to Observing mode so that we (whoever is first) can confirm that the GDS restart is behaving as expected. Initial checks at LLO indicate that things are working properly, which is promising.
Comments related to this report
louis.dartez@LIGO.ORG - 19:40, Thursday 14 November 2024 (81287)
Joe B., Louis D., Corey G.

Corey called as soon as H1 reached NLN. The gstlal-calibration pipeline restart with 60 Hz subtraction turned off looks like it's behaving as expected so we gave Corey the green light from Cal to go into Observing. The 60Hz line and its harmonics up to 300Hz look good (i.e. NOLINES looks identical to STRAIN since subtraction for those lines was turned off).
Images attached to this comment
H1 CDS (CDS, PSL)
patrick.thomas@LIGO.ORG - posted 13:56, Thursday 14 November 2024 - last comment - 10:36, Friday 15 November 2024(81281)
Restarted Beckhoff PSL computer in diode room
Ryan, Jason, Patrick, Filiberto

As part of troubleshooting the PSL we hardware power cycled the PSL Beckhoff computer in the diode room this morning, along with all of the associated diode power supplies and a chassis in the LVEA.

I had guessed that everything would autostart, but I was wrong, so I took the opportunity to set it up to do so. This required putting a shortcut to the EPICS IOC startup script in the C:\TwinCAT\3.1\Target\StartUp directory (see attached screenshots), and selecting an option in the TwinCAT Visual Studio project to autostart the TwinCAT runtime. We software restarted the computer again to test this, and after logging in, the Beckhoff runtime and PLC code started, along with the EPICS IOC, but the visualization did not. I found documentation that pointed to the location of the executable that starts the visualization, and added a shortcut to that to the startup directory as well. We didn't have time to restart the computer again to see if that would autostart correctly.

For some reason there seemed to be issues with processes reconnecting to the EPICS IOC channels. I tested running caget on the Beckhoff computer itself and got a message about connecting to two different instances of the channel, and a couple of pop-up windows related, I think, to allowing network access, which I allowed. caget worked, although it gave a blank space for the value, so I tried it again with an invalid channel name, for which it correctly gave an error. On the Linux workstation we were using, the MEDM screens were not reconnecting, even after closing and reopening them, but again caget worked. We had to restart the entire MEDM process for it to reconnect.

The EDCU and SDF also had issues reconnecting, and they had to be restarted too.
Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:36, Friday 15 November 2024 (81294)

As Patrick mentioned, channel access clients which had been connected to the IOC on h1pslctrl0 would not reconnect after its restart.

The EDC stayed in its disconnected state for almost an hour, even though cagets on h1susauxb123 itself were connecting, albeit with "duplicate list entry" warnings:

(diskless)controls@h1susauxb123:~$ caget H1:SYS-ETHERCAT_PSL_INFO_TPY_TIME_HOUR
Warning: Duplicate EPICS CA Address list entry "10.101.0.255:5064" discarded
H1:SYS-ETHERCAT_PSL_INFO_TPY_TIME_HOUR 18
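
For scripting this kind of connectivity check rather than running caget by hand, a minimal sketch using pyepics (assuming pyepics is available in the environment; the channel is the one quoted above):

from epics import PV

# one of the h1pslctrl0 channels that was slow to reconnect
pv = PV("H1:SYS-ETHERCAT_PSL_INFO_TPY_TIME_HOUR")

if pv.wait_for_connection(timeout=5.0):
    print(pv.pvname, "=", pv.get())
else:
    print(pv.pvname, "did not connect within 5 s")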

The restart of the DAQ EDC did not go smoothly: I had added a missing channel to H1EPICS_CDSRFM.ini (WP12195) in preparation for next Tuesday's maintenance, so the EDC came back with a different channel list from the rest of the DAQ. I reverted this file change and a second EDC restart was successful.

11:38:35 h1susauxb123 h1edc[DAQ]
11:46:17 h1susauxb123 h1edc[DAQ]

The slow controls h1pslopcsdf system was also unable to reconnect to the 4 PSL WD channels it monitors. This was restarted at 12:08 14nov2024 PST.

Erik found that MEDM on some workstations would continue to show white screens for h1pslctrl0 channels, and a full restart of MEDM was needed to resolve this.

H1 PSL
camilla.compton@LIGO.ORG - posted 13:14, Thursday 14 November 2024 (81280)
Wiggled every cable in PSL/ISC racks

TJ, Vicky, Camilla 

Vicky set up an oscilloscope on a cable similar to the PMC mixer and we watched the second trend of Sheila's PSL ndscope. The largest cause of repeatable glitches from touching cables was the ISS AOM 80MHz cable, which we found loose and tightened; it sounds like the PSL team found the PSL side of this cable loose and tightened it on Tuesday too. Times are noted below if we want to go back and look at the raw data.

LHO VE
david.barker@LIGO.ORG - posted 10:28, Thursday 14 November 2024 (81279)
Thu CP1 Fill

Thu Nov 14 10:15:06 2024 INFO: Fill completed in 15min 3secs

Jordan confirmed a good fill curbside.

The new Vacuum section on the CDS Overview shows the CP1 LLCV percentage open as a red bar. The bar limits are 40% to 100% open, so normally you won't see any red in this widget.

Images attached to this report
H1 PSL
sheila.dwyer@LIGO.ORG - posted 09:43, Thursday 14 November 2024 (81277)
PSL debugging this morning

Sheila, Vicky, Elenna, TJ, Marc, Filiberto, Richard, Daniel, Jason, Ryan Short

The IMC did not stay locked overnight after Corey left it locked at 2W with the ISS on (screenshot from TJ). Vicky noticed that the SQZ 35MHz LO monitor sometimes sees something going on before the IMC loses lock (screenshot from Elenna).  A few days ago Nutsinee flagged this squeezer 35MHz LO monitor channel; it shows increased noise when the FSS is unlocked, which doesn't make sense, but this seems to be at least partially due to cross talk (when we intentionally unlock the FSS, there is extra noise in this channel).

A bit before 8:29 I unplugged the cable from the IMC servo to the PSL VCO, and we left the FSS and PMC locked with the ISS off.  At 8:31 Pacific time the FSS came unlocked, and glitches were visible in the PMC mixer and HV (screenshot).  The new channel plugged in on Tuesday, H1:PSL-PWR_HPL_DC_OUT_DQ, might show some extra noise, more visible in the zoomed-in screenshot.  There are some small glitches seen in the SQZ 35MHz LO monitor at the time of the reference cavity glitches; the 35MHz LO is shared by the SQZ and PSL PMCs.

At 8:44 we unlocked the FSS and sat with only the PMC locked; a few seconds later we had a few glitches in the mixer and HV (PMC-alone screenshot).

The PSL was powered down to restart the Beckhoff, from a few minutes before 11 until 11:30 or so.

A few minutes before 11 Pacific time, Marc, Filiberto, and Richard went to the CER, measured the power out of the 35MHz source (11.288dBm), and adjusted the Marconi setting to match the power measured on the RF meter (11.313dBm).  A 10MHz signal, locked to GPS through the timing system, is plugged into the back of the Marconi; Daniel thinks the Marconi locks to that source when it is plugged in.

At 19:33 UTC (11:33 Pacific) the PMC was relocked after the Beckhoff reboot, with the 35MHz source changed.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:31, Thursday 14 November 2024 (81275)
Ops Day Shift Start

TITLE: 11/14 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 33mph Gusts, 23mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.66 μm/s
QUICK SUMMARY: The IMC dropped lock many times overnight. I imagine that this will shape our plans for the day once we have time to discuss next steps.

Images attached to this report
LHO General (TCS)
corey.gray@LIGO.ORG - posted 21:59, Wednesday 13 November 2024 (81270)
Wed EVE Ops Summary

TITLE: 11/14 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

Had a nice 5hr lock for H1.  Running an IMC-only lock overnight to see if there are any FSS/IMC glitches.
LOG:

H1 ISC (OpsInfo, PSL)
corey.gray@LIGO.ORG - posted 21:55, Wednesday 13 November 2024 - last comment - 22:02, Wednesday 13 November 2024(81273)
H1 Lockloss at 0459utc (9pm PT) After 5hr Lock.....Now What....Leaning To IMC-only Overnight.....

The plan for tonight's shift was to see how locking would go for H1.  Well....TJ handed over a locked H1, and then it stayed locked for 5hrs of the 6hr Eve shift!  

Vicky mentioned that the lockloss at 0459utc looked like a possible FSS glitch, but it looked "different". She is going to post her plot about that.

Comments related to this report
victoriaa.xu@LIGO.ORG - 22:02, Wednesday 13 November 2024 (81274)

This lockloss (1415595557, 2024-11-13 20:58:59 PT) looks related to IMC (/FSS /ISS / PSL)... 10-15 seconds before LL, FSS starts to see some glitches, then more glitches starting ~5 sec before LL. ~30 second trends here.

Zooming in to ~100ms before the LL (plot), a small glitch may be visible on the ISS second-loop PDs, and then the IMC loses lock. At the same time that the IMC loses lock, the lockloss pulse, AS port, and LSC_REFL see the lock loss. So it seems like an IMC lockloss with some FSS glitching beforehand.

Images attached to this comment
H1 TCS
corey.gray@LIGO.ORG - posted 17:47, Wednesday 13 November 2024 - last comment - 10:52, Monday 18 November 2024(81271)
TCSy Laser Unlocks & Relocks, Dropping H1 From Observing Briefly

H1 was dropped out of OBSERVING due to the TCS ITMy CO2 laser unlocking at 0118utc.  The TCS_ITMY_CO2 guardian relocked it within about 2 minutes.

It was hard to see the reason at first (there were no SDF diffs), but eventually I saw a User Message via GRD IFO (on Ops Overview) pointing to something wrong with TCS_ITMY_CO2.  Oli was also here, and they mentioned seeing this, along with Camilla, on Oct 9th (alog); that was the known issue of the TCSy laser nearing the end of its life.  It was replaced a few weeks later, on Oct 22nd (alog).

Here are some of the lines from the LOG:

2024-11-13_19:43:31.583249Z TCS_ITMY_CO2 executing state: LASER_UP (10)
2024-11-14_01:18:56.880404Z TCS_ITMY_CO2 [LASER_UP.run] laser unlocked. jumping to find new locking point

.

.
2024-11-14_01:20:12.130794Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] ezca: H1:TCS-ITMY_CO2_PZT_SET_POINT_OFFSET => 35.109375
2024-11-14_01:20:12.196990Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] ezca: H1:TCS-ITMY_CO2_PZT_SET_POINT_OFFSET => 35.0
2024-11-14_01:20:12.297890Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] timer['wait'] done
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 EDGE: RESET_PZT_VOLTAGE->ENGAGE_CHILLER_SERVO
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 calculating path: ENGAGE_CHILLER_SERVO->LASER_UP
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 new target: LASER_UP

Comments related to this report
camilla.compton@LIGO.ORG - 10:52, Monday 18 November 2024 (81334)

CO2Y has only unlocked/relocked once since we power cycled the chassis on Thursday 14th (t-cursor in attached plot).

Images attached to this comment
corey.gray@LIGO.ORG - 17:48, Wednesday 13 November 2024 (81272)TCS

0142:  ~30min later had another OBSERVING-drop due to CO2y laser unlock.

thomas.shaffer@LIGO.ORG - 09:21, Thursday 14 November 2024 (81276)

While it is normal for the CO2 lasers to unlock from time to time, whether it's from running out of range of their PZT or just generically losing lock, this is happening more frequently than normal. The PZT doesn't seem to be running out of range, but it does seem to be running away for some reason. Looking back, it's unlocking itself ~2 times a day, but we haven't noticed since we haven't had a locked IFO for long enough lately.

We aren't really sure why this would be the case; the chiller and laser signals all look as they usually do. Just to try the classic "turn it off and on again," Camilla went out to the LVEA and power cycled the control chassis. We'll keep an eye on it today; if it happens again and we have time to look further into it, we'll see what else we can do.

Images attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 16:43, Wednesday 13 November 2024 (81269)
Wed EVE Ops Transition

TITLE: 11/14 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 13mph Gusts, 10mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.50 μm/s
QUICK SUMMARY:

TJ was taking H1 to OBSERVING as I arrived!  The plan for tonight is to keep H1 at NLN as much as possible. If H1 goes down and locking is tough toward the end of the shift, I will redirect and focus on only the IMC, keeping it locked overnight to log time to monitor for any glitches.  If this happens, I will set ISC_LOCK to IDLE and take IMC_LOCK from MANAGED to AUTO.

Environmentally it is less breezy than 24hrs ago.  µseism is about the same as 24hrs ago (it rose a little but came back down).  

LHO General
thomas.shaffer@LIGO.ORG - posted 16:26, Wednesday 13 November 2024 (81242)
Ops Day Shift End

TITLE: 11/14 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: We took 4 or so hours out today to troubleshoot the laser glitches that we've been having. After that we were able to relock quickly and test out some longer ramp times for the ETMX transition state.
LOG:

Start Time System Name Location Laser_Haz Task Time End
21:41 SAF Laser LVEA YES LVEA is laser HAZARD 08:21
16:36 FAC Karen Opt lab n Tech clean 17:01
17:01 FAC Karen Wood shop/Fire pump rm N Tech clean 17:32
19:32 PSL/CDS Jason, Marc LVEA yes FSS transfer function 20:06
20:25 SEI Jim, Neil LVEA yes Bier garten huddle test meas. 21:21
20:43 VAC Janos MidX N Mech room checks and measurements 21:23
21:48 ISC/CDS Sheila, Marc LVEA yes IMC OLG TF meas. 22:13
22:07 SEI Neil CER yes Checking on cabling 22:44
22:44 SEI Jim, Neil LVEA yes Removing bier garten measurement 23:03
22:48 VAC Janos MX n Mech room checks and meas 23:04
H1 General
thomas.shaffer@LIGO.ORG - posted 16:09, Wednesday 13 November 2024 (81267)
Observing 0002 UTC

We paused locking for just over 4 hours today to try and troubleshoot the glitching that we've been having. After that was wrapped up we relocked without an initial alignment, but I did help DRMI lock by touching SRM and a little BS. Once at low noise there were 3 SDF diffs related to the new TRAMPS for the transition from ETMX state.

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:44, Wednesday 13 November 2024 - last comment - 16:07, Wednesday 13 November 2024(81262)
fast lockloss investigations today, plan for tonight

Today TJ had the IMC locked without the IFO (with the ISS first loop) from 18:45-22:45 without the IMC losing lock.  This might suggest that yesterday's change of the FSS EOM path op amp (81247) did indeed solve the issue that was making the IMC unstable even without the interferometer locking, but more time would make this a more definitive test. 

During the 4 hours, Jason and Marc also went to the floor and measured the FSS OLG (81254), which showed multiple places where it came close to unity gain.  They lowered the gain by 0.75 dB.  Later, Marc and I went and measured the IMC OLG, and we tried lowering the gain by 3dB (to a common gain of 12dB) to give the FSS OLG more clearance from unity gain; we measured the IMC OLG before and after this change (81259).
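
As a quick sanity check on what those dB steps mean in linear terms (plain arithmetic, not anything taken from the measurements), a minimal sketch:

import numpy as np

def db_to_linear(db):
    # amplitude gain factor corresponding to a dB change
    return 10 ** (db / 20.0)

print(db_to_linear(-0.75))   # ~0.917, the 0.75 dB FSS gain reduction
print(db_to_linear(-3.0))    # ~0.708, the 3 dB common-gain reduction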

We decided to try locking tonight with the FSS gain at 12dB.  If the interferometer is not stable through the evening shift, Corey will set the IFO to down and leave the IMC locked overnight so that we can get more time for the test of IMC stability on its own. 

 

Comments related to this report
thomas.shaffer@LIGO.ORG - 16:07, Wednesday 13 November 2024 (81266)

Here's a screenshot of the FSS diff in the safe.snap. There was no diff in the Observe snap, so we must not monitor it there. Perhaps that is something we should change?

Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 15:11, Wednesday 13 November 2024 - last comment - 16:11, Wednesday 13 November 2024(81260)
ETMX transition still a problem

Unfortunately, changing the ramp time in the IX L3 to EX L3 transition to 2 seconds (from 10 seconds) seems to have made things worse overall. The first lock attempt through this state was successful, but there were four locklosses last night in state 558, each with a 1-second state duration, and none tagged as IMC.

Sheila and I changed the ramp time to 20 seconds, to test if ramping slower is better. Besides that, it seems like we should measure the loops before and after the transition on the way up to check the stability of the DARM loop.

Comments related to this report
elenna.capote@LIGO.ORG - 15:59, Wednesday 13 November 2024 (81264)

The 20 second ramp appears to have worked. Still an oscillation, but with a much lower amplitude.

Images attached to this comment
elenna.capote@LIGO.ORG - 16:11, Wednesday 13 November 2024 (81268)

We tried to make an OLG measurement before and after the transition, but the template we used before the transition caused a big wiggle in the buildups so we didn't actually get any data. We ran the template after the transition and noticed the gain was low by about 15%.

Attached screenshot shows two OLG measurements because I forgot to add the most recent measurement as a reference. Upper left red measurement was taken Jun 20, 2024. Lower right red measurement was today.

Images attached to this comment
LHO General
thomas.shaffer@LIGO.ORG - posted 07:45, Wednesday 13 November 2024 - last comment - 16:07, Wednesday 13 November 2024(81240)
Ops Day Shift Start

TITLE: 11/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 16mph Gusts, 11mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.65 μm/s
QUICK SUMMARY: Looks like H1 made it up to low noise two times in the last 3 hours, but lost lock immediately. useism is still quite high, and some wind overnight didn't help. The IMC is now struggling to stay locked after a few failed DRMI attempts, but a successful PRMI. Starting initial alignment now.

 

Comments related to this report
camilla.compton@LIGO.ORG - 09:17, Wednesday 13 November 2024 (81241)PSL

Two of the 4 locklosses over the night have FSS_OSCILLATION tags and two have the IMC tag. The lockloss tool and attached trend show more glitches in the H1:PSL-FSS_FAST_MON_OUT_DQ channel before these locklosses. Maybe this is from the changes yesterday: the TTFSS was changed and "PSL FSS:  Common Gain increased from 13 to 15.  Fast Gain increased from 5 to 6" (81235).

NLN locklosses last night:

Time (gps) Time (UTC) Time in NLN Tags Notes Zoomed in plot
1415545894 2024-11-13 15:11:15 UTC 0:02:06 FSS_OSCILLATION, OMC_DCPD FSS channel glitches start and grow before lockloss, but not an IMC lockloss  plot zoomed plot
1415542310 2024-11-13 14:11:32 UTC 0:00:06 IMC, FSS_OSCILLATION, OMC_DCPD FSS channel has lots of glitches before lockloss that get more frequent, IMC lockloss plot zoomed plot
1415537275 2024-11-13 12:47:36 UTC 2:37:46 IMC, OMC_DCPD FSS channel normal, but IMC caused lockloss plot zoomed plot
1415508173 2024-11-13 04:42:34 UTC 1:07:00 OMC_DCPD Normal lockloss, no FSS glitches, not caused by IMC  
Images attached to this comment
jason.oberling@LIGO.ORG - 09:08, Wednesday 13 November 2024 (81244)

Increasing the gains should not have caused more oscillations.  I'm writing an alog covering the entirety of our work yesterday right now, but we raised the gains due to finding a bad component in the in-service TTFSS (the PA-85 high voltage op-amp in the EOM path) whose replacement reduced the observed zero crossings around 500kHz noted by Sheila et al. here and corrected anomalies Marc observed in the EOM path TF (he has the data for this).  We also correctly tuned the EOM notch to remove the 1.685MHz peak also seen in the linked alog.  Raising the common gain put the FSS UGF back in its normal range (400kHz - 500kHz), and we raised the Fast gain by 1dB to compensate for a slight loss of gain in the PZT path caused by our moving the PZT notch from ~35.7kHz to ~32.7kHz.  The TF and crossover measurement showed all should be well with the FSS after these fixes, so it's still unclear why these IMC locklosses continue.

Edit: Upon closer inspection of the TF, the peakiness around 500kHz is still present but apparently less, yet still close to a zero crossing.  We could lower the Common gain by 1dB (to 14dB) to better clear the zero crossing if we suspect this is causing instability.

camilla.compton@LIGO.ORG - 09:27, Wednesday 13 November 2024 (81245)

At 17:18 UTC we had another lockloss where the FSS_FAST_MON and FSS_PC_MON signals grow before the lockloss; IMC caused. The plot and zoomed plot show there aren't explicit glitches, but the FSS_FAST_MON channel just gets noisier.

Images attached to this comment
victoriaa.xu@LIGO.ORG - 16:07, Wednesday 13 November 2024 (81251)

Zoom out comparing Camilla's Lockloss 1 (tagged "FSS" + "IMC") vs. Lockloss 2 (tagged "FSS").

LL1 1415545894 sees lots of FSS glitches before FSS and IMC lose lock, then IFO loses lock. LL2 1415542310 sees noise related to FSS grow slowly before lockloss.

For lockloss 2, tagged "FSS" with a slow oscillation growing before lockloss, Daniel and I then compared the FSS and OMC_DCPD fast (64k) spectra before losing lock here, where H1:PSL-FSS_FAST_MON_OUT_DQ has been calibrated with zpk = ([], [10], [1.3e6]) to get Hz/rtHz.

After the PSL swap (cyan and pink traces), the laser noise looks as Daniel expected, with ~100 Hz/rtHz at 100 Hz. Before the PSL swap, the black trace looked lower than expected; unclear why.  From lho81210, LLO engages this zpk filter in their filter bank to get the output of H1:PSL-FSS_FAST_MON_OUT_DQ in Hz/rtHz; LHO has it in the filter bank but does not engage it.
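
For reference, a minimal sketch of applying that zpk([], [10], [1.3e6]) calibration to a raw spectrum with scipy is below; it assumes the pole is specified in Hz and applies the gain as given, without reproducing whatever normalization convention the foton filter bank uses, so treat the overall scale as approximate:

import numpy as np
from scipy import signal

def cal_mag(freqs_hz):
    # zpk([], [10], 1.3e6): no zeros, one pole at 10 Hz, gain 1.3e6
    w = 2 * np.pi * np.asarray(freqs_hz, dtype=float)   # rad/s
    _, h = signal.freqs_zpk([], [-2 * np.pi * 10.0], 1.3e6, worN=w)
    return np.abs(h)

# usage: asd_hz = asd_raw * cal_mag(freqs)   # raw ASD -> Hz/rtHz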

We see more broadband extra noise in FSS (not in OMC DCPD) before losing lock, but no big peaks / features otherwise.

Images attached to this comment