LHO General
thomas.shaffer@LIGO.ORG - posted 16:20, Thursday 14 September 2023 (72889)
Ops Day Shift Summary

TITLE: 09/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: One lock loss with a full auto relock. Quiet shift otherwise.
LOG:

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
14:59 | FAC | Tyler | EY | n | Check on chiller alarm | 15:15
15:22 | FAC | Tyler | EX | n | Check on chiller alarm | 15:37
18:06 | FAC | Randy | EY | n | Inspections outside of building | 19:02
20:14 | FAC | Tyler | EX | n | Checking on chiller | 20:22
20:31 | FAC | Tyler | EX | n | More chiller checks | 21:31
21:42 | VAC | Gerardo | EY | n | Check on water lines | 22:08
21:53 | FAC | Tyler | MY | n | Check on Fan 1 noise | 22:25
22:13 | - | Jeff, OBC | EX, overpass | n | Film arm driving | 22:35
H1 General
thomas.shaffer@LIGO.ORG - posted 16:07, Thursday 14 September 2023 (72890)
Lock loss 2137 UTC

1378762650

Looks like the same one as yesterday, and similar to others recently where LSC-DARM_IN1 shows the first signs of movement. Need to look into this further. Ibrahim has dug into this a good amount in 72876.

Images attached to this report
LHO General
bubba.gateley@LIGO.ORG - posted 15:23, Thursday 14 September 2023 (72888)
M-Y AHU FAN 1
I received an email from the control room (thank you TJ) about the Mid Y AHU Fan 1 showing some excessive vibration. Tyler went down to investigate and had me turn the fan off so he could listen to it as it ramped down. It sounded terrible; one or both bearings are damaged or destroyed. We switched to Fan 2, which should be much smoother. We will investigate Fan 1 and replace the bearings.
LHO FMCS
tyler.guidry@LIGO.ORG - posted 13:30, Thursday 14 September 2023 (72886)
EX Recurring Chiller Fault
An alarm was noted this morning on the Alerton system for End X chiller 2. The alarm information is not accessible remotely. At the chiller controls, a "Low Refrigerant Temperature" alarm was present. Because this is a non-latched alarm, I took note and cleared it before returning to the corner station.

Shortly after clearing, the alarm returned. I investigated the setup of the chiller via Compass and noticed that the chiller was set to "manual enable". It is my understanding that a manual enable will hold the chiller on regardless of the demand for cooling. I suspect that the cooler temperatures, combined with the manual enable, kept the chiller running well past the building's demand for chilled water, possibly driving refrigerant temperatures lower than the chiller would like.

When returning this afternoon to clear the second alarm, I observed, for the first time in recent memory, EX chiller 2 ramping down, indicating that the requirement for cooling had been satisfied. The control board's diagnostics agreed.

I'm hopeful the chiller refrigerant will return to normal following this change, but I will continue to monitor it in the coming days.

B. Gateley T. Guidry 
H1 OpsInfo (GRD)
ryan.short@LIGO.ORG - posted 12:06, Thursday 14 September 2023 - last comment - 10:18, Monday 18 September 2023(72884)
Earthquake Check and WAITING State in H1_MANAGER

R. Short, T. Shaffer

Our automation has called for assistance when earthquakes roll through and make locking H1 difficult; typically an operator just requests H1 to 'DOWN', waits until ground motion is low, then tries locking again. In an attempt to further improve our automation and lower the need for intervention, I've added a 'WAITING' state to H1_MANAGER that holds ISC_LOCK in 'READY' and waits for the SEI_ENV guardian to leave its 'EARTHQUAKE' state before moving back to 'RELOCKING.' H1_MANAGER will jump from 'RELOCKING' to 'WAITING' if the SEI_ENV node is in 'EARTHQUAKE' or 'LARGE_EQ' and ISC_LOCK is not past 'READY' (the motivation being that if H1 is making progress in locking when an earthquake hits, we don't want it to stop if the earthquake is harmless enough).
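
As a rough illustration of that jump condition (a hedged sketch only, not the actual H1_MANAGER code; the helper and state indices below are hypothetical stand-ins for Guardian/EPICS reads):

EQ_STATES = {"EARTHQUAKE", "LARGE_EQ"}

def should_jump_to_waiting(sei_env_state, isc_lock_index, ready_index):
    """Jump RELOCKING -> WAITING only if SEI_ENV flags an earthquake and
    ISC_LOCK has not yet progressed past its READY state."""
    return sei_env_state in EQ_STATES and isc_lock_index <= ready_index

# Example: earthquake flagged while ISC_LOCK is still at/below READY -> wait it out.
assert should_jump_to_waiting("EARTHQUAKE", isc_lock_index=10, ready_index=10)
assert not should_jump_to_waiting("EARTHQUAKE", isc_lock_index=60, ready_index=10)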

These changes are committed to svn and H1_MANAGER has been loaded.

Comments related to this report
ryan.short@LIGO.ORG - 10:18, Monday 18 September 2023 (72938)

There were two cases over the weekend where an earthquake caused a lockloss and H1_MANAGER correctly identified that, with SEI_ENV being in 'EARTHQUAKE' mode, it would be challenging to relock, so it kept ISC_LOCK from trying (one on 9/17 at 11:30 UTC and another on 9/18 at 13:44 UTC). However, after 15 minutes of waiting, IFO_NOTIFY called for assistance once it saw that ISC_LOCK had not made it to its 'READY' state; this was confusing behavior at first, since H1_MANAGER requests ISC_LOCK to 'READY' when it moves to the 'WAITING' state. When looking into this, I was reminded that ISC_LOCK's 'DOWN' state has a jump transition to 'PREP_FOR_LOCKING' when it finishes, meaning that ISC_LOCK will stall in 'PREP_FOR_LOCKING' unless it is revived by its manager or requested to go to another state. To fix this, I've added an "unstall" decorator to H1_MANAGER's 'WAITING' state's run method, which will revive ISC_LOCK so that it can move past 'PREP_FOR_LOCKING' and all the way to 'READY' while waiting for the earthquake to pass.
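
A minimal sketch of the intended WAITING-state behavior (illustrative Python assuming hypothetical node objects with a stalled flag and a revive() method; the real fix uses Guardian's decorator machinery, not this code):

def waiting_run(isc_lock, sei_env):
    """Stay in WAITING until SEI_ENV leaves its earthquake states, and keep
    ISC_LOCK revived so it does not stall in PREP_FOR_LOCKING on its way to READY."""
    if isc_lock.stalled:        # e.g. parked in PREP_FOR_LOCKING after DOWN completed
        isc_lock.revive()       # nudge the node so it continues toward READY
    # Returning False keeps the manager in this state; True lets it move back to RELOCKING.
    return sei_env.state not in ("EARTHQUAKE", "LARGE_EQ")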

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 11:15, Thursday 14 September 2023 (72883)
FMCS FCES MEDM created

I have put together an MEDM for the FCES (HAM8 shack) FMCS. It can be opened from the FMCS Overview MEDM.

On a related note, I have added a button on the Overview to run a script which restores the FMCS EPICS alarm settings.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:47, Thursday 14 September 2023 - last comment - 10:14, Friday 15 September 2023(72882)
Added front-end model latched IPC error display to CDS Overview

We are now in day 113 of O4 and we have not had any spontaneous IPC receive errors on any model throughout this time.

During Tuesday maintenance this week I forgot to issue a DIAG_RESET on h1oaf after the PEM models were restarted, so it is showing latched IPC errors from that time, which I just noticed today.

To elevate the visibility of latched transient IPC errors, I have added a new block on the CDS overview which turns yellow if the model has a latched IPC error. This block does not differentiate between IPC types (shared-memory, local-Dolphin, X-arm, Y-arm). The new block is labeled with a lower case "i". Clicking on this block opens the model's IPC channel table.

The upper case "I" block remains as before which turns red if there are any ongoing IPC errors (reported as a bit in the model's STATE_WORD)

To make space for this new block (located at the end by the CFC) I have reduced the width of the DAQ-STAT and DAQ-CFC triangles to the same width as the blocks (10 pixels).
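
For readers unfamiliar with the latched-versus-live distinction, a toy sketch follows; the bit position and the latched-error bookkeeping are purely illustrative, not the real CDS overview code:

IPC_ERROR_BIT = 1 << 4   # hypothetical STATE_WORD bit for an ongoing IPC error

def block_colors(state_word, latched_ipc_error_count):
    """Return colors for the upper-case 'I' (live) and lower-case 'i' (latched) blocks."""
    live = "red" if state_word & IPC_ERROR_BIT else "green"           # error happening right now
    latched = "yellow" if latched_ipc_error_count > 0 else "green"    # any error since the last DIAG_RESET
    return live, latched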

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:14, Friday 15 September 2023 (72897)

I have added a legend to the CDS Overview, showing what all the model status bits mean.

Clicking on the Legend button opens the DCC-T2300380 PDF using the zathura document viewer.

LHO VE
david.barker@LIGO.ORG - posted 10:35, Thursday 14 September 2023 (72880)
Thu CP1 Fill

Thu Sep 14 10:07:22 2023 INFO: Fill completed in 7min 18secs

Gerardo confirmed a good fill curbside.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 08:01, Thursday 14 September 2023 (72877)
Ops Day Shift Start

TITLE: 09/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT
    Wind: 1mph Gusts, 0mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY: Locked for 9 hours. A M5.2 earthquake from Colombia is starting to roll through and elevate ground motion in the 30-100 mHz band.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 00:43, Thursday 14 September 2023 (72876)
OPS Eve Shift Summary

TITLE: 09/14 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 142Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 06:04 UTC

Lock acquisition was fully automatic, but had to go through PRMI and MICH_FRINGES.

3 IY SDF diffs accepted as per Jenne's instruction (screenshotted)

Lockloss Investigation (alog 72875)

“EX” Saturation one second before the lockloss.

The lockloss happened at 04:45:48 UTC. While looking for the so-called "first cause", I noticed that there was an EX saturation at 04:45:47. I went and checked the L3 actuators since those were the highest (blinking yellow) on the saturation monitor.

Upon trending this lockloss, I found that there was indeed a very large actuation at 04:45:47, at around the same time for all four ESD quadrants. The fact that they all moved within the same millisecond tells me that they were all driven by something else. (Screenshot "EXSaturation")

More curious, however, is that there was a preliminary negative "kick" about half a second before that (1.5 seconds pre-lockloss, at 04:45:46.5 UTC). This wasn't a saturation, but it contributes to instability in EX (I'm assuming). This "kick" did not happen regularly before the saturation, so I think it may be relevant to the whole mystery. These also all happened at the same time (to the nearest ms) and all had the same magnitude of ~-330,000 cts, so I think this earlier kick was also caused by something else, since it's the same magnitude and time for all four.

It's worth noting that this preliminary kick was orders of magnitude larger (more negative) than the EX activity leading up to it (Screenshot "EXSaturation2"), and the saturation was many orders of magnitude larger than that, going up to 1x10^11 counts.

It is equally worth noting that EX has saturated 5 separate times since the beginning of the shift:

  1. 00:43:44 UTC
  2. 01:52:23 UTC
  3. 02:04:00 UTC
  4. 04:33:33 UTC (12 mins pre-lockloss)
  5. 04:45:47 UTC (1 second pre-lockloss)

Now the question becomes: was it another stage of “EX” or was it something else?

First, we can trend the other stages and see if there is any before-before-before lockloss behavior. (This becomes relevant later so I’ll leave this investigation for now).

Second, we can look at other fun and suspicious factors:

This leads me to believe that whatever caused the glitch behavior may be related to the EX saturation, which raises the question: have the recent EX saturations all prompted these BLRM all-bands glitches? Let's find out.

Matching the EX saturation timestamps above:

Looking at the saturation 12 minutes before, there was indeed a ring-up, this time with the 20-34 Hz band first and highest in magnitude (Screenshot "EXSaturation4"). It's worth saying that this rang up much less (I had to zoom in by many orders of magnitude to see the glitch behavior), which makes sense because it didn't cause a lockloss, but it also tells us that the lockloss-causing ones are particularly aggressive, given that:

  1. When we do lose lock, we have to go all the way to MICH_FRINGES in order to align properly (and yesterday even that didn’t work).
  2. Even the EX saturations that cause these glitches without causing locklosses are orders of magnitude lower than the ones that do.

Anyway, all EX saturations during this shift caused this behavior on the BLRMs screen. All of the non-lockloss-causing ones were about 1x10^8 times lower in magnitude than this one.

This all isn't saying much other than that an EX ring-up will show up in DARM.

But, now we have confirmed that these glitches seem to be happening due to EX saturations so let’s find out if this is actually the case. So far, we know that a very bad EX saturation happens, the BLRMs screen lights up, and then we lose lock. This splits up our question yet again:

Does something entirely different (OMC perhaps) cause EX to saturate or is the saturation happening and caused within another EX stage? (Our “first” from before) We can use our scope archeology to find out more.

But sadly, not today, because it's close to 1 AM now - the investigation would continue as follows:

  1. Trend the EX actuators and see when the "first kick" happens (a trending sketch follows this list). Remember, our first evidence of anything is at 04:45:46.5 UTC in L3, with all quadrants moving at the same time.
    a. If one actuator is clearly first, this supports the claim that that specific actuator is the problem.
    b. If none are first, then it is likely something else.
  2. Trend OMC DCPD channels in a similar manner to EX
  3. Do “all of the above” for locklosses of a similar breed (yesterday’s would be a good one).
  4. Continue exploring “first causes” among those locklosses if nothing else yields conclusive answers.
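
A minimal sketch of step 1, assuming gwpy access to the raw data; the channel names follow the usual ETMX L3 quadrant naming, and the GPS time and threshold are placeholders to be replaced with the real lockloss values:

from gwpy.timeseries import TimeSeriesDict

LOCKLOSS_GPS = 1378701966     # placeholder; substitute the actual lockloss GPS time
THRESHOLD = 3.0e5             # counts; roughly the size of the "kick" described above
CHANNELS = ["H1:SUS-ETMX_L3_MASTER_OUT_%s_DQ" % q for q in ("UL", "UR", "LL", "LR")]

data = TimeSeriesDict.get(CHANNELS, LOCKLOSS_GPS - 5, LOCKLOSS_GPS + 1)

# For each quadrant, report the first time the drive exceeds the threshold,
# then compare which quadrant (if any) moves first.
for name, ts in data.items():
    crossings = ts.times[abs(ts.value) > THRESHOLD]
    print(name, crossings[0] if len(crossings) else "no crossing")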

I will continue investigating this in tomorrow’s shift.

P.S. If my reasoning is circular or I’m missing something big, then burst my bubble as slowly as possible so I could maximize the learning (and minimize the embarrassment)

LOG:

None

Images attached to this report
H1 General
ibrahim.abouelfettouh@LIGO.ORG - posted 21:49, Wednesday 13 September 2023 - last comment - 09:32, Thursday 14 September 2023(72875)
Lockloss 4:46 UTC

Lockloss due to another of the same type of glitch as yesterday's lockloss, alog 72852 (and less than 20 minutes apart, too). Attempting to re-lock automatically now while investigating the cause of the glitch.

Comments related to this report
thomas.shaffer@LIGO.ORG - 09:32, Thursday 14 September 2023 (72878)

Investigation is in Ibrahim's summary, alog 72876.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 20:08, Wednesday 13 September 2023 (72874)
OPS Eve Midshift Update

IFO is in NLN as of 7:24 UTC (19:37 hr lock) and OBSERVING as of 23:12 UTC

Other:

IY violin modes 5 (and 6) have been going down steadily for the last 4 hrs since commissioning rang them up and a gain was applied to mode 5 (screenshot). All violins are now under 10^-17 m/√Hz as of about 2:00 UTC.

 

Images attached to this report
H1 SQZ
victoriaa.xu@LIGO.ORG - posted 17:53, Wednesday 13 September 2023 - last comment - 11:13, Monday 25 September 2023(72870)
Brief test for fc backscatter with fc alignments, some strange lines in FC-LSC locking signal

Based on TJ's log 72857 about recent FC alignment drifts, I wanted to check out the situation with possible scattered light from the filter cavity since alignments can drift around (worth investigating more), and I wanted to check the CLF fiber polarization (this was basically fine).

Summary screenshot shows some measurements from today. I'm wondering if we might be closer than I realized to filter cavity backscatter. With an excitation ~5-10x greater than the ambient LSC control to FC2, it created scatter about 2x above DARM at 100 Hz. Based on previous (higher-frequency) measurements (LHO:68022) and estimates (LHO:67586), I had thought ambient LSC control was about 10-fold below DARM; this suggests we are within a factor of ~5? Though, there is a lot of uncertainty (on my end) in how hard we are driving FC2 given the suspension/loop roll-offs/etc., so I need to think more about the scaling between the measured scatter and the excitation.

Strange line in the FC-LSC error signal -- it seems to wander; I've seen it (or things like it) sometimes in other SQZ-related signals, but I haven't figured out where it comes from yet. It can be easily seen on the SQZ summary pages (sqz > subsystems tab) since Derek and Iara helped us set it up (thank you Detchar!!). I don't see it in CLF_ISS but sometimes in other error signals. It's not clear to me that this is an issue if the peak stays above 100 Hz, but if it drifts to lower frequencies and this line is real/physical (not some random artifact), it could be problematic. The peak amplitude seems large enough that if it were in-band of the FC2 suspension and controls, it could plausibly get injected as real FC length noise and drive some measurable backscatter.

For the excitation -- I used the template from LHO:68022 but ran it to lower frequencies, in-band of FC-LSC. Compared to the FC error signal (specifically H1:SQZ-FC_WFS_A_I_SUM_OUTPUT, which is the input sensor to FC_LSC_DOF2_IN1), this DTT injection of 30,000 counts increased the integrated RMS of the in-loop error signal at 10 Hz by about 9-fold (= 3375 (w/excitation) / 387 (ambient), measured from dtt rms cursors). I injected into the fc-lsc loop at H1:SQZ-FC_LSC_DOF2_EXC, with various amplitudes (like 30k), and filter = butter("BandPass",4,10,300); this should then go to FC2_L for suspension feedback. I'm not sure that I'm using the best witness sensors for the actual length noise driven by this excitation, but I wasn't able to totally figure it out in time.
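
As a rough point of reference, here is a scipy sketch of an equivalent band-limited drive; the sample rate, noise amplitude, and order convention are assumptions (the actual injection was shaped and driven with DTT, not this code):

import numpy as np
from scipy import signal

fs = 16384                                     # assumed model rate for the FC LSC path
rng = np.random.default_rng(0)
white = 30000 * rng.standard_normal(10 * fs)   # ~30k-count white noise, 10 s long

# Butterworth band-pass between 10 Hz and 300 Hz, mimicking butter("BandPass",4,10,300);
# note foton's order convention may differ from scipy's.
sos = signal.butter(4, [10, 300], btype="bandpass", fs=fs, output="sos")
excitation = signal.sosfilt(sos, white)

print("band-limited drive RMS ~ %.0f counts" % excitation.std())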

With this excitation going, I tried to walk the alignment to see if there was an alignment that minimizes backscatter, but I didn't figure this out in time. I tried to walk ZM2 with beam spot control off, and then set the QPD offsets where it landed. This was probably the wrong approach, since I wasn't able to then set the QPD offsets in time; maybe I should have walked the FC-QPD offsets with full ASC running at higher gain, since this loop is so slow. Might be worth trying this again for a bit; with the injection running, I wasn't sure if I was able to minimize scatter by walking ZM2 (pitch/yaw, maybe PSAMS?), but there were a couple of directions in both pitch and yaw that looked promising.

In the end, SDF diffs for ZM2 were accepted (ZM2 is not under asc-control), and I accepted the beam spot position change in the FC QPD Pitch offset (from 0.07 --> 0.08) while reverting the yaw change that I didn't figure out in time. I don't anticipate much overall change in the squeezer after these tests.

Images attached to this report
Comments related to this report
lee.mcculler@LIGO.ORG - 11:13, Monday 25 September 2023 (73090)

Do you have the SQZ laser noise eater on? We noticed a similarly wandering line before O3. See LLO43780 and LLO43822.

H1 ISC
jenne.driggers@LIGO.ORG - posted 17:08, Wednesday 13 September 2023 - last comment - 17:47, Wednesday 13 September 2023(72862)
Measurements for adjusting LSC FF

[Jenne, Gabriele, with thoughts and ideas from Elenna and Sheila]

The last few days, our sensitivity has been degrading a small amount, and Gabriele noted from a bruco that we're seeing increased MICH and SRCL coherence. It hasn't even been a full 2 weeks since Gabriele and Elenna last tuned the MICH FF, so this is disappointing. Elenna has made the point in alog 72598 that the effectiveness of the MICH FF seems to be related to the actuation strength of the ETMX ESD. We certainly see that the Veff of ETMX has been marching monotonically for the last few months, as shown in Ibrahim's alog 72849. After roughly confirming that this makes sense, Gabriele and I took measurements in preparation for soon switching the LSC FF to use the ITMY PUM, just like LLO does, in hopes that this makes us more immune to these gain changes.


Today, at Sheila's suggestion, I tried modifying the ETMX L3 DriveAlign L gain to counteract this actuation strength change. (Fear not, I reverted the gain before commissioning ended, so our Observe segments do not have any change to any calibration.) To check the effect of changing that DriveAlign value, I looked at both the DARM open loop gain and the coupling between MICH and DARM.

This all seemed to jibe with the MICH FF effectiveness being related to the ETMX ESD actuation strength. So, rather than try to track that, we decided to work on changing over to use the ITMY PUM like LLO does. I note that our Transition_from_ETMX guardian state uses ITMX, not ITMY, so it should be safe to have made changes to the ITMY settings.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 17:47, Wednesday 13 September 2023 (72873)

Here is a first look at fitting the MICH and SRCL FF when actuating through ITMY PUM. We started by measuring the coupling with FF completely off, and we might want / need to iterate once or twice to get better results.

MICH: fit looks good, a filter of order 14 fits the transfer function reasonably well

zpk([-1.873934219410558+i*46.83216078589182;-1.873934219410558-i*46.83216078589182;-0.7341315698490372+i*62.0033340187172;-0.7341315698490372-i*62.0033340187172;-0.9693772618643728+i*65.63696788856699;-0.9693772618643728-i*65.63696788856699;-0.2306511845861754+i*111.0934300115408;-0.2306511845861754-i*111.0934300115408;-2.345238808833105+i*1219.259385509747;-2.345238808833105-i*1219.259385509747;166.8801527617445+i*2890.927112949171;166.8801527617445-i*2890.927112949171;60.92218422456504;-104.5628711283977],[-5.152689788327478+i*30.32659841775321;-5.152689788327478-i*30.32659841775321;-11.78911031695619+i*37.8625720690391;-11.78911031695619-i*37.8625720690391;-1.090075083133351+i*61.79155607437996;-1.090075083133351-i*61.79155607437996;-0.9661450057136016+i*65.6925963766675;-0.9661450057136016-i*65.6925963766675;-0.2109589422385647+i*111.481550695115;-0.2109589422385647-i*111.481550695115;-2.311985363993003+i*1219.293222488905;-2.311985363993003-i*1219.293222488905;-219.9709799569737+i*1294.77378378834;-219.9709799569737-i*1294.77378378834],-0.1367317709937509)

SRCL: as usual it's hard to get a good fit. The predicted performance is a factor 10 subtraction, which should be ok as a start. We might need to iterate

zpk([-11.44480027611615+i*191.7889881463648;-11.44480027611615-i*191.7889881463648;-0.9328145827962181+i*266.494151512518;-0.9328145827962181-i*266.494151512518;-11.21909030586581+i*278.7781189198607;-11.21909030586581-i*278.7781189198607;-15.29341485126412+i*362.4319559041366;-15.29341485126412-i*362.4319559041366;-25.24254442245431+i*409.6141252691203;-25.24254442245431-i*409.6141252691203;-472.4077152209672+i*539.2943077769149;-472.4077152209672-i*539.2943077769149;-497.1568846370241+i*2312.902650596404;-497.1568846370241-i*2312.902650596404],[-0.7255682054509971+i*14.53648889678977;-0.7255682054509971-i*14.53648889678977;-11.0173094335847+i*191.6358938485432;-11.0173094335847-i*191.6358938485432;-0.9817914931295652+i*266.5227875729932;-0.9817914931295652-i*266.5227875729932;-11.91696627440176+i*278.5709318199229;-11.91696627440176-i*278.5709318199229;-15.26184825001648+i*362.5155688498652;-15.26184825001648-i*362.5155688498652;-24.47199659066153+i*410.2863096146389;-24.47199659066153-i*410.2863096146389;-227.951089876243+i*1896.119580343166;-227.951089876243-i*1896.119580343166],0.0007550286305133534)

Filters not yet uploaded to foton. Note that the plots do not include the additional high-pass filters that we are using, so the low-frequency amplitude of the two LSC FF paths is lower.
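
For anyone wanting to poke at fits like these offline, here is a hedged scipy sketch of evaluating a zpk frequency response; the small zero/pole set is illustrative (not the fit above), the roots are assumed to be in Hz, and foton's gain/plane conventions are glossed over:

import numpy as np
from scipy import signal

# Illustrative roots in Hz (complex-conjugate pairs) and an overall gain.
zeros_hz = np.array([-1.87 + 46.8j, -1.87 - 46.8j])
poles_hz = np.array([-5.15 + 30.3j, -5.15 - 30.3j, -11.8 + 37.9j, -11.8 - 37.9j])
gain = -0.137

# Convert Hz to rad/s and evaluate the analog response from 1 Hz to 1 kHz.
freqs = np.logspace(0, 3, 500)
w = 2 * np.pi * freqs
_, h = signal.freqs_zpk(2 * np.pi * zeros_hz, 2 * np.pi * poles_hz, gain, worN=w)

mag_db = 20 * np.log10(np.abs(h))       # magnitude of the candidate FF filter
phase_deg = np.angle(h, deg=True)       # phase, for comparison against the measured coupling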

Images attached to this comment
H1 TCS
oli.patane@LIGO.ORG - posted 20:02, Sunday 03 September 2023 - last comment - 15:46, Thursday 14 September 2023(72653)
Out of Observing Briefly for TCS ITMX

Between 09/04 00:48 and 00:50 UTC, the TCSX CO2 laser lost lock and pushed us out of Observing (attachment 1). There were two SDF diffs, for TCS-ITMX_CO2_CHILLER_SERVO_GAIN and TCS-ITMX_CO2_PZT_SERVO_GAIN (attachment 2), both of which resolved themselves as the laser locked back up. Currently these unlocks are happening every ~2.8 days. Jeff and Camilla noted (72627) that the ITMX TCS laser head power has been declining over the course of the past 1.5 months, and the drops in the power output line up with every time that the TCSX laser loses lock (attachment 3).

08/15 22:00UTC - TCSX Chiller Swap to spare -> ..sn813 (72220)
08/16 08:30UTC TCSX unlock
08/18 13:30UTC ''
08/23 08:42UTC ''
08/26 09:46UTC ''
08/29 08:27UTC ''
09/01 01:44UTC ''
09/04 00:48UTC ''

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:16, Thursday 14 September 2023 (72885)

TJ and I noticed that since the 72220 chiller swap, the CO2 laser temperature is 0.3 degC hotter, see attached. The guardian increased the set point to relock the laser when the chiller was swapped, but maybe this is not a good temperature. On the next IFO unlock we can try adjusting the CHILLER_SET_POINT lower. I did this with the old chiller in alog 71685, but it didn't help.

Images attached to this comment
camilla.compton@LIGO.ORG - 15:46, Thursday 14 September 2023 (72887)

While the IFO was relocking, at 22:13 UTC I reduced H1:TCS-ITMX_CO2_CHILLER_SET_POINT_OFFSET by 0.3 degC, from 21.16 to 20.86 degC. This changed the laser temperature by 0.03 degC.

Images attached to this comment
H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 11:48, Monday 14 August 2023 - last comment - 10:36, Thursday 14 September 2023(72195)
Unmonitored syscssqz channels that have been taking IFO out of observing

Naoki and I unmonitored H1:SQZ-FIBR_SERVO_COMGAIN and H1:SQZ-FIBR_SERVO_FASTGAIN in the syscssqz observe.snap. They have been regularly taking us out of observing (72171) by changing value when the TTFSS isn't actually unlocking, see 71652. If the TTFSS really unlocks, there will be other SDF diffs and the sqz guardians will unlock.

We still plan to investigate this further tomorrow. We can keep monitoring whether it happens by trending these channels.
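
A minimal sketch of that kind of check, assuming gwpy access; the 5-day window below is a placeholder:

import numpy as np
from gwpy.timeseries import TimeSeries

START, END = "2023-08-09 00:00", "2023-08-14 00:00"   # placeholder window
ts = TimeSeries.get("H1:SQZ-FIBR_SERVO_COMGAIN", START, END)

# Each change in the stored gain value is an event that would have dropped us
# out of observing if the channel were still monitored in the observe.snap.
changes = np.count_nonzero(np.diff(ts.value) != 0)
print("gain value changed %d times in this window" % changes)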

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 10:42, Tuesday 15 August 2023 (72227)

Daniel, Sheila

We looked at one of these incidents to see what information we could get from the Beckhoff error checking. The attached screenshot shows that when this happened on August 12th at 12:35 UTC, the Beckhoff error code for the TTFSS was 2^20; counting down on the automated error screen (second attachment), the 20th error is "Beatnote out of range of frequency comparator". We looked at the beatnote error EPICS channel, which does seem to be well within the tolerances. Daniel thinks that the error is happening faster than it can be recorded by EPICS. He proposes that we go into the Beckhoff code and add a condition that the error condition has to be met for 0.1 s before throwing the error.
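
An illustrative sketch of the proposed debounce, written here in Python for clarity (the real change would live in the Beckhoff/TwinCAT code):

class ErrorDebouncer:
    """Only flag an error once the bad condition has persisted for hold_time seconds."""

    def __init__(self, hold_time=0.1):
        self.hold_time = hold_time
        self._bad_since = None

    def update(self, condition_bad, now):
        if not condition_bad:
            self._bad_since = None      # condition cleared, reset the timer
            return False
        if self._bad_since is None:
            self._bad_since = now       # first sample where the condition is bad
        return (now - self._bad_since) >= self.hold_time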

Images attached to this comment
camilla.compton@LIGO.ORG - 10:17, Friday 18 August 2023 (72317)

In the last 5 days these channels would have taken us out of observing 13 times if they were still monitored (plot attached). Worryingly, 9 of those were in the last 14 hours (see attached).

Maybe something has changed in SQZ to make the TTFSS more sensitive. The IFO has been locked for 35 hours; over such long locks we sometimes get close to the edges of our PZT ranges due to temperature drifts.

Images attached to this comment
victoriaa.xu@LIGO.ORG - 12:25, Tuesday 22 August 2023 (72372)SQZ

I wonder if the TTFSS 1611 PD is saturated as the power from the PSL fiber has drifted. Trending the RFMON and DC volts from the TTFSS PD, it looks like in the past 2-3 months the green beatnote's demod RF MON has increased (its RF max is 7), while the bottom gray DC volts signal from the PD has flattened out around -2.3 V. It also looks like the RF MON got noisier as the PD DC volts saturated.

This PD should see the 160 MHz beatnote between the PSL (via fiber) and the SQZ laser (free space). From LHO:44546, it looks like this PD "normally" would have about 360uW on it, with 180uW from each arm. If we trust the PD calibrations, then the current PD values report ~600uW total DC power on the 1611 PD (red), with 40uW transmitted from the PSL fiber (green trend). Pick-offs for the remaining sqz laser free-space path (i.e. the sqz laser seed/LO PDs) don't see power changes, so it's unlikely the saturations are coming from upstream sqz laser alignment. Not sure if there are some PD calibration issues going on here. In any case, all fiber PDs seem to be off from their nominal values, consistent with their drifts in the past few months.

I adjusted the TTFSS waveplates on the PSL fiber path to bring the FIBR PDs closer to their nominal values, and at least so we're not saturating the 1611. TTFSS and squeezer locks seem to have come back fine. We can see if this helps the SDF issues at all.

Images attached to this comment
camilla.compton@LIGO.ORG - 10:36, Thursday 14 September 2023 (72881)

These were re-monitored in 72679 after Daniel adjusted the SQZ Laser Diode Nominal Current, stopping this issue.

H1 CAL
vladimir.bossilkov@LIGO.ORG - posted 08:29, Friday 28 July 2023 - last comment - 12:31, Tuesday 12 December 2023(71787)
H1 Systematic Uncertainty Patch due to misapplication of calibration model in GDS

This was first observed as a persistent mis-calibration in the systematic-error-monitoring Pcal lines which measure PCAL / GDS-CALIB_STRAIN, affecting both LLO and LHO ([LLO Link], [LHO Link]), characterised by these measurements consistently disagreeing with the uncertainty envelope.
It is presently understood that this arises from bugs in the code producing the GDS FIR filters, which create a sizeable discrepancy; Joseph Betzwieser is spearheading a thorough investigation to correct this.

I make a direct measurement of this systematic error by dividing CAL-DARM_ERR_DBL_DQ by GDS-CALIB_STRAIN, where the numerator is further corrected for the kappa values of the sensing, the cavity pole, and the 3 actuation stages (GDS applies the same corrections internally). This gives a transfer function of the difference induced by errors in the GDS filters.

Attached to this aLog, and to its sibling aLog at LLO, are this measurement in blue, the PCAL / GDS-CALIB_STRAIN measurement in orange, and the smoothed uncertainty correction vector in red. Also attached is a text file of this uncertainty correction for application in pyDARM to produce the final uncertainty, in the format [Frequency, Real, Imaginary].
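
A bare-bones sketch of this ratio measurement, assuming gwpy access; it omits the kappa/cavity-pole corrections to the numerator that the real analysis applies, and the FFT settings simply mirror those quoted in the comments below:

from gwpy.timeseries import TimeSeriesDict

START, DURATION = 1374469418, 384     # thermalized stretch quoted in the comments below
chans = ["H1:CAL-DARM_ERR_DBL_DQ", "H1:GDS-CALIB_STRAIN"]
data = TimeSeriesDict.get(chans, START, START + DURATION)

num, den = data[chans[0]], data[chans[1]]

# Transfer-function estimate of num/den from cross- and power-spectral densities
# (48 s FFTs, 8 averages over the 384 s stretch).
tf = den.csd(num, fftlength=48) / den.psd(fftlength=48)
print(tf)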

Images attached to this report
Non-image files attached to this report
Comments related to this report
ling.sun@LIGO.ORG - 15:33, Friday 28 July 2023 (71798)

After applying this error TF, the uncertainty budget seems to agree with monitoring results (attached).

Images attached to this comment
ling.sun@LIGO.ORG - 13:02, Thursday 17 August 2023 (72299)

After running the command documented in alog 70666, I've plotted the monitoring results on top of the manually corrected uncertainty estimate (see attached). They agree quite well.

The command is:

python ~cal/src/CalMonitor/bin/calunc_consistency_monitor --scald-config  ~cal/src/CalMonitor/config/scald_config.yml --cal-consistency-config  ~cal/src/CalMonitor/config/calunc_consistency_configs_H1.ini --start-time 1374612632 --end-time 1374616232 --uncertainty-file /home/ling.sun/public_html/calibration_uncertainty_H1_1374612632.txt --output-dir /home/ling.sun/public_html/

The uncertainty is estimated at 1374612632 (span 2 min around this time). The monitoring data are collected from 1374612632 to 1374616232 (span an hour).

 

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 17:01, Wednesday 13 September 2023 (72871)
J. Kissel, J. Betzwieser

FYI: The time Vlad used to gather TDCFs to update the *modeled* response function at the reference time (R, in the numerator of the plots) is
    2023-07-27 05:03:20 UTC
    2023-07-26 22:03:20 PDT
    GPS 1374469418

This is a time when the IFO was well thermalized.

The values used for the TDCFs at this time were
    \kappa_C  = 0.97764456
    f_CC      = 444.32712 Hz
    \kappa_U  = 1.0043616 
    \kappa_P  = 0.9995768
    \kappa_T  = 1.0401824

The *measured* response function (GDS/DARM_ERR, the denominator in the plots) is from data with the same start time, 2023-07-27 05:03:20 UTC, over a duration of 384 seconds (8 averages of 48 second FFTs).

Note these TDCF values listed above are the CAL-CS computed TDCFs, not the GDS computed TDCFs. They're the values exactly at 2023-07-27 05:03:20 UTC, with no attempt to average further over the duration of the *measurement*. See the attached .pdf, which shows the previous 5 minutes and the next 20 minutes. From this you can see that GDS was computing essentially the same thing as CALCS -- except for \kappa_U, which we know
 - is bad during that time (LHO:72812), and
 - unimpactful w.r.t. the overall calibration.
So the fact that 
    :: the GDS calculation is frozen and
    :: the CALCS calculation is noisy, but is quite close to the frozen GDS value is coincidental, even though
    :: the ~25 minute mean of the CALCS is actually around ~0.98 rather than the instantaneous value of 1.019
is inconsequential to Vlad's conclusions.

Non-image files attached to this comment
louis.dartez@LIGO.ORG - 00:54, Tuesday 12 December 2023 (74747)
I'm adding the modeled correction due to the missing 3.2 kHz pole here as a text file. I plotted a comparison showing Vlad's fit (green), the modeled correction evaluated on the same frequency vector as Vlad's (orange), and the modeled correction evaluated using a dense frequency spacing (blue); see eta_3p2khz_correction.png. The denser frequency spacing recovers an error of about 2% between 400 Hz and 600 Hz. Otherwise, the coarsely evaluated modeled correction seems to do quite well.
Images attached to this comment
Non-image files attached to this comment
ling.sun@LIGO.ORG - 12:31, Tuesday 12 December 2023 (74758)

The above error was fixed in the model at GPS time 1375488918 (Tue Aug 08 00:15:00 UTC 2023) (see LHO:72135)
