H1 OpsInfo (ISC)
thomas.shaffer@LIGO.ORG - posted 14:52, Monday 15 July 2024 (79136)
Added manual_control parameter to lscparams to avoid some automation states when True

Sheila, TJ

In times of heavy commissioning or troubleshooting, we want full control of the IFO and don't want to fight the automation. Sheila suggested that we add a manual_control boolean to lscparams.py that ISC_LOCK will look at to decide whether it will automatically run states like Check_Mich_Fringes, PRMI, Increase_Flashes, etc. When this is set to True, it will avoid these automated states, either through a conditional in a state's logic or by weighting edges to force ISC_LOCK to avoid particular states.

For now we are setting manual_control = True while we troubleshoot the IFO. We will need to remember to set it back to False when we want fully automatic operation again.
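For reference, a minimal sketch of how such a flag could be consulted (the state name and decision point below are hypothetical; the real ISC_LOCK logic and its edge weighting live in the Guardian code and are not reproduced here):

    # lscparams.py -- module-level flag named in this entry
    manual_control = True   # True while commissioning/troubleshooting

    # ISC_LOCK.py -- illustrative only
    from guardian import GuardState
    import lscparams

    class DECIDE_RECOVERY(GuardState):
        def main(self):
            if lscparams.manual_control:
                return 'DOWN'                # skip automation, hand the IFO back to the operator
            return 'CHECK_MICH_FRINGES'      # otherwise run the automated recovery path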

H1 AOS (SEI)
neil.doerksen@LIGO.ORG - posted 12:29, Monday 15 July 2024 - last comment - 15:34, Monday 15 July 2024(79132)
April 18 Seismic Timeline, Lock, Lockloss Summary

Pre April 18

April 18

Post April 18

Comments related to this report
neil.doerksen@LIGO.ORG - 13:18, Monday 15 July 2024 (79133)

Here are some plots around the EQ from April 18, 2024, which USGS reports as coming from Canada at 06:06.

Images attached to this comment
neil.doerksen@LIGO.ORG - 15:34, Monday 15 July 2024 (79138)

Using KAPPA_C channel.

Images attached to this comment
H1 ISC (SQZ)
camilla.compton@LIGO.ORG - posted 12:25, Monday 15 July 2024 (79128)
Translating SQZ beam after IFO alignment shift

Naoki, Sheila, Camilla

To continue investigating alignment changes: 79105. More details in Sheila's 79127.

We put SRM back to where it was before the alignment shift, misaligned SRM2 and the ITMs, and injected 1.28mW of SQZ SEED beam (as measured on SQZT7 OPO_IR_PD_DC_POWERMON). We followed the same procedure as in 78115. Increased AS_AIR camera exposure from 10,000 to 100,000.

Translated the SQZ beam around the last NLN spot. In each direction the throughput was getting better, but in both pitch directions we saturated ZM4 before moving far. We were able to recover the full expected 0.9mW throughput by moving ZM5 pitch and yaw both positive.

Images attached to this report
H1 AOS (DetChar)
mattia.emma@LIGO.ORG - posted 11:52, Monday 15 July 2024 (79129)
Computing the average relock time for O4a

The two attached histograms show the relock times for LIGO Hanford during the whole of O4a.

The average relock time was 2 hours and 55 minutes, while the median was 1 hour and 33 minutes.

The relock time was computed as the time for which the ISC_LOCK_N value was consecutively below 580.

The 580 value corresponds to the inject-squeezing state; when we had used "below 600" as the lockloss threshold, there were some spuriously short "relocks" because the squeezer occasionally unlocked and relocked automatically.
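A rough sketch of the logic described above (not the code from the repository linked below), assuming `samples` is a time-ordered list of (GPS time, ISC_LOCK state number) pairs:

    def relock_durations(samples, threshold=580):
        """Return lengths (s) of contiguous stretches with the state number below threshold."""
        durations = []
        start = None
        for t, state in samples:
            if state < threshold:
                if start is None:
                    start = t                    # a relock period begins
            elif start is not None:
                durations.append(t - start)      # the relock period ends
                start = None
        return durations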

Link to the github repository with the employed code and data

Images attached to this report
H1 SQZ (IOO, ISC)
sheila.dwyer@LIGO.ORG - posted 11:29, Monday 15 July 2024 (79127)
power estimates for sqz beams

Summary: We think that the sqz beam is seeing similar loss to the IFO beam in the alignment that we were running in until Friday, but double passed. The sqz beam transmission can be restored by moving the beam, and there should be enough power in this beam for it to be a useful diagnostic in chamber to distinguish between OFI and SRM damage.

Camilla and Naoki have measured 1.3mW coming from HAM7 towards HAM5 with the OPO locked on the seed. SRM transmission is 32.34% (72680), so if nothing was damaged we should have something like 0.41mW of the OPO seed beam in transmission of SRM, which we should be able to measure with a power meter.

The table in 79101 indicates that the single bounce transmission to AS_C has dropped to 64% of what it was last Thursday.  Here's a table of power predictions for the sqz beam arriving in HAM6:

                                                      sqz seed beam we expect at AS_C
no damage                                             1.3mW * (1-0.3234) * 0.99^2 = 0.86mW
single pass of damaged optic with 0.64 transmission   1.3mW * (1-0.3234) * 0.99^2 * 0.64 = 0.55mW
double pass of damaged optic with 0.64 transmission   1.3mW * (1-0.3234) * 0.99^2 * 0.64^2 = 0.35mW
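The same arithmetic written out (all numbers taken from the table above; the two 0.99 factors are the loss terms already assumed there):

    P_in  = 1.3e-3         # W, seed power headed from HAM7 towards HAM5
    R_srm = 1 - 0.3234     # fraction reflected off SRM towards the AS port, per the table
    eta   = 0.99 * 0.99    # the two 99% factors used in the table

    no_damage   = P_in * R_srm * eta        # ~0.86 mW
    single_pass = no_damage * 0.64          # ~0.55 mW
    double_pass = no_damage * 0.64 * 0.64   # ~0.35 mW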

With the alignment used for observing in recent weeks Naoki and Camilla measured 0.2 mW on AS_C QPD (not well centered on AS_C, but on the diode), which is calibrated into Watts arriving in HAM6.  They moved ZM4 +5 to improve the transmission of the sqz beam to 0.9mW when centered on AS_C.  These numbers don't agree super well with the predictions above, but they seem to suggest that the SQZ beam sees similar damage to the main IFO beam, but double passed.

H1 ISC
jennifer.wright@LIGO.ORG - posted 10:57, Monday 15 July 2024 (79125)
Current dip in output arm gain seems more abrupt compared to previous OFI 'burn' on 22nd April

Sheila asked me to look at the same trends we used to track the damage in the OFI over the 22nd and 23rd of April when it first happened. We think something like this might have happened again on Friday.

Attached you can see the optical gain, coupled cavity pole, range in Mpc, power at the anti-symmetric port, circulating power, and power reflected from the OMC, all of which changed between the 22nd and 23rd of April, when the damage first started affecting the range, causing it to decrease slowly.

In the next image we have the same trends from the last couple of days. There doesn't seem to be a slow degradation in range in this case: the range got worse between two locks on the 12th, but nothing else seems to have degraded (unlike before, when we observed a drop in optical gain, circulating power, output power, and coupled cavity pole) before we got to the point where we couldn't relock.

Images attached to this report
LHO FMCS (PEM)
ryan.short@LIGO.ORG - posted 09:28, Monday 15 July 2024 (79120)
HVAC Fan Vibrometers Check - Weekly

FAMIS 26315, last checked in alog78896

No appreciable change in any fan's noise levels compared to last check.

Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 09:21, Monday 15 July 2024 (79118)
PSL 10-Day Trends

FAMIS 21008

No major events of note for these trends except for the PSL work done last Tuesday (seen in almost every trend), and, concerningly, a slight rise in PMC reflected power of about 0.3W over 5 days. It seems to have leveled off in the past day or so, however.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 09:16, Monday 15 July 2024 - last comment - 15:40, Monday 15 July 2024(79119)
Atomic Clock lost sync with timing-system

At 08:50 PDT the MSR Atomic Clock lost sync with the timing system; the comparator is reading a steady 0.24s 1PPS difference. This has happened before, and it requires a manual resync of the Atomic Clock to the timing-system 1PPS. This was last done on 6th May 2024.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 15:40, Monday 15 July 2024 (79139)

Daniel resynced the Atomic Clock to the timing 1PPS at 15:30 PDT this afternoon. The corner station timing error has gone away.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 08:11, Monday 15 July 2024 (79115)
Mon CP1 Fill

Mon Jul 15 08:06:22 2024 INFO: Fill completed in 6min 18secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 07:44, Monday 15 July 2024 - last comment - 09:36, Monday 15 July 2024(79113)
OPS Monday day shift start

TITLE: 07/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 5mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 08:06, Monday 15 July 2024 (79114)CDS

I'm getting a 502 proxy error whenever I try to load the DCC

david.barker@LIGO.ORG - 08:14, Monday 15 July 2024 (79116)

I'm seeing the same issue from off-site; there is now a message on the page indicating a server issue.

Images attached to this comment
david.barker@LIGO.ORG - 09:36, Monday 15 July 2024 (79122)

Jonathan's emailed instructions on how to use the local archived DCC (read-only):

1. Go to https://dcc-lho.ligo.org
2. Click login.
3. Do not click the LIGO icon that says fed-proxy (this is what is broken).
4. Click on the link that says "select from a list".
5. In the drop-down list select the Backup::LHO option.
6. Click OK.
7. Log on as normal.

This will work, but you cannot push new documents and it is about a day out of date.

As for other services, if you get a choice of selecting the backup LHO, login will work that way. Anything that does not and just goes straight to the federated proxy (i.e. like the DCC, you go to a page that says cilogon.org to select your institution) will not work right now.

This is being worked on and should clear up later.

LHO VE
david.barker@LIGO.ORG - posted 09:15, Sunday 14 July 2024 - last comment - 09:33, Monday 15 July 2024(79108)
Pressure glitch on PT100B HAM1 at 07:25:15 Sun 14 July 2024

There was a pressure glitch on PT100B HAM1 cold-cathode at 07:25:15 PDT this morning. There was no corresponding glitch on any other corner station gauge as far as I can tell.

Pressure increased from 3.3e-08 to 4.0e-08 Torr, recovered in 4 minutes.

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 11:31, Sunday 14 July 2024 (79109)

This is clearly very concerning. 

Attached is a trend of the pressure in the top right, along with several other channels (IMC power in, ISC_LOCK guardian state, RM alignments, DC power levels on all 4 PDs in that chamber), and I don't see anything in any of these channels that looks suspicious.

EDIT to add second screenshot, of in-vac tabletop L4C seismic sensors.  Again, nothing suspicious-looking to me that I see yet.

Images attached to this comment
michael.zucker@LIGO.ORG - 07:41, Monday 15 July 2024 (79112)

Given no lockloss correlation, may not be significant; ~3 orders of magnitude less PV than the June event(s).   Maybe an ion pump burped up some argon.

jordan.vanosky@LIGO.ORG - 09:33, Monday 15 July 2024 (79121)

IP13 ion current/pressure spiked up ~1 second before the PT100B gauge did; see attached. Likely an argon instability of the pump.

Images attached to this comment
H1 General (ISC, OpsInfo)
ryan.short@LIGO.ORG - posted 01:04, Sunday 14 July 2024 - last comment - 08:42, Monday 15 July 2024(79105)
Comparisons before/after alignment changes

I've gathered several trends comparing the before and after of the alignment shift the IFO saw in the past couple of days, likely on Friday the 12th.

On all of these plots, I use y-cursors to indicate the level each signal was at in the lead-up to the last lock H1 had fully up to Nominal Low Noise, which lasted from 10:14 to 16:45 UTC on July 12th, in order to compare against what was likely the last time H1 was behaving "well."

DRMI signals with OM1/2 OSEMs:

During the last DRMI acquisition before NLN, POP18 was around 66 counts while POP90 was around 11 counts.

During the most recent DRMI acquisition, POP18 had dropped to 42 counts while POP90 had risen to 16 counts. The only significant alignment difference I see is with OM1 Y being about 70 counts different.

Optic alignments during 10W single-bounce: (times all taken from when ALIGN_IFO was in the state SR2_ALIGN [58])

During the last SR2 alignment before NLN (P/SRC optics; large optics), the AS_C_NSUM signal was at 0.0227 counts.

The next time in this configuration (P/SRC optics; large optics), the AS_C_NSUM signal had dropped to 0.018 counts. The most obvious alignment changes are with PR2 Y, PRM, some SR2, and SRM.

The most recent time in this configuration (P/SRC optics; large optics), the AS_C_NSUM signal had dropped again to 0.014 counts. The same optics are off in alignment as before, but PRM has now flipped to being off in the opposite direction for both pitch and yaw.

OFI and KAPPA_C:

I recreated trends like Camilla did in alog78399 to check the behavior of the OFI. See attached for a multi-day trend, a zoom-in on shakes during high-state locklosses, and a zoom-in on one of these shakes.

HAM6 vacuum pressure:

Out of an abundance of caution, I trended the pressure gauge on HAM6 since we've seen pressure spikes somewhat recently, and I don't see anything that hasn't been previously noted on a 21-day timescale (the pressure rise ~20 days ago was noted to be because of the OM2 thermistor in alog78829).

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 08:42, Monday 15 July 2024 (79117)ISC

Looking more at the OFI TEC/temperature sensor, it seems like the alignment through the OFI changed, probably into a worse alignment that clipped somewhere (requiring more OFI temperature control as we powered up), at some time between the 2024/07/12 16:45 UTC IFO unlock (which looked normal, with maybe 10-20% larger than normal temp swings) and the power-up at 17:45 UTC. See attached. This is before we explicitly changed the SRC alignment.

Images attached to this comment
H1 ISC
ryan.short@LIGO.ORG - posted 21:16, Saturday 13 July 2024 - last comment - 21:14, Monday 15 July 2024(79103)
Moved SR3 with SR2 following to find spots with good AS port power

To help characterize the recent issues we've seen with IFO alignment, which are similar to the output change in late April, I moved SR3 in 10W single-bounce configuration with the SRC2 ASC loop on so that SR2 would follow (ALIGN_IFO's SR2_ALIGN state), much like in alog77694. I moved SR3 far enough in each direction (+/- pitch and yaw) to bring the ASC-AS_C_NSUM signal back up to our target value of 0.022, which I succeeded in doing in every direction except -Y, where I could only bring AS_C up to 0.018. In the three directions where I was successful, the beam spot on the AS AIR camera looked much more like the clear circle we're used to seeing and less like the upside-down apostrophe we have now.

It seems that our old output spot (starting place in TJ's alog) is still bad (-Y direction with SR3 from current place) since that was the only direction where I couldn't get the AS_C power back up.

Slider values of SR3 after each move:

              Start   +P move   -P move   +Y move   -Y move
SR3 P slider  438.7   651.1     138.7     438.7     438.7
SR3 Y slider  122.2   122.2     122.2     322.2     -167.8

Attachment 1 is the starting AS AIR camera image (and our current spot), attachment 2 is after the +P move, attachment 3 is after the -P move, and attachment 4 is after the +Y move.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 15:15, Monday 15 July 2024 (79135)
 
                                    Before       +P move      -P move      +Y move      -Y move
Time (GPS)                          1404952764   1404955465   1404958133   1404959716   1404963518
Time (UTC, 2024/07/14)              00:39:06     01:24:07     02:08:35     02:34:58     03:38:20
H1:SUS-SR3_M1_OPTICALIGN_P_OFFSET   438.7        651.1        138.7        438.7        438.7
H1:SUS-SR3_M1_OPTICALIGN_Y_OFFSET   438.7        122.2        122.2        322.2        -167.8
H1:SUS-SRM_M1_DAMP_P_INMON          -1033.7      -1035.2      -1036.3      -1036.1      -1037.1
H1:SUS-SRM_M1_DAMP_Y_INMON          913.7        914.0        914.1        914.1        914.2
H1:SUS-SR2_M1_DAMP_P_INMON          597.7        -871.6       2660.2       614.7        566.2
H1:SUS-SR2_M1_DAMP_Y_INMON          1125.3       1179.9       1069.3       1878.8       -72.4
H1:SUS-SR3_M1_DAMP_P_INMON          -290.2       -57.7        -619.4       -297.9       -279.1
H1:SUS-SR3_M1_DAMP_Y_INMON          -411.0       -425.2       -390.4       -256.9       -633.7

Here are the OSEM values, so that Alena can continue her 78268 analysis of the OFI beam spot position.

sheila.dwyer@LIGO.ORG - 21:14, Monday 15 July 2024 (79154)

Adding a screenshot of sliders for the -P alignment above, after initial alignment.

Images attached to this comment
H1 SYS
daniel.sigg@LIGO.ORG - posted 20:27, Saturday 13 July 2024 - last comment - 17:25, Monday 15 July 2024(79102)
Fast shutter is OK

Yesterday, the fast shutter test failed due to dark offsets in the AS WFS DC NSUM channels.

The guardian started running the TwinCAT testing code, which works as intended: it sends a close command to the trigger logic, which in turn fires the fast shutter. The fast shutter works fine, as can be seen on the HAM6 geophones. The slow controls readbacks also indicate that both the fast shutter and the PZT shutter are closed no later than 200ms after the trigger. However, 0.5 sec after the guardian starts the test, it checks the AS WFS DC NSUM outputs and compares them against dark offset limits of ±15. Since the dark offset on WFS B was over 30, the guardian then sent an abort command to the TwinCAT code and reported a failure.
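Schematically, the check that tripped is just a window comparison on the two NSUM readbacks (illustrative sketch only, not the actual Guardian/TwinCAT implementation):

    DARK_OFFSET_LIMIT = 15   # counts, per the test described above

    def shutter_test_passed(as_a_nsum, as_b_nsum):
        # sampled ~0.5 s after the close command; with the beam blocked, both
        # AS WFS DC NSUMs should sit within the dark-offset window
        return abs(as_a_nsum) < DARK_OFFSET_LIMIT and abs(as_b_nsum) < DARK_OFFSET_LIMIT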

Comments related to this report
keita.kawabe@LIGO.ORG - 23:35, Saturday 13 July 2024 (79104)

That story agrees with my observation on Friday night when I started looking at the FS after 11:15 PM.

Sheila reported (79092) that the Fast Shutter reopened after only ~50ms or so. It seems that the low-voltage drive to keep the shutter closed was not working. The 1st attachment shows the last time that happened, at around 23:10 local time. Daniel points out that the shutter was in an error state at that time, but that was after Ryan power cycled the FS driver. We don't know exactly what kind of state the fast shutter was in here.

The next time the HV firing was tested was at 23:23 local time (2nd attachment); the shutter was kept shut (i.e. the low-voltage drive was somehow working), but there are two things to note.

  1. H1:ASC-AS_B_NSUM_OUT_DQ was about 25 or so, i.e. larger than the dark offset threshold of ±15, so the FS guardian went to the failed state.
  2. Bouncing was worse than I've ever seen. It bounced multiple times with a 40 to 50ms period and eventually settled to the "closed" position. (Before Friday I only saw a single bounce and that was it.)

The last FS test I did was at 00:12 local time on Jul/13 when the error was cleared, with smaller power than nominal (3rd attachment). Bouncing was just as bad, but the power coming to HAM6 was smaller (see the trigger power at the top left). AS_B_NSUM was somewhat smaller (more like 10).

The reason why AS_B_NSUM is worse is that I reduced the analog DC gain by a factor of 10 and compensated for that with digital gain. The effect of analog offset as well as ADC/electronics noise is 10x worse than for AS_A. I adjusted the dark offset while the IMC was unlocked, but we can probably increase the threshold to 30 or so if it continues to bother us.

The bouncing behavior might be more serious, as it could mean that the beam was close to the end of the travel of the FS mirror (and it was bad on Friday because of funny alignment), or that the low-voltage drive was somehow still funny. I suspect the former.

Images attached to this comment
ryan.short@LIGO.ORG - 17:25, Monday 15 July 2024 (79126)

Here is my attempt at recreating the (rough) timeline of events related to the fast shutter Friday night (all times in UTC):

  • 04:55:41 - While locking, ISC_LOCK gets to CHECK_AS_SHUTTERS and has the FAST_SHUTTER Guardian go to its TEST_SHUTTER state
    • The fast shutter test runs. Shutter closes, but due to improper dark offsets in the AS WFS DC NSUM channels (noted by Daniel above), Guardian thinks the test failed
      • Guardian sets AS_PROTECTION error code
      • Guardian reports "Fast shutter failed tests!! Downrange light does not disapear properly!" and jumps to SHUTTER_FAILURE
  • 04:55:43 - The fast shutter shows its error code
  • 04:55:46 - Fast shutter error code clears
  • ~05:20 - I power cycle the fast shutter driver chassis
  • 05:21:46 - Shutter opens
  • 05:28:00 - Next shutter close test; only closes for 65ms
  • ~05:45 - I power cycle the shutter logic chassis
    • No change in behavior is seen
  • 06:10 - There have been several attempts at running the shutter test by this point; in all of them the shutter only stayed closed for 60-70ms at a time
  • 06:20 - The shutter finally closes and stays closed
  • 06:53 - Keita fixes the dark offset on AS WFS
  • 07:10 - Keita uses the Guardian to test the shutter; it works and stays open
  • 07:12 - Beckhoff errors clear after another full cycle using the Guardian
Images attached to this comment
keita.kawabe@LIGO.ORG - 14:53, Monday 15 July 2024 (79131)

It seems the ASC-AS_B_DC gain was probably a red herring; the important thing is that the beam got uglier/bigger at some point, and therefore a part of the beam was not blocked by the fast shutter.

The first attachment shows when the ASC-AS_B_DC gain switch was flipped from x10 to x1 on Tuesday. You can see that the Fast Shutter had been firing OK until Friday evening.

The rest of the plots show the Fast Shutter test done by the Beckhoff at three different points in time, i.e. the last test before my AS_B_DC change ("happy", Monday Jul/08 ~16:37 UTC or 9:37 local), the first one after my AS_B_DC change ("happy", Jul/11 4:43 UTC or Wed Jul/10 21:43 local time), and the first time the Fast Shutter went into the error mode ("sad", Jul/13 4:56 UTC or Fri Jul/12 21:56 local time). The last one is when Sheila and Ryan started having problems.

Important points are:

  1. The two "happy" plots are quite similar. In particular, even though I see bouncing action, the power measured by both AS_A and AS_B more or less settles to the minimum value. The ratio of closed/open is about 230ppm in both cases. This is compatible with the specification of the Fast Shutter mirror (which is <1000ppm).
  2. The "sad" plot looks nothing like the happy ones. Right after the shutter was closed, the light level went down to less than 1000ppm, but it bounced back and eventually settled down to ~6500ppm. You can also see that the light level when the FS was open (~2200 cts) is about 50% of what it used to be in the happy plots.
  3. ASC-AS_C was pretty close to center in all of the three plots.

From these, my conclusion is that the beam position on the fast shutter mirror was pretty much the same in all three tests, but the beam was pretty ugly in the "sad" plot, as was witnessed by many of us on the AS port camera. Because of this, a part of the lobe was missed by the Fast Shutter. Centering an ugly beam on AS_C might have complicated the matter.

Later, when I forced the test with much lower power, the error was cleared because even though the ugly beam was still there the power went lower than the "shutter closed" threshold of the guardian.

I don't fully understand who did what when during the time the shutter was in the error state (it includes people pressing buttons and power cycling the driver and then pressing buttons again, and I certainly pressed buttons too).

Looking at this, and since Daniel agrees that the Fast Shutter has been working fine, my only concerns about locking the IFO are:

  1. The beam should be properly attenuated by the fast shutter. Not thousands of ppm, but more like ~230ppm.
  2. The beam should not clip on ASC-AS_C. If it's not clipped there, the chances of things getting clipped downstream are very small.
Images attached to this comment
keita.kawabe@LIGO.ORG - 17:02, Monday 15 July 2024 (79148)

FYI, the fast shutter has never failed to fire when the trigger voltage crossed the threshold of 2V since I connected the ASC-AS_C analog sum to the PEM ADC to monitor the trigger voltage at 2kHz. Firing of the FS has been without any problem.

In the attached, top is the trigger voltage monitored by the ADC. Red circles indicate the time when the trigger crossed the threshold. Bottom plot shows when the fast shutter was driven with high voltage (sensed by GS13).

The reason why it looks as if the trigger level changed before and after t=0 on the plot is that a 1:11 resistive divider with a total resistance of 10kOhm was installed at t=0. Before that, the 2V threshold was 32k counts; after, it is 2980 counts.
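As a sanity check (assuming "32k" refers to the 16-bit full scale of 32768 counts), the divider ratio alone accounts for the change in threshold counts:

    print(32768 / 11)   # ~2979 counts, consistent with the ~2980-count threshold after the divider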

The connection to the ADC was first made at 18:30 UTC on Jul/11, which is at t~-1D in the plot.

Images attached to this comment
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 09:52, Friday 12 July 2024 - last comment - 09:05, Monday 15 July 2024(79073)
Lockloss at 16:44 UTC

16:44 UTC lockloss. PRCL was oscillating at ~3.6 Hz.

Comments related to this report
ryan.crouch@LIGO.ORG - 12:55, Friday 12 July 2024 (79078)

We've lost it at PREP_DC_READOUT twice in a row, during different points of the OMC locking process. The lockloss tool tags ADS_EXCURSION.

camilla.compton@LIGO.ORG - 09:05, Monday 15 July 2024 (79111)

We turned on a PRCL FF the day before: 79035. But this 3.6Hz PRCL wobble is normal; it was constant throughout the lock (plot) and present in locks before the feedforward was installed (example).

This lockloss looked very normal, with AS_A then the IMC losing lock, as usual (plot).

Images attached to this comment
X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 15:29, Thursday 11 July 2024 - last comment - 15:04, Monday 15 July 2024(79042)
BBSS M1 BOSEM Count Drift Over Last Week - Temperature Driven Suspension Sag

Ibrahim, Rahul

BOSEM counts have been visibly drifting over the last few days since I centered them last week. Attached are two screenshots:

  1. Screenshot 1 shows the 48hr shift of the BOSEM counts as the temperature is varying
  2. Screenshot 2 shows the full 8 day drift since I centered the OSEMs.

I think this can easily be explained by Temperature Driven Suspension Sag (TDSS - new acronym?) due to the blades. (Initially, Rahul suggested maybe the P-adjuster was loose and moving, but I think the cyclic nature of the 8-day trend disproves this.)

I tried to find a way to get the temperature in the staging building, but Richard said there's no active data being taken, so I'll take one of the available thermometer/temp sensors and place it in the cleanroom when I'm in there next, just to have the data.

On average, the OSEM counts for RT and LF (the vertical-facing OSEMs) have sagged by about 25 microns. F1, which is above the center of mass, is also seeing a long-term drift. Why?

More importantly, how does this validate/invalidate our OSEM results given that some were taken hours after others and that they were centered days before the TFs were taken?

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 15:04, Monday 15 July 2024 (79137)

Ibrahim

Taking new trends today shows that while the suspension sag "breathes" and comes back and forth as the temperature fluctuates on a daily basis, the F1 OSEM counts are continuing to trend downwards despite the temperature not changing peak to peak over the last few days.
This F1 OSEM has gone down an additional 670 cts in the last 4 days (screenshot 1). Screenshot 2 shows the OSEM counts over the last 11 days. What does this tell us?

What I don't think it is:

  1. It somewhat disproves the idea that the F1 OSEM drift was just due to the temperatures going up, since the counts have not leveled out as the temperatures have - unless for some reason something is heating up more than usual.
  2. A suggestion was that the local cleanroom temperature closer to the walls was hotter, but this would have an effect on all OSEMs on this face (F2 and F3), and those OSEMs are not trending downwards in counts.
  3. It is likely not an issue with the OSEM itself, since the diagnostic pictures (alog 79079) do show a perceivable shift when there wasn't one during centering, meaning the pitch has definitely changed, which would necessarily show up on the F1 OSEM.

What it still might be:

  1. The temperature causes the Top Stage and Top Mass blades to sag. These blades are located in front of one another, and while the blades are matched, they are not identical. An unlucky matching could mean that either the back top stage blade or two of the back top mass blades sag net more than the other two, causing a pitch instability. Worth checking.
  2. It is not temperature related at all, but the sagging is revealing that we still have the hysteresis issue that we thought we fixed 2 weeks ago. This OSEM has been drifting in counts ever since it was centered, but the temperature has also been changing drastically in that time (50F difference between highs and lows last week).

Next Steps:

  • I'm going to set up temperature probes in the cleanroom in order to see if there is indeed some weird differential temperature effect specifically in the cleanroom. Tyler and Eric have confirmed that the Staging Building temperature only really fluctuates between 70 and 72, so I'll attempt to reproduce this. This should give more details about the effect of temperature on the OSEM drift.
  • Using the individual OSEM counts and their basis DOF matrix transformation, see if there's a way to determine whether some blades are sagging more than others, i.e. whether other OSEMs are also spotting it (a rough sketch of this check follows the list).
    • Ultimately, we could re-do the blade position tests to definitively measure the blade height changes at different temperatures. I will look into the feasibility of this.
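A rough illustration of the idea in the second bullet (the matrix and OSEM ordering below are placeholders, not the real BBSS sensing matrix):

    import numpy as np

    OSEMS = ['F1', 'F2', 'F3', 'LF', 'RT', 'SD']   # assumed M1 OSEM ordering
    DOFS  = ['L', 'T', 'V', 'R', 'P', 'Y']

    OSEM2EUL = np.eye(6)   # placeholder -- substitute the real M1 OSEM-to-Euler matrix

    def counts_to_dofs(osem_readings_um):
        """Project calibrated OSEM readings (microns, ordered as OSEMS) into the DOF basis."""
        return dict(zip(DOFS, OSEM2EUL @ np.asarray(osem_readings_um)))

    # Differencing counts_to_dofs(now) and counts_to_dofs(last_week) would show whether
    # the drift is mostly vertical (common blade sag) or mostly pitch (differential sag).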
Images attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 16:11, Wednesday 24 April 2024 - last comment - 10:21, Monday 15 July 2024(77392)
we suspect that something in OFI has changed

LHO control room crew-

We suspect that something about the output Faraday has changed. 

The only thing that seems common to these two is the OFI, a baffle in its vicinity, or something like a wire from the fast shutter impacting the beam on the way to OM1. We actuated the fast shutter and see no change in the AS camera, so we don't think that the fast shutter is the issue, which points us to looking at the OFI.

Editing to add that Keita points out something like a bad spot on OM1 is also a possibility.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 10:21, Monday 15 July 2024 (79124)

Recreating this trend for our current situation, power drops aren't as big? Comparing a lock before last Thursday (07/11) to our most recent lock.

Images attached to this comment