H1 General
anthony.sanchez@LIGO.ORG - posted 17:28, Monday 15 July 2024 (79085)
PCAL EX End Station Measurements & Lab Measurements

A PCAL EndX station measurement was done on 2024-07-02. The PCAL team (Dripta B. & Francisco Llamas) went to EndX with the Working Standard Hanford, aka WSH (PS4), and took end station measurements using T1500062-V17 as a guide.

Measurement Log
The first thing we did was take a picture of the beam spot before anything was touched!

Martel:
The Martel voltage source applies a voltage to the PCAL chassis's Input 1 channel. We recorded the GPS times at which -4.000 V, -2.000 V, and 0.000 V were applied to the channel; this can be seen in Martel_Voltage_Test.png. We also measured the Martel's voltages in the PCAL lab to calculate the ADC conversion factor, which is included in the above document.
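As a rough illustration of how an ADC conversion factor can be extracted from this kind of test (a minimal sketch, not the PCAL team's actual analysis script; the count values below are placeholders standing in for the averaged channel readback at each recorded GPS time):

# Sketch: estimate the ADC conversion factor (counts/V) from the Martel voltage steps.
import numpy as np

applied_volts = np.array([-4.000, -2.000, 0.000])    # Martel settings from the log
adc_counts    = np.array([-26214.0, -13107.0, 0.0])  # hypothetical averaged readbacks

# Straight-line fit: counts = gain * volts + offset
gain, offset = np.polyfit(applied_volts, adc_counts, 1)
print(f"ADC conversion factor ~ {gain:.1f} counts/V, offset ~ {offset:.1f} counts")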

Plots taken while the Working Standard (PS4) was in the transmitter module, with the inner beam blocked, then the outer beam blocked, followed by the background measurement: WS_at_TX.png.

The inner, outer, and background measurements while the WS was in the receiver module: WS_at_RX.png.

Plots of the measurements with the WS sphere in the RX enclosure with both beams on it, with the RX sphere with both beams on it, and of the background: WS_at_RX_BOTH_BEAMS.png


We then placed the Working Standard (PS4) in the path of the INNER beam at the TX module.
Then the Working Standard (PS4) in the path of the OUTER beam at the TX module.
Then a background measurement.

We took the Working Standard and put it in the RX module to measure the INNER beam.
Then the OUTER beam in the RX module.
And a background.

We removed the beam block and gave the Working Standard both the inner and outer beams at the SAME TIME while it was at the RX module.
We also put the RX sphere back in the RX module and put both beams on it at the same time, like nominal operation when the PCAL lines are turned off.
Then we took a background.

The last picture is of the Beam spots after we had finished the measurement.


All of this data and analysis has been committed to the Git repository:
https://git.ligo.org/Calibration/pcal/-/tree/development/O4/measurements/LHO_EndX/tD20240702?ref_type=heads


This adventure has been brought to you by Dripta B. & Francisco.
This is a late alog due to the updates in the procedure and the trends document.

Images attached to this report
Non-image files attached to this report
H1 ISC (ISC, SUS)
marc.pirello@LIGO.ORG - posted 17:03, Monday 15 July 2024 - last comment - 13:19, Tuesday 16 July 2024(79151)
Briefly tested LD32 DAC SUS-EX

Today we conducted a brief test on the SUS-EX ESD DAC vs the SUS-EX LD32 DAC, both driving the PEM AA chassis at end X per WP11976.  This testing was cut short due to relocking attempts.  We changed the cable to the PEM output at ~1:10pm PST and returned the cable to its original position at ~2:15pm PST.  We are still looking at the results of this testing.

F. Clara, R. McCarthy, M. Pirello, D. Sigg

Comments related to this report
daniel.sigg@LIGO.ORG - 13:19, Tuesday 16 July 2024 (79173)

We compared the output of the 20-bit DAC with the LIGO DAC. There is a delay of about 90 us, more than we would have expected. The decimation filter delay for a ~30 kHz cut-off is about 14 us. The filters are processed and sent to the DAC at a rate of 1 MHz, so maybe ~1-3 us of processing delay.
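One way to estimate such a delay from the data recorded through the PEM AA chassis is to cross-correlate the two digitized copies of the drive signal; a minimal sketch, with synthetic data standing in for the actual test channels:

# Sketch: estimate the relative delay between two copies of the same drive signal
# from the peak of their cross-correlation.
import numpy as np

fs = 65536                        # Hz, assumed sample rate of the test channels
ref = np.random.randn(fs)         # placeholder for the 20-bit DAC path
delayed = np.roll(ref, 6)         # placeholder for the LD32 path, ~90 us late

lags = np.arange(-ref.size + 1, ref.size)
xcorr = np.correlate(delayed, ref, mode="full")
delay = lags[np.argmax(xcorr)] / fs
print(f"estimated delay: {delay * 1e6:.1f} us")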

H1 General
ryan.crouch@LIGO.ORG - posted 16:30, Monday 15 July 2024 (79123)
OPS Monday day shift summary

TITLE: 07/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY: We're testing/investigating whether we can still lock; DRMI locks pretty easily, so this alignment might be ok.

LOG:                                                                                                                                                                                                                                     

Start Time | System | Name      | Location        | Laser Haz | Task                           | End Time
15:08      | FAC    | Karen     | Optics lab, vpw | N         | Tech clean                     | 15:35
15:36      | PEM    | Kim       | MidX            | N         | Tech clean                     | 16:17
16:14      | CAL    | Francisco | PCAL lab        | LOCAL     | PCAL work                      | 17:09
16:14      | FAC    | Karen     | PCAL lab        | LOCAL     | Tech clean                     | 16:29
16:30      | FAC    | Karen     | MidY            | N         | Tech clean                     | 17:44
18:59      | SUS    | Jeff      | TCSX rack       | N         | Take pictures, RMs             | 19:11
19:08      | SUS    | Jeff      | Mech room racks | N         | Take pictures                  | 19:18
20:07      | EE/PEM | Fil, Marc | EndX            | N         | Testing AA chassis for new DAQ | 20:38
21:13      | EE     | Fil       | EndX            | N         | Reset test setup               | 21:40
21:48      | SUS    | Jeff      | TCSX rack       | N         | Pictures                       | 22:11
Images attached to this report
H1 SEI
jim.warner@LIGO.ORG - posted 16:29, Monday 15 July 2024 - last comment - 12:14, Wednesday 24 July 2024(79145)
HAM2 moving a lot more than HAM3 around 40hz, probably time to do some re-tuning

Genevieve and Sam asked about HAM1 and HAM2 motion around 40 Hz. I hadn't looked in detail in this frequency band for a while; it is typically higher frequency than we worry about for ISI motion. But it turns out HAM2 is moving a lot more than HAM3 generally over 25-55 Hz, and particularly around 39 Hz. It looks like it might be due to gain peaking from the isolation loops, but it could also be from something bad in the HEPI-to-ISI feedforward. The extra motion is so broad that I don't think it's just one loop with a little too much gain, so I'm not sure what is going on here.

The first image shows spectra comparing the motion of the HEPIs for those chambers (HAM2 HEPI is red and HAM3 is blue) and the ISIs (HAM2 ISI is green, HAM3 is brown). The HEPI motion is pretty similar, so I don't think it's a difference in input motion. HAM2 is moving something like 10x as much as HAM3 over 25-55 Hz. The sharp peak at 39 Hz looks like gain peaking, but I'm not sure that explains all the difference.

The second plot shows the transfer functions from HEPI to ISI for each chamber. Red is HAM2, blue is HAM3. The 25-55 Hz TF for HAM3 is not very clean, probably because HAM3 is well isolated. The HAM2 TF is pretty clean, which makes me wonder if something is messed up with the feedforward on that chamber. Maybe that is something I could (carefully) fix while other troubleshooting for the detector is going on.
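For anyone repeating this comparison, a minimal sketch of estimating a HEPI-to-ISI transfer function and coherence with Welch averaging (synthetic arrays stand in for the real sensor time series; the data-fetching step is left out):

# Sketch: HEPI -> ISI transfer function and coherence estimates.
import numpy as np
from scipy import signal

fs = 256                                               # Hz, assumed sample rate
hepi = np.random.randn(fs * 600)                       # placeholder HEPI motion
isi  = 0.1 * hepi + 0.01 * np.random.randn(hepi.size)  # placeholder ISI motion

f, Pxx = signal.welch(hepi, fs=fs, nperseg=fs * 10)
_, Pxy = signal.csd(hepi, isi, fs=fs, nperseg=fs * 10)
_, Cxy = signal.coherence(hepi, isi, fs=fs, nperseg=fs * 10)

tf = Pxy / Pxx                                         # complex TF estimate
band = (f >= 25) & (f <= 55)
print(f"mean |TF| 25-55 Hz: {np.mean(np.abs(tf[band])):.3f}")
print(f"mean coherence 25-55 Hz: {np.mean(Cxy[band]):.3f}")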

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 17:00, Monday 15 July 2024 (79150)

I looked at some of my design scripts and realized that the HAM3 FF X filter is probably a bit better of a fit, so I copied it into the HAM2 foton file, then loaded and engaged it on HAM2. It improved the ~50 Hz motion quite a bit, but HAM2 is still moving more than HAM3. There is probably some tuning that could still be done here.

Images attached to this comment
samantha.callos@LIGO.ORG - 12:14, Wednesday 24 July 2024 (79301)

Looked further into the peak at ~40Hz and found an improvement in coherence after the re-tuning Jim did. Image 1 shows the peak pre-tuning, and image 2 shows it post-tuning.

Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 16:07, Monday 15 July 2024 (79143)
Ops Eve Shift Start

TITLE: 07/15 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 21mph Gusts, 18mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

Currently in ENGAGE_ASC_FOR_FULL_IFO, which is currently doing okay. Winds have jumped up but look like they're going back down.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:43, Monday 15 July 2024 - last comment - 20:58, Monday 15 July 2024(79140)
moved SR3, relocking after fast shutter test

Sheila, Keita, Ryan Short, Camilla and TJ

Partway through initial alignment, we stopped and moved SR3 towards the positive yaw spot from 79103, and used the SR2 OSEM values from that time.  A small manual adjustment of AS_C was needed; otherwise, initial alignment was uneventful.

With DRMI locked, we ran the fast shutter test that Keita and Ryan S have both looked at.

We also looked at the ratios of the AS PDs to compare to 78667; they are most similar to the good times in that alog. After these checks we decided to try locking.

Comments related to this report
keita.kawabe@LIGO.ORG - 15:55, Monday 15 July 2024 (79141)

Fast shutter behavior is attached. It's working fine, but the throughput to HAM6 is ~14% down compared with before.

Before the shutter was closed, ASC-AS_A_DC_NSUM was ~3.7k counts, and ~0.75 counts after (the fractional number is because of decimation to 2k). That's about 200 ppm.

However, it used to be ~4.3k and ~1 count on one of the "happy" plots in alog 79131; 3.7k/4.3k ~ 0.86, so the throughput to HAM6 seems to be ~14% lower than before.
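For reference, a quick check of the numbers quoted above:

# Quick check of the quoted counts.
open_now, closed_now = 3.7e3, 0.75        # ASC-AS_A_DC_NSUM, this test
open_before, closed_before = 4.3e3, 1.0   # counts from the earlier "happy" plot (79131)

print(f"closed/open now:    {closed_now / open_now * 1e6:.0f} ppm")        # ~200 ppm
print(f"closed/open before: {closed_before / open_before * 1e6:.0f} ppm")  # ~230 ppm
print(f"throughput ratio:   {open_now / open_before:.2f}")                 # ~0.86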

Images attached to this comment
keita.kawabe@LIGO.ORG - 16:33, Monday 15 July 2024 (79147)

The IFO lost lock even before going to DC readout, and we had another FS test forced by the guardian; it didn't fail, but the throughput is even worse.

A_DC_SUM was ~3.4k when the shutter was open, the closed/open ratio is about 1000 ppm, and a tiny part of the beam is being missed by the shutter (attached). Note that I'm NOT eyeballing the "open" Y cursor in log scale: I set it while in linear Y scale, then changed to log to show that the power after the shutter was closed seems to be measurably larger than it should be.

Maybe this is happening because of tiny alignment differences from lock to lock, but either way this doesn't look like the place we want to be.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 20:58, Monday 15 July 2024 (79153)

I think the reason the power levels were low on AS_A and AS_B in the DRMI lock mentioned above is that DRMI was poorly aligned, not excess output arm losses.

Last good lock: with DRMI locked, POP18 NORM was 65.3, POP90 NORM was 9.7, AS_A DC was 4173 counts, AS_B DC was 4395, and AS_C was 0.0517 (Watts into HAM6).

In today's earlier DRMI lock, POP18 NORM was 55.35 (15% lower than the good lock) and POP90 NORM was 12.27 counts (30% higher).  I think this indicates that DRMI was poorly aligned in this earlier lock, not necessarily that there was a bad spot through the OFI.

H1 AOS (SEI)
neil.doerksen@LIGO.ORG - posted 12:29, Monday 15 July 2024 - last comment - 15:34, Monday 15 July 2024(79132)
April 18 Seismic Timeline, Lock, Lockloss Summary

Pre April 18

April 18

Post April 18

Comments related to this report
neil.doerksen@LIGO.ORG - 13:18, Monday 15 July 2024 (79133)

Here are some plots around the EQ from April 18, 2024, which USGS reports as coming from Canada at 06:06.

Images attached to this comment
neil.doerksen@LIGO.ORG - 15:34, Monday 15 July 2024 (79138)

Using KAPPA_C channel.

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 09:16, Monday 15 July 2024 - last comment - 15:40, Monday 15 July 2024(79119)
Atomic Clock lost sync with timing-system

At 08:50 PDT the MSR Atomic Clock lost sync with the timing system; the comparator is reading a steady 0.24 s 1PPS difference. This has happened before, and it requires a manual resync of the Atomic Clock to the timing-system 1PPS. This was last done on 6 May 2024.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 15:40, Monday 15 July 2024 (79139)

Daniel resynced the Atomic Clock to the timing 1PPS at 15:30 PDT this afternoon. The corner station timing error has gone away.

Images attached to this comment
H1 ISC
ryan.short@LIGO.ORG - posted 21:16, Saturday 13 July 2024 - last comment - 21:14, Monday 15 July 2024(79103)
Moved SR3 with SR2 following to find spots with good AS port power

To help characterize the recent issues we've seen with IFO alignment that are similar to the output change in late April, I moved SR3 in the 10 W single-bounce configuration with the SRC2 ASC loop on so that SR2 would follow (ALIGN_IFO's SR2_ALIGN state), much like in alog77694. I moved SR3 far enough in each direction (+/- pitch and yaw) to bring the ASC-AS_C_NSUM signal back up to our target value of 0.022, which I was able to do in every direction except -Y, where I could only bring AS_C up to 0.018. In the three directions where I was successful, the beam spot on the AS AIR camera looked much more like the clear circle we're used to seeing and less like the upside-down apostrophe we have now.

It seems that our old output spot (starting place in TJ's alog) is still bad (-Y direction with SR3 from current place) since that was the only direction where I couldn't get the AS_C power back up.

Slider values of SR3 after each move:

              Start   +P move  -P move  +Y move  -Y move
SR3 P slider  438.7   651.1    138.7    438.7    438.7
SR3 Y slider  122.2   122.2    122.2    322.2    -167.8

Attachment 1 is the starting AS AIR camera image (and our current spot), attachment 2 is after the +P move, attachment 3 is after the -P move, and attachment 4 is after the +Y move.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 15:15, Monday 15 July 2024 (79135)
 
                                     Before       +P move      -P move      +Y move      -Y move
Time (GPS)                           1404952764   1404955465   1404958133   1404959716   1404963518
Time (UTC, 2024/07/14)               00:39:06     01:24:07     02:08:35     02:34:58     03:38:20
H1:SUS-SR3_M1_OPTICALIGN_P_OFFSET    438.7        651.1        138.7        438.7        438.7
H1:SUS-SR3_M1_OPTICALIGN_Y_OFFSET    438.7        122.2        122.2        322.2        -167.8
H1:SUS-SRM_M1_DAMP_P_INMON           -1033.7      -1035.2      -1036.3      -1036.1      -1037.1
H1:SUS-SRM_M1_DAMP_Y_INMON           913.7        914.0        914.1        914.1        914.2
H1:SUS-SR2_M1_DAMP_P_INMON           597.7        -871.6       2660.2       614.7        566.2
H1:SUS-SR2_M1_DAMP_Y_INMON           1125.3       1179.9       1069.3       1878.8       -72.4
H1:SUS-SR3_M1_DAMP_P_INMON           -290.2       -57.7        -619.4       -297.9       -279.1
H1:SUS-SR3_M1_DAMP_Y_INMON           -411.0       -425.2       -390.4       -256.9       -633.7

Here are the OSEM values, so that Alena can continue her 78268 analysis of the OFI beam spot position.

sheila.dwyer@LIGO.ORG - 21:14, Monday 15 July 2024 (79154)

Adding a screenshot of sliders for the -P alignment above, after initial alignment.

Images attached to this comment
H1 SYS
daniel.sigg@LIGO.ORG - posted 20:27, Saturday 13 July 2024 - last comment - 12:00, Tuesday 16 July 2024(79102)
Fast shutter is OK

Yesterday, the fast shutter test failed due to dark offsets in the AS WFS DC NSUM channels.

The guardian started running the TwinCAT testing code, which works as intended: it sends a close command to the trigger logic, which in turn fires the fast shutter. The fast shutter works fine, as can be seen on the HAM6 geophones. The slow controls readbacks also indicate that both the fast shutter and the PZT shutter are closed no later than 200 ms after the trigger. However, 0.5 s after the guardian started the test, it checks the AS WFS DC NSUM outputs and compares them against dark offset limits of ±15. Since the dark offset on WFS B was over 30, the guardian sent an abort command to the TwinCAT code and reported a failure.
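In pseudocode, the check described above amounts to something like the following (a sketch only, not the actual guardian code; the function names are placeholders):

# Sketch of the fast shutter test logic described above (not the real guardian code).
import time

DARK_OFFSET_LIMIT = 15   # counts

def shutter_test_passes(read_nsum, command_close):
    """read_nsum(name) -> counts; command_close() starts the TwinCAT test."""
    command_close()                        # trigger logic fires the fast shutter
    time.sleep(0.5)                        # guardian checks ~0.5 s after the close command
    # A dark offset above the limit (like the >30 counts on WFS B) makes this fail,
    # and the guardian then aborts the TwinCAT test and reports a failure.
    return all(abs(read_nsum(wfs)) < DARK_OFFSET_LIMIT
               for wfs in ("AS_A_DC_NSUM", "AS_B_DC_NSUM"))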

Comments related to this report
keita.kawabe@LIGO.ORG - 23:35, Saturday 13 July 2024 (79104)

That story agrees with my observation on Friday night when I started looking at the FS after 11:15 PM.

Sheila reported (79092) that the fast shutter reopened after only ~50 ms or so. It seems that the low-voltage drive that keeps the shutter closed was not working. The 1st attachment shows the last time that happened, at around 23:10 local time. Daniel points out that the shutter was in an error state at that time, but that was after Ryan power cycled the FS driver. We don't know exactly what kind of state the fast shutter was in here.

The next time the HV firing was tested was at 23:23 local time (2nd attachment); the shutter was kept shut (i.e. the low-voltage drive was somehow working), but there are two things to note.

  1. H1:ASC-AS_B_NSUM_OUT_DQ was about 25 or so, i.e. larger than the dark offset threshold of +-15. The FS guardian went to the failed state.
  2. Bouncing was worse than I've ever seen. It bounced multiple times with a 40-50 ms period and eventually settled to the "closed" position. (Before Friday I only ever saw a single bounce.)

The last FS test I did was at 00:12 local time on Jul/13, when the error was cleared, with smaller power than nominal (3rd attachment). Bouncing was as bad, but the power coming into HAM6 was smaller (see the trigger power at the top left). AS_B_NSUM was somewhat smaller (more like 10).

The reason AS_B_NSUM is worse is that I reduced the analog DC gain by a factor of 10 and compensated for that with digital gain. The effect of the analog offset, as well as ADC/electronics noise, is 10x worse than for AS_A. I adjusted the dark offset while the IMC was unlocked, but we can probably increase the threshold to 30 or so if it continues to bother us.

Bouncing behavior might be more serious as it could mean that the beam was close to the end of the travel of the FS mirror (and it was bad on Friday because of funny alignment), or low voltage drive was somehow still funny. I suspect the former.

Images attached to this comment
ryan.short@LIGO.ORG - 17:25, Monday 15 July 2024 (79126)

Here is my attempt at recreating the (rough) timeline of events related to the fast shutter Friday night (all times in UTC):

  • 04:55:41 - While locking, ISC_LOCK gets to CHECK_AS_SHUTTERS and has the FAST_SHUTTER Guardian go to its TEST_SHUTTER state
    • The fast shutter test runs. Shutter closes, but due to improper dark offsets in the AS WFS DC NSUM channels (noted by Daniel above), Guardian thinks the test failed
      • Guardian sets AS_PROTECTION error code
      • Guardian reports "Fast shutter failed tests!! Downrange light does not disapear properly!" and jumps to SHUTTER_FAILURE
  • 04:55:43 - The fast shutter shows its error code
  • 04:55:46 - Fast shutter error code clears
  • ~05:20 - I power cycle the fast shutter driver chassis
  • 05:21:46 - Shutter opens
  • 05:28:00 - Next shutter close test; only closes for 65ms
  • ~05:45 - I power cycle the shutter logic chassis
    • No change in behavior is seen
  • 06:10 - There have been several attempts at running the shutter test at this point; in all of them the shutter only stayed closed for 60-70 ms at a time
  • 06:20 - The shutter finally closes and stays closed
  • 06:53 - Keita fixes the dark offset on AS WFS
  • 07:10 - Keita uses the Guardian to test the shutter; it works and stays open
  • 07:12 - Beckhoff errors clear after another full cycle using the Guardian
Images attached to this comment
keita.kawabe@LIGO.ORG - 14:53, Monday 15 July 2024 (79131)

It seems like the ASC-AS_B_DC gain was probably a red herring; the important thing is that the beam got uglier/bigger at some point, and therefore a part of the beam was not blocked by the fast shutter.

The first attachment shows when the ASC-AS_B_DC gain switch was flipped from x10 to x1 on Tuesday. You can see that the fast shutter had been firing OK until Friday evening.

The rest of the plots show the FAST Shutter test done by the Beckhoff at three different points in time, i.e. the last test before my AS_B_DC change ("happy" Monday July/08 ~16:37 UTC or 9:37 local), the first one after my AS_B_DC change ("happy" July/11 4:43 UTC or Wed July/10 21:43 local time), and the first time the FAST shutter went into the error mode ("sad" Jul/13 4:56 UTC or Fri Jul/12 21:56 local time). The last one is when Sheila and Ryan started having problems.

Important points are:

  1. The two "happy" plots are quite similar. In particular, even though I see bouncing action, the power measured by both AS_A and AS_B more or less settles to the minimum value. The closed/open ratio is about 230 ppm in both cases, which is compatible with the specification of the fast shutter mirror (<1000 ppm).
  2. The "sad" plot looks nothing like the happy ones. Right after the shutter was closed the light level went down to less than 1000 ppm, but it bounced back and eventually settled at ~6500 ppm. You can also see that the light level when the FS was open (~2200 counts) is about 50% of what it used to be in the happy plots.
  3. ASC-AS_C was pretty close to center in all of the three plots.

From these, my conclusion is that the beam position on the fast shutter mirror was pretty much the same in all three tests, but the beam was pretty ugly for the "sad" plot, as was witnessed by many of us on the AS port camera. Because of this, a part of the lobe was missed by the fast shutter. Centering an ugly beam on AS_C might have complicated the matter.

Later, when I forced the test with much lower power, the error was cleared because even though the ugly beam was still there the power went lower than the "shutter closed" threshold of the guardian.

I don't fully understand who did what when during the time the shutter was in the error state (it includes people pressing buttons and power cycling the driver and then pressing buttons again, and I certainly pressed buttons too).

Looking at this, and since Daniel agrees that the Fast Shutter has been working fine, my only concerns about locking the IFO are:

  1. The beam should be properly attenuated by the fast shutter. Not thousands of ppm, but more like ~230ppm.
  2. The beam should not clip on ASC-AS_C. If it's not clipped there, the chances of things getting clipped downstream are very small.
Images attached to this comment
keita.kawabe@LIGO.ORG - 12:00, Tuesday 16 July 2024 (79148)

FYI, the fast shutter has never failed to fire, and never failed to keep the shutter closed, when the trigger voltage crossed the 2 V threshold since I connected the ASC-AS_C analog sum to the PEM ADC to monitor the trigger voltage at 2 kHz at around Jul/11 2024 18:30 UTC, i.e. in the past 5 days.

In other words, FS has never failed when we actually needed it to protect AS port.

In the 1st attachment, top is the trigger voltage monitored by the ADC. Red circles indicate the time when the trigger crossed the threshold. We've had 12 such events, including 4 that happened after Friday midnight local time.

Middle plot shows when the fast shutter was driven with high voltage (sensed by GS13). Bottom shows the AS_A_DC_NSUM and B_DC_NSUM.

The reason it looks as if the trigger level changed before and after t=0 on the plot is that a 1:11 resistive divider with a total resistance of 10 kOhm was installed at t=0. Before that, the 2 V threshold corresponded to 32k counts; after, to 2980 counts.
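As a sanity check of those numbers (assuming the quoted 32k is 2^15 counts and the divider attenuates by exactly 11):

# Sanity check: the 2 V trigger threshold in ADC counts before and after the divider.
counts_before = 2**15                 # assumed: "32k" means 32768 counts at 2 V
counts_after = counts_before / 11     # same 2 V level seen through the 1:11 divider
print(counts_before, round(counts_after))   # 32768 -> 2979, consistent with the quoted 2980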

The rest of the plots show that every time the shutter was triggered by the trigger voltage rather than by a test, it fired and kept the shutter closed.

Images attached to this comment
H1 ISC (ISC)
jennifer.wright@LIGO.ORG - posted 15:55, Thursday 11 July 2024 - last comment - 15:26, Friday 19 July 2024(79045)
DARM Offset step with hot OM2

We were only about two and a half hours into the lock when I did this test, due to our earthquake lockloss this morning.

I ran the

python auto_darm_offset_step.py

in /ligo/gitcommon/labutils/darm_offset_step

Starting at GPS 1404768828

See attached image.

Analysis to follow.

Returned DARM offset H1:OMC-READOUT_X0_OFFSET to 10.941038 (nominal) at 2024 Jul 11 21:47:58 UTC (GPS 1404769696)

DARM offset moves recorded to 
data/darm_offset_steps_2024_Jul_11_21_33_30_UTC.txt

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:25, Friday 12 July 2024 (79080)

Here are the calculated optical gain vs. DCPD power and DARM offset vs. optical gain, as calculated by ligo/gitcommon/labutils/darm_offset_step/plot_darm_optical_gain_vs_dcpd_sum.py

The contrast defect is calculated from the height of the 410 Hz PCAL line at each offset step in the output DCPD, and is 1.014 +/- 0.033 mW.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 15:58, Monday 15 July 2024 (79130)

I added an additional plotting step to the code; it now makes this plot, which shows how the power at AS_C changes with the DARM offset power at the DCPDs. The slope of this graph tells us what fraction of the power is lost between the input to HAM6 (AS_C) and the DCPDs.

P_AS = 1.770*P_DCPD + 606.5mW

where the second term is light that will be rejected by the OMC, plus light that gets through the OMC but is insensitive to DARM length changes.

The loss term between the anti-symmetric port and the DCPDs is 1/1.77 = 0.565
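For reference, the slope and intercept come from a straight-line fit of that form; a minimal sketch with placeholder data (not the actual measured powers):

# Sketch: fit P_AS = slope * P_DCPD + offset and read the HAM6 -> DCPD throughput as 1/slope.
import numpy as np

p_dcpd = np.array([30.0, 40.0, 50.0, 60.0])   # mW at the DCPDs (placeholder steps)
p_as   = 1.77 * p_dcpd + 606.5                # mW at AS_C (placeholder, noise-free)

slope, offset = np.polyfit(p_dcpd, p_as, 1)
print(f"P_AS = {slope:.3f} * P_DCPD + {offset:.1f} mW")
print(f"loss term = 1/slope = {1 / slope:.3f}")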

Non-image files attached to this comment
X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 15:29, Thursday 11 July 2024 - last comment - 15:04, Monday 15 July 2024(79042)
BBSS M1 BOSEM Count Drift Over Last Week - Temperature Driven Suspension Sag

Ibrahim, Rahul

BOSEM counts have been visibly drifting over the last few days since I centered them last week. Attached are two screenshots:

  1. Screenshot 1 shows the 48hr shift of the BOSEM counts as the temperature is varying
  2. Screenshot 2 shows the full 8 day drift since I centered the OSEMs.

I think this can easily be explained by Temperature Driven Suspension Sag (TDSS - new acronym?) due to the blades. (Initially, Rahul suggested maybe the P-adjuster was loose and moving but I think the cyclic nature of the 8-day trend disproves this)

I tried to find a way to get the temperature in the staging building, but Richard said there's no active data being taken, so I'll take one of the available thermometer/temperature sensors and place it in the cleanroom the next time I'm in there, just to have the data available.

On average, the OSEM counts for RT and LF, the vertical-facing OSEMs, have sagged by about 25 microns. F1, which is above the center of mass, is also seeing a long-term drift. Why?

More importantly, how does this validate/invalidate our OSEM results given that some were taken hours after others and that they were centered days before the TFs were taken?

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 15:04, Monday 15 July 2024 (79137)

Ibrahim

Taking new trends today shows that while the suspension sag "breathes" back and forth as the temperature fluctuates on a daily basis, the F1 OSEM counts are continuing to trend downwards despite the temperature not changing peak to peak over the last few days.
This F1 OSEM has gone down an additional 670 cts in the last 4 days (screenshot 1). Screenshot 2 shows the OSEM counts over the last 11 days. What does this tell us?

What I don't think it is:

  1. It somewhat disproves the idea that the F1 OSEM drift was just due to the temperatures going up, since the counts have not leveled out as the temperatures have - unless for some reason something is heating up more than usual.
  2. A suggestion was that the local cleanroom temperature was hotter closer to the walls, but this would have an effect on all the OSEMs on this face (F2 and F3), and those OSEMs are not trending downwards in counts.
  3. It is likely not an issue with the OSEM itself, since the diagnostic pictures (alog 79079) do show a perceivable shift where there wasn't one during centering, meaning the pitch has definitely changed, which would necessarily show up on the F1 OSEM.

What it still might be:

  1. The temperature causes the Top Stage and Top Mass blades to sag. These blades are located in front of one another, and while the blades are matched, they are not identical. An unlucky matching could mean that either the back top-stage blade or two of the back top-mass blades are sagging more than the other two, causing a pitch instability. Worth checking.
  2. It is not temperature related at all, and the sagging is revealing that we still have the hysteresis issue we thought we fixed 2 weeks ago. This OSEM has been drifting in counts ever since it was centered, but the temperature has also been changing drastically in that time (a 50 F difference between highs and lows last week).

Next Steps:

  • I'm going to go set up temperature probes in the cleanroom in order to see if there is indeed some weird differential temperature effect specifically in the cleanroom. Tyler and Eric have confirmed that the Staging Building temperature only really fluctuates between 70 and 72 so I'll attempt to reproduce this. This should give more details about the effect of temperature on the OSEM drift.
  • Use the individual OSEM counts and their basis DOF matrix transformation values to see if there's a way to determine whether some blades are sagging more than others, i.e. whether other OSEMs are seeing it.
    • Ultimately, we could re-do the blade position tests to definitively measure the blade height changes at different temperatures. I will look into the feasibility of this.
Images attached to this comment
H1 ISC (ISC)
jennifer.wright@LIGO.ORG - posted 11:27, Thursday 13 June 2024 - last comment - 16:33, Friday 19 July 2024(78413)
DARM offset step

I ran the DARM offset step code starting at:

2024 Jun 13 16:13:20 UTC (GPS 1402330418)

Before recording this timestamp, the script records the current PCAL line settings and makes sure notches for the two PCAL frequencies are set in the DARM2 filter bank.

It then puts all the PCAL power into these lines at 410.3 and 255Hz (giving them both a height of 4000 counts), and measures the current DARM offset value.

It then steps the DARM offset and waits for 120s each time.

The script stopped at 2024 Jun 13 16:27:48 UTC (GPS 1402331286).

In the analysis the PCAL lines can be used to calculate how the optical gain changes at each offset.
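As an illustration of that analysis step, the line amplitude at each offset can be tracked by demodulating the DARM error signal at the PCAL line frequency; a minimal sketch (the time series and injected-line amplitude below are placeholders, not the actual analysis code):

# Sketch: single-bin demodulation of a PCAL line to track relative optical gain per step.
import numpy as np

fs, f_line = 16384, 410.3                           # Hz
t = np.arange(0, 120, 1 / fs)                       # one 120 s DARM offset step
darm_err = 1e-3 * np.sin(2 * np.pi * f_line * t)    # placeholder DARM error signal

line_amp = 2 * np.abs(np.mean(darm_err * np.exp(-2j * np.pi * f_line * t)))
pcal_drive = 1.0                                    # placeholder for the known injected amplitude
print(f"relative optical gain ~ {line_amp / pcal_drive:.2e}")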

See the attached traces, where you can see that H1:OMC-READOUT_X0_OFFSET is stepped and the OMC-DCPD_SUM and ASC-AS_C respond to this change.

Watch this space for analysed data.

The script sets all the PCAL settings back to nominal after the test, from the record it took at the start.

The script lives here:

/ligo/gitcommon/labutils/darm_offset_step/auto_darm_offset_step.py

The data lives here:

/ligo/gitcommon/labutils/darm_offset_step/data/darm_offset_steps_2024_Jun_13_16_13_20_UTC.txt

 

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 11:10, Friday 14 June 2024 (78436)

See the results in the attached pdf also found at

/ligo/gitcommon/labutils/darm_offset_step/figures/plot_darm_optical_gain_vs_dcpd_sum/all_plots_plot_darm_optical_gain_vs_dcpd_sum_1402330422_380kW__Post_OFI_burn_and_pressure_spikes.pdf

The contrast defect is 0.889 ± 0.019 mW and the true DARM offset zero is 0.30 counts.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 16:11, Monday 15 July 2024 (79144)

I plotted the power at the antisymmetric port as in this entry to find out the loss term between the input to HAM6 and the DCPDs, which in this case is (1/1.652) = 0.605, with 580.3 mW of light at the AS port insensitive to DARM length changes.

Non-image files attached to this comment
victoriaa.xu@LIGO.ORG - 16:33, Friday 19 July 2024 (79251)ISC, SQZ

From Jennie's measurement of 0.88 mW contrast defect, and dcpd_sum of 40mA/resp = 46.6mW, this implies an upper bound on the homodyne readout angle of 8 degrees.

This readout angle can be useful for the noise budget (ifo.Optics.Quadrature.dc=(-8+90)*np.pi/180) and analyzing sqz datasets e.g. May 2024, lho:77710.

 

Table of readout angles "recently":

   
 era  | date          | total_dcpd_light (dcpd_sum = 40 mA) | contrast_defect | homodyne_angle | alog
 O4a  | Aug 2023      | 46.6 mW                             | 1.63 mW         | 10.7 deg       | lho71913
 ER16 | 9 March 2024  | 46.6 mW                             | 2.1 mW          | 12.2 deg       | lho76231
 ER16 | 16 March 2024 | 46.6 mW                             | 1.15 mW         | 9.0 deg        | lho77176
 O4b  | June 2024     | 46.6 mW                             | 0.88 mW         | 8.0 deg        | lho78413
 O4b  | July 2024     | 46.6 mW                             | 1.0 mW          | 8.4 deg        | lho79045

 

##### quick python terminal script to calculate #########

# craig lho:65000
contrast_defect   = 0.88    # mW  # measured on 2024 June 14, lho78413, 0.88 ± 0.019 mW
total_dcpd_light  = 46.6    # mW  # from dcpd_sum = 40mA/(0.8582 A/W) = 46.6 mW
import numpy as np
darm_offset_power = total_dcpd_light - contrast_defect
homodyne_angle_rad = np.arctan2(np.sqrt(contrast_defect), np.sqrt(darm_offset_power))
homodyne_angle_deg = homodyne_angle_rad*180/np.pi # degrees
print(f"homodyne_angle = {homodyne_angle_deg:0.5f} deg\n")


##### To convert between dcpd amps and watts if needed #########

# using the photodetector responsivity (like R = 0.8582 A/W for 1064nm)
from scipy import constants as scc
responsivity = scc.e * (1064e-9) / (scc.c * scc.h)
total_dcpd_light = 40/responsivity  # so dcpd_sum 40mA is 46.6mW
H1 ISC (ISC)
ibrahim.abouelfettouh@LIGO.ORG - posted 09:15, Saturday 16 March 2024 - last comment - 16:34, Monday 15 July 2024(76454)
DARM Offset Test

DARM Offset Test:

The test ran without issues, and upon checking the PCAL X and Y excitation screens, the only differences I can see before vs. after are in the OSC_TRAMP times:

PCALX: OSC TRAMP (sec) OSC1 was 3 and went to 5

PCALY: OSC TRAMP (sec) OSC1-9 were 10 and went to 5.

I reverted these to their before values - everything else is the same (screenshots below).

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 11:19, Monday 15 April 2024 (77176)

I accidentally placed the analysis for this test as a comment on the wrong alog. Thanks Vicky for pointing this out!

See here for the optical gain and DARM offset plots.

jennifer.wright@LIGO.ORG - 16:34, Monday 15 July 2024 (79146)

I added a plot showing the loss (the inverse of the slope of the attached graph) between the input of HAM6 (AS port) and the DCPDs, as in this entry.

This loss term is 1/1.247 = 0.802 with 653.7 mW of light insensitive to DARM at the AS port.

Non-image files attached to this comment
H1 DAQ
daniel.sigg@LIGO.ORG - posted 12:18, Wednesday 07 February 2024 - last comment - 15:54, Monday 15 July 2024(75761)
Previous/spare timing master

The previous timing master which was again running out of range on the voltage to the OCXO, see alogs 68000 and  61988, has been retuned using the mechanical adjustment of the OCXO.

Today's readback voltage is at +3.88V. We will keep it running over the next few months to see, if it eventually settles.

Comments related to this report
daniel.sigg@LIGO.ORG - 10:33, Wednesday 21 February 2024 (75912)

Today's readback voltage is at +3.55V.

daniel.sigg@LIGO.ORG - 16:18, Monday 18 March 2024 (76497)

Today's readback voltage is at +3.116V.

daniel.sigg@LIGO.ORG - 15:25, Tuesday 07 May 2024 (77697)

Today's readback voltage is at +1.857V.

daniel.sigg@LIGO.ORG - 15:54, Monday 15 July 2024 (79142)

Today's readback voltage is at +0.951V.

H1 ISC (SQZ)
jennifer.wright@LIGO.ORG - posted 13:20, Friday 05 May 2023 - last comment - 17:00, Monday 15 July 2024(69298)
Corrected DARM spectra for DCPD whitening on/off and DARM offset on/off on 3rd May

Jennie, Jenne, Elenna, Vicky, Erik

As referred to in this entry, we took a suite of measurements on the 3rd May to determine what effect changing the SRCL offset, changing the DARM offset, and turning off the whitening on the DCPDs would have on the sensitivity.

Measurement Order: from 05/03/2023

Measurement Set 1
  DARM measurement time: 1367179951
  PCAL > DARM measurement time: 1367180172
  DARM offset: 20 mA
  Whitening: ON
  SRCL1 OFFSET: -200
  Squeezing optimised: YES, but a slightly different squeezing angle was set between the PCAL and DARM measurements
  DARM measurement (1 min, Ref 1): /ligo/home/jennifer.wright/git/DARM_offset/2023-05-03_2009UTC_H1_DARMSPEC_1m.xml
  PCAL > DARM measurement (3 min): /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O3/H1/Measurements/FullIFOSensingTFs/2023-05-03_2139UTC_H1_PCALY2DARMTF_BB_3min.xml

Measurement Set 2
  DARM measurement time: 1367184708
  PCAL > DARM measurement time: 1367183677
  DARM offset: 40 mA
  Whitening: ON
  SRCL1 OFFSET: -200
  Squeezing optimised: YES
  DARM measurement (1 min, Ref 2): /ligo/home/jennifer.wright/git/DARM_offset/2023-05-03_2131UTC_H1_DARMSPEC_1m.xml
  PCAL > DARM measurement (3 min): /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O3/H1/Measurements/FullIFOSensingTFs/2023-05-03_2139UTC_H1_PCALY2DARMTF_BB_3min.xml

Measurement Set 3
  DARM measurement time: 1367184497
  PCAL > DARM measurement time: 1367184372
  DARM offset: 40 mA
  Whitening: OFF
  SRCL1 OFFSET: -200
  Squeezing optimised: YES
  DARM measurement (1 min, Ref 3): /ligo/home/jennifer.wright/git/DARM_offset/
  PCAL > DARM measurement (3 min): /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O3/H1/Measurements/FullIFOSensingTFs/2023-05-03_2139UTC_H1_PCALY2DARMTF_BB_3min.xml

At low frequencies DARM looks better with 20mA offset, but we must remember that many things at low frequency are optimised for this DARM offset. See first plot for 20mA/40mA comparison.

The whitening ON/OFF (see second plot attached) does not look to have made a large difference to the sensitivity from these plots.

There is no coherence in the PCAL > DARM BB measurements we took at high frequency, so to get a good idea of the difference between the 20 mA and 40 mA DARM offsets we will scale the DARM spectra using one of the high-frequency calibration lines.

Unfortunately, we did not spend any time in NLN with 40 mA (just NLN CAL MEAS), so we will need to measure another DARM spectrum in that state today.

The code I used to correct the DARM spectra with the PCAL to DARM measurements is in /ligo/home/jennifer.wright/git/DARM_offset

 

Non-image files attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 16:03, Friday 05 May 2023 (69361)

The contrast defect measurement I made indicated that the contrast defect is still 1.7 mW. This measurement was taken about 18 hrs into lock, so we were fully thermalized.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 17:00, Monday 15 July 2024 (79149)

Added a third plot of the power scaling at the anti-symmetric port as the DARM offset changes the power at the DCPDs.

The inverse of the slope of this plot gives the loss term as in this entry.

loss term = 1/1.219 = 0.820,

the amount of light at the anti-symmetric port insensitive to DARM is 837.5 mW.

Non-image files attached to this comment