Reports until 16:30, Monday 15 July 2024
H1 General
ryan.crouch@LIGO.ORG - posted 16:30, Monday 15 July 2024 (79123)
OPS Monday day shift summary

TITLE: 07/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY: We're testing/investigating whether we can still lock; DRMI locks pretty easily, so this alignment might be OK.

LOG:                                                                                                                                                                                                                                     

Start Time  System  Name  Location  Laser_Haz  Task  End Time
15:08 FAC Karen Optics lab, VPW N Tech clean 15:35
15:36 pem Kim MidX N Tech clean 16:17
16:14 CAL Francisco PCAL lab LOCAL PCAL work 17:09
16:14 FAC Karen PCAL lab LOCAL Tech clean 16:29
16:30 FAC Karen MidY N Tech clean 17:44
18:59 SUS Jeff TCSX rack N Take pictures, RMs 19:11
19:08 SUS Jeff Mech room racks N Take pictures 19:18
20:07 EE/PEM Fil, Marc EndX N Testing AA chassis for new DAQ 20:38
21:13 EE Fil EndX N Reset test setup 21:40
21:48 SUS Jeff TCSX rack N Pictures 22:11
Images attached to this report
H1 SEI
jim.warner@LIGO.ORG - posted 16:29, Monday 15 July 2024 (79145)
HAM2 moving a lot more than HAM3 around 40hz, probably time to do some re-tuning

Genevieve and Sam asked about HAM1 and HAM2 motion around 40 Hz. I hadn't looked in detail at this frequency band for a while; it's typically higher frequency than we worry about for ISI motion. But it turns out HAM2 is moving a lot more than HAM3 generally over 25-55 Hz, and particularly around 39 Hz. It looks like it might be due to gain peaking from the isolation loops, but it could also be from something bad in the HEPI-to-ISI feedforward. The extra motion is so broad that I don't think it's just one loop with a little too much gain, so I'm not sure what is going on here.

The first image shows spectra comparing the motion of the HEPIs for those chambers (HAM2 HEPI is red, HAM3 is blue) and the ISIs (HAM2 ISI is green, HAM3 is brown). The HEPI motion is pretty similar, so I don't think it's a difference in input motion. HAM2 is moving something like 10x as much as HAM3 over 25-55 Hz. The sharp peak at 39 Hz looks like gain peaking, but I'm not sure that explains all the difference.

The second plot shows the transfer functions from HEPI to ISI for each chamber. Red is HAM2, blue is HAM3. The 25-55 Hz TF for HAM3 is not very clean, probably because HAM3 is well isolated. The HAM2 TF is pretty clean, which makes me wonder if something is messed up with the feedforward on that chamber. Maybe that is something I could (carefully) fix while other troubleshooting for the detector is going on.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:07, Monday 15 July 2024 (79143)
Ops Eve Shift Start

TITLE: 07/15 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 21mph Gusts, 18mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

Currently in ENGAGE_ASC_FOR_FULL_IFO, which is currently doing okay. Winds have jumped up but look like they're going back down.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:43, Monday 15 July 2024 - last comment - 16:33, Monday 15 July 2024(79140)
moved SR3, relocking after fast shutter test

Sheila, Keita, Ryan Short, Camilla and TJ

Part way through initial alignment, we stopped and moved SR3 towards the positive-yaw spot from 79103, and used the SR2 OSEM values from that time.  A small manual adjustment of AS_C was needed; otherwise initial alignment was uneventful.

With DRMI locked, we ran the fast shutter test that Keita and Ryan S have both looked at.

We also looked at the ratios of the AS PDs to compare to 78667; they are most similar to the good times in that alog. After these checks we decided to try locking.

Comments related to this report
keita.kawabe@LIGO.ORG - 15:55, Monday 15 July 2024 (79141)

Fast shutter behavior is attached. It's working fine, but the throughput to HAM6 is ~14% down compared with before.

Before the shutter was closed, ASC-AS_A_DC_NSUM was ~3.7k counts, and  ~0.75 counts after (fractional number because of decimation to 2k). That's about 200ppm.

However, it used to be ~4.3k and ~1 count on one of the "happy" plots in alog 79131; 3.7k/4.3k ~ 0.86, so the throughput to HAM6 seems to be ~14% lower than before.
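The two numbers quoted above (leakage in ppm past the closed shutter, and the lock-to-lock throughput change) are simple ratios; a minimal sketch recomputing them from the logged counts (this is illustration only, not any site analysis code):

```python
# Recompute the quoted figures from the logged ASC-AS_A_DC_NSUM counts.

def closed_open_ppm(open_counts, closed_counts):
    """Fraction of light leaking past the closed shutter, in ppm."""
    return 1e6 * closed_counts / open_counts

def relative_throughput(open_now, open_before):
    """Open-shutter power now as a fraction of a previous lock."""
    return open_now / open_before

leak = closed_open_ppm(3700, 0.75)       # ~200 ppm, as quoted
ratio = relative_throughput(3700, 4300)  # ~0.86, i.e. ~14% down
print(round(leak), round(ratio, 2))
```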

Images attached to this comment
keita.kawabe@LIGO.ORG - 16:33, Monday 15 July 2024 (79147)

The IFO lost lock even before going to DC readout, and we had another FS test forced by the guardian; it didn't fail, but the throughput is even worse.

A_DC_SUM = 3.4k when the shutter was open, the closed/open ratio is about 1000 ppm, and a tiny part of the beam is being missed by the shutter (attached). Note that I'm NOT eyeballing the "open" Y cursor in log scale; I set it while in linear Y scale, then changed to log to show that the power after the shutter was closed seems to be measurably larger than it should be.

Maybe this is happening because of tiny alignment differences from lock to lock, but either way this doesn't look like the place we want to be.

Images attached to this comment
H1 OpsInfo (ISC)
thomas.shaffer@LIGO.ORG - posted 14:52, Monday 15 July 2024 (79136)
Added manual_control parameter to lscparams to avoid some automation states when True

Sheila, TJ

In times of heavy commissioning or troubleshooting, we want full control of the IFO and don't want to fight the automation. Sheila suggested that we add a manual_control boolean to lscparams.py that ISC_LOCK will look at to decide whether it will automatically run states like Check_Mich_Fringes, PRMI, Increase_Flashes, etc. When this is set to True, it will avoid these automated states either through a conditional in a state's logic, or by weighting edges to force ISC_LOCK to avoid particular states.

For now we are setting manual_control = True while we troubleshoot the IFO.  We will need to remember to set it back to False when we want fully automatic operation again.
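A hypothetical sketch of the pattern described above (this is not the actual ISC_LOCK/lscparams code; the function and state names here are stand-ins): a flag that state logic consults to skip automated recovery states.

```python
# Stand-in for lscparams.manual_control; in the real system this would
# live in lscparams.py and be read by ISC_LOCK.
MANUAL_CONTROL = True

def next_state_after_drmi_failure(manual_control):
    """Pick the next state; skip automated recovery under manual control."""
    if manual_control:
        return 'DOWN'                # hand the IFO back to the operator
    return 'CHECK_MICH_FRINGES'      # otherwise run the automated check

print(next_state_after_drmi_failure(MANUAL_CONTROL))
```

The same flag could equivalently be applied as a large edge weight so the state-graph path solver routes around the automated states.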

H1 AOS (SEI)
neil.doerksen@LIGO.ORG - posted 12:29, Monday 15 July 2024 - last comment - 15:34, Monday 15 July 2024(79132)
April 18 Seismic Timeline, Lock, Lockloss Summary

Pre April 18

April 18

Post April 18

Comments related to this report
neil.doerksen@LIGO.ORG - 13:18, Monday 15 July 2024 (79133)

Here are some plots around the EQ from April 18, 2024 at which USGS reports coming from Canada at 06:06.

Images attached to this comment
neil.doerksen@LIGO.ORG - 15:34, Monday 15 July 2024 (79138)

Using KAPPA_C channel.

Images attached to this comment
H1 ISC (SQZ)
camilla.compton@LIGO.ORG - posted 12:25, Monday 15 July 2024 (79128)
Translating SQZ beam after IFO alignemt shift

Naoki, Sheila, Camilla

To continue investigating alignment changes: 79105. More details in Sheila's 79127.

We put SRM back to where it was before the alignment shift, misaligned SRM2 and the ITMs, and injected 1.28 mW of SQZ SEED beam (as measured on SQZT7 OPO_IR_PD_DC_POWERMON). Followed the procedure in 78115. Increased the AS_AIR camera exposure from 10,000 to 100,000.

Translated the SQZ beam around the last NLN spot. In each direction the throughput was getting better, but in both pitch directions we saturated ZM4 before moving far. We were able to get the full expected 0.9 mW throughput by moving ZM5 in pitch and yaw, both positive.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 09:16, Monday 15 July 2024 - last comment - 15:40, Monday 15 July 2024(79119)
Atomic Clock lost sync with timing-system

At 08:50 PDT the MSR Atomic Clock lost sync with the timing system; the comparator is reading a steady 0.24 s 1PPS difference. This has happened before, and it requires a manual resync of the Atomic Clock to the timing-system 1PPS. This was last done 6th May 2024.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 15:40, Monday 15 July 2024 (79139)

Daniel resynced the Atomic Clock to the timing 1PPS at 15:30 PDT this afternoon. The corner station timing error has gone away.

Images attached to this comment
H1 ISC
ryan.short@LIGO.ORG - posted 21:16, Saturday 13 July 2024 - last comment - 15:15, Monday 15 July 2024(79103)
Moved SR3 with SR2 following to find spots with good AS port power

To help characterize the recent issues we've seen with IFO alignment, which are similar to the output change in late April, I moved SR3 in 10W single-bounce configuration with the SRC2 ASC loop on so that SR2 would follow (ALIGN_IFO's SR2_ALIGN state), much like in alog 77694. I moved SR3 far enough in each direction (+/- pitch and yaw) to bring the ASC-AS_C_NSUM signal back up to our target value of 0.022, which I managed in every direction except -Y, where I could only bring AS_C up to 0.018. In the three directions where I was successful, the beam spot on the AS AIR camera looked much more like the clear circle we're used to seeing and less like the upside-down apostrophe we have now.

It seems that our old output spot (starting place in TJ's alog) is still bad (-Y direction with SR3 from current place) since that was the only direction where I couldn't get the AS_C power back up.

Slider values of SR3 after each move:

              Start    +P move   -P move   +Y move   -Y move
SR3 P slider  438.7    651.1     138.7     438.7     438.7
SR3 Y slider  122.2    122.2     122.2     322.2     -167.8

Attachment 1 is the starting AS AIR camera image (and our current spot), attachment 2 is after the +P move, attachment 3 is after the -P move, and attachment 4 is after the +Y move.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 15:15, Monday 15 July 2024 (79135)
 
                                    Before       +P move      -P move      +Y move      -Y move
Time (GPS)                          1404952764   1404955465   1404958133   1404959716   1404963518
Time (UTC, 2024/07/14)              00:39:06     01:24:07     02:08:35     02:34:58     03:38:20
H1:SUS-SR3_M1_OPTICALIGN_P_OFFSET   438.7        651.1        138.7        438.7        438.7
H1:SUS-SR3_M1_OPTICALIGN_Y_OFFSET   438.7        122.2        122.2        322.2        -167.8
H1:SUS-SRM_M1_DAMP_P_INMON          -1033.7      -1035.2      -1036.3      -1036.1      -1037.1
H1:SUS-SRM_M1_DAMP_Y_INMON          913.7        914.0        914.1        914.1        914.2
H1:SUS-SR2_M1_DAMP_P_INMON          597.7        -871.6       2660.2       614.7        566.2
H1:SUS-SR2_M1_DAMP_Y_INMON          1125.3       1179.9       1069.3       1878.8       -72.4
H1:SUS-SR3_M1_DAMP_P_INMON          -290.2       -57.7        -619.4       -297.9       -279.1
H1:SUS-SR3_M1_DAMP_Y_INMON          -411.0       -425.2       -390.4       -256.9       -633.7

Here are the OSEM values, so that Alena can continue her 78268 analysis of the OFI beam spot position.

H1 SYS
daniel.sigg@LIGO.ORG - posted 20:27, Saturday 13 July 2024 - last comment - 16:54, Monday 15 July 2024(79102)
Fast shutter is OK

Yesterday, the fast shutter test failed due to dark offsets in the AS WFS DC NSUM channels.

The guardian started running the TwinCAT testing code, which works as intended: it sends a close command to the trigger logic, which in turn fires the fast shutter. The fast shutter works fine, as can be seen on the HAM6 geophones. The slow controls readbacks also indicate that both the fast shutter and the PZT shutter are closed no later than 200 ms after the trigger. However, 0.5 sec after the guardian started the test, it checks the AS WFS DC NSUM outputs and compares them against dark offset limits of ±15. Since the dark offset on WFS B was over 30, the guardian then sent an abort command to the TwinCAT code and reported a failure.
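The pass/fail logic described above reduces to a threshold check on the post-closure readbacks. A hedged sketch (not the actual guardian code; the function name and channel labels are illustrative):

```python
# After the shutter fires, each AS WFS DC NSUM readback should sit within
# the dark-offset limit; any channel outside it fails the test.
DARK_OFFSET_LIMIT = 15.0  # counts, per the alog

def shutter_test_passes(nsum_readbacks, limit=DARK_OFFSET_LIMIT):
    """True if every AS WFS DC NSUM is within +/-limit of dark."""
    return all(abs(v) <= limit for v in nsum_readbacks)

# WFS B was reading over 30 counts, so the test reported a failure:
print(shutter_test_passes([2.0, 30.0]))
```

Raising the limit to ~30 counts, as suggested later in this thread, would make this particular readback pass.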

Comments related to this report
keita.kawabe@LIGO.ORG - 23:35, Saturday 13 July 2024 (79104)

That story agrees with my observation on Friday night when I started looking at the FS after 11:15 PM.

Sheila reported (79092) that the fast shutter reopened after only ~50 ms or so. It seems that the low-voltage drive that keeps the shutter closed was not working. The 1st attachment shows the last time that happened, at around 23:10 local time. Daniel points out that the shutter was in an error state at that time, but that was after Ryan power cycled the FS driver. We don't know exactly what state the fast shutter was in here.

The next time the HV firing was tested was at 23:23 local time (2nd attachment). The shutter was kept shut (i.e. the low-voltage drive was somehow working), but there are two things to note.

  1. H1:ASC-AS_B_NSUM_OUT_DQ was about 25 or so, i.e. larger than the dark offset threshold of +-15. The FS guardian went to the failed state.
  2. Bouncing was worse than I've ever seen. It bounced multiple times with a 40 to 50 ms period and eventually settled to the "closed" position. (Before Friday I only ever saw a single bounce.)

The last FS test I did was at 00:12 local time on Jul/13, when the error was cleared, with smaller power than nominal (3rd attachment). Bouncing was just as bad, but the power coming to HAM6 was smaller (see the trigger power at the top left). AS_B_NSUM was somewhat smaller (more like 10).

The reason AS_B_NSUM is worse is that I reduced the analog DC gain by a factor of 10 and compensated for that with digital gain. The effects of the analog offset as well as ADC/electronics noise are 10x worse than for AS_A. I adjusted the dark offset while the IMC was unlocked, but we can probably increase the threshold to 30 or so if it continues to bother us.
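To make the trade-off above concrete: any offset or noise that enters after the analog gain stage gets multiplied by the compensating digital gain. A toy illustration with assumed numbers (not measured values):

```python
# Hypothetical: a small raw offset at the ADC, seen through the x10
# digital gain that compensates for the x10 lower analog DC gain.
analog_offset_cts = 2.5       # assumed raw offset/noise at the ADC
digital_compensation = 10.0   # makes up for the x10 lower analog gain

apparent_offset = analog_offset_cts * digital_compensation
print(apparent_offset)        # the offset appears 10x larger in the output
```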

The bouncing behavior might be more serious, as it could mean that the beam was close to the end of the travel of the FS mirror (and it was bad on Friday because of funny alignment), or that the low-voltage drive was somehow still funny. I suspect the former.

Images attached to this comment
keita.kawabe@LIGO.ORG - 14:53, Monday 15 July 2024 (79131)

It seems the ASC-AS_B_DC gain was probably a red herring; the important thing is that the beam got uglier/bigger at some point, so part of the beam was not blocked by the fast shutter.

The first attachment shows when the ASC-AS_B_DC gain switch was flipped from x10 to x1 on Tuesday. You can see that the fast shutter had been firing OK until Friday evening.

The rest of the plots show the fast shutter test done by the Beckhoff at three different points in time: the last test before my AS_B_DC change ("happy", Monday Jul/08 ~16:37 UTC or 9:37 local), the first one after my AS_B_DC change ("happy", Jul/11 4:43 UTC or Wed Jul/10 21:43 local), and the first time the fast shutter went into the error mode ("sad", Jul/13 4:56 UTC or Fri Jul/12 21:56 local). The last one is when Sheila and Ryan started having problems.

Important points are:

  1. The two "happy" plots are quite similar. In particular, even though I see bouncing action, the power measured by both AS_A and AS_B more or less settled to the minimum value. The closed/open ratio is about 230 ppm in both cases. This is compatible with the specification of the fast shutter mirror (which is <1000 ppm).
  2. The "sad" plot looks nothing like the happy ones. Right after the shutter was closed the light level went down to less than 1000 ppm, but it bounced back and eventually settled down to ~6500 ppm. You can also see that the light level when the FS was open (~2200 cts) is about 50% of what it used to be in the happy plots.
  3. ASC-AS_C was pretty close to center in all of the three plots.

From these, my conclusion is that the beam position on the fast shutter mirror was pretty much the same in all three tests, but the beam was pretty ugly in the "sad" plot, as was witnessed by many of us on the AS port camera. Because of this, part of the lobe was missed by the fast shutter. Centering an ugly beam on AS_C might have complicated the matter.

Later, when I forced the test with much lower power, the error was cleared because even though the ugly beam was still there the power went lower than the "shutter closed" threshold of the guardian.

I don't fully understand who did what, and when, during the time the shutter was in the error state (it includes people pressing buttons, power cycling the driver, and then pressing buttons again, and I certainly pressed buttons too).

Looking at this, and since Daniel agrees that the Fast Shutter has been working fine, my only concerns about locking the IFO are:

  1. The beam should be properly attenuated by the fast shutter. Not thousands of ppm, but more like ~230ppm.
  2. The beam should not clip on ASC-AS_C. If it's not clipped there, the chance of things getting clipped downstream is very small.
Images attached to this comment
H1 ISC (ISC)
jennifer.wright@LIGO.ORG - posted 15:55, Thursday 11 July 2024 - last comment - 15:58, Monday 15 July 2024(79045)
DARM Offset step with hot OM2

We were only about two and a half hours into the lock when I did this test, due to our earthquake lockloss this morning.

I ran the

python auto_darm_offset_step.py

in /ligo/gitcommon/labutils/darm_offset_step

Starting at GPS 1404768828

See attached image.

Analysis to follow.

Returned DARM offset H1:OMC-READOUT_X0_OFFSET to 10.941038 (nominal) at 2024 Jul 11 21:47:58 UTC (GPS 1404769696)

DARM offset moves recorded to 
data/darm_offset_steps_2024_Jul_11_21_33_30_UTC.txt

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:25, Friday 12 July 2024 (79080)

Here are the calculated optical gain vs. DCPD power and DARM offset vs. optical gain plots, as calculated by ligo/gitcommon/labutils/darm_offset_step/plot_darm_optical_gain_vs_dcpd_sum.py

The contrast defect is calculated from the height of the 410 Hz PCAL line at each offset step in the output DCPD, and is 1.014 +/- 0.033 mW.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 15:58, Monday 15 July 2024 (79130)

I added an additional plotting step to the code; it now makes this plot, which shows how the power at AS_C changes with the DARM-offset power at the DCPDs. The slope of this graph tells us what fraction of the power is lost between the input to HAM6 (AS_C) and the DCPDs.

P_AS = 1.770*P_DCPD + 606.5mW

where the second term is light that will be rejected by the OMC, plus light that gets through the OMC but is insensitive to DARM length changes.

The loss term between the anti-symmetric port and the DCPDs is 1/1.77 = 0.565
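The loss term follows from a straight-line fit of P_AS against P_DCPD. A minimal sketch of that step (not the site script; the data points below are synthetic values generated from the quoted fit, purely for illustration):

```python
# Recover the HAM6->DCPD throughput from the slope of P_AS vs P_DCPD.
import numpy as np

p_dcpd = np.array([10.0, 20.0, 30.0, 40.0])   # mW at the DCPDs (synthetic)
p_as = 1.770 * p_dcpd + 606.5                 # mW at AS_C, from the quoted fit

slope, intercept = np.polyfit(p_dcpd, p_as, 1)
loss_term = 1.0 / slope                       # fraction surviving to the DCPDs
print(round(loss_term, 3), round(intercept, 1))
```

The intercept is the DARM-insensitive light at the AS port, and 1/slope is the quoted 0.565 throughput factor.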

Non-image files attached to this comment
X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 15:29, Thursday 11 July 2024 - last comment - 15:04, Monday 15 July 2024(79042)
BBSS M1 BOSEM Count Drift Over Last Week - Temperature Driven Suspension Sag

Ibrahim, Rahul

BOSEM counts have been visibly drifting over the last few days since I centered them last week. Attached are two screenshots:

  1. Screenshot 1 shows the 48hr shift of the BOSEM counts as the temperature is varying
  2. Screenshot 2 shows the full 8 day drift since I centered the OSEMs.

I think this can easily be explained by Temperature Driven Suspension Sag (TDSS - new acronym?) due to the blades. (Initially, Rahul suggested maybe the P-adjuster was loose and moving, but I think the cyclic nature of the 8-day trend disproves this.)

I tried to find a way to get the temperature in the staging building, but Richard said there's no active data being taken, so I'll take one of the available thermometers/temp sensors and place it in the cleanroom next time I'm in there, just to have the data.

On average, the OSEM counts for RT and LF, the vertical-facing OSEMs, have sagged by about 25 microns. F1, which is above the center of mass, is also seeing a long-term drift. Why?

More importantly, how does this validate/invalidate our OSEM results given that some were taken hours after others and that they were centered days before the TFs were taken?

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 15:04, Monday 15 July 2024 (79137)

Ibrahim

Taking new trends today shows that while the suspension sag "breathes" back and forth as the temperature fluctuates on a daily basis, the F1 OSEM counts are continuing to trend downwards even though the peak-to-peak temperature has not changed over the last few days.
This F1 OSEM has gone down an additional 670 cts in the last 4 days (screenshot 1). Screenshot 2 shows the OSEM counts over the last 11 days. What does this tell us?

What I don't think it is:

  1. It somewhat disproves the idea that the F1 OSEM drift was just due to the temperatures going up, since the counts have not leveled out as the temperatures have - unless for some reason something is heating up more than usual.
  2. A suggestion was that the local cleanroom temperature closer to the walls was hotter, but this would have an effect on all OSEMs on this face (F2 and F3), and those OSEMs are not trending downwards in counts.
  3. It is likely not an issue with the OSEM itself, since the diagnostic pictures (alog 79079) do show a perceivable shift when there wasn't one during centering, meaning the pitch has definitely changed, which would necessarily show up on the F1 OSEM.

What it still might be:

  1. The temperature causes the top stage and top mass blades to sag. These blades are located in front of one another, and while the blades are matched, they are not identical. An unlucky matching could mean that either the back top stage blade or two of the back top mass blades sag more than the other two, causing a pitch instability. Worth checking.
  2. It is not temperature related at all; rather, the sagging reveals that we still have the hysteresis issue we thought we fixed 2 weeks ago. This OSEM has been drifting in counts ever since it was centered, but the temperature has also been changing drastically in that time (50F difference between highs and lows last week).

Next Steps:

  • I'm going to set up temperature probes in the cleanroom to see if there is indeed some weird differential temperature effect specific to the cleanroom. Tyler and Eric have confirmed that the Staging Building temperature only really fluctuates between 70 and 72, so I'll attempt to reproduce this. This should give more detail about the effect of temperature on the OSEM drift.
  • Use the individual OSEM counts and their basis DOF matrix transformation values to determine whether some blades are sagging more than others, by seeing if other OSEMs are spotting it.
    • Ultimately, we could re-do the blade position tests to definitively measure the blade height changes at different temperatures. I will look into the feasibility of this.
Images attached to this comment
H1 ISC (ISC)
jennifer.wright@LIGO.ORG - posted 11:27, Thursday 13 June 2024 - last comment - 16:11, Monday 15 July 2024(78413)
DARM offset step

I ran the DARM offset step code starting at:

2024 Jun 13 16:13:20 UTC (GPS 1402330418)

Before recording this time stamp, it records the current PCAL line settings and makes sure notches for the 2 PCAL frequencies are set in the DARM2 filter bank.

It then puts all the PCAL power into these lines at 410.3 and 255Hz (giving them both a height of 4000 counts), and measures the current DARM offset value.

It then steps the DARM offset and waits for 120s each time.

The script stopped at 2024 Jun 13 16:27:48 UTC (GPS 1402331286).

In the analysis the PCAL lines can be used to calculate how the optical gain changes at each offset.
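The idea behind that analysis step can be sketched as follows (a hedged illustration, not the actual analysis code): with a fixed PCAL drive, the PCAL line amplitude seen in the uncalibrated DARM error signal scales with the optical gain, so the ratio of line heights between offset steps gives the relative optical gain.

```python
# Relative optical gain from PCAL line heights at a fixed drive amplitude.
def relative_optical_gain(line_amp_step, line_amp_ref):
    """Optical gain at an offset step, relative to the reference step."""
    return line_amp_step / line_amp_ref

# Hypothetical line amplitudes (arbitrary units) at two DARM offsets:
print(round(relative_optical_gain(3.2, 4.0), 2))
```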

See the attached traces, where you can see that H1:OMC-READOUT_X0_OFFSET is stepped and the OMC-DCPD_SUM and ASC-AS_C respond to this change.

Watch this space for analysed data.

The script sets all the PCAL settings back to nominal after the test, from the record it took at the start.

The script lives here:

/ligo/gitcommon/labutils/darm_offset_step/auto_darm_offset_step.py

The data lives here:

/ligo/gitcommon/labutils/darm_offset_step/data/darm_offset_steps_2024_Jun_13_16_13_20_UTC.txt

 

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 11:10, Friday 14 June 2024 (78436)

See the results in the attached pdf also found at

/ligo/gitcommon/labutils/darm_offset_step/figures/plot_darm_optical_gain_vs_dcpd_sum/all_plots_plot_darm_optical_gain_vs_dcpd_sum_1402330422_380kW__Post_OFI_burn_and_pressure_spikes.pdf

The contrast defect is 0.889 ± 0.019 mW and the true DARM offset zero is 0.30 counts.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 16:11, Monday 15 July 2024 (79144)

I plotted the power at the antisymmetric port as in this entry to find the loss term between the input to HAM6 and the DCPDs, which in this case is (1/1.652) = 0.605, with 580.3 mW of light at the AS port insensitive to DARM length changes.

Non-image files attached to this comment
H1 ISC (ISC)
ibrahim.abouelfettouh@LIGO.ORG - posted 09:15, Saturday 16 March 2024 - last comment - 16:34, Monday 15 July 2024(76454)
DARM Offset Test

DARM Offset Test:

The test ran without issues, and upon checking the PCAL X and Y excitation screens, the only differences I can see before vs. after are in the OSC_TRAMP times:

PCALX: OSC TRAMP (sec) OSC1 was 3 and went to 5

PCALY: OSC TRAMP (sec) OSC1-9 were 10 and went to 5.

I reverted these to their before values - everything else is the same (screenshots below).

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 11:19, Monday 15 April 2024 (77176)

I accidentally placed the analysis for this test as a comment on the wrong alog. Thanks Vicky for pointing this out!

See here for the optical gain and DARM offset plots.

jennifer.wright@LIGO.ORG - 16:34, Monday 15 July 2024 (79146)

I added a plot showing the loss (inverse of the slope of the attached graph) between the input of HAM6 (AS port) and the DCPDs, as in this entry.

This loss term is 1/1.247 = 0.802 with 653.7 mW of light insensitive to DARM at the AS port.

Non-image files attached to this comment
H1 DAQ
daniel.sigg@LIGO.ORG - posted 12:18, Wednesday 07 February 2024 - last comment - 15:54, Monday 15 July 2024(75761)
Previous/spare timing master

The previous timing master, which was again running out of range on the voltage to the OCXO (see alogs 68000 and 61988), has been retuned using the mechanical adjustment of the OCXO.

Today's readback voltage is at +3.88V. We will keep it running over the next few months to see if it eventually settles.

Comments related to this report
daniel.sigg@LIGO.ORG - 10:33, Wednesday 21 February 2024 (75912)

Today's readback voltage is at +3.55V.

daniel.sigg@LIGO.ORG - 16:18, Monday 18 March 2024 (76497)

Today's readback voltage is at +3.116V.

daniel.sigg@LIGO.ORG - 15:25, Tuesday 07 May 2024 (77697)

Today's readback voltage is at +1.857V.

daniel.sigg@LIGO.ORG - 15:54, Monday 15 July 2024 (79142)

Today's readback voltage is at +0.951V.

H1 ISC (SQZ)
jennifer.wright@LIGO.ORG - posted 13:20, Friday 05 May 2023 - last comment - 17:00, Monday 15 July 2024(69298)
Corrected DARM spectra for DCPD whitening on/off and DARM offset on/off on 3rd May

Jennie, Jenne, Elenna, Vicky, Erik

As referred to in this entry, we took a suite of measurements on 3rd May to determine what effect changing the SRCL offset, changing the DARM offset, and turning off the whitening on the DCPDs would have on the sensitivity.

Measurement Order: from 05/03/2023

Set 1:
  DARM measurement time: 1367179951; PCAL > DARM measurement time: 1367180172
  DARM offset: 20 mA; whitening: ON; SRCL1 OFFSET: -200
  Squeezing optimised? YES, but a slightly different squeezing angle was set between the PCAL and DARM measurements
  DARM measurement (1 min): /ligo/home/jennifer.wright/git/DARM_offset/2023-05-03_2009UTC_H1_DARMSPEC_1m.xml (Ref 1)
  PCAL > DARM measurement (3 mins): /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O3/H1/Measurements/FullIFOSensingTFs/2023-05-03_2139UTC_H1_PCALY2DARMTF_BB_3min.xml

Set 2:
  DARM measurement time: 1367184708; PCAL > DARM measurement time: 1367183677
  DARM offset: 40 mA; whitening: ON; SRCL1 OFFSET: -200
  Squeezing optimised? YES
  DARM measurement (1 min): /ligo/home/jennifer.wright/git/DARM_offset/2023-05-03_2131UTC_H1_DARMSPEC_1m.xml (Ref 2)
  PCAL > DARM measurement (3 mins): /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O3/H1/Measurements/FullIFOSensingTFs/2023-05-03_2139UTC_H1_PCALY2DARMTF_BB_3min.xml

Set 3:
  DARM measurement time: 1367184497; PCAL > DARM measurement time: 1367184372
  DARM offset: 40 mA; whitening: OFF; SRCL1 OFFSET: -200
  Squeezing optimised? YES
  DARM measurement (1 min): /ligo/home/jennifer.wright/git/DARM_offset/ (ref 3)
  PCAL > DARM measurement (3 mins): /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/O3/H1/Measurements/FullIFOSensingTFs/2023-05-03_2139UTC_H1_PCALY2DARMTF_BB_3min.xml

At low frequencies DARM looks better with the 20 mA offset, but we must remember that many things at low frequency are optimised for this DARM offset. See the first plot for the 20 mA/40 mA comparison.

The whitening ON/OFF comparison (see second plot attached) does not look to have made a large difference to the sensitivity.

There is no coherence in the PCAL > DARM BB measurements we took at high frequency, so to get a good idea of the difference between the 20 mA and 40 mA DARM offsets we will scale the DARM spectra using one of the high-frequency calibration lines.

Unfortunately, we did not spend any time in NLN with 40mA (just NLN CAL MEAS) so we will need to measure another DARM spectra in that state today.

The code I used to correct the DARM spectra with the PCAL to DARM measurements is in /ligo/home/jennifer.wright/git/DARM_offset

 

Non-image files attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 16:03, Friday 05 May 2023 (69361)

The contrast defect measurement I made indicated that the contrast defect is still 1.7 mW. This measurement was taken about 18 hrs into lock, so we were fully thermalized.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 17:00, Monday 15 July 2024 (79149)

Added a third plot of the power scaling at the anti-symmetric port as the DARM offset changes the power at the DCPDs.

The inverse of the slope of this plot gives the loss term as in this entry.

loss term = 1/1.219 = 0.820,

the amount of light at the anti-symmetric port insensitive to DARM is 837.5 mW.

Non-image files attached to this comment