TITLE: 07/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY: We're testing/investigating whether we can still lock; DRMI locks pretty easily, so this alignment might be ok.
LOG:
Start Time | System | Name | Location | Laser Haz | Task | Time End |
---|---|---|---|---|---|---|
15:08 | FAC | Karen | Optics lab, VPW | N | Tech clean | 15:35 |
15:36 | PEM | Kim | MidX | N | Tech clean | 16:17 |
16:14 | CAL | Francisco | PCAL lab | LOCAL | PCAL work | 17:09 |
16:14 | FAC | Karen | PCAL lab | LOCAL | Tech clean | 16:29 |
16:30 | FAC | Karen | MidY | N | Tech clean | 17:44 |
18:59 | SUS | Jeff | TCSX rack | N | Take pictures, RMs | 19:11 |
19:08 | SUS | Jeff | Mech room racks | N | Take pictures | 19:18 |
20:07 | EE/PEM | Fil, Marc | EndX | N | Testing AA chassis for new DAQ | 20:38 |
21:13 | EE | Fil | EndX | N | Reset test setup | 21:40 |
21:48 | SUS | Jeff | TCSX rack | N | Pictures | 22:11 |
Genevieve and Sam asked about HAM1 and HAM2 motion around 40 Hz. I hadn't looked at this frequency band in detail for a while; it's typically higher in frequency than we worry about for ISI motion. But it turns out HAM2 is generally moving a lot more than HAM3 over 25-55 Hz, and particularly around 39 Hz. It looks like it might be gain peaking from the isolation loops, but it could also be something bad in the HEPI-to-ISI feedforward. The excess motion is so broad that I don't think it's just one loop with slightly too much gain, so I'm not sure what is going on here.
The first image shows spectra comparing the motion of the HEPIs for those chambers (HAM2 HEPI in red, HAM3 in blue) and the ISIs (HAM2 ISI in green, HAM3 in brown). The HEPI motion is pretty similar, so I don't think the difference is in the input motion. HAM2 is moving something like 10x as much as HAM3 over 25-55 Hz. The sharp peak at 39 Hz looks like gain peaking, but I'm not sure that explains all of the difference.
The second plot shows the transfer functions from HEPI to ISI for each chamber (red is HAM2, blue is HAM3). The 25-55 Hz transfer function for HAM3 is not very clean, probably because HAM3 is well isolated. The HAM2 transfer function is pretty clean, which makes me wonder whether something is wrong with the feedforward on that chamber. That might be something I could (carefully) fix while other detector troubleshooting is going on.
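For reference, a HEPI-to-ISI transfer function estimate of the kind described above can be sketched with scipy's Welch-based estimators. This is a minimal, self-contained sketch using synthetic data in place of the real HEPI/ISI channels (at the site the data would come from NDS/gwpy); the sample rate, toy filter, and all channel handling here are assumptions, not the site code.

```python
# Sketch: estimate an input->output transfer function the way one would
# for HEPI (input) vs ISI (output) motion. Synthetic data stands in for
# the real channels.
import numpy as np
from scipy import signal

fs = 512  # sample rate in Hz (assumed)

rng = np.random.default_rng(0)
hepi = rng.standard_normal(fs * 600)       # stand-in for HEPI motion
b, a = signal.butter(2, 0.3)               # toy "plant" for illustration
isi = signal.lfilter(b, a, hepi)           # stand-in for ISI motion

# Welch cross- and power-spectral densities; TF estimate = CSD / PSD(input)
f, Pxx = signal.welch(hepi, fs=fs, nperseg=fs * 8)
_, Pxy = signal.csd(hepi, isi, fs=fs, nperseg=fs * 8)
tf = Pxy / Pxx                             # complex transfer function

# Coherence tells you where the estimate is trustworthy; a well-isolated
# chamber gives low coherence and a noisy-looking TF in band.
_, coh = signal.coherence(hepi, isi, fs=fs, nperseg=fs * 8)
band = (f >= 25) & (f <= 55)
print(f"median |TF| in 25-55 Hz: {np.median(np.abs(tf[band])):.3f}")
print(f"median coherence in 25-55 Hz: {np.median(coh[band]):.3f}")
```

With a clean linear relation like this synthetic one, the in-band coherence is near 1; on the real HAM3 data it would be low, which is why that chamber's measured TF looks messy.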
TITLE: 07/15 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 18mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
Currently in ENGAGE_ASC_FOR_FULL_IFO, which is doing okay so far. Winds have jumped up but look to be coming back down.
Sheila, Keita, Ryan Short, Camilla and TJ
Part way through initial alignment, we stopped and moved SR3 towards the positive yaw spot from 79103, and used the SR2 OSEM values from that time. A small manual adjustment of AS_C was needed; otherwise initial alignment was uneventful.
With DRMI locked, we ran the fast shutter test that Keita and Ryan S have both looked at.
We also looked at the ratios of AS PDs to compare to 78667, they are most similar to the good times in that alog. After these checks we have decided to try locking.
Fast shutter behavior is attached. It's working fine, but the throughput to HAM6 is ~14% down compared with before.
Before the shutter was closed, ASC-AS_A_DC_NSUM was ~3.7k counts, and ~0.75 counts after (a fractional number because of decimation to 2k). That's about 200 ppm.
However, it used to be ~4.3k and 1 count on one of the "happy" plots in alog 79131. Since 3.7k/4.3k ~ 0.86, the throughput to HAM6 seems to be ~14% lower than before.
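As a quick check on the arithmetic above (all numbers are copied from this log entry; nothing is newly measured):

```python
# Shutter leakage and throughput change, using the counts quoted above.
open_counts = 3.7e3    # ASC-AS_A_DC_NSUM with shutter open
closed_counts = 0.75   # with shutter closed (fractional due to decimation)

leakage_ppm = closed_counts / open_counts * 1e6
print(f"shutter leakage: {leakage_ppm:.0f} ppm")          # ~200 ppm

old_open = 4.3e3       # "happy" reference level from alog 79131
throughput_change = open_counts / old_open
print(f"throughput relative to before: {throughput_change:.2f}")  # ~0.86
```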
The IFO lost lock even before reaching DC readout, and we had another FS test forced by the guardian. It didn't fail, but the throughput is even worse.
A_DC_SUM = 3.4k when the shutter was open, the closed/open ratio is about 1000 ppm, and a tiny part of the beam is being missed by the shutter (attached). Note that I'm NOT eyeballing the "open" Y cursor in log scale; I set it while in linear Y scale, then changed to log to show that the power after the shutter was closed seems to be measurably larger than it should be.
Maybe this is happening because of tiny alignment differences from lock to lock, but either way this doesn't look like the place we want to be.
Sheila, TJ
In times of heavy commissioning or troubleshooting, we want full control of the IFO and don't want to fight the automation. Sheila suggested that we add a manual_control boolean to lscparams.py that ISC_LOCK will check to decide whether to automatically run states like CHECK_MICH_FRINGES, PRMI, INCREASE_FLASHES, etc. When this is set to True, ISC_LOCK will avoid these automated states, either through a conditional in a state's logic or by weighting edges to steer ISC_LOCK around particular states.
For now we are setting manual_control = True while we troubleshoot the IFO. We will need to remember to set it back when we want fully automatic operation again.
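A minimal sketch of how such a flag could gate the automation. The real flag lives in lscparams.py and is read inside ISC_LOCK's Guardian code; the function name and simplified state logic below are illustrative stand-ins, not the actual Guardian implementation.

```python
# Sketch of the manual_control idea: a single boolean that stops the
# automation from running recovery states on its own.
MANUAL_CONTROL = True  # set True during heavy commissioning/troubleshooting

def next_state(flashes_ok: bool, drmi_locked: bool) -> str:
    """Hypothetical decision helper mirroring the ISC_LOCK logic described
    above: with manual_control set, skip automated states like
    CHECK_MICH_FRINGES / PRMI / INCREASE_FLASHES and leave the operator
    in charge."""
    if MANUAL_CONTROL:
        return "IDLE"  # do nothing automatically
    if not drmi_locked and not flashes_ok:
        return "INCREASE_FLASHES"
    if not drmi_locked:
        return "PRMI"
    return "DRMI_LOCKED"

print(next_state(flashes_ok=False, drmi_locked=False))  # -> IDLE
```

The equivalent effect in real Guardian code can come either from a conditional inside a state's main()/run() or from raising the weight of the edges into the states to be avoided.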
Attachments: Pre April 18, April 18, Post April 18.
Naoki, Sheila, Camilla
To continue investigating alignment changes: 79105. More details in Sheila's 79127.
We put SRM back to before the alignment shift, misaligned SRM2 and the ITMs, and injected 1.28 mW of the SQZ SEED beam (as measured on SQZT7 OPO_IR_PD_DC_POWERMON), following the procedure in 78115. Increased the AS_AIR camera exposure from 10,000 to 100,000.
Translated the SQZ beam around the last NLN spot; in each direction the throughput improved, but in both pitch directions we saturated ZM4 before moving far. We were able to recover all of the expected 0.9 mW throughput by moving ZM5 positive in both pitch and yaw.
At 08:50 PDT the MSR atomic clock lost sync with the timing system; the comparator is reading a steady 0.24 s 1PPS difference. This has happened before and requires a manual resync of the atomic clock to the timing-system 1PPS. This was last done on 6 May 2024.
To help characterize the recent issues we've seen with IFO alignment, which look similar to the output change in late April, I moved SR3 in 10 W single-bounce configuration with the SRC2 ASC loop on so that SR2 would follow (ALIGN_IFO's SR2_ALIGN state), much like in alog 77694. I moved SR3 far enough in each direction (+/- pitch and yaw) to bring the ASC-AS_C_NSUM signal back up to our target value of 0.022, which I managed in every direction except -Y, where I could only bring AS_C up to 0.018. In the three successful directions, the beam spot on the AS AIR camera looked much more like the clear circle we're used to seeing and less like the upside-down apostrophe we have now.
It seems that our old output spot (the starting place in TJ's alog, reached in the -Y direction from SR3's current place) is still bad, since that was the only direction where I couldn't get the AS_C power back up.
Slider values of SR3 after each move:
Slider | Start | +P move | -P move | +Y move | -Y move |
---|---|---|---|---|---|
SR3 P slider | 438.7 | 651.1 | 138.7 | 438.7 | 438.7 |
SR3 Y slider | 122.2 | 122.2 | 122.2 | 322.2 | -167.8 |
Attachment 1 is the starting AS AIR camera image (and our current spot), attachment 2 is after the +P move, attachment 3 is after the -P move, and attachment 4 is after the +Y move.
Channel | Before | +P move | -P move | +Y move | -Y move |
---|---|---|---|---|---|
Time | 1404952764 (2024/07/14 00:39:06 UTC) | 1404955465 (2024/07/14 01:24:07 UTC) | 1404958133 (2024/07/14 02:08:35 UTC) | 1404959716 (2024/07/14 02:34:58 UTC) | 1404963518 (2024/07/14 03:38:20 UTC) |
H1:SUS-SR3_M1_OPTICALIGN_P_OFFSET | 438.7 | 651.1 | 138.7 | 438.7 | 438.7 |
H1:SUS-SR3_M1_OPTICALIGN_Y_OFFSET | 122.2 | 122.2 | 122.2 | 322.2 | -167.8 |
H1:SUS-SRM_M1_DAMP_P_INMON | -1033.7 | -1035.2 | -1036.3 | -1036.1 | -1037.1 |
H1:SUS-SRM_M1_DAMP_Y_INMON | 913.7 | 914.0 | 914.1 | 914.1 | 914.2 |
H1:SUS-SR2_M1_DAMP_P_INMON | 597.7 | -871.6 | 2660.2 | 614.7 | 566.2 |
H1:SUS-SR2_M1_DAMP_Y_INMON | 1125.3 | 1179.9 | 1069.3 | 1878.8 | -72.4 |
H1:SUS-SR3_M1_DAMP_P_INMON | -290.2 | -57.7 | -619.4 | -297.9 | -279.1 |
H1:SUS-SR3_M1_DAMP_Y_INMON | -411.0 | -425.2 | -390.4 | -256.9 | -633.7 |
Here are the OSEM values, so that Alena can continue her 78268 analysis of the OFI beam spot position.
Yesterday, the fast shutter test failed due to dark offsets in the AS WFS DC NSUM channels.
The guardian started running the TwinCAT testing code, which works as intended: it sends a close command to the trigger logic, which in turn fires the fast shutter. The fast shutter works fine, as can be seen on the HAM6 geophones. The slow-controls readbacks also indicate that both the fast shutter and the PZT shutter are closed no later than 200 ms after the trigger. However, 0.5 s after the guardian started the test, it checks the AS WFS DC NSUM outputs and compares them against dark offset limits of ±15. Since the dark offset on WFS B was over 30, the guardian sent an abort command to the TwinCAT code and reported a failure.
That story agrees with my observation on Friday night when I started looking at the FS after 11:15 PM.
Sheila reported (79092) that the fast shutter reopened after only ~50 ms or so. It seems the low-voltage drive that keeps the shutter closed was not working. The 1st attachment shows the last time that happened, at around 23:10 local time. Daniel points out that the shutter was in an error state at that time, but that was after Ryan power cycled the FS driver. We don't know exactly what state the fast shutter was in here.
The next time the HV firing was tested was at 23:23 local time (2nd attachment); the shutter was kept shut (i.e. the low-voltage drive was somehow working), but there are two things to note.
The last FS test I did was at 00:12 local time on Jul 13, when the error was cleared, with smaller power than nominal (3rd attachment). The bouncing was just as bad, but the power coming into HAM6 was smaller (see the trigger power at the top left). AS_B_NSUM was somewhat smaller (more like 10).
The reason AS_B_NSUM is worse is that I reduced the analog DC gain by a factor of 10 and compensated for that with digital gain, so the effects of the analog offset and ADC/electronics noise are 10x worse than for AS_A. I adjusted the dark offset while the IMC was unlocked, but we can probably increase the threshold to 30 or so if it continues to bother us.
The bouncing behavior might be more serious, as it could mean the beam was close to the end of the travel of the FS mirror (and it was bad on Friday because of the odd alignment), or that the low-voltage drive was still misbehaving. I suspect the former.
It seems the ASC-AS_B_DC gain was probably a red herring; the important thing is that the beam got uglier/bigger at some point, so part of the beam was not blocked by the fast shutter.
The first attachment shows when the ASC-AS_B_DC gain switch was flipped from x10 to x1 on Tuesday. You can see that the fast shutter had been firing OK until Friday evening.
The rest of the plots show the fast shutter test done by the Beckhoff at three points in time: the last test before my AS_B_DC change ("happy", Mon Jul 08 ~16:37 UTC / 09:37 local), the first one after my AS_B_DC change ("happy", Jul 11 04:43 UTC / Wed Jul 10 21:43 local), and the first time the fast shutter went into the error mode ("sad", Jul 13 04:56 UTC / Fri Jul 12 21:56 local). The last one is when Sheila and Ryan started having problems.
Important points are:
From these, my conclusion is that the beam position on the fast shutter mirror was essentially the same in all three tests, but the beam was quite ugly in the "sad" plot, as witnessed by many of us on the AS port camera. Because of this, part of the lobe was missed by the fast shutter. Centering an ugly beam on AS_C might have complicated matters.
Later, when I forced the test with much lower power, the error cleared because, even though the ugly beam was still there, the power went below the guardian's "shutter closed" threshold.
I don't fully understand who did what, and when, while the shutter was in the error state (it involved people pressing buttons, power cycling the driver, and then pressing buttons again; I certainly pressed buttons too).
Looking at this, and since Daniel agrees that the Fast Shutter has been working fine, my only concerns about locking the IFO are:
We were only about two and a half hours into lock when I did this test, due to our earthquake lockloss this morning.
I ran python auto_darm_offset_step.py in /ligo/gitcommon/labutils/darm_offset_step, starting at GPS 1404768828.
See attached image.
Analysis to follow.
Returned DARM offset H1:OMC-READOUT_X0_OFFSET to 10.941038 (nominal) at 2024 Jul 11 21:47:58 UTC (GPS 1404769696)
DARM offset moves recorded to data/darm_offset_steps_2024_Jul_11_21_33_30_UTC.txt
Here are the optical gain vs. DCPD power and DARM offset vs. optical gain plots, as calculated by /ligo/gitcommon/labutils/darm_offset_step/plot_darm_optical_gain_vs_dcpd_sum.py
The contrast defect is calculated from the height of the 410Hz PCAL line at each offset step in the output DCPD, and is 1.014 +/- 0.033 mW.
I added an additional plotting step to the code; it now makes this plot, which shows how the power at AS_C changes with the DARM offset power at the DCPDs. The slope of this graph tells us what fraction of the power is lost between the input to HAM6 (AS_C) and the DCPDs.
P_AS = 1.770*P_DCPD + 606.5 mW,
where the second term is light that will be rejected by the OMC, plus light that gets through the OMC but is insensitive to DARM length changes.
The loss term between the anti-symmetric port and the DCPDs is 1/1.770 = 0.565.
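The fit behind this relation can be sketched as below. The data points are synthetic values lying exactly on the quoted line, purely to show the method; the real analysis is done by plot_darm_optical_gain_vs_dcpd_sum.py on measured AS_C and DCPD powers.

```python
# Sketch of the linear fit P_AS = slope * P_DCPD + intercept, whose
# inverse slope gives the HAM6 -> DCPD loss term quoted above.
import numpy as np

p_dcpd = np.array([10.0, 20.0, 30.0, 40.0])   # mW at the DCPDs (synthetic)
p_as = 1.770 * p_dcpd + 606.5                 # mW at AS_C, on the quoted line

slope, intercept = np.polyfit(p_dcpd, p_as, 1)
print(f"slope = {slope:.3f}, intercept = {intercept:.1f} mW")
print(f"loss term (1/slope) = {1/slope:.3f}")  # 1/1.770 ~ 0.565
```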
Ibrahim, Rahul
BOSEM counts have been visibly drifting over the last few days since I centered them last week. Attached are two screenshots:
I think this can easily be explained by temperature-driven suspension sag (TDSS, a new acronym?) due to the blades. (Initially Rahul suggested the P-adjuster might be loose and moving, but I think the cyclic nature of the 8-day trend disproves this.)
I tried to find a way to get the temperature in the staging building, but Richard said there's no active data being taken, so I'll take one of the available thermometers/temp sensors and place it in the cleanroom next time I'm in there, just to have the data.
On average, the readings for RT and LF (the vertical-facing OSEMs) have sagged by about 25 microns. F1, which is above the center of mass, is also seeing a long-term drift. Why?
More importantly, how does this validate or invalidate our OSEM results, given that some were taken hours after others and that the OSEMs were centered days before the TFs were taken?
Ibrahim
Taking new trends today shows that while the suspension sag "breathes" back and forth as the temperature fluctuates daily, the F1 OSEM counts are continuing to trend downwards even though the peak-to-peak temperature hasn't changed over the last few days.
The F1 OSEM has dropped an additional 670 counts in the last 4 days (screenshot 1). Screenshot 2 shows the OSEM counts over the last 11 days. What does this tell us?
What I don't think it is:
What it still might be:
Next Steps:
I ran the DARM offset step code starting at:
2024 Jun 13 16:13:20 UTC (GPS 1402330418)
Before recording this timestamp, the script records the current PCAL line settings and makes sure notches for the two PCAL frequencies are set in the DARM2 filter bank.
It then puts all the PCAL power into these lines at 410.3 Hz and 255 Hz (giving them both a height of 4000 counts) and measures the current DARM offset value.
It then steps the DARM offset, waiting 120 s at each step.
The script stopped at 2024 Jun 13 16:27:48 UTC (GPS 1402331286).
In the analysis the PCAL lines can be used to calculate how the optical gain changes at each offset.
See the attached traces, where you can see that H1:OMC-READOUT_X0_OFFSET is stepped and the OMC-DCPD_SUM and ASC-AS_C respond to this change.
Watch this space for analysed data.
The script sets all the PCAL settings back to nominal after the test, from the record it took at the start.
The script lives here:
/ligo/gitcommon/labutils/darm_offset_step/auto_darm_offset_step.py
The data lives here:
/ligo/gitcommon/labutils/darm_offset_step/data/darm_offset_steps_2024_Jun_13_16_13_20_UTC.txt
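For readers without site access, the sequence the script performs (record settings, step the offset, dwell, restore) can be sketched roughly as below. The FakeEzca stand-in, the function name, and the simplified channel handling are all hypothetical; the real channel access and PCAL bookkeeping live in auto_darm_offset_step.py at the path above.

```python
# Rough sketch of the DARM offset stepping sequence described above.
import time

class FakeEzca(dict):
    """Stand-in for EPICS channel access, for illustration only."""

ezca = FakeEzca({"OMC-READOUT_X0_OFFSET": 10.941})

def step_darm_offset(offsets, dwell=120, dry_run=True):
    """Step the DARM offset through `offsets`, dwelling at each, then
    restore the nominal value recorded at the start."""
    nominal = ezca["OMC-READOUT_X0_OFFSET"]   # record for the restore step
    visited = []
    for off in offsets:
        ezca["OMC-READOUT_X0_OFFSET"] = off
        visited.append(off)
        if not dry_run:
            time.sleep(dwell)                  # 120 s per step in the log
    ezca["OMC-READOUT_X0_OFFSET"] = nominal    # set back to nominal
    return visited

steps = step_darm_offset([8.0, 12.0, 16.0])
print(steps, ezca["OMC-READOUT_X0_OFFSET"])
```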
See the results in the attached pdf also found at
/ligo/gitcommon/labutils/darm_offset_step/figures/plot_darm_optical_gain_vs_dcpd_sum/all_plots_plot_darm_optical_gain_vs_dcpd_sum_1402330422_380kW__Post_OFI_burn_and_pressure_spikes.pdf
The contrast defect is 0.889 ± 0.019 mW and the true DARM offset zero is 0.30 counts.
I plotted the power at the antisymmetric port, as in this entry, to find the loss term between the input to HAM6 and the DCPDs, which in this case is 1/1.652 = 0.605, with 580.3 mW of light at the AS port insensitive to DARM length changes.
DARM Offset Test:
The test ran without issues; upon checking the PCAL X and Y excitation screens, the only differences I can see before vs. after are in the OSC_TRAMP times:
PCALX: OSC TRAMP (sec) OSC1 was 3 and went to 5
PCALY: OSC TRAMP (sec) OSC1-9 were 10 and went to 5.
I reverted these to their before values - everything else is the same (screenshots below).
I accidentally placed the analysis for this test as a comment on the wrong alog. Thanks Vicky for pointing this out!
See here for the optical gain and DARM offset plots.
I added a plot showing the loss ( inverse of the slope of attached graph) between the input of HAM 6 (AS port) and the DCPDs as in this entry.
This loss term is 1/1.247 = 0.802 with 653.7 mW of light insensitive to DARM at the AS port.
The previous timing master, which was again running out of range on the voltage to the OCXO (see alogs 68000 and 61988), has been retuned using the mechanical adjustment of the OCXO.
Today's readback voltage is at +3.88 V. We will keep it running over the next few months to see if it eventually settles.
Today's readback voltage is at +3.55V.
Today's readback voltage is at +3.116V.
Today's readback voltage is at +1.857V.
Today's readback voltage is at +0.951V.
Jennie, Jenne, Elenna, Vicky, Erik
As referred to in this entry, we took a suite of measurements on the 3rd of May to determine what effect changing the SRCL offset, changing the DARM offset, and turning off the whitening on the DCPDs would have on the sensitivity.
Measurement Order: from 05/03/2023
Measurement Set | DARM Measurement Time | PCAL > DARM Measurement Time | DARM offset (mA) | Whitening? | SRCL1 OFFSET | Squeezing optimised? | DARM Measurement (1 min) | PCAL > DARM Measurement (3 mins) | Figure Folder |
---|---|---|---|---|---|---|---|---|---|
1 | 1367179951 | 1367180172 | 20 | ON | -200 | YES, but a slightly different squeezing angle was set between the PCAL and DARM measurements | /ligo/home/jennifer.wright/git/DARM_offset/2023-05-03_2009UTC_H1_DARMSPEC_1m.xml | Ref 1 | |
2 | 1367184708 | 1367183677 | 40 | ON | -200 | YES | /ligo/home/jennifer.wright/git/DARM_offset/2023-05-03_2131UTC_H1_DARMSPEC_1m.xml | Ref 2 | |
3 | 1367184497 | 1367184372 | 40 | OFF | -200 | YES | /ligo/home/jennifer.wright/git/DARM_offset/ | ref 3 | |
At low frequencies DARM looks better with the 20 mA offset, but we must remember that many things at low frequency are optimised for this DARM offset. See the first plot for the 20 mA / 40 mA comparison.
The whitening ON/OFF (see second plot attached) does not look to have made a large difference to the sensitivity from these plots.
There is no coherence at high frequency in the PCAL > DARM BB measurements we took, so to get a good idea of the difference between the 20 mA and 40 mA DARM offsets we will scale the DARM spectra using one of the high-frequency calibration lines.
Unfortunately, we did not spend any time in NLN with 40 mA (just NLN CAL MEAS), so we will need to measure another DARM spectrum in that state today.
The code I used to correct the DARM spectra with the PCAL to DARM measurements is in /ligo/home/jennifer.wright/git/DARM_offset
The contrast defect measurement I made indicated that the contrast defect is still 1.7 mW. This measurement was taken about 18 hrs into lock, so we were fully thermalized.
Added a third plot of the power scaling at the anti-symmetric port as the DARM offset changes the power at the DCPDs.
The inverse of the slope of this plot gives the loss term, as in this entry:
loss term = 1/1.219 = 0.820,
and the amount of light at the anti-symmetric port insensitive to DARM is 837.5 mW.