TITLE: 12/21 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: 2 locklosses; currently recovering from the last one. New O4b record of 47 hours?
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:53 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 22:08 |
16:12 | FAC | Kim | Optics lab & Vac prep | N | Tech clean | 16:55 |
18:29 | CAL | Francisco | PCAL lab | LOCAL | PCAL work | 18:39 |
21:24 | FAC | Christina | OSB receiving | N | Roll up door | 21:49 |
21:49 | OPS | Oli | LVEA | Y -> N | Transition | 22:08 |
21:49 | SEI | Jim | LVEA | Y | Look for parts around biergarten | 21:56 |
22:08 | OPS | LVEA | LVEA | N | LASER SAFE | 15:07 |
23:39 | SQZ | Camilla | Optics lab | N | Put away stuff | 00:30 |
20:59 UTC lockloss from a 47 hour lock, which I think is the new O4b record. Easy relock with an IA, though I had to help out between SR2 and SRY align since it couldn't get SRY (lots of SRM saturations); I reran SR2 and then SRY was fine.
TITLE: 12/21 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY: H1 has been locked for just over an hour. If there's a lockloss this evening, I plan to adjust the ISS RefSignal to bring the diffracted power back up to around 4% (currently down to around 2.5%).
New calibration lines turned on 7/25/2023 resulted in unexpected additional inter-modulation products. These new calibration lines were turned off on 8/9/2023.
When looking at coherence data between the main detector and safe channels, we expect no change in the before-during-after periods of the new calibration lines and additional inter-modulation products. However, in “safe” channel CS MAG EBAY LSCRACK Y, we see coherence jump at the specific frequencies of the calibration lines and inter-modulation products. This indicates that the channel may actually be unsafe.
Plots for H1_PEM-CS_MAG_EBAY_LSCRACK_Y_DQ before / during / after
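The kind of before/during/after coherence check described above can be sketched as below; the sample rate, line frequency, and channel stand-ins are all synthetic, not actual H1 data:

```python
import numpy as np
from scipy.signal import coherence

fs = 1024  # Hz, assumed sample rate for this sketch
rng = np.random.default_rng(0)
t = np.arange(0, 64, 1 / fs)

f_line = 33.0  # Hz, stand-in for one calibration-line frequency
line = np.sin(2 * np.pi * f_line * t)

# "Before": magnetometer sees only its own noise -> low coherence with DARM
darm_before = rng.normal(size=t.size)
mag_before = rng.normal(size=t.size)

# "During": the same line appears in both channels -> coherence jumps at f_line,
# which is the signature of an unsafe channel
darm_during = rng.normal(size=t.size) + line
mag_during = 0.1 * line + rng.normal(size=t.size)

f, coh_before = coherence(darm_before, mag_before, fs=fs, nperseg=fs * 4)
_, coh_during = coherence(darm_during, mag_during, fs=fs, nperseg=fs * 4)

# pick out the bin at the line frequency
idx = np.argmin(np.abs(f - f_line))
```

A coherence jump at the injected frequencies only in the "during" period is what flags the channel.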
Research Presentation: https://dcc.ligo.org/G2402592
ETMX glitch lockloss tool
22:53 UTC back to Observing
Fri Dec 20 10:11:13 2024 INFO: Fill completed in 11min 10secs
TC-B is still elevated by about +30 C compared to TC-A and did not reach the trip temperature.
Closes FAMIS#26473, last checked 81396
HEPI pump trends looking as expected (ndscope). There is a period of rapid change in the pressure at EX 28 days ago, right after TJ put in his alog (linked above) about the pressure being a bit odd; that's because Jim and Mitchell had gone out to investigate it (81399).
TITLE: 12/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 3mph Gusts, 2mph 3min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.32 μm/s
QUICK SUMMARY:
Range looks fairly steady at just under 160 Mpc for the whole lock; the coherence check yielded.
TITLE: 12/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
H1 still locked for 32 hours! Currently Observing.
Aside from the brief internet outage, nothing really happened.
Violins looking good, no PIs ringing up or anything.
This is a good start going into the holidays.
TITLE: 12/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 7mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.34 μm/s
QUICK SUMMARY:
CDS & GS internet connectivity to the outside world went down. Issues due to Core Switch work?
Outage lasted from 2:00 UTC to ~ 2:24 UTC.
IFO stayed locked during that time; we now have a 28+ hour lock.
Control over the IFO was maintained throughout the entire outage.
Local networks stayed up and running just fine.
I called Dave & Jonathan. Jonathan told me to wait it out, as it's just a switch updating, rebooting, and falling back to another network.
I presume the falling back to the other network bit took some time.
All systems recovered the way Jonathan said they would. He advised me to wait it out if there is another one, and to call him again if things went down after 3:00 UTC (7pm local).
So far so good.
This alog follows up LHO:81769, where I calibrated the ASC drives to test mass motion for all eight arm ASC control loops. Now I have taken the noise budget injections that we run to measure the ASC coupling to DARM and used them to calibrate an angle-to-length coupling function in mm/rad. I have only done this for the HARD loops because the SOFT loops do not couple very strongly to DARM (notable exception of CSOFT P, which I will follow up on).
The noise budget code uses an excess power projection to DARM, but instead I chose to measure the linear transfer function. The coherence is just ok, so I think a good follow up is to remeasure the coupling again and drive a bit harder/average longer (these are 60 second measurements). This plot shows the noise budget injection into calibrated DARM/ASC PUM drive [m/Nm] transfer function, and the coherence of the measurement.
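A minimal sketch of estimating a linear transfer function and its coherence from an injection, using scipy's Welch/CSD estimators on synthetic data (the sample rate, coupling value, and noise level here are made up, not the real measurement parameters):

```python
import numpy as np
from scipy.signal import csd, welch

fs = 256  # Hz, assumed
rng = np.random.default_rng(1)
n = fs * 60  # a 60-second stretch, as in the measurements described above

drive = rng.normal(size=n)          # injected ASC drive (stand-in)
coupling = 25e-3                    # illustrative flat coupling, mm/rad scale
darm = coupling * drive + 5e-3 * rng.normal(size=n)  # response + sensing noise

nperseg = fs * 4
f, p_dd = welch(drive, fs=fs, nperseg=nperseg)
_, p_yy = welch(darm, fs=fs, nperseg=nperseg)
_, p_dy = csd(drive, darm, fs=fs, nperseg=nperseg)

tf = p_dy / p_dd                            # H1 transfer-function estimator
coh = np.abs(p_dy) ** 2 / (p_dd * p_yy)     # magnitude-squared coherence
```

Driving harder or averaging longer shrinks the error on |tf| roughly as sqrt((1 - coh) / (2 N coh)) per bin, which is why better coherence is the follow-up goal.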
I followed a similar calibration procedure to my previous alog:
I did not apply the drive matrix here, so the calibration is into ETM motion only (factor of +/-1), whereas the calibration into ITM motion would have an additional +/- 0.74 (+/- 0.72 for yaw) applied.
HARD Pitch Angle to Length coupling plot
HARD Yaw Angle to Length coupling plot
Overall, the best-measured DOF here is CHARD Y. In both CHARD Y and DHARD Y there seem to be two clear coupling regions: a fairly flat region above 20 Hz in CHARD Y and above 30 Hz in DHARD Y, reaching between 20-30 mm/rad, and a steep coupling below that. This is reminiscent of the coupling that Gabriele, Louis, and I measured back in March and tried to mitigate with A2L and a WFS offset. We found that we could reduce the flatter coupling in DHARD Y by adjusting the A2L gain, and the steeper coupling by applying a small DC offset in AS WFS A yaw. We are currently not running with that WFS offset. The yaw coupling suggests that we have some sort of miscentering on both the REFL and AS WFS, which causes a steep low-frequency coupling that is less sensitive to beam centering on the test mass (as shown by the A2L tests); meanwhile, the flat coupling is sensitive to beam miscentering on the test mass, which is expected (see e.g. T0900511).
The pitch coupling has the worst coherence here, but the coupling is certainly not flat. It appears to be rising with about f^4 at high frequency. I have a hard time understanding what could cause that. There is also possibly a similar steep coupling at low frequency like the yaw coupling, but the coherence is so poor it's hard to see.
Assuming that I have my calibration factors correct here (please don't assume this! check my work!), this suggests that the beam miscentering is higher than 1 mm everywhere and possibly up to 30 mm on the ETMs (remember this would be ~25% lower on the ITMs). This seems very large, so I'm hoping that there is another errant factor of two or something somewhere.
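For a flat angle-to-length coupling, the value in mm/rad reads directly as beam miscentering in mm (dL = d * dtheta); a quick worked version of the numbers quoted above, using the pitch drive-matrix ratio from the calibration note earlier in this entry:

```python
# A flat A2L coupling of C mm/rad implies the beam is ~C mm off the optic's
# rotation axis, since dL = d * dtheta.  Numbers are the ones quoted above.
etm_coupling_mm_per_rad = 30.0   # upper end of the flat CHARD/DHARD Y level
drive_ratio_itm = 0.74           # pitch drive-matrix ratio (0.72 for yaw)

etm_miscentering_mm = etm_coupling_mm_per_rad                    # ~30 mm on ETMs
itm_miscentering_mm = etm_coupling_mm_per_rad * drive_ratio_itm  # ~25% lower on ITMs
```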
My code for both the calibrated motion and calibrated coupling is in a git repo here: https://git.ligo.org/ecapote/ASC_calibration
Today I had a chance to rerun these injections so I could get better coherence, injection plot. I ran all the injections with the calibration lines off.
The pitch couplings now appear to be very flat, which is what we expect. However, they are very high (100 mm/rad!!), which seems nearly impossible.
The yaw couplings still show a strong frequency dependence below 30 Hz and are flat above, at around 30-50 mm/rad, still large.
Whether or not the overall beam miscentering value is correct, this does indicate that there is some funny behavior in yaw only that causes two different alignment coupling responses. Since this is observed in both DHARD and CHARD, it could be something common to both (so maybe less likely to be related to the DARM offset light on the AS WFS).
I also ran a measurement of the CSOFT P coupling, injection plot. I was only able to get good coherence up to 30 Hz, but it seems to be fairly flat too, CSOFT P coupling.
Edit: updated coupling plots to include error shading based on the measurement coherence.
TITLE: 12/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.32 μm/s
QUICK SUMMARY:
H1 has been locked for over 26 Hours.
Current plan: stay in Observing.
Note:
Dust mon LAB2 still not reporting back correctly.
TITLE: 12/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Locked for 26.5 hours so far. We took calibration measurements this morning, followed by 3 hours of commissioning time.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:53 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 00:38 |
15:38 | PCAL | Francisco | PCAL lab | local | Check on meas | 15:43 |
16:54 | FAC | Tyler | High Bay | N | While we are not observing | 17:24 |
17:08 | SEI | Jim, Mitchell | EY | n | Wind fence check | 18:54 |
18:21 | - | Richard | OSB | n | Opening receiving rollup | 18:36 |
18:44 | FAC | Kim | H2 build | n | Tech clean | 19:02 |
20:19 | VAC | Janos | MX, MY | n | Taking measurements | 21:44 |
22:12 | PCAL | Rick, Francisco, Dripta | PCAL lab | local | PCAL meas | 01:12 |
J. Freed,
SRM M1 stage BOSEMs (specifically T2, T3) show strong coupling below 10 Hz. SR3 M1 had some on the LF, RT BOSEMs between 10-15 Hz.
Today I did damping loop injections on all 6 BOSEMs on the SR3 and SRM M1 stages. This is a continuation of the work done previously for ITMX, ITMY, PR2, PR3, PRM, and SR2. As with PRM, gains of 300 and 600 were collected for SR3 (300 is labeled as ln or L). Only a gain of 600 was collected for SRM due to time constraints. Calibration lines were on for SR3.
The plots, code, and flagged frequencies are located at /ligo/home/joshua.freed/bosem/SR3/scripts, while the diaggui files for SR3 are at /ligo/home/joshua.freed/bosem/SR3/data. I used Sheila's code located at /ligo/home/joshua.freed/bosem/SR2/scripts/osem_budgeting.py to produce the sum of all contributions as well as the individual plots.
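The "sum of all contributions" combines the per-BOSEM ASD contributions in quadrature, assuming the sensor noises are independent; a minimal sketch with made-up channel names and numbers (not the actual osem_budgeting.py code):

```python
import numpy as np

# Each BOSEM injection yields that sensor's ASD contribution to the stage
# motion; independent contributions add in quadrature.  The names, shapes,
# and levels below are purely illustrative.
freqs = np.linspace(1, 20, 100)  # Hz
contributions = {
    "T2": 1e-9 / freqs**2,   # illustrative f^-2 shapes, m/rtHz
    "T3": 8e-10 / freqs**2,
    "LF": 2e-10 / freqs**2,
}

# quadrature sum over all sensors at each frequency
total = np.sqrt(sum(asd**2 for asd in contributions.values()))
```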
Commissioning wrapped up, back to Observing at 20:07 UTC.
Locked for 22 hours.
This morning I swapped out the dust monitor in the VAC prep lab for a spare that was calibrated earlier this year. After the swap, the connection issues remain.
Camilla, Sheila, Following on from 81852.
Today I re-ran the sqz/h1/scripts/SCAN_PSAMS.py script with the SQZ_MANAGER guardian paused (so it's not forcing the ADF servo on). The script worked as expected (setup by turning off the ADF servo and increasing the SQZ ASC gain to speed things up; then it changed the ZM PSAMS, waited 120s for the ASC to converge, turned off the ASC, scanned the sqz angle and saved the data, and then repeated) and took ~3m30s per step.
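The stepping logic described above can be sketched like this; the setter functions are hypothetical stand-ins for the real EPICS/guardian calls (not the actual SCAN_PSAMS.py code), and the voltage steps are illustrative:

```python
# Record of actions taken, so the loop structure can be inspected.
log = []

# Hypothetical stand-ins for the real control-system interfaces:
def set_psams(zm4_v, zm5_v):
    log.append(("psams", zm4_v, zm5_v))

def set_sqz_asc(enabled):
    log.append(("asc", enabled))

def scan_sqz_angle_and_save():
    log.append(("scan",))

zm4_steps = [5.6, 5.3, 5.0]    # volts, illustrative 0.3 V steps
zm5_steps = [-1.1, -1.4, -1.7]

set_sqz_asc(True)              # setup: ADF servo off, higher-gain SQZ ASC on
for zm4, zm5 in zip(zm4_steps, zm5_steps):
    set_psams(zm4, zm5)        # step the ZM PSAMS
    # time.sleep(120)          # wait for the ASC to converge (omitted in sketch)
    set_sqz_asc(False)         # freeze the ASC before the scan
    scan_sqz_angle_and_save()  # scan squeeze angle, save the data
    set_sqz_asc(True)          # re-engage ASC and repeat
```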
The results are attached (heatmap, scans) but didn't show an obvious direction to move in. Maybe the steps were too small, but they are already larger than those used at LLO (LLO#72749): 0.3V vs 0.1V per step.
Our initial ZM4/5 PSAMS values were 5.6V, -1.1V; looking at the attached data, we took them to 5.2V, -1.5V. We then decided the range looked better when the PSAMS were higher, so we went to 6.0V, -0.4V; this seemed to improve the range by ~5 Mpc. We checked by going back to 5.2V, -1.5V and the range again dropped; the main change appears to be in the orange BAND_2 OMC BLRMS, which is 20-34Hz (the places with glitches in the OAF BLRMS are noisy times from Robert's injections). We then went further in this direction to 6.5V, 0.0V; this didn't help the range, so we are staying at 6.0V, -0.4V, SDFs attached. This seemed to be a repeatable change that gave us a few Mpc in range! Success for PSAMS: zoomed out and zoomed in plots attached.
Future work: we could run the SCAN_PSAMS.py script with small 0.1V changes as LLO does.
Vicky asked us to check the HOM plot and wide plot; we don't see a big difference, and if anything it's worse, with larger HOM peaks after the PSAMS change. We can see that the 20-35Hz region in DARM looks a little better, though this does drift throughout the lock, as Sheila showed in 81843.
This alog presents the first steps I am taking into answering the question: "what is the calibrated residual test mass motion from the ASC?"
As a reminder, the arm alignment control is run in the common/differential, hard/soft basis, so we have eight total loops governing the angular motion of the test masses: pitch and yaw for differential hard/soft and common hard/soft. These degrees of freedom are diagonalized in actuation via the ASC drive matrix. The signals from each of these ASC degrees of freedom are then sent to each of the four test masses, where the signal is fed "up" from the input of the TST ISC filter banks through the PUM/UIM/TOP locking filter banks (I annotated this screenshot of the ITM suspension MEDM for visualization). No pitch or yaw actuation is sent to the TST or UIM stages at Hanford. The ASC drive to the PUM is filtered through some notches/bandstops for various suspension modes. The ASC drive to the TOP acquires all of these notches and bandstops plus an additional integrator and low pass filter, meaning that the top mass actuates in angle at very low frequency only (sub 0.5 Hz).
Taking this all into account involves a lot of work, so to just get something off the ground, I am only thinking about ASC drive to the PUM in this post. With a little more time, I can incorporate the drive to the top mass stage as well. Thinking only about the PUM makes this a "simple" problem:
I have done just this to produce the four plots attached to this alog. These plots show the ITM and ETM test mass motion in rad/rtHz from each degree of freedom and the overall radian RMS value. That is, each trace shows exactly how many radians of motion each ASC degree of freedom is sending to the test mass through the PUM drive. The drive matrix value is the same in magnitude for each ITM and each ETM, meaning that the "ITM" plot is true for both ITMX and ITMY (the drives might differ by an overall sign, though).
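The radian-RMS traces on plots like these come from integrating the ASD from the high-frequency end; a minimal sketch of that cumulative-RMS calculation, with a made-up 1/f ASD standing in for the real data:

```python
import numpy as np

# Cumulative RMS from an amplitude spectral density a(f) [rad/rtHz]:
#   rms(f) = sqrt( integral from f to f_max of a(f')^2 df' )
# so the value at the lowest frequency is the overall RMS shown on the plot.
f = np.linspace(0.1, 100, 1000)  # Hz
asd = 1e-9 / f                   # illustrative 1/f ASD, rad/rtHz

psd = asd**2
df = np.diff(f)
seg = 0.5 * (psd[:-1] + psd[1:]) * df          # trapezoid area of each bin
rms = np.sqrt(np.cumsum(seg[::-1])[::-1])      # integrate from high f downward
total_rms = rms[0]                             # overall radian RMS
```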
Since I am just looking at the PUM, I also didn't include the drive notches. Once I add in the top mass drive, I will make sure I capture the various drive filters properly.
Some commentary: these plots make it very evident how different the drive is from each ASC degree of freedom. This is confusing because in principle we know that the "HARD" and "SOFT" plants are the same for common and differential and could use the same control design. However, we know that the sensor noise at the REFL WFS, which controls CHARD, is different from the sensor noise at the AS WFS, which controls DHARD, so even with the exact same controller we would see different overall drives. We also know that we don't use the same control design for each DOF, due to the sensor noise limitations and also the randomness of commissioning that has us updating each ASC controller at different times for different reasons. For example, the soft loops both run on the TMS QPDs but still have different drive levels.
Some action items: besides continuing the process of getting all the drives from all stages properly calibrated, we can start thinking again about our ASC design and how to improve it. I think two standout items are the SOFT P and CHARD Y noise above 10 Hz on these plots. Also, the fact that the overall RMS from each loop varies is something that warrants more investigation. I think this is probably related to the differing control designs, sensor noise, and noise from things like HAM1 motion or PR3 bosems. So, one thing I can do is project the PR3 damping noise that we think dominates the REFL WFS RMS into test mass motion.
I have just realized I mixed up the DACs and ADCs (again) and the correct count-to-torque calibration should be:
So these plots are wrong by a factor of 2. I will correct this in my code and post the corrected plots shortly.
The attached plots are corrected for the erroneous factor of two mentioned above, which has the overall effect of reducing the motion by a factor of 2.