H1 DetChar
jason.levelle@LIGO.ORG - posted 15:45, Friday 20 December 2024 (81928)
"Safe" channel CS MAG EBAY LSCRACK Y found to be unsafe when looking at 2 week duration

New calibration lines turned on 7/25/2023 resulted in unexpected additional inter-modulation products. These new calibration lines were turned off on 8/9/2023.

When looking at coherence between the main detector output and safe channels, we expect no change across the before/during/after periods of the new calibration lines and additional inter-modulation products. However, in the “safe” channel CS MAG EBAY LSCRACK Y, we see the coherence jump at the specific frequencies of the calibration lines and inter-modulation products. This indicates that the channel may actually be unsafe.
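As a concrete illustration of the check (a minimal sketch, not the actual DetChar pipeline; the GPS times and FFT parameters are placeholders):

from gwpy.timeseries import TimeSeries

def coherence_at(start, duration, aux='H1:PEM-CS_MAG_EBAY_LSCRACK_Y_DQ'):
    """Coherence between the strain channel and a PEM channel over one stretch."""
    strain = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, start + duration)
    mag = TimeSeries.get(aux, start, start + duration)
    # long (64 s) FFTs resolve the narrow calibration lines and
    # inter-modulation products
    return strain.coherence(mag, fftlength=64, overlap=32)

# placeholder GPS times bracketing the 7/25/2023-8/9/2023 line-on period
before = coherence_at(1374000000, 4096)
during = coherence_at(1375000000, 4096)
after  = coherence_at(1376500000, 4096)
# an unsafe channel shows coherence peaks at the line frequencies only "during"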

Plots for H1_PEM-CS_MAG_EBAY_LSCRACK_Y_DQ  before / during / after

Research Presentation: https://dcc.ligo.org/G2402592

Images attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 13:02, Friday 20 December 2024 - last comment - 14:54, Friday 20 December 2024(81926)
Lockloss 20:59 UTC 47 hour lock

ETMX glitch lockloss tool

Comments related to this report
ryan.crouch@LIGO.ORG - 14:54, Friday 20 December 2024 (81927)

22:53 UTC back to Observing

LHO VE
david.barker@LIGO.ORG - posted 10:28, Friday 20 December 2024 (81924)
Fri CP1 Fill

Fri Dec 20 10:11:13 2024 INFO: Fill completed in 11min 10secs

TC-B is still elevated by about +30C compared to TC-A and did not reach the trip temp.

Images attached to this report
H1 SEI
oli.patane@LIGO.ORG - posted 10:18, Friday 20 December 2024 (81923)
HEPI Pump Trends Monthly FAMIS

Closes FAMIS#26473, last checked 81396


HEPI pump trends are looking as expected (ndscope). There is a period of rapid pressure change at EX 28 days ago, right after TJ put in his alog (linked above) about the pressure being a bit weird; that's because Jim and Mitchell had gone out to investigate it (81399).

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 07:24, Friday 20 December 2024 - last comment - 07:41, Friday 20 December 2024(81920)
OPS Friday DAY shift start

TITLE: 12/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT
    Wind: 3mph Gusts, 2mph 3min avg
    Primary useism: 0.07 μm/s
    Secondary useism: 0.32 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 07:41, Friday 20 December 2024 (81921)

Range looks fairly steady at just under 160 Mpc for the whole lock; the attached images show what the coherence check yielded.

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 22:04, Thursday 19 December 2024 (81919)
Thursday Eve shift End

TITLE: 12/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
H1 Still locked for 32 hours! Currently Observing.
Aside from the brief internet outage nothing really happened.
Violins looking good, no PIs ringing up or nothin'.
This is a good start going into the holidays.
H1 CDS
anthony.sanchez@LIGO.ORG - posted 18:38, Thursday 19 December 2024 (81918)
CDS & GC outbound internet connection interruption.

TITLE: 12/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 7mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.34 μm/s
QUICK SUMMARY:
CDS & GC internet connectivity to the outside world went down. Issues due to Core Switch work?
The outage lasted from 2:00 UTC to ~2:24 UTC.

The IFO stayed locked during that time; we now have a 28+ hour lock.
Control over the IFO was maintained throughout the entire outage.
Local networks stayed up and running just fine.
I called Dave & Jonathan; Jonathan told me to wait it out, as it's just a switch updating, rebooting, and falling back to another network.
I presume the falling back to the other network bit took some time.

All systems recovered the way Jonathan said they would. He advised me to wait it out if there is another outage, and to call him again if things go down after 3:00 UTC (7 pm local).
So far so good.

H1 ISC
elenna.capote@LIGO.ORG - posted 17:36, Thursday 19 December 2024 - last comment - 15:08, Monday 06 January 2025(81917)
Calibrated Angle to Length Coupling from HARD loops

This alog follows up LHO:81769, where I calibrated the ASC drives to test mass motion for all eight arm ASC control loops. Now, I have taken the noise budget injections that we run to measure the ASC coupling to DARM and used them to calibrate an angle-to-length coupling function in mm/rad. I have only done this for the HARD loops because the SOFT loops do not couple very strongly to DARM (a notable exception is CSOFT P, which I will follow up on).

The noise budget code uses an excess-power projection to DARM, but instead I chose to measure the linear transfer function. The coherence is just OK, so I think a good follow-up is to remeasure the coupling and drive a bit harder/average longer (these are 60 second measurements). This plot shows the transfer function of calibrated DARM over the calibrated ASC PUM drive [m/(N m)] for the noise budget injection, along with the coherence of the measurement.
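For reference, here is a minimal sketch of this kind of transfer-function estimate (illustrative only, not the actual noise budget code; the sample rate and segment length are assumptions):

import numpy as np
from scipy.signal import csd, welch, coherence

def measure_tf(drive, darm, fs=512, nperseg=4096):
    """H1 transfer-function estimator and coherence between an injected ASC
    drive and DARM; 60 s of data at 8 s segments gives only ~14 averages,
    hence the so-so coherence."""
    f, Pxy = csd(drive, darm, fs=fs, nperseg=nperseg)
    f, Pxx = welch(drive, fs=fs, nperseg=nperseg)
    f, coh = coherence(drive, darm, fs=fs, nperseg=nperseg)
    tf = Pxy / Pxx  # cross-spectrum over drive power spectrum
    # 1-sigma relative error on |tf| from the coherence and number of averages
    navg = 2 * len(drive) // nperseg - 1
    rel_err = np.sqrt((1 - coh) / (2 * navg * np.clip(coh, 1e-6, 1)))
    return f, tf, coh, rel_err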

I followed a similar calibration procedure to my previous alog:

I did not apply the drive matrix here, so the calibration is into ETM motion only (a factor of +/-1), whereas the calibration into ITM motion would have an additional factor of +/-0.74 (+/-0.72 for yaw) applied.

HARD Pitch Angle to Length coupling plot

HARD Yaw Angle to Length coupling plot

Overall, the best-measured DOF here is CHARD Y. In both CHARD Y and DHARD Y, there seem to be two clear coupling regions: a fairly flat region above 20 Hz in CHARD Y and above 30 Hz in DHARD Y, reaching between 20-30 mm/rad, and below that, a steep coupling. This is reminiscent of the coupling that Gabriele, Louis, and I measured back in March and tried to mitigate with A2L and a WFS offset. We found that we could reduce the flatter coupling in DHARD Y by adjusting the A2L gain, and the steeper coupling by applying a small offset in AS WFS A yaw DC. We are currently not running with that WFS offset. The yaw coupling suggests that we have some sort of miscentering on both the REFL and AS WFS, which causes a steep low-frequency coupling that is less sensitive to beam centering on the test mass (as shown by the A2L tests); meanwhile, the flat coupling is sensitive to beam miscentering on the test mass, which is expected (see e.g. T0900511).

The pitch coupling has the worst coherence here, but the coupling is certainly not flat: it appears to rise as roughly f^4 at high frequency. I have a hard time understanding what could cause that. There is also possibly a steep coupling at low frequency similar to the yaw coupling, but the coherence is so poor it's hard to see.

Assuming that I have my calibration factors correct here (please don't assume this! check my work!), this suggests that the beam miscentering is higher than 1 mm everywhere and possibly up to 30 mm on the ETMs (remember this would be ~25% lower on the ITMs). This seems very large, so I'm hoping that there is another errant factor of two or something somewhere.
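(As a unit sanity check, not part of the measurement: a beam displaced by d from the optic's rotation axis converts an angle θ into a path-length change δL ≈ d·θ, so an angle-to-length coupling of 30 mm/rad reads directly as an effective beam-spot offset of d ≈ 30 mm on that optic.)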

My code for both the calibrated motion and calibrated coupling is in a git repo here: https://git.ligo.org/ecapote/ASC_calibration

Non-image files attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 15:08, Monday 06 January 2025 (82136)

Today I had a chance to rerun these injections so I could get better coherence (injection plot). I ran all the injections with the calibration lines off.

The pitch couplings now appear to be very flat, which is what we expect. However, they are very high (100 mm/rad!!), which seems nearly impossible.

The yaw couplings still show a strong frequency dependence below 30 Hz, are flat above, and sit around 30-50 mm/rad, still large.

Whether or not the overall beam miscentering value is correct, this does indicate that there is some funny behavior in yaw only that causes two different alignment coupling responses. Since this is observed in both DHARD and CHARD, it could be something common to both (so maybe less likely to be related to the DARM offset light on the AS WFS).

I also ran a measurement of the CSOFT P coupling (injection plot). I was only able to get good coherence up to 30 Hz, but it seems to be fairly flat too (CSOFT P coupling).

Edit: updated coupling plots to include error shading based on the measurement coherence.

Non-image files attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 16:49, Thursday 19 December 2024 (81916)
Thursday Eve shift start

TITLE: 12/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.32 μm/s
QUICK SUMMARY:
H1 has been locked for over 26 Hours.
Current plan: stay in Observing.

Note:
Dust mon LAB2 still not reporting back correctly.
 

LHO General
thomas.shaffer@LIGO.ORG - posted 16:27, Thursday 19 December 2024 (81905)
Ops Day Shift End

TITLE: 12/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Locked for 26.5 hours so far. We took calibration measurements this morning, followed by 3 hours of commissioning time.
LOG:

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
16:53 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 00:38
15:38 | PCAL | Francisco | PCAL lab | local | Check on meas | 15:43
16:54 | FAC | Tyler | High Bay | N | While we are not observing | 17:24
17:08 | SEI | Jim, Mitchell | EY | n | Wind fence check | 18:54
18:21 | - | Richard | OSB | n | Opening receiving rollup | 18:36
18:44 | FAC | Kim | H2 build | n | Tech clean | 19:02
20:19 | VAC | Janos | MX, MY | n | Taking measurements | 21:44
22:12 | PCAL | Rick, Francisco, Dripta | PCAL lab | local | PCAL meas | 01:12
H1 SUS (SEI, SUS)
joshua.freed@LIGO.ORG - posted 15:07, Thursday 19 December 2024 (81914)
SR3,SRM OSEM Noise Injections

J. Freed,

SRM M1 stage BOSEMs (specifically T2, T3) show strong coupling below 10 Hz. SR3 M1 had some on the LF and RT BOSEMs between 10-15 Hz.

Today I did damping loop injections on all 6 BOSEMs on SR3 and SRM M1. This is a continuation of the work done previously for ITMX, ITMY, PR2, PR3, PRM, and SR2. As with PRM, gains of 300 and 600 were collected for SR3 (300 is labeled as ln or L). Only a gain of 600 was collected for SRM due to time constraints. Calibration lines were on for SR3.

The plots, code, and flagged frequencies are located at /ligo/home/joshua.freed/bosem/SR3/scripts, while the diaggui files for SR3 are at /ligo/home/joshua.freed/bosem/SR3/data. I used Sheila's code located under /ligo/home/joshua.freed/bosem/SR2/scripts/osem_budgeting.py to produce the sum of all contributions as well as the individual plots; the budgeting step is sketched below.
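Schematically, the budgeting step works like this (a sketch under my own naming, not the actual osem_budgeting.py):

import numpy as np

def injection_coupling(darm_inj, darm_quiet, osem_inj, osem_quiet):
    """Per-bin coupling from an excess-power comparison: DARM ASD above
    background divided by OSEM drive ASD above background."""
    excess_darm = np.sqrt(np.clip(darm_inj**2 - darm_quiet**2, 0, None))
    excess_osem = np.sqrt(np.clip(osem_inj**2 - osem_quiet**2, 0, None))
    return np.divide(excess_darm, excess_osem,
                     out=np.zeros_like(excess_darm), where=excess_osem > 0)

def ambient_projection(coupling, osem_quiet):
    """Project quiet-time OSEM noise through the measured coupling to DARM."""
    return coupling * osem_quiet

def total_contribution(projections):
    """Individual BOSEM projections add in quadrature (incoherent sum)."""
    return np.sqrt(sum(np.asarray(p)**2 for p in projections))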

SRM.png shows the SRM M1 BOSEMs' contribution to DARM at 600 amplitude on each BOSEM's excitation channel. Only T2 and T3 seem to have a noticeable effect.
SR3.png shows the SR3 M1 BOSEMs' contribution to DARM at 600 amplitude. Besides the bounce mode at 9 Hz, SR3 only had some coupling from 10-15 Hz on the LF and RT BOSEMs. There might be more hidden behind the calibration lines.
main.png shows the current version of the sum of the BOSEM contributions. Only PR3, PRM, SR2, SR3, and SRM are included so far.
Side note:
Screenshot_60Hz_background.png shows the SRM M1 T3 out channel giving an unusually strong 60 Hz signal during quiet time. All BOSEMs on SRM were giving this signal. Unsure if this is a continuous issue; it's unlikely to have affected this measurement, but worth pointing out.
 
Reference numbers in the diaggui files for SR3:
Background time: (ref0 DARM, ref1 LF_out, ref2 RT_out, ref3 SD_out, ref4 T1_out, ref5 T2_out, ref6 T3_out)
LFL time: (ref7 DARM, ref8 LF_out)
LF time: (ref9 DARM, ref10 LF_out)
RTL time: (ref11 DARM, ref12 RT_out)
RT time: (ref13 DARM, ref14 RT_out)
SDL time: (ref15 DARM, ref16 SD_out)
SD time: (ref17 DARM, ref18 SD_out)
T1L time: (ref19 DARM, ref20 T1_out)
T1 time: (ref21 DARM, ref22 T1_out)
T2L time: (ref23 DARM, ref24 T2_out)
T2 time: (ref25 DARM, ref26 T2_out)
T3L time: (ref27 DARM, ref28 T3_out)
T3 time: (ref29 DARM, ref30 T3_out)
 
Reference numbers in the diaggui files for SRM:
Background time: (ref0 DARM, ref1 LF_out, ref2 RT_out, ref3 SD_out, ref4 T1_out, ref5 T2_out, ref6 T3_out)
LF time: (ref7 DARM, ref8 LF_out)
RT time: (ref9 DARM, ref10 RT_out)
SD time: (ref11 DARM, ref12 SD_out)
T1 time: (ref13 DARM, ref14 T1_out)
T2 time: (ref15 DARM, ref16 T2_out)
T3 time: (ref17 DARM, ref18 T3_out)
 
Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 12:11, Thursday 19 December 2024 (81911)
Back to Observing 2007UTC

Commissioning wrapped up; back to Observing at 2007 UTC.

Locked for 22 hours.

H1 General
ryan.crouch@LIGO.ORG - posted 11:59, Thursday 19 December 2024 (81910)
Dust monitor swap

This morning I swapped out the dust monitor in the VAC prep lab for a spare that was calibrated earlier this year. After the swap, the connection issues remain.

H1 GRD (ISC, OpsInfo, SUS)
oli.patane@LIGO.ORG - posted 11:51, Thursday 19 December 2024 (81909)
SUS_PI guardian now increases ETMY ESD bias to help damp PI24

Since we've been having issues with PI24 ringing up lately, we decided to change the SUS_PI guardian code to increase the bias offset on ETMY L3 LOCK. That bias offset value is normally -4.9, and we noticed that when damping PI24 we typically start hitting our damping limit when PI24 RMSMON exceeds a value of 10. I have turned the PI_DAMPING state into a generator function so it can now make two different states, PI_DAMPING and EXTREME_PI_DAMPING. When in PI_DAMPING, PI24 will be damped with our normal bias until we reach an RMSMON average value of 20. Once that happens, the guardian will jump to a state called INCREASE_BIAS (what it does is self-explanatory), and then goes into EXTREME_PI_DAMPING, where it continues damping and changing phase as needed until we are back under an RMSMON value of 4 (pi24 script logic). Once we are good, the guardian will take us through DECREASE_BIAS (guess what that does) and then back into PI_DAMPING (node graph, states list).
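A minimal sketch of that generator-function pattern (thresholds as described above; the RMSMON channel name is a placeholder, and this is not the actual SUS_PI code):

from guardian import GuardState

def gen_PI_DAMPING(extreme=False):
    """Build PI_DAMPING or EXTREME_PI_DAMPING from the same damping logic."""
    class PI_DAMPING(GuardState):
        request = True

        def run(self):
            # ezca is provided as a global inside guardian nodes;
            # placeholder channel name for the PI24 RMS monitor
            rms = ezca['SUS-PI_PROC_COMPUTE_MODE24_RMSMON']
            if not extreme and rms > 20:
                return 'INCREASE_BIAS'   # escalate: jump off to raise the ESD bias
            if extreme and rms < 4:
                return 'DECREASE_BIAS'   # ringup handled: step the bias back down
            # otherwise keep damping / adjusting the phase as needed
            return True
    return PI_DAMPING

PI_DAMPING = gen_PI_DAMPING(extreme=False)
EXTREME_PI_DAMPING = gen_PI_DAMPING(extreme=True)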

Because the nominal state for Observing is PI_DAMPING, when we move into one of these other states we will be taken out of Observing (tagging OPS). Once we are back in PI_DAMPING we will automatically go back into Observing as long as we are in AUTOMATIC.

Occasionally we do get small, quick ringups that go above 20 but can be damped by regular damping and just changing the phase (ndscope1), versus the slower ringups that cannot (ndscope2). With these script changes, when these happen we will be taken out of Observing as the SUS_PI guardian jumps to raise the bias and damp harder. These don't happen very often, so we are okay for now, but a future to-do is to edit the script to keep it from increasing the bias at least until it's tried all phase changes.

Images attached to this report
H1 SQZ
camilla.compton@LIGO.ORG - posted 11:42, Thursday 19 December 2024 - last comment - 14:35, Thursday 19 December 2024(81908)
SCAN_PSAMS.py script ran, PSAMS changed by looking at OAF-RANGE_BAND2 and Range

Camilla, Sheila, Following on from 81852.

Today I re-ran the sqz/h1/scripts/SCAN_PSAMS.py script with the SQZ_MANAGER guardian paused (so it's not forcing the ADF servo on). The script worked as expected (it set up by turning off the ADF servo and increasing the SQZ ASC gain to speed things up, then changed the ZM PSAMS, waited 120 s for the ASC to converge, turned off the ASC, scanned the squeeze angle and saved the data, then repeated), taking ~3m30s per step.
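Schematically, each step does something like the following (every channel name here is a placeholder; this is not the actual SCAN_PSAMS.py):

import time

def psams_step(ezca, zm4_volts, zm5_volts):
    ezca['SQZ-ADF_SERVO_ON'] = 0           # placeholder: hold the ADF servo off
    ezca['SQZ-ASC_GAIN'] = 2.0             # placeholder: raise SQZ ASC gain to converge faster
    ezca['SQZ-ZM4_PSAMS_SET'] = zm4_volts  # placeholder PSAMS setpoints
    ezca['SQZ-ZM5_PSAMS_SET'] = zm5_volts
    time.sleep(120)                        # wait for the SQZ ASC to converge
    ezca['SQZ-ASC_GAIN'] = 0.0             # turn off ASC before the angle scan
    # ...then scan the squeeze angle, save the data, and repeat for the next step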

The results are attached (heatmap, scans) but didn't show an obvious direction to move in. Maybe the steps were too small, but they are already larger than those used at LLO (LLO#72749): 0.3 V vs 0.1 V per step.

Our initial ZM4/5 PSAMS values were 5.6 V, -1.1 V, and looking at the attached data, we took them to 5.2 V, -1.5 V. We then decided the range looked better when the PSAMS were higher, so we went to 6.0 V, -0.4 V; this seemed to improve the range by ~5 Mpc. We checked by going back to 5.2 V, -1.5 V, and the range again dropped. The main change appears to be in the orange BAND_2 OMC BLRMS, which is 20-34 Hz (the places with glitches in the OAF BLRMS are noisy times from Robert's injections). We then went further in this direction to 6.5 V, 0.0 V; this didn't help the range, so we are staying at 6.0 V, -0.4 V (sdf's attached). This seemed to be a repeatable change that gave us a few Mpc in range! Success for PSAMS: zoomed out and zoomed in plots attached.

Future work: we could run the SCAN_PSAMS.py script with small 0.1V changes as LLO does.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:35, Thursday 19 December 2024 (81913)

Vicky asked me to check the HOM plot and wide plot. I don't see a big difference; if anything it's worse, with larger HOM peaks after the PSAMS change. We can see that the 20-35 Hz region in DARM looks a little better, though this does drift throughout the lock as Sheila showed in 81843.

Images attached to this comment
H1 ISC (SUS)
elenna.capote@LIGO.ORG - posted 15:25, Wednesday 11 December 2024 - last comment - 15:58, Thursday 19 December 2024(81769)
Test mass motion from ASC drives

This alog presents the first steps I am taking into answering the question: "what is the calibrated residual test mass motion from the ASC?"

As a reminder, the arm alignment control is run in the common/differential, hard/soft basis, so we have eight total loops governing the angular motion of the test masses: pitch and yaw for differential hard/soft and common hard/soft. These degrees of freedom are diagonalized in actuation via the ASC drive matrix. The signals from each of these ASC degrees of freedom are then sent to each of the four test masses, where the signal is fed "up" from the input of the TST ISC filter banks through the PUM/UIM/TOP locking filter banks (I annotated this screenshot of the ITM suspension medm for visualization). No pitch or yaw actuation is sent to the TST or UIM stages at Hanford. The ASC drive to the PUM is filtered through some notches/bandstops for various suspension modes. The ASC drive to the TOP acquires all of these notches and bandstops plus an additional integrator and low pass filter, meaning that the top mass actuates in angle at very low frequency only (below ~0.5 Hz).

Taking this all into account involves a lot of work, so to just get something off the ground, I am only thinking about the ASC drive to the PUM in this post. With a little more time, I can incorporate the drive to the top mass stage as well. Thinking only about the PUM makes this a "simple" problem.

I have done just this to produce the four plots attached to this alog. These plots show the ITM and ETM test mass motion in rad/rtHz from each degree of freedom, along with the overall radian RMS value. That is, each trace shows exactly how many radians of motion each ASC degree of freedom is sending to the test mass through the PUM drive. The drive matrix value is the same in magnitude for each ITM and each ETM, meaning that the "ITM" plot is true for both ITMX and ITMY (the drives might differ by an overall sign, though).

Since I am just looking at the PUM, I also didn't include the drive notches. Once I add in the top mass drive, I will make sure I capture the various drive filters properly.

Some commentary: these plots make it very evident how different the drive is from each ASC degree of freedom. This is confusing because in principle we know that the "HARD" and "SOFT" plants are the same for common and differential, and could use the same control design. However, we know that the sensor noise at the REFL WFS, which controls CHARD, is different from the sensor noise at the AS WFS that controls DHARD, so even with the exact same controller, we would see different overall drives. We also know that we don't use the same control design for each DOF, due to the sensor noise limitations and also the randomness of commissioning that has us updating each ASC controller at different times for different reasons. For example, the soft loops both run on the TMS QPDs, but still have different drive levels.

Some action items: besides continuing the process of getting all the drives from all stages properly calibrated, we can start thinking again about our ASC design and how to improve it. I think two standout items are the SOFT P and CHARD Y noise above 10 Hz on these plots. Also, the fact that the overall RMS from each loop varies is something that warrants more investigation. I think this is probably related to the differing control designs, sensor noise, and noise from things like HAM1 motion or PR3 BOSEMs. So, one thing I can do is project the PR3 damping noise that we think dominates the REFL WFS RMS into test mass motion.

Non-image files attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:05, Thursday 19 December 2024 (81912)

I have just realized I mixed up the DACs and ADCs (again) and the correct count-to-torque calibration should be:

  • for PUM: 20 V / 2**20 ct [DAC] * 0.268 mA/V [drive strength] * 0.0309 N/A [force coeff] * 70.7 mm [lever arm]; the pit/yaw lever arm is the same for the PUM

So these plots are wrong by a factor of 2. I will correct this in my code and post the corrected plots shortly.
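For reference, multiplying that chain out (just the factors quoted above):

dac_gain    = 20 / 2**20   # V per count [DAC]
drive       = 0.268e-3     # A per V [drive strength]
force_coeff = 0.0309       # N per A [force coeff]
lever_arm   = 70.7e-3      # m [PUM lever arm, same for pitch and yaw]

torque_per_count = dac_gain * drive * force_coeff * lever_arm
print(f"{torque_per_count:.3e} N*m per count")  # ~1.117e-11 N*m/ct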

elenna.capote@LIGO.ORG - 15:58, Thursday 19 December 2024 (81915)

The attached plots are corrected for the erroneous factor of two mentioned above, which has the overall effect of reducing the motion by a factor of 2.

Non-image files attached to this comment