Reports until 14:58, Friday 30 August 2024
LHO FMCS (PEM)
ryan.short@LIGO.ORG - posted 14:58, Friday 30 August 2024 (79826)
HVAC Fan Vibrometers Check - Weekly

FAMIS 26326, last checked in alog79788

All fans are looking normal and within range.

Images attached to this report
H1 SEI
ryan.short@LIGO.ORG - posted 14:51, Friday 30 August 2024 (79824)
BRS Drift Trends - Monthly

FAMIS 26446, last checked in alog79327

Both BRSs look good. BRS-X was drifting towards the upper limit, but has since started to turn around.

Images attached to this report
H1 GRD
thomas.shaffer@LIGO.ORG - posted 14:07, Friday 30 August 2024 (79820)
Guardian user code file comparison between sites

I was curious to see what files the two observatories actually share in their guardian user code. I've attached full lists, but here is a breakdown of the file comparisons.

LHO

Total number of nodes:   166

Total files used by nodes:   258

Files unique to LHO:   163

Files shared with LLO:   95

Files in common but only used by LHO:   30
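
The comparison itself is just set arithmetic on the per-site file lists. A minimal sketch of how numbers like these could be reproduced, assuming plain-text lists like the attachments (one path per line; the filenames below are placeholders, not the real attachments):

```python
# Sketch only: reproduce a breakdown like the one above from two plain-text
# file lists, one path per line. Filenames are hypothetical placeholders.
lho_files = set(open('lho_guardian_files.txt').read().split())
llo_files = set(open('llo_guardian_files.txt').read().split())

shared = lho_files & llo_files      # files present at both sites
lho_only = lho_files - llo_files    # files unique to LHO

print(f'Total files used by LHO nodes: {len(lho_files)}')
print(f'Files unique to LHO:           {len(lho_only)}')
print(f'Files shared with LLO:         {len(shared)}')
```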

 

A few key takeaways:

Non-image files attached to this report
H1 ISC
anthony.sanchez@LIGO.ORG - posted 11:21, Friday 30 August 2024 - last comment - 15:53, Friday 30 August 2024(79819)
Pre-Vent vs Post-Vent Range Comparisons, Range Coherence, and BLRMS Plots

after_key = 'LHO_1409042566'
b4_key = 'LHO' # 1403879360 # Hot OM2, 2024/07/01 14:29UTC

(aligoNB) anthony.sanchez@cdsws29: python H1/darm_intergal_compare.py

Figures made by this script will be placed in:
/ligo/gitcommon/NoiseBudget/aligoNB/out/H1/darm_intergal_compare

Fetching data from nds.ligo-wa.caltech.edu:31200 with GPS times 1409042566 to 1409043166


Successfully retrieved data from nds.ligo-wa.caltech.edu:31200 with GPS times 1409042566 to 1409043166

data is 60.1 days old

Fetching data from nds.ligo-wa.caltech.edu:31200 with GPS times 1403879360 to 1403879960


Successfully retrieved data from nds.ligo-wa.caltech.edu:31200 with GPS times 1403879360 to 1403879960

running lho budget
Saving file as /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/darm_intergal_compare/compare_darm_spectra_OM2_hot_vs_cold_no_sqz.svg
Saving file as /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/darm_intergal_compare/compare_darm_range_integrand_OM2_hot_vs_cold_no_sqz.svg
Saving file as /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/darm_intergal_compare/compare_cumulative_range_OM2_hot_vs_cold_no_sqz.svg
Saving file as /ligo/gitcommon/NoiseBudget/aligoNB/out/H1/darm_intergal_compare/cumulative_range_big_OM2_hot_vs_cold_no_sqz.svg

Script darm_intergal_compare.py done in 0.23667725324630737 minutes
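
For context, the core of what this script does is fetch the two DARM stretches and overlay their spectra and range integrands. A minimal gwpy sketch of the spectra comparison (GPS times taken from the output above; the channel name and FFT settings are assumptions, not necessarily what darm_intergal_compare.py actually uses):

```python
# Sketch of the spectra comparison, assuming gwpy is available. The GPS
# stretches match the script output above; the channel name is an assumption.
from gwpy.timeseries import TimeSeries

HOST, PORT = 'nds.ligo-wa.caltech.edu', 31200
CHAN = 'H1:GDS-CALIB_STRAIN'  # assumed DARM channel

after = TimeSeries.fetch(CHAN, 1409042566, 1409043166, host=HOST, port=PORT)
before = TimeSeries.fetch(CHAN, 1403879360, 1403879960, host=HOST, port=PORT)

# Overlay the two amplitude spectral densities
plot = after.asd(fftlength=16, overlap=8).plot(label='after (GPS 1409042566)')
ax = plot.gca()
ax.plot(before.asd(fftlength=16, overlap=8), label='before (GPS 1403879360, hot OM2)')
ax.set_xlim(10, 5000)
ax.legend()
plot.savefig('compare_darm_spectra_sketch.svg')
```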


H1's current coherence with jitter, LSC signals, and ASC signals plot.
 

Current H1 Squeezer BLRMS

 

Bruco (brute-force coherence, which looks at the coherence between many channels) was the next thing I wanted to try, but I was greeted with a permission denied error when trying to ssh into the cluster with my A.E. credentials.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 12:06, Friday 30 August 2024 (79821)

I ran a bruco on GDS CLEAN with data from the current lock after 11 hrs of lock time. My instructions. I have been running my brucos on the LHO cluster lately because it seems like every time I run on Caltech I get some error.

bruco command:

python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1409075057 --length=600 --outfs=4096 --fres=0.1 --dir=/home/elenna.capote/public_html/brucos/GDS_CLEAN_1409075057 --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 --excluded=/home/elenna.capote/bruco-excluded/lho_DARM_excluded.txt

Results:

https://ldas-jobs.ligo-wa.caltech.edu/~elenna.capote/brucos/GDS_CLEAN_1409075057/

Greatest hits:

  • ASC coherences: CHARD P, CHARD Y, DHARD Y, previously discussed in various alogs about spot positions, etc. 79776, 79790, 79807
  • Some SRCL. I wrote "there may be some worsening of the coupling 100-200 Hz" in my FF tuning alog, and that's exactly what happened. I can reassess the fit. 79755
  • MICH, only at low frequency, previously discussed in FF alogs, 79755
  • PRCL, low frequency perhaps due to ASC, and the 100 Hz coupling perhaps due to SRCL? Also previously discussed, 79807
  • LSC REFL RIN coherence across a large part of the band; this is the kind of thing that prompted us to try a PRCL offset in April, 76805, 76814
  • Some OMC REFL
  • IMC REFL ?
  • One of many PEM ACC coherences, this one in the PSL; it looks like the IMC REFL coherence, ?
    • This "shape of coherence" is also appearing in various ISS sensors. Is this jitter or intensity noise?
  • SUS ITMY L3, which is maybe a red herring, or maybe just another manifestation of the various ASC coherences?
  • HAM1 L4C coherence; HAM1 feedforward perhaps not performing as well as it could
  • HAM2 GS13, ?
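
(For anyone unfamiliar with bruco: at its core it estimates the coherence between DARM and every available auxiliary channel and ranks the channels by it. A rough, purely illustrative sketch of that idea, not bruco's actual implementation:)

```python
# Rough sketch of a brute-force coherence scan (not bruco's actual code):
# rank auxiliary channels by their peak coherence with DARM.
import numpy as np
from scipy.signal import coherence

def rank_by_coherence(darm, aux_channels, fs, fres=0.1, fmin=7, fmax=2000):
    """darm: 1-D array; aux_channels: dict name -> 1-D array at the same fs."""
    nperseg = int(fs / fres)                    # frequency resolution ~ fres Hz
    results = []
    for name, aux in aux_channels.items():
        f, coh = coherence(darm, aux, fs=fs, nperseg=nperseg)
        band = (f >= fmin) & (f <= fmax)
        results.append((name, coh[band].max()))
    return sorted(results, key=lambda r: r[1], reverse=True)
```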

 

sushant.sharma-chaudhary@LIGO.ORG - 15:53, Friday 30 August 2024 (79829)
At Livingston, we have been using a separate DIAG_COH guardian to monitor the health of the feedforward. It is similar to bruco in functionality, in the sense that it computes coherence. It runs automatically every 3 hours within an observing period and computes the band-limited coherence of channels according to a set config file. An additional feature is that it stores previous coherence values to a file as a reference and compares current values against them. If a value differs drastically, say by 2 sigma, it displays a message in DIAG_MAIN warning that a certain FF is underperforming compared to the reference.
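
A minimal sketch of the underlying check, band-limited coherence compared against stored reference values, with invented names and thresholds for illustration (not LLO's actual DIAG_COH code):

```python
# Illustration only: band-limited coherence vs. a stored reference,
# loosely following the DIAG_COH description above (not the actual code).
import numpy as np
from scipy.signal import coherence

def band_limited_coherence(darm, witness, fs, band, nperseg=None):
    """Mean coherence between darm and witness within band = (fmin, fmax)."""
    f, coh = coherence(darm, witness, fs=fs, nperseg=nperseg or int(4 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return coh[mask].mean()

def check_against_reference(value, reference_values, nsigma=2.0):
    """reference_values: past band-limited coherences for this witness/band."""
    mean, std = np.mean(reference_values), np.std(reference_values)
    if value > mean + nsigma * std:
        return f'WARNING: coherence {value:.2f} exceeds reference {mean:.2f} + {nsigma}*{std:.2f}'
    return 'OK'
```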
LHO VE
david.barker@LIGO.ORG - posted 09:08, Friday 30 August 2024 - last comment - 10:20, Friday 30 August 2024(79812)
HAM6 vacuum glitch

At 00:29 Fri 30 Aug 2024 PDT there was a short vacuum glitch in HAM6, detected by PT110_MOD1.

The pressure increased from 1.90e-07 to 2.06e-07 Torr.
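
For scale, that is roughly an 8% relative pressure step. A toy relative-threshold check on a pressure record, purely illustrative (VACSTAT's actual detection algorithm is not described here):

```python
# Toy illustration of a relative-step glitch check on a pressure record;
# VACSTAT's real detection logic is not described in this entry.
import numpy as np

def find_pressure_glitches(pressure, threshold=0.05):
    """Flag samples where pressure exceeds a trailing baseline by more than
    `threshold` (fractional)."""
    pressure = np.asarray(pressure)
    baseline = np.median(pressure[:60])   # assume a quiet leading stretch
    rel = (pressure - baseline) / baseline
    return np.flatnonzero(rel > threshold)

# The 00:29 glitch: (2.06e-7 - 1.90e-7) / 1.90e-7 ~ 0.084, an ~8% step.
```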

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 09:10, Friday 30 August 2024 (79814)

The glitch was detected by VACSTAT.

VACSTAT is in testing mode, and the MEDMs still need some polishing. The Overview and PT110 MEDM screens are attached.

Images attached to this comment
david.barker@LIGO.ORG - 09:13, Friday 30 August 2024 (79815)

Glitch time doesn't appear to be related to H1 locking or unlocking. 24-hour PT110 and H1 range trends are attached.

The smaller, wider glitch on the left was at 16:51 Thu and is accounted for (pump operations as part of noise hunting). The 00:29 glitch is the larger, sharper one to the right.

Images attached to this comment
david.barker@LIGO.ORG - 10:20, Friday 30 August 2024 (79818)

I've promoted VACSTAT from an H3 test system to an H1 pre-production system. This allows me to add the channel H1:CDS-VAC_STAT_GLITCH_DETECTED to the alarms system (alarms was restarted at 10:15). For testing, it will send alarms to my cellphone.

LHO FMCS
eric.otterman@LIGO.ORG - posted 09:00, Friday 30 August 2024 (79811)
Instrument air hiccup
Yesterday the instrument air trend showed a brief period of pressure loss, caused by the air dryer being turned off for several minutes while a line voltage switch was added to the unit. The bypass on the dryer was not opened, so additional air was not fed into the system. Once power was restored, the dryer functioned correctly and allowed air back into the system.
LHO VE
david.barker@LIGO.ORG - posted 08:20, Friday 30 August 2024 (79809)
Fri CP1 Fill

Fri Aug 30 08:11:24 2024 INFO: Fill completed in 11min 20secs

 

Images attached to this report
H1 General (CDS)
anthony.sanchez@LIGO.ORG - posted 07:59, Friday 30 August 2024 - last comment - 08:33, Friday 30 August 2024(79808)
Friday Ops Day Shift Start

TITLE: 08/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 7mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

IFO was locked and observing for 8 hours before I walked into the control room.
Looks like the IFO locked itself last night without calling Oli after losing lock at LASER_NOISE_SUPPRESSION.

H1's current status is LOCKED at NOMINAL_LOW_NOISE and OBSERVING for 8 hours and 27 minutes.

H1EDC is reporting a NUC33 error:

       Total Channels : 56980
   Connected Channels : 56968
Disconnected Channels :    12

H1:CDS-MONITOR_NUC33_CPU_LOAD_PERCENT
H1:CDS-MONITOR_NUC33_CPU_COUNT
H1:CDS-MONITOR_NUC33_MEMORY_AVAIL_PERCENT
H1:CDS-MONITOR_NUC33_MEMORY_AVAIL_MB
H1:CDS-MONITOR_NUC33_PROCESSES
H1:CDS-MONITOR_NUC33_INET_CONNECTIONS
H1:CDS-MONITOR_NUC33_NET_TX_TOTAL_MBIT
H1:CDS-MONITOR_NUC33_NET_RX_TOTAL_MBIT
H1:CDS-MONITOR_NUC33_NET_TX_LO_MBIT
H1:CDS-MONITOR_NUC33_NET_RX_LO_MBIT
H1:CDS-MONITOR_NUC33_NET_RX_ENO1_MBIT
H1:CDS-MONITOR_NUC33_NET_TX_ENO1_MBIT
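
A quick way to confirm which of these EDC channels are actually unreachable is to poll them over EPICS directly; a minimal sketch, assuming pyepics is installed and the channel access environment is configured:

```python
# Minimal sketch: check which of the reported channels respond over EPICS.
# Assumes pyepics is installed and the CA environment is set up.
from epics import PV

CHANNELS = [
    'H1:CDS-MONITOR_NUC33_CPU_LOAD_PERCENT',
    'H1:CDS-MONITOR_NUC33_MEMORY_AVAIL_MB',
    # ... remaining NUC33 channels from the list above
]

for name in CHANNELS:
    pv = PV(name)
    ok = pv.wait_for_connection(timeout=2.0)
    print(f'{name}: {"connected" if ok else "DISCONNECTED"}')
```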

Comments related to this report
anthony.sanchez@LIGO.ORG - 08:33, Friday 30 August 2024 (79810)CDS

Did some troubleshooting on NUC33.
I couldn't VNC or SSH into it.
I got a keyboard and mouse and couldn't get NUC33 to do anything; it was completely frozen.
I tried the REISUB Linux rescue sequence with no response, so I gave it a hard shutdown via the power button.
Once it powered back on, everything came back up as normal and H1EDC went back to functioning normally.

LHO General
corey.gray@LIGO.ORG - posted 22:06, Thursday 29 August 2024 (79805)
Thurs EVE Ops Summary

TITLE: 08/29 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

H1 had a late-shift lockloss; relocking was pretty much automated (needed INCREASE FLASHES & PRMI), but near the end there was a lockloss at LASER NOISE SUPPRESSION, most likely due to an EQ from Russia. H1's already giving it another go.
LOG:

H1 ISC
elenna.capote@LIGO.ORG - posted 17:35, Thursday 29 August 2024 - last comment - 15:08, Friday 30 August 2024(79807)
PRCL and CHARD, which way is the coupling?

Here is an investigation that might give us insight into the PRCL/CHARD/DARM coherences.

Today, we ran a PRCL injection for the feedforward where we injected directly into the PRCL loop. I took this injection time and looked at how the CHARD P and Y error signals changed, as well as their respective coherences to PRCL and DARM (figure). When injecting about 15x above ambient in PRCL from 10-100 Hz, there is a 2x increase in the CHARD P error signal and 4x increase in the CHARD Y error signal. The coherences of CHARD P and Y to PRCL increase as well, and the coherences of CHARD P and Y to DARM also increase. This injection allows us to measure a well-defined CHARD/PRCL transfer function. In the attached screenshot of this measurement, all the reference traces are from the injection time, and all the live traces are during a quiet time.
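
For reference, the transfer function estimate behind a measurement like this is the standard CSD/PSD ratio between the injected degree of freedom and the response; a minimal sketch (the measurement was done with DTT in practice; the function here is purely illustrative):

```python
# Sketch of a CHARD/PRCL transfer function estimate from injection data
# (done in DTT in practice); inputs are time series at the same sample rate.
from scipy.signal import csd, welch

def transfer_function(prcl, chard, fs, nperseg):
    """H(f) = CSD(PRCL, CHARD) / PSD(PRCL), valid where coherence is high."""
    f, Pxy = csd(prcl, chard, fs=fs, nperseg=nperseg)
    _, Pxx = welch(prcl, fs=fs, nperseg=nperseg)
    return f, Pxy / Pxx
```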

Meanwhile, I looked back at the last time we injected into CHARD P and Y for the noise budget, on June 20. In both cases, injecting close to 100x above ambient in CHARD P and Y did not change either the CHARD/PRCL coherence or PRCL/DARM coherence. There is some change in the PRCL error signal, but it is small. Again, in the attachments, reference traces are injection time and live traces are quiet time. CHARD P figure and CHARD Y figure.

I think that this is enough to say that the PRCL/DARM coupling is likely mostly through CHARD P and Y. This would also make sense with the failure of the PRCL feedforward today (79806). However, we may want to repeat the CHARD injections since there have been many IFO changes since June 20.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 09:26, Friday 30 August 2024 (79817)

As a reminder, the previous work we did adding a PRCL offset did reduce the PRCL/CHARD coupling: read 76814 and the comments. We are currently not running with any PRCL offset.

elenna.capote@LIGO.ORG - 15:08, Friday 30 August 2024 (79827)

I decided to compare against the PRCL injection times from back in March, when I set the PRCL offset to reduce the coherence of DARM with LSC REFL RIN (76814, 76805). One conclusion of those tests was that a PRCL offset can reduce the REFL RIN/DARM coherence, but not necessarily improve the sensitivity. Also, the offset reduced the PRCL/CHARD Y coupling and increased the PRCL/CHARD P coupling.

A bruco shows that there is once again significant coherence between REFL RIN and DARM. I compared the PRCL injection time from yesterday with the PRCL injections with different offsets. The PRCL/CHARD couplings have increased for both pitch and yaw: plot. I also included the coherences of CHARD to DARM for these times, but then realized that data might actually be confusing to compare. However, the PRCL offset has an effect on the PRCL/CHARD coupling, so it could be one strategy to combat this coupling. Unfortunately, it has opposite effects for pitch and yaw.

There was a lot of work done to move the alignment of the PRC around in May; here are some alogs I found to remind myself of what happened: 77736, 77855, 77988. Seems like the goal was to reduce clipping/center on PR2. I wonder if this alignment shift caused the increase in the PRCL/CHARD coupling from March to now.

We should consider checking the PRCL and CHARD coupling while adjusting the PRC alignment. The yaw coupling is stronger, maybe because the beam is much further off center on PR2 in yaw than in pitch?

Overall, I think the benefit of this investigation would be a reduction of the noise in CHARD, which would help improve the sensitivity.

 

Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 13:03, Thursday 29 August 2024 - last comment - 09:23, Friday 30 August 2024(79799)
New HAM1 FF for CHARD P and INP1 P - Improvement

We did a 10 minute on/off HAM1 FF test today. I determined from that test that improvement could be made to the feedforward to CHARD P and INP1 P, so I used that off time to train new filters (time here: 79792).

I took a screenshot of the DTT results. The golden traces represent the various loop spectra with the feedforward OFF. Blue represents the previous feedforward, and red represents the new feedforward I fit today. First, look at the top and bottom plots on the left, showing CHARD P and INP1 P. You can see that the old feedforward was having minimal benefit (gold to blue), and the new feedforward is performing much better (gold to red).

Next, look at the middle plot on the left showing CHARD Y. This plot shows that the feedforward is making CHARD Y WORSE (blue worse than gold). Therefore, I just turned it off. I am not yet happy with the fitting results for CHARD Y, so I will continue to work on them.

You can also see in the middle right plot that the current PRC2 P feedforward is performing well (gold to blue), so I made no change; red still matches blue.

Note! This means the significant change here must be related to RF45: PRC2 is only sensed on RF9, while INP1 and CHARD use RF45.

Finally, DARM on the right hand side shows improvement from blue to red. It looks like HAM1 FF off is better below 15 Hz, maybe due to CHARD Y.

The new CHARD P and INP1 P filters are all saved in FM9 of their respective banks. I SDFed the new filters, and the CHARD Y gains to zero, second screenshot.

I made one mistake: I did not SDF these changes in SAFE, only in OBSERVE, which means that needs to be done before an SDF revert, since they are not guardian controlled!

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 09:23, Friday 30 August 2024 (79816)
H1 ISC
thomas.shaffer@LIGO.ORG - posted 10:45, Monday 26 August 2024 - last comment - 09:08, Friday 30 August 2024(79714)
Ran A2L for all quads

I ran A2L while we were still thermalizing; I might run it again later. No change for ETMY, but the ITMs had large changes. I've accepted these in SDF, and I reverted the tramps that the picture shows I accepted. I didn't notice much of a change in DARM or on the DARM BLRMS.


          Initial    Final     Diff
ETMX P      3.12      3.13     0.01
ETMX Y      4.79      4.87     0.08
ETMY P      4.48      4.48     0.00
ETMY Y      1.13      1.13     0.00
ITMX P     -1.07     -0.98     0.09
ITMX Y      2.72      2.87     0.15
ITMY P     -0.47     -0.37     0.10
ITMY Y     -2.30     -2.48    -0.18
 

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 15:06, Wednesday 28 August 2024 (79776)

I am not sure how the A2L is run these days, but there is some DARM coherence with DHARD Y that makes me think we should recheck the Y2L gains. See attached screenshot from today's lock.

As a reminder, the work that Gabriele and I did last April found that the DHARD Y coupling had two distinct frequency regimes: a steep low-frequency coupling that depended heavily on the AS A WFS yaw offset, and a much flatter coupling around 30 Hz that depended much more strongly on the Y2L gain of ITMY (this was, I think, before we started adjusting all the A2L gains on the test masses). Relevant alogs: 76407 and 76363

Based on this coherence, the Y2L gains at least deserve another look. Is it possible to track a DHARD Y injection during the test?

Images attached to this comment
thomas.shaffer@LIGO.ORG - 09:08, Friday 30 August 2024 (79813)

Since I converted this script to run on all TMs and dofs simultaneously, its performance hasn't been stellar. We've only run it a handful of times, but we definitely need to change something. One difference between the old version and the new one is the frequencies the injected lines are at. As of right now, they range from 23-31.5 Hz, but perhaps these need to be moved. In June, Sheila and I ran it, then swapped the highest and lowest frequencies to see if it made a difference (alog78495), and in that one test it didn't seem to matter.
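
For context on how lines like these are read back, the coupling at each injected frequency can be estimated by demodulating the witness channel at that frequency; a minimal, illustrative sketch (not the actual A2L script):

```python
# Sketch of single-line demodulation to estimate the response at an injected
# line frequency (illustrative only; not the actual A2L script).
import numpy as np

def demod_line(data, fs, f_line, t_avg=60):
    """Return the complex amplitude of `data` at f_line, averaged over t_avg s."""
    n = int(t_avg * fs)
    t = np.arange(n) / fs
    lo = np.exp(-2j * np.pi * f_line * t)   # local oscillator at the line
    return 2 * np.mean(data[:n] * lo)       # complex amplitude (mag and phase)
```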

Sheila and I are talking about the AS WFS offset and DHARD injection testing to try to understand this coupling a bit better. Planning is in progress.
