H1 DetChar (DetChar)
taylor.starkman@LIGO.ORG - posted 16:02, Friday 28 July 2023 (71800)
Asymmetry in narrow line contamination in bands around violin mode frequencies

During the May 30th violin mode ring-up, the narrow line contamination in the frequency bands surrounding the fundamental and harmonic frequencies displays an asymmetry that is so far unexplained. This contamination is most visible in the region surrounding the 1500Hz harmonic, where it is shifted 30Hz up in frequency, as shown in figures 1 and 2. The shift increases with frequency: the contamination around the 500Hz fundamental is actually shifted 3Hz down in frequency, as shown in figure 3, and around 1000Hz there is also a shift, but it is only 17Hz, as shown in figure 4. 

Similar behavior is also seen during the June 30th ring-up as shown in figures 5-8. The shift is in the opposite direction for this ring-up and it decreases with frequency, going from a 17Hz shift down at the 500Hz fundamental to a 3Hz shift down at the 1500Hz harmonic. Again, this behavior is unexplained. 

These shifts are calculated as the difference between the median frequency of the lines in the 200Hz band surrounding the violin modes and the median frequency of the violin mode lines. Lines are considered violin mode lines if they have an amplitude that is 70% or more of the max violin mode amplitude in the band.
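
As a rough illustration of this calculation, here is a minimal numpy sketch (not the actual analysis code; the input arrays of line frequencies and amplitudes, and the choice to take the contamination median over the non-violin lines, are assumptions):

    import numpy as np

    def band_shift(line_freqs, line_amps, band_center, band_half_width=100.0,
                   violin_frac=0.7):
        """Median frequency offset of the contaminating lines relative to the
        violin-mode lines in a band around band_center (Hz)."""
        f = np.asarray(line_freqs, dtype=float)
        a = np.asarray(line_amps, dtype=float)
        in_band = np.abs(f - band_center) <= band_half_width
        f, a = f[in_band], a[in_band]
        # Lines at >= 70% of the max amplitude in the band count as violin modes
        is_violin = a >= violin_frac * a.max()
        # Positive result means the contamination sits above the violin modes
        return np.median(f[~is_violin]) - np.median(f[is_violin])

    # e.g. shift_1500 = band_shift(freqs, amps, band_center=1500.0)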

 

Images attached to this report
H1 CAL
louis.dartez@LIGO.ORG - posted 11:18, Friday 28 July 2023 - last comment - 14:00, Thursday 03 August 2023(71790)
noise in Kappa_UIM line
Kappa_UIM has seen a huge increase in noise starting ~8 days ago (screenshot attached). It's not clear to me what is causing this yet.

This is also shown on the summary pages: https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230728/cal/time_varying_factors/

The UIM Kappa line value started to show noisy activity at GPS 1373905226.2881477. 
Images attached to this report
Comments related to this report
ansel.neunzert@LIGO.ORG - 14:18, Friday 28 July 2023 (71794)

There's a feature just above the corresponding 15.6 Hz calibration line which appears at the identified time. Fig 1 shows this feature in high resolution. It peaks around 15.605 Hz and is present consistently (in Fscan daily data) since the date of the noise change.

I also computed some shorter-duration spectra with gwpy to double check that its appearance corresponds to the GPS time Louis posted. I needed about a 500 s FFT length to resolve the relevant features, and I wanted a couple of averages, so I ended up looking at 1000 s time periods. Apologies for the messy overlay of figures with not-precisely-matching y-axes! I think the shape difference is clear regardless of the scale. Fig 2 shows some samples right before the change. The change occurs near the end of an observing segment but low-noise data remains available right after, so I looked at both the immediate time after the change (fig 3) and the next observing segment (fig 4). Indeed, it looks like a good match.
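
For reference, a minimal gwpy sketch of this kind of short-duration spectrum; the channel name and GPS stretch below are placeholders, not the exact spans used for the attached figures:

    from gwpy.timeseries import TimeSeries

    # Placeholder channel and 1000 s stretch; substitute the spans being compared.
    channel = "H1:GDS-CALIB_STRAIN"
    start, end = 1373904000, 1373905000

    data = TimeSeries.get(channel, start, end)
    # 500 s FFTs (~2 mHz resolution) resolve the feature next to the
    # 15.6 Hz calibration line; 50% overlap gives a few averages in 1000 s.
    asd = data.asd(fftlength=500, overlap=250)

    plot = asd.plot()
    ax = plot.gca()
    ax.set_xlim(15.5, 15.7)
    plot.show()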

Images attached to this comment
gabriele.vajente@LIGO.ORG - 12:49, Thursday 03 August 2023 (71934)

This is caused by a mistake in the MICHFF filter. It turns out that the filter I retuned on July 20 has a sharp feature at 15.6 Hz that I did not notice before. This is injecting MICH noise into DARM at 15.6 Hz. This can be fixed by either tweaking the current filter, or retuning the MICHFF again (being more careful with narrow features in the fit!)

I modified the current filter to remove the 15.6 Hz feature and saved it. We should reload the MICHFF filters at the first opportunity.

Images attached to this comment
jenne.driggers@LIGO.ORG - 14:00, Thursday 03 August 2023 (71937)

This filter was reloaded since we lost lock.  We should see an improvement in our next lock.

H1 General (Lockloss)
thomas.shaffer@LIGO.ORG - posted 10:39, Friday 28 July 2023 (71792)
Lock loss 1615 UTC

1374596167

Lock loss caused by commissioning activity.

LHO VE
david.barker@LIGO.ORG - posted 10:11, Friday 28 July 2023 (71791)
Fri CP1 Fill

Fri Jul 28 10:08:08 2023 INFO: Fill completed in 8min 4secs

Travis confirmed a good fill curbside.

Images attached to this report
H1 CAL
vladimir.bossilkov@LIGO.ORG - posted 08:29, Friday 28 July 2023 - last comment - 12:31, Tuesday 12 December 2023(71787)
H1 Systematic Uncertainty Patch due to misapplication of calibration model in GDS

This was first observed as a persistent mis-calibration in the systematic error monitoring Pcal lines, which measure PCAL / GDS-CALIB_STRAIN, affecting both LLO and LHO, [LLO Link] [LHO Link]; these measurements consistently disagree with the uncertainty envelope.
It is presently understood that this arises from bugs in the code producing the GDS FIR filters, which introduce a sizeable discrepancy. Joseph Betzwieser is spearheading a thorough investigation to correct this.

I make a direct measurement of this systematic error by dividing CAL-DARM_ERR_DBL_DQ / GDS-CALIB_STRAIN, where the numerator is further corrected for kappa values of the sensing, cavity pole, and the 3 actuation stages (GDS does the same corrections internally). This gives the transfer function of the error induced by the GDS filters.
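
For illustration, a minimal gwpy sketch of forming this kind of transfer function as an averaged CSD over PSD; the kappa corrections applied to the numerator in the actual measurement are omitted, and the GPS time and FFT parameters simply follow the thermalized stretch quoted in Jeff Kissel's comment below:

    from gwpy.timeseries import TimeSeries

    # 384 s starting 2023-07-27 05:03:20 UTC (see Jeff Kissel's comment below);
    # kappa corrections to DARM_ERR are not reproduced here.
    start = 1374469418
    end = start + 384

    darm_err = TimeSeries.get("H1:CAL-DARM_ERR_DBL_DQ", start, end)
    gds = TimeSeries.get("H1:GDS-CALIB_STRAIN", start, end)

    fftlength, overlap = 48, 24
    # Transfer function estimate: averaged CSD(darm_err, gds) / PSD(darm_err)
    tf = darm_err.csd(gds, fftlength=fftlength, overlap=overlap) / \
         darm_err.psd(fftlength=fftlength, overlap=overlap)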

Attached to this aLog, and to its sibling aLog at LLO, are this measurement in blue, the PCAL / GDS-CALIB_STRAIN measurement in orange, and the smoothed uncertainty correction vector in red. Also attached is a text file of this uncertainty correction, for application in pyDARM to produce the final uncertainty, in the format [Frequency, Real, Imaginary].
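
A minimal sketch of reading that text file back into a complex correction vector and interpolating it onto another frequency grid; the file name is a placeholder and the actual pyDARM application is not reproduced here:

    import numpy as np

    # Placeholder file name; rows are [frequency (Hz), real, imaginary].
    freq, re, im = np.loadtxt("gds_correction_tf.txt", unpack=True)
    eta = re + 1j * im

    # Interpolate the correction onto another frequency grid before
    # multiplying it into the response / uncertainty model.
    f_target = np.logspace(1, np.log10(5000.0), 3001)
    eta_target = (np.interp(f_target, freq, eta.real)
                  + 1j * np.interp(f_target, freq, eta.imag))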

Images attached to this report
Non-image files attached to this report
Comments related to this report
ling.sun@LIGO.ORG - 15:33, Friday 28 July 2023 (71798)

After applying this error TF, the uncertainty budget seems to agree with monitoring results (attached).

Images attached to this comment
ling.sun@LIGO.ORG - 13:02, Thursday 17 August 2023 (72299)

After running the command documented in alog 70666, I've plotted the monitoring results on top of the manually corrected uncertainty estimate (see attached). They agree quite well.

The command is:

python ~cal/src/CalMonitor/bin/calunc_consistency_monitor --scald-config  ~cal/src/CalMonitor/config/scald_config.yml --cal-consistency-config  ~cal/src/CalMonitor/config/calunc_consistency_configs_H1.ini --start-time 1374612632 --end-time 1374616232 --uncertainty-file /home/ling.sun/public_html/calibration_uncertainty_H1_1374612632.txt --output-dir /home/ling.sun/public_html/

The uncertainty is estimated at GPS 1374612632 (spanning 2 min around this time). The monitoring data are collected from 1374612632 to 1374616232 (spanning an hour).

 

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 17:01, Wednesday 13 September 2023 (72871)
J. Kissel, J. Betzwieser

FYI: The time Vlad used to gather TDCFs to update the *modeled* response function at the reference time (R, in the numerator of the plots) is 
    2023-07-27 05:03:20 UTC
    2023-07-26 22:03:20 PDT
    GPS 1374469418

This is a time when the IFO was well thermalized.

The values used for the TDCFs at this time were
    \kappa_C  = 0.97764456
    f_CC      = 444.32712 Hz
    \kappa_U  = 1.0043616 
    \kappa_P  = 0.9995768
    \kappa_T  = 1.0401824

The *measured* response function (GDS/DARM_ERR, the denominator in the plots) is from data with the same start time, 2023-07-27 05:03:20 UTC, over a duration of 384 seconds (8 averages of 48 second FFTs).

Note that these TDCF values listed above are the CAL-CS computed TDCFs, not the GDS computed TDCFs. They're the values exactly at 2023-07-27 05:03:20 UTC, with no attempt to average further over the duration of the *measurement*. See the attached .pdf, which shows the previous 5 minutes and the next 20 minutes. From this you can see that GDS was computing essentially the same thing as CALCS -- except for \kappa_U, which we know
 - is bad during that time (LHO:72812), and
 - is unimpactful w.r.t. the overall calibration.
So the fact that
    :: the GDS calculation is frozen,
    :: the CALCS calculation is noisy but coincidentally sits quite close to the frozen GDS value, and
    :: the ~25 minute mean of the CALCS value is actually around ~0.98 rather than the instantaneous value of 1.019,
is inconsequential to Vlad's conclusions.
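
A minimal gwpy sketch of this kind of check, comparing the instantaneous CAL-CS \kappa_U at the reference time against its mean over the surrounding window (window choices are illustrative):

    from gwpy.timeseries import TimeSeries

    t_ref = 1374469418  # 2023-07-27 05:03:20 UTC
    kappa_u = TimeSeries.get("H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT",
                             t_ref - 300, t_ref + 1200)  # -5 min to +20 min

    print("instantaneous:", kappa_u.value_at(t_ref))
    print("mean over window:", kappa_u.mean())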

Non-image files attached to this comment
louis.dartez@LIGO.ORG - 00:54, Tuesday 12 December 2023 (74747)
I'm adding the modeled correction due to the missing 3.2 kHz pole here as a text file. I plotted a comparison showing Vlad's fit (green), the modeled correction evaluated on the same frequency vector as Vlad (orange), and the modeled correction evaluated using a dense frequency spacing (blue), see eta_3p2khz_correction.png. The denser frequency spacing recovers an error of about 2% between 400 Hz and 600 Hz. Otherwise, the coarsely evaluated modeled correction seems to do quite well. 
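
For context, a minimal numpy sketch of this kind of comparison, checking a coarsely evaluated correction interpolated onto a dense grid against a direct dense evaluation; a single real pole at 3.2 kHz stands in here for the full modeled correction, and the sign/convention of the applied factor is an assumption:

    import numpy as np

    F_POLE = 3200.0  # Hz

    def one_pole(f):
        # Single real pole at 3.2 kHz, standing in for the full modeled
        # correction; the sign/convention of the applied factor is assumed.
        return 1.0 / (1.0 + 1j * np.asarray(f, dtype=float) / F_POLE)

    f_coarse = np.logspace(1, np.log10(5000.0), 100)    # coarse grid
    f_dense = np.logspace(1, np.log10(5000.0), 10000)   # dense grid

    eta_dense = one_pole(f_dense)
    eta_coarse = one_pole(f_coarse)
    # Coarse evaluation interpolated onto the dense grid, as a proxy for
    # the comparison described above
    eta_interp = (np.interp(f_dense, f_coarse, eta_coarse.real)
                  + 1j * np.interp(f_dense, f_coarse, eta_coarse.imag))
    frac_err = np.abs(eta_interp / eta_dense - 1.0)
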
Images attached to this comment
Non-image files attached to this comment
ling.sun@LIGO.ORG - 12:31, Tuesday 12 December 2023 (74758)

The above error was fixed in the model at GPS time 1375488918 (Tue Aug 08 00:15:00 UTC 2023) (see LHO:72135)

LHO General
thomas.shaffer@LIGO.ORG - posted 08:19, Friday 28 July 2023 (71786)
Ops Day Shift Start

TITLE: 07/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY: Locked for 2+ hours, calm day so far with no activities yet. Calibration and commissioning activities will happen in the morning.

 

H1 General
oli.patane@LIGO.ORG - posted 08:11, Friday 28 July 2023 (71785)
Ops OWL Shift End

TITLE: 07/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
SHIFT SUMMARY:

Observing and Locked for 2 hours.

- While relocking after the lockloss (71783), we got stuck at FIND_IR - the X arm would lock, but only at ~0.1 (see attachment 1), and it couldn't be fine-tuned. H1_MANAGER eventually decided to run an initial alignment and all was good.


7:00 Detector Observing and Locked for 7hrs 18mins

8:44 Entered Earthquake mode
8:54 Out of Earthquake mode

9:29 Entered Earthquake mode
9:50 Out of Earthquake mode

11:12 Lockloss from sudden local seismic event (71783)

13:06 Reached NOMINAL_LOW_NOISE

13:17 H1_MANAGER brought us back into Observing

 

LOG:

No log

Images attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 04:16, Friday 28 July 2023 - last comment - 06:19, Friday 28 July 2023(71783)
Lockloss

Lockloss at 11:12 due to some sort of local seismic event

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 06:19, Friday 28 July 2023 (71784)

Just got back into Observing. Didn't have to touch anything! We did have to run an initial alignment.

H1 General
oli.patane@LIGO.ORG - posted 04:05, Friday 28 July 2023 (71782)
Ops OWL Midshift Report

Detector is in Observing and has been locked for 11hrs 22mins. There were a few earthquakes that rolled through so we did go into Earthquake mode a couple of times (8:44-8:54 and 9:29-9:50), but we rode them out.

The MY temp alarm hasn't been triggered since it went off during Ryan S's shift (71778).

H1 General
oli.patane@LIGO.ORG - posted 00:11, Friday 28 July 2023 (71781)
Ops OWL Shift Start

TITLE: 07/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 8mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Taking over from Ryan S. We're Observing and have been Locked for 7hrs 29mins.

I'll keep watch on the MY station temps.

LHO General (FMP)
ryan.short@LIGO.ORG - posted 00:04, Friday 28 July 2023 (71778)
Ops Eve Shift Summary

TITLE: 07/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
SHIFT SUMMARY: Quiet shift tonight, relocked easily and H1 has been observing for 7 hours.

Handing off to Oli for the rest of the night.


LOG:

No log for this shift.

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 20:00, Thursday 27 July 2023 (71780)
Ops Eve Mid Shift Report

State of H1: Observing at 150Mpc

H1 has been locked and observing for 3 hours. Locking at the start of the shift went smoothly (except for a small manual adjustment of the DIFF offset). There were some dust alarms for the optics lab; these have since stopped.

H1 ISC
gabriele.vajente@LIGO.ORG - posted 10:36, Thursday 27 July 2023 - last comment - 14:46, Friday 28 July 2023(71765)
CHARD_Y experiments

Yesterday during commissioning time I did a couple of experiments with CHARD_Y (71738)

How much margin do we have for CHARD_Y noise?

With the 10-100 Hz noise injection, I could estimate a CHARD_Y noise projection to DARM using the excess power method (ratio of PSDs). Using the measured transfer function between CHARD_Y and DARM gives the same result. The first plot shows the effect of the noise injection in CHARD_Y. The second plot shows the noise projection, and that we have a safety factor of about 30-100 above 15 Hz. We can use this information to design a new CHARD_Y filter.
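
A minimal gwpy sketch of this excess-power projection; the witness channel name, GPS spans, and FFT settings below are placeholders, not the ones used for the attached plots:

    import numpy as np
    from gwpy.timeseries import TimeSeries

    darm_ch = "H1:GDS-CALIB_STRAIN"
    wit_ch = "H1:ASC-CHARD_Y_OUT_DQ"      # witness channel (assumed name)
    quiet = (1374400000, 1374400600)       # placeholder GPS spans
    inject = (1374401000, 1374401600)

    def asd(channel, span, rate=256):
        # Resample to a common rate so both channels share a frequency grid
        data = TimeSeries.get(channel, *span).resample(rate)
        return data.asd(fftlength=10, overlap=5)

    darm_q, darm_i = asd(darm_ch, quiet), asd(darm_ch, inject)
    wit_q, wit_i = asd(wit_ch, quiet), asd(wit_ch, inject)

    # Coupling (DARM per unit CHARD_Y) from the excess power during injection
    coupling = (np.sqrt(np.maximum(darm_i.value**2 - darm_q.value**2, 0.0))
                / np.sqrt(np.maximum(wit_i.value**2 - wit_q.value**2, 1e-40)))

    # Project the quiet-time CHARD_Y motion into DARM and compare
    projection = coupling * wit_q.value
    safety_factor = darm_q.value / np.maximum(projection, 1e-40)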

Increasing the CHARD_Y gain by 3

In the second experiment I increased the CHARD_Y gain by a factor of 3, since the model predicted that the loop would be stable. This would give me more suppression at low frequency and a bit of suppression of the 2.6 Hz peak. This is pretty much what we observed. The change in the DARM or CHARD_Y residual RMS isn't large, as expected. So there is no effect on the sensitivity. We should try to design a better filter that gives us suppression at 1 Hz and 2.6 Hz to reduce the CHARD_Y RMS.

Note that the 1 Hz peak in CHARD_Y is coherent with PR2 and PR3 damping loops, so maybe we can gain something by also looking at those damping loops.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 13:11, Thursday 27 July 2023 (71769)

Here's a proposed new CHARD_Y controller, based on the 3x gain, adding more suppression at 1 and 2.6 Hz, and with increased noise injection above 10 Hz that should be ok given the measured coupling to DARM.

The last plot shows the predicted performance of this new loop: residual motion below 3 Hz should be largely suppressed. Only the 3.4 Hz peak is increased, by less than a factor of 2.

Images attached to this comment
Non-image files attached to this comment
gabriele.vajente@LIGO.ORG - 09:35, Friday 28 July 2023 (71789)

Engaging this new controller with a gain of 180 caused a lock loss with an oscillation at 3.4 Hz, which is the expected upper unity-gain frequency (UGF).

Probably the plant measurement is not accurate enough at such high frequency.

 

Images attached to this comment
gabriele.vajente@LIGO.ORG - 14:46, Friday 28 July 2023 (71796)

Tried a slightly modified controller with more phase margin at 3-4 Hz. Now uploaded to FM9. This can be engaged with the nominal gain of 60, and it is supposed to be stable all the way to the working gain of 180.

However, increasing the gain to 120 already generates a large peak at 3.4 Hz. This is consistent with the previous lock loss.

The low frequency performance of this new controller with a gain of 120 is good, as expected, but the new peak at 3.4 Hz actually increases the DARM RMS. I believe this increase is responsible for the higher noise in DARM at >10 Hz, since there isn't much coherence between CHARD_Y and DARM.

I wanted to measure the CHARD_Y plant again, since the previous measurement was not very good at >2 Hz, and I suspect the real plant gives less phase margin than the fit model we have now. Unfortunately I increased the noise amplitude too much and we lost lock. To be repeated.

I also tried to reduce the coupling of CHARD_Y to DARM by fine tuning the ITMY A2L, but I couldn't get any improvement. I injected a 21.5 Hz line in CHARD_Y, but it showed up in DARM with a lot of sidebands and appeared quite non-stationary. More care will be needed to retune the A2L to reduce CHARD_Y coupling to DARM: this might be necessary if the new controller injects too much noise at frequencies above 10 Hz.

Images attached to this comment
H1 CAL
anthony.sanchez@LIGO.ORG - posted 08:33, Wednesday 26 July 2023 - last comment - 14:30, Friday 28 July 2023(71725)
PCAL X Noise found by Shivaraj


Shivaraj sent Rick and me a message about some noise found on H1:PCALX_TX_PD and H1:PCALX_RX_PD.
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230703/cal/pcal_x/

This particular noise took place on the 3rd of July, when LHO was empty of people except an operator. Clear examples of this type of noise on these channels can also be found on the 4th, 5th, and 17th of July.


Checking the same channels on EY:
There is some similar-looking noise on H1:PCALY_TX_PD that shows up on the 11th, 12th, and 13th of this month, though it is not as frequent and not as intense as the noise found on PCALX, and it's not always on both the PCALY_TX and PCALY_RX PDs at the same time as is seen at EX.

I have reached out to Shivaraj to try to learn more about this and see if it's a problem for DARM, which it doesn't seem to be according to what he saw in Bruco.

This noise could point to a problem with our PCAL lasers, since it's in both the TX and RX PDs at EX. But it could also be the AOM, or the OFS being saturated, or otherwise interacting with changes in temperature or humidity.

This could also be a DAQ issue, like a chassis or board, because it's showing up on both channels at the same time at EX. Shivaraj mentioned that there might be "cross talk between different channels in a board, and if the glitches are in the light and seen by both PD's they would also show up in other channels, which we could likely use to our advantage."

 

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 13:03, Friday 28 July 2023 (71793)

I took this issue to the Noise Sprint on Wednesday, and Adrian Helming-Cornell, Jane Glanzer, and Vishal Yalla took up the project.
Dave Barker and Erik also apparently looked into this, and by Wednesday's lunchtime there was some sharing of information.

The Noise Sprint group started a google doc where we put all the information that we were gathering:
https://docs.google.com/document/d/127y-9zX6So-zWHxpziH0cU9SAjMjV1lUiJrwKRHdB4A/edit

That may not be a clickable link so here is the content:

alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=71725 

 

PCAL Background: 

-> PCAL = Photon Calibration 

Used to calibrate the interferometer by applying a physical force to the test masses at the end stations

 

PCAL chassis layout: https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?.submit=Identifier&docid=S1400489&version=5 

 

Potential Causes: 

  • Laser 
  • AOM
  • AOM Power Supply 

 

Tasks:

  • See how often this occurs via summary pages
  • Try and compare the noise to other channels (list on the way)/look for excitations
  • Trend PCAL lines and systematic error in lines, do they affect transfer functions?
  • (High Priority) Is H1:GRD-ISC_LOCK_STATE_N in NLN_CAL_MEAS?  State 700? 
    • Are PCAL noise bursts happening in nominal low noise (state 600), calibration measurements (state 700)
    • Can find on IDVW 
    • Find the aLog for when the power supply was replaced; see if noise was present before that date.
      • We do see this noise before May 2nd in both PCALX and PCALY. April 10th-14th. Rules out power supply change.
  • Some noise in the BSC9 (which is in EX) X/Y/Z channels on July 3rd 2023 from about 14-20 UTC, at roughly 20-40 Hz.
    • BLRMS these channels; any temporal correlations between them and the TX/RX noise bursts? (see the sketch after this list)
    • What about the endstation ground motion SEI system BLRMS?
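
Sketch referenced in the BLRMS task above; a minimal gwpy band-limited RMS of a ground-motion channel, where the channel name is a guess and the times are the July 3 window noted above:

    from gwpy.timeseries import TimeSeries

    # Channel name is a guess at an end-station ground-motion channel;
    # window is July 3 2023, ~14-20 UTC.
    channel = "H1:ISI-GND_STS_ETMX_X_DQ"
    start, end = 1372428018, 1372449618

    data = TimeSeries.get(channel, start, end)
    blrms = data.bandpass(20, 40).rms(stride=60)  # 20-40 Hz BLRMS, 60 s stride
    blrms.plot()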


 

Channel Names: 

(Might have correlation between calibration channels; however chassis channels are not resolving without calibration channels) 

 

  • H1:CAL-PCALX_RX_PD_WATTS_OUT
    H1:CAL-PCALX_TX_PD_WATTS_OUT
  • H1:CAL-PCALX_SHUTTERPOWERENABLE
  • H1:CAL-PCALX_RECEIVERMODULETEMPERATURE   (Jitter on channel) 
  • H1:CAL-PCALX_TRANSMITTERMODULETEMPERATURE ( Jitter on channel) 
  • H1:CAL-PCALX_OPTICALFOLLOWERSERVOOSCILLATION
  • H1:CAL-PCALX_OFS_AOM_DRIVE_MON_OUTMON 
    • H1:CAL-PCALX_OFS_AOM_DRIVE_OUTMON (Plotting)
  • H1:CAL-PCALX_WS_PD_OUTMON
  • Effectively any channel that starts with H1:CAL-PCALX_ [ variable ]
    AOM , OFSPD, OFS, TX, RX, OPTICALFOLLOWER

 

Calibration channels:  

Link to GSTAL: if you find a time when the kappas show some weird signals, then check out the GSTAL data for those times as well.

 

  • H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT 
  • H1:CAL-CS_TDEP_KAPPA_PUM_OUTPUT
  • H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT
  • H1:GRD-ISC_LOCK_STATE_N Look for state 700


 

Chassis documentation: https://dcc.ligo.org/LIGO-D1400153

 

Other instances:

June 9

June 10 - few lines

June 11 - few lines

June 19

June 24

July 3

July 4

July 5

July 11

July 17

July 18

July 19

July 20

July 21 - few lines

July 25 - few lines

 

  • DAC update:
    • The channel we use to drive the test mass is sampled at a different rate (16Hz) than the PCAL photodiodes (16kHz).
    • This means we may be inserting this noise into the photodiodes because of this difference, if the driving channels are only 16Hz. (Later confirmed that this is not the case.) 
    • H1:CAL-PCALX_TX_PD_WATTS_OUT_DQ and the corresponding RX channels on both the X and Y ends are not available on LIGO-DV web, though they are available on the CDS network and in the control room. 
    • Going to get faster 16kHz DAC readback channels to try and confirm whether this is the issue. Working with Dave and Erik, we have confirmed that the driving channels are 16kHz channels, but the readback channels are still only 16Hz, which prevents us from getting analysis results above 8 Hz using LIGO DV. 
    • Using ndscope & diaggui we have confirmed that there is an issue when the roaming PCAL X line changes. This is a PCAL line that gets moved by a Guardian node after every 24 hours in the NLN lock state. The issue is that the roaming X line is changed without any ramp down/up time. This may be the start of the noise that we are seeing, but this has not been confirmed yet. The solution would be to add a second channel, ramp down the initial channel, and ramp up the next frequency of the roaming X line on the new channel (see the sketch after this list).  
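
Sketch referenced in the bullet above; a purely illustrative cross-fade of two oscillators with amplitude ramps, not the actual Guardian/front-end implementation:

    import numpy as np

    def crossfaded_line(t, f_old, f_new, t_switch, ramp=5.0, amp=1.0):
        """Two oscillators whose amplitudes are ramped down/up around
        t_switch, instead of an instantaneous frequency jump."""
        t = np.asarray(t, dtype=float)
        # Weight goes linearly from 0 to 1 over `ramp` seconds centred on t_switch
        w = np.clip((t - (t_switch - ramp / 2.0)) / ramp, 0.0, 1.0)
        return amp * ((1.0 - w) * np.sin(2 * np.pi * f_old * t)
                      + w * np.sin(2 * np.pi * f_new * t))

    # e.g. a roaming-line move from 1 kHz to 3 kHz with a 5 s amplitude ramp
    t = np.arange(0.0, 20.0, 1.0 / 16384)
    drive = crossfaded_line(t, 1000.0, 3000.0, t_switch=10.0)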

I have narrowed this down to happen between two GPS times:

1372456038 - 1372456158


Calibration channels:  

July 4th, 2023: 16:00:00 - 18:00:00 UTC (1372521618 GPS) 

 

H1:CAL-CS_TDEP_KAPPA_PUM_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.

H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.

H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.

 

July 3rd, 2023: 14:00:00 - 16:00:00 UTC (1372428018 GPS) 

 

H1:CAL-CS_TDEP_KAPPA_PUM_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.

H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.

H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.

 

 

https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230703/cal/pcal_x/ 

 

anthony.sanchez@LIGO.ORG - 14:30, Friday 28 July 2023 (71795)

More searching is needed to ensure that this is resolved.

H1 TCS (DetChar)
camilla.compton@LIGO.ORG - posted 12:37, Tuesday 16 May 2023 - last comment - 15:27, Friday 28 July 2023(69648)
Turned back on both ETMX and ETMY HWS Lasers and Cameras after 1 week test

WP 11184  This morning, Tony and I turned back on both ETMX and ETMY HWS lasers and their cameras. Had them off for one week (alog 69431) to check for any noise caused by them. Tagging DetChar. 

Comments related to this report
ansel.neunzert@LIGO.ORG - 15:27, Friday 28 July 2023 (71797)DetChar

Evan Goetz, Debasmita Nandi, Taylor Starkman, Ansel Neunzert

We compared the weekly average spectrum starting May 10 with the week starting May 17. We saw a few noticeable changes in the weekly spectra, but careful follow-up shows that several of them are unassociated with the HWS changes. In particular, daily spectra show that changes in the 29.96 Hz and 1.66 Hz combs do not happen at the same time as the HWS changes.

There is a small 14.9009 Hz comb that seems to turn on during the week of the test and off afterward. (That is, the comb seems to be associated with the HWS being in the off state, which is fairly counterintuitive). Only a few peaks are visible, and it has not been noticed since.
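
For searchability, a minimal gwpy sketch of the kind of check one could do for a comb at multiples of 14.9009 Hz; the channel, time, and threshold below are illustrative, and the actual analysis used Fscan daily/weekly averaged spectra:

    import numpy as np
    from gwpy.timeseries import TimeSeries

    # Illustrative single high-resolution spectrum (~30 min near the test week).
    data = TimeSeries.get("H1:GDS-CALIB_STRAIN", 1367762418, 1367764218)
    asd = data.asd(fftlength=1800)

    f0 = 14.9009
    for n in range(1, 20):
        fh = n * f0
        idx = int(round((fh - asd.f0.value) / asd.df.value))
        local = asd.value[max(idx - 50, 0): idx + 50]
        # Flag harmonics that stand well above the local median
        if asd.value[idx] > 3 * np.median(local):
            print(f"possible comb tooth near {fh:.4f} Hz")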

Mostly noting these things here for future reference and searchability. We do not see large-scale spectral changes associated with this test.
