Reports until 04:05, Friday 28 July 2023
H1 General
oli.patane@LIGO.ORG - posted 04:05, Friday 28 July 2023 (71782)
Ops OWL Midshift Report

Detector is in Observing and has been locked for 11hrs 22mins. There were a few earthquakes that rolled through so we did go into Earthquake mode a couple of times (8:44-8:54 and 9:29-9:50), but we rode them out.

The MY temp alarm hasn't been triggered since it went off during Ryan S's shift (71778).

H1 General
oli.patane@LIGO.ORG - posted 00:11, Friday 28 July 2023 (71781)
Ops OWL Shift Start

TITLE: 07/28 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 8mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Taking over from Ryan S. We're Observing and have been Locked for 7hrs 29mins.

I'll keep watch on the MY station temps.

LHO General (FMP)
ryan.short@LIGO.ORG - posted 00:04, Friday 28 July 2023 (71778)
Ops Eve Shift Summary

TITLE: 07/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
SHIFT SUMMARY: Quiet shift tonight, relocked easily and H1 has been observing for 7 hours.

Handing off to Oli for the rest of the night.


LOG:

No log for this shift.

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 20:00, Thursday 27 July 2023 (71780)
Ops Eve Mid Shift Report

State of H1: Observing at 150Mpc

H1 has been locked and observing for 3 hours. Locking at the start of the shift went smoothly (except for a small manual adjustment of the DIFF offset). There were some dust alarms for the optics lab, but these have since stopped.

H1 DetChar (DetChar)
katie.rink@LIGO.ORG - posted 17:15, Thursday 27 July 2023 (71779)
Data Quality Shift Report 2023-07-17 to 2023-07-23

DQ shifter: Katie Rink

Shadow: Caitlin Rawcliffe

Link to full report: https://wiki.ligo.org/DetChar/DataQuality/DQShiftLHO20230717

 

Week's Summary

 

H1 SEI
ryan.short@LIGO.ORG - posted 16:47, Thursday 27 July 2023 (71777)
BRS Drift Trends - Monthly

FAMIS 17561, last checked in alog 70984

All BRS channels are well within their safe regions, as was the case with the previous check.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:06, Thursday 27 July 2023 (71759)
Ops Day Shift Summary

TITLE: 07/27 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
SHIFT SUMMARY: A little over an hour this morning went to calibration and SQZ tuning. We lost lock at 2124 UTC, ending a 24-hour lock. Relocking has been a great test for our H1_MANAGER node and the IFO_NOTIFY/calling system, as we had Find_IR fail and HEPI HAM2 trip with a coefficient load. I forced an initial alignment, which ran autonomously. Ryan is now bringing it back up.
LOG:

Start Time | System | Name | Location | Lazer_Haz | Task | End Time
16:17 | CAL | TJ | CR | n | CAL measurement | 16:44
16:17 | FAC | Cindi | MY | n | Tech clean | 17:54
16:44 | SQZ | Camilla | CR | n | SQZ NLG tuning | 16:59
17:00 | SQZ | Vicky, Camilla | CR, remote | n | SQZ tuning | 17:31
17:24 | PEM | Richard | FCTE | n | Retrieving temperature sensors | 17:48
LHO General
ryan.short@LIGO.ORG - posted 16:02, Thursday 27 July 2023 (71776)
Ops Eve Shift Start

TITLE: 07/27 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 18mph Gusts, 11mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY: Taking over from TJ. H1 has just finished initial alignment and starting with lock acquisition now.

H1 General
thomas.shaffer@LIGO.ORG - posted 14:30, Thursday 27 July 2023 (71774)
Lock loss 2124 UTC

GPS 1374528269

No obvious cause yet

H1 DetChar (DetChar, PEM)
andrew.lundgren@LIGO.ORG - posted 14:23, Thursday 27 July 2023 (71772)
Jitter glitches from control room activity
Marissa, Shania, Tabata, Gaby, Brennan, Andy

While looking into jitter glitches (alog 71739), we found a time with a few similar glitches on July 17th. They happened while fiber was being pulled from the control room to the fire panel (alog 71416).

The glitches have a similar appearance to the previous jitter glitches (plot 1), and are witnessed by the IMC and PSL periscope accelerometer, so they are probably through the same coupling mechanism. Plot 2 shows the Omicron triggers (21:12 to 21:18 UTC). The motion is very visible in the LVEA floor accelerometer near HAM1 (plot 3, compare to plot 4 of DARM), and also on many other PEM channels.

A full Omega scan of one of these glitches is here.
Images attached to this report
LHO FMCS (PEM, VE)
thomas.shaffer@LIGO.ORG - posted 13:04, Thursday 27 July 2023 (71770)
Filter Cavity Tube Enclosure temperatures staying lower than outside temps

Outside temperatures have been quite hot in the last week, near 100F at times. Luckily, the temperature sensors in the FCTE are reporting ~15F below that. The attached plots cover the same time frame, and I tried to scale them as closely as possible. The FCTE clearly doesn't see the same extremes in its diurnal temps.

(Apologies for the separate plots; I gave up combining them due to different data formats, time formats, and data rates.)
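For anyone attempting the combination later, here is a hedged sketch (the sensor values, timestamps, and series names below are made up for illustration, and this is not the tooling used for the attached plots): pandas can align records with different time formats and data rates by converting both to a DatetimeIndex and resampling to a common cadence before plotting on one axis.

```python
import pandas as pd

# Hypothetical stand-ins for the outside-weather and FCTE sensor records.
outside = pd.Series(
    [99.0, 101.5, 97.2],
    index=pd.to_datetime(["2023-07-25 14:00", "2023-07-25 15:00", "2023-07-25 16:00"]),
    name="outside_F",
)
fcte = pd.Series(
    [84.1, 84.3, 84.6, 84.9, 85.0, 85.2, 85.1],
    index=pd.date_range("2023-07-25 14:00", periods=7, freq="20min"),
    name="fcte_F",
)

# Resample both to hourly means, then join on the shared DatetimeIndex.
combined = pd.concat(
    [outside.resample("1h").mean(), fcte.resample("1h").mean()], axis=1
)
print(combined)
```

From here `combined.plot()` would put both temperature records on one time axis.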

Images attached to this report
H1 SQZ (DetChar)
camilla.compton@LIGO.ORG - posted 11:32, Thursday 27 July 2023 - last comment - 15:45, Thursday 27 July 2023(71761)
Increased OPO ISS setpoint from 50uW to 65uW.

Vicky, Camilla. Out of Observing today for squeezing optimization. No SQZ time 16:44UTC to 16:55UTC and 17:01UTC to 17:27UTC. See SQZ Troubleshooting Wiki for instructions.

Can see an increase in SQZ 350-1700Hz BLRMS comparing before and after this optimization.

NLG Increased:
As Austin had the ISS saturate with an 80uW setpoint and dropped it down to 50uW (71675, following 70050), I have increased opo_grTrans_setpoint_uW in sqzparams.py from 50 to 65uW at 16:45UTC, hoping this is a sustainable level that we can stop changing. Readjusted the TEC and accepted in SDF. As Corey states in 71613, we are aiming to not let H1:SQZ-OPO_ISS_CONTROLMON be set below 2.5 to 3V, as that is close to 0V where the control loop bottoms out and unlocks. Can't lock with 75uW; 70uW = 1.8V, 65uW = 3V on CONTROLMON.
The higher the setpoint, the higher the NLG and higher squeezing we get, see attached BLRMS, where lower is better.
Tagging DetChar - this NLG change may affect the range. See the change in H1:SQZ-OPO_TRANS_LF_OUTPUT and 71691 for how DARM changes with NLG.
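The parameter change itself amounts to a one-line edit (only the parameter name and values are documented in this alog; the surrounding contents of sqzparams.py are not shown here):

```python
# sqzparams.py (sketch; only this parameter is documented in the alog text)
opo_grTrans_setpoint_uW = 65  # was 50; 80uW saturated the ISS, 75uW wouldn't lock
```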
 
AS42 SQZ offsets offloaded:
Followed instructions in 71083.
  • Take SQZ_MANAGER to "NO_SQUEEZING"
  • From sitemap > SQZ > SQZ ASC IFO > see screenshot circling buttons to click:
    • First, purple button that says "! reset AS42 nosqz". Wait for script to run (it averages for ~30 sec). Log/post a screenshot of the output if you can. Close pop-up terminal when done.
    • Next, purple button for "! graceful clear history". Close terminal when finished.
Attempted to increase SHG output H1:SQZ-SHG_GR_DC_POWERMON:
FIRST:
Aligned fiber polarization using pico waveplates on SQZT0, reducing H1:SQZ-SHG_FIBR_REJECTED_DC_POWERMON from 0.12 to 0.03mW with the procedure below; see Vicky's attached screenshot.
  1. Take SQZ_OPO_LR to "LOCKED_CLF_DUAL_NO_ISS" (ensure that SQZT0 screen AOM box is yellow)
  2. Enable waveplate PICOs before fiber (L/2, L/4) -- (C_PICO_I_MOTOR_3_X/Y) (Pico settings {medium, walk} is fine. {small, walk} works if you are optimizing and very close)
  3. SQZT0 medm > click "! fiber pol" to launch scope
  4. Minimize the SHG_FIBR_REJECTED green trace by using the _X and _Y motors in turn until the minimum is reached.
SECOND:
Tried to adjust SHG temperature via SQZT0 > SHG > H1:SQZ-SHG_TEC_SETTEMP, didn't make a difference so reverted.
THIRD:
Tried to adjust SHG beam pointing via mirror picos on SQZT0 > BEAM STEERING red triangles, didn't make a difference.
 
Vicky also checked the amount of light throughput and suggests that we may need to go onto SQZT0 and adjust the SHG AOM pump alignment if we need more squeezing.
Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 14:21, Thursday 27 July 2023 (71771)

The last time we re-aligned the pump AOM due to clipping was in June, LHO:70198. In summary, I'd expect the power launched to the fiber, "H1:SQZ-SHG_REJECTED_DC_POWERMON", to be ~70% of the SHG output power, "H1:SQZ-SHG_GR_DC_POWERMON".

I added some scopes to the SQZT0 medm screen to make this easier to check, it is in:
--> $userapps/sqz/h1/Templates/ndscope/check_pump_aom_clipping.yaml
or from sitemap > SQZT0 > purple "! SQZT0 scopes" > "check pump aom" from the dropdown list.

From today's scope
--> with all power going through the AOM (H1:SQZ-OPO_ISS_DRIVEPOINT = 0), 
--> it looks like we're only getting ~22 mW out of the pump AOM (sum of bottom 2 traces)
--> when we should be getting 45.8*0.7 ~ 32mW, if the pump aom throughput was normal.

So, if we want to turn up squeeze levels, or if we start to get unstable at this sqz level, we could go into SQZT0 on a Tuesday to check the pump aom + fiber input alignments.
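A quick arithmetic restatement of the numbers above (the 45.8 mW SHG output, the ~70% rule of thumb, and the ~22 mW scope reading are from this comment; the variable names are just for illustration):

```python
# Sanity check of the pump AOM throughput quoted above.
shg_output_mW = 45.8   # SHG output power (H1:SQZ-SHG_GR_DC_POWERMON)
expected_frac = 0.70   # expected throughput fraction when the AOM path is healthy
measured_mW = 22.0     # sum of the bottom two scope traces

expected_mW = shg_output_mW * expected_frac
deficit = 1.0 - measured_mW / expected_mW
print(f"expected ~{expected_mW:.0f} mW, measured {measured_mW:.0f} mW ({deficit:.0%} low)")
```

This reproduces the ~32 mW figure quoted above, i.e. the measured output is about 30% below nominal.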

Images attached to this comment
victoriaa.xu@LIGO.ORG - 14:23, Thursday 27 July 2023 (71773)

Also adding in a screenshot for how we "Aligned fiber polarization using pico waveplates on SQZT0..," circling the relevant parts of medm screens.

Images attached to this comment
victoriaa.xu@LIGO.ORG - 15:45, Thursday 27 July 2023 (71775)

No-sqz times from this work, 1374512130-1374512500.

H1 DetChar (DetChar)
siddharth.soni@LIGO.ORG - posted 10:58, Thursday 27 July 2023 (71767)
20 to 50 Hz transient noise present in O3 as well

Marissa, Sidd

There is evidence that the current 20-50 Hz transients in the LHO data were also present during O3. They were not nearly as ubiquitous as they are now, which can be explained by the increase in low-frequency sensitivity in the current observing run. However, I am not sure the difference in low-frequency sensitivity can completely explain the emergence of this noise; the existing noise coupling could also have gotten worse.

The first scan is from O3 and the second from O4. The separation between the blobs is similar (about 0.2 sec). A GravitySpy search for fast scatter in O3b shows a whole lot of these transients.

In the current observing run as well, we have noticed that when the sensitivity gets better, the transient rate goes up. The third plot shows the glitch rate between June 1 and July 12. For the days after June 22, when LHO powered down and the low-frequency sensitivity improved (as shown by the fourth plot), the glitch rate is higher.

There are times when, within the same day, a change in sensitivity resulted in very different rates of transients. One such example is June 16, 2023 (fifth plot): there are more glitches in the second red box compared to the first one. From the sixth ASD plot, we can see that as the DARM sensitivity got better on June 16, the transient rate went up. And it went up further after the power down (76 W to 60 W) as the low-frequency sensitivity improved (green curve).

This analysis shows that some process is continuously generating these transients, and as the DARM sensitivity gets better, the rate of transients goes up. We are looking for times when there is a change in the amplitude or frequency of this noise, to correlate it with changes in different parts of the detector. We are increasingly certain that the coupling does not depend on ground motion; otherwise we would have seen noise variations over the last two months of data.

Images attached to this report
H1 ISC
gabriele.vajente@LIGO.ORG - posted 10:36, Thursday 27 July 2023 - last comment - 14:46, Friday 28 July 2023(71765)
CHARD_Y experiments

Yesterday during commissioning time I did a couple of experiments with CHARD_Y (71738)

How much margin do we have for CHARD_Y noise?

With the 10-100 Hz noise injection, I could estimate a CHARD_Y noise projection to DARM using the excess power method (ratio of PSDs). Using the measured transfer function between CHARD_Y and DARM gives the same result. The first plot shows the effect of the noise injection in CHARD_Y. The second plot shows the noise projection, and that we have a safety factor of about 30-100 above 15 Hz. We can use this information to design a new CHARD_Y filter.
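A minimal sketch of the excess-power method as described (PSD differences with the injection on vs. off); this runs on synthetic data with an invented coupling and is not the actual analysis code:

```python
import numpy as np
from scipy.signal import welch

def excess_power_projection(aux_quiet, aux_inj, darm_quiet, darm_inj, fs, nperseg=4096):
    """Estimate the aux->DARM coupling |TF|^2 from the excess power the
    injection produces in both channels, then project the quiet aux
    spectrum through it: the aux contribution to quiet DARM, as an ASD."""
    f, Pa_q = welch(aux_quiet, fs, nperseg=nperseg)
    _, Pa_i = welch(aux_inj, fs, nperseg=nperseg)
    _, Pd_q = welch(darm_quiet, fs, nperseg=nperseg)
    _, Pd_i = welch(darm_inj, fs, nperseg=nperseg)
    tf2 = (Pd_i - Pd_q) / (Pa_i - Pa_q)            # |TF|^2 from excess power
    return f, np.sqrt(np.clip(tf2, 0.0, None) * Pa_q)

# Toy demonstration: white noise in both channels, known coupling of 0.1.
rng = np.random.default_rng(0)
fs, n = 1024, 1024 * 64
inj = 30 * rng.normal(size=n)                      # loud injection
aux_q, aux_i = rng.normal(size=n), rng.normal(size=n) + inj
darm_q, darm_i = rng.normal(size=n), rng.normal(size=n) + 0.1 * inj

f, proj = excess_power_projection(aux_q, aux_i, darm_q, darm_i, fs)
_, Pd_q = welch(darm_q, fs, nperseg=4096)
safety = np.median(np.sqrt(Pd_q) / proj)   # ~10 here: equal noise PSDs, coupling 0.1
print(f"toy safety factor ~ {safety:.1f}")
```

The "safety factor" printed at the end is the ratio of the quiet DARM ASD to the projected aux contribution, the same quantity the second plot reports as 30-100 for CHARD_Y.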

Increasing the CHARD_Y gain by 3

In the second experiment I increased the CHARD_Y gain by a factor of 3, since the model predicted that the loop would be stable. This would give me more suppression at low frequency and a bit of suppression of the 2.6 Hz peak. This is pretty much what we observed. The change in the DARM or CHARD_Y residual RMS isn't large, as expected. So there is no effect on the sensitivity. We should try to design a better filter that gives us suppression at 1 Hz and 2.6 Hz to reduce the CHARD_Y RMS.

Note that the 1 Hz peak in CHARD_Y is coherent with PR2 and PR3 damping loops, so maybe we can gain something by also looking at those damping loops.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 13:11, Thursday 27 July 2023 (71769)

Here's a proposed new CHARD_Y controller, based on the 3x gain, adding more suppression at 1 and 2.6 Hz, and with increased noise injection above 10 Hz that should be ok given the measured coupling to DARM.

The last plot shows the predicted performance of this new loop: residual motion below 3 Hz should be largely suppressed. Only the 3.4 Hz peak is increased, and by less than a factor of 2.

Images attached to this comment
Non-image files attached to this comment
gabriele.vajente@LIGO.ORG - 09:35, Friday 28 July 2023 (71789)

Engaging this new controller with a gain of 180 caused a lock loss with an oscillation at 3.4 Hz, which is the expected higher UGF.

Probably the plant measurement is not accurate enough at such high frequency.

 

Images attached to this comment
gabriele.vajente@LIGO.ORG - 14:46, Friday 28 July 2023 (71796)

Tried a slightly modified controller with more phase margin at 3-4 Hz, now uploaded to FM9. This can be engaged with the nominal gain of 60, and it is supposed to be stable all the way to the working gain of 180.

However, increasing the gain to 120 already generates a large peak at 3.4 Hz. This is consistent with the previous lock loss.

The low-frequency performance with this new controller at a gain of 120 is good, as expected, but the new peak at 3.4 Hz actually increases the DARM RMS. I believe this increase is responsible for the higher noise in DARM at >10 Hz, since there isn't much coherence between CHARD_Y and DARM.

I wanted to measure the CHARD_Y plant again, since the previous measurement was not very good at >2 Hz, and I suspect the real plant gives less phase margin than the fit model we have now. Unfortunately I increased the noise amplitude too much and we lost lock. To be repeated.

I also tried to reduce the coupling of CHARD_Y to DARM by fine-tuning the ITMY A2L, but I couldn't get any improvement. I injected a 21.5 Hz line in CHARD_Y, but it showed up in DARM with a lot of sidebands and appeared quite non-stationary. More care will be needed to retune the A2L to reduce the CHARD_Y coupling to DARM; this might be necessary if the new controller injects too much noise at frequencies above 10 Hz.
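The kind of stability check discussed in this thread (UGF location and phase margin) can be sketched generically in scipy; the plant, pole, and gain below are invented to place a unity-gain frequency near 3.4 Hz and are not the actual CHARD_Y loop model:

```python
import numpy as np
from scipy import signal

# Toy open-loop transfer function k / (s * (s + p)) with UGF near 3.4 Hz.
p = 2 * np.pi * 10.0   # assumed pole at 10 Hz
k = 1419.0             # gain chosen to put the 0 dB crossing near 3.4 Hz
olg = signal.TransferFunction([k], [1.0, p, 0.0])

w = 2 * np.pi * np.logspace(-1, 1, 2000)   # 0.1 - 10 Hz
w, mag_db, phase_deg = signal.bode(olg, w)

i = np.argmin(np.abs(mag_db))              # bin closest to unity gain (0 dB)
ugf_hz = w[i] / (2 * np.pi)
pm_deg = 180.0 + phase_deg[i]              # phase margin at the UGF
print(f"UGF ~ {ugf_hz:.1f} Hz, phase margin ~ {pm_deg:.0f} deg")
```

A gain increase slides the 0 dB crossing up in frequency, eating phase margin, which is the mechanism behind the 3.4 Hz oscillation seen when engaging the higher gain.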

Images attached to this comment
H1 General
thomas.shaffer@LIGO.ORG - posted 10:32, Thursday 27 July 2023 (71766)
Out of observing 1614-1729UTC for CAL meas and SQZ tuning

Back to observing after CAL measurements and SQZ tuning.

H1 DetChar (DetChar)
andrew.lundgren@LIGO.ORG - posted 15:19, Wednesday 26 July 2023 - last comment - 11:04, Thursday 27 July 2023(71739)
Jitter glitches discovered by GravitySpy volunteers
Marissa, Shania, Tabata, Gaby, Brennan, Andy

GravitySpy volunteers noticed a new type of glitch that happened on May 31 and June 1 (GSpy link). Many of these glitches appeared as a high-Q line at 590 Hz, but they also appear as a stack of high-Q lines in the 100 to 600 Hz range, and sometimes only below 300 Hz (plot 1).

The cause seems to be physical motion of the PSL periscope (plot 2) in short bursts. The coupling to DARM is through the periscope motion causing beam jitter (plot 3). The MCL/REFL_SERVO loops are also witnesses. Some of the glitches in the periscope are broadband, while others have a few well-defined frequencies (plot 4).

There was an increase in the motion of the periscope during this period, seen by BLRMS of the periscope accelerometer (plot 5). The motion was still elevated above the previous level when this period ended. This kind of jitter glitch has occurred again in short bursts in some later days, but not for sustained periods. Scans of selected times are at this link (needs LVK login). Note that this configuration does not include the jitter or periscope accelerometer channels.
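A BLRMS trend like the periscope-accelerometer one described above can be sketched as follows (a generic illustration; the band edges, sample rate, and stride are assumptions, not the site's configuration):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def blrms(x, fs, f_lo, f_hi, stride_s=1.0):
    """Band-pass the series, then take the RMS over non-overlapping strides."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    n = int(stride_s * fs)
    nseg = len(y) // n
    return np.sqrt(np.mean(y[: nseg * n].reshape(nseg, n) ** 2, axis=1))

# Example: a unit-amplitude 100 Hz tone inside a 50-200 Hz band keeps its
# RMS (~0.71), while an out-of-band 5 Hz component is filtered away.
fs = 2048
t = np.arange(fs * 8) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
trend = blrms(x, fs, 50, 200)
print(trend.round(3))
```

An elevated BLRMS trend, as in plot 5, then flags periods when motion in the chosen band is above its usual level.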
Images attached to this report
Comments related to this report
marissa.walker@LIGO.ORG - 11:04, Thursday 27 July 2023 (71768)DetChar

We also see, in some of these glitches, a loud noise in HAM1 HPI and LVEA floor accelerometer channels. See the attached example omega scans showing a glitch from June 26 appearing in H1:HPI-HAM1_BLND_L4C_Y_IN1_DQ and H1:PEM-CS_ACC_LVEAFLOOR_HAM1_Z_DQ. Other accelerometers also show up in the full omega scans here, but these two were especially strong in several of the glitch times we examined. The noise is obvious in the omega scans at the specific glitch times, but this motion can't be clearly seen in the day-long spectrograms on the summary pages.

Non-image files attached to this comment