H1 PSL
ryan.short@LIGO.ORG - posted 16:20, Monday 08 January 2024 (75255)
PSL 10-Day Trends

FAMIS 20010

PMC reflected power has been slightly increasing over the past ~3 days, and FSS transmitted power has been decreasing over the same period, but I don't see that PMC transmitted power has changed much at all.

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:13, Monday 08 January 2024 (75254)
OPS Day Shift Summary

TITLE: 01/08 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

IFO is LOCKING and at TRANSITION_FROM_ETMX

EQ recovery is going smoothly

Lockloss alogs:

Lockloss 21:13 UTC (EQ)

Lockloss 18:53 UTC

Lockloss 16:26 UTC

Other:

LOG:

Start Time | System | Name    | Location        | Laser Haz | Task                      | End Time
16:24      | FAC    | Tyler   | EX, EY          | N         | Tumbleweed check          | 17:24
17:17      | SUS    | Randy   | MX              | N         | Inventory                 | 18:03
18:03      | FAC    | Karen   | Optics/Vac Prep | N         | Technical Cleaning        | 18:03
18:29      | VAC    | Travis  | MX              | N         | Pfeiffer Box Check        | 19:29
21:46      | SUS    | Randy   | LVEA            | N         | Tue Maint prep            | 22:01
21:57      | PEM    | Ryan C  | CER             | N         | Looking at dust monitors  | 22:15
22:45      | VAC    | Gerardo | LVEA            | N         | Vacuum prep for Tue       | 23:05
LHO General
ryan.short@LIGO.ORG - posted 16:00, Monday 08 January 2024 (75253)
Ops Eve Shift Start

TITLE: 01/08 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 2mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.51 μm/s
QUICK SUMMARY:

H1 is relocking following a M7.0 earthquake; currently up to MOVE_SPOTS. All systems look good; wind is low and microseism is sitting just below the 90th percentile.

H1 TCS
camilla.compton@LIGO.ORG - posted 14:45, Monday 08 January 2024 (75249)
Status of CO2 Lasers: Power into vac more stable, CO2X still relocking weekly

Since TJ implemented bootstrapping on the CO2_PWR guardians (74075), the power sent into vacuum has been much more stable.

We installed a new chiller on CO2X (73704) on October 27th. It's been more stable and decaying less quickly since then, but it has still been relocking around once a week. Plot attached.

Images attached to this report
H1 OpsInfo (PEM, SUS)
camilla.compton@LIGO.ORG - posted 14:42, Monday 08 January 2024 - last comment - 08:32, Tuesday 09 January 2024(75252)
Canceled Tomorrow Morning's PEM Magnetic Injections and SUS In-Lock Charge Measurements

As in 74872 and 74741, I have taken PEM_MAG_INJ and SUS_CHARGE from WAITING to DOWN so that they do not run tomorrow. Instead, tomorrow Louis and Sheila will try the risky DARM loop swaps and calibration starting at 7am PT. To re-enable the automated measurements, the nodes should be requested to INJECTIONS_COMPLETE before next Tuesday.
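
For reference, a minimal sketch of how the nodes could be re-requested from a workstation, assuming the usual H1:GRD-<node>_REQUEST channel convention and pyepics; this is illustrative only, not a site script:

# Hedged sketch: re-request the automated-measurement guardian nodes.
from epics import caput

for node in ("PEM_MAG_INJ", "SUS_CHARGE"):
    caput(f"H1:GRD-{node}_REQUEST", "INJECTIONS_COMPLETE")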

Comments related to this report
camilla.compton@LIGO.ORG - 08:32, Tuesday 09 January 2024 (75268)

IFO was unlocked due to wind this morning. Re-requested both guardians to INJECTIONS_COMPLETE.

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 13:19, Monday 08 January 2024 (75251)
Lockloss 21:13 UTC

Lockloss due to a M7.0 earthquake from the Philippines.

Staying in DOWN until it passes.

H1 SQZ
camilla.compton@LIGO.ORG - posted 12:24, Monday 08 January 2024 - last comment - 08:58, Thursday 11 January 2024(75250)
Attempted to change ADF frequency at 20:05UTC - Unsure of Issue so Reverted

We've been seeing the SQZ angle not optimizing correctly (75245, 75151). At 20:05 Ibrahim took us into commissioning and I tried to change the ADF frequency H1:SQZ-ADF_VCXO_FREQ_SET from 1300 Hz to 200 Hz. The ADF line didn't move from the 1300 Hz region; it just became noisy when I changed it, and the PLL didn't lock. Unsure why the ADF wouldn't move, I also tried 800 Hz, to no avail. The ADF frequency hasn't been successfully changed since Daniel adjusted the model in May (69453).

Comments related to this report
camilla.compton@LIGO.ORG - 17:00, Monday 08 January 2024 (75257)

At ~16:15 UTC, once we got to NLN, I tried this again and failed.

Vicky showed (image) that both H1:SQZ-ADF_VCXO_CONTROLS_SETFREQUENCYOFFSET and H1:SQZ-ADF_VCXO_FREQ_SET need to be changed; once both were changed, the ADF moved successfully. I also turned the size of the line down by turning up H1:SQZ-RLF_INTEGRATION_ADFATTENUATE, but it was still large and probably reduced our range by a few Mpc. Attached are the settings that were changed and then reverted.

The servo didn't seem to be able to converge on zero (plot attached). After trying twice, I reverted the changes.

Images attached to this comment
camilla.compton@LIGO.ORG - 08:58, Thursday 11 January 2024 (75314)

Dhruva points out that to correctly change both of these settings we can use the script in /sqz/h1/scripts/ADF/: 'python setADF.py -f newfrequency'.
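
For illustration, a minimal sketch of what such a helper might look like; the channel pair follows the comment above, but the argument handling is an assumption and this is not the actual contents of setADF.py:

# Hedged sketch of an ADF frequency-change helper, NOT the real setADF.py.
# Both setpoint channels must be written together (see above); pyepics assumed.
import argparse
from epics import caput

def set_adf_frequency(freq_hz):
    """Write the new ADF line frequency to both VCXO setpoint channels."""
    caput("H1:SQZ-ADF_VCXO_CONTROLS_SETFREQUENCYOFFSET", freq_hz)
    caput("H1:SQZ-ADF_VCXO_FREQ_SET", freq_hz)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Move the ADF line frequency")
    parser.add_argument("-f", "--frequency", type=float, required=True,
                        help="new ADF frequency in Hz (e.g. 200)")
    set_adf_frequency(parser.parse_args().frequency)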

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 11:10, Monday 08 January 2024 (75247)
Lockloss 16:26 UTC

No particular cause found for this lockloss.

The lockloss tool shows that EX L3 saturated first, prompting the lockloss.

The following lock acquisition was fully automatic.

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 10:55, Monday 08 January 2024 (75246)
Lockloss

Lockloss 01/08 @ 18:53UTC

LHO VE
david.barker@LIGO.ORG - posted 10:14, Monday 08 January 2024 (75244)
Mon CP1 Fill

Mon Jan 08 10:11:30 2024 INFO: Fill completed in 11min 26secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 CDS
patrick.thomas@LIGO.ORG - posted 09:02, Monday 08 January 2024 (75242)
Restarted IOC service on h1digivideo3 for FCES IR TRANS B camera
Dave alerted me that this had frozen as indicated by a blue screen on the client. The systemd service status reported:

● pylon-camera-server@H1-VID-CAM-FCES-IR-TRANS-B.service - Basler 2D GigE camera RTP H264 UDP server
     Loaded: loaded (/etc/systemd/system/pylon-camera-server@.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/pylon-camera-server@.service.d
             └─lho.conf
     Active: active (running) since Tue 2023-10-03 10:00:32 PDT; 3 months 3 days ago
   Main PID: 402217 (pylon-camera-se)
      Tasks: 40 (limit: 77094)
     Memory: 114.8M
        CPU: 3w 3d 1h 32min 52.488s
     CGroup: /system.slice/system-pylon\x2dcamera\x2dserver.slice/pylon-camera-server@H1-VID-CAM-FCES-IR-TRANS-B.service
             └─402217 /usr/bin/pylon-camera-server H1-VID-CAM-FCES-IR-TRANS-B.ini

Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Height: 540
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting GevSCPSPacketSize (packet size): 8192
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting GevSCPD (inter-packet delay): 25000
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Starting grabbing.
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting auto exposure: 0
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting exposure time: 200000
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting auto gain: 0
Jan 05 13:11:05 h1digivideo3 pylon-camera-server[402217]: Setting gain: 360
Jan 05 22:33:53 h1digivideo3 pylon-camera-server[402217]: The grab failed.
Jan 05 22:33:53 h1digivideo3 pylon-camera-server[402217]: The buffer was incompletely grabbed. This can be caused by performance problems of the network hardware used, i.e. network adapter, switch, or ethernet

I ran 'service pylon-camera-server@H1-VID-CAM-FCES-IR-TRANS-B restart' and it came back.

This is the first time I can recall this occurring on the new server and code.
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 08:03, Monday 08 January 2024 (75241)
OPS Day Shift Start

TITLE: 01/08 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 2mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.50 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING (17 hr 32 min lock).

LHO General
ryan.short@LIGO.ORG - posted 00:00, Monday 08 January 2024 (75240)
Ops Eve Shift Summary

TITLE: 01/08 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: H1 was locked the entire shift; current lock stretch up to 9.5 hours. Just one instance of lost observation from SQZ unlocking, but everything was brought back swiftly.

LHO General (SQZ)
ryan.short@LIGO.ORG - posted 20:08, Sunday 07 January 2024 - last comment - 14:49, Tuesday 09 January 2024(75239)
Ops Eve Mid Shift Report

State of H1: Observing at 159Mpc

Very quiet shift with H1 observing the entire time except at 03:25 UTC when SQZ unlocked (SQZ_OPO_LR Guardian reported "PZT voltage limits exceeded"). Guardians were able to bring everything back automatically and observing was resumed within 2 minutes. BNS range improved by 5-6Mpc after this event.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:35, Monday 08 January 2024 (75245)

We have a checker in SQZ_MANAGER to prevent this, which relocks the OPO if the PZT is not in the 50-110V range when the IFO is down. The PZT changed too fast for this checker to help, though, probably because of a 0.4degF LVEA temperature change in zone 4 at the time.
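
For illustration, a minimal sketch of that kind of range check; the PZT readback channel, the relock request, and how "IFO is down" is determined are placeholders, not the actual SQZ_MANAGER code:

# Hedged sketch of the OPO PZT range check described above; channel names and
# the relock request below are placeholders.
from epics import caget, caput

PZT_MIN_V, PZT_MAX_V = 50.0, 110.0  # acceptable OPO PZT range quoted above

def opo_pzt_out_of_range(pzt_channel="H1:SQZ-OPO_PZT_1_MON"):
    """Return True if the OPO PZT voltage is outside its healthy range."""
    pzt_volts = caget(pzt_channel)
    return not (PZT_MIN_V <= pzt_volts <= PZT_MAX_V)

def check_and_relock(ifo_is_down):
    # Only cycle the OPO while the IFO is down, as the checker above does.
    if ifo_is_down and opo_pzt_out_of_range():
        caput("H1:GRD-SQZ_OPO_LR_REQUEST", "DOWN")  # placeholder relock request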

The range increase appears to be because the SQZ angle was in a bad place before the relock (220 degrees rather than the nominal 180); see attached. Unsure whether we expect the OPO PZT changing to affect the SQZ angle.

This may be improved by moving the ADF closer to where we want to optimize (200 Hz?). Currently the ADF is at 1.3 kHz, but the best range is with SQZ not optimized at 1.3 kHz (75151). There could be two zero crossings of the ADF servo at 1.3 kHz, one with good 300 Hz SQZ and one with bad 300 Hz SQZ, and sometimes the servo takes us to the wrong one.

Images attached to this comment
camilla.compton@LIGO.ORG - 14:49, Tuesday 09 January 2024 (75281)

Attached is DARM before and after this relock.

Images attached to this comment
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:02, Sunday 07 January 2024 (75237)
OPS Day Shift Summary

TITLE: 01/07 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

IFO is in NLN and OBSERVING

Other:

The GraceDB query failure is still flashing on and off at times. We (and LLO) were experiencing the same thing yesterday (01/06), so this is probably still minor server delays/reconnections.

LOG:

None

LHO General
ryan.short@LIGO.ORG - posted 16:02, Sunday 07 January 2024 (75238)
Ops Eve Shift Start

TITLE: 01/07 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 7mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.61 μm/s
QUICK SUMMARY:

H1 has been locked and observing for 1.5 hours. Range this lock is a bit lower than last, but otherwise all systems look good.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:54, Friday 05 January 2024 - last comment - 09:45, Monday 08 January 2024(75204)
Locklosses when transitioning to new darm

Sheila, Louis, with help from Camilla and TJ

Louis and I have had several locklosses transitioning to the new DARM configuration.  We don't understand why.

This transition was done several times in December; one of those times was 15:15 UTC on December 19th, when the guardian state was used (74977). Camilla used the guardian git to show us the code that was loaded that morning, which seems very much the same as what we are using now. The only difference is a ramp time which Louis found was wrong and corrected, although this ramp time doesn't actually matter since the value is only being reset to the value it is already at.

We also looked in the filter archive and see that the H1SUSETMX filters have not been reloaded since December 14th, so the filters should be the same.  We also looked at the filters and believe that they should be correct.

In the last attachment to 74790 you can see that this configuration has more drive to the ESD at the microseism (the reduction in the ESD RMS comes from reduced drive around a few Hz), so this may be less robust when there is more wind and microseism. I don't think this is our current problem though, because we are losing lock due to a 2.6 Hz oscillation saturating the ESD.

We've tried to do this transition both in the way that it was done in December (using the NEW_DARM state) and by setting the flag in the TRANSITION_FROM_ETMX state, which I wrote in December but we hadn't tested until today. This code looks to have set everything up correctly, but we still lose lock due to a 2.6 Hz saturation of the ESD.

Camilla looked at the transition we did on December 19th; there was also a 2.6 Hz ring-up at that time, but perhaps with the lower microseism we were able to survive it. A solution may be to ramp to the new configuration more quickly (right now we use a 5 second ramp).
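
For illustration, a shorter ramp would be a small change in guardian code; a minimal sketch using the ezca LIGOFilter interface, where the filter-bank name and gain value are placeholders and this is not the actual TRANSITION_FROM_ETMX code:

# Hedged sketch only: ramp a filter-bank gain to its new value over 2 s instead of 5 s.
# In guardian states `ezca` is provided by the framework; the bank name is a placeholder.
def engage_new_darm_configuration(ezca, ramp_time=2.0):
    bank = ezca.get_LIGOFilter("SUS-ETMX_L3_DRIVEALIGN_L2L")
    bank.ramp_gain(1.0, ramp_time=ramp_time, wait=True)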

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:45, Monday 08 January 2024 (75243)

Elenna suggested the ASC could be making this transition unstable and that we could think about raising the gain of an ASC loop during the transition. On Friday's lockloss (attached) you can see CSOFT and DSOFT YAW wobble at 2.6 Hz. The HARD loops look fine.

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 09:58, Friday 05 January 2024 - last comment - 11:18, Monday 08 January 2024(75192)
h1hwsmsr stopped running at 22:14 Thu 04 Jan 2024 PST

Camilla, Erik, Dave:

h1hwsmsr (HWS ITMX and /data RAID) computer froze at 22:14 Thu 04 Jan 2024 PST. The EDC disconnect count went to 88 at this time.

Erik and Camilla have just viewed h1hwsmsr's console, which indicated a HWS driver issue at the time. They rebooted the computer to get the /data RAID NFS shared to h1hwsex and h1hwsmsr1. Currently the ITMX HWS code is not running; we will start it during this afternoon's commissioning break.

One theory for the recent instabilities is the camera_control code I started just before the break to ensure the HWS cameras are inactive (in external trigger mode) when H1 is locked. Every minute the camera_control code gets the status of the camera, which, along with the status of H1, lets it decide if the camera needs to be turned ON or OFF. With the main HWS code getting frames from the camera and the control code getting the camera status, there is a possible collision risk.

To test this, we will turn the camera_control code off at noon. I will rework the code to reduce the number of camera operations to the bare minimum.
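
A minimal sketch of that kind of rework, where the camera is only touched when the H1 lock state actually changes; the lock-state channel and the camera call are placeholders, not the actual camera_control code:

# Hedged sketch: only command the camera on a lock-state transition, so this loop
# and the HWS frame-grabbing code rarely talk to the camera at the same time.
import time
from epics import caget

def set_camera_active(active):
    # Placeholder for the real call that switches the camera in/out of external trigger mode.
    print(f"camera active -> {active}")

def run_camera_control(poll_s=60):
    last_locked = None
    while True:
        locked = bool(caget("H1:GRD-ISC_LOCK_OK"))  # placeholder "IFO locked" flag
        if locked != last_locked:
            set_camera_active(not locked)  # camera off while H1 is locked
            last_locked = locked
        time.sleep(poll_s)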

Comments related to this report
camilla.compton@LIGO.ORG - 12:57, Friday 05 January 2024 (75200)TCS

At ~20:00 UTC we left the HWS code running (restarted ITMX) but stopped Dave's camera control code (74951) on ITMX, ITMY, and ETMY, leaving the cameras off. They'll be left off over the weekend until Tuesday. ETMX is still down from yesterday (75176).

If the computers remain up over the weekend we'll look at incorporating the camera control into the hws code to avoid crashes. 

camilla.compton@LIGO.ORG - 15:25, Friday 05 January 2024 (75203)

Erik swapped h1hwsex to a new v1 machine. We restarted the HWS code and turned the camera to external trigger mode so it too should remain off over the weekend.

ryan.short@LIGO.ORG - 16:29, Friday 05 January 2024 (75208)OpsInfo

I've commented out the HWS test entirely (only ITMY was being checked) from DIAG_MAIN since no HWS cameras are capturing data. Tagging OpsInfo.

erik.vonreis@LIGO.ORG - 17:24, Friday 05 January 2024 (75210)

Trace from h1hwsmsr crash attached.

Images attached to this comment
camilla.compton@LIGO.ORG - 11:18, Monday 08 January 2024 (75248)TCS

All 4 computers remained up and running over the weekend with the camera on/off code paused. We'll look into either making Dave's code smarter or incorporating the camera on/off switching into the hws-server code so that we don't send multiple calls to the camera at the same time, which is our leading theory as to why these HWS computers have been crashing.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 15:52, Tuesday 19 December 2023 - last comment - 10:23, Wednesday 10 January 2024(74916)
comparison of darm with OM2 hot vs cold

Jenne, Naoki, Louis, Camilla, Sheila

Here is a comparison of the DARM CLEAN spectrum with OM2 hot vs cold. The second screenshot shows a time series of OM2 cooling off. The optical gain increased by 2%, as was seen in the past (for example 71087). Thermistor 1 shows that the thermal transient takes much longer (12+ hours) than what thermistor 2 indicates (2 hours).

Louis posted a comparison of the calibration between the two states; there are small differences in calibration of ~1% (74913). While the DARM spectrum is worse below 25 Hz, it is similar at 70 Hz, where in the past we thought that the sensitivity was worse with OM2 cold. From 100-200 Hz the sensitivity seems slightly better with OM2 cold; some of the peaks are removed by Jenne's jitter subtraction (74879), but there also seems to be a lower level of noise between the peaks (which could be small enough to be a calibration issue). At high frequency the cold OM2 noise seems worse; this could be because of the squeezing. We plan to take data with some different squeezing angles tomorrow and will check the squeezing angle as part of that.

So it seems that this test gives us a different conclusion than the one we did in the spring/summer: it now seems that we should be able to run with OM2 cold to have better mode matching from the interferometer to the OMC. We may not have had our feedforwards well tuned in the previous test, or perhaps some other changes in the noise mean that the result is different now.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 09:57, Wednesday 20 December 2023 (74933)

Is this additional noise at low frequency due to the same non-stationarity we observed before, which we believe is related to the ESD upconversion? Probably not; here's why.

First plot compares the strain spectrum from two times with cold and hot OM2. This confirms Sheila's observation.

The second and third plots are spectrograms of GDS-CALIB_STRAIN during the two periods. Both show non-stationary noise at low frequency. The third plot shows the strain spectrogram normalized to the median of the hot OM2 data: besides the non-stationarity, it looks like the background noise is higher below 30 Hz.

This is confirmed by looking at the BLRMS in the 16-60 Hz region for the two times, as shown in the fourth plot: it is higher with cold OM2.

Finally, the last plot shows the correlation between the ESD RMS and the strain BLRMS, normalized to the hot OM2 state. There is still a correlation, but it appears again that the cold OM2 state has additional background noise: when the ESD RMS is at the lower end, the strain BLRMS settles to higher values.
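
For context, a band-limited RMS like the 16-60 Hz one used here can be computed directly from a PSD spectrogram; a minimal sketch with scipy, not the code used for the attached plots:

# Hedged sketch of a 16-60 Hz band-limited RMS (BLRMS) from a strain time series.
import numpy as np
from scipy.signal import spectrogram

def blrms(strain, fs, f_lo=16.0, f_hi=60.0, seg_s=4.0):
    """Band-limited RMS vs time, integrating a PSD spectrogram over [f_lo, f_hi]."""
    f, t, Sxx = spectrogram(strain, fs=fs, nperseg=int(seg_s * fs))
    band = (f >= f_lo) & (f <= f_hi)
    df = f[1] - f[0]
    return t, np.sqrt(np.sum(Sxx[band, :] * df, axis=0))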

Images attached to this comment
sheila.dwyer@LIGO.ORG - 15:57, Wednesday 20 December 2023 (74949)

Here is the same comparison, without squeezing, using times from 74935 and 74834.

This suggests that where cold OM2 seems better than hot OM2 above, it is due to the squeezing (and the jitter subtraction Jenne added, which is also on in this plot for cold OM2 but not for hot OM2). The additional noise with cold OM2 reaches up to about 45 Hz.

Images attached to this comment
naoki.aritomi@LIGO.ORG - 14:16, Friday 22 December 2023 (74997)SQZ

After we optimized the ADF demod phase in 74972, the BNS range seems better and is consistently 160-165 Mpc. The attached plot shows the comparison of OM2 cold/hot with/without SQZ. The OM2 cold with SQZ trace was measured after optimization of the ADF demod phase; the other measurements are the same as in Sheila's previous plots.

This plot supports what Sheila says in the previous alogs.

  • OM2 cold is worse below 40 Hz for both SQZ/no SQZ.
  • Without SQZ, OM2 cold and hot are almost the same above 40 Hz.
  • With SQZ, OM2 cold is better between 100-600 Hz, but worse above 1 kHz. This difference could be due to SQZ, and we could try to optimize SQZ around 100 Hz with OM2 hot.
Images attached to this comment
victoriaa.xu@LIGO.ORG - 18:12, Thursday 04 January 2024 (75181)SQZ

Echoing the above, and summarizing a look at OM2 with SQZ in both Sept 2023 and Dec 2023 (a running GPS-times dictionary is attached here).

If we compare the effect of squeezing, there is higher kHz squeezing efficiency with hot OM2. We can look either at just the DARM residuals dB[sqz/unsqz] (top), or do a subtraction of non-quantum noise (bottom), which shows that hot OM2 improved the kHz squeezing level by ~0.5 dB at 1.7 kHz (the blue sqz BLRMS 5). This is consistent with the summary pages: SQZ has not reached 4.5 dB since cooling OM2 (74861). This possibly suggests better SQZ-OMC mode-matching with hot OM2.

Without squeezing, cold OM2 has more optical gain and more low-frequency non-quantum noise. This suggests better IFO-OMC mode-matching with cold OM2.

In total, it's almost a wash for kHz sensitivity: heating OM2 loses a few % optical gain, but recovers 0.2-0.5 dB of shot noise squeezing. 

It's worth noting the consistent range increases with SQZ tuning and improvements: even in FDS, there is a non-zero contribution of quantum noise down to almost 50 Hz. For example, Naoki's adjustment of the sqz angle setpoint on 12/21 (74972) improved the range, as did Camilla's January sqz tuning (75151). Looking at DARM (bottom green/purple traces), these sqz angle tunings reproducibly improved quantum noise between about 60-450 Hz.
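
A minimal sketch of the two comparisons described above, i.e. the raw dB[sqz/unsqz] ratio and the same ratio after subtracting an estimate of the non-quantum noise; the PSD arrays and the classical-noise estimate are assumed inputs, not the actual analysis:

# Hedged sketch of quantifying squeezing from DARM PSDs.
import numpy as np

def squeezing_db(psd_sqz, psd_nosqz, psd_classical=None):
    """dB of sqz/unsqz DARM; optionally subtract a non-quantum noise estimate first."""
    if psd_classical is not None:
        psd_sqz = psd_sqz - psd_classical
        psd_nosqz = psd_nosqz - psd_classical
    return 10.0 * np.log10(psd_sqz / psd_nosqz)  # negative values indicate squeezing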

Images attached to this comment
Non-image files attached to this comment
sheila.dwyer@LIGO.ORG - 11:20, Monday 08 January 2024 (75195)

Here are some more plots of the times that Vicky plotted above. 

The first attachment is just a DARM comparison of all 4 no-SQZ times: OM2 cold vs hot, December vs September.

Comparing OM2 hot in September vs December shows that our sensitivity from 20-40 Hz has gotten worse since September; the MICH coherence seems lower while the jitter and SRCL coherences seem similar. The same comparison for OM2 cold shows that our sensitivity has also gotten worse from 15-30 Hz.

Comparing cold vs hot, in September the MICH coherence did get worse from 60-80 Hz for cold OM2, which might explain the worse sensitivity in that region. The MICH coherence got better from 20-30 Hz, where the sensitivity was better for cold OM2. The December test had better-tuned MICH FF for both hot and cold OM2, so it is the better test of the impact of the curvature change.

As Gabriele pointed out with his BRUCO (74886), there is extra coherence with DHARD Y for cold OM2 at the right frequencies to help explain the extra noise. There isn't much change in the HARD pitch coherence between these December times, but the last attachment here shows a comparison of the HARD yaw coherences for hot and cold OM2 in December.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 10:23, Wednesday 10 January 2024 (75298)

Peter asked if the difference in coherence with the HARD Yaw ASC was due to a change in the coupling or the control signal. 

Here is a comparison of the control signals with OM2 hot and cold; they look very similar at the frequencies of the coherence.

Images attached to this comment