H1 General (SQZ)
anthony.sanchez@LIGO.ORG - posted 21:35, Thursday 03 October 2024 (80455)
SQZ OPO Temp adjustment

Starting 3:46 UTC

I adjusted H1:SQZ-OPO_TEC_SETTEMP to change the OPO TEC temperature while watching the H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT ndscope (the OPO temp ndscope found in the SQZ scopes dropdown).

Ending at 3:50 UTC
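
For illustration, a minimal sketch (not the actual procedure used) of stepping the set temperature while monitoring the RF6 signal, assuming pyepics and write access to these channels; the step size and settle time here are hypothetical:

    import time
    from epics import caget, caput

    SETTEMP = "H1:SQZ-OPO_TEC_SETTEMP"            # OPO TEC set temperature
    MONITOR = "H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT"    # signal watched on the ndscope

    step = 0.001   # hypothetical step size per adjustment
    for _ in range(10):
        caput(SETTEMP, caget(SETTEMP) + step)
        time.sleep(10)   # let the TEC and the RF6 signal settle
        print(f"{SETTEMP} = {caget(SETTEMP):.4f}, {MONITOR} = {caget(MONITOR):.3f}")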


Sheila and I discussed doing a SQZ Angle adjustment but it seemed fine at the time.

 

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 16:42, Thursday 03 October 2024 (80454)
Thursday Eve Shift start

TITLE: 10/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan Short
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.26 μm/s
QUICK SUMMARY:

H1 is currently in Nominal_LOW_NOISE[600] with little wind and decreasing secondary microseism.
 

Start Time System Name Location Lazer_Haz Task Time End
21:58 PEM Robert, guests Y-arm n CE equipment demonstration 01:58
21:58 PEM Anamaria, guests X-arm n CE equipment demonstration 01:58

 

LHO General
ryan.short@LIGO.ORG - posted 16:30, Thursday 03 October 2024 (80452)
Ops Day Shift Summary

TITLE: 10/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Busy day today, with multiple lock acquisitions and commissioning activities this morning. Unfortunately, not much observing time for H1 today, but it's relocking now, currently up to MAX_POWER.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
14:47 FAC Tyler, MM All Buildings n Glycol samples 17:57
15:07 TCS Camilla LVEA n Getting parts 15:24
15:10 FAC Karen Opt/Vac Labs n Technical cleaning 15:37
15:35 ISC Sheila LVEA/Opt Lab n Gathering parts 15:46
15:47 SQZ Sheila, Camilla LVEA Local Moving OPO crystal 19:03
17:24 ISC Elenna Remote n Testing PRCL FF 18:22
17:58 FAC Tyler, Richard, Trane MY n Looking at chillers 19:21
18:26 CDS Fil CER n Taking pictures 18:31
18:44 CDS Fil FCES n Checking extension cords 18:52
18:44 VAC Gerardo LVEA n Replace pump supply on HAM1 19:05
20:07 SQZ Sheila LVEA Local Checking PD on SQZT7 20:36
20:59 SQZ Sheila, Camilla LVEA Local Fixing OPO alignment 21:51
21:58 PEM Robert, guests Y-arm n CE equipment demonstration Ongoing
21:58 PEM Anamaria, guests X-arm n CE equipment demonstration Ongoing
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 15:55, Thursday 03 October 2024 - last comment - 11:47, Friday 04 October 2024(80453)
Lockloss @ 22:19 UTC

Lockloss @ 22:19 UTC - link to lockloss tool

No obvious cause, but it looks like the IMC lost lock at the same time as the arms. However, I don't suspect the FSS in this case.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 11:47, Friday 04 October 2024 (80466)
Images attached to this comment
H1 SQZ
camilla.compton@LIGO.ORG - posted 15:51, Thursday 03 October 2024 (80451)
Translated OPO Crystal to new spot to Reduce Losses

Sheila, Camilla. WP 1211612117. Last done in October 2023: 73524.

Followed instructions from 73524 and 65684, with additional notes below. 

Took SQZT7 area to local laser hazard and plugged straight into the OPO_IR_PD for our red signal and the OPO_TRANS PD for our green signal. Trying to take the signals from the outside of the table or the SQZ racks was confusing. 

Setup:

What we did:

Today, 75 uW of OPO trans power corresponded to 0.945 on the OPO refl diode; last night it was 4.161 mW on the OPO refl diode. This means that after the spot move we now need to inject much less power into the OPO to get the same output power.
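
Assuming both refl-diode readings are in the same units, a quick back-of-the-envelope ratio:

    # Rough ratio of OPO refl power needed for the same 75 uW trans power,
    # before vs. after the crystal spot move (assumes both readings share units).
    refl_before = 4.161   # last night
    refl_after = 0.945    # today
    print(f"~{refl_before / refl_after:.1f}x less injected power needed")   # ~4.4x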

Images attached to this report
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 15:06, Thursday 03 October 2024 (80450)
OPO PZT voltage changes squeezing

Camilla and I ran SQZ_ALIGNMENT and SCAN_SQZ_ANG with the offset on PZT2 (we are using H1:SQZ-CLF_SERVO_SLOWOUTOFS) set to 0, while the nominal value is 10. We then went to no squeezing, set the offset back to its nominal value, and saw reduced squeezing when we reinjected squeezing. We could recover the better squeezing level by slowly stepping this offset back to 0 V. We think this could be because the PZT setting changes the alignment of the OPO and of the squeezed beam coming out of it (which also changes the squeezing angle).

A similar effect was seen with PZT1 in 80396 and 78529. It would be interesting to try changing this offset another time and see if we can recover the squeezing by readjusting the alignment and squeezing angle.
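
For reference, a minimal sketch (assuming pyepics and that writing to this channel is appropriate; step size and dwell time are hypothetical) of slowly stepping the PZT2 offset back toward 0 V as described above:

    import time
    from epics import caget, caput

    CHAN = "H1:SQZ-CLF_SERVO_SLOWOUTOFS"   # PZT2 offset
    target, step, dwell = 0.0, 0.5, 5.0    # volts, volts per step, seconds between steps

    value = caget(CHAN)
    while abs(value - target) > step:
        value -= step if value > target else -step
        caput(CHAN, value)     # small steps so the OPO/SQZ loops can follow
        time.sleep(dwell)
    caput(CHAN, target)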

H1 ISC
sheila.dwyer@LIGO.ORG - posted 14:55, Thursday 03 October 2024 (80449)
OPO pump power adjust half wave plate is not working

The picomotor to adjust the OPO pump power is not working, H1:SYS-MOTION_C_PICO_I_MOTOR_2.  When someone commands the waveplate to turn from medm, there is a clicking sound on the table that seems to come from the rotation stage, but it doesn't move. 

We need to adjust this by hand for now if the ISS runs out of range (or alternatively, we can change the ISS set point).

FRS: 32288

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 11:55, Thursday 03 October 2024 - last comment - 14:55, Thursday 03 October 2024(80443)
Lockloss @ 18:27 UTC

Lockloss @ 18:27 UTC - link to lockloss tool

Happened at the end of commissioning time while the PRCL NB injection was being run, although I don't suspect that as the cause. There was also work happening near HAM7 and the SQZ tables. The only thing I notice in the trends is some motion in ETMX L3 and DARM as the first action; it's hard to tell which happens first. No other obvious cause, including the FSS.

The FSS was very unhappy when trying to relock the IMC after this lockloss. I needed to toggle the autolocker a couple of times before the glitching calmed down enough to allow the IMC to lock.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 14:55, Thursday 03 October 2024 (80448)

H1 back to observing at 21:49 UTC. Recovery took a bit longer than usual due to a lockloss at TRANSITION_FROM_ETMX from a small seismic event (see screenshot from Jim) and SQZ optimization (see incoming alog from Sheila/Camilla).

Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 11:50, Thursday 03 October 2024 (80445)
New PRCL Feedforward fit

After a failed test of the "correct" PRCL feedforward fit today (80444 and 80287), I tried again to fit a PRCL feedforward, this time using the good measurements Camilla took in 80444, using both the PRCL excitation and the correct PRCL preshaping.

There was some trial and error here, mainly due to the difficulty in matching the phase as well as the rising gain of the transfer function above 100 Hz, which you can see in this plot. This results in the best fit having a high-Q pole just above 250 Hz, and of course the usual struggle to keep the gain low below 10 Hz. I found better results when I limited the gain between 1 and 10 Hz to a maximum of 0.5, reduced my fit range to 10-60 Hz, which is where the sensitivity improvement is required, and limited the gain from 100-1000 Hz to 0.02. PRCL is limiting the noise up to 30 Hz and is far below the noise floor around 100 Hz, so it's probably OK to keep the PRCL noise the same or just slightly worse above 60 Hz (NB alog).
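
As a rough illustration (not the actual fitting code; the helpers and array names are hypothetical), the band-dependent constraints described above could be encoded as per-frequency gain caps and a fit-band mask before handing the measured transfer function to the fitter:

    import numpy as np

    def gain_limits(freq):
        """Per-frequency cap on the allowed feedforward filter gain."""
        limit = np.full(freq.shape, np.inf)
        limit[(freq >= 1) & (freq < 10)] = 0.5        # keep gain low below 10 Hz
        limit[(freq >= 100) & (freq <= 1000)] = 0.02  # keep gain low above 100 Hz
        return limit

    def fit_band(freq):
        """Mask selecting the 10-60 Hz band where the fit accuracy matters most."""
        return (freq >= 10) & (freq <= 60)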

Finally, with Gabriele's help, I found a fit that worked! Here is the injection comparison with no feedforward and the new feedforward. I updated the DARM traces, so you can see that the PRCL contribution to DARM is slightly worse around 100 Hz, but much better up to 60 Hz, by up to a factor of 10.

The new filter is in FM7 (I have triple checked this!) and I have added it to the guardian and SDFed this in observe (screenshot).

I did some chopping of the PRCL FF to check the improvement in DARM, see plot. I think there is improvement between 20-30 Hz, but it's hard to tell.

I set out to rerun the noise budget injection of PRCL so we could remake the plots, but there was a lockloss while I was measuring (I think because people were on the floor?). I don't think the lockloss was from the NB injection.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 08:50, Thursday 03 October 2024 (80441)
Thu CP1 Fill

Thu Oct 03 08:09:57 2024 INFO: Fill completed in 9min 53secs

Jordan confirmed a good fill curbside.

It is getting colder outside, so I've increased the trip temp from -110C to -100C.

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 07:35, Thursday 03 October 2024 - last comment - 10:09, Thursday 03 October 2024(80439)
Ops Day Shift Start

TITLE: 10/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.30 μm/s
QUICK SUMMARY: H1 just lost lock about 15 minutes ago; will start looking into it shortly, although the cause may be a recent earthquake out of Japan.

Comments related to this report
ryan.short@LIGO.ORG - 08:12, Thursday 03 October 2024 (80440)

Locklosses from last night:

  • 08:31 UTC - only thing out of the ordinary I notice is motion mostly in the L2 stages of the QUADs about 150ms before the lockloss, see screenshot
  • 14:18 UTC - nothing obvious; doesn't look like the FSS, no sign of glitches in DARM or ETMX either
Images attached to this comment
ryan.short@LIGO.ORG - 10:09, Thursday 03 October 2024 (80442)

H1 back to NLN at 16:38 UTC. Going straight into planned commissioning time.

H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 22:07, Wednesday 02 October 2024 (80438)
Wednesday Ops Eve Shift End

TITLE: 10/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:

Another Lockloss @ 1:23 UTC 

Relocking was rather difficult even after an Initial Alignment due to the IMC going into fault because of the FSS.
I eventually took ISC_LOCK to DOWN, the PSL FSS to DOWN, and the IMC to DOWN, then requested NOMINAL_LOW_NOISE from ISC_LOCK. I don't know why, it just seemed to work for me; your mileage may vary.
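
For reference, a minimal sketch of that sequence using the Guardian request channels; the channel names and the assumption that state names can be written directly with caput are mine, not a vetted procedure:

    from epics import caput

    caput("H1:GRD-ISC_LOCK_REQUEST", "DOWN")
    caput("H1:GRD-PSL_FSS_REQUEST", "DOWN")
    caput("H1:GRD-IMC_LOCK_REQUEST", "DOWN")
    # ...wait for the FSS glitching to calm down, then:
    caput("H1:GRD-ISC_LOCK_REQUEST", "NOMINAL_LOW_NOISE")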

Once the FSS decided to play ball, the rest of locking went well enough.

Observing reached @ 3:25 UTC without any notable incidents.

The IFO has now been locked and Observing for 1 hour and 43 minutes.
 

LOG:

Start Time System Name Location Lazer_Haz Task Time End
22:04 TCS Camilla LVEA N Getting TCS parts near the Filtertube 22:25
22:26 PEM Robert & Anna-Maria Lance EX N PEM investigations 01:26
H1 General (Lockloss, PSL)
anthony.sanchez@LIGO.ORG - posted 16:29, Wednesday 02 October 2024 - last comment - 11:21, Friday 04 October 2024(80436)
Control room has Internet access again and an unrelated Lockloss

TITLE: 10/02 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 21mph Gusts, 13mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.45 μm/s
QUICK SUMMARY:

Sudden lockloss @ 21:55 UTC, very likely caused by a PSL FSS issue.

The IMC had a hard time relocking after the lockloss.

NLN Reached @ 22:54 UTC
Observing reached @ 22:56 UTC

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 11:21, Friday 04 October 2024 (80465)

It definitely looks like the FSS had a large glitch and lost lock before DARM saw the lockloss. This lockloss didn't have the FSS glitches happening beforehand, though.

Images attached to this comment
LHO VE
janos.csizmazia@LIGO.ORG - posted 19:22, Tuesday 01 October 2024 - last comment - 14:46, Thursday 03 October 2024(80411)
HAM1(/HAM2) Annulus Ion Pump swap
Gerardo, Jordan, Travis, Janos

Reacting to the issue in aLog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80370, the HAM1 AIP was checked for controller and pump failures. After trying a couple of controllers, it was clear that the pump broke down.
Shortly after the maintenance period, because of the high seismic activity, locking was impossible, so the vacuum team took the opportunity and swapped the pump (as normally it would have been impossible during the 4 hours of the maintenance period).
The AIP was swapped with a noble diode pump - this was not exactly intentional, but turned out to be a happy accident, as it seemingly works much better than the Starcell or even the Galaxy pumps. However, as the noble diode pump has positive polarity, a positive polarity controller was needed: the only available piece is an Agilent IPC Mini (see it in the picture attached), which works well, but in the MEDM screen it appears to be faulty, due to its wiring differences.
All in all, the HAM1/HAM2 twin-annulus system was pumped with the 2 AIPs and an Aux cart - Turbo pump bundle. The noble diode pump stabilized very nicely (at 5-7E-7 Torr, which is unusually low), so eventually the Aux cart Turbo pump bundle was switched off - at 6:55 pm PT.
Since then, the 2 AIPs continue to decrease the annulus pressure, which is indeed very nice, so practically we are back to normal.
In the meantime, Gerardo quickly modified an appropriate Varian controller to have positive polarity, so at the next opportunity the vacuum team will swap it in for the Agilent controller and the MEDM screen will be back to normal. Note that the Aux cart - Turbo bundle remains there until this swap happens.
Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 08:04, Wednesday 02 October 2024 (80416)VE

While the ion pump was being replaced, we managed to trip the cold cathode that gives us the vacuum pressure signal internal to HAM1, PT100B. I found the CC off last night, but since the IFO was collecting data I decided to wait until we were out of lock; I turned the CC back on this morning. See trend data attached.

Images attached to this comment
jon.feicht@LIGO.ORG - 10:57, Wednesday 02 October 2024 (80427)
Diode pumps are typically faster than triode pumps including Starcell cathode types. Nice work!
gerardo.moreno@LIGO.ORG - 14:46, Thursday 03 October 2024 (80446)VE

Removed the Agilent IPC Mini controller that was temporarily installed on Tuesday, and replaced it with a positive (+) MiniVac controller.  Attached is a trend of current load for both controllers, HAM1 and HAM2.

Note for the vacuum team, we still have the aux cart connected to HAM2 annulus ion pump isolation valve.

Images attached to this comment
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 10:41, Tuesday 01 October 2024 - last comment - 14:50, Thursday 03 October 2024(80396)
OPO PZT voltage seems related to low range overnight

From yesterday afternoon until 2 am, we had low range because the squeezing angle was not well tuned. As Naoki noted in 78529, this can happen when the OPO PZT is at a lower voltage than we normally operate at. This probably could have been improved by running SCAN_SQZANG. 

I've edited the OPO_PZT_OK checker so that it requires the OPO PZT to be between 70 and 110 V (it used to be 50 to 110 V). This might mean that sometimes the OPO has difficulty locking (i.e., 76642), which will cause the IFO to call for help, but it avoids running with low range in situations where SCAN_SQZANG needs to be run.
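
For illustration, the kind of range check this describes (a hypothetical helper, not the actual guardian checker code):

    OPO_PZT_MIN = 70.0   # volts; was 50 V before this change
    OPO_PZT_MAX = 110.0  # volts

    def opo_pzt_ok(pzt_volts):
        """True if the OPO PZT voltage is inside the allowed band."""
        return OPO_PZT_MIN <= pzt_volts <= OPO_PZT_MAX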

 

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:50, Thursday 03 October 2024 (80447)

Reverted OPO checker back to 50-110V as we moved the OPO crystal spot. 

H1 ISC
camilla.compton@LIGO.ORG - posted 09:29, Monday 30 September 2024 - last comment - 11:32, Thursday 03 October 2024(80369)
PRCL FF measurements taken as FM6 fit didn't reduce PRCL noise.

I tried Elenna's FM6 from 80287; this made the PRCL coupled noise worse, see the first attached plot. 

Then Ryan turned off the CAL lines and we retook the preshaping (PRCLFF_excitation_ETMYpum.xml) and PRCL injection (PRCL_excitation.xml) templates. I took the PRCL_excitation.xml injection with the PRCL FF off and increased the amplitude from 0.02 to 0.05 to increase coherence above 50 Hz. Exported as prclff_coherence/tf.txt and prcl_coherence/tf_FFoff.txt, all in /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward.
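
A minimal sketch of reading one of the exported files back in for refitting, assuming the text files contain frequency/real/imaginary columns (adjust to whatever the export script actually writes):

    import numpy as np

    base = "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward"
    data = np.loadtxt(f"{base}/tf_FFoff.txt")
    freq = data[:, 0]                    # Hz
    tf = data[:, 1] + 1j * data[:, 2]    # complex transfer function, FF off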

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 13:50, Monday 30 September 2024 (80377)

Elenna pointed out that I tested the wrong filter; the new one is actually FM7, labeled "new0926". We can test that on Thursday.

elenna.capote@LIGO.ORG - 11:32, Thursday 03 October 2024 (80444)

The "correct" filter today in FM7 was tested today and still didn't work. Possibly because I still didn't have the correct pre-shaping applied in this fit. I will refit using the nice measurement Camilla took in this alog.

H1 DetChar (DetChar, DetChar-Request)
gabriele.vajente@LIGO.ORG - posted 11:27, Wednesday 18 September 2024 - last comment - 17:47, Wednesday 02 October 2024(80165)
Scattered light at multiples of 11.6 Hz

Looking at data from a couple of days ago, there is evidence of some transient bumps at multiples of 11.6 Hz. Those are visible in the summary pages too, around hour 12 of this plot.

Taking a spectrogram of data starting at GPS 1410607604, one can see at least two times where there is excess noise at low frequency. This is easier to see in a spectrogram whitened to the median. Comparing the DARM spectra in a period with and without this noise, one can identify the bumps at roughly multiples of 11.6 Hz.
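
A minimal sketch (assuming gwpy and NDS data access; channel choice and spectrogram parameters are illustrative) of the median-normalized spectrogram described above:

    from gwpy.timeseries import TimeSeries

    start = 1410607604   # GPS start time quoted above
    data = TimeSeries.get("H1:GDS-CALIB_STRAIN", start, start + 900)
    spec = data.spectrogram2(fftlength=8, overlap=4) ** (1 / 2.)
    spec_norm = spec.ratio("median")   # normalize each frequency bin by its median
    plot = spec_norm.plot(norm="log", vmin=0.5, vmax=10)
    plot.gca().set_ylim(10, 100)
    plot.savefig("whitened_spectrogram.png")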

Maybe somebody from DetChar can run LASSO on the BLRMS between 20 and 30 Hz to find out whether this noise is correlated with some environmental or other changes.

Images attached to this report
Comments related to this report
jane.glanzer@LIGO.ORG - 14:29, Thursday 26 September 2024 (80314)DetChar

I took a look at this noise, and I have some slides attached to this comment. I will try to roughly summarize what I found. 

I first started by taking some 20-30 Hz BLRMS around the noise. Unfortunately, the noise is pretty quiet, so I don't think lasso will be super useful here. Even taking a BLRMS for a longer period around the noise didn't produce much. I can re-visit this (maybe take a narrower BLRMS?), but as a separate check I looked at spectra of the ISI, HPI, SUS, and PEM channels to see if there was excess noise anywhere in particular. I figured maybe this could at least narrow down a station where there is more noise at these frequencies. (A sketch of this kind of BLRMS computation is shown after the list below.)

What I found was:

  1. Didn't see excess noise in the EY or EX channels at ~11.6 Hz or at the second/third harmonics.
  2. Many CS channels had some excess noise around 11.6 Hz, less at the second/third harmonics.
  3. However, of the CS channels that DID have excess noise around 11.6 Hz and 23.2 Hz, HAM8 area popped up the most. Specifically these channels: H1:PEM-FCES_ACC_BEAMTUBE_FCTUBE_X_DQ, H1:ISI-HAM8_BLND_GS13Z_IN1_DQ, H1:ISI-HAM8_BLND_GS13X_IN1_DQ.
  4. HAM3 also popped up, and the Hveto results for this day had some glitches witnessed by H1:HPI-HAM3_BLND_L4C_RZ_IN1_DQ.
  5. Potential scatter areas: something near either HAM8 or HAM3?
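
As referenced above, a minimal sketch (assuming gwpy data access; times and stride are illustrative) of the kind of 20-30 Hz BLRMS time series used here:

    from gwpy.timeseries import TimeSeries

    data = TimeSeries.get("H1:GDS-CALIB_STRAIN", "2024-09-17 10:00", "2024-09-17 14:00")
    blrms = data.bandpass(20, 30).rms(stride=60)   # 60 s RMS of the 20-30 Hz band
    plot = blrms.plot(ylabel="20-30 Hz BLRMS")
    plot.savefig("strain_blrms_20_30.png")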
Non-image files attached to this comment
jane.glanzer@LIGO.ORG - 12:33, Wednesday 02 October 2024 (80429)DetChar

I was able to run lasso on a narrower strain BLRMS (suggested by Gabriele), which made the noise more obvious. Specifically, I used a 21 Hz - 25 Hz BLRMS of auxiliary channels (CS/EX/EY HPI, ISI, PEM & SUS channels) to try to model a strain BLRMS of the same frequency via lasso. In the attached PDF, the first slide shows the fit from running lasso. The r^2 value was pretty low, but the lasso fit does pick up some peaks in the auxiliary channels that line up with the strain noise. In the following slides, I made time series plots of the channels that lasso found to be contributing the most to the re-creation of the strain. The results are a bit hard to interpret, though. There seem to be roughly 5 peaks in the aux channel BLRMS, but only 2 major ones in the strain BLRMS. The top contributing aux channels are also not really from one area, so I can't say that this narrowed down a potential location. However, two HAM8 channels were among the top contributors (H1:ISI_HAM8_BLND_GS_X/Y). It is hard to say whether that is significant or not, since I am only looking at about an hour's worth of data. 
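
For context, a minimal sketch of the lasso modeling described above (placeholder data and hypothetical array names; not the actual pipeline used for this study):

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    aux_blrms = rng.random((3600, 50))   # placeholder: BLRMS samples from 50 aux channels
    strain_blrms = aux_blrms[:, :2].sum(axis=1) + 0.1 * rng.random(3600)   # placeholder target

    X = StandardScaler().fit_transform(aux_blrms)
    y = (strain_blrms - strain_blrms.mean()) / strain_blrms.std()

    model = Lasso(alpha=0.01).fit(X, y)
    print("r^2 =", model.score(X, y))
    top = np.argsort(np.abs(model.coef_))[::-1][:10]   # indices of top contributing channels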

I did a rough check on the summary pages to see if this noise happened on more than one day, but at this moment I didn't find other days with this behavior. If I do come across it happening again (or if someone else notices it), I can run lasso again.

Non-image files attached to this comment
adrian.helmling-cornell@LIGO.ORG - 17:47, Wednesday 02 October 2024 (80437)DetChar

I find that the noise bursts are temporally correlated with vibrational transients seen in H1:PEM-CS_ACC_IOT2_IMC_Y_DQ. Attached are some slides which show (1) scattered light noise in H1:GDS-CALIB_STRAIN_CLEAN from 1000-1400 on September 17, (2) and (3) the scattered light incidents compared to a timeseries of the accelerometer, and (4) a spectrogram of the accelerometer data.

Non-image files attached to this comment