Displaying reports 16121-16140 of 86613.Go to page Start 803 804 805 806 807 808 809 810 811 End
Reports until 11:51, Friday 01 September 2023
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 11:51, Friday 01 September 2023 (72624)
Friday Lockloss 1377628699

Lockloss:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1377628699
Still waiting on analysis to finish running.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:12, Friday 01 September 2023 (72621)
Fri CP1 Fill

Fri Sep 01 10:07:14 2023 INFO: Fill completed in 7min 10secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 PSL
anthony.sanchez@LIGO.ORG - posted 09:51, Friday 01 September 2023 (72620)
PSL Weekly Famis 26207

PSL Weekly Famis 26207

Laser Status:
    NPRO output power is 1.831W (nominal ~2W)
    AMP1 output power is 67.19W (nominal ~70W)
    AMP2 output power is 135.2W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN

PMC:
    It has been locked 26 days, 1 hr 27 minutes
    Reflected power = 16.43W
    Transmitted power = 109.3W
    PowerSum = 125.8W

FSS:
    It has been locked for 0 days 11 hr and 45 min
    TPD[V] = 0.8494V

ISS:
    The diffracted power is around 2.3%
    Last saturation event was 0 days 9 hours and 38 minutes ago


Possible Issues: None

H1 SUS
camilla.compton@LIGO.ORG - posted 09:38, Friday 01 September 2023 (72619)
Commissioning 15:56 to 15:59UTC to Manually damp PI 24

Tony, Camilla

We went into Commissioning from 15:56 to 15:59 UTC, as we needed to take SUS_PI into IDLE and manually change the phase of PI mode 24. It was cycling through phases and ringing up; plot attached, with the t-cursor marking where we changed SUS_PI to IDLE.

We are unsure why this would have changed today; Tony checked that it hasn't rung up this high in the last week (his plot attached). We should check the SUS_PI settings to avoid this in future locks.

There are instructions on how SUS_PI works in 68610 and 68379, but all we did was take SUS_PI to IDLE and change H1:SUS-PI_PROC_COMPUTE_MODE24_PLL_PHASE to 50, shown in the attached image. Tony suggests that maybe SUS_PI doesn't need to be monitored, since the phase already changes during observing.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 08:04, Friday 01 September 2023 (72618)
Friday Ops Day Shift Start

TITLE: 09/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s
QUICK SUMMARY:

Inherited an IFO that has been Locked for 7 hours.
 

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 00:02, Friday 01 September 2023 (72616)
Lockloss at 07:00UTC

We lost lock right at 07:00, not sure why; there was a DCPD saturation right before.

H1 General
ryan.crouch@LIGO.ORG - posted 00:00, Friday 01 September 2023 - last comment - 00:15, Friday 01 September 2023(72614)
OPS Thursday eve shift summary

TITLE: 09/01 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: Quiet shift: one lockloss with a fairly automated relock (I touched two things). Locked for 1:07 as of 07:00 UTC.

Lockloss at 04:35UTC

LOG:

No log for this shift

Comments related to this report
ryan.crouch@LIGO.ORG - 00:15, Friday 01 September 2023 (72617)

DRMI locked on its first try pretty quickly on the relock

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 21:38, Thursday 31 August 2023 - last comment - 23:04, Thursday 31 August 2023(72613)
Lockloss at 04:35, no obvious cause

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1377578169

Comments related to this report
ryan.crouch@LIGO.ORG - 23:04, Thursday 31 August 2023 (72615)

Reacquired Observing at 06:04 UTC

H1 General
ryan.crouch@LIGO.ORG - posted 20:05, Thursday 31 August 2023 (72609)
OPS Thursday eve shift midshift update

We've been locked for 4:56 and everything seems stable. We've been Observing since 22:24, aside from a brief dropout from 1:43 to 1:45 UTC from the ITMX CO2 laser relocking.

H1 CAL (ISC)
jeffrey.kissel@LIGO.ORG - posted 17:36, Thursday 31 August 2023 (72610)
Post 2023-08-31 Calibration Change Systematic Error Model vs. Measurement Update
J. Kissel, L. Dartez

After the calibration update today (2023-08-31; LHO:72594) we now have automatically generated comparisons between measured systematic error in the GDS-CALIB_STRAIN channel (as measured directly by PCAL) vs the traditional model of the systematic error. The results look great.

I attach two automated comparisons to highlight the difference:
Before from 1377445828 2023-08-30 15:50 UTC:
   - pre- 2023-08-30 maintenance, 
   - pre- DARM2 FM8 boost turn on, 
   - \kappa_T is still at 1.08, so DELTAL_EXTERNAL (which isn't corrected for time dependence) disagrees with GDS-CALIB_STRAIN (which is).
   - still not having updated the sensing function since we turned OM2 TSAMS back on
   - the modeled systematic error is still using GPR fits of measured data from 2023-06-21 and prior
vs. 
After from 1377561032 2023-08-31 23:50 UTC:
   - post-calibration update
   - DARM2 FM8 boost is accounted for,
   - DELTAL_EXTERNAL and GDS-CALIB_STRAIN agree
   - All TDCFs are close to 1.0
   - Modeled systematic error GPR fits now use an updated collection of measurements.

Images attached to this report
H1 CAL
louis.dartez@LIGO.ORG - posted 17:15, Thursday 31 August 2023 (72531)
LHO cal epochs since start of O4
As we are gearing up to regenerate our hourly uncertainty estimates spanning back to the start of O4, I needed to go back through the history of LHO calibration changes, the IFO changes they address, and the identification and placement of each new epoch for the sensing and actuation functions. The table below lists the calibration epochs since the start of O4. For each epoch, I include some relevant alogs that chronicle which IFO changes triggered each epoch. More in depth information on each change to the LHO calibration pipeline can be found in the living Record of Real-Time Calibration Pipeline Parameter Changes (DCC: T2300297).

Date Epoch Type Reason for new epoch
20230504T055052Z Sensing, Actuation new SRCL offset (LHO:69289), TCS changes (LHO:69032), and PCALX cal line change (LHO:69303). More in LHO:69332. This report was *not* exported to the front end.
20230506T182203Z Sensing Changed SRCL offset, DARM loop. More in LHO:69561. N.B. This report marks a new epoch for the sensing function but it did not get exported to the front end (i.e. the "calibration" was not updated using this report). Instead, the report that was used to update the calibration and includes the IFO changes listed in LHO:69561 is 20230510T062635Z.
20230621T191615Z Sensing Moved back to 60W input power(LHO:70693). 20230621T191615Z marks the start of the epoch but the report used to update the IFO calibration is 20230621T211522Z.
20230628T015112Z Sensing OM2 was heated (first time). See LHO:70849.
20230716T034950Z Sensing OM2 was unheated. See LHO:72524.
20230727T162112Z Sensing OM2 was reheated. See LHO:72523.
20230817T214248Z Actuation 3.2kHz filter restored in L3 actuation path. See LHO:72043
Several of the calibration reports had not been properly tagged yet to reflect the epoch boundaries defined above. I manually went through each one and made sure to do so, but this necessitated the reprocessing/regeneration of several calibration reports to ensure that the GPR calculations for the sensing and actuation functions only included the relevant set of measurements. Here is a list of the reports that were reprocessed with the appropriate epoch boundaries. Before being overwritten, each report was backed up to /ligo/groups/cal/H1/reports/archive/.

    06/21 reason: 60W
    06/27 14:53 : om2 hot 20230628T015112Z [regenerated for epoch-sensing]
    07/12 16:05 : om2 cold 20230716T034950Z [regenerated for epoch-sensing]
    07/19 08:15 : om2 hot 20230727T162112Z [regenerated for epoch-sensing]
    20230802T000812Z [regenerated for new epoch-sensing]
    20230817T214248Z [regenerated for epoch-actuation]
    20230823T213958Z [regenerated for new epoch-sensing and new epoch-actuation]

actuation:
    05/04: start of O4
    08/08: 3.2kHz filter
    20230817T214248Z [regenerated for epoch-actuation]
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 17:08, Thursday 31 August 2023 - last comment - 17:55, Thursday 19 October 2023(72604)
SQZ loss measurements on SQZT7

Vicky and I went to SQZT7 while calibration work was happening, to follow up on some of our observations from 72525 (and Vicky's comment). 

Polarization issue:

With the seed dither locked, we placed a PBS before the half wave plate in the homodyne sqz path and measured 67.2uW transmitted (vertical pol) and 750uW reflected (horizontal) (817uW total, 8% in the wrong polarization, 16.5 degrees polarization rotation). After the half wave plate we measured 5.48uW transmission through the PBS (vertical) and 802uW reflected (horizontal) (807uW total, 0.7% in the wrong polarization, polarization less than 5 degrees away from horizontal). We also placed the PBS right at the bottom of the periscope, and there measured 70uW transmitted and 820uW before the PBS was inserted (8.5% in the wrong polarization, 17 degrees polarization rotation away from horizontal). This would not limit the squeezing measured on the homodyne, since we are able to correct it with the HWP, but measuring the same polarization rotation at the bottom of the periscope suggests that the beam could be coming out of HAM7 with this polarization error, which would look like an 8% loss to the squeezing level in the IFO.
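As a cross-check of the arithmetic above: for a linearly polarized beam rotated by an angle theta from the PBS axis, the fraction of power in the "wrong" port is sin²(theta). A minimal sketch (illustrative only, not site code; variable names are mine):

```python
import math

def pol_rotation_deg(p_wrong: float, p_right: float) -> float:
    """Polarization rotation angle (degrees) implied by a PBS power split,
    assuming a purely linear polarization: wrong-port fraction = sin^2(theta)."""
    frac = p_wrong / (p_wrong + p_right)
    return math.degrees(math.asin(math.sqrt(frac)))

# Before the HWP: 67.2 uW in the wrong port, 750 uW in the right port
before_hwp = pol_rotation_deg(67.2, 750.0)   # ~16.7 deg (log quotes 16.5)

# After the HWP: 5.48 uW vs 802 uW
after_hwp = pol_rotation_deg(5.48, 802.0)    # ~4.7 deg, i.e. "less than 5"

print(f"before HWP: {before_hwp:.1f} deg, after HWP: {after_hwp:.1f} deg")
```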

In Sept 2022, during the vent for the OM2 swap, we measured the throughput of the seed beam from HAM7 to HAM6 (65110), which agreed well with the only loss between HAM7 and HAM6 being the 65.6% reflectivity of SRM, suggesting that there was not an 8% loss in the OFI at that time.

Loss on SQZT7 (not bad):

Comparing the total power measurements here, we have 820uW at the bottom of the periscope, and 807uW measured right before the homodyne, so we have something like 1.6% loss on SQZT7 optics (small compared to the type of loss we need to explain our squeezing level).  

Seed transmitted power over reflected power ratio has dropped:

We also measured the seed power reflected from the OPO, so that we could compare the ratio of transmitted to reflected seed measured at the time of the squeezer installation in HAM7 in Feb 2022: 61904 (3.9% trans/refl). Today we saw 0.82mW seed transmitted, and 27mW of reflected seed at the bottom of the periscopes (3.03% trans/refl).  This is 78% of the ratio measured at installation.  Because this seems like a large drop, we repeated the measurement twice more, and got 3% each time.  We also checked that the dither lock is locking at the maximum seed transmission.  
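The on-table numbers above can be recomputed in a few lines (a sketch using only the values quoted in this entry; variable names are mine):

```python
# SQZT7 on-table loss: power at the periscope bottom vs. before the homodyne
p_bottom_uW = 820.0
p_homodyne_uW = 807.0
sqzt7_loss = 1 - p_homodyne_uW / p_bottom_uW     # ~1.6%

# Seed trans/refl ratio today vs. the Feb 2022 install value (LHO:61904)
seed_trans_mW = 0.82
seed_refl_mW = 27.0
ratio = seed_trans_mW / seed_refl_mW             # ~3.0%
ratio_install = 0.039                            # 3.9% at installation

print(f"SQZT7 loss {sqzt7_loss:.1%}, trans/refl {ratio:.2%} "
      f"({ratio / ratio_install:.0%} of the install-time ratio)")
```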

Homodyne PD QE check (QE of PDB might be low):

We used an Ophir power meter, calibrated in 2018, to measure the LO power onto the homodyne PDs; the filter and head are SN 889882 and the controller is SN 889428. For PDA we saw 0.6mW, and for PDB we saw 0.63mW.

Both PDs are calibrated into mA in the front end, which includes an anti-gain of gain(0.25)*gain(0.22027), a transimpedance of 0.001 (1kOhm), and two anti-whitening filters (plus cnts2V and mA factors). For PDA there is a fudge factor in the filter gain; if we divide this out, the readback is that the PDA photocurrent was 0.512mA, and 0.5126mA for PDB (with a drift of 0.5% over the measurement time). This gives a responsivity of 0.855A/W for PDA and 0.813A/W for PDB. For a QE of 1, the responsivity would be e*lambda/(h*c) = 0.8582 A/W, so our measurement implies 99.6% QE for PDA and 95% QE for PDB. (Vicky measured higher reflection off PDB than PDA in 63893; see also Haocun's measurement in 43452.)
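A worked version of the responsivity/QE arithmetic, assuming the 1064 nm Nd:YAG wavelength (illustrative only; the exact numbers depend on the measured powers, so the results below are approximate):

```python
# CODATA constants
e = 1.602176634e-19      # electron charge, C
h = 6.62607015e-34       # Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
lam = 1064e-9            # Nd:YAG wavelength, m

# Ideal (QE = 1) responsivity: R = e * lambda / (h * c)
r_ideal = e * lam / (h * c)              # ~0.8582 A/W

def qe(photocurrent_mA: float, power_mW: float) -> float:
    """Quantum efficiency from measured photocurrent and incident power."""
    return (photocurrent_mA / power_mW) / r_ideal

print(f"PDA QE ~ {qe(0.512, 0.60):.3f}")    # roughly 0.99
print(f"PDB QE ~ {qe(0.5126, 0.63):.3f}")   # roughly 0.95
```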

Comments related to this report
sheila.dwyer@LIGO.ORG - 11:03, Tuesday 05 September 2023 (72678)

Above I mixed up vertical and horizontal polarization.  The LO beam arriving at the homodyne is vertically polarized, as well as the seed beam coming out of the chamber. 

Revisiting old alogs about the seed refl/trans (throughput) measurement:

In the first installation in Feb 2022, the trans/refl ratio was measured as 4% on Feb 24th (61904), and the ratio of IR trans arriving on SQZT7 to that right after the OPO was 95%, measured Feb 10th (61698).

When the CLF fiber was swapped, this measurement was redone: 64272. There we didn't measure CLF refl, but combining the measurements of 37mW out of the fiber and 8mW rejected, we can expect 29mW CLF refl. With 0.81mW reaching HAM7, this was a 2.8% trans/refl ratio. This is worse than at the initial installation but similar to what Vicky and I measured last week. But this alog also indicated 95% transmission from right out of the OPO to SQZT7. So this second measurement is consistent with the one we made last week, and would indicate no excess losses in HAM7 compared to that time.

victoriaa.xu@LIGO.ORG - 17:55, Thursday 19 October 2023 (73605)

Polarization rotation is only an on-table problem for SQZT7, not an issue for the IFO. It can be attributed to the SQZT7 periscope. To close the loop, see LHO:73537 for Don's latest CAD layout with the squeezer beam going to SQZT7 at a 14.4 degree angle (90-75.58) from +Y. The SQZT7 periscope re-directs the beam to travel basically along +Y.

H1 ISC
lanceanderson.blagg@LIGO.ORG - posted 17:01, Thursday 31 August 2023 (72607)
Biasing Follow-up

Sheila had been looking at biasing back in January (66814) and wanted to make some comparisons between then and now for the RMS drive to the ESD. Plots are from observing on Aug 9 vs. biasing changes from Jan 14. These are the "ADS lines on" times from the referenced alog. Reference times are shown on the leftmost plots.

Jan 14 18:09:32 Full Bias (Blue)

Jan 14 18:33:20 1/4 Bias (Green)

Aug 8 14:00:00 Full Bias (Red)

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 16:44, Thursday 31 August 2023 (72598)
MICH feedforward impacted by changing test mass actuation strength

The LSC feedforward, and in particular the MICH feedforward, has needed regular updating since we reduced the IFO input power to 60W in June. I wrote up a summary of the "saga" as of the start of August, see 72037. My main assumption was this: ignoring major IFO changes such as input power/TCS/DARM offset, the changes to the feedforward have to occur because we "uncover" more and more LSC coupling as we improve the low frequency sensitivity. To justify this idea, I referenced the fact that Gabriele and I (except for the first retuning on June 22 with the power reduction) have mainly been doing iterative retuning of the feedforward. Specifically, we run our "retuning" injection with the feedforward on so that the noise coupling we attempt to reduce is any residual noise coupling left over while our main feedforward runs. However, since that alog we have again needed to update the MICH feedforward multiple times without any corresponding improvement to low frequency sensitivity, finally prompting me to think that something here is wrong. However, it is not a significant change in the feedforward from time to time, but merely a few percent change that we iteratively improve against some baseline decent feedforward.

Gabriele and I have found evidence that the MICH coupling to DARM is changing because the ETMX test mass actuation strength has been changing from ESD charge accumulation. Jeff details the effect of this charge accumulation on the calibration in LHO:72416, and has some notes about seeing this effect in the past, effects on the DARM loop, etc.

This change in the test mass actuation can also change the coupling function for the LSC noise contribution. In particular, when we measure the coupling of MICH for the feedforward fitting, we measure two functions: one is the DARM [W] / MICH [N] coupling, and the other is the DARM [W] / DARM [N] coupling. The changing strength of the DARM actuation will affect the required strength of the feedforward actuation. In fact, looking at the filters we have tuned for the MICH feedforward in August, they all have the same shape, but different overall gains.

We make a measurement of the second function above by injecting from the MICHFF filter bank with the input off and the feedforward filter off (but a gain of 1), so we can capture whatever effect is upstream of the feedforward filter banks. Gabriele plotted all of these measurements that we have taken since June 22 and normalized them by the June 22 measurement. The result is shown in the first image attached. This plot shows that the DARM actuation has been changing in the same direction over time. We also tracked the Kappa TST value, and noticed that it has been steadily increasing since June 22.
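The normalization step described above can be sketched as follows. Everything here is hypothetical (synthetic transfer functions and made-up keys, not the actual measurement data); it only illustrates how a frequency-independent actuation-strength change shows up as a flat ratio against the reference measurement:

```python
import numpy as np

def normalize_to_reference(tfs: dict, ref_key: str) -> dict:
    """Divide every measured transfer function by the reference measurement,
    so a pure actuation-strength change appears as a frequency-flat offset."""
    ref = tfs[ref_key]
    return {k: v / ref for k, v in tfs.items()}

# Synthetic example: a 3% actuation-strength change between two dates
freqs = np.logspace(1, 3, 200)
base = np.ones_like(freqs) * (1 + 0.1j)          # some complex TF shape
tfs = {"2023-06-22": base,
       "2023-08-30": 1.03 * base}

ratios = normalize_to_reference(tfs, "2023-06-22")
print(np.allclose(np.abs(ratios["2023-08-30"]), 1.03))  # True
```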

This effect is most visible in the MICH coupling, and we have needed to update the MICH feedforward iteratively more than we have the SRCL feedforward. We think this is because the subtraction of MICH is much more significant than the subtraction of SRCL, by at least a factor of 5. Looking at the implemented SRCL feedforward since June 22, a similar change in gain is evident. The bigger changes in the SRCL shape have been mostly at low frequency to reduce injection of excess SRCL actuation noise into DARM which worsens the DARM RMS.

We predict that as the test mass actuation strength changes, we will continue to need to update the feedforward to improve the subtraction of noise and reach our best possible sensitivity. The MICH feedforward was updated yesterday, Aug 30, and the calibration has been updated today, resetting Kappa TST to 1 (72594). We should track the Kappa TST value. If it becomes even a few percent different than 1, Gabriele and I imagine we will need to make another iterative update to the feedforward.

While we think the few percent change in actuation explains most of the few percent change in the MICH coupling, there could be other changing factors. We have been tracking possible alignment changes in the interferometer related to OM2 TSAMS changes, unexpected temperature changes, etc. Changing IFO alignment could also contribute to some of the changing LSC coupling we have witnessed (although the how of this process is less clear to me). If we manage to mitigate the charge accumulation on ETMX, we should continue to track the MICH coupling in case there are other effects that compromise the success of the noise subtraction.

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:06, Thursday 31 August 2023 (72590)
Ops Eve Shift Summary

TITLE: 08/31 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: One lock loss during the shift that ended a 27 hour lock. Relocking was straightforward, but I had to very slightly move ETMY to get ALSY to lock. This was the only locking intervention (outside of a small test). We've now been locked for 1 hour and observing for 45min.
LOG:

Start Time System Name Location Lazer_Haz Task Time End
15:13 FAC Randy MY n Inventory 18:47
15:51 FAC Ken OSB receiving n Replacing lights 22:07
17:30 FAC Tyler EY n Chiller line work on top of air handler room 18:07
17:32 CC/SEI Mitch EY, EX n FAMIS checks in end station mech rooms 18:17
18:41 SQZ Sheila, Vicky LVEA - SQZ bay Local SQZ table alignment 19:51
18:41 CAL Jeff, Louis CR n CAL measurement 19:57
19:03 SEI Jim Office n HAM1 filter tests 19:58
20:00 FAC Tyler EY n More HVAC work on AUR 20:31
21:22 SEI Jim Office n HAM7 new filters 21:40
H1 General
ryan.crouch@LIGO.ORG - posted 16:04, Thursday 31 August 2023 - last comment - 19:35, Thursday 31 August 2023(72608)
OPS Thursday eve shift start

TITLE: 08/31 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 7mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 19:35, Thursday 31 August 2023 (72612)TCS

We dropped out of Observing briefly due to the 2 diffs from the TCS ITMX_CO2 laser, from 01:43 to 01:45 UTC

Images attached to this comment
H1 ISC (OpsInfo)
camilla.compton@LIGO.ORG - posted 17:20, Wednesday 30 August 2023 - last comment - 13:31, Wednesday 13 September 2023(72572)
MICH FF Retuned

Elenna, Gabriele, Camilla 

This afternoon we updated the MICH feedforward; it is now back to around the level it was last Friday (comparison attached). It was last done in 72430. It may have needed to be redone so soon because of the 72497 alignment changes on Friday.

The code for excitations and analysis has been moved to /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/

Elenna updated the guardian to engage FM1 rather than FM9, and the SDF diff was accepted. New filter attached. I forgot to accept this in the h1lsc safe.snap, and will ask the operators to accept MICHFF FM1 when we lose lock or come out of observe (72431); tagging OpsInfo.

Attached is a README file with instructions.

Images attached to this report
Non-image files attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 15:13, Thursday 31 August 2023 (72605)

Accepted FM1 in the LSC safe.snap

jeffrey.kissel@LIGO.ORG - 13:31, Wednesday 13 September 2023 (72865)CAL, DetChar
Calling out a line from the above README instructions that Jenne pointed me to, which confirms my suspicions about the *reason* the bad FF filter's high-Q feature showed up at 102.128888 Hz, right next to the 102.13 Hz calibration line:
    "IFO in Commissioning mode with Calibration Lines off (to avoid artifacts like in alog#72537)."

in other words -- go to NLN_CAL_MEAS to turn off all calibration lines before taking active measurements that inform any LSC feed forward filter design.

Elenna says the same thing -- quoting the paragraph from LHO:72537 later added in edit:

How can we avoid this problem in the future? This feature is likely an artifact of running the injection to measure the feedforward with the calibration lines on, so a spurious feature right at the calibration line appeared in the fit. Since it is so narrow, it required incredibly fine resolution to see it in the plot. For example, Gabriele and I had to bode plot in foton from 100 to 105 Hz with 10000 points to see the feature. However, this feature is incredibly evident just by inspecting the zpk of the filter, especially if you use the "mag/Q" view of foton and look for the poles and zeros with a Q of 3e5 (!!). If we make sure to run the feedforward injection with cal lines off, and do a better job of checking our work after we produce a fit, we can avoid this problem.
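A narrow feature like this can also be caught programmatically from a filter's pole/zero list. A rough sketch, assuming poles are available as complex s-plane values (this is not foton's actual API; the Q formula is the standard one for a second-order section):

```python
import math

def pole_q(p: complex) -> float:
    """Quality factor of an s-plane pole/zero p = -sigma + j*omega_d:
    Q = |p| / (2 * sigma)."""
    return abs(p) / (2 * abs(p.real))

# Hypothetical narrow feature near the 102.13 Hz calibration line, Q = 3e5:
f0, q0 = 102.128888, 3e5
w0 = 2 * math.pi * f0
p = complex(-w0 / (2 * q0), w0 * math.sqrt(1 - 1 / (4 * q0 ** 2)))

# Flag anything suspiciously narrow, e.g. Q above 1e4:
flagged = [z for z in [p] if pole_q(z) > 1e4]
print(f"flagged {len(flagged)} feature(s), Q = {pole_q(p):.3g}")
```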