H1 SQZ (SQZ)
corey.gray@LIGO.ORG - posted 02:09, Tuesday 08 August 2023 (72050)
Squeezer Filter Cavity Unlock Loops & Then H1 Lockloss....but back up within about an hour

H1 LOCKLOSS

At 07:47:08, H1 dropped out of observing due to the Squeezer.  The SQZ_MANAGER noted this:

2023-08-08_07:47:08.319064Z SQZ_MANAGER [FREQ_DEP_SQZ.run] FC-IR UNLOCKED!

In the attached plot you can see that the Squeezer Manager tried to come back on its own, and at 7:57:07 UTC it succeeded and H1 was taken to OBSERVING. Unfortunately, this lasted only about 45 sec before the SQZ went down again (dropping us out of Observing). It came back at 08:00:20, H1 was automatically taken to OBSERVING for a few seconds, and this is when H1 had its lockloss.

I'm not sure what I should have tried while the SQZ was in this loop.  It was trying to come back, and it did, but then it would drop out after a few seconds.

H1 Relocking

While trying to determine what caused the lockloss, I kept H1 in its fully automated configuration.  H1 went to PRMI since DRMI was not great, but PRMI locked up, and from there it was fairly smooth all the way to NOMINAL LOW NOISE.  Reached NLN at 8:52 UTC.  Unfortunately, I did get an Out-of-Observing alert at 8:55, but we were out of observing only because we were waiting for the CAMERA SERVO to converge (which it did at 9:02 UTC).

Images attached to this report
LHO General (ISC)
austin.jennings@LIGO.ORG - posted 00:00, Tuesday 08 August 2023 (72040)
Monday Eve Shift Summary

TITLE: 08/07 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

- Started off my shift with a lockloss, cause unknown

- Relocking went through CHECK MICH FRINGES 3x, so I did an initial alignment, which completed without issue

- 0:06 - incoming 5.7 EQ from the South Sandwich Islands region

- Acquired NLN @ 0:43/OBSERVE @ 1:00

- PI mode 31 ringup @ 0:48 - damped by the PI guardian

- Lockloss @ 1:26 - cause unknown

- Relocking failed at LOCKING ARMS GREEN the first time, but afterwards got back to NLN unaided @ 3:01/OBSERVING @ 3:11

- Superevent @ 4:07 - no alert given on Verbal but did receive an automated call

- BB CAL measurement was run, alog here

- Handing off to Corey with H1 observing, currently going on a 4 hour lock
LOG:

No log for this shift.

Images attached to this report
H1 CAL
austin.jennings@LIGO.ORG - posted 22:10, Monday 07 August 2023 (72047)
CAL BB Measurement

Following the instructions found in the TakingCalibrationMeasurements wiki, I ran a broadband measurement script. Attached is the calibration monitor screen from just before starting the BB.

I ran: PYTHONPATH=/ligo/home/louis.dartez/repos/pydarm python -m pydarm measure --run-headless bb

GPS start time: 1375506271

GPS end time: 1375506588

 

INFO | bb measurement complete.
INFO | bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20230808T050415Z.xml
INFO | all measurements complete.
 

By the commissioners' request, I skipped running the simulines.
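For anyone cross-checking the times above, the GPS stamps can be converted to UTC and a duration with gwpy's time utilities; a minimal sketch, assuming gwpy is available in the control-room Python environment:

# Minimal sketch: convert the BB measurement GPS stamps above to UTC and a duration.
# Only the two GPS values quoted in this entry are used.
from gwpy.time import from_gps

gps_start = 1375506271
gps_end = 1375506588

print("start (UTC):", from_gps(gps_start))        # 2023-08-08 05:04:13
print("end   (UTC):", from_gps(gps_end))          # 2023-08-08 05:09:30
print("duration   :", gps_end - gps_start, "s")   # 317 s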

Images attached to this report
LHO General
austin.jennings@LIGO.ORG - posted 20:05, Monday 07 August 2023 (72049)
Mid Shift Eve Report

Following a very short lock (only about 1.5 hours), we just got back to NLN and are currently waiting for the ADS signals to converge so we can go back into observing.

H1 General (Lockloss)
austin.jennings@LIGO.ORG - posted 19:12, Monday 07 August 2023 (72048)
Lockloss @ 2:09

Lockloss @ 2:09 - another short lock, ~1:30 - cause unknown

H1 CAL
louis.dartez@LIGO.ORG - posted 17:00, Monday 07 August 2023 - last comment - 12:08, Thursday 10 August 2023(72043)
3.2 kHz HF pole filter module restored in CAL-CS ETMX L3 bank
Taking advantage of the fact that we're not locked, I put the missing ETMX "HFPole" filter module (LHO:72030) back in the H1:CAL-CS_DARM_ANALOG_ETMX_L3 filter bank. From inspecting the filter archive, it looks like the "HFPole" ETMX filter module was removed on 4/25/2023. This is around the time we were rolling out the cmd-dev infrastructure for the calibration group. 

The plan is to follow up with a Broadband measurement later tonight, or at the earliest opportunity, to establish whether or not to keep this filter in place. 


The zpk string I used is zpk([], [3226.75],1,"n"). The value 3226.75 was calculated by summing the poles for all four ESD quadrants from LHO:46773 as per LHO:27150.
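For context on what this filter does across the band, below is a minimal numpy sketch of a single pole in foton's normalized ("n") root format; it assumes that format corresponds to H(f) = 1/(1 + i f/f_pole), which should be double-checked against foton itself.

# Minimal sketch: magnitude/phase of the 3226.75 Hz HF pole, assuming foton's
# zpk([], [f_pole], 1, "n") corresponds to H(f) = 1 / (1 + 1j*f/f_pole).
import numpy as np

f_pole = 3226.75                                         # Hz, from summing the four ESD quadrant poles
freqs = np.array([10.0, 100.0, 1000.0, f_pole, 5000.0])  # Hz

H = 1.0 / (1.0 + 1j * freqs / f_pole)
for f, h in zip(freqs, H):
    print(f"{f:8.2f} Hz: |H| = {abs(h):.5f} ({20*np.log10(abs(h)):+6.2f} dB), "
          f"phase = {np.degrees(np.angle(h)):+7.2f} deg")
# Below ~100 Hz the pole is negligible in magnitude but already adds a small
# phase lag; at f_pole the response is -3 dB and -45 deg.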

I've attached screenshots of the ETMX filterbank and the GDS TP window.

GDS table diff

324c324
< # DESIGN   CS_DARM_ANALOG_ETMX_L3 2 zpk([],[3226.75],1,"n")
---
> # DESIGN   CS_DARM_ANALOG_ETMX_L3 2 zpk([],[],9.787382864894167e-13,"n")
343c343
< CS_DARM_ANALOG_ETMX_L3 2 21 1      0      0 HFPole     4.158812836234200838170239e-01  -0.1682374327531596   0.0000000000000000   1.0000000000000000   0.0000000000000000
---
> CS_DARM_ANALOG_ETMX_L3 2 21 1      0      0 TEST_Npct_50W 9.787382864894166725851836e-13   0.0000000000000000   0.0000000000000000   0.0000000000000000   0.0000000000000000

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 17:17, Monday 07 August 2023 (72045)
The above aLOG covers another *solution* to the on-going studies of the ~5-10% systematic error in the calibration -- namely, what's unique to LHO and *left over* after the flaw in the GDS filters that was fixed in LHO:71787.

The filter was loaded by 2023-08-07 17:15 UTC.
louis.dartez@LIGO.ORG - 15:39, Tuesday 08 August 2023 (72076)
This change has been added to the LHO record of calibration pipeline changes for O4, DCC:T2300297
jeffrey.kissel@LIGO.ORG - 12:08, Thursday 10 August 2023 (72135)
Correction to the timing of this filter update -- The filter was loaded by 2023-08-07 17:15 PDT -- i.e. 2023-08-08 00:15 UTC
H1 General
oli.patane@LIGO.ORG - posted 16:30, Monday 07 August 2023 (72042)
Ops DAY Shift End

TITLE: 08/07 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY:

Surprisingly calm day shift. Detector was locked the entire time (lost lock ~20 mins ago at the start of Austin's shift)

15:00UTC Detector Locked for 8 hours

16:38 Out of Observing to reload IFO guardian
16:39 Back into Observing

--Commissioning Start--
17:22 Out of Observe and into Commissioning
17:51 Back to Observing
--Commissioning End--


LOG:                                                                                                                  

Start Time System Name Location Laser_Haz Task End Time
16:21 EE Ken MY n Working on lights 00:28
17:04 FAC Cindi MX n Tech clean 17:51
17:22 ISC Sheila CR n ESD bias test 17:48
17:46 Gen Christina MX/Y n Property logistics 19:08
17:49 PSL Jenne CR n Re-engage ISS 2nd Loop 17:51
19:09 Gen Christina Optics Lab n Property logistics 20:09
20:23 PCAL Tony PCal Lab y(local) Responsivity ratios for standards 22:01
21:08 VAC Travis, Jordan MY n Turbo pump cooling work 00:55
21:12 PCAL Rick, Julianna PCal Lab y (local) Join Tony 22:01
22:28 Gen Christina H2 n Property logistics 23:58
H1 General (Lockloss)
austin.jennings@LIGO.ORG - posted 16:14, Monday 07 August 2023 (72041)
Lockloss @ 23:11

Lockloss @ 23:11, cause unknown.

H1 ISC
elenna.capote@LIGO.ORG - posted 16:10, Monday 07 August 2023 (72037)
LSC Feedforward Saga

Today Jenne asked the very reasonable question: "why do we need to update the LSC feedforward so much?"

Here is an accounting of the number of times we retuned the LSC feedforward since the engineering run and why.

Overall, yes, we have had to update the LSC feedforward quite a bit. I think some of this is expected. When we make a major configuration change such as IFO power, DARM offset, or TSAMS setting, that can significantly change the coupling of the LSC to DARM. As a reminder, the LSC feedforward had not been updated for the 60W configuration before we increased the power further, and both at 60W and 76W the LSC loops were a significant contributor to the low frequency noise, see 68869 and 68382 for 60W and 76W DARM noise budgets.

If we remove the significant IFO changes from the count, our reasons for updates are varied: better tuned for the significant thermalization at 76W, improvements to the sub-10Hz contribution with better high pass filters, minor updates with iterative tuning that are only possible after we have a baseline feedforward and an improvement in other competing noise sources, and finally because I made some mistakes on my first try. Approximately half of the times we have updated have been due to significant IFO changes; the other half have been the other reasons I just listed.

We might need to make further changes to the SRCL feedforward based on Gabriele's latest alog regarding the DARM RMS, 71994. We will try other avenues first, such as reducing the noise in SRCL, but if we update the high pass, we will have to update the whole SRCL feedforward.

Jenne's comment was related to LLO's approach, which is that they rarely need to update their feedforward. I think we have had more work to do to improve our low frequency noise (even to approach LLO's sensitivity), and much of that work will continue to turn up additional LSC coherence that can be reduced. However, if we stop changing the IFO configuration and resolve some of these RMS issues, I believe we will no longer need to make adjustments of the feedforward.
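One concrete way to judge whether another feedforward retune is worthwhile is to check the residual coherence between each LSC control signal and DARM; below is a minimal scipy sketch. The synthesized arrays, the 10-100 Hz band, and the 0.1 threshold are illustrative choices, not the commissioning team's actual criteria.

# Minimal sketch: residual coherence between an LSC control signal and DARM.
# The time series here are synthesized so the example runs standalone; in
# practice they would be fetched from frames/NDS2 for a locked stretch.
import numpy as np
from scipy.signal import coherence

fs = 512.0                                  # Hz, assumed common sample rate
rng = np.random.default_rng(0)
n = int(600 * fs)                           # 10 minutes of data
mich = rng.standard_normal(n)               # stand-in for an LSC control signal
darm = 0.5 * mich + rng.standard_normal(n)  # stand-in DARM with residual coupling

f, coh = coherence(mich, darm, fs=fs, nperseg=int(32 * fs))
band = (f > 10) & (f < 100)
if np.any(coh[band] > 0.1):                 # rule-of-thumb threshold, not an official one
    print("Residual coherence above 0.1 in the 10-100 Hz band; "
          "feedforward retuning could reduce this coupling.")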

LHO General
austin.jennings@LIGO.ORG - posted 16:04, Monday 07 August 2023 (72039)
Ops Eve Shift Start

TITLE: 08/07 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 7mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

- CDS/SEI/DMs ok

- H1 has been locked for 16 hours

H1 PSL
ryan.short@LIGO.ORG - posted 15:18, Monday 07 August 2023 (72036)
PSL 10-Day Trends

FAMIS 19988

Just over a day ago, the trends show a brief period where the NPRO was off due to a site power glitch (see alog 72000). Since then, PMC REFL has been about 1 W higher, but I don't see a noticeable difference in PMC TRANS.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 13:27, Monday 07 August 2023 (72034)
Ops DAY Midshift Report

Still Observing and have been Locked for over 13 hours now. We purposefully left Observing twice for commissioning activities but have been Observing for the rest of the time.

H1 SEI (SEI)
corey.gray@LIGO.ORG - posted 11:58, Monday 07 August 2023 (72031)
H1 BSC/HAM ISI CPS Sensor Noise Spectra Check (FAMIS task, #19670)

[Measurements attached]

FAMIS LINK:  19670

BSC CPS:  Received following:

HAM CPS:  Looks good.

Non-image files attached to this report
H1 ISC (SUS)
sheila.dwyer@LIGO.ORG - posted 11:42, Monday 07 August 2023 (72025)
ESD bias change

Following up on 71938, the ETMX ESD bias has been set to +411 V and the DARM gain compensated.  We will wait for 20 minutes of quiet time like this, from 17:25 UTC August 7th until 17:45 UTC. 

Unfortunately there was a large glitch in DARM during this time, but there is a quiet stretch of time from about 17:28 UTC until 17:41 UTC. 

The attachment shows a comparison of spectra from these times; it does look like the full-bias time has lower noise.  It would probably be worth repeating this test because this noise is rather nonstationary.  
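If this test is repeated, the spectra comparison can be scripted; below is a minimal gwpy sketch. The +411 V stretch times come from this entry, while the reference stretch and the use of H1:GDS-CALIB_STRAIN are assumptions for illustration, and NDS2 access is required.

# Minimal sketch: compare DARM spectra between the +411 V bias stretch and a
# reference stretch. The reference times and the choice of H1:GDS-CALIB_STRAIN
# are assumptions for illustration only.
from gwpy.timeseries import TimeSeries

chan = "H1:GDS-CALIB_STRAIN"
new_bias = TimeSeries.get(chan, "2023-08-07 17:28", "2023-08-07 17:41")
reference = TimeSeries.get(chan, "2023-08-07 16:30", "2023-08-07 16:43")  # assumed quiet reference

asd_new = new_bias.asd(fftlength=16, overlap=8, method="median")
asd_ref = reference.asd(fftlength=16, overlap=8, method="median")

plot = asd_ref.plot(label="nominal bias", color="C0")
ax = plot.gca()
ax.plot(asd_new, label="+411 V bias", color="C1")
ax.set_xlim(10, 1000)
ax.legend()
plot.savefig("esd_bias_asd_comparison.png")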

Images attached to this report
H1 CAL
louis.dartez@LIGO.ORG - posted 11:34, Monday 07 August 2023 - last comment - 12:34, Monday 07 August 2023(72030)
mismatch in ETMX TST foton filter between CAL-CS and pyDARM/GDS
pyDARM and the GDS pipeline (which produces H1:GDS-CALIB_STRAIN) rely on exports of the CAL-CS Foton filterbanks installed on the front end. The filters installed on the front end and those used by the calibration pipeline need to be in sync to properly calibrate the IFO.

I'm attaching a comparison of the inverse sensing and ETMX actuation Foton tfs as installed in the CAL-CS path (plots). The inverse sensing, ETMX UIM, and ETMX PUM tf exports match well between what is currently installed on the front end and what pyDARM thinks is installed. That's good. However, the ETMX TST stage has a significant deviation above 10Hz. This can be seen on the last page of the attached PDF, which shows both tfs overlaid on the same bode plot (left column) and their residual, also in the form of a bode plot (right column).

This discrepancy is not ideal and should be fixed by re-exporting the TST foton filter for pyDARM to use, to be included in the next calibration export at LHO as per LHO:69563. 

T2000022 is a good resource with instructions on how to export the appropriate TFs from the CAL-CS foton banks.
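Once both transfer functions are in hand as complex arrays on a common frequency vector (e.g. from the foton export and the pyDARM model), the mismatch can be quantified from their ratio; a minimal numpy sketch is below. The placeholder arrays and the 5% / 5 degree thresholds are illustrative, not the calibration group's pass/fail criteria.

# Minimal sketch: compare a front-end TF export against the pyDARM model on a
# common frequency vector and flag where they disagree. The placeholder arrays
# and thresholds below are illustrative only.
import numpy as np

freqs = np.logspace(0, np.log10(7000), 500)        # Hz
tf_frontend = 1.0 / (1.0 + 1j * freqs / 3226.75)   # placeholder: TF with the HF pole
tf_pydarm = np.ones_like(freqs, dtype=complex)     # placeholder: TF without it

residual = tf_frontend / tf_pydarm
mag_err_pct = 100 * np.abs(np.abs(residual) - 1.0)
phase_err_deg = np.abs(np.degrees(np.angle(residual)))

bad = (mag_err_pct > 5.0) | (phase_err_deg > 5.0)
if bad.any():
    print(f"TFs deviate above ~{freqs[bad][0]:.0f} Hz "
          f"(up to {mag_err_pct.max():.1f}% / {phase_err_deg.max():.1f} deg)")
else:
    print("Front-end and pyDARM TFs agree within thresholds")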
Non-image files attached to this report
Comments related to this report
louis.dartez@LIGO.ORG - 12:34, Monday 07 August 2023 (72033)
Jeff informed me that this discrepancy is likely due to a 3.2kHz pole that needs to be compensated for, similar to LHO:33927. There's been some back and forth as to whether the "correct" thing to do is to include the compensation in the front end vs the GDS pipeline. 

The open question is whether the pole should be compensated in the front end (it currently isn't) or in the GDS pipeline; it's also not yet clear whether it is properly included in the GDS pipeline.
H1 DetChar
oli.patane@LIGO.ORG - posted 11:01, Monday 07 August 2023 - last comment - 16:27, Monday 07 August 2023(72029)
Voltage Drops Due to Weather

Over the past 6 hours, we've seen multiple drops in voltage due to the weather here - it has been raining on and off and there is thunder and lightning in the area. Attached plot shows what H0:FMC-EX_MAINS_CHAN_{1,2,3}_VOLTAGE are seeing.

The voltage drops are especially large between 14:55 and 16:47 UTC. Tagging DetChar.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 16:27, Monday 07 August 2023 (72038)

The last of these voltage glitches seen in the EX_MAINS voltage channels was at 18:46UTC.

We've had a lot of big drops in our range today, so I plotted the range against the EX MAINS voltage channels to see if there was any correlation between the glitches and the drops in range.

Highlighting the two-hour period with the largest and most frequent voltage drops (attachment 1) shows that the range did start dropping more often around this time, but the large range drops continued past it. Some of the voltage drops roughly line up with drops in the range in the following minute, but there is no consistent correlation. Attachments 2, 3, and 4 show the range channel overlaid on the voltage channels over the period of 12:00 UTC to 18:00 UTC. Attachments 5, 6, and 7 are zoomed-in looks at potential range drops following a power glitch. Notice that in some cases another voltage glitch occurred a bit earlier but did not result in a drop in the detector's range.
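For anyone redoing this comparison programmatically, a minimal gwpy sketch is below; the range channel name is an assumption (substitute whatever channel the range FOM actually uses) and NDS2 access is required.

# Minimal sketch: overlay the EX mains voltages on the range to look for
# correlated drops. RANGE_CHANNEL is an assumed name and should be replaced
# with the channel the range FOM actually uses.
from gwpy.timeseries import TimeSeriesDict

RANGE_CHANNEL = "H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC"  # assumed name
volt_channels = [f"H0:FMC-EX_MAINS_CHAN_{n}_VOLTAGE" for n in (1, 2, 3)]

data = TimeSeriesDict.get(volt_channels + [RANGE_CHANNEL],
                          "2023-08-07 12:00", "2023-08-07 18:00")

plot = data[RANGE_CHANNEL].plot(label="BNS range", color="k")
ax = plot.gca()
ax2 = ax.twinx()                       # voltages on a second y-axis
for chan in volt_channels:
    ax2.plot(data[chan].times.value, data[chan].value, label=chan, alpha=0.6)
ax2.set_ylabel("Mains voltage")
ax.set_title("EX mains voltage vs. range, 2023-08-07 12:00-18:00 UTC")
plot.savefig("ex_mains_vs_range.png")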

Images attached to this comment
H1 CDS
sheila.dwyer@LIGO.ORG - posted 23:11, Sunday 06 August 2023 - last comment - 16:28, Monday 07 August 2023(72019)
locking troubles, overflows on suspension computers

Austin, Sheila

Austin contacted me about intermittent locklosses at various stages of the acquisition sequence.  

He posted the attached verbal alarms log; a few interesting episodes in this log include: 

P_R_M  (Aug 6 23:32:38 UTC)
P_R_3  (Aug 6 23:32:38 UTC)
M_C_1  (Aug 6 23:32:38 UTC)
M_C_3  (Aug 6 23:32:38 UTC)

.....

P_R_2  (Aug 7 03:35:19 UTC)
S_R_2  (Aug 7 03:35:19 UTC)
M_C_2  (Aug 7 03:35:19 UTC)
T_M_S_X  (Aug 7 03:35:19 UTC)
T_M_S_Y  (Aug 7 03:35:19 UTC)
IFO_OUT  (Aug 7 03:35:19 UTC)

....

P_R_M  (Aug 7 04:14:22 UTC)
P_R_3  (Aug 7 04:14:22 UTC)
M_C_1  (Aug 7 04:14:22 UTC)
M_C_3  (Aug 7 04:14:22 UTC)

Verbal alarms looks at H1:FEC-(number)_ACCUM_OVERFLOW for these alarms.  PR3 saturations seem suspicious because we send no ISC feedback to PR3; I looked at the OSEMs, drive requests, and individual channel overflows and see nothing at this time, but FEC-ACCUM_OVERFLOW does show overflows at 4:14:19 UTC.  The suspensions reporting overflows at this time are all on HAM2, and all their models are on SUSH2A.  Also suspicious is when there is an overflow reported from PR2, SR2, and MC2 at the same time; these are all the suspensions on SUSH34.  This is what makes me think that the locking troubles may be due to some intermittent problem with CDS. 
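A quick way to confirm which models overflowed together is to pull the accumulated-overflow counters around one of the alarm times; a minimal gwpy sketch is below. The model-number list is a placeholder (the real SUSH2A/SUSH34 DCUIDs would come from the CDS overview), and NDS2 access is required.

# Minimal sketch: look for steps in the accumulated-overflow counters of a few
# front-end models around one of the alarm times. The model numbers below are
# placeholders, not the real SUSH2A/SUSH34 DCUIDs.
from gwpy.timeseries import TimeSeriesDict

model_numbers = [21, 22, 23]                     # placeholder DCUIDs
channels = [f"H1:FEC-{n}_ACCUM_OVERFLOW" for n in model_numbers]

data = TimeSeriesDict.get(channels, "2023-08-07 04:13", "2023-08-07 04:16")
for chan, ts in data.items():
    jump = ts.value[-1] - ts.value[0]
    if jump > 0:
        print(f"{chan}: accumulated overflow counter increased by {int(jump)} "
              "between 04:13 and 04:16 UTC")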

Images attached to this report
Non-image files attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 16:28, Monday 07 August 2023 (72035)SUS

Sheila, Dave, Austin, Rahul

Following up on the SUS saturation issue which Austin and Sheila faced on Sunday, I trended the ADC0 and ADC1 channels on H1SUSPRM and SUSH34.

For H1SUSPRM, I found two saturations (PRM_M3_WD_OSEMAC_BANDIM_UR_INMON), first at 23:32 UTC and second at 04:14 UTC - see attached plot. I trended all the INMONs and DAQ outputs for PRM and did not find any of the suspension channels saturating at those two times. I am attaching a screenshot of the DAQ output channels for the M1, M2, and M3 stages at both times, i.e. 23:32 UTC and 04:14 UTC.

Similarly, SUSH34 also showed some saturations on channel no. 24 (which is MC2_M3_WD_OSEMAC_BANDIMUL_INMON) - see the two plots attached - one shows all the channels in ADC0 and the other focuses on channel 24. For MC2 I see saturations in the DAQ output for the M2 and M3 stages at times coincident with SUSH34. I will investigate MC2 further (however, Sheila mentioned that it is fairly common for the MC2 DAQ to saturate during locking).    

Since they are also on SUSH34, I trended PR2 and SR2 as well and did not find any saturations in the INMONs or DAQ outputs.

                                                                              

Images attached to this comment
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 16:22, Wednesday 31 May 2023 - last comment - 20:53, Tuesday 01 October 2024(70050)
what to do if the SQZ ISS saturates

I've set the setpoint for the OPO trans to 60 uW; this gives us better squeezing and a little bit higher range.  However, the SHG output power sometimes fluctuates for reasons we don't understand, which causes the ISS to saturate and knocks us out of observing.  Vicky and operators have fixed this several times; I'm adding instructions here so that we can hopefully leave the setpoint at 60 uW and operators will know how to fix the problem if it arises again. 

If the ISS saturates, you will get a message on DIAG_MAIN; operators can then lower the setpoint to 50 uW as follows.

1) Take SQZ out of the IFO by requesting NO_SQUEEZING from SQZ_MANAGER. 

2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor.  In the sqzparams file you can set opo_grTrans_setpoint_uW to 50 (see the sketch after this list).  Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, and after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint.  

3) This change in the circulating power means that we need to adjust the OPO temperature to get the best SQZ.  Open the OPO temp ndscope from the SQZ scopes drop-down menu on the SQZ overview (pink oval in screenshot).  Then adjust the OPO temp setting (green oval) to maximize the CLF-REFL_RF6_ABS channel, the green one on the scope.  

4) Go back to observing by requesting FREQ_DEP_SQZ from SQZ_MANAGER.  You will have 2 SDF diffs to accept, as shown in the attached screenshot. 
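As referenced in step 2, the sqzparams change is a single line; a sketch of what that line looks like is below (the inline comment mirrors the format quoted in a later comment on this entry, and the rest of the file is omitted):

# sqzparams.py -- sketch of the single line changed in step 2 (rest of file omitted)
opo_grTrans_setpoint_uW = 50  # OPO trans power that the ISS will servo to; lowered from 60 uW, alog 70050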

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 20:13, Monday 05 June 2023 (70162)

Update: in the SDF diffs, you will likely not see H1:SQZ-OPO_ISS_DRIVEPOINT change, just the 1 diff for OPO_TEC_SETTEMP. The *ISS_DRIVEPOINT channel is used for commissioning, but the ISS stabilizes power to an un-monitored value which changes, H1:SQZ-OPO_ISS_SETPOINT.

Also, if the SQZ_OPO_LR guardian is stuck ramping in "ENGAGE_PUMP_ISS" (you'll see H1:SQZ-OPO_TRANS_LF_OUTPUT ramping), this is because the setpoint is too high to be reached, which is a sign to reduce "opo_grTrans_setpoint_uW" in sqzparams.py.

naoki.aritomi@LIGO.ORG - 16:56, Monday 07 August 2023 (72044)

Update for operators:

2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor. In the sqzparams file you can set opo_grTrans_setpoint_uW to 60 (previously 50). Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, and after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint. Check that the OPO ISS control monitor (H1:SQZ-OPO_ISS_CONTROLMON) is around 3 by opening SQZ OVERVIEW -> SQZT0 -> AOM +80MHz -> Control monitor (attached screenshot). If the control monitor is not around 3, repeat 2) and adjust opo_grTrans_setpoint_uW to bring it to around 3.
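The CONTROLMON check can also be done with a quick EPICS read instead of clicking through the SQZT0 screens; a minimal sketch, assuming pyepics is available in the control-room environment and using a 0.5 tolerance that is just a guess rather than an official band:

# Minimal sketch: check that the OPO ISS control monitor sits near 3 after
# relocking. Assumes pyepics is available; the 0.5 tolerance is a guess.
from epics import caget

controlmon = caget("H1:SQZ-OPO_ISS_CONTROLMON")
if controlmon is None:
    print("Could not reach H1:SQZ-OPO_ISS_CONTROLMON")
elif abs(controlmon - 3.0) > 0.5:
    print(f"CONTROLMON = {controlmon:.2f}; consider adjusting opo_grTrans_setpoint_uW")
else:
    print(f"CONTROLMON = {controlmon:.2f}; looks OK")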

Images attached to this comment
anthony.sanchez@LIGO.ORG - 18:27, Sunday 10 September 2023 (72792)

Vicky has asked me to note that this line in sqzparams.py should stay at 80, since the system is now tuned for 80 rather than 50 or 60.

Line 12: 
opo_grTrans_setpoint_uW = 80 #OPO trans power that ISS will servo to. alog 70050.

relevent alog:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72791

anthony.sanchez@LIGO.ORG - 20:53, Tuesday 01 October 2024 (80414)

Latest update on how to deal with this SQZ error message with a bit more clarity:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80413
