H1 SQZ
sheila.dwyer@LIGO.ORG - posted 09:17, Friday 27 September 2024 - last comment - 14:56, Friday 27 September 2024(80318)
SRCL detuning with FIS again

Took another data set of FIS (frequency-independent squeezing) with different SRCL offsets, to try to set the SRCL detuning for the calibration measurement, similar to 79903

 

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 14:56, Friday 27 September 2024 (80334)

Here are some plots made with code that borrows heavily from Vicky's repo here and from the noise budget repo.  I will put this code into a repo soon, perhaps here.

The first plot shows the spectra with different SRCL offsets in place, always with the squeezing angle optimized for kHz squeezing.  The no-squeezing model isn't well verified; I've used an SRCL detuning of 0, which we know isn't correct.  We subtract this no-squeezing model from the no-squeezing measurement to estimate the non-quantum noise, shown in gray here.  The SRC detuning doesn't change this estimate much without squeezing injected.

The next plot is a re-creation of Vicky's brontosaurus plot, as in 79951.  The non-quantum noise estimate is subtracted from each of the FIS curves, which are then plotted in dB relative to the no-squeezing model.  Each of those shows a squeezing data set with a model, where I adjusted the SRCL offset in the model by hand based on this plot.  The subtraction is needed to make the impact of the SRCL offset clear.

The final plot shows the linear fit of SRC detuning vs. SRCL offset, which gives us the SRCL offset we should use to go toward 0 detuning (-191 counts).
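For reference, a minimal sketch of those last two steps (not the actual analysis code, which lives in the repos linked above; all numbers below are placeholders, not measured values):

import numpy as np

# Placeholder spectra on a common frequency axis (illustration only)
freq = np.logspace(np.log10(20), np.log10(2000), 500)              # Hz
psd_nosqz_model = 1e-47 * (1 + (100.0 / freq)**4)                  # no-squeezing quantum model
psd_nonquantum  = 0.3e-47 * np.ones_like(freq)                     # non-quantum noise estimate
psd_fis         = 1.5 * psd_nosqz_model + psd_nonquantum           # one FIS measurement

# Brontosaurus-style curve: subtract the non-quantum estimate, express in dB
# relative to the no-squeezing model
fis_db = 10 * np.log10((psd_fis - psd_nonquantum) / psd_nosqz_model)

# Linear fit of fitted SRC detuning vs. SRCL offset; the zero crossing is the
# offset that should drive the detuning toward zero
offsets   = np.array([-400.0, -300.0, -200.0, -100.0, 0.0])        # counts (placeholders)
detunings = np.array([-0.20, -0.11, -0.01, 0.09, 0.19])            # degrees (placeholders)
slope, intercept = np.polyfit(offsets, detunings, 1)
print("SRCL offset for ~0 detuning: %.0f counts" % (-intercept / slope))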

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 09:06, Friday 27 September 2024 (80327)
Fri CP1 Fill

Fri Sep 27 08:14:05 2024 INFO: Fill completed in 14min 1secs

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:56, Friday 27 September 2024 (80326)
Ops Day Shift Start

TITLE: 09/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.44 μm/s
QUICK SUMMARY: Locked for one and a half hours, some shorter locks overnight as well. Verbal didn't mention any PIs for the last lock loss, but the length of the lock is suspicious. More investigation needed. Our range isn't optimal and the squeezing looks poor in the higher frequencies based on the nuc33 FOM.

 

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:03, Thursday 26 September 2024 (80325)
OPS Eve Shift Summary

TITLE: 09/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

IFO is LOCKING at ENGAGE_ASC_FOR_FULL_IFO. Fully auto so far...

Shift was mostly quiet. Despite locking issues today, the IFO locked pretty automatically following Tony's initial alignment. We did lose lock during LOWNOISE_ESD_ETMX (558), though Guardian brought it back to NLN shortly after and we were in observing swiftly.

In terms of PI modes, there was one very harsh ringup 23 minutes into NLN (or 34 mins after MAX_POWER). This gave 3 verbal PI mode 24 alarms, but the damping, even though it was at maximum, was able to bring it down. No other ringups.

There was one lockloss at 04:08 UTC, probably attributable to the environment and the rising secondary microseism / over-35mph wind combo - alog 80323.

Today's TCS work left 2 SDF diffs (screenshot attached).
LOG:

Start Time | System | Name | Location | Laser_Haz | Task | Time End
23:39 | PCAL | Tony, Neil | EX | N | PCAL Computer Acquisition | 23:51
00:21 | PEM | Robert | Y-arm | N | Looking for parts | 00:21

 

Images attached to this report
H1 SUS (SUS)
ibrahim.abouelfettouh@LIGO.ORG - posted 21:36, Thursday 26 September 2024 (80324)
Weekly In-Lock SUS Charge Measurement - FAMIS 28372

Weekly In-Lock SUS Charge Measurement - Closes FAMIS 28372

Observations:

Images attached to this report
H1 ISC (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 21:15, Thursday 26 September 2024 (80323)
Lockloss 04:08 UTC

Lockloss after a 3hr lock. A few details:

From the above, I think this lockloss may be environmentally caused rather than due to the PI issues we have been experiencing. Now having trouble locking ALS due to high winds.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:30, Thursday 26 September 2024 (80322)
Thursday OPS Day Shift End

TITLE: 09/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:

OMC issues
EtherCAT issues

NLN reached at 20:27 UTC!

CAL and PEM teams were able to get some commissioning done in the latter half of the day.... but....
PI mode 24 ringup.... -> lockloss before Sheila could finish her work. :(
ETMY ring heater turned up, so ETMY will start to drift.

 

LOG:

Start Time | System | Name | Location | Laser_Haz | Task | Time End
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD - Open ViewPort on HAM3! | 18:24
15:02 | FAC | Christina | Recycling bins | N | Using the forklift to move some heavy items into recycling. | 16:02
16:08 | PEM | Robert | LVEA | Yes | Removed viewport and setting up for commissioning | 17:08
17:46 | FAC | Karen | Optics lab & Vac Prep | N | Technical cleaning | 18:31
18:35 | EE | Sigg & Fernando | LVEA | Yes | Checking pinout for EtherCAT | 20:09
19:58 | EE | Sigg, Fil | HAM7 racks | Yes | Removing and replacing whitening boards | 19:38
20:00 | EE | Fernando & Fil | LVEA | Yes | OMC DCPD troubleshooting | 20:09
20:53 | EE | Fil | MidY | N | Gathering parts to fix the spare OMC Whitening Chassis | 21:53
21:12 | PEM | Robert | LVEA | YES | HAM2 PEM injections | 22:12
22:47 | PEM | Robert | LVEA | Yes | Putting the viewport back on and turning PEM equipment off. | 23:47


 

 

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:10, Thursday 26 September 2024 (80321)
OPS Eve Shift Start

TITLE: 09/26 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Calibration
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 16mph Gusts, 10mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.24 μm/s
QUICK SUMMARY:

IFO is in MAINTENANCE (Calibration and OMC Whitening Chassis work) but now just trying to get locked.

We're in initial alignment now and prepping to lock for observing!

 

H1 TCS
sheila.dwyer@LIGO.ORG - posted 15:37, Thursday 26 September 2024 (80320)
increased ETMY ring heater

To try to avoid the 10 kHz PI (MODE 24), I've increased the power on both segments of the ETMY ring heater from 1 W to 1.1 W.  I did this after the calibration measurements, when the PI was already ringing up, and we've now lost lock from the PI.  (For recent history, see 80299.)

 

 

H1 AOS
robert.schofield@LIGO.ORG - posted 15:20, Thursday 26 September 2024 (80319)
PR2 scraper baffle reflection at 19mW, was 17mW

I measured the power of the beam coming out of the HAM3 illuminator viewport and found it to be 19 mW, compared to the 17 mW measured in alog 78878. The beam is the part of the PR2-to-PR3 beam that is clipped by the aperture of the scraper baffle and reflected off the baffle. We had minimized it from 47 mW to 17 mW for the referenced alog, and wanted to see if it was clipping more - not much.

H1 CAL
anthony.sanchez@LIGO.ORG - posted 15:05, Thursday 26 September 2024 (80317)
Calibration Sweep Complete

pydarm measure --run-headless bb
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240926T212332Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240926T212332Z.xml saved
diag> quit
EXIT KERNEL

2024-09-26 14:28:43,409 bb measurement complete.
2024-09-26 14:28:43,409 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240926T212332Z.xml
2024-09-26 14:28:43,410 all measurements complete.
anthony.sanchez@cdsws29:

 

21:30 UTC gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1_asc.ini;gpstime
PDT: 2024-09-26 14:30:05.062848 PDT
UTC: 2024-09-26 21:30:05.062848 UTC
GPS: 1411421423.062848

2024-09-26 21:53:35,296 | INFO | Finished gathering data. Data ends at 1411422832.0
2024-09-26 21:53:36,077 | INFO | It is SAFE TO RETURN TO OBSERVING now, whilst data is processed.
2024-09-26 21:53:36,077 | INFO | Commencing data processing.
2024-09-26 21:53:36,077 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.

2024-09-26 22:01:43,741 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,754 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,764 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,774 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,784 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240926T213006Z.hdf5
ICE default IO error handler doing an exit(), pid = 2864258, errno = 32
PDT: 2024-09-26 15:01:43.864814 PDT
UTC: 2024-09-26 22:01:43.864814 UTC
GPS: 1411423321.864814
anthony.sanchez@cdsws29:

 

 

Images attached to this report
H1 General (CDS)
anthony.sanchez@LIGO.ORG - posted 13:46, Thursday 26 September 2024 (80313)
SDF Diffs Accepted

After the EtherCAT reboot we had a few SDF Diffs that needed to be accepted.
Screenshot.

Images attached to this report
H1 ISC (CDS, ISC)
keita.kawabe@LIGO.ORG - posted 12:24, Thursday 26 September 2024 - last comment - 14:54, Thursday 26 September 2024(80309)
OMC whitening switching issue (Tony, TJ, JoeB, Sheila, Fil, Patrick, Daniel, Keita among others)

This morning Tony and TJ had a hard time locking the OMC.

We've found that the OMC DCPD A and B outputs are very asymmetric only when there was a fast transient (1st attachment), but not when the OMC length was slowly brought close to resonance (2nd attachment), which suggested a whitening problem.

The transfer function from DCPD A to B suggested that the switchable hardware whitening was ON for DCPD_A and OFF for B, when it was supposed to be OFF for both. The 3rd attachment shows the transfer function from DCPD_A to B, and the 4th attachment shows the anti-whitening filter shape.

Switching ON the anti-whitening only for DCPD_A made the frequency response flat. Trying to switch the analog whitening ON and OFF by toggling H1:OMC-DCPD_A_GAINTOGGLE didn't change the hardware whitening status; it's totally stuck.
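For illustration, a self-contained sketch of the kind of A-to-B transfer function check described above (synthetic data and a placeholder whitening zero/pole, not the real filter or the tool actually used):

import numpy as np
from scipy import signal

fs = 16384
rng = np.random.default_rng(0)
raw = rng.standard_normal(64 * fs)                       # common signal seen by both DCPD paths

# Placeholder whitening stage: zero at 1 Hz, pole at 100 Hz, unity DC gain
z_d, p_d, k_d = signal.bilinear_zpk([-2 * np.pi * 1.0], [-2 * np.pi * 100.0], 100.0, fs)
sos = signal.zpk2sos(z_d, p_d, k_d)
dcpd_a = signal.sosfilt(sos, raw)                        # path with whitening stuck ON
dcpd_b = raw.copy()                                      # path with whitening OFF

# Transfer function A -> B = CSD(a, b) / PSD(a): flat if both paths share the same
# whitening state, shaped like the (anti-)whitening filter if they differ
f, p_aa = signal.welch(dcpd_a, fs=fs, nperseg=4 * fs)
_, p_ab = signal.csd(dcpd_a, dcpd_b, fs=fs, nperseg=4 * fs)
mag_db = 20 * np.log10(np.abs(p_ab / p_aa))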

We tried to lock the IFO using only DCPD_B, but the IFO unlocked for some reason.

After the IFO lost lock, people on the floor found that the problem is in the whitening chassis, not the BIO. It's not clear if we can fix the board in the chassis (which is preferable) or have to swap the whitening chassis (less preferable, as the calibration group would need to measure the analog TF and generate a compensation filter).

We'll update as we make progress.

 

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 12:57, Thursday 26 September 2024 (80310)

Fernando, Fil, Daniel

DCPD whitening chassis fixed.

We diagnosed a broken photocoupler in the DCPD whitening chassis. Since the photocoupler is located on the front interface board, we elected to swap this board with the one from the spare, which means the whitening transfer function should not have changed. Since we switched the front interface board together with the front panel, the serial number of the chassis has (temporarily) changed to that of the spare.

The in-vacuum DCPD amplifiers were powered off for 30-60 minutes while the repair took place. So, they need some time to thermalize.

filiberto.clara@LIGO.ORG - 13:32, Thursday 26 September 2024 (80312)

Unit installed is S2300003. The front panel and front interface board were removed/borrowed from S2300004.

louis.dartez@LIGO.ORG - 14:54, Thursday 26 September 2024 (80316)CAL
N.B. S2300004 and S2300002 have been characterized and fit already. See LHO:71763 and LHO:78072 for the S2300004 and S2300002 zpk fits, respectively.

Should the OMC DCPD Whitening chassis need to be fully swapped, we already have the information we need to install the corresponding compensation filters in the front end and in the pyDARM model to accommodate that change. This, of course, rides on the expectation that the electronics have not materially changed in their response in the interim.
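As a rough illustration of what such a compensation filter amounts to (placeholder zero/pole/gain, not the fitted values from LHO:71763 or LHO:78072):

import numpy as np
from scipy import signal

# Placeholder zpk for one whitening stage: zero at 1 Hz, pole at 100 Hz, unity DC gain
z_whiten = [-2 * np.pi * 1.0]
p_whiten = [-2 * np.pi * 100.0]
k_whiten = 100.0

# The compensation (anti-whitening) filter is the inverse: swap zeros and poles, invert the gain
z_comp, p_comp, k_comp = p_whiten, z_whiten, 1.0 / k_whiten

# Sanity check: the product of the two responses should be flat (0 dB) across the band
f = np.logspace(0, 4, 500)                               # Hz
_, h_w = signal.freqs_zpk(z_whiten, p_whiten, k_whiten, worN=2 * np.pi * f)
_, h_c = signal.freqs_zpk(z_comp, p_comp, k_comp, worN=2 * np.pi * f)
print(np.max(np.abs(20 * np.log10(np.abs(h_w * h_c)))))  # ~0 dB across the band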

H1 General
anthony.sanchez@LIGO.ORG - posted 11:27, Thursday 26 September 2024 (80307)
Thurs Mid Shift update

TITLE: 09/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY:

Locking notes:
Ran an initial Alignment.
After Initial Alignment I started the Locking process, and bounced to PRMI 3 times even though I had good DRMI flashes.
On the 3rd time of jumping to PRMI, I manualed back to Align_recycling mirrors, then requested NLN. That took us to DRMI, then I touched up SRM in yaw.
 
OMC issue in Prep_DC_READOUT_TRANSITION.
Tridents looking good, but the TUNE_OFFSETS OMC_LOCK Guard state tunes Away from the desired value.
TJ Tried locking by hand...
OMC DCPDs are not balanced? (I'm not sure what they mean by this.)
Sheila suggests that it might be a Whitening issue and Keita confirms this.
We are trying to switch which OMC DCPD we are using: we are now using DCPD B and plan to switch back to DCPD A later on. An effort was made to keep the configuration the same for the CAL team, so swapping the whitening chassis was not the best option. We are stuck in the "high" whitening state, which makes the output of the OMC DCPDs high, but the high state is where we need it to be for NLN.
Sheila and Keita did the DCPD switching to get around the whitening issues, just to have a lockloss in the next ISC_LOCK state, DARM_TO_DC_READOUT. :(

Relocking again got to PRMI....

17:43 UTC EtherCAT failure; Fernando, Fil, and Sigg are working on resolving the issue.

Current H1 status is Down for Corrective maintenance.


 

 

 

 

H1 CDS
david.barker@LIGO.ORG - posted 10:55, Thursday 26 September 2024 - last comment - 13:13, Thursday 26 September 2024(80306)
Slow controls system down for investigation, alarms bypassed

Daniel has the Beckhoff slow controls system offline for investigation.

I've bypassed the following alarms:

Bypass will expire:
Thu Sep 26 10:54:08 PM PDT 2024
For channel(s):
    H1:PEM-C_CER_RACK1_TEMPERATURE
    H1:PEM-C_MSR_RACK1_TEMPERATURE
    H1:PEM-C_MSR_RACK2_TEMPERATURE
    H1:PEM-C_SUP_RACK1_TEMPERATURE
 

Comments related to this report
david.barker@LIGO.ORG - 13:13, Thursday 26 September 2024 (80311)

All alarms are active again.

H1 AOS
jason.oberling@LIGO.ORG - posted 13:44, Tuesday 24 September 2024 - last comment - 11:18, Thursday 26 September 2024(80271)
SR3 Optical Lever

J. Oberling, O. Patane

Today we started to re-center the SR3 optical lever after SR3 alignment was reverted to its pre-April alignment.  That's not quite how it went down, however...

We started by hooking up the motor driver and moving the QPD around (via the crossed translation stages it is attached to), and could not see any improvement to the OpLev signal.  While moving the horizontal translation stage it suddenly stopped and started making a loud grinding noise, like it had hit its, or a, limit.  Not liking the sound of that, we launched on figuring out fall protection to climb on top of HAM4 to investigate.  While the fall protection was getting figured out we took a look at the laser and found it dead.  No light, no life, all dead.  So we grabbed a spare laser from the Optics Lab and installed it (did not turn it on yet).

Once the fall protection was figured out I climbed on top of HAM4 and opened the OpLev receiver.  I couldn't visually see anything wrong with the stage.  It was near the center of its travel range, and nothing else looked like it was hung up.  I removed the QPD plate and the vertically mounted translation stage to get a better view of the stuck stage, and could still see nothing wrong.  Oli tried moving the stage with the driver and it was still making the loud noise, and the stage was not moving.  So it was well and truly stuck.  We grabbed one of the two spare translation stages from the EE shop (where Fernando was testing the remote OpLev recentering setup), tested it to make sure it worked (it did!), and installed it in the SR3 OpLev receiver.  The whole receiver was reassembled and the laser was turned on.  Oli slowly turned up the laser power while I watched for the beam, and once it was bright enough Oli then moved the translation stages to roughly center it on the QPD.

Something interesting: as Oli was turning up the laser power it would occasionally flash bright and then return to the brightness it was at before the flash.  They got it bright enough to see a SUM count of ~3k, and then re-centered the OpLev.  At this point I closed up the receiver and came down from the chamber.  I turned the laser power up to return the SUM counts to the ~20k it was at before the SR3 alignment shift and saw the SUM counts jump just like the beam would flash.  This happened early in the power adjustment (for example: started at ~3k SUM, adjusted up and saw a flash to ~15k, then back down to ~6k) but leveled off once the power was higher (I saw no jumps once the SUM counts were above 15k or so).  Maybe some oddness with a low injection current for the laser diode?  Not sure.  The OpLev is currently reading ~20k SUM counts and looks OK, but we'll keep an eye out to see if it remains stable or starts behaving oddly.

The SR3 optical lever is now fixed and working again.

New laser SN is 197-3, old laser SN is 104-1.  SN of the new translation stage is 10371.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 11:18, Thursday 26 September 2024 (80308)

Forgot to add: once the translation stage became stuck, the driver was still recording movement (the counts would change when we tried to move the stage) even though the stage was clearly not moving.  So the motor encoder for the stage was working while the stage itself was stuck.

H1 DetChar (DetChar, DetChar-Request)
gabriele.vajente@LIGO.ORG - posted 11:27, Wednesday 18 September 2024 - last comment - 17:47, Wednesday 02 October 2024(80165)
Scattered light at multiples of 11.6 Hz

Looking at data from a couple of days ago, there is evidence of some transient bumps at multiples of 11.6 Hz. Those are visible in the summary pages too, around hour 12 of this plot.

Taking a spectrogram of data starting at GPS 1410607604, one can see at least two times where there is excess noise at low frequency. This is easier to see in a spectrogram whitened to the median. Comparing the DARM spectra in a period with and without this noise, one can identify the bumps at roughly multiples of 11.6 Hz.
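A minimal gwpy sketch of that kind of median-whitened spectrogram (channel choice and spectrogram parameters here are assumptions, not necessarily what was used for the attached plots):

from gwpy.timeseries import TimeSeries

# One hour of calibrated strain starting at the GPS time quoted above
data = TimeSeries.get('H1:GDS-CALIB_STRAIN', 1410607604, 1410607604 + 3600)

# ASD spectrogram, then normalize each frequency bin by its median over time
spec = data.spectrogram(30, fftlength=8, overlap=4) ** (1 / 2.)
whitened = spec.ratio('median')

plot = whitened.plot(norm='log', vmin=0.3, vmax=3)
ax = plot.gca()
ax.set_ylim(10, 100)
ax.set_yscale('log')
plot.savefig('darm_median_whitened_spectrogram.png')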

Maybe somebody from DetChar can run LASSO on the BLRMS between 20 and 30 Hz to find out if this noise is correlated to some environmental or other changes.

Images attached to this report
Comments related to this report
jane.glanzer@LIGO.ORG - 14:29, Thursday 26 September 2024 (80314)DetChar

I took a look at this noise, and I have some slides attached to this comment. I will try to roughly summarize what I found. 

I started by taking some 20-30 Hz BLRMS around the noise. Unfortunately, the noise is pretty quiet, so I don't think lasso will be super useful here. Even taking a BLRMS for a longer period around the noise didn't produce much. I can re-visit this (maybe take a narrower BLRMS?), but as a separate check I looked at spectra of the ISI, HPI, SUS, and PEM channels to see if there was excess noise anywhere in particular. I figured this could at least narrow down a station where there is more noise at these frequencies.

What I found was:

  1. Didn't see excess noise in the EY or EX channels at ~11.6 Hz or at the second/third harmonics.
  2. Many CS channels had some excess noise around 11.6 hz, less at the second/third harmonics.
  3. However, of the CS channels that DID have excess noise around 11.6 Hz and 23.2 Hz, HAM8 area popped up the most. Specifically these channels: H1:PEM-FCES_ACC_BEAMTUBE_FCTUBE_X_DQ, H1:ISI-HAM8_BLND_GS13Z_IN1_DQ, H1:ISI-HAM8_BLND_GS13X_IN1_DQ.
  4. HAM3 also popped up, and the Hveto results for this day had some glitches witnessed by H1:HPI-HAM3_BLND_L4C_RZ_IN1_DQ.
  5. Potential scatter areas: something near either HAM8 or HAM3?
Non-image files attached to this comment
jane.glanzer@LIGO.ORG - 12:33, Wednesday 02 October 2024 (80429)DetChar

I was able to run lasso on a narrower strain blrms (suggested by Gabriele), which made the noise more obvious. Specifically, I used a 21 Hz - 25 Hz blrms of auxiliary channels (CS/EX/EY HPI, ISI, PEM & SUS channels) to try to model a strain blrms of the same frequency via lasso. In the pdf attached, the first slide shows the fit from running lasso. The r^2 value was pretty low, but the lasso fit does pick up some peaks in the auxiliary channels that do line up with the strain noise. In the following slides, I made time series plots of the channels that lasso found to be contributing the most to the re-creation of the strain. The results are a bit hard to interpret, though. There seem to be roughly 5 peaks in the aux channel blrms, but only 2 major ones in the strain blrms. The top contributing aux channels are also not really from one area, so I can't say that this narrowed down a potential location. However, two HAM8 channels were among the top contributors (H1:ISI_HAM8_BLND_GS_X/Y). It is hard to say if that is significant or not, since I am only looking at about an hour's worth of data.
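Roughly, the pipeline described above looks like the following sketch (hypothetical channel list, stride, and regularization, with scikit-learn's Lasso standing in for the actual fitting tool):

import numpy as np
from gwpy.timeseries import TimeSeriesDict
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

start, end = 1410607604, 1410607604 + 3600               # placeholder hour of data
channels = ['H1:GDS-CALIB_STRAIN',
            'H1:ISI-HAM8_BLND_GS13X_IN1_DQ',
            'H1:ISI-HAM8_BLND_GS13Z_IN1_DQ']             # two of the aux channels named above
data = TimeSeriesDict.get(channels, start, end)

def blrms(ts, flow=21, fhigh=25, stride=60):
    # band-limited RMS: bandpass, then RMS over fixed-length strides
    return ts.bandpass(flow, fhigh).rms(stride)

target = blrms(data['H1:GDS-CALIB_STRAIN']).value
aux = np.column_stack([blrms(data[c]).value for c in channels[1:]])

# Standardize and fit; nonzero coefficients flag the aux channels that best
# reconstruct the strain BLRMS
X = StandardScaler().fit_transform(aux)
y = (target - target.mean()) / target.std()
model = Lasso(alpha=0.1).fit(X, y)
print(dict(zip(channels[1:], model.coef_)), 'r^2 =', model.score(X, y))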

I did a rough check on the summary pages to see if this noise happened on more than one day, but at this moment I didn't find other days with this behavior. If I do come across it happening again (or if someone else notices it), I can run lasso again.

Non-image files attached to this comment
adrian.helmling-cornell@LIGO.ORG - 17:47, Wednesday 02 October 2024 (80437)DetChar

I find that the noise bursts are temporally correlated with vibrational transients seen in H1:PEM-CS_ACC_IOT2_IMC_Y_DQ. Attached are some slides which show (1) scattered light noise in H1:GDS-CALIB_STRAIN_CLEAN from 1000-1400 on September 17, (2) and (3) the scattered light incidents compared to a timeseries of the accelerometer, and (4) a spectrogram of the accelerometer data.

Non-image files attached to this comment