H1 CAL (AOS, ISC)
louis.dartez@LIGO.ORG - posted 19:03, Thursday 14 March 2024 - last comment - 19:44, Thursday 14 March 2024(76399)
First successful calibration suite in the new darm offloading configuration
Gabriele, Louis

We've successfully run a full set of calibration swept-sine measurements in the new DARM offloading configuration (LHO:76315). In December, I tried running simulines in the new DARM state without success. I reduced all injection amplitudes by 50% but kept knocking the IFO out of lock (LHO:74883). After those repeated failures, I realized that the right thing to do was to scale the swept-sine amplitudes by the changes that we made to the filters in the actuation path. I prepared four sets of simulines injections last year that we finally got to try this evening. The simulines configurations that I prepared live at /ligo/groups/cal/src/simulines/simulines/newDARM_20231221. In that directory are 1) simulines injections scaled by the exact changes we made to the locking filters, and 2)-4) reductions by factors of 10, 100, and 1000 of the rescaled injections, which I made out of an abundance of caution.
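(The rescaling idea is simple enough to sketch. The actual script is adjust_amp_simulines.py, linked in the comments below; this snippet is only an illustrative sketch with made-up filter shapes and placeholder amplitudes, assuming numpy/scipy.)

    # Sketch only: scale each swept-sine amplitude by the change in actuation-path
    # filter gain, so the drive reaching the suspension stays roughly what it was
    # in the old DARM offloading configuration.
    import numpy as np
    from scipy import signal

    # placeholder old/new locking filters (zpk); the real ones come from foton
    old_filt = signal.ZerosPolesGain([], [-2*np.pi*1.0], 2*np.pi*1.0)
    new_filt = signal.ZerosPolesGain([], [-2*np.pi*5.0], 2*np.pi*5.0)

    def rescale_amplitudes(freqs_hz, amps):
        """Scale each injection amplitude by |H_old / H_new| at its frequency."""
        w = 2*np.pi*np.asarray(freqs_hz)
        _, h_old = signal.freqresp(old_filt, w)
        _, h_new = signal.freqresp(new_filt, w)
        return np.asarray(amps) * np.abs(h_old / h_new)

    # swept-sine start frequencies quoted further down this page, placeholder amplitudes
    print(rescale_amplitudes([5.6, 7.97, 14.44, 64.78, 339.4], [1e-3]*5))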

The measurements we took this evening are: 

2024-03-15 01:44:02,574 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,582 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,591 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,599 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,605 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240315T012231Z.hdf5


We did not get to take a broadband PCALY2DARM measurement as we usually do as part of the normal measurement suite. Next steps are to update the pyDARM parameter file to reflect the current state of the IFO, process these new measurements, then use them to update the GDS pipeline and confirm that it is working well. More on that progress in a comment.


Relevant Logs:
- success in transitioning to the new DARM offloading scheme in March 2024: LHO:76315
- unable to transition into the new offloading in January 2024, (we still don't have a good explanation for this): LHO:75308
- cal-cs updated for the new darm state: LHO:76392
- weird noise in cal-cs last time we tried updating the front end calibration for this state (still no explanation): LHO:75432
- previous problems calibrating this state in December: LHO:74977
- simulines lockloss in new darm state in December: LHO:74887
Comments related to this report
louis.dartez@LIGO.ORG - 19:08, Thursday 14 March 2024 (76401)
The script I used to rescale the simulines injections is at /ligo/groups/cal/common/scripts/adjust_amp_simulines.py. It's the same script (slightly modified) that I used in LHO:74883.
louis.dartez@LIGO.ORG - 19:41, Thursday 14 March 2024 (76403)
On Updating the pyDARM parameter file for the new DARM state:


- Copied H1OMC_1394062193.txt to /ligo/groups/cal/H1/arx/fotonfilters/ (see the Nov 28, 2023 discussion section in LIGO-T2200107 regarding cal directory structure changes for O4b). Since the pyDARM logic isn't fully transitioned yet, I also copied the same file to the 'old' location: /ligo/svncommon/CalSVN/aligocalibration/trunk/Common/H1CalFilterArchive/h1omc/.
- I also copied H1SUSETMX_139441589.txt to both (corresponding) locations.
- pyDARM parameter file swstat values were updated according to what was active at 1394500426 (SUSETMX and DARM1,2)
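(Aside: one way to double-check which filter modules were engaged at the reference time is to read back the filter bank's SWSTAT record. The sketch below assumes gwpy is available and uses an example channel name that may not be the one that matters; the actual update was done against the live settings.)

    # Sketch: read back a filter-bank switch state at the reference GPS time
    from gwpy.timeseries import TimeSeries

    gps = 1394500426
    chan = 'H1:SUS-ETMX_L2_LOCK_L_SWSTAT'   # assumed example channel
    swstat = TimeSeries.get(chan, gps, gps + 1)
    print(chan, int(swstat.value[0]))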

the git commit encapsulating the changes to the parameter file can be found here: https://git.ligo.org/Calibration/ifo/H1/-/commit/119768de95a66658039036aca358364c1d39abe4
louis.dartez@LIGO.ORG - 19:44, Thursday 14 March 2024 (76404)
here is the pyDARM report for this measurement: https://ldas-jobs.ligo-wa.caltech.edu/~cal/?report=20240315T012231Z
H1 ISC
georgia.mansell@LIGO.ORG - posted 18:15, Thursday 14 March 2024 - last comment - 14:09, Friday 15 March 2024(76398)
Waits that maybe can be removed from ISC_LOCK and other guardian nodes

Last week when we locked the new OMC by hand I copy-pasted some guardian code into a shell, and found that there was a gain set and wait that were totally unnecessary. This inspired me to start reading through ISC_LOCK to look for other redundant waits. Here are my notes; I only got up to the start of LOWNOISE_ASC before I went totally cross-eyed.

Here are the notes I took; the ones in bold we can for sure remove.

Preparing for lock

Lines 301-305 [ISC_LOCK, DOWN] PRCL UGF servo turn off (do we still do this?), no wait times but maybe unnecessary
Line 327 [PREP_FOR_LOCKING] Thermalization guardian (are we using this?)
Line 350-354 [PREP_FOR_LOCKING] Turn off CHARD blending, no wait times
Line 423 [PREP_FOR_LOCKING] turn off PR3 DRIVEALIGN P2P offset for PR3 wire heating
Line 719 [PREP_FOR_LOCKING] toggling ETM ESD HV if the output is low, seems redundant with line 445

Initial alignment


INITIAL_ALIGNMENT for the green arms only offloads a minute or 2 after it's visually converged. Initially I thought the convergence checker thresholds should be raised, but it's a 30 second average. Might make sense to reduce the averaging time?
(2 screenshots attached for this one)

Arm Length Stabilization

ALS_DIFF [LOCK] Ramps DARM gain to 40, waits 5 seconds, ramps DARM gain to 400, waits 10 seconds. Seems long.
ALS_DIFF Line 179, waits 2* the DARM ramp time, but why?
ALS_DIFF [LOCK] Engages boosts with a 3 second wait, engages boosts with another 10 second wait
ALS_DIFF Line 191 wait timer 10 seconds seems unnecessary.

ALS_COMM [PREP_FOR_HANDOFF] line 90 5 second wait - maybe we could shorten this?
ALS_COMM [HANDOFF_PART_3] lines  170 and 179 - 2 and 5 second timers but I'm not sure I get why they are there
ALS_COMM's Find IR takes 5 seconds of dark data, has two hard coded VCO offsets in FAST_SEARCH, if it sees a flash it waits 5 seconds to make sure it's real, and then moves to FINE_TUNE_IR, taking 50 count VCO steps until the transmitted power is >0.65
Suggest updating the hard coded offsets (ALS_DIFF line 247) from [78893614, 78931180] to [78893816, 78931184] (second spot seems good, first spot needs a few steps)

ALS_DIFF's find IR has 3 locations saved in alsDiffParams.dat which it searches around. This list gets updated each time it finds a new spot; HOWEVER, the search starts 150 counts away from the starting location and steps in toward it in increments of 30. Seems like it would be more efficient to start 30 below the saved offset?
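(A tiny sketch of the two search orders being compared, just to make the suggestion concrete; the real logic lives in ALS_DIFF and alsDiffParams.dat and may differ.)

    # Sketch: current vs. suggested IR search order around a saved VCO offset
    def search_current(saved, span=150, step=30):
        # start 150 counts away and step back in toward the saved spot
        return [saved - span + i*step for i in range(span//step + 1)]

    def search_suggested(saved, step=30, n=6):
        # start just 30 counts below the saved offset and step up from there
        return [saved - step + i*step for i in range(n)]

    print(search_current(78931184))
    print(search_suggested(78931184))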

ISC_LOCK [CHECK_IR] line 1206 has a 5 second wait after everything is done which could probably be reduced?

Power- and dual- recycled michelson locking

PRMI locking - a bunch of 1 second waits; I don't know if they are needed?
ISC_DRMI line 627/640 [PRMI_LOCKED] self.timer['wait']=1 seems maybe unnecessary?
ISC_DRMI line 746, 748 [ENGAGE_PRMI_ASC] - MICH P and Y ASC ramp on with a 20 second timer, but wait = False; this seems long anyway?
ISC_DRMI line 762/765/768 [ENGAGE_PRMI_ASC] self.timer['wait'] = 4... totally could remove this?
ISC_DRMI [PRMI_TO_DRMI] line 835 - wait timer of 4 seconds (but I don't think it actually waited 4 seconds, see the third attached screenshot, so maybe I don't know what self.timer['wait'] really means! See the timer sketch just after these notes.)
When doing the PRMI to DRMI transition it first offloads the PRMI ASC, does the PRMI_TO_DRMI_TRANSITION state, then runs through the whole DOWN state of ISC_DRMI, which takes ~25 seconds? Maybe there is a quicker way to do this.
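(On the timer['wait'] question above: the usual Guardian pattern, as I understand it, is that assigning to self.timer only starts a countdown, and a state only pauses if run() explicitly polls that timer. The sketch below is generic Guardian boilerplate, not copied from ISC_DRMI, so the details are assumed.)

    # Generic Guardian timer pattern (illustrative; not the actual ISC_DRMI code)
    from guardian import GuardState

    class PRMI_TO_DRMI(GuardState):
        def main(self):
            self.timer['wait'] = 4      # starts a 4 s countdown; this does not block

        def run(self):
            if not self.timer['wait']:  # timer has not yet expired
                return False            # stay in this state
            return True                 # timer expired, move on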


In ISC_DRMI there's a self.caution flag which is set to True if AS_C is low, and has 10 second waits after ASC engagements and a *90 second* wait before turning on the SRC ASC. It might be worthwhile to replace this with a convergence checker, since we might not need to wait a minute and a half if we are already well aligned.
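(A minimal sketch of what a convergence check could look like in place of the fixed 90 second wait; the channel names and the threshold are placeholders, not the real ones.)

    # Sketch: run() could poll something like this instead of sitting out a fixed timer
    def src_asc_converged(ezca, threshold=0.3):
        """Return True once the SRC ASC error signals are small enough to move on."""
        errors = [abs(ezca['ASC-SRC1_P_OUTPUT']),
                  abs(ezca['ASC-SRC1_Y_OUTPUT'])]
        return max(errors) < threshold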

Line 1845 ISC_LOCK [CHECK_AS_SHUTTERS] 10 second wait for...? This happens after the MICH RSET but before the FAST SHUTTER is requested to READY
Lines 1837/8 and 1865 redundant?
Line 1870 wait timer 2 seconds after AS centering + MICH turned back on but why
Line 1887 - straight up 10 second wait after increasing MICH gain by 20dB

CARM offset

Line 2119 [CARM_TO_TR] time.sleep(3) at the end of this state not clear what we're waiting for
Line 2222 [DARM_TO_RF] self.timer['wait'] = 2 that used to be 1
Line 2235 [DARM_TO_RF] another 2 second timer?
Line 2314 [DHARD_WFS] 20 second timer to turn DHARD on, but maybe we should just go straight to convergence checking once the gains are ramped on?
Line 2360 [PARK_ALS_VCO] 5 second wait after resetting the COMM and DIFF PLLs
Line 2406 [SHUTTER_ALS] 5 second wait followed by a 1 second sleep after the X arm, Y arm, and COMM are taken to shuttered
Line 2744 [CARM_TO_ANALOG] 2 second wait when REFLBIAS boost turned off but before summing node gain (A IN2) increased?
Line 2753 [CARM_TO_ANALOG] 5 second wait after summing node gain increased
Line 2760 [CARM_TO_ANALOG] 2 second wait after enabling digital CARM antiboost?
Line 2766 [CARM_TO_ANALOG] 2 second wait after turning on analog CARM boost
Line 2772 [CARM_TO_ANALOG] 2 second wait after ramping the REFL_DC_BIAS gain to 0, actually maybe this one makes sense.

Full IFO


There are a ton of waits during the ASC engagement but I think usually the convergence checkers are the limit to time spent in the state.
Line 3706 [ENGAGE_SOFT_LOOPS] 5 second wait after everything has converged?
Line 3765 [PREP_DC_READOUT_TRANSITION] 3 second wait after turning on DARM boost but shouldn't it be 1 second?
Line 3816 [DARM_TO_DC_READOUT] 10 second wait after switching DARM intrix from AS45 to DC readout, might be excessive
Line 3826/7 [DARM_TO_DC_READOUT] - DARM gain is set to 400 (but it's already 400!) and then there is a 4 second wait, these two lines can for sure be removed!
Line 3834 [DARM_TO_DC_READOUT] - 5 second wait after ramping some offsets to 0, BUT the offsets ramp much more quickly than that!

Power up


line 4033 [POWER_10_W] 30 second wait after turning on some differential arm ASC filters but actually, never mind I don't think it actually does this wait
Line 4299 [REDUCE_RF45_MODULATION_DEPTH] we have a 30 second ramp time to ramp the modulation depths, maybe this could be shorter?
Line 4614 [MAX_POWER] 20 second thermal wait could be decreased?
Line 4641 [MAX_POWER] 30 second thermal wait could be decreased??
Line 4645 [MAX_POWER] 30 second thermal wait for the final small step could be decreased?

Lownoise stuff


line 4463/4482 [LOWNOISE_ASC] 5 second wait after we turn off RPC gains that were already off
line 4490 [LOWNOISE_ASC] 10 second wait after CHARD_Y gain lowered, but it looks to have stabilized after 5 seconds so I think we can drop this to 5.
honestly a lot of waits in lownoise_asc so I ran out of time to check them all for necessity

Images attached to this report
Comments related to this report
georgia.mansell@LIGO.ORG - 12:23, Friday 15 March 2024 (76425)

More waits in the guardian:

line 4530 [LOWNOISE_ASC] 5 second wait after turning up (more negative) MICH gain, next steps are not MICH related so maybe we can shorten it?
line 4563 [LOWNOISE_ASC] 10 second ramp after changing top mass damping loop yaw gains, then another 10 second ramp after lowering SR2 and SR3 everything damping loop yaw gains? probably can lump these together and then also maybe reduce the wait?
too scared to think about the timers in transition_from_etmx, but the whole state takes about 3 minutes, which I guess makes sense since we ramp the ESDs down, and up again, also this has been newly edited today
line 5503 [LOWNOISE_LENGTH_CONTROL] 10 second wait after setting up filters for LSC feedforward, and some MICH modifications but not sure why?
line 5536 [LOWNOISE_LENGTH_CONTROL] 10 second wait after changing filters and gains in MICH1/PRCL1/SRCL1 but all their ramp times are 2 or 5 seconds
line 5549 [LOWNOISE_LENGTH_CONTROL] 1 second wait after turning on LSCFF, maybe not needed?
line 5773 [LASER_NOISE_SUPPRESSION] 1 second waits after each LSC_REFL_SERVO gain step - could this be quicker?
line 5632 [OMC_WHITENING] 5 second wait after confirming OMC is locked could probably be skipped?

I'm attaching ISC_LOCK as I was reading it since it's always in a state of flux!

Non-image files attached to this comment
elenna.capote@LIGO.ORG - 13:01, Friday 15 March 2024 (76430)

Georgia and I looked through lownoise ASC together and tried to improve the steps of the state. Overall, it should be shorter, except we now need to add the new AS_A DC offset change. I have it set for a 30 second ramp, and I want it to go independently of other changes in lownoise ASC since the DC centering loops are slow. However, Gabriele says he has successfully engaged this offset with 10 seconds, so maybe it can be shorter. I would like to try the state once like this, and if it's ok, go to a shorter ramp on the offset. This is line 4551 in my version: self.timer['LoopShapeRamp'] = 30.

Generally, we combined various steps of the same ramp length that had previously been separate, such as ASC loop changes, MICH gain changes, smooth limiter changes, etc. I completely removed the RPC step because it does nothing now that we do not use RPC.

elenna.capote@LIGO.ORG - 13:38, Friday 15 March 2024 (76431)

Ok, this was a bad change because we lost lock in this state. It was not during the WFS offset change, it was likely during some other filter change. We probably combined too many steps into each section. I reverted to the old version of the state, but I did take out the RPC gain ramp to zero since that is unnecessary.

georgia.mansell@LIGO.ORG - 14:09, Friday 15 March 2024 (76433)

Quickly looking at the guardlog (first screenshot) and the buildups (second screenshot), and ASC error signals (third screenshot) during this LOWNOISE_ASC lockloss.

It seems like consolidating the test mass yaw damping loop gain change, and the SR2/SR3 damping loop gain change was not a good choice. It was a slow lockloss.

Probably the changes earlier in the state were safe though!

Images attached to this comment
H1 ISC (ISC)
craig.cahillane@LIGO.ORG - posted 18:14, Thursday 14 March 2024 (76397)
Set up SR785 at the PSL racks, and plugged into analog DARM A+ chassis
Graeme, Matt, Craig 

In preparation for some analog analysis of CARM and DARM, we set up and got some PSDs of REFL A 9 I (which is actually REFL A 9 Q at the racks due to our arbitrary delay).

Daniel helped us use the patch panel in the CER to route analog DARM to the PSL racks.

There hasn't been an obvious effect on DARM from our setup so far, so we will leave it like this for tonight.

Pictures are of: the SR785 setup at the PSL racks;
the CER patch panel, where we used a BNC to connect U2 patch 5 (which goes to the PSL racks) to U5 patch 6 (which goes to the HAM6 racks);
our connection to the OMC DCPD Whitening Chassis (OMC DCPD A+ slot);
and our connection to the HAM6 patch panel.
Images attached to this report
H1 CAL
louis.dartez@LIGO.ORG - posted 17:55, Thursday 14 March 2024 - last comment - 12:47, Friday 15 March 2024(76396)
DARM PSD with updated CAL-CS filters activated
The CAL-CS filters installed by Jeff (LHO:76392) do a better job of correcting CAL-DELTAL_EXTERNAL. See darm_fom.png. Here's a screenshot of the filter banks: new_darm_calcs_filters.png.

Also, Evan's 5Hz high pass filter (LHO:76365) pretty much killed the strong kick we've been seeing each time we switch into this new DARM state from ETMY. 
Images attached to this report
Comments related to this report
evan.hall@LIGO.ORG - 12:47, Friday 15 March 2024 (76428)

The rms drive to the ESD is now about 5000 DAC counts on each quadrant, dominated by motion around 3 Hz.

Images attached to this comment
Non-image files attached to this comment
H1 CAL (ISC)
jeffrey.kissel@LIGO.ORG - posted 16:14, Thursday 14 March 2024 (76392)
H1 CAL CS ETMX Actuator Model of Digital Filters Updated to Allow for New DARM Distribution Filters
J. Kissel, L. Dartez

In prep for calibrating the detector under the "new DARM" control scheme (see e.g. some of the conversation in LHO aLOGs 76315 and 75308), I've copied the new filters that are needed from the H1SUSETMX.txt filter file over to the H1CALCS.txt filter file. The new filters are only in the replica of L2 LOCK, L1 LOCK, L3 DRIVEALIGN, and L2 DRIVEALIGN, and I only needed to copy over two filters.

I've committed the H1CALCS.txt to the userapps repo.
    /opt/rtcds/userapps/release/cal/h1/filterfiles/
        H1CALCS.txt

Attached is a screenshot highlighting the new filters copied over.
Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 16:03, Thursday 14 March 2024 (76391)
Ops Day Shift End

TITLE: 03/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
SHIFT SUMMARY: The morning was taken up by reverting IM and PR moves over the last few days, then ISCT1 work. Since then we have been trying to relock but an earthquake has been making this tricky. Starting to get past DRMI more consistently, currently powering up.
LOG:

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
15:06 | FAC | Karen | OptLab | n | Tech clean | 15:31
15:40 | - | Mitchell | LVEA | n | HAM6 pictures | 15:53
15:43 | VAC | Jordan | LVEA | n | HAM6 pictures as well | 15:53
15:55 | VAC | Chris | Outbuildings | n | SS unistrut hunt | 16:47
16:35 | FAC | Tyler | LVEA | n | Parts hunt | 16:36
16:58 | ISC | Sheila, TJ, Jennie | LVEA - ISCT1 | LOCAL | Alignment on ISCT1 | 19:22
17:28 | SQZ | Dhruva, Nutsinee | LVEA - SQZT7 | LOCAL | Homodyne work | 20:05
17:33 | FAC | Mitchell | EX, EY | n | FAMIS tasks | 18:13
18:34 | SQZ | Nutsinee, Dhruv, Naoki | SQZ table | y (local) | Squeezing even more | 22:51
19:04 | RUN | Gabriele | entire site | n | Running so far so fast | 20:53
20:10 | ISC | Sheila, Oli | LVEA | local | ISCT1 alignment | 20:53
20:58 | VAC | Travis | LVEA | n | Measurement | 21:01
H1 ISC
thomas.shaffer@LIGO.ORG - posted 15:43, Thursday 14 March 2024 (76387)
Reverted IM and PR changes and went to ISCT1 to help relieve clipping and boost DIFF beatnote

Sheila, Jennifer W, Oli, TJ

It was decided that the recent changes of the IMs and the PRs might have put us in a bad alignment (alog 76366 and some control room and Mattermost conversations). We reverted those to two days ago (scope trend), and then we decided to try to pico the HAM3 pop pico - labeled as ALS/POP beam steering HAM1, motor 8. This didn't make any positive changes in the beatnotes and arm powers together, so we brought them back to their starting positions and decided it was best to go to ISCT1 to help match our IFO alignment, bring our DIFF beatnote back up, and fix some clipping that we've known about on the ALSY path to the PD and camera (alog 76287).

Before going onto the table I ran a green initial alignment, input align, and MICH bright. The BS circled in pink on the attached layout was translated to reduce the clipping on that optic. The downstream BSs within the light blue circle then needed to be adjusted to maximize the DIFF beatnote.

Images attached to this report
H1 TCS
camilla.compton@LIGO.ORG - posted 15:38, Thursday 14 March 2024 (76385)
Checking on HWS, no new Point Absorbers

Looking at a power up (using the time we reach 60 W input as the reference, compared to 40 s and 120 s later), we can see no evidence of point absorbers on ITMY after 40 s or 120 s. On ITMX, after 40 s and 120 s we can see our known point absorbers (66197).

Attached are the spherical power trends: ITMX changes by 118 μDiopters and ITMY by 103 μDiopters (double pass). It is expected that the ITMX curvature will change more due to the point absorbers.

TCS SIM current status (sitemap > TCS > SIMULATION):

Images attached to this report
H1 SUS
jeffrey.kissel@LIGO.ORG - posted 15:37, Thursday 14 March 2024 (76390)
Summary aLOG gathering all aLOGs on the SUS ETM and TMS Watchdog Upgrade
D. Barker, J. Kissel, O. Patane, R. Short
ECR E1700387
IIET Ticket 9392
WP 11743

This aLOG summarizes all the details of the work that went into upgrading the SUS ETM and SUS TMS watchdog systems this past Tuesday, Mar 12 2024.

Front-end model and MEDM screen infrastructure changes: LHO:76269
    To receive these upgrades, 
    - svn up the following common directories,
        ${userapps}/release/sus/common/models/
        ${userapps}/release/sus/common/medm/quad/
    - svn up the following h1 specific directories and copy what top-level changes that are shown in the above linked aLOG,
        ${userapps}/release/sus/h1/models/
        ${userapps}/release/sus/h1/filterfiles/
        
Dave's summary of the install, and corresponding issues: LHO:76304

Filling out the foton and EPICS infrastructure to get the new WD system functional: LHO:76352
    To receive these upgrades,
    - svn up the following h1 specific directories and copy what filters you see in the OSEMAC_BANDLIM and OSEMAC_RMSLP filter banks
        ${userapps}/release/sus/h1/filterfiles/
      or use the design strings described in LHO:76352.
     
A review of the calibrated BLRMS output vs. thresholds: For ETMs LHO:76343 for TMTS LHO:76347
     - Note, 25 [um_RMS] is still an arbitrary threshold. We've already found that it might behoove us to increase the ETM L2 threshold to 30 [um_RMS], but for many other stages, something lower might suffice. Only time and data will tell.

Surrounding python code underneath MEDM infrastructure: LHO:76389.
    To receive these upgrades (when we've reconciled the issues discussed in this LHO:76389 aLOG)
     - svn up the following common directory,
        ${userapps}/release/sus/common/scripts/

Proposed Schedule of the continued roll-out:
Next week (Mar 19 2024), we hope to upgrade the ITMs and the BS.
The following week (Mar 26 2024), we hope to upgrade the HSTS and HLTS.
The following week (Apr 02 2024), we hope to upgrade the OMCS, HTTS, and HAUX
The following week (Apr 09 2024), we hope to go through the already new WDs and calibrate their RMS signals, and update the thresholds, like we'll have done for the above "older" suspensions along the way.
H1 SUS (CDS, GRD)
jeffrey.kissel@LIGO.ORG - posted 15:18, Thursday 14 March 2024 (76389)
On the edits to python code to support ETM / TMS Watchdog Upgrades
J. Kissel, O. Patane, R. Short

One last aLOG about Tuesday's upgrade to the ETM and TMS watchdog system -- this time related to the python infrastructure surrounding the watchdog. That means two things:
    (1) The SUS guardian code, which is primarily driven by the code generic to all suspensions,
            /opt/rtcds/userapps/release/sus/common/scripts/
                sustools.py
        has a dictionary in it that defines whether to look for USER DACKILL channels. Since these were removed from EPICS, the ETM and TMS guardians threw connection errors after the model restarts.
        To resolve, we removed the 'USER': 'DACKILL' entries from the quadwd and tmtswd dictionary definitions around lines 1660 to 1670 of rev 23101.

        Unfortunately, because there's some incoming updates from L1 that we don't understand, we can't commit the changes.

    (2) There is python code underneath the hood of the "RESET ALL" button on the WD overview MEDM screens,
            /opt/rtcds/userapps/release/sus/common/scripts/
                wdreset_all.py
        In the old system, that python code pushes the (one) reset button (channel) that untrips all of the user watchdogs, as well as pressing the reset button (channel) on the USER DACKILL. 
            ezca['SUS-' + optic + '_WD_RESET'] = 1

            ezca['SUS-' + optic + '_DACKILL_RESET'] = 1
        Now that there no longer exists a USER DACKILL button to press on the ETMs and TMS, and because the python script is called in the MEDM screen with an ampersand, the call to the script successfully resets the user watchdogs, but then quietly breaks/fails in the background.

        Not good coding practice, but not impactful at this point. Indeed, because this is also common code for all suspension MEDM screens, the reset of the DACKILL is still *needed* for any suspensions that haven't yet had this upgrade. So, for now, we leave this ugly thing too.
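(If this does get cleaned up later, one low-effort option is to make the DACKILL reset tolerant of suspensions that no longer have the channel. The sketch below is only a suggestion, not what is committed in wdreset_all.py.)

    # Sketch: reset the user WD everywhere, but skip the DACKILL reset quietly
    # on suspensions (e.g. the upgraded ETM/TMTS) where the channel no longer exists
    def reset_watchdogs(ezca, optic):
        ezca['SUS-' + optic + '_WD_RESET'] = 1
        try:
            ezca['SUS-' + optic + '_DACKILL_RESET'] = 1
        except Exception:
            pass  # no USER DACKILL on this suspension; nothing to reset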

H1 SQZ (SQZ)
nutsinee.kijbunchoo@LIGO.ORG - posted 15:15, Thursday 14 March 2024 (76388)
Homodyne alignment -- PDA is sketchy

Dhruva, Naoki, Nutsinee

We had power mysteriously drop out of PDA multiple times after adjusting the half wave plate on SQZT7, so we went hunting for loose optics. We found a loose beam splitter cube mount on the LO path, so we tightened that, but it didn't fix the problem. We accidentally discovered that if we pushed on PDA we got the power back. We suspect PDA came loose from the circuit board. HD DIFF has been readjusted.

Today's fringe visibility:
PDA: 0.986
PDB: 0.985

H1 ISC
trent.gayer@LIGO.ORG - posted 13:53, Thursday 14 March 2024 (76353)
Coherence between OMC-REFL and DARM

Trent, Georgia

The new QPD offsets improved the coherence from when the offsets were turned off. However, this new coherence is still lower than coherence from the initial QPD offset values.

See Gabriele's post for more info about the OMC alignment and QPD offsets.

We wanted to look at the coherence between OMC-REFL and DARM with the OMC QPD B offsets on and off. We looked at the coherence between the following channels.

Trace | Date (UTC) | GPS Start Time | Number of Averages | OMC QPD A Pitch Offset | OMC QPD A Yaw Offset | OMC QPD B Pitch Offset | OMC QPD B Yaw Offset | Coherence Ref | DARM Ref
Black | 09/03/2024 07:59:49 | 1394006407 | 5325 | -0.15 | 0.2 | 0.04 | -0.14 | 32 | 36
Green | 13/03/2024 10:59:06 | 1394362764 | 1464 | -0.15 | 0.2 | 0 | 0 | 33 | 37
Blue | 12/03/2024 09:07:16 | 1394269654 | 1765 | -0.15 | 0.2 | 0 | 0 | 34 | 38
Red | 13/03/2024 20:28:00 | 1394396898 | 2061 | -0.25 | 0.1 | -0.05 | 0.07 | 35 | 39

The blue, green, and red curves have fewer averages because there was not as long a stretch of time between glitches/locklosses as there was for the black curve.

We took the blue and green curves to determine if there was something else that could have caused the coherence to lower besides turning the offsets off. There is not a significant difference between the blue and green curves so we can conclude that the coherence is lower when the offsets are turned off.

The coherence of all the curves is quite low (<0.1).

The red curve is the coherence with the new QPD offsets and we see that the coherence is still lower than the black curve (the initial QPD offsets).

Images attached to this report
H1 CDS
jonathan.hanks@LIGO.ORG - posted 12:59, Thursday 14 March 2024 (76383)
WP 11769 switch remaining cisco switches to use lldp instead of cdp
I switched sw-psl-aux and sw-lvea-aux to using lldp instead of cdp.  This is to help with continued documentation and visibility of the network.  At this point with the rebuild of the network CDP gives us less information, so it has been turned off.
X1 SUS (SUS)
jeffrey.kissel@LIGO.ORG - posted 11:38, Thursday 14 March 2024 (76381)
bbssopt.m Parameter Set updated to match H1 (er, X1) BBSS First Article
M. Barton, J. Kissel. O. Patane

Recall, we'd had issues understanding why the modeled top-mass pitch to pitch transfer function looked so weird (see LHO aLOGs 75787 and 75947).

After 
    - Mark's suggestion of the problem (suggested over email and mentioned on the 2024-02-29 a+ SUS call), and
    - Oli's hard work proving that that *was* the problem (see LHO:76071), 

I've now closed the loop for the bbssopt.m Bigger Beam Splitter Suspension's production model parameter set that best matches the first article data:
    - Added a few more slides to G2400442 conveying the resolution, now at -v2
    - copied over the temporary bbssopt_pendstage2on_d1minus2p5mm_d4minus1p0mm.m parameter set on top of bbssopt.m parameter set,
    - committed bbssopt.m to the SusSVN as of rev 11778.

The screenshot of the comparison shows the differences between the previous rev and the current rev:
    - the pend.stage2 boolean flag is now set to 1.0, which matches that the ds are defined as physical ds, and
    - d1 and d4 are (slightly) adjusted to better match the first article.

To generate transfer functions from this production model parameter set, 
    (1) find a computer with the /ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/TripleModel_Production/ folder checked out,
    (2) open Matlab, 
    (3) Run the following commands:
        >> freq = logspace(-2,2,1000);
        >> buildType = 'bbssopt';
        >> svnDir = '/ligo/svncommon/SusSVN/sus/trunk/';
        >> plotFlag = false;
        >> [bbss,~,~,pend] = generate_Triple_Model_Production(freq,buildType,svnDir,plotFlag);
        >> figure(1);
        >> ll=loglog(freq,abs(squeeze(bbss.f(bbss.out.m3.disp.L,bbss.in.gnd.disp.L,:)))); %%% Choose which DOF you want to plot here
        >> set(ll,'LineWidth',2);
        >> grid on;
        >> title('BBSS Susp. Point. L to Optic L');
        >> xlabel('Frequency [Hz]'); ylabel('Magnitude [m/N]')
        >> shg
which will produce the attached plot.
       
Images attached to this report
Non-image files attached to this report
H1 General
mitchell.robinson@LIGO.ORG - posted 11:17, Thursday 14 March 2024 - last comment - 13:14, Thursday 14 March 2024(76382)
Monthly Dust Monitor Vacuum Pump Check

The end station dust monitors are running smoothly. EX needed a slight adjustment. The corner station pump has been turned off. The temp was reading just over 200F. I will swap that pump out after it has cooled down.

Comments related to this report
thomas.shaffer@LIGO.ORG - 13:14, Thursday 14 March 2024 (76384)OpsInfo

This means that the corner station dust monitor readings will be reporting incorrectly until the pump is replaced in the near future.

H1 ISC
camilla.compton@LIGO.ORG - posted 11:16, Thursday 14 March 2024 (76369)
List of Commissioning / Locking Activities

List of commissioning / locking activities that the commissioning team, ops team, and visitors have done so far this week:

H1 CAL (AOS)
louis.dartez@LIGO.ORG - posted 00:26, Tuesday 19 December 2023 - last comment - 19:06, Thursday 14 March 2024(74883)
reduced simulines config for new DARM config state
I've retuned a Simulines configuration for testing on Tuesday morning. The frequency vector is the same as we nominally use, but I reduced all injection amplitudes by 50% across the board. If we're able to run Simulines without losing lock in the new DARM state in the morning, I'll need another round with Simulines at some point to determine the best injection strengths moving forward.


The new injection configs that I tuned were placed in 
/ligo/groups/cal/src/simulines/simulines/FreqAmp_H1_newDARMconfig_20231218/.

I then sourced the cal pydarm environment with 
source /ligo/groups/cal/local/bin/activate and ran the vector optimization script at /ligo/groups/cal/src/simulines/simulines/amplitudeVectorOptimiser.py after adjusting the input and output directories for H1 on lines 33 & 34 (variables changed are inDir and outDir) to:


inDir = 'FreqAmp_H1_newDARMconfig_20231218'
outDir = 'FreqAmp_H1_simuLines_newDARMconfig_20231218'


This placed new "optimized" frequency vector files in /ligo/groups/cal/src/simulines/simulines/FreqAmp_H1_simuLines_newDARMconfig_20231218/. 

Lastly, to actually generate the config file that Simulines processes when it's run, while still in the cal virtual environment, I ran 
python simuLines_configparser.py --ifo H1,

after changing the H1 output filename in simuLines_configparser.py to
outputFilename = outDir+'settings_h1_newDARMconfig_20231218.ini'.

This returned the following output:

(local) louis.dartez@cdsws22: python simuLines_configparser.py --IFO H1
Total time = 1252.0
Average time taken to sweep individual frequency:
26.083333333333332
Separations in TIME for each sweep:
[233.0, 260.0, 253.0, 253.0]
Starting Frequencies for each swept sine:
[5.6, 7.97, 14.44, 64.78, 339.4]
Starting points in relative time for each swept sine:
[0, 233.0, 493.0, 746.0, 999.0]


I then commented out my temporary variable name changes in the simulines scripts.

At the end of the exercise, the simulines file to test in the new DARM loop configuration is /ligo/groups/cal/src/simulines/simulines/settings_h1_newDARMconfig_20231218.ini.

The command to execute simulines using this configuration is 

gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/src/simulines/simulines/settings_h1_newDARMconfig_20231218.ini;gpstime. This is the same as the instructions in the Operator's wiki, with the modification for the new ini file.
Comments related to this report
louis.dartez@LIGO.ORG - 00:28, Tuesday 19 December 2023 (74884)
The script I used to adjust the injection amplitudes can be found at /ligo/groups/cal/src/common/scripts/adjust_amp_simulines.py.
louis.dartez@LIGO.ORG - 19:06, Thursday 14 March 2024 (76400)
the script mentioned above now lives at /ligo/groups/cal/common/scripts/adjust_amp_simulines.py
H1 TCS
camilla.compton@LIGO.ORG - posted 11:37, Tuesday 17 January 2023 - last comment - 16:12, Monday 09 June 2025(66832)
TCS HWS SLED Stock.
Last recorded on alog 58758. Both were replaced in Dec 2022 (66179) so we expect they will be fine until the end of 2023.
Currently have two new 840nm spares, and one 790nm used spare that could be used in a pinch. We will order more spares. 

* Added to ICS DEFECT-TCS-7753, will give to Christina for dispositioning once new stock has arrived.

Comments related to this report
camilla.compton@LIGO.ORG - 12:47, Monday 30 January 2023 (67079)

New stock arrived and has been added to ICS. Will be stored in the totes in the TCS LVEA cabinet. 

  • ITMX: Superluminescent Diode QSDM-790-5 
    • S/N 11.21.380
    • S/N 11.21.382
    • S/N 05.21.346 (note the data sheet is labeled 04.21.346 but QPhotonics noted this is a typo) 
  • ITMY: Superluminescent Diode QSDM-840-5
    • S/N 11.21.303
camilla.compton@LIGO.ORG - 15:33, Thursday 10 August 2023 (72139)
  • ITMX: Superluminescent Diode QSDM-790-5 
    • S/N 06.18.002 - used spare
    • S/N 11.21.380
    • S/N 11.21.382
    • S/N 05.21.346 - Installed July 2023 71476
    • S/N 07.14.255 Old sled, removed 71476
  • ITMY: Superluminescent Diode QSDM-840-5
    • S/N 03.20.479 
    • S/N 06.16.005 
    • S/N 11.21.303 - Installed July 2023 71476
    • S/N 11.17.127 Old sled, removed 71476

ICS has been updated. As of August 2023, we have 2 spare SLEDs for each ITM HWS.

camilla.compton@LIGO.ORG - 14:41, Tuesday 10 October 2023 (73373)
  • ITMX Superluminescent Diode QSDM-790-5 
    • S/N 06.18.002 - used spare
    • S/N 11.21.380 - Installed Oct 2023 73371
    • S/N 11.21.382
    • S/N 05.21.346 - Old sled, removed  73371
    • S/N 07.14.255 Old sled, removed 71476
  • ITMY: Superluminescent Diode QSDM-840-5
    • S/N 03.20.479 - Installed Oct 2023 73371
    • S/N 06.16.005 
    • S/N 11.21.303 - Old sled, removed  73371
    • S/N 11.17.127 Old sled, removed 71476

ICS has been updated. As of October 2023, we have 1 spare SLED for each ITM HWS, with more ordered.

camilla.compton@LIGO.ORG - 15:31, Wednesday 06 December 2023 (74645)

Spare 840nm SLEDs QSDM-840-5 09.23.313 and QSDM-840-5 09.23.314 arrived and will be placed in the TCS cabinets on Tuesday. We are expecting qty 2 790nm SLEDs too.

camilla.compton@LIGO.ORG - 16:26, Thursday 14 March 2024 (76393)

Spare 790nm SLEDs QSDM-790-5--00-01.24.077 and QSDM-790-5--00-01.24.079 arrived and will be placed in the TCS cabinets on Tuesday. 

camilla.compton@LIGO.ORG - 16:12, Monday 09 June 2025 (84906)

In 84417, we swapped:

The removed SLEDs have been dispositioned, DEFECT-TCS-7839.
