Gabriele, Louis
We've successfully run a full set of calibration swept-sine measurements in the new DARM offloading (LHO:76315). In December, I tried running simulines in the new DARM state without success: I reduced all injection amplitudes by 50% but kept knocking the IFO out of lock (LHO:74883). After those repeated failures, I realized that the right thing to do was to scale the swept-sine amplitudes by the changes that we made to the filters in the actuation path. I prepared four sets of simulines injections last year that we finally got to try this evening. The simulines configurations that I prepared live at /ligo/groups/cal/src/simulines/simulines/newDARM_20231221. In that directory are 1) simulines injections scaled by the exact changes we made to the locking filters, and 2)-4) reductions by factors of 10, 100, and 1000 of the rescaled injections that I made out of an abundance of caution. The measurements we took this evening are:
2024-03-15 01:44:02,574 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,582 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,591 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,599 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,605 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240315T012231Z.hdf5
We did not get to take a broadband PCALY2DARM measurement as we usually do as part of the normal measurement suite. Next steps are to update the pyDARM parameter file to reflect the current state of the IFO, process these new measurements, then use them to update the GDS pipeline and confirm that it is working well. More on that progress in a comment.
Relevant logs:
- success in transitioning to the new DARM offloading scheme in March 2024: LHO:76315
- unable to transition into the new offloading in January 2024 (we still don't have a good explanation for this): LHO:75308
- CAL-CS updated for the new DARM state: LHO:76392
- weird noise in CAL-CS last time we tried updating the front-end calibration for this state (still no explanation): LHO:75432
- previous problems calibrating this state in December: LHO:74977
- simulines lockloss in new DARM state in December: LHO:74887
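For the record, the rescaling logic amounts to weighting each injection line by the old-to-new actuation filter gain ratio at that frequency. A minimal sketch of that idea, with placeholder zpk designs standing in for the real locking filters (the values below are illustrative, not the H1 coefficients):

import numpy as np
from scipy import signal

# Placeholder designs standing in for the old and new actuation-path filters.
old_filt = signal.ZerosPolesGain([-2*np.pi*1.0], [-2*np.pi*20.0], 10.0)
new_filt = signal.ZerosPolesGain([-2*np.pi*1.0], [-2*np.pi*50.0], 4.0)

def rescale_amplitudes(freqs_hz, amps):
    """Scale injection amplitudes by the |old/new| filter gain ratio per line."""
    w = 2 * np.pi * np.asarray(freqs_hz)
    _, h_old = signal.freqresp(old_filt, w)
    _, h_new = signal.freqresp(new_filt, w)
    # Convention here: if the new filter has less gain at a line, drive harder
    # to recover the same loop response; the right ratio direction depends on
    # where in the loop the excitation enters.
    return np.asarray(amps) * np.abs(h_old / h_new)

print(rescale_amplitudes([10.0, 33.0, 100.0], [1e-3, 1e-3, 1e-3]))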
Last week, when we locked the new OMC by hand, I copy-pasted some guardian code into a shell and found that there were a gain set and a wait that were totally unnecessary. This inspired me to start reading through ISC_LOCK to look for other redundant waits. Here are my notes; I only got up to the start of LOWNOISE_ASC before I went totally cross-eyed.
Here are the notes I took; the ones in bold we can for sure remove.
Lines 301-305 [ISC_LOCK, DOWN] PRCL UGF servo turn off (do we still do this?) - no wait times, but maybe unnecessary
Line 327 [PREP_FOR_LOCKING] Thermalization guardian (are we using this?)
Line 350-354 [PREP_FOR_LOCKING] Turn off CHARD blending, no wait times
Line 423 [PREP_FOR_LOCKING] turn off PR3 DRIVEALIGN P2P offset for PR3 wire heating
Line 719 [PREP_FOR_LOCKING] toggling ETM ESD HV if the output is low, seems redundant with line 445
INITIAL_ALIGNMENT for the green arms only offloads a minute or two after it has visually converged. Initially I thought the convergence checker thresholds should be raised, but it's a 30 second average. Might make sense to reduce the averaging time?
(2 screenshots attached for this one)
ALS_DIFF [LOCK] Ramps DARM gain to 40, waits 5 seconds, ramps DARM gain to 400, waits 10 seconds. Seems long.
ALS_DIFF Line 179, waits 2* the DARM ramp time, but why?
ALS_DIFF [LOCK] Engages boosts with a 3 second wait, engages boosts with another 10 second wait
ALS DIFF Line 191 wait timer 10 seconds seems unnecessary.
ALS_COMM [PREP_FOR_HANDOFF] line 90 5 second wait - maybe we could shorten this?
ALS_COMM [HANDOFF_PART_3] lines 170 and 179 - 2 and 5 second timers but I'm not sure I get why they are there
ALS_COMM's Find IR takes 5 seconds of dark data and has two hard-coded VCO offsets in FAST_SEARCH; if it sees a flash it waits 5 seconds to make sure it's real, and then moves to FINE_TUNE_IR, taking 50 count VCO steps until the transmitted power is >0.65
Suggest updating the hard coded offsets (ALS_DIFF line 247) from [78893614, 78931180] to [78893816, 78931184] (second spot seems good, first spot needs a few steps)
ALS_DIFF's find IR has 3 locations saved in alsDiffParams.dat which it searches around. This list gets updated each time it finds a new spot; HOWEVER, the search starts 150 counts away from the starting location and steps inward in increments of 30. Seems like it would be more efficient to start 30 below the saved offset?
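For reference, the FAST_SEARCH/FINE_TUNE_IR pattern described above boils down to something like the sketch below; the channel names and thresholds are illustrative stand-ins, not the actual ALS guardian code:

def coarse_search(ezca, saved_offset, span=150, step=30, flash_threshold=0.3):
    """Step the VCO tune from saved_offset - span upward, watching for a flash."""
    offset = saved_offset - span
    while offset <= saved_offset + span:
        ezca['ALS-C_DIFF_VCO_TUNEOFS'] = offset        # hypothetical channel
        if ezca['LSC-TR_X_NORM_INMON'] > flash_threshold:
            return offset                               # candidate flash location
        offset += step
    return None

def fine_tune(ezca, start_offset, step=50, power_threshold=0.65, max_steps=100):
    """Take 50-count VCO steps until transmitted power exceeds the threshold."""
    offset = start_offset
    for _ in range(max_steps):
        if ezca['LSC-TR_X_NORM_INMON'] > power_threshold:
            return offset
        offset += step
        ezca['ALS-C_DIFF_VCO_TUNEOFS'] = offset
    return None

Starting coarse_search 30 counts below the saved offset instead of 150 would, as suggested, save a handful of steps whenever the saved spot is still good.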
ISC_LOCK [CHECK_IR] line 1206 has a 5 second wait after everything is done which could probably be reduced?
PRMI locking - a bunch of 1 second waits, I don't know if they are needed?
ISC_DRMI line 627/640 [PRMI_LOCKED] self.timer['wait']=1 seems maybe unnecessary?
ISC_DRMI lines 746, 748 [ENGAGE_PRMI_ASC] - MICH P and Y ASC ramp on with a 20 second timer with wait = False, but this seems long anyway?
ISC_DRMI line 762/765/768 [ENGAGE_PRMI_ASC] self.timer['wait'] = 4... totally could remove this?
ISC_DRMI [PRMI_TO_DRMI] line 835 - wait timer of 4 seconds (but I don't think it actually waited 4 seconds, see third attached screenshot, so maybe I don't know what timer['wait'] really means!!!)
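For what it's worth, this matches how guardian timers generally work: assigning self.timer['wait'] only arms a countdown, and nothing actually pauses unless the state's code checks the timer before moving on. A minimal sketch:

from guardian import GuardState

class EXAMPLE_STATE(GuardState):
    def main(self):
        self.timer['wait'] = 4     # arms a 4 s countdown; does NOT sleep here

    def run(self):
        # The state only lingers because run() checks the timer and returns
        # False (stay in state) until it expires.
        if not self.timer['wait']:
            return False
        return True                # timer expired; state is complete

If the code after the assignment never consults self.timer['wait'], the state moves on immediately, which would be consistent with the screenshot.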
When doing the PRMI to DRMI transition, it first offloads the PRMI ASC, does the PRMI_TO_DRMI_TRANSITION state, then runs through the whole DOWN state of ISC_DRMI, which takes ~25 seconds? Maybe there's a quicker way to do this.
In ISC_DRMI there's a self.caution flag which is set to True if AS_C is low, and has 10 second waits after ASC engagements and a *90 second* wait before turning on the SRC ASC. Might be worthwhile to replace this with a convergence checker, since we might not need to wait a minute and a half if we are already well aligned.
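For concreteness, a hedged sketch of what such a convergence checker could look like; the channel name, threshold, and sleep-based polling are all illustrative (real guardian code would poll in run() instead of sleeping):

import time
from collections import deque

def wait_for_convergence(ezca, channel='ASC-SRC1_P_OUTMON',
                         threshold=0.1, avg_time=10.0, poll=0.5, timeout=90.0):
    """Return True once the running average of the error signal is under threshold."""
    buf = deque(maxlen=int(avg_time / poll))
    t0 = time.monotonic()
    while time.monotonic() - t0 < timeout:
        buf.append(ezca[channel])
        if len(buf) == buf.maxlen and abs(sum(buf) / len(buf)) < threshold:
            return True            # converged; no need for the full 90 s
        time.sleep(poll)
    return False                   # timed out; behaves like the old fixed wait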
Line 1845 ISC_LOCK [CHECK_AS_SHUTTERS] 10 second wait for...? This happens after the MICH RSET but before the FAST SHUTTER is requested to READY
Lines 1837/8 and 1865 redundant?
Line 1870 wait timer 2 seconds after AS centering + MICH turned back on but why
Line 1887 - straight up 10 second wait after increasing MICH gain by 20dB
Line 2119 [CARM_TO_TR] time.sleep(3) at the end of this state not clear what we're waiting for
Line 2222 [DARM_TO_RF] self.timer['wait'] = 2 that used to be 1
Line 2235 [DARM_TO_RF] another 2 second timer?
Line 2314 [DHARD_WFS] 20 second timer to turn DHARD on, but maybe we should just go straight to convergence checking once the gains are ramped on?
Line 2360 [PARK_ALS_VCO] 5 second wait after resetting the COMM and DIFF PLLs
Line 2406 [SHUTTER_ALS] 5 second wait followed by a 1 second sleep after the X arm, Y arm, and COMM are taken to shuttered
Line 2744 [CARM_TO_ANALOG] 2 second wait when REFLBIAS boost turned off but before summing node gain (A IN2) increased?
Line 2753 [CARM_TO_ANALOG] 5 second wait after summing node gain increased
Line 2760 [CARM_TO_ANALOG] 2 second wait after enabling digital CARM antiboost?
Line 2766 [CARM_TO_ANALOG] 2 second wait after turning on analog CARM boost
Line 2772 [CARM_TO_ANALOG] 2 second wait after ramping the REFL_DC_BIAS gain to 0, actually maybe this one makes sense.
There are a ton of waits during the ASC engagement but I think usually the convergence checkers are the limit to time spent in the state.
Line 3706 [ENGAGE_SOFT_LOOPS] 5 second wait after everything has converged?
Line 3765 [PREP_DC_READOUT_TRANSITION] 3 second wait after turning on DARM boost but shouldn't it be 1 second?
Line 3816 [DARM_TO_DC_READOUT] 10 second wait after switching DARM intrix from AS45 to DC readout, might be excessive
Line 3826/7 [DARM_TO_DC_READOUT] - DARM gain is set to 400 (but it's already 400!) and then there is a 4 second wait, these two lines can for sure be removed!
Line 3834 [DARM_TO_DC_READOUT] - 5 second wait after turning ramping some offsets to 0 BUT the offsets ramp much more quickly than that!
line 4033 [POWER_10_W] 30 second wait after turning on some differential arm ASC filters but actually, never mind I don't think it actually does this wait
Line 4299 [REDUCE_RF45_MODULATION_DEPTH] we have a 30 second ramp time to ramp the modulation depths, maybe this could be shorter?
Line 4614 [MAX_POWER] 20 second thermal wait could be decreased?
Line 4641 [MAX_POWER] 30 second thermal wait could be decreased??
Line 4645 [MAX_POWER] 30 second thermal wait for the final small step could be decreased?
line 4463/4482 [LOWNOISE_ASC] 5 second wait after we turn off RPC gains that were already off
line 4490 [LOWNOISE_ASC] 10 second wait after CHARD_Y gain lowered, but it looks to have stabilized after 5 seconds so I think we can drop this to 5.
honestly a lot of waits in lownoise_asc so I ran out of time to check them all for necessity
More waits in the guardian:
line 4530 [LOWNOISE_ASC] 5 second wait after turning up (more negative) MICH gain, next steps are not MICH related so maybe we can shorten it?
line 4563 [LOWNOISE_ASC] 10 second ramp after changing top mass damping loop yaw gains, then another 10 second ramp after lowering SR2 and SR3 everything damping loop yaw gains? probably can lump these together and then also maybe reduce the wait?
Too scared to think about the timers in TRANSITION_FROM_ETMX, but the whole state takes about 3 minutes, which I guess makes sense since we ramp the ESDs down and up again; also, this has been newly edited today
line 5503 [LOWNOISE_LENGTH_CONTROL] 10 second wait after setting up filters for LSC feedforward, and some MICH modifications but not sure why?
line 5536 [LOWNOISE_LENGTH_CONTROL] 10 second wait after changing filters and gains in MICH1/PRCL1/SRCL1 but all their ramp times are 2 or 5 seconds
line 5549 [LOWNOISE_LENGTH_CONTROL] 1 second wait after turning on LSCFF, maybe not needed?
line 5773 [LASER_NOISE_SUPPRESSION] 1 second waits after each LSC_REFL_SERVO gain step - could this be quicker?
line 5632 [OMC_WHITENING] 5 second wait after confirming OMC is locked could probably be skipped?
I'm attaching ISC_LOCK as I was reading it since it's always in a state of flux!
Georgia and I looked through lownoise ASC together and tried to improve the steps of the state. Overall, it should be shorter except we now need to add the new AS_A DC offset change. I have it set for a 30 second ramp, and I want it to go independently of other changes in lownoise ASC since the DC centering loops are slow. However, Gabriele says he has successfully engaged this offset with 10 seconds, so maybe it can be shorter. I would like to try the state once like this, and if it's ok, go to a shorter ramp on the offset. This is line 4551 in my version, 'self.timer['LoopShapeRamp'] = 30'.
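For context, the step at line 4551 presumably looks something like this guardian-state fragment (ezca and self.timer are supplied by the framework; the channel names and offset value here are illustrative):

ramp_time = 30   # s; Gabriele reports 10 s has worked, so this could shrink

ezca['ASC-AS_A_DC_PIT_TRAMP'] = ramp_time     # offset ramps over this time
ezca['ASC-AS_A_DC_PIT_OFFSET'] = -0.3         # hypothetical target offset
self.timer['LoopShapeRamp'] = ramp_time       # don't advance mid-ramp

Since the timer length is tied to the ramp time, shortening the ramp automatically shortens the wait.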
Generally, we combined various steps of the same ramp length that had previously been separate, such as ASC loop changes, MICH gain changes, smooth limiter changes, etc. I completely removed the RPC step because it does nothing now that we do not use RPC.
Ok, this was a bad change because we lost lock in this state. It was not during the WFS offset change, it was likely during some other filter change. We probably combined too many steps into each section. I reverted to the old version of the state, but I did take out the RPC gain ramp to zero since that is unnecessary.
Quickly looking at the guardlog (first screenshot) and the buildups (second screenshot), and ASC error signals (third screenshot) during this LOWNOISE_ASC lockloss.
It seems like consolidating the test mass yaw damping loop gain change, and the SR2/SR3 damping loop gain change was not a good choice. It was a slow lockloss.
Probably the changes earlier in the state were safe though!
Graeme, Matt, Craig
In preparation for some analog analysis of CARM and DARM, we set up and got some PSDs of REFL A 9 I (which is actually REFL A 9 Q at the racks due to our arbitrary delay). Daniel helped us use the patch panel in the CER to route analog DARM to the PSL racks. There hasn't been an obvious effect on DARM from our setup so far, so we will leave it like this for tonight. Pictures are of: the SR785 setup at the PSL racks; the CER patch panel, where we used a BNC to connect U2 patch 5 (which goes to the PSL racks) to U5 patch 6 (which goes to the HAM6 racks); our connection to the OMC DCPD Whitening Chassis (OMC DCPD A+ slot); and our connection to the HAM6 patch panel.
The CAL-CS filters installed by Jeff (LHO:76392) do a better job of correcting CAL-DELTAL_EXTERNAL. See darm_fom.png. Here's a screenshot of the filter banks: new_darm_calcs_filters.png. Also, Evan's 5Hz high pass filter (LHO:76365) pretty much killed the strong kick we've been seeing each time we switch into this new DARM state from ETMY.
The rms drive to the ESD is now about 5000 DAC counts on each quadrant, dominated by motion around 3 Hz.
J. Kissel, L. Dartez
In prep for calibrating the detector under the "new DARM" control scheme (see e.g. some of the conversation in LHO aLOGs 76315, 75308), I've copied the new filters that are needed from the H1SUSETMX.txt filter file over to the H1CALCS.txt filter file. The new filters are only in the replicas of L2 LOCK, L1 LOCK, L3 DRIVEALIGN, and L2 DRIVEALIGN, and I only needed to copy over two filters. I've committed H1CALCS.txt to the userapps repo:
/opt/rtcds/userapps/release/cal/h1/filterfiles/H1CALCS.txt
Attached is a screenshot highlighting the new filters copied over.
TITLE: 03/14 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
SHIFT SUMMARY: The morning was taken up by reverting IM and PR moves over the last few days, then ISCT1 work. Since then we have been trying to relock but an earthquake has been making this tricky. Starting to get past DRMI more consistently, currently powering up.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:06 | FAC | Karen | OptLab | n | Tech clean | 15:31 |
15:40 | - | Mitchell | LVEA | n | HAM6 pictures | 15:53 |
15:43 | Vac | Jordan | LVEA | n | HAM6 pictures as well | 15:53 |
15:55 | VAC | Chris | Outbuildings | n | SS unistrut hunt | 16:47 |
16:35 | FAC | Tyler | LVEA | n | Parts hunt | 16:36 |
16:58 | ISC | Sheila, TJ, Jennie | LVEA - ISCT1 | LOCAL | Alignment on ISCT1 | 19:22 |
17:28 | SQZ | Dhruva, Nutsinee | LVEA - SQZT7 | LOCAL | Homodyne work | 20:05 |
17:33 | FAC | Mitchell | EX, EY | n | FAMIS tasks | 18:13 |
18:34 | SQZ | Nutsinee, Dhruv, Naoki | SQZ table | y(local) | Squeezing even more | 22:51 |
19:04 | RUN | Gabriele | entire site | n | Running so far so fast | 20:53 |
20:10 | ISC | Sheila, Oli | LVEA | local | ISCT1 alignment | 20:53 |
20:58 | VAC | Travis | LVEA | n | Measurement | 21:01 |
Sheila, Jennifer W, Oli, TJ
It was decided that the recent changes to the IMs and the PRs might have put us in a bad alignment (alog 76366 and some control room and Mattermost conversations). We reverted those to two days ago (scope trend attached), and then we decided to try to pico the HAM3 POP pico, labeled as ALS/POP beam steering HAM1, motor 8. This didn't make any positive changes in the beatnotes and arm powers together, so we brought them back to their starting positions and decided it was best to go to ISCT1 to help match our IFO alignment, bring our DIFF beatnote back up, and fix some clipping that we've known about on the ALSY path to the PD and camera (alog 76287).
Before going on the table, I ran a green initial alignment, input align, and MICH bright. The BS circled in pink on the attached layout was translated to reduce the clipping on that optic. The downstream BSs within the light blue circle then needed to be adjusted to maximize the DIFF beatnote.
Looking at a power up (using the time we reach 60 W input as the reference, compared to 40 s and 120 s later), we see no evidence of point absorbers on ITMY after 40 s or 120 s. On ITMX, after 40 s and 120 s we can see our known point absorbers (alog 66197).
Attached are the spherical power trends: ITMX changes by 118 μD and ITMY by 103 μD (double pass). It is expected that the ITMX curvature will change more due to the point absorbers.
TCS SIM current status (sitemap > TCS > SIMULATION):
D. Barker, J. Kissel, O. Patane, R. Short
ECR E1700387, IIET Ticket 9392, WP 11743

This aLOG summarizes all the details of the work that went into upgrading the SUS ETM and SUS TMS watchdog systems this past Tuesday, Mar 12 2024.

Front-end model and MEDM screen infrastructure changes: LHO:76269. To receive these upgrades:
- svn up the following common directories:
  ${userapps}/release/sus/common/models/
  ${userapps}/release/sus/common/medm/quad/
- svn up the following h1-specific directories and copy the top-level changes shown in the above linked aLOG:
  ${userapps}/release/sus/h1/models/
  ${userapps}/release/sus/h1/filterfiles/

Dave's summary of the install, and corresponding issues: LHO:76304.

Filling out the foton and EPICS infrastructure to get the new WD system functional: LHO:76352. To receive these upgrades:
- svn up the following h1-specific directory and copy the filters you see in the OSEMAC_BANDLIM and OSEMAC_RMSLP filter banks:
  ${userapps}/release/sus/h1/filterfiles/
  or use the design strings described in LHO:76352.

A review of the calibrated BLRMS output vs. thresholds: for ETMs LHO:76343, for TMTS LHO:76347.
- Note, 25 [um_RMS] is still an arbitrary threshold. We've already found that it might behoove us to increase the ETM L2 threshold to 30 [um_RMS], but for many other stages, something lower might suffice. Only time and data will tell.

Surrounding python code underneath the MEDM infrastructure: LHO:76389. To receive these upgrades (when we've reconciled the issues discussed in the LHO:76389 aLOG):
- svn up the following common directory:
  ${userapps}/release/sus/common/scripts/

Proposed schedule of the continued roll-out:
- Next week (Mar 19 2024), we hope to upgrade the ITMs and the BS.
- The following week (Mar 26 2024), we hope to upgrade the HSTS and HLTS.
- The following week (Apr 02 2024), we hope to upgrade the OMCS, HTTS, and HAUX.
- The following week (Apr 09 2024), we hope to go through the already-new WDs, calibrate their RMS signals, and update the thresholds, as we'll have done for the above "older" suspensions along the way.
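For anyone wanting to sanity-check a threshold offline, here is a rough Python illustration of the BLRMS logic (band-limit, square, RMS low-pass) that the OSEMAC_BANDLIM and OSEMAC_RMSLP banks implement in the front end; the band edges, filter orders, and sample rate below are illustrative, not the installed designs:

import numpy as np
from scipy import signal

fs = 256.0                                    # sample rate, Hz (illustrative)
band = signal.butter(4, [0.1, 10.0], 'bandpass', fs=fs, output='sos')
rmslp = signal.butter(2, 0.1, 'low', fs=fs, output='sos')

def blrms(osem_um, threshold_um_rms=25.0):
    """Return (BLRMS timeseries, tripped?) for a calibrated OSEM signal in um."""
    x = signal.sosfilt(band, osem_um)                        # ~OSEMAC_BANDLIM
    r = np.sqrt(np.maximum(signal.sosfilt(rmslp, x**2), 0))  # ~OSEMAC_RMSLP
    return r, bool(np.any(r > threshold_um_rms))

t = np.arange(0, 60, 1/fs)
quiet = 5.0 * np.sin(2*np.pi*0.5*t)    # 5 um of 0.5 Hz motion: ~3.5 um RMS
print(blrms(quiet)[1])                 # False; well under the 25 um_RMS trip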
J. Kissel, O. Patane, R. Short
One last aLOG about Tuesday's upgrade to the ETM and TMS watchdog system -- this time related to the python infrastructure surrounding the watchdog. That means two things:

(1) The SUS guardian code, which is primarily driven by the code generic to all suspensions,
/opt/rtcds/userapps/release/sus/common/scripts/sustools.py
has a dictionary in it that defines whether to look for USER DACKILL channels. Since these were removed from EPICS, the ETM and TMS guardians threw connection errors after the model restarts. To resolve, we removed the 'USER': 'DACKILL' entries from the quadwd and tmtswd dictionary definitions around lines 1660 to 1670 of rev 23101. Unfortunately, because there are some incoming updates from L1 that we don't understand, we can't commit the changes.

(2) There is python code underneath the hood of the "RESET ALL" button on the WD overview MEDM screens,
/opt/rtcds/userapps/release/sus/common/scripts/wdreset_all.py
In the old system, that python code pushes the (one) reset button (channel) that untrips all of the user watchdogs, as well as pressing the reset button (channel) on the USER DACKILL:
ezca['SUS-' + optic + '_WD_RESET'] = 1
ezca['SUS-' + optic + '_DACKILL_RESET'] = 1
Now that there no longer exists a USER DACKILL button to press on the ETMs and TMS, and because the python script is called in the MEDM screen with an ampersand, the call to the script successfully resets the user watchdogs, but then quietly breaks/fails in the background. Not good coding practice, but not impactful at this point. Indeed, because this is also common code to all suspension MEDM screens, the reset of the DACKILL is still *needed* for any suspensions that haven't yet had this upgrade. So, for now, we leave this ugly thing too.
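If/when this gets cleaned up, one option is to make the DACKILL reset tolerant of suspensions that no longer have the channel, and loud about skipping it. A sketch only, assuming ezca raises on a missing channel (the exact exception class may differ):

import logging

def reset_all(ezca, optic):
    ezca['SUS-' + optic + '_WD_RESET'] = 1           # user WD reset, all SUS
    try:
        ezca['SUS-' + optic + '_DACKILL_RESET'] = 1  # only on non-upgraded SUS
    except Exception:                                # placeholder exception type
        logging.warning('%s has no USER DACKILL; skipping its reset', optic)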
Dhruva, Naoki, Nutsinee
We had power mysteriously dropping out of PDA multiple times after adjusting the half-wave plate on SQZT7, so we went hunting for loose optics. We found a loose beam splitter cube mount on the LO path, so we tightened that, but it didn't fix the problem. We accidentally discovered that if we pushed on PDA we got the power back. We suspect PDA came loose from the circuit board. HD DIFF has been readjusted.
Today's fringe visibility:
PDA is 0.986
PDB is 0.985
Trent, Georgia
The new QPD offsets improved the coherence relative to when the offsets were turned off. However, this new coherence is still lower than the coherence with the initial QPD offset values.
See Gabriele's post for more info about the OMC alignment and QPD offsets.
We wanted to look at the coherence between OMC-REFL and DARM with the OMC QPD B offsets on and off. We looked at the coherence between the following channels.
Trace | Date (UTC) | GPS Start Time | Number of Averages | OMC QPD A Pitch Offset | OMC QPD A Yaw Offset | OMC QPD B Pitch Offset | OMC QPD B Yaw Offset | Coherence Ref | DARM Ref |
---|---|---|---|---|---|---|---|---|---|
Black | 09/03/2024 07:59:49 | 1394006407 | 5325 | -0.15 | 0.2 | 0.04 | -0.14 | 32 | 36 |
Green | 13/03/2024 10:59:06 | 1394362764 | 1464 | -0.15 | 0.2 | 0 | 0 | 33 | 37 |
Blue | 12/03/2024 09:07:16 | 1394269654 | 1765 | -0.15 | 0.2 | 0 | 0 | 34 | 38 |
Red | 13/03/2024 20:28:00 | 1394396898 | 2061 | -0.25 | 0.1 | -0.05 | 0.07 | 35 | 39 |
The blue, green, and red curves have fewer averages because there was not as long a stretch between glitches/losing lock as for the black curve.
We took the blue and green curves to determine whether something else could have caused the coherence to drop besides turning the offsets off. There is not a significant difference between the blue and green curves, so we conclude that the coherence is lower when the offsets are turned off.
The coherence of all the curves is quite low (<0.1).
The red curve is the coherence with the new QPD offsets and we see that the coherence is still lower than the black curve (the initial QPD offsets).
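For reference, the coherence behind these traces is the standard Welch-averaged estimate, where more averages lower the noise floor on a low-coherence measurement like this one. A self-contained sketch with synthetic stand-in data (in practice the two channels would be fetched from NDS/frames):

import numpy as np
from scipy import signal

fs = 2048.0
t = np.arange(0, 600, 1/fs)                  # 10 minute stretch
common = np.random.randn(t.size)             # shared component -> coherence
darm = common + 3*np.random.randn(t.size)
omc_refl = 0.2*common + np.random.randn(t.size)

nperseg = int(2 * fs)                        # 2 s segments, 0.5 Hz resolution
f, coh = signal.coherence(darm, omc_refl, fs=fs, nperseg=nperseg)
n_avg = t.size // nperseg                    # ~number of averages
print(n_avg, coh[(f > 30) & (f < 300)].mean())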
I switched sw-psl-aux and sw-lvea-aux to using LLDP instead of CDP. This is to help with continued documentation and visibility of the network. At this point in the rebuild of the network, CDP gives us less information, so it has been turned off.
M. Barton, J. Kissel, O. Patane
Recall, we'd had issues understanding why the modeled top-mass pitch-to-pitch transfer function looked so weird (see LHO aLOGs 75787 and 75947). After
- Mark's suggestion of the problem (suggested over email and mentioned on the 2024-02-29 A+ SUS call), and
- Oli's hard work proving that that *was* the problem (see LHO:76071),
I've now closed the loop for the bbssopt.m Bigger Beam Splitter Suspension's production model parameter set that best matches the first article data:
- added a few more slides to G2400442 conveying the resolution, now at -v2,
- copied the temporary bbssopt_pendstage2on_d1minus2p5mm_d4minus1p0mm.m parameter set on top of the bbssopt.m parameter set, and
- committed bbssopt.m to the SusSVN as of rev 11778.
The screenshot of the comparison shows the differences between the previous rev and the current rev:
- the pend.stage2 boolean flag is now set to 1.0, which matches that the ds are defined as physical ds, and
- d1 and d4 are (slightly) adjusted to better match the first article.
To generate transfer functions from this production model parameter set, (1) find a computer with the /ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/TripleModel_Production/ folder checked out, (2) open Matlab, and (3) run the following commands:
>> freq = logspace(-2,2,1000);
>> buildType = 'bbssopt';
>> svnDir = '/ligo/svncommon/SusSVN/sus/trunk/';
>> plotFlag = false;
>> [bbss,~,~,pend] = generate_Triple_Model_Production(freq,buildType,svnDir,plotFlag);
>> figure(1);
>> ll=loglog(freq,abs(squeeze(bbss.f(bbss.out.m3.disp.L,bbss.in.gnd.disp.L,:)))); %%% Choose which DOF you want to plot here
>> set(ll,'LineWidth',2);
>> grid on;
>> title('BBSS Susp. Point. L to Optic L');
>> xlabel('Frequency [Hz]'); ylabel('Magnitude [m/N]')
>> shg
which will produce the attached plot.
The end station dust monitors are running smoothly. EX needed a slight adjustment. The corner station pump has been turned off; its temperature was reading just over 200 F. I will swap that pump out after it has cooled down.
This means that the corner station dust monitor readings will be incorrect until the pump is replaced in the near future.
List of Commissioning / Locking Activities that the commissioning, ops team and visitors have done so far this week:
I've retuned a Simulines configuration for testing on Tuesday morning. The frequency vector is the same as we nominally use, but I reduced all injection amplitudes by 50% across the board. If we're able to run Simulines in the new DARM state without losing lock in the morning, I'll still need another round with Simulines at some point to determine the best injection strengths moving forward. The new injection configs that I tuned were placed in /ligo/groups/cal/src/simulines/simulines/FreqAmp_H1_newDARMconfig_20231218/.

I then sourced the cal pydarm environment with
source /ligo/groups/cal/local/bin/activate
and ran the vector optimization script at /ligo/groups/cal/src/simulines/simulines/amplitudeVectorOptimiser.py after adjusting the input and output directories for H1 on lines 33 & 34 (variables changed are inDir and outDir) to:
inDir = 'FreqAmp_H1_newDARMconfig_20231218'
outDir = 'FreqAmp_H1_simuLines_newDARMconfig_20231218'
This placed new "optimized" frequency vector files in /ligo/groups/cal/src/simulines/simulines/FreqAmp_H1_simuLines_newDARMconfig_20231218/.

Lastly, to actually generate the config file that Simulines processes when it's run, while still in the cal virtual environment, I ran
python simuLines_configparser.py --ifo H1
after changing the H1 output filename in simuLines_configparser.py to
outputFilename = outDir+'settings_h1_newDARMconfig_20231218.ini'
This returned the following output:
(local) louis.dartez@cdsws22: python simuLines_configparser.py --IFO H1
Total time = 1252.0
Average time taken to sweep individual frequency: 26.083333333333332
Separations in TIME for each sweep: [233.0, 260.0, 253.0, 253.0]
Starting Frequencies for each swept sine: [5.6, 7.97, 14.44, 64.78, 339.4]
Starting points in relative time for each swept sine: [0, 233.0, 493.0, 746.0, 999.0]

I then commented out my temporary variable name changes in the simulines scripts. At the end of the exercise, the simulines file to test in the new DARM loop configuration is /ligo/groups/cal/src/simulines/simulines/settings_h1_newDARMconfig_20231218.ini. The command to execute simulines using this configuration is
gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/src/simulines/simulines/settings_h1_newDARMconfig_20231218.ini;gpstime
This is the same as the instructions in the Operator's wiki, with the modification for the new ini file.

The script I used to adjust the injection amplitudes can be found at /ligo/groups/cal/src/common/scripts/adjust_amp_simulines.py.
The script mentioned above now lives at /ligo/groups/cal/common/scripts/adjust_amp_simulines.py.
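The contents of adjust_amp_simulines.py aren't reproduced here; purely as a hypothetical sketch, an across-the-board 50% reduction amounts to something like the following (a two-column freq/amplitude file layout is assumed):

import numpy as np
from pathlib import Path

def scale_amplitudes(in_dir, out_dir, factor=0.5):
    """Copy each vector file, multiplying its amplitude column by factor."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for f in sorted(Path(in_dir).glob('*.txt')):     # file layout assumed
        data = np.loadtxt(f)                         # columns: freq, amplitude
        data[:, 1] *= factor
        np.savetxt(out / f.name, data)

scale_amplitudes('FreqAmp_H1_newDARMconfig_20231218',
                 'FreqAmp_H1_newDARMconfig_20231218_scaled')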
* Added to ICS DEFECT-TCS-7753, will give to Christina for dispositioning once new stock has arrived.
New stock arrived and has been added to ICS. Will be stored in the totes in the TCS LVEA cabinet.
ICS has been updated. As of August 2023, we have 2 spare SLEDs for each ITM HWS.
ICS has been updated. As of October 2023, we have 1 spare SLED for each ITM HWS, with more ordered.
Spare 840nm SLEDs QSDM-840-5 09.23.313 and QSDM-840-5 09.23.314 arrived and will be placed in the TCS cabinets on Tuesday. We are expecting qty 2 790nm SLEDs too.
Spare 790nm SLEDs QSDM-790-5--00-01.24.077 and QSDM-790-5--00-01.24.079 arrived and will be placed in the TCS cabinets on Tuesday.