WP12479 Fil, Dave:
Fil has fixed the missing slow controls Beckhoff terminals at EX which were lost after the site power outage Sun 6th April 2025 18:05 PDT.
The attached 18-day trend of the baffle PD signals shows the loss and restoration of signal.
The CDS overview had been expecting only 125 of 127 terminals and was displaying the degraded DEV4 in dark green, which turned to red once all 127 terminals came back. I have removed this exception; DEV4 is now nominal green when new overviews are opened.
Tagging AOS.
I made a couple of ancillary investigations while I was in-chamber helping adjust CPY. First, I shined a flashlight through the ITMY elliptical baffle towards the BS. Figure 1 shows that this produced a 45 degree annular beam, similar to the one observed during lock that I noted here (83050), consistent with the hypothesis that it is a reflection from the BS cage (see linked alog).
Second, our entry through BSC8 also reminded me that LHO has some unique potential scattering sites that LLO does not have. Figure 2 shows that several of the blanked-off nozzles in BSC8 act as corner retroreflectors visible to the beam spot on ITMY, and there is a reflection from the chamber, just below the beam. A look at my compilation of beam spot photos from several years ago (41142, Figure 3) also shows these issues at ITMX (second page of Fig. 2). We should probably put in nozzle baffles next time we are in-chamber near BSC8 and 7.
I recently reported on multiple potential stray light issues in the vertex area, including a 45 degree conical annular beam from the beamsplitter, and a reflection of the halo of the beam passing through the ITMX elliptical baffle that is roughly directed at ITMY (83050). In a first attempt to study potential scattered light noise problems from these beams, before the break I injected 0.2 Hz X and Y motion onto the BS HEPI, with motion amplitudes of about 1e-6 m at stages 0 and 2 (see figure). The figure demonstrates, with spectrograms and spectra (second page), that there is a slight increase in DARM noise for both X and Y injections, mainly below 60 Hz. I turned the injections on and off multiple times because the noise is pretty subtle. I think we should further study these potential scattering issues after the break.
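For anyone repeating this kind of check, here is a minimal sketch of the on/off spectral comparison using gwpy; the channel name and GPS spans below are placeholders, not the times actually used for the figure.

from gwpy.timeseries import TimeSeries

chan = 'H1:GDS-CALIB_STRAIN'          # DARM-like channel; a CAL-CS channel would also work
span_on = (1419000000, 1419000600)    # placeholder GPS span with the 0.2 Hz HEPI injection ON
span_off = (1419001000, 1419001600)   # placeholder GPS span with the injection OFF

asd_on = TimeSeries.get(chan, *span_on).asd(fftlength=60, overlap=30)
asd_off = TimeSeries.get(chan, *span_off).asd(fftlength=60, overlap=30)

# A ratio plot makes a subtle broadband increase below ~60 Hz easier to spot
ratio = asd_on / asd_off
plot = ratio.plot()
ax = plot.gca()
ax.set_xscale('log')
ax.set_xlim(5, 200)
ax.axhline(1, color='k', linestyle='--')
plot.savefig('darm_asd_ratio_bs_hepi_injection.png')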
J. Kissel
ECR E1700228
WP 12370
I've executed the PR3 Optical Lever QPD AA cable move from SUS-C4 U11 D9 port 7 to SUS-C4 U32 D9 port 8, described in more detail a few days ago (LHO:83168), in order to make room for the incoming PM1 (per ). See LHO:83168 for "before" pictures; I attach "after" pictures here:
- 2025-03-11_SUS-C4_PR3OplevMove_U11_AAChassis_After_1 shows that D9 ports 7 and 8 of the U11 AA chassis are now vacant,
- 2025-03-11_SUS-C4_PR3OplevMove_U11_AAChassis_After_2.jpg shows that this AA chassis is serial number S1202339,
- 2025-03-11_SUS-C4_PR3OplevMove_U32_AAChassis_After_1.jpg shows the big-picture scene around the U32 AA chassis,
- 2025-03-11_SUS-C4_PR3OplevMove_U32_AAChassis_After_2.jpg shows the cable, "H1:OP LEV_PR3_AA" or "H1:SUS-HAM2_88" specifically, connected to the U32 AA chassis D9 port 8, the last "IN29-31" spigot of this AA chassis that supports sush2a's ADC1,
- 2025-03-11_SUS-C4_PR3OplevMove_U32_AAChassis_After_3.jpg shows that this U32 AA chassis is serial number S1202340.
Proof that (after we restored the alignment offsets at the M1 stage) the PR3 optical lever's gross function has returned. From the trends alone, I see that:
- the pitch signal is a bit noisier, maybe 0.1 [urad] more RMS,
- the yaw signal is drifting slowly, at a super small 0.02 [urad/minute].
The total SUM of the QPD segments has returned to the identical value, though. Will gather more data, e.g. ASDs, to confirm if, and at what frequency, the performance is worse in pitch. Will wait patiently in yaw to see if the trend is something interesting or some transient that doesn't matter. I suspect that this change in character won't matter at all. Recall we DO NOT use the PR3 optical lever for any controls, only for monitoring and driven characterization.
Here's a look at the amplitude spectral density of the PITCH and YAW signals before and after the change. In short: no concern -- the optical lever performance is equivalent to before.
Also -- hidden beneath the comparison of "read out by U11 AA chassis, ADC0 of sush2b, then shipped over to sush2a via IPC to be processed by the PR3 model" vs. "read out by U32 AA chassis, ADC1 of sush2a, directly processed by the PR3 model" is that the "after" time period is during maintenance day, when site-wide sensor correction is turned OFF. So the only substantial change in performance (and the increase in RMS I mentioned in LHO:83290 above) is in PITCH, but it's because sensor correction is now off, vs. the before data, both in the trends and in the reference traces here, where it was on. (Good job, sensor correction!)
Also in pitch, there's a bit of broadband increase in noise between 5 and 20 Hz, but this may be where the optical lever QPD is ADC noise limited. But also, a slight increase in noise doesn't really matter -- the optical lever is much noisier than the real motion of the suspension. Even during driven measurements, we struggle to get coherence in this region. There are some minor improvements in the character of sharp features in both pitch and yaw above 10 Hz.
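(As an aside, a minimal gwpy sketch of this kind of before/after ASD comparison; the channel name and GPS spans here are assumptions/placeholders, not the ones used for the attached plot.)

from gwpy.timeseries import TimeSeries

chan = 'H1:SUS-PR3_M3_OPLEV_PIT_OUT_DQ'                 # assumed PR3 oplev pitch channel name
before = TimeSeries.get(chan, 1425700000, 1425701800)   # placeholder span: U11/sush2b + IPC readout
after = TimeSeries.get(chan, 1425800000, 1425801800)    # placeholder span: U32/sush2a direct readout

asd_before = before.asd(fftlength=100, overlap=50)
asd_after = after.asd(fftlength=100, overlap=50)

plot = asd_before.plot(label='before cable move')
ax = plot.gca()
ax.plot(asd_after, label='after cable move')
ax.set_xscale('log')
ax.set_yscale('log')
ax.legend()
plot.savefig('pr3_oplev_pit_asd_before_after.png')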
Jeff, Oli
ECR E1700228
More preparation to make way for PM1 - Jeff and I went into the h1sushtts Simulink model and added in PM1 and its necessary connections (h1sushtts before). It was basically a copy of the RM1 and RM2 control blocks, with the input ADC channels taking 24 - 27, and channels 8 - 11 on the DAC (h1sushtts after - PM1+output).
We also copied the RMs' PCIe inputs, but the channels coming in from the TTL4C on HAM1 are going to be removed for the RMs when the ISI is installed on HAM1 and replaced with the new ISI channels, so PM1 will never have the HAM1_TTL4C channels. Since we want to be able to compile and test the model before then, I have put in a constant 0 in place of the TTL4C channels for PM1 (h1sushtts after - IPC INP). Once we have the new ISI channels, we can add these connections in using those channels, as well as update the channels for the RMs.
Daniel just added in the new ASC channels for PM1 (83195), so I was able to successfully compile h1sushtts. It has not yet been installed.
The model file can be found in /opt/rtcds/userapps/release/sus/h1/models/, and the changes to h1sushtts.mdl have been committed to the svn as revision 30907.
Just few "slept on it, and remembered we should" things to add: (1) Attached is the DAQ channel list that comes with the installation of PM1. We didn't cover it explicitly above because it comes standard with the /opt/rtcds/userapps/release/sus/common/models/ HSSS_FF_MASTER.mdl library part, but as it is new (small) weight on the DAQ, it's worth calling out. 4x new channels stored at 512 Hz, and 13x at 256 Hz. (2) Also, the oft-forgotten coil driver output voltage monitor channels, the so-called VMONs needed to be absorbed by the h1susauxh2 front-end too, so we've now done the model prep that as well -- see LHO:83211.
Jeff, Oli
WP 12370
ECR E1700228
In a continuation of preparing for the addition of PM1 (83168), Jeff and I have made the necessary model changes for the PR3 OpLev. As outlined in 83168, what needed to be changed in the model (to match the physical changes) was to move the PR3 OpLev channels. They were originally located in the top level of h1susim.mdl and sent via IPC to PR3, but we moved them over to the h1suspr3.mdl model. This means that the outgoing IPC connection from h1susim going into h1suspr3 has been removed (h1susim before, h1susim after), and the PR3 oplev channels have been directly connected from ADC1 (h1suspr3 before, h1suspr3 after).
Both models were compiled successfully but have not yet been installed.
Model files can be found in /opt/rtcds/userapps/release/sus/h1/models/, and changes to h1susim.mdl and h1suspr3.mdl have been committed to the svn as revision 30905.
I have recently reported that the “mystery” beam on the spool piece wall near HAM3 was coming from the direction of ITMX (82252). To further this investigation, I started photographing the area around the ITMs and BS as best I could through our viewports (there is not a good view towards HAM3). I found several unexpected distributions of light in the vertex:
1. 20 degree conical annular beams from ITMs
Both ITMX/CPX (Figure 1) and ITMY/CPY (Figure 3) cast an expanding annular “beam” towards the BS with a cone half angle of roughly 20 degrees from the main beam. My best guess is that it is produced by arm cavity light hitting the bevel of the ITMs (see cartoon in Figure 1). A good test of this would be to install the new test mass cage baffles at one or more of the ITMs at LLO (presuming Anamaria finds this beam at LLO) this upcoming break. The baffle should hide the bevel and eliminate the ring of light.
2. 45 degree conical annular beams from BS
The BS appears to cast an expanding annular “beam” with a cone half angle of 45 degrees, centered around the -X, -Y direction (Figure 2), and likely another annular beam, also with a half angle of 45 degrees, centered around the -X, +Y direction (evidence in Figure 3). I tried to find a geometry where the bevels were also the source of these beams but didn’t find one. My best guess is that the annular cone is produced by reflections of light from PR3 and the ITMs off of the inner surface of the circular cage around the BS, or the inside surface of the circular barrel of the BS itself (see drawings on third page of Figure 2).
3. Reflection of BS in ITM elliptical baffles likely visible at ITMY and HAM3
The BS beam spot is reflected towards ITMY by the slanted piece of the ITMX elliptical baffle (Figure 3). While the actual beam is not reflected towards ITMY (it isn’t clipped), the baffle reflects light towards ITMY that is scattered out of the main beam by only a few degrees.
Similar images were taken at LLO for comparison (see alog 75626).
This is a sister alog to LLO:72613. After discussion with the calibration group, Joe and I have proposed and pushed a small but important fix to the CAL_LINE_MONITOR_MASTER/COHERENCE block. When the IFOs are down, the coherence for various calibration and monitoring line transfer functions is 0 (this is good). However, the TF uncertainty, which is also produced by the coherence block, was found to be railing at 0 too. How can the uncertainty be 0 if there is no coherence?! It turns out that a coherence of 0 was causing a divide-by-zero that, on the front end, presents as ... you guessed it, 0. The new h1calcs models (updated at LLO then synchronized by me to LHO) fix that. Now the coherence signal entering the uncertainty-calculation logic will be kept within 1e-6 to 2. We allowed for coherence values above 1 (up to 2) in order to avoid inadvertently 'hiding' egregious errors in the coherence calculation by forcing it to stay below 1. Thinking about this further now that I'm writing this... that wasn't necessary, as the coherence channel itself isn't affected by our new changes. Oh well. I've attached an annotated version of Joe's screenshot in LLO:72613 to show the new component we've installed. The h1calcs updates should be put in place during tomorrow's maintenance period.
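To illustrate the logic, here is a minimal sketch (not the actual front-end math block) assuming the textbook coherence-based relative uncertainty sqrt((1-C)/(2*N*C)) with N averages:

import numpy as np

def tf_relative_uncertainty(coherence, n_avg):
    # Clamp the coherence to [1e-6, 2] before use, mirroring the new logic:
    # a dead IFO (coherence -> 0) no longer divides by zero and silently reads back as 0,
    # and unphysical values above 1 are passed through (up to 2) rather than hidden.
    coh = np.clip(coherence, 1e-6, 2.0)
    return np.sqrt((1.0 - coh) / (2.0 * n_avg * coh))

print(tf_relative_uncertainty(0.0, n_avg=10))   # IFO down: ~224 (obviously unusable), not 0
print(tf_relative_uncertainty(0.99, n_avg=10))  # healthy line: ~0.02 relative uncertainty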
Alena Anayeva has been working on some Zemax modeling to help us understand what happened when our output arm seems to have changed on April 22/23.
She has been using shifts in alignment based on slider numbers as reported in 77388, but the shifts that this is giving her are too large.
This is a quick look at the slider calibrations. There is a discrepancy for SR2: the slider calibration agrees with the M2+M3 OSEMs, while the M1 OSEMs report a 62% smaller shift. For SR3, the sliders and OSEMs agree to within 10%.
After the large shift we have been falling off the SR3 optical lever, so that is not a useful comparison.
Robert, Anamaria
Yesterday we opened the viewports of the oplevs for both ITMX and ITMY to find the alignment of the CPs.
The procedure relies on the fact that the optics are not coated for red, so we can see all four surfaces. The separation between the beams at this distance (~34 m) is about 10 cm due to the wedges (listed in the table below, from the galaxy page). The test mass has a vertical wedge, thick down, so the AR beam will show up directly below. The CP is supposed to be perfectly parallel to the back surface, so the closest surface of the CP (CP1) would land essentially right on top of the ITM AR beam. Then the CP has a similar size wedge, but horizontal, so the second CP surface would hit at the same level as CP1 and ITM AR but to the left or right, depending on which ITM.
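As a back-of-the-envelope check (assuming fused silica, n ~ 1.45, and small angles), a wedge of 0.07-0.08 deg (~1.2-1.4 mrad) separates the front- and back-surface reflections by roughly 2*n*alpha ~ 3.5-4 mrad, which over the ~34 m to the viewport gives a spacing of order 10 cm, in line with the separation quoted above.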
The first plot attached shows the nominal view of these beams as seen at the oplev viewport on the white page. The yellow pages were attached to the lexan covering the viewport and the beams were marked. The full view of all the beams no longer fits through the viewport due to the addition of nozzle baffles, so we had to walk the oplev sender to find all the beams. For ITMX we lucked out and the ITM AR beam was visible at the top gap of the baffle at the same time as the second AR reflection and the CP beam were visible in the aperture. As such, we were able to scale the misalignments to the wedge value. We verified the CP beams by moving the CPs, and we scanned to find the second CP surface (CP2) horizontally at about the right distance from CP1.
| | wedge ITM/CP | misalignment |
| CP-X | 0.07/0.07 deg | 1.7 mrad down |
| CP-Y | 0.08/0.07 deg | 0.55 mrad down |
By down I mean they are further pitched down towards the arm.
One would think that we have +/- 440 urad range in pitch on the R0, but it seems its range is much less than advertised. Even stranger, this was also found to be the case at L1, on both ITM R0s. When we moved CPY, with respect to this calibration it only moved ~230 urad. So we cannot make it back to the nominal position of overlapping the ITM AR beam. For yaw we did a smaller step, so there is more error on it, but it's about 60% of the slider value. More on this later.
(CPY is the one that Robert found to modulate the noise from the MC tube baffle.) Speaking of the L1 experience, we even had to vent back in 2016 to fix one of these CP misalignments, which was too close to HR actually. The interesting thing to me, looking back, is that L1 still has a similar misalignment for CPX to the H1 CPY and we don't see as high noise coupling at the IMC tube.
CPX is very misaligned by comparison, but not linked to the MC tube scatter. Alena has agreed to help us track where these ghost beams land at P/SR3 and the scraper baffles, now that we know their exact orientation.
+Peter, Jeff
Regarding the reduced range of CPY R0, we checked what the BOSEMs and the coil current monitors had to say about the range during our optical measurement. Jeff kindly calibrated the RMSMONs so we could see what the current really is. The data is in the attached screenshot. We did not check the CPX but, as I mention above, we found this to be the case at LLO as well. For pitch I show F1, which gets the largest drive, for yaw I show one of F2/F3 which get identical drives. In terms of range, I define it as how far from 0 we can go, so half range technically, but it's max DAC output and current. If we want more range we have to decide if we can afford more than ~45mA on the BOSEMs.
| CPY | Slider [urad] | range [%] | OSEM readback [urad] | Oplev meas [urad] | Coil current [mA] |
| PIT | 440 | 100 | 1130 | 230 | 45 |
| YAW | 200 | 33 | 160 | 120 | 16 |
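As a rough estimate of what recovering full range would cost (assuming the coil current scales roughly linearly with the commanded offset): the ~45 mA in F1 bought ~230 urad of measured pitch, so reaching the nominal 440 urad optically would take on the order of 45 x (440/230) ~ 85-90 mA per coil, i.e. roughly double the current we would have to be comfortable putting through the BOSEMs.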
Robert, Jason, Anamaria
Today we used the oplev to determine the CP alignment for both ITMX and ITMY. We had to move the sender around to find all the beams, so here I mark the shift in the beam location in case someone later wants to look at drifts. The ITMs were realigned slightly after our test, so it's not good to 1 urad, but the oplevs drift around more than that anyway. I also had to choose times after lockloss and before relocking, so that ASC wouldn't interfere.
| | ITMX P | ITMX Y | ITMY P | ITMY Y |
| Before | -8 | -1 | -20 | 1 |
| After | -10 | 5 | 6 | 2 |
Since people don't open the receiver box or the sender box very often we note:
a) The beams on the oplevs are as large as the QPD, which is not great, but it depends what they're used for.
b) We found the ITMX oplev beam to be clipping on the nozzle baffle. Jason helped us shift the QPD and then we realigned to this new, more central location.
c) Replacing the sender cover is very difficult without causing a shift in the oplev beam, though we were quite careful to not touch the telescope while doing that. This is why we were not able to center them better.
d) I suppose the last few urad should be done with the QPD stages, but one has to be careful to check from time to time that it doesn't walk out of the aperture over time (which is smaller since the installation of nozzle baffles).
The calibration has been updated at LHO using Cal report 20240330T211519Z. The TDCF EPICS channel changes and the CALCS filter changes are attached. In O4b, the Calibration group is using a slightly different scheme for keeping track of cal reports, the front end pipeline settings, the GDS pipeline, and the hourly online uncertainty budgets. The biggest change is that each cal report is now also a git repository with its own history. This will allow for better tracking in situations for which it is deemed necessary to regenerate / reprocess calibration measurements. Additionally, there are now two additional channels available on the front end: H1:CAL-CALIB_REPORT_HASH_INT and H1:CAL-CALIB_REPORT_ID_INT. These new channels are populated by the pyDARM tools when someone in the control room runs 'pydarm export --push'.

New channels and their purpose:
- H1:CAL-CALIB_REPORT_HASH_INT: numeric representation of the git commit hash for the report that was used to generate the current calibration pipeline
- H1:CAL-CALIB_REPORT_ID_INT: numeric representation of the report id (e.g. 20240330T211519Z) the current calibration pipeline is configured with

Current channel values:
caget H1:CAL-CALIB_REPORT_HASH_INT H1:CAL-CALIB_REPORT_ID_INT
H1:CAL-CALIB_REPORT_HASH_INT 4.14858e+07
H1:CAL-CALIB_REPORT_ID_INT 1.39587e+09

End-to-end procedure for updating the calibration pipeline:

0. Take new calibration measurements following the instructions in OpsWiki/TakingCalibrationMeasurements.

1. Make sure that the current pyDARM deployment version is up to date.
1.a) Run 'pydarm -v' and check that the returned version (e.g. 20240405.1) matches the latest 'production release' tag listed at https://git.ligo.org/Calibration/pydarm/-/tags.
1.b) If the tags do not match, have a member of the Calibration group deploy the latest pyDARM tools to the site and the ldas cluster. They should follow the instructions laid out here.

2. Generate a new cal report.
2.a) Run 'pydarm report' (if this measurement set should be considered an epoch in sensing or actuation, then apply the appropriate command line options as listed in the pyDARM help menu, 'pydarm report -h'). Report generation will now populate the report directory at /ligo/groups/cal/H1/reports/<report id>/ with various 'export' products. These include dtt calibration files, inverse sensing foton exports, and TDCF EPICS records that would be updated if this report were to be exported to the front end. Here is a quick list of some of the products that get stored at this step:
- pcal_calib_dtt.txt: Pcal calibration into meters of displacement
- deltal_external_calib.txt: calibration of DELTAL_EXTERNAL into strain
- pydarm_version: the pyDARM tag indicating the version of pyDARM used to generate the report
- export_epics_records.txt: list of each EPICS channel name and the value it would get set to when the report is exported to the front end
- gstlal_compute_strain_C00_filters_H1.npz: a set of GDS filters and metadata that is sent to the GDS pipeline when the report is exported

3. Inspect the plots in the cal report to make sure they're reasonable. Typically this is done by a member of the Calibration group that is well-acquainted with the IFO and the calibration pipeline.
3.a) If the cal report is valid, set the 'valid' tag in the cal report: touch /ligo/groups/cal/H1/reports/<report id>/tags/valid
3.b) If the cal report was marked valid in 3.a), then 'commit' the report now that its contents have been changed: pydarm commit <report id>. If you have not done this before, you may see a message from git complaining about dubious ownership. If that happens, follow the instructions in the message and try committing again. If you continue to have trouble, reach out to me, Jamie Rollins, or another member of the Calibration group that is knowledgeable about the new infrastructure.

4. If the report is 'valid', export the new calibration to the front end.
4.a) To first compare the cal report against the currently installed calibration pipeline, run 'pydarm status'.
4.b) To have pyDARM list all of the changes it would make if exported, run 'pydarm export'.
4.c) Once you are certain that you want to update the calibration, run 'pydarm export --push'. This will write to all of the EPICS channels listed in export_epics_records.txt and perform various CAL-CS front end foton filter changes.
4.d) Reload the CAL-CS front end coefficients via the MEDM screen system to make sure the new changes are loaded into place.
4.e) Add an 'exported' tag to the current report (touch /ligo/groups/cal/H1/reports/<report id>/tags/exported) and commit it again (pydarm commit <report id>).

5. Upload the newly exported report to the ldas cluster.
5.a) Run 'pydarm upload <report id>'.
5.b) Wait about 1-2 minutes after the upload to allow time for the systemd timers on the ldas cluster to recognize that the new report exists. You can confirm that the latest report is recognized by the ldas cluster by verifying that https://ldas-jobs.ligo-wa.caltech.edu/~cal/archive/H1/reports/latest/ points to the correct report.

6. Restart the GDS pipeline.
6.a) Run 'pydarm gds restart' to begin the process of restarting the GDS pipeline. This will show prompts from the DMT machines (DMT1 and DMT2) asking you to confirm the hash for the GDS pipeline package (gstlal_compute_strain_C00_filters_H1.npz). The prompts will contain the following line:
b1c9f6cd1ba3c202a971c6b56c7a1774afb1931625a7344e9a24e6795f3837d7 gstlal_compute_strain_C00_filters_H1.npz
To confirm that the hash above is correct, run 'sha256sum /ligo/groups/cal/H1/reports/<report id>/gstlal_compute_strain_C00_filters_H1.npz' and verify that the two hashes are identical. If they are the same, then type 'yes' and continue with the GDS restart process. After performing this process for the second DMT machine, pyDARM will continue with the pipeline restart. The GDS pipeline currently takes about 12 minutes to fully reboot and begin producing data again. During this time, no GDS calibration data will be receivable.
6.b) If the two hashes are not the same and all of the above checks were done, then something is likely wrong with the pyDARM+GDS pipeline system and you cannot continue with the calibration push. Take the following steps to reset the calibration to its former state:
1. Open the CAL-CS SDF table and revert all of the EPICS channel pushes listed in export_epics_records.txt.
2. Reset the foton filters by reverting to the last h1calcs filter file installed before you exported the calibration report.
3. Remove the exported tag from the new report (rm /ligo/groups/cal/H1/reports/<report id>/tags/exported), commit it (pydarm commit <report id>), and re-upload it to the ldas cluster (pydarm upload <report id>).

Summary of pyDARM commands (for use in the control room):
- pydarm report [<args ...> <report id>]: generate a calibration report based on the measurement set at <report id>. See the output of 'pydarm report -h' for additional customization.
- pydarm status [<args ...> <report id>]: compare the current calibration pipeline against what the pipeline would be if report <report id> were exported to the front end
- pydarm commit [<args ...> <report id>]: make a new commit in the report <report id> git repository. This creates a new hash and should be done any time the report's contents are changed.
- pydarm upload <report id>: upload/sync the report <report id> with the ldas cluster
- pydarm gds restart: initiate a GDS pipeline restart
- pydarm ls -r: list all reports
Gabriele, Louis

We've successfully run a full set of calibration swept-sine measurements in the new DARM offloading (LHO:76315). In December, I tried running simulines in the new DARM state without success. I reduced all injection amplitudes by 50% but kept knocking the IFO out of lock (LHO:74883). After those repeated failures, I realized that the right thing to do was to scale the swept-sine amplitudes by the changes that we made to the filters in the actuation path. I prepared four sets of simulines injections last year that we finally got to try this evening. The simulines configurations that I prepared live at /ligo/groups/cal/src/simulines/simulines/newDARM_20231221. In that directory are 1.) simulines injections scaled by the exact changes we made to the locking filters, and 2.-4.) reductions by 10, 100, and 1000 of the rescaled injections that I made out of an abundance of caution. The measurements we took this evening are:

2024-03-15 01:44:02,574 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,582 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,591 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,599 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240315T012231Z.hdf5
2024-03-15 01:44:02,605 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240315T012231Z.hdf5

We did not get to take a broadband PCALY2DARM measurement as we usually do as part of the normal measurement suite. Next steps are to update the pyDARM parameter file to reflect the current state of the IFO, process these new measurements, then use them to update the GDS pipeline and confirm that it is working well. More on that progress in a comment.

Relevant logs:
- success in transitioning to the new DARM offloading scheme in March 2024: LHO:76315
- unable to transition into the new offloading in January 2024 (we still don't have a good explanation for this): LHO:75308
- CAL-CS updated for the new DARM state: LHO:76392
- weird noise in CAL-CS last time we tried updating the front end calibration for this state (still no explanation): LHO:75432
- previous problems calibrating this state in December: LHO:74977
- simulines lockloss in new DARM state in December: LHO:74887

The script I used to rescale the simulines injections is at /ligo/groups/cal/common/scripts/adjust_amp_simulines.py. It's the same (but modified) script I used in LHO:74883.
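The underlying idea is just to multiply each injection amplitude by the magnitude ratio of the old to new actuation filters at that injection frequency. A minimal sketch of that idea (this is not the actual adjust_amp_simulines.py; the filters, frequencies, and amplitudes below are placeholders):

import numpy as np
from scipy import signal

fs = 16384.0                                           # model rate (placeholder)
freqs = np.logspace(np.log10(10), np.log10(1000), 50)  # placeholder injection frequencies [Hz]
amps_old = np.ones_like(freqs)                         # placeholder original amplitudes

sos_old = signal.butter(2, 100.0, fs=fs, output='sos')  # stand-in for the old actuation filter
sos_new = signal.butter(2, 300.0, fs=fs, output='sos')  # stand-in for the new actuation filter

_, h_old = signal.sosfreqz(sos_old, worN=freqs, fs=fs)
_, h_new = signal.sosfreqz(sos_new, worN=freqs, fs=fs)

# Keep the downstream drive roughly unchanged: scale by |H_old| / |H_new|
amps_new = amps_old * np.abs(h_old) / np.abs(h_new)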
On updating the pyDARM parameter file for the new DARM state:
- Copied H1OMC_1394062193.txt to /ligo/groups/cal/H1/arx/fotonfilters/ (see the Nov 28, 2023 discussion section in LIGO-T2200107 regarding cal directory structure changes for O4b). Since the pyDARM logic isn't fully transitioned yet, I also copied the same file to the 'old' location: /ligo/svncommon/CalSVN/aligocalibration/trunk/Common/H1CalFilterArchive/h1omc/.
- I also copied H1SUSETMX_139441589.txt to both (corresponding) locations.
- pyDARM parameter file swstat values were updated according to what was active at 1394500426 (SUSETMX and DARM1,2).
The git commit encapsulating the changes to the parameter file can be found here: https://git.ligo.org/Calibration/ifo/H1/-/commit/119768de95a66658039036aca358364c1d39abe4
Here is the pyDARM report for this measurement: https://ldas-jobs.ligo-wa.caltech.edu/~cal/?report=20240315T012231Z
Cabling on the HAM5 D3 flange was redressed. Cables routed behind the cross beam. This required cables for the HAM5 DCPD and the OFI TEC/THERMISTORS to be disconnected. D. Sigg disabled the servo for the OFI TEC. Servo enabled after cable work was completed.
J. Kissel, L. Dartez

Jeff ran the calibration measurement suite. We processed it according to the instructions here. I then updated the CAL_DELTAL_EXTERNAL calibration using the new report at /ligo/groups/cal/H1/reports/20240311T214031Z.
Attaching the cal report.
Optical gain:
2024-03-12: 3.322e+06 [DARM ERROR counts / meter]
2023-10-27: 3.34e+06 [DARM ERROR counts / meter]
KappaC at the end of O4a: 1.006
Optical gain at the end of O4a: 3.336e6 [DARM ERROR counts / meter]
So the current optical gain differs from what we had at the end of O4a by about 0.4%.
The calibration from this report has now been added to the LDAS cluster archive such that it shows up in the official infrastructure. Its location is https://ldas-jobs.ligo-wa.caltech.edu/~cal/?report=20240311T214031Z
It was tagged as "valid" and "exported" as follows. On a local control room workstation (or on whichever computer system the report was created):
$ cd /ligo/groups/cal/H1/reports
$ touch 20240311T214031Z/tags/exported
$ touch 20240311T214031Z/tags/tags
$ arx commit 20240311T214031Z
Today, we opened the large ISI storage container that housed 4 QUAD upper structure assemblies, 2 baffle down-tube assemblies, and 2 TransMonSus lower structures. It all looked great! The wires and blades showed no signs of any rusting, and the container and parts looked very clean and organized. All clamped down as last left years ago. Nothing unexpected was found.
We are working in the West Bay of the LVEA, so the parts are all currently exposed in the large cleanroom there (which is ON).
Shroud panel update:
For the OMC Trans video beam, we further abused the shroud panel to successfully move it enough that the beam comes out. However, that made it somehow impossible to mount the panel on the cage using at least three screws.
We debated how we should secure the panel robustly, but in the end concluded that enlarging the existing hole (by milling, good job Tyler!) is the way to go. That way we don't have to do crazy things with hardware, and the panels can stay where they are supposed to be. See Betsy's pictures for the milling vs drilling vs whatever-ing. The modified panels will be installed tomorrow.
We already installed a viewport simulator on HAM6.
Grounding problem was fixed:
ASC-AS_C grounding was fixed. We first touched the ASC-AS_C cable on the ISI table and nothing changed, we jiggled the cable coming from the top of the ISI to stage 0 and nothing changed, then we jiggled the section between stage 0 and the feedthrough and nothing changed.
Just to make sure, Rahul disconnected the DB25 connection on the ISI table and the grounding was gone, which was a surprise. In the end, it was the unused QPD cable assy that was lying on top of the ISI table. ASC-AS_C and the unused QPD cable assy share the same DB25 connector. The shield collar on the unused PCB was touching the ISI table, causing a problem for ASC-AS_C. We relocated the unused PCB so that it won't touch. See Rahul's pictures.
HAM6 grounding issues:
Picture 1: shield collar on the unused PCB touching the ISI table.
Picture 2: shield collar on the unused PCB readjusted, after which the grounding issues were gone.
Picture 3: the in-use QPD cable assy cable clamping/routing.
After some failed attempts to cut a hole out of a glass panel and retain the panel (not a thing stained-glass'ers do with standard cutting tools, apparently, according to Google), we switched methods and got diamond glass hole saws. These chewed up the glass pretty badly (quite a few practice rounds attempting to improve, such as faster, slower, more water, taped, sandwiched, etc.). And we were leery of where the paint from the hole saw went. #thumbsdown.
Then Tyler decided to try milling a square out of the glass. Some more practice rounds and a new tooling fixture plate later, he managed a clean-ish looking square cut around the original 0.6" beam hole. There are 2 chips at the square, but the square is 1.5" on a side, so these chips should be well out of the beam's way, and we'll try to improve the setup tomorrow for panel 2. It may have some chips, but this solution seems to be the best mediocre solution we have come up with with the HAM6 team. Attached is the photo essay, a la Tyler and I, of the evolution in glass hole cutting, which took most of the day. Note, the last picture showing the new square hole has egregious reflections of the room light in yellow rectangles and dirty smudges all over which will be cleaned; you should ignore both.
https://dcc.ligo.org/DocDB/0117/D1500044/002/D1500044-v2.PDF is the back panel modified today (the modification is not shown on the drawing, but this is for part number reference).
Tomorrow I will clean this piece and help install it at the -Y OMC side of the shroud assy, while Tyler performs the same type of square-hole cut on the +X panel spare (D1500045).
Preparations:
ASC-AS_C was centered using SRM. It was moving more than I'd have liked but it was OK.
Confirmed that Irises on HAM6 were centered.
The laser beam was on both of OMC QPDs but sadly they were pretty off-centered. Things were swinging more than I'd have liked but anyway I roughly centered the beam on both QPDs using OM1/2/3.
| | P slider (now/before) | Y slider (now/before) | DAC max now |
| OM1 | 120/90 | 510/610 | 7.3k |
| OM2 | 730/-80 | 760/760 | 10k |
| OM3 | -1100/-550 | -184/-74 | 14k |
This means that, due to the cable rerouting/retouching we did when we put the shroud panel on last week (alog 75651), the OMC is hanging at a different angle. That's OK as long as the DACs of OM1/2/3 don't saturate once we pump down.
This also means that both WFSs are not centered at all. That's OK, we can pico.
OMCR was good:
We visually checked the centering on the OMCR diode using a viewer card. It was OK in that it's not close to the edge, so we left it as is. The distance from the edge of the V-dump to the OMCR beam is pretty much the same as in Julian's photo from Jan 31 (alog 75653).
OMC trans video beam doesn't come out of the shroud:
I scanned the OMC PZT2 and was able to see that the DCPDs are flashing, but no beam was visible on the steering mirror of the OMC trans video beam.
We removed the +X+Y-side short shroud panel, and immediately I saw the beam; it was faint but obvious using the IR card and the viewer combined. It was already hitting the steering mirror, and the beam was roughly going in the direction of the video viewport.
Using a ruler and the IR viewer card, we found that the beam height was, very roughly, 4" from the OMCS cage to the steering mirror. Maybe it was a tad low.
Rahul placed a ruler on the ISI table to mark the rough beam position close to the +X door, see his picture.
We put the shroud panel back on and the beam is gone no matter what. We even did the Betsy/TJ/Fil/RichM/Peter/Koji/Calum trick (alog 28944) they did back in 2016 to move the -X-Y-side short panel around more than designed, without success.
It's very hard to find where the beam is at the panel location. Since we know where the beam is at the edge of HAM6 close to the video viewport, we can use a laser pointer to back-trace the beam path and see whether the baffle hole should move in the +X or -X direction, but that won't change the fact that we couldn't bring the beam out with the shroud panel.
The fundamental problem is that the exit hole in the shroud panel is too small.
We cannot find OMC trans unused beam at all:
We also tried to see the unused OMC trans beam that comes out in the -X-Y direction, but couldn't. Maybe the power of that beam is much smaller than the video beam's.
Because of this, and because they did the aforementioned Betsy/TJ/Fil/RichM/Peter/Koji/Calum trick for the old OMC, and because there's no position reproducibility whatsoever once that trick is used and the panel removed and reattached, there's no guarantee that the beam will come out once the panel is installed.
If the panel weren't there, it's unimaginable that the beam would miss the V-dump. Again, the problem is that the hole is too small. That was a problem in 2016, and that is the problem now.
How can we move forward:
Could we NOT reinstall the two problem shroud panels on the -X-Y and +X+Y sides?
We put the OMCS baffles back on except for the -Y side vertical panel, which had a non-standard screw/washer/O-ring configuration. It turns out that was done intentionally back in Aug 2016, because otherwise the OMC trans beam wouldn't come out of the shroud hole (alog 28944).
The shroud panels had finger marks from gloves as well as spots that look like dust particulates (1st attachment). We wiped them using IPA and wipes, which reduced the finger marks (2nd attachment). Though they look much better in the picture, in reality we might have just been spreading it over a larger surface. Some spots didn't move at all. Anyway, we cleaned the panels using an ionized nitrogen gun (for particulates) immediately followed by an alcohol wipe.
There was one particularly bad spot, see Betsy's picture.
3rd and 4th pictures show Rahul and Betsy working on a small horizontal top panel. 5th picture shows the OMC surrounded by the shroud panels.
After attaching the shroud panels (except for the -Y side vertical one), we found that one of the white DCPD cables interfered with the top glass panel, so Betsy pulled the cable up higher in the cable retainer attached to the suspension cage. PZT cable was really close to the other DCPD cable, but the PZT cable doesn't have a retainer, so I bent that cable to form a large arc to avoid interference. On -X side, Rahul found that one of the QPD cables is very close to a BOSEM cable, so he worked on that.
These resulted in some change in the OMCS alignment, and the top OSEM numbers shifted, according to Rahul. My hope is that this is benign enough that we can take care of it using mostly OM2 and OMC.
Rahul measured the TFs for the OMCS and they're OK.
Tomorrow we'll attach the remaining panel, recenter BOSEMs for OMC and OM2, and I'll start the electronics check.
A couple of PR photos, and the one of the panel with the particularly offensive mark - note the residue around it, as if there was a previous attempt to remove it. It did not move for me either; my guess is it's actually a little divot. Will install it as is.
Tagging EPO for Output Mode Cleaner photos.
This morning I re-adjusted the BOSEMs on OM2 (all four of them) and OMCS (T1 and T3).
Also, I attached the one remaining side baffle.
I will take TF measurements and check the health of both the suspensions.