The LVEA has been swept at the conclusion of (most) maintenance activities. Team SQZ was still working on-table (can happen alongside locking) and VAC folks were finishing up taking pictures through HAM6 viewports.
I unplugged the Genie lift in the West bay and coiled up the cord next to it. Otherwise, everything looked okay.
Been thinking about CPS diff stuff lately, and I want to try making some changes to which chambers are connected. Previously, the individual corner cavities (MC/PRCL, MICH, SRCL) were separated: HAM2-3 were connected by CPS diff, BSC123 were a separate set of loops, etc. I've now set it up so that all of the chambers are tied to BSC2. If this is suspected of causing a problem, it is easy to switch back to the old config: the new state is called BSC2_FULL_DIFF_CPS; the old config we used for years was just FULL_DIFF_CPS. Find and replace in SEI_ENV, load, and take SEI_ENV through a down/up cycle.
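The revert described above is just a state-name swap before reloading the node. A minimal sketch (the demo file path below is hypothetical, not the real SEI_ENV guardian code):

```shell
# Demo of the find-and-replace revert: write a stand-in file with the new
# state name, then swap it back to the old name with sed (GNU sed -i).
echo "nominal = 'BSC2_FULL_DIFF_CPS'" > /tmp/sei_env_demo.py
sed -i 's/BSC2_FULL_DIFF_CPS/FULL_DIFF_CPS/g' /tmp/sei_env_demo.py
cat /tmp/sei_env_demo.py
```

After the swap you would still need to load SEI_ENV and take it through a down/up cycle, as noted above.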
The only real risk is that kicking BSC2 would probably trip all of the chambers. Don't do that.
As per WP 11934, the dmt-runtime-config package was updated on h1dmt1 and h1dmt2, followed by a reboot at around 9:40am local time. This was to update the calculation of sensemon2 to be based off of the cleaned data. It is desired to have these values available in CDS, so we updated the dmt2epics IOC to grab the EFF_BNS_RANGE and EFF_RED_SHIFT values and reflect them into EPICS. After testing that we could retrieve the data from the DMT, we added the channels to the EDC and rebooted the daqd around 10:30am local time. The new channels are:
H1:CDS-SENSMON2_BNS_EFF_RANGE_CLEAN_MPC
H1:CDS-SENSMON2_BNS_EFF_RANGE_CLEAN_MPC_GPS
H1:CDS-SENSMON2_BNS_RED_SHIFT_CLEAN
H1:CDS-SENSMON2_BNS_RED_SHIFT_CLEAN_GPS
TJ will watch the new Sensemon2 range plot as we start to lock later today and will update the FOM display if this is working well.
Tue Jun 18 10:08:07 2024 INFO: Fill completed in 8min 3secs
As described in 76326, "kappa_TST is the time-dependent correction factor that tracks the TST stage actuation strength relative to the last time the calibration was updated." The calibration hasn't been updated recently, so if kappa_TST was still changing, we should see it. Francisco has made a script to automatically adjust the ETMX L2L drivealign to compensate for kappa_TST, but this only started this week: 78425.
In October, Ryan's analysis showed that the kappa_TST drift agreed with increasing charge on ETMX from in-lock charge measurements: 73613. Recent in-lock charge measurements in 75456 show the charge hasn't changed much. To do: check on in-lock charge measurements and add a longer time scale plot.
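The idea behind the drivealign compensation above can be sketched in a few lines (all names here are hypothetical, not from Francisco's script): since kappa_TST scales the effective TST actuation strength, dividing the drivealign gain by the measured kappa holds the effective strength at its reference value.

```python
# Minimal sketch (hypothetical names) of compensating kappa_TST drift:
# effective actuation ~ kappa_TST * gain, so scale the gain by 1/kappa
# to keep the product at its calibrated reference value.
def compensated_gain(nominal_gain: float, kappa_tst: float) -> float:
    """Return the drivealign gain that cancels a measured kappa_TST."""
    return nominal_gain / kappa_tst

# Example: a 2% rise in actuation strength (kappa_TST = 1.02)
# calls for a ~2% reduction in the drivealign gain.
print(round(compensated_gain(1.0, 1.02), 4))  # 0.9804
```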
WP11927 TW0 Offload
The copy of the past 6 months of raw minute trend files from h1daqtw0 to h1ldasgw0 via h1daqfw0 was started at 09:39.
Prior to the start of the copy, the past 6 months of files were 'frozen' in a temporary minute_raw_1402759218 directory on tw0 at 08:20 this morning. The NDS process on h1daqnds0 was restarted at 08:34 to serve these data from their temporary path.
FAMIS 20702
About 4 days ago, AMP1 had a slight drop in output power while the NPRO had a slight jump. AMP2 power also had a hit at the same time, but is largely unchanged.
The rise/fall in PMC reflected/transmitted power again looks to have leveled off a bit since last week.
While walking through the LVEA to do the laser transition, things to note: the high bay and clean receiving lights were on, the West bay Genie lift was plugged in, I unplugged an unused extension cord by CO2Y, I heard a cricket by the Y-manifold but couldn't find it, and the HAM3 dust monitor has an extension cord running to it but isn't plugged in.
While Dave was working on moving the tw0 files to an archive disk he noticed that the gpstime command was giving bad times. The date was correct but the time was not:
PDT: 2024-06-18 00:00:00.000000 PDT
UTC: 2024-06-18 07:00:00.000000 UTC
GPS: 1402729218.000000
This was the case on all the h1daq*0 machines. The cause was an old gpstime package. We updated the gpstime package and now it works as expected:
PDT: 2024-06-18 08:17:31.983990 PDT
UTC: 2024-06-18 15:17:31.983990 UTC
GPS: 1402759069.983990
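The corrected output can be sanity-checked by hand. GPS time counts seconds from 1980-01-06 00:00:00 UTC and does not insert leap seconds, so (for 2024 dates) GPS runs 18 s ahead of UTC. A minimal sketch of the conversion (not the gpstime package's implementation):

```python
from datetime import datetime, timezone

# GPS epoch and the current GPS-UTC leap-second offset (18 s since 2017).
GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_UTC_OFFSET = 18  # valid for dates in 2024

def utc_to_gps(dt: datetime) -> float:
    """Convert a timezone-aware UTC datetime to GPS seconds."""
    return (dt - GPS_EPOCH).total_seconds() + GPS_UTC_OFFSET

# The corrected timestamp from this entry:
dt = datetime(2024, 6, 18, 15, 17, 31, 983990, tzinfo=timezone.utc)
print(utc_to_gps(dt))  # 1402759069.98399
```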
TITLE: 06/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: Locked for 8 hours, magnetic injections running. 4 hours of planned maintenance this morning.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
TITLE: 06/18 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We are Observing at 152 Mpc and have been Locked for over an hour now. The wind was pretty bad (up to 40mph) most of my shift; now it's down to 15mph. The first lockloss was definitely from the high wind, but relocking actually wasn't as bad as I expected it to be in 35-39mph winds. We lost lock soon after that, 34 minutes into NLN. Second relock was quick - wind was also finally coming down at this point.
LOG:
23:00UTC Detector Observing and Locked for 5.5 hours
03:09 Lockloss from wind
03:11 Initial Alignment
03:56 IA done, relocking
05:00 NOMINAL_LOW_NOISE
05:08 Observing
05:34 Lockloss
05:35 Initial Alignment
05:56 IA done, relocking
06:38 NOMINAL_LOW_NOISE
06:40 Observing
07:16 Superevent S240618ah
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:55 | PCAL | Francisco, Rick | PCAL Lab | y(local) | PCAL things | 01:45 |
Lockloss @ 06/18 05:34 UTC after only 35 minutes locked. I don't believe this one is wind related.
06:40 UTC Observing
Lockloss @ 06/18 03:09 UTC from the wind
05:08 Observing
Y2L DRIVEALIGN diffs accepted in order to go into Observing. I believe TJ had said something about these specifically, since they get changed while we are relocking (MOVE_SPOTS?). Tagging ISC.
These were from the A2L (Y) that was run earlier in the morning. I hadn't loaded the guardian before we went into Observing, and then forgot to pass off my sticky note to the evening operator. They've now been loaded since we are out of Observing.
TITLE: 06/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 18:33 UTC (6 hrs)
It has been a calm day after minimal trouble relocking (multiple post-initial-alignment locklosses at and before DRMI, with one at CARM_TO_TR). Planned commissioning occurred from 8:30-11:30 and was timely.
Nothing else of note
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:41 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 15:52 |
15:09 | FAC | Karen | Optics, Vac Prep | N | Technical Cleaning | 16:01 |
16:02 | FAC | Karen | MY | N | Technical Cleaning | 17:02 |
16:02 | FAC | Kim | EX | N | Technical Cleaning | 16:45 |
16:31 | SQZ | Sheila, Preet | LVEA | YES | Magnetometer move | 16:34 |
16:39 | SQZ | Naoki, Andrei | LVEA | N | SQZ Rack Work | 16:39 |
16:49 | FAC | Richard, Bubba | Y-Arm-VPW Area | YES | Genie Lift Move | 17:49 |
17:14 | SQZ | Karmeng | Optics Lab | N | SHG SQZ Work | 01:28 |
18:28 | SQZ | Naoki | LVEA | YES | SQZ Table Work | 18:28 |
18:29 | SQZ | Sheila, Preet | LVEA | YES | Magnetometer Move | 18:29 |
22:18 | PHOTO | Chris | X/Y Arm | N | Driving | 00:17 |
J. Kissel
TIA D2000592: S/N S2100832_SN02
Whitening Chassis D2200215: S/N S2300003
Accessory Box D1900068: S/N S1900266
SR785: S/N 77429

I've finally got a high-quality, trustworthy, no-nonsense measurement of the OMC DCPD transimpedance amplifiers' frequency response. For those who haven't seen the saga leading up to today, see the 4-month-long story in LHO:77735, LHO:78090, and LHO:78165. For those who want to move on with their lives, like me: I attach a collection of plots showing the following for each DCPD:

Page 1 (DCPDA) and Page 2 (DCPDB)
- 2023-03-10: The original data set of the previous OMC DCPDs via the same transimpedance amplifier
- 2024-05-28: The last, most recent data set before this, where I *thought* that it was good, even though the measurement setup was bonkers
- 2024-06-11: Today's data

Page 3 (the Measurement Setup)
- The ratio of the measurement setup from 2023-03-10 to 2024-06-11

With this good data set, we see that
- there's NO change between the 2023-03-10 and 2024-06-11 data sets at high frequencies, which matches the conclusions from the remote DAC driven measurements (LHO:78112), and
- there *is* a 0.3% level change in the frequency response at low frequency, which also matches the conclusions from the remote DAC driven measurements.

Very refreshing to finally have agreement between these two methods.

OK -- so -- what's next? Now we can return to the mission of fixing the front-end compensation and balance matrix such that we can
- reduce the impact on the overall systematic error in the calibration, and
- reduce the frequency dependent imbalances that were each discovered in Feb 2024 (see LHO:76232).

Here's the step-by-step:
- Send the data to Louis for fitting.
- Create/install new V2A filters for the A0 / B0 banks.
- Switch over to these filters and accept in SDF.
- Update the pydarm parameter file with new super-Nyquist poles and zeros.
- Measure compensation performance with a remote DAC driven measurement of TIA*Wh*AntiWh*V2A; confirm better-ness / flatness.

Once the IFO is back up and running (does it need to be thermalized?):
- Measure the balance matrix (remember -- SQZ OFF); confirm better-ness / flatness.
- Install the new balance matrix.
- Accept the balance matrix in SDF.

Once the IFO is thermalized:
- Grab a new sensing function.
- Push a new updated calibration.
The data gathered for this aLOG lives in:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Common/Electronics/H1/DCPDTransimpedanceAmp/OMCA/S2100832_SN02/20240611/Data/

# Primary measurements, with DCPD TIA included in the measurement setup (page 1 of the main entry's attached measurement diagrams)
20240611_H1_DCPDTransimpedanceAmp_OMCA_DCPDA_mag.TXT
20240611_H1_DCPDTransimpedanceAmp_OMCA_DCPDA_pha.TXT
20240611_H1_DCPDTransimpedanceAmp_OMCA_DCPDB_mag.TXT
20240611_H1_DCPDTransimpedanceAmp_OMCA_DCPDB_pha.TXT

# DCPD TIA excluded, "measurement setup" alone (page 2 of the main entry's attached measurement diagrams)
20240611_H1_MeasSetup_ThruDB25_PreampDisconnected_OMCA_DCPDA_mag.TXT
20240611_H1_MeasSetup_ThruDB25_PreampDisconnected_OMCA_DCPDA_pha.TXT
20240611_H1_MeasSetup_ThruDB25_PreampDisconnected_OMCA_DCPDB_mag.TXT
20240611_H1_MeasSetup_ThruDB25_PreampDisconnected_OMCA_DCPDB_pha.TXT
Here are fit results for the TIA measurements.

DCPD A:
Fit Zeros: [6.606, 2.306, 2.482] Hz
Fit Poles: [1.117e+04-0.j, 3.286e+01-0.j, 1.014e+04-0.j, 5.764e+00-22.229j, 5.764e+00+22.229j] Hz

DCPD B:
Fit Zeros: [1.774, 6.534, 2.519] Hz
Fit Poles: [1.120e+04-0.j, 3.264e+01-0.j, 1.013e+04-0.j, 4.807e+00-19.822j, 4.807e+00+19.822j] Hz

A PDF showing plots of the results is attached as 20240611_H1_DCPDTransimpedanceAmp_report.pdf. The DCPD A and B data and their fits (left column) next to their residuals (right column) are on pages 1 and 2, respectively. The third page is a ratio between the DCPD A and DCPD B datasets; again, they're overlaid on the left for qualitative comparison and the residual is on the right. I used iirrational. To reproduce, activate the conda environment I set up specifically to run iirrational:

activate /ligo/home/louis.dartez/.conda/envs/iirrational
Then run:

python /ligo/groups/cal/common/scripts/electronics/omctransimpedanceamplifier/fits/fit_H1_OMC_TIA_20240617.py
A full transcript of my commands and the script's output is attached as output.txt. On gitlab the code lives at https://git.ligo.org/Calibration/ifo/common/-/blob/main/scripts/electronics/omctransimpedanceamplifier/fits/fit_H1_OMC_TIA_20240617.py
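As a quick sanity check on fit results like the ones above, the quoted zero/pole lists can be evaluated as a frequency response with scipy. This is a minimal sketch, not the iirrational workflow; the unity gain k=1 is an assumption, since the fit gain isn't quoted here.

```python
import numpy as np
from scipy import signal

# Fitted DCPD A zeros/poles quoted above, as frequencies in Hz.
zeros_hz = np.array([6.606, 2.306, 2.482])
poles_hz = np.array([1.117e4, 3.286e1, 1.014e4,
                     5.764 - 22.229j, 5.764 + 22.229j])

# Map to stable s-plane roots in rad/s (s = -2*pi*f convention;
# the complex pair stays a left-half-plane conjugate pair).
z = -2 * np.pi * zeros_hz
p = -2 * np.pi * poles_hz

# Evaluate the magnitude at a few frequencies; with 5 poles and 3 zeros
# the response rolls off above the two ~10 kHz poles.
f = np.array([10.0, 1e3, 1e5])
_, h = signal.freqs_zpk(z, p, 1.0, worN=2 * np.pi * f)
for fi, hi in zip(f, h):
    print(f"{fi:>8.0f} Hz: {20 * np.log10(abs(hi)):8.2f} dB")
```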
Here's what I think comes next, in four quick and easy steps:
1. Install new V2A filters (FM6 is free for both A0 and B0), but don't activate them.
2. Measure the new balance matrix element parameters (most recently done in LHO:76232).
3. Update L43 in the pyDARM parameter file template at /ligo/groups/cal/H1/ifo/pydarm_H1.ini (and push to git). N.B. doing this too soon, without actually changing the IFO, will mess up reports! Best to do this right before imposing the changes on the IFO to avoid confusion.
4. When there's IFO time, ideally with a fully locked and thermalized IFO:
4.a. Move all DARM control to DCPD channel B (double the DCPD_B gain and bring the DCPD_A gain to 0).
4.b. Activate the new V2A filter in DCPD_A0 FM6 and deactivate the current one.
4.c. Populate the new balance matrix elements for DCPD A (we think it's the first column, but this remains to be confirmed).
4.d. Move DARM control to DCPD channel A (bring both gains back to 1, then do the reverse of 4.a).
4.e. Repeat 4.b and 4.c for DCPD channel B, then bring both gains back to 1 again.
4.f. Run simulines (in NLN_CAL_MEAS) and a broadband measurement.
4.g. Generate a report, verify it, and if all is good then export it to the front end (make sure to do step 3 before generating the report!).
4.h. Restart the GDS pipeline (only after marking the report as valid and uploading it to the LHO LDAS cluster).
4.i. Twiddle thumbs for about 12 minutes until GDS is back online.
4.j. Take another simulines and broadband (good to look at GDS/PCAL).
4.k. Back to NLN and confirm the TDCFs are good.