Search criteria
Section: H2
Task: DAQ
WP12321 Add FMCSSTAT channels to EDC
Erik, Dave:
Recently FMCS-STAT was expanded to monitor FCES, EX and EY temperatures. These additional channels were added to the H1EPICS_FMCSSTAT.ini file. A DAQ and EDC restart was required.
WP12339 TW1 raw minute trend file offload
Dave:
The copy of the last 6 months of raw minute trends from the almost full SSD-RAID on h1daqtw1 was started. h1daqnds1 was temporarily reconfigured to serve these data from their temporary location while the copy proceeds.
A restart of h1daqnds1 daqd was needed, this was done when the DAQ was restarted for EDC changes.
WP12333 New digivideo server and network switch, move 4 cameras to new server
Jonathan, Patrick, Fil, Dave:
A new Cisco POE network switch, called sw-lvea-aux1, was installed in the CER below the current sw-lvea-aux. This is a dual powered switch, both power supplies are DC powered. Note, sw-lvea-aux has one DC and one AC power supply, this has been left unchanged for now.
Two multimode fiber pairs were used to connect sw-lvea-aux1 back to the core switch in the MSR.
For testing, four relatively unused cameras were moved from h1digivideo1 to the server h1digivideo4. These are MC1 (h1cam11), MC3 (h1cam12), PRM (h1cam13) and PR3 (h1cam14).
The new server IOC is missing two EPICS channels compared with the old IOC, _XY and _AUTO. To green up the EDC despite these missing channels, a dummy IOC is being run (see alog).
The MC1, MC3, PRM and PR3 camera images on the control room FOM (nuc26) started showing compression issues, mainly several seconds of smeared green/magenta horizontal stripes every few minutes. This was tracked to the CPU being maxed out, and has been temporarily fixed by stopping one of these camera viewers.
EY Timing Fanout Errors
Daniel, Marc, Jonathan, Erik, Ibrahim, Dave:
Soon after lunchtime the timing system started flashing RED on the CDS overview. Investigation tracked this down to the EY fanout, port_5 (numbering from zero, so the sixth physical port). This port sends the timing signal to h1iscey's IO Chassis LIGO Timing Card.
Marc and Dave went to EY at 16:30 with spare SFPs and timing card. After swapping these out with no success, the problem was tracked to the fanout port itself. With the original SFPs, fiber and timing card, using port_6 instead of port_5 fixed the issue.
For initial SFP switching, we just stopped all the models on h1iscey (h1iopiscey, h1iscey, h1pemey, h1caley, h1alsey). Later when we replaced the timing cards h1iscey was fenced from the Dolphin fabric and powered down.
The operator put all EY systems (SUS, SEI and ISC) into a safe mode before the start of the investigation.
DAQ Restart
Erik, Dave:
The 0-leg restart was non-optimal. A new EDC restart procedure was being tested, whereby both trend-writers were turned off before h1edc was restarted to prevent channel-hopping which causes outlier data.
The reason for the DAQ restart was an expanded H1EPICS_FMCSSTAT.ini file.
After the restart of the 0-leg it was discovered that there were some naming issues with the FMCS STAT FCES channels. Erik regenerated a new H1EPICS_FMCSSTAT.ini and the EDC/0-leg were restarted again.
Following both 0-leg restarts, FW0 spontaneously restarted itself after running only a few minutes.
When the EDC and the 0-leg were stable, the 1-leg was restarted. During this restart NDS1 came up with a temporary daqdrc serving TW1 past data from its temporary location.
Reboots/Restarts
Tue18Feb2025
LOC TIME HOSTNAME MODEL/REBOOT
09:45:03 h1susauxb123 h1edc[DAQ] <<< first edc restart, incorrect FCES names
09:46:02 h1daqdc0 [DAQ] <<< first 0-leg restart
09:46:10 h1daqtw0 [DAQ]
09:46:11 h1daqfw0 [DAQ]
09:46:12 h1daqnds0 [DAQ]
09:46:19 h1daqgds0 [DAQ]
09:47:13 h1daqgds0 [DAQ] <<< GDS0 needed a restart
09:52:58 h1daqfw0 [DAQ] <<< Spontaneous FW0 restart
09:56:21 h1susauxb123 h1edc[DAQ] <<< second edc restart, all channels corrected
09:57:44 h1daqdc0 [DAQ] <<< second 0-leg restart
09:57:55 h1daqfw0 [DAQ]
09:57:55 h1daqtw0 [DAQ]
09:57:56 h1daqnds0 [DAQ]
09:58:03 h1daqgds0 [DAQ]
10:03:00 h1daqdc1 [DAQ] <<< 1-leg restart
10:03:12 h1daqfw1 [DAQ]
10:03:13 h1daqnds1 [DAQ]
10:03:13 h1daqtw1 [DAQ]
10:03:21 h1daqgds1 [DAQ]
10:04:07 h1daqgds1 [DAQ] <<< GDS1 restart
10:04:48 h1daqfw0 [DAQ] <<< Spontaneous FW0 restart
17:20:37 h1iscey ***REBOOT*** <<< power up h1iscey following timing issue on fanout port
17:22:17 h1iscey h1iopiscey
17:22:30 h1iscey h1pemey
17:22:43 h1iscey h1iscey
17:22:56 h1iscey h1caley
17:23:09 h1iscey h1alsey
J. Kissel echoing E. Dohmen

Just a bit of useful info from E.J. that I think others might be interested in (and giving myself bread crumbs to find the info in the future):
- The PRODUCTION (H1 included) systems are only at RCG 5.1.4, with no plans on upgrading soon.
- The h1susetmx computer, however, is at a prototype version of 5.3.0 in order to support LIGO DAC testing (see LHO:79735).
- One can find the running release notes for modern versions (since RCG 2.0) of the RCG at https://git.ligo.org/cds/software/advligorts/-/blob/master/NEWS?ref_type=heads
Jennie, Jenne, Sheila
I pushed Jenne's updated cleaning but cannot check if this is better or worse until our problems getting data from nds2 are fixed.
I ran the following:
cd /ligo/gitcommon/NoiseCleaning_O4/Frontend_NonSENS/lho-online-cleaning/Jitter/CoeffFilesToWriteToEPICS/
python3 Jitter_writeEPICS.py
I accepted these DIFFs in the OAF model in OBSERVE.snap, but we might have to revert them before the end of today's commissioning period if we find out the cleaning is worse.
GPS 139627823 - 16 mins quiet time from last night.
11:12:41 UTC - 11:18:38 UTC quiet time just before cleaning implemented.
11:19:55 UTC new cleaning drops.
11:31:30 UTC end of quiet time.
I took the following jitter comparison measurements
Old cleaning quiet time: 11:19:55 UTC 04/04/2024 light blue
New cleaning quiet time: 13:50:05 UTC 04/04/2024 red
It's hard to tell if the new cleaning is better, and I have reverted the coefficients in OBSERVE.snap to what they were this morning.
J. Kissel, O. Patane

A follow-up from yesterday's work on installing the infrastructure of the upgrades to the ETM and TMS watchdog systems. In this aLOG I cover how I've filled out the infrastructure in order to obtain the calibrated BLRMS that forms the trigger signal for the user watchdog.

Remember, any sensible BLRMS system should:
(1) Take a signal and filter it with a (frequency) band-limiting filter, then
(2) Take the square, then the average, then the square root, i.e. the RMS, then
(3) Low-pass the RMS signal, since only the "DC" portion of the RMS has interesting frequency content.
As a bonus, if your signal is not calibrated, then you can add:
(0) Take the input to the band-limiting filter and calibrate it (and through the power of linear algebra, it doesn't really matter whether you band-limit first and *then* calibrate).

This screenshot shows the watchdog overview screen conveying this BLRMS system. Here're the details of the BANDLIM and RMSLP filters for each of the above steps:

(0) H1:SUS-ETMX_??_WD_OSEMAC_BANDLIM_??
FM6 ("10:0.4") and FM10 ("to_um") are exact copies of the calibration filters that are, and have "always been," in the OSEMINF banks. These are highlighted in the first attachment in yellow.
FM6 :: ("10:0.4") :: zpk([10],[0.4],1,"n") :: inverts the frequency response of the OSEM satellite amp.
FM10 :: ("to_um") :: zpk([],[],0.0233333,"n") :: converts [ADC counts] into [um] assuming an ideal OSEM which has a response of 95 [uA/mm], differentially read out with 242 kOhm transimpedance and digitized with a 2^16 / 40 [ct/V] ADC.
I also copied over the GAIN values from the OSEMINF banks, such that each OSEM trigger signal remains "normalized" to an ideal 95 [uA/mm] OSEM. These are highlighted in dark green in the first attachment.

(1) H1:SUS-ETMX_??_WD_OSEMAC_BANDLIM_??
FM1 :: ("acBandLim") :: zpk([0;8192;-8192],[0.1;9.99999;9.99999],10.1002,"n") :: 0.1 to 10 Hz band-pass.

(2) This is a major part of the upgrade -- the front-end code that does the RMS was changed from the nonsense "cdsRms" block (see LHO:1265) to a "cdsTrueRMS" block (see LHO:19658).

(3) H1:SUS-ETMX_??_WD_OSEMAC_RMSLP_??
FM1 :: ("10secLP") :: butter("LowPass",4,0.1) :: 4th order Butterworth filter with a corner frequency at 0.1 Hz, i.e. a 10 second low-pass. This is highlighted in magenta in the second attachment.

These are direct copies from other newer suspension models that had this infrastructure in place. I've committed the filter files to the userapps repo, /opt/rtcds/userapps/release/sus/h1/filterfiles/ -- H1SUSETMX.txt, H1SUSETMY.txt, H1SUSETMXPI.txt, H1SUSETMYPI.txt, H1SUSTMSX.txt, H1SUSTMSY.txt are all committed as of rev 27217. All of these settings were captured in each model's safe.snap. I've not yet accepted them in the OBSERVE.snaps.
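The band-limit / true-RMS / low-pass recipe can be mimicked offline; here's a minimal Python sketch (using scipy filters as my own stand-in for the front-end filter modules and the cdsTrueRMS block -- filter orders are illustrative choices, not the production designs):

```python
import numpy as np
from scipy import signal

def blrms(x, fs, band=(0.1, 10.0), lp_corner=0.1):
    """Band-limited RMS: band-pass, square, low-pass the mean-square, sqrt."""
    # (1) band-limiting filter (4th-order Butterworth band-pass)
    sos_bp = signal.butter(4, band, btype='bandpass', fs=fs, output='sos')
    x_bl = signal.sosfilt(sos_bp, x)
    # (2) square, then (3) low-pass the mean-square and take the square root;
    # the low-pass plays the role of the averaging in the RMS
    sos_lp = signal.butter(4, lp_corner, btype='low', fs=fs, output='sos')
    ms = signal.sosfilt(sos_lp, x_bl**2)
    return np.sqrt(np.clip(ms, 0.0, None))

# sanity check: a unit-amplitude 1 Hz sine has RMS 1/sqrt(2) ~ 0.707
# once the 10-second low-pass has settled
fs = 256
t = np.arange(0, 600, 1/fs)
out = blrms(np.sin(2*np.pi*1.0*t), fs)
```

After the low-pass settles, `out` converges to ~0.707, the RMS of an in-band unit sine.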
Here's a handy script that demos using the python bindings to foton in order to easily populate simple filters from a python script. I've only used this from the control room workstations, whose environment has been already built up for me, so I can't claim any knowledge of details about what packages this script needs. But, if you have the base cds conda environment this "should work."
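The attached script itself isn't reproduced here, but the designs such a script would install can be sanity-checked outside foton. Here's a minimal sketch using scipy (my substitution -- not the foton bindings) to verify the butter("LowPass",4,0.1) design string used in the RMSLP bank, at the 16384 Hz model rate:

```python
from scipy import signal

# foton's butter("LowPass",4,0.1): 4th-order Butterworth low-pass,
# 0.1 Hz corner (-3 dB point), at the 16384 Hz model rate.
# Second-order sections keep the design numerically sane at such a
# low normalized corner frequency.
sos = signal.butter(4, 0.1, btype='low', fs=16384, output='sos')

# evaluate the response at the corner and one decade above
_, h = signal.sosfreqz(sos, worN=[0.1, 1.0], fs=16384)
```

The magnitude at 0.1 Hz should be 1/sqrt(2) (-3 dB), and a 4th-order filter rolls off 80 dB per decade, so the 1 Hz response should be down by roughly 1e-4.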
As the first part of the TW0 raw minute trend file offload, tw0 is now writing to a new area, freeing up the old files for transfer.
nds0 was restarted at 10:44 PST to serve the past 6 months of data from their temporary location as the files are being transferred to h1daqframes-0. The file copy takes about 30 hours.
File copy was started 11:50 Tue. As of 16:05 43 of 256 dirs had been copied. ETA 13:10 Wed.
Copy completed at 13:15:44 PST. I will do the nds0 change and file deletion tomorrow morning.
F. Mera, D. Sigg, M. Pirello
Per WP11548:
The Squeezer 4 rack was modified by duplicating the EL3692 terminals in positions M13, M14, M19 and M20. The wiring was modified to have readings on CH1 only.
The TwinCAT solution was updated accordingly; the system was restarted and successfully verified with the new configuration.
SQZ 4 rack serial: S2101205
Anamaria, TJ, Jonathan, Dave:
A new h1calcs model was installed. Two fast and two slow DAQ channels were renamed.
++: slow channel H1:PEM-CS_ACC_SQZT0_SQZLASER_X_MON added to the DAQ
++: slow channel H1:PEM-CS_ACC_SQZT7_HOMODYNE_X_MON added to the DAQ
++: fast channel H1:PEM-CS_ACC_SQZT0_SQZLASER_X_DQ added to the DAQ
++: fast channel H1:PEM-CS_ACC_SQZT7_HOMODYNE_X_DQ added to the DAQ
--: slow channel H1:PEM-CS_ACC_ISCT6_SQZLASER_X_MON removed from DAQ
--: slow channel H1:PEM-CS_ACC_SQZT6_HOMODYNE_X_MON removed from DAQ
-!: fast channel H1:PEM-CS_ACC_ISCT6_SQZLASER_X_DQ removed from DAQ ***GDS-CHAN***
-!: fast channel H1:PEM-CS_ACC_SQZT6_HOMODYNE_X_DQ removed from DAQ ***GDS-CHAN***
Total number of DAQ changes = 8
(4 additions, 4 deletions)
Note that the fast channels are in the GDS broadcaster channel list. I initially made the change by hand for h1daqgds[0,1] and Jonathan then made the changes permanent in puppet.
h1daqgds0 daqd was down for an extended period while I made the first hand-edit.
These channels were remnants of the pem changes from O3 to O4 (there is no more SQZT6 or ISCT6), and we overlooked changing them earlier. As such, people who use static channel lists might run into issues with the disappearance of the ISCT6 and SQZT6 channels.
Tagging DetChar.
J. Kissel, R. Savage

After Rick and I conversed about the systems-level compromises in play (see LHO:69175) when placing the newer 410.2 Hz PCALX line near the 410.3 Hz PCALY line for comparisons like that in LHO:69290, we agreed to push forward with the plans discussed in LHO:69175:
(1) Move the PCALX 410.2 Hz PCALXY comparison line further away from the pre-existing PCALY 410.3 Hz TDCF line. Joe, for entirely different reasons, recommends 0.5 Hz separation instead of 0.1 Hz. THIS ALOG. The new frequency is H1:CAL-PCALX_PCALOSC2_OSC_FREQ = 409.8 Hz.
(2) Update the "DARM model transfer function values at calibration line frequencies" EPICs records for the PCALX 410.2 Hz line. NOT YET DONE, SEE BELOW.
(3) Revert all the DEMOD band-passes to have a pass band that's +/- 0.1 Hz wide (what we had in O3). DONE ALREADY, and now for 410.3 Hz: see LHO:69265 and THIS ALOG.
(4) Revert all DEMOD I & Q low-passes to a 10 second time constant, i.e. a 0.1 Hz corner frequency. DONE ALREADY, and now for 410.3 Hz: see LHO:69265 and THIS ALOG.
(5) Change COH_STRIDE back to 10 seconds to match the low-pass, and change the BUFFER_SIZE back to 13.0 in order to preserve the rolling average of 2 minutes. DONE ALREADY, and now for 410.3 Hz: see LHO:69265 and THIS ALOG.

I've also modified the three band-pass filters in the SIG banks of the special segregated PCALX DEMOD for the PCALXY comparison (i.e. H1:CAL-CS_TDEP_PCAL_X_COMPARE_PCAL_DEMOD_SIG, _EXT_DEMOD_SIG, _ERR_DEMOD_SIG). These had a pass band width of 0.01 Hz, so I've created a new band-pass for 409.8 Hz, butter("BandPass",6,409.79,409.81). It lives in FM1, and I've copied the old 410.2 Hz band-pass over to FM2.
In order to have *all* of our ducks in a row with the line move, we still need to do:
(2) Update the "DARM Model transfer function values at calibration line frequencies" EPICs records for the PCALX 410.2 Hz line, but that also means we need to do a step not listed above:
(0) Update the pydarm_H1.ini file to reflect that the PCALX comparison line is now at 409.8 Hz, and
(6) Let the GDS team know that there's a calibration line frequency change and they need to update the GDS line subtraction pipeline.

This line frequency change is in play as of 2023-05-04 00:20 UTC. Stay tuned! Here's the latest list of calibration lines:

Freq (Hz)  Actuator       Purpose                         Channel that defines Freq       Since O3
15.6       ETMX UIM (L1)  SUS \kappa_UIM excitation       H1:SUS-ETMY_L1_CAL_LINE_FREQ    Amplitude Change on Apr 2023 (LHO:68289)
16.4       ETMX PUM (L2)  SUS \kappa_PUM excitation       H1:SUS-ETMY_L2_CAL_LINE_FREQ    Amplitude Change on Apr 2023 (LHO:68289)
17.1       PCALY          actuator kappa reference        H1:CAL-PCALY_PCALOSC1_OSC_FREQ  Amplitude Change on Apr 2023 (LHO:68289)
17.6       ETMX TST (L3)  SUS \kappa_TST excitation       H1:SUS-ETMY_L3_CAL_LINE_FREQ    Amplitude Change on Apr 2023 (LHO:68289)
33.43      PCALX          Systematic error lines          H1:CAL-PCALX_PCALOSC4_OSC_FREQ  New since Jul 2022 (LHO:64214, LHO:66268)
53.67      |              |                               H1:CAL-PCALX_PCALOSC5_OSC_FREQ  Frequency Change on Apr 2023 (LHO:68289)
77.73      |              |                               H1:CAL-PCALX_PCALOSC6_OSC_FREQ  New since Jul 2022 (LHO:64214, LHO:66268)
102.13     |              |                               H1:CAL-PCALX_PCALOSC7_OSC_FREQ  |
283.91     V              V                               H1:CAL-PCALX_PCALOSC8_OSC_FREQ  V
409.8      PCALX          PCALXY comparison               H1:CAL-PCALX_PCALOSC2_OSC_FREQ  New since Jan 2023, Frequency Change THIS ALOG
410.3      PCALY          f_cc and kappa_C                H1:CAL-PCALY_PCALOSC2_OSC_FREQ  No Change
1083.7     PCALY          f_cc and kappa_C monitor        H1:CAL-PCALY_PCALOSC3_OSC_FREQ  No Change
n*500+1.3  PCALX          Systematic error lines          H1:CAL-PCALX_PCALOSC1_OSC_FREQ  No Change (n=[2,3,4,5,6,7,8])
As of git hash ac191a90, I've changed the pydarm_H1.ini pydarm parameter file in order to make this change:

- LINE 512 cal_line_cmp_pcalx_frequency = 410.2
+ LINE 512 cal_line_cmp_pcalx_frequency = 409.8

This takes care of item (0) above.
The CDS Data Acquisition System lookback times were recently extended when we upgraded to the new Linux file systems with their 150TB RAIDs.
Both h1daqnds0 and h1daqnds1 are now running at nominal disk usage, with a wiper script maintaining the required amount of free disk space.
The lookback times for h1daqnds0 and h1daqnds1 can be obtained by running the lookback command. Note that control room users can ignore the minute_trend line, this refers to GWF framed minute_trends which are only used by LDAS for archival. CDS minute trends have no limits on their lookbacks (trends go back to the channel's creation date).
The wiper is currently configured to maintain 84 days (3 months) of second_trends and 28 days of minute_trends; the rest of the disk is given to full framed data (currently about 2 months' worth).
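The wiper script itself isn't shown here; a minimal sketch of the retention side of such a script (function name and file layout are hypothetical, not the actual CDS wiper):

```python
import os
import time

def wipe_older_than(trend_dir, max_age_days, now=None):
    """Remove files older than max_age_days from trend_dir; return the
    removed paths. A toy stand-in for a wiper enforcing e.g. 84 days of
    second trends (the real wiper also targets a free-space threshold)."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    removed = []
    for name in sorted(os.listdir(trend_dir)):
        path = os.path.join(trend_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(path)
    return removed
```

In practice such a script runs from cron and deletes the oldest frame directories first, keeping free space above a configured floor.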
Each line shows the lookback time in days and the oldest file's GPS time along with its corresponding local time.
david.barker@opslogin0: lookback
DAQ0
2023-05-03 08:10 PDT
full 57d 1362168064 [2023-03-06 12:00 PST]
second_trend 84d 1359850200 [2023-02-07 16:09 PST]
minute_trend 28d 1364691600 [2023-04-04 17:59 PDT]
DAQ1
2023-05-03 08:13 PDT
full 57d 1362174848 [2023-03-06 13:53 PST]
second_trend 84d 1359850200 [2023-02-07 16:09 PST]
minute_trend 28d 1364691600 [2023-04-04 17:59 PDT]
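The GPS-to-local-time mapping in the listing above can be reproduced by hand; a minimal sketch, with the GPS-UTC leap-second offset hard-coded at 18 s (valid as of 2023):

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_MINUS_UTC = 18  # leap seconds as of 2023; update if IERS adds more

def gps_to_utc(gps_seconds):
    """Convert a GPS time (seconds since 1980-01-06) to a UTC datetime."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_MINUS_UTC)

# e.g. the oldest full frame on DAQ0 above
dt = gps_to_utc(1362168064)  # -> 2023-03-06 20:00:46 UTC, i.e. 12:00:46 PST
```

The lookback listing truncates to the minute, hence "12:00 PST".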
WP11163 Remove IFO_UNDISTURBED Guardian node
TJ, Jonathan, Dave:
TJ removed the IFO_UNDISTURBED node from h1guardian1. I removed its channels from H1EPICS_GRD.ini. Jonathan removed H1:GRD-IFO_UNDISTURBED_OK from the DAQ Broadcasters. EDC+DAQ restart were required.
WP11165 h1sqz model change
Daniel, Dave:
Daniel's latest h1sqz model was installed. This produced all possible types of DAQ changes (except a datatype change):
fast channels removed | 3 |
fast channels rate increased | 3 |
fast channels rate decreased | 4 |
slow channels removed | 14 |
fast channels added | 5 |
slow channels added | 11 |
WP11153 Add h1cdssdf slow controls SDF
Jonathan, Erik, Dave:
A new h1cdssdf was added to h1ecatmon0 using the next available dcuid (1039) and specific_cpu (16).
Its monitor.req initially monitors the lock-loss-alert settings.
WP11168 New h1oaf model
Jenne, Dave:
Jenne's new h1oaf model was installed. DAQ changes:
fast channels removed | 0 |
slow channels removed | 11 |
fast channels added | 7 |
slow channels added | 11 |
WP11166 New h1calcs model
Jeff, Dave:
Jeff's new h1calcs model was installed. DAQ changes:
fast channels removed | 0 |
slow channels removed | 28 |
fast channels added | 0 |
slow channels added | 136 |
DAQ Restart
Dave, Jonathan:
DAQ and EDC were restarted to support the above changes. No problems except GDS0 needed a second restart to sync its channel list.
DAQ Changes
Full Frame:
Detailed frames lists are available in attached TAR file.
channels removed | 79 |
channels with rate change | 7 |
channels added | 173 |
GDS Broadcaster:
channels removed | 1 (H1:GRD-IFO_UNDISTURBED_OK) |
channels added | 0 |
Changes to the CDS Overview MEDM:
Expanded the CDS section to include the new h1cdssdf difference report. Shrunk the staging-building bakeout in the Vacuum section to make room.
Front End status colours are now ready for O4, previous purple, orange, yellow blocks are now RED.
Tue02May2023
LOC TIME HOSTNAME MODEL/REBOOT
08:51:07 h1lsc0 h1sqz
08:51:42 h1oaf0 h1oaf
08:52:05 h1oaf0 h1calcs
09:19:42 h1lsc0 h1sqz <<< Daniel second change, before DAQ restart
09:22:47 h1daqdc0 [DAQ]
09:22:57 h1daqfw0 [DAQ]
09:22:58 h1daqnds0 [DAQ]
09:22:58 h1daqtw0 [DAQ]
09:23:05 h1daqgds0 [DAQ]
09:23:24 h1susauxb123 h1edc[DAQ]
09:24:00 h1daqgds0 [DAQ] <<<< gds0 second restart
09:25:59 h1daqdc1 [DAQ]
09:26:12 h1daqfw1 [DAQ]
09:26:12 h1daqtw1 [DAQ]
09:26:14 h1daqnds1 [DAQ]
09:26:21 h1daqgds1 [DAQ]
11:58:10 h1seib1 ***REBOOT*** <<< power cycle h1seib1 following IO Chassis power down for DC supply work
11:59:42 h1seib1 h1iopseib1
11:59:55 h1seib1 h1isiitmy
12:00:08 h1seib1 h1hpiitmy
J. Kissel, J. Betzwieser
ECR E2300125
IIET Ticket 27801
WP 11166

Executive Summary: Based on the *very* encouraging results from last week's install of the prototype of real-time monitoring of the systematic error lines that we inherited from LLO (see LHO:68961 for code changes, and the success story a mere 3 days later in LHO:69157), we crave finalizing this prototype infrastructure by incorporating standard stuff that other time-dependent calculations are doing, and re-casting the final answers into units that we regularly use elsewhere. In addition, we're also making some changes to the demodulation parameters given the systems-level compromises we discovered at the end of last week (LHO:69175), namely changing undocumented hard-coded values into user-definable EPICs records.

I've compiled the model with the below detailed changes, so the model is ready for build, install, and restart tomorrow. This will result in the removal of 28 EPICs records and the addition of 164 EPICs records.

This aLOG covers the changes to the following front-end model library parts:
/opt/rtcds/userapps/release/cal/common/models/
    CAL_LINE_MONITOR_MASTER.mdl
    CAL_CS_MASTER.mdl
as well as the following c-code chunk:
/opt/rtcds/userapps/release/cds/common/src/
    RING_BUFFER.c
These common parts have had their changes committed to the above mentioned locations in the userapps SVN repo.

(1) In order to support LLO's independent, front-end subtraction of calibration lines, Joe had installed "gated running median" (GRM) infrastructure on the live calculations of the (DELTAL / PCAL) and (DELTAL / SUS) transfer functions -- calculations on the outputs of those ratios that come directly from the DEMOD I and Q outputs. The GRM process is "just" a copy and paste from what we've been using for years to smooth out the answers for the time-dependent correction factors (TDCFs).
Indeed, the front-end GRM process is itself a copy of the gstlal-calibration ("GDS") pipeline's function that's used in production of TDCF-corrected GDS-CALIB_STRAIN. The I and Q outputs are good enough for what Joe needs to subtract them out of DELTAL. However, in order to produce a true systematic error, the DELTAL / PCAL transfer function needs to be corrected for flaws in CALCS and then converted into magnitude and phase systematic error transfer functions for ease of interpretation. But -- the trends of the answer from this prototype of the systematic error transfer functions are quite noisy (2023-04-28_SYSERROR_LINES_FirstResults_30minutetrend.png from LHO:69157). As such, we've now moved the gated running median down-stream of the simple ratio of I & Q demod outputs, such that *both* the subtraction output *and* the systematic error output receive the noise-cleaning benefits of the standard GRM process.

(2) For better or worse, the "gated running median" is a bubble-gum and duct tape with super glue process for the DEMOD I and Q outputs:
(A low-passed) The DEMOD I and Q banks themselves have low-passes in them. As discussed in LHO:69175, these low-passes correspond to "FFT lengths" if one were doing the analysis in the frequency domain. This has traditionally been a 0.1 Hz corner frequency low-pass, corresponding to a "10 second FFT."
(B buffered) In the first stage of the GRM process, this low-passed I and Q data is held in a ring buffer for 10 seconds (and in the front-end this duration *used* to be a hard-coded input of RING_BUFFER.c as 163840 samples, i.e. model_sampling_frequency * buffer_size_sec = 16384 Hz * 10 sec = 163840).
(C gated) The output of RING_BUFFER.c is then fed into a gating function that either passes the current values through, or holds the previous 10 seconds' value based on the last cycle's median. This reduces the sensitivity to glitches in the current 10 seconds.
(D down-sampled and medianed) The current cycle's low-passed, buffered, gated output is then passed into a median function, which has an "array size" parameter and a "stride" parameter. These parameters were hard-coded in the front-end to a stride (or sample rate) of 1/16 seconds = 0.0625 seconds -- essentially down-sampling the data -- with an array size, its own collection, of 2048 samples (16384 / 8), so it calculates the median of 8 seconds of 16 Hz data.
(E averaged) The current cycle's low-passed, buffered, gated, medianed output is then fed into an averaging buffer, with the same stride (sample rate) of 0.0625 sec, and its own buffer of 160 samples, i.e. 10 seconds of 16 Hz data.
(F up-sampled) Finally, the current cycle's low-passed, buffered, gated, medianed, averaged 16 Hz signal is passed through a spline algorithm to up-sample the data back to 16384 Hz. Thus *it* has a hard-coded parameter that serves as the up-sampling ratio of 16384 Hz / 16 Hz = 1024.

It's gross. Anyways, I walk through all this to illuminate that -- for the DELTAL / PCAL and DELTAL / SUS transfer functions, which are now mapped through the GRM process (see (1) above) -- I've converted all of these simulink hard-coded numbers, which are all inter-related through the choice of FFT length, into user-definable EPICs records. This way, if we ever decide to make a *different* choice of FFT length, then we can change all of the inter-related parameters at the same time, in a hopefully intelligent way, without an ECR to change the front-end model. Importantly, all the front-end computed TDCF values are the same, and are still using all the old hard-coded values. This is *just* for the newly commissioned DELTAL/PCAL and DELTAL/SUS transfer functions that are now newly passed through their own GRM process.
(3) Here's an easy one -- for the DELTAL / PCAL transfer function, which had been converted from the TF's real and imaginary parts to magnitude and phase -- I've added an additional readback that converts the phase from radians (as it was) to degrees.

(4) It's important that the systematic error monitor has an uncertainty associated with it. To date, we've been using *only* a coherence-based uncertainty, the usual Bendat and Piersol sqrt( (1 - coh) / (2 * navg * coh) ). This covers the *statistical* uncertainty of this transfer function nicely, but it does not reflect the fixed uncertainty in the model of the amplitude of PCAL displacement -- typically between 0.25 - 0.5%. So, if one of the PCAL lines has a particularly loud signal-to-noise ratio, the measurement uncertainty can be quite low, lower than the uncertainty in the model of PCAL displacement. As such, I've added an additional EPICs record where one can install the fixed uncertainty of the PCAL amplitude, and that uncertainty will be added in quadrature to the coherence-based measurement uncertainty.

(5) Finally, while we're still exploring what the best options are for the "FFT length," i.e. the frequency of the low-pass filter of the I & Q demods, we want to make sure that there's no fundamental code limit to choosing a long FFT -- say 100 seconds. As such, we've modified /opt/rtcds/userapps/release/cds/common/src/RING_BUFFER.c, which was formerly limited to a maximum buffer (a hard-coded definition in the c-code itself) of 20 seconds (16384 Hz * 20 seconds = 327680 samples). After some verification that the front-end process could handle this increased limit (see TST:15369), we've increased this limit to 100 seconds (i.e. 16384 Hz * 100 seconds = 1638400 samples).

I'll post screenshots of the changes in a series of comments below.
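The gate-plus-running-median idea behind the GRM chain can be mimicked offline; here's a toy Python sketch (the MAD-based gate threshold and buffer length are my own illustrative choices, not the RING_BUFFER.c implementation):

```python
import numpy as np
from collections import deque

def gated_running_median(x, buf_len=160, gate_factor=5.0):
    """Toy gated running median: hold the previous output when a new
    sample deviates from the running median by more than gate_factor
    times the running median absolute deviation (gate choice is mine)."""
    buf = deque(maxlen=buf_len)
    out = np.empty(len(x), dtype=float)
    prev = 0.0
    for i, v in enumerate(x):
        if len(buf) >= 8:
            med = np.median(buf)
            mad = np.median(np.abs(np.asarray(buf) - med)) or 1e-12
            if abs(v - med) > gate_factor * mad:
                v = prev          # gate: hold previous value through a glitch
        buf.append(v)
        prev = np.median(buf)     # running median of the (gated) buffer
        out[i] = prev
    return out
```

On a constant input with a single large glitch, the gate holds the previous value and the output stays flat, which is the behavior the GRM is after.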
J. Kissel, E. Goetz

Executive Summary: We are now calculating the systematic error in H1:CAL-DELTAL_EXTERNAL_DQ "live," constantly, at the TDCF line and systematic error monitor line frequencies (i.e. all PCAL calibration line frequencies), and the DELTAL_EXTERNAL / PCAL transfer functions are all showing values close to 1 in magnitude and 0 degrees in phase, with around ~5% deviation in magnitude and ~3 deg deviation in phase.

Details: Some links to the details regarding the installation and commissioning of the new calculation of calibration systematic error in CALCS, which only started this past Tuesday:
1) First installation on Tuesday Apr 25 2023: LHO aLOG 68961
2) Filling out the new infrastructure MEDM screens: LHO aLOG 68999
3) Installing and updating DEMOD filters: LHO aLOG 69112, LHO aLOG 69117, and LHO aLOG 69143
4) Populating the "DARM model transfer function values at calibration line frequencies for TDCFs and systematic error monitoring" (aka "EPICS records"): LHO aLOG 69061 and LHO aLOG 69159

During the commissioning of this infrastructure, we had some trouble getting correct results, especially for frequencies below a few hundred hertz. This led us to investigate and identify a few problems:
1) One of the narrow bandpass filters for a line was incorrectly placed, but we updated it and found some consistent (incorrect) results with other frequencies (LHO aLOG 69112).
2) The values for optical gain, cavity pole frequency, and actuator gains in the pydarm_H1.ini file in the git repository and the local version at /ligo/groups/cal/H1/ifo/pydarm_H1.ini did not have consistent values compared against the recently installed values in CALCS (see LHO aLOG 69047). We then updated the pydarm_H1.ini file to fix this, but we were still finding incorrect answers. The complete solution was to make sure the CALCS actuation output matrix and pydarm_H1.ini file had the same values for the x-arm: -1. ***
3) The "human vs. machine modification of parameter file" loop needs to be closed after an MCMC result gets pushed to the front-end CALCS model. Meaning, the parameter file needs to get updated values before calculating TDCFs; otherwise TDCFs calculated from this parameter file directly will create invalid calibration results without us realizing it.

Now that we have finally sorted this out, the values produced by the calibration systematic error calculation are starting to look pretty good. Note that this is a measurement of the systematic error on H1:CAL-DELTAL_EXTERNAL_DQ corrected for the known static systematic errors GDS/CALCS (mostly impacting high frequency, for super-Nyquist poles), but it does not correct for any time dependence of the interferometer.

Attached are two figures showing the systematic error calculated via this new infrastructure for today:
- 2023-04-28_SYSERROR_LINES_FirstResults_30minutetrend.png shows a 30 minute trend of the channels. There will be lots of further, standard improvements we can make to these demod outputs that we'll likely implement over the next week. But one can clearly see that lines with higher SNR in DARM show nice clean answers. Here, the phase is in radians.
- 2023-04-28_SYSERROR_LINES_FirstResults.pdf: since we're far more used to looking at the systematic error in the frequency domain, we took a 180 sec average of the systematic error answers and then transposed them by hand onto a bode plot. This conveys the same result -- the louder PCALY "TDCF" lines have a lower uncertainty than the quieter PCALX "systematic error" lines. Also note that the phase in this plot is in degrees.

Either way, one can see the magnitude is 1 +/- 0.04, with a noise of ~10%, and the phase is 0 +/- 2 degrees with a noise of ~5 deg. Go team!

Note that this is not the normal multiplicative systematic error (aka "eta values") because it is DELTAL/PCAL. This is the inverse of an eta, so to correct DELTAL you need to divide by these values. Tricky.
*** There had been some previous discussions (see the latter pages of G2300832) indicating that getting correct GDS filters required the CALCS values to be -1 and the pydarm_H1.ini file to be +1. This does not appear to be true, at least for the EPICS records installed in CALCS. This may mean that GDS is doing something to include a -1 where it should not, that there is a mistake in a method used by GDS to export transfer functions from pyDARM, or that an ini file previously used to generate GDS filters was using incorrect values and we need to solidify/sort out which copy of the parameter file is *the* parameter file. Further investigation is needed here!
Here's the map of calibration line frequency to DEMOD names (LINE${n}), and the channel names for the magnitude, phase, and uncertainty:

Freq (Hz)  DEMOD bank                     Suffixes
17.1       H1:CAL-CS_TDEP_PCAL_LINE1      _RELATIVE_MAG, _RELATIVE_PHASE, _UNCERTAINTY
24.5       H1:CAL-CS_TDEP_PCAL_LINE4      (same)
33.43      H1:CAL-CS_TDEP_PCAL_LINE5      (same)
53.67      H1:CAL-CS_TDEP_PCAL_LINE6      (same)
77.73      H1:CAL-CS_TDEP_PCAL_LINE7      (same)
102.13     H1:CAL-CS_TDEP_PCAL_LINE8      (same)
283.91     H1:CAL-CS_TDEP_PCAL_LINE9      (same)
410.2      H1:CAL-CS_TDEP_PCAL_LINE10     (same)
410.3      H1:CAL-CS_TDEP_PCAL_LINE2      (same)
1083.7     H1:CAL-CS_TDEP_PCAL_LINE3      (same)

Recall that the uncertainty, UNC, in radians, is derived from the coherence, C, as UNC = sqrt((1 - C) / (2 * Navg * C)), where "Navg" is set by the H1:CAL-CS_TDEP_COH_BUFFER_SIZE channel, and the FFT length, set by H1:CAL-CS_TDEP_COH_STRIDE, should match the corner frequency of the low-pass that's used in the DEMODs.

Some further notes:
- The 24.5 Hz line is part of the temporary collection to monitor how thermalization of the IFO impacts systematic error during the engineering run, so that DEMOD and its line will be turned off for the remainder of the run.
- The 17.1 and 410.3 Hz lines are part of the determination of the time-dependent correction factors. Though the answer, [[ DELTAL_EXTERNAL * (GDS / CALCS) ]] / PCAL, is NOT corrected for TDCFs, the answer from these two lines is kind of "in loop" for GDS-CALIB_STRAIN, which DOES correct for TDCFs.
- All lines other than the 24.5 Hz are now expected to be permanent, and continue through into O4, as per LHO:68289.
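The uncertainty recipe -- the Bendat & Piersol coherence-based statistical term, combined in quadrature with a fixed relative PCAL amplitude uncertainty -- can be written out in a few lines. A minimal sketch; the 0.5% PCAL figure and Navg = 13 are illustrative values taken from the numbers quoted in these entries:

```python
import math

def line_uncertainty(coherence, n_avg, pcal_rel_unc=0.005):
    """Bendat & Piersol coherence-based uncertainty for a single line,
    added in quadrature with a fixed relative PCAL amplitude uncertainty."""
    stat = math.sqrt((1.0 - coherence) / (2.0 * n_avg * coherence))
    return math.sqrt(stat**2 + pcal_rel_unc**2)

# a very loud line: the statistical term shrinks and the fixed PCAL
# term sets the uncertainty floor
u = line_uncertainty(0.9999, n_avg=13)  # -> ~0.0054, vs ~0.0020 statistical alone
```

This illustrates why the fixed PCAL term matters: without it, a high-SNR line would report an uncertainty smaller than the PCAL displacement model can actually support.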
J. Kissel, D. Bhattacharjee, T. Sanchez, J. Betzwieser
WP 11138

From LLO:64213, I'm importing three changes to, and one feature removal from, the PCAL end-station calculations that compute the representative force and corresponding displacement for each channel, updating them to the latest methodology the team wishes to use in O4.

Within the force calculation block this includes:
(1) Correcting an inconvenient EPICS record setup in the computation of the force coefficient, where the reference optical efficiencies for the RX and TX PDs are stored. Where before the RX and TX optical efficiency EPICS records (TX_OPT_EFF_CORR and RX_OPT_EFF_CORR) were divided into each channel's path, now the TX record is multiplied in and the RX record remains divided into each respective channel.
(2) The calculation of the optical efficiency *ratio* (the ratio of TX to RX, normalized by the above reference values) has been rearranged, with an additional EPICS monitor of the unnormalized ratio for better understanding. Note: this causes a channel name change -- formerly, the normalized ratio was called OPT_EFF_RATIO_MON; it's now called OPT_EFF_LIVE_OVER_REF. The new channel monitoring the unnormalized ratio is called OPT_EFF_LIVE_MON.
(3) The former monitor of the ratio between channels was taken *after* this optical efficiency correction, with an EPICS and test point channel called ETM_PWR_RATIO_MON and ETM_PWR_RATIO_OUT. These have been unceremoniously removed. See before vs. after screenshots.

Outside the force calculation, just prior to the conversion to displacement in the TX_PD and RX_PD filter banks, there's now a new EPICS record that applies a multiplicative correction to account for the difference in displacement between the X and Y arms, informed by the side-by-side comparison between the answers as seen in DARM. This new record is XY_COMPARE_CORR_FACT. See before vs. after screenshots.
Importing these changes was relatively simple -- just an svn up of /opt/rtcds/userapps/release/cal/common/models/PCAL_MASTER.mdl. The h1calex and h1caley models, with this updated library part, have been built successfully in prep for tomorrow's install and restart. I'll update MEDM screens tomorrow.
The CDS team informs me that it's been recently decreed that "no channels shall be removed without ECR," so I've reverted the removal of the OPT_EFF_RATIO_MON, OPT_EFF_RATIO_OUT, ETM_PWR_RATIO_MON, ETM_PWR_RATIO_OUT channels since the PCAL team doesn't have an ECR for these changes. All other infrastructure that rearranges the calculation and adds new channels will still go in as described above. See new screenshot of the FORCE_COEFF block.

Here's the official channel change list for these changes:

jeffrey.kissel@cdsws03:~$ check_model_daq_configuration h1calex
--------------------- file times ----------------------
Tue Apr 25 07:54:43 2023 = Model build time
Tue Apr 18 10:21:21 2023 = Current configuration load time
DAQ configuration is changed, processing...
++: slow channel H1:CAL-PCALX_FORCE_COEFF_OPT_EFF_LIVE_OVER_REF added to the DAQ
++: slow channel H1:CAL-PCALX_FORCE_COEFF_OPT_EFF_LIVE_MON added to the DAQ
++: slow channel H1:CAL-PCALX_XY_COMPARE_CORR_FACT added to the DAQ
Total number of DAQ changes = 3 (3 additions, 0 deletions)

jeffrey.kissel@cdsws03:~$ check_model_daq_configuration h1caley
--------------------- file times ----------------------
Tue Apr 25 07:59:02 2023 = Model build time
Tue Apr 18 10:21:21 2023 = Current configuration load time
DAQ configuration is changed, processing...
++: slow channel H1:CAL-PCALY_FORCE_COEFF_OPT_EFF_LIVE_OVER_REF added to the DAQ
++: slow channel H1:CAL-PCALY_FORCE_COEFF_OPT_EFF_LIVE_MON added to the DAQ
++: slow channel H1:CAL-PCALY_XY_COMPARE_CORR_FACT added to the DAQ
Total number of DAQ changes = 3 (3 additions, 0 deletions)

which all match the functional changes expected from above.
IRIG-B timing channels (H1:CAL-PCALX_IRIGB_DQ, H1:CAL-PCALY_IRIGB_DQ) were checked at H1 for GPS = 1365880961. The retrieved signals are plotted in the figures below. Please check the decoded time against the recorded GPS time.
A similar test is conducted for L1 at the same GPS time: https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=64421
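As a quick cross-check on the decoded IRIG-B time, the GPS second count can be converted to UTC in a couple of lines of Python. This is a sketch that hard-codes the 18 s GPS-UTC leap-second offset in effect since 2017; for production work a leap-second-aware library (e.g. astropy or LALSuite time utilities) is the right tool:

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)  # GPS time zero, in UTC
GPS_UTC_LEAP = 18                 # GPS-UTC offset (s), valid since 2017

def gps_to_utc(gps_seconds):
    """Convert a GPS second count to a UTC datetime (fixed leap-second offset)."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_UTC_LEAP)

# The GPS time used for the IRIG-B check above:
print(gps_to_utc(1365880961))  # 2023-04-18 19:22:23 UTC
```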
J. Kissel (witnessed by H-Y Huang & B. Weaver)

We're continuing to try to validate the high-frequency portion of our pyDARM parameter set in different ways, in hopes of keeping up with, and modeling, all the changes that have been happening to the OMC DCPD sensing chain. Yesterday (LHO:67730), we re-did the full remote-test measurement of the OMC DCPD sensing signal chain, since it hadn't been done in a while. While we saw a change in the low-frequency (< 100 Hz) end of the measurement, we attributed that change to the OMC DCPD's transimpedance amplifier (TIA). Today, I wanted to isolate the portion of the signal chain that has only high-frequency response, which we saw change as well. So I created a "loop back" measurement, where I drove the 524 kHz ADC system directly with the DAC -- a measurement that's strikingly simple now that we're using the D2200215-style whitening chassis. Namely:
- The DAC input drive is copied onto pins 1-6 and 2-7 of a DB9 with the cable "ISC_444". This normally goes through the whitening chassis as a passthrough to the test input of the in-vac TIA, then through the TIA response itself, then through the whitening filter, before hitting the ADC cable on its way out to be digitized. The end of ISC_444 that connects to the whitening chassis is a "sockets" connector.
- The OMC DCPD A and B signals come out after whitening on cable "ISC_426" ... also on pins 1-6 and 2-7. The end of ISC_426 that connects to the whitening chassis is a "pins" connector.
Convenient! I unplugged the two cables from the whitening chassis and connected them together, and bingo, bango, et voila, I have a loop-back measurement that isn't confused by the response of either the TIA or the whitening. See the first attachment, which shows a picture of this setup. In doing so, I also turned off the compensation for the TIA and whitening in the A0 / B0 banks, since they're bypassed in analog and don't need to be compensated in this measurement.
The second attachment shows a screencap of the OMC-DCPD_A0 and OMC-DCPD_B0 banks during this measurement.

The third attachment shows the results. The left two panels show the magnitude and phase of the H1:OMC-DCPD_[A,B]_OUT_DQ / H1:OMC-TEST_DCPD_EXC transfer functions. The right two panels show the magnitude and phase of the transfer function OMC-DCPD_A_OUT_DQ / OMC-DCPD_B_OUT_DQ, i.e. the ratio of the two paths.

For the left two panels: unlike the units and scale of the full test loop as it's normally connected (see description in LHO:62653), *this* special configuration of the transfer function, between the same digital channels,
    EXC:  H1:OMC-TEST_DCPD_EXC
    RESP: H1:OMC-DCPD_[A,B]_OUT_DQ,
i.e. H1:OMC-DCPD_[A,B]_OUT_DQ / H1:OMC-TEST_DCPD_EXC, now has units of [ADC V / DAC ct], since the measurement includes only
[ ] the digital AI filter which converts the digital drive from 16 kHz to 65 kHz, which has unity gain at DC, and plenty of phase loss and magnitude response,
[-] the 16-bit 65 kHz DAC gain for the digital drive excitation,
[ ] the analog AI filter after the 65 kHz DAC, which has (almost) unity gain and some phase-loss impact below 7 kHz, but is otherwise "invisible" in magnitude,
[ ] the now-direct connection between AI output and AA input,
[-] the 4 copies of the signal that are created within the "copy and pass-through" analog AA filter, which has (almost) unity gain and no frequency response,
[-] the 18-bit 524 kHz ADC gain of the analog voltage coming in from the analog AA,
[-] the digital sum of the 4 channels, then a digital "4chn" divide by 4, to get the sum signal back into single-channel units as though it were a single ADC channel,
[-] the "cts2V" compensation (inversion) of the 18-bit 524 kHz ADC gain, to get back into ADC input voltage,
[ ] the digital AA filter which converts the signal from 524 kHz to 65 kHz (the "Dec65k" filter), which has unity gain and some phase-loss impact below 7 kHz, but is otherwise "invisible" in magnitude,
[ ] the digital AA filter which converts the signal from 65 kHz to 16 kHz (the "Dec16k" filter), which has unity gain at DC, and plenty of phase loss and magnitude response,

and thus we expect a DC gain of
[ ] 1                       [LSC DAC ct / IOP DAC ct]
[-] 20 / 2^16 = 3.052e-4    [DAC V_DIFF / DAC ct]
[ ] ~1                      [AI V_DIFF / DAC V_DIFF]
[ ] 1                       [AA IN V_DIFF / AI V_DIFF]
[-] 4                       [AA OUT V_DIFF / AA IN V_DIFF]      # four copies
[-] 2^18 / 40 = 6553.6      [IOP ADC ct / AA V_DIFF]
[-] 0.25                    [A0 FM1 OUT ct / IOP ADC ct]        # "4chn"
[-] 40 / 2^18 = 1.526e-4    [A0 FM4 OUT "V" / A0 FM1 OUT ct]    # "cts2V"
[ ] ~1                      [A0 FM9 OUT "V" / A0 FM4 OUT "V"]   # Dec65k
[ ] ~1                      [A0 FM10 OUT "V" / A0 FM9 OUT "V"]  # Dec16k

    = (20 / 2^16) * 4 * (2^18 / 40) * (0.25) * (40 / 2^18)
    = 20 / 2^16
    = 3.052e-4 [ADC V / DAC ct]

and this is indeed what we see, including the high-frequency response that's dominated by the two digital 16k-to-65k filters (one an AI, the other an AA) in magnitude, and a bunch of phase loss from those digital filters, the analog AI, the digital 524 kHz AA, and several computational delays.

Also, on the right two panels, we see that the ratio of these two signal paths is somewhere between 1.0000 and 0.9998 across the entire frequency band -- *very* well matched. So all of the imbalance that we saw in yesterday's measurement must be from the TIA and whitening chassis (shown in pink for reference). Hsiang-Yu will take this data and use it to further validate our pyDARM model of this simpler system (i.e. one that's not confused by the poor compensation of the time-dependent TIA response).

The data taken during this measurement has been saved to the CalSVN here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Common/Electronics/H1/SensingFunction/OMCA/Data/
    20230303_H1_TIAxWCBYPASSED_OMCA_S2100832_S2300003_DCPDAandB_RemoteTestChainEXC.xml
    20230303_H1_TIAxWCBYPASSED_OMCA_S2100832_S2300003_DCPDAandB_RemoteTestChainEXC_tf.txt
where the export, *_tf.txt, has columns [freq, real(A), imag(A), real(B), imag(B)].
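The DC-gain tally above is easy to sanity-check numerically. This is just the product of the stage gains listed, not front-end code; the variable names are mine:

```python
# Stage gains of the loop-back path at DC, as tallied above.
dac_gain  = 20 / 2**16   # 16-bit 65 kHz DAC, [V_DIFF / DAC ct]
aa_copies = 4            # "copy and pass-through" analog AA makes 4 copies
adc_gain  = 2**18 / 40   # 18-bit 524 kHz ADC, [IOP ADC ct / V_DIFF]
chn_avg   = 0.25         # "4chn": sum of the 4 copies divided by 4
cts2V     = 40 / 2**18   # "cts2V": inverts the ADC gain, back to input volts

# The copying, averaging, and ADC-gain inversion all cancel, leaving
# just the DAC gain of 20 / 2**16.
expected = dac_gain * aa_copies * adc_gain * chn_avg * cts2V
print(f"{expected:.4e}")  # 3.0518e-04 [ADC V / DAC ct]
```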
J. Kissel

I've taken some time this morning to initialize a single channel that has set off the middle yellow flag (H1:FEC-${DCUID}_SDF_UNINIT_CNT) in every front-end model's SDF system. The appearance of this channel, H1:${MODELNAME}_REMOTE_IPC_PAUSE, is discussed in LHO:67598.

Process:
- Changed TABLE SELECTION to CHANS NOT INIT
- Confirmed that the setting value is 0.0
- Toggled ACCEPT and MON
- Hit CONFIRM
- Changed TABLE SELECTION back to SETTINGS DIFF

I list below the models I accepted and monitored. Models with *** next to them show the "hyphen-underscore" bug mentioned in LHO:67598.

EY                      CORNER                  EX                      FCES
model        snap file  model        snap file  model        snap file  model       snap file
h1susetmy    safe       h1asc***     safe       h1susetmxpi  safe       h1sqzfces   safe
h1sustmsy    safe       h1ascimc     safe       h1susetmx    safe       h1susauxh8  safe
h1isietmy    OBSERVE    h1lsc***     safe       h1sustmsy    safe
h1hpietmy    OBSERVE    h1lscaux     safe       h1isietmx    OBSERVE
h1iscey      safe       h1omc***     safe       h1hpietmx    OBSERVE
h1alsey      safe       h1ascsqzifo  safe       h1iscex      safe
h1pemey      safe       h1ascsqzfc   safe       h1alsex      safe
h1etmypi     safe       h1omcpi      safe       h1pemex      safe
h1susprocpi  safe       h1susitmpi   safe       h1oaf***     safe
h1tcscs      safe       h1pemcs      safe       h1susproc    safe
h1seiproc    safe       h1susauxh7   safe       h1psliss     safe
h1psliss     safe       h1pslpmc     safe       h1psldbb     safe

All the corner station SUS and SEI models still remain, but I need to move on to other things. Here's hoping the operator corps has some spare time to follow my instructions for the remaining models, or they or the CDS team can write a script to quickly accept and monitor these channels.
I have gone through the remaining corner station SUS and SEI models, and accepted and monitored the following:
Corner:
h1hpiham1  observe   h1susmc1    safe      h1susmc3     safe      h1susprm     safe      h1suspr3   safe      h1isiham2  observe
h1hpiham2  observe   h1susim     safe      h1sushtts    safe      h1susmc      safe      h1suspr2   safe      h1isiham3  observe
h1hpiham3  observe   h1sussr2    safe      h1isiham4    observe   h1hpiham4    observe   h1sussr3   safe      h1sussrm   safe
h1isiham5  observe   h1hpiham5   observe   h1susifoout  safe      h1sussqzout  safe      h1isiham6  observe   h1hpiham6  observe
h1susfc1   safe      h1sussqzin  safe      h1isiham7    observe   h1pemh8      safe      h1susfc2   safe      h1isiham8  observe
h1susitmy  safe      h1isiitmy   observe   h1susbs      safe      h1isibs      observe   h1susitmx  safe      h1isiitmx  observe
On Tuesday, Feb 21, h1iopomc0 was rebuilt with a custom RCG that changes the latency of IPCs sent from the IOP.

h1omc0 has a "Low Noise" ADC, a GSC-18AI64SSC, running at a 524 kHz sample rate. The IOP, h1iopomc0, sends calculation results based on the Low Noise ADC channels as IPCs every 8 samples, i.e. at 65 kHz.

With the standard RCG version 5.1.1, the IPCs were sent after the first of the 8 samples was processed.

With the custom RCG version, the IPCs are sent after the last of the 8 samples. The time stamp and read time of the IPCs by their user-model receivers are unchanged. This eliminates 7 samples' worth, or 13.35 usec, of latency between the IOP sender and the user-model receivers.
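The quoted latency saving follows directly from the sample rates; this is an arithmetic check only, not RCG code:

```python
# h1iopomc0's "Low Noise" ADC runs at 2**19 = 524288 Hz, and IPCs go out
# every 8 samples (i.e. at 65536 Hz). Sending after the last of the 8
# samples instead of the first removes 7 samples of latency.
adc_rate_hz = 2**19       # 524 kHz IOP rate
samples_saved = 8 - 1     # first-sample send -> last-sample send

latency_saved_us = samples_saved / adc_rate_hz * 1e6
print(round(latency_saved_us, 2))  # 13.35 (usec)
```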
Tagging CAL and ISC. Interesting!
J. Kissel, M. Pirello, R. Schofield

I've collated some data collected yesterday during Marc's re-configuring of the h1omc0 IO chassis fans (LHO:67519), and took a few more data sets this morning, given that we were seeing evidence that
(a) no fan configuration was particularly great, as there's tons of aliasing going on in each configuration, and
(b) there is clear non-stationary, time-dependent noise swimming around in the OMC DCPD A channel that is coherent with a shorted channel read out on the same ADC card.

EDIT: Robert & Marc blame the time-dependent noise on beating RF oscillator noise, NOT on the fans. Even though this ADC card was segregated from the card-dense LSC0 IO chassis into its own OMC0 IO chassis, there are still unsynchronized "clocks," or RF oscillators: the ADC itself has at least one, and the Adnaco motherboard has a couple as well. They suspect that changing the fan characteristics played a role in changing the frequencies of the oscillators, and thus the beat characteristics, rather than the fans' noise itself coupling in directly, acoustically or electronically.

The big-picture data is shown in the attachment, where
(1) I compare the most recent data of the shorted CH16 against the two versions of the DCPDs (as in LHO:67530) to show that these features are coherent between the raw CH16 (neon green), the raw first copy of the DCPDA channel (CH0, in red), and the average of all copies of the DCPDA channels (A0_IN1, in blue), and
(2) the CH16 data from multiple times and configurations over the past few days shows how inconsistent the noise is.

The 8-trace upper plot already has too many traces, so I don't show the single raw copy, CH0, or the average of 4 copies, A0_IN1, for each CH16 trace. However, having watched the data go by, the identical features are present, with a consistent amplitude relationship, in the CH0 and A0_IN1 data for each CH16 time.
I add a smattering of times' coherences in the lower panel to show this, i.e. that they are coherent at each time.

One might argue that having 0 fans on is the worst, but there's no *clear* evidence that 1 fan is better than 2 fans (though there may be *some* evidence, once I dig further into the details of this data). We also see that somewhere between ~10 minutes' and a day's worth of IO chassis thermalization has an impact on the frequency content, so it's unclear whether our quick studies of "0 fans" and "2 fans" were a legitimate comparison. We should again attack the fans to try to build up a story and mitigate what we can, and perhaps this is best tackled on a test stand -- unfortunately, the cross-section of me being present on site to take this live data and the IFO being locked on DC readout has remained at 0.0 since last week. As such, I don't have this same data during nominal low noise to tell whether these features will be / are problematic when there's the standard 10 [mA] of light on the PD... yet. Further, and again, we're limited by what's stored in the frames to confirm that, e.g., DCPD B's channels see similar things. I'll slowly get better at displaying what I can.

I'll also work on comparing this raw 524 kHz data against the 65 kHz and 16 kHz versions of the same channels to confirm that the *digital* anti-aliasing is doing a good job. But -- if there's already aliasing down into the 10-1000 Hz range in these signals prior to that filtering -- the digital anti-aliasing won't do any good. Even worse, if the noise appears within the IO chassis / ADC card system, then installing an analog AA (rather than the existing pass-through filter) won't help either.
The previous version of this aLOG had suggested, via the title, that the IO chassis fans were polluting the ADC channels directly (I didn't guess at the coupling mechanism, but presumably either acoustically or mechanically). Robert and Marc are confident that this isn't the case, and blame unsynchronized oscillators. Robert provides further wisdom:

Most clocks are good to a PPM or a few, so, roughly, a MHz clock will drift around by one or a few Hz, and a 100 MHz clock will drift around by 100-ish Hz. They move through these bands on thermal time scales, say seconds to tens of minutes, depending on the thermal stability. I am a little skeptical that these are internal oscillators, just because we didn't see anything like this on the test stand. An easy check is to watch the spectrum and put an insulated finger on the clock chips to heat them, and see if you can move the peaks around. If not, I have cooled and heated nearby electronics chassis to see if they are the culprit (of course an educated guess helps); just removing the lid is often enough to change the temperature. I agree with Marc that more flow generally reduces temperature rise and thus fluctuations from ambient.
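Robert's rule of thumb above is easy to quantify; this is illustrative arithmetic only, and the function name is mine:

```python
# A clock good to ~N ppm wanders in absolute frequency by about
# f0 * N * 1e-6, which sets the width of the beat-note band it can
# sweep through on thermal time scales.
def drift_band_hz(f0_hz, ppm):
    """Approximate absolute frequency wander of an oscillator at f0_hz
    with fractional stability of `ppm` parts per million."""
    return f0_hz * ppm * 1e-6

print(round(drift_band_hz(1e6, 1), 3))    # ~1 Hz for a 1 MHz clock
print(round(drift_band_hz(100e6, 1), 3))  # ~100 Hz for a 100 MHz clock
```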
J. Kissel, M. Pirello
Per WP10997 I worked with Jeff K. in measuring the ADC noise floor spectrum with various fan configurations on the OMC IO Chassis in the CER. We settled on one fan blowing over the installed components as being the best option.
For a first look at the noise change after this configuration was made permanent (for now) after 2023-02-21 21:08 UTC, see LHO:67530. Further demonstration of the goodness of this change beyond LHO:67530 is still being processed, so stay tuned.
All models had DAQ channels added during today's upgrade. These were from various sources:
User model changes applied today:
h1asc: Because of change (2) above, some IPC names exceeded the 55-character limit. We shortened these IPC names by replacing REFL with RFL; h1asc had receivers with the old names.
h1susmc1, h1susmc2, h1susmc3, h1suspr2, h1suspr3, h1susprm, h1sussr2, h1sussr3, h1sussrm:
These SUS models had ISI Feed Forward parts added.
h1isiham4: DAMP and ISO parts added
In addition to the channels mentioned above, another channel has been added: H1:${modelname}_REMOTE_IPC_PAUSE. This is evident because tons of SDF tables are reporting this one channel as not initialized. The function of this channel: when set to a non-zero value, it halts/stops/pauses all writes (sender side) to the IPC Dolphin network/fabric. This EPICS write is not intended to be used frequently, if at all. I also note that there's a bug in the generation of this channel name, perhaps because of the use of the top_names function in front-end user models that
- have multiple subsystems within, or
- have only three letters in the user model name.
e.g. I'm seeing channels like
    H1:SQZ-_REMOTE_IPC_PAUSE
    H1:OMC-_REMOTE_IPC_PAUSE
    H1:LSC-_REMOTE_IPC_PAUSE
    H1:ASC-_REMOTE_IPC_PAUSE
I'll walk through all the SDFs and initialize these channels, acknowledging that if we fix the above-mentioned bug, we'll need to do this again for those models.
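For illustration, here's a guess at how a naive subsystem-prefix join could produce the malformed names above. This is purely hypothetical -- the actual RCG code generator's logic may well differ:

```python
def ipc_pause_channel(ifo, model_name):
    """Hypothetical sketch of channel-name generation. If the part of the
    model name after the 3-letter subsystem prefix is empty (e.g. for
    three-letter models like 'h1lsc'), a naive join leaves a dangling
    'SYS-' with nothing between the hyphen and the underscore."""
    subsystem = model_name[2:5].upper()  # e.g. 'h1lsc' -> 'LSC'
    remainder = model_name[5:].upper()   # '' for three-letter model names
    return f"{ifo}:{subsystem}-{remainder}_REMOTE_IPC_PAUSE"

print(ipc_pause_channel("H1", "h1lsc"))    # H1:LSC-_REMOTE_IPC_PAUSE (the bug)
print(ipc_pause_channel("H1", "h1calex"))  # H1:CAL-EX_REMOTE_IPC_PAUSE
```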
This information is also shown on the CDS overview. Models with a purple mark are built using a non-production RCG version. Clicking on the purple mark opens the RCG MEDM for more details.
Please see alog-78920 for details.
FWIW - The Stanford test stands are running RCG 5.3 to allow development and testing of the SPI readout code.
There's also some release notes here:
https://dcc.ligo.org/T1700552