I've been working on a prototype epics interface to the seismon system. It currently has several moving parts.
EPICS IOC
Runs on h1fescript0 as user controls (in a screen environment). Code is
/ligo/home/controls/seismon/epics_ioc/seismon_ioc.py
It is a simple epics database with no processing of signals.
EVENT Parser, EPICS writer
A python script parses the data file produced by seismon_info and sends the data to EPICS. It also handles the count-down timer for future seismic events.
This runs on h1hwinj2 as user controls. Code is
/ligo/home/controls/seismon/bin/seismon_channel_access_client
MEDM
A new MEDM screen called H1SEISMON_CUST.adl is linked to the SITEMAP via the SEI pulldown (labeled SEISMON). Snapshot attached.
The countdowns for the P, S, and R waves are color coded by the arrival time of the seismic wave:
ORANGE more than 2 mins away
YELLOW between 1 and 2 minutes away
RED less than 1 minute away
GREY in the past
If the system freezes and the GPS time becomes older than 1 minute, a RED rectangle will show to report the error.
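The color logic above can be sketched as follows (a hypothetical helper for illustration; the actual IOC/MEDM code under /ligo/home/controls/seismon/ may implement this differently, and the behavior exactly at the 120 s boundary is an assumption):

```python
def arrival_color(seconds_until_arrival):
    """Map time-to-arrival of a seismic phase (P, S, or R) to the MEDM
    display color described above. Negative means the wave has passed."""
    if seconds_until_arrival < 0:
        return "GREY"    # arrival time is in the past
    if seconds_until_arrival < 60:
        return "RED"     # less than 1 minute away
    if seconds_until_arrival <= 120:
        return "YELLOW"  # between 1 and 2 minutes away
    return "ORANGE"      # more than 2 minutes away
```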
Just noticed this post, this is great.
Let us know if you run into any bugs/trouble with the code.
J. Kissel, D. Tuyenbayev, K. Izumi, E. Goetz, C. Cahillane

We have successfully updated the calibration in the front end. This means the static, frequency-dependent part of the low-latency calibration pipeline is ready to go for O2. Many thanks to all who've helped get this together in such short time.
- Kiwamu has updated the actuation side: see LHO aLOG 31687.
- I have updated the sensing side: see this aLOG below.
- Darkhan has updated the EPICS records that store the model at calibration line frequencies: see LHO aLOG 31696.
These changes have been accepted in the CAL-CS SDF system.

I always hate making this statement because it comes without an uncertainty estimate and boils down a time-dependent, frequency-dependent answer to a pair of numbers -- and DELTAL_EXTERNAL is limited by the accuracy of the front end -- but I know people like to hear it, and it's a good, simple benchmark to hit: the resulting agreement between mean values of a swept-sine transfer function between the low-latency pipeline output, CAL-DELTAL_EXTERNAL, and PCAL is roughly 2% and 2 [deg] for almost all frequency points between 10 Hz and 1.2 kHz.

I was minutes away from finishing a broad-band PCAL injection -- so we can compare the output with the GDS pipeline to get the full answer -- when the 6.9 [mag] Japanese EQ hit (see LHO aLOG 31694). We'll get this during the next lock stretch.

We will be working on the uncertainty budget over the coming days (but expect us to take a Thanksgiving break). We expect the time-independent, statistical uncertainty to improve beyond that of O1, but because those studies haven't happened yet, we can't make claims about the systematic errors that arise from uncorrected-for time dependence. This is especially true if the evidence for a changing SRC detuning spring frequency holds (see discussion in LHO aLOGs 31665 and 31667).
%%%%%%%%%%%%%%% Details: %%%%%%%%%%%%%%%

Actuation Function
Again, see Kiwamu's aLOG for details on the updates to the actuation function: LHO aLOG 31687.

Sensing Function
I've updated the front end's inverse sensing function to the parameters that are now used as the O2 reference measurements:

                                    [Units]   value (95% c.i.)
Meas Date                                     2016 Nov 12
IFO Input Power                     [W]       29.5
SRC1 Loop Status                              ON
Optical Gain x 1e6                  [ct/m]    1.153 (0.003)
  => Inv. Optical Gain              [m/ct]    8.673e-7 (2.255e-9)
DARM/RSE Cav. Pole Freq.            [Hz]      346.7 (4.1)
Detuned SRC Optical Spring Freq.    [Hz]      7.389 (0.3)
Optical Spring Q-Factor (1/Q)       []        0.0454 (0.01)
  => Q = 22.015 (4.84)
Residual Time Delay                 [us]      2.3 (3.4)
  => consistent w/ zero, so not included
aLOG                                          LHO 31433

The front-end implementation in foton is as follows:

Bank: H1:CAL-CS_DARM_ERR
Module   Name       Design String
FM2      O2SRCD2N   zpk([346.7;7.2231;-7.5587],[0.1;0.1;7000],1,"n")gain(5458.55)
FM3      O2Gain     gain(8.673e-7)

The bank is screen-captured and attached, just in case SDF dies.

Explanation for each component in the FM2 / O2SRCD2N filter:
- The 346.7 Hz zero is the inverse of the DARM coupled-cavity pole, as fit in LHO aLOG 31433.
- The 7.2231 and -7.5587 Hz zeros represent the inverse of the detuned optical spring. Because of the limitations of foton and the need for an anti-spring-like response, we must convert the two positive zero frequencies and Q into a positive and a negative zero (i.e.
one in the real, one in the imaginary plane):

    f^2 / (f^2 - i (f f_s / Q) + f_s^2)
        = [(2 pi i)^2 / (2 pi i)^2] * f^2 / (f^2 - i (f f_s / Q) + f_s^2)
        = s^2 / (s^2 + s (c_s / Q) - c_s^2),    where s = 2 pi i f and c_s = 2 pi f_s.

The zeros are the roots of

    0 = s^2 + s (c_s / Q) - c_s^2

    s_{+/-} = (1/2) [ -(c_s / Q) +/- sqrt( (c_s / Q)^2 + 4 c_s^2 ) ]

    f_{+/-} = (1 / (4 pi)) [ -(2 pi f_s / Q) +/- sqrt( (2 pi f_s / Q)^2 + 4 (2 pi f_s)^2 ) ]

    f_{+/-} = (f_s / (2 Q)) [ -1 +/- sqrt( 1 + 4 Q^2 ) ]

For f_s = 7.389 Hz and Q = 22.015, the positive and negative zeros are at 7.2231 and -7.5587 Hz, as shown in the design string.

- The poles at [0.1;0.1;7000] Hz are there to artificially roll off the inverse response -- these are the same as they were in ER9. Remember that the 7000 Hz pole distorts the response enough that it needs to be corrected for in the GDS pipeline.

This CAL-CS design is not perfect. I've exported the design back into matlab and compared it against the model using
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/CALCS/compare_model_v_CALCS_sensing.m
and I attach the discrepancy. The discrepancy at high frequency should be the same as before, since we've used the 7000 Hz pole to roll off the DARM coupled-cavity zero as in O1. The low-frequency end could use some correction. The DARM UGF is now around 65 Hz, where this sensing-function discrepancy starts to matter: the phase discrepancy is 0.52 [deg] @ 65 Hz, and works its way up to 3.6 [deg] at 10 Hz. Thus, we don't expect much impact at all, but it needs to be quantified. I'll discuss with the GDS team to see whether such a correction filter is worth it or possible.

All of the above actuation and sensing function parameters have been built into the matlab DARM loop model.
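As a quick numerical check of the zero-pair formula derived above (a sketch, not part of the calibration code):

```python
import math

def antispring_zeros(f_s, Q):
    """Positive/negative zero pair replacing a detuned optical spring with
    frequency f_s [Hz] and quality factor Q, per
    f_{+/-} = (f_s / 2Q) * (-1 +/- sqrt(1 + 4 Q^2))."""
    r = math.sqrt(1.0 + 4.0 * Q ** 2)
    return (f_s / (2.0 * Q) * (-1.0 + r),
            f_s / (2.0 * Q) * (-1.0 - r))

# For the O2 reference values f_s = 7.389 Hz, Q = 22.015:
f_plus, f_minus = antispring_zeros(7.389, 22.015)
print(f_plus, f_minus)  # approximately 7.223 and -7.559 Hz
```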
From here on out, we'll be using the relative, time-dependent correction to make corrections to the model such that they match any new measurements.
Nicely done Team Calibration!
While investigating the Control Room temperature swings, I discovered the return air damper for AHU-3 completely closed. I have the damper open to the normal operating position now and temperatures should stabilize in all areas serviced by AHU-3. This situation appeared to be caused by a problem with the pneumatic actuator for that damper which will soon be replaced with an electric actuator.
I turned both corner and EY RGA filaments OFF so that Robert and AnnaMaria can do noise tests at their convenience by powering off the RGAs, i.e. by unplugging the unit at the electronics body.
RGAs that are powered on:
BSC5 (EX) - this one cannot be controlled remotely yet but I think the electronics unit (i.e. fan) is still powered on. The filament should be OFF. There is an LED light on electronics unit that indicates if filament is ON/OFF.
BSC6 (EY)
OMC tube next to HAM4 (LVEA)
Jeff K, Kiwamu I, Darkhan T,
Overview
The CAL-CS EPICS records for tracking temporal variations of the DARM parameters have been updated at 2016-11-21 22:03:46 UTC. These values are identical to the ones in LHO alog 31677 from yesterday, i.e. D20161120_H1_CAL_EPICS_VALUES.m and D20161121_H1_CAL_EPICS_VALUES.m are identical.
New values have been accepted in SDF_OVERVIEW.
Details
The following DARM parameter files were used to calculate these values:
${CalSVN}/Runs/ER10/Common/params/IFOindepParams.conf r3752
${CalSVN}/Runs/ER10/H1/params/H1params.conf r3826
${CalSVN}/Runs/ER10/H1/params/2016-11-12/H1params_2016-11-12.conf r3786
And the DARM model scripts from
${CalSVN}/Runs/O2/DARMmodel/* r3814
The *.m file with EP1-9 values and the verbose output are attached to this report. All of the files have been committed to CalSVN at
ER10/H1/Scripts/CAL_EPICS/
D20161121_H1_CAL_EPICS_VALUES.m
20161121_H1_CAL_EPICS_VALUES.txt
20161121_H1_CAL_EPICS_verbose.log
Shivaraj K, Jeffrey K, Aaron V, Darkhan T,
Updated DARM time-dependence EPICS
We have updated the EPICS values used for DARM time-dependent parameter calculations (DCC T1500377). We added a one-computational-cycle delay in the DARM_ERR signal that was previously (Nov 21) missing; it comes from the fact that the signal is transferred from the OMC front-end model to CAL-CS. The *.m file with EP1-9 values and the verbose log are attached to this alog.
The new values were updated in the front-end and were accepted in SDF_OVERVIEW.
Location in the SVN:
${CalSVN}/Runs/ER10/H1/Scripts/CAL_EPICS/
D20161122_H1_CAL_EPICS_VALUES.m
20161122_H1_CAL_EPICS_verbose.log
20161122_H1_CAL_EPICS_VALUES.txt
Corrections to be applied to the channels in GDS and SLM (CalMon) when calculating "kappas"
All of the necessary corrections, except for the Pcal RX PD channel corrections, have been incorporated in the EPICS values. In other words, when calculating kappas with the method from T1500377, all channels except the Pcal RX PD must be used without applying any additional corrections. The Pcal RX PD channels from the frames must be corrected for the frequency-dependent part of the free-mass response (two poles at 1 Hz), the analog AI, the digital AI (IOP), and the time delay of the CAL-EY channels w.r.t. [V] from the PD (this piece was measured to be zero). One way to get the correction TF is to extract it from the Matlab DARM model; an example of extracting this correction TF for Hanford can be found at (the resulting TF is attached):
${CalSVN}/Runs/ER10/H1/Scripts/PCAL/examplePcalCorrExtraction.m
LLO analog AA/AI models have been updated for ER10/O2, while at LHO they have not changed since the O1 run. So it is important to calculate the site-specific Pcal corrections using the most up-to-date DARM reference-time parameters.
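A minimal sketch of just the free-mass piece of that Pcal correction (two poles at 1 Hz), evaluated at the two calibration-line frequencies. The function name and DC normalization are illustrative assumptions; the full correction extracted by examplePcalCorrExtraction.m also includes the analog AI, digital AI (IOP), and delay pieces:

```python
import cmath
import math

def free_mass_fdep(f, f_p=1.0):
    """Frequency-dependent part of the free-mass response, approximated as
    two real poles at f_p (= 1 Hz here), normalized to unity at DC."""
    s = 2j * math.pi * f          # Laplace variable on the imaginary axis
    p = 2.0 * math.pi * f_p       # pole location in rad/s
    return (p / (s + p)) ** 2

# Well above 1 Hz the magnitude falls as ~1/f^2, as expected for a free mass
for f in (36.7, 331.9):
    h = free_mass_fdep(f)
    print(f, abs(h), math.degrees(cmath.phase(h)))
```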
Corrections to be applied to the CAL-CS demodulators
Since "kappa" calculations in CAL-CS are not done using the Pcal RX PD channels written into frames, but rather with the PCALY_RX_PD signal that gets transmitted from the CALEY model to CAL-CS, the Pcal RX PD data seen by CAL-CS has an additional one-computational-cycle delay. This means that the phases and amplitudes in the front-end demodulators must be adjusted with PcalCorr * (one_cycle_advance_phase).
Pcal calibration line demodulator phases have been adjusted in the CAL-CS front-end model.
PCAL_LINE1 (36.7 Hz line) demodulator phase was set to -1.87 deg (old value was -2.40 deg)
PCAL_LINE2 (331.9 Hz line) demodulator phase was set to -16.95 deg (old value was -19.9 deg)
The phases were calculated according to the instructions given above and taking into account that two poles at 1 Hz (free-mass response) are applied separately in a Foton filter.
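A sketch of the one-computational-cycle phase-advance piece of that calculation, assuming the usual 16384 Hz front-end model rate (an assumption here; the committed script also folds in the PcalCorr transfer function, so these numbers alone do not reproduce the demodulator phases above):

```python
def one_cycle_phase_deg(f_line, f_model=16384.0):
    """Phase advance (degrees) accumulated by one front-end computational
    cycle at calibration-line frequency f_line; 16384 Hz rate is assumed."""
    return 360.0 * f_line / f_model

for f in (36.7, 331.9):
    print(f, round(one_cycle_phase_deg(f), 3))
```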
The script for calculating the phases was committed to
${CalSVN}/Runs/ER10/H1/Scripts/PCAL/getPcalCorrForCALCSdemod.m
The new values have been accepted in SDF_OVERVIEW.
After updating the phases we got about 25 minutes of data before we lost lock. κPU, κT, and κC in this interval were within 1% of their reference-time values (1.0), and fC was within 3% of its reference-time value (346.7 Hz).
I compared both Pcal correction factors for LHO used by the GDS pipeline to the transfer function produced by the example script. These are computed at the two line frequencies: 36.7 Hz and 331.9 Hz. As seen in the plot, they agree quite precisely. I also checked that the numerical values produced by this script at the line frequencies were identical to those used in the GDS pipeline.
The Pcal correction factor used for the SLM Tool and the one computed by the example script are in good agreement as well.
2pm local
Showed Ed Merilh how to fill CP3 from control room so he can do this on Friday for vacuum group (thanks, Ed!). Someone in vacuum should let Ed do this on Wed. while observing for practice on Friday. He is writing a wiki link for instructions.
Took about 30 sec. to overfill CP3 at 50% open on LLCV.
Just so people have something to cite: the end of this last lock stretch, at Nov 21 2016 ~22:11 UTC, was caused by the 6.9 [mag] earthquake in Japan. The calibration group was just finishing up post-front-end-update measurements, and the PEM team was just about to start their work. Looks like we're out for a few hours, though; neither of them was the cause of the lockloss.
We had another incident of the kind of scattering that was investigated in 30790 and comments, and in 31267
We see peaks in DARM that are probably upconversion of the 6.1 Hz tip-tilt resonances.
Over the weekend I had turned off the cut-off of DC1Y, which actuates on both RMs. I re-engaged this and the loops still seem stable, but it didn't have a repeatable impact on the noise.
I've noticed in the last couple of long locks that Mode 7 on ETMY slowly rings up. I mentioned it in log 31000, Cheryl did in 31657. With Jenne's ok, I've increased the gain in Violin_mode_2 to -24, from -20 in ISC_LOCK and loaded the new code. Cheryl went all the way to -60 in her log, so we clearly have some head room. I'll watch over the rest of my shift to see if this seems stable, but it's working well so far.
model restarts logged for Thu 17/Nov/2016 - Sun 20/Nov/2016 No restarts reported.
DAQ - stable (running 6 days). fw0 has an older version of frame-cpp (will be updated 11/22/2016). No critical problems reported.
All vacuum pumps are running within specified pressure and temperature ranges. Made a small adjustment to the End-Y vacuum pressure. Closed FAMIS #7507.
Jess McIver, Laura Nuttall
The full DQ shift can be found on the detchar wiki. Here are the highlights:
* Two long lock stretches, each in excess of a day, with ranges around 70 Mpc
* Range drops could be beam alignment drift
* Whistles appeared in the two short lock stretches on Friday. Spotted using hveto which found a good coincidence with POP 9
* Ran bruco on a lock stretch where the 1083 Hz Cal line was turned off to see if there are any coherences around the 1080 Hz glitchy region. Nothing particularly obvious
Separate alogs will follow about more follow up during this shift
J. Kissel, K. Izumi, D. Tuyenbayev

We were just about to use a new sweep I took (data attached) to update the front-end portion of the low-latency calibration pipeline during the ~30 hour lock stretch, but we were rudely interrupted by a 6.4 [mag] EQ in Argentina. Thus we have not yet updated the calibration. We will resume tomorrow morning when we are on site and can get a before-and-after change set of measurements.

Things on the plate to update:
(1) DC actuation strength, based on results from frequency-dependent sweeps of each actuation stage and the long-duration, single-frequency measurements taken in the first few weeks of ER10. We expect the TST and UIM stages to receive updates on the level of a few percent. We expect this will increase the BNS range a bit.
(2) The frequency dependence of all stages of the actuator. This is to account for optical lever damping and newly included UIM violin mode resonances. We do not expect this to affect the BNS range.
(3) Small adjustments to the inverse sensing function to get updated values for the optical gain (will likely be ~3% higher). We expect this to decrease the BNS range a bit.
(4) EPICS records that store the DARM loop model at calibration line frequencies, to match the reference measurement times.
Unfortunately, it's only after we make these updates that we will be able to trust any of the time-dependent tracking system.

All this being said, the current sensing function sweep (retaken today) shows that PCAL is discrepant with DELTAL external by 5% (though this changes with time as the TST actuation strength, optical gain, SRC detuning, and cavity pole change). However, we think we can reduce this systematic offset by making the above updates. The strange thing is that the fit RSE cavity pole and SRC detuned spring frequency are statistically different from what has been measured to date.
I'm worried this has to do with Sheila's finalizing of the alignment configuration to get back onto REFL WFS (see LHO aLOG 31653).

                                    [Units]   value (95% c.i.)
Meas Date                                     2016 Nov 20
IFO Input Power                     [W]       29.1
SRC1 Loop Status                              ON
Optical Gain x 1e6                  [ct/m]    1.15 (0.002)
DARM/RSE Cav. Pole Freq.            [Hz]      353.5 (6.0)
Detuned SRC Optical Spring Freq.    [Hz]      7.02 (0.3)
Optical Spring Q-Factor (1/Q)       []        0.04 (0.01)
Residual Time Delay                 [us]      2.4 (5.4)
aLOG                                          this aLOG

Details:
CalSVN repo root: [~] = /ligo/svncommon/CalSVN/aligocalibration/
Data lives here:
[~]/trunk/Runs/ER10/H1/Measurements/DARMOLGTFs/2016-11-20_H1_DARM_OLGTF_4to1200Hz_fasttemplate.xml
[~]/trunk/Runs/ER10/H1/Measurements/PCAL/2016-11-20_H1_PCAL2DARMTF_4to1200Hz_fasttemplate.xml
Analysis produced by running:
[~]/trunk/Runs/ER10/H1/Scripts/PCAL/fitDataToC_20161116.m   Revision: 3815
on each of the measurement config files listed below.
Model Config Files:
[~]/trunk/Runs/ER10/Common/params/IFOindepParams.conf   Revision: 3776
[~]/trunk/Runs/ER10/H1/params/H1params.conf   Revision: 3811
[~]/trunk/Runs/ER10/H1/params/2016-11-12/H1params_2016-11-12.conf   Revision: 3800
Measurement Config Files:
[~]/trunk/Runs/ER10/H1/params/2016-11-20/measurements_2016-11-20_sensing.conf   Revision: 3821
WeeklyXtal - explanation of what's going on with HPO diode powers is here. Amp diode powers are tracking with humidity, as they do.
WeeklyLaser - everything looks ok. BOXHUM shows decrease.
WeeklyEnv - everything looks ok. Temps and Humidity show decrease.
WeeklyChiller - there is a direct correlation between the Xtal Chiller flow and the OSC Press 1&2. OSC HEADFLOWS all show marginal decreases except for HEAD4 which is either very stable or has a faulty readback?
Head 4 was part of the group of faulty flow sensors (FE flow, PowerMeter flow, and Head 4) that were causing our interlock trips. These were bypassed in the Beckhoff software (alogged by Peter here); the variables were forced so the readback from the sensors is being ignored. Due to time constraints between commissioning, readiness for ER10, and now transitioning to O2, these bypasses are being left in place until after O2. We still have active flow sensors to give us protection in the case of a real flow fault.
Everything else looks as expected.
On Thursday we (Sheila, Daniel & I) increased the thresholds on the TIDAL RED TRIGGER because the very low value kept the ISC driving the HEPI too long. It would appear there was only one long lock stretch with these settings before Sheila reported that the tidal was not coming on in some locks. Beam diverters?... When Nutsinee reverted the values of the RED TRIGGER thresholds, things seemed to start working properly again. I don't pretend to understand this, but it is pretty clear that reverting the values solved the problem. See the attached four-day trend: lowering the THRESH_ON value started the TIDAL running (I did not zoom in, but it looks correlated.) Also seen is that the REDTRIG_INMON has dropped to 153000 from the previous 165000. If this level (power?) change was enough to disable the 8000-count THRESH_ON setting, clearly something else must be in play. I need to understand the triggering better to really understand this. We don't want the ISI to trip, as it can take several minutes for the T240s to settle and be ready for complete ISI isolation, but obviously we need the TIDAL relief to engage.
It might be that the threshold wasn't exceeded when it tried to engage REDLOCK. The fundamental problem is that neither the arm power nor the trigger thresholds are scaled with the power-up, so the threshold needs to be set low enough to catch the lock at low power. Once you power up, the arm powers will then be far above threshold -- too far, it seems. It is probably best to figure out how to scale the NSUM value with the inverse of the input power. Alternatively, one could rewrite the trigger so that it scales automatically.
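Either variant of that scaling could be sketched as below. All names, the reference power, and the numbers are purely illustrative, not the actual trigger logic:

```python
def scaled_threshold(thresh_ref, p_in, p_ref=2.0):
    """Scale a trigger threshold proportionally with input power, so the
    trigger point tracks the power-up (p_ref is an assumed low-power
    reference at which thresh_ref was tuned)."""
    return thresh_ref * (p_in / p_ref)

def normalized_nsum(nsum, p_in, p_ref=2.0):
    """Alternative: normalize the arm-power (NSUM) signal by the inverse of
    input power and keep a fixed threshold instead."""
    return nsum * (p_ref / p_in)
```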
Opened FRS Ticket 6787. This has happened a few more times this evening.
TITLE: 11/21 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing
INCOMING OPERATOR: Jim
SHIFT SUMMARY: One lockloss during the shift. There was a small earthquake in CA that showed up on the seismic FOM around the same time (it read 0.1 um/s). This amplitude of earthquake doesn't usually break the lock, so I wonder if something else might have caused it. Had no issues relocking.
LOG:
9:28 Out of Observe to run a2l
9:36 a2l done. Back to Observe.
09:42 Lockloss
10:16 Arrived at NLN. Took time to damp bounce modes and ran a2l.
10:31 Observe
13:07 Out of Observe to run a2l
13:12 Observe
15:13 Out of Observe, ran a2l
15:18 back to Observe.
The dust alarm and the vacuum alarm (CP4) continue to beep. If your system is using the alarm handler, please make sure the threshold is set right so that it only beeps when things are critical. Otherwise operators are just going to keep ignoring these alarms, including the ones that might actually be critical.
Checked the alarm level settings for all site dust monitors, and all are set correctly. Dust monitor alarms should be investigated and documented in the aLOG. Use some common sense when aLOGing an event, e.g. if you know Karen is cleaning at End-? and the alarm goes off no need to log. All unknown alarms should be logged.
Maybe this is known, but I didn't find an alog showing this feature of ITMY.
Plot 1: ITMY
OSEMs LF and RT have a peak at 0.55Hz throughout the long lock yesterday, T0 = 11/19, 16:00UTC, 750 averages
The peak in the LF and RT OSEMs seems to suggest the optic is swinging at 0.55Hz in pitch all the time, and that that swinging gets kicked up at lock loss.
Plot 2: ITMY, ITMX, ETMY, ETMX, OSEMs LF and RT, all optics
The peak at 0.55Hz in OSEMs LF and RT is unique to ITMY.
It's not really helpful, because we don't yet have a solution, but here's the aLOG where this problem has been explored: LHO aLOG 31404. I have no suggestions as of yet on how to fix it, other than re-designing the loops to add more gain (read: *any* gain) at this frequency. Hugh is studying how far back in time this problem has persisted (it's been a problem since at *least* Oct 28), in hopes to find some sort of coincident hardware/electronics activity to blame. But so far, no dice.
Jeff K, Sheila, Jenne, Evan
While we were having an EQ, we took a quick look at what is wrong with ITMY.
Since Cheryl pointed out that this motion is seen in LF and RT, we became suspicious of a problem with top mass V and R damping. Indeed, there was a "bounce" filter engaged in ITMY top mass V damping, which must have been a mistake. Once we turned this off, the OLTF of the vertical damping looks very similar between ITMX and ITMY. With this filter on, ITMY damping was garbage.
I accepted the change in SDF. We also noticed while doing this that the top mass actuator state for ITMX was 1, while the rest of the quads were in state 2, so we switched that. The large peak at 0.55 Hz that Cheryl alogged is gone now, and hopefully we won't have the problem with very long ringdowns on ITMY.
Jeff, Darkhan, Kiwamu,
This is a quick report; some more details will be reported tomorrow.
We started preparation work to update the CAL-CS front-end model. We have created a new version of the matlab script that populates the actuator functions (21322). The new script can be found at
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/CALCS/quack_eyresponse_into_calcs.m
We made some tuning adjustments to get the script running without making autoquack unhappy. We didn't update the actual foton filters for ETMY today, although we made changes to the ITMY CALCS filters as tests. Attached is a first glance at the actuation functions in comparison to the O1 actuation functions. No surprises so far, except that the L3 stage has a different sign now. This will be followed up.
Here is a summary of the activities including yesterday's and today's.
We have ...
This means that we have completed the update of the new suspension filters in CALCS.
[The new CALCS actuator functions]
More explicitly, the suspensions are extracted from the tagged model,
/ligo/svncommon/CalSVN/aligocalibration/trunk/Common/externals/SUSdynamModelTags/quadmodelproduction-rev8274_ssmake4pv2eMB5f_fiber-rev3601_h1etmy-rev7915_released-2016-11-11.mat
which is specified in H1's parameter file at:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/params/H1params.conf
The DC values for the suspensions came out a bit different from what Darkhan reported (31677), at the sub-percent level. The matlab code reported the following values.
Ku = 8.1642e-08 N/ct
Kp = 6.8412e-10 N/ct
Kt = -4.3891e-12 N/ct
We will double check with Darkhan to see what actually affected these values, even though the discrepancies aren't significant.
[Adjustment for quacking the filters]
When running the matlab script
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER10/H1/Scripts/CALCS/quack_eyresponse_into_calcs.m
to update the actuator functions in CALCS, we ran into some problems on which we spent almost a day. This is a summary of what we encountered and how we mitigated it.
Otherwise, we didn't change the code, and therefore it performed the processes that are described in detail at 21322.
[Accuracy of the actuation functions in CALCS]
The first attachment is a plot showing the discrepancy between the desired and installed filters. UIM is the one with the largest discrepancy in 10-100 Hz: about a few % in magnitude and a few degrees in phase. Nevertheless, this shouldn't pose an issue for the resulting accuracy of DELTAL_EXTERNAL, as the UIM contributes very weakly above 1 Hz due to its limited actuation authority. The second attachment is another comparison plot showing the desired (full ss) and installed (discrete) filters. Finally, the third attachment shows the installed susnorm filters, which are forced to be flat at high frequencies.
[Copying the digital filter settings for ETMY]
This was done by running the existing code,
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/ER8/H1/Scripts/CALCS/copySus2Cal.py
Darkhan, Kiwamu,
Taking a further look at the K values, we came to the conclusion that the discrepancies are due to rounding error in the N/A values written in H1params.conf. As described above, this will not cause appreciable effects at the precision level we usually talk about (~1% level). We left them as they are.
It looks like we had another incident of the POP90 power changing (circled in the striptool screenshot), similar to what Stefan described in 31181. Is this still a problem with the demod, as Richard found the first time? If it's only an intermittent problem with the POPAIR 90 demod, we probably don't need to worry about fixing it before O2, because that is just a monitor and won't be used if we are able to close the beam diverters for the run.
Detchar question:
Did we have RF45 glitches around these times? The times are roughly 16:26, 16:30, 16:36 and 16:42 Nov 19th local time, which is 0:26, 0:30, etc. Nov 20th UTC.
We were able to fix some obvious problems with this, but the problem with the shield was not addressed, as we did not test it after the fact. This problem should only appear if someone was in the rack, though. It would be interesting to see if anything is on 9 or 45 MHz.
I took a look at the auxiliary channels we used to create DQ flags monitoring RF45 noise in O1, namely H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ and H1:ASC-AS_B_RF36_I_YAW_OUT_DQ. I created BLRMS of these channels in the same way we did in O1, to threshold on. In all of these plots we see a steady BLRMS over the 21 hours from 20th Nov 00:00-21:00 UTC, indicating that these channels do not see any form of the RF45 noise we are used to:
* Plot 1 - BLRMS of H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ between 10-100Hz in 60 seconds strides
* Plot 2 - BLRMS of H1:LSC-MOD_RF45_AM_CTRL_OUT_DQ between 10-100Hz in 1 second strides
* Plot 3 - BLRMS of H1:ASC-AS_B_RF36_I_YAW_OUT_DQ between 30-170Hz in 1 second strides
Hveto for this day indicated that H1:ASC-AS_A_RF45_Q_PIT_OUT_DQ was a good channel to veto noise with on Sunday. I therefore did a BLRMS of this channel:
* Plot 4 - BLRMS of H1:ASC-AS_A_RF45_Q_PIT_OUT_DQ between 5-100 Hz in 1 second strides
This channel does show excess noise at certain times of the day. If we were to threshold on this BLRMS using the 99.5% BLRMS value during this time period, we would capture the times Sheila mentions and also veto 8 of the top ten pycbc live triggers for this day.
Not conclusive that this noise is RF45 noise similar to what we saw in O1, investigating further...
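For reference, a band-limited RMS in fixed strides like the ones plotted above can be sketched as follows. This uses a crude FFT-mask band-pass purely for illustration; the actual O1 BLRMS pipeline presumably used proper time-domain filters:

```python
import numpy as np

def blrms(x, fs, f_lo, f_hi, stride_s):
    """Band-limited RMS: band-pass x (fs-sampled) to [f_lo, f_hi] Hz via a
    crude FFT mask, then return the RMS of each stride_s-second stride."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < f_lo) | (freqs > f_hi)] = 0.0   # zero out-of-band bins
    y = np.fft.irfft(X, n=len(x))
    n = int(stride_s * fs)                     # samples per stride
    nstrides = len(y) // n
    return np.sqrt(np.mean(y[: nstrides * n].reshape(nstrides, n) ** 2,
                           axis=1))

# Illustrative use: a 50 Hz tone sits inside a 10-100 Hz band but not
# inside a 110-120 Hz band.
t = np.arange(0, 10, 1 / 256.0)
x = np.sin(2 * np.pi * 50 * t)
print(blrms(x, 256, 10, 100, 1.0))   # ~0.707 (= 1/sqrt(2)) per stride
print(blrms(x, 256, 110, 120, 1.0))  # ~0 per stride
```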
Seemingly another incident: circa 2016-11-23 20:25:30 Z.