Finally, I was able to get the Testo ComSoft5 software downloaded in order to look at the relative humidity logs for the desiccant cabinets in the clean and bake warehouse. Attached are the plots for the SUS DB1 cabinet and the 3IFO DB4 cabinet. The trending software plots back to the last time the data was plotted, which in this case was in June. I am glad to see that there is a bump in the RH just after the power outage in June, since prior months' plots are usually ~flatline and suspect. Also, for the last 6 months one of the RH meters has been hanging out near -0.4% RH, which we need to get to the bottom of, but at least the meter reads a positive ~20-40% RH when I move it outside, and it did during the last power outage.
Past monthly plots are in alogs 23049 22277 19413 18826 17611
FYI:
The data logger software ComSoft5 is now loaded on the Contamination Control HP Mini laptop which resides in a bin outside of my office.
Data is in the file Documents/Desiccant Data/
In the bin are the cables and RH meter boxes/manuals.
The keys to the desiccant cabinets (which are needed to pull out the little RH meter to plug into the computer for data retrieval) are in a variety of places (a central spot is TBD); however, it turns out that all keys work on all 4 cabinet locks in the C&B warehouse.
J. Kissel, E. Hall Checking in on the power delivered to the ITMY compensation plate after the strange drop in TCSY's front-end laser power yesterday (see LHO aLOG 28506), it looks like the laser power has mostly recovered. The power, as measured by the pick-off beam just before the up-periscope into the vacuum system (H1:TCS-ITMY_CO2_LSRPWR_MTR_OUTPUT), is roughly stable at 0.3075 [W], where it used to deliver a really stable 0.312 [W]. I attach two zoom levels of the same 4-day trend. There's also some weird, 10-min-period feature in the *minimum* of the minute-trend, where the reported power drops to 0.16 [W]. Given its periodicity, one might immediately suspect data viewer and data retrieval problems, but one can see in the un-zoomed trend that this half-power drop has been happening since the drop-out yesterday, and that it tracks the reported laser power even before the delivered power was increased back to nominal.
I'm wondering if this weird behaviour is due to the RF driver - we should try to swap in the spare driver soon to check this, because if the power output is really glitching low like that then it's likely to cause issues for commissioning, or possibly to fail altogether.
The temperature trend for the laser doesn't give any signs that it might have overheated.
The delivered power was recovered because I added an offset to the RS calibration so that it allows the right amount of power through. The laser itself is still outputting 42W.
Reports of higher laser noise are apparent in both the LO and HI power paths. HOMs are marginally higher in the HI power path, while the LO power path shows a rather significant increase (about 8%).
While the CDS crew had systems down yesterday, it appears the HEPI pump servo pressure sensors were affected and changed their readings. This happened before, in May 2015. Similar to then, as the loads on the building changed, the voltage offsets on the pressure transducers changed and their readings stepped.
The attached trends show the three buildings' HEPI differential pressures and their servo output drive to the VFD. You can tell that it is the pressure reading that is affected: as it goes high, the controller drive drops to bring the pressure back to the setpoint. When all the system loads are brought back online, the pressures drop lower and the drive recovers to bring the pressures back. The pressures come back to where they were, as evidenced on the CONTROL VOUT. At EndY, I had put the servo into manual and the pressure only recovered to 65 psi. This morning, I put the servo back in auto and now the drive is back to its pre-event level.
When the recovery happens and the pressures drop, the servo will drive hard to get the pressure back up, and this can be enough to trigger the fluid level shutdown. This happened yesterday in the corner, as evidenced by the pressure dropping to zero for a while. I expect the increased drive on the pump literally pulls fluid out of the reservoir faster than it can be replenished from the return flow (until the system reaches the new equilibrium) and trips the level switch. I set these pretty close to the nominal running state so as to not make as big a mess if there is a bad system leak. Other than resetting the corner level switch and re-energizing the controllers, the servos kept right on going through everything.
Barker and I will try to design an experiment to determine what, or how many, 'power offs' cause this. The solution is pretty simple though: when this sort of activity is going to occur, just put the servos into manual mode. We'll get alarms when the pressure changes, but we'll know they aren't real.
Stefan, Matt, Evan
We thought a bit about how thermally driven fluctuations in the polarization density of the test mass substrate could couple into DARM. We made a preliminary calculation of this noise for a single test mass biased at 380 V. This is not currently limiting the DARM sensitivity (since bias voltage reduction tests showed no effect), but could be important once aLIGO is closer to design sensitivity.
The test mass substrate and the reaction mass electrodes can be considered as a capacitor with position-dependent capacitance C ∝ 1/d, where d is the gap distance. The dielectric loss of the substrate will contribute a small imaginary part, C → C0(1 + iφ), to this capacitance. If a significant fraction of the electrostatic field density from the electrodes is located inside the test mass substrate, then the loss angle φ of the capacitance will be similar to the dielectric loss of the substrate.
If a sinusoidal voltage V = V0 exp(iωt) is applied to the bias electrode (and the quadrant electrodes are held at ground), then the charge accumulated on the electrodes is Q = C0(1 + iφ)V. The time-averaged power dissipated per cycle is then
⟨P⟩ = (1/2) ω φ C0 V0^2.
Since this is the dissipation of an effective resistance Re[Z] = φ/(ω C0), we therefore have a (fluctuation-dissipation) voltage noise
S_V(f) = 4 kB T Re[Z] = 2 kB T φ / (π f C0).
This voltage noise can then be propagated to test mass motion in the usual way. Using an ESD coefficient of 2×10^-10 N/V^2 and a gap distance of 5 mm, this fixes the capacitance at C0 = 2 pF.
The loss tangent of Suprasil is quoted as 5×10^-4 at 1 kHz. If this is assumed to be structural, then for a 380 V bias the expected displacement ASD is 8×10^-22 m/Hz^(1/2) at 100 Hz, falling like f^(-5/2). This is shown in the attachment, along with the aLIGO design curve.
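For reference, here is a minimal numeric cross-check of that estimate (a sketch, not the calculation code used for the attachment); the temperature (300 K) and test mass (40 kg) are assumed values not stated in this entry, and a simple free-mass response is used above the suspension resonances.

```python
# Minimal numeric check of the estimate above (not the authors' code).
# Assumed values not stated in the entry: T = 300 K, test mass M = 40 kg.
import math

kB    = 1.380649e-23   # Boltzmann constant [J/K]
T     = 300.0          # temperature [K] (assumption)
M     = 40.0           # test mass [kg] (assumption)

alpha = 2e-10          # ESD coefficient [N/V^2], F = alpha * V^2 (quoted above)
C0    = 2e-12          # electrode capacitance [F] (quoted above)
phi   = 5e-4           # Suprasil loss tangent, assumed structural (quoted above)
V0    = 380.0          # bias voltage [V]

f = 100.0
w = 2 * math.pi * f

# Johnson voltage noise of the lossy capacitor: S_V = 4 kB T Re[Z], Re[Z] = phi/(w C0)
S_V = 4 * kB * T * phi / (w * C0)           # [V^2/Hz]

# The DC bias linearizes the quadratic ESD force: dF = 2 alpha V0 dV
S_F = (2 * alpha * V0) ** 2 * S_V           # [N^2/Hz]

# Free-mass displacement above the suspension resonances
x_asd = math.sqrt(S_F) / (M * w ** 2)       # [m/rtHz]

print("%.1e m/rtHz at %g Hz" % (x_asd, f))  # ~8e-22 m/rtHz
```

The sqrt(S_V) ~ f^(-1/2) scaling times the 1/f^2 free-mass response reproduces the f^(-5/2) slope quoted above.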
We have not considered the capacitance of the ESD cables, or of nearby conductors (e.g., ring heaters). (The effect of the series resistors in the ESD cables was considered by Hartmut some time ago.)
The Dynasil web site quotes dielectric properties of fused silica from MIT's Laboratory for Insulation Research, circa 1970.
These values for the loss tangent are much lower, e.g., < 4e-6 at 100 Hz.
J. Kissel I've performed our regular charge measurements for the week of 2016-07-20. Because we had lock acquisition troubles leading up to ER9 with flipping the bias sign (see LHO aLOGs 28362, 28152), we've left the signs on both end test masses where they were about a month ago (see LHO aLOG 27890). As such, the charge continues to accumulate in the directions it was. ETMX (perhaps because we've had such a low IFO duty cycle) has begun to accumulate charge more rapidly than a few months ago. Thus, we'll *need* to flip the sign sooner than expected -- but it's still OK for now. ETMY is still within -10 [V] effective bias. However, if we'd like to use this charge mitigation method for ER10 and O2, we should continue trying to flip the bias regularly and debug what's wrong with the lock acquisition sequence when we do. Will try again when we have a little more patience (i.e. in the mornings when we first arrive, while we [the detector engineering and operations teams] get the IFO locked and ready for the commissioning vanguard).
TCS-Y: LASER possibly damaged
SEI: Discovered that power-ups/re-boots cause HEPI pump Diff pressure(s) to spike. This will be added to FRS.
CDS: No full report on Timing changes yet. A long lock stretch would be more telling.
PSL: Jenne reported that noise increased by a factor of 2. ISS settings are a bit different since more RF power is being fed to the AOM.
VAC: no issues reported
FAC: no issues reported
All accumulators found to be at 0 and GREEN
We didn't get to DC readout until after midnight tonight. There were no huge problems, but there are several things it would be helpful if someone could follow up on:
In the end we got to DC readout, and Evan, Carl and I had a look at the dither alignment for SRM pitch using POP 90, which is now in the ADS matrix. The script in userapps/asc/h1/scripts/ditherSRM.py sets things up, and we could see that we have an error signal that responds to moving SRM and has a zero crossing at a good alignment. We tried closing the loop, but we probably hadn't turned the gain up enough, and we broke lock for an unknown reason; by this time our alignment had become rather bad and we had a small EQ.
Since the optics are drifting so much tonight I'm not going to do an alignment now, but if the morning operator could start by doing initial alignment when the charge measurements are over, that would help us get started tomorrow.
The yaw alignment of the ISIs has a temperature dependence (I don't remember the number, but it is something like the expansion coefficient of aluminum, 2.2e-5/K, times some geometry factor that is going to be slightly less than 1). If the platform was running with a DC offset, turning it off and then back on could produce a drifting yaw alignment.
--
At 8:09 UTC (on July 20th) we had a large glitch in interferometer signals and MC3 saturated. This seems like a suspect for some kind of a suspension glitch that would be worth following up on.
The glitch is at 8:08:24.5 UTC, and it seems to have originated from the MC3 M1 LF OSEM. The glitch there is 500 counts peak-to-peak, while it's about 40 in the RT OSEM. It looks like one short glitch that caused some ringing for three seconds, which was visible in T2 and T3 as well, but not nearly as large as LF and RT.
For what it's worth - the overall trend of these signals did not change from before the little glitch event. OSEM signals look healthy.
We had a similarly huge glitch a few seconds before 23:30:30 UTC (also July 20th). It doesn't seem to be the same MC3 LF problem that Andy found. In the first attachment you can see the glitch from last night showing up clearly in MC3 M1 LF; in the second attachment you can see the very similar glitch from this afternoon without anything happening in MC3. For this afternoon's glitch I also looked at the top mass OSEMs for all the other optics and don't see much happening.
Also, all 3 MC mirrors react to the 8:08 glitch; this is because we don't have much of a cut-off on our MC WFS loops. Adding more cut-offs might be a good idea.
We've had several unexplained and sudden locklosses lately, and I wonder if whatever causes these huge glitches also causes some locklosses.
Upgrade of Timing Firmware
Daniel, Ansel, Jim, Dave
Most of today was spent upgrading the entire timing system to the new V3 firmware. This did not go as smoothly as planned, and took from 9am to 6pm to complete. By the end of the day we had reverted the timing master and the two CER fanouts to the original code (the end station fanouts were not upgraded). We did upgrade all the IRIG-B fanouts, all the IO Chassis timing slaves, all the comparators and all the RF amplifiers.
The general order was: stop all front end models and power down all front end computers, upgrade the MSR units, upgrade the CER fanouts, upgrade PSL IO Chassis (h1psl0 was restarted, followed by a DAQ restart), upgrade all CER slaves (at this point the master was reverted to V2), at EY we upgraded IRIG-B and slaves (skipping fanout), at MY we upgraded the PEM IO Chassis, at EX we did the same as EY and at MX the same as MY.
All remaining front ends were now powered up. The DAQ was running correctly, but the NDS were slow to complete their startup. Additional master work in the MSR required a second round of restarts; at this point the comparators which had been skipped were upgraded and the CER fanouts were downgraded. Finally, after h1iopsush56 cleared a long negative IRIG-B error, all systems were operational.
During these rounds of upgrades FEC and DAQ were restarted several times.
Addition of Beam Splitter Digital Camera
Richard, Carlos, Jim
An analog camera was replaced with a digital video GIGE-POE camera at the Beam Splitter.
New ASC code
Sheila:
new h1asc code was installed and the DAQ was restarted.
Reconfigured RAID for ldas-h1-frames file system
Dan:
The ldas-h1-frames QFS file system was reconfigured for faster disk access. This is the file system exported by h1ldasgw0 for h1fw0's use. After the system was upgraded, we reconfigured h1fw0 to write all four frame types (science, commissioning, second and minute). As expected, h1fw0 was still unstable at the 10 minute mark, similar to the test when h1fw0 wrote to its own file system. h1fw0 was returned to its science-frames-only configuration.
Just curious -- it's my impression that the point of "upgrading the timing system to the new V3 firmware" was to reprogram all timing system hardware's LED lights so as to not blink every second or two, because we suspect that those LEDs are somehow coupling into the IFO and causing 1 to 2 Hz combs in the interferometer response. The I/O chassis, IRIG-B, comparators, and RF amplifiers are a huge chunk of the timing system. Do we think that this majority will be enough to reduce the problem to negligible, or do we think that because the timing master and fanouts -- which are the primary and secondary distributors of the timing signal -- are still at the previous version that we'll still have problems?
With the I/O chassis timing upgrade we removed the separate power supply from the timing slaves in the corner-station LSC chassis and in both the EX and EY ISC chassis. Hopefully the timing work will eliminate the need for the separate supplies.
Could you clarify that last comment? Was yesterday's test of changing the LED blinking pattern done in parallel with removal of separate power supplies for timing and other nearby electronics?
Ansel has been working with Richard and Robert over the past few months testing out separate power supplies for the LEDs in several I/O chassis (regrettably, there are no findable aLOGs showing results about this). Those investigations were apparently enough to push us over the edge of going forward with this upgrade of the timing system. Indeed, as Richard says, those separate power supplies were removed yesterday, in addition to upgrading the firmware (to keep the LEDs constantly ON instead of blinking) on the majority of the timing system.
To clarify Jeff's comment: testing on separate power supplies was done by Brynley Pearlstone, and information on that can be found in his alog entries. Per his work, there was significant evidence that the blinking LEDs were related to the DARM comb, but changing power supplies on individual timing cards did not remove the comb. This motivated changing the LED logic overall to remove blinking. I'm not sure whether the upgrades done so far will be sufficient to fix the problem. Maybe Robert or others have a better sense of this? Notable alog entries from Bryn: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25772 https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25861 https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27202
I have gone through and manually compared FScan spectrograms and normalized spectra for the 27 magnetometer channels that are processed daily: https://ldas-jobs.ligo-wa.caltech.edu/~pulsar/fscan/H1_DUAL_ARM/H1_PEM/fscanNavigation.html, to look for changes following Tuesday's timing system intervention, focusing on the lowest 100 Hz, where DARM 1-Hz (etc.) combs are worst. Because of substantial non-stationarity that seems to be typical, it's not as straightforward as I hoped it would be to spot a change in the character of the spectra. I compared today's generated FScans (July 20-21) to an arbitrary choice two weeks ago (July 6-7). But these six channels seemed to improve w.r.t. narrow line proliferation:
H1_PEM-CS_MAG_EBAY_LSCRACK_X_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_X_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_Y_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_Z_DQ
H1_PEM-EY_MAG_EBAY_SUSRACK_X_DQ
H1_PEM-EY_MAG_VEA_FLOOR_X_DQ
(before & after figures attached)
while these four channels seemed to get worse w.r.t. narrow lines:
H1_PEM-EX_MAG_VEA_FLOOR_Z_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_X_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_Y_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_Z_DQ
In addition, many of today's spectrograms show evidence of broad wandering lines and a broad disturbance in the 40-70 Hz band (including in the 2nd attached figure).
Weigang Liu has results in for folded magnetometer channels for UTC days July 18 (before changes), July 19-20 (overlapping with changes) and July 21 (after changes): compare the 1st and 4th columns of plots for each link below.
CS_MAG_EBAY_SUSRACK_X - looks slightly worse than before the changes
CS_MAG_EBAY_SUSRACK_Y - periodic glitches higher than before
CS_MAG_EBAY_SUSRACK_Z - periodicity more pronounced than before
CS_MAG_LVEA_VERTEX_X - periodic glitches higher than before
CS_MAG_LVEA_VERTEX_Y - periodic glitches higher than before
CS_MAG_LVEA_VERTEX_Z - periodic glitches higher than before
EX_MAG_EBAY_SUSRACK_X - looks better than before
EX_MAG_EBAY_SUSRACK_Y - looks better than before
EX_MAG_EBAY_SUSRACK_Z - looks slightly worse than before
EY_MAG_EBAY_SUSRACK_Y - looks slightly better after changes
EY_MAG_EBAY_SUSRACK_Z - looks the same after changes
(Weigang ran into a technical problem reading July 21 data for EY_MAG_EBAY_SUSRACK_X.) A summary of links for these channels from ER9 and from this July 18-21 period can be found here.
Jeff K, Alastair (by phone), Nutsinee
Jeff noticed that TCS CO2Y was throwing a bunch of guardian error messages, which led him to investigate, and he found that the CO2Y actual output power had been lower since the laser recovered from maintenance activity this morning. A timeseries shows that CO2Y power dropped out at 15:41 UTC (8:41 local time) and never came back to its nominal (~57W). The chiller temperature, which is read off the front end, was down at the same time, indicating CO2Y went down due to some front-end maintenance activity. The supply current to CO2Y was also low compared to CO2X (19A vs 22A), suggesting that the low power output was real. And indeed, we went out and measured about 40W at the table (we stuck a handheld power meter right before the first steering mirror).

We don't know why the front-end maintenance today would affect CO2Y output power (CO2X is fine, by the way). On the plus side, the heating profile looks good on the FLIR camera, which means nothing was misaligned and we can still use the CO2Y laser. The beam dump that was in front of the FLIR screen hasn't been put back, so be mindful if you ever want to blast full power through the rotation stage.

I commented out the output power fault checker part in the TCS power guardian so that ISC_LOCK can still tell it to go places. I added a temporary +1 degree offset to the minimum angle parameter for the CO2Y rotation stage calibration so it would go to the requested powers. We requested the TCS CO2 laser stabilization guardian to DOWN because it's not usable given the current output power.
Quick conclusion: CO2Y is still functional. The reason for power loss is to be investigated.
J. Kissel, S. Dwyer, N. Kijbunchoo, J. Bartlett, V. Sandberg A few checks we forgot to mention to Nutsinee last night: - Nutsinee and I checked the flow rate on the mechanical flowmeter for both the supply and return for TCSY chiller line, and it showed (what Nutsinee said was) nominal ~3 Gallon per minute. This was after we manually measured the power to be 41 W coming out of the laser head to confirm the EPICs readout. - Sheila and I went to the TCS chillers on the mezzanine. Their front-panel display confirmed the ~20.1 deg C EPICs setting for temperature. - On our way out, we also noticed that a power supply in the remote rack that is by the chillers marked "TCSY" was drawing ~18 mA, and was fluctuating by about +/- 2mA. We didn't know what this meant, but it was different than the power supply marked TCSX. We didn't do anything about it. - The RF oscillator mounted in that same remote rack appeared functional spitting out some MHz frequency sine wave. Sheila and I did not diagnose any further than "merp -- looks like an oscillator; looks on; looks to be programed to spit out some MHz sine wave."
Alastair, Nutsinee
Today I went and checked the CO2Y power supply set points. The voltage limit is set to 30V and the current limit is set to 28A. The same goes for the CO2X power supply. These are the correct settings, which means the CO2Y laser is really not behaving properly.
For all the suspensions for which the guardian sets the SDF file to down, I have changed the safe.snap to be a softlink to the down.snap in the userapps repository. This means we have one less SDF file to worry about maintaining for these suspensions. If anyone can do a similar job for the rest of the suspensions (i.e., make sure that safe.snap is a softlink to some file that gets maintained), things will be a little easier next time we restart all models.
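As an illustration of the re-linking described here (a sketch only, not the actual commands used; the target-area path and the model list are assumptions for illustration), something like the following would do it:

```python
# Sketch only: re-point a SUS model's safe.snap at its maintained down.snap.
# The target-area path and the model names below are assumptions.
import os

TARGET_DIR = '/opt/rtcds/lho/h1/target'                      # assumed front-end target area
BURTFILES  = '/opt/rtcds/userapps/release/sus/h1/burtfiles'  # maintained snap files

def link_safe_to_down(model):
    """Replace <model>'s safe.snap with a soft link to <model>_down.snap."""
    safe = os.path.join(TARGET_DIR, model, model + 'epics', 'burt', 'safe.snap')
    down = os.path.join(BURTFILES, model + '_down.snap')
    if not os.path.exists(down):
        raise IOError('missing ' + down)
    if os.path.lexists(safe):
        os.remove(safe)                 # drop the old file or stale link
    os.symlink(down, safe)              # safe.snap -> ..._down.snap

# Example, for an illustrative list of models:
for model in ('h1susprm', 'h1suspr3', 'h1susmc2', 'h1sussrm'):
    link_safe_to_down(model)
```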
Along the same lines, I made a script which should allow us to right-click on an EPICs field and ask that it be accepted into the currently loaded SDF file.
This script is based on "instafoton.py" in /opt/rtcds/userapps/trunk/cds/utilities, with some help from create_fe_sdf_source_file_list.py (/opt/rtcds/userapps/trunk/cds/h1/scripts/). The idea is that it can be added to the MEDM drop-down menu like instafoton. The script is instaSDF.py in /opt/rtcds/userapps/trunk/cds/utilities (also attached).
The script works by changing the snap file which is currently loaded in SDF (as reported by the SDF_LOADED EPICs record, e.g. safe.snap), and then asking SDF to reload. As of this writing, the script is "toothless" in that it does not take the final steps to replace the existing snap file or reload; the code required to do this is commented out.
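For orientation, here is a minimal "toothless" sketch of that idea. This is not the contents of instaSDF.py; the record prefix, target-area path, snap-file line format, and reload record are all assumptions, so consult the actual script in /opt/rtcds/userapps/trunk/cds/utilities.

```python
# Sketch of the instaSDF.py idea described above -- not the actual script.
# SDF_LOADED is mentioned above; the record prefix, target path, snap-file
# line format, and SDF_RELOAD record/value are assumptions.
import os
from epics import caget, caput

TARGET_DIR = '/opt/rtcds/lho/h1/target'   # assumed front-end target area

def accept_into_loaded_sdf(model, dcuid, channel, value, dry_run=True):
    """Put one channel/value into the snap file currently loaded by SDF,
    then ask SDF to reload. dry_run=True keeps it 'toothless'."""
    prefix = 'H1:FEC-%d_' % dcuid                       # assumed record naming
    loaded = caget(prefix + 'SDF_LOADED')               # e.g. 'safe.snap'
    snap = os.path.join(TARGET_DIR, model, model + 'epics', 'burt', loaded)
    line = '%s 1 %s\n' % (channel, value)               # assumed BURT-style line
    if dry_run:
        print('Would add to %s: %s' % (snap, line.strip()))
        return
    with open(snap, 'a') as f:                          # real script edits in place
        f.write(line)
    caput(prefix + 'SDF_RELOAD', 1)                     # assumed reload request
```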
To do:
While I still don't follow why we went from wanting more SDF files for various states, including an all-sacred SAFE.snap, to now just wanting one file, with JK's instruction I made more soft links in the SUS burt files. I guess this picks up where Sheila left off: for any SUS that was sitting on the OBSERVE file this morning during IFO DOWN, I set the safe.snap to be softlinked to the observe snap. So, none of the file names that the SDF overview says the SUSes are looking at are the actual files -- everything is a soft link to some other file.
Someone else will have to suggest what the PI files should be softlinked to; I didn't touch those.
The following list is the state of the situation. Good luck.
lrwxrwxrwx 1 controls controls 67 Jan 13 2015 h1susauxasc0/h1susauxasc0epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxasc0_safe.snap
lrwxrwxrwx 1 controls controls 67 Jan 13 2015 h1susauxb123/h1susauxb123epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxb123_safe.snap
lrwxrwxrwx 1 controls controls 65 Jan 13 2015 h1susauxex/h1susauxexepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxex_safe.snap
lrwxrwxrwx 1 controls controls 65 Jan 13 2015 h1susauxey/h1susauxeyepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxey_safe.snap
lrwxrwxrwx 1 controls controls 65 Jan 13 2015 h1susauxh2/h1susauxh2epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxh2_safe.snap
lrwxrwxrwx 1 controls controls 66 Jan 13 2015 h1susauxh34/h1susauxh34epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxh34_safe.snap
lrwxrwxrwx 1 controls controls 66 Jan 9 2015 h1susauxh56/h1susauxh56epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susauxh56_safe.snap
lrwxrwxrwx 1 sheila.dwyer controls 62 Jul 19 19:00 h1susbs/h1susbsepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susbs_down.snap
lrwxrwxrwx 1 sheila.dwyer controls 64 Jul 19 19:22 h1susetmx/h1susetmxepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmx_down.snap
lrwxrwxrwx 1 controls controls 66 Jul 27 2015 h1susetmxpi/h1susetmxpiepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmxpi_safe.snap
lrwxrwxrwx 1 sheila.dwyer controls 64 Jul 19 19:28 h1susetmy/h1susetmyepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmy_down.snap
lrwxrwxrwx 1 controls controls 66 Jul 27 2015 h1susetmypi/h1susetmypiepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susetmypi_safe.snap
lrwxrwxrwx 1 betsy.weaver controls 67 Jul 20 09:02 h1sushtts/h1sushttsepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sushtts_observe.snap
lrwxrwxrwx 1 betsy.weaver controls 65 Jul 20 09:03 h1susim/h1susimepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susim_observe.snap
lrwxrwxrwx 1 controls controls 65 May 2 16:22 h1susitmpi/h1susitmpiepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susitmpi_safe.snap
lrwxrwxrwx 1 sheila.dwyer controls 64 Jul 19 19:02 h1susitmx/h1susitmxepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susitmx_down.snap
lrwxrwxrwx 1 sheila.dwyer controls 64 Jul 19 18:59 h1susitmy/h1susitmyepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susitmy_down.snap
lrwxrwxrwx 1 betsy.weaver controls 66 Jul 20 09:05 h1susmc1/h1susmc1epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susmc1_observe.snap
lrwxrwxrwx 1 sheila.dwyer controls 63 Jul 19 19:06 h1susmc2/h1susmc2epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susmc2_down.snap
lrwxrwxrwx 1 betsy.weaver controls 66 Jul 20 09:05 h1susmc3/h1susmc3epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susmc3_observe.snap
lrwxrwxrwx 1 betsy.weaver controls 66 Jul 20 08:55 h1susomc/h1susomcepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susomc_observe.snap
lrwxrwxrwx 1 sheila.dwyer controls 63 Jul 19 19:19 h1suspr2/h1suspr2epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1suspr2_down.snap
lrwxrwxrwx 1 sheila.dwyer controls 63 Jul 19 19:04 h1suspr3/h1suspr3epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1suspr3_down.snap
lrwxrwxrwx 1 sheila.dwyer controls 63 Jul 19 19:03 h1susprm/h1susprmepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susprm_down.snap
lrwxrwxrwx 1 betsy.weaver controls 66 Jul 20 09:01 h1sussr2/h1sussr2epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sussr2_observe.snap
lrwxrwxrwx 1 betsy.weaver controls 66 Jul 20 08:59 h1sussr3/h1sussr3epics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sussr3_observe.snap
lrwxrwxrwx 1 sheila.dwyer controls 63 Jul 19 19:20 h1sussrm/h1sussrmepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sussrm_down.snap
lrwxrwxrwx 1 betsy.weaver controls 67 Jul 20 09:14 h1sustmsx/h1sustmsxepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sustmsx_observe.snap
lrwxrwxrwx 1 betsy.weaver controls 67 Jul 20 09:15 h1sustmsy/h1sustmsyepics/burt/safe.snap -> /opt/rtcds/userapps/release/sus/h1/burtfiles/h1sustmsy_observe.snap
J. Kissel, B. Weaver, S. Dwyer
Note that this is a conscious paradigm shift regarding safe.snaps. Since all SUS control models are pointing to either down.snaps or OBSERVE.snaps, there will be requested output when the front end comes up from a reboot. HOWEVER, the SUS output will still be protected, because the SUS USER model watchdog (based on the SEI watchdog) by design comes up tripped, preventing all output. It requires human intervention to be untripped. Even if for some reason this user watchdog fails, we still have the IOP software watchdog that is independently watching the SUS OSEMs, adding another layer of protection further preventing any unlikely hardware damage. I recommend that SEI do this as well, and then they can reduce their number of files that need maintaining to one.
Why, then, do we have some SUS point to OBSERVE.snap, and others point to down.snap? The suspensions which *have* a down are those manipulated by the ISC and DRMI guardians (i.e. MC2, PRM, PR2, PR3, SRM, and all of the BSC SUS besides the TMTS). Thus, there is quite a bit of difference between their nominal low-noise settings (i.e. OBSERVE) and their "let's start the lock acquisition sequence" settings (i.e. down). These down.snaps were created only for these suspensions in order to keep the scope limited enough that we could accomplish what needed accomplishing when the settings for these suspensions were in question. Thus, there remain OBSERVE.snaps for the typically ISC-untouched SUS (i.e. MC1, MC3, SR3, SR2, the IMs, OMs and RMs, the TMTS, and the OMC), because these were created for *every* front-end model prior to O1. Since we're more often in a low-noise-like state for these SUS than in the SAFE state, the OBSERVE files have been better maintained.
Note also that we're continuing to unmonitor channels and settings that are manipulated by guardian. Thus there should be a decreasing number of DIFFs between any of these models' "ground" state and their nominal low-noise state. This comes at a price, however -- if guardian code is changed, then those SDF settings must be manually re-monitored, which is difficult at best to remember, especially for the new filter module monitoring system where we have individual control over each button in the bank. Further, if there are settings which are not monitored that are regularly changed by operators (like alignment sliders), or things that occasionally get tuned by scripts (like A2L DRIVEALIGN matrix elements, or Transmon QPD offsets), then they have the potential for coming up wrong. Thus, as a part of "SDF reconciliation" before a planned reboot, we should look at all channel diffs, not just those that have been masked to be monitored.