WP7162 Arnab, Dave:
This morning we upgraded h1lsc0 to a faster model (the same model as h1suse[x,y]). The reason for the upgrade is not that we need the faster cores per se; rather, we need the additional cores to run the new h1sqz LSC squeezer model. The original was a 6-core machine, allowing 4 user models; the new one is a 10-core machine, allowing 8 user models (two cores are taken by the operating system and the IOP model, leaving the rest for user models).
To expedite the install, we took the DTS x1susex computer, added a second GeFanuc 5565 card and used that for h1lsc0.
Unfortunately despite our best efforts, the upgrade glitched the corner station Dolphin fabric, and the models had to be restarted on all the Dolphin'ed corner station front ends, which includes the PSL.
S. Dwyer, J. Kissel
While inspecting HAM6 regarding the broken OMC REFL Beam Diverter Dump, we took the opportunity to check out the fast shutter. We saw several things of concern:
(1) The cabling that emanates from the shutter itself looks (and has previously been confirmed to be) very close to the main IFO beam path. Koji indicated in his LHO aLOG 28969 that "the clearance does not look great in the [above linked] photo but in reality the beam is clearly away from the wire."
(2) One of these wires (the wire closer to the IFO beam) has a kink in it, but no visible burn marks, so we suspect this is from mechanical manipulation during install.
(3) OM3's readout cables are kinked in a stressed position to make room for the fast shutter, which sits a little forward (-Y, away from the beam splitter) of the position indicated in the drawing, likely because of this interference.
(4) There is a small fleck of particulate on the inside of the "toaster" face, on the HR (back towards OM1) side of the shutter.
I attach picture collections of these things (admittedly the particulate pictures are not great -- we tried many times to get a good picture but failed).
Hopes:
Re (1): Can we find a way to route these cables below the beam line, instead of surrounding it?
Re (2): Same as (1).
Re (3): Can we replace OM3's OSEM cables with a 90 deg backshell, so as to clear room for the fast shutter and relieve stress on the cable?
Re (4): Hopefully this is not from the HR surface of the fast shutter optic.
We were motivated to look at the fast shutter wires by the incident on July 17th, where the wire from the fast shutter must have been clipping the beam going from HAM5 to OM1. My original alog about this might not have spelled things out well enough, so here are some plots.
The first plot is from July 17 2017 5:51:00 UTC; this was after Jim, Cheryl and I figured out that the fast shutter was malfunctioning. We locked DRMI, and opened and closed the fast shutter several times. The top panel shows the state of the shutter (0 is open, 1 is closed). The second panel shows AS_A_SUM, which is downstream of the shutter and goes to zero when the shutter is blocking the beam, as it should. The third panel shows AS_C, which is upstream of the shutter but downstream of the place where the wire is close to the beam (see Jeff's annotated photo). You can see that moving the shutter causes dips in the amount of light on AS_C, and that the wire must land in a slightly different place in the HAM5 to OM1 beam, causing a different level of light to be seen on AS_C.
The second attachment shows that the shutter did seem to block the beam going to the AS WFS in the July 16th lockloss, before we had this malfunction. Chandra also checked vacuum pressures for a spike in HAM6 pressure similar to what happened when we burned the OMC in August 2016, and saw nothing. I had been wondering if the fast shutter might have failed during a lockloss where the OMC was unlocked, which could result in a lot of power landing on the beam dump. It seems like this didn't happen on July 16th.
Note: we last played with this shutter cable in Aug 2016. We struggled with getting this wire away from the beam at that time. The pictures in the Aug 2016 log (28969) show that we left the wire with a larger arc than it has in the current pictures. I suppose it's not so surprising that the wire has maybe migrated into the beam path over the numerous shutter cycles of the last year.
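For completeness, here is a minimal sketch of how these traces could be pulled back for further looks, assuming gwpy/NDS access; the channel names below are illustrative guesses and should be checked against the DAQ channel list.

# Sketch only: fetch the shutter state and AS port signals around the
# 2017-07-17 05:51 UTC shutter test and stack them in panels as in the plots.
# Channel names are placeholders -- verify against the actual channel list.
from gwpy.timeseries import TimeSeriesDict
import matplotlib.pyplot as plt

channels = [
    'H1:SYS-MOTION_C_SHUTTER_G_STATE',   # shutter state readback (placeholder name)
    'H1:ASC-AS_A_DC_SUM_OUT_DQ',         # AS_A_SUM, downstream of the shutter (placeholder)
    'H1:ASC-AS_C_SUM_OUT_DQ',            # AS_C, upstream of the shutter (placeholder)
]
data = TimeSeriesDict.get(channels, '2017-07-17 05:45:00', '2017-07-17 06:00:00')

fig, axes = plt.subplots(len(channels), 1, sharex=True, figsize=(8, 6))
for ax, name in zip(axes, channels):
    ax.plot(data[name].times.value, data[name].value, label=name)
    ax.legend(loc='upper right', fontsize='small')
axes[-1].set_xlabel('GPS time [s]')
fig.savefig('fast_shutter_asc_check.png')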
S. Dwyer, J. Kissel
This completes the investigation of the broken beam dump found in HAM6 (see LHO aLOG 38918) -- the beam dump has been identified as the OMC REFL Beam Diverter Dump. This beam dump captures light *down stream* (toward the OMC) of the fast shutter. The dump was broken in a relatively clean vertical fracture which lines up with the dump's set screw, and when reconstructed it appears to show a small pock-mark from an apparent small laser blast.
While there's no way to prove why it broke, we have two main suspicions:
- The black glass is secured to the dump's mount with metal set screws. It has been suggested that all such black glass should be secured with PEEK set screws; if not, these metal screws create undue stress on the glass, especially if over-tightened.
- During observation, it has become standard to leave the OMC REFL path's Beam Diverter CLOSED, i.e. blocking the path from hitting the OMC REFL QPDs and/or exiting HAM6 onto ISCT6. Thus, during some high-power lock loss in which both
  - the fast shutter protection failed, leaving the OM2 > OM3 > OMC > OMC REFL path exposed, and
  - the OMC was unlocked, sending lots of OMC REFL light down the REFL path (instead of through the OMC, if it were locked),
  the closed diverter's dump would have taken the full power and could have been damaged.
Picture highlights and labeled drawings are attached as .jpgs, and a more complete collection of pictures is compressed as a .pdf. Regarding the metal set screws on this dump: a survey of other similar beam dumps in the chamber indicates that *all* such dumps in HAM6 are secured with metal set screws (see 2017-10-11_HAM6_BumpDump_SetScrews.pdf).
Open questions:
- Was the beam diverter closed when the shutter failed and killed the OMC DCPDs in Aug 2016 (LHO aLOGs 28820 and 28842)?
- If closed, did we inspect this path / dump when we went in to fix the DCPDs? This picture from Corey's Resource Space collection shows that the dump was at least intact then.
Just in case, here's another labeled picture to show the beam path and clearance of the fast shutter to its high-power beam dump. The above-mentioned break is a result of the OMC REFL beam path, NOT the fast shutter path.
Now associated with FRS Ticket 9196.
Note, Corey and I both think (after looking at pictures) that these are the black PEEK set screws installed in the beam dumps shown.
Apologies -- in the above entry it says "when we killed the DCPDs in Aug 2016." However, we killed one of the OMC cavity mirrors, not the DCPDs (see, e.g., LHO aLOG 28820). We merely used the replacement of the entire OMC breadboard (necessary because the burned mirror is part of the monolithic structure) as a target of opportunity to install high quantum efficiency PDs (see LHO aLOG 28807). Sorry about the confusion!
TITLE: 10/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 13mph Gusts, 7mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY:
As Dave mentioned (38991), we have started the model for running the squeezer ASC, although we don't have cables to the actual hardware yet.
This is a simple model, which takes in the 16 AS_A and AS_B RF42 signals and processes them for sending to ZM1 and ZM2. The attached medm screenshot isn't finished yet but shows the basic functionality.
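As a conceptual sketch of that signal flow (the matrices and the filter below are placeholders, not the installed settings):

# Conceptual sketch of the squeezer WFS processing: 16 RF42 demod signals ->
# input matrix -> per-DOF servo filters -> output matrix -> ZM1/ZM2 drives.
# Matrix values and the filter are placeholders, not the installed settings.
import numpy as np

rf42 = np.zeros(16)                # AS_A/AS_B RF42 segment signals (one time step)
input_matrix = np.zeros((4, 16))   # 16 segments -> 4 error signals (e.g. A/B pitch, yaw)
output_matrix = np.zeros((4, 4))   # filtered error signals -> ZM1/ZM2 pitch and yaw

def servo_filter(err, gain=1.0):
    # Stand-in for the per-DOF servo filter banks (really IIR filter modules).
    return gain * err

error = input_matrix @ rf42
control = np.array([servo_filter(e) for e in error])
zm_drive = output_matrix @ control   # signals bound for ZM1 and ZM2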
Most of the difficulty was in getting the channel names to be what we wanted. We wanted to make this a new model, rather than adding more parts to the ASC model, because the ASC model is already so big that we run into problems with the RCG. Because the squeezer WFS are really just an additional demod of the AS WFS, I wanted their naming to be consistent with the rest of the AS WFS channels (H1:ASC-AS_A_RF42, etc.).
The model is broken into two blocks. One has the top name ASC and just contains the standard WFS parts; the other is actually two nested blocks, SQZ with a common block called ASC inside of it, which contains the matrices and servo filters. This was done so that those channel names would be SQZ-ASC. As Dave mentioned, we had to change the model name from h1sqzasc to h1sqzwfs to be able to use the name ASC in the model without causing a conflict with the actual ASC model's DCU_ID channel.
The models and the medm screen (still in progress) are in the squeezer userapps repo.
G. Moreno, J. Kissel, N. Robertson, B. Weaver
Gerardo and Norna finished installing and cabling up the temporary OSEMs on the current OFI for testing. I helped them buzz out the wiring into the ADC and center the OSEMs. More details later, but I want to get these numbers in the aLOG:

ADC Channel   OSEM   Open Light Current [ct]   OSEMINF Offset [ct]
ADC0_28       LF     32767                     16383.5
ADC0_29       RT     30455.2                   15227.6
ADC0_30       SD     30319.6                   15159.8

Note: when we found the system, the temporary cables were hooked up to the last DB9 input (analog 29-32, or digital 28-31) of h1sush56's ADC 1's AA chassis (i.e. U33) in the SUS-C7 rack -- incorrect according to D1002740. I've moved the cable to the last DB9 input of ADC 0's AA chassis (i.e. U34) as suggested in D1002740, and was able to use the new OFI MEDM infrastructure as designed (see LHO aLOG 38827 for the front-end code; an aLOG on the new MEDM screens is still pending).
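As a quick consistency check on those numbers (assuming the usual convention that the OSEMINF offset is half the open-light current, with the sign handled in the filter module):

# The OSEMINF offsets above are just half of each open-light current
# (the sign applied in the OSEMINF filter module is a separate convention).
open_light = {'LF': 32767.0, 'RT': 30455.2, 'SD': 30319.6}
for osem, olc in open_light.items():
    print(f'{osem}: open light {olc:.1f} ct -> offset {olc / 2.0:.1f} ct')
# LF: 16383.5, RT: 15227.6, SD: 15159.8 -- matching the table above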
Today, Travis and I put the ITMx main and reaction chains back together and bolted them up. This process included reseating the UIM and PUM magnet flag assemblies to their proper controls configuration and reattaching them to their 8 respective locations. We also had to apply First Contact to the CP-HR side and peel it, along with the ITM-AR sheet, prior to pushing the structures together to their nominal 5 mm separation. The unit is still sitting just outside of the chamber, waiting for install later this week.
GariLynn and I also spent some time installing the AUX alignment system she procured for us in the ITMy chamber. It is currently installed/clamped to the in-chamber stool, pointing toward HAM4/5/6. We'll continue alignment of various targets tomorrow.
Daniel, Sheila, Dave:
we opted to rename the model to h1sqzwfs to resolve an EPICS channel naming duplication while preserving the correct fast channel names.
Because the original model had already been installed, I hand edited the rtsystab, H1.ipc and testpoint.par files to correct the name. The new model was added to the DAQ and to the CDS overview MEDM.
DAQ was restarted at 16:27 PDT to acquire h1sqzwfs
TITLE: 10/11 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
LOG:
16:05 Gerardo out to VPW
16:15 Betsy and Garilynn out to biergarten
17:27 Fil to MX to look for PEM test equipment
17:31 Gerardo out to HAM5
17:43 TJ out to HAM5
17:45 Jeff K and Sheila out to squeezer area
18:36 Jeff and Peter out of the PSL
19:10 Richard into CER
19:41 Fil to LVEA
20:15 Richard back into the CER
20:26 Jeff and Peter back into the PSL to check for water leaks
21:08 Travis and Betsy out to biergarten
21:31 Norna and Gerardo out to HAM5
There was some ambiguity as to whether the NPRO noise eater was on or off, because the MEDM screen did not change. Attached is a plot of the NPRO output as seen by the photodetector inside the front-end laser with the noise eater toggled on and off. Clearly the switch on the power supply works; however, the status indicator did not change. The output voltage of the photodetector was 0.210 V and the number of counts on the MEDM screen was ~ -335. Blocking the light to the photodiode resulted in the status indicator changing and the number of counts moving to ~ -5000. We know that the change in status occurs around ~ -1792 counts. I do not recall when this problem first manifested itself.
From the trend data, it looks like the noise eater signal started misbehaving about a week ago. The photodetector monitoring the output power of the NPRO seems to be still working. I would be inclined to check the field box that H1:PSL-MIS_NPRO_RRO_OUTPUT is attached to, as there may be a large DC offset in that monitor signal's output.
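A minimal way to do that sort of trend check, as a sketch (the '.mean,m-trend' suffix is the usual NDS2 minute-trend naming, but worth verifying for your NDS setup):

# Sketch: pull a couple of weeks of minute trends of the noise-eater monitor
# and look for the step where the signal started misbehaving.
from gwpy.timeseries import TimeSeries

chan = 'H1:PSL-MIS_NPRO_RRO_OUTPUT.mean,m-trend'   # minute trend of the monitor channel
trend = TimeSeries.get(chan, '2017-09-28 00:00:00', '2017-10-12 00:00:00')

plot = trend.plot()
plot.gca().set_ylabel('counts')
plot.savefig('npro_noise_eater_trend.png')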
Sheila, Dave:
the first version of the h1sqzasc model was started on h1asc0 yesterday. We noticed an EPICS channel that was a duplicate of one in h1asc (H1:ASC-DCU_ID), caused by top-naming an ASC block in h1sqzasc. We are working on this issue; until it is resolved I'll not add this model to the DAQ.
J Warner, S Dwyer, S Cooper
We've been looking at what we should do during large earthquakes. The attached plots show the state of the SEI Guardian (State N), the L2 watchdog (L2 WDMON) channel, the L3 oplev, and the HEPI L4Cs (used because the ground STSs saturated) for the M8.1 Mexico earthquake (GPS 1188881718; alog 38570), for the chambers ITMX, ITMY, ETMX, ETMY. During the earthquake, all the ISIs tripped, as well as the ITMX suspension watchdog. From these plots we think that the decrease in amplitude of the oplev signal is due to the reduction in ground motion around this time, rather than damping of the ISI, as both the damping and the reduction in ground motion occurred at similar times.
We've also talked a bit about seismic watchdogs and why the ISIs trip after the isolation loops are shut off by the guardian. Both ETMs are in damped right now, so we set the T240 threshold to 100 counts, and sure enough, the T240s started counting saturations but did not trip the watchdog. The attached plot shows the T240 saturation counts, the threshold, and the ST1 WD mon state. The dip on the top left plot is where we reduced the threshold, the spike on the bottom left is where the model started counting T240 saturations, and the flat line on the bottom right shows the watchdog didn't trip. This is as it should be.
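For reference, a toy version of the counting described above (a sketch only, not the actual ISI watchdog code):

# Toy version of the saturation counting: a "saturation" is any sample whose
# magnitude exceeds the user-set threshold, and the front end keeps a running
# count.  Whether that count actually trips the watchdog depends on the
# platform state and the configured limits (sketch only, not the real WD code).
import numpy as np

def count_saturations(signal, threshold):
    # Number of samples whose magnitude exceeds the threshold.
    return int(np.sum(np.abs(signal) >= threshold))

quiet_t240 = 80.0 * np.random.randn(4096)        # fake stretch of quiescent T240 data
print(count_saturations(quiet_t240, 100.0))      # threshold lowered to 100 ct -> nonzero count
print(count_saturations(quiet_t240, 32768.0))    # nominal full-scale threshold -> zero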
However, what I think I've seen during previous ISI trips is the ST1 T240s saturate, ST1 trips, and ST2 runs for a little bit and then trips. This results in ST1 getting whacked pretty hard. I'll try to see if that's what happened with this earthquake.
J. Kissel, inspired by conversation with S. Cooper, S. Dwyer, J. Warner
I'll remind folks that this collective SEI / SUS watchdog system has been built up sporadically over ~10 years, in fits and starts, as reactionary and quick solutions to various problems by several generations of engineers and scientists. Also, the watchdog system is almost entirely designed only to protect the hardware from a software failure, and was never designed to combat this latest suggestion -- protecting the hardware from the earth. So I apologize on behalf of that history for how clunky and confusing things are when discussing what to do in that situation.
Also, I'll remind people that there are three "areas" of watchdogs:
(1) in software, inside the user model -- typically defined by the subsystem experts
(2) in software, inside the upper level IOP model -- typically defined by the CDS software group, with input from subsystem experts
(3) in hardware, either in the AA/AI chassis, or built into the analog coil drivers -- typically defined during the initial aLIGO design phase
In my reply here, I'll only be referring to (1) & (2), though I still have an ECR pending approval regarding (3) -- see E1600270 and/or FRS Ticket 6100.
With all that primer done, here's what we should do with the suspension user watchdogs (1), and not necessarily just for earthquakes:
(a) Remove all connection between the SUS and ISI user watchdogs. The independent software watchdogs (2) should cover us in any bad scenario that that connection was designed to protect against.
(b) Update the RMS system to be *actually* an RMS, and especially one for which we can define a time constant (see the sketch after this list). The RMS system that is currently installed is some frankenstein brought alive before bugs in the RCG were appreciated (namely LHO aLOG 19658), and before I understood how to use the RCG's RMS function in general. The independent software watchdog (2) is a good role model for this.
(c) We should rip out all USER MODEL usage of the DACKILL part. The way the DACKILL is used across suspension types and platforms with many payloads is confusing and inconsistent. Any originally designed intent of this part is now covered by the independent software watchdog.
(d) Once (b) is complete, we should tailor (and lower) the time constants and the band-passing to better match the digital usage of the stage. For example, the worst that can happen to a PUM stage is getting sent junk ASC and violin mode damping control feedback signals when the IFO has lost lock but the guardian has not figured it out and switched off control.
(e part 1) Upon watchdog trip, we should consider leaving the alignment offsets alone. Suddenly turning off alignment offsets often causes just as much of a kick to the system as what had originally set off the watchdog. HEPI has successfully implemented such a system.
(e part 2) We should re-think the interaction between the remaining USER watchdog system and the Guardian. Currently, after a watchdog trip the guardian state immediately jumps to "TRIPPED" and begins shutting off all outputs, bringing the digital control system to "SAFE."
(f) Add a "bypass" feature to the watchdog such that a user can request "at all costs, continue to try damping the top mass" in the case of earthquakes.
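To illustrate what (b) is asking for, here is a minimal sketch of an RMS with an explicit, user-defined time constant (my own illustration, not a proposed RCG implementation):

# Minimal sketch of an RMS estimator with an explicit time constant: square the
# input, exponentially average with time constant tau, take the square root.
# Illustration only -- not a proposed RCG implementation.
import numpy as np

def running_rms(x, fs, tau):
    # RMS of x (sampled at fs Hz) with exponential averaging time tau (seconds).
    alpha = 1.0 - np.exp(-1.0 / (fs * tau))   # single-pole smoothing coefficient
    mean_square = 0.0
    out = np.empty(len(x))
    for i, sample in enumerate(x):
        mean_square += alpha * (sample * sample - mean_square)
        out[i] = np.sqrt(mean_square)
    return out

# e.g. rms = running_rms(osem_signal, fs=16384.0, tau=1.0)   # 1 s time constant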
I'm attaching some more plots of what happened to the ISIs during this earthquake. The first plot is the saturation count time series for all seismometers and actuators for the test mass ISIs. All of the chambers saturated on the Stage 2 actuators first; this is the first green spike. This tripped the high-gain DC-coupled isolation loops, and probably caused Stage 2 to hit its lockers. The watchdog stopped counting all saturations for 3 seconds (by design), then immediately tripped damping loops on the saturated L4Cs or T240s. I'm not sure why the GS13s don't show up here.
The second plot I attach shows how long ETMX was saturating different sensors. The L4Cs were saturated for about 45 seconds; the T240s and GS13s were saturated for minutes. The L4Cs never had their analog gains switched, but the chamber guardian should have switched the GS13s automatically. For this reason, if we increase the pause time in the watchdog (between shutting off the isolation loops and full shutdown), I think this shows that for this earthquake the ride-through time would need to be more than 45 seconds.
On the 5th I opened the soft covers on HAM6 to lock the ISI and noticed that one of the black glass pieces was broken. I'm posting a couple of pictures here. Betsy, JeffK and others have looked to see if we noticed this during the last HAM6 vent, but none of the pictures in the alog show this glass in any detail.
Do you think you should write an incident report on this?
No, I don't think this warrants an incident report, since I believe this is a laser burn which broke the glass, not some other "accident". It does warrant an FRS, however, since we will likely need a better "fix" for this. Further details:
On Friday, Keita and I inspected this broken beam dump on the North side of the HAM6 table in chamber. I have more pictures attached below. When I carefully pushed the 2 pieces of broken glass back together (like a puzzle) on the mount, I could see a hole of missing glass (PIC 1 and 2 below). As well, there is a shard of black glass sitting on the table about 8 inches in front of the beam dump, maybe "launched" from the hole site of the glass piece (PIC 3, circled in BLUE). As well, there are 2 other burn marks on this same piece of black glass off to the left visible in the first pics.
Sheila is going to help me with another round of inspections here and help determine beam propagation. We will also look into timing with potential shutter/toaster fussiness, and HAM6 pressure changes, to home in on when this may have occurred since April 2016 when it was deemed healthy.
While inspecting HAM6 last week, I also looked a bit at the "toaster" fast shutter. A quickish look at its wires did not reveal any burn spots. Pictures attached.
As well, there is another black glass beam dump on the NW side of the table, close to the viewport, which shows what appear to be some burn markings (last picture). Likely another FRS addition.
For further follow up on the broken beam dump -- identified as the OMC REFL Beam Diverter dump -- check out LHO aLOG 38998 and FRS Ticket 9196. Regarding the dump which has a "glancing blow" burn mark, that's the dump catching the OM3 TRANS beam. Indeed, the burn mark appears on the *outside* of the functional part of the dump. So, I don't think there's a need for action there (and hence no need for an FRS ticket).
We have verified that X-Arm RFM IPC signals go to ETMX and Y-Arm to ETMY.
The h1omc model sends DARM_CTRL to ETMX with a matrix element of +1.0 and to ETMY with an element of -1.0. The DARM_CTRL signal is currently the CAL_LINE_SUM signal, a ~30 Hz sine wave with a small amplitude of 0.4 counts. The attached dataviewer plot shows the sending trace (CH3: H1:LSC-CAL_LINE_SUM), the receiver on h1susetmx (CH4: H1:SUS-ETMX_L3_ISCINF_L_IN1), and the receiver on h1susetmy (CH5: H1:SUS-ETMY_L3_ISCINF_L_IN1). The plot spans 1/16th of a second on the time axis. As can be seen, ETMX is in phase with CAL_LINE_SUM and ETMY is 180 deg out of phase.
The relevant part of h1omc model and the matrix settings are shown in an attachment.
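As a way of quantifying the in-phase / out-of-phase statement, a rough sketch (assuming the three signals have been exported over the same span at the same rate, e.g. from dataviewer or NDS):

# Sketch: the sign of the inner product between the sent calibration line and
# each receiver gives the effective matrix polarity (+1 in phase, -1 for 180 deg).
import numpy as np

def ipc_polarity(sender, receiver):
    # +1 if receiver is in phase with sender, -1 if 180 degrees out of phase.
    return int(np.sign(np.dot(sender - sender.mean(), receiver - receiver.mean())))

# expect +1 for H1:SUS-ETMX_L3_ISCINF_L_IN1 and -1 for H1:SUS-ETMY_L3_ISCINF_L_IN1
# relative to H1:LSC-CAL_LINE_SUM:
# print(ipc_polarity(cal_line_sum, etmx_iscinf_l), ipc_polarity(cal_line_sum, etmy_iscinf_l))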
To be absolutely sure of which Dolphin port the cable should be connected to, we drove out to EY and verified with h1susey.
Model processing time changes resulting from computer upgrade (see attached minute trend plot).
Ch1: h1ioplsc0: Did not noticeably speed up; in fact it looks like it may have slowed by 1uS. The min-max range has tightened up.
Ch2: h1lsc: Sped up from 38 +/- 4uS to 25 +/- 0uS (12us (32%) faster with very little jitter)
Ch3: h1omc: Sped up from 22 +/-2uS to 15 +/-0uS (7uS (32%) faster with very little jitter)
Ch4: h1lscaux: Sped up from 19 +/-1uS to 15 +/-0uS (4uS (21%) faster with very little jitter)
Ch5: h1omcpi: Sped up from 8 +/-1uS to 6 +/-0uS (2uS (25%) faster with no jitter)