S. Dwyer, J. Kissel
While inspecting HAM6 regarding the broken OMC REFL Beam Diverter Dump, we took the opportunity to check out the fast shutter. We saw several things of concern:
(1) The cabling that emanates from the shutter itself looks (and has been previously confirmed to be) very close to the main IFO beam path. Koji indicated in his LHO aLOG 28969 that "the clearance does not look great in the [above linked] photo but in reality the beam is clearly away from the wire."
(2) One of these wires (the wire closer to the IFO beam) has a kink in it, but no visible burn marks, so we suspect this is from mechanical manipulation during install.
(3) OM3's readout cables are kinked in a stressed position to make room for the fast shutter, which is a little forward (-Y, away from the beam splitter) of the position indicated in the drawing, likely because of this interference.
(4) A small fleck of particulate on the inside of the "toaster" face, on the HR (back towards OM1) side of the shutter.
I attach picture collections of these things (admittedly the particulate pictures are not great -- we tried lots of times to get a good picture but failed).
Hopes:
Re (1): Can we find a way to route these cables below the beam line, instead of surrounding it?
Re (2): Same as (1).
Re (3): Can we replace OM3's OSEM cables with a 90 deg backshell, so as to clear room for the fast shutter and relieve stress on the cable?
Re (4): Hopefully this is not from the HR surface of the fast shutter optic.
We were motivated to look at the fast shutter wires by the incident on July 17th, where the wire from the fast shutter must have been clipping the beam to OM1 from HAM5. My original alog about this might not have spelled things out well enough, so here are some plots.
The first plot is from July 17 2017 5:51:00 UTC; this was after Jim, Cheryl and I figured out that the fast shutter was malfunctioning. We locked DRMI, and opened and closed the fast shutter several times. The top panel shows the state of the shutter: 0 is open, 1 is closed. The second panel shows AS_A_SUM, which is downstream of the shutter and goes to zero when the shutter is blocking the beam, as it should. The third panel shows AS_C, which is upstream of the shutter but downstream of the place where the wire is close to the beam (check Jeff's annotated photo). You can see that moving the shutter causes dips in the amount of light on AS_C, and that the wire must land in a slightly different place in the HAM5-to-OM1 beam each time, causing a different level of light to be seen on AS_C.
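For anyone who wants to reproduce these traces, here is a minimal sketch of fetching the same channels with gwpy over NDS2. The DQ channel names below are my guesses and would need to be checked against the DAQ; only the time window comes from this entry.

# Sketch only: re-fetch the shutter-test traces around the July 17 test.
# The channel names below are assumed, not confirmed.
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

channels = [
    'H1:SYS-MOTION_C_SHUTTER_G_STATE',  # fast shutter state (assumed name)
    'H1:ASC-AS_A_DC_SUM_OUT_DQ',        # AS_A sum, downstream of the shutter (assumed name)
    'H1:ASC-AS_C_SUM_OUT_DQ',           # AS_C sum, upstream of the shutter (assumed name)
]

# ~10 minutes around the DRMI open/close test
data = TimeSeriesDict.get(channels, '2017-07-17 05:46:00', '2017-07-17 05:56:00')

plot = Plot(*data.values(), separate=True, sharex=True)  # one panel per channel
plot.savefig('shutter_test.png')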
The second attachment shows that the shutter did seem to block the beam going to the AS WFS in the July 16th lockloss, before we had this malfunction. Chandra also checked the vacuum pressures for a spike in HAM6 similar to what happened when we burned the OMC in August 2016, and saw nothing. I had been wondering if the fast shutter might have failed during a lockloss where the OMC was unlocked, which could result in a lot of power landing on the beam dump. It seems like this didn't happen on July 16th.
Note, we last played with this shutter cable in Aug 2016. We struggled with getting this wire away from the beam at that time. The pictures in the Aug 2016 log (LHO aLOG 28969) show that we left the wire in a larger arc than the pictures show now. I suppose it's not so surprising that the wire has maybe migrated into the beam path over the numerous cycles of the last year.
S. Dwyer, J. Kissel
This completes the investigation of the broken beam dump found in HAM6 (see LHO aLOG 38918) -- the beam dump has been identified as the OMC REFL Beam Diverter Dump. This beam dump captures light *downstream* (toward the OMC) of the fast shutter. The dump was broken in a relatively clean vertical fracture which lines up with the dump's set screw, and when reconstructed appears to show a small pock-mark from an apparent small laser blast. While there's no way to prove why it broke, we have two main suspicions:
- The black glass is secured to the dump's mount with metal set screws. It has been suggested that all such black glass should be secured with PEEK set screws; otherwise, metal screws create undue stress on the glass, especially if over-tightened.
- During observation, it has become standard to leave the OMC REFL path's Beam Diverter CLOSED, i.e. blocking the path from hitting the OMC REFL QPDs and/or exiting HAM6 onto ISCT6. Thus, during some high-power lock loss in which both the fast shutter protection failed (leaving the OM2 > OM3 > OMC > OMC REFL path exposed) and the OMC was unlocked (sending lots of OMC REFL light down the REFL path instead of through the OMC, as it would if locked), all of that light would have landed on this closed diverter's dump.
Picture highlights and labeled drawings are attached as .jpgs, and a more complete collection of pictures is compressed as a .pdf.
Regarding the metal set screws on this dump: a survey of other similar beam dumps in the chamber indicates that *all* such dumps in HAM6 are secured with metal set screws (see 2017-10-11_HAM6_BumpDump_SetScrews.pdf).
Open questions:
- Was the beam diverter closed when the shutter failed and killed the OMC DCPDs in Aug 2016 (LHO aLOGs 28820 and 28842)?
- If closed, did we inspect this path / dump when we went in to fix the DCPDs? This picture from Corey's Resource Space collection shows that the dump was at least intact then.
Just in case, here's another labeled picture to show the beam path and clearance of the fast shutter to its high power beam dump. The above mentioned break is a result of the OMC REFL beam path, NOT the fast shutter path.
Now associated with FRS Ticket 9196.
Note, Corey and I both think (after looking at pictures) that these are the black PEEK set screws installed in the beam dumps shown.
Apologies -- in the above entry it says "when we killed the DCPDs in Aug 2016." However, we killed one of the OMC cavity mirrors, not the DCPDs (see, e.g., LHO aLOG 28820). We merely used the replacement of the entire OMC breadboard (necessary because the burned mirror is part of the monolithic structure) as a target of opportunity to install high quantum efficiency PDs (see LHO aLOG 28807). Sorry about the confusion!
TITLE: 10/12 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 13mph Gusts, 7mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY:
As Dave mentioned (38991), we have started the model for running the squeezer ASC, although we don't have cables to the actual hardware yet.
This is a simple model, which takes in the 16 AS_A and AS_B RF42 signals and processes them for sending to ZM1 and ZM2. The attached medm screenshot isn't finished yet but shows the basic functionality.
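To make the intended signal flow a little more concrete, here is a minimal numpy sketch of the processing chain the model implements: WFS quadrant signals combined into pitch/yaw, an input matrix, servo filters, and an output matrix to ZM1/ZM2. This is an illustration only, not the RCG model; the segment ordering, matrix values, and servo gain are placeholders.

# Illustration only (not the RCG model): signal flow from the 16 AS_A/AS_B RF42
# demod outputs to ZM1/ZM2 pitch and yaw drives.  Matrices and gain are placeholders.
import numpy as np

def quadrants_to_pit_yaw(segs):
    """Combine four quadrant signals into (pitch, yaw); assumes segs = [UL, LL, UR, LR]."""
    ul, ll, ur, lr = segs
    return np.array([(ul + ur) - (ll + lr),    # pitch
                     (ul + ll) - (ur + lr)])   # yaw

def sqz_asc_step(as_a_i, as_b_i, integrator, gain=1e-3):
    """One servo cycle: WFS quadrants -> input matrix -> integrator -> ZM drives.

    as_a_i, as_b_i : length-4 arrays of the I-phase RF42 quadrant signals
                     (the Q-phase quadrants make up the other 8 of the 16 inputs).
    integrator     : length-4 array of accumulated control (ZM1_P, ZM1_Y, ZM2_P, ZM2_Y).
    """
    sensors = np.concatenate([quadrants_to_pit_yaw(as_a_i),
                              quadrants_to_pit_yaw(as_b_i)])   # [A_P, A_Y, B_P, B_Y]
    input_matrix = np.eye(4)          # placeholder sensing matrix
    error = input_matrix @ sensors
    integrator += gain * error        # stand-in for the real servo filters
    output_matrix = np.eye(4)         # placeholder output matrix
    zm1_pit, zm1_yaw, zm2_pit, zm2_yaw = output_matrix @ integrator
    return (zm1_pit, zm1_yaw, zm2_pit, zm2_yaw), integrator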
Most of the difficulty was with getting the channel names to be what we wanted. We wanted to make this a new model rather than adding more parts to the ASC model, because the ASC model is already so big that we run into problems with the RCG. Because the squeezer WFS are really just an additional demod of the AS WFS, I wanted their naming to be consistent with the rest of the AS WFS channels (H1:ASC-AS_A_RF42 etc).
The model is broken into two blocks: one has the top name ASC and just has the standard WFS parts in it; the other is actually two nested blocks, SQZ with a common block called ASC inside of it, which has the matrices and servo filters. This was done so that the channel names would be SQZ-ASC. As Dave mentioned, we had to change the model name from h1sqzasc to h1sqzwfs to be able to use the name ASC in the model without causing a conflict with the actual ASC model's DCU_ID channel.
The models and the medm screen (still in progress) are in the squeezer userapps repo.
G. Moreno, J. Kissel, N. Robertson, B. Weaver
Gerardo and Norna finished installing and cabling up the temporary OSEMs on the current OFI for testing. I helped them buzz out the wiring into the ADC and center the OSEMs. More details later, but I want to get these numbers in the aLOG:
ADC Channel   OSEM   Open Light Current   OSEMINF Offset
ADC0_28       LF     32767                16383.5
ADC0_29       RT     30455.2              15227.6
ADC0_30       SD     30319.6              15159.8
Note, when we found the system, the temporary cables were hooked up incorrectly to the last DB9 input (analog 29-32, or digital 28-31) of h1sush56's ADC 1's AA chassis (i.e. U33) in the SUS-C7 rack (incorrectly -- according to D1002740). I've moved the cable to the last DB9 input of ADC 0's AA chassis (i.e. U34), as suggested in D1002740, and was able to use the new OFI MEDM infrastructure as designed (see LHO aLOG 38827 for front-end code; an aLOG is still pending on the new MEDM screens).
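For reference, the OSEMINF offsets quoted above are just half of each OSEM's measured open light current. A trivial sketch of the arithmetic (the sign convention actually applied in the OSEMINF filter banks may differ):

# Each OSEMINF offset is half the measured open light current.
# (The sign convention used in the actual OSEMINF banks may differ.)
open_light = {'LF': 32767.0, 'RT': 30455.2, 'SD': 30319.6}
offsets = {osem: olc / 2.0 for osem, olc in open_light.items()}
print(offsets)   # {'LF': 16383.5, 'RT': 15227.6, 'SD': 15159.8}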
Today, Travis and I put the ITMx main and reaction chains back together and bolted them up. This process included reseating the UIM and PUM magnet flag assemblies to their proper controls configuration and reattaching them to their 8 respective locations. We also had to apply First Contact to the CP-HR side and peel it, and peel the ITM-AR sheet, prior to pushing the structures together to their nominal 5 mm separation. The unit is still sitting just outside of the chamber, waiting for install later this week.
GariLynn and I also spent some time installing the AUX alignment system she procured for us in the ITMy chamber. It is currently installed/clamped to the in-chamber stool, pointing toward HAM4/5/6. We'll continue alignment of various targets tomorrow.
Daniel, Sheila, Dave:
we opted to rename the model to h1sqzwfs to resolve an EPICS channel naming duplication while preserving the correct fast channel names.
Because the original model had already been installed, I hand edited the rtsystab, H1.ipc and testpoint.par files to correct the name. The new model was added to the DAQ and to the CDS overview MEDM.
DAQ was restarted at 16:27 PDT to acquire h1sqzwfs
TITLE: 10/11 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
LOG:
16:05 Gerardo out to VPW
16:15 Betsy and Garilynn out to biergarten
17:27 Fil to MX to look for PEM test equipment
17:31 Gerardo out to HAM5
17:43 TJ out to HAM5
17:45 Jeff K and Sheila out to squeezer area
18:36 Jeff and Peter out of the PSL
19:10 Richard into CER
19:41 Fil to LVEA
20:15 Richard back into the CER
20:26 Jeff and Peter back into the PSL to check for water leaks
21:08 Travis and Betsy out to biergarten
21:31 Norna and Gerardo out to HAM5
There was some ambiguity as to whether the NPRO noise eater was on or off, because the MEDM screen did not change. Attached is a plot of the NPRO output as seen by the photodetector inside the front end laser, with the noise eater toggled on and off. Clearly the switch on the power supply works; however, the status indicator did not change. The output voltage of the photodetector was 0.210 V and the number of counts on the MEDM screen was ~ -335. Blocking the light to the photodiode resulted in the status indicator changing and an increase in the number of counts to ~ -5000. We know that the change in status occurs around ~ -1792 counts. I do not recall when this problem first manifested itself.
From the trend data, it looks like the noise eater signal started misbehaving about a week ago. The photodetector monitoring the output power of the NPRO seems to still be working. I would be inclined to check the field box that H1:PSL-MIS_NPRO_RRO_OUTPUT is attached to, as there may be a large DC offset in that monitor signal's output.
Sheila, Dave:
the first version of the h1sqzasc model was started on h1asc0 yesterday. We noticed an EPICS channel was a duplicate of one in h1asc (H1:ASC-DCU_ID), caused by top-naming an ASC block in h1sqzasc. We are working on this issue. Until it is resolved I'll not add this model to the DAQ.
Jeff and Dave:
late entry from yesterday evening. Jeff made a model change to h1susopo (binary IO) which required a DAQ restart at 17:05 PDT Tuesday. I have just renamed the partial second trend frame file to permit NDS1 to provide data when spanning the gap.
Kyle, Chandra
CP2's bottom fill valve (V1) had an obstruction and was allowing LN2 to sneak past, pressurizing and frosting the line that truck deliveries use to fill the Dewar. Both top and bottom fill valves felt closed, but Kyle opened and reseated both. Fixed.
Elevated HF in plots is clearly due to work being done in-chambers.
TITLE: 10/11 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.22 μm/s
QUICK SUMMARY:
15:30 Travis out to biergarten with Fil
I'm covering for Cheryl. Ed covered for me from 8:50 - 10:50. Cheryl covered 12:30- ?. Patrick covered from ?-2:30. And now I'm back to cover while Cheryl heads out to HAM2.
Had "Noise Eater out of range" verbal alarm this morning. Ed mentioned filing an FRS for this on behalf of Jason.
Day's Log Of Activities
As of 23:12UTC: (completed by Cheryl)
Patrick rebooting ISC at EX: Actually h1ecatx1 EPICS IOC (alog 38970).
Betsy, Hugh, TJ
Last week Betsy put the heater on the table and today Hugh checked its vertical center with an auto-level. Some washers were added to get it as close to center as possible, which ended up around 0.4mm high. Betsy and I then had to wiggle the assembly into place and rotate the entire gold ceramic holder to allow the screws on the outside to clear the OSEM brackets. The heater is currently sitting ~6mm away from the back of the SR3 optic and it is plugged into the feedthrough.
Picture attached.
Here are a few more pics. As TJ notes, the ROC front face is 5-6mm from the SR3 AR surface. It is locked down in this location.
Note, we followed a few hints from LLO's install:
https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=25831
Continuity checks at the feedthru still need to be made. Will solicit EE for their help.
Initial continuity test failed. Found issues with in-vacuum cable, power pins not pushed in completely. Pins were pushed in until a locking click was heard.
Readings are:
Larger Outer Pins, Heater: 66.9Ω
Inner pair (left-most looking at the connector from the air side), thermocouple: 105Ω
Found the center of SR3 (-X scribe) to be 230.2mm above the optical table. By sighting the top and bottom of the RoC heater, found its center to be at 231.4mm. Removed the available shim to put the center of the RoC heater at 229.6mm, for 230.2-229.6=0.6mm below perfect.
Hugh's comment reminded me that to get the heater to fit, Betsy and I added a 1mm washer to raise the height of the assembly. In total we have 4mm of washers (2x1.5mm & 1x1mm).
I measured six of the [D1600104 SR3 ROC Actuator, Ceramic Heater Assy] units at CIT on 4 March 2016, in their dirty state before baking. The serial number of the heater assy installed in LHO HAM5 is S1600180 -- see https://ics-redux.ligo-la.caltech.edu/JIRA/browse/ASSY-D1500258-002 S1600180 resistance = 66.8 Ohms on 4 March 2016. There is good agreement between the as-installed and pre-bake measurements.
J Warner, S Dwyer, S Cooper
We've been looking at what we should do during large earthquakes. The attached plots show the SEI Guardian state (State N), the L2 watchdog (L2 WDMON) channel, the L3 oplev, and the HEPI L4Cs (as the ground STSs saturated) during the magnitude 8.1 Mexico earthquake (GPS: 1188881718, alog 38570) for the chambers ITMX, ITMY, ETMX, and ETMY. During the earthquake, all of the ISIs tripped, as well as the ITMX suspension watchdog. From these plots we think that the decrease in amplitude of the oplev signal is due to the reduction in ground motion around this time, rather than damping of the ISI, as both the damping and the reduction in ground motion occurred at similar times.
We've also talked about the seismic watchdogs a bit, and why the ISIs trip after the isolation loops are shut off by the guardian. Both ETMs are in DAMPED right now, so we set the T240 threshold to 100 counts, and sure enough, the T240s started counting saturations but did not trip the watchdog. The attached plot shows the T240 saturation counts, the threshold, and the ST1 WD mon state. The dip on the top left plot is where we reduced the threshold, the spike on the bottom left is where the model started counting T240 saturations, and the flat line on the bottom right shows the watchdog didn't trip. This is as it should be.
However, what I think I've seen during ISI trips before is that the ST1 T240s saturate, ST1 trips, and ST2 runs for a little bit then trips. This results in ST1 getting whacked pretty hard. I'll try to see if that's what happened with this earthquake.
J. Kissel, inspired by conversation with S. Cooper, S. Dwyer, J. Warner
I'll remind folks that this collective SEI/SUS watchdog system has been built up sporadically over ~10 years, in fits and spurts, as reactionary and quick solutions to various problems by several generations of engineers and scientists. Also, the watchdog system is almost entirely designed to protect the hardware from a software failure, and was never designed to combat this latest suggestion -- protecting the hardware from the earth. So I apologize on behalf of that history for how clunky and confusing things are when discussing what to do in that situation.
Also, I'll remind people that there are three "areas" of watchdogs:
(1) in software, inside the user model -- typically defined by the subsystem experts
(2) in software, inside the upper-level IOP model -- typically defined by the CDS software group, with input from subsystem experts
(3) in hardware, either in the AA/AI chassis or built into the analog coil drivers -- typically defined during the initial aLIGO design phase
In my reply here, I'll only be referring to (1) & (2), though I still have an ECR pending approval regarding (3) -- see E1600270 and/or FRS Ticket 6100.
With all that primer done, here's what we should do with the suspension user watchdogs (1), and not necessarily just for earthquakes:
(a) Remove all connection between the SUS and ISI user watchdogs. The independent software watchdogs (2) should cover us in any bad scenario that that connection was designed to protect against.
(b) Update the RMS system to be *actually* an RMS, and especially one with a time constant we can define (see the sketch at the end of this entry). The RMS system that is currently installed is some frankenstein brought alive before bugs in the RCG were appreciated (namely LHO aLOG 19658), and before I understood how to use the RCG's RMS function in general. The independent software watchdog (2) is a good role model for this.
(c) We should rip out all USER MODEL usage of the DACKILL part. The way the DACKILL is used across suspension types and platforms with many payloads is confusing and inconsistent. Any originally designed intent of this part is now covered by the independent software watchdog.
(d) Once (b) is complete, we should tailor the time constants and band-passing to better match the digital usage of each stage. For example, the worst that can happen to a PUM stage is getting sent junk ASC and violin mode damping control feedback signals when the IFO has lost lock but the guardian has not yet figured it out and switched off the control.
(e part 1) Upon a watchdog trip, we should consider leaving the alignment offsets alone. Suddenly turning off alignment offsets often causes just as much of a kick to the system as whatever had originally set off the watchdog. HEPI has successfully implemented such a system.
(e part 2) We should re-think the interaction between the remaining USER watchdog system and the Guardian. Currently, after a watchdog trip the guardian state immediately jumps to "TRIPPED" and begins to shut off all outputs, bringing the digital control system to "SAFE."
(f) Add a "bypass" feature to the watchdog such that a user can request "at all costs, continue to try damping the top mass" in the case of earthquakes.
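Regarding point (b) above, here is a minimal sketch of what an RMS with a user-defined time constant could look like -- a single-pole low-pass of the squared signal followed by a square root. This is only to make the suggestion concrete; it is not the existing RCG RMS part, and the sample rate and time constant in the usage comment are arbitrary.

# Sketch of an RMS estimator with an explicit time constant: a single-pole
# low-pass of the squared signal, then a square root.  Illustration only.
import numpy as np

def running_rms(x, fs, tau):
    """Exponentially-averaged RMS of x (sampled at fs Hz) with time constant tau (s)."""
    alpha = 1.0 - np.exp(-1.0 / (fs * tau))
    ms = 0.0
    out = np.empty(len(x), dtype=float)
    for i, xi in enumerate(x):
        ms += alpha * (xi * xi - ms)   # low-passed mean square
        out[i] = np.sqrt(ms)
    return out

# e.g. a 10 s time constant on a 512 Hz watchdog signal (values are placeholders):
# rms = running_rms(sensor_counts, fs=512, tau=10.0)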
I'm attaching some more plots of what happened to the ISIs during this earthquake. The first plot is the saturation count time series for all seismometers and actuators for the test mass ISIs. All of the chambers saturated on the Stage 2 actuators first; this is the first green spike. This tripped the high gain DC-coupled isolation loops, and probably caused Stage 2 to hit its lockers. The watchdog stopped counting all saturations for 3 seconds (by design), then immediately tripped the damping loops on the saturated L4Cs or T240s. I'm not sure why the GS13s don't show up here.
The second plot I attach shows how long the different ETMX sensors were saturated. The L4Cs were saturated for about 45 seconds; the T240s and GS13s were saturated for minutes. The L4Cs never had their analog gains switched, but the chamber guardian should have switched the GS13s automatically. For this reason, if we increase the pause time in the watchdog (between shutting off the isolation loops and full shutdown), I think this shows that for this earthquake the ride-through time would need to be more than 45 seconds.
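For reference, the saturation durations quoted above can be pulled out of a sensor time series just by measuring contiguous runs of samples above a threshold. A minimal sketch, where the threshold value and data source are placeholders rather than the real watchdog settings:

# Sketch: estimate how long a sensor stayed saturated by measuring contiguous
# runs of samples above a threshold.  Threshold and data are placeholders.
import numpy as np

def saturation_stretches(x, fs, threshold):
    """Durations (seconds) of contiguous runs where |x| exceeds the threshold."""
    sat = np.r_[False, np.abs(np.asarray(x)) > threshold, False]
    edges = np.flatnonzero(np.diff(sat.astype(int)))
    starts, stops = edges[0::2], edges[1::2]
    return (stops - starts) / float(fs)

# e.g. max(saturation_stretches(l4c_counts, fs=256, threshold=32000)) would give
# the longest stretch -- roughly 45 s for the ETMX L4Cs in this event.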