We iterated between IO_MB_1 and IO_MB_2 without moving the EOM, as planned. One difference from the plan: I had misunderstood that Cheryl had placed two irises on the main beam path (there was only one), so we used the EOM and the single iris as our fiducials.
As expected, this left the ALS path totally misaligned such that it doesn't clear the Faraday; that will be adjusted later.
I installed the RGA electronics module on the RGA at CP4 and energized its filament in preparation for a preliminary (above ambient temperature) scan tomorrow. I also asked Bubba to restore the VEA temperature setpoint to its pre-CP4 bake value. Following tomorrow's preliminary RGA scan, Ken D. will decouple the duct heaters and control panel while Mark D. and Tyler G. will remove the insulation skirt, aluminum foil wrap and added insulation from the blanketed enclosure.
Gerardo M., Kyle R.
We resumed roughing this morning and switched over to the turbos this evening. For tonight, the Vertex+YBM+XBM combined vacuum volumes are being pumped via the YBM MTP (backed by its QDP80) and the Vertex MTP (backed by its QDP80). All adjacent "dead volumes" such as the space between opened and blanked-off valves were isolated from the roughed Vertex+YBM+XBM at the time of the switch over to Turbo pumping. Tomorrow, we will add the XBM MTP and switch all three MTPs over to be backed by their scroll pumps. We will then shut down the QDP80s at that time.
IFO folks -
Note that Gerardo and I won't be in until after the 0900 meeting tomorrow (Thursday) but no special vacuum-related restrictions are in effect. Sheila's work (PSL laser into HAM6) etc. is A-OK etc....
I changed the temperature at Mid Y to 69 degrees F.
I guess that should have read, I changed the setpoint to 69 degrees F. The temperature at Mid Y is currently 71.5 degrees and dropping.
[TJ, Jamie]
Today we had an instance of one of the guardian nodes (ALIGN_IFO) being killed by systemd because it was taking too long to reload its code, which caused it to run long on its watchdog check-in. The problem was actually not in loading the code, but in committing the new code to the git code archive.
The guardian code archives live on NFS (/ligo/cds), and for whatever reason access to this resource can be very slow. When access is slow, it can cause guardian to run long on its systemd watchdog notification, which will cause systemd to kill the process. We ran into this problem yesterday on system boot, where archive access was too slow and many of the nodes weren't able to check in in time and were killed. We resolved that issue yesterday by increasing the startup timeout (systemd property 'TimeoutStartSec=') to 3 minutes, to ride out the massive traffic jam on boot.
Since the issue that TJ ran into today was just for reload during normal operation, it's the main loop watchdog timeout that's the issue, not the startup timeout. I've increased the watchdog timeout to 20 seconds ('WatchdogSec=20s') which should be plenty of time to access the archive on the NFS, without letting us suffer a dead node for too long if something else goes wrong.
We can probably increase the timeout even more if need be, but if it's really taking more than 30 s to access files on the NFS then maybe there's something really wrong with how the NFS is configured...
I discovered that it's also possible for guardian to inform systemd that it needs more time, and to temporarily extend the watchdog timeout. The following would extend the timeout to 20 seconds temporarily:
sd_notify("EXTEND_TIMEOUT_USEC=20000000")
This would allow for giving the code reload + archive more time, while keeping the normal run watchdog timeout tight.
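As a rough illustration only (not guardian's actual implementation): a minimal sketch of how a slow reload could be bracketed by an extended timeout, assuming the python-systemd bindings (systemd.daemon.notify) for the sd_notify calls, and with reload_code()/commit_to_archive() as hypothetical stand-ins for what the node actually does:

from systemd import daemon

def reload_with_extra_time(node, extra_sec=20):
    # Ask systemd for more time before the next required watchdog check-in
    # (honored by sufficiently recent systemd versions).
    daemon.notify("EXTEND_TIMEOUT_USEC=%d" % int(extra_sec * 1e6))
    node.reload_code()        # hypothetical: re-import the node's state code
    node.commit_to_archive()  # hypothetical: the slow git commit on NFS
    # Resume the normal watchdog check-in cadence.
    daemon.notify("WATCHDOG=1")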
I think having the watchdog timeout be even as long as 30 seconds or so is still probably OK, so we'll stick with that solution for now. I just mention it as an alternative.
Today, Travis and I continued alignment of the main and reaction chains of the ETMX QUAD. We watched the OPLEV beams to guide us in pitch and yaw. While there are a few reflections of this oplev beam, we convinced ourselves that we had the one from the first (HR) surface. However, now that it is on the OPLEV diode, the sum is lower than it was when the suspension was last in chamber a couple of months ago, so we will do a PCAL beam check first thing tomorrow morning, before getting too far ahead of ourselves. All 6 top main chain BOSEMs were attached and centered around their flags in the X, Y, and Z directions.
Meanwhile, Fil joined us to perform HIPOT testing right at the connector on the AERM mass. The problematic BIAS and LR pins failed the test, so we will embark on disassembling the connector and reterminating those pins per procedures. TBC...
TITLE: 05/09 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
LOG:
I've spent some time doing tilt-decoupling measurements on the BSCs in the last couple of days, trying to figure out how to properly do the gain matching for the ST1 Z drive from RZ T240 subtraction. For these measurements I put both stages in high blends, with both stages fully isolated, turned off all sensor corrections, and used a 4th-order Butterworth bandpassed white-noise excitation in awggui to drive at the error point of the isolation loops. In this configuration, I would expect the St2 motion to be the same as the St1 motion up to about 1 Hz.
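For reference, a sketch of the sort of band-limited white-noise drive described above, assuming numpy/scipy (the real excitation was generated in awggui; the sample rate and corner frequencies here are placeholders, not the values actually used):

import numpy as np
from scipy import signal

fs = 512.0               # sample rate in Hz (placeholder)
duration = 600.0         # seconds of excitation (placeholder)
f_lo, f_hi = 0.02, 2.0   # bandpass corners in Hz (placeholders)

# N=2 yields a 4th-order Butterworth bandpass (the order doubles for bandpass designs)
sos = signal.butter(2, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
white = np.random.randn(int(fs * duration))
drive = signal.sosfiltfilt(sos, white)   # zero-phase band-limited white noise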
While doing this I looked at the Stage 2 motion, and there is some funny business there. The first attached plot shows the transfer function from the St1 Z T240 to the St1 X/Y T240s & St2 X/Y GS13s. At low frequency, a Z to RX/RY coupling would show up in X/Y as a g/w^2 slope. This might be what is happening on ITMY in the first image, below 100 mHz. Above 100 mHz, something else is going on, but at least the St2 motion generally agrees with the St1 motion.
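For context on that slope (standard tilt-to-horizontal coupling, stated here as a reminder): a tilt theta(w) puts a component of gravity along the horizontal sensing axis, so the inertial sensor sees an apparent acceleration g*theta(w), which in displacement units is

x_apparent(w) ~ g*theta(w)/w^2

i.e. a frequency-flat tilt coupling shows up in the X/Y readouts with a g/w^2 slope.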
In the second image, ITMX below 100 mHz looks more like a 1/f slope, and the stuff above 100 mHz is even worse. For some reason St2 seems to be moving a lot more than St1.
For both plots, Z-X tfs are red, Z-Y tfs are blue, solid lines are St1 drive to St2 sensors, dashed lines are St1 drive to St1 sensors.
It's possible that the Z/RX-RY tilt decoupling is bad, so I'll try to remeasure tomorrow. I think this might also be me rediscovering a problem RichM found a while ago in the SEI log and that I didn't understand when we were actively talking about it. Probably also related to an issue that Camillo and Arnaud were looking at for Camillo's BSC model.
I'm assuming the peak at 0.55 Hz in the St2 dofs is some interaction with the quad.
HAM & BSC spectra look as expected. The one with elevated high-frequency noise is ETMX (this suspension is currently being worked on).
Added 50mL to the Crystal Chiller
Diode Chiller was fine (no water added).
Filters were clean & free of debris.
After changing the fluid filter 2+ weeks ago, the fluid has been recirculating only at the Pump Station, with a valve providing back pressure at 80 psi. This is about the nominal pressure during normal running and was done to check for leaks at the just-changed filters.
With approval from vent captain, the valve configuration has been changed to return flow to the VEA. The flow is still bypassing the HEPI Actuators and their flex lines from the 4-way valves. The Pump Stations are running under local servo control at 20psi. Happy to run in this way for a week or two and then will be ready to switch to nominal configuration when allowed.
09-05-18 15:17 Christine in the LVEA
09-05-18 15:17 Tyler and Mark in the LVEA and heading to MY
09-05-18 15:18 Ken in the LVEA
09-05-18 15:19 Hugh running measurements on ITM
09-05-18 15:35 Carlos oplog Chris to the LVEA
09-05-18 15:35 Kyle in the LVEA
09-05-18 15:42 contractor to Maintenance shop
09-05-18 16:57 Mark and Tyler craning IPs near IMC beam tube
09-05-18 17:02 MarkP to the Vault
09-05-18 17:02 Richard to the LVEA
09-05-18 17:08 JeffB done in LVEA heading to both mids and ends to cap off the old house vacuum lines
09-05-18 17:21 Jim using both ITMX and ITMY SEI
09-05-18 17:22 TJ using SRM and PRM for guardian testing
09-05-18 17:28 MarkP and Fil to the Vault
09-05-18 17:34 Betsy and Travis to EX
09-05-18 18:01 Richard to CER to test a fiber
09-05-18 18:53 JeffB back from out buildings
09-05-18 18:53 beamtube IP craning is done
09-05-18 18:54 Mark and Tyler and Chris to LVEA to crane the Genie Lift
09-05-18 18:54 Travis and Betsy back from EX
09-05-18 19:14 Mark Tyler and Chris done craning in the LVEA
09-05-18 19:14 Mark and Tyler moving the Snorkel lift from MX to MY
09-05-18 19:38 MarkP and Fil back from the Vault
Conclusion--Driving the HEPIs +-500 um/urad suggests no interferences
Details--Jim unlocked the HEPIs for the ITMs yesterday and, as we want to be sure there are no interferences, the platforms were stroked in X, Y, RX & RY (X, Y & Z should suffice) by 500 um/urad.
The ISIs were put into DAMPED as tilting of the platform will trip on the T240s. With the HEPI ISOLATED, offsets were ramped into the isolation filters. With all the cartesian loops closed with large DC gain, interferences impeding motion will be evident in the local sensors as the loops are satisfied. Interferences will be evident with clipping and coordinated slope changes--see 40171.
The four attached plots show ITMX first with horizontal drives and then vertical drives; ITMY is shown in the third and fourth plots. The lower graph has the 4 local sensors and the upper shows the cartesian position. All traces indicate things are fine with no obvious clipping or slope changes. The horizontal drives on ITMY show the peaks shifting; I'm not sure what this means, if anything...will keep thinking. The positions end up at zero again. Maybe this does indicate an interference as the X drive nears the end of the drive offset, but it is not obvious enough to clearly clip. Maybe this is worth a closer look, driving further and trying open loop. Will think and look some more and will report as required.
Drove ITMY to +-700 micrometers to check for interference, and I have to say there is something. The attached is similar to the above, but this time, for the local sensors in the lower panel, after shifting all signals to zero before the start time, the absolute value is additionally taken. With this, all the traces should overlay one another. Certainly there is no clipping and there are no large slope changes. The wobble seen on the first and fourth peaks indicates the H3 negative stroke may be compromised, though. This is actually a subtle slope change where that sensor stops moving as much and the others compensate. H3 negative drive is the common factor for reduced displacement during the 1st and 4th peaks. The H1, H2 & H3 sensors all sit at rest pretty far from zero, at ~11000+ counts, nearly half the range. This may be impacting H3 more than the others. It could also be that corner 3 starts to experience some additional resistance, either in the actuator or within the pier, but obviously it is still able to strain the system. Still, there is plenty of range for operations, but I will check it out to maybe find the problem.
FRS 10607
Yesterday I received a call from American Rock Products regarding a blast/explosion they were going to conduct (I'm assuming we have a connection with the company and they previously let us know when they will be making some noise). They said the blast was to occur between noon & 2pm PDT (19:00-21:00 UTC). They said this was at the Kiona Pit. I called them to get a little more information. They said it had been a few years, but in the past they would give us a call whenever they were planning a blast. They said this was at a pit mine on Kennedy Road. Googling around, I'm fairly certain this is the American Rock Products quarry on Kennedy Rd between Candy & Red Mountains. This blast site is ~13 miles from the LVEA & ~12 miles from EY.
(Not sure why I am their contact number, but here is their contact info: Kim Terlson of American Rock Products (509)547-2380)
Tagging SEI and DetChar teams, in case anyone wants to go data spelunking.
MORNING MEETING:
Cheryl and Ed aligned the ALS beam path using IO_MB_M1 and ALS_M1 (see the first attachment for the mirror names from D0902114). After that, the main beam was rotated such that it will not even clear IO_MB_WG1. The second attachment shows this: the beam was hitting the edge of an iris placed in front of IO_MB_WG1 (the iris is not on the PSL layout diagram). This is a bit more than an inch of a shift, mostly in YAW, and the distance between the iris and IO_MB_M2 is about 32 inches, so this is like ~1/32 rad, i.e. ~2 degrees (huge).
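A quick check of that estimate (the "bit more than an inch" shift is approximate, so this is order-of-magnitude only):

import math

shift_in = 1.0   # beam shift at the iris, inches (approximate, per above)
lever_in = 32.0  # distance from the iris to IO_MB_M2, inches

print(math.degrees(math.atan(shift_in / lever_in)))   # ~1.8 deg, i.e. roughly 2 degrees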
This seems to qualitatively agree with the observation that the beam position moved toward the +Y direction on IO_MB_M1 at some point after the new PMC installation (3rd attachment), given that the steering mirror positions cannot be changed due to the PSL mount design (though the detailed numbers depend on how much the shift on IO_MB_M1 was, what was done to the EOM path after the beam shift, and how good the ALS path alignment is). With the beam position change on IO_MB_M1, we cannot satisfy both the main path and the ALS path at the same time using only IO_MB_M1 and ALS_M1.
There was some suspicion that the EOM motion/rotation could have caused the beam rotation downstream. There could only be a minuscule change due to that compared with the O(1 deg) we're talking about, and thus I don't see this as an urgent issue. See the attached comment.
As such, we need to move on and align the main beam path downstream of IO_MB_2 using the two irises that Cheryl placed in that path as fiducials. We first turn IO_MB_1. If that is not enough, we iterate using IO_MB_2 and IO_MB_1. We might need to displace the EOM using its 5-axis mount so the beam cleanly goes through the EOM (we need to measure the in/out power like Koji did). Then we go back to the ALS path and move ALS_M1 and ALS_M2.
About EOM movement. It is a concern but not because it rotates the beam by a huge amount.
1. EOM movement.
This is already reported by others, but I was also able to move the EOM by pushing/pulling with finger pressure. I wasn't able to move the EOM and have it stay displaced after removing the pressure (within the accuracy of my eyeballs), so I don't think that the connection of the EOM to the 5-axis mount is loose as of now. That part shouldn't be a disaster.
(Koji's alog mentioned that originally the screws were loose, he should have tightened them. But that means that it can become loose over some time, as Cheryl and Volker tightened them a long time ago before Koji swapped EOM.)
Still, if it becomes loose it's bad, and in any case vibration or sudden movement may still cause phase, amplitude or polarization jitter or glitches. Seems like long-term-fix material to me.
2. You cannot cause O(deg) rotation of the beam by rotating wedged material.
Let t1 be the incident angle on the first surface, t3 the exit angle on the second surface, and w the wedge angle (1st attachment).
Deflection is t1+t3-w = t1+asin(n*sin(w-asin(sin(t1)/n)))-w.
This is an even function of t1-asin(n*sin(w/2)) and is always ~(n-1)w for near-normal incidence.
The EOM has a 2.85 deg wedge at both ends, for a net wedge of 5.7 deg. I don't know the exact index of refraction for RTP, but let's say n=1.9. With these parameters, the right panel of the 2nd attachment shows the deflection over a super wide range of t1, and the left panel is just a magnified view centered at the nominal incident angle of ~5.4 deg.
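A quick numerical check of the formula above (a sketch only; n=1.9 is the guess used in the text, and the 5.4 deg incidence and 5.7 deg net wedge are the values quoted above):

import math

def deflection(t1_deg, n, w_deg):
    # Prism/wedge deviation t1 + t3 - w, applying Snell's law at both surfaces
    t1 = math.radians(t1_deg)
    w = math.radians(w_deg)
    t2 = math.asin(math.sin(t1) / n)       # refraction angle at the first surface
    t3 = math.asin(n * math.sin(w - t2))   # exit angle at the second surface
    return math.degrees(t1 + t3 - w)

print(deflection(5.4, 1.9, 5.7))   # ~5.1 deg
print((1.9 - 1.0) * 5.7)           # small-angle estimate (n-1)*w, also ~5.1 deg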
Peter King put the iris in front of IO_MB_WG1 after the EOM replacement and before the PMC swap. It's a new aperture, and given that the other iris placed in the IO path at the same time, in front of the bottom periscope mirror, is not well aligned to the long standing apertures in the IO path, using the iris in front of the wedge is not advised.
T0900475 lists the expected deflection angle as 4.7° for P-pol and 4.2° for S-pol.
Apparently n=1.9 is too big then. From the deflection angles in T0900475, it sounds like n=1.82-ish for P and 1.74-ish for S.
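For what it's worth, inverting the small-angle relation deflection ~ (n-1)*w gives the same numbers (a sketch, using the T0900475 angles and the 5.7 deg net wedge quoted above):

w_deg = 5.7   # net wedge angle from above

for label, defl_deg in [("P-pol", 4.7), ("S-pol", 4.2)]:
    n = 1.0 + defl_deg / w_deg   # invert deflection ~ (n-1)*w
    print(label, round(n, 2))    # ~1.82 for P, ~1.74 for S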
Ooops! This is potentially a worst case scenario. I made log entry 41922 @ 21:21 hrs. local, intended for the Thursday morning 0900 meeting audience, thinking that anyone who might potentially energize any of the High Volts had left for the day. I hadn't seen anyone in the Control Room, LVEA or OSB offices for hours. Is this log entry just a late recording of activities from earlier in the day, or were people in the PSL "chomping at the bit" awaiting the "go ahead" from me confirming that we were on turbo pumping? If so, and if any High Volts had been energized last night after my entry, then we may have risked Paschen arcing!
Regardless, people, I think this situation (or potential situation) is symptomatic of a culture that is developing that is going to "bite" us sooner or later. If I had my druthers, we would slow down the pace a little. In the past, I didn't feel time pressure to make this hand-off to the next interested party. Typically, we would pump with the turbos for a day or more before declaring it okay to resume IFO work.
This was an alog entered well after the work was completed. (Cheryl & Keita left the LVEA at ~4:45pm PDT.)
They were working on alignment in the H1 PSL Room. The LVEA is Laser SAFE (so the light pipes for the ALS & PSL shutters are CLOSED).
To reinforce what Corey said, this was work that was done earlier during the day and confined to the PSL enclosure (no beams were sent into the chamber, no high voltage in chamber was used, and no viewport work was done).
Before starting this, we measured the power before and after the EOM: 17.8 mW at the input and 17.9 mW at the output. We changed the power up and down while we were working, and attempted to re-measure after the work was finished (9.08 mW before the EOM, 9.5 mW after). We found that scattered light from the high-power beam dump near the EOM was showing up as a significant error in our power measurements, which probably explains why we found EOM transmissions greater than 100%.