ETMY isn't working as well as ETMX.
ETMY is running TCrappy blends
ETMX is running TBetter with sensor correction (lower blend frequencies in rX and rY)
There is what looks like a loop instability around 10 Hz (stage 1, maybe?)
Too much motion in rY & rX around 1/2 Hz (probably due to the difference in blend frequencies)
The references in the ETMX-GS13 and ETMX OpLev plots are with sensor correction off. Sensor correction is giving more peaking at the microseism than we want, so I'm turning it off.
The low-frequency excess motion is likely due to the fact that we haven't finished the stage 1 tilt decoupling; Jim needs a few hours to finish that up.
We opened the YEND to the beam tube today for ~9 hours.
The attached plot shows the pressures during the day at the BSC10 dome (PT410) and at the 80K pump (PT424).
The pressure at the trap reached 3e-8 torr.
At ~7:00 pm local time I closed GV18, opened the valve to the main turbo, and opened the small turbo pump to the BSC6 annulus.
Alexa, Sheila, Keita, Daniel, Stefan, Chris

====

We did the y-arm sneak-peek initial alignment today.
- A 200 urad, 0.1 Hz yaw drive on TMSY was big enough to get the beam on the ITMY SPOOL camera.
- We centred the single-shot beam on ITMY using just the camera view.
- Next we hit the ETMY baffle PDs using ITMX. We got about 3 V with 20 dB whitening gain on every PD.
- Next we aligned ETMY until we got fringes on the camera.
- Next we moved the BS until we saw the y-beam on ISCT1. (All other mirrors are common between the x-arm and y-arm green paths until we hit the ISCT1 prism.) Looking at the POP baffle hole helped.
- Next we measured the single-shot power: y-arm 10.1 uW, x-arm 16.0 uW.
- Next we roughly aligned the ISCT1 table path for the y-arm beam. Note that so far we did NOT align the BS with the IR light; we just separated the beams, so the alignment will have to be redone.
- We ensured the reflected beam was on the REFL RF PD on ISCTEY. For calibration, we measured 4.3 mW with ~-4 V at the diode with 30 dB gain.
- We were able to see an RF signal out of the demodulator. This signal was very weak. IMPORTANT note: the RF MON on the demod board does not work.
- We set the delay line to 11 ns and were able to see a decent (but not great) PDH error signal out of the I MON of the demod.

====

We locked the y-arm to the 00 mode (only a second at a time).
- We aligned ETMY and ITMY using the transmission video image on ISCT1, the green transmission level on ISCT1 (H1:ALS-C_DIFF_A_LF_OUT), the PDH DC level, the PDH demod signal, the ITMY video, and the ETMY video.
- At this point we were able to lock to some crappy mode for 0.5 s at a time when the masses were slow.
- The fringe velocity was very high, probably more than 5 um/s. We turned off the ISI isolation.
- At this point we were able to lock mostly to the 00 mode. It would last for 1 s or shorter (but we saw a 2 s stretch too!), then go to 01 or some higher-order mode, and come back to 00. The motion is mainly PIT.
- It seemed like the lock is lost when the VCO output goes to some big number.

======

I have attached a picture showing the 00 modes for BOTH the x- and y-arms locked at the same time!!!! (The y-arm was locked for only a few seconds, though.)
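For reference, the error signal we are looking at on the I MON has the standard textbook PDH form (up to sign and overall gain conventions; this is not a measurement from this lock). With carrier and sideband powers P_c and P_s, modulation frequency \Omega, and cavity reflection coefficient F(\omega):

```latex
% Textbook Pound-Drever-Hall error signal (fast-modulation case);
% symbols defined in the lead-in, not taken from this log entry.
\epsilon \;\propto\; 2\sqrt{P_c P_s}\,
  \mathrm{Im}\!\left[ F(\omega)\,F^{*}(\omega+\Omega)
                    - F^{*}(\omega)\,F(\omega-\Omega) \right]
```

The delay-line setting just tunes the demodulation phase so that this signal appears on the I output rather than being split between I and Q.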
This process is on opsws2.
A Guardian process was created for ETMY (ST1, ST2 and Manager), and it helped bring the ISI back to "isolated" with Tbetter blends on both stages.
ETMY tripped while switching blends on ST2. ST2 plots attached.
This is Arnaud
This is Sheila
ITMX ST2 tripped on GS13 while switching horizontal blends on ST1. Plots of ST1 attached
This is Arnaud
Yesterday the lower Keel plate was installed. Today the lower Keel plate was bolted down and torqued, and the outer gussets were added. The up-facing Keel plate has all 448 of the 3/8" helicoils installed. The plate needs to be put on blocks to finish the additional helicoils.
Big events: opening the Y-arm gate valve to look down with the ALS beam.
- David and Greg in LVEA to fix a cracked pipe on CO2Y
- Apollo is craning in the west bay and beer garden
- Reset ISIs; watchdogs tripped, possibly by the earthquake last night
- Aaron to pull cable at EY
- Joe, Justin, Gerardo to replace the ALS light pipe at HAM1
- Richard, Keita, Fil, Corey working on cameras at the ITMY spool
- Ken and Richard to install weather stations
- 10:19 AM PT: GV-18 opened to look down the Y-arm
- Patrick restarts h0epics2
(Joseph D., John W., Justin B., Sheila D., and Gerardo M.)
Shutter installation finished today.
The viewport will need to be pulled sometime later for a close inspection and/or possible replacement.
I opened GV7 at 2 pm.
I moved x1boot to a different location in the rack, requiring a power down and restart. An fsck was run on startup, delaying the boot process by over 30 minutes.
Cyrus R., Richard M., Patrick T. h0epics2 runs the fmcs, tidal and weather IOCs. Although the weather and fmcs IOCs appeared to be running fine (I did not check tidal), h0epics2 was nearly unresponsive to log in attempts. When I restarted it, it would not mount /ligo. Cyrus noted that it was trying to mount h2boot, which no longer exists. Fixing this allowed it to mount /ligo. Cyrus took the opportunity to apply the OS updates. Once that was done I restarted the h0fmcs, h0tidal and h0weathercs IOCs. I also started the h0weatherey IOC, which was the reason I had tried to log in to begin with.
Greg / DavidH At ~0930 we were notified that a cut to one of the TCSY cooling hoses had been observed. Despite there being no leaks, it was decided to remove the affected length of hose near the table enclosure feed-through (cut-out). No sharp edges were noticed on this feed-through; however, as a preventative measure, some cleanroom tape was applied around the cut-out and then also around the two hoses before reconnecting them. The chiller was operated and no leaks were observed. The cooling hoses to TCSX were also inspected and no damage was found. However, as a preventative measure, cleanroom tape was also applied around these hoses where they pass through the feed-through. It was decided not to disconnect these hoses (to apply tape around the enclosure feed-through) since no damage was observed and disconnection within the enclosure could introduce water leaks. See attached photos.
At the behest of others, GV18 will be closed later today.
A magnitude 8 earthquake in Chile tripped all the BSC-ISIs on 04/01/14. Sheila's wd plots show that the ISIs tripped on the actuator watchdogs. We suspected that the servo loops were trying to compensate for the common mode seen by the BSC-ISIs at both ends of the arms, thus saturating the actuators. To check this, we studied the ground motion recorded by the ground STS at the corner station (BS-STS) and the ground T240 installed at EX (EX-T240) at the GPS time provided by Sheila.
On the attached document:
We could considerably reduce the amount of signal sent to the BSC-ISI actuators during a major earthquake (factor as big as 6), and thus reduce the risk of saturating the actuators, by removing the common mode from the signal we feed to the servo loops of the BSC-ISIs.
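The arithmetic behind that factor-of-6 estimate is just a common/differential split of the two ground-sensor signals. A minimal sketch (hypothetical helper, not the actual SEI sensor-correction code):

```python
import numpy as np

def remove_common_mode(corner, end):
    """Split ground-sensor signals from the two stations along an arm
    into a common part (seen identically by both BSC-ISIs, e.g. a
    teleseismic wave) and the differential remainders that the servo
    loops actually need to fight.  Hypothetical helper, not the real
    SEI code."""
    common = 0.5 * (corner + end)
    return common, corner - common, end - common

# For a purely common signal (both stations riding the same long-
# wavelength earthquake), the differential parts vanish, so nothing
# would be sent to the actuators:
t = np.linspace(0.0, 10.0, 1000)
quake = np.sin(2 * np.pi * 0.05 * t)   # same motion at both stations
common, d_corner, d_end = remove_common_mode(quake, quake)
```

In practice the reduction is limited by sensor calibration, orientation (see Note 1 below about the EX T240), and the wave not being perfectly common, which is presumably why the observed factor was "as big as 6" rather than infinite.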
Data, plots and scripts are committed to the seismic SVN:
/ligo/svncommon/SeiSVN/seismic/Common/Data/2014_04_01__Chile_Earthquake_Data/
Note 1: X and Y had to be swapped in the EX data, and a minus sign was also applied to the new X-axis EX data. It is likely that the EX T240 is mis-oriented.
Note 2: At the time we looked, the calibration of EX-T240 was still the one that JeffK made; RichM has since updated it.
Units were updated. Plots and script were committed to the SVN.
(Rich M, Sheila, Alexa)
HEPI and ISI ETMX tripped because we were playing with the suspension watchdogs.
We had a hard time recovering from this trip, which caused several more trips. The wd plotting script is not working this afternoon, possibly because of our network problems, so we don't have any plots. Once HEPI was isolated, we tried letting Guardian isolate the ISI. It tripped immediately on the CPSs (stage 1); after we cleared this and tried again, it tripped on the GS13s (stage 2).
We then paused Guardian, moved all the blends to Start, and gave Guardian another try. (Note: the ground motion is fairly quiet today.) This tripped again, on the GS13s.
We then paused Guardian and tried the command script, stage 1 level 3; it tripped on the actuator limit. I then stopped taking notes.
The rough procedure that Rich used to bring it back was to damp both stages, then turn on one DOF at a time: RX, RY, Z, RZ, X, Y (stage 1 first, then stage 2). Then Alexa moved the blends over to Tbetter without anything tripping.
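The sequence above can be sketched as a simple loop. The `damp_stage` and `engage_dof` callables here are hypothetical stand-ins for the real SEI command-script calls, just to pin down the order:

```python
# Sketch of the recovery sequence described above: damp both stages,
# then re-engage isolation one DOF at a time, stage 1 before stage 2.
# damp_stage / engage_dof are hypothetical stand-ins, not the actual
# SEI commands.

DOF_ORDER = ["RX", "RY", "Z", "RZ", "X", "Y"]

def recover_isi(damp_stage, engage_dof, stages=(1, 2)):
    for stage in stages:            # damp everything first
        damp_stage(stage)
    for stage in stages:            # then isolate, one DOF at a time
        for dof in DOF_ORDER:
            engage_dof(stage, dof)

# Record the call order to illustrate the sequence:
calls = []
recover_isi(lambda s: calls.append(("damp", s)),
            lambda s, d: calls.append(("engage", s, d)))
```

The point of the ordering (tilts before translations, one DOF at a time) is that each loop closure only has to fight the residual motion left by the previous ones, instead of all six DOFs slamming on at once.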
We don't know why the things that used to work no longer do, but the ETMX ISI is now harder to un-trip, and neither Guardian nor the command script seems able to do it anymore.
Rich may not have mentioned it in the alog, but after our trip the other day while trying to switch blends, he changed the filters from zero-crossing to ramp, which should make it easier to switch blends without tripping the ISIs.
One thing about Guardian: it's not clear to me how we should unpause if we need to pause it for some reason. To pause it we paused all three: stage 1, stage 2, and the manager. When unpausing the subordinates, they don't come back in managed mode. I was able to get them back to managed mode by going to INIT on all three, but in the meantime they were doing their own thing (though happily they didn't trip the ISI). What is the best way to do this? Also, maybe the right way to pause and unpause a manager with subordinates is something that needs to be made more obvious to a user.
Alexa also tried pausing; she only paused the manager. Bringing that back was not a problem, but it doesn't pause the subordinates.
quoth Sheila:
We don't know why the things that used to work no longer do, but the ETMX ISI is now harder to un-trip, and neither Guardian nor the command script seems able to do it anymore.
But you did mention one likely important thing that changed: new blend filters (Tbetter?). What happens if we go back to Tcrappy? Do we have the same problems?
also quoth Sheila:
One thing about Guardian: it's not clear to me how we should unpause if we need to pause it for some reason. To pause it we paused all three: stage 1, stage 2, and the manager. When unpausing the subordinates, they don't come back in managed mode. I was able to get them back to managed mode by going to INIT on all three, but in the meantime they were doing their own thing (though happily they didn't trip the ISI). What is the best way to do this? Also, maybe the right way to pause and unpause a manager with subordinates is something that needs to be made more obvious to a user.
Alexa also tried pausing; she only paused the manager. Bringing that back was not a problem, but it doesn't pause the subordinates.
This definitely sounds like something that could be improved. It is true that all nodes, manager and subordinates, need to be independently paused and unpaused. I'll try to think about how to make that smoother. In the meantime, here's the procedure that should be used:
The manager INIT state resets all the workers, restarts them, and puts them into managed mode. That should be the most straightforward way to do things for the moment.
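A toy model of the manager/subordinate relationship may make the failure mode clearer. All class and method names here are invented for illustration; the real Guardian API differs:

```python
# Toy model of a Guardian manager with subordinate nodes, illustrating
# why pausing only the manager doesn't pause workers, and why manager
# INIT is the clean reset.  Names are invented, not the Guardian API.

class Node:
    def __init__(self, name):
        self.name = name
        self.paused = False
        self.managed = False

class Manager(Node):
    def __init__(self, name, subordinates):
        super().__init__(name)
        self.subordinates = subordinates

    def init(self):
        # Manager INIT: reset every worker and put it back into
        # managed mode -- the behavior described above.
        for node in self.subordinates:
            node.paused = False
            node.managed = True
        self.paused = False

st1, st2 = Node("ISI_ETMX_ST1"), Node("ISI_ETMX_ST2")
mgr = Manager("ISI_ETMX", [st1, st2])

mgr.paused = True          # pausing only the manager...
assert not st1.paused      # ...leaves the subordinates running

mgr.init()                 # INIT brings everything back managed
```

This matches the observations above: pausing the manager alone leaves the workers doing their own thing, and INIT on the manager is what restores the managed state.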
Thanks Jamie.
We did try both the command script and Guardian with all the blends on Start, which used to work. I don't think we tried turning sensor correction off, which is another change.
I heard that rumors were spread that this alog was written during an earthquake we hadn't noticed, and that there is therefore no reason to worry about our inability to use Guardian or the command scripts to re-isolate ETMX. That is false; there was no earthquake. PS: this is Sheila, still logged in as Stefan.