TITLE: 05/02 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 63Mpc
OUTGOING OPERATOR: Travis
QUICK SUMMARY: Seems like a fast recovery from maintenance; we have been observing for the last 1.75 hrs.
TITLE: 05/02 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Relatively painless maintenance day and IFO relocking. No persistent issues to report.
LOG: See attached .txt file.
As a reminder for TJ, should he find himself with a slow shift, the alog lazy script is not producing the -t transition output (NDS errors).
To be able to take measurements with the HWS at the end stations we need a proper PZT offset for both EX and EY to get rid of the reflection coming off of the ITMs during full lock (when the arm is locked in RED). Today I tested Elli's EX PZT2 offset from two years ago (alog 17860) while taking the past and current PZT2 YAW outputs into account. A quick conclusion: the sum of output + misalign bias that Elli figured out still gives a reasonably good image on the HWSX camera, but it needs fine tuning.
How to lock the arms in red (one at a time; this is for the X arm). A hedged sketch of the equivalent channel writes follows these steps.
1) Request INITIAL_ALIGNMENT on the ISC_LOCK guardian, take ALIGN_IFO to DOWN, then request XARM_IR_LOCKED.
2) If the ALIGN_IFO guardian gets stuck at LOCKING_XARM_IR, the arm is too misaligned. Do INITIAL_ALIGNMENT for the arm that you want (or both).
Once the arms are locked in green, they are good for red too. Go back to LOCKING_XARM_IR.
Guardian will misalign PRM, SRM, ITMY, and ETMY. Make sure no one else is using these.
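For reference, here is a minimal Python sketch of those guardian requests, assuming pyepics is available and that the guardian nodes expose the usual H1:GRD-<NODE>_REQUEST and _STATE channels that accept/report state names as strings (assumptions, not verified against the running system):

import epics  # pyepics, assumed installed on the control-room workstations

# Step 1: request the states via the guardian request channels.
epics.caput('H1:GRD-ISC_LOCK_REQUEST', 'INITIAL_ALIGNMENT')   # ISC_LOCK to INITIAL_ALIGNMENT
epics.caput('H1:GRD-ALIGN_IFO_REQUEST', 'DOWN')               # take ALIGN_IFO to DOWN
epics.caput('H1:GRD-ALIGN_IFO_REQUEST', 'XARM_IR_LOCKED')     # then request XARM_IR_LOCKED

# Step 2: if ALIGN_IFO reports it is stuck at LOCKING_XARM_IR, the arm is too
# misaligned; run an initial alignment for that arm (or both) first.
print(epics.caget('H1:GRD-ALIGN_IFO_STATE', as_string=True))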
Misaligning ALSX PZT2 (see the sketch after these steps)
1) Turn off H1:ALSX_PZT2_YAW_OUTEN. This is an ON/OFF switch. The switch can be found in SITEMAP>ALSX Overview, PZT2_YAW
Make sure you are streaming the image so you can see the transition.
2) Add YAW offset to H1:ALS-X_PZT2_YAW_MISALIGN_BIAS
3) Type of misalignment => make it SUM
4) Click Misalign.
To revert the configuration, watch out for a growing output before turning the OUTEN switch back on. If the number grows, turn the integrator off and the bleed rate on; the counts will slowly ramp down.
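A minimal pyepics sketch of steps 1 and 2 above, using the channel names quoted in this entry and the -2400 ct bias found below; the 0/1 encoding of the ON/OFF switch is an assumption, and steps 3 and 4 are done from the MEDM screen, so this is an illustration rather than a vetted procedure:

import epics  # pyepics, assumed available

# Step 1: disable the PZT2 YAW output (ON/OFF switch; 0 = OFF is assumed here).
epics.caput('H1:ALSX_PZT2_YAW_OUTEN', 0)

# Step 2: write the YAW misalignment bias (-2400 gave the best HWSX image below).
epics.caput('H1:ALS-X_PZT2_YAW_MISALIGN_BIAS', -2400)

# Steps 3 and 4 (set the misalignment type to SUM, click Misalign) are done on the
# MEDM screen. To revert: watch the output; if it is growing, turn the integrator
# off and the bleed rate on, let the counts ramp down, then re-enable the switch:
# epics.caput('H1:ALSX_PZT2_YAW_OUTEN', 1)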
I compared my streamed images to Elli's using the same Matlab script, but the images don't look quite the same (mine seem a bit more saturated). On April 24 I calculated the more recent offset to be -3381. This resulted in a very clipped-looking image on the HWSX, so I backed off a little; -2400 seems to have given the best result. Today's PZT2 YAW output (~10340) + misalignment bias (-2400) sum to 7940, which is 321 counts different from Elli's sum (10919 - 3300 = 7619).
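A quick check of the arithmetic above (values copied straight from this entry):

# Today's sum versus Elli's sum and their difference in counts.
today_sum = 10340 + (-2400)    # 7940
elli_sum = 10919 - 3300        # 7619
print(today_sum, elli_sum, today_sum - elli_sum)   # 7940 7619 321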
I'm afraid that the ITM alignment might also affect this offset. If there's no more commissioning time this week to take the data, the offset should be checked again after the vent.
We had no problem going back to NLN later. If there was any hysteresis, it didn't cause a problem.
HWS plates are currently off at both end stations.
I had a chance to check the EY PZT offset today. A -3500 PZT2 YAW misalign bias worked best for H1:ALS-Y_PZT2_YAW_OUTPUT = 15233 (the sum being 11733).
If there's a lockloss, we can take some data with the end station HWS during the power-up. The Hartmann waveplates still have to be put back, though. Waiting for an opportunistic down time.
No real issues with relocking after maintenance day. However, we are observing a bump (and harmonics) in the DARM spectrum that is increasing in frequency as the lock ages (it started at around 600 Hz and is now at around 2 kHz).
John, Chandra
21:10-21:45 UTC
We cut the locks on CP3, CP4 bottom LN2 draw valve cover and broke the fitting downstream of LN2 vaporizer on CP4. Pressure in GN2 line blew down, and we capped off the downstream line. Pressure/leak testing of vaporizer will continue tomorrow when the right size ASA flange arrives.
May 2, 2017: Pulled battery packs from two portable phones (they were otherwise off). I had tried to get the covers off in previous sweeps and gave up; this time I was successful. Unplugged unused extension cords. One crane is not in its nominal position (it is over the cleanroom in the beer garden).
J. Kissel

I've taken weekly charge measurements. The results continue to show the expected trends: ETMX has about +15 [V] accumulated effective bias voltage (which is low enough to be inconsequential), and ETMY has accumulated over -35 [V] of effective bias voltage, so we'd like to mitigate it. As I have for the past two weeks, I recommend the following. Just before we vent on May 8th, during last-minute preparations as we bring the IFO down, let's:

- Turn OFF the ETMX ESD bias completely, and leave it OFF for the duration of the vent and post-vent pump-down. Enter the following on the command line:
]$ caput H1:SUS-ETMX_L3_LOCK_INBIAS 0                    # Set Requested Bias to 0
]$ cdsutils switch H1:SUS-ETMX_L3_BIAS_OUTPUT OFF        # Make sure the output of the Bias Bank is OFF
]$ caput H1:SUS-ETMX_L3_ESDOUTF_LIN_BYPASS_SW 1          # Turn off any linearization, which turns off any requested bias on the Control Electrodes
]$ caput H1:SUS-ETMX_BIO_L3_RESET 1                      # Turn off the HIGH VOLTAGE driver entirely

- Leave the ETMY bias ON at +9.5 [V] at the DAC (or +380 [V] at the ESD electrode), with the opposite sign as in observation, for the duration of the vent and post-vent pump-down. Enter the following on the command line:
]$ caput H1:SUS-ETMY_L3_LOCK_INBIAS +9.5                 # Set Requested Bias to +9.5
]$ cdsutils switch H1:SUS-ETMY_L3_ESDOUTF_UL OFFSET OFF  # Turn off all requested bias on the Control Electrodes
]$ cdsutils switch H1:SUS-ETMY_L3_ESDOUTF_LL OFFSET OFF  # |
]$ cdsutils switch H1:SUS-ETMY_L3_ESDOUTF_UR OFFSET OFF  # |
]$ cdsutils switch H1:SUS-ETMY_L3_ESDOUTF_LR OFFSET OFF  # V

As last week's full actuator transfer functions confirm (see LHO aLOG 35867), the actuation strength as measured by the calibration lines is dead-on. Thus, the relative actuation strength from when we last updated the calibration model is measured to be 4% stronger. That's starting to push our comfort zone (even though we correct for it), so hopefully we can accumulate enough charge of the opposite sign on ETMY during this vent that we can reset the actuation strength reference at 0 [V] effective bias voltage.
Nutsinee is still doing HWS measurements. Otherwise, maintenance day is wrapping up. We should begin relocking around 20:00 UTC.
We asked the CDS crew to help us with alarms when either fire pump starts.
This has been implemented and today we ran both pumps for a few minutes each. Two series of text alarms were received within a minute or two of starting a pump. Success.
J. Kissel

I happened to notice that the H1 SUS ETMY LL coil driver FASTIMON monitor was showing ~4300 [ct]. This is a known problem in which one leg of a SCSI connector is bent / grounded / shorted / non-functional (see original documentation in LLO aLOG 1857), and it has in fact been solved for this exact channel before (see e.g. LHO aLOGs 14930, 30231, FRS Ticket 6114). However, a long trend reveals that the channel went bad *again* on April 12 2017 at 19:08 UTC. Thanks to Monsieur Bartlett's (and all operators') detailed logs (in this case LHO aLOG 35508), and the fact that this was on a *Wednesday* during a commissioning window, I can identify that this was a result of Robert's move of a magnetometer, which are indeed in the EY QUAD SUS racks. We must remember to be careful around these SCSI cables! No urgency to fix, but just letting people know that it's busted. I'll open a new FRS ticket.
h1dc0 (running 2.6.35 kernel) had been up for 196 days and needed a reboot before the vent. Also we needed to add the FMCS FIRE_PUMP monitor channels to H0EDCU_FMCS.ini.
Using monit I stopped daqd on h1dc0 and then rebooted it. It had been running for more than 196 days, so an FSCK was forced on its disk system. The startup of daqd then failed because I had accidentally left a duplicated channel in H0EDCU_FMCS.ini. This was fixed and daqd was restarted using the monit web interface, but I then got GPS timing errors being logged from all FECs. I did a telnet restart and all came back correctly.
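As a quick guard against the duplicated-channel mistake above, here is a small Python sketch that scans an EDCU .ini file for repeated channel entries; it assumes the channels appear as [CHANNEL_NAME] section headers, which may not match the exact file format:

import sys
from collections import Counter

# Count [CHANNEL] section headers in the given .ini file and report duplicates.
# Assumes one channel per bracketed header, e.g. [H0:FMC-CS_FIRE_PUMP_1].
def find_duplicate_channels(path):
    with open(path) as f:
        headers = [line.strip() for line in f if line.strip().startswith('[')]
    return [name for name, count in Counter(headers).items() if count > 1]

if __name__ == '__main__':
    dups = find_duplicate_channels(sys.argv[1])          # e.g. H0EDCU_FMCS.ini
    print('duplicates:', dups if dups else 'none')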
I've already put in a brief alog about this (comment alog 35966), but it was annoying to fix, so I'm putting a more detailed log in.
After the front-end restarts yesterday, the gains for the HAM2 & 3 GS13s were in the wrong state. The GS13s are put in low gain for the safe state and are nominally switched by the guardians when re-isolating is done. However, HAMs 2 & 3 are special snowflakes and trip when the ISI guardian switches the gains. With the Ubuntu workstations this was easy to fix: simply open the Commands screen and push a button that called a PERL script. But we are abandoning PERL, so these scripts don't work on the new Debian machines. We can't leave the GS13s in high gain, because the HAMs restore their alignments, and on HAMs 2 & 3 the RX/RY offsets are big enough to cause the horizontal GS13s to saturate when the tables rotate into position. With these scripts not functioning, the only way to change these gains without tripping the platform was to take the ISIs down to offline, switch the GS13s by "hand", tweak the RX/RY offsets so the residuals were small, re-isolate the chambers, and finally restore the RX/RY offsets with a long ramp time (~45 seconds). Lastly, I had to clean up SDF.
The easiest way to do this would have been last night, by reverting the GS13s with the SDF system. This would not have risked tripping the tables (which I did multiple times this morning with HAM3 while trying to figure this process out), as long as all the settings were reverted at once. It would have been an easy fix this morning if the Commands screen gain-switching script worked. Still, it would be best if all of the tables behaved the same and HAMs 2 & 3 had this setting under guardian control.
Attached is the plot of the NPRO output power since installation, or at least as far back as the hour-trend data permits. As of 7 am on May 1st it had 43498 accumulated running hours. In reality it's a little older, because it underwent some acceptance testing at the AEI prior to being shipped out for installation. One thing to note is that the output power calibration seems to be off, as I would have expected the starting output power to be closer to 1.8 W. But this might be because of the location of the photodiode and might reflect the power into the amplifier. The data were purged of dropouts caused by either the data acquisition being offline or the laser tripping.
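For anyone wanting to reproduce this trend, here is a rough sketch of pulling trend data with the nds2 Python client; the channel name is a placeholder rather than the actual NPRO power channel, the GPS span is arbitrary, and the dropout threshold is a guess:

import nds2
import numpy as np

# Connect to the LHO NDS2 server (host/port are the usual defaults, assumed reachable).
conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)

# Placeholder channel and GPS span; substitute the real NPRO power channel and times.
channel = 'H1:PSL-NPRO_POWER_OUTPUT.mean,m-trend'   # hypothetical name
gps_start, gps_stop = 1100000000, 1177000000

bufs = conn.fetch(gps_start, gps_stop, [channel])
power = np.asarray(bufs[0].data)

# Crude dropout removal: discard samples where the laser was off or DAQ was down.
power = power[power > 0.1]
print('samples kept:', power.size, 'mean power [W]:', power.mean())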
model restarts logged for Mon 01/May/2017
day208bug lock-up of h1susauxb123, was front-panel-reset.
day208bug lock-up of h1seib2, was removed from dolphin and ipmi reset.
Reboot of all front end computers with runtime exceeding 208 days, and reboot of h1nds0. Full restart log file attached.
model restarts logged for Sun 30/Apr/2017
2017_04_30 13:46 h1iopsusauxh2
2017_04_30 13:46 h1susauxh2
day208bug lock-up of h1susauxh2 computer, was front-panel-reset.
model restarts logged for Sat 29/Apr/2017
No restarts reported
model restarts logged for Fri 28/Apr/2017
2017_04_28 06:58 h1iopsusex
2017_04_28 06:58 h1susetmx
2017_04_28 06:58 h1sustmsx
2017_04_28 06:59 h1iopsusex
2017_04_28 06:59 h1susetmx
2017_04_28 06:59 h1sustmsx
2017_04_28 07:00 h1susetmxpi
2017_04_28 07:15 h1hpietmx
2017_04_28 07:15 h1iopseiex
2017_04_28 07:15 h1isietmx
2017_04_28 07:17 h1alsex
2017_04_28 07:17 h1calex
2017_04_28 07:17 h1iopiscex
2017_04_28 07:17 h1iscex
2017_04_28 07:17 h1pemex
day208bug lock-up of h1susex, was power cycled. Subsequent dolphin crash of h1seiex and h1iscex.
As per work permit 6614 I have updated the nds2-client package on the Debian 8 systems to 0.14.1. It is set as the default package.
The laser signs at EX and EY were changed for IEC compliant ones. The change is reflected in the use of the signal word "WARNING" rather than the old one of "DANGER". The laser eyewear protection requirements remain the same.
I reset both PSL power watchdogs at 16:00 UTC (09:00 PDT). This completes FAMIS 3648.
Jeff Kissel, Kiwamu Izumi, Jenne Driggers, TJ Shaffer
The two 6+ earthquakes in Alaska along with the front end restarts did a number on us, but Jeff K and Kiwamu worked hard and we are now back to Observing at 68Mpc.
I arrived on shift with initial alignment just finishing up. We could not go past LOCKING_ARMS_GREEN without damping the severely rung-up bounce modes, so we spent some time damping those and trying to get them below the noise. After we were able to move on from there, we got DRMI after a handful of tries. We stopped at RF_DARM to damp the bounce modes further, and then checked the OM alignment at ANALOG_CARM. We then had to engage the SOFT loops slowly, per Jenne's suggestion. At this point we could see that the roll modes were also very rung up and, knowing this, still proceeded to lose lock at ROLL_MODE_DAMPING. Doh.
So I ran through the whole thing again, and this time I damped the roll modes by hand. The rest of locking was smooth sailing.
Now I can finally eat.
Had to accept a bunch of SDF diffs.
SEI - HEPI had some Tramps and setpoint diffs, and the ISIs had some filter differences.
SUS - A few differences with setpoints
ASC - IMC PZT offset diffs
Actually the earthquakes were in BC, Canada - that country that borders the US to the north ( or east of Alaska).
The SDF diffs on HAMs 2 and 3 are because these chambers (for reasons we don't understand) can't have their guardians change the gains on the GS13s. On these chambers it works just fine to use the SDF system to revert the GS13 gains, if you do them all at once. This used to also be possible from the Commands screen on the chamber overview, but the script that does the switching was written in PERL (sensor_hilo in userapps/isi/common/scripts), so it no longer works on our new Debian workstations. There is currently no easy way to reset the gains for these chambers.
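As a rough illustration of what a Python replacement for the retired PERL script could look like, here is a pyepics sketch; the GS13 gain-switch channel names below are hypothetical stand-ins (the real names would have to come from the existing sensor_hilo script), and any real replacement would need the same switch-everything-at-once care described above:

import epics  # pyepics, assumed available on the workstations

# Hypothetical GS13 gain switch channels for one HAM chamber; placeholders only.
GS13_SWITCHES = ['H1:ISI-HAM2_GS13INF_{}_GAIN_SW'.format(dof)
                 for dof in ('H1', 'H2', 'H3', 'V1', 'V2', 'V3')]

def set_gs13_gain(state):
    """Switch all six GS13s on the chamber together (state: 'LOW' or 'HIGH')."""
    value = 0 if state == 'LOW' else 1   # 0 = low gain is assumed, not verified
    for ch in GS13_SWITCHES:
        epics.caput(ch, value)           # write every switch in one pass

set_gs13_gain('LOW')   # e.g. put the chamber's GS13s in low gain for the safe state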
As I noted here, the oplev laser SN 191 was found to be running very warm; this in turn made it very difficult to eliminate glitches. In light of this, as per WP 6591, this morning I re-installed laser SN 189-1 into the ITMy oplev. The laser will need a few hours to come to thermal equilibrium, then I can assess whether or not further tweaks to the laser output power are needed to obtain glitch-free operation. I will leave WP 6591 open until the laser is operating glitch-free.
SUM counts for this laser have been very low: ~2.8k versus the ~30k the last time this laser was used (March 2017). Today I pulled the laser out of the cooler and tested it in the Pcal lab and found the output power to be very low; at the setting being used I measured 0.11 mW versus the 2.35 mW I measured before I installed the laser. By tweaking the lens alignment (the lens that couples light from the laser diode into the internal 1 m fiber) I was able to increase the output power to ~0.2 mW. There is clearly something not quite right with this laser; my suspicion is either a gross misalignment of the coupling assembly (which takes longer than a maintenance period to correct) or that the laser diode is going bad. Knowing the history of these lasers, both are equally probable in my opinion.
Unfortunately there is not a spare currently ready for install. In light of this, since the laser is currently working, I reinstalled SN 189-1 into the ITMy optical lever so at least we have a functional ITMy oplev. Once I get a spare ready for install this laser will be swapped out at the earliest opportunity. I have closed WP 6591.