I adjusted the HPO pump diode currents as per FAMIS 8421. All diode currents were increased by 0.1A. The table below summarizes the changes and the attached picture shows the PSL Beckhoff PC main screen for future reference. This was done with the ISS OFF.
    | Old (A) | New (A)
DB1 | 52.1    | 52.2
DB2 | 52.1    | 52.2
DB3 | 52.1    | 52.2
DB4 | 52.6    | 52.7
I also adjusted the pump diode temperatures. The changes are summarized in the table below:
   | Diode Box 1 (°C) | Diode Box 2 (°C) | Diode Box 3 (°C) | Diode Box 4 (°C)
   | Old  | New  | Old  | New  | Old  | New  | Old  | New
D1 | 24.5 | 24.0 | 20.0 | 19.5 | 21.5 | 21.0 | 24.0 | 23.5
D2 | 25.0 | 24.5 | 19.5 | 19.0 | 25.5 | 25.0 | 21.5 | 21.0
D3 | 27.0 | 26.5 | 20.5 | 20.0 | 25.5 | 25.0 | 23.0 | 22.5
D4 | 23.5 | 23.0 | 18.5 | 18.0 | 22.5 | 22.0 | 21.5 | 21.0
D5 | 25.5 | 25.0 | 18.5 | 18.0 | 26.5 | 26.0 | 23.5 | 23.0
D6 | 25.0 | 24.5 | 19.0 | 18.5 | 21.0 | 20.5 | 23.5 | 23.0
D7 | 22.5 | 22.0 | 19.5 | 19.0 | 22.0 | 21.5 | 23.5 | 23.0
The HPO is now outputting ~168.5 W. The ISS has been turned back ON. This completes FAMIS 8421.
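For future reference, settings like these can be sanity-checked without pulling up the Beckhoff screen if they are mirrored to EPICS. A minimal read-back sketch in Python, assuming pyepics is available; the channel-name pattern is a hypothetical placeholder, not a verified PV:

# Minimal verification sketch; the channel-name pattern below is a
# hypothetical placeholder for however the Beckhoff settings are
# exposed over EPICS.
from epics import caget

EXPECTED_CURRENTS = {"DB1": 52.2, "DB2": 52.2, "DB3": 52.2, "DB4": 52.7}

for box, expected in EXPECTED_CURRENTS.items():
    value = caget(f"H1:PSL-OSC_{box}_CURRENT")  # hypothetical channel name
    if value is None:
        print(f"{box}: no response from EPICS")
    else:
        flag = "OK" if abs(value - expected) < 0.05 else "CHECK"
        print(f"{box}: read {value:.1f} A, expected {expected} A -> {flag}")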
WeeklyXtal plots show evidence of trips due to the thunderstorm last Sunday and also the NPRO failure last Thursday. The typical humidity tracking of the amp diode powers looks normal. There may also be some anomalous data from Thursday, when the ISS was not locked for a period of time.
Attached are scans from this morning and also from a couple weeks ago. Note that today the filament had only one hour to warm up, but the scans look the same.
[Gerardo, Chandra, Kyle, John]
I think we left the CCs off on the BSC chambers that now have the HC gauges. We may want to reactivate those for comparison.
The vertex section venting, door preps, cleanings, and staging of the HAM4 North door and BSC3 West door went well today. We'll have an 8am vent prep meeting tomorrow morning and then we'll be ready to pull the BSC3 door. There are a few more morning preps, but we did all that we could today; we are officially on the Tuesday steps of vent plan E1700124. Both doors are hanging on 4 bolts overnight.
TITLE: 05/08 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY: Started the shift by taking down the IFO, then followed the vent plan, steps 17-21. Accepted all the safe.snap SDF diffs. Left the ISI SDF diffs alone since they're showing OBSERVE.snap rather than safe.snap. See attachment for details.
LOG:
15:00 Took IFO to DOWN manually
15:04 Bubba -> LVEA (clean room setup)
Apollo to EY
15:12 PSL/ALS light pipes shut, CO2 controllers keyed off
15:14 turn off BRSY
Set ETMX ESD BIAS OFF; leave ESD BIAS ON with opposite sign.
15:18 Accepted SUS-IM SDF diffs but forgot to take a screenshot
Mistakenly took the HAM4 and BSC3 ISIs to OFFLINE instead of ISI_OFFLINE (I followed an older version of the vent plan document); fixed this later.
15:30 Jeff B out of the LVEA (staging)
15:49 Switched the rest of the ISIs to NO BRS (I later learned there's no BRS at the corner station)
15:59 Vac team closing gate valves.
16:04 Fil to LVEA (disconnecting HWS cables)
17:10 Hugh locking down HEPI
17:24 Fil done
18:07 Fil replacing ESD HV power supply at EX
18:30 Hugh back, HEPIs are locked
19:19 Fil taking a lunch break
19:57 Fil back to EX
20:44 Vern+Jeff to HAM4
20:48 Fil done. Heading to CER
20:53 Vac team starts removing bolts
21:07 John driving to EY to check on Apollo.
21:30 John back
22:07 Jeff out of LVEA
22:10 Vern out of LVEA
The ITMX and HAM4 HEPIs were locked this morning. Attempted to lock them at the nominal isolated position but that never works perfectly.
HAM4 diffs for position are 2, 13, & 44 um for x, y, & z. For rotations the diffs are 2, 5, & 2 urads for RX, RY, & RZ.
For the ITMX, the isolated vs locked position differences are 58, 48, & 51 um for x, y, & z. For the rotations the differences are 15, 16, & 16 urads for RX, RY, & RZ.
Not terrible, but they should be close enough that Jim won't have too difficult a time locking the ISI and Betsy will be able to set the EQ stops.
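For the record, a tiny sketch of the comparison above using the logged numbers directly; the tolerances are assumptions for illustration, not official requirements:

# Isolated-vs-locked position differences from the entries above.
# The limits below are assumed, illustrative tolerances only.
DIFFS = {
    "HAM4": {"X": 2, "Y": 13, "Z": 44, "RX": 2, "RY": 5, "RZ": 2},
    "ITMX": {"X": 58, "Y": 48, "Z": 51, "RX": 15, "RY": 16, "RZ": 16},
}
TRANS_LIMIT_UM = 100   # assumed translational tolerance (um)
ROT_LIMIT_URAD = 50    # assumed rotational tolerance (urad)

for chamber, diffs in DIFFS.items():
    for dof, value in diffs.items():
        limit = ROT_LIMIT_URAD if dof.startswith("R") else TRANS_LIMIT_UM
        print(f"{chamber} {dof}: {value} ({'ok' if value <= limit else 'large'})")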
Since Friday's storm every lock shows software saturations in NGN-CS_CBRS_RY_PZT1/2_CTRL. These channels seem to saturate for about 85 seconds, then stop saturating for about 50 seconds before saturating again. This trend has continued throughout the weekend. Here's the summary page link: https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20170505/detchar/software_saturations/
The CBRS was disconnected a few weeks ago, so this is probably due to the guardian node getting restarted after the lightning storm on the 4th. Probably no one has looked at it since then. The BRS is physically disconnected, so this probably isn't hurting anything.
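For anyone reproducing this outside the summary pages, a sketch (not the DetChar pipeline itself) of the saturation check, assuming the H1: channel prefix and a limiter value of 131072 counts:

# Sketch: estimate how often the CBRS RY PZT control signal sits at its
# software limit. The limit value is an assumption about the
# filter-module limiter, not a verified setting.
from gwpy.timeseries import TimeSeries

data = TimeSeries.get("H1:NGN-CS_CBRS_RY_PZT1_CTRL",
                      "2017-05-05 00:00", "2017-05-05 01:00")
LIMIT = 131072  # assumed limiter value (counts)
saturated = abs(data.value) >= LIMIT
print(f"fraction of samples at the limit: {saturated.mean():.1%}")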
The full DQ shift report can be found on the DetChar wiki. Here are the highlights:
*) 77% observing time over the DQ shift. In the longer locks the range degradation is really noticeable, particularly on Thursday and Sunday.
*) Locks on Thursday going into Friday seem to have a higher rate of blip glitches compared to the surrounding days. Not sure of the cause - will investigate more.
*) ETMX/Y oplevs are still glitching a lot - they're the main round winners according to hveto Friday through Sunday.
*) From Friday onwards there are a lot of overflows in the IOP SUS EY model and near-continuous software saturations in NGN-CS_CBRS_RY_PZT1/2_CTRL. I don't see any particular way these are causing problems with the range / glitches (unless the effect is very quiet).
Starting CP3 fill. LLCV enabled. LLCV set to manual control. LLCV set to 50% open. Fill completed in 66 seconds. LLCV set back to 20.0% open. Starting CP4 fill. LLCV enabled. LLCV set to manual control. LLCV set to 70% open. Fill completed in 189 seconds. TC A did not register fill. LLCV set back to 43.0% open.
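For reference, the fill sequence above reduces to a few EPICS writes. A hedged sketch, with hypothetical placeholder PV names rather than the real vacuum-controls channels:

# Sketch of the CP3 overfill sequence logged above, assuming the LLCV is
# exposed over EPICS. All PV names here are hypothetical placeholders.
import time
from epics import caput

PREFIX = "H0:VAC-LY_CP3_LLCV_"   # hypothetical PV prefix

caput(PREFIX + "ENABLE", 1)       # enable the LLCV
caput(PREFIX + "MODE", "MANUAL")  # switch to manual control
caput(PREFIX + "POS_REQ", 50.0)   # open to 50%
time.sleep(66)                    # fill completed in 66 s this time
caput(PREFIX + "POS_REQ", 20.0)   # restore nominal 20% open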
Laser Status:
SysStat is good
Front End Power is 34.06 W (should be around 30 W)
HPO Output Power is 166.6W
Front End Watch is GREEN
HPO Watch is GREEN
PMC: It has been locked 0 days, 9 hr 50 minutes (should be days/weeks)
Reflected power = 16.83 W
Transmitted power = 61.71 W
PowerSum = 78.55 W
FSS: It has been locked for 0 days 3 hr and 10 min (should be days/weeks)
TPD[V] = 3.199V (min 0.9V)
ISS: The diffracted power is around 3.5% (should be 3-5%)
Last saturation event was 0 days 3 hours and 11 minutes ago (should be days/weeks)
Possible Issues: See alog36080
All field cabling going into the enclosure has been disconnected. The only thing to note: the cables connected to the Polarization PD Whitening Field Box inside the enclosure do not have a feedthrough panel, so the signal cable had to be disconnected from inside the table and pulled out. The cable is labeled H1:Polar_Pos_ADC_1.
The following chassis inside of TCS-R1 rack were powered off:
1. Hartmann Sensor Power Distribution D1002206
2. TCS SLED Driver D1200614
TITLE: 05/08 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
PSL recovery & it was a little tough to get relocked (but not that bad). The rest of the shift was normal running. H1 was then promptly taken down at the end of the shift & I handed off "May Vent" tasks to Nutsinee. I covered the Ops Station briefly while Nutsinee shuttered the PSL light pipes and locked out the TCS lasers. Now she is taking over Ops.
It's looking like I wrote over my Transition alog here (hence the subentries attached to this).....a sleep-deprived mistake. :-/
LOG:
Broke lock at LOWNOISE_ESD_ETMY! :(
Just A Note: PRMI_2_DRMI_TRANSITION hasn't worked for me the last two times I've tried. (DRMI did lock up last time.)
On the 3-10 & 10-30 Hz seismic BLRMS there was a noticeable period of elevated seismic noise in the LVEA from roughly 3:15-4:00 UTC (on the attached screenshot, this is the period from -9 to -8 hrs).
Power spectra of LVEA seismometers & accelerometers show a comb of peaks during this noisy period, primarily between 7-8 Hz.
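A sketch of this spectrum check, assuming a typical LVEA ground STS channel and approximate time windows:

# Compare an LVEA ground seismometer ASD during the noisy window against
# a quiet one and look for the 7-8 Hz comb. The channel name and exact
# windows are assumptions.
from gwpy.timeseries import TimeSeries

CHAN = "H1:ISI-GND_STS_ITMY_Z_DQ"  # assumed LVEA STS channel
quiet = TimeSeries.get(CHAN, "2017-05-08 02:00", "2017-05-08 02:30")
noisy = TimeSeries.get(CHAN, "2017-05-08 03:20", "2017-05-08 03:50")

plot = noisy.asd(fftlength=64).plot(label="noisy (~03:30 UTC)")
ax = plot.gca()
ax.plot(quiet.asd(fftlength=64), label="quiet (~02:00 UTC)")
ax.set_xlim(1, 30)
ax.legend()
plot.savefig("lvea_seis_asd.png")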
4:46 - 9:12 CORRECTIVE MAINTENANCE due to PSL Trip (see 36078 & 36080)
Spent a couple of hours trying to get back to OBSERVING. Early issues with alignment noted in my Transition alog.
Troubles in the latter steps of locking; not certain what the issue was. On the third (& successful) attempt, I paused at various points to check the Bounce, Roll, & Violin modes, but ultimately did not take action.
Around VIOLIN MODE DAMPING1, the ASC Pitch signal gets noisy (coinciding with tidal signals for ALS X & Y REFL control). Also saw ASC SRC1_Y_INMON drifting down (but reduce RF9 took care of this).
Upon getting to NLN, had a few (small, i.e. e-11 & smaller) SDF diffs (attached) related to pointing slider offsets & HEPI BS setpoints. I simply REVERTED them.
Reported here by Jim. This trip was caused by the 35W FE power watchdog tripping, thereby shutting down the PSL. It appears to have been caused by the NPRO shutting off. Attachment 1 shows the output powers of the 35W FE and the HPO, as well as the power watchdogs for each laser. It is clear that the FE watchdog trips several seconds before the HPO watchdog trips. The second attachment shows the output power of the NPRO and the 35W FE power watchdog; it is clear the watchdog trip coincides with the NPRO shutting off. At this time the cause of the NPRO shut-down is unclear.
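A sketch of how this timing comparison can be reproduced; both channel names are hypothetical placeholders and the window is approximate:

# Fetch the NPRO power and the FE power watchdog around the trip and
# report when each first changed from its initial state. Channel names
# are hypothetical placeholders, not verified PVs.
from gwpy.timeseries import TimeSeriesDict

CHANS = ["H1:PSL-MIS_NPRO_POWER",     # hypothetical NPRO output power
         "H1:PSL-FE_PWR_WATCHDOG"]    # hypothetical FE watchdog state
data = TimeSeriesDict.get(CHANS, "2017-05-08 04:40", "2017-05-08 04:50")
for name, ts in data.items():
    first_change = (ts.value != ts.value[0]).argmax()
    print(f"{name} first changed at {ts.times[first_change]}")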
The PSL itself was restarted without issue, but as Jim mentioned in his above-linked alog there was an issue relocking the ISS; this was resolved by locking the PMC (the in-loop PD for the ISS is downstream of the PMC, so if the PMC is unlocked the ISS will not lock).

I had an issue with my remote login not being able to bring up a sitemap (command not found?), so I drove out to the site to investigate. When I arrived onsite, the lock request for the PMC was OFF, which is obviously not the nominal configuration for a locked PMC. When the laser was restarted I asked Jim if the PMC was locked and he indicated it was, but when I arrived it was not. Not sure what happened here.

I know there is a script responsible for turning the PMC and FSS off in the event of laser power loss, and I believe this script also turns them back on once power is restored; I will follow up with TJ, the script's author, about this functionality. In the previous 2017 laser trips this issue was not encountered (i.e., open the HPO external shutter and the PMC and FSS locked right up, no further action required). Maybe the script didn't quite work right this time?

At any rate, everything is functioning normally now and the IFO is currently relocking (engaging ASC as I type this). I will investigate more in the morning; for now I'm going to try to get some sleep.
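For reference, a minimal sketch of the power-loss behavior described above (assumed logic, not TJ's actual script; all channel names are hypothetical placeholders):

# On laser power loss, turn the PMC and FSS lock requests off; once
# power returns, turn them back on. Threshold and channels are assumed.
import time
from epics import caget, caput

THRESHOLD = 5.0  # W, assumed "power lost" level

def pmc_fss_power_watch():
    was_up = True
    while True:
        power = caget("H1:PSL-PWR_HPL_DC_OUTPUT")  # hypothetical channel
        if power is None:
            time.sleep(1)
            continue
        if power < THRESHOLD and was_up:
            caput("H1:PSL-PMC_LOCK_REQUEST", 0)    # hypothetical channel
            caput("H1:PSL-FSS_AUTOLOCK_ON", 0)     # hypothetical channel
            was_up = False
        elif power >= THRESHOLD and not was_up:
            caput("H1:PSL-PMC_LOCK_REQUEST", 1)
            caput("H1:PSL-FSS_AUTOLOCK_ON", 1)
            was_up = True
        time.sleep(1)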
Submitted FRS 8049 for the PSL trip.
Jason's remote SSH execution of the MEDM sitemap highlighted a problem: some CDS accounts are missing the standard .profile (or .bash_profile) files in their home directories. I have corrected this by copying the template .profile file into the home directories of the affected accounts.
We will go through Jason's script problem with TJ.
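For reference, a sketch of the fix, assuming the template lives in /etc/skel and home directories sit under /home:

# Copy the template .profile into any home directory that lacks one.
# Paths are assumptions; ownership fix assumes username == dirname.
import os
import shutil

TEMPLATE = "/etc/skel/.profile"  # assumed template location

for entry in os.scandir("/home"):
    target = os.path.join(entry.path, ".profile")
    if entry.is_dir() and not os.path.exists(target):
        shutil.copy(TEMPLATE, target)
        shutil.chown(target, user=entry.name)  # group may also need fixing
        print("installed", target)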