Jeff K and myself
Was: SAFE
Now: INIT
A few weeks back we changed the initial request state for the SUS Guardians from ALIGNED to SAFE (alog 27627), with the thought that ALIGNED was too aggressive a move immediately after a reboot. Today the Guardian machine was rebooted again and the SUSs all went to SAFE, but this apparently disrupted the PCAL team's work. We decided to have the nodes run INIT instead and go back to where we had them previously. Hopefully it actually goes that way.
All of the SUS nodes have been reloaded with the new code that is also committed to the SVN.
Manually set CP5 LLCV to 10% open to lower LN2 level, then adjusted % full set point to 98% and switched to PID mode to maintain overnight.
Cheryl recovered the optic positions and locked the arms on green. I completed an initial alignment and brought the IFO to DC_READOUT_TRANSITION.
Bubba, property inventory, LVEA
Travis, Darkhan and Evan to PCAL X
14:12 UTC Jeff B. and Peter in PSL
14:40 UTC Chris S. to end X to retrieve equipment from receiving area
15:08 UTC Coca-Cola delivery truck through gate
15:23 UTC Gerardo to LVEA to retrieve wire
15:23 UTC Jim B. restarting guardian machine
15:23 UTC Karen and Christina to LVEA to clean
15:26 UTC Jeff B. and Peter out of PSL, Peter taking equipment back to anteroom
15:34 UTC John changed setpoint on CP7
Guardian reboot took everything to SAFE, interfered with PCAL work
15:40 UTC Jeff B. taking ETMX SUS back to ALIGNED
15:48 UTC Bubba and Nicole to end Y and then end X for property management
15:52 UTC Peter done with chiller block investigation and DC blocker install
15:58 UTC Richard swapping IM2 satellite amp, power cycling IM3 and MC1 satellite amps
16:07 UTC Cheryl installing beam blocks on IOT2L
16:07 UTC Joe D. to LVEA to gather flex tubing
16:07 UTC Vinny to LVEA to work on infrasonic microphones
16:09 UTC LN2 delivery truck through gate
16:11 UTC mat delivery truck through gate
16:15 UTC Jeff B. to optics lab to store equipment
16:19 UTC Bubba lifting Styrofoam cooler off tilt meter at end Y to look at serial number
16:23 UTC Jeff B. done
16:27 UTC Cheryl done
16:40 UTC Vinny to EY VEA to work on infrasonic microphones
16:43 UTC Evan and crew done at end X
16:46 UTC Evan and crew to end Y
16:48 UTC retrieval of SR785 from beer garden
16:51 UTC Vern to SR2 by HAM4
16:58 UTC tractor-trailer truck through gate to pick up forklifts
17:02 UTC Nutsinee to LVEA
17:05 UTC Richard to LVEA to replace DAC cards in SUSB123 and SUSH2A IO chassis
17:05 UTC Gerardo to LVEA to replace wire for HAM1 vacuum gauge
17:07 UTC Dave B. powering down SUSB123 and SUSH2A frontend computers so Richard can replace DAC cards
17:11 UTC SUSB123 and SUSH2A powered down
17:15 UTC Nutsinee done
17:19 UTC Vinny back from end Y, going to end X
17:22 UTC Dave M. swapping cables on Newtonian Noise seismometers in LVEA
17:26 UTC Richard done replacing DAC cards, Jim B. powering on SUSB123 and SUSH2A frontend computers
17:40 UTC Dave B. restarting ISI ITMY model
17:40 UTC Jim B. restarting models on SUSB123 and SUSH2A frontend computers
17:51 UTC Dave M. done
Satellite amp modifications are done
17:59 UTC Richard to LVEA to reinstall modified whitening chassis
18:00 UTC Vinny back from end X
18:11 UTC Dave M. to LVEA to make measurements of Newtonian Noise array
18:12 UTC Christina and Karen done cleaning at end X, coming back to OSB
18:12 UTC BSC HEPI and IOP models need restarting, not just ISI
18:16 UTC BS seismic HEPI and IOP model restart
18:19 UTC Vinny to LVEA to calibrate microphones
18:19 UTC Nutsinee removing TCSY AOM from water supply
18:25 UTC Kiwamu restarting OMC PI model
18:25 UTC Dale leading tour in CR
18:25 UTC LDAS work starting on nds0
18:28 UTC Travis done
18:28 UTC Richard reinstalling modified AA chassis
18:34 UTC Daniel done Beckhoff work
18:36 UTC Bubba to end X for property management
18:39 UTC Richard done installing modified AA chassis
18:39 UTC Dave restarting OMC PI model
18:42 UTC Vinny done calibrating microphones
18:46 UTC Gerardo done at HAM1
18:49 UTC Filiberto power cycling CPS fanout
18:51 UTC Dave B. restarting PI model
18:52 UTC SEI ITMx (HEPI & ISI) (Dave)
18:52 UTC Kyle to end X and end Y VEAs to take photographs and serial numbers of VAC equipment
18:57 UTC Volker to optics lab to retrieve optics posts
19:02 UTC Dave B. restarting end station HEPI and ISI models
19:27 UTC SUS ETMY PI model
19:31 UTC Jeff K. and Jim W. starting recovery of optics in HAM2
19:38 UTC DAQ restart
19:44 UTC Kyle back, reports end X is laser hazard and end Y is laser safe
19:55 UTC Christina opening OSB receiving door
20:09 UTC Bubba and Ernest to LVEA for property audit
20:15 UTC Filiberto to LVEA to reset cable on HAM4 SR2
20:39 UTC Kiwamu to LVEA to check with Nutsinee
20:57 UTC Nutsinee done, closed tables, reopened light pipe
20:57 UTC Dave B. taking nds1 down
21:01 UTC Starting IFO locking recovery
21:06 UTC Kiwamu to LVEA to realign REFL cam
21:09 UTC Bubba to end stations for property audit
21:15 UTC Nutsinee to TCS racks for CO2 X
21:16 UTC Starting initial alignment
21:19 UTC Carl to end Y to take PI measurements
21:19 UTC Nutsinee back
21:27 UTC Kiwamu done
21:28 UTC Kiwamu to HAM6 to measure DC QPD whitening
21:36 UTC DAQ work complete
21:49 UTC Bubba done property management in VEAs
22:08 UTC Travis to end Y to check illuminator
22:09 UTC Fred taking UW REU students into LVEA
22:55 UTC Initial alignment complete
22:57 UTC Carl done
23:04 UTC Fred and UW REU students out of LVEA
Work done by Carlos and Jonathan. The CDS wiki/web server was upgraded to a new OS, a new Apache release, and newer hardware. As it was being rebuilt, we captured the configuration in the CDS configuration management system so that the current state is easily repeatable. Everything appears to have been migrated successfully, and users should not expect to see changes due to this migration. However, it allows us to upgrade security settings and provides a needed upgrade that will support some experimental remote data access software.
We finally accumulated a lock stretch long enough, with good sensitivity, to analyze the interferometer response with the Pcal above 1 kHz. Last night there was a lock stretch at 40 Mpc and 40 W input power. For this lock stretch, a good time to start the high-frequency analysis is 05:31:00 06-28-2016 UTC, with an end time of 06:32:00 06-28-2016 UTC. The X-end Pcal was running at 1000.3 Hz (different from the previous time we made the injection).
Now that there is enough data accumulated, I moved the Pcal injection to 1501.3 Hz and will wait for the next lock stretch with good enough sensitivity, high power, and long duration.
For reference, the end of O1 also made high frequency, long duration injections. We are repeating these measurements. See alog 24843, and the plan from the end of O1 alog 24802.
Today during maintenance the HAM2 ISI was not fully isolated but in DAMPED to allow for work in the LVEA. The IM1-4 optics were in SAFE, so no alignment bias and no damping. This configuration closely mimics an ISI / optic trip.
When it was time to restore the HAM2 ISI, and then redamp the IM1-4 optics, I observed the drift in pitch and yaw that I've observed after an ISI trip.
The difference between a typical ISI-IM-IMC recovery and today's recovery is that we restored the ISI and IMs while the PSL beam was blocked at the shutter between the PSL and HAM1.
The drift in IMs today was no different than the drifts I've observed before.
There was a suggestion that heating from the IMC being restored, when the HAM2 and HAM3 ISIs and optics are restored, might be the source of the IM drift. However, since I tracked the drift today and there was no IMC beam, I have to conclude that the restoration of the IMC is not the source of the IM drift.
While tracking the drift, I exercised the pitch and yaw alignment biases for each IM by +/-100 counts to see if that would affect the drift. I was testing the theory that the cause of the alignment shifts that I've tracked since last year might also be causing this recovery drift, but I cannot see that being true, since moving the alignment biases had no effect on the drift.
Here are the amounts each IM changed after being restored:
black: IM DOFs that did not show any significant drift (drift of less than 1 urad)
red: IM DOFs that did show significant drift (drift equal to or greater than 1 urad)

IM DOF | alignment restored 20:15:55 UTC | alignment at 20:40:47 UTC | drift over 25 minutes (urad)
IM1 P  |  186.62  |  186.5   |  -0.12
IM1 Y  | 1118.10  | 1118.25  |   0.15
IM2 P  |  599.60  |  604.1   |   4.50
IM2 Y  | -207.30  | -207.1   |   0.20
IM3 P  | 1949.50  | 1961.8   |  12.30
IM3 Y  |  -78.00  |  -79.68  |  -1.68
IM4 P  | -3863.40 | -3864.36 |  -0.96
IM4 Y  | -611.30  | -611.35  |  -0.05
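The drift column is just the difference of the two readbacks, judged against the 1 urad significance threshold; a minimal Python sketch reproducing the table's arithmetic (values copied from above):

```python
# Readbacks (urad) at restore (20:15:55 UTC) and 25 minutes later (20:40:47 UTC),
# copied from the table above.
readbacks = {
    "IM1 P": (186.62, 186.5),
    "IM1 Y": (1118.10, 1118.25),
    "IM2 P": (599.60, 604.1),
    "IM2 Y": (-207.30, -207.1),
    "IM3 P": (1949.50, 1961.8),
    "IM3 Y": (-78.00, -79.68),
    "IM4 P": (-3863.40, -3864.36),
    "IM4 Y": (-611.30, -611.35),
}

def drift_report(data, threshold=1.0):
    """Return {dof: (drift in urad, significant?)} using the 1 urad criterion."""
    return {
        dof: (round(after - before, 2), abs(after - before) >= threshold)
        for dof, (before, after) in data.items()
    }

report = drift_report(readbacks)
# IM2 P, IM3 P and IM3 Y come out significant; the rest do not.
```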
Attached plots:
omc_pi model change. WP5957
Kiwamu, Carl, Dave:
h1omcpi was modified to read in the two new ADC channels. Two new 64 kHz channels were added to the commissioning frame, and 8 existing 2 kHz channels were added to the science frame for permanent archival (H1:OMC-PI_DOWNCONV[1-4]_DEMOD_[I,Q]_OUT_DQ).
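The bracket shorthand in that channel list expands to eight names; a throwaway Python sketch of the expansion (names exactly as given above):

```python
# Expand H1:OMC-PI_DOWNCONV[1-4]_DEMOD_[I,Q]_OUT_DQ into the eight
# individual 2 kHz channels archived in the science frame.
channels = [
    f"H1:OMC-PI_DOWNCONV{n}_DEMOD_{q}_OUT_DQ"
    for n in range(1, 5)
    for q in ("I", "Q")
]
```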
h1 SUS PI model changes
Ross, Carl, Dave:
New h1susitmpi and h1susetm[x,y]pi models were installed.
DAC card replacement in SUS B123 and H2A WP5954, WP5953
Richard, Fil, Dave, Jim
h1susb123 and h1sush2a were powered down. The first DAC card in each IO chassis was replaced. The new DAC cards both passed the autocal. The ADCs were power cycled at the same time.
BSC ISI code change WP5961
Brian, Jim W, Jim, Dave
A new isi2stagemaster.mdl file was installed, and all BSC ISI models were rebuilt and restarted (h1isi[bs, itmx, itmy, etmx, etmy]). Part of this change was to remove the user model's DACKILL part. Since this part registers itself with the IOP model on startup, the code change required a restart of all the models on h1seib[1,2,3,ex,ey] to resync the IOP with the DAC clients.
We also found that the HEPI watchdog reset button does not work on the newer Ubuntu 14 machines. This was traced to the wd_dackill_reset.pl Perl script, which requires the CAP5 module that is not loaded on the newer machines. Perhaps these old Perl scripts should be replaced with newer Python scripts.
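As a hedged illustration of what such a Python replacement might look like (the channel name and reset value below are placeholders invented for this sketch, not taken from wd_dackill_reset.pl; in production the `put` argument would be pyepics' `caput`):

```python
# Hypothetical Python stand-in for the Perl wd_dackill_reset.pl script.
# RESET_CHANNEL is a made-up example name, not a real HEPI channel.
RESET_CHANNEL = "H1:HPI-EXAMPLE_WD_DACKILL_RESET"

def reset_watchdog(put, channel=RESET_CHANNEL, value=1):
    """Write the reset value to a watchdog DACKILL reset channel.

    `put` is injected so this can be exercised without an EPICS
    connection; on site it would be epics.caput from pyepics.
    """
    put(channel, value)
    return channel, value

# Exercise with a stub in place of epics.caput:
written = {}
reset_watchdog(lambda ch, val: written.update({ch: val}))
```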
DAQ upgrades and new QFS server WP5964
Dan, Carlos, Jim, Dave:
Dan upgraded the QFS servers h1ldasgw0, h1ldasgw1 and h1dmtqfs0 (Solaris machines). These were fully patched, and the QFS file system they control was file-system-checked.
Dan installed a new QFS server called h1ldasgw2. This is a Sun X2200 with redundant fiber channel ports which directly connect to the two Qlogic switches.
In the new configuration, h1ldasgw0 only exports /ldas-h1-frames to h1fw0 on the private LAN (no longer exports to h1nds0). h1ldasgw1 only exports /cds-h1-frames to h1fw1 on the private LAN (no longer exports to h1nds1). h1ldasgw2 NFS exports both /ldas-h1-frames and /cds-h1-frames to both nds machines in read-only mode.
The hope is that the frame writers will be more stable if their NFS/QFS server is not also serving NDS requests.
h1nds1's RAM was doubled from 24GB to 48GB by the insertion of 3*8GB SIMM cards harvested from x1work.
ASC new code
Jenne, Jim, Dave
A new h1asc model was installed. We were surprised that an excitation started immediately once the model was running. It looks like a script is constantly running an excitation on H1:ASC-POP_X_PZT_YAW_EXC.
Does anyone know what is doing this?
DAQ Restart
Daniel, Dave:
The DAQ was restarted to support all of the above model changes. Latest Beckhoff INI files were installed.
Testing Scientific Linux 7.2 on LHO DMT WP5969
Dan:
Dan is upgrading h1dmt2 from SL6.5 to SL7.2 as a test bed for the new OS in preparation for a pre-ER9 upgrade.
cdswiki upgrade WP5956
Jonathan, Carlos:
work is underway to upgrade the cdswiki machine
Mysterious ASC excitation found: the ASC runs a special awg_tp_man process to provide more than 35 testpoint channels. The restart of the model did not restart the awgtpman process, so the running excitation was immediately applied to the new system. Jim is investigating how this can be prevented; for now the process was restarted by hand.
h1fw1 is unstable again.
After being stable since last Friday, h1fw1 is now very unstable following today's maintenance. We tried power cycling the h1ldasgw1 Solaris box, which, unlike last Friday, did not fix the stability problem. h1fw0 has also restarted itself since the last DAQ restart, but only once compared with h1fw1's many restarts.
Here is an interesting error in h1fw1's log file from the last attempted restart:
[Tue Jun 28 18:07:42 2016] ->3: start profiler
[Tue Jun 28 18:07:42 2016] ->3: # comment out this block to stop saving data
[Tue Jun 28 18:07:42 2016] ->3: # Write commissioning (full) frames
[Tue Jun 28 18:07:42 2016] frame saver started
[Tue Jun 28 18:07:42 2016] main profiler thread thread - label dqpmain pid=10592
[Tue Jun 28 18:07:42 2016] ->3: start frame-saver
[Tue Jun 28 18:07:42 2016] waiting on frame_saver semaphore
[Tue Jun 28 18:07:42 2016] Full frame saver thread - label dqfulfr pid=10593
[Tue Jun 28 18:07:42 2016] Full frame saver thread priority error Operation not permitted
[Tue Jun 28 18:07:43 2016] waiting on frame_saver semaphore
[Tue Jun 28 18:07:44 2016] waiting on frame_saver semaphore
[Tue Jun 28 18:07:45 2016] waiting on frame_saver semaphore
[Tue Jun 28 18:07:46 2016] waiting on frame_saver semaphore
Terminated and landed a new cable for PT100-A, old cable was 22 AWG, new one is 18 AWG.
As a note, we used a pigtail cable with a HD DB15 connector.
Great! I closed FRS ticket 5634 on this.
Evan G., Darkhan T., Travis S.
Both end station PCals were calibrated today during the maintenance period. Stay tuned for results.
TravisS, EvanG, Darkhan,
Pcal calibration measurements were taken at both end stations (the measurement procedure can be found in T1500063).
Pcal EndX measurements (DCC T1500129-v08) showed that the Pcal optical efficiency (OE) at this end station has decreased since the last measurement (LHO alog 27029): the OE of the inner and outer beams dropped from 84% and 61% (on 2016-05-05) to 78% and 46% (on 2016-06-28).
At the same time, the WS/TX response ratio has not changed by more than 0.5% since Aug 2015. So for now, the Pcal TxPD output should be used at EndX.
Note: a DAQ restart initially happened during measurements with the WS at the receiver module. Since we repeated the measurements affected by the DAQ restart, the results reported in T1500129-v08 are not affected.
Pcal EndY measurements (DCC T1500131-v05) were consistent with previous measurements (LHO alog 27029).
The measurement data is committed to calibration SVN at:
trunk/Projects/PhotonCalibrator/measurements/LHO_EndX/D20160628
trunk/Projects/PhotonCalibrator/measurements/LHO_EndY/D20160628
Surveyed all eight CP LLCV stems for Loctite Blue 242 application. CPs 2, 4 and 5 now have Loctite and were re-zeroed (CP5 has a broader range). The other stems were so tight that I couldn't loosen them, so I assume they don't need Loctite. NOTE: the CP2 and CP5 stems were starting to unthread from the actuator coupling nut again prior to the Loctite application.
IM DOF | OSEM readback |
IM1 P | 187 |
IM1 Y | 1118 |
IM2 P | 603 |
IM2 Y | -204 |
IM3 P | 1966 |
IM3 Y | -79 |
IM4 P | -3865 |
IM4 Y | -611* |
*Nominal value for IM4 YAW updated to reflect current H1 alignment.
I used TimeMachine to restore all other SUS alignment values to the lock stretch at ~3am local time.
SR2 Top Satellite Box OSEM Photodiode Oscilloscope Traces
The attached oscilloscope traces show a comparison of the time-domain signals between SR2 Top OSEM Photodiode A and SR2 OSEM Photodiode B, as seen from connector J2 "Analog Rack". (We use the satellite box signal and connector naming conventions shown on D1400098-v1.) OSEM Photodiode A was from the SR2 Top OSEM, the suspect channel. In the images, the upper trace, labeled "1", is from OSEM Photodiode B, satellite box connector J2 pin 2. The lower trace, labeled "2", is from OSEM Photodiode A, satellite box connector J2 pin 1, the noisy channel under investigation.
Two x10 probes were used. The vertical scale factors for images 1 and 2 are 0.200 V/div. The vertical scale factor for image 3 is 0.050 V/div.
Image 1 shows the satellite box as found, all cables connected. Photodiode A shows significantly more noise.
Image 2 shows the photodiode signals with the "Vacuum Tank" cable, connector J1, disconnected. All quiet.
Image 3 shows the photodiode signals with the "Vacuum Tank" cable reconnected to J1. Both photodiode signals now have the same amplitudes. (Note the more sensitive scale factor for image 3.)
An intermittent connection? A poorly seated connector? Pickup from the SEI CPS 25 kHz clock? Betsy reported that the noise spectrum did not change!
After the above investigation, Fil also powered off the HAM CPS clock synchronizing fanout. The spectra still showed the noise while this was off, although it may have been slightly reduced. Then Fil reseated the SAT AMP cable at the feedthrough of the chamber which has these channels on it. After reseating, the noise was still there. So we have now tried multiple power cycles and cable reseatings with no luck. Our next tries will be to power cycle the h1sush34 computer (the only one not done today!) and to look closer at the CPS clock sync.
So to recap, there are a few channels which show a "bouncy" type noise spectrum (based on Andy L.'s tool plots, which I'll ask him to rerun soon) which appeared before or after the power outage:
PR2 M1 T2
PR2 M3 LL
PR2 M3 UL
SR2 M1 T1
SR2 M1 T3
SR2 M1 LF
A scan through all other OSEMs did not show any others with this specific noise shape. Attached are the spectra of the still-present SR2 noise from today (bottom pane) together with some healthier channels (upper pane).
Continuation of alog 27893.
(Richard M, Fil C, Ed M, Daniel S)
ECR E1600192.
Split Whitening Chassis S/N S1101627:
AA Chassis S/N S1102788 & S1202201:
The attached plots show the transfer functions of
Whitening chassis S1101603 was removed from ICS-R5 U18. New chassis S1101627 was installed with the modifications listed above. The new unit is the split variant.
[Kiwamu, Carl]
First indications are that the DCPD HF channels are behaving as expected. With the OMC locked on an RF sideband, DCPD A is compared to the new DCPD HF A. The transfer function between them has an f relation at low frequency which transitions to f^3 at 10 kHz, as expected from the AC coupling change and the removal of two poles at 10 kHz. Coherence is lost at about 20 kHz.
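A sketch of why that shape is expected (my reading of the entry above, not a measured fit): removing two poles at f_p = 10 kHz from the original path multiplies the ratio of the two channels by a double zero there, on top of the single f from the AC coupling change:

\[
\left|\frac{H_{\mathrm{HF}}(f)}{H_{\mathrm{DCPD}}(f)}\right| \;\propto\; f\,\left|1 + i\,\frac{f}{f_p}\right|^{2}, \qquad f_p = 10\ \mathrm{kHz},
\]

which goes as f well below f_p and as f^3 well above it, matching the observed transition at 10 kHz.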
I have changed the whitening gain of the AHF DCPD path (A and B) to 15 dB. This pushed the noise floor in the 15-29 kHz region to a factor of ~2 above what I guess is ADC noise. In the attached plots the original DCPD can be seen to reach a common noise floor with the AHF DCPD path (with no whitening gain) at about 17 kHz. Turning the whitening gain up, we can get some coherence with the shot noise of the DCPD through the original electronics.
A forest of new peaks is visible, mainly in the 17-29 kHz region; there are 80 peaks in the 25-26 kHz band alone. I stepped the ETMX ring heater up by 0.5 W at 7:01 and down again at the end of lock at 7:22. This may give some mode identification data.
This morning we removed the Twin-T notch on the AA board by removing C3, C4, C5, C10 and C11, leaving the 100 Ohm resistors in place.
We adjusted the pressure on the pwrmtr water circuit from its nominal 5 bar to 4.5 bar. The flow rate to the laser heads decreased somewhat, so we put the pressure back to ~5 bar to set the flow rate in the circuit to be over 1.25 l/min. In doing so, the other flow rates seemed to behave as expected. We will monitor the flow rates and pressures over the next few days to see if anything settles out/down. Jeff, Peter
Have been monitoring the PSL chiller trends during the day. The attached plot covers an 8 hour period. The spikes at 06:50 (PT) are when Peter and I varied the pressure regulators. The pressures and flows have flattened out, which is good. The head flows have also flattened out, which is also good. The head temperatures have been moving around a bit (by 0.1 degree). It appears that varying the pressure regulators may have stabilized the pressures and flows. Will check again in the morning to see if these trends hold.
Nutsinee, Jim, Dave:
The HWS code crashed at 07:50 PDT this morning. Nutsinee tried to restart it at 11:46 PDT, but it failed. We found that the 1 TB RAID is 100% full (it has data back to December 2014). We are contacting Aidan to see how to proceed.
BTW: the /data file system on h1hwsmsr is NFS mounted at the end stations, so no HWS camera information is being recorded at the moment.
We deleted December 2014 ITMX and ITMY data to free up 21GB of disk space on /data. The code now runs.
We need a long-term plan for how to keep these data if they need permanent archiving.
I have restarted both X and Y HWS codes this evening.
The disk was full again today. I deleted Jan-Feb 2015 data from ITMX folder. Freed up 194GB. HWS code now runs again.
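Rather than deleting month folders by hand each time the disk fills, something like the following could automate it (a sketch only; the flat, chronologically sortable directory naming below is an assumption, not the real /data HWS tree):

```python
# Delete the oldest per-month data directories under a root, keeping the
# most recent few. Assumes subdirectory names sort chronologically
# (e.g. 2014-12, 2015-01, ...), which is an assumption about the layout.
import os
import shutil

def prune_oldest(root, keep_newest):
    """Remove the oldest subdirectories of `root`, keeping `keep_newest`.

    Returns the list of removed subdirectory names, oldest first.
    """
    subdirs = sorted(
        d for d in os.listdir(root) if os.path.isdir(os.path.join(root, d))
    )
    removed = subdirs[:-keep_newest] if keep_newest else subdirs
    for d in removed:
        shutil.rmtree(os.path.join(root, d))
    return removed
```

On the real system this would want a free-space check (e.g. os.statvfs) before deciding how many months to drop, and ideally an archive step instead of outright deletion.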