Attached are plots of dust counts > 0.3 micron and > 0.5 micron, in particles per cubic foot, from approximately 5 PM Jan. 21 to 5 PM Jan. 22. Also attached are plots of the modes to show when the monitors were running/acquiring data. Dust monitor 10 in the H1 PSL enclosure is still indicating a calibration failure. I did not plot the counts in the labs, since that IOC was rebooted and had written 'nan' into the data. The IOC was timing out communicating with the dust monitors even though the Comtrol was still on; rebooting the IOC seemed to fix it.
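As a side note, plotting around an outage like this just means masking out the NaN samples; below is a minimal Python sketch (the function name and label are illustrative, not the actual tools used to make the attached plots):

import numpy as np
import matplotlib.pyplot as plt

def plot_dust_trend(times, counts, label):
    """Plot a dust-count trend, skipping the samples the IOC recorded as 'nan'."""
    times = np.asarray(times)
    counts = np.asarray(counts, dtype=float)
    good = ~np.isnan(counts)            # mask out the NaN samples from the outage
    plt.semilogy(times[good], counts[good], label=label)

# e.g. plot_dust_trend(t, counts_0p3, 'lab > 0.3 micron')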
Giacomo, Lisa, Matt
While yesterday's "MC2-M2 as a low-frequency offload path for M3" approach worked for signals well below 100 mHz, it did little to save M3 from saturation due to signals around the microseism. It was also not sufficient for ISI testing.
Through more measurements we found that the MC2 M2-M3 cross-over had a small region of stability between 10 and 20 Hz with only the old 100:1 filter engaged. The maximum phase margin in this region was about 10 deg, which really doesn't sound like enough to be reliable, so I started another filter design cycle. The result is shown in the attached plot: unconditionally stable up to 20 Hz, but not really optimized in terms of gain (we could have a lot more if we are willing to invert the plant features). This was sufficient to keep the M3 drive RMS below 10k counts (at the LOCK filter output, so 2.5k at the DAC), which is about a factor of 10 better than before and about a factor of 40 from saturation.
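For reference, a cross-over check like this can be scripted; here is a minimal sketch using the python-control package with a toy open-loop model (an integrator with an extra pole). This is not the real MC2 M2/M3 plant or the new offload filter, just an illustration of how to pull out a phase margin:

import numpy as np
import control

# Toy open-loop model: an integrator crossing unity gain near 15 Hz with an
# extra pole at 60 Hz (placeholder, not the real MC2 M2/M3 cross-over).
integrator = control.tf([2 * np.pi * 15], [1, 0])
pole_60hz = control.tf([1], [1 / (2 * np.pi * 60), 1])
olg = integrator * pole_60hz

# margin() returns the gain margin, the phase margin (deg) and the crossover
# frequencies (rad/s) at which each is measured.
gm, pm, wcg, wcp = control.margin(olg)
print("phase margin: %.1f deg at %.1f Hz" % (pm, wcp / (2 * np.pi)))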
Giacomo, Keita, Lisa
This entry is just for reference, now that we have this problem. Before the WFS decided to go into a permanent and irreversible bad state, we experienced a moment of happiness as the calibration of the DC signal path was actually making sense (the difference between what we measure on the ADC WFS SUM channels and what we expect is within 10%).

Direct power measurement on IOT2

- IMC unlocked
  - Total power in the combined WFS path (before the first BS, IO_MCR_BS2, nominally 50/50): 2.25 mW
  - Power in front of WFS_A (in reflection from BS2): 0.95 mW
  - Power transmitted after BS2: 1.30 mW
  - Power in front of WFS_B (after the second BS, IO_MCR_BS3): 1 mW
  - Power going to the camera (transmitted by the second BS, IO_MCR_BS3): 0.27 mW
  - Power reflected by WFS_A: 0.13 mW
  - Power reflected by WFS_B: 0.1 mW
- IMC locked
  - Total power in the combined WFS path: 250 uW

Expected number of ADC counts with IMC unlocked and WFS interface in high gain mode

- WFS DC transimpedance = 1000 V/A
- Diode responsivity = 0.8 A/W
- ctsPerVolts = 1638 counts/V
- DC WFS interface gain = 10
- ==> 13104 counts/mW
- Expected ctsWFS_A = 13104 counts/mW x (0.95 - 0.13) mW = 10745 counts
- Expected ctsWFS_B = 13104 counts/mW x (1.0 - 0.1) mW = 11794 counts

Measured number of ADC counts with IMC unlocked and WFS interface in high gain mode

- WFS_A_SUM = 9272; WFS_B_SUM = 11014
- Dark offsets: WFS_A_SUM = -1215; WFS_B_SUM = 25
- Measured ctsWFS_A_SUM = 9272 - (-1215) = 10487 counts
- Measured ctsWFS_B_SUM = 11014 - 25 = 10989 counts

Laser calibrator D1201258 on WFS_B, high gain mode, 4 mW @ 980 nm

- Measured ctsWFS_B_SUM = 35037
- We don't know the exact responsivity of the Q3000 diodes at 980 nm, but it should be between 0.5 and 0.6 A/W. ==> This measurement gives us 0.54 A/W.
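For convenience, the same arithmetic as a short Python check (every number is one quoted above; nothing is re-measured here):

# Counts per mW of absorbed power in high gain mode
transimpedance = 1000.0   # V/A   (WFS DC transimpedance)
responsivity   = 0.8      # A/W   (Q3000 at 1064 nm)
cts_per_volt   = 1638.0   # counts/V
interface_gain = 10.0     # DC WFS interface high-gain setting

cts_per_mW = transimpedance * responsivity * cts_per_volt * interface_gain / 1000.0
print(cts_per_mW)                     # 13104 counts/mW

# Absorbed power = incident power - reflected power
print(cts_per_mW * (0.95 - 0.13))     # ~10745 counts expected on WFS_A
print(cts_per_mW * (1.00 - 0.10))     # ~11794 counts expected on WFS_B

# 980 nm calibrator: back out the responsivity from the measured 35037 counts
resp_980 = 35037.0 / (4.0 * transimpedance * cts_per_volt * interface_gain / 1000.0)
print(resp_980)                       # ~0.53-0.54 A/W, consistent with 0.5-0.6 A/W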
Something is really only marginally stable.
Attached is an example of a low-frequency oscillation of the IMC WFS DC signals, measured by using a DB15 breakout board on the front panel of the DC interface (i.e. the breakout board is inserted between the WFS head and the WFS DC interface).
This is easily triggered by disconnecting the WFS DC cable from the front panel of the WFS DC interface and then plugging it in again. In this case it was WFSA, segment 4 (i.e. between pin 1 and pin 9; why this is not segment 1, I don't know), and it was oscillating at about 100 Hz. When this happens, all the quadrants show a similar symptom, and it doesn't go away spontaneously. Sometimes it goes away by wiggling the cable or touching some of the pins on the breakout board.
It's not exactly easy to make things accidentally oscillate at such a low frequency, so I suspect a power problem, maybe weak grounding of big capacitors, e.g. the power capacitor on the WFS head.
Interestingly, when the breakout board is not there and you look at the fast channels for WFS DC, the oscillation dies down on its own within 10 or 20 seconds for WFSA.
I was curious to do the same measurement for WFSB, and with the breakout board it does something similar (second picture; in this specific case it was slower). Without the breakout board, I didn't see any craziness. Also, when this was going on, the +-18V supplies that are passed through to the WFSB head were also showing something similar (third picture; in this case WFSB was oscillating faster than when the second picture was taken). I haven't measured the WFSA power (see the entry attached to this one).
When they're not oscillating, things look OK-ish in that nothing is outrageously wrong.
And the WFS DC interface board broke in the middle of the above measurement for WFSB.
The circuit breaker switch of the board was triggered. I disconnected WFSA and WFSB from the board and switched the board on again; the +15V LED came back but the -15V one didn't.
I brought the DC interface chassis back to the EE shop, and Filiberto thinks that one of the FETs on the protection board inside the chassis (the one that provides -18V to the -15V voltage regulator as well as to the WFS heads) is dead.
Since we don't have spares, I removed the WFS DC interface that was originally intended for IFO REFL WFS (i.e. HAM1) and put it in place of IOO WFS DC interface. We are not going to use REFL WFS for a long time.
Anyway, after we replaced the WFS DC interface, the DC power came back but we still don't know if there is any damage in the WFS heads themselves.
We noticed very large dark offsets after the new WFS interface was installed. Looking more closely, we saw the same type of oscillations as with the other board. This time, we couldn't stop the bad behavior by unplugging/touching/rebooting, so we turned off the WFS interface board for tonight.
The fourth BSC-ISI assembly was recently tested. It looks good. The testing report can be found at E1100297-V1 - aLIGO ISI-Unit 4 - LHO - Phase I report.pdf.
Manually moved the model running on the tripleteststand front-end to the HLTS 05 model; for some reason the script to change this failed almost completely. Started the model, copied the master.05 file to master, restarted the daqd, and verified that the MEDM screen for the 05 model showed appropriate activity.
The network switch for vacuum controls in the MSR as well as several other devices were powered by normal AC power. The power strip for all devices was moved to a UPS powered outlet to allow operation during brief power failures. This did not cause an interruption for the vacuum network since the switch had dual power supplies, but other services were interrupted for about 2 seconds. Nobody noticed.
The SATABOY RAID unit associated with the h1fw1 system suffered a controller board failure at 1:30am Sunday. Dan (LDAS) diagnosed the problem and, for now, has removed the failed controller card and moved everything over to the second controller card. A replacement card is on order; we will schedule an h1fw1 outage when it arrives.
h1fw1 resumed writing frames at 15:30 this afternoon local time.
The optics airbake oven that had been located in the LSB optics lab has been re-re-located (delocated?) to the OSB optics lab. There should be no competing issues with bonding now.
Events/work that I was notified would take place:
- 8:30 am, UniFirst to change mats
- 9:20 am, Paradise water, water delivery.
- 10:28 am, Travis heading into the LVEA, BSC01 work, unlocking quad.
- 10:50 am, ITMY watchdogs trip, Travis working in chamber.
- 11:10 am, Jim B. move power for fiber rack, per WP3675.
- 11:34 am, Praxair, LN2 delivery to Y-End.
- 12:00 pm, Hugh has unlocked BSC01 HEPI.
- 12:00 pm, Mark making ITMY measurements.
- 2:44 pm, Travis checking ITMY for interference, bad transfer function, Travis out. TF start again.
- 3:03 pm, CP7 starts alarming, delivery symptom, curve in DV appears to slow down, now at 99.3.
Mark B. Commencing another round of Matlab TFs on PR3.
Mark B. Data taking finished at around 2:50 am. The log file shows one failure to get data for H1:SUS-PR3_M1_OSEMINF_T3_OUT_DQ after the maximum number of attempts, but otherwise OK.
Undamped: /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/SAGM1/Data/2013-01-22-1042933717_H1SUSPR3_M1_0p01to50Hz_tf.mat
Damped: /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/SAGM1/Data/2013-01-22-1042953606_H1SUSPR3_M1_0p01to50Hz_tf.mat
Analysis is underway.
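As an aside, the "maximum number of attempts" behaviour is just a retry-then-give-up loop; a minimal Python sketch (not the actual Matlab TF code, and fetch_channel is a placeholder for whatever data-access call it makes) looks like this:

import time

def fetch_with_retries(fetch_channel, channel, start, duration,
                       max_attempts=5, wait_s=10.0):
    """Try to fetch a channel several times before giving up."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch_channel(channel, start, duration)
        except Exception as err:
            print("attempt %d/%d failed for %s: %s"
                  % (attempt, max_attempts, channel, err))
            time.sleep(wait_s)
    raise RuntimeError("gave up on %s after %d attempts" % (channel, max_attempts))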
Mark B. I committed a handy Matlab script to the SUS SVN that automates the process of creating safe.snap files for SUS: ^/trunk/Common/MatlabTools/save_safe_snap.m
Typical usage is
>> save_safe_snap('H1','ITMY')
It backs up the current state to a temp file in /opt/rtcds/userapps/release/sus/${ifo}/burtfiles/, puts the suspension into the safe state by switching off the master switch, test filters, damping filters, lock filters and alignment offsets, makes the safe.snap file, restores the suspension to the saved original state, and deletes the temp file. It's fairly verbose and describes in detail what it's doing, so if it hits an unexpected condition and stops, it should be fairly obvious how far it got and how to recover. At the end of its output it supplies the text of an svn commit command that can be copied and pasted into the Matlab command line to commit the new file.
It supports QUAD, BSFM, HLTS, HSTS and HAUX, and has been tested on all of those on H1. For HAUX, you can specify a single optic, e.g. 'IM1', but since there's only one model for IM1-4, it necessarily does them all. It assumes that the safe.snap files are symlinks pointing into the userapps SVN, according to the scheme imposed on H1 by Jeff K in alog 5133. If/when LLO adopts the same layout, the script should work there as well.
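For readers who just want the gist of the procedure, here is a minimal Python/pyepics sketch of the same back-up / safe / snapshot / restore pattern; the channel list and the write_snap callback are illustrative placeholders, not what save_safe_snap.m actually does internally:

from epics import caget, caput

# Illustrative channel list only; the real script assembles the full set of
# master switch, test, damping, lock and alignment channels for the optic.
SWITCH_CHANNELS = [
    'H1:SUS-ITMY_M0_MASTERSWITCH',   # hypothetical example channel
]

def make_safe_snapshot(write_snap):
    """Back up current settings, drive them to the safe state, write the
    snap file, then restore the original settings."""
    backup = {ch: caget(ch) for ch in SWITCH_CHANNELS}   # 1. back up current state
    for ch in SWITCH_CHANNELS:                           # 2. go to the safe state
        caput(ch, 0)
    write_snap()                                         # 3. write safe.snap
    for ch, val in backup.items():                       # 4. restore original state
        caput(ch, val)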
Mark B. The PR3 TFs that failed before (5191) ran fine when restarted with the typo fixed (the call to Matlab_TFs() had invoked model hsts_metal, not hlts_metal):
Undamped: ^/trunk/HLTS/H1/PR3/SAGM1/Data/2013-01-21-1042830122_H1SUSPR3_M1_0p01to50Hz_tf.mat
Damped: ^/trunk/HLTS/H1/PR3/SAGM1/Data/2013-01-21-1042852645_H1SUSPR3_M1_0p01to50Hz_tf.mat
The damped plots are all fine. The undamped plots are rather noisy in L, P and Y, and the fundamental pitch mode (0.66 Hz) has pretty much disappeared in P and L. Probably needs a look over and another round of TFs.
Mark B. Per alog 5200, starting DTT TFs on ITMy. Will do M0 and R0 in parallel.
Mark B. Plots for both chains were very messy, especially in pitch, so Travis has gone out to hunt for interferences, with particular attention to the flags between the chains.
Mark B. Travis found a UIM flag that wasn't clearly touching but was a bit close for comfort and corrected it, so I'm starting another round of TFs. I probably won't finish before quitting time in which case I'll continue in the morning.
ITMy in BSC1 is now suspended. Mark is running a set of quick TFs to check for rubbing, and once deemed healthy, we are back to alignment for both SEI and SUS.
The H1 DAQ SATABOY disk RAID system in the MSR failed at 01:27 local time Sunday morning (the Sunday gremlin?). From that time onwards h1fw1 has not been able to write frames. We will leave the system in its broken state for LDAS to do some forensics. This is the system that was upgraded to Sol11 on Thursday; we are investigating whether that had anything to do with the failure.