It seems the laser shut down at around 11 pm local time Friday night, after being on for about 14 hours. Attached is a screenshot of the laser MEDM screen as it looks now, along with some trends. The first is 4 seconds around the time it shut down (2016-05-07 4:46:56 UTC); of the channels I looked at, the HPO watchdog (PSL-OSC_PWRDOGON?) and the LRA seem to be the first to go. The third screenshot shows the power outputs over the 14 hours the laser was on, along with the humidity trend. The humidity looked alarming at first, but the last attachment shows a 10 day trend in which the humidity fluctuates this much and more.
Looks like the NPRO shut down too, for a reason that is not obvious to me.
Delivery already ordered for Tues. (CP7 and CP8)
hourly trend of CP8 dewar level (from hourly autoburts) for the month of May
01/00:00 2.723166600543230e+01
01/01:00 2.718588824121830e+01
01/02:00 2.718588824121830e+01
01/03:00 2.714316232795190e+01
01/04:00 2.710348826563310e+01
01/05:00 2.704550309762871e+01
01/06:00 2.699972533341472e+01
01/07:00 2.682271797845394e+01
01/08:00 2.674947355571153e+01
01/09:00 2.669759208960234e+01
01/10:00 2.664265877254554e+01
01/11:00 2.658467360454115e+01
01/12:00 2.652363658558916e+01
01/13:00 2.653279213843196e+01
01/14:00 2.651753288369396e+01
01/15:00 2.641071810052797e+01
01/16:00 2.636799218726158e+01
01/17:00 2.634052552873317e+01
01/18:00 2.629169591357158e+01
01/19:00 2.625202185125278e+01
01/20:00 2.620929593798639e+01
01/21:00 2.617572557756279e+01
01/22:00 2.615131076998199e+01
01/23:00 2.610248115482040e+01
02/00:00 2.609637745292520e+01
02/01:00 2.604754783776361e+01
02/02:00 2.602313303018281e+01
02/03:00 2.598651081881161e+01
02/04:00 2.595904416028321e+01
02/05:00 2.593462935270241e+01
02/06:00 2.587969603564562e+01
02/07:00 2.569048127689444e+01
02/08:00 2.560197759941405e+01
02/09:00 2.556535538804284e+01
02/10:00 2.550737022003845e+01
02/11:00 2.544633320108646e+01
02/12:00 2.537614062929167e+01
02/13:00 2.536698507644887e+01
02/14:00 2.531815546128727e+01
02/15:00 2.526932584612568e+01
02/16:00 2.523575548570208e+01
02/17:00 2.519608142338328e+01
02/18:00 2.517471846675008e+01
02/19:00 2.511978514969329e+01
02/20:00 2.508926664021729e+01
02/21:00 2.500991851557970e+01
02/22:00 2.501297036652730e+01
02/23:00 2.494888149662771e+01
03/00:00 2.495193334757531e+01
03/01:00 2.490005188146610e+01
03/02:00 2.486953337199011e+01
03/03:00 2.482070375682851e+01
03/04:00 2.479323709830012e+01
03/05:00 2.477187414166692e+01
03/06:00 2.470473342081972e+01
03/07:00 2.455519272438734e+01
03/08:00 2.448194830164494e+01
03/09:00 2.441785943174535e+01
03/10:00 2.439344462416456e+01
03/11:00 2.431714835047456e+01
03/12:00 2.428357799005096e+01
03/13:00 2.422559282204657e+01
03/14:00 2.420422986541337e+01
03/15:00 2.416455580309458e+01
03/16:00 2.411877803888058e+01
03/17:00 2.405468916898099e+01
03/18:00 2.399670400097659e+01
03/19:00 2.395397808771020e+01
03/20:00 2.393566698202460e+01
03/21:00 2.388988921781060e+01
03/22:00 2.385326700643941e+01
03/23:00 2.380443739127781e+01
04/00:00 2.376171147801142e+01
04/01:00 2.372814111758781e+01
04/02:00 2.367320780053102e+01
04/03:00 2.364879299295022e+01
04/04:00 2.360911893063143e+01
04/05:00 2.355418561357464e+01
04/06:00 2.351451155125584e+01
04/07:00 2.346568193609424e+01
04/08:00 2.338023010956145e+01
04/09:00 2.327036347544786e+01
04/10:00 2.326425977355266e+01
04/11:00 2.320322275460067e+01
04/12:00 2.315439313943907e+01
04/13:00 2.310251167332987e+01
04/15:00 2.305368205816828e+01
04/16:00 2.302316354869228e+01
04/17:00 2.297738578447829e+01
04/18:00 2.290719321268349e+01
04/19:00 2.287667470320750e+01
04/20:00 2.284310434278390e+01
04/21:00 2.281868953520310e+01
04/22:00 2.279427472762231e+01
04/23:00 2.276375621814631e+01
05/00:00 2.269966734824671e+01
05/01:00 2.267830439161351e+01
05/02:00 2.261726737266152e+01
05/03:00 2.258369701223792e+01
05/04:00 2.253791924802393e+01
05/05:00 2.251960814233833e+01
05/06:00 2.245246742149114e+01
05/07:00 2.240058595538194e+01
05/08:00 2.224494155705435e+01
05/09:00 2.224799340800195e+01
05/10:00 2.216254158146916e+01
05/11:00 2.208624530777917e+01
05/12:00 2.205877864925077e+01
05/13:00 2.200994903408918e+01
05/14:00 2.200079348124638e+01
05/15:00 2.194280831324198e+01
05/16:00 2.188787499618519e+01
05/17:00 2.185735648670919e+01
05/18:00 2.180852687154760e+01
05/19:00 2.178106021301920e+01
05/20:00 2.171391949217200e+01
05/21:00 2.166203802606281e+01
05/22:00 2.166508987701041e+01
05/23:00 2.162541581469161e+01
06/00:00 2.157048249763481e+01
06/01:00 2.154911954100162e+01
06/02:00 2.150639362773522e+01
06/03:00 2.146061586352123e+01
06/04:00 2.141178624835963e+01
06/05:00 2.135685293130283e+01
06/06:00 2.132938627277444e+01
06/07:00 2.119815668202765e+01
06/08:00 2.113406781212806e+01
06/09:00 2.109439374980926e+01
06/10:00 2.103335673085727e+01
06/11:00 2.100283822138127e+01
06/12:00 2.097231971190527e+01
06/13:00 2.092349009674367e+01
06/14:00 2.084414197210608e+01
06/15:00 2.084414197210608e+01
06/16:00 2.078310495315409e+01
06/17:00 2.072817163609729e+01
06/18:00 2.066408276619770e+01
06/19:00 2.063966795861690e+01
06/20:00 2.056947538682211e+01
06/21:00 2.054506057924131e+01
06/22:00 2.052369762260811e+01
06/23:00 2.047181615649892e+01
07/00:00 2.044740134891812e+01
07/01:00 2.037720877712332e+01
07/02:00 2.034058656575213e+01
07/03:00 2.032837916196173e+01
07/04:00 2.027649769585253e+01
07/05:00 2.023682363353374e+01
07/06:00 2.019714957121494e+01
07/07:00 2.005676442762536e+01
07/08:00 2.004760887478256e+01
Dewar contents have dropped 7% in 7 days. In April they dropped from 61% to 27% (34% in 30 days).
Data obtained from command line in directory
/ligo/cds/lho/h0/burt/2016/05
running the command
grep "H0:VAC-EX_CP8_LT505_DEWAR_LEVEL_PCT " */*/h0veex.snap | sed 's|/h0veex.snap:RO H0:VAC-EX_CP8_LT505_DEWAR_LEVEL_PCT 1||g'
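For what it's worth, a minimal sketch of turning the "DD/HH:MM value" lines produced by that pipeline into a boil-off rate (the two sample lines are taken from the trend above; the helper names are my own):

```python
# Sketch: estimate the CP8 dewar boil-off rate from the hourly autoburt trend.
# Input lines have the form "DD/HH:MM <level_pct>" as produced by the
# grep/sed pipeline above.
from datetime import datetime

def parse_trend(lines, year=2016, month=5):
    """Return a list of (datetime, level_pct) from 'DD/HH:MM value' lines."""
    out = []
    for line in lines:
        stamp, value = line.split()
        day, hhmm = stamp.split("/")
        hour, minute = hhmm.split(":")
        out.append((datetime(year, month, int(day), int(hour), int(minute)),
                    float(value)))
    return out

def drop_per_day(points):
    """Average level drop in percentage points per day over the record."""
    (t0, v0), (t1, v1) = points[0], points[-1]
    days = (t1 - t0).total_seconds() / 86400.0
    return (v0 - v1) / days

sample = ["01/00:00 2.723166600543230e+01",
          "07/08:00 2.004760887478256e+01"]
print("%.2f %%/day" % drop_per_day(parse_trend(sample)))  # prints 1.13 %/day
```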
CP8 dewar liquid level for last 4 days. Looks normal.
Filament is off
Kyle, Chandra We plan to initially install this unit on CP4's exhaust, with CDS power and signal readback, so as to accumulate data from a typical "representative" 80K pump. After that, we would install it on CP3's exhaust and use it as part of a low-tech control loop to keep CP3's LN2 level within an acceptable range without the need to manually overfill it every few days. Recall that one of the LN2 level transducer sensing lines (at the lowest point of the pump's LN2 reservoir) used to determine CP3's LN2 level became clogged in Dec. 2015, rendering the normal CDS PID control loop non-functional and requiring manual filling of the pump every 72 hours. Today we just wanted to install the unit temporarily to verify that the resulting back pressure was as advertised by the manufacturer. The "before" exhaust pressure was nominally 1.0 psig and the "after installation" value increased 0.1 psig to 1.1 psig, as advertised. After a few hours, however, it looks like it causes periodic "spikes" in the exhaust pressure. Since the exhaust pressure is one of the two differential values used to determine the 80K pump's level, we also see these spikes in the pump level (see attached). I later removed the unit and restored the piping to as found.
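A "low-tech" loop of the kind described could be as simple as a bang-bang controller with hysteresis. This is a purely illustrative sketch: how a level estimate would actually be derived from the exhaust flowmeter, and the thresholds used, are my assumptions, not the real system.

```python
# Purely illustrative sketch of a low-tech CP3 fill loop: a bang-bang
# controller with hysteresis on an estimated LN2 level. The thresholds and
# the level estimate are assumptions for illustration only.
def llcv_command(level_pct, valve_open, low=60.0, high=90.0):
    """Return the desired LLCV state given an estimated level (percent).

    Hysteresis between `low` and `high` prevents valve chatter."""
    if level_pct < low:       # pump getting low -> start filling
        return True
    if level_pct > high:      # near full -> stop filling
        return False
    return valve_open         # in the dead band: hold the current state
```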
~1330 hrs. local? - not sure, some time after lunch Opened exhaust check valve bypass, opened LLCV bypass 1/2 turn ccw, LN2 at exhaust in 55 seconds -> Restored valves to as found -> Next overfill to be Monday, May 9th.
I ran a script to let the PSL rotation stage go back and forth 100 times between 0 and 3 Watts. The result, measured power vs. requested power, shows ~2% uncertainty. The ISS first loop wasn't locked at the time of the measurement. Repeating the test by letting the rotation stage go between 0, mid power (~30 W), and max power (~56 W) yielded a similar result. We could expect even better performance when the ISS is locked.
Repeated the same test for the CO2X and CO2Y rotation stages. I later realized that the lasers weren't locked due to today's Guardian computer restart. The CO2X RS yielded 0.3% uncertainty and the CO2Y RS yielded 3% uncertainty. I'm not sure if our power meters are thermopiles; I would disregard these numbers if they are.
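One way such a "percent uncertainty" can be computed for a back-and-forth test like the ones above is the RMS fractional deviation of measured from requested power over the cycles. The sample numbers below are made up for illustration; this is my own sketch, not the actual test script.

```python
# Sketch: percent uncertainty of measured vs. requested power as the RMS
# fractional deviation over the cycles. Sample data are illustrative only.
def percent_uncertainty(requested, measured):
    """RMS of (measured - requested)/requested, in percent.

    Zero-power requests are skipped to avoid dividing by zero."""
    fracs = [(m - r) / r for r, m in zip(requested, measured) if r > 0]
    return 100.0 * (sum(f * f for f in fracs) / len(fracs)) ** 0.5

requested = [3.0, 3.0, 3.0, 3.0]
measured = [2.97, 3.03, 3.06, 2.94]
print("%.2f%%" % percent_uncertainty(requested, measured))  # prints 1.58%
```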
In trying to figure out what's happening with the power stabilisation, I looked at the "dark" noise of the power stabilisation photodiodes. pda-dark1.png shows the measured "dark" noise for both the AC and DC outputs of the photodiode; both are the same. I suspect that both photodiodes are damaged. The ISS will lock but is very temperamental and depends on the set point (reference level). When running, the diffraction level should be between 12-14%. There is also a problem with the reported DC output versus that measured at the photodiode; often they do not agree. For example, the MEDM screen will report that the DC output is ~11 V while measured at the photodiode it is ~7 V. Whilst the reason might be related to how the DC value is calculated from the AC value of the photodiode, it still is not consistent with the measured value.
When the so-called analog "AC" signal (which is in reality a whitened signal with a DC gain of 0.2) hits the rail, the digital DC on the MEDM won't agree with the analog DC (first attachment).
Out of curiosity, I turned on the 1st loop and increased the diffraction to 16% by moving the REFSIGNAL from -2.3 V to -1.9 V, and the ISS has been running OK for the past 40 minutes. The digital DC is a factor of 5 larger than the REFSIGNAL; this makes sense because the DC of the whitened analog signal that is actually used for the ISS is a factor of 5 smaller than the analog DC.
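The factor-of-5 bookkeeping can be checked in a couple of lines. The 0.2 whitening DC gain and the -1.9 V set point come from the entries above; the assumption that the loop holds the whitened DC at |REFSIGNAL| is mine.

```python
# Check of the factor-of-5 bookkeeping: the whitening stage has a DC gain of
# 0.2, so if the loop holds the whitened DC at |REFSIGNAL|, the unwhitened
# digital DC should read |REFSIGNAL| / 0.2, i.e. a factor of 5 larger.
WHITENING_DC_GAIN = 0.2

def expected_digital_dc(refsignal_volts):
    """Digital (unwhitened) DC implied by a given REFSIGNAL set point."""
    return abs(refsignal_volts) / WHITENING_DC_GAIN

# expected_digital_dc(-1.9) is about 9.5 V, a factor of 5 above the set point
```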
The 1st loop PDs, as well as signals downstream of the ISS 1st loop (FSS transmission), look about the same or maybe somewhat worse than before (e.g. 26758).
Morning Meeting
- It's Friday. Don't do anything goofy.
- More PSL tweaking today.
- Avoid end station (SEI)
- Shutdown HAM6 pump today (VAC)
- IM measurement continue
- RS measurement continue
#################################
Activities. All time in UTC:
15:11 Carlos/Jeff B. going to PSL (computer boot)
15:41 Jeff B. out, Carlos still in with Peter.
16:04 Jason joining Peter in the PSL
16:11 Kyle to HAM4-5
16:26 Carlos out
16:30 Kristaina and Karen to LVEA.
16:50 Joe to LVEA retrieve some equipment
17:01 Fil to CER to fire up UPS.
17:03 Joe out.
17:40 Kyle to EY. NOT into the VEA. He'll be turning on the lights and looking for equipment.
18:04 Kyle out
18:43 Jason and Peter out for lunch.
19:10 Kyle putting a fan near RCG (HAM4-5)
19:16 Kyle out
20:13 Jeff K. to LVEA
20:22 Jeff K and Evan G going to EY near chamber. I ramped off BRS. Jeff will also be turning off the high voltage supply. Make sure he turns it back on.
20:27 Kyle to vertex RGA to plug "stuff" in.
20:50 Jeff B. to rack near PSL.
21:01 Peter done for the day
21:57 Bryn and David to LVEA to take some pictures.
21:58 Evan to ISCT1 working on POPX.
22:18 Bryn out
22:30 Kyle to MidY to do CP3 overflow and get rid of CP2 alarm.
22:35 Jeff B to cleaning area
22:39 Jeff B. out
22:53 Kyle back from Mid Y. Now going to the vertex RGA.
23:01 Jeff K back from end station, putting stuff in the LVEA
Jeff K, Evan G
We (finally!) measured the ESD driver quadrant control voltage path transfer functions, which was needed because of the change made on Feb 9 (see ECR E1500341 and LHO aLOG 25468) for calibration/compensation use. Measurements are stored under
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER9/H1/Measurements/Electronics/2016-05-06
Measurement notes attached. Analysis to come...
The analysis is done.
I wrote a Matlab analysis script which automatically fits all the measured transfer functions by calling LISO. The script can be found in SVN at
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER9/H1/Scripts/model_ETMY_LVLN_driver_20160511.m
The results are attached and also can be found in SVN at
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER9/H1/Results/Electronics/2016-05-11_H1SUSETMY_ESD_LVLNDriver_*.pdf
An update:
Evan and Jeff pointed out to me that I could have also subtracted the reference transfer function (= the transfer function of the measurement setup, including the SR785 and the diff-to-single-ended converter). So I edited the code so that it subtracts the reference out of each measurement. This mostly impacted the gain of each measurement, which is now 1.88. The resulting pdfs are attached.
I have moved the analysis code to a slightly different location at:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER9/H1/Scripts/Electronics/model_ETMY_LVLN_driver_20160511.m
The resulting pdfs are saved at the same location as the original ones.
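For transfer functions, "subtracting" the reference measurement as described above amounts to a complex division, frequency bin by frequency bin. A minimal sketch, with illustrative sample values rather than the actual measurement data:

```python
# Sketch: removing the measurement-setup response (SR785 plus
# diff-to-single-ended converter) from a measured transfer function.
# For transfer functions this is a complex division, bin by bin.
# Sample values are illustrative, not the actual measurement data.
def remove_reference(measured_tf, reference_tf):
    """Divide the DUT measurement by the setup's own response, bin by bin."""
    return [m / r for m, r in zip(measured_tf, reference_tf)]

# A setup with a flat gain of 0.5 halves every measured point; dividing it
# back out restores the true response.
measured = [0.94 + 0.10j, 0.90 + 0.20j]
reference = [0.5 + 0.0j, 0.5 + 0.0j]
corrected = remove_reference(measured, reference)
```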
CP4 was sending out a yellow alarm, so I looked more closely and noticed one of the temp sensors (from the heat tape) is reading a bogus value. Attached is a trend. It started yesterday morning.
However, this bogus temp reading did not cause the alarm. The liquid level of CP4 was spiking (seen only in a *seconds* trend scan) due to back pressure from the flow meter. See Kyle's log entry above for more detail.
Re-initialized HWSX Matlab files.
Included is the RPN image from the injection-locked HPO for comparison. The 10 Hz comb feature that was present in the frequency scan from Apr 29th is gone. The pointing seems to be slightly out of spec, and I believe the modescan looks relatively OK.
The PSL might need another tweak after the HPO work is done. I noticed the maximum power changes on a day-to-day basis.
model restarts logged for Thu 05/May/2016 ISC and SUS-PI model work, h1tw0 rebuild, some unexpected fw restarts
2016_05_05 11:48 h1iscex
2016_05_05 11:50 h1iscey
2016_05_05 12:00 h1omc
2016_05_05 12:00 h1omcpi
2016_05_05 12:02 h1pemex
2016_05_05 12:02 h1susetmxpi
2016_05_05 12:06 h1broadcast0
2016_05_05 12:06 h1dc0
2016_05_05 12:06 h1fw0
2016_05_05 12:06 h1fw1
2016_05_05 12:06 h1nds0
2016_05_05 12:06 h1nds1
2016_05_05 12:06 h1tw1
2016_05_05 12:56 h1pemex
2016_05_05 12:57 h1broadcast0
2016_05_05 12:57 h1dc0
2016_05_05 12:57 h1fw0
2016_05_05 12:57 h1fw1
2016_05_05 12:57 h1nds0
2016_05_05 12:57 h1nds1
2016_05_05 12:57 h1tw1
2016_05_05 14:52 h1broadcast0
2016_05_05 14:52 h1dc0
2016_05_05 14:52 h1fw0
2016_05_05 14:52 h1fw1
2016_05_05 14:52 h1nds0
2016_05_05 14:52 h1nds1
2016_05_05 14:52 h1susetmxpi
2016_05_05 14:52 h1tw1
2016_05_05 15:06 h1fw0
2016_05_05 15:13 h1fw0
2016_05_05 15:23 h1fw0
2016_05_05 16:35 h1tw0
2016_05_05 17:06 h1tw0
model restarts logged for Wed 04/May/2016 No restarts reported
model restarts logged for Tue 03/May/2016 ALL SYSTEMS RESTARTED. RCG upgrade to 3.0.2. Front end and DAQ upgrade.
model restarts logged for Mon 02/May/2016 No restarts reported
model restarts logged for Sun 01/May/2016 No restarts reported
model restarts logged for Sat 30/Apr/2016 fw1 instability
2016_04_30 04:33 h1fw1
2016_04_30 06:24 h1fw1
2016_04_30 09:03 h1fw1
2016_04_30 11:26 h1fw1
2016_04_30 19:14 h1fw1
2016_04_30 19:44 h1fw1
2016_04_30 21:03 h1fw1
2016_04_30 21:53 h1fw1
model restarts logged for Fri 29/Apr/2016 No restarts reported
model restarts logged for Thu 28/Apr/2016 fw1 and nds1 instability
2016_04_28 00:34 h1fw1
2016_04_28 04:34 h1fw1
2016_04_28 04:54 h1fw1
2016_04_28 05:12 h1fw1
2016_04_28 06:05 h1fw1
2016_04_28 07:13 h1fw1
2016_04_28 07:43 h1fw1
2016_04_28 07:56 h1fw1
2016_04_28 08:02 h1fw1
2016_04_28 08:34 h1fw1
2016_04_28 08:54 h1fw1
2016_04_28 16:28 h1nds1
2016_04_28 16:29 h1nds1
2016_04_28 16:30 h1nds1
model restarts logged for Wed 27/Apr/2016 fw0+1 unstable. OMC and SUS PI IPC model work
2016_04_27 08:33 h1fw1
2016_04_27 09:53 h1fw1
2016_04_27 11:13 h1fw1
2016_04_27 11:30 h1fw1
2016_04_27 11:59 h1omcpi
2016_04_27 12:01 h1dc0
2016_04_27 12:01 h1susitmpi
2016_04_27 12:03 h1broadcast0
2016_04_27 12:03 h1fw0
2016_04_27 12:03 h1fw1
2016_04_27 12:03 h1nds0
2016_04_27 12:03 h1nds1
2016_04_27 12:03 h1tw1
2016_04_27 13:06 h1fw0
2016_04_27 14:33 h1fw0
2016_04_27 17:54 h1fw1
2016_04_27 20:33 h1fw1
I just pushed a minor upgrade to guardian to fix a small issue with the guardlog client when following node logs. The new version is r1542.
The client was overly buffering stream data from the server, which kept data from the stream from being output in a timely manner. This should be fixed in the version I just pushed.
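The kind of buffering fix described above can be illustrated in a few lines: when relaying a log stream to followers, flush each line as it arrives rather than letting it sit in an output buffer. This is my own sketch, not the actual guardlog code.

```python
# Illustrative sketch of per-line flushing when relaying a log stream,
# so that followers see output promptly. Not the actual guardlog code.
import sys

def relay(stream, out=sys.stdout):
    """Copy lines from `stream` to `out`, flushing after every line."""
    for line in stream:
        out.write(line)
        out.flush()  # without this, lines can sit in the output buffer
```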
As always make sure you have a fresh session to get the new version:
jameson.rollins@operator1:~ 0$ which guardlog
/ligo/apps/linux-x86_64/guardian-1542/bin/guardlog
jameson.rollins@operator1:~ 0$
Sorry, Sheila. This issue has been fixed now. I just needed to tweak the lockloss script to account for some updated guardlog arguments.
Yes, it's working, thank you Jamie.
Jim rebooted h1guardian0 at 10:53 local. I was hoping this would help with the logging issues that we have had here (aLOG 26965), but no luck.