The first attached screenshot is a series of DCPD spectra from the last several months. When we first turned on the HPO, the noise in DARM did not get any worse; it stayed similar to O1 until July 16th. The lump from 200 Hz-1 kHz started to appear in late July, when we started a series of changes in alignment and TCS to keep us stable with a decent recycling gain at 50 Watts.
PMC:
Last night, Gabriele pointed out that the noise in DARM is coherent with the PMC HV. The HV readback had been mostly white noise (probably ADC noise) until the last few weeks, but it has been getting noisier, so that some of the jitter peaks now show up in it (second attached screenshot; colors correspond to the dates in the DCPD spectrum legend). This may be related to the problem described in LLO alogs 16186 and 15986. The PMC transmission has been degrading since July, which could be a symptom of misalignment. Since July, the REFL power has nearly doubled from 22 to 38 Watts, while the transmission has dropped 28%. The PMC sum has also dropped by 4%, roughly consistent with the 3% drop in the power out of the laser. Peter and Jason are planning on realigning the PMC in the morning, so it will be interesting to see if we see any difference in the HV readback.
TCS + PRC alignment:
The other two main changes we have made in this period are changes to the alignment through the PRC and changes to TCS. These were done to improve our recycling gain and stability, without watching the noise impact carefully. In July we were using no offsets in the POP A QPDs. We started changing the offsets on August 1st, after the lumpy noise first appeared around July 22nd, and have continued to change them every few weeks since, generally moving in the same direction.
The only TCS change that directly corresponds to the dates when our noise got worse was the reduction of ETMY RH from 0.75 W each on July 22nd; the other main TCS changes happened September 10th. It would be nice to undo some of these changes before turning off the HPO, even if it means reducing the power to stay stable.
The HVMon signal (PMC length) shows a peak at about 600 Hz/rtHz. We don't think this is an indication of frequency noise from the laser, but rather an error point offset picked up in the PMC PDH signal. As such, this is length noise added to the PMC and suppressed by the cavity storage time. Assuming this suppression factor is about 1/10000, we would still get ~100 mHz/rtHz modulated onto the laser frequency. That seems like a lot.
The HVMon measures 1/50 of the voltage sent to the PZT. With no whitening this is not very sensitive.
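As a back-of-envelope check of the numbers above, here is a minimal sketch of the arithmetic (the 600 Hz/rtHz peak and the 1/10000 suppression are the assumed values from the text, not new measurements):

```python
# Back-of-envelope: PMC length noise leaking onto the laser frequency,
# using the numbers quoted above (assumptions, not measurements).

peak_asd_hz = 600.0        # HVMon peak, Hz/rtHz (equivalent PMC length noise)
suppression = 1.0 / 10000  # assumed suppression from the cavity storage time

freq_noise_hz = peak_asd_hz * suppression
print(f"Residual frequency noise: {freq_noise_hz * 1e3:.0f} mHz/rtHz")
# -> ~60 mHz/rtHz, the same order as the ~100 mHz/rtHz quoted above
```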
After the PSL team adjusted the PMC alignment, the ~400 Hz peaks are no longer visible in the HVMon spectrum. The coherence is gone as well, except for the 1 kHz peak.
About the PMC:
The first screenshot shows the small improvement in DARM we got after the PMC realignment. While the coherence with PMC HV may be gone, it might just be that the PMC HV signal is now buried in the noise of the ADC. At a lockloss I went to the floor and measured HVMon, then plugged it into one of the SR560s (AC coupled, 10 Hz high pass, gain of 100) and sent the output into H1:LSC-EXTRA_AI_1. We still see high coherence between this channel and DARM (last attachment).
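(For reference, a minimal sketch of this kind of coherence check, using scipy on two already-fetched time series; the data here are synthetic placeholders standing in for NDS fetches of DARM and H1:LSC-EXTRA_AI_1.)

```python
import numpy as np
from scipy.signal import coherence

# Synthetic placeholders for NDS fetches of DARM and the conditioned HVMon
# channel (H1:LSC-EXTRA_AI_1); a common 1 kHz line is injected so the
# coherence estimate has something to find.
fs = 16384.0
t = np.arange(0, 64, 1 / fs)
rng = np.random.default_rng(0)
line = np.sin(2 * np.pi * 1000.0 * t)
darm = line + rng.normal(size=t.size)
hvmon = 0.5 * line + rng.normal(size=t.size)

f, coh = coherence(darm, hvmon, fs=fs, nperseg=int(fs))  # ~1 Hz bins
print("Peak coherence %.2f at %.0f Hz" % (coh.max(), f[np.argmax(coh)]))
```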
Also, the PMC realignment this morning did decrease the reflected power, but the transmitted power also dropped.
| | refl (W) | trans (W) | sum (W) | laser power out (W) |
| July | 20 | 126 | 157 | 174 |
| Yesterday | 35 | 103 | 138 | 169 |
| Today | 27 | 100 | 126 | 169 |
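A quick sanity check on those numbers, using refl/(refl+trans) as a rough proxy for the coupling loss (a simplification: the sum PD is an independent readback, so it won't equal refl+trans exactly):

```python
# Rough fraction of accounted PMC power that is reflected, from the table above.
for label, refl, trans in [("July", 20, 126),
                           ("Yesterday", 35, 103),
                           ("Today", 27, 100)]:
    frac = refl / (refl + trans)
    print(f"{label:9s}: {100 * frac:.0f}% reflected")
# July ~14%, Yesterday ~25%, Today ~21% -- consistent with degraded mode
# matching that alignment tweaks alone don't recover.
```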
About turning the HPO on not adding noise:
Kiwamu pointed out that the uncalibrated comparison above, showing that the noise did not get worse when the HPO came on, was not as convincing as it should have been. This morning he and I used the pcal line height to scale these uncalibrated spectra to something that should be proportional to meters, although we did not worry about frequency-dependent calibration (4th screenshot). From this you can see that the noise in July was very close to what it was in March, before the HPO came on, but there is some stuff in the bucket that is a little worse.
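The scaling itself is simple. Here is a minimal sketch of the idea, assuming an uncalibrated ASD and a pcal line of known displacement amplitude (the frequencies and amplitudes below are placeholders, not the values actually used):

```python
import numpy as np

# Placeholder uncalibrated ASD (counts/rtHz) with a pcal line at 331.9 Hz.
freqs = np.linspace(10, 2000, 4000)
asd_counts = 1e3 / freqs
line_bin = np.argmin(np.abs(freqs - 331.9))
asd_counts[line_bin] += 50.0

pcal_disp_m = 1e-17  # known displacement of the pcal line (placeholder)

# Scale so the line height equals the known displacement; every other bin is
# then proportional to meters (no frequency-dependent calibration applied).
asd_m = asd_counts * (pcal_disp_m / asd_counts[line_bin])
print(f"ASD at the pcal line: {asd_m[line_bin]:.1e} m/rtHz")
```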
The point is made best by the last attached screenshot, which shows nearly identical noise in the last good lock I could find before the HPO came on and the first decent lock after it came on. Pcal was not working at this time, so we can't use that to verify the calibration, but the input powers were similar (20 W and 24 W), the DCPD currents were both 20 mA, and the DCPD whitening was on in both cases. (The decimation filters were changed around the same time that the HPO came on, which accounts for the difference at high frequencies.)
Regarding power available to the PMC: I know this is obvious, but another thing we have to consider is the ISS. Since the ISS AOM is before the PMC, it clearly also affects the amount of power available to the PMC. Peter K. can correct me if I am wrong, but it is my understanding that this happens primarily in two ways:
On 2016-9-21, for reasons unknown to me, the ISS control offset was changed from ~4.3 to 20. This means we are driving the ISS AOM much harder than we were previously, which changes the beam profile, which in turn affects the PMC mode matching and lowers the cavity visibility. This is likely why, even though we have had only a 5 W decrease in laser power since July, the total power into and the power transmitted by the PMC are down while the power reflected by the PMC has increased, and why we cannot return to the July PMC powers Sheila listed in her table in the above comment by simply tweaking the beam alignment into the PMC. I have attached a 120-day minute-trend of the ISS control offset (H1:PSL-ISS_CTRL_OFFSET) that shows the changes in the value since 2016-6-22, including the 2016-9-21 change. There are of course other reasons why the control offset changed (as can be seen in the attachment, the offset was changed several times over the last 4 months); the one on 9-21 just really stuck out.
Is there a reason why the control offset was changed so drastically? Something to do with the new ISS outer loop electronics?
J. Kissel

We're continuing to struggle to even get back to DRMI locking as of last night (LHO aLOG 30614). One suspicion is the sharp feature at 27.0977 Hz (using a 0.005 Hz BW ASD) that has suddenly appeared, as shown in Jenne's attached ASD. Because we haven't yet measured them, and we're dead in the water until we figure out the problem, I've taken the time to excite and measure the highest vertical mode frequencies of all the HAM2 and HAM3 HSTSs and HLTSs, modeled to be at 27.3 Hz and 28.1 Hz, respectively. The results are as follows:

| Optic | Sus. Type | V3 Mode Freq (Hz) |
| MC1 | HSTS | 27.387 |
| MC2 | HSTS | 27.742 |
| MC3 | HSTS | 27.426 |
| PRM | HSTS | 27.594 |
| PR2 | HSTS | 27.414 |
| PR3 | HLTS | 28.211 |

None match up with the mystery line. The closest is MC1, at ~0.3 Hz away. All values were measured with 5 averages at 0.005 Hz BW. I attach screenshots of the DTT and awggui templates for MC3 as an example. The DTT templates have been copied and committed into the Sus repository here:

/ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/Common/2016-10-18_H1SUSPR3_V3_Mode.xml
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/MC1/Common/2016-10-18_H1SUSMC1_V3_Mode.xml
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/MC2/Common/2016-10-18_H1SUSMC2_V3_Mode.xml
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/MC3/Common/2016-10-18_H1SUSMC3_V3_Mode.xml
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PR2/Common/2016-10-18_H1SUSPR2_V3_Mode.xml
/ligo/svncommon/SusSVN/sus/trunk/HSTS/H1/PRM/Common/2016-10-18_H1SUSPRM_V3_Mode.xml
Some work was started on the HWS camera on ISCTEX today as per WP6253. It wasn't completed, but it was brought to a point where it can be completed relatively easily at the next opportunity. It was returned to its original configuration and confirmed to still be operational as it was before the work was started.
Rick, Evan G., Travis
As part of the bi-annual PCal maintenance, today we optimized the drive range of the PCal OFS for both end stations. The procedure for this was:
1) Turn off PCal lines and inject 10 Hz sine wave.
2) Break the OFS lock.
3) Note that the AOM drive is large (~1.5V) with loop open.
4) Adjust the offset in 1V steps to find the maximum OFS PD output.
5) Record max OFS PD output.
6) Close shutter and record minimum OFS PD output.
7) Set the offset to half of 95% of the max OFS PD output (see the sketch after this list).
8) Find the amplitude of the injected sine wave that gives us the maximum p-p OFS PD voltage.
9) Record magnitudes of carrier and sideband frequencies of the OFS and TX PDs.
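As a concrete illustration of steps 7 and 8, here is a minimal sketch of the arithmetic; the voltages are made-up placeholders, not today's measured values:

```python
# Step 7: place the OFS operating point at half of 95% of the PD range.
v_max = 10.0   # max OFS PD output with loop open (V) -- placeholder
v_min = 0.05   # min OFS PD output with shutter closed (V) -- placeholder

offset = 0.5 * (0.95 * v_max)
print(f"Operating offset: {offset:.2f} V")

# Step 8: the largest sine amplitude that keeps the PD within 95% of range.
headroom = min(0.95 * v_max - offset, offset - v_min)
print(f"Max injection amplitude: {headroom:.2f} V peak")
```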
Results:
PCal X:
PCal Y:
As the final part of the maintenance for PCal Y, we took a transfer function of the OFS. See attached traces.
The limit on the excitation amplitude is the total number of counts that should be allowed to be sent to the Pcal. This is a frequency-independent value. So for Pcal Y, the maximum in H1:CAL-PCALY_EXE_SUM should be no larger than 44,000 counts. For Pcal X, the total of H1:CAL-PCALX_EXE_SUM and H1:CAL-PINJX_HARDWARE_OUT should be less than 57,000 counts.

Right now, Pcal Y has the following injections set:

| Freq. (Hz) | Amp. (cts) |
| 7.9 | 20000.0 |
| 36.7 | 750.0 |
| 331.9 | 9000.0 |
| 1083.7 | 15000.0 |
| Total | 44750.0 |

This is just above the threshold. It might be worth returning the 331.9 Hz line to its O1 level (see LHO aLOG 30476 for the increased-amplitude lines), since the detector noise in this region has recently improved.

Pcal X has the following injections set:

| Freq. (Hz) | Amp. (cts) |
| 1501.3 | 39322.0 |

And the CW injections on Pcal X total ~1585 counts, giving a total of 40907.
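A trivial way to keep an eye on this budget (the limits and line amplitudes are the ones quoted above):

```python
# Check Pcal excitation totals against the frequency-independent limits above.
limits = {"PCALY": 44000, "PCALX": 57000}          # counts
lines = {
    "PCALY": {7.9: 20000.0, 36.7: 750.0, 331.9: 9000.0, 1083.7: 15000.0},
    "PCALX": {1501.3: 39322.0},
}
cw_extra = {"PCALY": 0.0, "PCALX": 1585.0}         # CW injections, counts

for pcal, amps in lines.items():
    total = sum(amps.values()) + cw_extra[pcal]
    status = "OK" if total <= limits[pcal] else "OVER LIMIT"
    print(f"{pcal}: {total:.0f} / {limits[pcal]} counts -- {status}")
# PCALY comes out at 44750 counts, just over its 44000-count threshold.
```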
M. Pirello
Prior to working on this, we determined that the analog voltages on the channels in question are outputting the anticipated values. The immediate problem is with the PowerOK bit for each of these signals.
Investigation into these chassis yielded the following:
1. Upon rebooting RF Amp Concentrator #1 on ISC_C3 in the CER (S1103450), the stuck bits on that chassis, AMP18M and AMP24M1, were reset. We are 2/3 of the way done with this task, piece of cake!
2. Upon rebooting RF Amp Concentrator #2 on ISC_C3 in the CER (S1103451), 3 new bits were stuck, for a total of 4 bits including the original DIV40M, drat!
3. I tried disconnecting, measuring, and reconnecting each of the 4 stuck signals on the DB25s on the back of the unit. These voltages look good; no luck resetting the latched bits here.
a. DB25#1 = DIV40M = 3.64V
b. DB25#2 = AMP40M = 3.75V
c. DB25#3 = DIV10M = 3.61V
d. DB25#4 = AMP10M = 3.67V
4. I then disconnected the DB37 on RF Amp Concentrator #2 (S1103451) and checked each signal coming out of the RF Amp Concentrator. This output is confusing. The couplers inside are referenced to 5V: an on state should be 5V and an off state 0V, so all of the PO signals should be either 5V or 0V. Yet PO11, PO7, and PO2 each read 2.5V.
| Pin (Signal) | Voltage | Pin (Signal) | Voltage | Pin (Signal) | Voltage | Pin (Signal) | Voltage |
| Pin 1 (M1P12) | 0V | Pin 11 (M1P2) | 0V | Pin 21 (M1N11) | 0.033V | Pin 31 (M1N1) | 0.004V |
| Pin 2 (M1P11) | 0V | Pin 12 (M1P1) | 0V | Pin 22 (M1N10) | 0.029V | Pin 32 (PO8) | 0.009V |
| Pin 3 (M1P10) | 3.4V | Pin 13 (PO12) | 0.008V | Pin 23 (M1N9) | 0.029V | Pin 33 (PO11) | 2.509V |
| Pin 4 (M1P9) | 2.598V | Pin 14 (PO4) | 0.008V | Pin 24 (M1N8) | 0.0228V | Pin 34 (PO3) | 0.005V |
| Pin 5 (M1P8) | 0V | Pin 15 (PO7) | 2.510V | Pin 25 (M1N7) | 0.002V | Pin 35 (PO6) | 0.006V |
| Pin 6 (M1P7) | 2.951V | Pin 16 (PO10) | 0.007V | Pin 26 (M1N6) | 0.033V | Pin 36 (PO9) | 0.004V |
| Pin 7 (M1P6) | 3.073V | Pin 17 (PO2) | 2.515V | Pin 27 (M1N5) | 0.033V | Pin 37 (PO1) | 0.003V |
| Pin 8 (M1P5) | 3.027V | Pin 18 (PO5) | 0.004V | Pin 28 (M1N4) | 0.005V | | |
| Pin 9 (M1P4) | 3.051V | Pin 19 (GND) | GND | Pin 29 (M1N3) | 0.004V | | |
| Pin 10 (M1P3) | 0V | Pin 20 (M1N12) | 0.029V | Pin 30 (M1N2) | 0.004V | | |
5. I then reconnected the DB37 to H1_EtherCAT_Corner_3. Ten out of twelve of the OK bits remained red. I power cycled RF Amp Concentrator #2 (S1103451) on ISC_3, and again all but 4 of the OK bits were green like before.
6. I put everything back together, removed the breakout boards, etc. When I left the CER, the 4 bits were latched: DIV40M, AMP40M, DIV10M, AMP10M. After noon I checked the OK bits and 10 out of 12 were red, including the original four. I am relatively sure that the issue is with the RF Amp Concentrator. The power OK signals going into this chassis are good; the power OK signals coming out of it seem to latch up spontaneously and output bad voltages. Perhaps the 5V regulator is outputting 2.5V?
Recommendation:
Strike DIV40M from WP6248 and close out WP6248 and FRS 6391 because DIV40M is connected to a different chassis. Expand FRS 6059 to encompass RF Amp Concentrator #2 (S1103451) and work to debug the source of the spontaneous latchup/latchdown and bad voltages (including DIV40M).
Note that fw2 and tw1 have issued retransmission requests since the last DAQ restart.
Here are the current versions of daqd and nds Jonathan built and installed today:
| daqd-process-category | operating system(s) | machines |
| data concentrator | gentoo | h1dc0 |
| frame writer | ubuntu12, gentoo* | h1fw0, h1fw1, h1fw2, h1tw0, h1tw1* |
| nds-1 server | gentoo | h1nds0, h1nds1 |
| frame broadcaster (dmt) | gentoo | h1broadcaster0 |
Note that h1tw0 was upgraded to U12 because it was showing RAID errors; h1tw1 has been kept back on its original Gentoo OS (these machines were the original frame writers).
Here is the nds process table (NDS-1 servers run two processes: daqd and nds):
| nds-process-category | operating system | machines |
| nds-1 server | gentoo | h1nds0, h1nds1 |
Today I calculated the volume of piping that feeds the circulation loop for the TCSY laser. The total volume of water in the piping alone (to and from the laser to the chiller on the mech room mezzanine) comes to 36 liters. Wow, much more than I had assumed! The chiller reservoir holds 7 liters, for a total of 43 L in the system at any given "full" time.
Recall that the system popped a mesh filter and ran the reservoir dry on Sept 28th (alog 30041), at which time only 6 L was added to ~fill the reservoir. At the time, they also noticed air in noisy lines down at the table, so air had been pushed through some or all of the piping volume (at least the chiller-to-laser piping was full of air, as they mentioned, so half of the 36 L of piping would have needed to be refilled).
I've added up the smallish amounts of water we have been adding daily to keep the chiller reservoir topped off: we have added 10.5 L over the last 3 weeks. With the 6 L added Sept 28th, we've added 16.5 L to a 43 L system so far. Even assuming there was some water still in the pipes during the Sept 28th "leak", we likely still have a ways to go before the system is full.
Keep on filling...
(From VE drawings, I estimated ~2880" of piping length round trip chiller-laser-chiller. The piping ID looks to be 1".)
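For reference, the pipe-volume arithmetic with the dimensions estimated above:

```python
import math

# Volume of ~2880 inches of 1" ID piping, per the VE drawing estimate above.
length_in = 2880.0
inner_diameter_in = 1.0

volume_in3 = math.pi * (inner_diameter_in / 2) ** 2 * length_in
volume_liters = volume_in3 * 0.0163871  # 1 cubic inch = 0.0163871 L
print(f"Piping volume: {volume_liters:.0f} L")  # ~37 L, matching ~36 L above
```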
That's around the same volume I had calculated for flushing the chillers (I think I had ~10 gallons). So we're getting the same number there.
WP6251 RCG3.2 upgrade
Jonathan, Jim, Dave:
The main work today was in upgrading the front end models and the DAQ systems to RCG3.2. All models were recompiled yesterday evening (see alog 30609). This morning we installed this new code (took 1hr 27mins). New mx code was compiled, new daqd binaries were created for all DAQ systems. The install sequence was:
WP6258 DAQ removal of science frame
Jonathan, Greg, Jim, Dave:
All frame writers were configured to no longer write science frames and to rename the old commissioning frames as the new raw frames.
Detailed procedure given in wiki page https://lhocds.ligo-wa.caltech.edu/wiki/MakingTheCommissioningFrameTheNewFullFrame
After this change there are no C-named frames, only R-named frames. Archived C-frames are now R-frames; new frames are what were previously called commissioning frames, with R-names in the frames/full directories. What were called science frames are no longer written, so no new frames appear in the frames/science directories.
h1nds0 and h1nds1 were reconfigured to the new R-names and restarted.
The wiper scripts on h1ldasgw0 and h1ldasgw1 were changed to not use science frames, and to give the disk space these frames used to the full frames instead.
WP6237 Remove h1ldasgw2
Jim, Dave
A third QFS/NFS server was installed in the MSR during the summer as part of the attempt to fix the h1fw0/h1fw1 instability. It offloaded the exporting of the frame directories (in read-only mode) to the NDS servers. It later proved to be a liability when corrupted frame files were served by h1ldasgw2 to both NDS servers.
Today we decommissioned h1ldasgw2 and reconfigured h1ldasgw0, h1ldasgw1 to serve their respective file systems as a read-only export to the NDS machines. h1nds0 and h1nds1 were configured to no longer use h1ldasgw2.
A new version of dataviewer was also installed for Ubuntu12 and Ubuntu14 control room workstations. This version, 3.2, will handle the leap second to be applied Dec. 31, 2016. Dataviewer is part of the advLigoRTS source code.
TITLE: 10/17 Eve Shift: 15:00-23:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY: We restarted the world; now initial alignment, FSS, ISS, and PMC are being fun.
WP 6176
WP 6247
Pulled in new power and network cables for the RGA (next to HAM4). Cables need to be terminated.
Cables at EY were terminated.
Each main turbo pump (MTP) has an adjoining box containing four 12 V, 18 Ah sealed lead-acid (SLA) batteries connected in series that provide 48 VDC for pump rotor levitation during loss of AC power. These only get charged when the turbo controllers are energized. As preventative maintenance during prolonged periods of pump inactivity, we would energize the controllers for a few hours during Tuesday's maintenance period. We will now keep these battery packs centrally located (hallway between AHU1 and AHU2 in the Corner Station fan room) and on a battery tender. The trade-off is that we will now need to remember to take a battery pack with us when activating an MTP at an outbuilding.
This morning, Chandra and I weighed an empty BSC LTS container. After weighing that container, we decided to check the weight of a loaded BSC LTS container. The loaded container never lifted off the floor: the crane capacity is only 10,000 lbs., and another 380 lbs. would not have lifted the container. The results are in the photos here.
(Kyle, Gerardo)
Today compressors #3, #4, and #5 were all greased and pressure tested.
All compressors passed their compression test, #3 at 120 psi, #4 at 120 psi, and #5 at 130 psi.
Then all compressors and electrical motors were greased.
All compressor assemblies were run tested after service was performed.
Work performed under WP#6257.
(See WPs #6221 and #6255.) Will need to make a daily check each morning. Plan to run through 10/23 inclusive, unless too egregious for commissioners.
M. Pirello
This FPGA code (E1200034) disconnects the 1PPS signal from the 4 internal LEDs and grounds them. All four timing comparators at LHO were updated to v5.
S1107952 in the CER
S1201224 in the MSR
S1201886 at X end
S1201222 at Y end
Work Permit 6256
TwinCAT code has been updated to recognize the new SVN number.
The medm SYS_CUST_SHUTTER_SUMMARY.adl shows the ALS shutters, but the names are wrong. Attached is a map of what each shutter did today.
I guess we should file an FRS for EY. Controller channel 1 has been unworkable for a while.
Jitter (alog 30237): 10⁻⁶/√Hz (level) to n × 10⁻⁴/√Hz (peaks)
IMC suppression (alog 30124): ~1/200
⇒ at IFO: 5 × 10⁻⁹/√Hz to (n/2) × 10⁻⁶/√Hz
Fixed misalignment of RF sidebands: Δα < 0.3
DC power in reflection with unlocked ifo at 50 W: REFLDC_unlocked ~ 300 mW
Error offset in REFL = jitter × REFLDC_unlocked × Δα
⇒ 5 × 10⁻⁹/√Hz × 0.3 W × 0.1 ~ 1.5 × 10⁻¹⁰ W/√Hz (low)
⇒ (n/2) × 10⁻⁶/√Hz × 0.3 W × 0.3 ~ (n/2) × 10⁻⁷ W/√Hz (high)
Frequency noise coupling into DARM (alog 29893):
⇒ 10⁻¹⁰ m/W at 1 kHz (approx. f-slope)
at 1 kHz: 10⁻²⁰ m/√Hz to 10⁻¹⁷ m/√Hz
at 300 Hz: n × 10⁻¹⁸ m/√Hz (high), with periscope peak n ~ 4.
This seems at least a plausible coupling mechanism to explain our excess jitter noise.
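The chain of numbers is easy to lose track of, so here is a minimal sketch of the same estimate; the values are exactly those quoted above, with n left as a parameter:

```python
# Propagate the measured jitter through the coupling chain estimated above.
n = 4.0                                   # periscope peak multiplier

jitter = {"low": 1e-6, "high": n * 1e-4}  # /rtHz, before the IMC
imc_suppression = 1.0 / 200
refl_dc_w = 0.3                           # unlocked REFL DC power at 50 W
dalpha = {"low": 0.1, "high": 0.3}        # RF sideband misalignment
coupling_m_per_w = 1e-10                  # at 1 kHz (approx. f-slope)

for case in ("low", "high"):
    refl_offset = jitter[case] * imc_suppression * refl_dc_w * dalpha[case]
    darm = refl_offset * coupling_m_per_w  # m/rtHz at 1 kHz
    print(f"{case}: {darm:.1e} m/rtHz at 1 kHz")
# low ~1.5e-20, high ~1.8e-17 -- matching the 1e-20 to 1e-17 range above.
```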
Some additional comments:
This calculation estimates the jitter noise at the input to the ifo by forward propagating the measured jitter into the IMC. It then assumes a jitter coupling in reflection that mixes the carrier jitter with a RF sideband TEM10 mode due to misalignment. The corresponding RF signal would be an error point offset in the frequency suppression servo, so it would be added to the frequency noise. Finally, we are using the frequency noise to OMC DCPD coupling function to estimate how much would show up in DARM.
If this is the main jitter coupling path, it will show up in POP9I as long as it is above the shot noise. Indeed, alog 30610 shows the POP9I inferred frequency noise (out-of-loop) more than an order of magnitude above the one inferred from REFL9I (in-loop) at 100Hz. It isn't large enough to explain the noise visible in DARM. However, it is not far below the expected level for 50W shot noise.
Possibly after some changes made during maintenance, the lockloss tool stopped working. I can use lockloss plot, but not select.
sheila.dwyer@opsws4:~/Desktop/Locklosses$ lockloss -c channels_to_look_at_TR_CARM.txt select
After noticing LLO alog 28710 I tried to svn up the lockloss script (we're at rev 14462 now), but I still get the same error when I try to use select
The problem at LLO is totally different and unrelated.
The exception you're seeing is from an NDS access failure. The daq NDS1 server is saying that it can't find the data that is being requested. This can happen if you request data too recent in the past. It also used to happen because of gaps in the NDS1 lookback buffer, but those should have been mostly fixed.
Right now the lockloss tool is looking for all lock loss events from 36 hours ago to 1 second ago. The 1 second ago part could be the problem, if that's too soon for the NDS1 server to handle. But my testing has indicated that 1 second in the past should be sufficient for the data to be available. In other words I've not been able to recreate the problem.
In any event, I changed the look-back to go up to only two seconds in the past. Hopefully that fixes the issue.
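For anyone reproducing this, a minimal sketch of the request using the nds2 client, with the end of the request padded back from now; the channel name, host, and padding here are illustrative, not the tool's actual internals:

```python
import nds2                  # nds2-client python bindings
from gpstime import gpsnow   # LIGO gpstime package, for the current GPS time

# End the request a couple of seconds in the past, mirroring the fix above;
# asking for data right up to "now" can make the NDS1 server report that it
# can't find the requested data.
PAD_S = 2
stop = int(gpsnow()) - PAD_S
start = stop - 36 * 3600     # 36-hour look-back, as the lockloss tool uses

conn = nds2.connection('h1nds1', 8088)   # host/port are illustrative
buffers = conn.fetch(start, stop, ['H1:GRD-ISC_LOCK_STATE_N'])
print(buffers[0].channel.name, len(buffers[0].data), 'samples')
```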