Evan, Sheila
The PSL has been working all night tonight. We got a chance to try SRCL feedforward. It works, but it doesn't improve the noise. At first we were able to reduce the noise coupling by 12 dB, but we later saw the coupling itself change by up to about 6 dB, and on our second attempt the subtraction was not as good. Evan has new measurements of the SRCL coupling and the frequency noise coupling to include in the noise budget over the weekend.
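As a sanity check on why a drifting coupling limits the subtraction (my arithmetic, not our feedforward code): if the feedforward filter is tuned to a coupling C0 but the true coupling has drifted to C = g*C0, the residual is |C - C0|, i.e. a fraction |1 - 1/g| of the uncorrected noise. A quick sketch:

    # Sketch (not our feedforward code): subtraction achievable when the
    # true SRCL->DARM coupling C has drifted from the tuned value C0.
    # Residual/uncorrected = |C - C0| / |C| = |1 - 1/g|, with g = C/C0.
    import numpy as np

    def residual_db(drift_db):
        g = 10 ** (drift_db / 20.0)  # coupling drift as a linear factor
        return 20 * np.log10(abs(1 - 1 / g))

    for d in [1.0, 2.0, 3.0, 6.0, -6.0]:
        print('%+4.0f dB drift -> %5.1f dB residual' % (d, residual_db(d)))

So a 6 dB change in the coupling by itself caps the subtraction at about 6 dB, consistent with what we saw; holding 12 dB of subtraction requires the coupling to stay within roughly 2.5 dB of where it was tuned.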
Other things:
We used the TMSY picomotors to center the beams on the QPDs; this didn't change the combination of QPDs we used for the ITM loops. We might want to check the normalization for the Y arm QPD next time we do initial alignment.
We can switch SRM coil drivers in full lock; PRM, PR2, SRM, and SR2 coil driver switching is now in the guardian in the DRMI_ON_POP state.
We started to measure the ASC sensing matrix with all the loops closed, including the ITM loops. We got a reasonable measurement for pitch; the data is all on disk, although we had some trouble extracting it. We were in the middle of tuning the yaw excitations when we got another earthquake. We were motivated to work on ASC because we have been aligning a little by hand before turning on the ASC all night, and we are hoping to find more diagonal signals so that we don't have to do this each lock.
Jason, Patrick, Peter, Rick

In summary:
1. We could not get the spare interlock chassis to work. The original is back in.
2. We restored the C:/TwinCAT directory to a backup that Peter took in June of 2014.
3. The PSL is running again.

The spare chassis was swapped in. The status screen no longer made sense: pushing the interlock button made the wrong indicator turn red. I did a scan of the EtherCAT modules, which appeared to indicate that they were unchanged (that is, the spare had the same modules in it as the original). We suspected that maybe the safety PLC inside the interlock chassis had to be started separately; it seemed to require a username, serial number, and password. Peter's attempt to enter these and start it did not appear to work. I tried restarting various things, including the entire computer. No luck.

I suggested resetting all of the variables, including the persistent ones. Bad idea. Starting the PLC now gave divide-by-zero errors and it would not run. It appeared that some of the persistent variables, which were now all zero, were used in the denominators of fractions; they appeared to be related to some calibration settings. We couldn't get past this. This confuses me: if the code does not start with them zero, and upon the first run they are by default zero, how were they initially set?

It appeared that the files for saving the persistent variables were located in the C:/TwinCAT/Boot directory. We tried replacing that directory with a backup that Peter had. This did not help; same divide-by-zero errors. We tried to delete and replace the entire TwinCAT directory, but Windows would not let us delete it. At some point, scanning the EtherCAT modules started showing a whole bunch of differences, and the light on the front of the interlock chassis no longer came on.

We decided to put the original interlock chassis back and try restoring the computer to a backup that Peter had taken in June of 2014. After the original chassis was put back, the light on the front still did not come on. Peter tried restoring just the TwinCAT directory with the restore software, and was able to do so. We opened the link to the visual; it seemed to run but was blank. I closed it, opened the system manager, and set it to run. I opened the PLC and logged in. The light on the front of the interlock chassis came on! The PLC was running again.

Remaining questions:
1. How were the persistent variables originally set? Are the values they are set to now (from the backup) the same as they were before?
2. What is different about the spare interlock chassis? Could it be some programming in the safety PLC? Different wiring?
This may not help, but
With no PSL and no one else really around, I wanted to test out the new sensor correction/blend SEI configuration nodes.
I got the blend side of it working just fine, switching between desired blends perfectly after only a few syntax errors. I watched the NXT and CUR filter banks closely to make sure that they were doing what they were supposed to, just in case. The sensor correction part didn't go as smoothly, though. There were a few issues that I did not figure out before I had to go. I wish I had time tonight to finish this, but unfortunately that is not the case.
I stopped the nodes, just in case they wanted to mess with anything over the weekend.
On Monday:
Overall it went pretty well seeing this come semi-alive. I put all of the blends back where I found them, as well as the SC filters, so everything should still be set for the weekend.
I need to find a way around the ezca prefix that gets added to channel names. I have one test that looks at guardian state channels to check for transitions, and I can't seem to get around the added prefix... for now.
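One workaround I may try (a sketch only, untested; the channel name below is just an example): read the full channel name directly with pyepics, bypassing ezca's prefix handling.

    # Sketch of a possible workaround (untested): use pyepics directly so no
    # prefix is prepended. The guardian channel name here is illustrative.
    from epics import caget

    full_channel = 'H1:GRD-SEI_BS_ST1_BLND_STATE_N'  # example name only
    print('%s = %s' % (full_channel, caget(full_channel)))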
I'm not sure what you're trying to do here, but my guess is that we can find a better way to do it.
STS2-B in the BierGarten is still the PEM unit destined for the vault. But STS2-A (HAM2) is back at its home by HAM2, and all cables are returned to their original locations. More looks after settling.
9:39 Hugh going to CER
11:07 Nutsinee to HWS table near HAM 4
11:33 Nutsinee out
11:39 Nutsinee to LVEA
11:54 Nutsinee out
12:42 Richard to EY for network cabling
12:45 Fil to EY
13:26 Elli to LVEA HAM4 HWS work
13:26 Bubba to LVEA for critter control
13:30 Jim B and Ryan to EY for network switch work
13:42 Bubba out
14:35 Jim B and Ryan out
14:48 Jim B, Ryan, and Elli to EY
I greened up the SDFs for H1 by accepting the ETMs & ITMs running the 90 mHz blends, as opposed to the 45 mHz blends, for the beam-line DOF. I also accepted the matrix changes for HAM1, 2 & 3 for the STS ground seismometer input switch to the C unit at HAM5.
A remote-controlled power switch has been installed at EY to allow power for the HWS camera to be turned off or on from the control room. The IP address of the switch is 10.105.0.155; access it using telnet on port 23. Ellie King has the instructions for controlling the power. The camera is plugged in to outlet #1.
The HWS is plugged into outlet J1. To turn on the power type into a terminal:
telnet 10.105.0.155 23 (open telnet)
@@@@ (start IPC)
? (brings up a help screen with a list of commands)
A10 (turns on all power outlets; "A00" turns them all off)
LO (logs out)
^] (close telnet)
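The same sequence could be scripted from the control room; below is a minimal sketch using Python's telnetlib, assuming the switch accepts exactly the commands listed above (untested):

    # Minimal sketch (untested): script the documented telnet sequence to
    # turn on all outlets on the EY HWS power switch.
    import telnetlib

    tn = telnetlib.Telnet('10.105.0.155', 23, timeout=10)
    tn.write(b'@@@@\r\n')  # start IPC
    tn.write(b'A10\r\n')   # all outlets on ('A00' would turn them all off)
    tn.write(b'LO\r\n')    # log out
    print(tn.read_all().decode('ascii', 'replace'))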
The CDS switch at EY needed to be power-cycled to allow us to do a password recovery procedure. This interrupted data collection for the vacuum channels, the HEPI pump controller, and the weather station for the end station. Vacuum data has a gap from approximately 13:51 PDT to 14:06 PDT. EY weather was restored at 3:47. EY dust seems to have not been affected.
I didn't interpret any of the WPs as resulting in vacuum alarms.
Scott L., Ed P., Chris S., Cris M. (1/2 day)
5/6/2015: Cleaned 45.7 meters, ending at HNW-4-034. Removed lights and began moving equipment to the next section north.
5/7/2015: Finished moving equipment and hanging lights. Started vacuuming support tubes and tube cleaning. Cleaned 36.5 meters of tube, ending 16 meters north of HNW-4-035.
5/8/2015: Cleaned 45.7 meters, ending at HNW-4-038. Cleaning crew left at noon.
At Kiwamu's request, I have updated the script which runs from the 'Turn WFS ON/OFF' button on the IMC_WFS_MASTER.adl MEDM screen. It appears this script was ported from the 40-meter lab (a few channels were still listed as C1:), and it touched a number of other switches and filters in an undesirable way. I commented out everything except the gain setting for the slider, so it should now function solely as an ON/OFF switch, as advertised. Changes have been committed to SVN.
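In other words, the script should now reduce to something like the following (a sketch only; the actual gain channel the script writes to may differ):

    # Sketch of the trimmed ON/OFF behavior (channel name is an assumption,
    # not necessarily what the script uses): toggle the IMC WFS master gain.
    from epics import caget, caput

    GAIN = 'H1:IMC-WFS_GAIN'                  # assumed master gain channel
    caput(GAIN, 0.0 if caget(GAIN) else 1.0)  # 0 = WFS off, 1 = WFS on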
I will take the LLO aLOG offline to migrate it to a new server during the maintenance window on Tuesday, May 12, 2015. Work Permit: https://workpermit.ligo-la.caltech.edu/view.php?permit_id=2544
Re: yesterday's log on the STS2 work.
Last night I swapped the field cables for the HAM2 and ITMY (A & B) STS2s at the interface chassis. This morning the traces from all three STS2s look good. See attached: the pink traces are references; notice the noise in the 0.5 to 5 Hz band. All the other traces are consistent, and the coherence below tells exactly the same story. Since the signal from the HAM2 STS2 looked bad with two different cables, and the contacts have been worked a bit in the process, this suggests the problem is more likely on the interface side of the connection. I'm going to do some cable wiggling and switch them back this morning.
Thanks much to Robert Schofield for assistance & teaching.
Uh boy... I spent 10 minutes sitting at the interface, wiggling the cable into the STS2-A chassis while watching a real-time spectrum, and could not get the noise to show up again.
The cables are now swapped back, putting the signals where they belong. The STS2-A instrument is still in the BierGarten near STS2-B (the ITMY/vault machine). The satellite cable/box is still swapped, and the field cable for STS2-A is the test cable. Maybe later we'll restore everything back to home. We may want to just do an in-shop checkout of the STS2-A chassis.
Sift wiki page

Lock times (ODC-MASTER_OBS_INTENT & DMT-DC_READOUT_LOCKED):
1114582937  1114624965  (42028 s)
1114629970  1114639925  (9955 s)
1114643198  1114646416  (3218 s)

There were three locks on May 2nd, lasting ~10.5 hrs, ~2 hrs, and ~3 hrs (continuing into May 3rd) respectively. The first lock loss happened after turning on the 504.8 Hz violin mode damping loop (alog). I'm not sure about the second one (Nutsinee was guessing that initial alignment was not done for the full day, so maybe the optics started drifting), but the third lock was lost because the PSL tripped (alog). The first lock had a stable inspiral range of ~8 Mpc, while the second had a slightly better range, ~10 Mpc. Three dips in the inspiral range of the first lock were caused by three loud glitches (glitch follow-up). OAF and GDS h(t) calibration still differ between 20 Hz and 110 Hz (and a bit above 2 kHz).

There were a few interesting glitch bands: 10 to 40 Hz starting from 12:30 UTC, and 80-300 Hz throughout the locks; most of the loud glitches were seen around the latter band. Both UPV and Hveto have found H1:ASC-AS_A_RF45_Q_PIT_OUT_DQ to be the most effective veto channel. The 10-40 Hz glitches might be related to angular sensing and control channels and output mode cleaner angular channels: STAMP-PEM has shown that the OMC-ASC_QPD_B_PIT_OUT, OMC-ASC_ANG_Y_OUT, and ASC-AS_{A,B}_RF45_Q_PIT_OUT channels have high coherence with DARM in the 10-40 Hz band starting after 12 p.m., which might be the cause of the excess low frequency glitches starting from 12:30 p.m.

The daily CBC result has an interesting vertical band of glitches around 12:00 to 12:30 UTC. It looks like messy PSD estimation, but the timing is not convincing enough to tie it to the noise floor jump, which seems to be closer to 12:50 UTC.
J. Kissel, for the Calibration Team We'd discussed the CAL-CS vs. GDS discrepancy on the CAL team call yesterday. We believe the discrepancy arises because the GDS pipeline had not accounted for the (currently) four 16 [kHz] clock cycle delay between the actuation path and the (inverse) sensing path. The delay causes the amplitude discrepancy in the sum of the two paths around the DARM unity gain frequency at ~40-50 [Hz].
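For scale (my arithmetic, assuming the "16 kHz" model rate is 16384 Hz): four clock cycles is tau = 4/16384 s, about 244 µs, which is a few degrees of phase right in the band where the two paths have comparable magnitude and their sum is most sensitive to a mismatch:

    # Back-of-envelope (assumes a 16384 Hz front-end rate): phase error from
    # a 4-sample delay on one path of the actuation + inverse-sensing sum.
    tau = 4 / 16384.0  # seconds, ~244 us
    for f in [20.0, 40.0, 50.0, 100.0]:
        print('%5.0f Hz: %4.1f deg' % (f, 360.0 * f * tau))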
The second lock of May 2nd happened around 3 pm PDT. Jim was on duty then, so my speculation about why the interferometer lost lock at that time was irrelevant. Sorry, I thought the second lock loss happened while I was on shift, and optics drift came to mind, which was not even true (it was the PSL tripping). There was no reason stated for this lock loss: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=18183
Forgive my scrambled memory....
J. Oberling, R. Savage, P. King, J. Bartlett
The goal for today was to replace the flow sensors for both PSL chillers with new ones that have no moving parts; we have to bring both chillers down to replace one, so we might as well replace both. Once we swapped out the flow sensors, both PSL chillers quit working, giving the same error; we are in touch with the service department to figure out what happened.

In order to get the PSL back up and running we installed the spare PSL chillers, in which we had also installed new flow sensors. As it turns out, these new flow sensors don't play nicely with the chillers, for some reason not known to us. We therefore swapped back in the old flow sensors that were originally in the spare chillers. As our luck would have it, the diode chiller flow sensor was reporting no flow when we could clearly see flow; we replaced this flow sensor with one from the original chillers. By this time we had lost track of which flow sensor was from the crystal chiller and which was from the diode chiller (the one we originally suspected of beginning to fail, thereby causing the PSL trips we have been seeing).

In other words, we traveled in one huge circle today, and the original problem with the PSL diode chiller flow interlock tripping may not be fixed. Luckily, we are all now experienced in swapping these flow sensors out, so if the PSL does happen to trip again because of the diode chiller flow interlock, we will swap it again.
As it happens, the PSL tripped as I was writing this alog. Jeff and Peter had already made it out, so Rick and I swapped the flow sensor and got everything to work after some issues. More detail on that to come tomorrow, as now I'm tired and hungry.
We've had 4 more PSL trips in quick succession; Evan and I reset the first 3, and now we are going home. In these trips the external shutter did not close, and there was no flipping of the diode chiller bits at the time of the trips. A few minutes after the last trip, the diode chiller bit did flip. Also, everything looked OK to me on the flow screen, and there was no water on the floor in the chiller room.
The second time the laser tripped, I accidentally hit the Xtal chiller button on the flow screen, which I didn't realize was a button until I had hit it. This turned off the crystal chiller; I turned it back on by hitting the button again.
At Kiwamu's request, I have updated his PD null script, pdOffsetNull_ver2.py, located in opt/rtcds/userapps/release/lsc/h1/scripts, to include the balance of the LSC PDs (a sketch of the general recipe such a script follows is given after the list). The list of PDs zeroed by this script is now:
'LSC-POPAIR_B_RF18',
'LSC-POP_A_RF9',
'LSC-POP_A_RF45',
'LSC-POPAIR_A_RF9',
'LSC-POPAIR_A_RF45',
'LSC-POPAIR_B_RF90',
'LSC-ASAIR_B_RF18',
'LSC-ASAIR_B_RF90',
'LSC-REFLAIR_A_RF9',
'LSC-REFLAIR_A_RF45',
'LSC-REFLAIR_B_RF27',
'LSC-REFLAIR_B_RF135',
'LSC-REFL_A_RF9',
'LSC-REFL_A_RF45',
'LSC-ASAIR_A_RF45',
'LSC-X_TR_A_LF',
'LSC-Y_TR_A_LF',
'LSC-TR_X_QPD_B_SUM',
'LSC-TR_Y_QPD_B_SUM',
'LSC-POP_A_LF',
'LSC-REFL_A_LF',
'LSC-POPAIR_A_LF',
'LSC-REFLAIR_A_LF',
'LSC-ASAIR_A_LF',
'LSC-POPAIR_B_LF',
'LSC-REFLAIR_B_LF',
'LSC-ASAIR_B_LF'
The changes have been committed to the SVN.
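For reference, a minimal sketch of the usual offset-nulling recipe (this is not pdOffsetNull_ver2.py itself; the _INMON/_OFFSET channel suffixes are assumed from the standard CDS filter-module convention, and the PD subset and averaging time are illustrative):

    # Sketch only, not the actual script: with the PDs dark, average each
    # demodulated input and write the negated mean into the filter-module
    # offset so the output reads ~zero. Channel suffixes assume the standard
    # CDS filter-module _INMON/_OFFSET EPICS fields.
    import time
    import numpy as np
    from epics import caget, caput

    IFO = 'H1:'
    PDS = ['LSC-POP_A_RF9', 'LSC-POP_A_RF45']  # illustrative subset

    def null_offset(pd, quad, t_avg=10.0, rate=16.0):
        """Average <pd>_<quad>_INMON for t_avg seconds, then null it."""
        samples = []
        for _ in range(int(t_avg * rate)):
            samples.append(caget('%s%s_%s_INMON' % (IFO, pd, quad)))
            time.sleep(1.0 / rate)
        caput('%s%s_%s_OFFSET' % (IFO, pd, quad), -np.mean(samples))

    for pd in PDS:
        for quad in ('I', 'Q'):
            null_offset(pd, quad)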
While looking over SDF diffs after running the script, I noticed that the offset for ASAIR_B_RF18 (both I and Q) changed from ~0 to ~300 (a few orders of magnitude), whereas the other PD offsets changed little. Just a heads up.
Sheila and I found that the dark offset for LSC-TR_X_QPD_B_SUM changed from −0.9 ct to −37.2 ct at 11:41:57 local this morning. Was this when the script was run? This value is way too big.
For the record, the following are the measured weights of the QUAD glass penultimate masses (PUMs) and test masses that are currently here at LHO. Most of this data can also be found on the CIT optics Nebula web page. Note the labels of the masses are slightly confusing, as the optics were coated specifically for the one-arm.

MASS      LABEL          INSTALL LOCATION
39,653g   ETM02 (TM)     BSC8 ITMy
(Didn't measure the ITMy PUM because it was the first mass bonded and we were not yet wise to the need for weights.)
39,626g   ETM04 (PUM)    BSC6 ITMy PUM
39,689g   D050421-001    BSC6 ETMy PUM (was LASTI mass)
39,613g   ETM04 (PUM)
39,641g   ETM05 (PUM)
39,633g   ITM01 (PUM)
39,621g   ETM03 (PUM)
We found more weight numbers, and I made a typo in my original alog. The correct table is this:

MASS      LABEL          INSTALL LOCATION
39,653g   ETM02 (TM)     BSC8 ITMy
39,583g   ITM04 (PUM)    BSC8 ITMy PUM
39,626g*  ETM04          BSC6 ETMy
39,689g   D050421-001    BSC6 ETMy PUM (was LASTI mass)
39,613g   ETM04 (PUM)
39,641g   ETM05 (PUM)
39,633g   ITM01 (PUM)
39,621g   ETM03 (PUM; the "holy mass", has extra ground recesses)
39,650g   ITM08 (PUM)
39,616g   ITM05 (PUM)

* Mass weighed with ears/prisms after bonding/curing.

- Bland, Barton, Moreno
I have reason to doubt the weight listed for ETM02 "TM" - I do not know where this number came from.
Here are the new SRCL and frequency noise couplings projected onto DARM.
The good news is that we no longer seem to have a frequency noise coupling shelf around 100 Hz. It also seems that the SRCL feedforward pushes the SRCL noise down below 10^-19 m/rtHz around 80 to 100 Hz. But somehow the noise in DARM in this region still seems to be nonstationary, and (qualitatively) we haven't really seen any noticeable noise reduction here.
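For anyone reproducing these traces, the projection arithmetic is just the measured coupling transfer function times the auxiliary loop's spectrum; a minimal sketch with placeholder arrays (not our measured data):

    # Sketch of the projection arithmetic only; the arrays are placeholders,
    # not the measurement. DARM-referred SRCL noise = |SRCL->DARM TF| * ASD.
    import numpy as np

    f = np.logspace(1, 3, 500)                # Hz
    tf_srcl_to_darm = 1e-3 * np.ones_like(f)  # placeholder coupling (m/m)
    asd_srcl = 1e-16 / f                      # placeholder SRCL ASD (m/rtHz)

    darm_from_srcl = np.abs(tf_srcl_to_darm) * asd_srcl  # m/rtHz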
I repeated the SRCL injection measurement on Saturday, and got results similar to what is shown here.
The MICH and intensity noise traces are stale and need to be retaken. However, I did not see coherence between MICH control and DARM when looking at the control noises.