After Fabrice okayed our close-out tests, Hugh and I headed down to the end-station to finish the seismic prep for closing the chamber. Hugh locked HEPI, while I checked cabling and locked the ISI. Other than adding a cover, we should now be ready for the dome.
The existing regulator was working but had a significant leak through its diaphragm.
Mark, Eddy, Tyler and I installed two in-chamber cleaning dust barriers into the south nozzle between BSC3 and BSC2 to provide laser safety in BSC3. The operation went well and was completed before lunch. (Note: I went in first to wipe down the floor and the south bellows convolution. The floor was in good shape, as evidenced by the fact that the wipe came away clean except for oxidation. The bellows convolution was not terrible but definitely dirtier than the floor: there were some metal fines and a couple of Viton shreds.) Randy helped the CPB folks throughout the morning. Eddy and Tyler moved a HAM work platform from the east side of HAM1 to HAM2 just before lunch. The entire crew went down to BSC9 aka ETMX after lunch to work on the dome. There was a bit of a delay while walking plate bolts were removed, but the dome was returned to the top of the chamber by the end of the day. I spent some time stowing the tooling etc. that is in the lower level staging area in preparation for moving the cleanroom so that the door can be replaced tomorrow. Not a bad day's work!
This morning I was unable to use TortoiseSVN to do an svn update on h1ecatx1. Jonathan managed to do one using the command line, but something is wrong.
Now PLC2 on that machine is giving the message "choose run time system". I'm not sure why this happened. I tried stopping the PLCs and shutting down the OPC PowerShell window, but when I reran the install scripts I got the same "choose run time system" message, after which the PLC does not run. So now I am restarting the machine.
I was also trying to use Remmina on opsws5 to log in to h1ecatx1, and now opsws5 has crashed. Also, earlier today Remmina crashed on opsws3, and I was not able to use it again on that machine.
Also, the PSL shut down about an hour ago.
After restarting the machine, the PLCs came up running again, but the PowerShell for EPICS communication has the same error we had 2 weeks ago, saying "target machine not found". This was due to a bad tpy file last time, but I am not sure how to fix it.
Only opsws4 can reliably run Remmina. This is due to video driver problems. Currently there is a tradeoff: with one video driver you can have lots of MEDM screens up at the same time, but you can't view digital video or use Remmina. With the other video driver, you can view digital video or use Remmina, but trying to view lots of MEDM screens leads to slow response. opsws4 is set up to allow Remmina to be used; the other computers are set up to allow multiple MEDM screens to be viewed.
Started a round of TFs for SRM at GPS = 1067655775. The measurement will take ~8 hours to complete.
Keita asked me this morning to take a look at TMSX damping. I am not sure if it is because it is moving too much, or if it was just to make sure it's working.
In any case, the damping looks to be functioning fine.
Attached are some results comparing TMS motion with damping on vs. damping off in each DOF, with the ISI floating (no active control). The first picture shows the time series and the second one the spectra.
The Qs are reduced by roughly a factor of 100 for the first modes of each DOF. The last picture shows the filters engaged and the gains (for future reference).
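As a point of reference for how such a Q comparison can be made, here is a minimal sketch (not the actual measurement pipeline; the sample rate, mode frequency, and synthetic data below are placeholders) that estimates Q from a ringdown time series by fitting the decay of the Hilbert envelope, using Q = pi * f0 * tau:

# Rough Q estimate from a ringdown time series: the amplitude of a lightly
# damped mode decays as exp(-t/tau), and Q = pi * f0 * tau.
# Illustrative sketch only; the real comparison was done with the attached
# time series and spectra, not with this script.
import numpy as np
from scipy.signal import hilbert

def estimate_q(x, fs, f0):
    """Estimate Q of a single dominant mode at f0 [Hz] from ringdown data x
    sampled at fs [Hz], via a linear fit to the log of the Hilbert envelope."""
    t = np.arange(len(x)) / fs
    envelope = np.abs(hilbert(x - np.mean(x)))
    # Fit log(envelope) = log(A0) - t/tau, trimming the record ends to avoid
    # Hilbert-transform edge effects.
    sel = slice(len(x) // 20, -len(x) // 20)
    slope, _ = np.polyfit(t[sel], np.log(envelope[sel]), 1)
    tau = -1.0 / slope
    return np.pi * f0 * tau

# Example with synthetic data: a 1 Hz mode with Q = 1000 (undamped)
# versus Q = 10 (damped) should come back roughly a factor of 100 apart.
fs, f0 = 256.0, 1.0
t = np.arange(0, 60, 1 / fs)
for q_true in (1000.0, 10.0):
    x = np.exp(-np.pi * f0 * t / q_true) * np.sin(2 * np.pi * f0 * t)
    print(q_true, estimate_q(x, fs, f0))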
Joe Gleason, Volker Quetschke
We've reduced the main beam power measured at the bottom periscope mirror to approximately 260 mW using the main (second) power control stage. We noticed that the PMC locking was unstable when we entered the PSL and were not able to remedy the issue. The throughput was approximately 80% according to the reflected power. The transmitted power looks to be uncalibrated at the moment and shows only about 1/10th of the actual output on the PSL_PMC MEDM screen. Should the throughput change to the usual maximum, we will still be under the 300 mW maximum required for the HAM2 work.
Travis, Betsy
Today, we attached the QUAD pusher and weight-off-loading mover assemblies to the ITMx QUAD and its ISI. Attempting to maintain the yaw and Y-direction alignment, we pushed the suspension structure about 6 mm. We then reclamped the QUAD with its dog clamps and removed the pushers and movers. Next up: SEI HEPI Z and yaw adjustment, followed by another IAS measurement suite.
Some SUS items were moved from X end to Y End TMS Lab in preparation for dome and doors going back on BSC9 (maybe tomorrow). Some items (iLIGO SEI scissor jacks/blue iron, cables) were moved from Y End to Y Mid. A little work was done at Y Mid in preparation for receiving four ISI storage containers from LLO this week. The iLIGO baffle was removed from the X Arm Spool and stored in a clean state. Work on the cryopump baffle continued. Three dust barriers were located (two clean) and moved to the High Bay for in-chamber use. One will need the Viton strip re-attached.
Attached is a plot of the round trip delay times of EPICS PULL (active read), EPICS PUSH (active write), and EPICS PUSH/PULL (active write to a 3rd channel, followed by an active read from that channel). PUSH and PUSH/PULL are dominated by a 2 Hz update cycle; PULL sees an 8 Hz update cycle. The minimum delay for PUSH is around 110 msec; for PULL and PUSH/PULL it is about twice that. The delay for all three channels is about 10x too slow to be really useful (i.e., for acknowledgment returns shorter than the blink of an eye).
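For context on how a round-trip number like this can be probed from the EPICS side, here is a minimal write-then-poll sketch using pyepics. The channel names are hypothetical placeholders, and the actual test described here ran inside the front-end models (h1odcmaster/h1ascimc) rather than through a script like this:

# Minimal sketch of an EPICS round-trip delay probe: write a counter value,
# poll the echoed channel until it comes back, and time the loop.
# Channel names are placeholders, not real H1 channels.
import time
import epics  # pyepics

WRITE_PV = "H1:ODC-TEST_COUNTER_OUT"     # hypothetical
READ_PV = "H1:ODC-TEST_COUNTER_RETURN"   # hypothetical

def round_trip_delay(value, timeout=2.0):
    """Write `value` to WRITE_PV and wait for it to appear on READ_PV.
    Returns the elapsed time in seconds, or None on timeout."""
    t0 = time.time()
    epics.caput(WRITE_PV, value, wait=True)
    while time.time() - t0 < timeout:
        if epics.caget(READ_PV) == value:
            return time.time() - t0
        time.sleep(0.005)
    return None

delays = [round_trip_delay(i) for i in range(1, 101)]
delays = [d for d in delays if d is not None]
print("mean %.3f s  max %.3f s" % (sum(delays) / len(delays), max(delays)))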
A last snapshot of the EPICS test setup. Note that the intermediate EPICS server for the Push/Pull line was rebooted; an EPICS error was indeed reported. I backed out the temporary changes for testing from h1odcmaster and h1ascimc. Both models were re-compiled, re-installed, and restarted without the changes.
Log from last Friday, 1 Nov 2013. Took a look at the ITMx after setting up the Brunson optical square from the End-X station. Position errors and pitch/yaw as of 1 Nov 2013 are:
I took down the IAS equipment Friday afternoon to allow for this morning's cryopump baffle installation. While that is happening SUS and SEI are making alignment adjustments. When the cryopump baffle installation is completed I will set the IAS equipment up again and measure the results of the moves.
Betsy, Travis
On Friday, we rebalanced the SRM suspension, adding enough weight to the middle mass to bring it down such that the 2 AOSEMs which were maxed out in Z-position could be aligned. These AOSEM brackets were poorly designed, and we have found that many of the HSTSs at LHO (and likely at LLO, but we are awaiting their response) are built with the brackets at a range extreme. So, while the SRM passed TFs the other week, we had to start over with balancing in order to bring these AOSEMs into alignment with a little bit of range adjustment. Previously, the SRM had been balanced to within tolerance at ~0.75 mm high. Friday, we added enough weight to bring it to ~0.25 mm low in order to bring the middle mass down a bit. Note that none of the other 12 OSEMs were out of range, just the 2 top ones on the middle mass. Because of the rebalancing, Phase 2b TFs need to be run and evaluated again.
Danny, Gary & Matt agree that we have not seen this problem here at LLO on any suspensions. Matt has gone over all of the photos he has of the in situ HSTSs, and there are none at the extremes. Matt & Betsy will talk.
Matt, Betsy, Jeff B & Janeen talked. Matt reminded us that there were 2 different vendors for the HSTS weldments. It may be that the horizontals of the two different manufacturers' weldments are in different positions relative to the drawing. If, for example, some of the LHO crossbars are further apart than the LLO crossbars, then the range for those middle-stage AOSEM brackets would be different. We are in agreement that if the suspensions meet the requirements - TF pass and optic center in the right position relative to the optics table - then these differences are acceptable.
Matt has a couple of photos of HSTSs here at LLO that show the relatively nominal position of these brackets. The first photo is the back of LPR2. The second is the back of MC3. The last is the back of MC2. You'll need to zoom in on these photos to see the AOSEM mounting brackets.
Maybe not necessary, but caution is OK. I locked HEPI before the ACB is swung back and SUS moves in to push the suspension around.
* Level 1 locked is just using the outer vertical & four outer horizontal lock bolts. The others are extremely difficult to access and, for the entire platform, overkill.
(Stefan Ballmer - not Jeff)
I set up a test for diagnosing front-end to front-end communications via 4 separate routes:
1) IPC
2) EPICS read (pull data)
3) EPICS write (push data)
4) EPICS write and read, via a third EPICS server (push to 3rd server, pull from 3rd server)
The setup has a counter on h1odcmaster; this is sent via temporary IPC to h1ascimc, and then returned back to h1odcmaster via the 4 routes described above. For IPC and EPICS read operations I get an error flag, which I am reporting. No such thing is available for an EPICS write operation. h1odcmaster is currently running at 32k, h1ascimc is running at 2k.
I calculated the following quantities:
- DELAY : round trip delay in sec
- DELAY mean : averaged DELAY
- DELAY max : largest DELAY observed
- JITTER MAX : largest time interval between updates
- JITTER MIN : shortest time interval between updates
- JITTER MAX-MIN : = JITTER MAX - JITTER MIN, the peak-to-peak jitter
- JITTER RMS : root-mean-square of the time interval between updates
Conclusions for now:
- No errors observed in the ~1h this was running
- No jitter on IPCs
- EPICS read has a faster update rate (~8Hz) than EPICS write (~2Hz)
- as a result, the delay is shorter for EPICS read
- the jitter MAX-MIN is about 40 msec for EPICS access
- the RMS jitter is about 4 msec
I turned on recording for all the fast return channels over the weekend. The modifications to h1ascimc and h1odcmaster will be backed out on Monday.
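For reference, here is a minimal sketch of how the delay/jitter quantities defined above could be computed once the round-trip delays and update times have been extracted into arrays; the extraction from the recorded return channels is not shown, and the example numbers are made up:

# Sketch of the delay/jitter statistics defined above, assuming `delays` is an
# array of round-trip delays [s] and `t_update` an array of update times [s]
# pulled from the recorded return channels (extraction not shown here).
import numpy as np

def delay_jitter_stats(delays, t_update):
    delays = np.asarray(delays, dtype=float)
    dt = np.diff(np.asarray(t_update, dtype=float))  # intervals between updates
    return {
        "DELAY mean": delays.mean(),
        "DELAY max": delays.max(),
        "JITTER MAX": dt.max(),
        "JITTER MIN": dt.min(),
        "JITTER MAX-MIN": dt.max() - dt.min(),        # peak-to-peak jitter
        "JITTER RMS": np.sqrt(np.mean((dt - dt.mean()) ** 2)),  # RMS scatter of the interval
    }

# Example with made-up numbers: a ~2 Hz update cycle with a little scatter.
rng = np.random.default_rng(0)
t_update = np.cumsum(0.5 + 0.004 * rng.standard_normal(1000))
delays = 0.11 + 0.02 * rng.random(1000)
for name, value in delay_jitter_stats(delays, t_update).items():
    print(f"{name:>15s}: {value * 1e3:7.1f} ms")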