ITMX: Approximately 80% installed. We still need to run cables, plug in electronics, and connect the enclosures to the viewports; this requires a work permit, so I'll wait until later today to get one signed.
Beamsplitter: Data has been taken to calibrate the optical lever signals using the M1 and M2 stage OSEMs. I will analyze it later today and append a graph comparing this to other calibration methods.
J. Kissel, D. Barker
After consulting with Dave this morning, and after he snooped around a bit, it turns out the compilation failure of h1sushtts.mdl (see LHO aLOG 8777) was just a stupid mistake of an extra carriage return under the OM3 block. Sheesh. The new HTTS front-end model, ${userapps}/release/sus/h1/models/h1sushtts.mdl, now compiles just fine. In addition, I've moved on to and finished the same upgrade to the HAUX front-end model, ${userapps}/release/sus/h1/models/h1susim.mdl. I'll reinstall, restart, and restore tomorrow during maintenance, and then begin updating the MEDM screens, with the hope of being done by the end of the day. That will FINALLY close out the modifications to ALL SUS from ECR E1300578.
It appeared this morning that the IOC for the dust monitor at end X had stopped. When I telneted into the procServ process on h0epics, it seemed to start running again automatically with no other action on my part, which is strange. It still required re-enabling the dust monitor and burt-restoring it, however.
Corey going to end X to get items for end Y work
Cleaning at end Y
Thomas V. turning on LVEA optical levers
08:43 Justin checking the state of the LVEA South Bay in preparation for later isolated laser hazard work
10:13 Kiwamu and Stefan installing light pipes between HAM2 and IOT2L
12:51 Aaron swapping ISC HAM-A coil drivers in the CER
13:03 Keita, Corey, and Jim going into the end Y chamber to disconnect cables and secure the TMS in preparation for the cartridge removal
13:25 Justin transitioning the LVEA to laser HAZARD
WP signed: 4312, 4313, 4314, 4315
Kiwamu, Stefan
After cabling up IOT2L, we verified that we see fringes on the IMC, and then realigned the whole IMC REFL path on IOT2L. To hit the top periscope mirror in the center, we decided to move the whole table by ~3/4 in to the left (+x); this should also have centered the beam on the transmitted-beam periscope. We have not done a full transmitted-beam alignment, but we did quickly center the transmitted fringes on the camera. We are now getting 930 counts on H1:IMC-REFL_DC_OUTMON; this compares to about 3500 cts when we were sending in 1 W, consistent with the ~300 mW being sent from the PSL now. We again noticed an IMC RF phase flip by 90 deg (the 5th time or so; I've now stopped counting), so we moved the error signal from Q to I. The PDH error signal right now is still very small, but we haven't done a full IMC alignment. Right now the corner Beckhoff seems to be down, and we cannot control the IMC board.
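As a quick sanity check of that consistency claim, here is a minimal sketch in Python, assuming the REFL DC readback scales linearly with input power; the 3500 cts/W reference is just the value quoted above, not a formal calibration:

counts_now = 930          # H1:IMC-REFL_DC_OUTMON right now [cts]
counts_per_watt = 3500.0  # observed when sending in ~1 W [cts/W]
inferred_power_mW = 1e3 * counts_now / counts_per_watt
print("Inferred PSL power: %.0f mW" % inferred_power_mW)  # ~270 mW, consistent with the quoted ~300 mW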
We did everything we needed to allow people to move the ISI out except to cover the entire TMSY with a quad sock. We couldn't find one. Something for tomorrow.
TMS EQ stop thing from BSC6 was moved to BSC10 (it's sitting on the floor in the chamber).
Two cleanings of the Test Stand/Work Space cleanrooms have been completed. The garbing-staging cleanroom was returned to the top of the E-module from the north side of the beamtube and re-cleaned.
I located D1200329-v4-002 in the LVEA near the West Bay Pallet racks. (BTW: This item shows no evidence of being turned over to 3IFO LTS custody.) The Apollo crew helped me move it to the High Bay roll-up door so that it can be retrieved for shipment. Terry S. is aware of this item and will get it shipped to LLO ASAP (weight ~35 lbs).
Per ML's request, I checked the Fan-Filter Units (FFUs) in preparation for the OpLev drilling that is due to start in the LVEA this afternoon. I used the hand test and followed up with an anemometer check on each of the four FFUs on the cleanroom. Three were functioning well; the unit at the north-east (NE) corner did not seem to be functioning. I climbed up and checked each FFU's switch to make sure it was turned to high. All of them were, so I started looking for some reason why the NE unit wasn't functioning. As with computer troubleshooting, I checked physical connections first. The NE FFU's plug was not securely seated in its receptacle. Once that was remedied, the NE FFU started functioning properly. I also checked the dust monitor: it appeared to be taking periodic readings, and the counts were good (0/0) inside the cleanroom even with all my activity in and around the cleanroom. I will follow up with a hand-held dust monitor while drilling is in progress.
The "resting" cooling unit was changed this morning. Unit 1 is now off.
Sheila, Stefan
Here is a trend of the last 6 days and last 90 days of the RefCav transmitted power read-back. Based on our measurement, the calibration is roughly 1 V per 10 mW of light transmitted through the RefCav. The reduced max-min spread might be due to the ISS not being locked during the last 6 days.
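For convenience, a minimal sketch of applying that rough calibration to the read-back (Python; the 10 mW/V factor is the number quoted above, and the example read-back voltage is a placeholder, not real data):

CAL_MW_PER_V = 10.0  # rough RefCav transmission calibration from this measurement [mW/V]

def refcav_trans_mw(readback_v):
    # Convert the RefCav transmitted-power read-back from volts to milliwatts.
    return readback_v * CAL_MW_PER_V

print(refcav_trans_mw(1.5))  # e.g. a 1.5 V read-back -> ~15 mW transmitted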
Dave, Greg, Dan,
Following Dan's suggestion to try rebooting h1ldasgw0 remotely, at 17:20 I was able to log into the Solaris machine and remotely reboot it. I then re-mounted and re-shared the QFS file system, and then restarted the daqd process on h1fw0. So far h1fw0 has run for 11 minutes and has written good frames. I'll monitor it remotely.
h1fw0 has been running well since the file system fix yesterday, no lost frames.
The primary frame writer h1fw0 is not able to write frames. It looks like an Oracle QFS file system issue, the NFS mounted file system is not accessible. The QFS writer h1ldasgw0 is pingable, but I cannot ssh onto it remotely. If anyone is going to the site today, please call me and perhaps we can try power cycling the machine.
The secondary frame writer continues to write frames OK. If this machine goes down, we will no longer be trending vacuum controls signals, so I would like to resolve this before Monday if possible.
It looks like the system started failing at 5pm Wed. Our DAQ seems to know about holidays.
h1fw0 shutdown:
The h1fw0 file system is actually on-again-off-again, and I've seen times when the frames written by fw0 and fw1 are not identical. Since we know fw0 is having issues, we can assume that its frames are the incorrect ones. Unfortunately, as the primary writer, h1fw0's frames are preferentially archived by LDAS over h1fw1's. I have stopped the daqd regeneration process; h1fw0 has now stopped running and is not being restarted. This will allow LDAS to fully switch over to archiving h1fw1's frames.
Kyle
Had spun up the turbo late on Wednesday but aborted -> turbo too hot -> chilled water lines not circulating in LVEA now? Also disconnected the aux. cart from HAM3 and switched on the BSC3 annulus ion pump.
J. Kissel
I've made significant progress in the efforts to bring the HAM Auxiliary Suspensions (HAUX) and HAM Tip-Tilt Suspension (HTTS) front-end model infrastructure up to current SUS standards, given the integrated testing experience thus far (as per ECR E1300578), focusing on the HTTS. Unfortunately, I'm unable to confirm success of the work I've done because what I've drawn in Simulink does not compile. Ah well. Stay tuned for further progress; details of today's work below.
---------
Today I created a new front-end simulink model,
${userapps}/sus/h1/models/h1sushtts.mdl
which utilizes 5 copies of the brand-new, combined, HAUX / HTTS master model called
${userapps}/sus/common/models/HSSS_MASTER.mdl
where HSSS stands for HAM Single-Stage Suspension.
This model (h1sushtts.mdl) will replace
${userapps}/asc/l1/models/h1asctt.mdl
but will continue to run on the ASC front end, h1asc0, on the same core (specific_cpu=4).
Because there is so much unique stuff in the HSSS that differs from the multi-stage suspension's
${userapps}/sus/common/models/FOUROSEM_STAGE_MASTER.mdl
(e.g. alignment offsets, damping, shuttering), I've concluded that these optics should just be treated like every other multi-stage optic, where the "final" stage (in this case the only stage) is part of the overall SUS-type library part, but not its own STAGE_MASTER. This renders all of these library parts obsolete:
${userapps}/sus/common/models/
FOUROSEM_DAMPED_STAGE_MASTER.mdl
FOUROSEM_DAMPED_ALIGNMENT_STAGE_MASTER.mdl
FOUROSEM_DAMPED_STAGE_MASTER_newOSC.mdl
FOUROSEM_DAMPED_STAGE_MASTER_oldOSC.mdl
because the HSSS master contains damping loops and alignment offsets, and includes a lock-in in a manner similar to the rest of the SUS. The obsolete parts are various iterations of the original FOUROSEM master that reflect the history of the HSSS control system, created along the way to add all the new features that we now know we need today. Now that the HSSS master exists, containing the best of all the above obsolete models, there's no need to keep the others around. I will delete them once I can confirm successful compilation of the models that use the HSSS master and have made all necessary modifications to the HSSS MEDM screens.
Also of note, in addition to the HSSS library blocks for RM1, RM2, OM1, OM2, and OM3, I've created a new block called HTTS, which contains the stuff that uses combined information from all the optics, i.e. the Online detector characterization (ODC) channels, and the USER DACKILL. This way the stored ODC channel will be
$(IFO):SUS-HTTS_ODC_CHANNEL_OUT_DQ
which will look like all the other SUS ODC channels (with $(OPTIC) replacing HTTS), but reflects that the channel contains information about all of the five HTTSs. The only other thing in this block is the USERDACKILL, which also absorbs watchdog information from all 5 HTTSs, so the DACKILL channels will now be
$(IFO):SUS-HTTS_DACKILL_STATE
for example. This DACKILL "OR"s all the HTTS individual watchdog flags, so it's still the silly bad state of cascading a trip from HAM6 to HAM1 (or vice versa), but I'll leave changing watchdog systems for another day and another ECR that's not buried in otherwise janitorial engineering changes.
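As a conceptual sketch of that combined trip logic (illustrative Python only, not the actual RCG/front-end implementation; the optic list is taken from the block names above):

HTTS_OPTICS = ["RM1", "RM2", "OM1", "OM2", "OM3"]

def shared_dackill_tripped(wd_tripped):
    # wd_tripped: dict mapping optic name -> True if that optic's watchdog has tripped.
    # The shared USER DACKILL trips if ANY of the five HTTS watchdogs has tripped.
    return any(wd_tripped[optic] for optic in HTTS_OPTICS)

# Example of the cascading behavior described above: an OM3 trip (HAM6) also
# kills the DAC outputs for RM1/RM2 (HAM1).
print(shared_dackill_tripped({"RM1": False, "RM2": False,
                              "OM1": False, "OM2": False, "OM3": True}))  # True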
Finally, in addition to a TON of other clean-up and the relevant additions/modifications of all the features described in G1301192, I've removed the HAM-A coil driver voltage monitors from this main "control" model. Instead, I've put them in another new front-end simulink model,
${userapps}/sus/h1/models/h1susauxasc0.mdl
which will occupy the 6th and final (spare) core of the h1asc0 front end (specific_cpu=5) and use the (otherwise unused) DCUID of 22. In this way, the HTTS become like every other SUS, where these non-essential coil-driver monitor signals are read out by a separate core, if not an entirely separate IO chassis / front end (like the HAUX, which already have their VOLTMON signals in h1susauxh2.mdl).
This new "monitor" model contains a filter bank for each voltage monitor of all the HTTS (i.e. 4 x 5 = 20),
$(IFO):SUS-$(OPTIC)_M1_VOLTMON_$(DOF)
of which both the IN1 and OUT are stored in the commissioning frames (i.e. 40 new channels), but only the OUTs (which will presumably be calibrated) will be stored in the science frames (i.e. 20 channels). Both are to be stored at 256 Hz, as most of the rest of the HSSS channels in the control model will be.
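To make the channel bookkeeping concrete, here is a short illustrative enumeration (Python, purely a sketch; the UL/LL/UR/LR DOF names are an assumption following the usual four-OSEM convention, not taken from the model):

IFO = "H1"
OPTICS = ["RM1", "RM2", "OM1", "OM2", "OM3"]
DOFS = ["UL", "LL", "UR", "LR"]  # assumed four-OSEM naming

voltmon_banks = ["%s:SUS-%s_M1_VOLTMON_%s" % (IFO, optic, dof)
                 for optic in OPTICS for dof in DOFS]
assert len(voltmon_banks) == 20                  # 4 DOFs x 5 optics
commissioning_channels = 2 * len(voltmon_banks)  # IN1 + OUT of each bank -> 40
science_channels = len(voltmon_banks)            # OUTs only -> 20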
------------
Both models fail to compile, but I believe they're throwing bogus errors.
For h1sushtts, I get:
Bad DAQ channel name specified:
OM3_M1_OSEMINF_UL_IN1
make[1]: *** [h1sushtts] Error 2
make: *** [h1sushtts] Error 1
make failed
log file is /opt/rtcds/lho/h1/data/buildlog/h1sushtts_2013_45_27_21:45:46
I'm not sure why it would fail on OM3, since this DAQ channel comes from the same library part used by the other four HTTSs in the model.
and for h1susauxasc0, I get:
ADC card numbers must be unique
make[1]: *** [h1susauxasc0] Error 255
make: *** [h1susauxasc0] Error 1
make failed
log file is /opt/rtcds/lho/h1/data/buildlog/h1susauxasc0_2013_48_27_21:48:47
This I know is bogus, because the ADC cards are unique.
- Kiwamu, Cheryl
I measured the IO path power budget after turning up the power to 9.1 W incident on IO_MB_M2, the beam splitter to the IO path. This put 1.03 W into the ALS path, which was blocked with a beam dump, and 8.07 W incident on the IO EOM. Temporary beam splitters are still in the IO path, and the power loss there was measured to be 3.65 W.
Two ways to calculate the throughput of the IO path - both meet the requirement to deliver at least 75% of power downstream of the PMC to the IFO.
1) Power incident on IO_MB_M2 to power delivered to the IFO:
9.1W - 3.65W (loss due to beam splitters) = 5.45W
4.27W (power after top periscope mirror) / 5.45W (adjusted power incident on IO_MB_M2) = 78% delivered power
2) Power incident on IO EOM to power delivered to the IFO:
9.1W - 1.03W (power in ALS path) - 3.65W (loss due to beam splitters) = 4.42W
4.27W (power after top periscope mirror) / 4.42W (adjusted power incident on IO EOM) = 97% delivered power
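The same two numbers, reproduced as a small Python check (all values copied from this entry; nothing new assumed):

p_on_m2 = 9.10      # W incident on IO_MB_M2
p_als = 1.03        # W diverted to the (dumped) ALS path
p_bs_loss = 3.65    # W lost at the temporary beam splitters
p_delivered = 4.27  # W after the top periscope mirror

eff_m2 = p_delivered / (p_on_m2 - p_bs_loss)           # method 1: ~0.78
eff_eom = p_delivered / (p_on_m2 - p_als - p_bs_loss)  # method 2: ~0.97
print("%.0f%%  %.0f%%" % (100 * eff_m2, 100 * eff_eom))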
New safe snapshots have been saved for BS, ITMX, ETMX, and IM (using the matlab script save_safe_snap), and they have been committed respectively under revisions 6504/6505/6506/6507.
Also: all in-vac cables were disconnected from the feed-thrus. Cable clamps for these cables were also removed, and all the cables are dangling down from the ISI (Jim will pull them up and stash them up top in the morning).