RickS and DougC After prompting from ChristinaB during this morning's PSL group telecon, we enabled the ISS loop, which had been switched off for the past week or two, reportedly due to instability when operating it with the FSS loop. We first adjusted the refsignal from -1.98 to -1.91, which increased the diffracted power percentage from about 2% to 8-10%. Then, when we closed the loop, the diffracted light power went to about 12%, so we set the refsignal back to -1.93, leaving the diffracted percentage at about 10%. The loop seems to be working properly (see attached screenshot), so we will leave it engaged.
pablo
DougC and RickS On Tuesday, we examined the minute trends of the environmental channels over the past week (see attached figure). Ch. 16 is (supposedly) the particle counter in the H1 Diode Room. Either the dust monitor is malfunctioning, or we are having a lot of dust events at the 100,000-count level.
Bugzilla ID 413 has been opened to address this apparent H1 Diode Room dust glitch issue. Attached are two plots, the first the trend of the particle counter (supposedly in the H1 Diode Room) over the past week and the second a detail of one of the glitches.
Did anyone enter the diode room and look at this dust monitor?
Filiberto, Frank, Thomas This morning we tested continuity and shined a flashlight on each of the PDs to check for a response.
PD forward voltages:
Pins 3&4: 0.415 V
Pins 7&6: 0.418 V
Pins 10&9: 0.417 V
Pins 13&12: 0.417 V
PD responses (dark to illuminated):
Pins 3&4: 64 mV to 270 mV (PD4)
Pins 7&6: 87 mV to 320 mV (PD3)
Pins 10&9: 66 mV to 220 mV (PD2)
Pins 13&12: 57 mV to 240 mV (PD1)
The PDs are not calibrated, and for their purposes maybe they don't need to be, but they work.
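The pass criterion implied above is just that each PD's reading jumps noticeably under the flashlight. A minimal sketch of that check, using the readings from this entry (the factor-of-two threshold is my assumption, not a calibrated spec):

```python
# Sanity check of the PD flashlight test readings (sketch only; the
# factor-of-2 "responds" criterion is an assumption, not a calibration).
pd_readings_mV = {
    "PD1 (pins 13&12)": (57, 240),
    "PD2 (pins 10&9)":  (66, 220),
    "PD3 (pins 7&6)":   (87, 320),
    "PD4 (pins 3&4)":   (64, 270),
}

def responds(dark_mV, lit_mV, min_ratio=2.0):
    """A PD 'works' if the illuminated reading clearly exceeds the dark one."""
    return lit_mV >= min_ratio * dark_mV

for name, (dark, lit) in pd_readings_mV.items():
    print(f"{name}: {dark} mV -> {lit} mV, responds: {responds(dark, lit)}")
```

All four PDs pass this loose criterion, consistent with the "they work" conclusion.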
Just started a low frequency measurement on ISI-ITMY for the night.
*** HPI ***
Undamped
*** ISI ***
750mHz blend on all the DOFs (ST1 & ST2)
Control lvl1 engaged
*** SUS ***
Damped
The goal of this measurement is to collect data in order to implement some tilt correction on the ISI. Tilt correction should improve our performance at low frequencies.
This test should be done by tomorrow morning.
Just started transfer function (TF) measurements for the night on BSC-ISI ETMX. The ISI is undamped; the quad and the TMS are locked.
This test should be done by 6am tomorrow morning.
Attached are plots of dust counts requested from 4 PM September 17 to 4 PM September 18.
Attached are plots of dust counts requested from 4 PM September 16 to 4 PM September 17.
WP#4138 Verbally approved by Mike Landry and Mark Barton over the phone.
To permit Sebastien's ISI commissioning measurements overnight with a non-nominal F1 ETMX OSEM signal, I have modified the h1iopsusex model to replace the AC-and-DC OSEM watchdog parts with the AC-only OSEM WD parts, similar to what was done for SUS-PRM. This will permit SEI operations with an F1 OSEM signal of only 400 counts, but will trip the SEI if the ETMX optic is rung up with a large RMS value. Initially I had installed a new h1iopseiex model which disabled the WD entirely, but in retrospect I decided that putting in the AC-only SUS WD and backing out the SEI changes was a better solution. I have tested the system by manually tripping SUS and saw the trip propagate to SEI.
Tomorrow we should investigate whether the F1 OSEM can be made nominal, and then back out these changes.
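The behavior described above — ignoring a static OSEM offset (the ~400-count F1 signal) while still tripping on a rung-up optic — is what AC coupling before the RMS check buys you. A minimal sketch of that logic, assuming a simple first-order high-pass and an illustrative trip threshold (the filter, sample rate, and threshold here are my assumptions, not the actual RCG watchdog parts):

```python
import numpy as np

def ac_only_wd_trips(osem_counts, fs=256.0, f_hp=0.1, rms_trip=100.0):
    """Return True if the AC-coupled RMS of an OSEM signal exceeds the trip level.

    A DC offset (e.g. a constant ~400 counts on F1) is removed by the
    high-pass, so it does not trip the watchdog; a rung-up optic with a
    large oscillation amplitude does.
    """
    x = np.asarray(osem_counts, dtype=float)
    # First-order high-pass (AC coupling), difference-equation form.
    alpha = 1.0 / (1.0 + 2.0 * np.pi * f_hp / fs)
    y = np.empty_like(x)
    y[0] = 0.0
    for i in range(1, len(x)):
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return float(np.sqrt(np.mean(y ** 2))) > rms_trip

t = np.arange(0, 60, 1 / 256.0)
offset_only = 400.0 * np.ones_like(t)                     # static offset: no trip
rung_up = 400.0 + 500.0 * np.sin(2 * np.pi * 0.45 * t)    # large oscillation: trips
print(ac_only_wd_trips(offset_only), ac_only_wd_trips(rung_up))
```

The AC-and-DC variant would additionally compare the raw (unfiltered) signal against a band, which is what the 400-count F1 offset was failing.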
I restarted the DAQ this afternoon to resync with the h1pemey system, and to add some Beckhoff timing diagnostics slow controls channels to the frame for the MSR MASTER and the CER ISC FANOUT. All the channels viewable on the new SYS-TIMING MEDM screens are now in the frame. The EDCU file is called H1EDCU_TIMING.ini
For testing, I disconnected the CER comparator for a few minutes.
It appears that either the antenna or the cable has failed for the control room NTP server, as indicated by the failure LED on the amplifier interface (it could also be the interface itself). This caused the NTP server to drop to stratum 16, meaning clients would no longer sync to it (particularly after rebooting in the course of troubleshooting). I have restored the default configuration to the NTP server, in which it also references three public NTP servers operated by Symmetricom. This means the NTP server is now a stratum 2 server, rather than the stratum 1 it would be with the GPS reference operating, but it will at least provide time again. In the process I also swapped the LAN 1 and 2 connections so that the admin interface is accessible where it should be and the external NTP servers can be used. LAN 2 now provides a legacy NTP IP for old CDS equipment, which will eventually go away. I will need to revisit the NTP configuration when the antenna is operational, to see whether it is feasible to leave the other NTP references in place as a backup, or whether they could negatively influence the GPS-derived clock, in which case they will need to be removed.
As a result of the NTP server antenna failure, the time of day on the opsws4 and opsws5 computers needed to be reset; they were off by several seconds, which was enough to stop diag from operating. I also reset the time on h1iscey and h1iscex, which were rebooted while no suitable NTP server could be found; as a result, their time-of-day clocks were off by several hours.
Richard, Filiberto, Cyrus, Jim, Dave.
The h1pemey model is running again. The computer and analog racks have been relocated from their H2OAT location to their H1 aLIGO locations. Fiber cables for networking and timing, DC power, and one-stop cables have been re-run. We started with h1iscey to get the PEM model running for the UofO commissioning work this week. I accidentally power cycled h1iscex this afternoon (you are allowed one of these), and due to the Dolphin connection I had to restart the SUS and SEI models at EX. It turns out the IOP DACKILLs were engaged, so nothing was being driven there this afternoon.
We added h1iopiscey and h1pemey to rtsystab and the Dolphin manager, and I have added them back to the site overview MEDM.
- 8:55, Hugh to LVEA, locking WHAM01 HEPI, see his entry, out by 10:43
- 9:48, Cyrus to Y-End station, fiber related work.
- 9:20, Jim and Greg, X-End, SEI related work.
- 9:57, Called X-End to notify staff of dust alarm, counts up to 800 (0.3 µm) and 200 (0.5 µm).
- 12:00, Thomas and Tyler, LVEA West bay test stand area, ACB balancing.
- 13:10, Dave, Jim and Filiberto, Y-End, PEM start-up, reboots after, PEM, DAQ, PEM....
- 14:00, Hugh, LVEA, WHAM6 motion investigation.
- 14:35, Jeff B, LVEA West bay test stand area, ITM-X work.
Put ITM-X into stops so the Seismic crew can payload the ISI.
While the range of motion test (Range_Motion_Adjustable_HAM_HEPI.m) was being run on HAM6, a large step was seen: 22000 counts were going out of the OutFilters. See attached for the local sensor trends; I've applied the calibration of 655 counts/0.001" to display mils. The step occurred near the beginning of the test, while pushing with individual Actuators. Note that the signs of the horizontal steps are all positive! This means either:
* the Crossbeam to Support Tube clamps have slipped out, or
* the Crossbeam has been bent outward.
But the West side Vertical shifted down early in the test and the East side also finished lower, so maybe:
* the Crossbeam rolled and it has a bend.
Dial indicators at the Support Tube ends saw a much greater shift down than the vertical IPS -- 19 to 25 mils! I'm not the first one to run this test on a HEPI, but this shift is enough to put a very careful alignment out of whack. I don't know what to make of it; any thoughts?
I replaced the RAID card in cdsfs1 with a new one. While I had the chassis open, I took the opportunity to vacuum out all the bugs and check the interior cable connections. Even so, upon booting with the new card installed, I was greeted by an I2C bus error from the RAID controller. So I powered off the server and found a loose connection to the disk backplane, which I had either missed or knocked loose when addressing another loose cable (it may also never have been connected properly; once it was reconnected, the RAID controller showed temperature sensors not previously shown by the old controller). On reboot the RAID controller was happy. But once the server booted, the root drive mounted read-only due to EXT4 filesystem journal errors. So I rebooted once again, which forced a full fsck, after which the root filesystem was happy. The RAID controller is now in the process of verifying the RAID; this is a slow process that will take at least a day. So far the system looks healthier, and the RAID verification process should provide a good burn-in period.
The RAID verification process was complete when I arrived this morning. I then unmounted the /raid filesystem* so I could force an fsck on it, to verify the integrity of the file structure itself. The fsck passed, so I remounted /raid. It should now be ready to run rsyncs/backups again. I also started the battery backup test on the RAID controller; this takes 'up to 24 hours' to complete. During this time, if the entire system loses power without a clean shutdown, the contents of any data in flight in the RAID cache will be lost; the system is on UPS power, so this is a low risk.
* I had to modify /etc/exports first to remove /raid, then run exportfs -ra to update NFS; otherwise you get 'filesystem busy' messages. Then the reverse, of course, when the fsck was finished.
The battery backup test passed with an estimated capacity of 255 hours, so the controller can maintain data in the cache for roughly 10 days without external power. I also checked the controller logs; so far they are clean, with no errors.
Locked up the HEPI on HAM1 this morning. Attached is a plot of the IPS local and calc'd cartesian trends. The IPS here are in counts and they have a calibration of 655 counts per mil. The V4 trend still has a slope to it but I don't think it will go too far. I'll try to look again later. Meanwhile please use the numbers at the beginning of the plot as a reference. The IPS show a step at 8am (wasn't me!) but it is pretty small. The largest shift of the IPS from before the step to after the lock is < 200 counts = ~8um. The X Y & Z trends are in nm and max shift is 3000nm. The rotational trends are in nanoradians although I dispute the signs given right-hand-rules. The largest rotational shift is about 1000nrads.
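The IPS calibration quoted above (655 counts per mil, i.e. per 0.001") makes the unit conversions easy to check. A minimal sketch (the function names are mine):

```python
# IPS calibration from the log entries: 655 counts per 0.001" (1 mil).
COUNTS_PER_MIL = 655.0
UM_PER_MIL = 25.4  # 1 mil = 25.4 micrometers, by definition of the inch

def ips_counts_to_mils(counts):
    return counts / COUNTS_PER_MIL

def ips_counts_to_um(counts):
    return ips_counts_to_mils(counts) * UM_PER_MIL

# The < 200-count IPS shift quoted above:
print(round(ips_counts_to_um(200), 1))  # 7.8, i.e. the "~8 um" in the log
```

The same arithmetic applied to the HAM6 dial-indicator shifts (19 to 25 mils) corresponds to 480-640 µm, two orders of magnitude above the HAM1 numbers.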