The beam alignment into the reference cavity was tweaked. Most of the alignment was done by adjusting the mirror mounts on the periscope; the other mounts did not yield as much gain. With the pitch adjustments, the transmission signal went from ~0.75-0.78 to ~1.16. Relatively minor tweaks to the AOM alignment produced no big improvement, which suggests the AOM adjustments are near mid-range. Returning to the periscope, yaw adjustments improved things to ~1.2. Walking the beam in yaw, followed by small pitch adjustments, improved the signal to ~1.5 with the HEPA fans on and the ISS unlocked. The power into the reference cavity was measured to be 30.1 mW (Ophir stick calorimeter). Both mounts on the periscope were locked down as well as possible; the transmission signal was then 1.52. The signal on the RFPD was:
unlocked: -245 mV
locked: -40 mV
offset: -2 mV
With the ISS on, the reference cavity transmission was measured to be 1.54. Attached, for reference, is the camera image of the various spots from the cavities. As an aside, the recent drift in reference cavity transmission seems to coincide with some jumps in the pre-modecleaner temperature.
Jason, Ed, Peter
This morning, before maintenance really took off, we tried switching blends on all of the BSCs while the IFO was still locked. Contrary to my findings a couple weeks ago (alog 23610), it now seems it is possible to switch blends while the IFO is locked without breaking the lock. We started out carefully, switching only the ETMs, one at a time, then the corner station ISIs. We then tried a little faster, doing the corner station all at once, then both ETMs. Finally, we switched all chambers at once. For each of these tests, we switched from our nominal 90 mHz blends to the 45 mHz blends and back. The lock survived each switch, although the ASC loops would ring up some, especially when we switched the ETMs. The corner station ISIs didn't seem to affect ASC as much.
The only IFO difference I know of between the last time I looked at this and now is that Hugh went down and recentered the ETMY T240s. The environment is also different today, with pretty quiet winds (<10 mph) and only moderate microseism (RMS < 0.5 microns).
The attached trends are for the ASC D/C Hard, D/C Soft, ETM oplevs, and corner station oplevs. Similar to what I found a while ago, ETMY seemed to have the biggest effect on the IFO (based on what we saw on the ASC FOMs in the control room), although the ITMY oplev actually moved more. Still, the oplevs didn't see more than about 1.5 microradians of motion at any point.
You can also tell which blends were running from the eyeball RMS of the ASC signal traces. The 90 mHz blends don't filter out the microseism, which is moderate today, so the ASC pitch signals get noisier. This is also visible on the oplevs.
This test makes us ~80% confident we can switch blends while locked, with the caveat that the current environment is not very extreme. If it's windy or the microseism is high, the answer could change.
Title: 12/01/2015, Day Shift 16:00 – 00:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: 16:00 (08:00), the IFO locked at NOMINAL_LOW_NOISE, 22.3 W, 79 Mpc. In Commissioning mode.
Outgoing Operator: TJ
Quick Summary: Start of maintenance window.
Doubling these limits should virtually eliminate locklosses of the nature reported here.
The SDF has been updated, both the OBSERVE and safe files, and they have been committed to the SVN.
Title: 12/1 OWL Shift: 08:00-16:00 UTC (00:00-8:00 PST), all times posted in UTC
State of H1: Maintenance Tuesday
Shift Summary: Still cruising on a 28-hour lock. The DARM spectrum was getting a tiny bit noisy at frequencies < 50 Hz for the last hour or two (that I noticed, at least). Since LLO was down and the impending maintenance day was upon us, I didn't look too far into it.
Incoming Operator: Jeff B
Activity Log:
Just checking on the reference cavity transmission, which continues to fall. The ambient temperature didn't change much. The drop doesn't appear to be due to variation in the pre-modecleaner transmission, nor its temperature.
Starting some maintenance tasks since LLO has been out. We are currently still locked, but that won't last for long.
*****
Incident Type -- Scheduled Outage
Start Date and Time -- 2015-12-15 9:00 CST (UTC-6)
End Date and Time -- 2015-12-15 ~12:00 CST
Service(s) Affected -- GraceDB. This outage is required to move GraceDB onto new hardware. (GraceDB is currently running on temporary hardware after the unscheduled outage on Nov. 13th.) During the migration, GraceDB will be unavailable for a time period of ~hours.
Please direct all questions to uwm-help@gravity.phys.uwm.edu
Observing at 73Mpc for 25 hours.
Environment calm. There have been a handful of glitches, but it doesn't seem like anything out of the ordinary.
Title: 12/1 OWL Shift: 08:00-16:00 UTC (00:00-8:00 PST), all times posted in UTC
State of H1: Observing at 78Mpc for the last 20hrs
Outgoing Operator: Travis
Quick Summary: Travis had it easy, still locked from my shift yesterday. Wind minimal, useism ~0.4 um/s, all lights off, CW inj running, timing error on H1SUSETMY, IPC error on H1ISCEY (I'll clear them when I get the chance).
Title: 11/30 Eve Shift 0:00-8:00 UTC (16:00-24:00 PST). All times in UTC.
State of H1: Observing
Shift Summary: Very quiet shift. Only 6 ETMy saturations not related to any RF45 glitching. 20 hours of coincident observing with LLO.
Incoming operator: TJ
Activity log: None
Nothing of note. A few ETMy saturations not related to any RF45 glitching. Coincident observing with LLO just over 16 hours.
Laser Status:
SysStat is good
Front End power is 31.28 W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is RED
PMC:
It has been locked for 6 days, 7 hr, 24 min (should be days/weeks)
Reflected power is 1.635 W and PowerSum = 25.14 W
FSS:
It has been locked for 0 days, 14 hr, 16 min (should be days/weeks)
TPD[V] = 0.7883 V (min 0.9 V)
ISS:
The diffracted power is around 8.313% (should be 5-9%)
Last saturation event was 0 days, 14 hr, 16 min ago (should be days/weeks)
At 12:58:31 PST the IOP for h1susey took 69 µs for a single cycle, which in turn caused a single IPC receive error on h1iscey. This TIM error has been occurring approximately once a week for EY; in this case it is the IPC error that is unusual. We should clear these accumulated errors the next time we are not in observation mode, or during Tuesday maintenance, whichever comes first.
TITLE: 11/30 DAY Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Locked at 72Mpc for 12+hrs
Incoming Operator: Travis
Support: None needed
Quick Summary: Very quiet shift with H1 hovering between 75-80 Mpc. Seismic in the 0.03-0.1 Hz band is noticeably trending down over the last 24 hrs. useism is holding just under 0.5 um/s (so we still have the 45 mHz blends ON).
Shift Log:
Last Tuesday (24th Nov) Jim and I modified the monit configuration on the h1hwinj1 machine so that when it restarts the psinject process it smoothly ramps the excitation amplitude up over a period of 10 seconds. We manually started the new system on Tuesday, and there had been no crashes of psinject until the last 24 hours. There have been 4 stops (with subsequent automatic restarts) in the past 24 hours; each stop was logged as being due to the error:
SIStrAppend() error adding data to stream: Block time is already past
Here are the start and crash times (all times PST). Monit automatic restarts are marked with an asterisk:
| time of start | time of crash |
| Tue 11/24 14:55:47 | Sun 11/29 17:15:56 |
| Sun 11/29 17:16:00* | Mon 11/30 00:00:14 |
| Mon 11/30 00:01:13* | Mon 11/30 13:09:07 |
| Mon 11/30 13:09:36* | Mon 11/30 13:12:43 |
| Mon 11/30 13:13:39* | still running |
Adding INJ tag so the hardware injection team sees this.
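For illustration, the smooth amplitude ramp-on could look something like the sketch below. This is a hedged sketch, not the actual psinject/monit code; the set_gain callback and the half-cosine profile are assumptions (any monotonic profile that starts at zero and ends at the target would serve the same purpose of avoiding a step transient in DARM at restart):

```python
import math
import time

def ramp_gain(set_gain, target=1.0, ramp_time=10.0, steps=100):
    """Smoothly ramp an excitation gain from 0 to target over
    ramp_time seconds, calling set_gain(value) at each step.
    The half-cosine profile avoids the step discontinuity that
    an abrupt restart at full amplitude would inject."""
    dt = ramp_time / steps
    for i in range(1, steps + 1):
        # Rises smoothly from 0 (i=0) to 1 (i=steps)
        frac = 0.5 * (1.0 - math.cos(math.pi * i / steps))
        set_gain(target * frac)
        time.sleep(dt)
```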
Using a script written to flag CW injection transitions (starts or stops) from the psinject log files on h1hwinj, I found that all of the transitions below occurred during observing mode. I'll ask Laura Nuttall to insert these in the database.
1132905630 1132905633 Nov 30 2015 08:00:13 UTC
1132905689 1132905692 Nov 30 2015 08:01:12 UTC
1132952963 1132952966 Nov 30 2015 21:09:06 UTC
1132952992 1132952995 Nov 30 2015 21:09:35 UTC
1132953179 1132953182 Nov 30 2015 21:12:42 UTC
1132953235 1132953238 Nov 30 2015 21:13:38 UTC
Peter Shawhan reported via e-mail that these restarts happened sporadically in iLIGO days and may be related to a busy network. Presumably, these restarts were not disruptive in those days because the injections were not made via a time-domain inverse actuation filter. Looking at the first of the crashes above (see attached time-domain zooms of the CW injection channel, the IAF-filtered version of it, and DARM), I'm surprised that the effect in DARM isn't larger.
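For reference, the UTC-to-GPS pairing in lists like the one above can be reproduced with a small conversion helper. This is a sketch, not the actual flagging script; it hard-codes the 17-second GPS-UTC leap-second offset that was in effect in late 2015, so it is only valid for timestamps from mid-2015 through 2016:

```python
from datetime import datetime, timezone

GPS_EPOCH_UNIX = 315964800  # Unix time of the GPS epoch, 1980-01-06 00:00:00 UTC
GPS_UTC_LEAP = 17           # GPS-UTC leap-second offset, valid in late 2015

def utc_to_gps(utc_string, fmt="%b %d %Y %H:%M:%S"):
    """Convert a UTC timestamp string (e.g. 'Nov 30 2015 08:00:13')
    to GPS seconds, using the fixed 2015 leap-second count."""
    dt = datetime.strptime(utc_string, fmt).replace(tzinfo=timezone.utc)
    return int(dt.timestamp()) - GPS_EPOCH_UNIX + GPS_UTC_LEAP
```

For example, utc_to_gps("Nov 30 2015 08:00:13") gives 1132905630, matching the first GPS stamp in the list.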
On Nov 3, the ITMY coil driver was powered down in an attempt to clear its brain, which had been giving false bad status indicators. We also changed the code so as to not drop out during these glitches; see T1500555.
Trending the channels shows no status dropouts since the 3 Nov power cycle. Before the power cycle, the status had erroneously indicated a problem several times, at least twice dropping the IFO out of observing.
For quick reference, and if it wasn't made clear from the Primary Task, these are the ISI ST1 and ST2 coil drivers (not SUS coil drivers).
The VerbalAlarm script was stopped, but unfortunately when trying to restart it, I got an error (see below). I'm not sure of a back-up alarm system to employ if this goes down. I have the Calibration MEDM (with GraceDB up on my workstation), but I'm not sure what else we should have up as a back-up.
It's back. TJ called back and let me know of a missing parenthesis needed on line 638.
I'm testing a new commissioning reservation system I wrote this afternoon using Python. This is a file-based system replacing the old EPICS system. The main reason for the change is that the old EPICS system developed problems related to updates and required a lot of maintenance. It was also awkward to configure and restrictive in what it could do.
The reservation file is /opt/rtcds/lho/h1/cds/reservations.txt
There are three python scripts:
make_reservation.py is available to all users; it allows you to create your reservation
display_reservations.py loops every second and shows the currently open reservations
decrement_reservations.py is run as a cron job every minute; it decrements the time-to-live of each reservation and deletes reservations when they expire
I have a display running on the left control room TV closest to the projector screen.
Here is the usage document for the make_reservation.py script
controls@opsws6:~ 0$ make_reservation.py -h
usage: make_reservation.py [-h] system name task contact length
create a reservation for a system
positional arguments:
system the system you are reserving, e.g. model/daq
name your name
task description of your task
contact your contact info (location, phone num)
length length of time (d:hh:mm)
optional arguments:
-h, --help show this help message and exit
If you need to use spaces, surround the string with quotes. There are no limitations on string contents. Here is an example reservation:
make_reservation.py "PEM and DAQ" david.barker "update PEM models, restart DAQ" "phone 255" 0:2:0
Here's another example, from Nutsinee reserving TCSX (BSC3) for 1 hour:
opsws8:~$ make_reservation.py TCSX nutsinee 'BSC Temp Sensor Replacement' LVEA 00:01:00
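The cron-driven housekeeping can be sketched as follows. This is a minimal illustration of the decrement logic only, assuming a simple one-reservation-per-line format with the minutes remaining as the last pipe-separated field; the actual format of reservations.txt may differ:

```python
def decrement_reservations(path):
    """Subtract one minute from each reservation's time-to-live and
    drop entries that have expired. Intended to run once per minute
    from cron. Assumed line format (hypothetical):
    system|name|task|contact|minutes_remaining"""
    with open(path) as f:
        lines = [ln.rstrip("\n") for ln in f if ln.strip()]
    kept = []
    for ln in lines:
        fields = ln.split("|")
        minutes = int(fields[-1]) - 1
        if minutes > 0:
            fields[-1] = str(minutes)
            kept.append("|".join(fields))
    # Rewrite the file with only the still-live reservations
    with open(path, "w") as f:
        for ln in kept:
            f.write(ln + "\n")
```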