Here is a block diagram of the signals that go into the summing node board, the IFO common mode board and the IMC common mode board during the CARM handoff. It can be used as a reference when following the recent commissioning alogs.
7:30? Karen, Cris - LVEA
8:15 Elli - To EX and EY
8:20 Corey - To Squeezer Bay and around LVEA
8:32 Mitch - To W. Bay
8:41 Sudarshan, Thomas - To EY
8:46 P. King - To H2 Enclosure
8:47 Mitch - Back
9:10 R. McCarthy - To EY
9:17 Greg - To LVEA
9:27 Greg - Back
9:34 Corey - Back
9:58 R. McCarthy - Back
~10 J. Kissel, Hugh - To Pump Room setting the pump servo to manual
10:59 J. Kissel, Hugh - Back for meetings, pump servo still on manual.
12:23 R. McCarthy - To EX and EY
12:48 J. Kissel, Hugh - To HEPI Pump
13:02 Alastair, Elli - To LVEA to work on TCSY table
14:05 Alastair, Elli - Back
14:05 J. Kissel, Hugh - Back
15:40 Jodi - To MY
15:44 P. King - Back from H2 Enclosure
Earlier today, with Ryan's encouragement, I turned on more of the I/ETMY isolation loops. Currently these two ISIs are running a configuration like LLO's, with all DOFs engaged on St1 and X, Y, Z, RZ running on St2. The St1 RZ loop is running a high (750 mHz) blend with no T240. St2 is running the same 250 mHz blend on Z and RZ as it was previously running on X and Y. The Stage Guardians for these two chambers have also been modified to turn on these loops, in case we have some traumatic event.
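For reference, turning these loops on from Guardian boils down to flipping the relevant filter-module switches and gains over EPICS. A minimal sketch of the idea, assuming illustrative filter-module names rather than the real ISI channel list (the real Stage Guardian code also ramps gains and manages watchdogs):

from ezca import Ezca  # Guardian's EPICS wrapper

ezca = Ezca('ISI-ETMY')

ST1_DOFS = ['X', 'Y', 'Z', 'RX', 'RY', 'RZ']  # all DOFs on St1
ST2_DOFS = ['X', 'Y', 'Z', 'RZ']              # subset on St2

def engage(stage, dofs):
    for dof in dofs:
        fm = 'ST%d_ISO_%s' % (stage, dof)        # hypothetical FM naming
        ezca.switch(fm, 'INPUT', 'OUTPUT', 'ON')  # enable the filter bank
        ezca[fm + '_GAIN'] = 1.0                  # loop gain on (no ramp here)

engage(1, ST1_DOFS)
engage(2, ST2_DOFS)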
FF is now running on all test mass chambers as well, with X & Y running on ETMY, Y on ITMY, and X on E/ITMX.
Turning on the extra loops seems to mostly reduce the suspension point motion, although vertical motion looks a bit worse at 0.1-0.3 Hz. Attached plots are ETMX and ETMY suspension point motions, respectively. ETMX was running the normal LHO configuration: no RZ loops on, only X & Y on Stage 2, LLO blends, plus St0-St1 FF running on the Y degree of freedom. ETMY was running a mostly LLO configuration: all loops on St1, LLO blends plus the Start blend on RZ, all loops on St2 except RX & RY, with St0-St1 FF running on X & Y.
Jim, is that "Start" blend still using just the L-4C as the inertial sensor? If so, it is going to be noisier than you want below 1 Hz. Let's look for a similar high blend which uses the T-240s.
Plots of the two high blend filters we have. The CPS signal is common to both; blue is the inertial part (L-4C only, no T240 in that blend) of the Start filter, and green and brown are the inertial parts of the T750 filter.
Alastair, Elli, Jamie
We're working on Guardian today, and wanted to install a beam dump at the output of the Y-arm table so that we can play with that laser without injecting power onto the CP. While doing this we discovered the laser was running at only half power. The history here is that a few months ago the same laser was running at half power, and after we swapped in a different driver it went back to full power. The "not working" driver was sent to Caltech, where it was found to be working fine. So this new episode was actually quite useful in showing that a problem still exists.
We went out to the LVEA and found that the power connector at the patch panel was badly connected. We couldn't effect a permanent solution in the 1-hour window we had, but were able to get the laser back to full power again. These connectors will likely need to be replaced long term, and we'll start working on a solution to that.
The output of the table was found to be dumped to a beam dump already (it has been like this since installation of the table was completed).
I've filed Bug 1009 describing this.
Minor changes to the drift monitor threshold updater script:
The log files are now recorded to: "/opt/rtcds/userapps/release/sus/common/medm/sus_driftmon_logs/", and have the following file name structure: SUSDRIFT_{OPTIC NAME}_{START_TIME}-{DURATION}.log (e.g. SUSDRIFT_ALL_1107199664-15.log, SUSDRIFT_MC1_1107199664-15.log)
Jeff informed me that the userapps repo is not a good place for log files, so I moved them back to /tmp/ until a better place can be determined. The same filename structure applies.
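For reference, a minimal sketch of how such a log path is assembled under the convention above (the function name is illustrative, not from the actual script):

import os

LOG_DIR = '/tmp'  # moved back here until a better home is found

def driftmon_log_path(optic, start_gps, duration_min):
    # SUSDRIFT_{OPTIC NAME}_{START_TIME}-{DURATION}.log
    fname = 'SUSDRIFT_%s_%d-%d.log' % (optic, start_gps, duration_min)
    return os.path.join(LOG_DIR, fname)

print(driftmon_log_path('MC1', 1107199664, 15))
# -> /tmp/SUSDRIFT_MC1_1107199664-15.log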
No restarts reported.
Kiwamu, TJ, Elli
This morning at EY, while plugging HWS cables into the ISCTEY feedthrough panel, I touched the fiber bringing the red PLL reference signal to EY. This changed the fiber's wrong-polarization fraction (see channel H1:ALS-Y_FIBR_LOCK_FIBER_POLARIZATIONPERCENT) from 1% to 32%, which is above the maximum allowable value of 30%. This fiber is also connected to the ISCTEY feedthrough panel, and some of the excess fiber is sitting in a coil on top of ISCTEY. All I did to change the polarization was gently move the coil on top of the table. When Kiwamu and TJ noticed the polarization had changed, I nudged the fiber coil on top of the table around a bit until the fiber polarization returned to 8%. It seems it is very easy to bump the fiber and change the polarization.
One can use the fiber polarization controller in the MSR to bring it back.
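Since this is so easy to bump, a simple threshold check could be scripted. A minimal sketch using pyepics; the 30% limit is from the channel described above, everything else is an assumption:

import epics  # pyepics; assumes channel access to the ALS channels

POL_CHAN = 'H1:ALS-Y_FIBR_LOCK_FIBER_POLARIZATIONPERCENT'
MAX_PERCENT = 30.0  # maximum allowable wrong-polarization fraction

def fiber_polarization_ok():
    """Return True if the wrong-polarization fraction is within range."""
    pct = epics.caget(POL_CHAN)
    if pct is None:
        raise RuntimeError('could not read %s' % POL_CHAN)
    return pct < MAX_PERCENT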
We had another lock of about 40 seconds, during which all the signals seemed more stable than last night. It was knocked out by an oscillation that showed up in both the MICH WFS and MICH length loops. We added an extra boost (2 poles at 0.1 Hz and 2 zeros at 2 Hz) to the CARM path, which seems to have made things more stable. The lockloss time was 11:56:32 UTC Feb 5th.
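For anyone curious about the shape of that boost, here is a minimal scipy sketch of its frequency response; the unity gain above 2 Hz is an assumption about how the stage is normalized:

import numpy as np
from scipy import signal

# 2 poles at 0.1 Hz, 2 zeros at 2 Hz: gain -> 1 above 2 Hz and
# (2 / 0.1)^2 = 400 (~52 dB) at DC, i.e. a low-frequency boost.
zeros = -2 * np.pi * np.array([2.0, 2.0])   # rad/s
poles = -2 * np.pi * np.array([0.1, 0.1])   # rad/s
k = 1.0                                      # unity gain at high frequency

f = np.logspace(-3, 2, 500)                  # 1 mHz to 100 Hz
w, h = signal.freqs_zpk(zeros, poles, k, worN=2 * np.pi * f)
print('gain at 1 mHz: %.0fx (~%.0f dB)' % (abs(h[0]), 20 * np.log10(abs(h[0]))))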
Today we saw that our alignment was drifting much more quickly than we had seen in the last few weeks. Without ASC we could only lock DRMI for a few minutes before MICH would become misaligned. The attached screenshot shows the slow drift of ITMX pitch (the red trace), which is about 1.5 urad over the last 3 hours. We are not sure if this is real or just OpLev drift. We spent some time on the MICH ASC loops. We phased ASB 36 so that the BS shows up in Q, and saw that this is a better signal for the WFS throughout the CARM offset reduction, which is what Ryan says they saw at LLO as well. They come on with a bandwidth of about 1 Hz and hold the DRMI buildup stable.
We also ran into difficulty tonight with the PSL rotation stage: it stopped responding to the command button for several hours. We need some code that checks whether the input power is really the requested power, or we could try running through the whole sequence at the same power.
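A minimal sketch of the kind of check we have in mind, assuming pyepics and a hypothetical power readback channel and tolerance:

import time
import epics  # pyepics for channel access

MEASURED_CHAN = 'H1:IMC-PWR_IN_OUTPUT'  # hypothetical readback name
TOLERANCE = 0.05                        # 5% fractional agreement (assumed)

def power_reached(requested_watts, timeout=60):
    """Poll the measured input power until it matches the request, or time out."""
    t0 = time.time()
    while time.time() - t0 < timeout:
        measured = epics.caget(MEASURED_CHAN)
        if measured is not None and \
                abs(measured - requested_watts) < TOLERANCE * requested_watts:
            return True
        time.sleep(1)
    return False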
Here is a StripTool of the lock acquisition. My DTT trend is taking too long to run, and I am tired...
I have attached the temperature plot during the lock time (from 2 hrs before lock loss to lock loss). To see if the ITMX pitch drift was real, I attached the vertical DOF of the top stage as well...
The LVEA average temperature is taken from many sensors. Here is a map of the individual temperature sensors - which are available in dataviewer, I believe.
For example the temperature sensor nearest ITM X might be Zone 3B sensor 3A.
This lock event shows some interesting things. An oscillation in MICH is what eventually breaks the lock, but (I think) only because at that point the sideband power in the recycling cavity had dropped significantly. As soon as REFL 9I is engaged, there is an oscillation at ~0.45 Hz showing up in AS RF45 and RF36, and it is clearly visible in the ASAIR RF90 power. The DARM length correction signal shows awful "bursts" at that point.
This is the lock trend I meant to post last night.
This is the conlog replica.
It appears to have started around 5:00 PM and repeated each hour after. I had to restart the database replication; the database appears to have been crashing each time. It may not be a coincidence that the search for frequently changing channels also runs every hour, so I have disabled it.
cdsadmin@h1conlog3:/var/log$ grep error syslog
Feb 4 17:04:08 h1conlog3 kernel: [2335398.323116] res 51/40:0a:16:e8:88/00:00:00:00:00/0b Emask 0x9 (media error)
Feb 4 17:04:08 h1conlog3 kernel: [2335398.323407] ata3.00: error: { UNC }
Feb 4 17:04:08 h1conlog3 kernel: [2335398.344358] Add. Sense: Unrecovered read error - auto reallocate failed
Feb 4 17:04:08 h1conlog3 kernel: [2335398.344367] end_request: I/O error, dev sda, sector 193521686
Feb 4 17:04:11 h1conlog3 kernel: [2335401.397976] res 51/40:02:16:e8:88/00:00:00:00:00/0b Emask 0x9 (media error)
Feb 4 17:04:11 h1conlog3 kernel: [2335401.398264] ata3.00: error: { UNC }
Feb 4 17:04:11 h1conlog3 kernel: [2335401.420344] Add. Sense: Unrecovered read error - auto reallocate failed
Feb 4 17:04:11 h1conlog3 kernel: [2335401.420352] end_request: I/O error, dev sda, sector 193521686
...
I attempted a simple search and it crashed with the same syslog error. I am going to try to disable the webpage. Please do not try to use conlog until this is resolved.
To clarify, the acquisition of data appears to be ok, but the machine with the copy of the data that is used for user searches is not.
From the MySQL error log:

InnoDB: Error: tried to read 16384 bytes at offset 7 692060160.
InnoDB: Was only able to read 8192.
150204 17:04:11 InnoDB: Operating system error number 0 in a file operation.
InnoDB: Error number 0 means 'Success'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
InnoDB: File operation call: 'read'.
InnoDB: Cannot continue operation.
150204 17:04:13 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
150204 17:04:13 [Note] Plugin 'FEDERATED' is disabled.
150204 17:04:13 InnoDB: The InnoDB memory heap is disabled
150204 17:04:13 InnoDB: Mutexes and rw_locks use GCC atomic builtins
150204 17:04:13 InnoDB: Compressed tables use zlib 1.2.8
150204 17:04:13 InnoDB: Using Linux native AIO
150204 17:04:13 InnoDB: Initializing buffer pool, size = 1.0G
150204 17:04:13 InnoDB: Completed initialization of buffer pool
150204 17:04:13 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 47921973351
150204 17:04:13 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 47922305837
150204 17:04:13 InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Last MySQL binlog file position 0 4307, file name /var/log/mysql/mysql-bin.000221
150204 17:04:14 InnoDB: Waiting for the background threads to start
150204 17:04:15 InnoDB: 5.5.38 started; log sequence number 47922305837
150204 17:04:15 [Note] Recovering after a crash using /var/log/mysql/mysql-bin
150204 17:04:15 [Note] Starting crash recovery...
150204 17:04:15 [Note] Crash recovery finished.
150204 17:04:16 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
150204 17:04:16 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
150204 17:04:16 [Note] Server socket created on IP: '0.0.0.0'.
150204 17:04:17 [Note] Event Scheduler: Loaded 0 events
150204 17:04:17 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.5.38-0ubuntu0.14.04.1-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
(Peter K, Richard M, Filiberto C, Daniel S)
We noticed that the fast channels corresponding to H1:IMC-PWR_IN and H1:IMC-PWR_EOM were never hooked up. This required installing the PD interface box in the PSL enclosure, running the DAQ cable into the PSL, and installing a tee in the photodetector readback. (These channels have previously been hooked up to the EtherCAT system, so that they are available to the rotation stage.) The EOM channel is currently railed and needs an adjustment of the PD gain.
EPICS updates:
Another update:
I have put a filter in IMC-PWR_IN in order to calibrate the signal into watts. I used the slow readback (i.e. H1:PSL-PERISCOPE_A_DC_POWERMON) as a reference for the calibration, such that IMC-PWR_IN matches the slow one. So now the filter bank looks like this:
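For reference, the calibration gain is just the ratio of the calibrated slow readback to the raw fast signal. A minimal sketch of the arithmetic; the raw fast-channel readback point here is an assumption:

import epics  # pyepics for channel access

slow_watts = epics.caget('H1:PSL-PERISCOPE_A_DC_POWERMON')  # reference, in W
fast_raw = epics.caget('H1:IMC-PWR_IN_INMON')               # assumed raw input readback

gain = slow_watts / fast_raw  # W per count, loaded as the filter gain
print('calibration gain: %.3e W/count' % gain)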