Title: 9/30 OWL Shift: 7:00-15:00 UTC (00:00-8:00 PDT), all times posted in UTC
State of H1: Observation Mode at 75 Mpc for the last 6 hrs
Shift Summary: I had one lockloss, but it came back up with relative ease. The RF noise wasn't bothering me like it did in my previous shifts.
Incoming Operator: Jeff B
Activity Log:
Had one lockloss at 7:43 UTC, but brought it back up and into observing at 8:37. I'm still not sure what caused the lockloss.
Aside from that it is a quiet environment and everything seems to be humming along.
There was a GraceDB query failure with the last query at 11:48 UTC. I followed the instructions on this wiki and it started up just fine.
Back to Observing
Lockloss at 07:43 UTC.
No idea what may have caused it yet. There was an ITMX saturation, control loops looked normal, no seismic activity, all the monitors showed normal operation.
Title: 9/29 Eve Shift 23:00-7:00 UTC (16:00-24:00 PDT). All times in UTC.
State of H1: Observing
Shift Summary: One lockloss due to an ITMy saturation. One lockloss due to measurements being made while LLO was down. This resulted in a net of 16 minutes of lost coincident observing time. Observing for all but ~1 hour of my shift. RF45 has been stable the entire shift. Wind and seismic activity quiet.
Incoming operator: TJ
Activity log:
23:25 Lockloss, ITMy saturation
23:26 Kyle and Gerardo back from EY
2:04 Out of observing while LLO is down so Sheila can make measurements
2:32 Lockloss due to measurements
3:00 Observing Mode
When DTT gets data from NDS2, it apparently uses the wrong sample rate if the sample rate has changed. The plot shows the result: notice that the 60 Hz magnetic peak appears at 30 Hz in the NDS2 data displayed with DTT. This is because the sample rate was changed from 4k to 8k last February. Keita pointed out discrepancies between his periscope data and Peter F's. The plot shows that the periscope signal, whose rate was also changed, has the same problem, which may explain the discrepancy if one person was looking at NDS and the other at NDS2. The plot shows data from the CIT NDS2. Anamaria tried this comparison for the LLO data and the LLO NDS2 and found the same type of problem. But the LHO NDS2 just crashes with a "Test timed-out" message.
Robert, Anamaria, Dave, Jonathan
It can be a factor of 8 (or 2 or 4 or 16) using DTT with NDS2 (Robert, Keita)
In the attached, the top panel shows the LLO PEM channel pulled from the CIT NDS2 server, and the bottom panel shows the same channel from the LLO NDS2 server, both for the exact same time. The LLO server result happens to be correct, but the frequency axis of the CIT result is a factor of 8 too small, while the Y axis of the CIT result is a factor of sqrt(8) too large.
Jonathan explained this to me:
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo.caltech.edu L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 2
Channel Rate chan_type
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 2048 raw real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 raw real_4
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo-la.caltech.edu L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 3
Channel Rate chan_type
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 online real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 2048 raw real_4
L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 raw real_4
As you can see, both at CIT and LLO the raw channel sampling rate was changed from 2048 Hz to 16384 Hz, and raw is the only channel type available at CIT. At LLO, however, there is also an "online" channel type available at 16k, which is listed before "raw".
Jonathan told me that DTT probably takes the sampling rate from the first entry in the channel list, regardless of the epoch during which each sampling rate was actually in use. In this case DTT takes 2048 Hz from CIT but 16384 Hz from LLO, while obtaining the 16 kHz data in both cases. If that's true, there is a frequency scaling of 1/8 as well as an amplitude scaling of sqrt(8) in the CIT result.
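A quick numpy sketch (my illustration, not DTT internals) of how displaying 16384 Hz data with a stale 2048 Hz rate produces the observed factor-of-8 frequency shift and sqrt(8) amplitude inflation:

```python
import numpy as np

# 16384 Hz data wrongly labeled as 2048 Hz (the stale first channel-list entry).
fs_actual, fs_assumed = 16384.0, 2048.0
n = 16384
t = np.arange(n) / fs_actual
x = np.sin(2 * np.pi * 60.0 * t)  # a 60 Hz line, e.g. mains magnetic pickup

spec = np.abs(np.fft.rfft(x))
peak_ok = np.fft.rfftfreq(n, d=1 / fs_actual)[np.argmax(spec)]   # correct axis
peak_bad = np.fft.rfftfreq(n, d=1 / fs_assumed)[np.argmax(spec)]  # stale axis

# ASD normalization carries a 1/sqrt(fs) factor, so the wrong rate also
# inflates the amplitude axis by sqrt(fs_actual/fs_assumed) = sqrt(8).
asd_ratio = np.sqrt(fs_actual / fs_assumed)
```

The 60 Hz peak lands at 60/8 = 7.5 Hz on the mislabeled frequency axis, matching the 1/8 scaling described above.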
FYI, for the corresponding H1 channel in CIT and LHO NDS2 server, you'll get this:
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo.caltech.edu H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 2
Channel Rate chan_type
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 8192 raw real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 raw real_4
keita.kawabe@opsws7:~ 0$ nds_query -l -n nds.ligo-wa.caltech.edu H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ
Number of channels received = 3
Channel Rate chan_type
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 online real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 8192 raw real_4
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ 16384 raw real_4
In this case, the data from LHO happens to be good, but CIT frequency is a factor of 2 too small and magnitude a factor of sqrt(2) too large.
Part of the issue is that DTT does not handle a channel whose sample rate changes over time.
DTT retrieves a channel list from NDS2 that includes all channels with their sample rates; it takes the first entry for each channel name and ignores any following entries with different sample rates. It then uses that first sample rate as the channel's sample rate for all times. So the data it retrieves may be 8k data, but DTT treats it as 4k data and misinterprets it.
I worked up a band-aid that inserts a layer between DTT and NDS2 and essentially makes it ignore specified channel/sample rate combinations. This has let Robert do some work. We are not sure how this scales and are investigating a fix to DTT.
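Conceptually, the band-aid does something like the following (a hypothetical sketch, not the actual shim; the blacklist entries are taken from the listings above): it drops known-stale (channel, rate) pairs from the channel list before DTT sees them, so the first surviving entry carries the current rate.

```python
# Hypothetical blacklist of stale (channel name, sample rate) combinations.
STALE = {
    ("L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", 2048),
    ("H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", 8192),
}

def filter_channel_list(entries):
    """Drop blacklisted (name, rate) entries from the server's channel
    list so DTT's first-entry heuristic picks up the current rate."""
    return [e for e in entries if e not in STALE]

# The CIT listing from above: stale 2048 Hz entry first, current 16384 Hz second.
cit_listing = [("L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", 2048),
               ("L1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ", 16384)]
filtered = filter_channel_list(cit_listing)
```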
As follow-up, we have gone through two approaches to fix this:
Back to Observing Mode @ 3:03 UTC.
Since LLO went out of lock, Sheila asked if she could complete some measurements that she didn't finish during maintenance. I gave her the OK and went to commissioning mode since we aren't losing any coincident data time.
I caused a lockloss by moving TMSX too quickly while doing this test.
I also spent some time earlier in the day (during maintenance recovery) doing some excitations on TMS and the end station ISIs to investigate the noise that seems to come from TMSX. An aLOG with results will be coming soon.
I updated the GDS calibration correction filters today to reflect the bug fixes to the actuation and sensing time delays (see aLOG #22056). Attached are plots of the residual and control correction filters, which include the updated time delays. I have also attached plots that compare the h(t) spectra from the CALCS and GDS calibration pipelines and the spectrum residuals. There is now a larger discrepancy between CALCS and GDS because the time delays that were added to CALCS to bring the two closer together are now no longer as accurate. Updates to the delays in CALCS may be coming as the differences are investigated more.
The new GDS calibration correction filters were generated using
create_partial_td_filters_O1.m
which is checked into the calibration SVN (r1560) under
aligocalibration/trunk/Runs/O1/Common/MatlabTools.
aligocalibration/trunk/Runs/O1/GDSFilters
The filters file is called H1GDS_1127593528.npz.
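For reference, a pure time delay tau enters a frequency-domain correction filter as a linear-phase factor; this is the standard form, not the actual filter-generation code:

```python
import numpy as np

def delay_filter(freqs, tau):
    """Frequency response of a pure time delay tau (seconds):
    exp(-2*pi*i*f*tau) -- unit magnitude, linear phase."""
    return np.exp(-2j * np.pi * freqs * tau)

# Example: a 1 ms delay evaluated at 100 Hz.
resp = delay_filter(np.array([100.0]), 1e-3)
mag = float(abs(resp[0]))            # 1.0: a delay only rotates phase
phase = float(np.angle(resp[0]))     # -2*pi*100*1e-3 = -0.2*pi rad
```

Getting the sign and size of tau wrong is exactly the kind of error that shows up as residual phase discrepancy between the CALCS and GDS pipelines.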
Back to Observing @ 23:50 UTC.
Lockloss @ 23:25 UTC. ITMy saturation.
Laura, Jordan
People have been decreasing the RF45 modulation index as a fix for extreme glitchiness associated with RF45 AM noise (described here among other places). I did a quick check to see if this has any noticeable effect on the rate of background triggers. I made plots of omicron glitchgrams and trigger rates for times where the modulation index was decreased, and for nearby times with the modulation index at its nominal level and no other obvious issues (many plots). Attached are rate plots from some recent times.
For reference, the nominal plot is from lock # 31 and the decreased plot is from # 39 according to https://ldas-jobs.ligo.caltech.edu/~detchar/summary/O1
Activity Log: All Times in UTC (PT)
15:00 (08:00) Take over from TJ
15:00 (08:00) Start of maintenance window
15:00 (08:00) Christina & Karen – Cleaning at End-Y
15:00 (08:00) Richard – Going into CER to work on 45MHz driver
15:10 (08:10) Sprague – On site; Joe D. to escort through LVEA, Mid and End Stations
15:18 (08:18) Peter – Going into LVEA to work with Richard on 45MHz driver
15:22 (08:22) Ken – Working on the solar power on X-Arm
15:22 (08:22) Leslie – Going to End-X and End-Y for cleaning work
15:24 (08:24) Lockloss – Maintenance
15:25 (08:25) Hugh – Taking down HAM5 ISI for rogue excitation fix
15:26 (08:26) Keita – Going into PSL to swap 45MHz driver
15:28 (08:28) Filiberto & Manny – Taking photos of electronics racks at Mid-X
15:42 (08:42) Bubba – Going to End-Y chiller yard to replace vibration dampers on compressor
15:48 (08:48) Filiberto & Manny – Taking photos of electronics racks at Mid-Y
15:50 (08:50) Jim B. – Working on hardware injection computer
15:54 (08:54) Hugh – Going to both end stations to check HEPI fluid levels
15:55 (08:55) Betsy – Running charge measurements at both end stations
16:00 (09:00) Christina & Karen – Cleaning at End-X and End-Y (NOTE: a lot of moths at End-Y)
16:00 (09:00) Robert & Vinnie – At End-Y glitch hunting
16:05 (09:05) Filiberto & Manny – Taking photos of electronics racks at End-Y
16:07 (09:07) Mitch – In LVEA to place signage – will open receiving roll-up door
16:09 (09:09) Betsy – In LVEA to help Mitch and hunt for parts
16:09 (09:09) Jodi – In LVEA to check 3IFO stuff
16:10 (09:10) Jason & Peter – Moving PSL equipment from OSB to LSB
16:10 (09:10) Filiberto & Manny – Finished at End-Y – heading for the LVEA
16:13 (09:13) Keita – Out of the PSL
16:30 (09:30) Hugh – Shut down dust monitors at both end stations
16:30 (09:30) Mitch – Finished in the LVEA
16:30 (09:30) Filiberto & Manny – In LVEA by HAM3 taking photos of electronics racks
16:32 (09:32) Hugh – Checking CS HEPI fluid levels
16:33 (09:33) Jodi – Out of the LVEA
16:35 (09:35) Betsy – Out of the LVEA
16:37 (09:37) Filiberto & Manny – Out of the LVEA
16:45 (09:45) Hugh – Delivering 3IFO parts to storage in the LVEA
16:50 (09:50) Jodi – Taking tour through LVEA
16:55 (09:55) Jason & Peter – Finished moving PSL parts
17:00 (10:00) Mitch – Moving (forklift) load from LSB to VPW
17:10 (10:10) Added 175 ml water to PSL crystal chiller
17:12 (10:12) Kyle & Joe – Taking 1-Ton and trailer to near End-X to recover leak detector
17:18 (10:18) Christina & Karen – Cleaning in the LVEA
17:30 (10:30) Bubba & John – Going to End-X chiller yard to replace vibration dampers on compressor
17:31 (10:31) Beverage service on site to check the vending machines
17:35 (10:35) Corey & Patrick – Going to sweep End-Y
17:39 (10:39) Gerardo – Delivering batteries to near both end stations
17:46 (10:46) Vinnie & Jordan – Going to both end stations to install PEM equipment
17:47 (10:47) Paradise Water on site
17:47 (10:47) PraxAir truck leaving site (was at Mid-X)
17:52 (10:52) Ken – Going to work on solar panels on Y-Arm
18:05 (11:05) Sudarshan – Taking TFs at End-X
18:12 (11:12) Karen & Christina – Out of the LVEA
18:15 (11:15) Corey & Patrick – Going to sweep End-X
18:34 (11:34) Betsy & Cheryl – Sweeping the LVEA
18:45 (11:45) Gerardo – Going to End-X to check annulus gauges
18:45 (11:45) John & Bubba – Finished with chiller work – checking gauges in End-X
18:48 (11:48) Richard – In LVEA to shut off wireless APs
20:06 (13:06) IFO locked at NOMINAL_LOW_NOISE, 22.5W, 62Mpc
20:32 (13:32) Lockloss – See aLOG 22073
20:41 (13:41) Kyle & Joe – Out to X2A and Mid-X
20:42 (13:42) Daniel – Going into CER
20:47 (13:47) Carpet contractor on site to meet with Bubba
20:47 (13:47) Daniel – Out of CER
22:11 (15:11) IFO locked at NOMINAL_LOW_NOISE, 22.5W, 75Mpc
22:18 (15:18) Set intent bit to Observing
23:00 (16:00) Turn over to Travis
Title: 09/29/2015, Day Shift 16:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
Support: Sheila, Hugh, Betsy
Incoming Operator: Travis
Shift Summary:
- 15:00 IFO locked. Observing Mode set to Maintenance.
- 15:24 Lockloss – Maintenance
- 16:30 Turned off dust monitors at both end stations
- 17:05 Reset HAM5 L4C saturation WD
- 19:00 Corey & Patrick – Sweep of end stations
- 19:00 Cheryl & Betsy – Sweep of LVEA
Since LLO had already gone down (we think for maintenance), TJ let me start some maintenance work that needs the full IFO locked. At about 14:32 UTC on Sept 29th we went to commissioning to start running the A2L script as described in WP #5517.
The script finished right before an EQ knocked us out of lock. Attached are the results; we can decide during the maintenance window whether we are keeping these decouplings.
The three changes made by the script which I would like to keep are ETMX pit, ETMY yaw, and ITMY pit. These three gains are accepted in SDF. Since we aren't going to do the other work described in the WP, this is now finished.
All the results from the script are:
ETMX pit changed from 1.263 to 1.069 (1st attachment, keep)
ETMX yaw reverted (script changed it from 0.749 to 1.1723 based on the fit shown in the second attachment)
ETMY pit reverted (script changed it from 0.26 to 0.14 based on the 3rd attachment)
ETMY yaw changed from -0.42 to -0.509, based on fit shown in 4th attachment
ITMX no changes were made by the script (5th and 6th attachments)
ITMY pit (from 1.37 to 1.13 based on 7th attachment, keep)
ITMY yaw reverted (changed from -2.174 to -1.7, based on the 8th attachment which does not seem like a good fit)
By the way, the script that I ran to find the decoupling gains is userapps/isc/common/decoup/run_a2l_vII.sh. Perhaps next time we use this we should try a higher drive amplitude, to try to get better fits.
I ran Hang's script that uses the A2L gains to determine a spot position (alog 19904), here are the values after running the script today.
     | vertical (mm) | horizontal (mm)
ITMX | -9            |  4.7
ITMY | -5.1          | -7.7
ETMX | -4.9          |  5.3
ETMY | -1.2          | -2.3
I also re-ran this script for the old gains:

     | vertical (mm) | horizontal (mm)
ITMX | -9            |  4.7
ITMY | -6.2          | -7.7
ETMX | -5.8          |  5.3
ETMY | -1.2          | -1.9
So the changes amount to +0.4 mm in the horizontal direction on ETMY, -0.9 mm in the vertical direction on ETMX, and -1.1mm in the vertical direction on ITMY.
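As a cross-check, the quoted shifts can be reproduced from the two tables; note that they correspond to old minus new (my inference of the sign convention, since new minus old gives the opposite signs):

```python
# Spot positions as (vertical mm, horizontal mm), copied from the tables above.
new = {"ITMX": (-9.0, 4.7), "ITMY": (-5.1, -7.7),
       "ETMX": (-4.9, 5.3), "ETMY": (-1.2, -2.3)}
old = {"ITMX": (-9.0, 4.7), "ITMY": (-6.2, -7.7),
       "ETMX": (-5.8, 5.3), "ETMY": (-1.2, -1.9)}

# Old minus new reproduces the quoted +0.4 / -0.9 / -1.1 mm shifts.
shift = {k: (round(old[k][0] - new[k][0], 1),
             round(old[k][1] - new[k][1], 1))
         for k in new}
```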
Please be aware that in my code estimating the beam position, I neglected the L2 angle -> L3 length coupling, which would induce an error of l_ex / theta_L3, where l_ex is the length induced by the L2a -> L3l coupling when we dither L2, and theta_L3 is the angle L3 tilts through the L2a -> L3a coupling.
Sorry about that...
Now that the multi-processing version of HWInjReport is operational and returning results (multi-processing code is an absolute bear!), this is the first of daily reports of analysis of output from HWInjReport. Currently, HWInjReport still has to be run manually and requires one to checkout the current schedule file from the SVN (currently at https://svn.ligo.caltech.edu/svn/dac/hwinj/Details/tinj/schedule; if this has changed, please let me know so I can modify the run script appropriately). Automatic execution of HWInjReport is soon to come. I've attached copies of the output report file and the schedule file used. NOTE: the report file is very wide due to the number and size of the columns in the network injections tables of the report. To examine the report, you will need to either zoom out or change the font size in your browser/text editor to 10pt or less (I'm looking into compressing the columns in the network injections tables in future updates).
The daily run was performed with the following parameters:
GPS Start Time = 1127163296 # Beginning of time span, in GPS seconds, to search for injections
GPS End Time = 1127249696 # Ending of time span, in GPS seconds, to search for injections
Check Hanford IFO = True # Check for injections in the Hanford IFO frame files.
Check Livingston IFO = True # Check for injections in the Livingston IFO frame files.
IFO Coinc Time = 0.01 # Time window, in seconds, for coincidence between IFO injection events.
Check ODC_HOFT = True # Check ODC-MASTER_CHANNEL_OUT_DQ channel in HOFT frames.
Check ODC_RAW = True # Check ODC-MASTER_CHANNEL_OUT_DQ channel in RAW frames.
Check ODC_RDS = True # Check ODC-MASTER_CHANNEL_OUT_DQ channel in RDS frames.
Check GDS_HOFT = True # Check GDS-CALIB_STATE_VECTOR channel in HOFT frames.
Report Normal = True # Include normal (coherent, consistent, and scheduled) injections in report
Report Anomalous = True # Include anomalous (incoherent, inconsistent, or unscheduled) injections in report
NOTE: "coherent" should read "coincident"; I missed changing that in the code where it outputs the report.
The schedule file only has injections spanning 1125280499 - 1126450499. This is outside the range of times checked by HWInjReport, so there are no occurring or non-occurring scheduled injections reported.
No normal injections, as defined above for HWInjReport, were reported for the network injections. All injections found were reported as UNSCHEDULED, and all injections occurring were reported as CBC injections.
Two H1-L1 coincident injections were found: CBC 1127175853.757(H1), 1127175853.764(L1) and CBC 1127179822.757(H1), 1127179822.764(L1). Both injections were reported as UNSCHEDULED but, otherwise, had no other reported anomalies.
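The 0.01 s coincidence window behaves as sketched below (my illustration, not HWInjReport's actual matching code), using the reported injection times; the two pairs above differ by 7 ms and so fall inside the window, while the third H1 time has no L1 partner:

```python
COINC_WINDOW = 0.01  # seconds ("IFO Coinc Time" parameter above)

def pair_coincident(h1_times, l1_times, window=COINC_WINDOW):
    """Greedily pair each H1 injection time with the first unused L1
    time within the coincidence window (illustration only)."""
    pairs, used = [], set()
    for th in h1_times:
        for i, tl in enumerate(l1_times):
            if i not in used and abs(th - tl) <= window:
                pairs.append((th, tl))
                used.add(i)
                break
    return pairs

h1 = [1127175853.757, 1127179822.757, 1127173426.757]
l1 = [1127175853.764, 1127179822.764]
pairs = pair_coincident(h1, l1)  # two coincident pairs; 1127173426.757 is H1-only
```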
H1 had only 1 single-IFO injection, CBC 1127173426.757 (the report file shows 3, but only 1 is actually an H1-only injection. There is apparently a bug due to the multi-processing code that is not propagating the association of the other 2 injections with their corresponding L1 injections. It's basically a problem of how memory works for a multi-processing environment.)
L1 had 5 single-IFO injections:
The first three injections have the anomaly that they occur in the ODC hoft and GDS hoft frames but not in the ODC raw or ODC rds frames. The remaining 2, other than being UNSCHEDULED, had no other anomalies reported.
ADDENDUM: I was able to successfully fix the data propagation bug. I've attached a copy of the resulting "fixed" report that correctly shows the single-IFO injections for H1 and L1.
Peter Shawhan and I examined the anomalies more closely and found they are not anomalies. The missing injections in RAW and RDS for L1 do actually occur, but HWInjReport missed them. My current working hypothesis is that the code missed these injections because of how it has to separate the list of files passed to FrBitmaskTransitions into chunks of no more than 4090 files. This is to prevent the number of arguments passed to FrBitmaskTransitions, one per file, from exceeding the number of arguments supported by the OS (I actually ran into this issue at one point with the RAW frame files). HWInjReport merges the output from the chunks into a single contiguous internal list; however, it currently does not account for doubled transitions (two "off" or "on" transitions placed consecutively) while merging. This can cause the code to become misaligned when reconstructing injections from the bit transitions, so it misses them completely.
I am reasonably convinced this is the case because when I performed a run on a time-span around the anomalies, 1127162120 - 1127162970, the anomalies do not occur. But, this is because the list of files is much smaller and only needs one chunk, instead of several, to be passed to FrBitmaskTransitions.
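The merge fix I have in mind looks roughly like this (a sketch under the hypothesis above, not the actual HWInjReport code): when concatenating per-chunk transition lists, drop any transition that repeats the previous state, so doubled transitions at chunk boundaries can no longer misalign the on/off pairing.

```python
def merge_transitions(chunks):
    """Concatenate per-chunk (gps_time, state) transition lists,
    collapsing doubled transitions (two consecutive 'on' or 'off'
    entries) that can appear at chunk boundaries."""
    merged = []
    for chunk in chunks:
        for t, state in chunk:
            if merged and merged[-1][1] == state:
                continue  # doubled transition: keep the first, drop the repeat
            merged.append((t, state))
    return merged

# A doubled "off" at the boundary between two chunks of files.
chunks = [[(100.0, "on"), (105.0, "off")],
          [(105.0, "off"), (200.0, "on"), (205.0, "off")]]
merged = merge_transitions(chunks)
```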
This also brings up another point: I need to include all the output files (the report generated, the schedule used, and the log file) when I upload files with my aLOG summaries of HWInjReport, because the log file has a lot of information about HWInjReport's internal activity. I built it that way because the code has some unexpectedly complex logic in places, which has made debugging a total bear, and it only got worse with the transition to multi-processing.
I just realized there is another bug in HWInjReport, though this one is somewhat benign. While HWInjReport is specified to cover only a certain time-span, it actually ends up covering a larger time-span due to the fact that FrBitmaskTransitions processes entire frame files and HWInjReport is processing the resultant transitions into injection events. This means that HWInjReport can receive from FrBitmaskTransitions a set of transitions that lie well outside the specified time-span and, consequently, generate injection events that lie outside the time-span. It does not have this issue with the scheduled injections, because it trims those to the specified time-span before doing any further processing. The fix, fortunately, is simple: just trim the transitions from FrBitmaskTransitions to within the specified time-span. However, the bug does have the effect of potentially creating injections just outside the beginning or ending of the specified time-span that are flagged as UNSCHEDULED, because the scheduled injections to which they may correspond were trimmed.
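The fix is as simple as it sounds; a sketch (hypothetical function name) of trimming the FrBitmaskTransitions output to the requested span before building injection events:

```python
def trim_transitions(transitions, gps_start, gps_end):
    """Keep only (gps_time, state) transitions inside the requested
    [gps_start, gps_end) span, so no injection events are built from
    transitions that lie outside it (sketch only)."""
    return [(t, s) for t, s in transitions if gps_start <= t < gps_end]

# Transitions from whole frame files can spill past the requested span.
trimmed = trim_transitions(
    [(1127163290.0, "on"), (1127163300.0, "off"), (1127249700.0, "on")],
    1127163296, 1127249696)
```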
Here are some plots.