EX was transitioned to LASER SAFE. EY was already LASER SAFE when I got there; I temporarily transitioned it to LASER HAZARD so I could open the table enclosure, then transitioned it back to LASER SAFE. No untoward items of test equipment were found in the table enclosures.
The LVEA has transitioned to LASER SAFE. No untoward items of test equipment were found in the table enclosures.
05:10 Parametric Instability ETMY_PI_OMC_DAMP_MODE1 started to ring up. Nothing I did seemed to improve it.
06:43 ITMX joined the dance. I changed the phase to -60 deg and this seemed to turn that one back around. And then it didn't.
06:54 It's abundantly evident that I (we) could use some training on how to deal with this PI monster. I'm feeling a bit distraught at not being able to mitigate it.
06:57 Watching the lock deteriorate. OMC DCPD saturations have begun...
06:58 LockLoss due to PI.
[Jenne, AmandaC]
With Sheila and Haocun's work earlier today (alog 28324) putting in the beam splitter on the POPAIR table, we are now able to see a dither drive in POP90. We shake SRM in pitch at 4Hz with 100 counts, and see a clear peak in POP90. See snapshot. We haven't yet put this into a new ASC loop, but we will.
04:15 UTC
IFO Mode set to OBSERVING
During the lock today I had to change some of the damping filter settings as different modes rang up. Similar to what Nutsinee was seeing on Saturday, the 18040Hz and the 15009Hz ETMY modes were the ones causing the most trouble. From the strip tool you can see where I changed the damping filter settings.
The current phase and gain states for these filters are:
Also, when I was trying to damp the 18040Hz mode earlier, there was a difference in the trends of the OMC and QPD rms monitors. From the OMC rms it seemed that the damping was effective, but it's clear from the QPD rms that the mode was still ringing up, and it broke the lock.
The lock after this had several PIs ringing up, and again the damping filter settings needed to be changed.
- The purple line here refers to the 14980Hz ITMY mode, which needed three phase changes to damp it. At the start of the lock the phase was set to -60 degrees. After the first ring-up it was changed to 0 degrees and the mode went down. After 5 minutes it started ringing up again, so I changed the phase to 60 degrees; this damped it for ~13 minutes before it rang up again, so I changed the phase to 30 degrees, which damped it again.
- The yellow and red lines refer to the QPD and OMC rms for the 18040Hz ETMY mode, respectively. After seeing the difference between the OMC and QPD rms earlier, it is more apparent that at larger mode amplitude the QPD signal is the more reliable one. After this mode became unstable I changed the phase to -30 degrees, which damped it. This mode is currently being damped using the QPD signal.
- The dark green line is the 15009Hz mode, which rings up a bit more slowly than the rest. I changed the phase from 0 degrees to 60 degrees, which damped it.
- The light blue line is the 15520Hz ITMX mode. I had iwave tracking on this one, but I had to change to just a bandpass filter after the rms started clipping. I changed the phase from -30 degrees to 30 degrees, which damped the mode.
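For illustration, here is a minimal sketch of the band-pass-plus-phase-rotation idea behind these phase changes (synthetic data and a made-up bandwidth; this is not the actual PI damping filter code):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 65536.0              # assumed sample rate [Hz]
f_mode = 15009.0          # one of the ETMY mode frequencies [Hz]
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * f_mode * t) + 0.1 * np.random.randn(t.size)  # synthetic mode + noise

# Narrow band-pass around the mode (the 40 Hz bandwidth is a placeholder).
sos = butter(4, [f_mode - 20, f_mode + 20], btype='bandpass', fs=fs, output='sos')
x_bp = sosfiltfilt(sos, x)

def rotate_phase(sig, phase_deg):
    """Rotate a narrow-band signal by phase_deg using its analytic signal."""
    return np.real(hilbert(sig) * np.exp(1j * np.deg2rad(phase_deg)))

# Changing the damping phase (e.g. 0 -> 60 degrees on the 15009Hz mode)
# amounts to re-rotating the band-passed signal before it goes to the drive.
drive = rotate_phase(x_bp, 60.0)
```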
The GDS calibration filters were updated for ER9 on July 7, 2016. These filters are designed to correct the output of the front-end CALCS calibration model.
These filters were generated using the script run_td_filters.m, located in the calibration SVN under aligocalibration/trunk/Runs/PreER9/H1/Scripts/TDfilters at SVN revision 3176. Information on the exact parameter files used to generate these filters, and the version of DARMModel they are based on, can be found in the run file run_td_filters.m.
In addition to the residual and control chain correction filters, the updated filters also include dewhitening filters that are unity since the pipeline is now run by ingesting double precision channels that are no longer whitened. The next generation of the GDS calibration code will not require dewhitening filters to be present in the filters file.
The new filters file can be found in calibration SVN under aligocalibration/trunk/Runs/ER9/GDSFilters/H1GDS_1151960706.npz.
Attached are plots of the residual and control correction filters. These plots compare the frequency response of the GDS FIR filters to the true frequency domain model they are based on.
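The comparison could be reproduced roughly as follows (a sketch only; the key name inside the .npz file and the sample rate are assumptions, and the model array is a placeholder for the DARMModel output):

```python
import numpy as np
from scipy.signal import freqz

# Load the GDS filter file; 'res_corr_filter' is a guessed key name.
filters = np.load('H1GDS_1151960706.npz')
res_fir = filters['res_corr_filter']          # hypothetical: residual correction FIR taps
fs = 16384.0                                  # assumed filter sample rate [Hz]

# Frequency response of the FIR filter.
freqs, resp = freqz(res_fir, worN=8192, fs=fs)

# Placeholder for the true frequency-domain model evaluated at `freqs`;
# in practice this would come from the DARMModel used to build the filters.
model = np.ones_like(resp)

mag_ratio = np.abs(resp) / np.abs(model)
phase_diff_deg = np.degrees(np.angle(resp) - np.angle(model))
```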
A few notes I forgot to mention above:
On June 17, 2016 the GDS calibration pipeline at LHO switched to ingesting double precision channels that are no longer dewhitened. Therefore, the filters file was updated to include a unity dewhitening filter. The new filter file can be found in the calibration SVN under aligocalibration/trunk/Runs/PreER9/GDSFilters/H1GDS_1150213197.npz.
The double precision channels ingested by the GDS pipeline in the current configuration are (a read-back sketch follows the list):
H1:CAL-DELTAL_CTRL_TST_DBL_DQ
H1:CAL-DELTAL_CTRL_PUM_DBL_DQ
H1:CAL-DELTAL_CTRL_UIM_DBL_DQ
H1:CAL-DELTAL_RESIDUAL_DBL_DQ
H1:CAL-DARM_ERR_WHITEN_OUT_DBL_DQ
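A minimal read-back sketch for these channels (assuming gwpy is available; the 60 s span is arbitrary):

```python
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:CAL-DELTAL_CTRL_TST_DBL_DQ',
    'H1:CAL-DELTAL_CTRL_PUM_DBL_DQ',
    'H1:CAL-DELTAL_CTRL_UIM_DBL_DQ',
    'H1:CAL-DELTAL_RESIDUAL_DBL_DQ',
    'H1:CAL-DARM_ERR_WHITEN_OUT_DBL_DQ',
]

# Arbitrary 60 s stretch; TimeSeriesDict.get tries NDS2 and then frame files.
data = TimeSeriesDict.get(channels, 1151960706, 1151960766)
for name, ts in data.items():
    print(name, ts.dtype, ts.sample_rate)
```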
[Jenne, Robert]
As a result of Keita's alog 28196 regarding the beam position on the BS, we wanted to move the beam splitter around in relation to the beamline, to see if that would change any clipping that we may have on the baffles. Short answer: nope.
First, we moved ST1 by putting offsets in the isolation loops. JeffK tells us that these are calibrated in nm, so our 5,000 count offsets correspond to about 0.5mm of motion. We moved ST1 up and down, as well as laterally along the plane of the beam splitter (+x+y and -x-y). No effect seen in the power recycling gain.
Next, we moved HEPI in a similar fashion. The thought here is that the ITM elliptical baffles are suspended from ST0, so we weren't moving them earlier. (By moving both ST1 and ST0 we had hoped to differentiate which set of baffles was causing us trouble.) We moved up and down, as well as in RZ, rotation about the z-axis. RZ is calibrated in nrad, and the baffles are of order 1m away from the center of the ISI, so they were each moved on the order of 0.5mm as well. Again no effect was seen in the power recycling gain.
Attached is a snapshot of our striptool, with the first offsets starting at about 0:06:00 UTC, and the last ones ending around 1:00:00 UTC. Teal is the power recycling gain. The POP18 seems to be still relaxing from the power up to 40W for the first few minutes of our tests, but doesn't seem to be correlated with our movements. Red trace is the vertical CPS measure of BS ST1 ISI position, and orange is superimposed with brick red measuring our lateral motion. Light purple is vertical HEPI motion and light green is RZ HEPI motion.
We felt that if we were really dominated by clipping losses around the beam splitter, moving by 0.5mm in some direction should show us some change in recycling gain. Since it doesn't, we conclude that the power loss must be somewhere else.
For the record -- indeed the calibration of the offsets is 1 [nm / ct] or 1 [nrad / ct], but that would mean a 5,000 [ct] offset in translation (X, Y, or Z) is 5 [um] = 0.005 [mm] (not 500 [um] = 0.5 [mm] as stated above). Similarly, the RZ offset of 5,000 [ct] = 5 [urad] = 0.005 [mrad].
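Written out (just restating the numbers above):

```python
# 1 nm/ct (translation) and 1 nrad/ct (rotation), with 5,000 ct offsets.
offset_ct = 5000
translation_m = offset_ct * 1e-9     # 5,000 ct * 1 nm/ct = 5e-6 m = 5 um = 0.005 mm
rotation_rad = offset_ct * 1e-9      # 5,000 ct * 1 nrad/ct = 5 urad
# With the baffles roughly 1 m from the center of the ISI, 5 urad of RZ
# corresponds to about 5 um of motion at the baffles -- not 0.5 mm.
print(translation_m * 1e6, "um translation,", rotation_rad * 1e6, "urad rotation")
```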
Yeah, Mittleman just pointed that out to me. Apparently math is hard in the evenings. We'll give this another try with a bit more actual displacement.
Dan, Ryan, Dave:
We worked on h1fw0 today. This writer was chosen because the NDS machines have been offloaded to the cds-h1-frames file system (via h1ldasgw2) and h1fw0 is significantly more unstable than h1fw1.
Executive summary: no matter what we did, h1fw0 continues to be very unstable if it tries to write all the frame files; it is stable if it only writes the science frames. Further tests are more intrusive and will wait until later in the week.
During all the tests I monitored the status of the threads which write the frame files to disk. There are four threads: dqscifr, dqfulfr, dqstrfr, dqmtrfr (science frame, full frame, second trend frame and minute trend frame). These threads are normally in the sleep state (S), transitioning to the running state (R) when writing their frames, and also spending some time in the disk-sleep state (D) during the write process.
I am interacting with the QFS file system in three ways: daqd writing the frame files, manually creating a 1GB file using the dd command, and performing a long directory listing (ls -al). When everything is running correctly, the dd creation of a 1GB file takes about 5 seconds and the "ls -al" is fast. The symptom of the problem is that the active frame writer threads all get stuck in the D uninterruptible state, forever waiting for a disk write completion signal. At this point the dd copy is still able to create a 1GB file, but it takes much longer (10-20 seconds). The long listing fails, also going into the D disk-io-sleep state. At this point data is going into daqd and none is coming out, the internal buffers fill, and the process dies. The long listing then completes, and the dd copy goes back to taking 5 seconds to complete.
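For reference, the kind of monitoring described above can be sketched as follows (the PID and write path are placeholders; this is generic /proc polling plus a timed dd, not the exact commands used):

```python
import glob
import subprocess
import time

DAQD_PID = 1234  # placeholder: PID of the daqd process on h1fw0
WRITER_THREADS = ('dqscifr', 'dqfulfr', 'dqstrfr', 'dqmtrfr')

def writer_thread_states(pid):
    """Return {thread_name: state} for the daqd frame-writer threads.

    State codes from /proc/<pid>/task/<tid>/stat:
    S = sleeping, R = running, D = uninterruptible disk sleep.
    """
    states = {}
    for stat in glob.glob('/proc/%d/task/*/stat' % pid):
        with open(stat) as f:
            fields = f.read().split()
        name, state = fields[1].strip('()'), fields[2]
        if name in WRITER_THREADS:
            states[name] = state
    return states

def timed_1gb_write(path='/frames/dd_test'):
    """Time a 1 GB dd write; ~5 s is healthy here, 10-20 s indicates trouble."""
    t0 = time.time()
    subprocess.run(['dd', 'if=/dev/zero', 'of=' + path, 'bs=1M', 'count=1024'],
                   check=True, capture_output=True)
    return time.time() - t0

print(writer_thread_states(DAQD_PID))
print("1 GB write took %.1f s" % timed_1gb_write())
```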
Things tried today:
Monitor NFS traffic on private network between frame writer and solaris NFS server, no errors or unusual packets seen
Change the version of NFS for this mount from vers3 to vers4; no change in stability. One point of interest: to make the change on h1fw0 I rebooted the computer, and the daqd process on h1fw1 died at this time.
Dan stopped the rsync process which is backing up the raw minute trends from ldas-h1-frames. He also stopped all LDAS access to ldas-h1-frames (disk2disk copy, frame file checksumming). At this point h1fw0 had ldas-h1-frames all to itself, and it still could not write all 4 frame files; it died on the hour when the minute, second and full frames were all being written at once.
As I have left the system, h1fw0 is only writing science frames. Dan is comparing these files with h1fw1 and will use them if h1fw1 restarts. Dan has restarted the rsync process to complete the raw minute trend backup (prior to his reconfiguring the SATABOY raids).
Later in the week Dan will reconfigure the Sataboy raid to make a more efficient file system, whose file access times should be halved.
There is a recent DAQ code change in the development branch (trunk) that addresses the 'top-of-the-hour' instability when writing second, minute and full frame files at the same time (revision 4181). See CDS Bug 985. This addressed some issues in stand-alone framebuilder configurations and could be useful here. A mutex is used to avoid simultaneous writing of minute and second trend frames. This could likely be ported to the production branch (3.x).
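Schematically, the mutex approach looks like this (an illustration only, in Python rather than the actual daqd code, with a placeholder write call):

```python
import threading

# One lock shared by the trend writers, so minute and second trend frames
# are never flushed at the same instant (the top-of-the-hour case where
# science, full, second and minute frames all land together).
trend_write_lock = threading.Lock()

def write_second_trend_frame(frame):
    with trend_write_lock:
        frame.write_to_disk()    # placeholder for the real frame write

def write_minute_trend_frame(frame):
    with trend_write_lock:
        frame.write_to_disk()    # serialized against the second-trend write
```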
It seems that there are some marginal increases in temperature in both the OSC and the AMP that correspond to very marginal decreases in head flows. OSC_DB4_PWR seems to be doing "its own thing" in comparison to 1, 2 and 3.
As usual, please refer further, in-depth analysis to Jason O or Peter K.
Andy, Duncan, Laura, Ryan, Josh,
In alog 28299 Andy reported that we were seeing the ER9 range deteriorate due to glitches every 2 seconds. Figure 1 shows the glitches turning on in DARM at 2016-07-09 05:49:34 UTC.
We think the ALS system not being shuttered and changing state in lock is to blame. Here's why.
Excavator pointed us to a strong coupling between DARM and the ALS channels. Raw data confirmed a correlation: figure 2 shows the ALS glitches turning on at that same time, and figure 3 shows that both DARM and ALS are glitching at the same times.
When we investigated the Guardian ALS state for this time (figure 4), it was not in a nominal configuration to start with and that got worse around the time the glitches started. The shutter was not "Active" and at 05:49:34 UTC the ALS X state changed from "-15 locked transition" to "-19 locking WFS" and at that same time the glitches started in DARM. So at some point, ALS X decided it needed to lock the arm (looks like Y followed an hour or so later). We did not track down exactly how the glitches originated or made it into DARM because this seems non-standard enough that a configuration fix should make it go away.
Figure 5 shows a summary page plot for nominal ALS X Guardian behavior from O1. So the shutter should be active and we don't expect to see "locking WFS" come on during an analysis ready state.
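A rough way to check the DARM/ALS glitch coincidence seen in figures 2 and 3 (a sketch only; the ALS channel name, thresholds and band are placeholders, gwpy is assumed, and the GPS times are approximate):

```python
import numpy as np
from gwpy.timeseries import TimeSeries

# ~5 minutes around 2016-07-09 05:49:34 UTC (GPS roughly 1152078591).
start, end = 1152078560, 1152078860
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
alsx = TimeSeries.get('H1:ALS-X_REFL_ERR_OUT_DQ', start, end)   # placeholder ALS channel

def glitch_times(ts, flow=30, fhigh=300, nsigma=8, dead_time=0.5):
    """Times of loud excursions in the band-limited, whitened series,
    keeping only crossings separated by more than dead_time seconds."""
    band = ts.bandpass(flow, fhigh).whiten(4, 2)
    above = band.times.value[np.abs(band.value) > nsigma * np.std(band.value)]
    keep = []
    for t in above:
        if not keep or t - keep[-1] > dead_time:
            keep.append(t)
    return np.array(keep)

t_darm = glitch_times(darm)
t_alsx = glitch_times(alsx)

# Count DARM excursions with an ALS counterpart within 0.1 s, and check the ~2 s spacing.
coincident = sum(np.any(np.abs(t_alsx - t) < 0.1) for t in t_darm)
print(coincident, "of", len(t_darm), "DARM excursions have an ALS counterpart")
print("median spacing between DARM excursions [s]:", np.median(np.diff(t_darm)))
```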
It seems like the ALS didn't think that the IFO was locked on IR anymore. The ALS-X state suddenly drops from 'Locked on IR' to 'PLL locked' (state 6 to state 2), then the requested state changes from 'Locked on IR' to 'Lock Arm' (state 6 to state 3). It seems like something went wrong in the communication and the ALS started to try to lock the arm. I don't think it would have helped if it were shuttered, because it would have unshuttered when trying to 'relock'. The attachment is just a plot of the two EPICS channels. As Josh said, the change corresponds to the time the glitching started.
Two additional notes: Here are the full Excavator results for the time period: https://ldas-jobs.ligo-wa.caltech.edu/~rfisher/Excavator_Results/Jul11_Tests/1152079217/ (Note: Excavator was run over unsafe channels as we were running a test of the code, and then we started to follow up why something in ALS ODC popped up.) We were pointed to the source of the problems by the ALS-X ODC channel indicating ADC overflows on a 2 second interval with precise timing. The ADC overflows reported by the EPICS system at this time had timing fluctuations relative to the actual overflows of +/- 0.6 seconds in just the 5 minutes we looked at by hand.
We have been leaving the green light into the arms because sometimes it is useful to see the power build-ups and green WFS signals as we are trying to understand alignment problems. As people pointed out, we would normally have these shuttered if things were really nominal, and in the shuttered state we don't check if the green arm is still locked or not, so it would not try to relock and cause the glitches.
Looked at the cameras for PR3, PR2, PRM and all 16 arm baffle PDs during a power up. Bottom line: The lost power is found on the ETM baffle PDs.
- Plot 1 and 2 show the ETMX and ETMY baffle PDs. Note the huge increase in baffle PD4 in both arms. At the same time, baffle PD 1, on the opposite side of PD4, also sees an increase, so we can't explain this with arm alignment.
- The ITM baffle PDs show much less signal - there is some on ITMY baffle PD 4, but nowhere near as much as on the ETM. (Plots 3 and 4)
- PRM, PR2 and PR3 roughly track the recycling gain, i.e. they increase less than linearly with input power. (Plot 5)
I should also say that the Beckhoff whitening and digital gain settings were the same for all 16 PDs. I attached a representative snapshot for H1:AOS-ETMX_BAFFLEPD_4.
The y-axis on all these plots (labeled arbitrary) is in microwatts per watt of input power for the baffle PDs, and recycling gain for POP_LF.
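In other words, the plotted quantity is just the baffle PD power divided by the input power (a trivial sketch with made-up numbers standing in for the trends):

```python
import numpy as np

# Made-up values standing in for the trended data during the power-up:
# baffle PD readback calibrated into microwatts, and input power in watts.
baffle_pd_uW = np.array([0.5, 2.0, 8.0, 20.0])      # e.g. H1:AOS-ETMX_BAFFLEPD_4
input_power_W = np.array([2.0, 10.0, 25.0, 40.0])

# Quantity on the y-axis: microwatts on the baffle PD per watt of input power.
normalized_uW_per_W = baffle_pd_uW / input_power_W
print(normalized_uW_per_W)   # flat if the coupling were linear; rising if extra light appears
```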
Here is a plot of the recycling gain behaviours of the carrier, 9MHz and 45MHz sidebands.
Now that POP is not saturating, it looks like the 9MHz RG is still dropping faster than the carrier, but the 45MHz RG is actually dropping slower. This would not agree with a simple PRC loss - it would require some SRC loss to enhance the 45MHz sideband.
Title: 07/11/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: IFO unlocked. Switched intent bit from observing to commissioning. Guardian error – "4 dead channels" & EPICS comm error
Commissioning:
Outgoing Operator: None

Activity Log: All Times in UTC (PT)
15:00 (08:00) Start of shift.
15:00 (08:00) Guardian error
15:30 (08:30) Richard – Reset End-Y Beckhoff IOC
15:44 (08:44) Alfredo & Elizabeth – Going to the bridge over the X-Arm to put up a sign
16:25 (09:25) Sheila – Going into Optics Lab – Looking for BS optic
16:30 (09:30) Kyle – Going to Mid-Y to run test on rotating pumps
16:35 (09:35) Alfredo & Elizabeth – Back from X-Arm bridge
16:53 (09:53) Sheila – Finished in Optics Lab – Going to Squeezer Bay
17:20 (10:20) Kyle – Back from Mid-Y
17:29 (10:29) Alfredo & Elizabeth – Going to the bridge over the X-Arm to work on new sign
17:44 (10:44) Alfredo & Elizabeth – Back from X-Arm
18:06 (11:06) Sheila & Haocun – Going to ISC table at HAM2 – They will open the table.
19:28 (12:28) Sheila & Haocun – Out of the LVEA.
19:46 (12:46) Kiwamu & Nutsinee – Transitioning LVEA to Laser Hazard
20:51 (13:51) Kiwamu – Going into LVEA to adjust ITM-X and ITM-Y IR cameras
21:13 (14:13) Dick – Going into LVEA to Squeezer Bay to look for parts
21:21 (14:21) Kiwamu – Back into LVEA to work on camera alignment
21:27 (14:27) Richard – Bringing a tour through the control room
21:32 (14:32) Dick – Out of the LVEA
21:42 (14:42) Kiwamu – Out of LVEA
23:00 (16:00) Turn over to Ed

End of Shift Summary:
Title: 07/11/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
Support: Jenne, Sheila, Kiwamu, Stefan
Incoming Operator: Ed
Shift Detail Summary:
End-Y Beckhoff IOC crashed. It has been restarted.
16:40 (09:40) IFO locked at INCREASE_POWER.
17:10 (10:10) IFO at 39.9W, Stefan is taking data.
19:46 (12:46) LVEA transitioned to Laser Hazard. Will leave it Laser Hazard until Peter is finished with the hardware hunt tomorrow morning.
IFO has been up and down all day, while commissioners work through various issues, adjustments, and improvements.
Sheila, Haocun
Due to the saturation of the photodetectors, we added a 90:10 beamsplitter (CVI BS1-1064-90-1025-45P) in the POP beam path.
With an input power of 2W (1.97W) into the interferometer:
Power before adding the BS: POPX: 5.74mW, POP Air A: 2.56mW, POP Air B: 2.52mW (total: 10.82mW)
Power after adding the BS: POPX: 0.852mW, POP Air A: 0.4mW, POP Air B: 0.4mW (total: 1.652mW)
Power right before the BS: 10.88mW; Power right after the BS: 1.82mW.
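A quick cross-check using only the numbers above:

```python
# Detector powers before and after installing the 90:10 BS (2 W input).
before_total_mW = 5.74 + 2.56 + 2.52      # POPX + POP Air A + POP Air B = 10.82 mW
after_total_mW = 0.852 + 0.4 + 0.4        # = 1.652 mW

print("fraction of light left on the detectors: %.3f" % (after_total_mW / before_total_mW))
print("transmission measured right at the BS:   %.3f" % (1.82 / 10.88))
```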