Evan, Stefan

We implemented the PR3 feedback in ENGAGE_ASC in the ISC_LOCK guardian:
- We decided to leave the feedback on PR2 during DRMI. This allows us to absorb the first correction to the initial alignment with PR2.
- We then switch the feedback to PR3 in ENGAGE_ASC. This locks down PR3 during power-up; we confirmed that the REFL beam no longer moves in that transition.

Details:
- We modified the DRMI guardian ENGAGE_DRMI_ASC state to prepare both PR2 and PR3 for feedback.
- PR3 gains were set to roughly match PR2. The settings on PR3 are:
  - ezca['SUS-PR3_M3_ISCINF_P_GAIN'] = 2
  - ezca['SUS-PR3_M3_ISCINF_Y_GAIN'] = 5
  - SUS-PR3_M1_LOCK has an integrator, a -120 dB stage, and a gain of 1.
- The state now also has the flag self.usePR3, which selects feedback to PR2 (False) or PR3 (True). The rest of the ENGAGE_DRMI_ASC state uses this flag.
- The default is self.usePR3 = False, i.e. it does just the old PR2 engaging.
- The PRC2 loop is always off during the CARM reduction sequence. It is re-engaged in ENGAGE_ASC with feedback to PR3 with the following steps:
  - The outputs of SUS-PR2_M3_ISCINF_P and SUS-PR2_M3_ISCINF_Y are held.
  - The ASC-PRC2_P and ASC-PRC2_Y filters are cleared.
  - The output matrix is updated:
    - asc_outmatrix_pit['PR3', 'PRC2'] = 1
    - asc_outmatrix_yaw['PR3', 'PRC2'] = -1  # required to keep the same sign as PR2
  - And the loops are turned on again.
- The current loop gain is still low; the step response is on the order of 30 sec.
- ENGAGE_ASC also has the self.usePR3 flag (default is True), so it is still backwards compatible.
- The whole sequence (engage DRMI on PR2, switch to PR3 in full lock) was tested successfully once before an earthquake hit.
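For reference, here is a minimal sketch (not the production guardian code) of how the usePR3 flag could route the PRC2 feedback. The filter-switch calls and the ASC output-matrix channel names are assumptions for illustration only; the real state writes through the asc_outmatrix_pit/yaw objects quoted above.

# Sketch only -- not the actual ISC_LOCK / DRMI guardian code.
# `ezca` is the Ezca instance that guardian provides to each node.
def engage_prc2(ezca, use_pr3=True):
    if use_pr3:
        # hold the old PR2 outputs so the optic does not jump
        ezca.switch('SUS-PR2_M3_ISCINF_P', 'HOLD', 'ON')
        ezca.switch('SUS-PR2_M3_ISCINF_Y', 'HOLD', 'ON')
        # clear the PRC2 loop filter histories
        ezca['ASC-PRC2_P_RSET'] = 2
        ezca['ASC-PRC2_Y_RSET'] = 2
        # route PRC2 to PR3; yaw sign flipped to keep the PR2 convention
        ezca['ASC-OUTMATRIX_P_PR3_PRC2'] = 1    # assumed channel name
        ezca['ASC-OUTMATRIX_Y_PR3_PRC2'] = -1   # assumed channel name
    # turn the PRC2 loops back on
    ezca.switch('ASC-PRC2_P', 'INPUT', 'OUTPUT', 'ON')
    ezca.switch('ASC-PRC2_Y', 'INPUT', 'OUTPUT', 'ON')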
A new H1 model was started this afternoon on h1susey. It is h1susetmypi, which is a copy of the L1 model minus the IPC input receivers and the DAC outputs on the top level. The new model has DCU_ID=127 and runs at 64 kHz.
The initial startup failed with a "DAQ too small" error. We boosted the DQ channels from 2kHz to 64kHz (commissioning frame only) and added some EpicsIn parts at the top level to get through this error. We will investigate this further next week.
The new model was added to the DAQ and the DAQ was restarted. I have added it to the CDS ENG overview MEDM screen; I'll add it to the rest later.
A while ago I made a "complete" violin mode table here. Matt and Jeff were curious whether or not the frequencies belong to the correct test masses. So I went through the violin mode damping filters to see which ones work and which don't. So far I was able to confirm that 24 out of 32 frequencies belong to the correct test masses. The filters were able to damp most of them. Three frequencies got rung up. The rest were still inconclusive (either the damping phases were off by 90 degrees or the test masses were wrong).
Frequency (Hz) | Test mass | Filter | Does the filter work? | Note
500.054 | ITMX | MODE3 | yes |
500.212 |      | MODE3 | yes |
501.092 | ITMX | MODE6 | yes |
501.208 |      | MODE3 | yes |
501.254 | ITMX | MODE3 | yes |
501.450 |      | MODE3 | yes |
502.621 | ITMX | MODE3 | NO |
502.744 |      | MODE3 | yes |
503.007 | ITMY | MODE3 | yes |
503.119 |      | MODE1 | yes |
504.803 | ITMY | MODE4 | NO |
504.872 |      | MODE4 | yes |
501.606 | ITMY | MODE5 | NO | Rung up!
501.682 |      | MODE5 | yes |
501.749 | ITMY | MODE6 | yes |
501.811 |      | MODE6 | NO |
507.992 | ETMY | MODE5 | NO |
508.146 |      | MODE5 | yes |
508.010 | ETMY | MODE5 | yes |
508.206 |      | MODE5 | yes |
508.220 | ETMY | MODE5 | NO |
508.289 |      | MODE5 | NO |
508.585 | ETMY | MODE5 | yes |
508.661 |      | MODE5 | NO |
505.587 | ETMX | MODE6 | yes |
505.707 |      | MODE6 | NO |
505.710 | ETMX | MODE4 | yes |
505.805 |      | MODE4 | NO | Rung up!
506.922 | ETMX | MODE6 | yes |
507.159 |      | MODE6 | yes |
507.194 | ETMX | MODE6 | NO | Rung up!
507.391 |      | MODE6 | yes |
Below is the table of the violin mode damping filters used by Guardian and the frequencies they are supposed to damp:
w0 (Hz) | wc (Hz) | wc-w0 (Hz) | Filter (All FM1) | Test mass | Frequencies covered (Hz)
506 | 513 | 7 | MODE5 | ETMY | All ETMY |
505.78 | 505.9 | 0.12 | MODE4 | ETMX | 505.710, 505.805 |
502 | 520 | 18 | MODE6 | ETMX | The rest of ETMX (cheater…) |
485.7 | 506.4 | 20.7 | MODE3 | ITMX | All ITMX |
501.05 | 501.11 | 0.06 | MODE6 | ITMX | 501.092 |
503.08 | 503.16 | 0.08 | MODE1 | ITMY | 503.119 |
502.96 | 503.06 | 0.1 | MODE3 | ITMY | 503.007 |
504.86 | 504.91 | 0.05 | MODE4 | ITMY | 504.803, 504.872 |
501.63 | 501.7 | 0.07 | MODE5 | ITMY | 501.606, 501.682 |
501.71 | 501.85 | 0.14 | MODE6 | ITMY | 501.749, 501.811 |
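Since several of these bands overlap (the wide ETMX MODE6 band especially), a small lookup sketch like the one below can help check which damping filters could act on a given line. The band edges are just copied from the table above; the code is illustrative only.

# Bookkeeping sketch: which Guardian damping filter bands cover a given
# violin-mode frequency?  Band edges (w0, wc) are copied from the table above.
DAMPING_BANDS = [
    # (w0 Hz,  wc Hz,  filter,  test mass)
    (506.00, 513.00, 'MODE5', 'ETMY'),
    (505.78, 505.90, 'MODE4', 'ETMX'),
    (502.00, 520.00, 'MODE6', 'ETMX'),
    (485.70, 506.40, 'MODE3', 'ITMX'),
    (501.05, 501.11, 'MODE6', 'ITMX'),
    (503.08, 503.16, 'MODE1', 'ITMY'),
    (502.96, 503.06, 'MODE3', 'ITMY'),
    (504.86, 504.91, 'MODE4', 'ITMY'),
    (501.63, 501.70, 'MODE5', 'ITMY'),
    (501.71, 501.85, 'MODE6', 'ITMY'),
]

def filters_covering(freq_hz):
    """Return (test mass, filter) pairs whose band contains freq_hz."""
    return [(tm, flt) for w0, wc, flt, tm in DAMPING_BANDS if w0 <= freq_hz <= wc]

# Example: 508.146 Hz falls in both the ETMY MODE5 band and the wide ETMX MODE6 band.
print(filters_covering(508.146))   # [('ETMY', 'MODE5'), ('ETMX', 'MODE6')]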
Both tables are included in the Excel file attached below.
I have just installed a new version of Guardian core:
guardian r1449
It addresses the "double main" execution bug that has been plaguing the system. See guardian bug 879, ECR 1078.
The new version is in place, but none of the guardian nodes have been restarted yet to pull in the new version.
You can either manually restart the nodes with 'guardctrl restart', or just try rebooting the whole guardian machine. I might start with the former, to just target the important lock acquisition nodes (ISC_LOCK, etc.), and wait until Tuesday maintenance for a full restart of the Guardian system.
ISC_LOCK and ISC_DRMI were restarted around 2015-07-19 07:07:00 Z.
Chris S., Joe D. The crew installed metal strips on top of 350 meters of tube enclosure joints this week, bringing the total to 1075 meters of enclosure covered from the corner station on the X-Arm.
APOLOGIES FOR NOT REPORTING FOR THE PAST WEEK. Scott L., Ed P., Rodney H. This report covers 7/13-7/17, dates inclusive. The crew cleaned a total of 358.7 meters of tube this week. Test results for the week are also shown here. We added another generator that we had on site to the cleaning operation so the third man could be vacuuming the support tubes and pre-cleaning the egregiously dirty areas of the tube. This seems to have increased productivity, as seen by the almost 72 meters per day average. Scott L. will be on vacation next week, so to hopefully keep up with the current pace I am bringing out another Apollo employee who is very familiar with the site. Mark Layne will be filling in for Scott next week.
These are NOT the cathodes used as interlocks for the high voltage. For both end stations: I logged into the Beckhoff computer. I went to the 'CoE - Online' tab for the Inficon gauge labeled 'Pressure Gauge NEG (BPG 402)' in the system manager. In index FB44:01, 'Emission ON / OFF Command: Command', I entered 00 02 in the Binary box. I then verified that index 6015:05, 'Input Hot Cathode Ion: Emission Status Off/On Module 2' had changed from TRUE to FALSE. This was done around 11:35 PDT. Richard will go to the end stations and verify that they are off on Monday.
Cheryl, Patrick, TJ, Ed

The ETMY LR RMS WD was tripped when I came in. I reset it by writing 0 and then 1 to it. Jim W. and I switched the ITMY, ITMX and BS ISI blends from Windy_90 to Quite_90. The mode cleaner was not locking because the input power was low; I had to do a search for home with the rotation stage. Spent most of the day keeping the IFO at DC power for commissioners. Reloaded Guardian a couple of times for script changes.

09:18 Richard to roof
09:24 Jason and Peter taking diode box into PSL diode room
09:35 Richard off roof
09:43 Jason and Peter done
09:50 ETMX ISI WD tripped, indicated payload trip, but no WD trip on SUS or TMS
10:51 Richard to roof
11:35 I remotely turned off the cathodes at both end stations (WP 5363)
12:09 Pepsi truck through gate
15:44 Dave installing h1susetmypi model (WP 5365)
15:48 Jeff K. restarting h1susomc model (WP 5366)

Currently Stefan has the IFO and is working on ASC.
J. Kissel, S. Dwyer WP #5366 Continuing to pursue OMC ASC diagonalization (see LHO aLOG 19691), I've made changes to the top level of the OMC SUS front end model such that the ISC signals go through the originally intended ISC path, i.e. through the ISCINF, LOCK, and DRIVEALIGN banks. This is such that we can *use* the drivealign matrix to decouple L, P and Y drive. I've made the change in such a way that this is only a top-level model change, and does not impact any library parts. Sadly this means that the implementation is rather ugly, but if the new scheme is successful, we'll submit an ECR to clean up the model and install the scheme properly during a maintenance day. I've saved, compiled, installed, restarted the model, confirmed that all settings have been restored as expected, confirmed alignment sliders at the same value, and that the "new" (or remapped) drive signals arrive in the expected banks as expected. Since the former paths were not disconnected, this change is entirely backward compatible; all previous alignment schemes will still work. The development of the control filter implementation has now been handed off to Sheila.
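As a conceptual illustration (not the site code) of what the DRIVEALIGN path buys us: if the coupling from the nominal L/P/Y drives into the sensed L/P/Y motion can be measured as a 3x3 matrix, loading its inverse into the drivealign matrix diagonalizes the drive. The numbers below are made up.

# Conceptual sketch only: decoupling L, P, Y drives with a DRIVEALIGN matrix.
# C maps applied drives (columns) to sensed DOFs (rows); values are made up.
import numpy as np

C = np.array([[1.00, 0.05, 0.00],    # sensed L from (L, P, Y) drive
              [0.10, 1.00, 0.02],    # sensed P
              [0.00, 0.03, 1.00]])   # sensed Y

drivealign = np.linalg.inv(C)        # candidate DRIVEALIGN matrix

# sanity check: the drive-to-sense response becomes ~identity
print(np.round(C @ drivealign, 3))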
Starting late last night, the front end channel access freeze-ups have become fairly regular and hourly (in the 30-40 minute range of the hour). In the past few hours they have only occurred in the 37th minute of the hour. I have checked all the usual suspects (hourly rsync, autoburt, etc.) and not found any correlation so far. I have also checked the Ganglia and Observium logs; no obvious computing or networking events happen at this time of the hour.
Matt found some data from last night that looks pretty good - I'm not sure what the state of the IFO was at this particular time, so I won't say.
Brute force coherence report for this period can be found here:
https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1120811417/
Our NOMINAL_LOW_NOISE state now includes the BS coil driver switching, and the SRCL and MICH feedforward. The A2L coefficients were tuned before the vent, but not carefully since then. We also had the ISS second loop on at this time, which helped the noise around 300 Hz.
We have seen the low frequency noise breathe somewhat; the noise was already low around 70 Hz when we switched on the SRC FF (the old filters). We have taken a few measurements with better coherence and with better fitting code, and will soon get a bit more subtraction. The high frequency noise is mysteriously worse than our best. The DARM offset was giving us 20 mA total OMC DC current. We have not yet succeeded in being stable at higher offsets.
Because of Robert's interest in the HAM6 ISI spring flexure to DARM coupling, I dug up a spare blade left over from aLIGO assembly and rigged it up on a table in the staging building. It's very rough, but intended only as a first look. Hopefully, Robert will attach his test results.
The following is a list of lock loss messages from the Guardian log. We've had a bunch of locklosses during the transition from locked to low noise this evening. As you can see there are a few different culprits, but one of the big ones is LOWNOISE_ESD_ETMY. It would be handy if someone could check out these lock losses and home in on what precisely went bad during this transition (e.g. ramping, switching, etc.). Then we can get back to SRCL FF tuning.
2015-07-17 06:47:05.694550 ISC_LOCK LOWNOISE_ESD_ETMY -> LOCKLOSS
2015-07-17 06:48:22.162260 ISC_LOCK LOCKING_ARMS_GREEN -> LOCKLOSS
2015-07-17 07:14:34.431170 ISC_LOCK NOMINAL_LOW_NOISE -> LOCKLOSS
2015-07-17 07:26:40.249110 ISC_LOCK CARM_10PM -> LOCKLOSS
2015-07-17 07:34:59.720030 ISC_LOCK PREP_TR_CARM -> LOCKLOSS
2015-07-17 07:42:08.269350 ISC_LOCK LOCKING_ALS -> LOCKLOSS
2015-07-17 08:02:41.773620 ISC_LOCK LOWNOISE_ESD_ETMY -> LOCKLOSS
2015-07-17 08:21:58.665420 ISC_LOCK LOWNOISE_ESD_ETMY -> LOCKLOSS
2015-07-17 08:31:29.035330 ISC_LOCK REDUCE_CARM_OFFSET_MORE -> LOCKLOSS
2015-07-17 08:48:32.514870 ISC_LOCK LOWNOISE_ESD_ETMY -> LOCKLOSS
Guardian error causing lock loss in LOWNOISE_ESD_ETMY (Evan, Keita)
Summary:
Out of the four lock losses in LOWNOISE_ESD_ETMY that Rana and Sheila listed, one (15-07-17-06-47-05) was due to the guardian running main() of LOWNOISE_ESD_ETMY twice.
Running main() twice (sometimes, but not always) is apparently a known problem of the guardian, but this specific state is written such that running main() twice is not safe.
Details:
Looking at the lock loss, I found that the ETMY_L3_LOCK_L ramp time (left attachment, red CH16) was set to zero at the same time as, or right after, the ETMX and ETMY L3 gains (blue ch3 and brown ch5) were set to their final values (0 and 1.25 respectively). There was a huge glitch in the EY actuators at that point, but not in EX.
This transition is supposed to happen with a ramp time of 10 seconds, so setting the ramp time to 0 after setting the gain kills the lock.
Looking at the guardian code (attached right), the ramp time is set to zero at the beginning and set to 10 at the end.
Evan told me that main() could be executed twice, so we looked at the log (attached middle), and sure enough, right after LOWNOISE_ESD_ETMY.main finished at 2015-07-17T06:46:50.39059, the gain was set to zero again.
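To make the failure mode concrete, here is a schematic (not the actual LOWNOISE_ESD_ETMY code; the channel names follow the ones quoted above): on a CDS filter module, writing a new gain starts a ramp over TRAMP seconds, and shortening TRAMP while that ramp is in flight collapses it into a step.

# Schematic only -- not the real guardian state.
def transition_esd(ezca):
    ezca['SUS-ETMY_L3_LOCK_L_TRAMP'] = 10      # intend a 10 s handoff
    ezca['SUS-ETMX_L3_LOCK_L_GAIN'] = 0        # ramp EX L3 down ...
    ezca['SUS-ETMY_L3_LOCK_L_GAIN'] = 1.25     # ... while EY L3 ramps up

# If a state's main() begins by zeroing the ramp time, e.g.
#     ezca['SUS-ETMY_L3_LOCK_L_TRAMP'] = 0
# and main() is executed a second time right after the gain writes above,
# the in-flight 10 s ramp turns into a step at the actuator -- the glitch
# seen in the attached trend.  One defensive pattern is to skip the writes
# if they have already been applied:
def transition_esd_idempotent(ezca):
    if ezca['SUS-ETMY_L3_LOCK_L_GAIN'] != 1.25:
        transition_esd(ezca)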
I have identified the source of the double main execution and have a patch ready that fixes the problem:
https://bugzilla.ligo-wa.caltech.edu/bugzilla3/show_bug.cgi?id=879#c7
If needed we can push out a point release essentially immediately, maybe during next Tuesday's maintenance period.
Bounce rang up during the EX-EY transition gain ramping in 3 out of 4 cases last night.
In three out of four lock losses in LOWNOISE_ESD_ETMY that Rana and Sheila listed, the guardian made it all the way to the gain ramping at the end, and it did not run main() twice.
However, about 7 to 8 seconds after the ramping started, a 9.8 Hz oscillation built up in DARM, then came fast glitches in the ETMY L2 drive, and then the IFO lost lock.
This looks like the bounce mode, but I have no idea why it was suddenly rung up.
See attached. The first attachment shows the very end of the lock losses, which clearly shows the DARM oscillation.
The second plot shows the same lock losses zoomed out, so you can see that each lock loss happened 7 to 8 seconds after the ramping started.
The last attachment shows one of the DARM oscillation so you can see that 6 cycles = 0.309 seconds (i.e. 9.8Hz signal).
Update: after the bounce mode was rung up, the OMC DCPDs saturated before the IFO lost lock.
In the attachment, while the 9.8 Hz oscillation was getting bigger (top left), if you high-pass DARM_IN1_DQ (middle left) you can see that the high frequency part, dominated by 2.5 kHz, suddenly quenched at about t = 18 sec.
The same thing is observed in the OMC DCPDs (middle middle and bottom middle), and even though we don't have a fast channel for the DCPD ADCs, it seems like they were very close to saturation at 18 sec (bottom left).
Though we don't know why the 9.8 Hz mode was excited, at least we know that the DCPD saturation caused the lock loss.
Since the same thing happened three times, and each time it was 7 to 8 seconds after the ETMX and ETMY L3 LOCK_L gains started ramping, you could set the gains to the values corresponding to this in-between state, keep them there for a minute or so, and see if the IFO can stay locked. If you fail to keep it locked, it's a sure sign that this instability is somehow related to the L3 actuator balance between X and Y, or the L3-L2 crossover in Y (or in X), or both.
The in-between gain would be something like 1.1 for EY L3 lock and 0.125 for EX.
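A rough sketch of that test (assumptions: the LOCK_L channel names quoted above, a 10 s ramp, and that it is run from an otherwise nominal full-lock configuration) could look like this:

# Hold the EX/EY L3 LOCK_L gains at the suggested in-between values for ~1 min
# to see whether the 9.8 Hz mode still rings up.  Sketch only.
import time

def hold_intermediate_gains(ezca, ey_gain=1.1, ex_gain=0.125, hold_sec=60):
    ezca['SUS-ETMY_L3_LOCK_L_TRAMP'] = 10
    ezca['SUS-ETMX_L3_LOCK_L_TRAMP'] = 10
    ezca['SUS-ETMY_L3_LOCK_L_GAIN'] = ey_gain
    ezca['SUS-ETMX_L3_LOCK_L_GAIN'] = ex_gain
    time.sleep(10 + hold_sec)   # wait out the ramps, then hold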
We were locked at 24 W for just over 2 hours before we rang up a PI that shows up in the Y arm QPDs at 15734 Hz. I increased the ring heater power (for both arms) from 0.5 to 0.6 W. A template with the QPD IOP channels is attached. I tried to reduce the power, but we lost lock when I did that, perhaps because the ISS second loop was on. The lockloss was at about 3:00 UTC on July 17.
We suspect this was not PI, that it was the roll mode.
It would be useful if someone could track down which optic this was by looking at the roll mode peak RMS trends and looking to see if it in fact did saturate any of the actuators.
Rana
The ETMX ring heater has asymmetrical heating at the moment (0.5 W upper ring, 0.6 W lower ring). Not sure if you'd like to keep that setting, so I'm leaving it there...
Matt, Sheila
Matt looked at this lock this morning and saw that although the roll mode might have increased in the last few minutes, it likely wasn't the culprit. However, there was a line at 1055 Hz that appeared and grew in the last 20 minutes of the lock, shown in the attached screenshot. This would indicate that the PI could be at 15329 or 17439 Hz, so this is a new PI for us (past incidents: alog 17903 and alog 18965). As far as I know this is also a different frequency from what has been seen at LLO.
Unfortunately, in my hurry to grab some fast channels for the QPDs, I used the LLO channel names, but we are using a different ADC, so I got the wrong channels. So we don't really know which arm this was in.
I've made a template that anyone who suspects that a PI is rung up can run:
/ligo/home/sheila.dwyer/ParametricInstabilities/PI_IOP_template.xml
The asymmetry in the ring heater was my mistake.
I wrote the script for the OPLEV charge measurements, which sets most of the settings and is easy to use. It runs for ~2.5 hours. If you are the last person leaving LHO at night - please run it!

Instructions:
1. Set the ISC_LOCK state to DOWN.
2. Set both ETMs to the "ALIGNED" state.
3. Align the optical levers (pitch and yaw) for both arms to 0 +/- 0.5 urad.
4. Run the scripts: the scripts directory is /ligo/svncommon/SusSVN/sus/trunk/QUAD/Common/Scripts. Run the python files ./ESD_Night_ETMX.py and, in another terminal, the second script ./ESD_Night_ETMY.py

If it works, it:
a) In the first ~30 seconds sets the channels and can warn about trouble with the ESD or alignment. If that happens - check this system and press …
b) During the measurements it should print "Receiving data for GPS second: 1234567" once a second.
c) After all the measurements it should restore all the settings.

If it gives errors but still receives some data - let it work. If it obviously does not work, you can try to run it again. If that does not help - please restore all settings by running ./ESD_Restore_Settings.py

Since this is the first try, today I will be very thankful if you check the following after running the scripts:
1. L3 Lock: Bias Voltage - 0, Offset - green light, Ramp time - 5 s
2. ESD linearization: Bypass - ON
3. For ETMY: the high-voltage driver is turned on

Just in case: the scripts modify only these settings.
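For the post-run check, something like the read-back sketch below could be used. The channel names are my guesses based on the list above (the linearization-bypass channel in particular should be taken from the MEDM screen), so treat them as placeholders.

# Optional read-back sketch -- channel names are guesses / placeholders.
from ezca import Ezca

ezca = Ezca()   # assumes the usual IFO environment is set up

for etm in ('ETMX', 'ETMY'):
    print(etm, 'L3 LOCK bias offset:', ezca['SUS-%s_L3_LOCK_BIAS_OFFSET' % etm])
    print(etm, 'L3 LOCK bias ramp  :', ezca['SUS-%s_L3_LOCK_BIAS_TRAMP' % etm])
    # ESD linearization bypass: exact channel name to be confirmed on MEDM
    # print(etm, 'ESD lin. bypass  :', ezca['SUS-%s_<LIN_BYPASS channel>' % etm])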
1. The script was updated to align the optical levers. If you did not align them, the script will do it between (a) and (b), i.e. in the first minutes. Measurements begin once the oplevs are aligned.
2. If you need to stop the charge measurements, there are two ways:
   1. (preferable) Press 'Enter' - it will stop the script no later than in 12 minutes, once all the biases and quadrants have been done for this cycle. It will restore all the settings.
   2. You can break it using Ctrl-C immediately, but then you will need to restore the ESD settings using ./ESD_Restore_Settings.py or do it manually. Using the second way, you also lose, and need to manually restore, the optical lever offsets for the ETMs.

Note: The main charge measurement scripts set all the settings back to where they found them. If you break the script and use ESD_Restore_Settings.py, it will set all values to "standard". (!) We are talking about changing the sign of the ESD bias voltage on ETMX; using ESD_Restore_Settings.py will change it to today's value.