The OMC DCPDs have proven to be useful for monitoring the test mass acoustic modes around 15 kHz, but there is a lot of low-pass filtering in the readout chain that renders them less useful for monitoring higher frequency acoustic modes. This is now being changed with modifications to the electronics that will provide separate, faster channels for PI monitoring:
The attached plot shows the magnitude response of the low-pass filtering in the previous case, and with the poles removed from the whitening and AA channels. It is no wonder that no PI modes have been seen above the 15-16 kHz grouping, as there is 40 dB of relative attenuation already at 25 kHz.
I also attach a 1 kHz - 10 MHz transfer function of the in-vacuum DCPD readout that Koji measured at Caltech:
Here are the transfer functions with the AA notch included. I had forgotten that the notch is a passive twin-T type, which by design has a Q = 1/4, so it is quite wide and should be taken into account. In the future, the AA notches should also be removed.
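To illustrate how wide a Q = 1/4 notch is, here is a minimal sketch in Python/scipy (the 65 kHz notch frequency is an assumed example value, not the measured AA board frequency):

# Minimal sketch of the magnitude response of a Q = 1/4 notch (passive twin-T).
# The 65 kHz notch frequency is an assumed example, not the measured AA board value.
import numpy as np
from scipy import signal

f0 = 65e3                  # assumed notch frequency [Hz]
Q = 0.25                   # passive twin-T quality factor
w0 = 2 * np.pi * f0

# H(s) = (s^2 + w0^2) / (s^2 + (w0/Q) s + w0^2)
num = [1, 0, w0**2]
den = [1, w0 / Q, w0**2]

f = np.logspace(3, 6, 1000)                       # 1 kHz - 1 MHz
_, h = signal.freqs(num, den, worN=2 * np.pi * f)
mag_db = 20 * np.log10(np.abs(h))

# With Q = 1/4 the notch is so broad that there is still several dB of
# attenuation a full octave below f0.
print('attenuation at f0/2: %.1f dB' % mag_db[np.argmin(np.abs(f - f0 / 2))])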
The infrasound mics have already been upgraded at LLO; today I removed the sensors at LHO, from the microphones in the CS, at EX, and at EY. This meant unscrewing the bottom of each instrument box and pulling out some of the equipment. As far as I could tell everything went as planned, and the equipment will now be sent away for upgrades.
WP 5954

All of the 18-bit DAC cards in h1susb123 and h1sush2a now successfully pass the autocal on both power up and restart of the IOP models. Results shown below:

h1susb123 - power up
[ 61.694760] h1iopsusb123: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 67.057232] h1iopsusb123: DAC AUTOCAL SUCCESS in 5340 milliseconds
[ 72.850938] h1iopsusb123: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 78.213461] h1iopsusb123: DAC AUTOCAL SUCCESS in 5345 milliseconds
[ 85.247010] h1iopsusb123: DAC AUTOCAL SUCCESS in 6571 milliseconds
[ 90.610149] h1iopsusb123: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 95.973497] h1iopsusb123: DAC AUTOCAL SUCCESS in 5345 milliseconds
[ 101.336018] h1iopsusb123: DAC AUTOCAL SUCCESS in 5340 milliseconds

h1susb123 - IOP model restart
[ 911.921930] h1iopsusb123: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 917.282286] h1iopsusb123: DAC AUTOCAL SUCCESS in 5345 milliseconds
[ 923.070351] h1iopsusb123: DAC AUTOCAL SUCCESS in 5341 milliseconds
[ 928.430617] h1iopsusb123: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 935.453979] h1iopsusb123: DAC AUTOCAL SUCCESS in 6576 milliseconds
[ 940.814416] h1iopsusb123: DAC AUTOCAL SUCCESS in 5345 milliseconds
[ 946.174750] h1iopsusb123: DAC AUTOCAL SUCCESS in 5345 milliseconds
[ 951.535191] h1iopsusb123: DAC AUTOCAL SUCCESS in 5341 milliseconds

h1sush2a - power up
[ 48.086815] h1iopsush2a: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 53.450158] h1iopsush2a: DAC AUTOCAL SUCCESS in 5345 milliseconds
[ 60.479701] h1iopsush2a: DAC AUTOCAL SUCCESS in 6576 milliseconds
[ 65.843077] h1iopsush2a: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 71.640028] h1iopsush2a: DAC AUTOCAL SUCCESS in 5345 milliseconds
[ 77.001396] h1iopsush2a: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 82.364713] h1iopsush2a: DAC AUTOCAL SUCCESS in 5345 milliseconds

h1sush2a - IOP model restart
[ 1252.909059] h1iopsush2a: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 1258.269410] h1iopsush2a: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 1265.292831] h1iopsush2a: DAC AUTOCAL SUCCESS in 6572 milliseconds
[ 1270.653176] h1iopsush2a: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 1276.441059] h1iopsush2a: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 1281.801414] h1iopsush2a: DAC AUTOCAL SUCCESS in 5344 milliseconds
[ 1287.161817] h1iopsush2a: DAC AUTOCAL SUCCESS in 5345 milliseconds
Last week, Rana was inquiring about how locklosses affect oplev sums.
I took a random sample of locklosses from O1 and found fairly consistent behavior after each lockloss, so here's the nitty gritty:
Attached is an example of a lockloss from Nov.
There's a DCC document and even a tool to look at these things.
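For anyone who wants to repeat the exercise, a rough sketch of pulling an oplev sum around a lockloss with gwpy (the GPS time and channel name below are placeholders, not the actual events or the tool referenced above):

# Rough sketch: trend an oplev SUM around a lockloss time.
# The GPS time and channel name are placeholders, not the events analyzed above.
from gwpy.timeseries import TimeSeries

lockloss_gps = 1130000000                          # placeholder lockloss GPS time
chan = 'H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ'           # example oplev sum channel name

data = TimeSeries.get(chan, lockloss_gps - 60, lockloss_gps + 60)
plot = data.plot()
plot.savefig('oplev_sum_lockloss.png')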
We adjusted the pressure on the pwrmtr water circuit from its nominal 5 bar to 4.5 bar. The flow rate to the laser heads decreased somewhat. We put the pressure back to ~5 bar to set the flow rate in the circuit to be over 1.25 l/min. In doing so, the other flow rates seemed to behave as expected. We will monitor the flow rates and pressures over the next few days to see if anything settles out/down. Jeff, Peter
Have been monitoring the PSL chiller trends during the day. The attached plot is for an 8 hour period. The spikes at 06:50 (PT) are when Peter and I varied the pressure regulators. The pressures and flows have flattened out, which is good. The head flows have also flattened out, which is also good. The head temperatures have been moving around a bit (by 0.1 degree). It appears that varying these pressure regulators may have stabilized the pressures and flows. Will check again in the morning to see if these trends hold.
Measured ~18 mV, 20 ohms between the table surface and the chassis of the ISS AOM. Installed DC block. Measured the same afterwards, so I removed the DC block. This clearly does not work as well as the one on the FSS AOM (for whatever reason).
I have set the level setpoint to 90% to exercise the new PID control.
Found that the PID output for CP5 was 0% even though the pump level was at 83% -> I switched CP5 to manual control with an LLCV % open of 90% -> The other vacuum group members have been experimenting with CP5 of late, so I'll "stay out of the kitchen" and let them continue/investigate.
Changing to 35% while the transfer line cools -> Need to do stuff in the other room, and will monitor the exhaust pressure sporadically.
Back to PID at 08:00 as the PID output had risen to 90%.
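For reference, a minimal sketch of the kind of level-to-valve PID loop involved (the gains, setpoint, loop period, and channel-access stubs below are made up for illustration, not the values or interface of the actual CP5 code):

# Minimal sketch of a pump-level PID driving the LLCV % open.
# Gains, setpoint, loop period, and the two channel-access stubs are illustrative only,
# not the production CP5 values or interface.
import time

SETPOINT = 90.0               # pump level setpoint [%]
KP, KI, KD = 1.0, 0.02, 0.0   # made-up gains
DT = 10.0                     # loop period [s]

def read_pump_level():
    """Placeholder for reading the CP5 pump level channel."""
    raise NotImplementedError

def write_llcv_percent_open(value):
    """Placeholder for writing the LLCV % open request."""
    raise NotImplementedError

integral = 0.0
prev_err = 0.0
while True:
    err = SETPOINT - read_pump_level()
    integral += err * DT
    deriv = (err - prev_err) / DT
    out = KP * err + KI * integral + KD * deriv
    write_llcv_percent_open(min(max(out, 0.0), 100.0))   # clamp to 0-100 %
    prev_err = err
    time.sleep(DT)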
Jenne, Carl, Sheila, TJ, Stefan, Lisa

We have a quite reliable locking sequence to 40 W at this point (recycling gain ~28.5, same SOFT offset strategy as over the weekend, with one set of offsets engaged before power up and one at 40 W; SRM alignment done by the end), so tonight we started going back to low noise while doing PI testing. Here is the list of things we successfully did; we still need to modify the ISC_LOCK code to be compatible with some of them (note that we are still on POPX, so the POP beam diverter is open):
Plot 1: Noise spectrum for tonight. The noise below 60 Hz is due to ASC and can still be addressed.
Plot 2: Auxiliary loops noise. Note the increased coupling of the mechanical resonances just below 60 Hz. I suspect that we are still not quite centered in the recycling cavity.
Plot 3: Aux loop coherences
After this lock broke, Carl and I turned the ITM ring heaters on at 0.5 W each, and left the end stations at 1.5 W each. Carl thinks that this will help with PI, and it will also set us up to try some common TCS tuning tomorrow.
Some of the things that Lisa mentioned are now in the guardian.
Attached is a screenshot of when (in PSL power terms) the OMC ASC rails.
For almost the first 3 hours of this lock, a logic problem in the guardian was toggling the gain of CSOFT P by 20 dB every 10 seconds. This should be fixed now.
Filters for PI damping have been broadened to 10 Hz, as the phase change over 0.5 Hz for some of the filters in use was greater than 200 degrees.
These 10 Hz wide filters may be problematic for the 15541.9 and 15542.6 Hz modes. I have tested damping these modes at low amplitudes with the 10 Hz wide filters, and they do damp; however, about 3 hours into this lock the 15541.9 and 15542.6 Hz modes were slowly pushing their way up. The filters I have put in have about 60 degrees of phase shift for a 1 Hz change in frequency.
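To illustrate the narrow-vs-broad trade-off, a rough sketch (filters designed with scipy purely for illustration; the real damping filters live in foton, and the widths and order here are examples):

# Sketch: phase change across a PI mode for a narrow vs. a broad bandpass filter.
# Designed with scipy for illustration only; not the actual foton damping filters.
import numpy as np
from scipy import signal

fs = 65536.0            # example model rate [Hz]
f0 = 15541.9            # mode of interest [Hz]

def bandpass_sos(width_hz):
    return signal.butter(2, [f0 - width_hz / 2, f0 + width_hz / 2],
                         btype='bandpass', output='sos', fs=fs)

for width in (1.0, 10.0):
    sos = bandpass_sos(width)
    f = np.linspace(f0 - 0.5, f0 + 0.5, 101)      # +/- 0.5 Hz around the mode
    _, h = signal.sosfreqz(sos, worN=f, fs=fs)
    phase = np.unwrap(np.angle(h))
    print('%4.1f Hz wide filter: %.0f deg phase change over 1 Hz'
          % (width, np.degrees(phase[-1] - phase[0])))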
I tested iwave on this pair of modes. It tracks the larger mode very well; however, once that mode is damped to a level below the next highest mode, it runs off and tracks that mode instead. This meant that the two iwave blocks, one running on each ETM PI model, generally pushed their test mass at whichever mode had the largest amplitude. I was using a tau of 10.
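For intuition only (this is a conceptual sketch of that style of tracker, not the actual iwave implementation): a line tracker of this kind heterodynes by its current frequency estimate, low-passes with time constant tau, and steers the frequency with the residual phase slope, so it naturally pulls toward whichever nearby line is loudest.

# Conceptual line-tracker sketch (NOT iwave itself), using scaled-down toy frequencies.
import numpy as np

def track_line(x, fs, f_guess, tau=10.0):
    alpha = 1.0 / (tau * fs)      # low-pass coefficient set by time constant tau [s]
    f_est = f_guess
    z = 0.0 + 0.0j                # low-passed complex amplitude
    phase = 0.0
    prev_angle = 0.0
    freqs = np.empty(len(x))
    for n, sample in enumerate(x):
        phase += 2 * np.pi * f_est / fs
        z += alpha * (sample * np.exp(-1j * phase) - z)
        angle = np.angle(z)
        # residual phase slope -> small frequency correction
        dphi = np.angle(np.exp(1j * (angle - prev_angle)))
        f_est += alpha * dphi * fs / (2 * np.pi)
        prev_angle = angle
        freqs[n] = f_est
    return freqs

# Two lines 0.7 Hz apart: the tracker settles on the stronger one, which is why
# two independent trackers both ended up pushing on the loudest mode.
fs = 2048.0
t = np.arange(0, 60, 1 / fs)
x = 1.0 * np.sin(2 * np.pi * 541.9 * t) + 0.3 * np.sin(2 * np.pi * 542.6 * t)
print('final frequency estimate: %.2f Hz' % track_line(x, fs, 542.2)[-1])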
We have a SUS_PI guardian now. The guardian has 4 states managed by ISC_LOCK: IFO_DOWN, OMC_TRACKING, PI_DAMPING, and ETMX_PI_DAMPING. The tracking state just turns on tracking for long-term testing. The damping states turn on the bandpass damping chains, with the settings that were tested today and very low gains for the settings in Terra's PI wiki. I will update the wiki tomorrow with some new settings.
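A minimal sketch of that state structure in the usual guardian style (the channel names, indices, gains, and edges below are placeholders, not the code that was installed; ezca is the channel-access object the guardian environment provides):

# Minimal sketch of the SUS_PI node structure described above.
# Channel names, state indices, gains, and edges are placeholders, NOT the installed code.
# 'ezca' is injected into the module namespace by the guardian environment at runtime.
from guardian import GuardState

nominal = 'PI_DAMPING'

class IFO_DOWN(GuardState):
    index = 1
    def main(self):
        # switch all PI damping drive off when the IFO is down
        ezca['SUS-ETMX_PI_DAMP_GAIN'] = 0          # placeholder channel
        ezca['SUS-ETMY_PI_DAMP_GAIN'] = 0          # placeholder channel
        return True

class OMC_TRACKING(GuardState):
    index = 2
    def main(self):
        # long-term mode tracking only, no damping drive
        ezca['SUS-PI_OMC_TRACKING_ON'] = 1         # placeholder channel
        return True

class PI_DAMPING(GuardState):
    index = 3
    def main(self):
        # engage the tested band-pass damping chains at very low gains
        ezca['SUS-ETMX_PI_DAMP_GAIN'] = 0.01       # placeholder value
        ezca['SUS-ETMY_PI_DAMP_GAIN'] = 0.01       # placeholder value
        return True

class ETMX_PI_DAMPING(GuardState):
    index = 4
    def main(self):
        ezca['SUS-ETMX_PI_DAMP_GAIN'] = 0.01       # placeholder value
        return True

edges = [('IFO_DOWN', 'OMC_TRACKING'),
         ('OMC_TRACKING', 'PI_DAMPING'),
         ('OMC_TRACKING', 'ETMX_PI_DAMPING')]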
The 0.5 W increase in the ITM ring heaters should push the optical mode to lower frequencies. Since we see PI in the 15540 Hz group of modes close to the end of the RoC thermal transient (1.5-2 hours into a 2-3 hour transient), I am hoping this will be enough to make these instabilities a little less aggressive. I have been doing some testing of the 15 kHz mode in anticipation that this TCS change will make these modes ring up more at the beginning of the lock.
Before leaving I stepped the ETMY ring heater by 0.5 W total to test the idea that we can push the 15542 Hz modes apart with a little heating.
The ITM ring heaters that Sheila activated seem to have introduced a substrate lens of about 8 uD in each optic, according to the TCS simulator. The ITMX HWS saw a consistent amount of change in the substrate lensing (~18 uD, since the round-trip lensing gives an extra factor of two). After the ring heaters were activated, the power recycling gain was on average 1% higher than in the previous 41 W stretch; this could be because the last lock stretch with the ring heaters was at a slightly lower PSL power of 39 W. I attach a trend of the relevant channels.
TITLE: 06/28 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: None
SHIFT SUMMARY: Commissioners working hard. Hit 40+Mpc for a bit.
Nutsinee, Jim, Dave:
The HWS code crashed at 07:50 PDT this morning; Nutsinee tried to restart it at 11:46 PDT, but it failed. We found that the 1 TB RAID is 100% full (it has data going back to December 2014). We are contacting Aidan to see how to proceed.
BTW: the /data file system on h1hwsmsr is NFS mounted at the end stations, so no HWS camera information is being recorded at the moment.
We deleted December 2014 ITMX and ITMY data to free up 21 GB of disk space on /data. The code now runs.
We need a long-term plan for how to keep these data if they need permanent archiving.
I have restarted both X and Y HWS codes this evening.
The disk was full again today. I deleted Jan-Feb 2015 data from the ITMX folder, freeing up 194 GB. The HWS code now runs again.
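Until there is a proper archiving plan, here is a rough sketch of the kind of automated pruning that could keep /data from filling up (the directory layout, the 90% threshold, and the per-optic folder names are assumptions, and anything needing permanent archiving would have to be copied off first):

# Rough sketch: delete the oldest HWS data directories when /data gets too full.
# The /data layout, the 90% threshold, and the per-optic folder names are assumptions,
# not an agreed policy; data that needs permanent archiving must be copied off first.
import os
import shutil

DATA_ROOT = '/data'          # assumed mount point of the HWS raid
THRESHOLD = 0.90             # start pruning above 90% usage

def usage_fraction(path):
    total, used, _free = shutil.disk_usage(path)
    return float(used) / total

def oldest_dirs(root):
    """Date-stamped data directories under root, oldest first (layout is an assumption)."""
    dirs = [os.path.join(root, d) for d in os.listdir(root)
            if os.path.isdir(os.path.join(root, d))]
    return sorted(dirs, key=os.path.getmtime)

for optic in ('ITMX', 'ITMY'):               # assumed per-optic subfolders
    root = os.path.join(DATA_ROOT, optic)
    for victim in oldest_dirs(root):
        if usage_fraction(DATA_ROOT) < THRESHOLD:
            break
        print('removing ' + victim)
        shutil.rmtree(victim)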