In an attempt to make both end stations function the same, we installed the Low Voltage Low Noise (LVLN) ESD driver below the existing HV ESD driver. We finished pulling the cables and hooked everything up. I verified that I could get low voltage out, but could not proceed to HV testing due to site computer work. I have left the controls for the system running through the LVLN system, but the high voltage cable is still being driven directly from the ESD driver. Re-testing will continue after the software issues are fixed.
J. Kissel, B. Weaver, C. Vorvick, C. Gray, D. Barker, J. Batch
Starting at 8:00a PDT this morning, we (Jim, Betsy, and myself) attempted to upgrade the RCG, then compile, install, and restart all front-end code/models in a piecemeal fashion. As we began to recover, we began to see problems. We had intended to do the models in an order that would be most facilitative to IFO recovery, namely PSL, SUS, and SEI first. Also, in order to speed up the process, we began compiling and installing those models independently and in parallel.
(1) We "found" (this is common CDS knowledge) that our simultaneous installs of PSL, SEI, and SUS front-end code all need to touch the /opt/rtcds/lho/h1/target/gds/param/testpoint.par file, and *may* have resulted in collisions. After the AWG processes on the SUS models would not start, Betsy flagged down Jim and found that, because we were all installing at the same time, some of the SUS did not make it into the testpoint.par file; when AWG_STREAM was started by the front-end start script it did not find its list of test points, so it simply never started. This "bug" had never been exposed because the sitewide front-end code had never had this many "experts" trying to run through the compile/install process in parallel like this. That's what we get for trying to be efficient.
Dave arrived halfway through the restarts (we had finished all corner SEI, SUS, PSL, and ISC upgrades by this point) and informed us that, last night, after he had removed and regenerated the /opt/rtcds/lho/h1/chans/ipc/H1.ipc file by compiling everything, he had moved it out of the way temporarily in case there were any model restarts overnight. If a model had been restarted with the new IPC file still in place, that model could potentially have been pointed to the wrong connection and wreaked havoc on the whole system. However, we (Jim, Betsy, and I) didn't know that Dave had done this, so when we began compiling this morning we were compiling on top of the old IPC file. Dave suggested that, as long as there were no IPC-related model changes between last night and when we got started, the IPC file should be fine, so we proceeded onward.
(2) However, as we got further along in the recovery process, we also found that some of the SUS (specifically those on HAM2a and HAM34) were not able to drive out past their IOP model DAC outputs. We had seen this problem before on a few previous maintenance days, so we tried the fix that had worked before -- restarting all front-end models again. When that *didn't* work, we began poking around the Independent Software IOP Watchdog screens and found that several of these watchdogs were suffering from constant, large IPC error rates. Suspecting that the /opt/rtcds/lho/h1/chans/ipc/H1.ipc file had also been corrupted by the parallel code installations, Dave suggested that we abandon ship. As such, starting at ~11:00a PDT, we are:
- Deleting the currently referenced IPC file
- Compiling the entire world, sequentially (a ~40 minute process)
- Installing the entire world, sequentially (a ~30 minute process)
- Killing the entire world
- Restarting the entire world
As of this entry, we're just now killing the entire world. Stay tuned!
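As an aside on item (1): one way the testpoint.par collisions could be avoided in the future is to serialize the install steps that touch the shared file with an advisory lock. The sketch below is purely illustrative and not an existing RTCDS tool; only the file path comes from the entry above, and the lock-file name and wrapped command are placeholders.

# Hypothetical sketch: serialize install steps that edit the shared
# testpoint.par file by taking an exclusive advisory lock first.
import fcntl
import subprocess

TESTPOINT_PAR = "/opt/rtcds/lho/h1/target/gds/param/testpoint.par"
LOCK_FILE = TESTPOINT_PAR + ".lock"   # hypothetical lock file

def install_model_serialized(install_cmd):
    """Run an install command while holding an exclusive lock, so two
    parallel installs cannot rewrite testpoint.par at the same time."""
    with open(LOCK_FILE, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # blocks until any other install finishes
        try:
            subprocess.call(install_cmd)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

# Example (hypothetical command):
# install_model_serialized(["make", "install-h1susitmy"])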
Image attached - two DOWN states, two NONE states, two CHECK_IR states - why? When did this happen? Who will fix it?
One clue to the issue is that selecting the "top" DOWN shows up in the guardian log, but selecting the "bottom" DOWN does not produce any guardian log entry.
Check out my first entry about this... here.
This is most likely a side effect of my attempts to improve the reload situation. It's obviously something that needs to be fixed.
In the meantime, you probably have to restart that node to clear up the REQUEST_ENUM:
guardctrl restart ISC_LOCK
You should also then close and reopen the MEDM screens once the node has been restarted.
I tried to turn the gauge back on through the Beckhoff interface by writing '01 02' to the binary field of the FB44:01 parameter but kept getting a response code corresponding to 'Emission ON / OFF failed (unspecified reason)' in FB44:03. Gerardo power cycled it and it came back on.
Let's call these hot filament ion gauges or Bayard-Alpert gauges rather than "hot cathode". thx
Turned back off per Gerardo's request. Was able to do so through the Beckhoff interface.
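For reference, the kind of interaction described above could be scripted roughly as sketched below, assuming the gauge's FB44 fields are reachable as EPICS records from Python (pyepics). The PV names here are purely hypothetical placeholders; only the '01 02' payload and the FB44:01 / FB44:03 parameter names come from the entry above.

# Hypothetical sketch only: the PV names below are placeholders, not the
# real Beckhoff/EPICS record names for this gauge.
import time
from epics import caput, caget

EMISSION_CMD_PV = "H0:VAC-EXAMPLE_GAUGE_FB44_01"     # placeholder for the FB44:01 binary field
EMISSION_STATUS_PV = "H0:VAC-EXAMPLE_GAUGE_FB44_03"  # placeholder for the FB44:03 response code

def request_emission_on():
    """Write the 'emission on' payload and read back the response code."""
    caput(EMISSION_CMD_PV, [0x01, 0x02])  # the '01 02' payload from the entry above
    time.sleep(1.0)                       # give the terminal time to respond
    return caget(EMISSION_STATUS_PV)

# A failure response code here corresponds to
# 'Emission ON / OFF failed (unspecified reason)', as seen above.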
For Maintenance, we took all HEPIs and ISIs to OFFLINE.
I requested OFFLINE for BSC1 but it stalled and did not go there.
Jim helped, and got it to go OFFLINE.
TJ and I looked into what happened and why; here's the list of events:
ITMY BSC1, Request for OFFLINE at 15:13 did not execute:
- SEI_ITMY request for OFFLINE did not take it to OFFLINE
- ISI_ITMY_ST1 left in Managed by User on 7/23 (found in the log, TJ and Cheryl)
- ISI_ITMY_ST1 Managed by User prevents SEI_ITMY from taking it OFFLINE
- the fix was that Jim requested INIT on SEI_ITMY
- this changed ISI_ITMY_ST1 from managed by User to managed by SEI_ITMY
- HOWEVER, HEPI was in transition to OFFLINE, and the INIT request interrupted that and sent HEPI back to the previous state, ROBUST_ISOLATED
- This brought HEPI back up when we wanted it OFFLINE
- Jim requested INIT again
- Jim requested OFFLINE again
- these second INIT/OFFLINE requests allowed SEI_ITMY to bring HEPI, ST1, and ST2 to OFFLINE
My questions (for Jim, TJ, Cheryl):
Be specific about what you mean by "Guardian didn't recognize its current state". It sounds like HPI_ITMY was always in full control of HEPI, and was reporting its state correctly. SEI_ITMY was a bit confused since one of its subordinates was stolen, but I think it still understood what states the subordinates were in.
When INIT is requested, the request is reset to its previous value right after the node jumps to INIT. Presumably SEI_ITMY's previous request was something that called for HPI to be ROBUST_ISOLATED when INIT was requested.
This should be considered an anomaly. Someone in the control room had manually intervened with the ITMY SEI system, thus ISI_ITMY_ST1 reporting "USER" as the manager. The intervening user should have reset the system back to its nominal configuration (ISI_ITMY_ST1 managed by "SEI_ITMY") when they were done, which would have prevented this issue from occurring.
All of the problems here were caused by someone intervening in guardian and not resetting it properly. Guardian has specifically been programmed not to second-guess the users. In that case the users have to be conscientious about resetting things appropriately when they're done.
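For future reference, one way to catch this condition before requesting OFFLINE is to check the subordinate's manager channel first. The sketch below is a hedged illustration, not an official procedure: it assumes Guardian exposes a per-node MANAGER channel following the usual H1:GRD-<node>_ naming, and uses pyepics.

# Sketch: check whether ISI_ITMY_ST1 is still managed by SEI_ITMY before
# asking SEI_ITMY to take the chamber OFFLINE.
# Assumes a per-node GRD-<node>_MANAGER channel (hedged assumption).
from epics import caget

def check_manager(node="ISI_ITMY_ST1", expected="SEI_ITMY", ifo="H1"):
    manager = caget("%s:GRD-%s_MANAGER" % (ifo, node), as_string=True)
    if manager != expected:
        print("WARNING: %s is managed by %r (expected %r)." % (node, manager, expected))
        print("Re-request INIT, then OFFLINE, on %s before taking the chamber down."
              % expected)
    return manager

# Usage:
# check_manager()   # warns if the node was left managed by USER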
ITMX & ITMY HEPIs had counters up around 400-500 counts, and were reset this morning.
I forgot to run the PSL Checklist during my shift yesterday. Today I ran our PSLweekly script; here is the output. Model work was recently/currently being done on the PSL today, which is why many items are only ~10 min old.
Items to note:
ISS Diffracted power is HIGH!
Laser Status:
SysStat is good
Front End power is 32.51W (should be around 30 W)
Frontend Watch is GREEN
HPO Watch is RED
PMC:
It has been locked 0.0 days, 0.0 hr 10.0 minutes (should be days/weeks)
Reflected power is 2.061 W and PowerSum = 24.37 W.
FSS:
It has been locked for 0.0 days 0.0 h and 10.0 min (should be days/weeks)
TPD[V] = 1.521V (min 0.9V)
ISS:
The diffracted power is around 14.4% (should be 5-9%)
Last saturation event was 0.0 days 0.0 hours and 10.0 minutes ago (should be days/weeks)
Keep in mind that this was taken on a maintenance day, after the mode cleaner had been taken down and a snapshot from before the AOM diffracted power was properly adjusted had been restored. Not the best time to be taking vital signs :)
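For context, the kind of sanity check the PSLweekly script performs can be sketched as below. The channel names and nominal ranges are assumptions modeled on the report above, not the actual script.

# Hedged sketch of a PSL vital-signs check; the channel names and
# thresholds are assumptions modeled on the report above.
from epics import caget

CHECKS = [
    # (description, assumed channel, low, high)
    ("Front End power [W]",      "H1:PSL-PWR_HPL_DC_OUT_DQ",   28.0, 32.0),
    ("ISS diffracted power [%]", "H1:PSL-ISS_DIFFRACTION_AVG",  5.0,  9.0),
    ("FSS TPD [V]",              "H1:PSL-FSS_TPD_DC_OUT_DQ",    0.9,  None),
]

for name, chan, lo, hi in CHECKS:
    value = caget(chan)
    if value is None:
        print("%-26s channel not found" % name)
        continue
    ok = (lo is None or value >= lo) and (hi is None or value <= hi)
    print("%-26s %8.3f  %s" % (name, value, "OK" if ok else "OUT OF RANGE"))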
J. Kissel, C. Vorvick, B. Weaver, C. Gray We attempted to bring down the IFO in a controlled fashion via the Guardian at 7:58a PDT. Cheryl will post details of how this went -- rather unexpectedly -- later.
J. Kissel, B. Weaver
In getting ready for maintenance day, we've done the following to the SDF while the IFO was fully locked at 60+ [Mpc]:
Accepted:
- New coil driver states (in COILOUTF, ESDOUTF, and BIO STATE REQUESTs) on ETMX, ETMY, SRM, MC2
- SR3 gain set to zero
- New DRIVEALIGN GAINs for off-diagonal elements on ETMX, ETMY, ITMX, ITMY
- A few new TRAMP times on BS
- New work done in CAL-CS DARM calibration filters (H1:CAL-CS_DARM_FE_ETMY_L2_LOCK_L [turning OFF FM6, turning ON FM7] and CAL-CS_DARM_ERR_GAIN [from 1.32 to 1.22])
- Change in the 538.1 [Hz] calibration line frequency (from 538.1 to 329.9 [Hz]) and associated EXC amplitude (from 2.4 to 1.0 [ct])
- In the LSC model, new TR_X/Y QPD SUM OFFSETS
- ASC DHARD P TRAMP increase from 10 to 20 [sec]
- The output switches of the OMC M1 LOCK filters. We *think* these should be ON, given the model change to support pushing these length and alignment control signals through the DRIVEALIGN matrix (LHO aLOG 19714), but we don't think Sheila got to commissioning them.
- OMC-ASC_QPD_A_YAW_OFFSET (was -0.11, saved as -0.15)
- PSL FSS COMMON_GAIN, which appears to have been tuned (LHO aLOG 19715) from 20.17544 to 20.7
- PSL-ISS_LOOP_STATE_REQUEST (was 0 -- which we think corresponds to the ISS OUTER LOOP being OFF -- now 32700 -- which we think corresponds to it being ON)
- FEC-93_DACDT_ENABLE (that's for the IOPISCEY model): DAC duotone signal being ON (turned on yesterday, July 27)
- A ton of LSCAUX LOCKIN / DEMOD stuff that (we think) has to do with Kiwamu's calibration line cavity pole tracker (LHO aLOG 19852)
Reverted:
- New(?) work changing the OMC DCPD to DARM input matrix element (H1:LSC-ARM_INPUT_MTRX_RAMPING_1_1) from 0 to 13.3050658281 -- this should be controlled by guardian! In fact, we tried to revert it, and some guardian is FORCING it back to the 13.3 number. So we'll leave this as is, but it sounds like we should eventually not monitor it.
- MATCH gain on ISIHAM2, from 1.0 to 1.036
- PCALX and PCALY calibration lines had been turned OFF, so they will come back ON (534.7 [Hz] @ 19300 [ct] and 540.7 [Hz] @ 9900 [ct])
Things that had tiny differences, which we forced to a reasonable precision with caput:
- caput H1:IMC-WFS_GAIN 0.1
- caput H1:PSL-FSS_COMMON_GAIN 20.7
Saved and loaded new EPICS DB (to clear uninitialized channels and/or channels not found) for:
- ASC (NOT INITIALIZED)
- OMC (NOT INITIALIZED)
- ODC MASTER (NOT FOUNDs and NOT INITIALIZED)
- SUS ITMX (NOT FOUNDs and NOT INITIALIZED)
- LSC (NOT FOUNDs)
- ISIETMX (NOT FOUNDs)
- ISIETMY (NOT FOUNDs)
- PEM EX and PEM EY (NOT FOUNDs)
Hit LOAD COEFFICIENTS on H1CALEX and H1CALEY since I saw that they were modified last night. All LOAD COEFFICIENTs will happen with today's reboots anyway, but since we were already looking at the diffs of these, we went ahead and did them.
Pre-maintenance restarts. New PI models with associated DAQ restart.
model restarts logged for Mon 27/Jul/2015
2015_07_27 16:05 h1susetmxpi
2015_07_27 16:09 h1susetmxpi
2015_07_27 16:20 h1susetmxpi
2015_07_27 16:23 h1susetmxpi
2015_07_27 16:48 h1susetmxpi
2015_07_27 16:50 h1susetmypi
2015_07_27 16:51 h1susetmypi
2015_07_27 16:56 h1broadcast0
2015_07_27 16:56 h1dc0
2015_07_27 16:56 h1fw0
2015_07_27 16:56 h1fw1
2015_07_27 16:56 h1nds0
2015_07_27 16:56 h1nds1
None of these things changed the DARM noise (just the calibration).
As a follow-up to point #4: I redid the bias reduction test, this time by reducing the voltage from 380 V to 190 V.
As before, there was no obvious change in the DARM noise. [See attachment.]
Took ASC_CHARD_P FM1 (20 dB gain) out of guardian, and instead turned on FM9 in ASC_CHARD_P and ASC_CHARD_Y (LP9, a low pass at 9 Hz). Also added 25 Hz-40 Hz band-stop filters to FM9 of ASC_DHARD_P and ASC_DHARD_Y (called Ncheck). These can safely be engaged in full lock, but they are not in guardian for now.
Following alog 19856, we took the same coherence measurements (OMC_DCPDs vs AS_C_LF vs AS_A_RF36) in different configurations.
a) Plot 1: We locked the OMC on a 45 MHz sideband at 15 W of input power (to avoid PD saturation). In this configuration we retook the coherence measurement between AS_C_LF and OMC_DC, as well as AS_A_RF36 and OMC_DC. (Also, the ISS was off in this state.)
- While we see coherence above 2 Hz - similar to that in alog 19856 - nothing is visible below that.
- In the power spectrum of the sideband we can identify two features:
- At 19032 Hz we see the effect of the sideband 00 mode going through arm resonance (actually, we are seeing the peak in between where the lower and upper audio sidebands go through resonance, resulting in maximum FM-to-AM conversion in between).
- Similarly, at 9700 Hz we see the feature that corresponds to the sideband 02 mode going through resonance. This might be a way to fine-tweak the PRC-to-ARM mode matching when we do common CO2 heating runs.
b) Plot 2: Comparing ISS_SECONDLOOP_GAIN at 25 dB (dashed coherence) vs 0 dB (solid coherence).
- The 14 kHz gain peaking of the ISS clearly shows up, but below ~8 kHz there is no obvious change.
- For 25 dB we estimate the UGF to be at 3 kHz.
- For 0 dB we estimate the UGF to be around a few 100 Hz.
c) Plot 3: Comparing FSS_COMMON_GAIN at 20.7 dB vs 14.7 dB.
- I do not see any difference in the coherence.
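For reference, the coherence traces in these plots are the standard magnitude-squared coherence between the two photodiode time series; a minimal offline equivalent would look like the sketch below (the data-fetching step is omitted, and the sampling rate and FFT length are assumptions).

# Minimal sketch of the coherence computation used in these plots.
# The data-fetching step (e.g. nds2) is omitted; fs and the FFT length
# are assumptions.
import numpy as np
from scipy.signal import coherence

def omc_coherence(omc_dcpd, as_c_lf, fs=16384.0, fft_len_s=1.0):
    """Magnitude-squared coherence between OMC DCPD sum and AS_C_LF,
    averaged over 50%-overlapping segments."""
    nperseg = int(fs * fft_len_s)
    return coherence(omc_dcpd, as_c_lf, fs=fs,
                     nperseg=nperseg, noverlap=nperseg // 2)

# Example with stand-in white-noise data:
# f, coh = omc_coherence(np.random.randn(16384 * 60), np.random.randn(16384 * 60))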
Will monitor hourly -> CP8's LLCV will likely increase from its current value of ~ 50% open to something higher -> won't need to fix until tomorrow as long as CP8's level stays out of alarm -> otherwise will
I happened to witness the lock loss after 16h. We had several PRM saturations spread over ~8 minutes, before one of them took down the interferometer.
Here are some cavity pole data using a Pcal line (see alog 19852 for some details):
The data are 28 hours long and contain three lock stretches: the first lasted for 9-ish hours, the second about 16 hours (as Stefan reported above), and the third 2 hours. As shown in the plot, the frequency of the cavity pole was stable on a time scale of more than 2 hours; it does not show obvious drift on such a time scale. This is good. On the other hand, as the interferometer heats up, the frequency of the cavity pole drops by approximately 40 Hz at the beginning of every lock. This is a known behavior (see for example alog 18500). I do not see clear coherence of the cavity pole with the oplev signals, as opposed to the previous measurement (alog 19907), presumably due to better interferometer stability.
Darkhan is planning to perform a more accurate and thorough study of the Pcal line for these particular lock stretches.
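For reference, the basic idea behind extracting the cavity pole from a single Pcal line can be sketched as follows: if the sensing function is approximated as a single pole, the phase of DARM relative to the Pcal drive at the line frequency gives the pole directly. This is a simplified illustration only; the channel handling, calibration factors, and sign conventions are assumptions, not the actual tracker described in alog 19852.

# Simplified sketch of single-line cavity-pole tracking.
# Assumes the sensing function is a pure single pole, 1 / (1 + i f/f_p),
# so the phase of DARM relative to the Pcal drive at the line frequency
# f_line is phi = -arctan(f_line / f_p)  =>  f_p = f_line / tan(-phi).
# All other calibration factors are ignored here (an assumption).
import numpy as np

def demod(x, fs, f_line):
    """Complex demodulation of time series x at f_line (single-bin DFT)."""
    t = np.arange(len(x)) / fs
    return np.mean(x * np.exp(-2j * np.pi * f_line * t))

def cavity_pole(darm_err, pcal_rx, fs, f_line):
    """Estimate the cavity pole frequency [Hz] from one Pcal line."""
    ratio = demod(darm_err, fs, f_line) / demod(pcal_rx, fs, f_line)
    phi = np.angle(ratio)   # assumed to be pole phase only
    return f_line / np.tan(-phi)

# e.g. f_p = cavity_pole(darm_ts, pcal_ts, fs=16384.0, f_line=534.7)
# (534.7 Hz is one of the Pcal line frequencies mentioned elsewhere in this log)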
As a test, you could inject a few lines in this neighborhood to see if, instead of a cavity pole drift (which seems like it would take a big change in the arm loss), it is instead the SRC detuning changing the phase. With one line only, these two effects probably cannot be distinguished.
Rana,
It sounds like an interesting idea. I need to think a little bit more about it, but looking at a plot in my old alog (17876), having additional lines at around 100-ish Hz and 500 Hz may suffice to resolve the SRC detuning. It would be very difficult if the detuning turns out to be small, though, because a small detuning would look almost like a moving cavity pole. I will try checking it with high frequency Pcal lines at around 550 Hz for these lock stretches. /* by the way I disabled them today -- alog 19973 */
In addition to the time series that I posted, I made another time series plot with the corner HWSs. This was a part of the effort to see impacts of the thermal transient on the DARM cavity pole frequency.
There seems to be a correlation between the spherical power of ITMY and the cavity pole in the first two-ish hours of every lock stretch. However, one thing which makes me suspicious is that the time constant of the spherical power seems a bit shorter than the one for the cavity pole and also the arm powers -- see the plot shown below. I don't have a good explanation for it right now.
Unfortunately the data from the ITMX HWS did not look healthy (i.e., the spherical power suspiciously stayed at a high value regardless of the interferometer state), and that's why I did not plot it. Additionally, the ITMY data did not look great either, since it showed a suspiciously quiet time starting at around t=3 hours and came back to a very different value at around t=5.5 hours. I am checking with Elli and Nutsinee about the health of the HWSs.