A while ago we saw that negative 400 volts reduced the 60 Hz line in DARM by a factor of 2 (alog 30778).
We just checked again and this is still true, so the guardian will now set this in the low noise ETMY ESD guardian state. We are doing this using the gain in the bias filter bank, so that the bias chosen to mitigate charge will still be used whenever we aren't locked.
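A minimal guardian-style sketch of that approach (my illustration, not the production ISC_LOCK code; the channel name and gain value are guesses):

from guardian import GuardState

class LOWNOISE_ESD_ETMY(GuardState):
    # 'ezca' is provided by the guardian infrastructure at runtime
    def main(self):
        # Scale the bias through the filter-bank gain only, so the offset
        # itself (the value chosen to mitigate charge) is never rewritten and
        # therefore stays in effect whenever we are not locked.
        ezca['SUS-ETMY_L3_LOCK_BIAS_GAIN'] = -1.0   # hypothetical channel name; placeholder gain
        return True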
Attached is a 60 day trend of PT140, which is one of the new Inficon BPG402s(?). IP7 and IP8 have been a steady 5000 volts for this time period. Is this a gauge thing? I haven't been intimate with what Gerardo, John and Chandra have learned regarding the behavior of these new wide-range Bayard-Alpert/Pirani hybrids, but this slope looks "not insignificant".
That slope looks really fishy. Are both IPs fully pumping? What does HAM6 pressure look like (also hot cathode ion gauge)? Did PT 170 and 180 flatten out after degassing?
We think that the pressure increase is due to temperature; see the attached plot and the aLOG noting the temperature change.
Since we are talking temperature change in the LVEA, note the vertical change on some of the optics (BS and ITMs); others are affected as well.
TravisS, DarkhanT, Yuki, SudarshanK
Calibration measurements for the Pcal Y-end were completed on 2016/10/31 (before the working standard got damaged - alog 31077). The analysis shows that the calibration is consistent with past results. We will scrutinize the results in more detail during the Pcal call next week.
It appears to have crashed after losing connection to H1:GRD-ISC_LOCK_STATE_N (See attached screenshot).
We were already locked when I restarted it, and it restarted at 0, so the measured time of this lock stretch will be wrong.
The fault in the fast shutter check appears to have been cleared for now. Unfortunately I am still not entirely certain of the mechanism by which it was resolved or if it will return. We have been to NLN twice since. I believe Stefan broke the first lock moving the beam spot on PRM. We are currently on the second lock and I have just finished running the three a2l scripts. All three reported errors at the end:

cd /opt/rtcds/userapps/release/isc/common/scripts/decoup

./a2l_min_LHO.py
Traceback (most recent call last):
  File "./stop_osc_LHO.py", line 29, in <module>
    matrix.asc_ads_lo_pit[osc, optic]=0
  File "/ligo/apps/linux-x86_64/cdsutils/lib/python2.7/site-packages/cdsutils/matrix.py", line 300, in __setitem__
    self.put(row, col, value)
  File "/ligo/apps/linux-x86_64/cdsutils/lib/python2.7/site-packages/cdsutils/matrix.py", line 266, in put
    inds = list(self.__rc_iter(row, col))
  File "/ligo/apps/linux-x86_64/cdsutils/lib/python2.7/site-packages/cdsutils/matrix.py", line 151, in __rc_iter
    rs = [self.__rows[row]]
KeyError: 'OSC2'
a2l script done!

./a2l_min_PR2.py and a2l_min_PR3.py each printed the identical traceback ending in KeyError: 'OSC2', followed by "a2l script done!".

The commissioners are in the Thursday commissioning meeting.
Sorry about the a2l error message. Everything gets set correctly before the step that errored, so the result is okay, but obviously you still shouldn't get errors.
In the "stop everything" script we had the columns and rows of some matrices backwards - it ran fine for me now after fixing it, and I've checked it in.
J. Kissel, M. Evans, D. Barker, H. Radkins

A confusing bit of settings wrangling* after the unplanned corner station computer restarts on Tuesday (LHO aLOG 31075) meant that a large fraction of the EPICS records in the SUSITMPI model (the ITM PI system) were wrong. As such, we believe this was the cause of the battles with Mode 27's PI a few nights ago (LHO aLOG 31111).

In order to fix the problem, we used the hourly burt backups in /ligo/cds/lho/h1/burt/yyyy/mm/dd/hh:mm/ to restore all settings to Monday (2016/10/31), before the computer restarts. Further, Matt performed a few spot checks on the system and judged it good.

*Settings wrangling: There were several compounding problems with the SDF system, namely
(1) the front end did not use the safe.snap file upon reboot, and restored bogus values, and
(2) the safe.snap file, which we'd thought had been kept up to date, had not been so since May.

Why?

Regarding (2): The safe.snap file for SUSITMPI used upon restart, /opt/rtcds/lho/h1/target/h1susitmpi/h1susitmpiepics/burt/safe.snap, is a softlink to /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susitmpi_safe.snap. Unfortunately, *only* that model's safe.snap had its permissions mistakenly set to a single user (coincidentally me, because I was the one who'd created the softlink from the target directory to the userapps repo), and *not* to the controls group. That means Terra Hardwick, who had been custodian of the settings for this system, was not able to write to this file, so the settings to be restored upon computer reboot had not been updated since May 2016. Unfortunately, the only way to find out that a write didn't work is to look in the log file, which lives in /opt/rtcds/lho/h1/log/${modelname}/ioc.log, and none of us (save Dave) remembered this file existed, let alone looked at it, before yesterday**. There are other files made (as described in Hugh's LHO aLOG 31163), but those files are not used by the front end upon reboot.

I've since fixed the permissions on this file, and we can now confirm that anyone can write to it (i.e. accept & confirm DIFFs). We've also confirmed that there are no other safe.snap files that have their write permissions incorrectly restricted to a single user.

** Even worse, it looks like there's a bug in the log system -- even when we confirm that we have written to the file, the log reports a failure, e.g.

***************************************************
Wed Nov 2 16:39:10 2016
Save TABLE as SDF: /opt/rtcds/lho/h1/target/h1susitmpi/h1susitmpiepics/burt/safe_161102_163910.snap
***************************************************
Wed Nov 2 16:39:10 2016
ERROR Unable to set group-write on /opt/rtcds/lho/h1/target/h1susitmpi/h1susitmpiepics/burt/safe.snap - Operation not permitted
***************************************************
Wed Nov 2 16:39:10 2016
FAILED FILE SAVE /opt/rtcds/lho/h1/target/h1susitmpi/h1susitmpiepics/burt/safe.snap
***************************************************

Regarding (1): This is quite alarming. Dave has raised an FRS ticket (FRS 6588) and fears it may be an RCG bug. I wish I could give you more information on this, but I just don't know it.

In summary, we believe the issues with SUSITMPI have been resolved, but there's a good bit of scary stuff left in the SDF system. We'll be working with the CDS team to find a path forward.
The LLO CDS system has scripts running that do regular checks on file permissions on the /opt/rtcds file system to try to catch these. Please contact Michael Thomas for details. We'll check that we are looking for this issue as well (and are acting when problems are found)
I've opened FRS6596 to do the same snap file permissions checking as LLO.
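For what it's worth, a minimal sketch of what such a check could look like (the glob pattern and the "group-writable" criterion are my assumptions; the actual LLO scripts are maintained by Michael Thomas and may differ):

import glob
import os
import stat

# flag any userapps snap file that is not group-writable
for snap in glob.glob('/opt/rtcds/userapps/release/*/h1/burtfiles/*.snap'):
    mode = os.stat(snap).st_mode
    if not mode & stat.S_IWGRP:
        print('not group-writable: %s' % snap)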
Not understanding what SDF was doing in certain situations led me to test a few things and report.
I looked at the HAM2 HEPI files as an example. In the front-end target area all files were owned by controls and writable by owner and group. There were h1hpiham2_burt_date_time.snap files created by the front end when it cleanly shuts down. There are also safe and OBSERVE date_time snaps, plus the safe.snap and OBSERVE.snap files, which are symbolic links to the USERAPPS area.
In the USERAPPS/hpi/h1/burtfiles area there are all the h1hpi{chamber}_safe & OBSERVE.snap files. The safe files are owned by controls and the OBSERVE files are owned by hugh.radkins; all files are writable by owner and group.
TESTS:
On the SDF TABLE medm, I made a change (diff) and accepted and confirmed it on that medm. The file was updated in USERAPPS, AND a dated snap owned by controls was created in the target area--not expected.
On the SDF_SAVE medm, selecting OVERWRITE and clicking SAVE FILE does the same thing: it does overwrite the USERAPPS snap file, but it also unexpectedly creates a new dated snap in the target area.
On the SDF_SAVE medm, selecting TIME NOW and clicking SAVE FILE does what one would expect; it creates a new dated snap in the target area and does NOT update the USERAPPS snap file.
Next, I changed the write permission on the {chamber}_OBSERVE.snap file in the USERAPPS area to owner (hugh) only and restarted the medm as controls. When I accepted and confirmed a diff, it created a new dated snap in the target area but failed to update the snap file in the USERAPPS area. No notification of this failure was seen on the workstation.
So if someone, logged in as themselves, attempts to accept and confirm changes with SDF, they will fail to update the USERAPPS snap file if:
*) they are not the owner and the file permissions are not group writable, or
*) they use the SDF_SAVE screen (rare for most commissioners maybe?) and do not change the FILE OPTIONS SELECTION to OVERWRITE.
Okay--gotta meeting, forgive the errors and omissions.
TITLE: 11/03 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Broken?
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Earlier during the night we had an issue with instability during Engage_SRC_Part2 (addressed by Sheila -- see alog 31158). Later the IFO kept losing lock at Increase_Power. Since the lockloss tool doesn't work I couldn't pull up the logs prior to the locklosses. Visually, nothing in the control and error signals was going unstable; the lock just dropped as the power increased, at about the same place every time. After the third lockloss Guardian complained about the fast shutter. The test failed and I couldn't move on with the locking sequence. Richard went out to the rack and mentioned that he didn't get a good voltage reading when the shutter was opened, but got an okay reading when the shutter was closed. We checked the GS13 to make sure that it saw the shutter motion, and it did. However, the SYS channels that are supposed to report shutter state and voltage didn't see anything (maybe this is why the shutter test failed?). So to my understanding the shutter is at least somewhat functioning. Anyway, right now it seems like the ISC_LOCK guardian is stalled because of the FAST_SHUTTER guardian. I was able to make it all the way to DRMI_ASC_OFFLOAD before it refused to move on.
Ops Info: On an unrelated note, violin modes have been coming back high after every lockloss. At one point they saturated the OMC DCPDs. So be mindful before moving on past DC_READOUT_TRANSITION.
There was no sign of anything going unstable; the lock just dropped. I tried engaging the new PRC1 filter that Sheila removed from the Guardian as the IFO goes to high power, but it's still causing instability at that point.
Guardian won't move on; I got a message telling me to check the fast shutter trigger. The "Fast Shutter Plot" button doesn't work, so I manually made some plots following alog 29689 (at least one DQ channel didn't exist so I picked something close). It seems to me like the shutter did not close during the last lockloss at Increase_Power. I couldn't find an alog explaining how to clear this message and make Guardian move on again. I found instructions on how to test the fast shutter on the H1 Troubleshooting page. I followed them by selecting LOW_ARM_POWER on the LOCKLOSS_SHUTTER_CHECK guardian (this had to be done manually; there's no path from SHUTTER_FAIL to LOW_ARM_POWER). Then I selected TEST_SHUTTER on the FAST_SHUTTER guardian, and got the message "Fast shutter failed tests! Do not power up!!"
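For reference, a minimal sketch of the kind of quick look described above (the channel names and GPS time are placeholders -- the real channel list is in alog 29689, and at least one DQ channel there had to be substituted):

import nds2
import matplotlib.pyplot as plt

t_lockloss = 1162111111   # placeholder GPS time of the lockloss
channels = [
    'H1:SYS-MOTION_C_SHUTTER_G_TRIGGER_VOLTS',   # hypothetical shutter voltage readback
    'H1:ISI-HAM6_BLND_GS13Z_IN1_DQ',             # hypothetical GS13 witness of the shutter kick
]

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
bufs = conn.fetch(t_lockloss - 2, t_lockloss + 2, channels)

for name, buf in zip(channels, bufs):
    plt.plot(buf.data, label=name)
plt.legend()
plt.show()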
I think I broke the interferometer =(
Tonight (Nov 3rd around 8:20 UTC) we saw something none of us had seen before: the ETMY violin modes rang up very suddenly. The noise in DARM was extremely high, and we saw beat notes between violin modes at low frequencies in DARM once things calmed down.
Nutsinee has been damping them very effectively, but we don't know why they rang up so suddenly.
I have made scripts that we can run that will adjust the a2l gains for PR2 and PR3. These use the same a2l engine that the test mass script does; they just set the settings for PR2 and PR3. They can be found in the same location (just checked in): .../userapps/isc/common/scripts/decoup/a2l_min_PR*.py, where * is either 2 or 3.
Each script has been run tonight, so we now have non-zero a2l elements for both those mirrors.
Nutsinee is having trouble relocking, with instabilities around 3.5 Hz that ring up when she engages the PRC1 ASC. We removed both the new offset from POP A pit and the new a2l from PR3, hoping that this will help.
Stefan's offset for POP A was -0.05, Jenne's a2l coefficients were PR3_M3_DRIVEALIGN P2L 0.85 and Y2L 1.85
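In case they need to be put back, a minimal ezca sketch for restoring (or re-zeroing) those values -- the channel names are my guesses from the usual naming convention, not copied from the scripts:

from ezca import Ezca

ezca = Ezca(ifo='H1')

# Stefan's POP A pitch offset and Jenne's PR3 a2l coefficients (values quoted above)
ezca['ASC-POP_A_PIT_OFFSET'] = -0.05              # hypothetical channel name
ezca['SUS-PR3_M3_DRIVEALIGN_P2L_GAIN'] = 0.85     # hypothetical channel name
ezca['SUS-PR3_M3_DRIVEALIGN_Y2L_GAIN'] = 1.85     # hypothetical channel name

# to remove them again, set the drivealign gains back to 0 and restore the previous offset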
The problem was the new cutoffs added to the PRC1 loop tonight; they seemed fine at low noise but aren't stable when the loops first come on. We have just removed them from the guardian for now.
Hmmm, odd. Anyhow, the cutoffs now come on in LowNoiseASC rather than earlier.
I tried to set up an acoustic injection that reproduced several of the bumps in DARM (last time I only did one shaker bump), and tried PR2 pitch, yaw and length injections and IM2 pitch. The best was PR2 pitch; see the attached figure. The behavior at the other bumps was similar.
x5 scatter reduction at 50Hz.
=======================
Taking Robert's observation as a hint, we moved the PR2 spot position in PIT (using the PR2_SPOT_MOVE state), looking for variations in scatter coupling while driving with acoustic noise.
Indeed, as we move from -275 urad to -302 urad of PR3 PIT alignment, the scatter peak at 50 Hz varies like a bar code.
The starting alignment (PR3 PIT = -296.72 urad) corresponds to a maximum in scatter at 50 Hz. We found minima at -301.72, -299.32, -292.47, -290.22 and -278.52 urad of PR3.
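One way to read this "bar code" behavior, under the usual scattered-light picture (my interpretation, not something measured here): a scattered field of relative amplitude a_s recombining with the main beam at static phase phi_0, with a small path modulation delta x(t) from the acoustic drive, couples roughly as

h_sc(t) \propto a_s \sin(\phi_0 + (4\pi/\lambda)\,\delta x(t)) \approx a_s [\sin\phi_0 + (4\pi/\lambda)\cos\phi_0\,\delta x(t)]

so the linear coupling at the drive frequency scales with cos(phi_0). Moving the spot changes the static path length and hence phi_0, so the 50 Hz peak should pass through maxima and minima each time phi_0 advances by pi, which would give a roughly periodic pattern of minima like the one listed above.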
We thus picked -292.47 urad, which seemed to be the best performing spot. It corresponds to a PR2 PIT alignment of 1767.24 urad (see the alignment snapshot).
The attached plot 1 shows the undriven DARM noise (blue), the acoustically driven DARM noise at the original position (-296.72urad of PR3 PIT), and the acoustically driven DARM noise at the final position (-292.47urad).
Plot 2 is an alignment snap shot.
Finally, to lock this position in, we had to add a POP_A_PIT_OFFSET of -0.05. Jenne ran a PR2 A2L script (demodulating PRCL) to mark the new PR2 spot position. The required M3 drive align numbers are in snapshot 3.
Matt Daniel Fil (WP 6294)
The MCL and MCF readbacks have a 10/100 Hz Sallen-Key whitening stage which amplifies the high frequency spectrum to get above ADC noise. For a while we have observed a 20-50 mHz/√Hz flat noise level in these spectra when we are locked with the IMC only. Looking with an oscilloscope we estimated about 10 mV of signal between 100 kHz and 1 MHz, before the whitening. This seems too much for the AA board, so we included additional low-pass filters in the readbacks with cut-offs around 15 kHz: a 15/150 kHz pole-zero was added to the Sallen-Key stage, and another 15 kHz pole was added to the output stage.
In detail (common mode board, IMC, s/n 1102626):
The attached spectra now show a frequency noise level which is compatible with the one observed in full lock. The coherence is also improved. The ADC noise is not too far away in regions with reduced coherence.
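As a rough numerical sketch of the filter shapes involved (my own approximation of each stage as an ideal single pole/zero, not the board schematic):

import numpy as np
from scipy import signal

f = np.logspace(1, 6, 1000)          # 10 Hz to 1 MHz
w = 2 * np.pi * f

def mag(zeros_hz, poles_hz):
    # |H(f)| for a DC-gain-of-1 filter with the given zero/pole frequencies
    z = [-2 * np.pi * fz for fz in zeros_hz]
    p = [-2 * np.pi * fp for fp in poles_hz]
    k = np.prod(np.abs(p)) / np.prod(np.abs(z)) if zeros_hz else np.prod(np.abs(p))
    _, h = signal.freqs_zpk(z, p, k, worN=w)
    return np.abs(h)

whitening = mag([10], [100])         # original 10/100 Hz Sallen-Key whitening
new_pz    = mag([150e3], [15e3])     # added 15/150 kHz pole-zero
new_pole  = mag([], [15e3])          # added 15 kHz pole on the output stage

# combined readback response: still ~x10 above 100 Hz, but now rolling off the
# 100 kHz - 1 MHz content that showed up as ~10 mV before the whitening
total = whitening * new_pz * new_pole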
Here is a comparison between MCF fully locked and 2W IMC only (REF traces). The changes are much smaller now, indicating that MCF sees frequency noise from the laser.
The IMC shot noise limit here should be about 1 mHz/√Hz, assuming 0.3 mW of light (mostly carrier) on the PD with the IMC locked, 5 mW of light on the PD with the IMC unlocked, and a modulation depth of 0.01 rad.
On the attached snap of the Terramon window, the second event is a largish EQ in the Middle East. The EQ distance to H1 is 1000 km shorter than the distance to L1, but the computed site velocity at H1 is half that at L1. Is this one of those cases where the waves arrive from opposite directions, so the crust encountered by the travelling surface waves is different? Interesting info I'm sure everyone wants to know. I see a similar discrepancy for the G1 and V1 velocities, but those waves are certainly travelling in the same direction. Maybe it is just the local site geology being taken into account? HunterG? Thanks-H
Hey Hugh!
Apologies for the late response. I'm going to paraphrase what Michael Coughlin told me in a recent discussion.
We have historically attributed the different behavior to the sites themselves rather than to any particular selection effect from LHO's and LLO's locations relative to most EQs. The dependence of amplitude on EQ direction would be interesting to look at, since we essentially fit it away by taking only distance into account. Might be a good summer project for someone.
--Hunter
(Carlos, Richard, Gerardo)
Y-End RGA is ON.
By "on" I assume that the electronics are energized (i.e. the fan is running) but not the filament?
That is correct, Kyle.