Stefan, Matt, Rana, Evan
We rephased REFL9 in full lock by driving a line at 5443 Hz into the error point of the common-mode board.
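For reference, rephasing a demodulated sensor like this amounts to choosing the rotation angle that puts the driven line entirely into the I quadrant. A minimal sketch of the arithmetic on made-up numbers (not the actual common-mode board settings):

```python
import numpy as np

# Synthetic demodulated line amplitudes at the drive frequency (e.g. 5443 Hz).
# In practice these would be measured from the REFL9 I/Q outputs.
I_amp, Q_amp = 0.6, 0.8

# Demod phase rotation that zeroes the Q quadrant:
theta = np.arctan2(Q_amp, I_amp)

# Apply the rotation and check the result.
I_new = I_amp * np.cos(theta) + Q_amp * np.sin(theta)
Q_new = -I_amp * np.sin(theta) + Q_amp * np.cos(theta)
print(I_new, Q_new)  # all of the line power is now in I, Q is ~0
```

With the line fully in I, the I signal can then be used as the CARM error signal.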
Then we switched control of CARM from REFLAIR9I to REFL9I in the usual fashion (by repeatedly measuring the OLTF of CARM while slowly swapping the relative strengths of the two sensors). Settings are shown in the attachment, along with an OLTF.
The noise in DARM is not better than with REFLAIR control, but at least this time it is not worse.
We were locked at 24 Watts for just over 2 hours before we rang up a PI that shows up in the Y arm QPDs at 15734 Hz. I increased the ring heater power (for both arms) from 0.5 to 0.6 Watts. A template with the QPD IOP channels is attached. I tried to reduce the power, but we lost lock when I did that, perhaps because the ISS second loop was on. The lockloss was at about 3:00 UTC on July 17.
We suspect this was not a PI, but rather the roll mode.
It would be useful if someone could track down which optic this was by looking at the roll mode peak RMS trends and checking whether it in fact saturated any of the actuators.
Rana
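One way to check for a roll-mode ring-up is to band-pass the suspect channel around the mode frequency and trend the RMS. A self-contained sketch on synthetic data (the ~13.8 Hz mode frequency, sample rate, and growth rate here are assumptions for illustration, not taken from this lock):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 256.0   # sample rate of the data (assumed)
f0 = 13.8    # approximate roll-mode frequency (assumption)
dur = 60     # seconds of data

# Synthetic data: a mode whose amplitude grows linearly, plus broadband noise.
rng = np.random.default_rng(0)
t = np.arange(0, dur, 1 / fs)
x = (t / dur) * np.sin(2 * np.pi * f0 * t) + 0.01 * rng.standard_normal(t.size)

# Band-pass around the mode, then compute a 1-second RMS trend.
sos = butter(4, [f0 - 1, f0 + 1], btype="bandpass", fs=fs, output="sos")
y = sosfiltfilt(sos, x)
rms = np.sqrt(np.mean(y.reshape(dur, int(fs)) ** 2, axis=1))

# A steadily growing band-limited RMS like this is the ring-up signature.
print(rms[0], rms[-1])
```

Comparing such trends across the four test masses (and against the actuator saturation counts) would identify the optic.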
The ETMX ring heater currently has asymmetrical heating (0.5 W upper ring, 0.6 W lower ring). Not sure if you'd like to keep this setting, so I'm leaving it as is.
Matt, Sheila
Matt looked at this lock this morning, and saw that although the roll mode might have increased in the last few minutes, it likely wasn't the culprit. However, there was a line at 1055 Hz that appeared and grew in the last 20 minutes of the lock, shown in the attached screenshot. This would indicate that the PI could be at 15329 Hz or 17439 Hz, so this is a new PI for us (past incidents: alog 17903 and alog 18965). As far as I know this is also a different frequency from what has been seen at LLO.
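The two candidate frequencies follow from aliasing: with the front end sampling at 16384 Hz, a mechanical mode at either 16384 − 1055 Hz or 16384 + 1055 Hz folds down to an apparent 1055 Hz line in the recorded data. The arithmetic:

```python
fs = 16384    # front-end sample rate in Hz
f_obs = 1055  # observed line frequency in Hz

# First-order alias candidates for the true mode frequency:
candidates = (fs - f_obs, fs + f_obs)
print(candidates)  # (15329, 17439)
```

Faster-sampled (IOP) channels would resolve which of the two it actually is.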
Unfortunately, in my hurry to grab some fast channels for the QPDs, I used the LLO channel names, but we are using a different ADC, so I got the wrong channels. So we don't really know which arm this was in.
I've made a template that anyone who suspects that a PI is rung up can run:
/ligo/home/sheila.dwyer/ParametricInstabilities/PI_IOP_template.xml
The asymmetry in the ring heater was my mistake.
I have modified monit on h1pemmy to log system overloads to its local /var/log file system. This will give us an indication of how system loading correlates with CA freeze-up events.
Travis, Patrick
08:40 earthquake breaks lock, Leonid starting charging measurements
09:29 fire department through gate
09:41 fire department gone
09:48 Katie to LVEA to take pictures of magnetometers
09:55 Robert and SURF students to LVEA to work on PEM
10:16 Jeff B. and Karen to clean diode room
10:22 Vinny to mid stations to test PEM sensors
10:36 Jeff B. and Karen done
10:40 Leonid done with charging measurements, Ops starting locking sequence
10:42 Robert and SURF students done
10:53 Kyle and Gerardo driving forklift in high bay
11:00 Jason taking training tours into diode room
11:31 Kyle and Gerardo done
12:07 Dave restarted h1oaf
13:47 fire department through gate
14:02 Nutsinee has had violin mode damping off for the last hour, turning it back on
15:00 out for Ops meeting
The lift pumps that service the LSB are not functioning. See FRS 3342. There will be a contractor on site tomorrow morning to remove the pumps and diagnose the problem. We may have to resort to porta-potties temporarily at the LSB depending on how long the repairs take.
[WP 5359] LHO rtbuild was upgraded to RCG tag-2.9.5, this is now the default build release.
h1oaf was built against 2.9.5 and restarted. This will be used to test the fix of the True RMS part.
Leo, Jeff
This morning we had some time after the earthquake to perform OPLEV charge measurements. Results for ETMX are consistent with the negative trend; the charge is now about 10 Volts on all the quadrants. Results for ETMY do not show a significant trend and are below 10 V Effective Bias Voltage.
Note: I'll check it again, but it seems that we had the wrong sign of applied bias voltage in the charge measurements for ETMY; the applied bias was multiplied by -1 in the filters. The ETMY plots use the corrected sign for June 12 to July 14.
Summary: the effect of wind in the sub-0.1 Hz tilt band is very local (little coherence between seismometers 20 m apart) and more than a factor of two greater in the HAM2 and HAM5 seismometer locations than in the beer garden. We may be less sensitive to wind if the sensor correction seismometer(s) are located only in the beer garden. Also, because tilt is so local, real tilt meters, like Krishna's at EX, should be as close as possible to the chambers.
Wind tilts our buildings, which produces spurious control signals from servo seismometers and can make it difficult to lock or maintain lock. A previous log showed that there was almost no wind tilt at a location 40 m from the EY building, making it clear that wind tilt is a local effect (Link). As a result, Hugh and I have been wondering if the HAM 5 seismometer location is better because it is down-wind for most storms or if the beer garden is better because it is furthest from walls. With Hugh’s help, I looked at chance coincidences between wind storms and seismometer huddles over the last few months as well as data with seismometers in the 3 locations. I think the answer is that the beer garden shows substantially less tilt than either the HAM 5 or 2 locations.
Figure 1 shows how local the tilt is. The blue seismometer traces are for "huddled" seismometers (about 2 m apart) in the beer garden and show high coherence below 0.1 Hz. But the red seismometer trace shows much lower coherence in this tilt band between the beer garden seismometer and the HAM5 seismometer, only about 20 m away. During high wind, I also found low coherence in the tilt band between the beer garden and the HAM2 seismometer locations. The local nature of the tilt has implications for true tilt meters used to correct the tilt signal from seismometers. The tilt meter at EX is about 4 m from the chamber, and in Figure 1 we saw very little coherence at 20 m. While it may not be enough of a return to move this one, it may be best to try to place the next one even closer and, to the degree possible, engineer the BRS so that it can be as close as possible to, or even under, the chamber.
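The huddle-test logic here — high coherence when two instruments see the same ground motion, low coherence when the tilt is local to each one — can be illustrated with scipy on synthetic data (the frequencies, sample rate, and noise levels below are made up for the sketch):

```python
import numpy as np
from scipy.signal import coherence

fs = 16.0  # Hz, a plausible rate for low-frequency seismic data
rng = np.random.default_rng(0)
n = 1 << 16

# Common ground motion at 62.5 mHz, seen by both huddled seismometers,
# plus independent instrument/tilt noise in each.
t = np.arange(n) / fs
common = np.sin(2 * np.pi * 0.0625 * t)
x = common + 0.1 * rng.standard_normal(n)
y = common + 0.1 * rng.standard_normal(n)

f, cxy = coherence(x, y, fs=fs, nperseg=1024)
i = np.argmin(np.abs(f - 0.0625))
print(cxy[i])                 # near 1: the pair sees the same motion
print(np.mean(cxy[f > 1.0]))  # low: independent noise is incoherent
```

In the real measurement the roles are reversed: the drop in measured coherence between separated instruments is what tells you the wind tilt is local.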
Figures 2a and 2b show that tilt is very different at different locations in the LVEA and, of the 3 locations, the beer garden is the best. In both horizontal axes, the tilt in the beer garden is at least a factor of two better than the best of the HAM2 and HAM5 locations. It is about a factor of ten better than the worst of the HAM2 or HAM5 locations. I checked the 3 windstorms during the period when all 3 seismometers were working and, for each time that I examined, the beer garden seismometer was better. Figure 3 shows the two seismometers that were available during the windy period that caused locking problems last night: the tilt noise was half as much in the unused beer garden seismometer as in the HAM5 seismometer that was used for sensor correction. So, a sensor correction seismometer in the beer garden may be better than one in the HAM2 or HAM5 locations in the frequency band dominated by tilt instead of real acceleration (roughly below 0.5 Hz). This morning Jim switched sensor correction to the beer garden seismometer.
Finally, when we have two STSs available, I think we should do a more detailed study of tilt-band coherence length, and attenuation with distance from the walls.
Robert, Hugh
Just to check - Are you sure that there was no activity in the LVEA during these data times? That will also cause local distortions of the floor and might confuse the results.
Actually, people on the floor make very different spectral signatures than wind and would be easy to identify in any of the spectra. But, nevertheless, I did check for any anomalous spikes in the 30 to 100 mHz band of the PEM seismometers, or, for more recent data, the equivalent bands of the ISI seismometers.
For anyone who is interested in classifying locklosses, or in earthquakes, here is an example of a full low noise lock broken by an earthquake.
Hugh and Robert asked that all sensor correction for the corner station be switched to the ITMY STS. I've done this (because interferometry is being impeded by an earthquake) and accepted the changes in the SDF. I'm assuming Robert and Hugh have an imminent alog describing the need for the change.
Jenne, Evan, Stefan
- Reverted the ramp times in PREP_TR back to Monday's values - this seemed to be more reliable, and did not produce any transients at that stage of the script.
- Repeatedly brought the machine up to 24 W, but we still see a hint of the 0.41 Hz CSOFT resonance, and occasionally a yaw gain oscillation at maybe 1.5 Hz.
- The DC oplev PIT on SR3 was stuck on the limiter.
- We also cleaned up the Guardian flow chart. It now has a single flow line. (The previous spider web caused more lock losses than it was worth.)
- Since the 0.41 Hz problem seems to come and go, we decided to kill it with a good CSOFT loop design - using a 2nd UGF at the resonance to damp it. Evan is still testing this filter.
Just to summarize, we seem to be suffering from several issues at 23+ W:
I was able to get a 30 minute lock at 23.7 W with the following:
Other notes:
We were able to get two more stable locks at 24 W, this time in the full low noise state.
However, Patrick and I found that EY L2 was periodically saturating, with the quadrants having something like 50,000 ct rms, coming primarily from the microseism. So the EY L1/L2 crossover is now increased in the Guardian (the L1 filter gain was 0.16, and now it is 0.3). The rms drive on L2 is now more like 30,000 ct rms. L1 is 6000 ct rms, and L3 is 1000 ct rms.
Kiwamu, Stefan, Jenne, Matt, Evan,
We've almost recovered from yesterday's "maintenance". Kiwamu found this morning that the X arm camera servo had been off; I think this was a miscommunication/mistake during maintenance recovery, which probably led us into some bad alignment last night. Stefan/Kiwamu/Jenne did some realignment this morning and early afternoon, and we were able to lock with good recycling gain and were stable at 24 Watts.
The gain of the OMC DC PDs was wrong; this was picked up by the SDF, but I am not sure what maintenance activity would have caused that. The ETMY ESD bias had the wrong sign; Stefan added it in SDF.
We were able to lock momentarily at low noise, and lost it for reasons that we don't understand yet, but might have been a slow alignment instability.
About 2 hours ago the wind picked up; we have gusts up to 35-40 mph, forecast to stay this way until 1 am. Our improvement to the offloading of DRMI has certainly helped with windy locking: DRMI locking is still slow, but it does lock and stay locked. Stefan has raised the gains that are used for acquisition, so we can try to get some statistics about DRMI locking times. Now we get to the next windy locking problem, which seems to be the handoff from ALS COMM to the transmitted arm powers for CARM.
The guardian has been having an occasional problem where it cannot access channels (the channels exist and are spelled correctly). One solution has been to reload the guardian a few times; just now I had to reload the DRMI guardian about 10 times. Eventually Stefan paused and reloaded it and the problem went away.
Screen shot attached.
We've also had some more EPICS freeze incidents today: 4:42:39 UTC, local 17:25:50.
The alignment settings of the IFO this morning are a bit different than they were according to the hourly burt snap files from ~midnight on Monday night, when locking was good. The slider values from Monday's good locking are consistent with the alignment values in hourly snaps from a few days before Monday as well. Attached is the snap file from Monday at 11:10 pm in the event the commissioners need to do a complete restore. Commissioners report that it is confusing why the ASC systems did not recover the pointing even though the starting point was not quite right. Evan is working on it now.
Most notably:
IM4 is 1400 uRad different in Pitch
IM4 is 200 uRad different in Yaw
MC3 is 100 uRad different in Pitch
MC1 is 80 uRad different in Pitch
PRM is 30 uRad different in Pitch
PRM is 40 uRad different in Yaw
Turns out the IMC slider values changed significantly, causing the IMC to hang at a different place. This caused a lot of the trouble we faced over the last day. Once we simply restored Monday's alignment slider values, the IMC mirror positions moved pretty much back to where they were. This meant we also reverted the alignment references back to Monday's values. This includes the following settings:
H1:ALS-X_CAM_ITM_PIT_OFS 256
H1:ALS-X_CAM_ITM_YAW_OFS 340.9
H1:ALS-Y_CAM_ITM_PIT_OFS 303.9
H1:ALS-Y_CAM_ITM_YAW_OFS 433.5
H1:ASC-X_TR_A_PIT_OFFSET 0
H1:ASC-X_TR_A_YAW_OFFSET -0.095
H1:ASC-X_TR_B_PIT_OFFSET -0.11
H1:ASC-X_TR_B_YAW_OFFSET -0.067
H1:ASC-Y_TR_A_PIT_OFFSET -0.128
H1:ASC-Y_TR_A_YAW_OFFSET -0.174
H1:ASC-Y_TR_B_PIT_OFFSET -0.516
H1:ASC-Y_TR_B_YAW_OFFSET -0.1
H1:ASC-POP_A_PIT_OFFSET 0.38
H1:ASC-POP_A_YAW_OFFSET 0.248
H1:ALS-X_QPD_A_PIT_OFFSET 0.2
H1:ALS-X_QPD_A_YAW_OFFSET 0
H1:ALS-X_QPD_B_PIT_OFFSET 0
H1:ALS-X_QPD_B_YAW_OFFSET -0.05
H1:ALS-Y_QPD_A_PIT_OFFSET 0.1
H1:ALS-Y_QPD_A_YAW_OFFSET -0.4
H1:ALS-Y_QPD_B_PIT_OFFSET 0
H1:ALS-Y_QPD_B_YAW_OFFSET 0.15
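For a restore like this, a small script beats typing values in one at a time. A sketch that just emits the corresponding caput commands for review (the channel/value pairs are the ones listed in this entry; the actual writes would go through pyepics or the caput CLI, which this sketch deliberately does not call):

```python
# Alignment references reverted to Monday's values (subset of the list above;
# the remaining channels would be added the same way).
settings = {
    "H1:ALS-X_CAM_ITM_PIT_OFS": 256,
    "H1:ALS-X_CAM_ITM_YAW_OFS": 340.9,
    "H1:ASC-POP_A_PIT_OFFSET": 0.38,
    "H1:ASC-POP_A_YAW_OFFSET": 0.248,
}

for chan, val in settings.items():
    # Print rather than write, so the restore can be reviewed first.
    print(f"caput {chan} {val}")
```

Keeping the known-good values in one place like this also makes it easy to diff against the hourly burt snaps.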
CO2X laser RTD sensor alarm (H1:TCS-ITMX_CO2_INTRLK_RTD_OR_IR_ALRM) tripped at 14 Jul 15 17:15:00 UTC this morning (10:15am), shutting off the CO2X laser. Folks were pulling cables near HAM4 this morning, which is probably why it tripped. CO2X laser was restarted at 19:30:00 UTC, and is now running normally again.
Just adding some words, parroting what Elli told me: this temperature sensor (RTD) is nominally supposed to be on "the" viewport (HAM4? Some BSC? The Injection port for the laser? Dunno). This sensor is not mounted on the viewport currently, it's mounted on "the" chassis, which (I believe) resides in the TCS remote racks by HAM4. She's seen this in the past: even looking at this sensor wrong (my words, not hers) while you're cabling / electronics-ing near HAM4, this sensor trips. As she says, this was noticed and recovered by her before it became an issue with the IFO because recovery went much slower than anticipated.
If I understand correctly which sensor you're talking about, then yes, this should be on the viewport (the BSC viewport through which the laser is injected). The viewport sensor, though, is an IR sensor; but for some parts of the wiring in the control box (and thus on the MEDM screen) the IR sensor and RTD sensor are wired in together, making it hard to know which one caused the trip. It's supposed to monitor scattered light coming off that viewport. It is very sensitive and can be affected by humans standing near it, light being shone onto it (one of the ways to set the trip level is to hold a lighter up to it), maybe also heat from electronics, etc. So just sitting in the rack, I am not at all surprised that it is tripping all the time and causing grief.
My suggestion is to try to get this installed on the viewport if you can. Otherwise, if you can't and it really is causing problems all the time, there is a pot inside the control box which you can adjust to change the level at which it trips.