Yesterday, I measured the ISS 3rd loop while we were locked at 40W. This is motivated by the fact that we can't increase the input power past 40W without seeing this loop go unstable (see alog 28482).
This measurement was taken before the PRM was moved, so the recycling gain was very low (~25). Still to do is measure the 3rd loop after we recover the recycling gain.
The screenshot shows Kiwamu's 20W measurement in blue (alog 27940; note that the phase is wrong by 180 deg), and the new 40W, low-recycling-gain measurement in red (the red and hidden green traces are the same). Interestingly, the gain at the peak is much higher at 40W than it is at 20W. I'm not totally sure why that is. The loop still looks stable, though, so I don't know why it would go crazy with another couple of watts.
Jonathan, Jim, Dave:
Yesterday morning we increased the log level in h1fw0's daqdrc from 2 to 10. With that setting it ran for over an hour, writing all frame types except commissioning frames. I then restarted it, now with a log level of 30 and writing both commissioning and science frames (but not trends). h1fw0 has not crashed since then, through to today's DAQ restart (23 hours). Interestingly, the log file has not increased its verbosity, and we are now seeing random retransmission requests similar to what LLO used to see.
In the meantime, h1fw1 continues to restart. We increased its log level from 2 to 30; it still restarts roughly once an hour on average (today's restarts are shown below). h1fw1 is writing all four types of frames.
We are building a third frame writer (h1fw2) which will be used to test new daqd code, for example mutex control of the writing threads to prevent more than one file from being written at a time.
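As a rough, hypothetical sketch of that idea (daqd itself is not Python; the names, paths and frame types below are invented for illustration), serializing the writer threads with a single mutex could look like this:

    import os
    import tempfile
    import threading

    # Hypothetical sketch: one lock shared by all writer threads, so that
    # only one frame file is ever being written at a time.
    write_lock = threading.Lock()
    out_dir = tempfile.mkdtemp()

    def write_frame(frame_type, payload):
        with write_lock:  # serialize the actual file write
            path = os.path.join(out_dir, frame_type + ".gwf")
            with open(path, "wb") as f:
                f.write(payload)

    threads = [threading.Thread(target=write_frame, args=(ft, b"\x00" * 1024))
               for ft in ("science", "commissioning", "second_trend", "minute_trend")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()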
Carl, Dave:
This morning we installed the latest SUS PI model changes. The new system uses a brand new model (h1susprocpi) on the h1oaf0 machine to run additional signal processing. Ideally this model would be located on the h1lsc0 machine, but that machine currently has no spare cores, whereas h1oaf0 does. We may upgrade h1lsc0 to a faster 10-core machine and move the model at a later time.
Additional Dolphin IPC channels have been added to send signals between h1susprocpi and h1omcpi, h1omc, and h1susitmpi. Nothing changed at the RFM IPC level; only corner station Dolphin modifications were needed.
The models changed are: h1susetmxpi, h1susetmypi, h1pemex, h1pemey, h1omc, h1omcpi, h1susitmpi and h1susprocpi (new)
The details of h1susprocpi:
front end = h1oaf0
rate = 2048Hz
dcuid = 71
cpu-num = 5
DAQ was restarted to resync to the new models.
Still to do:
Dave: add new model to overviews and CDS, set SDF to monitor everything, make OBSERVE.snap a link to safe.snap
Carl: create and load all filters
Travis called and said the laser had just tripped out. From the status screen it looks like it was a flow rate error in the power meter cooling circuit - the same suspected reason as last time. This in turn trips the power watchdog. Hopefully this is a sensor problem and not a case of hardening of the arteries. Note the laser shutter(s) remain open. The injection relock counter was reset to 0 on both the Beckhoff PC and the MEDM screen.
J. Kissel, C. Whittle While playing around with actuating on the ITMs to test out Chris's new charge scripts, we noticed that the voltage monitor for the DC bias was railed at -32768 for both ITMX and ITMY. Worried that it was us, we trended the channel for the past 12 days. Other than the shenanigans on Tuesday for the timing upgrade, it looks like these monitors have been railed constantly for all 12 of those days, even though, unless we're up and running in DC Readout, there's no requested bias voltage on ITMX, and ITMY's requested voltage has been changing. We'll have a look at the electronics when convenient.
Jim reported yesterday that the cold restart of the ISI after the long Tuesday shutdown led to a thermal sag, causing an increased vertical drive, which in turn caused a yaw drift... Not sure why the horizontal loops would not keep the yaw at a minimum, unless the heating is also impacting Stage0. Attached are 48 hours of the ITMY vertical drives, one horizontal drive, and the OpLev signals.
Okay, based on the horizontal drive, the loop is dealing with the yaw, but of course only relative to Stage0. Is HEPI or Stage0 the source of the actual yaw, or is it a secondary effect? Not sure I can reconcile that. Also, while V1 and V2 show a pretty flat drive before and since the isolation cycle yesterday afternoon, V3 and H2 show a continuing trend that was present before the re-isolation.
This morning, all the other BSC ISIs have gone through an isolation cycle.
J. Kissel, T. Sadecki, C. Whittle We found the IFO up and running this morning when we got in, and it had been up for the last 8 hours. It was stagnant at 2 [W] input power, in DC READOUT TRANSITION. However, I noticed immediately that just about all IFO signals were reporting huge slow oscillations. I attached a screenshot of one of Sheila's strip tools that shows the oscillation was present in the ASC signals for at least 80 minutes; a trend of the entire lock stretch then revealed that there was a power oscillation in the Y arm at similar frequencies. As of writing this aLOG we've lost lock; it's unclear whether that was because of Chris's testing of the ITM charge measurements, or whether the oscillation just finally got out of control. I won't speculate on what this might have been, but I want to capture it for the record and talk with the vanguard in case it has something to do with the new SRM alignment scheme (i.e. LHO aLOG 28538).
We've had several locklosses throughout the day due to the 18040 Hz ETMY PI mode.
This last lock at 25W, I finally found a combination of settings that seemed to be actively damping it according to the OMC DCPDs. (Using the QPD as the error signal, ETMY_PI_DAMP_MODE4, -60deg +30deg -3000gain). In the OMC DCPD spectrum, the peak was going down. According to the RMS monitor of this mode in the DCPDs, I was winning. However, according to the RMS monitor of this mode in the ETMY QPD, the mode was still going up. We lost lock because of this. I don't know of any DQed high freq channels for the QPDs, so we can't look back in time at the spectrum to see if that helps illuminate the situation.
Anyhow, if someone can help me understand how a mode can seem like it's going down according to the OMC DCPDs, but still actually be increasing according to the transmon QPDs and causing a lockloss, that would be great.
> Anyhow, if someone can help me understand how a mode can seem like it's going down according to the OMC DCPDs,
> but still actually be increasing according to the transmon QPDs and causing a lockloss, that would be great.
I guess there could be many explanations, but one easy one is that the dirt which produces the OMC signal might be alignment dependent. That is, for odd (n+m) optical modes the OMC transmission will depend on the OMC alignment. Since the OMC DC signal should (in a perfect world) be zero for these higher order modes, it might not be surprising that it sometimes actually approaches zero.
Title: 07/20/2016, Day Shift 23:00 – 07:00 (16:00 – 00:00) All times in UTC (PT)
State of H1: IFO locked. Wind is a Light to Gentle Breeze (4 to 12 mph). Seismic activity is low, with the exception of End-Y, which is a bit elevated but should pose no operational difficulties. Microseism is also low.
Commissioning: Commissioners are commissioning.
Outgoing Operator: Ed
Support: Jenne, Sheila, Kiwamu
Incoming Operator: N/A

Activity Log: All Times in UTC (PT)
23:00 (16:00) Take over from Ed
23:41 (16:41) Gerardo – Back from N2 overfill

Shift Detail Summary: The IFO has been up and down as the commissioners were working through a variety of issues and improvements. Did one initial alignment and several relocks; all went well. Environmental conditions remained favorable for commissioning work during the entire shift. There were 3 earthquakes during the last hour of the shift (4.8, 4.8, 5.3 mag), which (per Terramon) did not represent much of a threat to the IFO lock. They did pass without any notice.
Cleared TIM errors: H1IOPSUSEY, H1SUSETMY, H1SUSETMYPI, H1SUSETMXPI
Cleared IPC errors: H1IOPSEIEY, H1ISIETMY, H1PEMEY, H1ALSEY, H1IOPSEIEX, H1PEMEX, H1ALSEX, H1SCEX
Cleared ADC errors: H1IOPSUSEX
We certainly noticed the earthquake, but we held lock at 20W throughout.
Hugh, JeffK, JimW
Short version:
I noticed this afternoon that all of the BSC-ISIs were pushing large DC-ish drives on their St1 vertical actuators (generally about 2000 counts, normal drives are ~100). When I trended the BS and ITMX drives against their Sus oplevs, there seems to be a very clear correlation between large ISI drives and oplev yaw. I think this may be the cause of the drifts that Sheila complained about yesterday. The first attached plot shows the BS 3 vertical drives and BS oplev yaw, second is for the same for ITMX.
One possible cause of this is the actuators heating up the ISI. During the timing kerfuffle yesterday, all of the ISIs sat for ~6 hours with no actuator drive. The control room then recovered all of the platforms and proceeded to start aligning the interferometer, so the ISIs went from "cold" immediately to an aligned state. Arnaud found a similar effect at LLO, where large ISI tidal drives caused angular drift during locks when LLO tried offloading tidal control to the ISI ( https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=20222 ).
In the future we should try to at least get the ISIs damped as soon as possible after similar maintenance, or go through a cycle of de-isolating/re-isolating after the ISIs have been running for a while, before trying to relock the IFO. The morning of Jul 11th, ITMX was down for almost 14hrs, but the damping loops were still on, and the actuators were still getting a drive signal. When the ISI was re-isolated, the oplev didn't see any appreciable drift over the next several hours (see third plot). Similarly, sometime before the commissioners start locking tomorrow, all the BSCs should be taken down to damped and then re-isolated.
We also considered that something in the timing system could have been responsible, but were unable to find anything to support that. None of the timing signals we looked at showed anything like the drift we saw in the ISIs. We also saw a similar alignment drift after the June power outage, which would seem to support a thermal drift over timing shenanigans.
I can think of two fixes for this,
1) we could have a new "off" state where the actuators have a DC value that represents the offset that they were at in the running state
2) we measure the temperature of the stages (Quad top mass vertical position for stage two, possibly stage one vertical force for stage one) and feed forward to the yaw control; a rough sketch of this follows below.
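A minimal sketch of what fix 2 could look like, assuming a simple static coupling between excess vertical drive and yaw; the coupling coefficient, nominal drive level, and function name are all hypothetical, not a measured calibration:

    # Hypothetical feedforward: use the average St1 vertical drive as a
    # thermal-expansion proxy and subtract a scaled copy from the yaw (RZ)
    # control. The coupling coefficient is made up for illustration.

    NOMINAL_V_DRIVE = 100.0    # counts; typical St1 vertical drive when warm
    YAW_PER_COUNT = -2.0e-3    # urad of yaw per count of excess drive (assumed)

    def yaw_feedforward(v1, v2, v3):
        """Return a yaw correction from the three St1 vertical drives."""
        excess = (v1 + v2 + v3) / 3.0 - NOMINAL_V_DRIVE
        return YAW_PER_COUNT * excess

    # Example: drives near 2000 counts right after a cold restart
    print(yaw_feedforward(2000.0, 1950.0, 2050.0))   # ~ -3.8 urad of correction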
Arnaud's comment, in case we come across this problem again: turn off the RZ loops. The drift is caused (we think) by thermal expansion of the stage; this changes the free-hanging position of the ISI, and the drift that the interferometer sees is likely dominated by RZ. If we turn the loop off, the stage will stay in a neutral position rather than following a spurious thermally driven alignment.
We have been studying the list of blip glitches that Miriam Cabero Müller generated for O1. We noticed that the rate of blip glitches increased dramatically during two time periods, see figure 1. We first checked if the glitch rate might be correlated with outside temperature, in case the blip glitches were produced by beam tube particulate events. The correlation with outside temperature was strong, but, for beam tube particulate, we expected it to be stronger with rate of temperature change than with temperature, and it was not. So we checked relative humidity and found that inside relative humidity was well correlated with outside temperature (and glitch rate), most likely because of the extra heating needed in cold spells. A plot of the blip glitch rate and RH inside the CS mass storage room is attached.
While the correlation with inside relative humidity is not better than with outside temperature, we plot inside relative humidity because we can better speculate on reasons for the correlation. Dry conditions may lead to the build up and discharge of static electricity on electronics cooling fans. Alternatively, there may be current leakage paths that are more likely to discharge in bursts when the pathways dry out. While this is, at best, speculation, we set up a magnetometer near the HV line for the EY ESD to monitor for possible small short fluctuations in current that are correlated with blip glitches. At the same time, we suggest that, as risk mitigation, we consider humidifying the experimental areas during cold snaps.
The low-humidity-correlated blip glitches may represent a different population of glitches because they have statistically significantly smaller SNRs than the background blip glitches. We analyzed the distribution of SNR (as reported by pycbc) of the blip glitches during three time periods – segments 1, 2, and a relatively quiet period from October 5 – October 20 (segment 3). This gave approximately 600 blip glitches for each segment. Figure 2 is a histogram of these distributions.
To determine if these distributions are statistically different, we used the Mann-Whitney U test. Segments 1 and 2 matched, reporting a one-sided p-value of 0.18. The distribution in SNR for segment 3 - the low glitch rate times - did not match either segment 1 or 2, with p-values of 0.0015 and 2.0e-5, respectively. Thus we can conclude that the distributions of 1 and 2 are statistically significantly different from 3.
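For reference, a hedged sketch of this kind of comparison using scipy's Mann-Whitney U test; the SNR arrays below are random placeholders standing in for the ~600 pycbc-reported blip SNRs per segment, so the printed p-values will not reproduce the ones quoted above:

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)
    snr_seg1 = rng.lognormal(2.0, 0.3, 600)   # placeholder for segment 1 SNRs
    snr_seg2 = rng.lognormal(2.0, 0.3, 600)   # placeholder for segment 2 SNRs
    snr_seg3 = rng.lognormal(2.1, 0.3, 600)   # placeholder for segment 3 SNRs

    for label, a, b in [("1 vs 2", snr_seg1, snr_seg2),
                        ("1 vs 3", snr_seg1, snr_seg3),
                        ("2 vs 3", snr_seg2, snr_seg3)]:
        stat, p = mannwhitneyu(a, b, alternative="less")   # one-sided test
        print(label, "one-sided p =", p)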
We are currently examining the diurnal variations in the rate of these blip glitches, and will post an alog about that soon.
Paul Schale, Robert Schofield, Jordan Palamos
This is probably something you already checked, but could it be just that there is more heating going on when the outside temperature is low? More heating would mean more energy consumption, which I guess could bother the electronics in a variety of ways that have nothing to do with humidity (magnetic fields, power glitches, vibration, acoustics in the VEAs, etc.). Which other coupling mechanisms have you investigated?
In a sense we are suggesting that the effect is due to the extra heating during the cold snaps; the humidity is just our best guess as to how the extra heating affects the electronics. We think it is unlikely to be temperature, since the temperature in the buildings changed little and did not correlate as well as humidity. Our understanding is that DetChar and others have looked carefully for, and not found, coincident events in auxiliary channels, which would argue against magnetic or power glitches from heaters. The heaters don't increase vibration or sound levels by much, since the fans run continuously. The humidity, however, changed a lot.
Could the decrease in humidity affect the electronics cooling efficiency by changing the heat capacity of the cooling air? Is there any recorded direct measurement of the electronics heat-sink temperatures or exhaust temperature?
If you want to do a similar study at L1 then one of the best periods (in terms of a fairly quick change in RH) is the period from Nov. 20th to Nov. 27th 2015. Of course RH values here in the swamps are much higher. Where is the "blip glitch" list? I followed this link: https://wiki.ligo.org/viewauth/DetChar/GlitchClassificationStudy but there's nothing past Sept. 23 there.
The list of blip glitches was emailed out by Miriam Cabero Müller. They're stored at https://www.atlas.aei.uni-hannover.de/~miriam.cabero/LSC/blips/full_O1/, and Omega scans for H1 are here: https://ldas-jobs.ligo-wa.caltech.edu/~miriam.cabero/blips/wscan_tables/ and for L1 here: https://ldas-jobs.ligo-la.caltech.edu/~miriam.cabero/blips/wscan_tables/.
John:
Humidity has very little effect on the thermal properties of air at the relevant temperatures and humidities (20-40 degrees C, 0-20 % RH). On pages 1104 and 1105 of this paper (http://www.ewp.rpi.edu/hartford/~roberk/IS_Climate/Impacts/Resources/calculate%20mixture%20viscosity.pdf), there are plots of specific heat capacity, viscosity, thermal conductivity, and thermal diffusivity.
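A quick back-of-the-envelope check of that point (my own numbers, not taken from the paper, using the Magnus approximation and standard psychrometric constants): going from 0% to 20% RH at 30 degC changes the specific heat of the air by only about 1%.

    import math

    def saturation_vapor_pressure_pa(t_c):
        """Approximate saturation vapor pressure of water (Pa), Magnus formula."""
        return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

    def cp_moist_air(t_c, rh, p_pa=101325.0):
        """Specific heat per kg of dry air plus its water vapor (kJ/kg-K)."""
        p_v = rh * saturation_vapor_pressure_pa(t_c)
        w = 0.622 * p_v / (p_pa - p_v)     # humidity ratio, kg water per kg dry air
        return 1.005 + w * 1.86            # cp of dry air + w * cp of water vapor

    print(cp_moist_air(30.0, 0.0))   # ~1.005 kJ/kg-K at 0% RH
    print(cp_moist_air(30.0, 0.2))   # ~1.015 kJ/kg-K at 20% RH, about a 1% change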
J. Kissel, E. Hall Checking in on the power delivered to the ITMY compensation plate after the strange drop in TCSY's front-end laser power yesterday (see LHO aLOG 28506), it looks like the laser power has mostly recovered. The power, as measured by the pick-off beam just before the up-periscope into the vacuum system (H1:TCS-ITMY_CO2_LSRPWR_MTR_OUTPUT), is roughly stable at 0.3075 [W], where it used to deliver a really stable 0.312 [W]. I attach two zoom levels of the same 4-day trend. There's also some weird, 10-min period feature in the *minimum* of the minute-trend, where the reported power drops to 0.16 [W]. Given its periodicity, one might immediately suspect data viewer and data retrieval problems, but one can see in the un-zoomed trend that this half-power drop has been happening since the drop-out yesterday, and it tracks the reported laser power even before the delivered power was increased back to nominal.
I'm wondering if this weird behaviour is due to the RF driver - we should try to swap in the spare driver soon to check, because if the power output is really glitching low like that then it's likely to cause issues for commissioning, or possibly to fail altogether.
The temperature trend for the laser doesn't give any signs that it might have overheated.
The delivered power was recovered because I added an offset to the RS calibration so that it allows the right amount of power through. The laser itself is still outputting 42W.
Upgrade of Timing Firmware
Daniel, Ansel, Jim, Dave
Most of today was spent upgrading the entire timing system to the new V3 firmware. This did not go as smoothly as planned, and took from 9am to 6pm to complete. By the end of the day we had reverted the timing master and the two CER fanouts to the original code (the end station fanouts were not upgraded). We did upgrade all the IRIG-B fanouts, all the IO Chassis timing slaves, all the comparators and all the RF amplifiers.
The general order was: stop all front end models and power down all front end computers; upgrade the MSR units; upgrade the CER fanouts; upgrade the PSL IO Chassis (h1psl0 was restarted, followed by a DAQ restart); upgrade all CER slaves (at this point the master was reverted to V2). At EY we upgraded the IRIG-B and slaves (skipping the fanout); at MY we upgraded the PEM IO Chassis; at EX we did the same as EY and at MX the same as MY.
All remaining front ends were then powered up. The DAQ was running correctly but the NDS were slow to complete their startup. Additional master work in the MSR required a second round of restarts, at which point the comparators that had been skipped were upgraded and the CER fanouts were downgraded. Finally, after h1iopsush56 cleared a long negative IRIG-B error, all systems were operational.
During these rounds of upgrades FEC and DAQ were restarted several times.
Addition of Beam Splitter Digital Camera
Richard, Carlos, Jim
An analog camera was replaced with a digital video GIGE-POE camera at the Beam Splitter.
New ASC code
Sheila:
new h1asc code was installed and the DAQ was restarted.
Reconfigured RAID for ldas-h1-frames file system
Dan:
The ldas-h1-frames QFS file system was reconfigured for faster disk access. This is the file system exported by h1ldasgw0 for h1fw0's use. After the system was upgraded, we reconfigured h1fw0 to write all four frame types (science, commissioning, second and minute). As expected, h1fw0 was still unstable at the 10 minute mark, similar to the test when h1fw0 wrote to its own file system. h1fw0 was returned to its science-frames-only configuration.
Just curious -- it's my impression that the point of "upgrading the timing system to the new V3 firmware" was to reprogram all timing system hardware's LED lights so as to not blink every second or two, because we suspect that those LEDs are somehow coupling into the IFO and causing 1 to 2 Hz combs in the interferometer response. The I/O chassis, IRIG-B, comparators, and RF amplifiers are a huge chunk of the timing system. Do we think that this majority will be enough to reduce the problem to negligible, or do we think that because the timing master and fanouts -- which are the primary and secondary distributors of the timing signal -- are still at the previous version that we'll still have problems?
With the I/O chassis timing upgrade we removed the separate power supply from the timing slaves in the corner station LSC chassis and in both the EX and EY ISC chassis. Hopefully the timing work will eliminate the need for the separate supplies.
Could you clarify that last comment? Was yesterday's test of changing the LED blinking pattern done in parallel with removal of separate power supplies for timing and other nearby electronics?
Ansel has been working with Richard and Robert over the past few months testing out separate power supplies for the LEDs in several I/O chassis (regrettably, there are no findable aLOGs showing results about this). Those investigations were apparently enough to push us over the edge of going forward with this upgrade of the timing system. Indeed, as Richard says, those separate power supplies were removed yesterday, in addition to upgrading the firmware (to keep the LEDs constantly ON instead of blinking) on the majority of the timing system.
To clarify Jeff's comment: testing on separate power supplies was done by Brynley Pearlstone, and information on that can be found in his alog entries. Per his work, there was significant evidence that the blinking LEDs were related to the DARM comb, but changing power supplies on individual timing cards did not remove the comb. This motivated changing the LED logic overall to remove blinking. I'm not sure whether the upgrades done so far will be sufficient to fix the problem. Maybe Robert or others have a better sense of this? Notable alog entries from Bryn:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25772
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=25861
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=27202
I have gone through and manually compared FScan spectrograms and normalized spectra for the 27 magnetometer channels that are processed daily (https://ldas-jobs.ligo-wa.caltech.edu/~pulsar/fscan/H1_DUAL_ARM/H1_PEM/fscanNavigation.html) to look for changes following Tuesday's timing system intervention, focusing on the lowest 100 Hz, where the DARM 1-Hz (etc.) combs are worst. Because of substantial non-stationarity that seems to be typical, it's not as straightforward as I had hoped to spot a change in the character of the spectra. I compared today's generated FScans (July 20-21) to an arbitrary choice from two weeks ago (July 6-7). These six channels seemed to improve w.r.t. narrow line proliferation (before & after figures attached):
H1_PEM-CS_MAG_EBAY_LSCRACK_X_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_X_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_Y_DQ
H1_PEM-EX_MAG_EBAY_SUSRACK_Z_DQ
H1_PEM-EY_MAG_EBAY_SUSRACK_X_DQ
H1_PEM-EY_MAG_VEA_FLOOR_X_DQ
while these four channels seemed to get worse w.r.t. narrow lines:
H1_PEM-EX_MAG_VEA_FLOOR_Z_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_X_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_Y_DQ
H1_PEM-EY_MAG_EBAY_SEIRACK_Z_DQ
In addition, many of today's spectrograms show evidence of broad wandering lines and a broad disturbance in the 40-70 Hz band (including in the 2nd attached figure).
Weigang Liu has results in for folded magnetometer channels for UTC days July 18 (before changes), July 19-20 (overlapping with the changes) and July 21 (after changes); compare the 1st and 4th columns of plots for each link below.
CS_MAG_EBAY_SUSRACK_X - looks slightly worse than before the changes
CS_MAG_EBAY_SUSRACK_Y - periodic glitches higher than before
CS_MAG_EBAY_SUSRACK_Z - periodicity more pronounced than before
CS_MAG_LVEA_VERTEX_X - periodic glitches higher than before
CS_MAG_LVEA_VERTEX_Y - periodic glitches higher than before
CS_MAG_LVEA_VERTEX_Z - periodic glitches higher than before
EX_MAG_EBAY_SUSRACK_X - looks better than before
EX_MAG_EBAY_SUSRACK_Y - looks better than before
EX_MAG_EBAY_SUSRACK_Z - looks slightly worse than before
EY_MAG_EBAY_SUSRACK_Y - looks slightly better after the changes
EY_MAG_EBAY_SUSRACK_Z - looks the same after the changes
(Weigang ran into a technical problem reading July 21 data for EY_MAG_EBAY_SUSRACK_X.)
A summary of links for these channels from ER9 and from this July 18-21 period can be found here.
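As I understand it, the "folding" here means averaging the time series modulo a fixed period so that features synchronized to that period (e.g. once-per-second glitches) build up while everything else averages away. A rough sketch with made-up data and sample rate:

    import numpy as np

    def fold(data, sample_rate, period_s=1.0):
        """Average a time series modulo a fixed period (default 1 s)."""
        samples_per_period = int(round(sample_rate * period_s))
        n_periods = len(data) // samples_per_period
        trimmed = data[:n_periods * samples_per_period]
        return trimmed.reshape(n_periods, samples_per_period).mean(axis=0)

    # Fake example: a small once-per-second pulse buried in noise
    fs = 1024
    x = np.random.randn(fs * 3600)     # one hour of fake magnetometer data
    x[::fs] += 0.5                     # tiny glitch at the start of every second
    profile = fold(x, fs)
    print(profile.argmax(), profile.max())   # the glitch stands out near sample 0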
Jeff K, Alastair (by phone), Nutsinee
Jeff noticed that TCS CO2Y was throwing a bunch of guardian error messages, which led him to investigate and find that the actual CO2Y output power has been lower since the laser recovered from this morning's maintenance activity. A timeseries shows that CO2Y power dropped out at 15:41 UTC (8:41 local time) and never came back to its nominal (~57W). The chiller temperature, which is read off the front end, was down at the same time, indicating CO2Y went down due to some front end maintenance activity. The supply current to CO2Y was also low compared to CO2X (19A vs 22A), suggesting that the low power output was real. And indeed, we went out and measured about 40W at the table (we stuck a handheld power meter right before the first steering mirror).
We don't know why today's front end maintenance would affect the CO2Y output power (CO2X is fine, by the way). On the plus side, the heating profile looks good on the FLIR camera, which means nothing was misaligned and we can still use the CO2Y laser. The beam dump that was in front of the FLIR screen hasn't been put back, so be mindful if you ever want to blast full power through the rotation stage.
I commented out the output power fault checker part of the TCS power guardian so that ISC_LOCK can still tell it to go places. I added a temporary +1 degree offset to the minimum angle parameter of the CO2Y rotation stage calibration so that it would go to the requested powers. We requested the TCS CO2 laser stabilization guardian to DOWN because it's not usable given the current output power.
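For context, a hedged sketch of how the minimum-angle parameter enters the power request, assuming the rotation stage calibration has the usual half-wave-plate-plus-polarizer form P = Pmax*sin^2(2*(theta - theta_min)); the form, Pmax and angles here are my assumptions for illustration, not the actual calibration code:

    import math

    def angle_for_power(p_req, p_max, theta_min_deg):
        """Invert P = Pmax * sin^2(2*(theta - theta_min)) for the stage angle."""
        frac = min(max(p_req / p_max, 0.0), 1.0)
        return theta_min_deg + 0.5 * math.degrees(math.asin(math.sqrt(frac)))

    # With an assumed calibration (Pmax ~57 W), a +1 deg offset to theta_min
    # shifts the angle the stage goes to for the same requested power,
    # crudely compensating for the reduced laser output.
    print(angle_for_power(10.0, 57.0, 30.0))        # nominal theta_min (made up)
    print(angle_for_power(10.0, 57.0, 30.0 + 1.0))  # with the temporary +1 deg offset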
Quick conclusion: CO2Y is still functional. The reason for power loss is to be investigated.
J. Kissel, S. Dwyer, N. Kijbunchoo, J. Bartlett, V. Sandberg
A few checks we forgot to mention to Nutsinee last night:
- Nutsinee and I checked the flow rate on the mechanical flowmeter for both the supply and return of the TCSY chiller line, and it showed (what Nutsinee said was) a nominal ~3 gallons per minute. This was after we manually measured the power to be 41 W coming out of the laser head to confirm the EPICS readout.
- Sheila and I went to the TCS chillers on the mezzanine. Their front-panel display confirmed the ~20.1 deg C EPICS setting for temperature.
- On our way out, we also noticed that a power supply in the remote rack by the chillers marked "TCSY" was drawing ~18 mA, fluctuating by about +/- 2 mA. We didn't know what this meant, but it was different from the power supply marked TCSX. We didn't do anything about it.
- The RF oscillator mounted in that same remote rack appeared functional, spitting out some MHz-frequency sine wave. Sheila and I did not diagnose any further than "merp -- looks like an oscillator; looks on; looks to be programmed to spit out some MHz sine wave."
Alastair, Nutsinee
Today I went and checked the CO2Y power supply set points. The voltage limit is set to 30V and the current limit is set to 28A. The same goes for the CO2X power supply. These are the correct settings, which means the CO2Y laser is really not behaving properly.