Took 6 minutes to overfill CP3. First set the manual fill to 36%, noticed the temperature drop was slow, so I opened the valve a bit more, to 50%. See attached for more.
LLCV remains at 20%.
The demod chassis was fixed. One of the chips was not fully seated. Keita updated the dark offsets. I tried locking but had trouble on PRMI. I ran through an initial alignment. After that I lost lock twice in a row on the transition to LOCK_DRMI_3F. I stopped at DRMI_ENGAGE_ASC and Keita checked the phases related to the fixed chassis. He said they looked fine, but noticed that H1:LSC-REFLAIR_B_RF27_I_OFFSET and H1:LSC-REFLAIR_B_RF135_Q_OFFSET had been changed. He determined that they must be changed in guardian and asked me to find out where. I believe I have found the relevant code in ISC_DRMI.py under ZERO_3F_OFFSETS, but I'm not sure how to address it. The commissioners are in a meeting.

class ZERO_3F_OFFSETS(GuardState):
    index = 110
    request = True

    @assert_mc_locked
    @assert_drmi_locked
    #@nodes.checker()
    def main(self):
        ezca.switch('LSC-SRCL1', 'OFFSET', 'OFF')
        # Zero the 3f offsets
        # [FIXME] either shorten the averaging or run the offsets in parallel,
        # or both
        # average the current offsets and put them in.
        offsets = cdu.avg(5, ['LSC-REFLAIR_B_RF27_I_INMON',
                              'LSC-REFLAIR_B_RF27_Q_INMON',
                              'LSC-REFLAIR_B_RF135_I_INMON',
                              'LSC-REFLAIR_B_RF135_Q_INMON'],
                          )
        # write offsets
        ezca['LSC-REFLAIR_B_RF27_I_OFFSET'] = -round(offsets[0], 3)
        ezca['LSC-REFLAIR_B_RF27_Q_OFFSET'] = -round(offsets[1], 3)
        ezca['LSC-REFLAIR_B_RF135_I_OFFSET'] = -round(offsets[2], 3)
        ezca['LSC-REFLAIR_B_RF135_Q_OFFSET'] = -round(offsets[3], 3)

    @assert_mc_locked
    @assert_drmi_locked
    #@nodes.checker()
    def run(self):
        return True
From my understanding the problem was not with the guardian code, but that the INMON signals that were being averaged were bad and there was no DRMI 3f signal. Stefan and Richard went out and disconnected/reconnected cables and it came back.
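For future reference, a defensive check in that averaging step would at least make this failure mode obvious in the guardian log instead of silently writing bad offsets. Below is a minimal sketch of what such a guard could look like inside main(), assuming the same ezca/cdu interfaces as the ZERO_3F_OFFSETS code above; the MIN_SIGNAL threshold is a made-up placeholder, not a measured value.

# Hypothetical guard around the 3f offset averaging (sketch only).
MIN_SIGNAL = 1e-3   # counts; below this, assume the INMON is dead/disconnected

channels = ['LSC-REFLAIR_B_RF27_I_INMON',
            'LSC-REFLAIR_B_RF27_Q_INMON',
            'LSC-REFLAIR_B_RF135_I_INMON',
            'LSC-REFLAIR_B_RF135_Q_INMON']
offsets = cdu.avg(5, channels)

if all(abs(x) < MIN_SIGNAL for x in offsets):
    # Every quadrature averaged to ~zero: the 3f signal is probably missing
    # (e.g. a disconnected cable), so leave the existing offsets alone.
    log('3f INMONs look dead; not updating offsets')
else:
    for chan, off in zip(channels, offsets):
        ezca[chan.replace('_INMON', '_OFFSET')] = -round(off, 3)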
This is a comparison between the ISS, ILS and PMC signals before (REF traces) and after the changes in the electronics and the modulation depth, see alog 31095.
A few observations:
A better plot showing the relationship between the ILS and PMC mixer and HVMon signals.
Reducing the ILS gain by 16 dB increases the noise seen by the PMC by the same amount below 1 kHz. This change reduced the ILS UGF from ~10 kHz down to ~1 kHz.
The PMC PZT is described in alog 30729:
The ILS PZT is
Matt, Evan
As has been noted before, the DARM residual these days is usually microseism-dominated, and it is getting worse as we move into winter.
We installed a new boost (FM2 in DARM1) to give >40 dB more suppression at the microseism. The performance during yesterday's 25 W lock is shown in an attachment.
Tagging CAL.
Ryan and I were wondering why there is such a big difference in the residual OMC PD sum between L1 and H1. Both spectra are calibrated in mA, so assuming similar optical gains, the H1 DARM rms in meters is also 100 times higher than L1 (500 before the boost). This large residual DARM fringe motion may be responsible for the increased/incoherent H1 laser noise coupling.
We added a boost with resonant gain around 2 Hz. Now the residual is 7×10^-3 mA rms below the bounce modes.
The DARM UGF is 70 Hz with 30° of phase margin.
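For anyone curious how this kind of stage is shaped, below is a minimal sketch of a generic resonant-gain (anti-notch) filter; the center frequency and peak gain are placeholders for illustration, not the actual Foton design of the FM2/DARM1 boost.

import numpy as np
from scipy import signal

# Generic resonant-gain stage: unity gain away from f0, gain G on resonance.
# Placeholder numbers, not the installed filter coefficients.
f0 = 2.0                  # Hz, center frequency of the resonant gain
G  = 10 ** (40 / 20.0)    # ~40 dB of extra loop gain (extra suppression) on resonance
Qz = 1.0                  # zero Q sets the width of the boost
Qp = G * Qz               # pole Q chosen so |H(j*2*pi*f0)| = Qp/Qz = G

w0  = 2 * np.pi * f0
sys = signal.TransferFunction([1, w0 / Qz, w0**2],
                              [1, w0 / Qp, w0**2])

# Evaluate the gain at the resonance and near the UGF quoted above.
_, H = signal.freqresp(sys, [w0, 2 * np.pi * 70.0])
print("gain at %.1f Hz: %.1f dB" % (f0, 20 * np.log10(abs(H[0]))))   # ~40 dB
print("gain at 70 Hz:  %.1f dB" % (20 * np.log10(abs(H[1]))))        # ~0 dB

Because the extra gain is confined near f0, a stage like this costs very little phase at the 70 Hz UGF.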
At 5:43 UTC something was hooked up to TFIN and the path was enabled. This generates a horrible 60 Hz harmonics problem for the PMC servo.
model restarts logged for Thu 03/Nov/2016 - Wed 02/Nov/2016 No restarts reported
model restarts logged for Tue 01/Nov/2016
2016_11_01 11:14 h1ioppsl0
2016_11_01 11:14 h1psldbb
2016_11_01 11:14 h1pslfss
2016_11_01 11:14 h1psliss
2016_11_01 11:14 h1pslpmc
2016_11_01 11:23 h1calcs
2016_11_01 11:23 h1iopoaf0
2016_11_01 11:23 h1ioppsl0
2016_11_01 11:23 h1ngn
2016_11_01 11:23 h1oaf
2016_11_01 11:23 h1odcmaster
2016_11_01 11:23 h1pemcs
2016_11_01 11:23 h1psliss
2016_11_01 11:23 h1tcscs
2016_11_01 11:24 h1calcs
2016_11_01 11:24 h1iopasc0
2016_11_01 11:24 h1ioplsc0
2016_11_01 11:24 h1iopoaf0
2016_11_01 11:24 h1iopsusb123
2016_11_01 11:24 h1lscaux
2016_11_01 11:24 h1lsc
2016_11_01 11:24 h1ngn
2016_11_01 11:24 h1oaf
2016_11_01 11:24 h1odcmaster
2016_11_01 11:24 h1omc
2016_11_01 11:24 h1pemcs
2016_11_01 11:24 h1psldbb
2016_11_01 11:24 h1pslfss
2016_11_01 11:24 h1pslpmc
2016_11_01 11:24 h1susitmy
2016_11_01 11:24 h1susprocpi
2016_11_01 11:24 h1tcscs
2016_11_01 11:25 h1ascimc
2016_11_01 11:25 h1asc
2016_11_01 11:25 h1hpiham4
2016_11_01 11:25 h1hpiham5
2016_11_01 11:25 h1iopseih45
2016_11_01 11:25 h1iopsush2a
2016_11_01 11:25 h1iopsush2b
2016_11_01 11:25 h1omcpi
2016_11_01 11:25 h1susauxasc0
2016_11_01 11:25 h1susbs
2016_11_01 11:25 h1sushtts
2016_11_01 11:25 h1susim
2016_11_01 11:25 h1susitmpi
2016_11_01 11:25 h1susitmx
2016_11_01 11:25 h1susmc1
2016_11_01 11:25 h1susmc3
2016_11_01 11:25 h1susprm
2016_11_01 11:27 h1hpiham2
2016_11_01 11:27 h1hpiham3
2016_11_01 11:27 h1iopseih23
2016_11_01 11:27 h1iopsush34
2016_11_01 11:27 h1iopsush56
2016_11_01 11:27 h1isiham2
2016_11_01 11:27 h1isiham3
2016_11_01 11:27 h1isiham4
2016_11_01 11:27 h1isiham5
2016_11_01 11:27 h1susmc2
2016_11_01 11:27 h1suspr2
2016_11_01 11:27 h1suspr3
2016_11_01 11:27 h1sussrm
2016_11_01 11:29 h1hpibs
2016_11_01 11:29 h1hpiitmx
2016_11_01 11:29 h1iopseib1
2016_11_01 11:29 h1iopseib2
2016_11_01 11:29 h1iopseib3
2016_11_01 11:29 h1isibs
2016_11_01 11:29 h1isiitmx
2016_11_01 11:29 h1isiitmy
2016_11_01 11:29 h1susomc
2016_11_01 11:29 h1sussr2
2016_11_01 11:29 h1sussr3
2016_11_01 11:30 h1hpiham1
2016_11_01 11:30 h1hpiham6
2016_11_01 11:30 h1hpiitmy
2016_11_01 11:30 h1iopseih16
2016_11_01 11:30 h1isiham6
2016_11_01 12:05 h1alsex
2016_11_01 12:05 h1calex
2016_11_01 12:05 h1iopiscex
2016_11_01 12:05 h1iscex
2016_11_01 12:05 h1pemex
Tuesday maintenance. ADC addition to h1oaf0 caused dolphin crash in corner station. Rebooted h1iscex to check for ADC offset. No DAQ restarts required.
model restarts logged for Mon 31/Oct/2016 - Fri 28/Oct/2016 No restarts reported
SEI: No issues to report
SUS: No issues to report
PSL: Ultrasonic flow sensor installed, not reading back, in contact with vendor
VAC: No issues to report
FAC: Still searching for water leak, 1/2 gallon per minute
CDS: Demod chassis swapped with spare, investigating in lab (alog 31181)
TITLE: 11/04 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Sheila was working hard all night at 36W with a 6 hour lock. Broke lock about 45min ago and Richard is working on possibly broken POP90 demod input.
LOG:
Here are some noise estimates for our current configuration (at 25 W). The PMC PZT noise was measured before Tuesday's fix; if I can get some time I will remeasure that tonight.
These curves are a combination of excess power measurements, gwinc curves for thermal and residual gas, and Evan Hall's noise budget code.
Obvious things that are still missing are frequency noise and ESD DAC noise.
Looking at the noise budget above, we should be limited by shot noise around 100 Hz, so increasing the power should expose the noise more. I decided to try 35 W to see what we can learn.
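As a quick sanity check on that expectation: shot-noise-limited sensitivity scales as 1/sqrt(P_in), so the step from 25 W to 35 W is only worth about 18% in amplitude (ignoring any change in optical gain or power-dependent technical noise). A back-of-the-envelope sketch:

import math

P_old, P_new = 25.0, 35.0                # W, input powers quoted above
improvement = math.sqrt(P_new / P_old)   # shot-noise ASD scales as 1/sqrt(P)
print("shot-noise amplitude improvement: %.2fx (%.1f dB)"
      % (improvement, 20 * math.log10(improvement)))
# -> about 1.18x, i.e. ~1.5 dB lower shot-noise-limited ASD around 100 Hz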
We locked at 50 W with similar settings 2 weeks ago without problems, but there were a few things to work out for locking at 35 W:
Over the last couple of days we noticed that POPAIR_B_90 was randomly jumping up and down.
Further investigating this, we found that just touching the SMA connector of the 90 MHz LO input on the POPAIR_B_90 demodulator box makes the phase jump by easily 90 deg.
Smells like that LO input is broken - but we'd have to take the chassis out to fix it.
Found the SMA connectors to be over-tight; I could not remove them without pliers. The SMA insulating sleeves were in bad shape. The filter for 90 MHz was not seated properly. Re-seated the filter and replaced the rear panel with a new one. Made sure all of the connectors inside the chassis were torqued properly. Re-installed the chassis. Fault report 6599: https://services.ligo-la.caltech.edu/FRS/show_bug.cgi?id=6599
While in the LVEA, re-torqued all the SMA connections in the racks by the PSL. There were a number of loose ones. (AGAIN) We have a torque wrench, let's use it.
Attached is a 60 day trend of PT140, which is one of the new Inficon BPG402s. IP7 and IP8 have been a steady 5000 volts for this time period. Is this a gauge thing? I haven't been intimate with what Gerardo, John and Chandra have learned regarding the behavior of these new wide-range Bayard-Alpert/Pirani hybrids, but this slope looks "not insignificant".
That slope looks really fishy. Are both IPs fully pumping? What does HAM6 pressure look like (also hot cathode ion gauge)? Did PT 170 and 180 flatten out after degassing?
We think that the pressure increase is due to temperature; see the attachment and the aLOG noting the temperature change.
Since we are talking temperature change in the LVEA, note the vertical change on some of the optics (BS and ITMs); others are affected as well.
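A quick way to check the temperature hypothesis would be to trend the gauge against an LVEA temperature channel and look at the correlation over the same window. A rough sketch using gwpy; the channel names below are guesses for illustration and would need to be replaced with the real PT140 and LVEA temperature channels:

from gwpy.timeseries import TimeSeriesDict
import numpy as np

# Hypothetical channel names: substitute the real ones before running.
channels = ['H0:VAC-PT140_PRESSURE_TORR', 'H0:PEM-LVEA_TEMPERATURE_DEGF']
start, end = '2016-09-04', '2016-11-04'   # the ~60-day window trended above

data = TimeSeriesDict.get(channels, start, end)
p = data[channels[0]].value
t = data[channels[1]].value
# Crude: assumes the two channels share a sample rate; resample to a common
# rate (or use minute trends) for a fair comparison.
n = min(len(p), len(t))
print('pressure vs. temperature correlation: %.2f' % np.corrcoef(p[:n], t[:n])[0, 1])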
J. Kissel, M. Evans, D. Barker, H. Radkins

A confusing bit of settings wrangling* in the SUSITMPI model after the unplanned corner station computer restarts on Tuesday (LHO aLOG 31075) meant that a large fraction of the EPICS records in the ITM PI system were wrong. As such, we believe this was the cause of the battles with Mode 27's PI a few nights ago (LHO aLOG 31111). In order to fix the problem, we used the hourly burt backups in /ligo/cds/lho/h1/burt/yyyy/mm/dd/hh:mm/ to restore all settings to Monday (2016/10/31), before the computer restarts. Further, Matt performed a few spot checks on the system and believes it to be good.

*Settings wrangling: There were several compounding problems with the SDF system, which meant that
(1) the front-end did not use the safe.snap file upon reboot, and restored bogus values, and
(2) the safe.snap file, which we'd thought had been kept up to date, had not been so since May.

Why (2)? The safe.snap file for SUSITMPI used upon restart, /opt/rtcds/lho/h1/target/h1susitmpi/h1susitmpiepics/burt/safe.snap, is a softlink to /opt/rtcds/userapps/release/sus/h1/burtfiles/h1susitmpi_safe.snap. Unfortunately, that model's safe.snap was the *only* one that had its permissions mistakenly restricted to a single user (coincidentally me, because I was the one who'd created the softlink from the target directory to the userapps repo), and *not* to the controls working group. That means Terra Hardwick, who had been custodian of the settings for this system, was not able to write to this file, so the settings to be restored upon computer reboot had not been updated since May 2016. Unfortunately, the only way to find out that this didn't work is to look in the log file, which lives in /opt/rtcds/lho/h1/log/${modelname}/ioc.log, and none of us (save Dave) remembered this file existed, let alone looked at it before yesterday**. There are other files made (as described in Hugh's LHO aLOG 31163), but those files are not used by the front-end upon reboot. I've since fixed the permissions on this file, and we can now confirm that anyone can write to this file (i.e. accept & confirm DIFFs). We've also confirmed that there are no other safe.snap files that have their write permissions incorrectly restricted to a single user.

** Even worse, it looks like there's a bug in the log system -- even when we confirm that we have written to the file, the log reports a failure, e.g.
***************************************************
Wed Nov 2 16:39:10 2016 Save TABLE as SDF: /opt/rtcds/lho/h1/target/h1susitmpi/h1susitmpiepics/burt/safe_161102_163910.snap
***************************************************
Wed Nov 2 16:39:10 2016 ERROR Unable to set group-write on /opt/rtcds/lho/h1/target/h1susitmpi/h1susitmpiepics/burt/safe.snap - Operation not permitted
***************************************************
Wed Nov 2 16:39:10 2016 FAILED FILE SAVE /opt/rtcds/lho/h1/target/h1susitmpi/h1susitmpiepics/burt/safe.snap
***************************************************

Regarding (1): this is quite alarming. Dave has raised an FRS ticket (see LHO aLOG 6588) and fears it may be an RCG bug. I wish I could give you more information on this, but I just don't know it.

In summary, we believe the issues with SUSITMPI have been resolved, but there's a good bit of scary stuff left in the SDF system. We'll be working with the CDS team to find a path forward.
The LLO CDS system has scripts running that do regular checks on file permissions on the /opt/rtcds file system to try to catch these issues. Please contact Michael Thomas for details. We'll check that we are looking for this issue as well (and are acting when problems are found).
I've opened FRS6596 to do the same snap file permissions checking as LLO.
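For what it's worth, the check itself can be very small. Here is a rough sketch (in the spirit of the LLO scripts mentioned above, not their actual code) that walks /opt/rtcds and flags any safe.snap that is not group-writable:

import os
import stat

ROOT = '/opt/rtcds'

for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        if not name.endswith('safe.snap'):
            continue
        # Resolve softlinks (e.g. target/.../safe.snap -> userapps/.../..._safe.snap)
        # so we test the permissions of the file that actually gets written.
        path = os.path.realpath(os.path.join(dirpath, name))
        try:
            mode = os.stat(path).st_mode
        except OSError:
            print('broken link or unreadable: %s' % path)
            continue
        if not mode & stat.S_IWGRP:
            print('not group-writable: %s' % path)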
I ran the current version of the calibration pipeline over a stretch of O1 data to reproduce the kappas and compare to those in the C02 frames. The filters file used was aligocalibration/trunk/Runs/O1/GDSFilters/H1DCS_1131419668.npz, as suggested by the calibration configuration page for O1: https://wiki.ligo.org/viewauth/Calibration/GDSCalibrationConfigurationsO1#LHO_AN2 The agreement looks quite good. Time series plots of the kappas and the cavity pole are attached. The start time used here was 2015-10-04 12:41:19 UTC (GPS 1127997696).
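If it helps anyone re-running this, the comparison itself is just an overlay of the reproduced kappa time series against the corresponding channels in the C02 frames. A rough sketch with gwpy; the channel and frame-type names are guesses for illustration, and the reproduced kappas are assumed to have been written to a local frame file by the pipeline run described above:

from gwpy.timeseries import TimeSeries

start = 1127997696            # 2015-10-04 12:41:19 UTC
end   = start + 4096

# Hypothetical channel/frame-type names: substitute the real ones.
c02   = TimeSeries.get('H1:DCS-CALIB_KAPPA_TST_REAL', start, end,
                       frametype='H1_HOFT_C02')
rerun = TimeSeries.read('H1-rerun_kappas.gwf', 'H1:GDS-CALIB_KAPPA_TST_REAL')

plot = c02.plot(label='C02')
ax = plot.gca()
ax.plot(rerun, label='reproduced')
ax.legend()
plot.savefig('kappa_tst_comparison.png')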
J. Kissel Admiring the work of the SEI and ASC teams, we've just lost lock on a really impressive lock stretch in which we had ~40 mph winds, ~70th percentile microseism, and a 5.4 mag earthquake in the Horn of Africa, and survived. It would be most excellent if DetChar can compare amplitudes of ISC control signals, check out the beam rotation sensor tilt levels and the ISI platform sensor amplitudes, take a look at optical lever pitch and yaw compared with ASC signals, etc.
Start: Oct 31 2016 16:15:05 UTC
End: 17:37-ish UTC
Winds and some ground BLRMS (showing the microseism and the earthquake arrival) for this lock stretch. We survived at least one gust over 50 mph before losing lock. No one changed the seismic configuration during this time.
For the record, the units of the above attached trends (arranged in the same 4-panel format as the plot) are ([nm/s] RMS in band), [none], ([nm/s] RMS in band), and [mph]. Thus,
- the earthquake band trend (H1:ISI-GND_STS_ITMY_Z_BLRMS_30M_100M) shows the 5.3 [mag] EQ peaked at 0.1 [um/s] RMS (in Z, in the corner station, between 30-100 [mHz]),
- the microseism (again in Z, in the corner station, H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M) is averaging 0.25 [um/s] RMS between 100-300 [mHz] (which is roughly average, or 50th percentile -- see LHO aLOG 22995), and
- the wind speed (in the corner station) is beyond the 95th percentile (again, see LHO aLOG 22995) toward the end of this lock stretch, at 40-50 [mph].
Aside from Jordan Palamos' work in LHO aLOG 22995, also recall David McManus' work in LHO aLOG 27688, which -- instead of a side-by-side bar graph -- shows a surface map. According to the cumulative surface map, with 50th percentile microseism and 95th percentile winds, the duty cycle was ~30% in O1. So, this lock stretch is not yet *inconsistent* with O1's duty cycle, but it sure as heck-fy looks promising.
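For anyone wanting to reproduce these numbers offline, a band-limited RMS trend is just a band-pass followed by an RMS over some averaging time. A minimal sketch (the front-end BLRMS blocks use their own filter designs; the Butterworth and 60 s average below are placeholder choices):

import numpy as np
from scipy import signal

def blrms(data, fs, f_lo, f_hi, avg_sec=60):
    """Band-limited RMS of data (sampled at fs Hz) between f_lo and f_hi,
    averaged in avg_sec-long chunks.  Placeholder 4th-order Butterworth,
    not the filters used by the front-end BLRMS blocks."""
    sos = signal.butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
    bp = signal.sosfiltfilt(sos, data)
    n = int(avg_sec * fs)
    chunks = bp[:len(bp) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(chunks**2, axis=1))

# e.g. microseism band of a ground STS channel sampled at 256 Hz:
# rms_trend = blrms(sts_z, fs=256, f_lo=0.1, f_hi=0.3)   # cf. *_BLRMS_100M_300M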