I tested a new build of the frame writer on h1fw0. After a few minutes of testing, the system was unable to establish a connection to the network share the frames are written to. After rebooting the ldasgw machine and the frame writer, Jim changed the link between the two machines from a 10Gb fiber link back to a 1Gb copper link. While waiting for the network problems to be solved, I reconfigured the system to write to local disk. The frame writer was stable while writing to local disks. When h1fw0 and h1fw1 went unstable, even writing to local disks would fail. After the network link to the LDAS disk system was re-established, Jim and I restarted the frame writer process writing to LDAS instead of local disk. It has been running since then. The next step is to watch h1fw0 for a week. If h1fw0 is stable, then we will move h1fw1 to use the new code so that we have access to the additional metrics it provides.
Jason, Jim, Dave:
I've been looking into why the shutter controlled by the diode room Beckhoff OPC gets closed when we restart the PSL front end models. Specifically, it happens when the h1pslpmc model is started.
The h1pslpmc model controls the shutter through ezcawrite to the Beckhoff channel H1:PSL-EPICSALARM. A ZERO keeps the shutter open, a ONE closes the shutter. Settings within the pmc safe.snap file are zero, so the front end should not output a ONE value on startup and the shutter should not close.
The shutter can be instructed to close by the model if the FLOW monitor (a raw ADC input to the model) lies outside of upper or lower limits. The FLOW signal is passed through a standard IIR filter module. Trending the INMON and the OUT16 of this filter module when we started the model last Tuesday reveals that for the first second the OUT16 is ZERO while the input is the ADC value. We suspect this is normal for an IIR integrating filter module. The zero OUT16 triggers the shutter close because it lies outside of the upper and lower limits.
The filter module does not have any filters loaded, has no offsets and has unity gain. We propose (and Jason agrees) that the next time we restart this PSL model we replace FLOW's filter module with an EPICS-OUTPUT part and see if we can get the shutter to remain open during the model startup.
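A minimal sketch of the limit logic described above, showing why a transient zero from the filter module on startup trips the shutter. The limit values here are hypothetical placeholders, not the actual PSL settings:

```python
# Hedged sketch of the FLOW limit check; the limits below are hypothetical
# placeholders, not the real PSL settings.
def shutter_close_requested(flow_out, low_limit=10.0, high_limit=90.0):
    """Return 1 (H1:PSL-EPICSALARM -> close shutter) if the flow reading
    lies outside [low_limit, high_limit], else 0 (keep shutter open)."""
    return 1 if (flow_out < low_limit or flow_out > high_limit) else 0

# During the first second after a model restart the filter-module OUT16
# reads ZERO, which is below the lower limit, so the shutter closes:
print(shutter_close_requested(0.0))   # startup transient trips the close
print(shutter_close_requested(45.0))  # normal in-range flow reading
```

Replacing the filter module with an EPICS-OUTPUT part would avoid the one-second zero transient feeding this check.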
(see WP 6111) RGA data following the recent bake shows improvement but needs extended baking - we will extend the bake of the Vertex RGA (pump running) until commissioners are negatively impacted. 1000 hrs. local -> heating of Vertex RGA resumed -> will need periodic access to measure temps and make adjustments
TITLE: 08/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Maintenance Day recovery has been halted by the IMC not locking. Investigation is ongoing.
LOG in attached txt file
M. Pirello, R. McCarthy, E. Castrellon
Work Permit #6116
We removed the ISC Anti-Aliasing Chassis and investigated channel 14 which was suspected to have a failure, per ALOG 29189. To test the channels, we attached the DB9 Breakout to the front DB9 connector and used the SR785 to test the transfer function through the filter. We applied the test leads directly to the differential test points on the PCB. Channel 14 looked much like all of the other channels. Unfortunately, the data was corrupted on the thumb drive, but I did get one scan of channel 9 and have attached this scan. Every scan was nearly identical to this scan.
We returned the chassis to its original location and reattached all of the cables.
J. Kissel Here's your weekly broken record. Time to change the ETMX and ETMY bias sign! Several quadrants of the H1SUSETMX and H1SUSETMY's TST L3 ESD systems show an effective bias voltage of around 40 [V], which is roughly 10% of the request / applied bias voltage of 430 [V]. This means the actuation strength is 10% different from its value in late June when we had last herded the effective bias voltage to 0 [V] and we'd hoped to begin the campaign of regular bias flipping (see LHO aLOG 27890). We're still waiting for a robust enough interferometer / acquisition sequence to declare we're happy (and probably also for me to be in the control room for a non-Tuesday late night when we're ever happy) to flip the bias sign and debug why we had troubles with ALS DIFF and switching to ETMY (see LHO aLOGs 28362 and 28152).
Marc P, Chris W
As per Work Permit #6104, we added a 200 kHz lowpass filter (D1600314) to the fast path of the Common Mode Servo Board (D040180), located in the LVEA ISC-R1 rack. The serial number of the servo board is S1102626.
When reconnecting the power to the Common Mode Servo Board, the 17V supply was connected first (the 24V should be connected first, or the sequencer used), which resulted in blowing diodes D4 and D5 at the power input (see D0901846). At the time, we saw the LEDs of other boards in the rack turn off. The diodes were replaced, and the current drawn by the Common Mode Servo Board was confirmed to be the same as that drawn by an equivalent spare board. The board was then reinstalled in the ISC-R1 rack, this time using the sequencer switch to power cycle the whole rack.
For reasons unknown it seems that the first channel on the modecleaner IQ demodulator broke today. The I output gives a fixed 300mV offset. We switched to the second channel (spare).
I also removed 2 TNC-to-BNC adapters, 2 BNC cables and a BNC barrel which were used to connect the I output of the demodulator with the IMC board input. This mashup was replaced with a proper TNC cable.
An FRS ticket has been opened for this event. See https://services.ligo-la.caltech.edu/FRS/show_bug.cgi?id=6084
The IMC VCO had a couple of blown OpAmps that were replaced. The IMC is locking again and the MC_F spectrum looks OK.
I have created two new "Beckhoff" SDF systems: h1hpipumpctrlsdf and h1pslopcsdf. These are built in the same way as Jonathan's slow controls SDF systems, following Jonathan's wiki-page instructions. They run on h1build and take the next available DCUIDs:
h1hpipumpctrlsdf (dcuid=1033): monitors settings on the three "purple box" HEPI pump controllers
h1pslopcsdf (dcuid=1034): monitors the diode-room Beckhoff OPC system settings (Peter K provided the channel list)
The two new SDF systems were added to the SDF_OVERVIEW.adl medm screen. Also the missing h1susprocpi was added, which necessitated making the screen taller to accommodate the additional sus system. The new systems are marked on the screen capture image attached.
After Daniel's h1ecac1plc1 code changes, I have installed the new INI file and restarted the DAQ.
This TwinCAT update included:
Ramped CP4 LLCV in 5% increments, every 5 minutes, until the CP reached 100% full (39% to 63% open). Note: this generated an alarm in CDS due to overfilling. Attached is a snap of the exhaust flow and pressure as the LLCV ramped. I lowered the fill set point from 92% to 88% and will redo the experiment once the CP ramps down to 88%.
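The ramp procedure above can be sketched as a small loop. This is an illustration, not the actual fill-control code; `write_llcv` is a stand-in for the real EPICS channel write, and the dwell time is the 5-minute pause from this entry:

```python
import time

def ramp_llcv(write_llcv, start_pct, stop_pct, step=5, dwell_s=300):
    """Step the LLCV open fraction from start_pct to stop_pct in `step`-%
    increments, pausing dwell_s seconds between steps (5 minutes here).
    write_llcv is a stand-in for the actual EPICS channel write."""
    pct = start_pct
    while pct < stop_pct:
        pct = min(pct + step, stop_pct)
        write_llcv(pct)
        time.sleep(dwell_s)
    return pct

# e.g. ramp_llcv(my_write, 39, 63) would issue 44, 49, 54, 59, 63
```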
I've made a script to somewhat automate the weekly oplev trends FAMIS task. It makes 3 plots like the attached image of the oplev pit, yaw and sum channels for the test masses, BS, PR3 and SR3. It still requires a little fiddling with the plots: you have to zoom in manually on any that have 1e9-like spikes, but this should still be easier than running dataviewer templates. It uses h1nds for data and a pre-release version of the python nds2 client that has gap handling, so future updates could break it. I'll try to maintain this script, so any changes or improvements should come through me. The script lives in the userapps/sys/h1/scripts folder.
The script is run by going to the sys/h1/scripts folder:
jim.warner@opsws0:~ 0$ cd /opt/rtcds/userapps/release/sys/h1/scripts
And running the oplev_trends.py script with python:
jim.warner@opsws0:scripts 0$ python oplev_trends.py
You will then need to do the usual zooming in on useful data, saving screen shots and posting to the alog. I'll look into automating more of this, but it works well enough for now. It would also be very easy to add this to a "Weeklies" tab on the sitemap, which I believe LLO has done with some similar tasks.
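As an illustration of what such a script trends, here is a sketch of how the per-optic minute-trend channel lists might be assembled. The channel-name pattern below is an assumption for illustration only, not copied from oplev_trends.py:

```python
# Hypothetical sketch of building mean minute-trend channel names for the
# oplev plots; the exact channel prefix is an assumption, not the script's.
OPTICS = ['ETMX', 'ETMY', 'ITMX', 'ITMY', 'BS', 'PR3', 'SR3']

def oplev_trend_channels(optic, ifo='H1'):
    """Return mean minute-trend channel names for PIT, YAW and SUM."""
    return ['%s:SUS-%s_OPLEV_%s_OUT16.mean,m-trend' % (ifo, optic, dof)
            for dof in ('PIT', 'YAW', 'SUM')]

print(oplev_trend_channels('BS'))
```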
I've now added the HEPI monthly pressure trends to the same folder. Admittedly, there's little difference here between running my python script and running the dataviewer template, as the HEPI trends all fit on one dataviewer window easily. But this was pretty easy to throw together, and may allow us to automate these tasks more in the future, say if we could couple this with something like TJ's shift summary alog script.
Running it is similar to the oplev script:
jim.warner@opsws0:~ 0$ cd /opt/rtcds/userapps/release/sys/h1/scripts
jim.warner@opsws0:scripts 0$ python hepi_trends.py
For the oplev trends, they look good. I'll update the FAMIS procedure to run this script instead of using dataviewer.
Can you add the HAM2 oplev to this as well? While its usefulness is debated, it is an active optical lever so we should be trending it as well.
Thanks Jim!
All vacuum pumping related connections are now removed from HAM5/HAM6
Darkhan, Jeff, Jim, Dave:
h1susetm[x,y] models were restarted following changes to QUAD_MASTER.mdl.
Change is to replace DAQ DQ channel H1:SUS-ETMY_LKIN_P_LO_DQ with H1:SUS-ETMY_L3_CAL2_LINE_OUT_DQ and add an oscillator (H1:SUS-ETMY_L3_CAL2_LINE).
DAQ did not restart nicely. My change to add the h1sysecatxxplcysdf INI and PAR files to the DAQ master caused h1dc0 to stop with the error "channel '' has bad DCU id 1024". I have removed these from the master and the DAQ is running.
PT-210b CC was not reading pressure, so I remotely rebooted it.
Jenne, Sheila
We had an unusual lockloss a few minutes ago, related to 28255
It happened around 8:11 UTC on August 23rd; the DRMI guardian seemed to think that the lock was lost although it was not.
There are two locklosses around that time, so I'll play detective for both.
1.) 8:09:33 UTC (1155974990)
Looking at the Guardian log:
2016-08-23_08:09:30.786330Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_P_SW1 => 16
2016-08-23_08:09:31.037960Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_P => OFF: FM1
2016-08-23_08:09:31.042700Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_Y_SW1 => 16
2016-08-23_08:09:31.290770Z ISC_DRMI [ENGAGE_DRMI_ASC.run] ezca: H1:ASC-MICH_Y => OFF: FM1
2016-08-23_08:09:33.911750Z ISC_DRMI new request: DRMI_WFS_CENTERING
2016-08-23_08:09:33.911930Z ISC_DRMI calculating path: ENGAGE_DRMI_ASC->DRMI_WFS_CENTERING
2016-08-23_08:09:33.912540Z ISC_DRMI new target: DOWN
2016-08-23_08:09:33.912620Z ISC_DRMI GOTO REDIRECT
2016-08-23_08:09:33.912900Z ISC_DRMI REDIRECT requested, timeout in 1.000 seconds
Seems as though there was a request for a state that is behind its current position, so it had to go through DOWN to get there. This request came from ISC_LOCK:
2016-08-23_08:09:33.546800Z ISC_LOCK [LOCK_DRMI_3F.run] DRMI TRIGGERED NOT LOCKED:
2016-08-23_08:09:33.546920Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-MICH_TRIG_MON = 0.0
2016-08-23_08:09:33.547020Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-PRCL_TRIG_MON = 1.0
2016-08-23_08:09:33.547110Z ISC_LOCK [LOCK_DRMI_3F.run] LSC-SRCL_TRIG_MON = 0.0
2016-08-23_08:09:33.547210Z ISC_LOCK [LOCK_DRMI_3F.run] DRMI lost lock
2016-08-23_08:09:33.602500Z ISC_LOCK state returned jump target: LOCKLOSS_DRMI
2016-08-23_08:09:33.602710Z ISC_LOCK [LOCK_DRMI_3F.exit]
2016-08-23_08:09:33.666340Z ISC_LOCK JUMP: LOCK_DRMI_3F->LOCKLOSS_DRMI
2016-08-23_08:09:33.667220Z ISC_LOCK calculating path: LOCKLOSS_DRMI->LOCK_DRMI_3F
2016-08-23_08:09:33.667760Z ISC_LOCK new target: LOCK_DRMI_1F
2016-08-23_08:09:33.668520Z ISC_LOCK executing state: LOCKLOSS_DRMI (3)
2016-08-23_08:09:33.668750Z ISC_LOCK [LOCKLOSS_DRMI.enter]
2016-08-23_08:09:33.854350Z ISC_LOCK EDGE: LOCKLOSS_DRMI->LOCK_DRMI_1F
2016-08-23_08:09:33.855110Z ISC_LOCK calculating path: LOCK_DRMI_1F->LOCK_DRMI_3F
2016-08-23_08:09:33.855670Z ISC_LOCK new target: ENGAGE_DRMI_ASC
2016-08-23_08:09:33.856260Z ISC_LOCK executing state: LOCK_DRMI_1F (101)
2016-08-23_08:09:33.856410Z ISC_LOCK [LOCK_DRMI_1F.enter]
2016-08-23_08:09:33.868100Z ISC_LOCK [LOCK_DRMI_1F.main] USERMSG 0: node TCS_ITMY_CO2_PWR: NOTIFICATION
2016-08-23_08:09:33.868130Z ISC_LOCK [LOCK_DRMI_1F.main] USERMSG 1: node SEI_BS: NOTIFICATION
2016-08-23_08:09:33.893890Z ISC_LOCK [LOCK_DRMI_1F.main] ezca: H1:GRD-ISC_DRMI_REQUEST => DRMI_WFS_CENTERING
and
2.) 08:13:12 UTC (1155975209)
There doesn't seem to be any funny business here. The DRMI_locked() function looks at the channels in the log below and will then return to LOCK_DRMI_1F, and at this point it seems like the MC lost lock (see plots).
2016-08-23_08:13:17.613090Z ISC_DRMI [DRMI_WFS_CENTERING.run] DRMI TRIGGERED NOT LOCKED:
2016-08-23_08:13:17.613160Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-MICH_TRIG_MON = 0.0
2016-08-23_08:13:17.613230Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-PRCL_TRIG_MON = 1.0
2016-08-23_08:13:17.613300Z ISC_DRMI [DRMI_WFS_CENTERING.run] LSC-SRCL_TRIG_MON = 0.0
2016-08-23_08:13:17.613500Z ISC_DRMI [DRMI_WFS_CENTERING.run] la la
2016-08-23_08:13:17.670880Z ISC_DRMI state returned jump target: LOCK_DRMI_1F
2016-08-23_08:13:17.671070Z ISC_DRMI [DRMI_WFS_CENTERING.exit]
2016-08-23_08:13:17.671520Z ISC_DRMI STALLED
2016-08-23_08:13:17.734330Z ISC_DRMI JUMP: DRMI_WFS_CENTERING->LOCK_DRMI_1F
2016-08-23_08:13:17.741520Z ISC_DRMI calculating path: LOCK_DRMI_1F->DRMI_WFS_CENTERING
2016-08-23_08:13:17.742080Z ISC_DRMI new target: DRMI_LOCK_WAIT
2016-08-23_08:13:17.742750Z ISC_DRMI executing state: LOCK_DRMI_1F (30)
2016-08-23_08:13:17.742920Z ISC_DRMI [LOCK_DRMI_1F.enter]
2016-08-23_08:13:17.744030Z ISC_DRMI [LOCK_DRMI_1F.main] MC not Locked
2016-08-23_08:13:17.795150Z ISC_DRMI state returned jump target: DOWN
2016-08-23_08:13:17.795290Z ISC_DRMI [LOCK_DRMI_1F.exit]
Here are the functions that are used as decorators in DRMI_WFS_CENTERING:
def MC_locked():
    trans_pd_lock_threshold = 50
    return ezca['IMC-MC2_TRANS_SUM_OUTPUT']/ezca['IMC-PWR_IN_OUTPUT'] >= trans_pd_lock_threshold

def DRMI_locked():
    MichMon = ezca['LSC-MICH_TRIG_MON']
    PrclMon = ezca['LSC-PRCL_TRIG_MON']
    SrclMon = ezca['LSC-SRCL_TRIG_MON']
    if (MichMon > 0.5) and (PrclMon > 0.5) and (SrclMon > 0.5):
        # We're still locked and triggered, so return True
        return True
    else:
        # Eeep! Not locked. Log some stuff
        log('DRMI TRIGGERED NOT LOCKED:')
        log('LSC-MICH_TRIG_MON = %s' % MichMon)
        log('LSC-PRCL_TRIG_MON = %s' % PrclMon)
        log('LSC-SRCL_TRIG_MON = %s' % SrclMon)
        return False
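For illustration only (this is not the actual Guardian decorator API), a checker like DRMI_locked() can gate a state's run() method roughly like this: when the check fails, run() is bypassed and a jump target is returned instead, which is why the guardian logs a jump to LOCK_DRMI_1F:

```python
# Illustrative sketch, not the real Guardian API: gate a state's run()
# method on a lock checker, returning a jump target when the check fails.
def assert_locked(checker, jump_target='LOCK_DRMI_1F'):
    def decorator(run):
        def wrapper(self):
            if not checker():
                return jump_target  # guardian would jump to this state
            return run(self)
        return wrapper
    return decorator

class DRMI_WFS_CENTERING(object):
    @assert_locked(lambda: False)  # pretend DRMI_locked() reported lock loss
    def run(self):
        return True  # normal completion when locked

print(DRMI_WFS_CENTERING().run())
```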
Something I also should have mentioned is that ISC_LOCK was brought into Manual and LOCK_DRMI_3F was requested right before the logs seen above. It seems as though it wasn't quite ready to be there yet, so it jumped back down to LOCK_DRMI_1F and reran the state where it requested DRMI_WFS_CENTERING from the ISC_DRMI guardian.
Overview
A synchronized oscillator was added to the QUAD_MASTER model test mass stage (L3). After re-compiling the SUS-ETMY model there will be two synchronized oscillators in the L3 stage that will be used for driving calibration lines: *_L3_CAL_LINE and *_L3_CAL2_LINE.
Removed the channel LKIN_P_LO from the list of DQ channels and added L3_CAL2_LINE_OUT to the list.
The h1susetmy model must be recompiled in order for the changes to take effect.
Details
For one of the two calibration lines that we needed to run during ER9 we used a pitch dither oscillator, SUS-ETMY_LKIN_P (see LHO alog 28164). After analyzing the ER9 data we found two problems with this line (see LHO alog 29108): the CAL-CS model that calculates the time-dependent parameters relies on an oscillator that is synchronized with the ones in the SUS-ETMY and CAL-PCALY models. Since SUS-ETMY_LKIN_P is not a fixed-phase (synchronized) oscillator, the phases have to be adjusted by hand every time the oscillator gets restarted. With synchronized oscillators this will not be necessary.
The second synchronized oscillator was added at L3_CAL2_LINE_OUT and the list of DQ channels was modified accordingly. The L3_CAL2_LINE_OUT channel was added with a sampling rate of 512 Hz. LKIN_P_LO was removed from the list of DQ channels.
The changes were committed to the USERAPPS repository, rev. 14081.
Dave, TJ, Jeff K, Darkhan,
The H1:SUS-ETM(X|Y) models were recompiled and restarted, and the DAQ was restarted (see LHO alog 29245, WP 6117).
The QUAD MEDM screen was updated to show the new oscillator settings.
The MEDM screen updates were committed to userapps repository (rev. 14088):
common/medm/quad/SUS_CUST_QUAD_ISTAGE_CAL2_LINE.adl
common/medm/quad/SUS_CUST_QUAD_OVERVIEW.adl