Per FAMIS #7068, I saw that two ISIs needed their L4C WD counters cleared.
Jeff K, Darkhan T,
Last Tuesday we updated the infrastructure for injecting calibration lines (see LHO alogs 29245, 29249). Below is the table of currently active calibration lines:
Channel Name                        _FREQ (Hz)  _SINGAIN (ct)  Purpose
H1:CAL-PCALY_PCALOSC1_OSC           36.7        125            O1-scheme kappa_TST / kappa_PU
H1:CAL-PCALY_PCALOSC2_OSC           331.9       2900           O1-scheme kappa_C / f_C
H1:CAL-PCALY_PCALOSC3_OSC           1083.7      15000          high-frequency calibration check ("bonus" line)
H1:CAL-PCALX_PCALOSC1_OSC           3501.3      39322          high-frequency sensing function characterization ("mobile" line)

Channel Name                        _FREQ (Hz)  _CLKGAIN (ct)  Purpose
H1:CAL-CS_TDEP_DARM_LINE1_DEMOD     37.3        0.1            O1-scheme kappa_PU
H1:SUS-ETMY_L3_CAL_LINE             35.9        0.11           O1-scheme kappa_TST / kappa_PU
H1:SUS-ETMY_L1_CAL_LINE             33.7        11             O2-scheme synched oscillator for kappa_U
H1:SUS-ETMY_L2_CAL_LINE             34.7        1.1            O2-scheme synched oscillator for kappa_P
H1:SUS-ETMY_L3_CAL2_LINE            35.3        0.11           O2-scheme synched oscillator for kappa_T
We plan to adjust the three O2-scheme line frequencies and amplitudes and cancel them out with PCALY (so they will not appear in the reconstructed DARM spectrum); the synchronized oscillators H1:CAL-PCALY_PCALOSC{4-6}_OSC will be used for this purpose.
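The _DEMOD channel above tracks a calibration line by demodulating the signal at the line frequency. As a rough, hypothetical illustration of the principle only (the sample rate, line amplitude, and noise level below are made-up example values, not real calibration parameters):

```python
import numpy as np

def demod_line_amplitude(data, fs, f_line):
    """Estimate the amplitude of a single sinusoidal line in `data`
    by I/Q demodulation: mix down at f_line and average."""
    t = np.arange(len(data)) / fs
    # Mix the data down with a complex exponential at the line frequency
    iq = data * np.exp(-2j * np.pi * f_line * t)
    # The DC term of the mixed signal has magnitude A/2
    return 2 * np.abs(iq.mean())

# Example: a 36.7 Hz line of amplitude 0.5 buried in white noise
fs = 1024.0
t = np.arange(int(60 * fs)) / fs
rng = np.random.default_rng(0)
data = 0.5 * np.sin(2 * np.pi * 36.7 * t) + 0.1 * rng.standard_normal(t.size)
print(round(demod_line_amplitude(data, fs, 36.7), 2))  # recovers ~0.5
```

Averaging over many line cycles suppresses the broadband noise, which is why the kappa trackers can follow slow changes in line amplitude.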
We were locked last night at 50 W for ~2.5 hours and lost lock to an ITMX 15522 Hz PI (parametric instability). This is a known PI, seen months ago; I had purposely left the damping settings off to see if it would ring up. See the first attachment showing the lockloss.
This afternoon we were locked at 50 W for ~2 hours and I let the mode ring up so I could demonstrate successful damping. See second attachment showing damping.
All PIs that were previously observed have now been seen and damped post OMC vent. We have 5 PIs at 50 W (at least up to ~3 hour locks): ITMX 15520 Hz; ETMX 15541 Hz; ETMY 15542 Hz, 15009 Hz, 18041 Hz (aliased from 47495 Hz). All are successfully damped via the guardian and have had their damping phase and gain optimized.
Last week, Keith posted the results of a study of folded magnetometer channel data (alog 29166) aimed at understanding the results of recent changes to the timing system (primarily LED reprogramming and power supply switching). This is a follow-up, looking at the spectra of the same channels, and tracking the behavior of the two combs which the timing system interventions were intended to mitigate.
Detailed plots
Overview table (daily spectra, selected dates)
Full data set (daily spectra)
Full data set (cumulative spectra since Jul 1 2016, covering date ranges where Fscan SFTs were available)
These plots were generated from Fscans + spec_avg_long + my own plotting tools.
Timeline
July 14-21 comparison: before and after initial updates to timing slave card firmware (blinking LEDs turned off in many places, but not on timing fanouts)
July 21-Aug 6 comparison: firmware updated for EX fanout; CPS timing fanout power supply changed
Aug 6 - Aug 18 comparison: firmware updated for CER, MSR, EY fanouts
Notable features
This afternoon I ramped CP4's LLCV open in 5% increments every 2 minutes, from 39% to 88% open (taking the fill level from 88% to 100% full), to gather more data from the exhaust flow meter. The fill level setpoint has been reset to 92%, and the level is slowly coming back down with the LLCV at 20% open. Kudos to Patrick for writing an effective PI code. It works very well for overfill scenarios!
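For reference, the kind of discrete PI loop such level-control code implements can be sketched as follows; the toy plant model, gains, and rates here are made-up illustrative values, not the actual CP4 parameters:

```python
def pi_controller(setpoint, kp, ki, dt):
    """Return a stateful PI controller mapping measured level -> valve command."""
    integral = 0.0
    def step(measured):
        nonlocal integral
        error = setpoint - measured
        integral += error * dt          # accumulate error for the I term
        return kp * error + ki * integral
    return step

# Toy plant: fill level rises with valve opening and drains at a fixed rate
level, drain, dt = 100.0, 0.5, 1.0      # start overfilled at 100%
ctrl = pi_controller(setpoint=92.0, kp=2.0, ki=0.1, dt=dt)
for _ in range(400):
    valve = min(max(ctrl(level), 0.0), 100.0)   # clamp to 0-100% open
    level += (0.02 * valve - drain) * dt        # level responds to flow in/out

print(round(level, 1))  # settles near the 92.0 setpoint
```

The integral term is what removes the steady-state offset: with a constant drain, a pure P controller would settle below the setpoint, while the integrator winds up until the valve holds the level exactly at it.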
Gerardo, Chandra -- On Tuesday, Aug. 23rd, we adjusted the potentiometer on PT-140a (Pirani) again, this time 11 turns CCW. Since Gerardo terminated cables for the AIPs, the gauge voltage has changed again and needs to be readjusted so the CC does not keep tripping on its set point interlock.
One more adjustment to the potentiometer since the CC interlock tripped a couple of times since the last change. 6 more turns CCW.
TITLE: 08/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Commissioning work continues.
LOG:
1425 - 1440 hrs. local -> To and from Y-mid Opened exhaust check-valve bypass-valve, opened LLCV bypass valve 1/2 turn -> LN2 @ exhaust in 1 minute 20 seconds -> Restored valves to as found configuration. Next CP3 overfill to be Friday, August 26th.
Daniel and Vern asked for a list of H1 models which are using the cdsEzcaRead and cdsEzcaWrite parts to transfer data to remote IOCs. This follows my discovery that the h1psliss model is attempting to send data to LLO EPICS channels (channels which do not even exist at LLO; presumably they are obsolete).
To make the list, I created a list of front-end model core mdl files and grepped within each file (grep -B 2 cdsEzCaWrite */h1/models/${model} | grep ":")
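An equivalent search can be sketched in Python; this is just an illustration mimicking the grep above (the directory layout and the exact placement of Name lines inside the .mdl files are assumptions):

```python
import re
from pathlib import Path

def find_ezca_parts(models_dir, part="cdsEzCaWrite"):
    """Map each .mdl file to the channel Names appearing just before a
    given cdsEzCa part, mimicking `grep -B 2 <part> ... | grep ":"`."""
    results = {}
    for mdl in sorted(Path(models_dir).glob("*.mdl")):
        lines = mdl.read_text(errors="ignore").splitlines()
        hits = []
        for i, line in enumerate(lines):
            if part in line:
                # look at the couple of lines just before the match (-B 2)
                for prev in lines[max(0, i - 2):i]:
                    m = re.search(r'Name\s+"([A-Z0-9]+:[^"]+)"', prev)
                    if m:
                        hits.append(m.group(1))
        if hits:
            results[mdl.name] = hits
    return results
```

Run against a models directory it returns, e.g., {"h1pslpmc.mdl": ["H1:PSL-EPICSALARM"]} for a file whose Name line precedes a cdsEzCaWrite block.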
cdsEzCaRead:
h1ioppemmx.mdl *
Name "H1:DAQ-DC0_GPS"
h1ioppemmy.mdl *
Name "H1:DAQ-DC0_GPS"
h1sushtts.mdl
Name "H1:LSC-REFL_A_LF_OUTPUT"
h1pslpmc.mdl
Name "H1:PSL-OSC_LOCKED"
h1tcscs.mdl
Name "H1:ASC-X_TR_B_SUM_OUTPUT"
Name "H1:ASC-Y_TR_B_SUM_OUTPUT"
Name "H1:TCS-ETMX_RH_LOWERPOWER"
Name "H1:TCS-ETMX_RH_UPPERPOWER"
Name "H1:TCS-ETMY_RH_LOWERPOWER"
Name "H1:TCS-ETMY_RH_UPPERPOWER"
Name "H1:TCS-ITMX_CO2_LASERPOWER_ANGLE_CALC"
Name "H1:TCS-ITMX_CO2_LASERPOWER_ANGLE_REQUEST"
Name "H1:TCS-ITMX_CO2_LASERPOWER_POWER_REQUEST"
Name "H1:TCS-ITMX_CO2_LSRPWR_MTR_OUTPUT"
Name "H1:TCS-ITMX_RH_LOWERPOWER"
Name "H1:TCS-ITMX_RH_UPPERPOWER"
Name "H1:TCS-ITMY_CO2_LASERPOWER_ANGLE_CALC"
Name "H1:TCS-ITMY_CO2_LASERPOWER_ANGLE_REQUEST"
Name "H1:TCS-ITMY_CO2_LASERPOWER_POWER_REQUEST"
Name "H1:TCS-ITMY_CO2_LSRPWR_MTR_OUTPUT"
Name "H1:TCS-ITMY_RH_LOWERPOWER"
Name "H1:TCS-ITMY_RH_UPPERPOWER"
h1odcmaster.mdl
Name "H1:GRD-IFO_OK"
Name "H1:GRD-IMC_LOCK_OK"
Name "H1:GRD-ISC_LOCK_OK"
Name "H1:GRD-OMC_LOCK_OK"
Name "H1:PSL-ODC_CHANNEL_LATCH"
* Mid-station PEM systems do not have IRIG-B timing; cdsEzCaRead is used to remotely obtain the starting GPS time.
EzCaWrite:
h1psliss.mdl *
Name "L1:IMC-SL_QPD_WHITEN_SEG_1_GAINSTEP"
Name "L1:IMC-SL_QPD_WHITEN_SEG_1_SET_1"
Name "L1:IMC-SL_QPD_WHITEN_SEG_1_SET_2"
Name "L1:IMC-SL_QPD_WHITEN_SEG_1_SET_3"
Name "L1:IMC-SL_QPD_WHITEN_SEG_2_GAINSTEP"
Name "L1:IMC-SL_QPD_WHITEN_SEG_2_SET_1"
Name "L1:IMC-SL_QPD_WHITEN_SEG_2_SET_2"
Name "L1:IMC-SL_QPD_WHITEN_SEG_2_SET_3"
Name "L1:IMC-SL_QPD_WHITEN_SEG_3_GAINSTEP"
Name "L1:IMC-SL_QPD_WHITEN_SEG_3_SET_1"
Name "L1:IMC-SL_QPD_WHITEN_SEG_3_SET_2"
Name "L1:IMC-SL_QPD_WHITEN_SEG_3_SET_3"
Name "L1:IMC-SL_QPD_WHITEN_SEG_4_GAINSTEP"
Name "L1:IMC-SL_QPD_WHITEN_SEG_4_SET_1"
Name "L1:IMC-SL_QPD_WHITEN_SEG_4_SET_2"
Name "L1:IMC-SL_QPD_WHITEN_SEG_4_SET_3"
h1pslpmc.mdl
Name "H1:PSL-EPICSALARM"
* All writes to L1 channels will be removed on the next restart of the PSL ISS model.
Tagging all subsystems that are nominally responsible for these models.
Interestingly, h1tcscs is cdsEzCaRead'ing some of its own EPICS channels. This looks like a copy-paste issue; I'll work with Nutsinee when she gets back.
See yesterday's alogs for a review of yesterday's activities. Please close all FRSs.
SEI - All good. ITMX did not trip from the two big EQs last night.
SUS - Charge is growing, needs a sign flip soon.
CDS - Next Tuesday: pulling the demod board and putting the common mode board back in.
PSL - After recovering from a trip yesterday, things look good.
Vac - Hoping for more experiments on CP4; it will cause alarms. Kyle is still baking the vertex RGA, so noise will continue for the week.
Facilities - Safety meeting today.
I was asked to summarize the SWWD (software watchdog) timing sequence as a reminder.
t=0: SUS IOP detects top OSEM RMS exceeds trip level, starts its 1st countdown (5 mins)
t=5mins: SUS IOP 1st countdown expired, its IPC output goes to BAD and it starts its 2nd countdown (15 mins). SEI IOP receives the BAD IPC, and starts its 1st countdown (4 mins)
t=9mins: SEI IOP 1st countdown expired, its IPC output goes to BAD and it starts its 60 second 2nd countdown. SEI user models get the 60 second warning IPC so they can cleanly shutdown before the DACs are killed
t=10mins: SEI IOP 2nd countdown expired, DAC cards associated with the chamber the SUS is located in are killed
t=20mins: SUS IOP 2nd countdown expired, SUS DAC cards are killed
For the hardware watchdog (HWWD) the times are doubled. The power to the ISI Coil Driver chassis is removed after 20 mins of continuous SUS shaking.
Tagging SEI and SUS.
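The sequence above can be written down as a simple timeline calculation. This is just an illustration of the stated countdowns, not the actual front-end code:

```python
# SWWD countdown durations from the sequence above (minutes)
SUS_FIRST, SUS_SECOND = 5, 15
SEI_FIRST, SEI_SECOND = 4, 1

def swwd_timeline(t_trip=0, hwwd=False):
    """Return the watchdog event times (minutes after the RMS trip).
    With hwwd=True all durations are doubled, per the HWWD note."""
    scale = 2 if hwwd else 1
    sus_ipc_bad = t_trip + scale * SUS_FIRST     # SUS 1st countdown expires
    sei_ipc_bad = sus_ipc_bad + scale * SEI_FIRST   # SEI 1st countdown expires
    sei_dac_kill = sei_ipc_bad + scale * SEI_SECOND  # SEI DACs killed
    sus_dac_kill = sus_ipc_bad + scale * SUS_SECOND  # SUS DACs killed
    return {"SUS IPC bad": sus_ipc_bad,
            "SEI IPC bad": sei_ipc_bad,
            "SEI DACs killed": sei_dac_kill,
            "SUS DACs killed": sus_dac_kill}

print(swwd_timeline())  # {'SUS IPC bad': 5, 'SEI IPC bad': 9, ...}
```

Note that the doubled (HWWD) timeline reproduces the 20-minute figure quoted for removing power from the ISI coil driver chassis.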
The power meter in the high power oscillator's external shutter was replaced. The existing unit ceased to function some time around the planned power outage a couple of months ago, for reasons best known to itself. Old S/N = 589365, new S/N = 627203. Jason/Peter
Forgot to mention that the hoses for the power meter and front end cooling circuits were swapped over at the water manifold under the table. Given that the hoses were labelled numerically, this seems to have been a remnant from the laser installation.
After the hose swap yesterday, the PSL_AMP_FLOW reading dropped by 0.3 bar and the PWRMETERFLOW reading increased by 0.4 bar.
The hoses were swapped because we found that they were hooked up backwards, i.e., the MOPA cooling hose was plugged into the Power Meter water circuit and vice versa (most likely during the recent water manifold swap). This means that in the trend data the flow for the front end (H1:PSL-AMP_FLOW) was actually reading the flow through the power meter circuit; the same applies to the power meter circuit flow (H1:PSL-OSC_PWRMETERFLOW), which was actually reading the flow for the front end. This was fixed yesterday and the flow data is now reading from the correct water circuits.
S. Dwyer, J. Kissel, C. Gray
After successfully recovering the IMC's VCO and recovering the IMC (29264), we were able to get up through LOCKING_ARMS_GREEN in the lock acquisition sequence. However, we found that ALS COMM failures caused lock losses during the next step (LOCKING_ALS), when the input for IMC length control in its Common Mode Chassis was switched from the IMC's PDH output to the ALS COMM PLL output. The ALS COMM PLL output is connected to IN2 of the IMC chassis that had a new daughter board installed in the star-crossed ISC rack H1-ISC-R1 today (LHO aLOG 29250).
After fighting through MEDM screen confusion* at the racks, we found that OUT2 (an analog pick-off just after the input gain circuit) indicated a ~2.5 [V] offset, even with IN2 terminated with 50 [Ohm]. Suspecting that this symptom indicated the input gain circuit (circled in red in the MEDM screen capture) was yet another casualty of the unfortunate rack power mishap today (LHO aLOG 29253), we replaced the entire chassis (which lives in U14 of H1-ISC-R1) with a spare we found in the EE shop -- S/N S1102627 (or Board S/N S1102627MC). Notably, this spare does not have one of the new daughter boards on which Chris has worked so hard. We're not suggesting this swap be permanent, but we made the swap for tonight at least, so we can hopefully make forward progress. We suggest that IN2 and/or the input gain stage of S/N S1102626 be fixed tomorrow and the chassis restored, so we can employ the new daughter board.
Other details:
- Before removing the chassis, we powered down the entire rack using the voltage sequencer around the back at the top of the rack.
- After installing the replacement chassis, we made sure all cables were connected appropriately before turning the rack power on again (via the sequencer again).
- We added a few labels to the IMC PDH output and ALS COMM PLL output cables so that they're easier to follow and reconnect in the future.
*MEDM Screen Confusion -- whether IN1 or IN2 is fed into OUT2 of all common mode chassis is selectable on their MEDM screens. For the IMC's common mode board (at least for SN S1102626), the MEDM screen's indication of the status of that switch is exactly backwards. When the screen indicates that IN1 is feeding OUT2, IN2 is feeding OUT2, and vice versa. #facepalm
With Sheila's help, the OUT2 switch indication should now be correct on the MC Common Mode Servo MEDM screen (H1IMC_SERVO.adl). This change was committed to the SVN.
M. Pirello (reported by J. Kissel from verbal discussion with F. Clara) Marc has inspected the Common Mode Board chassis we removed (S/N S1102626) and has indeed found several blown transistors and op-amps -- and he is not even through the chassis test procedure yet. Unfortunately, the EE shop needs a restocking of surface-mount components before we can make the repairs, but the plan is to shoot for a re-install of this board by next Tuesday (Aug 30th).
Repairs to S1102626 are complete and the chassis has been tested with the 200kHz low pass filter. The chassis performance is similar to the previous test performed September 2011.
When the 200kHz low pass filter is activated, we detected a 3 mV DC offset, which should be noted. The low pass filter works as designed, with -3 dB at 200 kHz, and rolls off nicely. I have attached files from the testing. File details can be found in the readme.txt included in the zip.
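For reference, the quoted -3 dB point is exactly what a low-pass filter gives at its corner frequency; a quick sketch (assuming a simple first-order filter for illustration, since the actual filter topology isn't given here):

```python
import math

def lowpass_mag_db(f, f_corner):
    """Magnitude response (dB) of a first-order low-pass filter."""
    return 20 * math.log10(1 / math.sqrt(1 + (f / f_corner) ** 2))

print(round(lowpass_mag_db(200e3, 200e3), 2))  # -3.01 dB at the corner
print(round(lowpass_mag_db(2e6, 200e3), 1))    # a decade above: rolled off ~20 dB
```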
I've made a script to somewhat automate the weekly oplev trends FAMIS task. It makes 3 plots, like the attached image, of the oplev pit, yaw and sum channels for the test masses, BS, PR3 and SR3. It still requires a little fiddling with the plots: you have to zoom in manually on any plots that have 1e9-like spikes, but this should still be easier than running dataviewer templates. It uses h1nds for data and a pre-release version of the python nds2 client that has gap handling, so updates in the future could break this. I'll try to maintain this script, so any changes or improvements should come to me. The script lives in the userapps/sys/h1/scripts folder.
The script is run by going to the sys/h1/scripts folder:
jim.warner@opsws0:~ 0$ cd /opt/rtcds/userapps/release/sys/h1/scripts
And running the oplev_trends.py script with python:
jim.warner@opsws0:scripts 0$ python oplev_trends.py
You will then need to do the usual zooming in on useful data, saving screen shots and posting to the alog. I'll look into automating more of this, but it works well enough for now. It would also be very easy to add this to a "Weeklies" tab on the sitemap, which I believe LLO has done with some similar tasks.
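Not having the script open here, the channel bookkeeping it does presumably looks something like the following minimal sketch; `build_channels` is a hypothetical helper, the channel-name pattern and trend suffix are my guesses rather than the script's actual strings, and the nds2 fetch is shown only as a comment:

```python
def build_channels(optics=("ETMX", "ETMY", "ITMX", "ITMY", "BS", "PR3", "SR3")):
    """Hypothetical helper: minute-trend oplev channel names for each optic.
    The suffix pattern here is illustrative, not the script's actual one."""
    channels = []
    for optic in optics:
        for dof in ("PIT", "YAW", "SUM"):
            channels.append(f"H1:SUS-{optic}_OPLEV_{dof}.mean,m-trend")
    return channels

if __name__ == "__main__":
    chans = build_channels()
    print(len(chans))  # 21 channels: 7 optics x 3 readbacks
    # The actual fetch would then use the python nds2 client, roughly:
    #   import nds2
    #   conn = nds2.connection("h1nds1", 8088)
    #   bufs = conn.fetch(start_gps, stop_gps, chans)
```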
I've now added the HEPI monthly pressure trends to the same folder. Admittedly, there's little difference here between running my python script and running the dataviewer template, as the HEPI trends all fit on one dataviewer window easily. But this was pretty easy to throw together, and may allow us to automate these tasks more in the future, say if we could couple this with something like TJ's shift summary alog script.
Running it is similar to the oplev script:
jim.warner@opsws0:~ 0$ cd /opt/rtcds/userapps/release/sys/h1/scripts
jim.warner@opsws0:scripts 0$ python hepi_trends.py
For the oplev trends, they look good. I'll update the FAMIS procedure to run this script instead of using dataviewer.
Can you add the HAM2 oplev to this as well? While its usefulness is debated, it is an active optical lever, so we should be trending it too.
Thanks Jim!
After decoupling the pumping components used during the recent bake-out of the Y-end RGA, I exposed the RGA to the Y-end vacuum volume, energized the filament and let it come into equilibrium for an hour or more. I then let the RGA scan continuously with the multiplier (SEM) on for an additional hour or so while I gathered up my mess(es). I periodically checked the scanning as I walked past the screen. At one point, I noticed that the spectrum was changing rapidly towards the "dirty". I monitored the scanning and noted that, after reaching a temporary maximum, the AMUs which had increased then returned to near their original values. After consulting with Jeff B. (the operator on shift), I feel that the observed changes in partial pressures were likely the result of IFO locking attempts, as they coincide closely in time. Perhaps something gets hot when the IFO is locked or when mirrors are steered? See attached scans.
If true that could be kind of scary (!) Can we set an RGA in MID (stripchart) mode and run time series following the main peaks through a locking attempt?
I could imagine baking the adsorbed water off the ETM and perhaps nearby baffles. But this should not persist (or repeat) after the first good cavity buildup.
Mike - Chandra's stated goal is to eventually continuously trend 7 AMUs (the max allowed by software) at each building. The observation cited in this aLOG would obviously have been missed while in Faraday mode. Too bad that the RGAs don't live long with their SEMs on 24/7. As we install/commission the RGAs, and as she works out the issues with the CDS and/or GC folks, this trending will eventually be happening.
J. Smith - The partial pressures that are changing are too small to be expected to show up on the total pressure gauges. From the graphic scans, and knowing that the total pressure at the Y-end is 2 x 10^-9 torr, we see that the partial pressures that changed are small (10^-12 torr) - but still interesting because they are measurable, and even more interesting if the changes can be shown to be tied to some IFO locking activity. (Science interesting? Who knew?)
Doh!!! Here are the .txt versions of the ASCII data
The indicated currents for these scans are typical of the SEM @ 1300 volts (the factory default). I have noticed in the past that setting the SEM voltage value in the EDIT tab does not change the value displayed in the device status screen, or vice versa - so, though I set this to 1500 volts in one of those two fields, it may not have taken effect.