[Jenne, Terra]
We've been pretty frustratingly plagued by the CSOFT / dPdTheta instability today. Earlier today with Sheila, we were able to use the offsets from yesterday, but then later in the evening those offsets make things unstable when we go up in power. I'm starting to wonder if an initial alignment needs to be done, since the green arm powers (measured at LockingArmsGreen, or anything early in the sequence) have been decreasing with each lock. Maybe that will help?
Sheila migrated the Soft offsets into the trans QPDs, so the IFO now aligns to that place when the soft loops come on. Also, the POP A offsets are engaged during ASC_Part2, so the IFO is aligned to yesterday's place. However, with this alignment at 2 W we cannot engage the roll mode damping. For the last few hours we've been skipping over EngageRollModeDamping, and I've commented out the final roll mode gain setting that used to happen in DC readout. We don't usually seem to have much of a problem just leaving the damping off, and we can turn it on once we're at at least a semi-high power (maybe 20+ W?).
Since we kept being troubled by the CSOFT instability, we went back to trying new offsets. We think there's a little more that could be done, but we have a place where the recycling gain stays fairly constant as we power up. The recycling gain looks terrible at 2 W (30-ish), but then only decays to 28-ish, compared with the usual decay to 22-ish. Also, with this alignment the green arm transmissions are both high when we're at high power. This seems like something we want: we think the green and red beams are pretty well co-aligned after the Soft loops come on, so hopefully this means we're closer to finding a place that maintains the alignment of the IFO from 2 W to 50 W.
In the SDF screenshot, the POP_A offsets in the setpoint are what Sheila and Terra found last night, while the current values are the place that we like best so far tonight. The setpoint offsets for the soft loops aren't meaningful, but the current values are the ones that we like from tonight, which are on top of last night's offsets (which have already been put into the QPDs and accepted in SDF).
Attached also is a screenshot showing two striptools (and a bunch of other stuff that's basically ignorable). Right around -100 minutes is the initial power-up and corresponding power recycling gain drop. You can see that as we move offsets around, we're obviously changing the red and green arm powers, as well as the recycling gain. Note though the time around -15 min where both green arm transmissions (blue and teal) are high, and the PRC gain is high. X-arm green seems most strongly affected by POP_A pit, while Y-arm green seems most strongly affected by DSOFT.
To help maintain these locks, we increased the ISS 3rd loop gain from -1 to -10. Also, we increased the SOFT yaw gains from 3 to 7, and the SOFT pitch gains from 0.5 to 0.7. This seemed to ameliorate most of the instabilities, although we are still sometimes struggling to hold the lock - I think that's perhaps another indicator that we need an initial alignment.
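For the record, here is a minimal sketch of how gain changes like these could be applied with pyepics; it is not the exact procedure we used, and the channel names (ISS third loop and ASC soft loop gains) are my best-guess assumptions, so verify them against the MEDM screens before use.

from epics import caput

# Assumed channel names -- check these against the MEDM screens before use.
gain_changes = {
    "H1:PSL-ISS_THIRDLOOP_GAIN": -10.0,  # ISS 3rd loop gain, was -1
    "H1:ASC-CSOFT_Y_GAIN": 7.0,          # SOFT yaw gains, were 3
    "H1:ASC-DSOFT_Y_GAIN": 7.0,
    "H1:ASC-CSOFT_P_GAIN": 0.7,          # SOFT pitch gains, were 0.5
    "H1:ASC-DSOFT_P_GAIN": 0.7,
}

for channel, value in gain_changes.items():
    caput(channel, value)
    print(channel, "->", value)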
H1 Commissioning
Jenne, Terra, & Sheila were at the helm for H1 higher power work.
MEDM work
Made corrections to the sitemap (IFO Common Mode Servo on its pull-down), and to the Digital Video Overview (added a name for the POP Air camera at the CAM30 position).
Stranger Things Happening Around Gate Tonight
Happened to catch a vehicle rolling on site around 9:45 (it turned toward the LSB, turned around, & lingered before leaving). The other oddity here was that this car illuminated another vehicle as it approached; that vehicle was at the gate with its lights off. Monitored the car for about 30 min, called out to the Gate phone, and then three of us went to investigate. There were about 4-6 people, and they were looking at the stars. They said they had talked to someone on the phone about coming out to watch the sky.
Upon returning to the OSB, we checked the external front door lock to make sure it hadn't been unlocked (luckily it was locked). The group eventually left (so they were here for at least an hour).
Per FAMIS #7068, saw that two ISIs needed their L4C WD counters to be cleared.
Jeff K, Darkhan T,
Last Tuesday we updated the infrastructure for injecting calibration lines (see LHO alogs 29245, 29249). Below is the table of currently active calibration lines:
Channel name (base)               _FREQ (Hz)   _SINGAIN (ct)   Purpose
H1:CAL-PCALY_PCALOSC1_OSC         36.7         125             O1-scheme kappa_TST / kappa_PU
H1:CAL-PCALY_PCALOSC2_OSC         331.9        2900            O1-scheme kappa_C / f_C
H1:CAL-PCALY_PCALOSC3_OSC         1083.7       15000           high-frequency calibration check ("bonus" line)
H1:CAL-PCALX_PCALOSC1_OSC         3501.3       39322           high-frequency sensing function characterization ("mobile" line)

Channel name (base)               _FREQ (Hz)   _CLKGAIN (ct)   Purpose
H1:CAL-CS_TDEP_DARM_LINE1_DEMOD   37.3         0.1             O1-scheme kappa_PU
H1:SUS-ETMY_L3_CAL_LINE           35.9         0.11            O1-scheme kappa_TST / kappa_PU
H1:SUS-ETMY_L1_CAL_LINE           33.7         11              O2-scheme synched oscillator for kappa_U
H1:SUS-ETMY_L2_CAL_LINE           34.7         1.1             O2-scheme synched oscillator for kappa_P
H1:SUS-ETMY_L3_CAL2_LINE          35.3         0.11            O2-scheme synched oscillator for kappa_T
We plan to adjust the three O2-scheme line frequencies and amplitudes, and to cancel them out with PCALY (so they will not appear in the reconstructed DARM spectrum); the following synchronized oscillators will be used for this purpose: H1:CAL-PCALY_PCALOSC{4-6}_OSC.
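As a convenience, a minimal sketch for reading back the table above from EPICS with pyepics; the _FREQ / _SINGAIN / _CLKGAIN suffixes follow the column headers, and the assembled record names are assumptions to be verified before relying on them.

from epics import caget

pcal_oscs = [
    "H1:CAL-PCALY_PCALOSC1_OSC",
    "H1:CAL-PCALY_PCALOSC2_OSC",
    "H1:CAL-PCALY_PCALOSC3_OSC",
    "H1:CAL-PCALX_PCALOSC1_OSC",
]
sus_lines = [
    "H1:CAL-CS_TDEP_DARM_LINE1_DEMOD",
    "H1:SUS-ETMY_L3_CAL_LINE",
    "H1:SUS-ETMY_L1_CAL_LINE",
    "H1:SUS-ETMY_L2_CAL_LINE",
    "H1:SUS-ETMY_L3_CAL2_LINE",
]

# Print the currently loaded frequency and amplitude of each calibration line.
for base in pcal_oscs:
    print(base, caget(base + "_FREQ"), "Hz,", caget(base + "_SINGAIN"), "ct")
for base in sus_lines:
    print(base, caget(base + "_FREQ"), "Hz,", caget(base + "_CLKGAIN"), "ct")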
We were locked last night at 50 W for ~2.5 hours and lost lock from an ITMX 15522 Hz PI. This is a known PI seen months ago; I had purposefully left damping settings off to see if it rang up. See first attachment showing lockloss.
This afternoon we were locked at 50 W for ~2 hours and I let the mode ring up so I could demonstrate successful damping. See second attachment showing damping.
All PIs that were previously observed have now been seen and damped post OMC vent. We have 5 PIs at 50 W (at least up to ~3 hour locks): ITMX 15520 Hz; ETMX 15541 Hz; ETMY 15542 Hz, 15009 Hz, 18041 Hz (aliased from 47495 Hz). All are successfully damped via the guardian and have had their damping phase and gain optimized.
Last week, Keith posted the results of a study of folded magnetometer channel data (alog 29166) aimed at understanding the results of recent changes to the timing system (primarily LED reprogramming and power supply switching). This is a follow-up, looking at the spectra of the same channels, and tracking the behavior of the two combs which the timing system interventions were intended to mitigate.
Detailed plots
Overview table (daily spectra, selected dates)
Full data set (daily spectra)
Full data set (cumulative spectra since Jul 1 2016, covering date ranges where Fscan SFTs were available)
These plots were generated from Fscans + spec_avg_long + my own plotting tools.
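For anyone curious, here is a rough sketch of the kind of processing involved (the real pipeline is Fscans + spec_avg_long, not this): Welch-average a stretch of magnetometer data into a high-resolution spectrum and read off the PSD at the comb harmonics. The sample rate, comb spacing, and offset below are placeholders, not the parameters of the actual combs.

import numpy as np
from scipy.signal import welch

def comb_bin_powers(data, fs, comb_spacing=1.0, comb_offset=0.0, n_harmonics=20):
    # Welch-average the data into a high-resolution PSD (~1800 s segments here),
    # then return the PSD value in the bin at (or just above) each comb harmonic.
    freqs, psd = welch(data, fs=fs, nperseg=int(fs * 1800))
    harmonics = comb_offset + comb_spacing * np.arange(1, n_harmonics + 1)
    idx = np.searchsorted(freqs, harmonics)
    return harmonics, psd[idx]

# Example with fake data standing in for an hour of 256 Hz magnetometer samples.
fs = 256.0
data = np.random.randn(int(fs * 3600))
for f, p in zip(*comb_bin_powers(data, fs)):
    print("%6.2f Hz  PSD = %.3e" % (f, p))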
Timeline
July 14-21 comparison: before and after initial updates to timing slave card firmware (blinking LEDs turned off in many places, but not on timing fanouts)
July 21-Aug 6 comparison: firmware updated for EX fanout; CPS timing fanout power supply changed
Aug 6 - Aug 18 comparison: firmware updated for CER, MSR, EY fanouts
Notable features
This afternoon I ramped CP4's LLCV in 5% increments every 2 minutes, from 39% to 88% open, which took the fill level from 88% to 100% full, to get more data from the exhaust flow meter. The fill-level setpoint has been reset to 92% and the level is slowly coming back down with the LLCV at 20% open. Kudos to Patrick for writing an effective PI code - it works very well for overfill scenarios!
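Patrick's code isn't reproduced here, but for context, a generic sketch of a proportional-integral level loop of this sort (made-up gains and limits, actuating the LLCV percent-open from the fill-level error):

def pi_step(level_pct, setpoint_pct, integral, dt, kp=2.0, ki=0.05,
            out_min=0.0, out_max=100.0):
    # One PI update: returns (LLCV % open command, updated integral term).
    error = setpoint_pct - level_pct          # positive when under-filled
    integral += error * dt
    output = kp * error + ki * integral
    # Clamp to the valve's range and back off the integrator to avoid wind-up.
    if output > out_max:
        output, integral = out_max, integral - error * dt
    elif output < out_min:
        output, integral = out_min, integral - error * dt
    return output, integral

# e.g. an overfilled dewar (100% vs a 92% setpoint) drives the valve toward closed:
llcv, integ = pi_step(level_pct=100.0, setpoint_pct=92.0, integral=0.0, dt=10.0)
print(llcv, integ)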
Gerardo, Chandra
On Tuesday, Aug. 23rd, we adjusted the potentiometer on PT-140a (Pirani) again - this time 11 turns CCW. Since Gerardo terminated cables for the AIPs, the gauge voltage has changed again and needs to be re-adjusted so the CC does not keep tripping on its set point interlock.
One more adjustment to the potentiometer, because the CC interlock tripped a couple of times since the last change: 6 more turns CCW.
TITLE: 08/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Commissioning work continues.
LOG:
1425 - 1440 hrs. local -> To and from Y-mid. Opened exhaust check-valve bypass valve, opened LLCV bypass valve 1/2 turn -> LN2 @ exhaust in 1 minute 20 seconds -> Restored valves to as-found configuration. Next CP3 overfill to be Friday, August 26th.
Daniel and Vern asked for a list of H1 models which use the cdsEzcaRead and cdsEzcaWrite parts to transfer data to remote IOCs. This follows my discovery that the h1psliss model is attempting to send data to LLO EPICS channels (channels which do not even exist at LLO; presumably they are obsolete).
To make the list, I generated a list of the front-end models' core mdl files and grepped within each file (grep -B 2 cdsEzCaWrite */h1/models/${model} | grep ":", and likewise for cdsEzCaRead).
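For posterity, roughly the same search written as a small script (a sketch only; the model path below is a placeholder for wherever the h1 .mdl files live on your system):

import glob
import re

MODEL_GLOB = "/opt/rtcds/userapps/release/*/h1/models/h1*.mdl"  # placeholder path

for path in sorted(glob.glob(MODEL_GLOB)):
    text = open(path).read()
    hits = set()
    for part in ("cdsEzCaRead", "cdsEzCaWrite"):
        for m in re.finditer(part, text):
            # Mimic 'grep -B 2': look a little way back from the match for the
            # block's Name "..." line that carries an IFO:SUBSYS channel.
            context = text[max(0, m.start() - 200):m.start()]
            for name in re.findall(r'Name\s+"([^"]*:[^"]*)"', context):
                hits.add((part, name))
    if hits:
        print(path)
        for part, name in sorted(hits):
            print("  %s: %s" % (part, name))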
cdsEzCaRead:
h1ioppemmx.mdl *
Name "H1:DAQ-DC0_GPS"
h1ioppemmy.mdl *
Name "H1:DAQ-DC0_GPS"
h1sushtts.mdl
Name "H1:LSC-REFL_A_LF_OUTPUT"
h1pslpmc.mdl
Name "H1:PSL-OSC_LOCKED"
h1tcscs.mdl
Name "H1:ASC-X_TR_B_SUM_OUTPUT"
Name "H1:ASC-Y_TR_B_SUM_OUTPUT"
Name "H1:TCS-ETMX_RH_LOWERPOWER"
Name "H1:TCS-ETMX_RH_UPPERPOWER"
Name "H1:TCS-ETMY_RH_LOWERPOWER"
Name "H1:TCS-ETMY_RH_UPPERPOWER"
Name "H1:TCS-ITMX_CO2_LASERPOWER_ANGLE_CALC"
Name "H1:TCS-ITMX_CO2_LASERPOWER_ANGLE_REQUEST"
Name "H1:TCS-ITMX_CO2_LASERPOWER_POWER_REQUEST"
Name "H1:TCS-ITMX_CO2_LSRPWR_MTR_OUTPUT"
Name "H1:TCS-ITMX_RH_LOWERPOWER"
Name "H1:TCS-ITMX_RH_UPPERPOWER"
Name "H1:TCS-ITMY_CO2_LASERPOWER_ANGLE_CALC"
Name "H1:TCS-ITMY_CO2_LASERPOWER_ANGLE_REQUEST"
Name "H1:TCS-ITMY_CO2_LASERPOWER_POWER_REQUEST"
Name "H1:TCS-ITMY_CO2_LSRPWR_MTR_OUTPUT"
Name "H1:TCS-ITMY_RH_LOWERPOWER"
Name "H1:TCS-ITMY_RH_UPPERPOWER"
h1odcmaster.mdl
Name "H1:GRD-IFO_OK"
Name "H1:GRD-IMC_LOCK_OK"
Name "H1:GRD-ISC_LOCK_OK"
Name "H1:GRD-OMC_LOCK_OK"
Name "H1:PSL-ODC_CHANNEL_LATCH"
* Mid-station PEM systems do not have IRIG-B timing; cdsEzCaRead is used to remotely obtain the starting GPS time.
cdsEzCaWrite:
h1psliss.mdl *
Name "L1:IMC-SL_QPD_WHITEN_SEG_1_GAINSTEP"
Name "L1:IMC-SL_QPD_WHITEN_SEG_1_SET_1"
Name "L1:IMC-SL_QPD_WHITEN_SEG_1_SET_2"
Name "L1:IMC-SL_QPD_WHITEN_SEG_1_SET_3"
Name "L1:IMC-SL_QPD_WHITEN_SEG_2_GAINSTEP"
Name "L1:IMC-SL_QPD_WHITEN_SEG_2_SET_1"
Name "L1:IMC-SL_QPD_WHITEN_SEG_2_SET_2"
Name "L1:IMC-SL_QPD_WHITEN_SEG_2_SET_3"
Name "L1:IMC-SL_QPD_WHITEN_SEG_3_GAINSTEP"
Name "L1:IMC-SL_QPD_WHITEN_SEG_3_SET_1"
Name "L1:IMC-SL_QPD_WHITEN_SEG_3_SET_2"
Name "L1:IMC-SL_QPD_WHITEN_SEG_3_SET_3"
Name "L1:IMC-SL_QPD_WHITEN_SEG_4_GAINSTEP"
Name "L1:IMC-SL_QPD_WHITEN_SEG_4_SET_1"
Name "L1:IMC-SL_QPD_WHITEN_SEG_4_SET_2"
Name "L1:IMC-SL_QPD_WHITEN_SEG_4_SET_3"
h1pslpmc.mdl
Name "H1:PSL-EPICSALARM"
* All writes to L1 channels will be removed on the next restart of the PSL ISS model.
Tagging all subsystems that are nominally responsible for these models.
Interestingly, h1tcscs is cdsEzCaRead'ing some of its own EPICS channels. Looks like a copy-paste issue; I'll work with Nutsinee when she gets back.
See yesterday's alogs for a review of yesterday's activities. Please close all FRSs.
SEI - All good. ITMX did not trip from the two big EQs last night.
SUS - Charge is growing, needs a sign flip soon.
CDS - Next Tuesday: pulling the demod board and putting the common mode board back in.
PSL - After recovering from a trip yesterday, things look good.
Vac - Hoping for more experiments on CP4; they will cause alarms. Kyle is still baking the vertex RGA, so that noise will continue for the week.
Facilities - Safety meeting today.
I was asked to summarize the SWWD (software watchdog) timing sequence as a reminder.
t=0: SUS IOP detects top OSEM RMS exceeds trip level, starts its 1st countdown (5 mins)
t=5mins: SUS IOP 1st countdown expired, its IPC output goes to BAD and it starts its 2nd countdown (15 mins). SEI IOP receives the BAD IPC, and starts its 1st countdown (4 mins)
t=9mins: SEI IOP 1st countdown expired, its IPC output goes to BAD and it starts its 60 second 2nd countdown. SEI user models get the 60 second warning IPC so they can cleanly shutdown before the DACs are killed
t=10mins: SEI IOP 2nd countdown expired, DAC cards associated with the chamber the SUS is located in are killed
t=20mins: SUS IOP 2nd countdown expired, SUS DAC cards are killed
For the hardware watchdog (HWWD) the times are doubled. The power to the ISI Coil Driver chassis is removed after 20 mins of continuous SUS shaking.
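To make the cascaded countdowns above concrete, here is a minimal sketch tabulating the event times (t=0 being the moment the SUS IOP sees the top-mass OSEM RMS exceed its trip level; double the durations for the HWWD):

SUS_FIRST, SUS_SECOND = 5, 15   # SUS IOP countdowns, minutes
SEI_FIRST, SEI_SECOND = 4, 1    # SEI IOP countdowns, minutes (the second is 60 s)

events = [
    (0, "SUS IOP trips, starts its 1st countdown"),
    (SUS_FIRST, "SUS IPC goes BAD; SUS 2nd countdown and SEI 1st countdown start"),
    (SUS_FIRST + SEI_FIRST, "SEI IPC goes BAD; SEI user models get the 60 s warning"),
    (SUS_FIRST + SEI_FIRST + SEI_SECOND, "DAC cards associated with the chamber are killed"),
    (SUS_FIRST + SUS_SECOND, "SUS DAC cards killed"),
]

for t, what in events:
    print("t = %2d min: %s" % (t, what))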
Tagging SEI and SUS.
The power meter in the high power oscillator's external shutter was replaced. The existing unit ceased to function some time around the planned power outage a couple of months ago, for reasons best known to itself. Old S/N = 589365, new S/N = 627203. Jason/Peter
Forgot to mention that the hoses for the power meter and front end cooling circuits were swapped over at the water manifold under the table. Given that the hoses were labelled numerically, this seems to have been a remnant from the laser installation.
After the hose swap yesterday, the PSL_AMP_FLOW reading dropped by 0.3 bar and the PWRMETERFLOW reading increased by 0.4 bar.
The hoses were swapped because we found that they were hooked up backwards, i.e. the MOPA cooling hose was plugged into the power meter water circuit and vice versa (most likely during the recent water manifold swap). This means that in the trend data the flow for the front end (H1:PSL-AMP_FLOW) was actually reading the flow through the power meter circuit; likewise, the power meter circuit flow (H1:PSL-OSC_PWRMETERFLOW) was actually reading the flow for the front end. This was fixed yesterday and the flow data is now being read from the correct water circuits.
S. Dwyer, J. Kissel, C. Gray

After successfully recovering the IMC's VCO and recovering the IMC (29264), we were able to get up through LOCKING_ARMS_GREEN in the lock acquisition sequence. However, we found that ALS COMM failed, causing lock losses during the next step (LOCKING_ALS), when the input for IMC length control in its Common Mode Chassis was switched from the IMC's PDH output to the ALS COMM PLL output. The ALS COMM PLL output is connected to IN2 of the IMC chassis that had a new daughter board installed in the star-crossed ISC rack H1-ISC-R1 today (LHO aLOG 29250). After fighting through MEDM screen confusion* at the racks, we found that OUT2 (an analog pick-off just after the input gain circuit) showed a ~2.5 [V] offset, even with IN2 terminated with 50 [Ohms]. Suspecting that this symptom indicated the input gain circuit (circled in red in the MEDM screen capture) was yet another casualty of the unfortunate rack power mishap today (LHO aLOG 29253), we've replaced the entire chassis (which lives in U14 of H1-ISC-R1) with a spare we found in the EE shop -- S/N S1102627 (or Board S/N S1102627MC). Notably, this spare does not have one of the new daughter boards on which Chris has worked so hard. We're not suggesting this swap be permanent, but we made the swap for tonight at least, so we can hopefully make forward progress. We suggest that IN2 and/or the input gain stage of S/N S1102626 be fixed tomorrow, and the chassis restored, so we can employ the new daughter board.

Other Details:
- Before removing the chassis, we powered down the entire rack using the voltage sequencer around the back at the top of the rack.
- After installing the spare chassis, we made sure all cables were connected appropriately before turning the rack power on again (via the sequencer again).
- We added a few labels to the IMC's PDH output and the ALS COMM PLL output cables so that they're easier to follow and reconnect in the future.

*MEDM Screen Confusion -- whether IN1 or IN2 is fed into OUT2 of all common mode chassis is selectable on their MEDM screens. For the IMC's common mode board (at least for S/N S1102626), the MEDM screen's indication of the status of that switch is exactly backwards: when the screen indicates that IN1 is feeding OUT2, IN2 is actually feeding OUT2, and vice versa. #facepalm
With Sheila's help, the OUT 2 switch should now be correct for the MC Common Mode Servo medm (H1IMC_SERVO.adl). This change was committed to the svn.
M. Pirello (reported by J. Kissel from verbal discussion with F. Clara) Marc has inspected the Common Mode Board chassis we've removed (SN S1102626), and indeed found several blown transistors and opamps -- and is not even through the chassis test procedure. Unfortunately, the EE shop needs a restocking of surface mount components before we can make the repairs, but the plan is to shoot for a re-install of this board by next Tuesday (Aug 30th).
Repairs to S1102626 are complete and the chassis has been tested with the 200kHz low pass filter. The chassis performance is similar to the previous test performed September 2011.
When the 200kHz low pass filter is activated we detected a 3mV dc offset which should be noted. The low pass filter works as designed with -3dB gain at 200kHz and rolls off nicely. I have attached files from the testing. File details can be found in the readme.txt included in the zip.
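As a sanity check on the quoted corner (assuming a single-pole response; the filter's actual topology isn't restated here), the magnitude at the corner frequency is 1/sqrt(2), i.e. about -3 dB:

import math

fc = 200e3  # 200 kHz corner
for f in (20e3, 100e3, 200e3, 400e3, 2e6):
    mag = 1.0 / math.sqrt(1.0 + (f / fc) ** 2)  # single-pole low-pass magnitude
    print("%7.0f kHz: %7.2f dB" % (f / 1e3, 20.0 * math.log10(mag)))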
After decoupling the pumping components used during the recent bake-out of the Y-end RGA, I exposed the RGA to the Y-end vacuum volume, energized the filament and let it come into equilibrium for an hour or more. I then let the RGA scan continuously with the multiplier (SEM) on for an additional hour or so while I gathered up my mess(es). I periodically checked the scanning as I walked past the screen. At one point, I noticed that the spectrum was changing rapidly towards the "dirty". I monitored the scanning and noted that after reaching a temporary maximum, the amus which had increased then returned to near their original values. After consulting with Jeff B. (the operator on shift), I feel that the observed changes in partial pressures were likely the result of IFO locking attempts, as they coincide closely in time. Perhaps something gets hot when the IFO is locked or when mirrors are steered? See attached scans.
If true that could be kind of scary (!) Can we set an RGA in MID (stripchart) mode and run time series following the main peaks through a locking attempt?
I could imagine baking the adsorbed water off the ETM and perhaps nearby baffles. But this should not persist (or repeat) after the first good cavity buildup.
Mike - Chandra's stated goal is to eventually continuously trend 7 AMUs (the maximum allowed by the software) at each building. The observation cited in this aLOG would obviously have been missed while in Faraday mode. Too bad that the RGAs don't live long with their SEMs on 24/7. As we install/commission the RGAs, and as she works out the issues with the CDS and/or GC folks, this trending will eventually be happening.

J. Smith - The partial pressures that are changing are too small to be expected to show up on the total pressure gauges. From the graphic scans, and knowing that the total pressure at the Y-end is 2 x 10^-9 torr, we see that the partial pressures that changed are small (~10^-12 torr) - but still interesting because they are measurable, and even more interesting if the changes can be shown to be tied to some IFO locking activity. (Science interesting? Who knew?)
Doh!!! Here are the .txt versions of the ASCII data
The indicated currents for these scans are typical of the SEM @ 1300 volts (which is the factory default). I have noticed in the past that setting the SEM voltage value in the EDIT tab does not change the value displayed in the device status screen or vice versa - so, though I set this to 1500 volts in one of those two fields, it may not have taken effect.