Work Permit | Date | Description | alog/status |
6546.html | 03/27/17 04:05 PM | Create and test a Guardian node that will turn on and adjust a set of Cal lines for later analysis. | 35153 |
6545.html | 03/27/17 01:43 PM | Inject blip-like glitches to test performance of NOISEMON channels | 35116 |
6544.html | 03/27/17 01:20 PM | We will add a new signal path in h1omc and h1calcs models in order to implement the DCPD cross correlation scheme (whose ECR, E1700107, has been approved). Because this implementation is independent of the main interferometer control signals, we don't expect any impact on locking or sensitivity. DAQ restart is required. | 35156, 35139, 35115 |
6543.html | 03/27/17 09:56 AM | Upgrade LHO CDS login servers cdsssh, cdslogin and opslogin to Debian8. Needed because current OS is near end-of-support. | 35119 |
6542.html | 03/27/17 09:03 AM | Repair underground water leak on feed line to VPW. This will require digging up asphalt and digging down to the PVC line. | |
6541.html | 03/27/17 08:40 AM | Perform scheduled maintenance on scroll compressors at the Y-MID vent/purge-air supply skid. The maintenance activity will require the compressors to run for brief periods of time to check compression. Lock-out/tag-out power to the skid as required. | 35149 |
6540.html | 03/23/17 12:57 PM | Reduce crontab tasks on h1hpipumpctrl[l0,EX] to match those used on the EY unit. Check that local clocks are correct. | 35141 |
6539.html | 03/23/17 11:36 AM | We would like to "borrow" a HWS PCIe card from one of the end stations (whichever one we can get to on Tuesday) and put it on the spared HWS machine at the corner station so we can get HWSY running. | Transfer of PCIe card completed. |
6538.html | 03/23/17 11:03 AM | Remove the INSTAIR alarms for the MX compressor from the cell phone text alarm system. | |
6537.html | 03/23/17 07:04 AM | Swap out the harmonic frequency generator with a spare to see if this is the cause of glitches in the RF 45. | Swap complete, now monitor for glitches. |
Updates to previous Work Permits | |||
6527.html | 03/19/17 09:36 PM | Access to pcal system and camera to take pictures and possibly move the pcal beams | 34980, 35137 |
Aidan, Nutsinee, Dave:
during the HWS image grabber card swap this morning, a duplicate of the ITMX EPICS IOC was running on the h1hwsmsr1 machine. The DAQ EDCU incorrectly connected to the duplicate and recorded bad data between 19:45 and 22:56 UTC. I powered h1hwsmsr1 down and the EDCU reconnected to the correct channels sometime later.
WP 6544, ECR E1700107,
We have installed infrastructure in the front-end models to produce the DCPD cross-correlation spectrum relatively easily.
It seems to do what it is supposed to do. See the first attachment for a demonstration of data acquisition and calibration using DTT.
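As a reminder of why this works (the shorthand below is mine, not notation taken from the ECR): write each calibrated DCPD path as the common differential arm signal plus an independent sensing noise term. Schematically,

% d_A, d_B: the two DCPD-derived DARM estimates; s: the common DARM signal; n_A, n_B: independent sensing noises
d_A = s + n_A, \qquad d_B = s + n_B
% the ordinary DARM spectrum, formed from the average of the two paths, retains the sensing noise:
S_{\rm sum}(f) = S_{ss}(f) + \tfrac{1}{4}\left[ S_{n_A n_A}(f) + S_{n_B n_B}(f) \right]
% while the cross spectrum averages the uncorrelated terms away:
\langle \tilde{d}_A(f)\, \tilde{d}_B^{*}(f) \rangle \;\longrightarrow\; S_{ss}(f)

so subtracting the cross-correlated spectrum from the ordinary one should leave an estimate of the sensing noise, which is the check performed in the verification section below.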
[Additional model changes]
In addition to what we reported yesterday (35115), we implemented two additional minor changes today, both involving the CAL-DELTAL_A/B channels; among other things, they are now acquired by the DAQ at 4096 Hz.
[Front end settings and other settings]
The NULL signal can be turned off by shutting off the output of OMC-READOUT_ERR_NULL. The ISC_LOCK guardian has been modified to update OMC_READOUT_SIMPLE_NULL_GAIN when it updates the counterpart for the SUM signal.
Also, I have made a DTT template in which the frequency-domain calibration (33161) is applied to all the relevant spectra. The template is saved at
/opt/rtcds/userapps/release/isc/h1/scripts/CrossCorrTemplate.xml
Two MEDM screens are newly made for this infrastructure:
- OMC_NULL_READOUT.adl displays the signal conditioning for the NULL signal in the omc front-end model; it can be accessed from the OMC tab in SITEMAP.
- CAL_CS_CUST_CROSSCORR.adl displays the mixing of the SUM and NULL signals in the CALCS model; it can be accessed from the CAL tab in SITEMAP.
A screenshot of each MEDM screen is attached as well. The screens are saved in common medm directories at
/opt/rtcds/userapps/release/omc/common/medm/OMC_NULL_READOUT.adl
/opt/rtcds/userapps/release/cal/common/medm/CAL_CS_CUST_CROSSCORR.adl
[A quick verification: it seems OK]
To check whether the new infrastructure is doing the right thing, I exported the spectra from DTT (the ones shown in the first attachment) and computed the difference between the ordinary DARM spectrum and the cross-correlated spectrum by subtracting one from the other. If things are correct, this leaves only the sensing noise, which should be dominated by shot noise at most frequencies. The result is shown in the last attachment; the difference is reasonably smooth in its spectral shape at most frequencies, as expected.
There are a few points/regions where the difference deviates from shot noise. The peaks/valleys at 35 Hz, 60 Hz, 350 Hz and 500 Hz, and the low-frequency wall below 30 Hz, are suppressed by at least roughly 20 dB relative to the ordinary DARM spectrum, which may be limited by the small number of averages. On the other hand, the peak at 180 Hz was reduced by only 6 dB or so; I am not sure why. Otherwise, it looks reasonably good.
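For anyone who wants to redo this check outside of DTT, here is a minimal sketch of the subtraction described above. This is not the front-end code or the DTT template; the sample rate, the FFT length, and the convention that the ordinary DARM spectrum is the average of the two paths are my assumptions.

import numpy as np
from scipy.signal import welch, csd

fs = 4096           # assumed sample rate of the exported CAL-DELTAL_A/B data
nperseg = 16 * fs   # 16 s segments; choose to match the averaging actually used in DTT

def sensing_noise_estimate(dl_a, dl_b):
    """Compare the ordinary DARM spectrum with the cross-correlated spectrum.

    dl_a, dl_b: calibrated time series from the two DCPD paths.
    Returns frequency, ordinary ASD, cross-correlated ASD, and the quadrature
    difference (an estimate of the uncorrelated sensing noise)."""
    darm = 0.5 * (dl_a + dl_b)                        # ordinary DARM (assumed convention)
    f, p_darm = welch(darm, fs=fs, nperseg=nperseg)
    f, p_cross = csd(dl_a, dl_b, fs=fs, nperseg=nperseg)
    p_cross = np.abs(np.real(p_cross))                # |Re| of the cross spectrum as a simple estimate of the correlated power
    p_uncorr = np.clip(p_darm - p_cross, 0.0, None)   # what should be mostly shot noise
    return f, np.sqrt(p_darm), np.sqrt(p_cross), np.sqrt(p_uncorr)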
TITLE: 03/28 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY:
LOG:
15:06 Intention bit set to "Preventative Maintenance"
15:07 Peter and Jeff B to PSL enclosure
15:08 Fire Dept. on site smoke/heat testing
15:15 Hugh to EX
15:35 Travis to EX for PCAL work
15:50 Fil to EX for Interlock cabling.
15:53 Christina and Karen to out buildings
15:55 Gerardo to mid-stations
16:00 NORCO LN2 Y-ARM
16:13 Jeff B going to End Stations for FAMIS dust monitor checks
16:13 Hugh back from EX?
16:31 TJ doing Charge measurements on ETMY
16:34 More FD personnel on site
16:40 Taking FD to CER
16:43 NORCO LN2 X-ARM
16:45 More fire personnel on site
16:45 Betsy to LVEA for parts. HWS work is postponed.
17:04 Travis back from EX
17:05 Karen leaving EY
17:09 Jeff B back
17:14 Carlos out to EX to retrieve HWS card
17:31 Bubba bringing FD into LVEA
17:44 Dick into LVEA - non invasive
17:44 Carlos reported back from EX
17:53 Fire testing in OSB is done
17:54 Hugh into CER
18:00 Bubba out of LVEA with FD
18:28 DAQ restart
18:38 Paradise Water on site for delivery
19:06 Fil back
19:07 Bailing still going on
21:31 Betsy, Travis and TJ to MX
22:34 Lockloss - cause as yet undetermined, but possibly linked to some BRS work that Hugh was doing. He and Jeff K are hashing it out.
22:41 Switched ISI_CONFIG back to NOBRSY and recovered the ETMY ISI so we could resume locking.
22:49 TJ, Betsy, and Travis back
23:00 Handing off to Nutsinee
In prepping for the high-power PRMI test tomorrow, I noticed that the HWS-ITMX EPICS channels are not being updated in the MEDM screens. The gradient field data is still being written to file.
Looking at the frames, we can see normal data until 19:47 UTC (around 12:47PM PDT).
I ran a caget on h1hwsmsr1 and can see the data just fine in the channels. The same caget on opsws4 returns zero.
Dave Barker and Nutsinee are looking into this.
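For anyone repeating this kind of check, a minimal sketch of the comparison is below. It assumes pyepics is installed on both machines, and the channel name is a placeholder rather than one of the real HWS ITMX channels.

from epics import caget  # pyepics, assumed available on both machines

CHANNEL = "H1:TCS-ITMX_HWS_EXAMPLE_CHANNEL"  # placeholder, not a real channel name

# Run this once on h1hwsmsr1 (the IOC host) and once on an operator workstation
# such as opsws4. A sensible value on the IOC host but 0 (or None on timeout)
# from the workstation points to a stale/duplicate IOC or gateway connection,
# not a problem in the HWS code itself.
print(CHANNEL, "->", caget(CHANNEL, timeout=2.0))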
Jim, Dave:
we noticed a strange gap in the FOM seismic BLRMS plot. It starts at around 11am PDT and seems to end when the DAQ was restarted at 11:28. The subsequent 12:47 PDT DAQ restart shows as a sharp line as expected. To the best of our knowledge, nothing was happening on CDS or GDS at 11am.
19:11 Assessing re-locking - TJ performed LVEA sweep.
21:03 NLN
21:13 Intention Bit - UNDISTURBED
Sudarshan, Shaffer
Today I created, started, and tested a node that Sudarshan made to change some EX PCal lines every 24 hours while we are not locked. The code will wait for the GPS time to advance by more than a day since its last change, then wait until the IFO is not locked, and finally adjust the frequency by 500 Hz and jump back to waiting for another day to go by. Once the frequency has reached 5100 Hz it will hang out in the FINISHED state. At that point, Sudarshan can let us know what he wants to do with it.
Attached are the graph for the node and the Guardian overview
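For reference, here is a rough schematic of the node's logic as described above. This is not the actual guardian code; the numeric ISC_LOCK state channel name, the polling interval, and the use of wall-clock time instead of GPS time are assumptions for illustration (the state-11 threshold comes from the follow-up comment below).

import time
from epics import caget, caput  # pyepics, assumed available

FREQ_CH  = "H1:CAL-PCALX_PCALOSC1_OSC_FREQ"   # PCALX oscillator frequency (quoted in this alog)
LOCK_CH  = "H1:GRD-ISC_LOCK_STATE_N"          # assumed numeric ISC_LOCK state channel
DAY      = 24 * 3600
STEP_HZ  = 500.0
FINAL_HZ = 5100.0

last_change = time.time()
while caget(FREQ_CH) < FINAL_HZ:
    if time.time() - last_change < DAY:       # wait for a day since the last frequency change
        time.sleep(60)
        continue
    if caget(LOCK_CH) >= 11:                  # then wait until the IFO is not locked
        time.sleep(60)
        continue
    caput(FREQ_CH, caget(FREQ_CH) + STEP_HZ)  # step the line frequency by 500 Hz
    last_change = time.time()
# Once 5100 Hz is reached, the real node sits in its FINISHED state.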
Just a few points of verification and clarification, and tagging DetChar to let them (especially the CW group) know this sweeping PCAL X calibration line has been turned on. I attach a full and a zoomed CAL-DELTAL spectrum against a PCALX spectrum (among the other usual things from the front wall's sensitivity FOM).
Note that Pulsar injections are ongoing, and it doesn't look like these features will interfere: this calibration line regime sticks to the 2000 to 5000 Hz region, and pulsar injections only go up to 1991.2 Hz (for a reminder of the full list of pulsar injection frequencies, see LHO aLOG 27642). The starting frequency is 2001.3 Hz, and the excitation amplitude is 30000 [ct], hard-coded to remain at this value for all frequencies.
The excitation frequency can be tracked via EPICS using this channel:
H1:CAL-PCALX_PCALOSC1_OSC_FREQ
and a fast-channel version of the requested output can be seen here:
H1:CAL-PCALX_EXC_SUM_DQ
The guardian code for this node lives at
/opt/rtcds/userapps/release/cal/h1/guardian/HIGH_FREQ_LINES.py
and is attached for ease of reference. After some initial debugging, the guardian's log suggests that TJ left the frequency at 2001.3 Hz starting at 2017-03-28 17:16:31 UTC, though it looks like the frequency has only been stable at 2001.3 Hz since 19:49:00 UTC.
TJ has the CHECK_IFO_STATUS state check whether the ISC_LOCK guardian's state is lower than 11, which means the excitation is killed if we're in
- INITIAL ALIGNMENT
- DOWN
- IDLE
- LOCKLOSS
- LOCKLOSS_DRMI
- INIT
but maybe we want to rethink this, because it seems to turn off the excitation at unnecessary times (see the attached dataviewer trend, where the guardian has changed the frequency twice since TJ left it).
Adjusted pressures and checked the temperatures of the dust monitor vacuum pumps at EX, EY, and the CS. All operating within normal ranges. Closing FAMIS #7511
EY charge is still growing and the angular actuation strength continues to slowly change, but the longitudinal relative actuation strength is still around 3-4%. This is basically the same as last week (look at Jeff's alog, not mine).
Units #1 and #2 are done; grease was applied to the scroll pumps for units #1 and #2, and to all electrical motors.
Compression test results:
- Unit #1 pressure 125 psi
- Unit #2 pressure 120 psi
- Unit #3 pressure 120 psi
- Unit #4 pressure 125 psi
- Unit #5 pressure 125 psi
Scroll pumps #3, #4 and #5 need to be greased.
Unit #2 needs a new one-way valve.
Unit #5 needs a new relief valve, but all other relief valves are OK.
Done under WP 6541.
Following the improvements seen after insulating the legs of the table (34805 and 34869), 1 to 2 inches of insulation has been added to the tabletop held by the legs, which holds the T240. See the photos in 34805 and compare with the photo attached here.
Sad thing though, I should have realized: the decline of the driftmon now puts the camera image in jeopardy of being too close to the edge. I had the enclosure open for a time and the cooldown has pulled the drift signal down to -15660. It could be 24 hours before things warm up enough to bring it back to the pre-incursion level of -14000.
Ed has the BRSY engaged to nominal now but we are watching to make sure it is doing some good for the IFO.
We'll need to make another centering effort within a couple weeks.
As reported in 34677, the wandering bump in the ST2 Corner2 CPS spectra showed up again. First seen at this location in June 2016, it was gone after a site power outage. A couple of weeks ago, tests suggested its origins were inside the vacuum system (34820). I speculated that other in-vacuum sensors might be the perpetrators.
Today, with the ISI offline, I sequentially powered down and powered back up the Corner2 GS13s, the L4Cs, and then the T240. The bump, while steadily wandering around, remained proudly above the background until the T240 was turned off. So far it has not returned.
See the attached plots of the sensor signals for the timeline, and the CPS spectra (note the T0 time).
I will update and close FRS 7576.
I unplugged a few unused extension cords and turned off the CS wifi, but that was all that I touched.
There are many more PEM cables strewn across the floor of the LVEA, at least compared to the last time I was in there. Please be cautious of this tripping hazard.
We're planning to spend up to 8 hours for high power PRMI/DRMI test to narrow down the hot spot location in the HWSX chain.
Wed. Mar. 29 2017, from 8AM Pacific, 10AM Central, 11AM Eastern.
We're coordinating with LLO so they can do their commissioning tasks during this window to minimize the down time for double coincidence.
Most crontab jobs were removed on h1hpipumpctrl last Wednesday, 3/22 ( https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35001 ) and this resolved the daily and weekly 4am glitches on the fluid pressure.
Today during maintenance I removed the last unneeded cronjob on EY, the one that runs monthly. The EY configuration is:
periodicity | jobs which run | change |
hourly | none | no change |
daily | logrotate | removed 8 jobs |
weekly | none | removed 2 jobs |
monthly | none | removed 1 job |
The old cron directories have been renamed /etc/oldcron.[daily, weekly, monthly] (hourly was not changed)
I made this the standard configuration on h1hpipumpctrl[ex, l0] this morning during maintenance.
Both channels on the EX comparator went out of nominal (by a very large margin) at 17:04:40 UTC (10:04:40 PDT) for just one second. It looks like the reference signal went bad at this time, which is coincident with cable pulling activities at EX. Signals recovered after the one-second glitch, with no re-occurrence.
WP6544
Daniel, Kiwamu, Dave:
new model code was installed for h1calcs and h1omc. This added one Dolphin IPC channel (omc sender, calcs receiver) and added two 4kHz DAQ channels from calcs.
Models were restarted, followed by a DAQ restart.
BTW: prior to the DAQ restart it had been running for 28 days 0 hours and 38 minutes.
Evan, Miriam,
While L1 was having a small lock loss, we made a series of injections (with H1 in commissioning mode) in the H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_EXC channel. The injections are single sine-Gaussian pulses that simulate blip glitches (see https://ldas-jobs.ligo-wa.caltech.edu/~miriam.cabero/sine-gaussian.png ).
We started very quietly and slowly increased the amplitude, so that only the last 3 of the 9 injections appear in GDS-CALIB_STRAIN. The GPS times of the injections are:
1174684631
1174684680
1174684715
1174684745
1174684793
1174684854
1174684889 *
1174684945 *
1174685355 *
* Can be seen in CALIB_STRAIN
We will be repeating this kind of injection at opportunistic times (when L1 is not observing) over the next days, using different blip morphologies and different amplitudes.
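For reference, a minimal sketch of a single sine-Gaussian pulse of the kind injected here. The functional form is the standard one; the central frequency, Q, amplitude, and sample rate below are placeholders, not the parameters actually used for these injections.

import numpy as np

def sine_gaussian(t, t0, f0, q, amplitude, phase=0.0):
    """Single sine-Gaussian pulse centered at t0 with central frequency f0
    and quality factor q (tau = q / (2*pi*f0))."""
    tau = q / (2.0 * np.pi * f0)
    envelope = np.exp(-((t - t0) ** 2) / (2.0 * tau ** 2))
    return amplitude * envelope * np.sin(2.0 * np.pi * f0 * (t - t0) + phase)

fs = 16384.0                        # assumed sample rate of the excitation channel
t = np.arange(0.0, 1.0, 1.0 / fs)   # 1 s segment
waveform = sine_gaussian(t, t0=0.5, f0=80.0, q=8.0, amplitude=1.0)  # placeholder parameters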
Only the loudest of these saturated the noisemons, and it did so by hitting an analog limit at plus/minus 22,000 counts. I projected the drive signal into noisemon counts and looked at the last three injections on the list. In the first two, the noisemon signal tracks the drive, and the subtraction of the two is just noise. In the last (loudest) injection, the noisemon hits an analog saturation at both plus and minus 22,000 counts, leaving a huge glitch in the subtracted data.
This is good because it suggests that the only important analog limit in the noisemon is this threshold. I don't have time to document it now, but I've tried the same with a set of loud detchar injections, which go up to hundreds of Hz, and I get the same behavior. So when the drive signal does not push the noisemon beyond 22,000 counts, we can trust the subtraction, and anything we see has entered the signal between the DAC and the noisemon; it's a glitch in the electronics and not a result of the DARM loop.
Attached are the three subtractions (noisemon minus projected drive signal).
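A hedged sketch of the check described above, in case anyone wants to repeat it: the flat gain used for the projection is a placeholder for the real drive-to-noisemon transfer function, and the 22,000-count threshold is the approximate analog limit quoted above.

import numpy as np

NOISEMON_LIMIT = 22000  # counts, approximate analog saturation level

def noisemon_residual(drive, noisemon, gain):
    """Return the subtraction residual and a mask of saturated noisemon samples.

    drive, noisemon: time series in their respective counts; gain: a stand-in
    for the full frequency-domain drive-to-noisemon projection."""
    projected = gain * drive
    residual = noisemon - projected
    saturated = np.abs(noisemon) >= NOISEMON_LIMIT
    return residual, saturated

# If `saturated` is all False, the residual should look like noise and the
# subtraction can be trusted; excess structure in `residual` then points to the
# electronics rather than the DARM loop.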