Additional CBC injections are scheduled:

1164747030 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbhspin_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164751230 H1L1 INJECT_CBC_ACTIVE 1 1.0 Inspiral/{ifo}/bbh_hwinj_snr24_1163501502_{ifo}_filtered.txt
1164755430 H1L1 INJECT_CBC_ACTIVE 1 0.5 Inspiral/{ifo}/imri_hwinj_snr24_1163501530_{ifo}_filtered.txt
1164759630 H1L1 INJECT_CBC_ACTIVE 1 0.5 Inspiral/{ifo}/imri_hwinj_snr24_1163501538_{ifo}_filtered.txt
Continuing the schedule for this roaming line with a move from 2501.3 to 3001.3 Hz.

Frequency (Hz) | Planned Amplitude (ct) | Planned Duration (hh:mm) | Actual Amplitude (ct) | Start Time (UTC) | Stop Time (UTC) | Achieved Duration
1001.3 | 35k | 02:00 | 39322.0 | Nov 28 2016 17:20:44 | Nov 30 2016 17:16:00 | days @ 30 W
1501.3 | 35k | 02:00 | 39322.0 | Nov 30 2016 17:27:00 | Nov 30 2016 19:36:00 | 02:09 @ 30 W
2001.3 | 35k | 02:00 | 39322.0 | Nov 30 2016 19:36:00 | Nov 30 2016 22:07:00 | 02:31 @ 30 W
2501.3 | 35k | 05:00 | 39322.0 | Nov 30 2016 22:08:00 | Dec 02 2016 20:16:00 | days @ 30 W
3001.3 | 35k | 05:00 | 39322.0 | Dec 02 2016 20:17:00 | |
3501.3 | 35k | 05:00 | 39322.0 | | |
4001.3 | 40k | 10:00 | 39322.0 | | |
4301.3 | 40k | 10:00 | 39322.0 | | |
4501.3 | 40k | 10:00 | 39322.0 | | |
4801.3 | 40k | 10:00 | 39222.0 | | |
5001.3 | 40k | 10:00 | 39222.0 | | |

Frequency (Hz) | Planned Amplitude (ct) | Planned Duration (hh:mm) | Actual Amplitude (ct) | Start Time (UTC) | Stop Time (UTC) | Achieved Duration
1001.3 | 35k | 02:00 | 39322.0 | Nov 11 2016 21:37:50 | Nov 12 2016 03:28:21 | ~several hours @ 25 W
1501.3 | 35k | 02:00 | 39322.0 | Oct 24 2016 15:26:57 | Oct 31 2016 15:44:29 | ~week @ 25 W
2001.3 | 35k | 02:00 | 39322.0 | Oct 17 2016 21:22:03 | Oct 24 2016 15:26:57 | several days (at both 50 W and 25 W)
2501.3 | 35k | 05:00 | 39322.0 | Oct 12 2016 03:20:41 | Oct 17 2016 21:22:03 | days @ 50 W
3001.3 | 35k | 05:00 | 39322.0 | Oct 06 2016 18:39:26 | Oct 12 2016 03:20:41 | days @ 50 W
3501.3 | 35k | 05:00 | 39322.0 | Jul 06 2016 18:56:13 | Oct 06 2016 18:39:26 | months @ 50 W
4001.3 | 40k | 10:00 | 39322.0 | Nov 12 2016 03:28:21 | Nov 16 2016 22:17:29 | days @ 30 W (see LHO aLOG 31546 for caveats)
4301.3 | 40k | 10:00 | 39322.0 | Nov 16 2016 22:17:29 | Nov 18 2016 17:08:49 | days @ 30 W
4501.3 | 40k | 10:00 | 39322.0 | Nov 18 2016 17:08:49 | Nov 20 2016 16:54:32 | days @ 30 W (see LHO aLOG 31610 for caveats)
4801.3 | 40k | 10:00 | 39222.0 | Nov 20 2016 16:54:32 | Nov 22 2016 23:56:06 | days @ 30 W
5001.3 | 40k | 10:00 | 39222.0 | Nov 22 2016 23:56:06 | Nov 28 2016 17:20:44 | days @ 30 W (line was OFF and ON for Hardware INJ)
Ops has been seeing high dust count alarms in the PSL. Trended the PSL and LVEA dust monitors for 15 and 90 days. The trends show constant major-alarm dust levels in the PSL enclosure (monitor PSL101) for 0.3 µm particles, plus several 0.5 µm alarm-level events. The same trend holds for the PSL anteroom (PSL102), although with fewer alarm events. The PSL enclosure and the anteroom have not been cleaned for quite some time. Will arrange for a cleaning of the PSL during the next maintenance window. If this does not lower the PSL dust counts, will need to start looking into the air filtration system feeding the PSL enclosure.
This was a quick EQ during the OWL shift at 6:16 am PST (14:16 UTC). H1 rode through it.
TITLE: 12/02 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 73.9893Mpc
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
Wind: 2mph Gusts, 1mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY:
As reported in logs 32076 and 32079, the HAM3 ISI Corner 2 CPSs started glitching around 8 am yesterday. Looking at it this morning: they glitched several times within a few hours of starting yesterday, but have not glitched again since we power cycled the satellite rack and reseated the probe gauge boards a few times, so I'd say the issue has been suppressed for now.
Given that saturations are automatically removed after 1 hour, just looking at the HAM3 watchdog periodically is not sufficient to know if saturations have occurred. If one wants to know if the platform has seen sensor saturations, trend:
H1:ISI-HAM3_WD_CPS_SAT_COUNT and
H1:ISI-HAM3_WD_CPS_SAT_SINCE_RESTART
The former channel is reset upon a WD trip; the latter accumulates the saturation count since the last restart.
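For anyone who wants to script this check rather than trend by hand, here is a minimal sketch using gwpy over NDS2 (my choice of tool, not an existing site script); the one-day lookback and the quick-look plot are purely illustrative:

from gwpy.time import tconvert
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:ISI-HAM3_WD_CPS_SAT_COUNT',
    'H1:ISI-HAM3_WD_CPS_SAT_SINCE_RESTART',
]

end = tconvert('now')       # current GPS time
start = end - 24 * 3600     # look back one day

# fetch both saturation counters over the lookback window
data = TimeSeriesDict.get(channels, start, end)

print('saturations since restart:', data[channels[1]].value[-1])
plot = data.plot()          # quick look at both counters vs. time
plot.show()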
TITLE: 12/02 Owl Shift: 08:00-16:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 75.0267Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Quiet shift. Locked all night. TCS hasn't been a problem, and the BRS issue also didn't recur during the shift.
TITLE: 12/02 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 73.5385Mpc
INCOMING OPERATOR: Nutsinee
SHIFT SUMMARY: Mostly a quiet shift, but there were a couple SDF issues
LOG:
Corey had just relocked after an earthquake when I arrived. Shortly after going to OBSERVE, the TCS guardians knocked us out, as Kiwamu logged. Then it was quiet until just a couple minutes ago, when DIAG_SDF kicked us out of OBSERVE. Looking at the log I find:
2016-12-02T07:51:08.18971 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 1
2016-12-02T07:51:10.70676 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 3
2016-12-02T07:51:17.18839 DIAG_SDF [RUN_TESTS.run] USERMSG 0: DIFFS: ngn: 1
I can't find an (the? I didn't know we had one) NGN guardian, but I know where the CS_BRS guardian lives. When I looked at that guardian, it had just started a recenter cycle at the same time:
2016-12-02T07:51:07.99681 CS_BRS JUMP target: TURN_OFF_DAMPING
2016-12-02T07:51:07.99694 CS_BRS [RUN.exit]
2016-12-02T07:51:08.05901 CS_BRS JUMP: RUN->TURN_OFF_DAMPING
2016-12-02T07:51:08.05920 CS_BRS calculating path: TURN_OFF_DAMPING->RUN
2016-12-02T07:51:08.05959 CS_BRS new target: TURN_OFF_PZT_CTRL
...
Did the CS_BRS guardian throw an SDF difference in NGN that dropped us out of OBSERVE?
That's exactly what happened. I went and un-monitored all of the CBRS channels in SDF so this can't happen again.
The rest of the NGN channels are being monitored, but I'm not sure if they should be since they are not tied into the IFO at all. I'll talk to the right people and find out.
Oh, yeah, I'm glad that you not-mon'ed the cBRS channels. Anything in the NGN Newtonian noise model is totally independent of the IFO, and shouldn't be stuff that'll knock us out of observing.
Probably the cBRS and its need for occasional damping is the only thing that will change some settings and knock us out of Observe, so maybe we can leave things as-is for now. The rest of the NGN channels are just seismometers, whose output doesn't go anywhere in the front ends (we collect the data offline and look at it). Since all of those calibrations are in, and should be fine, I don't anticipate needing to change any other settings in the NGN EPICS channels.
Sheila and Jenne informed me that the TCS ITMY CO2 guardian suddenly changed its state this evening, kicking us out of the observing mode.
I trended some relevant data and checked the guardian logs. It seems that since the upgrade of the guardian machine this morning (32072), the TCS CO2 guardians for both the X and Y arms have unlocked multiple times for some reason. The first attachment is the trend of some relevant channels for the past 12 hours. As shown in the upper panels, the guardians changed their states multiple times in the past hours because the lasers unlocked. Reading the guardian logs (attached) and code, these events were identified as being due to the PZT output signals exceeding thresholds that are set in the guardian code. This had not happened in the previous four or five days until this morning. The number of incidents seems to decrease as a function of time. I am hoping that the CO2 lasers will eventually settle to some kind of stable point where the guardians no longer need to change state.
For operators,
If this happens again (which should be obvious, as it flips the intent bit back to commissioning), please follow the procedure below.
Additionally, the attachment is a magnified version of the trend focusing on the last incident. Initially the laser power dropped monotonically, in sync with the PZT output signal, which was monotonically increasing. Then the PZT signal crossed the upper threshold of 70 counts, which was detected by the guardian. Subsequently the guardian switched its state to something else. Finally, this change in guardian state was detected by the IFO guardian, which lowered the intent bit to commissioning.
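For reference, the threshold check described above amounts to something like the following guardian-style sketch. This is not the installed TCS code: the state and channel names and the handling after the threshold crossing are assumptions based on this entry; only the 70-count limit is taken from the trend.

from guardian import GuardState

PZT_UPPER_LIMIT = 70   # counts, the upper threshold seen in the trend above

class LASER_UP(GuardState):
    """Monitor the CO2 laser while it is nominally running (hypothetical state)."""
    def run(self):
        # `ezca` is provided by the guardian runtime; the channel name is hypothetical
        pzt = ezca['TCS-ITMY_CO2_PZT_OUT']
        if abs(pzt) > PZT_UPPER_LIMIT:
            # returning a state name makes guardian jump there; that state change
            # is what the IFO guardian notices before dropping the intent bit
            return 'RELOCK_LASER'
        return True   # nothing to do; guardian keeps polling run()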
TJ investigated why the LOCKLOSS_SHUTTER_CHECK guardian was sometimes mistakenly identifying locklosses when there had not been any lockloss while ISC_LOCK was in the DRMI_ON_POP state.
As a reminder, the only purpose of LOCKLOSS_SHUTTER_CHECK is to check that the shutter triggers after locklosses in which we had more than 25kW circulating power in the arms. The lockloss checking for this guardian is independent of any other guardian.
The problem TJ found was a consequence of the work described in 31980. Since that work, when we switch the whitening gain on the transmon QPDs there is a large spike in the arm transmission channels, which the LOCKLOSS_SHUTTER_CHECK guardian recognizes as a lockloss (TJ will attach a plot).
We edited the ISC_LOCK guardian to hold the output of the TR_A,B_NSUM filters before making the switch, and to turn the hold off after the switch is done. We loaded this when we were knocked out of observing by TCS. This is a simple change, but if operators have any trouble with DRMI_ON_POP tonight you can call TJ or me.
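The change amounts to wrapping the whitening-gain switch with a hold on the transmission filter banks. A rough guardian/ezca-style sketch of the idea (not the actual ISC_LOCK diff; the filter-bank names are my guesses for the TR_A,B_NSUM banks, and `ezca` is provided by the guardian runtime):

import time

TR_BANKS = ['ASC-X_TR_A_NSUM', 'ASC-X_TR_B_NSUM',
            'ASC-Y_TR_A_NSUM', 'ASC-Y_TR_B_NSUM']   # assumed filter-bank names

# freeze the normalized transmission outputs at their current values
for bank in TR_BANKS:
    ezca.switch(bank, 'HOLD', 'ON')

# <-- here ISC_LOCK performs the transmon QPD whitening-gain switch -->

time.sleep(2)   # let the spike from the gain change die away

# release the holds so the loops see live signals again
for bank in TR_BANKS:
    ezca.switch(bank, 'HOLD', 'OFF')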
Here are some plots of the TR_A,B_NSUM channels and the numeric states for ISC_LOCK. The LOCKLOSS_SHUTTER_CHECK node would think that the power in the arms was above its 25 kW threshold and would then move to its High Power state. That state checks whether the arm power drops below its low threshold; thinking a lockloss had occurred, it would then jump to the Check Shutter state, which takes the last 10 s of data and tests for a kick in the HAM6 GS13s. This test would fail since there was no lockloss. We were not even at high power at the time.
TITLE: 12/01 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Jim
SHIFT SUMMARY:
Much of Morning - Main Issues:
LOG:
Locking Notes:
After hand-off this morning, held at VIOLIN_MODE_DAMPING. Kiwamu came in to take a look at H1 & wanted to note a few items he checked.
A note on the OMC whitening:
The 4.7kHz mode was super rung up, and this was causing the saturations, and a giant comb of upconversion around the line. I turned off the stage of whitening so that we would have a hope of damping anything, which is nearly impossible to do while saturations are happening everywhere. Anyhow, hopefully this won't be a problem anymore since we have found filters that work well for this mode, but any operator can use this trick to save a lock, if a mode is super rung up and needs serious damping.
To remove a stage of whitening, I opened the "all" screen of the OMC_LOCK guardian, and selected RemoveWhiteningStage. Once it starts that state, you can re-select ReadyForHandoff (the nominal state) and it'll return there when it is done. You should see 2 SDF diffs in the OMC, which ensures that you aren't going to Observe with this weird state - it's just for use while damping seriously bad modes.
Young-min and I looked into the 22:08 lockloss that is still unexplained, and attempted to use the BLRMS tool.
The first suspensions to saturate are the ETMY ESD channels, which saturate at almost exactly the lockloss time. There isn't much in the ASC until after the lockloss, and other than DARM the other LSC loops don't seem to be having trouble.
The first thing that we see happening is a fast glitch in the DCPDs. We don't see anything in the CARM signals, OMC PZTs, or ISS, but there is a similar glitch in AS_C_SUM, AS_A, and AS_B.
It is hard to imagine optics moving fast enough to cause this lockloss, but I am not sure what would have caused it.
Corey, Sheila, Jim W
TerraMon and LLO warned Corey that this EQ was coming, with a predicted R-wave velocity of 4.8 µm/s (it showed up in our EQ-band BLRMS, peaking at about 1 µm/s RMS at about the time predicted). Our useism BLRMS is around 0.3-0.4 µm/s right now.
Since Corey had a warning he consulted with Jim W who suggested trying BLEND_QUIET_250_SC_EQ for both end station ISIs (one at a time). The attached screenshot shows the transition from BLEND_QUIET_250_SC_EQ back to our normal windy configuration BLEND_QUIET_250_SC_BRS, which is much quieter at 50-70 mHz.
Jim explains that this sensor correction has a notch at around 50 mHz (he will attach a plot), and that this worked OK during the summer when the microseism was very low. However, it will reduce the amount of isolation that we get at the microseism, which was fine when Jim first tested it in the summer months.
If an EQ moves the whole site in common, we can lock all the chambers to the ground at EQ frequencies to reduce the common motion. Our problem this time was probably that we switched only the end stations without changing the corner.
For now, the recommended operator action during earthquakes is:
If the IFO is locked, don't do anything. We want to collect some data about what size EQ we can ride out with our normal WINDY configuration.
If the IFO unlocks, and the earthquake is going to be large enough to trip ISIs (several µm/s), switch the ISI configuration node to LARGE_EQ_NOBRSXY. This just prevents the ISIs from tripping.
Once the BLRMS are back to around 1 µm/s you can set SEI_CONF back to WINDY and ask ISC_LOCK to try LOCKING_ARMS_GREEN. If the arms stay locked for a minute or so, you can try relocking the IFO.
I took a quick look at Seismon performance on the MIT test setup. The internal notice was written a few hundred seconds after the earthquake.

Internal:
File: /Seismon/Seismon/eventfiles/private/pt16336050-1164667247.xml
EQ GPS: 1164667247.0
Written GPS: 1164667525.0
H1 (P): 1164667949.1
L1 (P): 1164667773.9

We beat the P-wave arrival by about 200 s at LLO and 400 s at LHO. Arrivals below:

-bash-4.2$ /usr/bin/python seismon_info -p /Seismon/Seismon/seismon/input/seismon_params_earthquakesInfo.txt -s 1164667243 -e 1164670843 --eventfilesType private --doEarthquakes --doEPICs /Seismon/Seismon/all/earthquakes_info/1164667243-1164670843

1164667246.0 6.3 1164667949.1 1164667963.2 1164671462.6 1164669655.5 1164668932.7 4.52228e-06 1164667900 1164671500 -15.3 -70.5 8.433279e+06 H1
1164667246.0 6.3 1164667773.9 1164667787.6 1164670002.2 1164668821.0 1164668348.5 1.12682e-05 1164667700 1164670100 -15.3 -70.5 5.512348e+06 L1
1164667246.0 6.3 1164668050.7 1164668064.9 1164672594.3 1164670302.2 1164669385.3 6.73904e-06 1164668000 1164672600 -15.3 -70.5 1.069658e+07 G1
1164667246.0 6.3 1164668041.4 1164668055.5 1164672479.8 1164670236.7 1164669339.5 3.22116e-06 1164668000 1164672500 -15.3 -70.5 1.046759e+07 V1
1164667246.0 6.3 1164667831.5 1164667845.3 1164670438.5 1164669070.3 1164668523.0 6.99946e-06 1164667800 1164670500 -15.3 -70.5 6.385045e+06 MIT
1164667243.2 6.3 1164667948.9 1164667953.5 1164671451.9 1164669648.2 1164668926.7 4.74116e-06 1164667900 1164671500 -15.3 -70.8 8.417411e+06 H1
1164667243.2 6.3 1164667773.6 1164667778.0 1164669993.8 1164668815.0 1164668343.5 1.15920e-05 1164667700 1164670000 -15.3 -70.8 5.501199e+06 L1
1164667243.2 6.3 1164668052.2 1164668056.8 1164672601.7 1164670305.2 1164669386.6 7.10833e-06 1164668000 1164672700 -15.3 -70.8 1.071690e+07 G1
1164667243.2 6.3 1164668043.0 1164668047.6 1164672488.9 1164670240.7 1164669341.5 3.35518e-06 1164668000 1164672500 -15.3 -70.8 1.049125e+07 V1
1164667243.2 6.3 1164667832.1 1164667836.6 1164670436.3 1164669067.8 1164668520.5 7.31460e-06 1164667800 1164670500 -15.3 -70.8 6.386137e+06 MIT
1164667247.0 6.2 1164667941.5 1164667978.2 1164671455.2 1164669651.7 1164668930.3 2.75907e-06 1164667900 1164671500 -15.4 -71.0 8.416356e+06 H1
1164667247.0 6.2 1164667767.1 1164667802.5 1164669998.4 1164668819.2 1164668347.6 7.79549e-06 1164667700 1164670000 -15.4 -71.0 5.502860e+06 L1
1164667247.0 6.2 1164668045.1 1164668082.4 1164672612.9 1164670313.2 1164669393.4 3.86408e-06 1164668000 1164672700 -15.4 -71.0 1.073178e+07 G1
1164667247.0 6.2 1164668035.9 1164668073.2 1164672500.5 1164670249.0 1164669348.4 1.94756e-06 1164668000 1164672600 -15.4 -71.0 1.050694e+07 V1
1164667247.0 6.2 1164667825.7 1164667861.5 1164670443.9 1164669073.8 1164668525.8 4.24775e-06 1164667800 1164670500 -15.4 -71.0 6.393872e+06 MIT
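As a sanity check on the quoted lead times, the arithmetic is just the predicted P-wave arrival minus the time the internal notice was written (values copied from above):

written = 1164667525.0                                  # notice written (GPS)
p_arrival = {'H1': 1164667949.1, 'L1': 1164667773.9}    # predicted P-wave arrivals (GPS)

for ifo, t in p_arrival.items():
    print('%s: notice preceded the P-wave estimate by %.0f s' % (ifo, t - written))
# H1: ~424 s, L1: ~249 s -- consistent with "about 200 s at LLO and 400 s at LHO"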
I have looked at all the A2L data that we have since the last time the alignment was significantly changed, which was Monday afternoon after the PSL PZT work (alog 31951). This is the first attached plot.
The first data point is a bit different than the rest, although I'm not totally sure why. Other than that, we're mostly holding our spot positions quite constant. The 3rd-to-last point, taken in the middle of the overnight lock stretch (alog 32004) shows a bit of a spot difference on ETMX, particularly in yaw, but other than that we're pretty solid.
For the next ~week, I'd like operators to run the test mass a2l script (a2l_min_lho.py) about once per day, so that we can track the spot positions a bit. After that, we'll move to our observing-run standard of running a2l once a week as part of Tuesday maintenance.
The second attached plot is just the last 2 points, from the current lock. The first point was taken immediately upon locking; the second was taken about 30 min into the lock. The maximum spot movement in the figure appears to be about 0.2 mm, but I think that is within the error of the A2L measurement. I can't find it right now, but once upon a time I ran A2L 5 or 7 times in a row to see how consistent the answer is, and I think I remember the stdev was about 0.3 mm.
The point of the second plot is that at 30W, it doesn't seem to make a big difference if we run a2l immediately or a little later, so we can run it for our once-a-days as soon as we lock, or when we're otherwise out of Observe, and don't have to hold off on going to Observe just for A2L.
In case you don't have it memorized, here's the location of the A2L script:
A2L: How to know if it's good or bad at the moment.
Here is a dtt template to passively measure a2l quality: /opt/rtcds/userapps/release/isc/common/scripts/decoup/DARM_a2l_passive.xml
It measures the coherence between DARM and ASC drive to all test masses using 404 seconds worth of data.
All references started 25 seconds or so after the last a2l was finished and 9 or 10 seconds before the intent bit was set (GPS 116467290).
"Now" is actually about 15:00 UTC (7 AM PT), and you can see that the coherence at around 20 Hz (where the ASC feedback to the test masses starts to be dominated by sensing noise) is significantly worse, and DARM itself was also worse, so you can say that the a2l was worse AT THIS PARTICULAR POINT IN TIME.
Thing is, this might slowly drift around and get better or worse. You can run this template for many points in time (for example, each hour), and if the coherence seems to be consistently worse than right after a2l, you know that we need a2l. (A better approach is to write a script to plot the coherence as a time series, which is a good project for fellows - a rough sketch follows below.)
If it is repeatedly observed over multiple lock stretches (without running a2l) that the coherence starts small at the beginning of lock and becomes larger an hour or two into the lock, that's the sign that we need to run a2l an hour or two after the lock.
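As a starting point for that project, here is a minimal sketch (assuming gwpy/NDS2 data access; the ASC drive channel and GPS start time are placeholders, not a vetted choice) that computes the DARM/drive coherence around 20 Hz for a series of hourly chunks:

import numpy as np
from gwpy.timeseries import TimeSeriesDict

darm  = 'H1:CAL-DELTAL_EXTERNAL_DQ'
drive = 'H1:ASC-DHARD_P_OUT_DQ'   # placeholder ASC drive channel

start   = 1164670000              # placeholder GPS start; use the lock stretch of interest
stride  = 3600                    # one point per hour
npoints = 12

for i in range(npoints):
    t0 = start + i * stride
    data = TimeSeriesDict.get([darm, drive], t0, t0 + 404)    # 404 s, as in the template
    coh  = data[darm].coherence(data[drive], fftlength=101, overlap=50.5)
    band = coh.crop(15, 25)                                   # the ~20 Hz region of interest
    print(t0, float(np.mean(band.value)))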
Could someone on site check the coherence of DARM around 1080 Hz with the usual jitter witnesses? We're not able to do it offsite because the best witness channels are stored with a Nyquist frequency of 1024 Hz. What we need is the coherence from 1000 to 1200 Hz with things like the IMC WFS (especially the sum, I think). The DBB would be nice if available, but I think it's usually shuttered. There's indirect evidence from hVeto that this is jitter, so if there is a good witness channel we'll want to increase its sampling rate in case we get an SN or BNS that has power in this band.
@Andy I'll have a look at IOP channels.
Evan G., Keita K.
Upon request, I'm attaching several coherence plots for the 1000-1200 Hz band between H1:CAL-DELTAL_EXTERNAL_DQ and many IMC WFS IOP channels (IOP-ASC0_MADC0_TP_CH[0-12]), ISS intensity noise witness channels (PSL-ISS_PD[A,B]_REL_OUT_DQ), PSL QPD channels (PSL-ISS_QPD_D[X,Y]_OUT_DQ), ILS and PMC HV mon channels, and ISS second-loop QPD channels. Unfortunately, there is low coherence between all of these channels and DELTAL_EXTERNAL, so we don't have any good leads here.
Betsy, Keita, Daniel
As part of the LVEA sweep prior to the start of O2, we spent over an hour this morning cleaning up misc cables and test equipment in the LVEA and electronics room. There were quite a few cables dangling from various racks; here's the full list of what we cleaned up and where:
Location | Rack | Slot | Description
Electronics Room | ISC C2 | - | Found unused servo controller/cables/mixer on top of rack. Only power was connected, but lots of dangling cables. Removed entire unit and cables.
Electronics Room | ISC C3 | 19 | D1000124 - Port #7 had a dangling cable - removed and terminated.
Electronics Room | ISC C4 | Top | Found dangling cable from "ALS COM VCO" Port 2 of 6. Removed and terminated.
Electronics Room | Rack next to PSL rack | - | Dangling fiber cable. Left it...
LVEA near PSL | ISC R4 | 18 | ADC card port stickered "AO IN 2" - dangling BNC removed.
LVEA near PSL | ISC R4 | 18 to PSL P1 | BNC-to-LEMO with resistor blue box connecting "AO2" on R4 to "TF IN" on the P1 PMC locking servo card - removed.
LVEA near PSL | ISC R4 | 20 | T'd dangling BNC on back of chassis - removed T and unused BNC.
LVEA near PSL | - | - | Disconnected unused o-scope, analyzer, and extension cords near these racks.
LVEA | Under HAM1 south | - | Disconnected extension cord running to powered-off Beckhoff rotation stage termination box thingy. Richard said the unit is to be removed someday altogether.
LVEA | Under HAM4 NE cable tray | - | Turned off (via power cord) the TV monitor that was on.
LVEA | HAM6 NE corner | - | Kiwamu powered off and removed power cables from OSA equipment near the HAM6 ISCT table.
LVEA | - | - | Unplugged/removed other various unused power strips and extension cords.
I also threw the main breaker to the OFF position on both of the free standing unused transformer units in the LVEA - one I completely unplugged because I thought I could still hear it humming.
No monitors or computers appear to be on except the 2 VE Beckhoff ones that must remain on (in their stand-alone racks on the floor).
We'll ask the early morning crew to sweep for Phones, Access readers, lights, and WIFI first thing in the morning.
Final walk thru of LVEA was done this morning. The following items were unplugged or powered off:
Phones
1. Next to PSL Rack
2. Next to HAM 6
3. In CER
Card Readers
1. High Bay entry
2. Main entry
Wifi
1. Unplugged network cable from patch panel in FAC Rack
Added this to Ops Sticky Notes page.
Kyle R., Gerardo M., Richard M. (completes WPs #6332 and #6360)

Initial pressure indication shortly after being energized was 3 x 10^-3 Torr (PT180 is a "wide-range" gauge). If real, this would be higher than expected for the ~17 hrs. of accumulation -> "burped" the accumulated gas into the connected pump setup while Gerardo monitored its Pirani gauge -> it gave no indication of a change and remained steady at 1.9 x 10^-3 Torr, which is as expected. Neither gauge is calibrated in this pressure region of interest. Noted PT180 responded as expected to being combined with the local turbo -> isolated temporary local pump setup, valved-in (exposed/combined) PT180 to the site vacuum volume, vented the locally mounted turbo and removed it from the PT180 hardware -> installed 1 1/2" O-ring valve and 2.75" CF-to-NW40 adapter in place of the turbo and pumped to rough vacuum the space between the two 1 1/2" pump port valves.
Jenne, Sheila, Keita
We had another instance of a jump in POP90, in which both the I and Q phases increased. We think this is a problem with the readback, similar to what is described in 31181.
We were acquiring lock, and had no ASC running on the SRC. We looked at witness sensors for all the interferometer optics, and it looks like none of them moved at the time. We also don't see changes in other RF sensors, like AS36, AS90, or POP18. We looked at both quadratures of POP90 before rotation and it seems to have both a phase shift and a 3.5 dB increase in the sum of I+Q. The RFmon and LO mon on the demod don't have any jumps nearly that large, so if it is a problem in the demod it is probably downstream of the directional coupler for the RFmon.
This seems not to be the same as the jumps in SRC alignment that started after last Tuesday's maintenance (31804, 31865, and other alogs), but since the symptom is very similar, it would make debugging the other problem easier if this issue could be fixed. Since we use POP90 for a dither lock of the SRM angle during lock acquisition, this can cause a lockloss if it happens while we are trying to lock.
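For context on how a "phase shift plus ~3.5 dB" statement comes out of the two demodulated outputs, here is a small worked example; the I/Q values are made up for illustration, and only the standard magnitude/phase conversion is assumed:

import numpy as np

def mag_phase(i, q):
    """RF magnitude and phase (deg) from demodulated I/Q."""
    return np.hypot(i, q), np.degrees(np.arctan2(q, i))

# made-up before/after quadrature readings, purely for illustration
m_before, p_before = mag_phase(10.0, 4.0)
m_after,  p_after  = mag_phase(13.0, 9.5)

print('magnitude change: %.1f dB' % (20 * np.log10(m_after / m_before)))
print('phase change:     %.1f deg' % (p_after - p_before))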
I tested the chassis that was pulled out (S1000977). During the testing I did not see any level changes or glitches in either the I or Q channel outputs, except when a pair of cables attached to the front panel via a BNC tee were strongly wiggled. Removal of the tee and wiggling the cables directly didn't induce any changes.

Attached is an oscilloscope trace of the I & Q monitor outputs for the POP90 channel. It is fuzzy because of an RF amplitude modulation I was applying; however, the distortion discontinuities are present with the modulation off. Daniel pointed out to me that the distortion is due to my not looking at the signal differentially on the oscilloscope. Sure enough, it looks a lot cleaner when processed differentially. I did however notice that if the RF input is more than -11 dBm, the monitor signals on the rear panel are saturated/distorted. The only other output level change observed was when the chassis was turned off in the evening and back on again the following morning.

The chassis (strictly) failed the following tests:
- front panel monitor coupling factors (all channels at 100 MHz, failed by less than 1 dB)
- IF beat note amplitude versus frequency (all channels, I & Q, at 100 kHz, failed by as little as 50 mV and as much as 360 mV)
- IF output noise level (channel 3, I & Q, failed by as little as 3 dB and as much as 4 dB). Channel 3 is labelled as REFLAIR_27.

By any chance, when the chassis was in the rack, was there more than one cable attached to the (one) front panel BNC connector?