The pressure runs pretty darn solidly at 70 psi with the servo off. I don't think it would trend off more than a few tenths of a psi overnight. I'll come out over the weekend and adjust if needed.
Some coherence with OAF-CAL_MICH goes away with the servo off; very little coherence is seen in PRCL or SRCL.
DRMI was locked for 9 hours until ~1527 UTC 30 Jan. POPAIR_B_RF18_I_MON was at 300 when lock was lost.
Measurement started at 0700 UTC for HEPI Pump Servo On data; the measurement here at 220015 was during a DRMI stretch for Pump Servo Off data. There was a lot of activity during this time, so a quieter stretch with the servo off would be good to have.
In the attachment you can see plenty of coherence between 10 and 100 mHz that pretty much goes away when the servo is turned off.
Also attached are the amplitudes of the MICH signal, the pump pressure, and the STS floor seismometer. Input motion looks pretty similar, but there have been activities in the LVEA today. The weird nature of the MICH signal at higher frequencies may indicate I'm too close to the lock loss or something. I will repeat this when the DRMI is quiet.
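For what it's worth, here is a minimal sketch of the kind of coherence estimate behind these plots. The sample rate, channel data, and averaging length are placeholders, not what was actually used for the attachment:
import numpy as np
from scipy.signal import coherence
# Placeholder data: in practice these would be the pump-pressure channel and
# OAF-CAL_MICH time series fetched for the same stretch.
fs = 16.0                                      # sample rate in Hz (assumed)
pressure = np.random.randn(int(fs * 3600))     # stand-in for pump pressure
mich = np.random.randn(int(fs * 3600))         # stand-in for OAF-CAL_MICH
# Long segments are needed to resolve the 10-100 mHz band (1000 s -> 1 mHz bins).
f, coh = coherence(pressure, mich, fs=fs, nperseg=int(fs * 1000))
band = (f > 0.01) & (f < 0.1)
print("mean coherence, 10-100 mHz:", coh[band].mean())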
(Elli, Richard, Dave, Rich, etc.)
Last Tuesday around 1pm the readback channels of the TCSX CO2 laser system went crazy. We tracked this to the AI/AA chassis, which were not powered. It is not clear why, but maybe the power connector fell off when someone worked in the PEM rack. When the digital system goes down, the chiller doesn't get a temperature setpoint. It then turns itself off with a fault, and in turn the laser shuts off. With the power restored, it was possible to turn the chiller on and then the laser. Both actions need to be done on the floor. The laser is now back in operation with the requested heating set to 0 W. Once we leave, we can set it back to 0.3 W. The mask is set to central heating, but the monitor for the annular mask is flaky.
In conclusion: the TCSX CO2 laser had probably been off since last Tuesday at 1pm.
The TCS Y arm chiller was off when I investigated the problem. There were no fault codes on restart and the reservoir level was well above the trip level. The X arm chiller was still running. However, the laser controller was showing a flow fault for both the X and Y arm. Toggling the key and re-enabling the gate restored the controller to its operating state.
Rana, Alexa, Evan
We wanted to increase the phase margin in the RF DARM loop so that we have room to increase the UGF. In the LSC DARM filter module we have added filter FM9 (zpk([200;200],[400+i*692.82;400-i*692.82],1,"n")), which cancels the two poles at 200 Hz present in our FM8 LLO filter; it also adds a roll-off at 800 Hz. See DARMFilter_changes.pdf to compare the TF with FM8 only vs. FM8 + FM9. The UGF of the RF DARM loop is 30 Hz with a phase margin of ~32 deg (compared to the old ~20 deg margin). I have attached a plot with the modeled and measured RF DARM OLTFs, and one that compares this with the old loop where FM9 is turned off. A quick sketch of the FM9 shape is included after the filter configurations below.
Configuration for RF DARM: FM2(z2:p0), FM3 (resG), FM4(z4^2:p1^2), FM8(LL0), FM9(Lead200)
Note, the configuration for ALS DIFF: FM2(z2:p0), FM3 (resG), FM7(SB60), FM8(LL0), FM10(RL33)
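For reference, a minimal scipy sketch (not Foton itself) of the FM9 lead shape and the phase it contributes near the 30 Hz UGF. It assumes the zpk frequencies above are in Hz, so the complex pole pair sits at |400 + 692.82i| = 800 Hz; the overall gain is chosen here for unity DC gain, which may differ from Foton's "n" normalization:
import numpy as np
import scipy.signal as sig
zeros_hz = [200.0, 200.0]                     # two real zeros at 200 Hz
poles_hz = [400.0 + 692.82j, 400.0 - 692.82j] # complex pole pair at 800 Hz
z = -2 * np.pi * np.array(zeros_hz)
p = -2 * np.pi * np.array(poles_hz)
k = np.abs(np.prod(p) / np.prod(z))           # normalize for unity gain at DC
f = np.logspace(0, 3, 1000)                   # 1 Hz to 1 kHz
w, h = sig.freqs_zpk(z, p, k, worN=2 * np.pi * f)
lead_at_ugf = np.interp(30.0, f, np.angle(h, deg=True))
print("phase lead at 30 Hz: %.1f deg" % lead_at_ugf)   # roughly +15 deg in this approximation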
I added 250 ml of water to the chiller.
06:30 Cris and Karen into the LVEA
08:00 DRMI locked when I came on shift
08:01 Jim taking measurements while DRMI locked
08:02 Lock broke. Sigg flashed the mode cleaner
08:02 I was able to relock DRMI by re-requesting 'DRMI_1F_LOCKED_ASC'
08:38 Jeff B into Optics lab
08:39 Janeen and Gary moving stuff from VPW into LVEA.
08:47 Manny will be going to collect 3IFO stuff at Ends and in Corner.
10:00 Commissioning starts
SEI - Data mining; HEPI pump investigation ongoing; performance measurements with DRMI locked.
SUS - Jamie will be coming to update Guardian infrastructure and work on SDF; TJ adding HAMAUX and TT to Guardian Overview; Thomas working on Drift Monitor
ISC/Commish - 10:00AM commissioning schedule; moving forward in a positive direction.
3IFO - BSC5 testing done; still has CPS issue; will be moving from staging building into LVEA (TBA); 1 of 4 desiccant cabinets has arrived and is going into the high bay.
Daniel, Filiberto, Dave:
The h1tcscs ADC signals were discovered today to have changed around 1pm Tuesday 27 Jan PST. We found that the DC power strip feeding the non-PEM AA chassis and the 16-bit DAC AI chassis for h1oaf0 was powered down. We reconnected the DC power strip's cable and these systems are now operational.
K. Venkateswara
BRS was turned off just before the vent at EX (15711). I restarted the code in the BRS laptop and noticed that the DC position of the balance had changed owing to the 2 deg C temperature change in the XVEA. The balance is just out of the nominal range but still working correctly, as far as I can tell. If it proves to be noisier, I may adjust the set point in hardware on Tuesday, since it will take a couple of hours to get right. For now, I adjusted the DC offset in the code appropriately and it all seems to be working as usual (see image). The wind-speed is barely a few mph.
The sensor correction is now using the tilt-subtracted ground super-sensor. This should make no difference to the platform, at the moment, as the sensor correction is using the "0.43-Hz only" filter.
Rana and Travis updated to a newer version of the drift monitor and added it to the sitemap at LHO, and I made some further changes. I have fixed bugs in the MEDM screen (/opt/rtcds/userapps/release/sus/common/medm/SUS_DRIFT_MONITOR.adl), and now the buttons for updating individual suspensions work again. LLO: if you svn update the MEDM screen, please be aware that instances of "H1" in the code will need to be changed to "L1" to prevent horrible, epic failure.
I've added a set of dictionaries to the drift monitor update script (/opt/rtcds/userapps/release/sus/common/script/driftmon_update.py) that allow for setting fixed threshold values by editing the code itself (example below). The current values are somewhat arbitrary guesses, so they require tuning. The changes have been committed to svn, and the code should be LLO-friendly without any modification.
Example: To set fixed thresholds, open driftmon_update.py and scroll down until you find the following (at line 115 at the time of this post):
########## TUNE THRESHOLDS HERE ###########
# yellow thresholds = mean +- yellow_factor * BOUND value
# red thresholds = mean +- red_factor * BOUND value
yellow_factor = 1
red_factor = 2
BOUND_MC1 = {'P' : 50, 'V' : 10, 'Y' : 15}
BOUND_MC2 = {'P' : 50, 'V' : 10, 'Y' : 5}
BOUND_MC3 = {'P' : 50, 'V' : 15, 'Y' : 20}
... and so on ...
Edit the values corresponding to the suspensions and degrees of freedom you wish to tune. For instance, with the code above, if the script updates MC1 pitch and sets, say, 10 uRad as the nominal value, then the yellow alarm will trip at <-40 uRad and >60 uRad (mean +- 50 uRad), and the red alarm will trip at <-90 uRad and >110 uRad (mean +- 2*50 uRad).
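For reference, a minimal standalone sketch of the same threshold arithmetic. The thresholds() helper below is made up for illustration and is not part of driftmon_update.py:
# Illustration only: derive the yellow/red alarm limits the same way the
# BOUND dictionaries described above are intended to be used.
yellow_factor = 1
red_factor = 2
BOUND_MC1 = {'P': 50, 'V': 10, 'Y': 15}
def thresholds(mean, bound, factor):
    """Return (low, high) limits as mean -/+ factor * bound."""
    return mean - factor * bound, mean + factor * bound
nominal_pitch = 10  # uRad, e.g. the nominal value the update script sets for MC1 P
print(thresholds(nominal_pitch, BOUND_MC1['P'], yellow_factor))  # (-40, 60)
print(thresholds(nominal_pitch, BOUND_MC1['P'], red_factor))     # (-90, 110)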
I opened up the MEDM code for the driftmon, and changed all specific references to H1 to $(IFO), and now I believe it should run fine on either site. So please disregard my prior nonsense about having to change the MEDM code for use at LLO. Macros are awesome.
I updated the GUARD_OVERVIEW and the IM/TT MEDM screens to contain micro/mini Guardians for IM1, IM2, IM3, IM4, RM1, RM2, OM1, OM2, OM3. The scripts were already made available by Stuart Aston and Jameson Rollins at LLO; see https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=15772 and https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=15585. The Guardian nodes were not yet created, though, so I had to create them and figure out that they also had to be started.
Currently RM1, RM2, OM1, OM2, OM3 do not have any offset.snap files saved to them in userapps/sus/h1/burtfiles/ (hence the red border on the Guardian Minis in the overview screen shot).
Attached are the before and after of the GUARD_OVERVIEW.adl screens along with the new HAUX and one of the new HSSS IM screens.
The red border actually indicates an ERROR with the node. If there are no actual ERROR conditions in those nodes, then there might be a version incompatibility between the version of Guardian in use and the indicator screen.
I'll help resolve this issue when I'm on site tomorrow.
ISS arrays are ready to be containerized. The bases for the cans needed a slight modification so they are currently at a machine shop. Once they are back and out of the air bake, work can continue for storage and shipping. I have attached a brief report with pictures for the next soul that works on this so things can be located easily. Also attached is a spreadsheet of which items are associated with which ISS and a report of progress.
There is a problem with the signal from the ITMY OSEM: it was very noisy during the lock losses last night, it is still noisy now, and it seems to always be this way.
The attached screen shots show the signals up the chain on each quad.
It looks like the ITMY L1 stage UR sensor is the culprit.
Dan, Richard, Daniel, Rich
We loaded the updated Beckhoff code today to enable the full functionality of the newly installed Fast Shutter Driver. The system is now operating properly. There are a couple of things worthy of note:
1. The ISI watchdogs trip in HAM6 whenever the fast shutter fires at full speed AND whenever the fast shutter is actuated in the slow mode, either up or down. The reason for this sensitivity, as compared to the apparently less sensitive response at LLO, is not yet understood.
2. If the high voltage is not enabled (HV ready) on the front panel of the shutter driver, the shutter Beckhoff code defaults to a blocking shutter state (closed) and will not allow you to unblock the shutter. This is a code feature and is not actually precluded in hardware.
3. We verified that the logic is correct on all signals, including: shutter controller output, LV OMC Length shutter input, Fast Shutter input, and all readbacks. All are correct.
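As an aside, a toy Python sketch of the default-to-blocked behavior described in item 2 above. This is only an illustration of the described logic; the real implementation lives in the Beckhoff/TwinCAT code:
# Toy model only: with HV not ready, the shutter logic defaults to "blocked"
# and refuses unblock requests, as described in item 2.
class FastShutterLogic:
    def __init__(self, hv_ready):
        self.hv_ready = hv_ready
        self.blocked = not hv_ready    # defaults to blocking when HV is not ready
    def request_unblock(self):
        if not self.hv_ready:
            return False               # unblock refused without HV enabled
        self.blocked = False
        return True
shutter = FastShutterLogic(hv_ready=False)
print(shutter.request_unblock(), shutter.blocked)   # False True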
When I came in this morning, DRMI was still locked and I had the place to myself, so I decided to try turning on the BSC RZ loops with a high blend, using SRCL, PRCL, and MICH as witnesses. I was mostly able to turn the loops on without losing lock if I turned them on slowly, but the BS broke DRMI, and Ed has been having a hard time getting DRMI back. I tried ITMX first, then added ITMY, then tried the BS. The attached plots show the cavity spectra for the baseline (red), ITMX RZ on (blue), and both ITMs' RZ on (green). Not much change at low frequency, although SRCL and PRCL both improve some, but between 1 and 4 Hz the RZ loops make things worse. I've returned the BSCs to their standard configuration.
Time line from this morning, times in UTC
15:28 Turned on ITMX ST1 RZ
15:51 Turned on ITMY ST1 RZ
15:55 Daniel kills IMC, breaking lock
15:57 Ed restores DRMI
16:13 I try to turn on BS ST1 RZ; this kills DRMI. Ed is unable to recover DRMI consistently after this point, probably because the end station PLLs are down
We have noticed that it sometimes takes a very long time for Guardian to calculate paths (in the ISC_LOCK guardian).
We also just saw something stranger. It seems that the ALS_COMM guardian, which was managed by the ISC_LOCK guardian, went into EXEC mode, although we are pretty sure no one in the control room did this. A screen shot of the log is attached.
The log files and conlog report that the ALS_COMM out-of-managed-mode sequence was managed -> pause -> exec -> pause -> managed. A text file is attached.
On the long times for ISC_LOCK to calculate paths, I see a log entry which suggests a transition request from LOCKING_ALS_DIFF to DARM_WFS (via LOCKING_ALS_COMM) took 12 seconds to calculate the path. Is the processing of the current state causing the delay? Log details attached.
Can you guys elaborate on this claim of an overly long path calculation time? The log you posted doesn't seem to support it:
2015-01-30T03:36:06.566Z ISC_LOCK [LOCKING_ALS_DIFF.run] USERMSG cleared
2015-01-30T03:36:15.586Z ISC_LOCK new request: DARM_WFS
2015-01-30T03:36:15.586Z ISC_LOCK calculating path: LOCKING_ALS_DIFF->DARM_WFS
2015-01-30T03:36:16.778Z ISC_LOCK [LOCKING_ALS_DIFF.run] USERMSG: node ALS_DIFF: NOTIFICATION
2015-01-30T03:36:27.308Z ISC_LOCK [LOCKING_ALS_DIFF.run] ALS_XARM: REQUEST => LOCKED_TRANSITION
The path calculation happens at 3:36:15.586, followed by some usercode logging about changes to the ALS_XARM request, which presumably is a subordinate of this ISC_LOCK node.
What makes you think that ISC_LOCK is taking a long time to calculate a path? My guess is that you're confusing the manager notification about changing a subordinate's request with a problem with the path calculation. They're not related.
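As a sanity check on the timing question, a minimal sketch (not a Guardian tool; the log lines are copied from the excerpt above) that measures the gap between the "calculating path" entry and the next logged event:
# Parse the ISO timestamps from the excerpt above and print the gaps
# between consecutive log entries.
from datetime import datetime
lines = [
    "2015-01-30T03:36:15.586Z ISC_LOCK new request: DARM_WFS",
    "2015-01-30T03:36:15.586Z ISC_LOCK calculating path: LOCKING_ALS_DIFF->DARM_WFS",
    "2015-01-30T03:36:16.778Z ISC_LOCK [LOCKING_ALS_DIFF.run] USERMSG: node ALS_DIFF: NOTIFICATION",
]
stamps = [datetime.strptime(l.split()[0], "%Y-%m-%dT%H:%M:%S.%fZ") for l in lines]
for earlier, later in zip(stamps, stamps[1:]):
    print((later - earlier).total_seconds(), "s")
# The gap after "calculating path" in this excerpt is ~1.2 s, not 12 s.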
I was able to close the ISS second loop a few times this morning. The loop performance was on par with what we achieved when we locked it last time. The picomotor closer to the ISS PD array was also moved to optimize the light on the ISS PDs and the QPD.
Noticing that the ISS QPD pitch and yaw were off, I moved the picomotor closer to the ISS array to optimize the light on the PDs and the QPD. This work improved the light on half of the PDs by about 10-20%. This also improved the beam position on the QPD. Before and after readings are listed:
        Before (cts)   After (cts)           Before (cts)   After (cts)
PD1     4430           4460            PD5   4600           5400
PD2     4150           4900            PD6   4700           5000
PD3     4750           4800            PD7   5750           5780
PD4     5050           5300            PD8   5300           5380
           Before    After
QPD_PIT    0.73      0.08
QPD_YAW    0.75      0.01
QPD_SUM    24400     24300
I was able to close the loop without kicking the IMC out of lock. This was not robust, and the previously used script would not work because the second loop output fluctuation was much bigger than the threshold we used in the script. Rather than changing the script, I want to investigate why we are not able to obtain the same robustness that we previously had. The loop performance is on par with what we have achieved in the past. With the loop closed and the boost and integrator on, the RIN was about 2E-8/sqrt(Hz) at 10 Hz. The attached plot shows the loop performance in different configurations.
For people interested in the loop performance downstream, here is a plot that shows the loop performance at IM4_TRANS and MC2_TRANS. The loop closing is still not robust because of too much noise at the second loop output, but I am working on understanding it.