At 2:18:50 UTC the IFO reached LSC_FF and I engaged the Intent Bit as a test, since Jaime had just had me load new code into the ISC_LOCK guardian that will unset the Intent Bit at lock loss.
NOTE: This lock stretch IS NOT at about 60 Mpc - the calibration is KNOWN TO BE INACCURATE, and is being worked on.
The ETMY ESD is saturating constantly, so while I set the Intent Bit as a test, this lock stretch is of QUESTIONABLE USE for data analysis.
It has been decided that for ER7 the OBSERVATION INTENT bit shall be UNSET after every lockloss, thereby forcing the operator to manually re-enable the bit each time full lock has been achieved and the operator has deemed the detector "observation ready".
I have modified the LOCKLOSS state in the ISC_LOCK guardian to set H1:ODC-OPERATOR_OBSERVATION_READY to zero:
class LOCKLOSS(GuardState):
    index = 2
    request = False

    def main(self):
        ezca['ODC-OPERATOR_OBSERVATION_READY'] = 0
        return True
This change has been committed to the USERAPPS svn, but the ISC_LOCK node has not been reloaded. Someone (e.g. an operator) should hit LOAD on the ISC_LOCK guardian to make this change take effect.
This is meant to be an ER7 hack only. The handling of the OBSERVATION INTENT bit will be reviewed after ER7, and a better, more structured handling of this bit (and of other modal bits) will be implemented ahead of O1.
I just spoke to Cheryl, the on duty operator, about reloading the ISC_LOCK code. While I was on the phone, she hit LOAD on the ISC_LOCK guardian control screen, and the code was reloaded without issue. We should watch to make sure that ISC_LOCK does the correct thing to unset the bit on the next lockloss.
The last change didn't take because of a little guardian subtlety. The ISC_LOCK DOWN state is a "goto" state, which means the LOCKLOSS state experiences a "redirect" before it even executes any code. I added the "redirect = False" flag to the LOCKLOSS state, which ensures that it executes in full before the redirect.
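A minimal stubbed sketch of the state with that flag (GuardState and ezca here are stand-ins so the snippet runs outside guardian; the real code lives in ISC_LOCK):

```python
# Stubbed sketch of the fix, for illustration only; GuardState and ezca
# below are stand-ins for the real guardian base class and EPICS interface.

class GuardState:
    redirect = True  # guardian default: a pending goto redirect can preempt main()

ezca = {}  # stand-in for the EPICS channel-access wrapper

class LOCKLOSS(GuardState):
    index = 2
    request = False
    redirect = False  # run main() to completion before the goto to DOWN

    def main(self):
        ezca['ODC-OPERATOR_OBSERVATION_READY'] = 0
        return True
```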
The code was reloaded, so we're again waiting for a lock->intent->lockloss cycle to confirm that things are working.
Pcal Team,
The displacement calibration factors that convert the TxPD (photodiode that measures a small amount of sampled light going to the ETM) and RxPD (photodiode that measures all of the reflected light from the ETM) readings into metres are now reported in DCC document T1500252.
We have calibrated the photon calibrator at each end station (at both LHO and LLO) multiple times since installation. This document contains all the calibration factors obtained on different dates. The numbers obtained on different dates for each end station are within a percent of each other when measured under "identical conditions". There have been some optical layout changes in the Transmitter module (described in LHO alog #17145), which have resulted in a different displacement calibration factor for the TxPD over time, as you will notice in the table in the DCC document linked above. But these numbers are consistent with the changes that were made.
For the RxPD, the calibration factor at LHO has been measured consistently to within 1%. However, we noticed recently that LLO's calibration factor differed from LHO's by about 20%, although the calibration numbers from different dates for the same end station agreed within a percent. We were not sure why the sensitivities of these detectors were different despite the components being the same. This should not affect the calibration, but we nonetheless replaced the two RxPDs at LLO with two spares, and now the calibration numbers at LLO are comparable with LHO's. So the latest calibration number for the RxPD from LLO is about 20% higher than it usually has been.
Recently, we noticed some power variation in the RxPD signal (while the TxPD stays flat) and also variation in the optical efficiency. We are actively looking into this issue and have some leads; we will post an alog about it soon. However, this should not alter the calibration factor by more than a few percent at most.
Added 747 channels. Removed 36 channels. All were calibration channels.
Jeff, Jim, Dave
The h1calex and h1caley models were changed to add RFM sender IPC parts. The h1calcs model was changed to add two receivers for these channels. When the new code was run, random single (and sometimes double) errors were seen at the receiver for these channels, at a rate of about one per minute from both end stations.
Working on the X-Arm, our first test was to turn off the h1iscex model, which has 8 RFM senders, to check whether the addition of one channel had exceeded a limit. The CAL error rate was unaffected.
We noted that h1iscex and h1calex both process for about the same length of time (6uS and 5uS respectively), and so we suspected the two models were writing to the RFM card in h1iscex at roughly the same time. The h1lasex model, on the other hand, writes to the RFM card at the 33uS mark.
We delayed the execution of h1calex by modifying the PCAL_MASTER.mdl model to add a bunch of filter modules; Jim created filters to run in these FMs to slow the model down. Our first try did not delay it by much (CPU time about 10uS) and the error rate was about the same. Our second try delayed it too much, to about 33uS, and the error rate shot up to hundreds per second. Our third try put the CPU time at 25uS, which is what we wanted. Unfortunately the receive error rate was unaffected.
At this point we were considering sending the channel over to the ISC model via shared memory and letting that model send it over RFM to the corner station (similar to what ODC is doing). Jeff decided at this point to remove the new IPCs for now.
To back the change out, we did the following:
08:00 IFO relocked
08:02 OIB switched to commissioning
08:06 Jodi into the LVEA, no one in working yet, I gave permission....lock loss. :/
08:15 Jodi out of the LVEA
08:23 Christine taking forklift from LSB to warehouse
08:25 DAQ overview is showing evidence of excitation sessions on SUSITMX and SUSETMY....FYI
08:28 R Savage going to EY for PCAL calibration
08:36 Inspiral range FOM (video6 upper) won't restart. J Batch says it's very broken.
08:51 ISS AOM diff power was down to ~4%. REFSIG adjusted from -2.03 to -2.01 to bring Diff Power to ~8%.
09:02 Darkhan in and going to EY to join Rick.
09:20 RO alarming
09:30 TJ to EX to tend to BRS
09:51 Bubba down X arm between Mid and End to inspect cleaning project.
09:56 Rick and Darkhan back from EY
10:15 TJ back from EX
10:30 turned IFO over to Kiwamu for Mich calibration
11:05 Tour in the control room
11:20 second tour group into the control room
12:32 Sigg to MY
12:42 Kissel restarting the cal models in the ends and corner and then restarting the DAQ
12:46 DAQ back up
12:48 Darkhan to EY to retrieve equipment.
13:25 Darkhan back from EY
13:45 ALL HANDS MEETING
16:00 Hanging out late until Cheryl arrives
These are the past 10 days' trends.
After a measurement of charge on each ETM yesterday, I took a few more on each today. The attached plots show the results trended with the measurements taken in April and January of this year. There appears to be more charge on the ETMs than in previous measurements, although there is quite a spread in the measurements. The ion pumps at the end stations are valved in.
Note, the measurement was saturating on ETMY, so Kiwamu pointed me to switch the ETMY HI/LOW voltage mode and BIO state. This made the measurement run without saturation. Attached is a snapshot of the settings I used for the ETMY charge measurement.
1. I think that the results of the charge measurements of ETMY on May 28 are probably mistaken; I haven't seen any correlation in the dependence of pitch and yaw on the DC bias. 2. It seems there was a very small response in the ETMX LL quadrant in these charge measurements; the other ETMX quadrants are OK. This correlates with the results of June 10: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=19049
I updated the SUS Drift Monitor with values from a shortish lock stretch last night, 1116855522. In addition, I updated the threshold-updating script to widen the thresholds for PR2 and BS per Betsy's suggestion. The IMs and OMs are still showing alarms, but we assume they will come back to green during a lock. As part of the background work to get the Drift Monitor back to functional, we (TJ at the keyboard and Betsy, Hugh, and myself in the peanut gallery) reassigned the alarm severity fields (.HSV, .LSV, .LLSV, .HHSV) to their appropriate values (MINOR, MINOR, MAJOR, MAJOR respectively). At some time during the past month, these values were lost (i.e. set to NO_ALARM), presumably during some bootfest.
We need to update the appropriate database such that these values are not lost during bootfests in the future.
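For reference, the mapping we restored can be sketched as below (a hedged sketch: the channel name in the test is hypothetical, and in practice the fields were set via channel access, e.g. caput):

```python
# Sketch of the alarm-severity mapping restored above. The severities were
# actually written via channel access; this just builds the equivalent
# caput commands for a given record name (record names are assumptions).
SEVERITY_FIELDS = {
    'HSV': 'MINOR',   # high alarm severity
    'LSV': 'MINOR',   # low alarm severity
    'LLSV': 'MAJOR',  # low-low alarm severity
    'HHSV': 'MAJOR',  # high-high alarm severity
}

def severity_caput_commands(channel):
    """Build the caput commands that restore a record's alarm severities."""
    return ['caput {}.{} {}'.format(channel, field, sev)
            for field, sev in SEVERITY_FIELDS.items()]
```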
Rick, Peter, Jason, Jeff B.

A series of apparent glitches and dropouts of the PSL chiller water flow caused the PSL to shut down several times. We swapped out the online chiller (Gromit) with the backup chiller (Wallace) from 05/07/15 until 05/20/15. During this time, we replaced Gromit's central chiller controllers (for both the Crystal and Diode chillers). The older turbine flow sensors, which were flagged as the source of the PSL shutdowns, were also replaced with new vortex flow sensors. After the upgrades, we swapped Gromit back in as the online chiller and put Wallace into backup. Gromit has been running for the past 8 days without any glitches or dropouts. The turbine flow sensors in Wallace will also be replaced with new vortex flow sensors.

The DI water cartridges in both chillers have not been replaced in some time, and there was concern about their proper functioning. The attached plots show the 180- and 30-day trends for the flow rates and the conductivity of the DI cartridges for both chillers.

The Diode chiller is functioning within operational parameters. The conductivity ranges between 4 and 7 µS/cm (the low/high set points) over a 14- to 15-day period. The flow rate is very steady at 21 l/min. The fluctuations at the right of the plots are from when Gromit and Wallace were swapped on 05/07 and then swapped back on 05/20. The conductivity appears to be normal, although we need a longer period of data to reconfirm the previous trends. The flow rate appears to have climbed from 21 l/min to around 30 l/min, while the flows through the diodes are all normal; this appears to be a calibration issue with the new vortex flow sensors.

The data for the Crystal chiller show the same patterns as the Diode chiller, with two notable exceptions: (1) the conductivity rise and regeneration period is three to four months (vs. 14 to 15 days for the Diode chiller), and (2) the flow rate for the Crystal chiller has dropped from 18 l/min to 10 l/min.
Again, this is likely to be a calibration issue as the flows through the PSL components are all in normal ranges. We will discuss this calibration question with the laser group in Germany.
The BRS was very rung up this morning so I went to investigate. It looks like the damper may have caused the BRS to ring up.
People were at End X yesterday but left around 12:30 PST, whereas the BRS didn't start to dramatically ring up until around 19:00 PST. The damper seemed to be on before the large amplitudes seen on both the INMON and the DRIFTMON. Seismometers aren't showing any signs of an earthquake within the past 24 hours, so that seems to be ruled out.
I went to EX to turn OFF the damper by uncommenting the line of code, and adjusted the damper masses (they were about 75 degrees out of position). I left the damper OFF to allow the BRS to ring down on its own, as we have done previously.
Screenshots attached: 1st - a 4.5 hour view around the event. 2nd - a 24 hour trend. In the 2nd, you can see when people were at EX; the damper seemed to relax the signals back down to normal, where they stayed for a little over 4 hours before everything picked up again.
DarkhanT, RickS
We are running a short (hour or two) test at Yend.
The Rx PD assembly is back in the Rx module. The outer beam is blocked inside the Tx module so only the inner (upper) beam is reflecting from the ETM into the Rx PD assembly.
We are trying to identify the source of variations we have seen in the received power, but not in the transmitted power.
Dan, Evan, Sheila
Tonight we started to look at the angle-to-length couplings of our test masses. We injected lines into pitch and yaw on the PUMs, and adjusted the A2L gains to minimize them. Using the math in the 40 meter alog and Jax's alog, we can estimate the miscentering from these measurements.
| | Gain P2L | vertical miscentering (mm) | Gain Y2L | horizontal miscentering (mm) |
| ETMX | 1.6 | 21 | 1.1 | 14.4 |
| ETMY | 0.69 | 9 | -0.3 | -3.9 |
| ITMX | 2.4 | 31.5 | 1.15 | 15 |
| ITMY | 1.5 | 19.7 | N/A (-1.7 to -2) | |
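As a sanity check, the gains and miscenterings in the table above are consistent with a single linear scale factor (a back-of-envelope sketch: the factor below is fit to this table's numbers, not taken from the referenced alogs):

```python
# Back-of-envelope sketch: the table is consistent with
#   d [mm] ~= k * A2L_gain,  k ~= 13.1 mm per unit gain.
# k here is inferred from this entry's own numbers (assumption),
# not the derivation in the 40 meter / Jax alogs.
K_MM_PER_GAIN = 13.1

def miscentering_mm(a2l_gain, k=K_MM_PER_GAIN):
    """Estimate beam miscentering on the optic from an A2L decoupling gain."""
    return k * a2l_gain
```

For example, the ETMX P2L gain of 1.6 maps to about 21 mm, matching the table.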
After we adjusted these, we saw an improvement in the spectrum below 20 Hz. The line in the attached screenshot at 16.6 Hz, with sidebands at half a hertz, is the excitation. Keep in mind that this is on the new ESD driver and we haven't redone the calibration yet, but clearly this improved the noise below 20 Hz.
Earlier in the evening we were having difficulty powering up because of a pitch instability at the main suspension resonant frequency that showed up in all the test masses. We moved the QPD offsets for pitch back to what they were on May 15th (they had been changed last Tuesday). We then remeasured the miscentering for pitch only; things were a little bit better. Once we increased the power to 17 W, the IFO was stable and we repeated some of the measurements. We were able to power up to 23 W twice without seeing the instability, but lost the lock quickly for other reasons.
| | Gain P2L | vertical miscentering (mm) | 17 Watts P2L |
| ETMX | 0.7 | 9.18 | 0.8 | |
| ETMY | -0.57 | -7.5 | -0.49 | |
| ITMX | 2.1 | 27.6 | 2.4 | |
| ITMY | 1.2 | 15.7 | NA |
DARM OLTF file attached. This template has reduced drive strength so that the ESD does not saturate in the LVLN state.
At last I was able to switch the DARM actuation over to ETMY at 25 W with the LPF engaged on the LVLN driver. We had discovered that the L1 LOCK filters on the ETMs were charging up because of small amounts of ringing in the lower stage filters. Therefore, the L1 filter for ETMX is zeroed after actuation is moved to ETMY, and the lock filters for ETMY are cleared after lockloss. Also, the INCREASE_POWER state now automatically increases the power to 25 W once again.
I tried the LOWNOISE_ESD_ETMY state at 25 W once, and it seemed to work. I then turned on some pieces of the LSC_FF state (namely the SRCL gain reduction, the SRCL cutoff, and the MICH FF). I am leaving the IFO locked with the intent bit undisturbed.
One last note: the power was 3 W for the attached spectrum, and, to repeat, the calibration has not been updated since the actuator change. They're working on it.
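The filter housekeeping described above (zeroing the ETMX L1 filter after the handoff, clearing ETMY's lock filters after lockloss) can be sketched like this (hypothetical names and a stub filter class for illustration; this is not the actual ISC_LOCK code):

```python
# Hypothetical sketch of the guardian filter housekeeping described above.
# FakeFilter stands in for an ezca LIGOFilter-style object; all names here
# are assumptions for illustration only.

class FakeFilter:
    """Stand-in for a suspension LOCK filter bank interface."""
    def __init__(self):
        self.gain = 1.0
        self.history_cleared = False

    def ramp_gain(self, value, ramp_time=2):
        self.gain = value

    def clear_history(self):
        self.history_cleared = True

def handoff_to_etmy(etmx_l1_lock):
    # Zero the ETMX L1 LOCK output once DARM actuation is on ETMY, so
    # residual ringing in the lower-stage filters cannot charge it up.
    etmx_l1_lock.ramp_gain(0, ramp_time=5)

def on_lockloss(etmy_l1_lock):
    # Clear the ETMY lock filter history so it starts fresh next acquisition.
    etmy_l1_lock.clear_history()
```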
The displacements in mm are wrong here; we were measuring from the PUM.
Another DARM OLTF, this time with the ETMY LPF off.
There are two DARM open loop gain TFs attached as comments to this entry, representing the first two DARM OLGTFs taken with the new low noise ESD driver and the new L1/L2/L3 hierarchical control scheme. I've downloaded them and submitted them to the CalSVN for use later:
- From LHO aLOG 18662, DcDarmLVLN.xml, measured starting 2015-05-28 13:17:00 UTC, has been copied to /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Measurements/DARMOLGTFs/2015-05-28_H1_DARM_OLGTF_LHOaLOG18662_ETMYL3LPON.xml
- From LHO aLOG 18709, DcDarmLVLN.xml, measured starting 2015-05-30 03:07:00 UTC, has been copied to /ligo/svncommon/CalSVN/aligocalibration/trunk/Runs/PreER7/H1/Measurements/DARMOLGTFs/2015-05-30_H1_DARM_OLGTF_LHOaLOG18709_ETMYL3LPOFF.xml

I attach conlogs of all relevant DARM filter banks and BIO switches, where the date in the file name corresponds to each measurement.
- commissioners had the IFO, working on EY
- Rick and crew returned from EY(?) early in the shift
- currently an instability is preventing the IFO from going beyond 10 W
K. Izumi, S. Karki, J. Kissel, R. Savage, D. Tuyenbayev

Lots more work done today -- still a fire hose of measurements that we're furiously trying to understand quickly enough to make a statement about them and/or use them in our loop models, but we're falling behind on the analyzing / documenting, as expected.

What we accomplished today:
- J. Kissel, S. Karki: Finished measurements of the ETMX PCAL electronics, as had been done for ETMY yesterday (AA and AI data finished being processed; still need to process the full-chain measurements; both end stations' work needs aLOGs)
- K. Izumi, J. Kissel: Gathered a free-swinging Michelson calibration of the ETMY L2 stage between 2 and 7 Hz (by the usual daisy chain of measurements); tried to get the L3 stage but it's just too weak -- will have to do an ETMX-to-ETMY propagation using the full IFO. (measurement processed; needs an aLOG for procedure and results)
- K. Izumi: Gathered data for calibration of the ETMY L2 stage using the ALS DIFF VCO (preliminary aLOG is LHO aLOG 18656, but it still needs a processing and results aLOG)
- R. Savage, D. Tuyenbayev: Compared the response of the standard ETMY PCAL PDs against other broadband PDs to check for high-frequency frequency dependence. Still needs more time (and an aLOG).
- R. Savage, D. Tuyenbayev, J. Kissel: Began looking at changes to be made to the CAL-CS and PCAL models. Accidentally updated ${userapps}/cal/common/models/ and found new library parts which need new connections and IPC at the top level that we don't know the details about.

Goals for tomorrow:
- B. Weaver: Measure ETM charge
- R. Savage, D. Tuyenbayev: Continue at EY, then move to EX, comparing the high-frequency response of the PDs.
- J. Kissel, K. Izumi: Measure with the simple Michelson technique again, but propagate to the stronger ETMX this time (and get all three stages), then lock DARM to propagate from ETMX to ETMY (again all three stages)
- J. Kissel, R. Savage, D. Tuyenbayev: Call up J. Betzwieser and S. Kandashamy and figure out what the heck they've done to the CAL-CS and PCAL library parts, so that we can absorb the changes and have functional front end models.
2:32 UTC, lock loss and Intent Bit is still green, calling Jaime.
Hopefully addressed. See comment to previous post.