Nutsinee, Kiwamu,
WP5990
We have (re-)set up the polarization monitors on the HWS table by HAM4. We have confirmed that they are functional. For those who are interested in the polarization data, here are the channels to look at:
In theory, they should be in units of watts as measured at the HWS table.
[Installation notes]
This time, we newly installed a short-pass optic (DMSP950L from Thorlabs) to pick off the main interferometer beam without getting too much contamination from either the SLED light (790 nm) or the ALS beam (532 nm). The short-pass mirror was inserted between the bottom periscope mirror and the first iris (D1400252-v1). Looking at the green light from the end stations on the table, we found that the beam size is already pretty small, (visually) small enough for the beams to fit onto the PDA50Bs without a lens. So we decided to go without lenses, as opposed to the previous setup (24046).
The short-pass mirror reflects the interferometer beam toward the left on D1400252. We placed a PBS (CM1-PBS25-1064-HP) on the left side of the short-pass mirror and placed the PDA50Bs. The power reflectivity of the newly installed short-pass mirror was measured to be 5% +/- 3% for 532 nm. The absolute power of the reflected green light (assuming the Nova handheld power meter is accurate) was measured to be 1 uW.
One thing we learned today is that the green light is not so trustworthy for getting the optimum alignment. We first aligned the optics with the green light and then noticed that the infrared beams were almost falling off of the PDA50Bs. So we then closed the shutters and aligned them with the actual infrared beam.
The manual gain settings are:
The digital gains were also changed accordingly so that the calibration of these channels should be accurate.
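As a rough illustration of how a raw PD channel maps into watts at the table, here is a minimal sketch of the calibration chain. All numeric values below are illustrative assumptions (ADC range, detector responsivity, transimpedance gain at the chosen manual gain step), not the actual front-end settings; in practice this factor is absorbed into the digital gains mentioned above.

```python
# Minimal sketch of the counts -> watts calibration for an amplified photodetector
# channel. All numeric values are illustrative assumptions, not the real settings.

ADC_VOLTS_PER_COUNT = 40.0 / 2**16      # assumed 16-bit ADC spanning +/-20 V
RESPONSIVITY_A_PER_W = 0.7              # assumed detector responsivity at 1064 nm [A/W]
TRANSIMPEDANCE_V_PER_A = 1.5e6          # assumed gain at the chosen manual gain step [V/A]

def counts_to_watts(counts):
    """Convert raw ADC counts to optical power [W] at the detector."""
    volts = counts * ADC_VOLTS_PER_COUNT
    return volts / (RESPONSIVITY_A_PER_W * TRANSIMPEDANCE_V_PER_A)

# Example: 5000 ADC counts expressed in microwatts
print(counts_to_watts(5000) * 1e6, "uW")
```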
J. Kissel, S. Dwyer, S. Ballmer

We continue to have trouble with the FSS oscillating after a lock loss: it will often either take several minutes to relax, or it requires manual intervention such as briefly reducing the common gain of the FSS loop. As such, Sheila took a look at the IMC PDH loop to look for problems and instabilities there. I looked over her shoulder at her results and saw some areas for improvement in the loop design.

The current loop design has a UGF of 66 [kHz] with a phase margin of 68 [deg]. However, the gain margin around ~200 [kHz] is pretty dismal because of what look to be some icky features in the physical plant. These features have been shown to be directly influenced by the FSS common gain (see the second attachment in LHO aLOG 28183). I figure, given that we've got oodles of phase margin, what harm could be done by just adding a simple 200 [kHz] pole in the loop and reducing the gain by ~2 [dB]?

As such, I took Sheila's data, which lives here:
/opt/rtcds/userapps/release/isc/common/scripts/netgpib/netgpibdata/TFAG4395A_12-07-2016_163422.txt
(also attached), and added these modifications offline as a design study. In the attached plots, I compare the as-measured IMC PDH open loop gain, G, loop suppression, 1/(1+G), and closed loop gain, G/(1+G), against a loop modified as described above (blue is as measured, green is the modified design study). The results are encouraging: a still-substantial UGF of 47 [kHz] and a very healthy phase margin of 58 [deg]. However, as can be seen in the loop suppression and the closed loop gain, there is far less gain peaking and/or a much greater gain margin, and we would no longer have to worry about the icky features in the plant that are so sensitive to the FSS common gain.

Where to stick such an analog filter? It's of course dubious to claim that the MEDM screen for such a system is representative of the analog electronics, but assuming it is, one can see that there is the possibility of a switchable daughter board in the FAST path that gets shipped off to the PSL AOM for the FSS. Because it's switchable, we can toss in whatever simple filter we like, and then compare and contrast the performance for ~1 week to see if it improves the stability problems we've been having.

What impact would this have on the full IFO's CARM loop? I'll remind you of Evan's loop analysis of the whole frequency stabilization spaghetti monster in LHO aLOG 22188. There he suggests that the CARM UGF is around 17 kHz, so as long as the closed loop gain around there is the same, this change in the IMC PDH loop should have little impact [[I just made this sentence up based on just a few words from Sheila, who asked me to look at the CLG. I'm not confident of its truth. Experts should chime in here]]. Indeed, the third .pdf attachment shows that G/(1+G) of the IMC PDH loop, regardless of modification, remains unity out to 100 [kHz].
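For reference, the offline modification amounts to multiplying the measured open loop gain by a single real pole at 200 kHz and a ~2 dB gain reduction, then recomputing 1/(1+G) and G/(1+G). A minimal sketch of that design study is below; the three-column (frequency, magnitude in dB, phase in deg) layout of the netgpibdata text file is an assumption, not a statement of the actual file format.

```python
import numpy as np

# Load the measured IMC PDH open-loop TF; the three-column layout
# (freq [Hz], magnitude [dB], phase [deg]) is assumed here.
f, mag_db, ph_deg = np.loadtxt('TFAG4395A_12-07-2016_163422.txt', unpack=True)
G_meas = 10**(mag_db / 20) * np.exp(1j * np.deg2rad(ph_deg))

# Design-study modification: one real pole at 200 kHz plus a ~2 dB gain reduction
f_pole = 200e3
G_mod = G_meas * (1.0 / (1.0 + 1j * f / f_pole)) * 10**(-2.0 / 20)

for label, G in [('measured', G_meas), ('modified', G_mod)]:
    supp = 1.0 / (1.0 + G)            # loop suppression
    clg = G / (1.0 + G)               # closed-loop gain
    idx = np.argmax(np.abs(G) < 1)    # first crossing of |G| below unity ~ UGF
    print(label, 'UGF ~', f[idx], 'Hz, phase margin ~',
          180 + np.rad2deg(np.angle(G[idx])), 'deg')
```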
How does this compare with the Pomona box from anno domini?
Description of the notch in Pomona box 5141 (this was in the loop for a few years, but was removed several months ago, I think before O1)
Thanks for finding the aLOG entry, Sheila! @Daniel -- though she doesn't say it explicitly, the aLOG shows that the Pomona box notch was centered around ~700 kHz. As shown by my OLGTF model, if we add this ~200 kHz pole, then not only will any features at 200 kHz be suppressed significantly, but whatever might happen at 700 kHz is suppressed even further. In other words, this pole just shapes the high-frequency, super-UGF portion of the OLG to better handle *any* nonsense, instead of the focused band-aid fix that any notch would provide.
J. Kissel

In the rush and panic of ER9 last week, we neglected to aLOG that we reverted the ETM ESD bias flip I'd done last Tuesday, because -- for some unknown reason -- the sign flip *this* time caused problems for ALS DIFF and subsequently for switching to EY later in the acquisition sequence. As such, we didn't gain as much in this week's charge mitigation, as shown by the plots attached below, which include a new measurement from today. The effective bias voltage is still within the ~10 [V] range, so the reversion did little harm. Since ER9, we haven't had the time to explore why the flipping failed us. I'll continue to try to push for figuring this out, but we'll see what kind of priority it gets with respect to all the other commissioning tasks.
The mic in the EY electronics bay had been reported by Vinny Roma as having unusually low signal, requiring a calibration factor 10 times larger than the other microphones. I determined that the fault was with the signal conditioning box: when plugged into a different channel, the microphone performed normally. Additionally, I verified that the power supply was delivering the proper voltage to the signal conditioning box. Using the second channel in the signal conditioning box also did not improve the signal. We should plan to replace the signal conditioning box next week.
asc filters
The h1asc filters, which had been pending installation since before ER9, were loaded.
framewriter testing
Jim, Dave
We continued testing h1fw0 to investigate its instability. The daqd code was rebuilt against a later version of framecpp (2.4.2 instead of 1.19.32). When fw0 was instructed to write all frame types, it was as unstable as with the older framecpp. The size of the science frames written by the two writers at this point was different, with fw0's frame 2 bytes smaller than fw1's frame. We suspect the framecpp version string is encoded in the header; "2.4.2" is two characters shorter than "1.19.32". To get the frames the same size, we reverted fw0 back to framecpp 1.19.32.
The next test was to install a local 1 TB 7200 rpm SATA hard disk drive on h1fw0 and build an ext4 file system on it. The daqd process was directed to write to the local disk instead of the NFS-mounted SATABOY. The disk access immediately sped up by many seconds. When we got the writer to write all four types of files, very surprisingly the frame writer still crashed, but took longer to do so. Writing all frames to the NFS-QFS crashed in less than 5 minutes; with the local ext4 disk it crashed in 9 to 10 minutes. The crash time is not related to writing second trends; we saw several successful trend file writes over the several tests we ran. Monitoring the threads showed that with the local disk the thread did not go into the "D" disk-IO wait state. Another surprise was that with local disk writing we still saw a retransmission request about 40 seconds prior to the slew of requests at the time of the crash.
We reconfigured fw0 back to writing to ldas-h1-frames so they could be used to back-fill fw1's gaps.
As a test, we had fw0 write commissioning frames only rather than science frames only. The data load goes up from 0.9 GB to 1.7 GB per 64-second frame period, and fw0 is stable in this configuration. It can handle either 64-second frame type, just not both.
We are leaving fw0 writing science frames overnight.
The Y SLED was replaced recently, and now that the X SLED is getting very dim, I decided to replace the X SLED as well. This way it's easier to keep track of when the SLEDs were replaced (this time both were replaced within one week). The old SLED number is 12.05.21; it was replaced with 07.14.256.
Kiwamu, Stefan
Yesterday Kiwamu realigned the red IR cameras. I reset the centroid setting today. The configuration files are:
ITMX:
[Camera Settings]
Camera Name = H1 ITMX (h1cam21)
maxX = 659
maxY = 494
Exposure = 100000
Analog Gain = 1023
Auto Exposure Minimum = 150
Name Overlay = True
Time Overlay = True
Calculation Overlay = True
Do Calculations = True
Calculation Mask = Circle
Circle Mask X = 373
Circle Mask Y = 150
Circle Mask Radius = 250
Calculation Subtraction File = None
Auto Exposure = False
Calculation Noise Floor = 25
Snapshot Directory Path = /ligo/data/camera
Frame Type = Mono12
Number of Snapshots = 1
Archive Image Minute Interval = 0
Archive Image Directory = /ligo/data/camera/archive/
[No Reload Camera Settings]
Base Channel Name = H1:VID-CAM21
Camera IP = 10.106.0.41
Multicast Group = 239.192.106.41
Multicast Port = 5004
Height = 480
Width = 640
X = 0
Y = 0
ITMY:
[Camera Settings]
Camera Name = H1 ITMY (h1cam23)
maxX = 659
maxY = 494
Exposure = 100000
Analog Gain = 1023
Auto Exposure Minimum = 150
Name Overlay = True
Time Overlay = True
Calculation Overlay = True
Do Calculations = True
Calculation Mask = Circle
Circle Mask X = 295
Circle Mask Y = 220
Circle Mask Radius = 250
Calculation Subtraction File = None
Auto Exposure = False
Calculation Noise Floor = 25
Snapshot Directory Path = /ligo/data/camera
Frame Type = Mono12
Number of Snapshots = 1
Archive Image Minute Interval = 0
Archive Image Directory = /ligo/data/camera/archive/
[No Reload Camera Settings]
Base Channel Name = H1:VID-CAM23
Camera IP = 10.106.0.43
Multicast Group = 239.192.106.43
Multicast Port = 5004
Height = 480
Width = 640
X = 0
Y = 0
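For anyone scripting against these camera configuration files, they follow standard INI syntax and can be read with Python's configparser. The sketch below is just an illustration; the file name is a placeholder, not the actual path on the camera servers.

```python
import configparser

# Read one of the camera configuration files shown above
# (the file name here is a placeholder, not the real location).
cfg = configparser.ConfigParser()
cfg.read('h1cam21_itmx.ini')

settings = cfg['Camera Settings']
print('Camera:', settings['Camera Name'])
print('Circle mask center: (%s, %s), radius %s' % (
    settings['Circle Mask X'], settings['Circle Mask Y'],
    settings['Circle Mask Radius']))
print('Exposure:', settings['Exposure'], 'Analog gain:', settings['Analog Gain'])
```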
Title: 07/12/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
State of H1: IFO unlocked. Maintenance day
Commissioning:
Outgoing Operator: None

Activity Log: All Times in UTC (PT)
13:00 (06:00) Peter – Looking for missing equipment in laser enclosures
13:15 (06:15) Peter – Transitioned LVEA to Laser Safe
14:00 (07:00) Peter – Finished looking for equipment
15:00 (08:00) Start of shift
15:15 (08:15) Soda vendor on site for deliveries
15:36 (08:36) Ryan – Going into LVEA to recover computer equipment
15:55 (08:55) Joe – Going into LVEA to check eye wash stations
15:58 (08:58) Robert – Going into LVEA to work on PEM stuff
16:03 (09:03) Ryan – Out of the LVEA
16:04 (09:04) Paul – Going to End-Y to work on PEM stuff
16:09 (09:09) Narco – Delivery of N2 to CP2 (LX)
16:46 (09:46) Joe – Out of LVEA – Still has two forklifts on battery charger
16:50 (09:50) Cintas – On site to change out mats
16:55 (09:55) Kyle – RGA work at End-Y (WP #5992)
17:42 (10:41) Filiberto – Going into LVEA to check PSL and HAM6 racks
17:52 (10:52) Joe – Going into LVEA to check on battery charging
17:53 (10:53) Paul – Back from End-Y
18:06 (11:06) Joe – Back from LVEA – Battery chargers are off
18:24 (11:24) Paul – Going to End-Y to check PEM stuff
18:25 (11:25) Jeff K. – Running charge measurements on ETM-X
18:29 (11:29) Filiberto – Back from LVEA
18:31 (11:31) Filiberto – Going to Mid-Y
18:45 (11:45) Hugh – Going to End-Y to center STSs
19:06 (12:06) Filiberto – Leaving Mid-Y
19:08 (12:08) Hugh – Back from End-Y
19:10 (12:10) Kyle – Back from End-Y
19:11 (12:11) Jeff K. – Running charge measurements on ETM-Y
19:25 (12:25) Jeff K. – Finished with charge measurements at End-X
19:30 (12:30) Kiwamu – Transition LVEA to Laser Hazard
19:40 (12:40) Narco – Delivery of N2 to CP5 (Mid-X)
19:50 (12:50) Kiwamu & Nutsinee – Working on HWS Table (WP #5990)
20:01 (13:01) Dick – Going into LVEA to check electronics racks
20:10 (13:10) Dick – Out of LVEA
20:55 (13:55) Twin Cities Metals on site to make delivery for Bubba
23:00 (16:00) Turn over to Ed

End of Shift Summary:
Title: 07/12/2016, Day Shift 15:00 – 23:00 (08:00 – 16:00) All times in UTC (PT)
Support: Sheila, Jenne, Jeff K.
Incoming Operator: Ed
Shift Detail Summary: Maintenance day – All-hands search of the site for missing equipment; the equipment hunt has completed. Jeff K. ran charge measurements on ETM-X and ETM-Y. Did initial alignment and locked the IFO to DC_READOUT. Commissioners requested high power – lock broke just before reaching 40 W. Relocking.
As Patrick noted in 28181, End-Y had one STS2 leg out of range. This seismometer's BIO does not work remotely, so we'd been waiting until today to center it. Attached is the 1-hour trend of the mass positions. Note that it took two centerings to get the U and W masses much closer to zero, so don't expect one to be enough. They should settle down pretty quickly though, so it still should not take too long to do a good centering. I guess the V mass is happy where it is and doesn't plan to change its position.
There was a request for a representative example of BSC-ISI performance during ER9. My attached plot shows the ETMY L(ength) suspoint motion from the ST2 GS13s versus the ground motion. The red curve is the ISI's longitudinal motion (using the calibrated SUS_POINT channel), blue is the ETMY BRS/STS Y super sensor (showing the tilt-subtracted low-frequency ground motion), and green is the calibrated STS Y ground motion. Some things to note:
1. Below 0.1 Hz the height of the green trace above the blue gives you an idea of how windy it was. In this frequency range the L motion is mostly ISI Y, so much of the height of the red trace above blue is due to tilt. The bump between 30 and 100 mHz is from the gain peaking of the sensor correction filter. As Conor said on Friday, this should be mostly common mode between all of the tables.
2. Between 0.1 and 1 Hz the blend filters are rolling off from the CPSs to the inertial instruments. It doesn't look like we are getting much at the microseism, but this is probably limited by the performance of the sensor correction. We should do "better" here when the microseism comes up during O2.
3. Above 1 Hz the ST2 L motion is limited by ST2 RX/RY motion, because we can't turn on those loops on ST2. GS13 noise is too high below 1 Hz, so it's hard to make a blend filter that improves >1 Hz motion without spoiling lower-frequency motion.
4. Above 10 Hz we are limited by GS13 noise and loop gain.
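For anyone who wants to regenerate this kind of comparison, a rough sketch using gwpy is below. The GPS times and channel names are stand-ins (not necessarily the calibrated channels used for the attached plot), and NDS access is assumed.

```python
from gwpy.timeseries import TimeSeries

# Placeholder GPS times for an ER9 stretch; adjust to the time of interest.
start, end = 1152079000, 1152082600

# Channel names below are illustrative stand-ins, not the exact channels plotted.
chans = {
    'ISI ST2 suspoint L': 'H1:ISI-ETMY_SUSPOINT_ETMY_EUL_L_DQ',
    'Ground STS Y':       'H1:ISI-GND_STS_ETMY_Y_DQ',
}

asds = {label: TimeSeries.get(chan, start, end).asd(fftlength=128, overlap=64)
        for label, chan in chans.items()}

plot = asds['Ground STS Y'].plot(label='Ground STS Y')
ax = plot.gca()
ax.plot(asds['ISI ST2 suspoint L'], label='ISI ST2 suspoint L')
ax.set_xlim(0.01, 100)
ax.legend()
plot.show()
```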
Something that we've noticed a lot lately is that if the interferometer drops lock during the CoilDrivers state, guardian doesn't notice the lockloss for a long time.
This was because all of the coil switching was happening in the "main" part of the state, which included many sleep commands. Since we were switching 5 optics' coils, and each optic took about 90 seconds, we had a solid 7+ minutes between lockloss checks.
Yesterday I re-wrote the Coil Drivers state so that the switching happens in the "run" part of the state, and I use a series of guardian timers rather than sleep commands. So, the state still takes many minutes, but now it's constantly checking for lockloss as guardian is intended to do.
The switching has worked at least once so far with the new code, although since we didn't drop lock, the lockloss catching hasn't been tested yet.
Future work is to test whether we can do all 5 optics in parallel rather than in series, and then change the guardian to do so.
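The pattern described above (timers polled in run() instead of sleeps in main()) looks roughly like the sketch below. This is a schematic of the approach, not the actual ISC_LOCK code; the state name, optic list, timer duration, and switching helper are all placeholders.

```python
from guardian import GuardState

OPTICS = ['ETMX', 'ETMY', 'ITMX', 'ITMY', 'BS']   # placeholder optic list
SWITCH_TIME = 90                                  # seconds per optic (illustrative)

def switch_coil_driver(optic):
    """Placeholder for the actual coil-driver switching commands."""
    pass

class COIL_DRIVERS(GuardState):
    def main(self):
        # Kick off the first optic and start a timer instead of sleeping.
        self.index = 0
        switch_coil_driver(OPTICS[self.index])
        self.timer['switch'] = SWITCH_TIME

    def run(self):
        # run() is re-entered continuously, so guardian can check for a lockloss
        # between timer expirations instead of blocking inside main().
        if not self.timer['switch']:
            return False                          # still waiting on this optic
        self.index += 1
        if self.index >= len(OPTICS):
            return True                           # all optics switched; state done
        switch_coil_driver(OPTICS[self.index])
        self.timer['switch'] = SWITCH_TIME
        return False
```

Doing all five optics in parallel would amount to starting one timer per optic in main() and checking them all in run().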
RGA had been valved-in but not running. Need to finish commissioning this unit via baking "hot" at next available opportunity.
Here is a 60-day trend of the End-Y Drift Monitor channel. This DC reading of the Beam Mirror Reflection obviously is still 'drifting' down but does continue to slow. A quick-and-dirty extrapolation gives a need-to-recenter date of around July 25.
All teams have completed the search of their assigned areas. The locations of two already-known TDS3034s were confirmed, and one was returned to the EE shop. No other items were found.
EX was transitioned to LASER SAFE. EY was already LASER SAFE when I got there; I temporarily transitioned it to LASER HAZARD so I could open the table enclosure, then transitioned back to LASER SAFE. No untoward items of test equipment were found in the table enclosures.
[Jenne, Robert]
As a result of Keita's alog 28196 regarding the beam position on the BS, we wanted to move the beam splitter around in relation to the beamline, to see if that would change any clipping that we may have on the baffles. Short answer: nope.
First, we moved ST1 by putting offsets in the isolation loops. JeffK tells us that these are calibrated in nm, so our 5,000 count offsets correspond to about 0.5mm of motion. We moved ST1 up and down, as well as laterally along the plane of the beam splitter (+x+y and -x-y). No effect seen in the power recycling gain.
Next, we moved HEPI in a similar fashion. The thought here is that the ITM elliptical baffles are suspended from ST0, so we weren't moving them earlier. (By moving both ST1 and ST0 we had hoped to differentiate which set of baffles was causing us trouble.) We moved up and down, as well as in RZ, rotation about the z-axis. RZ is calibrated in nrad, and the baffles are of order 1 m away from the center of the ISI, so they were each moved on the order of 0.5 mm as well. Again, no effect was seen in the power recycling gain.
Attached is a snapshot of our striptool, with the first offsets starting at about 0:06:00 UTC, and the last ones ending around 1:00:00 UTC. Teal is the power recycling gain. The POP18 seems to be still relaxing from the power up to 40W for the first few minutes of our tests, but doesn't seem to be correlated with our movements. Red trace is the vertical CPS measure of BS ST1 ISI position, and orange is superimposed with brick red measuring our lateral motion. Light purple is vertical HEPI motion and light green is RZ HEPI motion.
We felt that if we were really dominated by clipping losses around the beam splitter, moving by 0.5mm in some direction should show us some change in recycling gain. Since it doesn't, we conclude that the power loss must be somewhere else.
For the record -- indeed the calibration of the offsets is 1 [nm/ct] or 1 [nrad/ct], but that means a 5,000 [ct] offset in translation (X, Y, or Z) is 5 [um] = 0.005 [mm] (not 500 [um] = 0.5 [mm] as stated above). Similarly, the RZ offset of 5,000 [ct] = 5 [urad] = 0.005 [mrad].
Yeah, Mittleman just pointed that out to me. Apparently math is hard in the evenings. We'll give this another try with a bit more actual displacement.
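To keep the units straight, here is the conversion spelled out using the 1 nm/ct and 1 nrad/ct calibrations quoted above:

```python
# Offset calibration quoted above: 1 nm/ct for translations, 1 nrad/ct for RZ
offset_counts = 5000

translation_m = offset_counts * 1e-9   # 1 nm per count
rotation_rad = offset_counts * 1e-9    # 1 nrad per count

print('translation: %.1f um = %.4f mm' % (translation_m * 1e6, translation_m * 1e3))
print('rotation:    %.1f urad = %.4f mrad' % (rotation_rad * 1e6, rotation_rad * 1e3))
# -> 5 um = 0.005 mm and 5 urad = 0.005 mrad, not 0.5 mm as first assumed
```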
Andy, Duncan, Laura, Ryan, Josh,
In alog 28299 Andy reported that we were seeing the ER9 range deteriorate due to glitches every 2 seconds. Figure 1 shows the glitches turning on in DARM at 2016-07-09 05:49:34 UTC.
We think the ALS system not being shuttered and changing state in lock is to blame. Here's why.
Excavator pointed us to a strong coupling between DARM and the ALS channels. Raw data confirmed a correlation: Figure 2 shows the ALS glitches turning on at that same time, and Figure 3 shows that both DARM and ALS are glitching at the same times.
When we investigated the Guardian ALS state for this time (figure 4), it was not in a nominal configuration to start with and that got worse around the time the glitches started. The shutter was not "Active" and at 05:49:34 UTC the ALS X state changed from "-15 locked transition" to "-19 locking WFS" and at that same time the glitches started in DARM. So at some point, ALS X decided it needed to lock the arm (looks like Y followed an hour or so later). We did not track down exactly how the glitches originated or made it into DARM because this seems non-standard enough that a configuration fix should make it go away.
Figure 5 shows a summary page plot for nominal ALS X Guardian behavior from O1. So the shutter should be active and we don't expect to see "locking WFS" come on during an analysis ready state.
It seems like the ALS didn't think that the IFO was locked on IR anymore. The ALS-X state suddenly drops from 'Locked on IR' to 'PLL locked' (state 6 to state 2), then the requested state changes from 'Locked on IR' to 'Lock Arm' (state 6 to state 3). It seems like something went wrong in the communication and the ALS started to try to lock the arm. I don't think it would have helped if it were shuttered, because it would have unshuttered when trying to 'relock'. The attachment is just a plot of the two EPICS channels. As Josh said, the change corresponds to the time the glitching started.
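A rough sketch of how to reproduce that plot of the two EPICS channels is below. The guardian node channel names and GPS times here are assumptions written from memory and may need adjusting; NDS access via gwpy is assumed.

```python
from gwpy.timeseries import TimeSeries

# Channel names are assumed guardian node channels; adjust as needed.
state_chan = 'H1:GRD-ALS_XARM_STATE_N'
request_chan = 'H1:GRD-ALS_XARM_REQUEST_N'
start, end = 1152078000, 1152082000    # placeholder GPS times around the glitch onset

state = TimeSeries.get(state_chan, start, end)
request = TimeSeries.get(request_chan, start, end)

plot = state.plot(label='ALS X state')
ax = plot.gca()
ax.plot(request, label='ALS X requested state')
ax.legend()
plot.show()
```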
Two additional notes:
1. Here are the full Excavator results for the time period: https://ldas-jobs.ligo-wa.caltech.edu/~rfisher/Excavator_Results/Jul11_Tests/1152079217/ (Note: Excavator was run over unsafe channels, as we were running a test of the code and then started to follow up why something in ALS ODC popped up.)
2. We were pointed to the source of the problems by the ALS-X ODC channel indicating ADC overflows on a 2-second interval with precise timing. The ADC overflows reported by the EPICS system at this time had timing fluctuations relative to the actual overflows of +/- 0.6 seconds in just the 5 minutes we looked at by hand.
We have been leaving the green light going into the arms because sometimes it is useful to see the power buildups and green WFS signals as we are trying to understand alignment problems. As people pointed out, we would normally have these shuttered if things were really nominal, and in the shuttered state we don't check whether the green arm is still locked or not, so it would not try to relock and cause the glitches.
This is a first look at the polarization data with the new setup. Some analysis with the previous setup was reported by Aidan in 25442 back in February of this year, with a focus on noise behavior. This time, since we are looking for a cause of the degradation in the power recycling gain, we focused on the time series rather than the spectra.
We saw two behaviors in the polarization data when the PSL was at ~40 W.
Based on the fact that the amount of S-pol decreases as a function of time (which should increase the power recycling gain at the same time, naively speaking), I am inclined to say that the variation in the polarization is not a cause of the smaller power recycling gain.
[An observation from last night, July 13th]
I used a lock stretch from last evening, starting at ~2016-07-13 1:00 UTC and lasting about two hours. The attached two plots show the measured polarization as time series.
At the beginning of the lock stretch, the input power was increased step by step up to 40-ish W. The power recycling gain hit 35 right after completing the power-up operation and then settled to a lower value of 29 or so. The power in P-pol was about a factor of 8 larger than that for the S-pol. Note that this is opposite to what Livingston observed (G1501374-v1) where the S-pol was bigger than the P-pol. Back-propagating the measured power to those at BS's AR surface (the ones propagating from ITMX to BS), we estimated the power ratio to be Pp/Ps ~ 2500. This separation ratio is better than what has been measured at Livingston (G1501374-v1) by a factor of roughly 13.
[Another observation from Jan 31st for comparison]
I also looked at a similar data set from Jan 31st of this year (25442) to see whether the polarization behaved in the same way in the past. That data was taken with a 20 W PSL without the HPO activated. The behavior looked similar to what we observed last night -- a slow decay in the S-pol, with the P-pol larger than the S-pol by a factor of 6-ish. See the attachment below.
Matt later pointed out that there is a possibility that my measurement setup could be unintentionally rotated with respect to the interferometer's polarization plane. In this case, depending on the rotation angle, the S-pol can appear to decrease even though the actual S-pol in the interferometer increases. I did a back-of-the-envelope calculation and confirmed that the measurement setup would need a rotation of about 20 deg to cause such confusion [angle = atan(sqrt(1/8))]. I don't think we have such a big rotation in our setup, so it seems that the S-pol really does decrease at the beginning of the lock stretch.
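The quoted angle just comes from requiring that the P-pol leakage into the "S" diode be comparable to the true S power given the measured Pp/Ps ~ 8, i.e. tan(theta) = sqrt(Ps/Pp). A trivial check:

```python
import math

# Measured power ratio at the table: P-pol about 8x larger than S-pol
ratio_p_over_s = 8.0

# Rotation of the measurement basis needed for P-pol leakage into the "S" diode
# to be comparable to the true S power: tan(theta) = sqrt(Ps/Pp)
theta = math.atan(math.sqrt(1.0 / ratio_p_over_s))
print('required rotation ~ %.1f deg' % math.degrees(theta))   # ~19.5 deg
```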
Here are some photos of our setup.