BSC SWWDs were tripped around 13:00 this afternoon following a magnitude 7.0 EQ in Alaska.
After Dave texted me about the EQ, I logged in, reset all the watchdogs, set all of the ISIs to DAMPED, and turned off sensor correction. This should be a safe enough state until people get back in on Monday.
We are still recovering CDS from Thursday's power outage. All critical systems have been recovered and IFO locking was started yesterday.
Both the Alarms and Alerts systems are operational and sending cell phone texts, but I'm having issues sending alert emails.
Here is a brief summary of what we are currently working on:
Timing:
Timing has two issues. The main one is with the EY timing fanout chassis: its single-mode link to MY (port 15) is showing delay issues. Despite this, h1pemmy appears to have good timing.
The secondary issue is that the atomic clock in the MSR has jumped in time by 0.4 s and needs resyncing.
Disk Failures:
The MSR file cluster has lost 4 disks. Resilvering for 2 of them is ongoing.
h1susauxh2 Power Supply Failure:
One of h1susauxh2's power supplies has failed, FRS36264.
EDC Disconnected Channels:
EDC currently has 574 disconnected channels, relating to auxiliary IOCs which still need to be started.
CDS OVERVIEW
Referencing the CDS Overview attached:
Range 7-segment LED service needs to be restarted.
Range of -1 Mpc shows an outstanding GDS issue
EDC disconnect count mentioned above
Timing RED because of issues covered above
Picket Fence WHITE, needs restarting (see list below)
CDS Alarm RED due to EDC disconnect count
CDS SDF YELLOW because of remote power control and FCES-WIFI issues
Missing Auxiliary EPICS IOCs
List may not be complete:
Picket Fence, End Station HWS, Mains Power (CS, EX), SUS Violin monitor, ncalx, cds load mon h1vmboot1, cal inj exttrig, range led, Observation mode
The EY Geist Watchdog1250 actually worked fine for a few days after the power outage and has only recently failed again, suggesting a possible power supply issue.
Sat Dec 06 10:19:46 2025 Fill completed in 19min 42secs
TITLE: 12/05 Day Shift: 1530-2300 UTC (0730-1500 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY: IFO is currently in IDLE. We were eventually able to get the IMC relocked after a lot of touching up of the optics and the PZT, but it seems like whenever it unlocks, it has more trouble relocking than usual. We ran the dither align scripts for the TMSs and ITMs, and also ran through several full or partial initial alignments trying to get everything looking better. We had issues with basically every part of initial alignment, and after finishing alignment we would still have issues with flashes looking bad, especially for PRMI. Only the green arms and MICH have been consistently looking good.
Matt has started running an inverse filter on the ITMX ring heater over the weekend.
LOG:
- initial alignments and first few relocking states
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:49 | FAC | Eric | FCES | n | Heating issues | 16:58 |
| 15:50 | CDS | Richard | Ends | n | RFM troubleshooting (EX, then EY) | 16:34 |
| 16:12 | FAC | Kim, Nelly | LVEA | n | Mega cleanroom recleaning | 19:17 |
| 16:44 | FAC | Randy | LVEA | n | Moving septum in from high bay and fork lifting around | 19:51 |
| 16:45 | CDS | Richard | LVEA | n | Checking on things | 16:51 |
| 18:05 | JAC | Jennie | LVEA, Prep lab | n | Grab parts in LVEA, then JAC table work | 19:00 |
| 18:06 | CDS | Marc | CER Mezz | n | Checking power supplies | 18:14 |
| 18:10 | VAC | Gerardo | LVEA | n | Checking on purge air | 18:19 |
| 18:17 | CDS | Marc | CER | n | Pulling Beckhoff chassis 3 | 18:40 |
| 18:18 | CDS | Daniel | EX | n | Checking Beckhoff chassis | 19:21 |
| 18:51 | SUS | Betsy, Rahul | High bay | n | Cables | 18:59 |
| 19:00 | PCAL | Rick, Volker | PCAL lab | n | Checking on lab | 19:12 |
| 19:02 | CDS | Marc | CER | n | Putting Beckhoff chassis back in | 19:15 |
| 19:16 | SUS | Rahul | LVEA | n | Putting things in sus storage racks | 19:24 |
| 19:27 | CDS | Daniel, Marc | EX | n | Checking on power supply | 19:46 |
| 19:41 | TCS | TJ | LVEA | n | Turning TCS power supply on | 19:51 |
| 21:20 | VAC | Gerardo | LVEA | n | Getting parts for AWC | 21:29 |
| 21:38 | SQZ | Kar Meng, Eric | Opt Lab | n | OPO work | 00:04 |
| 21:38 | PCAL | Oli | EX | n | Turning on PCAL X laser | 22:38 |
| 21:52 | JAC | Jennie | LVEA | n | Looking for parts | 23:52 |
| 21:57 | VAC | Travis | MY | n | Parts transportation | 22:21 |
| 22:17 | - | Matt, Jason | LVEA | n | Check on PSL env and plug back in IMC whitening cable | 22:21 |
| 23:29 | PSL | Jason | LVEA | n | Resetting PSL environmental settings | 23:34 |
I used Tony's and my statecounter.py to take a look at the past year of data using minute trends. Minute and second trends do some rounding that I had to account for in my search: ETMX goes from -8.9 to +6.05425 during a lock acquisition, which averages to ~ -1.42. I ended up searching the channel H1:SUS-ETMX_L3_LOCK_BIAS_OFFSET for values above and below -2.0 with the following calls:
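(The actual calls aren't reproduced here. As a stand-in, here is a minimal numpy sketch of the same kind of threshold-segment search on minute-trend data; the function, its arguments, and the fake data are my own illustration, not the statecounter.py interface.)

```python
import numpy as np

def threshold_segments(data, gps_start, dt=60.0, threshold=-2.0, above=True):
    """Find contiguous spans where minute-trend `data` is above/below `threshold`.
    Returns (idx_start, idx_stop, gps_start, gps_stop, duration) tuples,
    mirroring the outfile.txt format described below."""
    mask = data > threshold if above else data <= threshold
    edges = np.diff(mask.astype(int))          # +1 where a span starts, -1 where it ends
    starts = np.where(edges == 1)[0] + 1
    stops = np.where(edges == -1)[0] + 1
    if mask[0]:
        starts = np.r_[0, starts]
    if mask[-1]:
        stops = np.r_[stops, mask.size]
    segments = []
    for i0, i1 in zip(starts, stops):
        t0, t1 = gps_start + i0 * dt, gps_start + i1 * dt
        segments.append((i0, i1, t0, t1, t1 - t0))
    return segments

# Fake data standing in for H1:SUS-ETMX_L3_LOCK_BIAS_OFFSET minute trends
fake = np.where(np.sin(np.linspace(0, 20, 1000)) > 0, 6.05425, -8.9)
for idx, seg in enumerate(threshold_segments(fake, gps_start=1400000000)):
    print(idx, (seg[0], seg[1]), seg[2], seg[3], seg[4])
```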
The search gave me an outfile.txt file full of results in the format "idx (data_idx_start, data_idx_stop) gpsstart gpsstop duration", which I did some brief analysis on (a parsing sketch follows the numbers below), yielding:
Percentage of the time the Bias offset was [+]: 43.70 % *Past locked LOWNOISE_ESD_ETMY
Percentage of the time the Bias offset was [-]: 56.25 % *Between PREP_FOR_LOCKING and LOWNOISE_ESD_ETMY
Total duration in [+]: 205.3 [Days], [-]: 159.5 [Days] over 365.0 [Days]
Data timespan missing due to minute rounding and FW restarts 0.1965 [Days] (0.054 % of total time).
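For completeness, a minimal sketch of the kind of bookkeeping behind the totals above, assuming one results file per sign of the offset (the file names and the last-column parsing are my assumptions, not the actual analysis):

```python
def total_days(outfile):
    """Sum the duration column (seconds) of an outfile.txt-style results file."""
    total = 0.0
    with open(outfile) as f:
        for line in f:
            if line.strip():
                total += float(line.split()[-1])  # duration is the last field
    return total / 86400.0

pos_days = total_days("outfile_positive.txt")   # hypothetical file names
neg_days = total_days("outfile_negative.txt")
span_days = 365.0
print(f"[+] {pos_days:.1f} d ({100 * pos_days / span_days:.2f} %), "
      f"[-] {neg_days:.1f} d ({100 * neg_days / span_days:.2f} %)")
```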
Where the BIAS_OFFSET is changed from negative to positive has shifted a little over the past year, but it's always during one of the final states of ISC_LOCK, and it's always reset to negative in PREP_FOR_LOCKING.
I haven't done the same search for ETMY, but looking through where it's set in ISC_LOCK and at ndscope, I can say that ETMY spends the large majority (>90%) of its time at -4.9. Specifically, it only leaves this value for about 4 min 23 sec per acquisition: it gets changed to +9.3 during LOWNOISE_ESD_ETMY but is brought back to -4.9 in the next state, LOWNOISE_ESD_ETMX, as we switch back.
Jenne sent me a Mattermost message this afternoon pointing out an odd "oscillation" in Amp2 output power, so I took a look. Sure enough, it was doing something weird. Ever since the power outage, on a roughly 2-hour period, the output power would drop slightly and come back up. I looked at items that directly impact the amplifier: Water/amp/pump diode temperatures, pump diode operating currents, and pump diode output. Everything looked good with the temperatures and operating currents, but the pump diode output for 3 of the 4 pump diodes (1, 2, and 4) showed the same periodic behavior as the amp output; every ~2 hours, the pump diode output would spike and come back down, causing the "oscillation" in Amp2 output power. See first attachment. But what was causing this behavior?
At first, I couldn't think of anything beyond, "Maybe the pump diodes are finally starting to fail..." Gerardo, who was nearby at the time, reminded me that during the last power outage the H2 enclosure had an odd sound coming from its environmental control system, and that it was not showing up on the control panel for the system; the control panel showed everything as OFF, but when Randy climbed on top of the enclosure he found one of the anteroom fans moving very slowly and haltingly, and making a noise as it did so. Turning the fans off at the control panel fixed the issue at the time. So, I took a look at the signal for the PEM microphone in the PSL enclosure, which Gerardo also reminded me of, and sure enough, the mic was picking up more noise than it was before the power outage (see 2nd attachment). Around the same time, from the front of the Control Room Sheila noted that Diag_Main was throwing an alarm about the PSL air conditioner being ON. It had been doing this throughout the day, but every time I checked the AC temperature it was reading 70 °F, which was a little lower than normal but not as low as it reads when the AC is actually on (which is down around 67 °F or so). This time, however, when I checked the AC temperature it was reading 68 °F. Huh. So I pulled up a trend of the PSL enclosure temps and, sure enough, every 2 hours it looks like the AC comes on, drops the temperature a little, then turns off, and this behavior lines up with the "oscillation" in Amp2 output (see 3rd attachment; not much data for the enclosure temps since those come in through Beckhoff, which was only recovered earlier this afternoon, but enough).
I went out and turned every PSL environmental item (HEPA fans, ACs, and make-up air) ON then OFF again and placed the enclosure back in Science Mode (HEPA fans and AC off, make-up air at 20%). We won't know for sure if this cured the issue, since it's been happening on a 2-hour period, but the PEM microphone in the enclosure shows promise: it is no longer picking up the extra noise and is back to where it was before the power outage (see final 2 attachments). Also encouraging, at the time of writing the AC temperatures are above the temperature where the ACs would kick on before I cycled the environmental controls. I'll continue to monitor this over the weekend.
Dave, Jonathan, Tony, operators, ...
This is a compilation of recovery actions based on a set of notes that Tony took while helping with recovery. It is meant to augment the already existing log entries 88381 and 88376. Times are local time.
Thurs 4 Dec
At 12:25 PST power went out. Tony and Jonathan had been working to shut down some of the systems so that they could have a graceful power off. The UPS ran out around 1:17 PST. At 2:02 the power came back.
Tony checked the status of the network switches, making sure they all powered on and we could see traffic flowing.
We started up the DNS/DHCP servers, as well as made sure the FMCS system was coming up.
Then we got access to the management box. We did this with a local console setup.
The first step was to get the file servers up; we needed both /ligo and /opt/rtcds. We started with /opt/rtcds as that is what the FE computers need. We turned on h1fs0 and made sure it was exporting file systems. h1fs0 was problematic for us. The /opt/rtcds file system is a ZFS file system. We think that the box came up, exported the /opt/rtcds path, and then got the zpool ready to use. In the meantime another server came up and wrote to /opt/rtcds. This appears to have happened before the ZFS filesystem could be mounted, so it created directories in /opt/rtcds and kept the ZFS filesystem (which had the full contents of /opt/rtcds) from mounting. When we noticed this we deleted the /opt/rtcds contents on h1fs0, made sure the ZFS file system mounted, and then re-exported things. This gave all the systems a populated /opt/rtcds. We had to reboot everything that had started, as those machines now had stale file handles. There were still problems with the mount: file system performance was very slow over NFS, while direct disk access was fast when testing on the file server. We fixed this the next day after ruling out network congestion and errors.
We then turned on the h1daq*0 machines to make it possible to start recording data. However they would need a reboot to clear issues with /opt/rtcds, and would need front end systems up in order to have data to collect.
Then we went to get /ligo started. We logged onto cdsfs2. As a reminder, cdsfs2,3,4,5 are a distributed system holding the files. We don't start this up much, so we had forgotten how; our notes hinted at it, and Dan Moraru helped here. What we had to do was tell pacemaker (pcs) to leave maintenance mode, after which it started the cdsfsligo IP address. Dan did a zpool reopen to fix ZFS errors. Then we restarted the nfs-server service. At this point we had a /ligo file system. We updated the notes on the cdswiki as well so that we have a reminder for next time. The system was placed back into maintenance mode (the failover is problematic).
The next step was to get the boot server running. This is h1vmboot5-5, which lives on h1hypervisor. It is a KVM-based system that does not use Proxmox like our VM cluster does, so it took us a moment to get on; we ended up going in via the console and doing a virsh start on h1vmboot5-5.
Dave started front-ends at this point. Operators were checking the IOC for power.
We started the 0 leg of the daq.
We started the Proxmox cluster up and began starting LDAP and other core services. To get the VMs to start we had to do some work on cdsfs0, as the VM images are stored on a share there. This was an unmount of the share, starting ZFS on cdsfs0, and a remount.
LDAP came up around 4:31. This allowed users to start logging into systems as themselves.
Turned on the following VMs
We powered on the epics gateway machine.
We needed to reboot h1daqscript0 to get the mounts right and to start the daqstat IOC. This was around 5pm. The overview showed that TW1 was down. We needed to bring an interface up on h1daqdc1 and start a cds_xmit process so that data would flow to TW1. We got TW1 working around 5:15pm.
Powered on h1xegw0 and fmcs-epics. Note that with fmcs-epics the button doesn't work (it is a Mac mini with a rack mount kit); you need to go to the back and find the power button there.
Reviewing systems, we turned off the old h1boot1 (it has no network connections, so it doesn't break anything when powered on, but it should be cleaned up). Powered on the ldasgw0,1 machines so that h1daqnds could mount the raw trend archive.
The epics gateways did not start automatically, so we went onto cdsegw0 around 5:45 and ran the startup procedure.
The wap controller did not come up. Something is electrically wrong (maybe a failed power supply).
At 6:01 Dave powered down susey for Fil. It was brought back around 6:11.
Throughout this, Dave was working on starting models. /opt/rtcds was very slow and models would time out while reading their safe.snap files. Eventually Dave got things going.
Patrick, Tony, and Jonathan did some work on the cameras while Dave was restarting systems. We picked a camera and cycled its network switch port to power cycle it, then restarted the camera service. However, this did not result in a video stream.
Friday
Jonathan found a few strange traffic flows while looking for slowdowns, though they were not enough to cause the slowdowns we had. h1daqgds1 did not have its broadcast interface come up and was transmitting the 1s frames out over its main interface and through to the router, so this was disabled until it could be looked at. The dolphin manager was sending more traffic to all the daqd boxes trying to establish the health of the dolphin fabric; it was not a complete fabric since dc0 had not been brought back up, as it isn't doing anything at this point. In response we started dc0 to remove another source of abnormal traffic from the network.
These were not enough to explain the slowdowns. Further inspection showed that there was no abnormal traffic going to/from h1fs0 and that, other than the above noted traffic, there was no unexpected traffic on the switch.
It was determined the slowdowns were strictly NFS related. We changed cdsws33 to get /opt/rtcds from h1fs1; this was done to test the impact of restarting the file server. After testing restarts with h1fs1 and cdsws33, we stopped the zfs-share service on h1fs0 (so there would not be competing NFS servers) and restarted nfs-kernel-server. In general no restarts were required; access just returned to normal speed for anything touching /opt/rtcds.
After this, TJ restarted the guardian machine to try and put it into a better state. All nodes came up except one, which he then restarted.
Dave restarted h1pemmx, which had been problematic.
We restarted h1digivideo2,4,5,6. This brought all the cameras back.
Looking at the h1daqgds machines, the broadcast interface had not come back, so starting the enp7s0f1 interfaces and restarting the daqd fixed the transmission problems; the 1s frames are flowing to DMT. At Dan's request Jonathan took a look at a DMT issue after starting h1dmt1,2,3: the /gds-h1 share was not mounted on h1dmtlogin, so nagios checks were failing due to not having that data available.
The wap controller was momentarily brought to life. Its CMOS battery was dead, so it needed intervention to boot and needed a new date/time. However, it froze while saving settings.
The external MEDM was not working. Looking at the console for lhoepics, its CMOS battery had failed as well and it needed intervention to boot.
After starting ldasgw0,1 yesterday we were able to mount the raw minute trend archive to the nds servers.
Fw2 needed a reboot to reset its /opt/rtcds.
We also brought up more of the test stand today to allow Ryan Crouch and Rahul to work on the sustriple in the staging building.
A note: things using Kerberos for authentication are running slower. We are not sure why. We have reached out to the LIAM group for help.
I was able to get the wap controller back by moving its disks to another computer.
This is a reminder that we need to rebuild the controller, and do it in a vm.
TITLE: 12/05 Day Shift: 1530-2300 UTC (0730-1500 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY: We've continued to recover after yesterday's power outage. The major milestone was getting the /opt/rtcds file server reset and running back at normal speed. This allowed us to get all guardian nodes up and running and to start other auxiliary scripts and IOCs. The Beckhoff work wrapped up around noon and we were able to get the IMC locked and ready to start initial alignment just before 13:00 PT. We then ran into some IMC whitening issues, which have since been solved. The main issue at this point is that the COMM and DIFF beatnotes are very low. Investigation is ongoing. See many other alogs for specific information on the power outage recovery.
While TJ was running initial alignment for the green arms, I noticed that the ALS X beam on the camera appeared to be too far to the right side of the screen. The COMM beatnote was at -20 dBm, when it is normally between -5 and -10 dBm. I checked both the PR3 top mass osems and the PR3 oplevs. The top mass osems did not indicate any significant change in position, but the oplev seems to indicate a significant change in the yaw position. PR3 yaw was around -11.8 but then changed to around -11.2 after the power outage. It also appears that the ALS X beam is closer to its usual position on the camera.
I decided to try moving PR3 in yaw. I stepped it 2 urad, which brought the oplev back to -11.8 in yaw and the COMM beatnote to -5 dBm. Previous slider value: -230.1, new slider value: -232.1.
The DIFF beatnote may not be great yet, but we should wait for beamsplitter alignment before making any other changes.
Actually, this may not have been the right thing to do. I trended the oplevs and top mass osems of ITMX and ETMX and compared their values during today's initial alignment before moving PR3, to the last initial alignment we did before the power outage. They are mostly very similar except for ETMX yaw.
| | P then | P now | Y then | Y now |
|---|---|---|---|---|
| ITMX oplev | -7.7 | -7.8 | 5.8 | 5.6 |
| ETMX oplev | 2.7 | 3.1 | -11.5 | -3.1 |
I put the PR3 yaw slider back to its previous value of -230.1 until we can figure out why ETMX yaw has moved so much.
I moved PR3 yaw back to -231.9 on the slider. This allowed us to see PRMI flashes on POPAIR. We can revisit whether we really want to keep this alignment of PR3 on Monday.
Closes FAMIS38811, last checked in alog87797.
We can see yesterday's outage on every plot as expected, as well as the BRS issues from October 21st. There doesn't look to be any trend in the driftmon, and ETMY looks to have still been slowly increasing in temperature before the outage. The aux plot looks about the same as during the last check, except that ETMY DAMP CTL looks to have come back at a different, lower spot; the EY SEI configuration looks like it is not quite fully recovered, so that may be why.
All systems have been recovered enough to start relocking at this point. We will not be relocking the IFO fully at this time though since the ring heaters have been off and their time to thermalize would take too long. We will instead be running other measurements that will not need the ring heaters on.
There is a good chance that other issues will pop up, but we are starting initial alignment now.
[Oli, Jenne]
Oli brought all of the IMC optics back to where they had been yesterday before the power outage. We squiggled MC2 and the IMC PZT around until we could lock on the TEM00 mode, and let the WFS (with 10x the usual gain) finish the alignment. We offloaded the IMC WFS using the IMC guardian. We then took the IMC to OFFLINE, and moved MC1 and the PZT so that the DC position on the IMC WFS matched a time from yesterday when the IMC was offline before the power outage. We relocked, let the alignment run again, and again offloaded using the guardian. Now the IMC is fine and ready for initial alignment soon.
I restarted all the dust monitor IOCs, they all came back nicely. I then reset the alarm levels using the 'check_dust_monitors_are_working' script.
Related: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=87729
We disconnected everything from the ISS array installation spare unit S1202965 and stored it in the ISS array cabinet in the vac prep area next to the OSB optics lab. See the first 8 pictures.
The incomplete spare ISS array assy originally removed from LLO HAM2 (S1202966) was moved to a shelf under the work table right next to the clean loom in the optics lab (see the 9th picture). Note that one PD was pulled from that and was transplanted to our installation spare S1202965.
Metadata for both 2965 and 2966 were updated.
ISS second array parts inventory https://dcc.ligo.org/E2500191 is being updated.
Rahul and I cleared the optics table so Josh and Jeff can do their SPI work.
Optics mounts and things were put in the blue cabinet. Mirrors, PBS and lenses were put back into labeled containers and in the cabinet in front of the door to the change area.
Butterfly module laser, the LD driver and TEC controller were put back in the gray plastic bin. There was no space in the cabinets/shelves so it's put under the optics table closer to the flow bench area.
Single channel PZT drivers were put back in the cabinet on the northwest wall in the optics lab. Two channel PZT driver, oscilloscopes, a function generator and DC supplies went back to the EE shop.
The OnTrack QPD preamp, its dedicated power transformer, LIGO's LCD interface for the QPD, and its power supply were put in a corner of the bottom shelf of the cabinet on the southwest wall.
Thorlabs M2 profiler and a special lens kit for that were given to Tony who stored them in the Pcal lab.
aLIGO PSL ISS PD array spare parts inventory E2500191 was updated.
I was baffled to find that I hadn't made an alog about it, so here it is. These, as well as other alogs written by Jennie, Rahul, or myself since May-ish 2025, will be added to https://dcc.ligo.org/LIGO-T2500077.
Multiple PDs were moved so that there's no huge outlier in the position of the PDs relative to the beam. When Mayank and Siva were here, we used to do this using an IR camera to see the beam spot position. However, since then we have found that using the PD output itself to search for the edge of the active area is easier.
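As an illustration of that edge search, here is a minimal sketch (assumed data layout, not the actual analysis code) that estimates a PD's apparent edges from a 1-D scan as the half-maximum crossings of its output:

```python
import numpy as np

def pd_edges(position_mm, pd_output):
    """Estimate the edges and apparent center of a PD from a 1-D beam-position
    scan, taken as the positions where the output crosses half of its maximum."""
    position_mm = np.asarray(position_mm, dtype=float)
    pd_output = np.asarray(pd_output, dtype=float)
    above = np.where(pd_output >= 0.5 * pd_output.max())[0]
    left, right = position_mm[above[0]], position_mm[above[-1]]
    return left, right, 0.5 * (left + right)

# Toy scan: a 3 mm wide "active area" centered at 1.2 mm
x = np.linspace(-5, 5, 201)
y = np.where(np.abs(x - 1.2) < 1.5, 1.0, 0.02)
print(pd_edges(x, y))
```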
After the adjustments were made, the beam going into the ISS array was scanned vertically as well as horizontally while the PD outputs were recorded. See the first attachment. There are two noteworthy points.
1. PDs "look" much narrower in YAW than in PIT due to 45 degrees AOI only in YAW.
Relative alignment matters more for YAW because of this.
2. YAW scan shows the second peak for most of PDs but only in one direction.
This was observed in Mayank/Siva's data too, but it wasn't understood back then. It turns out to be a feature of the design. The PDs are behind an array plate as in the second attachment (the plate itself is https://dcc.ligo.org/D1300322). Red lines show the nominal beam lines, and they're pretty close to one side of the conical bores on the plate. Pink and blue arrows represent the beam shifted in YAW.
If the beam is shifted too much "to the right" in the figure (i.e. pink), the beam is blocked by the plate, but if the shift is "to the left" (i.e. blue) it is not. It turns out that the beam can graze along the bore, and when that happens, part of the broad specular reflection hits the diode.
See the third attachment; this was shot when PD1 (the rightmost in the picture) was showing the second peak while PD2 didn't.
(Note that the v2 plate, which we use, is an improvement over the v1, which actually blocked the beam even when the beam was correctly aligned. However, there's no obvious reason things are designed this way.)
We used a PZT-driven mirror to modulate the beam position, which was measured by the array QPD connected to an ON-TRAK OT-301 preamp, as explained in this document in T2500077 (though it is misidentified there as an OT-310).
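For reference, a minimal sketch of how a signed coupling number like those tabulated below could be extracted from such a measurement, assuming you have the PD relative-intensity time series and the QPD-calibrated beam position; the single-line demodulation and all names here are my assumptions, not the actual analysis:

```python
import numpy as np

def jitter_coupling(rin, beam_pos_m, fs, f_dither):
    """Coupling [RIN/m] at the dither line: complex-demodulate both signals at
    f_dither and take the ratio; the sign follows the relative phase."""
    t = np.arange(len(rin)) / fs
    lo = np.exp(-2j * np.pi * f_dither * t)             # local oscillator
    rin_amp = 2 * np.mean(np.asarray(rin) * lo)         # complex amplitude of the RIN line
    pos_amp = 2 * np.mean(np.asarray(beam_pos_m) * lo)  # complex amplitude of the beam motion
    ratio = rin_amp / pos_amp
    return np.abs(ratio) * np.sign(np.cos(np.angle(ratio)))

# Toy example: 10 Hz dither, 5 um beam motion, coupling of -10 RIN/m plus noise
fs, f0 = 1024.0, 10.0
t = np.arange(0, 10, 1 / fs)
pos = 5e-6 * np.sin(2 * np.pi * f0 * t)
rin = -10.0 * pos + 1e-7 * np.random.randn(t.size)
print(jitter_coupling(rin, pos, fs, f0))   # ~ -10
```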
See the fourth attachment, where relatively good (small/acceptable) coupling was obtained. The numbers measured this time vs April 2025 (Mayank/Siva numbers) vs February 2016 (T1600063-v2) are listed below. All in all, horizontal coupling was better in April but vertical is better now. Both now and Apr/2025 are better than Feb/2016.
| PD number | Horizontal, Now [RIN/m] | Horizontal, Apr/2025 [RIN/m] (phase NA) | Horizontal, Feb/2016 [RIN/m] (phase NA) | Vertical, Now [RIN/m] | Vertical, Apr/2025 [RIN/m] (phase NA) | Vertical, Feb/2016 [RIN/m] (phase NA) |
|---|---|---|---|---|---|---|
| 1 | 6.9 | 0.8 | 20 | -0.77 | 34.1 | 11 |
| 2 | 7.1 | 2.7 | 83 | 5.1 | 2 | 25 |
| 3 | 8.2 | 5.5 | 59 | 2.2 | 4.4 | 80 |
| 4 | 8.8 | 2.3 | 33 | 0.30 | 1.1 | 21 |
| 5 | -19 | 5.1 | 22 | 11 | 12.3 | 56 |
| 6 | -14 | 12.9 | 67 | 16 | 30.4 | 44 |
| 7 | -18 | 10.2 | 27 | 2.9 | 42.7 | 51 |
| 8 | -19 | 5.3 | 11 | 12 | 52.1 | 54 |
Phase of the jitter coupling: You can mix and match to potentially lower jitter coupling further.
Only in "Now" column, the coupling is expressed as signed numbers as we measured the phase of the array PD output relative to the QPD output. Absolute phase is not that important but relative phase between the array PDs is important. The phase is not uniform across all diodes when the beam is well aligned. This means that you can potentially mix and match PDs to further minimize the jitter coupling.
Using this particular measurement as an example, if you choose PD1/2/3/4 as the in-loop PDs, the jitter coupling of the combined signal is roughly mean(6.9, 7.1, 8.2, 8.8) = 7.8 RIN/m horizontally and mean(-0.77, 5.1, 2.2, 0.3) = 1.7 RIN/m vertically.
However, if you choose PD1/3/4/7 (in analog land), the coupling is reduced to mean(6.9, 8.2, 8.8, -18) = 1.5 RIN/m horizontally and mean(-0.77, 2.2, 0.3, 2.9) = 1.2 RIN/m vertically.
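Following that logic, here is a small sketch that just redoes the arithmetic above and brute-forces all 4-of-8 combinations of the "Now" numbers to see which analog sum would minimize the residual coupling (the equal |H|+|V| weighting is an arbitrary choice of mine):

```python
from itertools import combinations

# "Now" column of the table above: (horizontal, vertical) coupling in RIN/m per PD
now = {1: (6.9, -0.77), 2: (7.1, 5.1), 3: (8.2, 2.2), 4: (8.8, 0.30),
       5: (-19, 11), 6: (-14, 16), 7: (-18, 2.9), 8: (-19, 12)}

def residual(pds):
    """Mean coupling (H, V) of the summed signal from the chosen PDs."""
    h = sum(now[p][0] for p in pds) / len(pds)
    v = sum(now[p][1] for p in pds) / len(pds)
    return h, v

best = min(combinations(now, 4), key=lambda c: sum(abs(x) for x in residual(c)))
print("PD1/2/3/4:", residual((1, 2, 3, 4)))   # ~(7.8, 1.7), as quoted above
print("PD1/3/4/7:", residual((1, 3, 4, 7)))   # ~(1.5, 1.2), as quoted above
print("lowest |H|+|V| combination:", best, residual(best))
```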
You don't need to pre-determine the combination now; you should tune the alignment and measure the coupling in chamber to decide if you want a different combination than 1/2/3/4.
Note that when monotonically scanning the beam position in YAW (or PIT) edge to edge of the PDs, some PDs showed more than one phase flip. When the beam is apparently clipped at the edge (and thus the coupling is huge), all diodes show the same phase as expected, but that's not necessarily the case when the beam is well aligned, as you saw above. The reason for the sign flips when the beam is far from the edge of the PD is unknown, but there is presumably something like particulates on the PD surface.
The QPD was physically moved so the beam is very close to the center of the QPD. This can be used as a reference in chamber when aligning the beam to the ISS array.
After this, we manually scanned the beam horizontally and measured the QPD output. See the 5th attachment; the vertical axis is directly comparable to the normalized PIT/YAW of the CDS QPD module, assuming that the beam size on the QPD in the lab is close enough to that of the real beam in chamber (which it should be).