TITLE: 10/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Austin
SHIFT SUMMARY:
We dropped out of observing for a quick ring heater change from 17:34:23 to 17:35:25 UTC. We dropped out of observing again starting at 20:39:26 UTC to revert the ring heater changes and for Dave to restart picket-fence, which had been frozen, but unfortunately we lost lock soon after at 20:46 UTC.
While relocking I had to play the DOF dance a little with the Y-arm, turning off WFS_DOF_1_P to get ALS to lock. I couldn't get any flashes on DRMI or PRMI and CHECK_MICH lost lock, so I started an initial alignment at 21:37 UTC and finished at 22:05 UTC. Robert and Ryan went into the LVEA to investigate the PSL airflow while we were relocking, and Robert did a sweep as he left.
We reacquired NLN at 22:46 UTC and are just waiting on ADS to converge before going into observing. PI modes 24 and 31 rang up early on, but they were quickly damped by the guardian.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:02 | FAC | Randy | MidX | N | Delivery, 1445 | 15:11 |
| 15:37 | FAC | Karen | Optics lab, vac prep | N | Tech clean | 16:12 |
| 18:16 | VAC | Jordan | VPW | N | Move pumps to mech room | 18:29 |
| 20:28 | FAC | Fil | MidY | N | Inventory | 21:58 |
| 20:36 | VAC | Jordan, Travis | EndY | N | Mech room pumps | 21:55 |
| 21:55 | VAC | Jordan, Travis | EndX | N | Pumps | 22:19 |
| 22:01 | PEM | Robert | Outside PSL enclosure | N | Check mouse traps/guards | 22:18 |
| 22:19 | PSL | RyanS, Robert | LVEA, PSL | N | Check PSL airhandler | 22:34 |
FAMIS 25959, last checked in alog 73124
Script reports elevated noise for the following sensors:
ITMX_ST2_CPSINF_H3
ITMX_ST2_CPSINF_V1
ITMX_ST2_CPSINF_V3
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1380574010. No clear cause; there was a DCPD saturation beforehand.
We completed tracking the final unlabeled power runs in Mezzanine rack H1-VDC-C6. The old cable identification was insufficient to determine exactly where the power was being delivered. Last Tuesday we verified the ISC R3 & R5 supplies, +/-24V and +/-18V (VDC-C6 U30 & U26). This week we verified that the old SQZ1 has a +/-18V run which is not connected to any chassis at the moment, but is powered on (VDC-C6 U22). The remaining unidentified power runs (VDC-C6 U18, U14, & U10) were identified as supplying H1-ISC-R2 +/-18V and H1-ISC-R1 +/-24V and +/-18V. The rack document has been updated and placed in the DCC. Mystery solved!
Both end station pumps are running smoothly. The corner station pump (1194) must have overheated and kicked the breaker. One of the felt filters disintegrated and clogged the filter/muffler; this was cleaned out and put back together, and a newer felt filter was added. The graphite vanes seemed to be in fair condition. Pump 1194 was put back into service; I will check to see if it continues to run smoothly this afternoon. More rebuild kits need to be ordered.
Ryan C, Camilla. We popped into commissioning to adjust the ETM ring heaters up from 1.0W to 1.1W/segment at 17:35 UTC and accepted the changes in SDF. We are back in observing while the slow thermalization happens, and plan to revert to nominal in 3 hours, at 20:35 UTC. This week's past ring heater tests are 73093 and 73272.
Circulating power decreased by 3kW (ndscope attached) and kappa_c decreased by 0.3%. The HOM peaks moved down in frequency as expected. High-frequency noise slightly decreased, but this could be due to the reduction in circulating power. The 52Hz jitter noise in DARM shows an increase.
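The direction of the HOM shift follows from the standard transverse-mode-spacing relation; quoted here only as a reminder (the cavity g-factors, not measured values, are the inputs):

```latex
% Higher-order (transverse) mode spacing of a two-mirror cavity of length L
% with mirror radii of curvature R_1, R_2; a ring-heater change alters the
% effective RoC of the test mass, changing g_1 g_2 and shifting where the
% HOM peaks sit relative to the carrier.
\[
  f_{\mathrm{TMS}} = \frac{c}{2L}\,\frac{1}{\pi}\,
      \arccos\!\left(\pm\sqrt{g_1 g_2}\right),
  \qquad g_i = 1 - \frac{L}{R_i}
\]
```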
Thu Oct 05 10:10:45 2023 INFO: Fill completed in 10min 41secs
Jordan confirmed a good fill curbside
Picket fence has not been updating recently. Ryan restarted the process on nuc5, which did not fix the issue.
Attached MEDM shows the 08:24 restart, a server uptime of only 2 mins, and no update for 24 mins.
We are investigating.
I restarted the service on nuc5 at 13:47. It has been running with no issues since that time (51 minutes).
TITLE: 10/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 5mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
TITLE: 10/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
- 23:34 - inc. 5.4 EQ from Japan
- 2:02 - inc. 5.4 EQ from Papua New Guinea; 2:11 EQ mode activated; 2:16 - another 6.1 in Japan, successfully rode out (peakmon peaked at 2000)!
- 2:10 - Superevent S231005j
- EX saturations @ 3:11
- 6:39 - Looks like another EQ from Japan, 5.6 in magnitude
LOG:
No log for this shift.
Today, during commissioning time between 12:30 and 3:40 PM, I did the following.
1) Impulse injections at EX - this was to investigate whether the coupling at EX will be greatly reduced by damping the cryobaffle or if there is a secondary coupling site.
2) Increasing the 9.8 Hz ITM bounce mode amplitudes by injecting into M0 Test V for the ITMs - this was to make sure that they were not responsible for the ~10 Hz harmonics that we sometimes see in the 20-100 Hz band.
3) Injecting in the LVEA to try and find the mystery vibration coupling (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72778).
It was clear that I did not produce noise in the 20-100 Hz band with my ITM bounce mode injections, but I haven't yet analyzed the data from the other two tests.
Started RH changes at 19:35 UTC: turned the ITM RHs up by +0.1W and the ETM RHs down by -0.1W. Plots of DARM and the HOMs after 2 hours are attached, along with an ndscope trend. High-frequency noise increased and the HOMs moved higher in frequency. Circulating powers increased. After 3 hours, at 22:35 UTC, I turned the RHs back to nominal.
During this time Robert was doing some PEM injections, and we had been locked at NLN for 5 hours, so we weren't completely thermalized at the start.
I added a 6600 to 6800Hz BLRM as H1:OAF-RANGE_RLP_8 to monitor high-frequency noise (see attached); it didn't prove very useful. H1:SQZ-DCPD_RATIO_6_DB shows the trend better.
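For reference, a minimal offline sketch of the same kind of band-limited RMS using gwpy, assuming NDS/frame access; the input channel name below is illustrative rather than the one actually feeding the online monitor:

```python
# Minimal offline band-limited RMS (BLRMS) sketch with gwpy, mirroring what
# the online H1:OAF-RANGE_RLP_8 BLRM is meant to track. Assumes NDS/frame
# access; the input channel name here is illustrative.
from gwpy.timeseries import TimeSeries

channel = "H1:OMC-DCPD_SUM_OUT_DQ"                   # illustrative input channel
start, end = "2023-10-04 19:30", "2023-10-04 23:00"  # placeholder window

data = TimeSeries.get(channel, start, end)
blrms = data.bandpass(6600, 6800).rms(60)            # 6600-6800 Hz band, 60 s RMS stride

plot = blrms.plot()
ax = plot.gca()
ax.set_ylabel("BLRMS [counts]")
ax.set_title("6600-6800 Hz band-limited RMS")
plot.savefig("hf_blrms_trend.png")
```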
| Optic | Nominal (W/segment) | Test values 19:35 to 22:35 UTC |
|---|---|---|
| ITMX | 0.44 | 0.54 |
| ITMY | 0.0 | 0.1 |
| ETMX | 1.0 | 0.9 |
| ETMY | 1.0 | 0.9 |
Last week's test: 73093. Future tests: we can't go the other direction, as ITMY started with 0W RH. We could try commonly turning up just the ETMs in a future while-observing test, as we had decreased high-frequency noise with both ETM RHs at ~1.4W/seg in February (67501), plot attached.
Updated plot showing thermalization back to nominal RH settings attached. Main changes were higher circulating power and more high frequency noise during the test.
J. Oberling, F. Mera
This morning we swapped the failing laser in the ITMx OpLev with a spare. The first attached picture shows the OpLev signals before the laser swap, the 2nd is after. As can be seen there was no change in alignment, but the SUM counts are now back around 7000. I'll keep an eye on this new laser over the next couple of days.
This completes WP 11454.
J. Oberling, R. Short
Checking on the laser after a few hours of warm-up, I found the cooler to be very warm, and the box housing the DC-DC converter that powers the laser (steps ~11 VDC down to 5 VDC) was extremely warm. Also, the SUM counts had dropped from the ~7k we started at to ~1.1k. Since we had just installed a new laser, my suspicion was that the DC-DC converter was failing. Checking the OpLev power supply in the CER, I found it was providing 3A to the LVEA OpLev lasers; this should only be just over 1A, which is further indication that something is up. Ryan and I replaced the DC-DC converter with a spare. Upon powering up with the new converter the current delivered by the power supply was still ~3A, so we swapped the laser with another spare. With the new laser the delivered current was down to just over 1A, as it should be. The laser power was set so the SUM counts are still at ~7k, and we will keep an eye on this OpLev over the coming hours/days. Both lasers, SN 191-1 and SN 119-2, will be tested in the lab; my suspicion is that the dying DC-DC converter damaged both lasers and they will have to be repaired by the vendor; we will see what the lab testing says. The new laser SN is 199-1.
I'm noticing that as the night progresses, the sum counts are slowly going up, starting from ~6200 and now at ~7100. Odd.
ITMX OPLEV sum counts are at about 7500 this morning.
Sum counts around 7700 this morning, they're still creeping up
At the start of commissioning at 19:01 UTC, we went out of observing to turn the CO2X power up from 1.53 to 1.67W. The power had dropped 7% since May; see 72943. I've edited lscparams and reloaded the TCS_ITMX_CO2_PWR guardian, expecting we'll want to keep this change.
Over the last 2 weeks CO2X power has dropped 0.04W (2%), plot attached. We should keep an eye on this and maybe bump up the requested power again in the coming weeks. We plan to replace the CO2X chiller when a new one arrives. We may also want to replace the laser with the re-gassed laser.
First ENDX Station Measurement:
During the Tuesday maintenance, the PCAL team (Rick Savage & Tony Sanchez) went to ENDX with the Working Standard Hanford, aka WSH (PS4), and took an end station measurement.
But the upper PCAL beam had been moved to the left by 5 mm last week; see alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72063.
We liked the idea of doing a calibration measurement with the beam off to the left just to try and see the effects of the offset on the calibration.
Because of limitations of our analysis tool, which names files with a date stamp, the folder name for this non-nominal measurement is tD20230821 even though it actually took place on Tuesday 2023-08-22.
Beam spot picture of the upper beam 5 mm to the left on the aperture
Martel_Voltage_Test.png
Document***
WS_at_TX.png
WS_at_RX.png
TX_RX.png
LHO_ENDX_PD_ReportV2.pdf
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_EndX/tD20230821/
We then Moved the PCAL BEAM back to the center, which is its NOMINAL position.
We took pictures of the beam spot.
Second NOMINAL End Station Measurement:
Then we did another ENDX station measurement as we normally would, which is appropriately documented as tD20230822.
The second ENDX Station Measurement was carried out according to the procedure outlined in Document LIGO-T1500062-v15, Pcal End Station Power Sensor Responsivity Ratio Measurements: Procedures and Log, and was completed by noon.
We took pictures of the beam spot.
Martel:
We started by setting up a Martel voltage source to apply voltage into the PCAL chassis's Input 1 channel, and we recorded the times that -4.000V, -2.000V, and 0.000V signals were sent to the chassis. The analysis code that we run after we return uses the GPS times, grabs the data, and creates the Martel_Voltage_Test.png graph. We also did a measurement of the Martel's voltages in the PCAL lab to calculate the ADC conversion factor, which is included in the document.
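For illustration only, this is roughly how an ADC conversion factor falls out of the Martel voltage steps; the applied voltages are from the procedure, while the readback counts below are placeholders, not measured data:

```python
# Illustration of deriving the ADC conversion factor from the Martel steps:
# a linear fit of readback counts vs. applied volts.
# Applied voltages come from the procedure; count values are placeholders.
import numpy as np

applied_volts = np.array([-4.000, -2.000, 0.000])   # Martel steps
adc_counts = np.array([-26214.0, -13107.0, 0.0])    # placeholder readback means

slope, offset = np.polyfit(applied_volts, adc_counts, 1)
print(f"ADC conversion factor ~ {slope:.1f} counts/V (offset {offset:.1f} counts)")
```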
After the Martel measurement, the procedure walks us through the steps required to make a series of plots while the Working Standard (PS4) is in the transmitter module. These plots are shown in WS_at_TX.png.
The next step puts the WS in the receiver module; these plots are shown in WS_at_RX.png.
This is followed by TX_RX.png, which shows plots of the transmitter module and receiver module operating without the WS in the beam path at all.
All of this data is then used to generate LHO_ENDX_PD_ReportV2.pdf, which is attached and is a work in progress in the form of a living document.
All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_EndX/tD20230822/
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) BackFront responsivity ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages.pdf
avg_voltages.pdf
raw_ratios.pdf
avg_ratios.pdf
All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LabData/PS4_PS5/
I switched the order of the lab measurements this time to have the FrontBack measurement last, to see if it changed the relative difference between the FB and BF measurements.
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) FrontBack responsivity ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages2.pdf
avg_voltages2.pdf
raw_ratios2.pdf
avg_ratios2.pdf
This adventure has been brought to you by Rick Savage & Tony Sanchez.
After speaking to Rick and Dripta,
Line 10 in the pcal_params.py needs to be changed from:
PCALPARAMS['WHG'] = 0.916985 # PS4_PS5 as of 2023/04/18
To:
PCALPARAMS['WHG'] = 0.9159 #PS4_PS5 as of 2023-08-22
This change reflects the changes we have observed in the PS4_PS5 responsivity ratio measurements taken in the lab, which affect the plots of the Rx calibration in sections 14 and 22 of LHO_EndY_PD_ReportV2.pdf.
Investigations have shown that PS4 has changed, but not PS5 or the Rx calibration.
J. Kissel, J. Driggers
I was brainstorming why LOWNOISE_LENGTH_CONTROL would be ringing up a Transmon M1 to M2 wire violin mode (modeled to be at 104.2 Hz for a "production" TMTS; see table 3.11 of T1300876) for the first time on Aug 4 2023 (see current investigation recapped in LHO:72214), and I remembered "TMS tracking..."
In short: we found that the ETMX M0 L OSEM damping error signal has been fed directly to the TMSX M1 L global control path, without filtering, since Sep 28 2021. Yuck!
On Aug 30 2021, I resolved the discrepancies between L1 and H1 end-station SUS front-end models -- see LHO:59772. Included in that work, I cleaned up the Tidal path, cleaned up the "R0 tracking" path (where QUAD L2 gets fed to QUAD R0), and installed the "TMS tracking" path as per ECR E2000186 / LLO:53224. In short, "TMS tracking" couples the ETM M0 longitudinal OSEM error signal to the TMS M1 longitudinal "input to the drivealign bank" global control path, with the intent of matching the velocity of the two top masses to reduce scattered light.
On Aug 31 2021, the model changes were installed during an upgrade to the RCG -- see LHO:59797 -- and we've confirmed that I turned both TMSX and TMSY paths OFF, "to be commissioned later, when we have an IFO, if we need it," on Tuesday, Aug 31 2021 21:22 UTC (14:22 PDT).
However, 28 days later, on Tuesday, Sept 28 2021 22:16 UTC (15:16 PDT), the TMSX filter bank got turned back on, and must have been blindly SDF saved as such -- with no filter in place -- after an EX IO chassis upgrade; see LHO:60058. At the time, RCG 4.2.0 still had the infamous "turn on a new filter with its input ON, output ON, and a gain of 1.0" feature, which has since been resolved with RCG 5.1.1. So... maybe, somehow, even though the filter was already installed on Aug 31 2021, the IO chassis upgrade rebuild, reinstall, and restart of the h1sustmsx.mdl front-end model re-registered the filter as new? Unclear. Regardless, this direct ETMX M0 L to TMSX M1 L path has been on, without filtering, since Sep 28 2021. Yuck!
Jenne confirms the early 2021 timeline in the first attachment here. She also confirms, via a ~2 year trend of the H1:SUS-TMSY_M1_FF_L filter bank's SWSTAT, that no filter module has *ever* been turned on, confirming that there's *never* been filtering.
Whether this *is* the source of the 102.1288 Hz problems, and whether that frequency is the TMSX transmon violin mode, is still unclear. Brief investigations thus far include:
- Jenne briefly gathered ASDs of the ETMX M0 L (H1:SUS-ETMX_M0_DAMP_L_IN_DQ) and TMSX M1 L OSEMs' error signals (H1:SUS-TMSX_M1_DAMP_L_IN1_DQ) around the time of Oli's LOWNOISE_LENGTH_CONTROL time, but found that at 100 Hz the OSEMs are limited by their own sensor noise and don't see anything.
- She also looked through the MASTER_OUT DAC requests (), in hopes that the requested control signal would show something more or different, but found nothing suspicious around 100 Hz there either.
- We HAVE NOT, but could, look at H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ, since this FF control filter should be the only control signal going through that path. I'll post a comment with this.
Regardless, having this path on with no filter is clearly wrong, so we've turned off the input, output, and gain, and accepted the filter as OFF, OFF, and OFF in the SDF system (for TMSX, the safe.snap is the same as the observe.snap).
No obvious blast in the (errant) path between ETMX M0 L and TMSX M1 L, the control channel H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ, during the turn-on of the LSC FF. Attached is a screenshot highlighting one recent lock acquisition, after the addition / separation / clean-up of calibration line turn-ons (LHO:72205):
- H1:GRD-ISC_LOCK_STATE_N -- the state number of the main lock acquisition guardian,
- H1:LSC-SRCLFF1_GAIN, H1:LSC-PRCLFF_GAIN, H1:MICHFF_GAIN -- EPICS records showing the timing of when the LSC feedforward is turned on,
- the raw ETMX M0 L damping signal, H1:SUS-ETMX_M0_DAMP_L_IN1_DQ -- stored at 256 Hz,
- the same signal, mapped (errantly) as a control signal to TMSX M1 L -- also stored at 256 Hz,
- the TMSX M1 L OSEMs, H1:SUS-TMSX_M1_DAMP_L_IN1_DQ, which are too limited by their own self-noise to see any of this action -- but also only stored at 256 Hz.
In the middle of TRANSITION_FROM_ETMX (state 557), DARM control is switching from ETMX to some other collection of DARM actuators. That's when you see the ETMX M0 L (and equivalent TMSX M1 DRIVEALIGN) channels go from relatively noisy to quiet. Then, at the very end of the state, or the start of the next state, LOW_NOISE_ETMX_ESD (state 558), DARM control returns to ETMX, and the main chain top mass, ETMX M0, gets noisy again. Then, several seconds later, in LOWNOISE_LENGTH_CONTROL (state 560), the LSC feedforward gets turned on.
So, while there are control request changes to the TMS, at least according to channels stored at 256 Hz, we don't see any obvious kicks / impulses to the TMS during this transition. This decreases my confidence that something was kicking up a TMS violin mode, but not substantially.
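As a sketch of the kind of check described above (assuming gwpy/NDS access; the GPS window below is a placeholder, not the actual lock-acquisition time):

```python
# Sketch: pull the 256 Hz channels around one lock acquisition and look for
# an impulse in the errant TMSX control path. Assumes gwpy/NDS access; the
# GPS window below is a placeholder.
from gwpy.timeseries import TimeSeriesDict

channels = [
    "H1:GRD-ISC_LOCK_STATE_N",             # guardian state number
    "H1:SUS-ETMX_M0_DAMP_L_IN1_DQ",        # ETMX top-mass L damping input
    "H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ",  # errant control path to TMSX M1 L
]
start, end = 1375000000, 1375000300        # placeholder window

data = TimeSeriesDict.get(channels, start, end)
for name, series in data.items():
    fig = series.plot()                    # one time series per figure
    fig.savefig(name.replace(":", "-") + ".png")
```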
@DetChar --
This errant TMS tracking has been on throughout O4 until yesterday.
The last substantial nominal low noise segment before this change (with errant, bad TMS tracking) was
2023-08-15 04:41:02 to 15:30:32 UTC
1376109680 - 1376148650
the first substantial nominal low noise segment after this change
2023-08-16 05:26:08 - present
1376198786 - 1376238848
Apologies for the typo in the main aLOG above, but *the* channels to understand the state of the filter bank that's been turned off are
H1:SUS-TMSX_M1_FF_L_SWSTAT
H1:SUS-TMSX_M1_FF_L_GAIN
if you want to use that for an automated way of determining whether the TMS tracking is on vs. off.
If the SWSTAT channel has a value of 37888 and the GAIN channel has a gain of 1.0, then the errant connection between ETMX M0 L and TMSX M1 L was ON. Those channels now have values of 32768 and 0.0, respectively, indicating that it's OFF. (Remember, for a standard filter module a SWSTAT value of 37888 is the bitword representation for "Input, Output, and Decimation switches ON." A SWSTAT value of 32768 is the same bitword representation for just "Decimation ON.")
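A minimal sketch of that automated check, assuming gwpy access to these EPICS records in the frames; the bit masks are inferred from the 37888 / 32768 values quoted above:

```python
# Sketch of the suggested automated check for whether the errant
# ETMX M0 L -> TMSX M1 L "TMS tracking" path was engaged at a given time.
# Bit masks inferred from the values quoted above:
#   37888 = input (bit 10) + output (bit 12) + decimation (bit 15)
#   32768 = decimation (bit 15) only
from gwpy.timeseries import TimeSeries

INPUT_BIT = 1 << 10    # 1024
OUTPUT_BIT = 1 << 12   # 4096

def tms_tracking_on(gps_start, duration=64):
    """Return True if the errant path looks engaged over the given stretch."""
    swstat = TimeSeries.get("H1:SUS-TMSX_M1_FF_L_SWSTAT", gps_start, gps_start + duration)
    gain = TimeSeries.get("H1:SUS-TMSX_M1_FF_L_GAIN", gps_start, gps_start + duration)
    switches = int(swstat.value[-1])
    switches_on = (switches & (INPUT_BIT | OUTPUT_BIT)) == (INPUT_BIT | OUTPUT_BIT)
    return switches_on and gain.value[-1] != 0.0

# Example: the "before" segment quoted above should report the path as ON
# print(tms_tracking_on(1376109680))
```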
Over the next few weeks, can you build up an assessment of how the IFO has performed a few weeks before vs. few weeks after?
I'm thinking, in particular, of scattered-light arches and glitch rates (also from scattered light), but I would happily entertain any other metrics you think are interesting given the context.
The major difference is that TMSX is no longer "following" ETMX, so there's a *change* in the relative velocity between the chains. No claim yet that this is a *better* change or a worse one, but there's definitely a change. As you know, the creation of this scattered-light-impacting relative velocity between the ETM and TMS is related to the low-frequency seismic input motion to the chamber, specifically in the 0.05 to 5 Hz region. *That* seismic input evolves and is non-stationary over a few-weeks time scale (wind, earthquakes, microseism, etc.), so I'm guessing that you'll need that much "after" data to make a fair comparison against the "before" data. Looking at the channels called out in the lower bit of the aLOG will, I'm sure, be a helpful part of the investigation.
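For context, the usual single-bounce relation connecting relative motion along a scattering path to the scattered-light fringe (arch) frequency, quoted here only as a reminder rather than a new result:

```latex
% Scattered-light fringe frequency for a single-bounce path whose length is
% modulated by the relative displacement x_sc(t) between scatterer (e.g. TMS)
% and test mass, for laser wavelength lambda; N-bounce paths give multiples.
\[
  f_{\mathrm{fringe}}(t) = \frac{2}{\lambda}\,
      \left|\frac{\mathrm{d}x_{\mathrm{sc}}(t)}{\mathrm{d}t}\right|
\]
```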
I chose "a few weeks" simply because the IFO configuration has otherwise been pretty stable "before" (e.g., we're in the "representative normal for O4" 60 W configuration rather than the early O4 75 W configuration), but I leave it to y'all's expertise and the data to figure out a fair comparison (maybe only one week, a few days, or even just the single "before" vs. "after" is enough to see a difference).
detchar-request git issue for tracking purposes.
Jane, Debasmita
We took a look at the Omicron and Gravity Spy triggers before and after this tracking was turned off. The time segments chosen for this analysis were:
- TMSX tracking on: 2023-07-29 19:00:00 UTC - 2023-08-15 15:30:00 UTC, ~277 hours observing time
- TMSX tracking off: 2023-08-16 05:30:00 UTC - 2023-08-31 00:00:00 UTC, ~277 hours observing time
For the analysis, the Omicron parameters chosen were SNR > 7.5 and a frequency between 10 Hz and 1024 Hz. The Gravity Spy glitches included a confidence of > 90%.
The first pdf contains glitch rate plots. In the first plot, we have the Omicron glitch rate comparison before and after the change. The second and third plots show the comparison of the Omicron glitch rates before and after the change as a function of SNR and frequency. The fourth plot shows the Gravity Spy classifications of the glitches.
What we can see from these plots is that when the errant tracking was on, the overall glitch rate was higher (~29 per hour when on, ~15 per hour when off). It was particularly high in the 7.5-50 SNR range and the 10 Hz - 50 Hz range, which is typically where we observe scattering. The Gravity Spy plot shows that scattered light is the most common glitch type both when the tracking is on and when it is off, but it is reduced after the tracking is off.
We also looked to see if these scattering glitches were coincident in "H1:GDS-CALIB_STRAIN" and "H1:ASC-X_TR_A_NSUM_OUT_DQ", which is shown in the last pdf. From the few examples we looked at, there does seem to be some excess noise in the transmitted monitor channel when the tracking was on. If necessary, we can look into more examples of this.
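A minimal sketch of the rate comparison, assuming the Omicron triggers have already been exported to CSV files with snr and frequency columns; the file names are placeholders, and the ~277 h observing totals come from the text:

```python
# Sketch of the Omicron glitch-rate comparison described above. Assumes the
# triggers were exported to CSV with 'snr' and 'frequency' columns; file
# names are placeholders, observing-hour totals come from the text.
import pandas as pd

def glitch_rate(csv_path, observing_hours, snr_min=7.5, f_lo=10.0, f_hi=1024.0):
    """Triggers per observing hour after the SNR and frequency cuts used above."""
    triggers = pd.read_csv(csv_path)
    keep = (
        (triggers["snr"] > snr_min)
        & (triggers["frequency"] >= f_lo)
        & (triggers["frequency"] <= f_hi)
    )
    return keep.sum() / observing_hours

print("tracking on :", glitch_rate("omicron_tracking_on.csv", 277))
print("tracking off:", glitch_rate("omicron_tracking_off.csv", 277))
```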
Debasmita, Jane
We have plotted the ground motion trends in the following frequency bands and DOFs:
1. Earthquake band (0.03 Hz - 0.1 Hz) ground motion at ETMX-X, ETMX-Z, and ETMX-X tilt-subtracted
2. Wind speed (0.03 Hz - 0.1 Hz) at ETMX
3. Microseismic band (0.1 Hz - 0.3 Hz) ground motion at ETMX-X
We have also calculated the mean and median of the ground motion trends for two weeks before and after the tracking was turned off. It seems that while motion in all the other bands remained almost the same, the microseismic band ground motion (0.1-0.3 Hz) increased significantly (from a mean value of 75.73 nm/s to 115.82 nm/s) when the TMS-X tracking was turned off. Still, it produced less scattering than before, when the TMS-X tracking was on. The plots and the table are attached here.
ENDY Station Measurement
During the Tuesday maintenance, the PCAL team (Julianna Lewis & Tony Sanchez) went to ENDY with the Working Standard Hanford, aka WSH (PS4), and took an end station measurement.
The ENDY Station Measurement was carried out according to the procedure outlined in Document LIGO-T1500062-v15, Pcal End Station Power Sensor Responsivity Ratio Measurements: Procedures and Log, and was completed by 11 am.
The first thing we did was take a picture of the beam spot before anything was touched!
Martel:
We started by setting up a Martel voltage source to apply voltage into the PCAL chassis's Input 1 channel, and we recorded the times that -4.000V, -2.000V, and 0.000V signals were sent to the chassis. The analysis code that we run after we return uses the GPS times, grabs the data, and creates the Martel_Voltage_Test.png graph. We also did a measurement of the Martel's voltages in the PCAL lab to calculate the ADC conversion factor, which is included in the document.
After the Martel measurement, the procedure walks us through the steps required to make a series of plots while the Working Standard (PS4) is in the transmitter module. These plots are shown in WS_at_TX.png.
The next step puts the WS in the receiver module; these plots are shown in WS_at_RX.png.
This is followed by TX_RX.png, which shows plots of the transmitter module and receiver module operating without the WS in the beam path at all.
The last picture is of the Beam spot after we had finished the measurement.
All of this data is then used to generate LHO_ENDY_PD_ReportV2.pdf, which is attached and is a work in progress in the form of a living document.
All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_ENDY/
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) FrontBack responsivity ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages.pdf
avg_voltages.pdf
raw_ratios.pdf
avg_ratios.pdf
All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LabData/PS4_PS5/
A surprise BackFront PS4/PS5 responsivity ratio measurement appeared!!
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) BF responsivity ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages2.pdf
avg_voltages2.pdf
raw_ratios2.pdf
avg_ratios2.pdf
This adventure has been brought to you by Julianna Lewis & Tony Sanchez.
Post PCAL meeting update:
Rick, Dripta, and I have spoken at length about the recent end station report's RxPD calibration (ct/W) plot, which looks like there is a drop in the calibration and thus that it has changed.
This is not the case, even though we see this drop on both arms from the last 3 End Station measurements.
There is an observed change in the plots of the Working Standard (PS4) / Gold Standard (PS5) responsivity ratio made in the PCAL lab as well, which is why we make an in-lab measurement of the Working Standard over the Gold Standard after every end station measurement.
The timing of the change in May, the direction of the change, and the size of the change all indicate that there must be a change with either PS4 or PS5, which would have been seen in the RxPD calibration plots.
We have not seen the same change in the responsivity ratio plots involving the Gold Standard (PS5) and any other integrating sphere.
This means that the observed change in the RxPD calibration is very likely due to a change associated with the Working Standard (PS4).