Both end station pumps are running smoothly. The corner station pump (1194) must have overheated and tripped the breaker. One of the felt filters disintegrated and clogged the filter/muffler; this was cleaned out and put back together, and a newer felt filter was added. The graphite vanes seemed to be in fair condition. Pump 1194 was put back into service. I will check this afternoon to see if it continues to run smoothly. More rebuild kits need to be ordered.
Ryan C, Camilla. We popped into commissioning to adjust the ETM ring heaters up from 1.0 W to 1.1 W/segment at 17:35 UTC and accepted the changes in SDF. Back in observing while the slow thermalization happens; we plan to revert to nominal in 3 hours, at 20:35 UTC. This week's past ring heater tests are 73093 and 73272.
Thu Oct 05 10:10:45 2023 INFO: Fill completed in 10min 41secs
Jordan confirmed a good fill curbside
Picket fence has not been updating recently. Ryan restarted the process on nuc5, which did not fix the issue.
Attached MEDM shows the 08:24 restart, a server uptime of only 2 mins, and no update for 24 mins.
We are investigating.
I restarted the service on nuc5 at 13:47. It has been running with no issues since that time (51 minutes).
TITLE: 10/05 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 5mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
TITLE: 10/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
- 23:34 - inc. 5.4 EQ from Japan
- 2:02 - inc. 5.4 EQ from Papua New Guinea, 2:11 EQ mode activated, 2:16 - another 6.1 in Japan, successfully rode out (peakmon peaked at 2000)!
- 2:10 - Superevent S231005j
- EX saturations @ 3:11
- 6:39 - Looks like another EQ from Japan, 5.6 in magnitude
LOG:
No log for this shift.
Today, during commissioning time between 12:30 and 3:40 PM, I did the following:
1) Impulse injections at EX - this was to investigate whether the coupling at EX will be greatly reduced by damping the cryobaffle or if there is a secondary coupling site.
2) Increasing the 9.8 Hz ITM bounce mode amplitudes by injecting into M0 Test V for the ITMs - this was to make sure that they were not responsible for the ~10 Hz harmonics that we sometimes see in the 20-100 Hz band.
3) Injecting in the LVEA to try and find the mystery vibration coupling (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72778).
It was clear that I did not produce noise in the 20-100 Hz band with my ITM bounce mode injections, but I haven't yet analyzed the data from the other two tests.
TITLE: 10/04 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 19mph Gusts, 15mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
- H1 has been locked for just over 8 hours 30 minutes
- SEI/CDS/DMs ok
TITLE: 10/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: We've been locked all day (8 hours 30 minutes so far), and just finished up commissioning for the day.
Norco arrived at 15:35 UTC, started down the X arm at 15:45, was parking at MX at 15:54, and left around 17:25.
We started commissioning at 19:00 UTC with a calibration sweep (19:03 to 19:33). I also manually restarted NUC26, which had frozen late last night. Robert wrapped up his PEM injections around 22:45 UTC and swept the LVEA as he left.
Camilla made the ring heater changes at 19:35UTC and undid them at 22:35UTC.
Back into observing at 22:46UTC
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:23 | FAC | Karen | Wood shop, fire pump room | N | Tech clean | 18:10 |
17:12 | FAC | Cindi | Weld shop/Laundry | N | Laundry | 19:07 |
17:51 | FAC | Kim | H2 | N | Tech clean | 18:10 |
19:05 | PEM | Robert | LVEA | N | Grab equipment | 19:12 |
19:15 | CAL | RyanC | CR | N | Calibration sweep | 19:34 |
19:16 | PEM | Robert | EndX | N | PEM injection | 20:54 |
19:33 | FAC | Cindi | Wood shop | N | grab supplies | 19:53 |
19:35 | TCS | Camilla | CR | N | Ring heater test | 20:23 |
20:54 | PEM | Robert | CR | N | ITM tests | 21:50 |
21:17 | VAC | Travis, Jordan | Mids | N | Gauge updates | 22:20 |
21:18 | | Marc +2 | Roof | N | Tour for new intern | 21:22 |
21:50 | PEM | Robert | LVEA | N | HAM3 Shaking Tests | 22:47 |
Starting RH changes at 19:35 UTC: turned the ITM RHs up by +0.1 W and the ETM RHs down by -0.1 W. Plots of DARM and the HOMs after 2 hours are attached, along with an ndscope trend. High-frequency noise increased and the HOM moved higher in frequency. Circulating powers increased. After 3 hours, at 22:35 UTC, I turned the RHs back to nominal.
During this time Robert was doing some PEM injections, and we had only been locked at NLN for 5 hours, so we weren't completely thermalized at the start.
I added a 6600 to 6800 Hz BLRMS as H1:OAF-RANGE_RLP_8 to monitor high-frequency noise (see attached), but it didn't prove very useful; H1:SQZ-DCPD_RATIO_6_DB shows the trend better.
Optic | Nominal (W/segment) | Test values 19:35 to 22:35 UTC |
ITMX | 0.44 | 0.54 |
ITMY | 0.0 | 0.1 |
ETMX | 1.0 | 0.9 |
ETMY | 1.0 | 0.9 |
Last week's test: 73093. Future tests: we can't go the other direction, as ITMY started with a 0 W RH. We could try turning up just the ETMs in common in a future while-observing test, since we had decreased high-frequency noise with both ETM RHs at ~1.4 W/segment in February (67501), plot attached.
Updated plot showing thermalization back to nominal RH settings attached. Main changes were higher circulating power and more high frequency noise during the test.
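For reference, a minimal offline sketch of the 6600-6800 Hz band-limited RMS mentioned above, assuming gwpy and NDS access; the channel and GPS span are illustrative, not the OAF implementation itself.

from gwpy.timeseries import TimeSeries

# Illustrative channel and GPS span -- the online monitor lives in the OAF
# model as H1:OAF-RANGE_RLP_8; here we just band-limit calibrated strain offline.
start, end = 1380481801, 1380482401   # ~10 minutes, illustrative times
data = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)

# Band-pass to 6600-6800 Hz, then take an RMS trend with a 10 s stride.
blrms = data.bandpass(6600, 6800).rms(10)
print(blrms)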
I ran a calibration sweep today, starting with the BB then simulines.
BB output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20231004T190400Z.xml
Simulines:
GPS start: 1380481801.115491
GPS stop: 1380483144.778068
2023-10-04 19:32:06,617 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20231004T190945Z.hdf5
2023-10-04 19:32:06,638 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20231004T190945Z.hdf5
2023-10-04 19:32:06,650 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20231004T190945Z.hdf5
2023-10-04 19:32:06,662 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20231004T190945Z.hdf5
2023-10-04 19:32:06,674 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20231004T190945Z.hdf5
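For convenience, a minimal sketch (assuming gwpy is available) of converting the simulines GPS stamps above to UTC:

from gwpy.time import from_gps

# GPS stamps copied from the simulines output above
gps_start = 1380481801.115491
gps_stop = 1380483144.778068

print('start (UTC):', from_gps(gps_start))
print('stop  (UTC):', from_gps(gps_stop))
print('duration [s]:', gps_stop - gps_start)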
We've been locked and observing since 14:47 UTC; we plan to commission from 19:00 UTC to 23:00 UTC. Plans include a calibration sweep, a ring heater test, and PEM injections.
TITLE: 10/04 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
As Austin mentioned in his alog, nuc26 is not responding over the network. This is the upper camera FOM, showing the MC and PR cameras.
Its web image has not updated since Wed 4th 00:08 UTC (Tue 3rd 17:08 PDT). The EDC lost connection to its 12 load_mon EPICS channels at the same time (list shown below).
Ryan says the local display in the control room is updating its images, so we will schedule a reboot of nuc26 at the next target of opportunity.
H1:CDS-MONITOR_NUC26_CPU_LOAD_PERCENT
H1:CDS-MONITOR_NUC26_CPU_COUNT
H1:CDS-MONITOR_NUC26_MEMORY_AVAIL_PERCENT
H1:CDS-MONITOR_NUC26_MEMORY_AVAIL_MB
H1:CDS-MONITOR_NUC26_PROCESSES
H1:CDS-MONITOR_NUC26_INET_CONNECTIONS
H1:CDS-MONITOR_NUC26_NET_TX_TOTAL_MBIT
H1:CDS-MONITOR_NUC26_NET_RX_TOTAL_MBIT
H1:CDS-MONITOR_NUC26_NET_TX_LO_MBIT
H1:CDS-MONITOR_NUC26_NET_RX_LO_MBIT
H1:CDS-MONITOR_NUC26_NET_RX_ENO1_MBIT
H1:CDS-MONITOR_NUC26_NET_TX_ENO1_MBIT
Ryan rebooted nuc26 at 12:10 PDT. The web snapshot now looks good and the EDC has reconnected to its PVs.
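As a quick way to verify that the load_mon PVs reconnected, a minimal sketch using pyepics (assuming control-room EPICS access); only a subset of the twelve channels listed above is shown.

import epics

# Subset of the nuc26 load_mon channels listed above
channels = [
    'H1:CDS-MONITOR_NUC26_CPU_LOAD_PERCENT',
    'H1:CDS-MONITOR_NUC26_MEMORY_AVAIL_PERCENT',
    'H1:CDS-MONITOR_NUC26_PROCESSES',
    'H1:CDS-MONITOR_NUC26_NET_TX_TOTAL_MBIT',
]

for name in channels:
    pv = epics.PV(name)
    pv.wait_for_connection(timeout=2.0)
    state = 'connected' if pv.connected else 'DISCONNECTED'
    print(f'{name:45s} {state}  value={pv.get()}')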
J. Oberling, F. Mera
This morning we swapped the failing laser in the ITMx OpLev with a spare. The first attached picture shows the OpLev signals before the laser swap; the 2nd is after. As can be seen, there was no change in alignment, but the SUM counts are now back around 7000. I'll keep an eye on this new laser over the next couple of days.
This completes WP 11454.
J. Oberling, R. Short
Checking on the laser after a few hours of warm-up, I found the cooler to be very warm, and the box housing the DC-DC converter that powers the laser (steps ~11 VDC down to 5 VDC) was extremely warm. Also, the SUM counts had dropped from the ~7k we started at to ~1.1k. Seeing as we had just installed a new laser, my suspicion was that the DC-DC converter was failing. Checking the OpLev power supply in the CER, we found it was providing 3 A to the LVEA OpLev lasers; this should only be just over 1 A, which is further indication something is up. Ryan and I replaced the DC-DC converter with a spare. Upon powering up with the new converter, the current delivered by the power supply was still ~3 A, so we swapped the laser with another spare. With the new laser the delivered current was down to just over 1 A, as it should be. The laser power was set so the SUM counts are still at ~7k, and we will keep an eye on this OpLev over the coming hours/days. Both lasers, SN 191-1 and SN 119-2, will be tested in the lab; my suspicion is that the dying DC-DC converter damaged both lasers and they will have to be repaired by the vendor. We will see what the lab testing says. The new laser SN is 199-1.
Noticing as the night progresses that the sum counts are slowly going up, starting from ~6200 and now at ~7100. Odd.
ITMX OPLEV sum counts are at about 7500 this morning.
Sum counts are around 7700 this morning; they're still creeping up.
First ENDX Station Measurement:
During the Tuesday maintenance, the PCAL team (Rick Savage & Tony Sanchez) went to ENDX with the Working Standard Hanford, aka WSH (PS4), and took an end station measurement.
But the upper PCAL beam had been moved to the left by 5 mm last week. See alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72063.
We liked the idea of doing a calibration measurement with the beam off to the left, just to see the effects of the offset on the calibration.
Because of limitations of our analysis tool, which names files with a date stamp, the folder name for this non-nominal measurement is tD20230821 even though it actually took place on Tuesday 2023-08-22.
Beam Spot Picture of the Upper Beam 5 mm to the Left on the aperture
Martel_Voltage_Test.png
Document***
WS_at_TX.png
WS_at_RX.png
TX_RX.png
LHO_ENDX_PD_ReportV2.pdf
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_EndX/tD20230821/
We then moved the PCAL beam back to the center, which is its NOMINAL position.
We took pictures of the beam spot.
Second NOMINAL End Station Measurement:
Then we did another ENDX station measurement as we would normally do, which is appropriately documented as tD20230822.
The second ENDX Station Measurement was carried out according to the procedure outlined in Document LIGO-T1500062-v15, Pcal End Station Power Sensor Responsivity Ratio Measurements: Procedures and Log, and was completed by noon.
We took pictures of the Beam Spot.
Martel:
We started by setting up a Martel voltage source to apply voltage to the PCAL chassis's Input 1 channel, and we recorded the times that -4.000 V, -2.000 V, and 0.000 V signals were sent to the chassis. The analysis code that we run after we return uses those GPS times, grabs the data, and creates the Martel_Voltage_Test.png graph. We also made a measurement of the Martel's voltages in the PCAL lab to calculate the ADC conversion factor, which is included in the document.
After the Martel measurement the procedure walks us through the steps required to make a series of plots while the Working Standard(PS4) is in the Transmitter Module. These plots are shown in WS_at_TX.png.
Next, the WS was placed in the Receiver Module; these plots are shown in WS_at_RX.png.
This is followed by TX_RX.png, which contains plots of the Transmitter Module and Receiver Module operating without the WS in the beam path at all.
All of this data is then used to generate LHO_ENDX_PD_ReportV2.pdf, which is attached; it is a work in progress in the form of a living document.
All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_EndX/tD20230822/
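As a rough illustration of what the post-trip analysis does with the recorded Martel GPS times, here is a minimal sketch; the channel name and GPS spans are hypothetical placeholders, not the actual values used by the PCAL analysis tool.

import numpy as np
from gwpy.timeseries import TimeSeries

# Placeholder channel and GPS spans -- substitute the PCAL chassis Input 1
# readback channel and the times recorded while the Martel applied each voltage.
CHANNEL = 'H1:CAL-PCALX_MARTEL_INPUT1'   # hypothetical name
steps = {
    -4.000: (1376700000, 1376700060),
    -2.000: (1376700120, 1376700180),
     0.000: (1376700240, 1376700300),
}

volts, counts = [], []
for voltage, (t0, t1) in steps.items():
    data = TimeSeries.get(CHANNEL, t0, t1)
    volts.append(voltage)
    counts.append(data.value.mean())

# A linear fit of mean ADC counts vs. applied voltage gives the ADC
# conversion factor (counts/V) used later in the report.
slope, offset = np.polyfit(volts, counts, 1)
print(f'ADC conversion factor: {slope:.2f} counts/V (offset {offset:.1f} counts)')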
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) BackFront responsivity ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages.pdf
avg_voltages.pdf
raw_ratios.pdf
avg_ratios.pdf
All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LabData/PS4_PS5/
I switched the order of the lab measurements this time, putting the FrontBack measurement last, to see if it changed the relative difference between the FB and BF measurements.
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) FrontBack responsivity ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages2.pdf
avg_voltages2.pdf
raw_ratios2.pdf
avg_ratios2.pdf
This adventure has been brought to you by Rick Savage & Tony Sanchez.
After speaking to Rick and Dripta,
Line 10 in pcal_params.py needs to be changed from:
PCALPARAMS['WHG'] = 0.916985 # PS4_PS5 as of 2023/04/18
To:
PCALPARAMS['WHG'] = 0.9159 #PS4_PS5 as of 2023-08-22
This change reflects the changes we have observed in the PS4/PS5 responsivity ratio measurements taken in the lab, which affect the plots of Rx calibration in sections 14 and 22 of LHO_EndY_PD_ReportV2.pdf.
Investigations have shown that PS4 has changed but not PS5 OR Rx Calibration.
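For scale, a quick plain-Python check of how big the proposed parameter change is relative to the old value:

# PS4/PS5 responsivity-ratio values from pcal_params.py, old vs. proposed
old_whg = 0.916985   # as of 2023/04/18
new_whg = 0.9159     # as of 2023-08-22

# Fractional change, which propagates directly into the Rx calibration (ct/W)
# plots in sections 14 and 22 of the end-station report.
rel_change = (new_whg - old_whg) / old_whg
print(f'WHG change: {rel_change * 100:.3f} %')   # about -0.12 %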
J. Kissel, J. Driggers
I was brainstorming why LOWNOISE_LENGTH_CONTROL would be ringing up a Transmon M1 to M2 wire violin mode (modeled to be at 104.2 Hz for a "production" TMTS; see table 3.11 of T1300876) for the first time on Aug 4 2023 (see current investigation recapped in LHO:72214), and I remembered "TMS tracking..." In short: we found that the ETMX M0 L OSEM damping error signal has been fed directly to the TMSX M1 L global control path, without filtering, since Sep 28 2021. Yuck!
On Aug 30 2021, I resolved the discrepancies between the L1 and H1 end-station SUS front-end models -- see LHO:59772. Included in that work, I cleaned up the Tidal path, cleaned up the "R0 tracking" path (where QUAD L2 gets fed to QUAD R0), and installed the "TMS tracking" path as per ECR E2000186 / LLO:53224. In short, "TMS tracking" couples the ETM M0 longitudinal OSEM error signal to the TMS M1 longitudinal "input to the drivealign bank" global control path, with the intent of matching the velocity of the two top masses to reduce scattered light.
On Aug 31 2021, the model changes were installed during an upgrade to the RCG -- see LHO:59797 -- and we've confirmed that I turned both the TMSX and TMSY paths OFF, "to be commissioned later, when we have an IFO, if we need it," on Tuesday, Aug 31 2021 21:22 UTC (14:22 PDT).
However, 28 days later, on Tuesday, Sept 28 2021 22:16 UTC (15:16 PDT), the TMSX filter bank got turned back on, and must have been blindly SDF-saved as such -- with no filter in place -- after an EX IO chassis upgrade -- see LHO:60058. At the time, RCG 4.2.0 still had the infamous "turn on a new filter with its input ON, output ON, and a gain of 1.0" feature, which has since been resolved with RCG 5.1.1. So ... maybe, somehow, even though the filter was already installed on Aug 31 2021, the IO chassis upgrade's rebuild, reinstall, and restart of the h1sustmsx.mdl front-end model re-registered the filter as new? Unclear. Regardless, this direct ETMX M0 L to TMSX M1 L path has been on, without filtering, since Sep 28 2021. Yuck!
Jenne confirms the early 2021 timeline in the first attachment here. She also confirms, via a ~2 year trend of the H1:SUS-TMSY_M1_FF_L filter bank's SWSTAT, that no filter module has *ever* been turned on, confirming that there has *never* been filtering.
Whether this *is* the source of the 102.1288 Hz problems, and whether that frequency is the TMSX transmon violin mode, is still unclear. Brief investigations thus far include:
- Jenne briefly gathered ASDs of the ETMX M0 L (H1:SUS-ETMX_M0_DAMP_L_IN_DQ) and TMSX M1 L (H1:SUS-TMSX_M1_DAMP_L_IN1_DQ) OSEM error signals around the time of Oli's LOWNOISE_LENGTH_CONTROL time, but found that at 100 Hz the OSEMs are limited by their own sensor noise and don't see anything.
- She also looked through the MASTER_OUT DAC requests (), in hopes that the requested control signal would show something more or different, but found nothing suspicious around 100 Hz there either.
- We HAVE NOT, but could, look at H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ, since this FF control filter should be the only control signal going through that path. I'll post a comment with this.
Regardless, having this path on with no filter is clearly wrong, so we've turned off the input, output, and gain, and accepted the filter as OFF, OFF, and OFF in the SDF system (for TMSX, the safe.snap is the same as the observe.snap).
No obvious blast in the (errant) path between ETMX M0 L and TMSX M1 L -- the control channel H1:SUS-TMSX_M1_DRIVEALIGN_L_OUT_DQ -- during the turn-on of the LSC FF.
Attached is a screenshot highlighting one recent lock acquisition, after the addition / separation / clean-up of calibration line turn-ons (LHO:72205), showing:
- H1:GRD-ISC_LOCK_STATE_N -- the state number of the main lock acquisition guardian,
- H1:LSC-SRCLFF1_GAIN, H1:LSC-PRCLFF_GAIN, H1:MICHFF_GAIN -- EPICS records showing the timing of when the LSC feedforward is turned on,
- The raw ETMX M0 L damping signal, H1:SUS-ETMX_M0_DAMP_L_IN1_DQ -- stored at 256 Hz,
- The same signal, mapped (errantly) as a control signal to TMSX M1 L -- also stored at 256 Hz,
- The TMSX M1 L OSEMs, H1:SUS-TMSX_M1_DAMP_L_IN1_DQ, which are too limited by their own self noise to see any of this action -- and also only stored at 256 Hz.
In the middle of TRANSITION_FROM_ETMX (state 557), DARM control is switching from ETMX to some other collection of DARM actuators. That's when you see the ETMX M0 L (and equivalent TMSX M1 DRIVEALIGN) channels go from relatively noisy to quiet. Then, at the very end of the state, or the start of the next state, LOW_NOISE_ETMX_ESD (state 558), DARM control returns to ETMX, and the main chain top mass, ETMX M0, gets noisy again. Then, several seconds later, in LOWNOISE_LENGTH_CONTROL (state 560), the LSC feedforward gets turned on.
So, while there are control request changes to the TMS, at least according to channels stored at 256 Hz we don't see any obvious kicks / impulses to the TMS during this transition. This decreases my confidence that something was kicking up a TMS violin mode, but not substantially.
@DetChar -- This errant TMS tracking has been on throughout O4 until yesterday.
The last substantial nominal low noise segment before this change (with errant, bad TMS tracking) was
2023-08-15 04:41:02 to 15:30:32 UTC
1376109680 - 1376148650
The first substantial nominal low noise segment after this change is
2023-08-16 05:26:08 - present
1376198786 - 1376238848
Apologies for the typo in the main aLOG above, but *the* channels to understand the state of the filter bank that's been turned off are
H1:SUS-TMSX_M1_FF_L_SWSTAT
H1:SUS-TMSX_M1_FF_L_GAIN
if you want to use those for an automated way of determining whether the TMS tracking is on vs. off. If the SWSTAT channel has a value of 37888 and the GAIN channel has a value of 1.0, then the errant connection between ETMX M0 L and TMSX M1 L was ON. Those channels now have values of 32768 and 0.0, respectively, indicating that it's OFF. (Remember, for a standard filter module a SWSTAT value of 37888 is the bitword representation for "Input, Output, and Decimation switches ON." A SWSTAT value of 32768 is the same bitword representation for just "Decimation ON.")
Over the next few weeks, can you build up an assessment of how the IFO has performed a few weeks before vs. a few weeks after? I'm thinking, in particular, of scattered light arches and glitch rates (also from scattered light), but I would happily entertain any other metrics you think are interesting given the context.
The major difference is that TMSX is no longer "following" ETMX, so there's a *change* in the relative velocity between the chains. No claim yet that this is a *better* change or worse, but there's definitely a change. As you know, the creation of this scattered-light-impacting relative velocity between the ETM and TMS is related to the low-frequency seismic input motion to the chamber, specifically in the 0.05 to 5 Hz region. *That* seismic input evolves and is non-stationary over few-week time scales (wind, earthquakes, microseism, etc.), so I'm guessing that you'll need that much "after" data to make a fair comparison against the "before" data. Looking at the channels called out in the lower bit of the aLOG will, I'm sure, be a helpful part of the investigation. I chose "a few weeks" simply because the IFO configuration has otherwise been pretty stable "before" (e.g., we're in the "representative normal for O4" 60 W configuration rather than the early O4 75 W configuration), but I leave it to y'all's expertise and the data to figure out a fair comparison (maybe only one week, a few days, or even just the single "before" vs. "after" is enough to see a difference).
detchar-request git issue for tracking purposes.
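A minimal sketch (assuming pyepics) of the automated on/off check described above, using only the values quoted in the aLOG (37888 / 1.0 for ON, 32768 / 0.0 for OFF):

import epics

# Channels called out above for assessing the (now-disabled) TMS tracking path
swstat = epics.caget('H1:SUS-TMSX_M1_FF_L_SWSTAT')
gain = epics.caget('H1:SUS-TMSX_M1_FF_L_GAIN')

# Per the aLOG: 37888 with gain 1.0 means the errant ETMX M0 L -> TMSX M1 L
# connection is ON; 32768 with gain 0.0 means only decimation is on, i.e. OFF.
if swstat == 37888 and gain == 1.0:
    print('TMS tracking: ON (errant path active)')
elif swstat == 32768 and gain == 0.0:
    print('TMS tracking: OFF')
else:
    print(f'Unexpected state: SWSTAT={swstat}, GAIN={gain}')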
Jane, Debasmita
We took a look at the Omicron and Gravity Spy triggers before and after this tracking was turned off. The time segments chosen for this analysis were:
TMSX tracking on: 2023-07-29 19:00:00 UTC - 2023-08-15 15:30:00 UTC, ~277 hours observing time
TMSX tracking off: 2023-08-16 05:30:00 UTC - 2023-08-31 00:00:00 UTC, ~277 hours observing time
For the analysis, the Omicron parameters chosen were SNR > 7.5 and a frequency between 10 Hz and 1024 Hz. The Gravity Spy glitches included a confidence of > 90%.
The first pdf contains glitch rate plots. In the first plot, we have the Omicron glitch rate comparison before and after the change. The second and third plots show the comparison of the Omicron glitch rates before and after the change as a function of SNR and frequency. The fourth plot shows the Gravity Spy classifications of the glitches.
What we can see from these plots is that when the errant tracking was on, the overall glitch rate was higher (~29 per hour when on, ~15 per hour when off). It was particularly high in the 7.5-50 SNR range and the 10 Hz - 50 Hz range, which is typically where we observe scattering. The Gravity Spy plot shows that scattered light is the most common glitch type both when the tracking is on and when it is off, but it is reduced after the tracking is turned off.
We also looked to see if these scattering glitches were coincident in "H1:GDS-CALIB_STRAIN" and "H1:ASC-X_TR_A_NSUM_OUT_DQ", which is shown in the last pdf. From the few examples we looked at, there does seem to be some excess noise in the transmitted monitor channel when the tracking was on. If necessary, we can look into more examples of this.
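A minimal sketch of the trigger-rate calculation under the same cuts; the trigger file and its column names are hypothetical stand-ins for however the Omicron triggers are actually retrieved.

import pandas as pd

# Hypothetical trigger table: one row per Omicron trigger, with columns
# 'time' (GPS), 'snr', and 'frequency' (Hz); cuts mirror those quoted above.
triggers = pd.read_csv('omicron_triggers_h1.csv')

cut = (triggers['snr'] > 7.5) & triggers['frequency'].between(10, 1024)
selected = triggers[cut]

# Observing time for the epoch under study (~277 hours for each epoch here)
observing_hours = 277.0
print(f'{len(selected)} triggers -> {len(selected) / observing_hours:.1f} per observing hour')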
Debasmita, Jane
We have plotted the ground motion trends in the following frequency bands and DOFs:
1. Earthquake band (0.03 Hz - 0.1 Hz) ground motion at ETMX-X, ETMX-Z, and ETMX-X tilt-subtracted
2. Wind speed (0.03 Hz - 0.1 Hz) at ETMX
3. Microseismic band (0.1 Hz - 0.3 Hz) ground motion at ETMX-X
We have also calculated the mean and median of the ground motion trends for two weeks before and after the tracking was turned off. It seems that while motion in all the other bands remained almost the same, the microseismic band (0.1-0.3 Hz) ground motion increased significantly (from a mean value of 75.73 nm/s to 115.82 nm/s) when the TMS-X tracking was turned off. Still, it produced less scattering than before, when the TMS-X tracking was on. The plots and the table are attached here.
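A minimal sketch of the before/after mean/median comparison, assuming gwpy/NDS access; the minute-trend channel name below is a placeholder for the actual 0.1-0.3 Hz ETMX ground-motion BLRMS trend.

import numpy as np
from gwpy.timeseries import TimeSeries

# Placeholder minute-trend channel for ETMX ground X motion, 0.1-0.3 Hz band
CHANNEL = 'H1:ISI-GND_STS_ETMX_X_BLRMS_100M_300M.mean,m-trend'

# Two weeks before and after the TMS tracking change on 2023-08-16
before = TimeSeries.get(CHANNEL, '2023-08-02', '2023-08-16')
after = TimeSeries.get(CHANNEL, '2023-08-16', '2023-08-30')

for label, data in [('tracking on (before)', before), ('tracking off (after)', after)]:
    print(f'{label}: mean = {np.mean(data.value):.2f} nm/s, '
          f'median = {np.median(data.value):.2f} nm/s')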
The ISS Second Loop engaged this lock with a low-ish diffracted power (about 1.5%). Oli had chatted with Jason about it, and Sheila noticed that perhaps it being low could be related to the number of glitches we've been seeing. A concern is that if the control loop needs to go "below" zero percent (which it can't do), this could cause a lockloss.
I "fixed" it by selecting IMC_LOCK to LOCKED (which opens the ISS second loop), and then selecting ISS_ON to re-close the second loop and put us back in our nominal Observing configuration. This set the diffracted power back much closer to 2.5%, which is where we want it to be.
This cycling of the ISS 2nd loop (a DC-coupled loop) dropped the power into the PRM (H1:IMC-PWR_IN_OUT16) from 57.6899 W to 57.2255 W over the course of ~1 minute, 2023-Aug-07 17:49:28 UTC to 17:50:39 UTC. It caught my attention because I saw a discrete drop in arm cavity power of ~2.5 kW while trending around looking for thermalization periods.
This serves as another lovely example where time-dependent correction factors are doing their job well, and indeed quite accurately. If we repeat the math we used back in O3 (see LHO:56118 for the derivation), we can model the optical gain change in two ways:
- the relative change estimated from the power on the beam splitter (assuming the power recycling gain is constant and cancels out):
relative change = (np.sqrt(57.6858) - np.sqrt(57.2255)) / np.sqrt(57.6858) = 0.0039977 = 0.39977%
- the relative change estimated by the TDCF system, via kappa_C:
relative change = (0.97803 - 0.974355)/0.97803 = 0.0037576 = 0.37576%
Indeed the estimates agree quite well, especially given the noise / uncertainty in the TDCF (because we like to limit the height of the PCAL line that informs it). This gives me confidence that -- at least over several-minute time scales -- kappa_C is accurate to within 0.1 to 0.2%. This is consistent with how much we estimate the uncertainty to be from converting the coherence between the PCAL excitation and DARM_ERR into uncertainty via Bendat & Piersol's unc = sqrt( (1-C) / (2NC) ).
It's nice to have these "sanity check" warm and fuzzies that the TDCFs are doing their job, but it's also nice to have a detailed record of these weird random "what's that??" moments found while trending around looking for things. I also note that there's no change in cavity pole frequency, as expected.
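A quick plain-Python check of the two estimates quoted above:

import numpy as np

# Power into the PRM before/after the ISS second-loop cycle (values from above)
p_before, p_after = 57.6858, 57.2255   # W, as used in the estimate above

# Optical-gain change estimated from sqrt(P) on the beam splitter
from_power = (np.sqrt(p_before) - np.sqrt(p_after)) / np.sqrt(p_before)

# Same change as reported by the TDCF system via kappa_C
k_before, k_after = 0.97803, 0.974355
from_kappa = (k_before - k_after) / k_before

print(f'from power:   {from_power * 100:.4f} %')   # ~0.3998 %
print(f'from kappa_C: {from_kappa * 100:.4f} %')   # ~0.3758 %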
When the circulating power dropped ~2.5 kW, kappa_c trended down, plot attached. This implies that the lower circulating powers induced in previous RH tests (73093) are not the reason kappa_c increases. We maybe see a slight increase in high-frequency noise as the circulating power is turned up, plot attached.
ENDY Station Measurement
During the Tuesday maintenance, the PCAL team (Julianna Lewis & Tony Sanchez) went to ENDY with the Working Standard Hanford, aka WSH (PS4), and took an end station measurement.
The ENDY Station Measurement was carried out according to the procedure outlined in Document LIGO-T1500062-v15, Pcal End Station Power Sensor Responsivity Ratio Measurements: Procedures and Log, and was completed by 11 am.
The first thing we did was take a picture of the beam spot, before anything was touched!
Martel:
We started by setting up a Martel voltage source to apply voltage to the PCAL chassis's Input 1 channel, and we recorded the times that -4.000 V, -2.000 V, and 0.000 V signals were sent to the chassis. The analysis code that we run after we return uses those GPS times, grabs the data, and creates the Martel_Voltage_Test.png graph. We also made a measurement of the Martel's voltages in the PCAL lab to calculate the ADC conversion factor, which is included in the document.
After the Martel measurement the procedure walks us through the steps required to make a series of plots while the Working Standard(PS4) is in the Transmitter Module. These plots are shown in WS_at_TX.png.
Next, the WS was placed in the Receiver Module; these plots are shown in WS_at_RX.png.
This is followed by TX_RX.png, which contains plots of the Transmitter Module and Receiver Module operating without the WS in the beam path at all.
The last picture is of the Beam spot after we had finished the measurement.
All of this data is then used to generate LHO_ENDY_PD_ReportV2.pdf, which is attached; it is a work in progress in the form of a living document.
All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_ENDY/
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) FrontBack responsivity ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages.pdf
avg_voltages.pdf
raw_ratios.pdf
avg_ratios.pdf
All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LabData/PS4_PS5/
A surprise BackFront PS4/PS5 responsivity ratio measurement appeared!!
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) BackFront responsivity ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages2.pdf
avg_voltages2.pdf
raw_ratios2.pdf
avg_ratios2.pdf
This adventure has been brought to you by Julianna Lewis & Tony Sanchez.
Post PCAL meeting update:
Rick, Dripta, and I have spoken at length about the recent end station report's depiction of the RxPD calibration (ct/W) plot, which looks like there is a drop in the calibration and thus that it has changed.
This is not the case, even though we see this drop on both arms in the last 3 end station measurements.
There is an observed change in the plots of the Working Standard (PS4) / Gold Standard (PS5) responsivity ratio made in the PCAL lab as well, which is why we make an in-lab measurement of the Working Standard over the Gold Standard after every end station measurement.
The timing of the change in May, the direction of the change, and the size of the change all indicate that there must be a change with either PS4 or PS5, which would have been seen in the RxPD calibration plots.
We have not seen the same change in the responsivity ratio plots involving the Gold Standard (PS5) and any other integrating sphere.
This means that the observed change in the RxPD calibration is very likely due to a change associated with the Working Standard (PS4).
Circulating power decreased by 3 kW (ndscope attached), and kappa_c decreased by 0.3%. HOM peaks moved down in frequency, as expected. High-frequency noise slightly decreased, but this could be due to the reduction in circulating power. 52 Hz jitter noise in DARM shows an increase.