I've made a script that uses a la mode to double-check the mode matching that Jason and Ryan are working on using JamMt. I'm posting it here so that people can use it in the future if they want to. It is saved in sheila.dwyer/PSL/modematching along with a copy of the a la mode code (alm).
I took the beam profile measurements that they saved in /ligo/gitcommon/labutils/beam_scans/Data/NPRO_a_21Nov2024.txt and used them for fitting. In this text file the distances are rail locations; I used the information in 80895 and compared it to what I got by fitting the data in NPRO_26Oct2024.txt to find the offset between their rail distances and their coordinate system.
From their data I get a horizontal waist of 162.1 um at -0.097 meters and a vertical waist of 182.1 um at -0.176 meters (see first attachment). This is more astigmatism than laser 1661 had. If no mode matching had been done at all with this NPRO swap, we would end up with an overlap of 87% to amp1 (sqrt(X overlap * Y overlap)).
I used the average of the vertical and horizontal waist locations and asked a la mode to optimize lenses 2 + 3 for that average waist (as Jason and Ryan were only planning to move lenses 2 + 3, to avoid changing the waist through the EOM). Then, using that lens solution to propagate the fitted x and y waists, the overlap given by sqrt(X overlap * Y overlap) for that solution is 95% for lens locations of:
label z (m) type parameters
----- ----- ---- ----------
L1 0.1020 lens focalLength: 0.2220
L2 1.1511 lens focalLength: -0.334
L3 1.2787 lens focalLength: 0.2220
This is in pretty good agreement with the photo that Jason sent me of their JamMt solution. I also plugged in their solution, and get an overlap of 94%:
label z (m) type parameters
----- ----- ---- ----------
L1 0.1020 lens focalLength: 0.2220
L2 1.1440 lens focalLength: -0.334
L3 1.2780 lens focalLength: 0.2220
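For reference, the sqrt(X overlap * Y overlap) figure of merit used above can be sketched in a few lines. This is not the a la mode code itself, just the standard power-coupling formula for two fundamental Gaussian modes, combined per axis the way this entry describes:

```python
import numpy as np

LAM = 1064e-9  # Nd:YAG wavelength [m]

def power_overlap(w1, z1, w2, z2, lam=LAM):
    """Power coupling of two fundamental Gaussian modes with waist
    sizes w1, w2 [m] located at z1, z2 [m] along the optic axis."""
    dz = z2 - z1
    return 4.0 / ((w1 / w2 + w2 / w1)**2 + (lam * dz / (np.pi * w1 * w2))**2)

def combined_overlap(ox, oy):
    """The log's figure of merit: geometric mean of per-axis overlaps."""
    return np.sqrt(ox * oy)

# sanity check: identical beams give unity overlap
ox = power_overlap(162.1e-6, -0.097, 162.1e-6, -0.097)  # -> 1.0
```

The actual 87% and 95% numbers also depend on the amp1 target mode, which is not reproduced here.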
Using the lockloss page, I picked out some times when the tag "IMC" or "FSS_OSCILLATION" was triggered. Then I made some simple time-series comparison plots between several PSL channels and PSL PEM channels. In particular, the PSL channels I used were:
H1:PSL-FSS_FAST_MON_OUT_DQ, H1:PSL-FSS_TPD_DC_OUT_DQ, H1:IMC-MC2_TRANS_SUM_IN1_DQ, H1:PSL-PWR_NPRO_OUT_DQ, H1:PSL-PMC_MIXER_OUT_DQ, H1:PSL-PMC_HV_MON_OUT_DQ. I compared those to the following PSL PEM channels:
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ, H1:PEM-CS_ACC_PSL_PERISCOPE_Y_DQ, H1:PEM-CS_ACC_PSL_TABLE1_X_DQ, H1:PEM-CS_ACC_PSL_TABLE1_Y_DQ, H1:PEM-CS_ACC_PSL_TABLE1_Z_DQ, H1:PEM-CS_ACC_PSL_TABLE2_Z_DQ, H1:PEM-CS_MIC_PSL_CENTER_DQ. For this analysis, I've only looked at 3 time periods but will do more. Below are some things I've seen so far:
From my very limited dataset so far, it seems like the most interesting area is where the PSL_PERISCOPE_X/Y and PSL_TABLE1_X/Y/Z channels are located.
For the same 3 time periods, I checked if the temporary magnetometer H1:PEM-CS_ADC_5_18_2K_OUT_DQ witnessed any glitches that correlate with the IMC/FSS/PMC channels. Attached are some more time series plots of this. During the time period on September 13th, I do not see anything interesting in the magnetometer. However, during the other two periods in November I do see a glitch that correlates with these channels. The amplitude of the glitch isn't very high (not as high as what is witnessed by the periscope_x/y and table1_x/y/z channels), but it is still there. Like in the original slides posted, I don't see any correlations between the PWR_NPRO channel and glitches in the PEM channels on any of the days.
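To make "correlates with" a bit more quantitative than eyeballing time series, one option is a normalized cross-correlation between a PSL channel and a PEM channel around the glitch time. A minimal numpy sketch with synthetic data (the glitch shape and offset here are illustrative, not from the actual channels):

```python
import numpy as np

def normalized_xcorr(a, b):
    """Normalized cross-correlation of two equal-length series.
    A peak near +/-1 indicates a common glitch; the peak location
    gives the lag of one channel relative to the other in samples."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode='full')

# synthetic check: the same Gaussian glitch in two channels, offset 10 samples
n = np.arange(1000)
glitch = np.exp(-0.5 * ((n - 500) / 5.0)**2)
c = normalized_xcorr(glitch, np.roll(glitch, 10))
peak_lag = c.argmax() - (len(glitch) - 1)   # -10: second channel lags by 10
```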
The stem wall pour is completed, forms are stripped, and it has been entirely back-filled. Weather has prevented the pour of the slab this week, so Jake has shifted his attention to erecting the metal framework of the building. The forecast is not favorable for a slab pour tomorrow, so it's likely to take place early next week. T. Guidry
[Debasmita, Anamaria]
We wanted to check if there is any microseismic coupling at LHO, similar to that of LLO. So far, for LLO our understanding is that the microseismic ground motion (0.1-0.5 Hz) is being amplified by a potential scatterer which has a transfer function as shown in the first image. This transfer function was modeled based on some other information and seems to explain the LLO noise quite well (LLO aLOG 73845).
For LHO, we amplified the ground motion by the same transfer function and simulated the scatter shelf. If we assume that the scattered light amplitude is the same as LLO's (4e-10), given the amount of microseismic ground motion at LHO, the scatter shelf falls below the current DARM spectra of LHO (second image). Even if we double the light amplitude, the scatter shelf does not show up above the present noise background (third image), but it comes quite close to the noise floor.
But if we double the microseismic ground motion and then further amplify it by the model transfer function, the scatter shelf shows up above the present DARM spectra (fourth image). This shows that the same source which is producing the microseismic scattering at LLO can produce similar noise at LHO, but it will only be visible if the IFO is locked during somewhat high microseismic motion. Even if the amount of light is half of what it is at LLO, the scatter shelf would still be visible if the ground motion is higher (fifth image).
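The "scatter shelf" in these simulations comes from fringe wrapping: once the scatterer motion exceeds a half wavelength, the sine nonlinearity upconverts the slow microseismic motion into a broadband shelf. A minimal sketch of the projection (the 4e-10 amplitude is the value assumed from the LLO model above; the motion here is an illustrative sine, not the measured ground motion):

```python
import numpy as np

LAM = 1064e-9   # laser wavelength [m]
A_SC = 4e-10    # scattered-light amplitude assumed from the LLO model (strain)

def scatter_projection(x, amp=A_SC, lam=LAM):
    """Fringe-wrapped scattered-light signal for scatterer motion x(t) [m]."""
    return amp * np.sin(4 * np.pi * x / lam)

# doubling the motion doubles how fast the fringes wrap, pushing the
# shelf to higher frequency, which is why it then clears the noise floor
fs = 256
t = np.arange(0, 64, 1 / fs)
x = 0.5e-6 * np.sin(2 * np.pi * 0.2 * t)   # 0.5 um of 0.2 Hz microseism
h_nominal = scatter_projection(x)
h_doubled = scatter_projection(2 * x)
```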
I used the ground motion as it was during 2023-12-06 11:00:00 UTC at LHO. This is one of the high microseismic days of LHO, the median of rms ground motion in 0.1-0.3 Hz was 689.66 nm/s and in 0.3-1.0 Hz was 95.02 nm/s.
In the last image, I plot all the simulated traces together for a better visualization of how the light amplitude and amount of motion can modify the scatter shelf.
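The band-limited RMS numbers quoted above can be computed several ways; a minimal FFT-based sketch using Parseval's theorem (the pure-tone input below is a self-check, standing in for the actual seismometer velocity data in nm/s):

```python
import numpy as np

def blrms(x, fs, f_lo, f_hi):
    """Band-limited RMS of time series x (sample rate fs [Hz]) over
    [f_lo, f_hi] Hz, via Parseval's theorem on the one-sided FFT."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    # factor 2 restores the power in the discarded negative frequencies
    return np.sqrt(2.0 * np.sum(np.abs(X[band])**2) / len(x)**2)

# self-check: a 0.2 Hz tone with rms 689.66 should land in the 0.1-0.3 Hz band
fs = 8
t = np.arange(0, 500, 1.0 / fs)
x = 689.66 * np.sqrt(2) * np.sin(2 * np.pi * 0.2 * t)
```

In practice a Hann window (with the appropriate normalization) or a time-domain bandpass would be used on real, non-periodic data; this bare-FFT version is exact only for tones on FFT bins.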
WP12214
Marc, Ryan C, Dave:
h1susex has been fenced from Dolphin and powered down in preparation for re-establishing the LIGO DAC 28ao32 as the driver of the ETMX L1/L2 and ESD channels, essentially undoing what was done on Tuesday.
Marc has completed the cable move, h1susex is powered back up.
All watchdogs have been cleared, Ryan is recovering SUSTEMX, SUSTMSX and SUSETMXPI.
Thu21Nov2024
LOC TIME HOSTNAME MODEL/REBOOT
13:37:36 h1susex h1iopsusex
13:37:49 h1susex h1susetmx
13:38:02 h1susex h1sustmsx
13:38:15 h1susex h1susetmxpi
Fil is borrowing one of the PEM ADC channels I was borrowing for the Guralp 3T huddle testing. He unplugged one of the horizontal readbacks from the Guralp on ADC4, leaving the other 2 DOFs plugged in. The channel he connected to is ADC 4 28, H1:PEM-CS_ADC_4_28_2K_OUT_DQ.
Thu Nov 21 10:11:56 2024 INFO: Fill completed in 11min 53secs
FAMIS26472
All trends look relatively stable, but in the last week EX has started to see some small control output and a minor downward trend in pressure. Definitely worth keeping an eye on this week.
The PSL NPRO swap has begun per work permit 12210 and I have changed the OPS mode to CORRECTIVE MAINTENANCE.
TITLE: 11/21 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.57 μm/s
QUICK SUMMARY:
As expected, the FSS could not stay locked by itself. This overnight test was with the O3 NPRO + the different power supply.
With only the laser+PMC+FSS, the PSL FSS unlocked >8 times overnight. Each time the bigger +/- 0.1 V PMC mixer glitches were observed. SQZ TTFSS fiber mixer also sees the PMC mixer glitches.
The attached plots show the PMC mixer error signal and the ISS PDB readout. The latter implements a fair amount of whitening, so it tends to fluctuate at a slow rate compared to the glitches.
The PMC error signal goes as high as 1.25Vpp during a PDH sweep. The channel recorded by the DAQ has a lot of gain and clips around +/-80mV, but it is calibrated correctly. So, the calibration is 1.25V/1.19MHz = 1.05V/MHz.
First plot shows a train of glitches, going as high as ~70mVpk (or ~70kHz).
The second plot shows a zoomed-in version of a glitch of about the same size. The PMC servo drives the PZT through a 3.3k series resistor, forming a ~1kHz low pass with the PZT capacitance of ~45nF. This yields a characteristic time constant of ~150us. The glitches as seen by the PMC mixer are at least a few times faster than this.
The third plot shows a PDH scan.
The fourth plot shows a typical trace when nothing happens.
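A quick check of the numbers quoted above (RC time constant and mixer calibration, component values as given in this entry):

```python
import math

R = 3.3e3   # series resistor [ohm]
C = 45e-9   # PZT capacitance [F]

tau = R * C                       # characteristic time constant [s] -> ~150 us
f_c = 1.0 / (2 * math.pi * tau)   # low-pass corner frequency [Hz] -> ~1 kHz

# mixer calibration: 1.25 Vpp across the 1.19 MHz PDH linewidth
cal = 1.25 / 1.19                 # -> ~1.05 V/MHz
```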
TITLE: 11/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: We spent the day in IDLE again investigating the PSL issues; we've had only the PMC locked for most of the day. At the end of the shift, the power supply for the PSL was swapped. Microseism has slowly started to come down in the last ~5 hours. We are going to remain DOWN again tonight.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:17 | SAF | LVEA LASER SAFE | LVEA | N | LVEA IS LASER SAFE | 01:17 |
17:05 | PSL | Jason | CER, LVEA | N | Revive the PSL NPRO | 17:17 |
17:26 | FAC | Karen | Optics lab, vac prep | N | Tech clean | 18:26 |
17:40 | PSL | Jason, Marc | CER | N | Turn off PMC HV and unplug db37 cable | 17:46 |
18:41 | FAC | Christina | Mids | N | Check out storage | 20:02 |
18:48 | PSL | Jason | LVEA, CER | N | Take pictures | 18:51 |
19:03 | FAC | Kim | H2 enc | N | Tech clean | 20:17 |
19:13 | FAC | Karen | Woodshop, fire pump | N | Tech clean | 19:26 |
19:25 | VAC | Janos | MidY | N | Mech room, pump checks | 20:25 |
21:37 | VAC | Janos | MidX | N | Pump checks | 22:00 |
23:10 | PSL | Jason, RyanS | LVEA, Optics lab | N | Grab power supply and swap it | 23:43 |
23:56 | PSL | Fil | CER | N | Plug back in DB37 for the FSS | 00:08 |
Yesterday I went to check on the chillers and found a bit of coolant on the CO2Y chiller air filter and on the bottom of the unit. The last time we saw this we had inconsistent supply temperatures, something we don't really see now but might explain our more frequent lock losses lately (alog 81362). We'll keep an eye on this and swap the chiller if necessary. It would be great if this could make it another 6 months and then we can send them in for service.
I replaced the air filter since it had a handful of dead bugs, coolant, and other junk on it. I used up our last filter so I'll have to order more.
Some of the "PSL story so far" is in 81193. Code saved to /camilla.compton/Documents/Locklosses/PSL_channels.ipynb
I've plotted H1:PSL-PWR_NPRO_OUT_DQ and H1:PSL-FSS_FAST_MON_OUT_DQ in the 30 seconds and 1 second before each lockloss tagged both IMC and FSS by the lockloss tool.
The FSS oscillations we are seeing now are similar to the FSS_OSCILLATION locklosses we saw at the end of O3b (G2201762: O3a_O3b_summary.pdf). Note that the current NPRO is the same NPRO we used in O3. Is it possible that we fixed the glitch issues but reinstalled a laser with issues similar to those we saw in O3b? We could revisit that time period to see if we had similar IMC locking issues then too.
Reminder that the lockloss tool tags "IMC" if the IMC loses lock within 50ms of the arms (ASC_AS_A), and tags "FSS" if H1:PSL-FSS_FAST_MON_OUT_DQ goes outside of +/- 3 within 1-5 seconds before the lockloss.
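As a sketch, the FSS criterion described here amounts to a window-and-threshold check; a minimal version (sample rate and array layout are illustrative, and the real lockloss tool's implementation may differ in detail):

```python
import numpy as np

def tags_fss(fss_fast, fs, t_lockloss):
    """Tag 'FSS' if H1:PSL-FSS_FAST_MON_OUT_DQ leaves +/-3 in the
    1-5 s before the lockloss. fss_fast holds samples starting at t=0,
    fs is the sample rate [Hz], t_lockloss is in seconds."""
    i0 = int((t_lockloss - 5) * fs)
    i1 = int((t_lockloss - 1) * fs)
    return bool(np.any(np.abs(fss_fast[max(i0, 0):i1]) > 3))

# synthetic example: an excursion to 5 V at t=7 s, lockloss at t=10 s
fs = 16
data = np.zeros(10 * fs)
data[7 * fs] = 5.0
```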
Adding in plots from the end of O3b: 30 seconds and zoomed to 1 second before locklosses tagged FSS. The FSS channel looks similar to after the NPRO swap, but was worse in O3b.
Comparing spectra of the FSS_FAST_MON (calibrated Hz/rtHz), ISS_AOM_DRIVER control signal, and FSS_TPD ref cav trans.
FSS FAST MON looks higher for the O3 laser vs. the original O4 laser (maybe this is related to the laser itself?). But the ISS spectrum of the same O3 laser was noisier in O3 than in O4, and similarly the FSS_TPD for the same O3 laser was noisier in O3 than in O4.
Since the secondary microseism has been very high this evening and preventing H1 from locking, we decided to leave just the PMC locked (no FSS, ISS, or IMC) for an extended time and watch for any glitches. At around 23:45 UTC, we unlocked the FSS, Richard turned off the high voltage supply for the FSS, and Jason and I unplugged the DB37 cable from the FSS fieldbox in the PSL-R1 rack in order to ensure no feedback from the FSS made it to the NPRO. Pictures of the DB37 cable's location are attached.
The first attachment shows the changes seen when the FSS was unlocked. Since then, I've seen several instances of groups of glitches come through, such as those shown in the second and third attachments. These glitches in the PMC_MIXER channel are smaller than ones seen previously that have unlocked the IMC (like in alog 81228). There have also been times where the PMC_MIXER channel gets "fuzzier" for a bit and then calms down, shown in the fourth attachment; it's possible this is due to the NPRO frequency not being controlled, so the PMC sees some frequency changes. Finally, I only recall one instance of the NPRO jumping in power like in the final attachment; the PMC doesn't seem to care much about this, only having one very small glitch at this time.
I'll leave the PSL in this configuration to collect more data overnight as the secondary microseism is still much too high for H1 to successfully lock.
A zoom-in on some of the glitches from the third figure above.
After Ryan's shift ended last night, there were some larger glitches, with a similar amplitude in the PMC mixer channel to the ones that we saw unlocking the reference cavity 81356 (and IMC in 81228)
The first plot shows one of these times with larger glitches, the second one zooms in for 60ms when the glitches were frequent, this looks fairly similar to Peter's plot above.
The period of large glitches started around 2 am (7:37 UTC on Nov 20th), and ended when a power glitch turned off the laser at 7 am (15:00 UTC) 81376. Some of the small glitches in that time frame seem to be at the same time that the reference cavity was resonating (with low transmission), but many of the large glitches do not line up with times when the reference cavity was resonating.
I've zoomed in on most of the times when the PMC mixer glitches reached 0.1, and see that there are usually small jumps in NPRO power at the time of these glitches, although the times don't always line up well, and the small power glitches are happening very often, so this might be a coincidence.
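Picking out the times when the mixer signal reaches 0.1 can be automated with a simple edge detector rather than zooming in by hand; a minimal sketch (the 0.1 threshold is the figure quoted above; the data and sample rate here are synthetic):

```python
import numpy as np

def glitch_times(x, fs, threshold=0.1):
    """Times [s] at which |x| first crosses the threshold; counting
    only rising edges so each glitch is reported once."""
    above = np.abs(x) >= threshold
    edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return edges / fs

# synthetic example: two glitches in 10 s of data sampled at 10 Hz
fs = 10
x = np.zeros(100)
x[30:33] = 0.15    # glitch at t = 3.0 s (several samples above threshold)
x[70] = -0.2       # glitch at t = 7.0 s
```

Coincidence with the NPRO power jumps could then be checked by running the same detector on PWR_NPRO and comparing the two lists of times.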
Sheila, Jason, Vicky - Compared the PSL + PMC mixer glitches between last night (Nov 22, 2024, no FSS no ISS) and the emergency vent (Aug 2024, PSL+PMC+FSS+ISS), as in 81354.
As a reference, "before" during the emergency vent in August 2024, the Laser + PMC + FSS + ISS were all locked with no PMC mixer glitches for >1 month.
Updating our matrix of tests to isolate the problem, and thinking things through:
Before (vent Aug 2024) | Now (Nov 2024) |
laser + PMC + FSS + ISS good = no glitches | laser + PMC + FSS bad = glitches 81356 |
laser + PMC ??? (presumably good) | laser + PMC bad = same PMC mixer glitches 81371 |
1) Are these +/-0.1 V PMC mixer glitches the problem? Yes, probably.
2) Are these big PMC mixer glitches caused or worsened by the FSS? No. PMC mixer glitches basically same with FSS on 81356 and off 81371.
3) Are the laser + PMC mixer glitches new? Yes, probably. If these PMC glitches were always there, could it be that we were previously able to ride them out, and not now? But this would imply that, in addition to the new glitches, the FSS secondarily degraded. That seems very unlikely: several problems (bad amp, new EOM needing a new notch, etc.) have already been fixed in the FSS, and the FSS OLTFs and in-loop spectra look good. FSS on/off does not change the PMC mixer glitches, so the problem seems most likely to be the laser or the PMC.
Sheila, Daniel, Jason, Ryan S, many others
We think the problem is not the PMC, and likely the laser.
Daniel looked at the PMC mixer glitches on the remote scope: 81390. If the PMC mixer glitches are indicative of the problem, we can try to track down the origin of the glitches.
Talking with Jason and RyanS, some summary of the NPROs available: