While the PSL team finishes up their work, I wanted to get a jump start on alignment, so I moved the ITMs, ETMs, and TMSs back to where their top mass OSEMs said they were the last time we had good transmission on both ALSX and ALSY while trying to lock. This indeed produced somewhat okay flashes on both ALSs, so it will be a fine place to start from once the reference cavity is locked again.
However, I found that TMSY's pitch slider has a minus sign with respect to the OSEM readbacks for pitch. This is the only slider (pitch or yaw, across all 6 optics that affect ALS alignment for either arm) that seems to have this issue.
In the attached screenshot, when I try to move the TMS up, the OSEM readbacks say it went down, and vice versa.
Not an urgent matter, but it may make it more challenging for ALS auto-alignment scripts to do their work.
TITLE: 11/22 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY:
NPRO swap work continued today by Jason, Vicky, and Ryan, with quite a bit of progress this morning, but the PSL crew is still on the floor (currently seeing flashes of RefCav Trans), so the status of a return to observing tonight is not clear. (I will be in for my DAY shift tomorrow.)
Trend of the new PSL HV monitor H1:PEM-CS_ADC_4_28_2K_OUT_DQ is attached. It has been flat/constant for most of the last 24+ hrs, but in the last couple of hours it has moved a bit (correlated with PMC locking or the ISS?).
LOG:
Adam Mullavey from LLO made some code to move the ETM and TMS in a spiral to scan for flashes in the arm. LLO has been using this during O4; as an automation step, it is comparable to our Increase_Flashes state. Increase_Flashes is very simple in how it works: move one direction, one degree of freedom at a time, look for better cavity flashes, and move the other way if they get worse. While this is reliable, it is very slow, since we have to wait for one period of the quad between each step (20 seconds) to ensure we don't miss a flash.
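For reference, the Increase_Flashes logic described above can be sketched roughly as follows. This is a minimal illustration with hypothetical helper callables (move, peak_flash, wait), not the actual guardian code:

```python
def increase_flashes(move, peak_flash, dofs=("pit", "yaw"), step=0.5,
                     wait=lambda: None, n_steps=10):
    """Crude hill climb: move(dof, amount) applies an alignment offset;
    peak_flash() returns the best transmission flash seen since last call."""
    best = peak_flash()
    for dof in dofs:
        direction = +1
        for _ in range(n_steps):
            move(dof, direction * step)
            wait()                  # ~20 s in reality: one quad pendulum period
            now = peak_flash()
            if now < best:
                direction *= -1     # flashes got worse: reverse direction
            best = max(best, now)
    return best
```

The reversal-on-worse step is what makes it reliable but slow: each probe costs a full pendulum period of waiting.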
The last two days I spent some time converting Adam's state, Scan_Alignment, for use at LHO: adjusting thresholds and other parameters, and trying to improve parts of it so it might work a bit more reliably. The most notable change I've made is to get data from the fast channel for the arm transmission, rather than collecting slow channel data. This seemed to help a bit, but it relies entirely on NDS calls that we've historically found to be less than 100% reliable. Thanks to that change, I've also been able to lower the minimum thresholds, which allows it to start from basically no light in the cavity and bring it up to a decent alignment.
After these changes it really does improve the flashes from a very misaligned starting point, but I'm not sure it's any faster than the Increase_Flashes state. In the attached example it took around 20-30 min to go from little to no light to a decent amount. I'm testing this without the PLL and PDH locked, so it's hard to say exactly how well aligned it is and how much better it can get. Next I'd like to take some time on a Tuesday to test with a PDH- and PLL-locked ALS, and compare its speed against Increase_Flashes for both a very misaligned cavity and a barely misaligned cavity.
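The spiral pattern itself can be sketched as an Archimedean spiral in (pitch, yaw) offset space, stepped at roughly constant arc length so coverage stays uniform. This is illustrative only, with made-up parameter names, not the Scan_Alignment implementation:

```python
import math

def spiral_points(max_radius, step):
    """Yield (pit, yaw) offsets spiraling outward from (0, 0); successive
    points are roughly 'step' apart, and adjacent rings are ~'step' apart."""
    r, theta = 0.0, 0.0
    while r <= max_radius:
        yield (r * math.cos(theta), r * math.sin(theta))
        theta += step / max(r, step)    # ~constant arc length per point
        r = step * theta / (2 * math.pi)
```

Scanning both a test mass and the TMS along a pattern like this covers the 2D misalignment space without the one-axis-at-a-time waiting that Increase_Flashes does.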
I've committed my changes to ALS_ARM.py and ALS_YARM.py (both in common) to the SVN. This created a new state - SCAN_ALIGNMENT - that I'll keep there, but it isn't in the state graph, so it cannot be reached.
I've commented out the new import in the ALS_ARM guardian, since it was preventing reload of the ALS guardians.
Closes FAMIS 26342. Last checked in alog 81307.
All trends look good. No fans above or near the (general) 0.7 ct threshold. Screenshots attached.
Here are some plots relevant to understanding our uptime and downtime from the start of O4 until Nov 13th, with some comparisons to Livingston. I'm looking at times when the interferometer is in the low-noise state (for H1, ISC_LOCK >= 600; for L1, ISC_LOCK >= 2000).
The first pie chart shows which guardian states we spend the most time in; this is pretty similar to what Mattia reported in 79623.
The histograms of lock segment lengths and relocking times show that L1's lock stretches are longer than H1's (and that H1 has a lot of short locks), and that we've had 34 individual instances where we were down for longer than 12 hours.
The rolling and cumulative average plot shows how the drop in duty cycle in O4b compared to O4a is due to individual problems, including the OFI break, pressure spikes, and laser issues.
Lastly, the final plot shows how we accumulate uptime and downtime, binned by segment length. This shows that L1 accumulates more uptime than Hanford by having more locks in the 30-50 hour range. The downtime accumulation shows that just under half of our downtime is from times when we were down for more than 16 hours (serious problems), and about 1/4 of it is due to routine relocking that takes less than 2.5 hours.
The script and data used to make these plots can be found in DutyCycleO4a.py and H1(L1)ISCLockState_04.txt in this git repo.
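The segment bookkeeping behind these histograms can be sketched as below. This is a hedged, self-contained illustration of the approach (state samples in, lock segments and binned durations out); function names are mine, not taken from DutyCycleO4a.py:

```python
def lock_segments(samples, threshold=600):
    """samples: iterable of (time, guardian_state); returns [(start, end), ...]
    segments where state >= threshold (low noise), e.g. 600 for H1 ISC_LOCK."""
    segments, start, last_t = [], None, None
    for t, state in samples:
        locked = state >= threshold
        if locked and start is None:
            start = t                    # lock acquired
        elif not locked and start is not None:
            segments.append((start, t))  # lockloss: close the segment
            start = None
        last_t = t
    if start is not None:                # still locked at end of data
        segments.append((start, last_t))
    return segments

def duration_histogram(segments, bin_edges):
    """Total locked time accumulated per duration bin (same units as times)."""
    totals = [0.0] * (len(bin_edges) - 1)
    for s, e in segments:
        d = e - s
        for i in range(len(totals)):
            if bin_edges[i] <= d < bin_edges[i + 1]:
                totals[i] += d
                break
    return totals
```

Accumulating total time per bin (rather than counting segments) is what makes the 30-50 hour locks dominate the uptime picture even though short locks are more numerous.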
Closes FAMIS#26018, last checked 81331
Nothing looks out of the ordinary.
Yesterday Gerardo noticed that the southernmost section of the EY wind fence has some broken cables on the lower half, on the panel we did NOT replace last summer. I think we have a couple of ways we could go about repairing this. We will discuss options, but the weather is not great for this kind of work.
Fri Nov 22 10:10:42 2024 INFO: Fill completed in 10min 39secs
Gerardo confirmed a good fill curbside through the tumbleweeds. Minimum TC temps are getting close to their trip temps (trip=-100C, TCmins=-117C,-115C). I have increased the trip temps to -90C for tomorrow's fill. Looking at a yearly trend, we had to do this on 25 Nov last year, so we are on schedule.
On Tuesday NOV 19th, Eric started the replacement of the ceiling light fixtures in the PCAL Lab.
Francisco and I grabbed 3 HAM Door covers to stretch out over the PCAL table to minimize dust particles on the PCAL optical bench.
I also put all the spheres away in the cabinet and made sure that they all had their aperture covers on.
I went in to shutter the laser using the internal PCAL TX module shutter, and the shutter stopped working.
I then powered off the laser, removed the shutter, and repaired it in the EE lab.
Put in an FRS ticket: https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=31730
The shutter is repaired and just waiting for the FAC team to finish the light fixture replacement before it is reinstalled.
Update: Friday NOV 22nd, there are 2 more light fixtures in the PCAL Lab that need to be replaced, one directly above the PCAL optics table.
PCAL Laser Remains turned off with the key out.
Since this event, the lab shutter inside the Tx module hasn't been reading back correctly. Today Fil and I figured out why, after replacing the switching regulator LM22676.
There is a reed switch Meder 5-B C9 on the bottom of the PCB that gets switched on via a magnet mounted on the side of the shutter door.
One of these reed switches is stuck in the "closed" position, which leaves the OPEN readback LED on all the time. Parts incoming.
Obviously, lots of reds and zeros due to ongoing NPRO work.
Laser Status:
NPRO output power is 1.839W
AMP1 output power is -0.6214W
AMP2 output power is 0.07093W
NPRO watchdog is GREEN
AMP1 watchdog is RED
AMP2 watchdog is RED
PDWD watchdog is RED
PMC:
It has been locked 0 days, 0 hr 0 minutes
Reflected power = -0.4062W
Transmitted power = -0.02552W
PowerSum = -0.4317W
FSS:
It has been locked for 0 days 0 hr and 0 min
TPD[V] = -0.01703V
ISS:
The diffracted power is around 4.0%
Last saturation event was 0 days 22 hours and 52 minutes ago
Possible Issues:
AMP1 power is low
AMP2 power is low
AMP1 watchdog is inactive
AMP2 watchdog is inactive
PDWD watchdog is inactive
FSS TPD is low
Service mode error, see SYSSTAT.adl
TITLE: 11/22 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 15mph Gusts, 9mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.30 μm/s
QUICK SUMMARY:
H1 is DOWN due to continued work on another PSL NPRO swap.
Microseism has drifted beneath the 95th percentile over the last 24 hrs. Winds are somewhat low and it was a rainy drive in.
J. Oberling, R. Short
Since we've convinced ourselves that the new glitching is coming from the recently-installed NPRO SN 1661, we are swapping it out for our final spare, SN 1639F. This is the NPRO originally installed with the aLIGO PSL back in the 2011/2012 timeframe; it was removed from operation in late 2017 (natural degradation of pump diodes) and sent back to Coherent for refurbishment, and has not been used outside of brief periods in the OSB Optics Lab since.
We first installed the power supply for this NPRO and tweaked the potentiometers on the remote board to make sure our readbacks in the PSL Beckhoff software were correct. We then swapped the NPRO laser head on the PSL table and got the new one in position. The injection current was set for ~1.8 W of output power; we need 1.945 A for ~1.805 W output from the NPRO. We tested the remote ON/OFF, which worked, and the remote noise eater ON/OFF, which also worked. We optimized the polarization cleanup optics (a QWP/HWP/PBSC triple combo for turning the NPRO's naturally slightly elliptically polarized beam into vertically polarized w.r.t. the PSL tabletop). The power was turned down and the beam was roughly aligned using our alignment irises (with the mode matching lenses removed). At this point we did a beam propagation measurement and Gaussian fit in prep for mode matching to Amp1. The results:
Using this we got a preliminary mode matching solution in JamMT, using the same lenses we used for NPRO SN 1661, so we installed them. I managed to get a picture before JamMT crashed on us; see the first attachment. Before tweaking mode matching we checked our polarization into Faraday isolator FI01. We have ~1.602 W in transmission of FI01 with ~1.701 W input, a throughput of ~94.2%. We then proceeded with optimizing the mode matching solution. It took several iterations (7, to be exact), but we were finally able to get the beam waist and position correct for Amp1 (the target is a 165 µm waist 60 mm in front of Amp1, or 1794.2 mm from the NPRO):
To finish, we set up a temporary PBSC to check that the polarization going into Amp1 was vertical w.r.t. the PSL tabletop. We put a power meter in transmission of the temporary PBSC and adjusted WP02 to minimize the transmitted power; the lowest we could get was 0.41 mW (with a roughly 1.6 W beam) in the wrong polarization, which matches what we had during the most recent NPRO swap, so we were good to go here. We forgot to measure the final lens positions before leaving the enclosure; we will do that first thing tomorrow morning. The new NPRO is now set, aligned, and mode matched to Amp1, and we will continue with amplifier recovery tomorrow.
We left the NPRO running overnight with the enclosure in Science mode; the first shutter (between the NPRO and Amp1) is closed.
TITLE: 11/21 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: The IFO was down all day as the NPRO swap started today and is still ongoing as of 16:30.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:17 | SAF | LVEA LASER SAFE | LVEA | N | LVEA IS LASER SAFE | 01:17 |
16:50 | PSL | Jason, RyanS | PSL enc | Y | Start the NPRO swap | 20:16 |
17:57 | SEI | Jim, Mitch | EndX | N | HEPI investigation | 18:31 |
18:00 | SUS | Ibrahim, Oli | CER | N | Going to the sock shop, for the BBSS | 18:19 |
18:02 | EE/PSL | Fil | CER | N | Pull cable for PMC HV, turn off HV | 19:20 |
18:07 | EPO | Rick | LVEA | N | Tour group | 18:45 |
18:11 | SUS | Betsy | CER | N | Join BBSS sock search | 18:19 |
18:26 | VAC | Gerardo | EndX then Y | N | VAC pump checks | 19:57 |
18:32 | FAC | Tyler + vendor | Carpenters shop | N | Backflow checks | 19:12 |
18:52 | FAC | Karen | Optics lab, vac prep | N | Tech clean | 19:28 |
19:15 | VAC | Jordan | LVEA | N | Measure a pump | 19:24 |
19:24 | VAC | Jordan | EndY | N | Pump checks | 19:57 |
19:43 | FAC | Karen | Wood shop | N | Tech clean | 19:53 |
20:46 | ALS | TJ | CR | N | ALS Y testing | On going |
21:01 | EE | Marc | EndX then Midy | N | DAC swap then parts grab | 22:10 |
21:38 | PSL | Jason, RyanS | PSL encl | Y | NPRO swap | On going |
22:39 | VAC | Gerardo, Jordan | EndY | N | Scroll pump checks | 23:27 |
23:28 | VAC | Gerardo, Jordan | EndX | N | Scroll pumps | 00:04 |
00:15 | SEI | Jim | His office | N | Remote SEI testing | On going |
Trending the new PSL HV monitor H1:PEM-CS_ADC_4_28_2K_OUT_DQ from alog81401
I've made a script that uses a la mode to double-check the mode matching that Jason and Ryan are working on using JamMT. I'm posting it here so that people can use it in the future if they want to. It is saved in sheila.dwyer/PSL/modematching along with a copy of the a la mode code (alm).
I took the beam profile measurements that they saved in /ligo/gitcommon/labutils/beam_scans/Data/NPRO_a_21Nov2024.txt and used them for fitting. In this text file the distances are rail locations; I used the information in 80895, and compared it to what I got by fitting the data in NPRO_26Oct2024.txt, to find the offset between their rail distances and their coordinate system.
From their data I get a horizontal waist of 162.1 µm at -0.097 m and a vertical waist of 182.1 µm at -0.176 m (see first attachment). This is more astigmatism than laser 1661 had. If no mode matching had been done at all with this NPRO swap, we would end up with an overlap of 87% to Amp1 (sqrt(X overlap * Y overlap)).
I used the average of the vertical and horizontal waist locations and asked a la mode to optimize lenses 2 + 3 for that average waist (Jason and Ryan were only planning to move lenses 2 + 3, to avoid changing the waist through the EOM). Then, using that lens solution to propagate the fitted x and y waists, the overlap given by sqrt(X overlap * Y overlap) is 95% for lens locations of:
label z (m) type parameters
----- ----- ---- ----------
L1 0.1020 lens focalLength: 0.2220
L2 1.1511 lens focalLength: -0.334
L3 1.2787 lens focalLength: 0.2220
This is in pretty good agreement with the photo that Jason sent me of their JamMT solution. I also plugged in their solution and get an overlap of 94%:
label z (m) type parameters
----- ----- ---- ----------
L1 0.1020 lens focalLength: 0.2220
L2 1.1440 lens focalLength: -0.334
L3 1.2780 lens focalLength: 0.2220
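The sqrt(X overlap * Y overlap) figure used above can be reproduced with the standard power-coupling formula for two fundamental Gaussian modes on a common axis. This is a sketch independent of a la mode; the function names are mine:

```python
import math

def gaussian_overlap(w1, w2, dz, lam=1064e-9):
    """Power coupling of two fundamental Gaussian modes on a common axis:
    waists w1, w2 [m] separated by dz [m] along the axis, wavelength lam [m].
    Equal waists at the same location give 1.0."""
    return 4.0 / ((w1 / w2 + w2 / w1) ** 2
                  + (lam * dz / (math.pi * w1 * w2)) ** 2)

def astigmatic_overlap(wx1, zx1, wy1, zy1, wx2, zx2, wy2, zy2, lam=1064e-9):
    """sqrt(X overlap * Y overlap) for an astigmatic beam against a target."""
    ox = gaussian_overlap(wx1, wx2, zx2 - zx1, lam)
    oy = gaussian_overlap(wy1, wy2, zy2 - zy1, lam)
    return math.sqrt(ox * oy)
```

Plugging the fitted waists (162.1 µm and 182.1 µm at their respective locations) against the Amp1 target waist gives numbers of the same kind as the 87%/95%/94% overlaps quoted above, though the exact values depend on the target waist location used.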
Using the lockloss page, I picked out some times when the tag "IMC" or "FSS_OSCILLATION" was triggered. Then I made some simple time-series comparison plots between several PSL channels and PSL PEM channels. In particular, the PSL channels I used were:
H1:PSL-FSS_FAST_MON_OUT_DQ, H1:PSL-FSS_TPD_DC_OUT_DQ, H1:IMC-MC2_TRANS_SUM_IN1_DQ, H1:PSL-PWR_NPRO_OUT_DQ, H1:PSL-PMC_MIXER_OUT_DQ, H1:PSL-PMC_HV_MON_OUT_DQ. I compared those to the following PSL PEM channels:
H1:PEM-CS_ACC_PSL_PERISCOPE_X_DQ, H1:PEM-CS_ACC_PSL_PERISCOPE_Y_DQ, H1:PEM-CS_ACC_PSL_TABLE1_X_DQ, H1:PEM-CS_ACC_PSL_TABLE1_Y_DQ, H1:PEM-CS_ACC_PSL_TABLE1_Z_DQ, H1:PEM-CS_ACC_PSL_TABLE2_Z_DQ, H1:PEM-CS_MIC_PSL_CENTER_DQ. For this analysis, I've only looked at 3 time periods but will do more. Below are some things I've seen so far:
From my very limited dataset so far, it seems like the most interesting area is where the PSL_PERISCOPE_X/Y and PSL_TABLE1_X/Y/Z channels are located.
For the same 3 time periods, I checked whether the temporary magnetometer H1:PEM-CS_ADC_5_18_2K_OUT_DQ witnessed any glitches that correlate with the IMC/FSS/PMC channels. Attached are some more time series plots of this. During the time period on September 13th, I do not see anything interesting in the magnetometer. However, during the other two periods in November I do see a glitch that correlates with these channels. The amplitude of the glitch isn't very high (not as high as what is witnessed by the periscope_x/y and table1_x/y/z channels), but it is still there. As in the original slides posted, I don't see any correlation between the PWR_NPRO channel and glitches in the PEM channels on any of the days.
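The coincidence check described above amounts to looking for a correlated peak between two channels around a candidate glitch time. A generic normalized cross-correlation sketch (the real analysis fetches the DQ channels via NDS and plots them; this is synthetic-data illustration only):

```python
import math

def norm_xcorr(a, b):
    """Normalized cross-correlation of two equal-length series.
    Returns (peak value, lag in samples): lag is where a[i]*b[i-lag]
    overlap best, so a glitch in b arriving later shows a negative lag."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    a = [x - ma for x in a]          # remove DC offsets
    b = [x - mb for x in b]
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    best, best_lag = 0.0, 0
    for lag in range(-n + 1, n):
        s = sum(a[i] * b[i - lag]
                for i in range(max(0, lag), min(n, n + lag)))
        c = s / (na * nb) if na and nb else 0.0
        if abs(c) > abs(best):
            best, best_lag = c, lag
    return best, best_lag
```

A strong, near-zero-lag peak between (say) PMC_MIXER and a periscope accelerometer around a lockloss is what "the glitch correlates with these channels" would look like quantitatively.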
WP12214
Marc, Ryan C, Dave:
h1susex has been fenced from Dolphin and powered down in preparation for re-establishing the LIGO DAC 28ao32 as the driver of the ETMX L1/L2 and ESD channels, essentially undoing what was done on Tuesday.
Marc has completed the cable move, h1susex is powered back up.
All watchdogs have been cleared; Ryan is recovering SUSETMX, SUSTMSX and SUSETMXPI.
Thu21Nov2024
LOC TIME HOSTNAME MODEL/REBOOT
13:37:36 h1susex h1iopsusex
13:37:49 h1susex h1susetmx
13:38:02 h1susex h1sustmsx
13:38:15 h1susex h1susetmxpi
Fil is borrowing one of the PEM ADC channels I was borrowing for the Guralp 3T huddle testing. He unplugged one of the horizontal readbacks from the Guralp on ADC4, leaving the other 2 DOFs plugged in. The channel he connected to is ADC 4 28, H1:PEM-CS_ADC_4_28_2K_OUT_DQ.
Sheila and I spent some time today trying to calibrate the PMC channels into units of frequency so we can compare the glitches seen by the PMC with other channels. Peter's alog comment last night (81375) leads us to understand the glitches are around 2 kHz in frequency, so above the PMC bandwidth. Therefore, we want to calibrate the PMC error signal.
Luckily for us, Jeff Kissel recently did some PMC scans to determine the PMC PZT Hz/V calibration for a different purpose, alog 73905 (thanks Jeff!). We used his DTTs to determine the time it took to scan one FSR, see screenshot. We determined that it took 0.52 seconds to scan one FSR, and the scan rate is approximately 6.75 V/s (we know the PZT is nonlinear, but we figure this estimate is good enough for our purposes). This gives us 3.51 V/FSR on the PZT, or 0.0236 V/MHz, using the FSR = 148.532 MHz from T0900616.
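As a sanity check, the calibration arithmetic above in script form (values copied from the text; variable names are mine):

```python
# PMC PZT calibration from Jeff's FSR scan (alog 73905):
scan_time_per_fsr = 0.52      # s to sweep one FSR, read off the DTT scan
scan_rate = 6.75              # V/s, approximate (the PZT is nonlinear)
fsr_mhz = 148.532             # MHz, FSR from T0900616

volts_per_fsr = scan_time_per_fsr * scan_rate   # -> 3.51 V/FSR
volts_per_mhz = volts_per_fsr / fsr_mhz         # -> ~0.0236 V/MHz
```

The inverse, ~42 MHz/V, then converts PZT voltage excursions into laser frequency for comparison against other channels.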
We are still thinking about how to calibrate the PMC mixer signal.
Daniel calibrated the PMC mixer signal using the PMC PDH signal, lho81390.
H1:PSL-PMC_MIXER_OUT_DQ calibration is 1.25 Vpp / 1.19 MHz (fwhm) = 1.05 V / MHz.
See his note about H1:PSL-PMC_MIXER_OUT_DQ: "the channel recorded by the DAQ has a lot of gain and clips around +/-80mV, but it is calibrated correctly" compared to live traces on an oscilloscope.
2024 Nov 12
Neil, Fil, and Jim installed an HS-1 geophone in the biergarten (image attached). The HS-1 is threaded to a plate, and the plate is double-sided taped to the floor. The output signal was non-existent; we must install a pre-amplifier to boost it.
2024 Nov 13
Neil and Jim installed an amplifier (SR560) to boost the HS-1 signal (images attached). Circuitry was checked to ensure the signal makes it to the racks. However, when left alone there is no signal coming through (image attached; see the blue line labelled ADC_5_29). We suspect the HS-1 is dead. The HS-1 and amplifier are now out of the LVEA; the HS-1's baseplate is still installed. We can check one or two more things, or wait for more HS-1s to compare.
Fil and I tried again today, but we couldn't get this sensor to work. We started from the PEM rack in the CER, plugging the HS-1 through the SR560 into the L4C interface chassis and confirming the HS-1 would see something when we tapped it. We then moved out to the PEM bulkhead by HAM4 and again confirmed the HS-1/SR560 combo still showed signal when tapping the HS-1. Then we moved to the biergarten and plugged in the HS-1/SR560 right next to the other seismometers. Watching the DAQ readout of the HS-1 and one of the Guralps I have connected to the PEM AA, we could see that both sensors registered when I slapped the ground near the seismometers, but the signal was barely above what looks like electronics noise on the HS-1, while the Guralp showed lots of signal that looked like ground motion. We tried gains from 50-200 on the SR560; none of them really seemed to improve the SNR of the HS-1. The HS-1 is still plugged in overnight, but I don't think this particular sensor is going to measure much ground motion.
One check for broken sensors: make sure you can feel the mass moving when the HS-1 is in the correct orientation. A gentle shake in the vertical, inverted, and horizontal orientations will quickly reveal which orientation is correct.