This morning, prior to sunrise, zone 2A in the LVEA experienced a rapid temperature decline. This was partially due to broken heating elements which prevented the duct heater from producing as much heat as it needed to. I have replaced the broken elements.
The LVEA has been swept following maintenance; nothing really of note.
The POP-X readout chain has been moved from 45.5MHz to 36.4MHz. All RF chains have been checked and they look fine.
H1:ASC-POP_X_RF_DEMOD_LONOM was changed from 24 to 23.
POP_X 45MHz signal trends while powering up. If we believe the POP_X_DC calibration it detects around 12mW at full input power.
Same trend with 36MHz signals.
All RF signals have an analog whitening gain of 21dB. There is also a digital gain in the I/Q input segments of ~2.8 at the beginning and ~5.6 at the end of the power up. At the same time as the gain change, the 45 MHz modulation index is reduced by 6dB, whereas the 9MHz one is reduced by 3dB. This means the 45MHz signals are correctly compensated, whereas the 36MHz signals are not and the outputs become 3dB smaller than expected. The maximum observed INMON signal is around 8000 counts, a factor of ~4 from saturating.
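As a sanity check on the dB bookkeeping above, here is a short sketch. It assumes the 36.4MHz readout scales with the product of the 45MHz and 9MHz modulation indices (which is what the 3dB shortfall implies) and a 16-bit, 32768-count INMON range; both are assumptions, not stated facts.
import numpy as np

# digital gain goes from ~2.8 to ~5.6 during power up
digital_gain_change_db = 20 * np.log10(5.6 / 2.8)   # ~ +6 dB
mod45_change_db = -6.0                               # 45 MHz modulation index reduction
mod9_change_db = -3.0                                # 9 MHz modulation index reduction

# 45 MHz readout scales with the 45 MHz index alone: fully compensated
print(digital_gain_change_db + mod45_change_db)                    # ~ 0 dB
# 36 MHz readout (assumed 45 MHz x 9 MHz beat) scales with both indices
print(digital_gain_change_db + mod45_change_db + mod9_change_db)   # ~ -3 dB

# headroom of the largest observed INMON signal against assumed 32768-count saturation
print(32768 / 8000)                                                # ~ 4.1, i.e. factor ~4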
Tue Feb 11 10:05:34 2025 INFO: Fill completed in 5min 31secs
Gerardo confirmed a good fill curbside. TCmins [-67C, -65C] OAT (-4C, 24F), DeltaTempTime 10:05:34
Mayank, Sheila
We now have a number of power measurements at the HAM3 viewport, with corresponding estimates of the beam spot position on PR2 (from July 2024, 78878, and yesterday's measurement of 8mW at PR3 yaw -74 urad, 82722).
date range | PR3 yaw slider value | Y2L coefficient | spot position on PR2 [mm, +Y from the center of the optic] | power measured at HAM3 viewport [mW]
---|---|---|---|---
July 2018 until July 2024, except for a few days | 152 | -7.4 | 14.9 | 47
July 5 2024 | 132 | | | 28
July 5 2024 | 110 | | | 19
July 5 2024 | 100 | | | 17
July 2024 - Feb 6 2025 | 100 | -6.25 | 12.588 | 
May 21st 2024, and Feb 6th - Feb 10th 2025 | -74 | -3 | 6 | 8
Feb 10 2025 | -230 | | to be measured | to be measured
We used Elenna's new calibration of PRC power 8724 to estimate that we have 2.53kW of power in the PRC beam.
It looks like we could benefit from re-running A2L, as well as SRCL and PRCL feedforward.
We've moved the spot position on PR2 by about 12 mm since Ibrahim's check 82605.
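For reference, the Y2L coefficients and spot positions in the table are consistent with a simple linear scaling of roughly -2 mm of spot offset per unit of Y2L coefficient. A minimal sketch of that empirical relation (read off the table above, not an independent calibration):
# Empirical Y2L-to-spot scaling inferred from the table; assumes a linear
# relation through zero, which is an assumption rather than a calibration.
y2l_to_spot_mm = -2.0   # mm of +Y spot offset on PR2 per unit of Y2L coefficient

for y2l in (-7.4, -6.25, -3.0):
    print(f"Y2L {y2l} -> spot {y2l_to_spot_mm * y2l:.1f} mm")   # ~14.8, 12.5, 6.0 mm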
TITLE: 02/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.58 μm/s
QUICK SUMMARY: Looks like we lost lock a little over an hour ago and the IFO was in the middle of an alignment. I stopped that and brought it to IDLE so we can get some maintenance items started.
Workstations updated and rebooted. This was an OS packages update. Conda packages were not updated.
TITLE: 02/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: The range has been fairly steady; we stayed locked all shift, with 2 superevents.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:16 | SAFETY | LASER SAFE (•_•) | LVEA | SAFE! | LVEA SAFE!!! | 16:42
16:43 | SAF | Camilla | LVEA | N->Y | LVEA hazard transition | 16:53 |
16:54 | SAF | LASER HAZARD | LVEA | YES | LVEA is HAZARD!! | 16:54 |
16:57 | ISC | Sheila, Jennie | LVEA | Yes | HAM3 VP beam spot power meas. | 17:17 |
17:46 | ISC | TJ, Sheila, Jennie | LVEA | YES | HAM3 VP beam spot power meas | 18:03 |
17:58 | FAC | Kim | MY, mX | n | Tech clean | 19:12 |
19:23 | ISC | Sheila, Matt | LVEA - ISCT1 | Y (local) | Aligning beatnotes | 19:43 |
20:49 | ISC | Daniel | LVEA | y | Checking on whitening settings at rack | 20:58 |
21:04 | ISC | Jennie, Mayank, Keita, Siv | Opt Lab | Y (local) | ISS array work, Siv in @ 23:00, Jenne out 23:35, Keita out 23:57 | 01:20 |
21:35 UTC Observing
02:26 UTC superevent S250211aa
04:36 UTC superevent S250211be
I took a look at the locklosses during the calibration measurements this past week. Looking at DARM right before the locklosses, both times a large feature grows around ~42 Hz. Sat LL Thur LL
Thursday:
DARM_IN was dominated by the long ~42 Hz oscillation and a short ~505 Hz oscillation until the LL; DARM_OUT was dominated by the violin harmonics at ~1020 Hz.
Saturday:
DARM_IN had a long and a short oscillation, at ~7.5 Hz and at the fundamental violin modes (~510 Hz); DARM_OUT was dominated by the violin harmonics at ~1020 Hz.
I'm not sure how or where to see exactly what frequencies the simulines were injecting during and before the lockloss.
Looking into what's going awry.
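One way to check this directly from the data would be a high-resolution spectrogram of DARM_IN over the last minute of lock. A sketch only, assuming NDS access via gwpy, that the DARM_IN channel referred to above is H1:LSC-DARM_IN1_DQ, and with a placeholder GPS time:
# Spectrogram of DARM_IN in the minute before the lockloss; 10 s FFTs give
# 0.1 Hz resolution, enough to separate nearby calibration lines.
from gwpy.timeseries import TimeSeries

lockloss_gps = 1423300000  # placeholder: substitute the actual lockloss GPS time
darm_in = TimeSeries.get('H1:LSC-DARM_IN1_DQ', lockloss_gps - 60, lockloss_gps)

spec = darm_in.spectrogram(stride=10, fftlength=10) ** 0.5  # ASD spectrogram
plot = spec.plot(norm='log')
ax = plot.gca()
ax.set_yscale('log')
ax.set_ylim(5, 100)   # zoom on the band where the sweep lines live
plot.savefig('darm_in_before_lockloss.png')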
I pushed for a change of calibration sweep amplitudes on the Pcal and the PUM (which had been tested a couple of months back), which was added to the calibration sweep wiki last week, labeled informatively as "settings_h1_20241005_lowerPcal_higherPUM.ini".
Both of these sweeps were very near their end, where the Pcal is driving at 7.68 Hz and the PUM is driving at either 42.45 Hz or 43.6 Hz, which should clarify the source of the signals you are pointing out in this aLog.
The driving amplitude of the Pcal at 7.68 Hz is about 20% lower than in the injections that were being run the week before. This was deliberately done to reduce kicking the Pcal during ramping, which causes broadband coupling into DARM that would affect other measurement frequencies, like the L1 stage which is driving at ~12 Hz at this time.
The driving amplitude of the PUM at ~42 Hz is unchanged from injections that had been running up until last week.
Not seeing any SUS stage saturating at lock losses. Presently unconvinced lock losses are related to new sweep parameters.
Both locklosses coincided with the ramping ON of the final DARM1_EXC at 1200 Hz
Tagging CAL properly
To rebuild the spare ISS PD Array unit (D1101059 S1202967; note that I myself don't have access rights to the S-number document on the DCC) and align the PDs, we opened the transport container (D1400368) in the optics lab.
We initially had a hard time opening the container, as the Viton gasket (D1400366) was REALLY firmly stuck to the container lid (D1400367) and the container base (D1400365). We put the container on top of a small stainless steel cart and used screwdrivers to pry the lid off the base plate. Even after two corners were freed, we could not lift the lid by hand, and we had to continue prying until all four corners were freed.
Contamination:
After finally removing the lid, we found two causes for concern, the first being contamination.
The first two pictures show the container assembly before and after removing the lid.
Note that the second picture shows the class A "cover" for the assembly tilted back. It turns out that the cover was just placed there, free to rattle. The connection rods that are supposed to attach the cover to the cage structure were not bolted to the cage.
The third picture shows how filthy the container base plate was. It also shows half of a QPD retainer part that was found there, next to a Swagelok fitting attached to the base plate.
The fourth picture shows the pre-soaked wipe I used to lightly clean the top surface of the PD array base plate. I immediately picked up some black stuff. You can also see that the PEEK parts are covered with black stuff.
The fifth picture shows the PEEK parts. I wiped the top parts but the bottom one is yet to be cleaned. You can easily see the difference.
The 6th and 7th pictures show deep scuffs on the class A surface of the array PD structure, which seemed to have accumulated the most black stuff.
Glass chipping:
We also found what seemed like tiny pieces of glass on the container base plate as well as the ISS PD base plate.
We inspected the plate optics and found that all of them have chips. The worst is one of the high reflectors.
The first three pictures show the damage to the leftmost HR optic as seen from the beam entry point (i.e. with the array PDs facing you). There appears to be a big chip where the adjustment cam is supposed to touch, but the cam was found detached from the assembly, lying on the transport container base plate.
The fourth picture shows the chip at the corner of the middle optic (BS). It also highlights that what seem to be deep scuff marks, or maybe grinding marks from the manufacturing process, on the high reflector to the left (a different optic from the first three pictures, as here you are looking at the assembly from the array PD position, so to speak) have accumulated black stuff. That optic also has bad chipping on the front edge, which is clearer in the fifth picture.
The 6th picture shows one of the bigger glass pieces found on the surface of the transport container base plate. Note the black smudge on the glass (if it is glass).
Sheila, Matt Todd
We are having more incidences of the nonstationary 20-60 Hz noise that correlates with the CO2 ISS channel.
Here's a collection of links related to this issue in the past:
Here is a Lasso result for the noisy lock from this weekend: lasso Feb 9. Jane Glanzer is working on running lasso for some of the recent noisy times using the max as well as the mean trends of the channels, so that may provide additional clues. We still see the ITMX CO2 ISS channel correlated with the range. Note that on Feb 7th lasso chooses H1:IOP-OAF_L0_MADC2_EPICS_CH15, which is the same thing as ITMX_CO2_ISS_CTRL2_INMON.
For the 9th, the Rayleigh statistic also clearly shows this issue: summary page for Feb 9th. However, comparing this to one of our normal days (range just below 160Mpc and stable on Feb 4th), we also see nonstationarity at these frequencies. So it is possible that we normally have this nonstationary noise at a lower level and it is always limiting our sensitivity.
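For reference, a minimal sketch of how the Rayleigh statistic (std/mean of the ASD across time, ~1 for stationary Gaussian noise) can be computed for a stretch of data. It assumes NDS access via gwpy; the channel and time span are just example choices, not a reproduction of the summary-page configuration.
import numpy as np
from gwpy.timeseries import TimeSeries

# example stretch from the noisy Feb 9 lock
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', 'Feb 9 2025 08:00', 'Feb 9 2025 09:00')

# ASD spectrogram: one 60 s column at a time, 8 s FFTs
asd_spec = darm.spectrogram(stride=60, fftlength=8, overlap=4) ** 0.5
rayleigh = asd_spec.value.std(axis=0) / asd_spec.value.mean(axis=0)

# look for excess nonstationarity in the 20-60 Hz band
freqs = asd_spec.frequencies.value
band = (freqs >= 20) & (freqs <= 60)
print('max Rayleigh statistic in 20-60 Hz band:', rayleigh[band].max())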
On Feb 1st we had an incident where the ISS CO2 channel was correlated with the range, screenshot. On Feb 1st there was also a remarkable change in the FC length control signal peak-to-peak, which has not shown any correlation with these range drops in the last week and a half, but did last May (78485). Matt found this alog about squeezer issues on the 1st, 82581: we adjusted the SHG temperature and fiber polarization, and there was a temperature excursion in the FCES. The FC length control signal was noisy from Jan 2nd to Feb 1st, and has been back to normal since.
Operator request: If operators see the range fluctuating with lots of noise between 20-40 Hz, (similar to Feb 9th), could you drop out of observing and go to no squeezing for 10 minutes or so? We would like to see if this problem comes and goes with squeezing as it did last May.
I have run lasso for four different time periods as suggested by Sheila. As mentioned, these lasso runs differ from a traditional run in that I am using .max trends of the auxiliary channels to model the bns range. Below are links to each run, along with brief comments on what I saw.
Feb 1st 10:20 - 18:40: The top channel is a SQZ channel, H1:SQZ-PMC_TRANS_DC_NORMALIZED. H1:ASC-POP_X_RF_Q4_OUTPUT and H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ were also picked out. The CO2 channel is correlated with the bigger dip around ~15 UTC.
Feb 7th 12:21:58 - 15:17:03: For some reason, lasso only runs until 15 UTC, even though I specified it to run until 18:30 UTC. I am not 100% sure why this happens; I think it may be because of a large range drop around 15 UTC. I have still included what lasso found for the initial 3 hours or so. Some top channels picked out are H1:SUS-ITMY_M0_OSEMINF_F2_OUT16 & H1:ASC-AS_A_RF45_Q2_OUT16. These are new channels that I don't think have been picked out before. I will say that farther down the list of correlated channels is H1:PSL-ISS_SECONDLOOP_QPD_SUM_OUT16.
Feb 7th 21:55:32 - Feb 8th 01:40:00: Top channel is H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ.
Feb 9th 07:04:07 - 11:59:26: Top correlated channels are H1:PSL-ISS_SECONDLOOP_PDSUMOUTER_INMON & H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ. These are also the top two channels from the regular lasso run (with .mean trends) as viewed from the summary pages.
The most consistently picked out channel is the ITMX_CO2 channel, but the .max trend method also seems to pick up some PSL, SQZ, SUS, and ASC channels as well.
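For context, a minimal sketch of the kind of regression being run here, i.e. fitting the BNS range against standardized .max trends of auxiliary channels with an L1 penalty. The data below are random placeholders, not the actual trends or the lasso tool's configuration.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# placeholder minute trends: 300 minutes x 20 auxiliary channels (.max values)
rng = np.random.default_rng(0)
aux_trends = rng.normal(size=(300, 20))
# placeholder BNS range that happens to follow channel 3, plus noise
bns_range = 155 + aux_trends[:, 3] + rng.normal(scale=0.5, size=300)

X = StandardScaler().fit_transform(aux_trends)
model = Lasso(alpha=0.1).fit(X, bns_range)

# channels with non-zero coefficients are the ones lasso "picks out"
picked = np.flatnonzero(model.coef_)
print('channels selected by lasso:', picked)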
TITLE: 02/10 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY:
TITLE: 02/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Commissioning this morning and a few lock losses. The PR2 spot was moved again, so lock reacquisition was slower since Sheila needed to pico for the POP QPDs. Other than that, relocking has been automated.
LOG:
H1 back to observing at 21:18 UTC following commissioning time. We're getting some DIAG_MAIN notifications after going to max power about the POP X PZT being railed; Sheila says we should fix this but it can wait for now. The TMS_SERVO Guardian has a note about this being due to the different PR3 alignment.
Accepted new POP_A offsets of 0 in ASC OBSERVE.snap table once at NLN (screenshot attached).
During commissioning this morning, we added the final part of the ESD glitch limiting by adding the actual limit to ISC_LOCK. I added a limit value of 524188 to the ETMX_L3_ESD_UR/UL/LL/LR filter banks, which are the upstream part of the 28-bit DAC configuration for SUS ETMX. These limits are engaged in LOWNOISE_ESD_ETMX, but turned off again in PREP_FOR_LOCKING.
In LOWNOISE_ESD_ETMX I added:
log('turning on esd limits to reduce ETMX glitches')
for limits in ['UL','UR','LL','LR']:
    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')
So if we start losing lock at this step, these lines could be commented out. The limit turn-off in PREP_FOR_LOCKING is probably benign.
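For completeness, the corresponding turn-off in PREP_FOR_LOCKING presumably looks something like the sketch below. This is a guess at that code using the same ezca/LIGOFilter interface as above (it runs inside the guardian, where ezca is provided), not a copy of what is actually in ISC_LOCK.
# sketch of the limit turn-off in PREP_FOR_LOCKING; the real guardian code may differ
for limits in ['UL','UR','LL','LR']:
    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_off('LIMIT')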
Diffs have been accepted in sdf.
I think the only way to tell if this is working is to wait and see if we have fewer ETMX glitch locklosses, or if we start riding through glitches that have caused locklosses in the past.
Using the lockloss tool, we've had 115 Observe locklosses since Dec 01; 23 of those were also tagged ETM glitch, which is around 20%.
Since Feb 4th, we've had 13 locklosses from Observing, 6 of these tagged ETM_GLITCH: 02/10, 02/09, 02/09, 02/08, 02/08, 02/06.
Jim, Sheila, Oli, TJ
We are thinking about how to evaluate this change. In the meantime we made a comparison similar to Camilla's: in the 7 days since this change, we've had 13 locklosses from observing, with 7 tagged by the lockloss tool as ETM glitch (and more than that identified by operators), compared to the 7 days before the change, when we had 19 observe locklosses of which 3 had the tag.
We will leave the change in for at least another week to get more data on what its impact is.
I forgot to post this at the time: we took the limit turn-on out of the guardian on Feb 12, with the last lock ending at 14:30 PST, so locks since that date have still had the filters engaged, but since they multiply to 1, they shouldn't have an effect without the limit. We ran this scheme from Feb 3 17:40 UTC until Feb 12 22:15 UTC.
Camilla asked about turning this back on; I think we should do that. All that needs to be done is uncommenting the lines (currently 5462-5464 in ISC_LOCK.py):
#log('turning on esd limits to reduce ETMX glitches')
#for limits in ['UL','UR','LL','LR']:
# ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')
The turn-off of the limit is still in one of the very early states of ISC_LOCK, so nothing beyond accepting new SDFs should be needed.
Ryan Crouch, Rahul Kumar
The assembly, balancing and testing of all 12 HRTS (HAM Relay Triple Suspensions) for O5 is now complete. Currently all 12 HRTS have a dummy optic installed at the bottom stage; once the mirrors are ready (with prisms bonded) they will be replaced (later this year). I am attaching several pictures from the lab which show all 12 HRTS staged on the optical bench. Later, six of the suspensions will be transported to LLO.
This plot compares the transfer function performance of all 12 HRTS for all 6 degrees of freedom. We are still analyzing this data and there is scope for improving a couple of them (especially the one highlighted in the green trace). Sometimes it is as simple as adjusting the flag position wrt the LED/PD in the BOSEMs, and other times it requires further fine-tuning of the balance and alignment of the blade springs.
The final two HRTS which we assembled are of the OM0 configuration. This has bottom mass (M3 stage) actuation using an AOSEM standoff assembly (as per D2300180_v2), as shown in the picture here. The magnets used at the M3 stage are 2.0 mm D x 0.5 mm T, SmCo. The transfer function results for both OM0 configurations are as follows: attachment01 and attachment02. Both of them look healthy when compared with the model.
Given below are the OLC, offsets and gains of the BOSEMs attached to OM0 sn02 (the offset/gain convention is sketched after the table).
BOSEM s/n | OLC [counts] | Offset [counts] | Gain
---|---|---|---
622 | 31669 | 15834 | 0.947
639 | 32414 | 16207 | 0.925
637 | 28430 | 14215 | 1.055
632 | 27399 | 13699 | 1.094
684 | 26138 | 13069 | 1.147
698 | 32767 | 16383 | 0.915
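The offsets and gains in the table are consistent with the usual BOSEM normalization of offset = OLC/2 and gain = 30000/OLC (open light current rescaled to a nominal 30000 counts). A minimal sketch of that arithmetic; the convention is inferred from the numbers above, so treat it as an assumption.
import math

# open light current in counts for each BOSEM s/n (values from the table)
olc = {622: 31669, 639: 32414, 637: 28430, 632: 27399, 684: 26138, 698: 32767}

for sn, counts in olc.items():
    offset = counts // 2                              # half the open light current
    gain = math.floor(30000 / counts * 1000) / 1000   # table values appear truncated to 3 decimals
    print(f"s/n {sn}: OLC {counts}, offset {offset}, gain {gain}")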
I remeasured the suspension associated with the lime green trace (2024-9-16), a suspended version of the HRTS with structure s/n 012. By adjusting the flags' centering and position I was able to improve the measurement results, especially in the vertical DOF; yaw also looks better. Previous measurement vs new measurement.
In alog 65804 Ross, Mitchell and I adjusted and dithered ITMX to see how much reflection off the ITM the ETM Hartmann sees. See the attachment for the ETMX HWS beam reflected off the ITMX. This is a known issue in both ETM HWSs. It may explain why ETM ring heater tests look okay but power-ups do not.
Re-calculating for the current 530nm M530F2 HWS beams, you can clearly see why the retroreflections off the ITM are less of an issue with the 530nm source.
Using spec sheets C1103238 for ETM and C1103261 for ITM.