I took two single bounce OMC scans today with the help of TJ and Tony. Here are some notes to future me and others to reference if we want to do single bounce scans:
Edit to add: unfortunately the scan results from today look pretty bad. In short, the peaks look "lopsided" somehow, so I'm not sure the results are usable. Looking back at Jennie W's previous scans, it looks like she had to slow them down to 200 second scans. I only did a 100 second scan with amplitude 105, so maybe I scanned too fast. I'm not sure what the correct resolution of this is, because the scans I did in 2022 were 100 second scans and the results were fine. I'm noting this here for future reference when thinking about the appropriate scan length and amplitude.
TITLE: 03/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.24 μm/s
QUICK SUMMARY:
Be careful using the roll-up door between the receiving area and the LVEA. It seems to be in need of maintenance.
SNEWS Test T565015 16:00:15 UTC
HAM7 ISI WD trip 17:09 UTC
HAM7 CPS Glitch 17:10 UTC
2 Norco trucks went in the Y arm direction from the parking lot.
Mechanical room has fumes in it; doors have been opened to allow ventilation while Roger Bros works in the area.
HAM7 CPS glitches 18:03 UTC
Dust alarms have been telling us there is dust.
Expected work:
LVEA:
HAM1 laser barrier taken down?
Lighting Fixtures above HAM1
Fil HAM1 pulling cables.
Vac roughing pumps & compressor
OMC scans by Elenna
EX:
PCAL ES beam move, over by 9am
Laser hazard: Camilla & TJ, TMS & ALS return beam
Rahul's OPLEV SUS charge measurements
EY:
Rahul's OPLEV SUS charge measurements
Vac EY comp still in progress
CDS:
Dan LDAS maint.
Eric BS camera server testing
Still on going:
ITMY CPy adjustments.
M. Todd, C. Compton, T.J. Shaffer
We went down to EX with the intention of getting the reflected ALS beam to be a little cleaner, as we've been plagued with the lobey beam seen in alog 82752. If we had succeeded in cleaning up the beam we would have taken beam profile measurements; however, we were unable to make the beam any cleaner, and the profile measurements we attempted did not make sense.
We started off a little cleaner than the previous log (82752) but we still had some weaker lobes that looked like ears (lending a sort-of Mickey Mouse shape).
Measuring the ALS return beam at ETMX, we misaligned ITMX and ITMY. Starting slider values were:
20250318 - 16:00:00 UTC | P | Y |
ETMX | -34.6 | -145.1 |
TMSX | -87.0 | -90.2 |
Moved to ETMX Y: -2.4, TMSX Y: 634.8 (the TMSX value seems too big; check this number on ndscope?). The beam got worse, now with almost two lobes.
Moved the other way in yaw: ETMX Y: -346.4, TMSX Y: -275.2. Looks better, similar to the original Mickey Mouse shape.
Moved yaw back, now moving in pitch: ETMX P: 204.4, TMSX P: 139.0. Now worse in pitch than in yaw.
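As a follow-up on the suspiciously large TMSX yaw number above, here is a minimal sketch for trending the slider over the session, as an alternative to reading it off ndscope. The channel name and times are my assumptions, not values from this entry; verify against the SUS MEDM screens.

from gwpy.time import tconvert
from gwpy.timeseries import TimeSeries

start = tconvert('2025-03-18 16:00:00')   # UTC -> GPS
data = TimeSeries.get('H1:SUS-TMSX_M1_OPTICALIGN_Y_OFFSET',
                      start, start + 3600)  # one hour of data via NDS

print('slider min/max over the hour:', data.min(), data.max())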
LVEA has been swept
Plot attached, reset at 19:30 UTC.
There has been activity near the TCS rack this AM, both cable pulling and wire racks being moved nearby. We've seen this happen before; LLO is currently trying to investigate why.
Tue Mar 18 10:12:51 2025 INFO: Fill completed in 12min 47secs
h1caley's OBSERVE.snap is now a sym link to its safe file, bringing it in line with h1calex and the LLO models. h1caley_safe.snap was modified to monitor all of its channels.
lrwxrwxrwx 1 controls advligorts 62 Mar 18 11:01 h1caley/h1caleyepics/burt/OBSERVE.snap -> /opt/rtcds/userapps/release/cal/h1/burtfiles/h1caley_safe.snap
lrwxrwxrwx 1 controls advligorts 62 Apr 21 2015 h1caley/h1caleyepics/burt/safe.snap -> /opt/rtcds/userapps/release/cal/h1/burtfiles/h1caley_safe.snap
lrwxrwxrwx 1 controls advligorts 62 Apr 28 2017 h1calex/h1calexepics/burt/OBSERVE.snap -> /opt/rtcds/userapps/release/cal/h1/burtfiles/h1calex_safe.snap
lrwxrwxrwx 1 controls advligorts 62 Apr 21 2015 h1calex/h1calexepics/burt/safe.snap -> /opt/rtcds/userapps/release/cal/h1/burtfiles/h1calex_safe.snap
Joe Betzwieser, Dripta
With Joe's help, I populated some existing EPICS channels related to the calibration time-dependent correction factors so that they display the actuation kappas in physical units of N/ct.
The six EPICS channels that were updated today were:
H1:CAL-CS_TDEP_UIM_ACT_SCALE
H1:CAL-CS_TDEP_UIM_ACT_NORM_REFERENCE
H1:CAL-CS_TDEP_PUM_ACT_SCALE
H1:CAL-CS_TDEP_PUM_ACT_NORM_REFERENCE
H1:CAL-CS_TDEP_TST_ACT_SCALE
H1:CAL-CS_TDEP_TST_ACT_NORM_REFERENCE
Note that for the reference channels, today's values were used. So, with a kappa_TST value of 0.982 and the SCALE and NORM_REFERENCE channels both set to the front-end value of 1.08538e-12 N/ct, the NORM channel displays 0.982 * 1.08538e-12 / 1.08538e-12 = 0.982.
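As a minimal sketch of that arithmetic (plain Python with my own variable names, not the actual front-end processing):

kappa_tst = 0.982
scale = 1.08538e-12            # N/ct, value written to ..._TST_ACT_SCALE
norm_reference = 1.08538e-12   # N/ct, value written to ..._TST_ACT_NORM_REFERENCE

physical_strength = kappa_tst * scale              # time-dependent actuation strength, N/ct
norm_display = physical_strength / norm_reference  # dimensionless; equals kappa_tst = 0.982 today
print(physical_strength, norm_display)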
The six SDF value changes were accepted.
Sheila, Oli
Vlad had mentioned in the last commissioning meeting that our bandpass filters for our PIs were too wide, so I looked and, sure enough, they are way too wide (up to 6 Hz wide!) and contain part or most of other peaks.
Since PI24 and PI31 are the two that we care about the most right now, I went in and narrowed them as well as shifted the band location to actually be centered on those modes (h1susprocpi diffs, accepted 83421).
I'll be taking a look at PI28 and PI29's bandpasses and adjusting those to what makes sense.
The table below shows the before and after locations of the bandpasses. The values are after the downconversion, which is why they are in the 400 Hz range instead of the 10.4 kHz range. The before/after links show the bandpasses as vertical lines on the bottom plot. A rough scipy sketch of an equivalent bandpass follows the table.
Mode | Peak Location (Hz) | Before (Hz) | Now (Hz) |
PI 24 | 431.75 | 428.5 - 433.5 | 430.75 - 432.75 |
PI 31 | 428.625 | 425 - 431 | 427.125 - 430.125 |
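For illustration only, a rough scipy equivalent of the new PI24 band; the real filters live in foton in h1susprocpi, and the sample rate here is an assumption.

from scipy import signal

fs = 2048                      # Hz, assumed rate of the downconverted PI signal
low, high = 430.75, 432.75     # Hz, new PI24 band edges from the table above
sos = signal.butter(4, [low, high], btype='bandpass', fs=fs, output='sos')

# apply to a downconverted timeseries x with: y = signal.sosfilt(sos, x)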
Workstations were updated and rebooted at about 1415 UTC. This was an os packages update. Conda packages were not updated.
This morning I took down and then moved the laser curtains around the two laser tables next to HAM1/2. I didn't fully disassemble them, but "accordion folded" them and then propped them up along the South wall of the West bay, a bit West of the TCSY table. Robert gave the OK with them propped up on the wall for our last 2 weeks of observing.
TITLE: 03/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 6mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
When I walked in, H1 was in Move_Spots[508].
I didn't think we were going to make it all the way to NLN before the injections started, but I will let it continue to lock until it is unlocked by today's activities or 8pm.
Well we made it all the way to Observing before 14:50 UTC.
I see I have some red on CDS overview:
H1SUSPROCPI has a pending config.
We have a 6 hour Maintenance day today!
Notes:
PCAL team and anyone else in the end stations needs to keep cellphones away from the VAC Cold Cathode; airplane mode may be desired.
Tripping hazard near HAM6: there will be a cable tripping hazard near HAM6 when Robert starts working near the output arm.
Expected work:
LVEA:
HAM1 laser barrier taken down?
Lighting?
Fil HAM1 pulling cables.
Vac roughing pumps & compressor
EX:
PCAL ES beam move, over by 9am
Laser hazard: Camilla & TJ, TMS
Oplev
EY:
Oplev work Rahul
Vac EY comp
CDS:
Dan LDAS maint.
Eric BS camera server testing
I loaded Oli's new h1susprocpi PI_PROC_COMPUTE_MODE[24,31]_BP filters at 08:11 PDT.
TITLE: 03/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Extremely quiet shift with H1 observing throughout; it has now been locked for just over 10 hours. Range slightly lower this lock due to a different SQZ angle?
TITLE: 03/17 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: I restarted standdown_alerts around 15:30 UTC this morning after it died overnight; igwn_alert died with the network issues (GraceDB error) and I restarted that around 18:30 UTC. Yellow dust alarms in the optics lab throughout the day.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:16 | FAC | Kim | Optics, VAC prep | N | Tech clean | 15:39 |
15:37 | PEM | Robert | LVEA | N | Measurement prep | 15:46 |
15:41 | ISC | Corey | Optics lab | N | Optics parts search | 16:36 |
15:46 | SQZ | Sheila | LVEA, SQZT1 | LOCAL | SQZT1 work | 16:42 |
16:55 | FAC | Nellie | Midx | N | Tech clean | 18:03 |
16:55 | FAC | Kim | MidY | N | Tech clean | 18:00 |
17:17 | FAC | Chris | Staging to Xarm | N | Big green tumbleweed clearing | 18:56 |
17:31 | FAC | Mitch | LVEA | N | Measure rollup door width | 17:34 |
17:49 | ISC | Sheila | LVEA | LOCAL | Measure POP power | 18:29 |
18:35 | PEM | Robert | LVEA | N | Turn off injections | 18:39 |
20:04 | SEI | Mitch, Jim | MidY | N | Prep for tomorrow's work | 20:52 |
20:51 | ISC | Camilla | Optics lab | N | Collecting parts | 21:32 |
22:27 | ISC | Camilla | Optics lab | N | Parts put away/ finding parts | 22:43 |
23:18 | SUS | Keita | Optics lab | N | ZM tiptilt sus investigation | Ongoing |
CAL sweep ESD Observations:
While checking the ESDs for saturations during the CAL sweeps, I found this signal shape during a CAL sweep that we survived.
I can see there are different sections to the CAL sweep, and we tend to lose lock on this last section, which is a 1200 Hz signal.
Zooming in, we can see a 40 Hz oscillation on top of the 1200 Hz injection.
There are multiple locklosses in this section of the CAL sweep, and there seems to be a common ~40 Hz oscillation on top of the 1200 Hz injection that appears when we lose lock, but not when we survive it.
I am aware that alog 82827 mentions a 43.75 Hz signal, which may need to be looked into as the source of this oscillation.
Perhaps I should try to figure out what frequencies are being injected during the CAL sweeps.
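As one possible starting point, here is a rough gwpy sketch for checking whether the ~40 Hz oscillation shows up as sidebands around the 1200 Hz line in an ESD drive channel. The channel name and times are placeholders I picked, not values from this entry.

from gwpy.time import tconvert
from gwpy.timeseries import TimeSeries

# placeholder CAL sweep time and a placeholder ESD quadrant drive channel
start = tconvert('2025-03-15 12:00:00')
data = TimeSeries.get('H1:SUS-ETMX_L3_MASTER_OUT_UR_DQ', start, start + 120)

# high-resolution spectrum around the injection: amplitude modulation at ~40 Hz
# would show up as sidebands near 1200 +/- 40 Hz
asd = data.asd(fftlength=8, overlap=4)
print(asd.crop(1100, 1300))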
We lost lock from the calibration, so we tried to lock ALS without the linearization (some background in this alog: 83278.) An active measurement of the transfer function from DRIVEALIGN_L to MASTER out was 1 without the linearization, and -0.757 with the linearization on. So I've changed the DRIVEALIGN gain to -1.3 in the ALS_DIFF guardian when the use_ESD_linearization is set to false.
We tried this once, and it stayed locked for a DARM gain of 400, but unlocked as the UIM boosts were turning on. We tried this again but it also didn't lock DIFF, so it is now out of the guardian again.
I looked at a few more of the past ALS DIFF locks; in both successful and unsuccessful attempts we are saturating the ESD (either the DAC or the limiter in the linearization) in the first steps of locking DIFF. We do these steps quite slowly: stepping the DARM gain to 40, waiting for the DARM1 ramp time, stepping it to 400, then waiting twice the ramp time, then engaging the boosts for offloading to L1. I reduced the ramp time from 5 seconds to 2 seconds to make this go faster. This worked on the first locking attempt, but that could be a coincidence.
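For reference, a rough sketch of that sequence with the shortened ramp time; this is not copied from the ALS_DIFF guardian, and the channel names are placeholders (inside a Guardian state the ezca object is provided by the framework):

import time

ramp_time = 2   # seconds, reduced from 5

ezca['H1:LSC-DARM1_TRAMP'] = ramp_time   # placeholder channel names
ezca['H1:LSC-DARM1_GAIN'] = 40
time.sleep(ramp_time)
ezca['H1:LSC-DARM1_GAIN'] = 400
time.sleep(2 * ramp_time)
# ... then engage the boosts for offloading to L1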
We will leave this in for a while, so that we can compare how frequently we lose lock at LOCKING_ALS. In the last 7 days we've had 48 LOCKING_ALS locklosses and 19 locklosses from NLN, so roughly 2.5 ALS locklosses per lock stretch.
Since the time of this alog, around 19 UTC on March 13th, we've had 68 LOCKING_ALS locklosses and 12 NLN locklosses, so about 6 locklosses per successful lock. It seems, though, that the change to 2 seconds was never in place, and the guardian code still said 5 seconds. So this issue seems to be getting worse without any change.
Now I've loaded the change to 2 seconds, so this should be sped up after today's maintenance window.
I've looked at a bunch more of these locklosses, and they mostly happen while the DARM gain is ramping, less often as the boosts are coming on in L1, and one I saw happened while COMM was locking.
In all the cases the linearization seems to hit its limiter before anything else goes wrong.
We removed the BSC1 temperature sensor to run some testing in the lab. Will reinstall next week.
After testing in the lab, the sensors are working as designed, outputs are clean. We reinstalled the BSC1 temperature sensor but left it floating (not attached to BSC1). The BSC3 sensor we isolated from BSC3 with a thermal pad, and the power supply was isolated from the pier it is resting on. This is an effort to remove ground loops on these systems. Will revisit next week.
M. Pirello, F. Clara
After review of the data over the week, we saw a marked improvement in temperature information, so we reapplied the BSC1 sensor to the chamber along with a thermal pad to maintain electrical ground isolation. We also isolated the power box for both supplies.
M. Pirello, F. Clara, D. Barker
Jennie, Sheila, and I ran OMC scans this morning and realized that the proper way to slow down the scan to avoid weird saturation effects is to reduce the excitation frequency in the template. The nominal templates have excitation frequencies of 0.01 Hz, so sweeping over 200 seconds just sweeps at the same speed twice. To sweep once, slower, you have to increase the sweep time to 200 seconds AND reduce the sweep frequency to 0.005 Hz.
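A quick sketch of that arithmetic: the number of sweeps in a scan is just the scan duration times the excitation frequency, so both knobs have to change together to get one slower sweep.

def sweeps_in_scan(scan_duration_s, excitation_freq_hz):
    # number of full periods of the excitation within the scan
    return scan_duration_s * excitation_freq_hz

print(sweeps_in_scan(100, 0.01))    # 1.0 -> nominal 100 s scan: one sweep
print(sweeps_in_scan(200, 0.01))    # 2.0 -> same sweep speed, done twice
print(sweeps_in_scan(200, 0.005))   # 1.0 -> one sweep at half the speed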
Sheila and I want to note some things that are "obvious" but easily forgotten: