I took the 0.5-800 Hz TFs last night on the HAM1 ISI to see if the 70 Hz feature was improved by the periscope brace. Short version: it has been, and I now see the ~170 Hz feature that I measured here. There's also a feature at ~102 Hz that could be a number of things, but I didn't see that in the B&K measurement of the periscope+stiffener.
First attached image shows the L2L GS13 TFs I took last night. Data below 0.5 Hz is from measurements I took last year; I don't expect those to have changed. Above 0.5 Hz is new data. These new measurements were taken with HEPI locked, so features around the pier modes at 15 Hz might not be permanent. I might also get better resolution above 200 Hz once the pumpdown is done and it's a little quieter at the chamber.
Second image shows the data from last year, after adding the viton to the periscope dog clamps. That viton is still in place, but we now also have the stiffener added, so the sharp feature at 70 Hz has now moved to 170 Hz.
Third and fourth images are the CPS TFs. These are harder to compare because HEPI was unlocked for the fourth image (June 2025 TF) and locked for the third (last night).
I will post C2C TFs when I get there. Hoping to see if I can increase the loop gain on HAM1 with the 70 Hz feature gone; we'll see.
These are the GS13 C2C TFs. First image is the data from last night, second image is the data from last year. Similar story: the new TFs look a little easier to wrap loops around, but the 170 Hz feature will require some notching. There is also still some feature at 75 Hz in the X and Z DOFs. Hmm.
I've touched up the isolation loops, removing the 70 Hz notches where I could and increasing UGFs to more normal levels, from 22 Hz before to 25-30 Hz. This should mostly help below 10 Hz.
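For reference, a rough sketch of the sort of notch the new 170 Hz feature will need, using scipy in place of the foton tooling we actually use for the loops; the sample rate and Q below are assumptions, not the values in the real filters.

import numpy as np
from scipy import signal

fs = 4096.0        # assumed loop sample rate, not the real CDS rate
f_notch = 170.0    # feature frequency from the new TFs
Q = 10.0           # assumed notch quality factor

b, a = signal.iirnotch(f_notch, Q, fs=fs)

# check depth of the notch at the feature frequency
w, h = signal.freqz(b, a, worN=8192, fs=fs)
idx = np.argmin(np.abs(w - f_notch))
print(f"gain at {f_notch} Hz: {20*np.log10(abs(h[idx])):.1f} dB")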
I ran a trend of the BSC2 dust monitor (LVEA10) since it's been moved to the platform (alog 89137).
Attached are monthly TCS trends for HWS & CO2 lasers. (FAMIS link)
Ongoing earthquake is shaking ITMY to the point where its SWWD was going to DACKILL seib1; I've bypassed both SEI and SUS for now.
Note that ITMY is the only SWWD which is ringing up.
Not sure what was happening here, but it was not just the earthquake ringing up the SUS. The HWWD killed the ISI coil drivers, and the SUS continued to shake badly enough that it was saturating all of the ISI seismometers for several minutes. I had to turn off the SUS damping, let the quad calm down, then go reset the HWWD and ISI coil drivers before the ISI stopped shaking. When I went to the CER, I found all of the ISI coil drivers powered off. I reset the HWWD on the SUS rack, then power cycled the ISI coil drivers to get them back on. The quad and ISI have both calmed down, damping is back on, and Corey is slowly bringing the ISI back up. Unclear if there was cable pulling in the area, coincident with the earthquake, that caused this.
Wed Mar 04 10:09:48 2026 INFO: Fill completed in 9min 45secs
On an unrelated note, now that HAM1's pressure is steady and in range I have re-enabled it in VACSTAT.
TITLE: 03/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 13mph Gusts, 10mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
HAM1 continues to pump down (6.4e-6 Torr). Brick deliveries to MY and other work for CEBEX have already started this morning. LVEA is laser SAFE.
WP 12956
The PSL PMC HV (375V) Kepco power supply in the CER mezzanine was replaced with two Mid Eastern power supplies. The first power supply provides power to the HAM1 PZT (350V); its output is enabled by a vacuum gauge interlock. The second power supply provides power to the PSL PMC (375V), with no interlock.
TITLE: 03/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY: HAM1 main volume is pumping down. Cables were staged in the LVEA, HAM1 SUS transfer functions were taken, and the Guardian machine was rebooted today as well.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:58 | FAC | Randy | LVEA | N | Prep for cleanroom move | 17:37 |
| 16:10 | FAC | Chris, Pest Ctrl | LVEA, Outbldgs | N | Pest control | 20:10 |
| 16:24 | VAC | Jordan | LVEA | N | Restart HAM1 pumpdown | 17:44 |
| 16:36 | FAC | Jim, Tony | LVEA | N | Cleanroom move | 17:37 |
| 16:56 | FAC | Richard | LVEA | N | Cleanroom move | 17:31 |
| 16:58 | EE | Fil | LVEA | N | Pictures of electronics | 17:22 |
| 16:59 | FAC | Tyler | LVEA | N | Cleanroom move | 18:39 |
| 17:22 | PSL | Fil, Jason | LVEA | N | PMC/JAC electronics | 19:04 |
| 17:24 | PSL | Marc | LVEA | N | PMC/JAC electronics | 19:08 |
| 17:24 | SUS | Oli | CR | N | HAM1 SUS TFs | 19:30 |
| 17:39 | VAC | Gerardo | LVEA | N | HAM1 pumpdown | 17:44 |
| 18:17 | FAC | Randy | LVEA | N | Craning scissor lift | 19:04 |
| 18:36 | SPI | Corey | Opt Lab | N | Inspecting optics | 19:04 |
| 18:39 | FAC | Tyler | LVEA | N | Checks in biergarten | 19:04 |
| 19:16 | VAC | Jordan | LVEA | N | Shut down pump cart at HAM7 | 19:40 |
| 19:19 | VAC | Gerardo | LVEA | N | HAM1 pumpdown | 19:40 |
| 19:28 | JAC | Jason | Opt Lab | N | Inspecting optics | 20:02 |
| 20:01 | TCS | Camilla, Sophie | Prep Lab | N | Checking electronics | 20:28 |
| 20:27 | FAC | Randy | LVEA | N | Changing out cleanroom bolts | 21:11 |
| 20:51 | VAC | Travis | MX, MY | N | Retrieving parts | 21:42 |
| 21:17 | TCS | Sophie | Opt Lab | N | Retrieving a driver | 21:23 |
| 21:33 | EE | Fil, Marc | LVEA | N | Pulling CHETA cables | Ongoing |
| 21:35 | TCS | Sophie | Prep Lab | N | Re-retrieving driver | 21:42 |
| 22:01 | TCS | Sophie | Prep Lab | N | Dropping off a driver | 22:01 |
| 23:07 | TCS | Camilla | LVEA | N | Consulting on cable pulling | 23:20 |
| 23:53 | VAC | Travis | LVEA | N | Dropping off parts | 00:10 |
| 00:20 | TCS | Sophie | Prep Lab | N | Re-dropping off a driver | 00:26 |
| 00:25 | VAC | Jordan | LVEA | N | Checking HAM1 pump cart | Ongoing |
WP13042
Jonathan, Erik, EJ, Ryan S, Dave:
This morning we merged the h1susauxh8 model into h1susfc2. This frees up a core so that when the ethernet IPC is installed we will still have 2 free cores for non-RT use.
As was done previously when Jeff installed the HAM1 JC[1,3] aux on the h1susham1 model, the AUX parts were installed at the top level of h1susfc2 with underscores in their names (FC2_M1, FC2_M2, FC2_M3), which passes the Simulink naming rules and preserves the H1:SUS-FC2_Mx_ channel names.
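For anyone unfamiliar with why the underscore trick works, the RCG builds channel names as IFO:SYS-<top-level block name>_<part path>, so a flat top-level block literally named FC2_M1 produces the same prefix an FC2/M1 hierarchy would. A toy illustration (the part name below is hypothetical):

def channel_name(ifo, sys, top_block, part):
    # RCG-style name: IFO:SYS-<top-level block>_<part path>
    return f"{ifo}:{sys}-{top_block}_{part}"

# hypothetical OSEM monitor part, for illustration only
print(channel_name("H1", "SUS", "FC2_M1", "OSEMINF_T1_INMON"))
# -> H1:SUS-FC2_M1_OSEMINF_T1_INMON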
Also, as was found with the HAM1 model, the first attempt to build failed because of duplicate MUX and DEMUX part names between the main model and the aux library models, for example FC2_M1_Mux1 in both the controls library block and the AUX library block. The solution, as had been done previously, was to edit the common library model and rename the MUX and DEMUX parts. Previously we had made up a number, e.g. 100. This time I got the number from today's date, e.g. Mux260303, to ensure uniqueness.
Files modified for the new h1susfc2:
sus/h1/models/h1susfc2.mdl (svn rev 34641)
sus/common/models/SIXOSEM_T_STAGE_MONITOR_MASTER.mdl (svn rev 34642)
sus/common/models/FOUROSEM_STAGE_MONITOR_MASTER.mdl (svn rev 34642)
Prior to the restarts I verified that the fast and slow channels to be added to h1susfc2 were identical to the existing channels on h1susauxh8 (28 fast, 496 slow). In addition, slow channels associated with the two ADCs used by AUX were accounted for.
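The check itself is just a set comparison of the two channel lists; a minimal python sketch, assuming the lists have been dumped to text files with one channel name per line (filenames hypothetical):

def load_channels(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

old = load_channels("h1susauxh8_channels.txt")    # hypothetical dump
new = load_channels("h1susfc2_aux_channels.txt")  # hypothetical dump

print(len(old & new), "channels in common")  # expect 28 fast + 496 slow
print("only in old:", sorted(old - new))     # expect empty
print("only in new:", sorted(new - old))     # expect empty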
Erik made the puppet changes to remove h1susauxh8 from CDS. He also removed it from the testpoint.par file.
Initially we were going to restart the EDC as part of the DAQ restart to move its DAQ send timing to see if we could fix the h1susb13 DAQ 1-leg CRC issues which started last Tuesday. We quickly realized we actually needed an EDC restart to remove the FEC-172 and DAQ SUSAUXH8 channels.
The model change was made with the sequence:
stop all models on h1cdsh8 and fence it from Dolphin
reboot h1cdsh8, new h1susfc2 installed and h1susauxh8 completely removed.
It was at this point we realized the EDC needed an edit to remove FEC-172. This took longer than anticipated because a mismatch between the old and new DAQ meant the python scripts kept reinstalling the aux channels, so hand editing was needed, and at one point puppet undid that edit. Long story short, it took several rounds of restarts to get the EDC correct.
The DAQ was restarted, and this was also pretty messy. FW1 restarted itself several times, and strangely NDS1 restarted itself after 40 minutes of running. We don't think we had seen this before.
Once the system was stable I wrote a python script to 'activate' the AUX filter modules to pass the signal through to the OUT_DQ channels. I then trended the 28 DQ channels, spanning from when they were on h1susauxh8 to now on h1susfc2, to verify they are the same signals; they are.
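Roughly what that activation script does, sketched against the ezca interface; the bank names below are hypothetical placeholders for the real AUX filter modules, not the actual channel list:

from ezca import Ezca

ezca = Ezca(ifo='H1')

# hypothetical placeholders for the real AUX filter module names
aux_banks = ['SUS-FC2_M1_OSEMINF_T1', 'SUS-FC2_M1_OSEMINF_T2']

for bank in aux_banks:
    # input and output on, unity gain: signal passes straight to OUT_DQ
    ezca.switch(bank, 'INPUT', 'OUTPUT', 'ON')
    ezca[bank + '_GAIN'] = 1.0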
I updated DAQSTAT to expect 124 dcuids (was 125).
h1susfc2 DAQ data rate increased from 300 kB/s to 500 kB/s. Processing time increased by 1 µs to 17 µs (out of 60 µs).
NDS1 restart was caused by 1 second of missed data. Incoming data skipped from 1456605821 cycle 15 to 1456605823 cycle 0. The daqd log is below.
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: Dropped data from shmem or received 0 dcus; gps now = 1456605823, 0; was = 1456605821, 15; dcu count = 124
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: expected gps = 1456605822
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: expected cycle = 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: expected nano = 62500015
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: first 20 dcuids seen
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 38 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 31 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 39 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 40 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 41 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 170 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 171 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 107 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 108 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 43 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 45 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 44 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 46 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 22 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 166 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 167 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 168 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 169 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 174 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: saw dcu 122 - gps: 1456605823 nano: 0 cycle: 0
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: [Tue Mar 3 12:43:25 2026] ->3: start trend 60 net-writer 1456605420 60 {"H0:FMC-CS_LVEA_ZONE2B_G_DEGC.min" "H0:FMC-CS_LVEA_ZONE2B_G_DEGC.max" "H0:FMC-CS_LVEA_ZONE3A_F_DEGC.mean" "H0:FMC-CS_LVEA_ZONE3A_F_DEGC.rms" "H0:FMC-CS_LVEA_ZONE3A_F_D>
Mar 03 12:43:25 h1daqnds1 daqd[1470118]: [Tue Mar 3 12:43:25 2026] connection closed on fd=35
Mar 03 12:43:28 h1daqnds1 systemd[1]: rts-daqd.service: Main process exited, code=killed, status=11/SEGV
Mar 03 12:43:28 h1daqnds1 systemd[1]: rts-daqd.service: Failed with result 'signal'.
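For anyone puzzling over the log: daqd expects 16 data blocks per GPS second (cycles 0-15, one every 1/16 s), so jumping from 1456605821 cycle 15 straight to 1456605823 cycle 0 skips exactly the 16 blocks of second 1456605822. A toy version of the continuity check, as an illustration only:

CYCLES_PER_SEC = 16

def next_expected(gps, cycle):
    # each data block advances one cycle; 16 cycles per GPS second
    cycle += 1
    if cycle == CYCLES_PER_SEC:
        return gps + 1, 0
    return gps, cycle

expected = next_expected(1456605821, 15)  # -> (1456605822, 0), as daqd logged
received = (1456605823, 0)
gap = (received[0] - expected[0]) * CYCLES_PER_SEC + (received[1] - expected[1])
print(gap, "missing blocks")              # 16 blocks = 1 second of data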
To complete the removal of h1susauxh8 I regenerated the various MEDMs with model lists, e.g. IPC, RCG-ver, load-times and sdf-ref.
This week, in the optics lab flow bench, I started assembling the ISIJ SPI reflector assembly. There were a few stumbles, the biggest being a custom PEEK retaining ring that looks very similar to an incompatible catalog part that got mixed in with the JAC stuff. Once that was found, the reflector assembly more or less went together as described in Sheon's procedure, T2500318. I think the only thing missing is an Accu-Glass cable clamp, but we can add this later or come up with a zip tie to capture the cable.
First photo is the reflector assembly itself. This is where I finished yesterday. Second, third and fourth images show the reflector assembly with the shroud attached. Jeff said we may not use this, but it will provide some protection for the time being.
Fifth, sixth and seventh images are the information I have for the lens, beamsplitter and QPD.
Tagging for EPO photos
Laser Status:
NPRO output power is 1.84W
AMP1 output power is 70.39W
AMP2 output power is 138.9W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 0 days, 0 hr 0 minutes
Reflected power = -0.4136W
Transmitted power = -0.02846W
PowerSum = -0.4424W
FSS:
It has been locked for 0 days 0 hr and 0 min
TPD[V] = 0.003508V
ISS:
The diffracted power is around 4.1%
Last saturation event was 0 days 0 hours and 36 minutes ago
Possible Issues:
FSS TPD is low (dropped at ~1710 UTC today)
PSL stabilization systems were shut down for Fil's PMC and JAC PZT power supply work (alog 89349). Work was completed in the late morning and all PSL systems were turned back on; everything is working as it should.
Yesterday, ground loop checks revealed a new and random ground loop issue on the LR BOSEM of the Tip Tilt PM1 suspension (which has been installed and used in vacuum for a long while now). This morning I worked to swap the BOSEM for a new one, reset the MEDM offsets/gains, and recentered. Fil confirmed that the ground loop situation is now resolved on PM1. BOSEM SN 263 is the problem OSEM removed; SN 229 was swapped in, with OLV 28860.
In Corey's absence, I took a bunch of photos (won't be as good as his, but here for posterity) of the chamber components as we see them before closing up. I wiped down some of the under-table surfaces as I could. Jim and I both inspected for left-behind items on and under the table. Jim then removed the septum viewport covers (all 4) and unlocked the ISI. Oli ran the final closeout TFs for PM1 (JM1 and JM3 were yesterday).
I revisited the closeout check sheet and we launched the door crew (Jordan, Randy, and FAC - thanks for jumping in due to others' absences!). 1 door is on now and we are about to check that the JAC Refl beam goes through the viewport on the door as anticipated, given the inaccuracy of the viewport simulator fixture.
We installed JM1, balanced it, and aligned it. A beam dump was placed behind it, though we could not see the transmission with 1 W input.
After this, JAC locked with RF without any problem, though the input was wobbly when the purge was up.
We searched for unexpected ghost beams (also with 1W input) and didn't find any.
We uninstalled many (but not all) temporary dog clamps and irises.
We revisited the IMC alignment because it's been off in PIT since Thursday or Friday. We locked JAC using dither (because we wanted to turn down the purge air). We enabled the IMC WFS just for the MC optics and steered JM3, but weren't able to center the MC2 trans. Steering JM3 just made the IMC transmission worse while having little impact on the desired degree of freedom (JM3 PIT -> MC2 trans YAW, JM3 YAW -> MC2 trans PIT).
Tomorrow, we'll revisit the IMC alignment. We'll also measure the power coming into the JAC TRANS PD as well as the actual transmission of JAC while locking it with RF, so we can use the JAC trans PD as the measure of the power into HAM1.
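The planned calibration is just a ratio; a sketch with made-up numbers standing in for tomorrow's RF-locked measurements:

# all numbers below are placeholders, to be replaced by the measurements
P_in = 1.0     # W, measured power into JAC (placeholder)
T_jac = 0.9    # JAC power transmission (placeholder)
V_pd = 2.0     # V, JAC trans PD reading during the measurement (placeholder)

cal = (P_in * T_jac) / V_pd   # watts into HAM1 per PD volt

def power_into_ham1(v_pd):
    return cal * v_pd

print(power_into_ham1(2.0), "W into HAM1")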
(Travis S., Jordan V., Gerardo M.)
Late entry-
Late Wednesday we vented the annulus system for HAM4 using nitrogen. On Thursday, we removed and replaced the ion pump. This is a very uncomfortable location for this pump; it is surrounded by cable trays, see photos.
We have not been able to fasten the mounting bolts that hold the ion pump body in place. We noticed that the pipes for the annulus system leading up to the mounting flange are not parallel to the ion pump flange: they are |/ and they should be ||. So, we have to twist the pipe to get it parallel to the pump. We have a mechanical block supporting the pump as a temporary support; we'll replace it as soon as possible.
Today, I visited the HAM4 ion pump and turned the controller on. It railed at first, but a few minutes later it was at nominal value, and after it reached a bit below 2 milliamps I closed the isolation valve for the annulus system. An hour or so later the aux cart, can turbo, and hoses were removed. The ion pump pressure signal has turned around and is getting lower, nice! Good job, Jordan and Travis.
Tagging for EPO photos