During some ndscope testing Erik ran into a file descriptor limit on h1daqnds1 that required us to restart daqd. The daqd had stopped accepting requests, logging that the accept call had failed. We bumped the file descriptor limit up, which solved the issue. We have made runtime changes (no restart required) on h1daqnds[01]. We have put the new limits into the daqd puppet configuration, but have not applied them yet. We will reconcile the systems with puppet next week, after the long weekend.
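For context, a minimal sketch (not the actual daqd change, which was applied at the system level and will be managed by puppet) of how a process can inspect and raise its own descriptor limit; raising the hard limit itself requires privileges:

# Minimal sketch, not the actual daqd fix: inspect and raise this
# process's open-file-descriptor limit using the standard resource module.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# A process may raise its soft limit up to the hard limit without privileges;
# raising the hard limit (as was needed for daqd) is a system-level change,
# e.g. via prlimit or a service override managed by puppet.
resource.setrlimit(resource.RLIMIT_NOFILE, (min(hard, 65536), hard))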
with help online from Joe B
I revised this title to "attempted" because this push has failed and we reverted to the calibration from 20250610T224009Z.
Today I pushed a new calibration from calibration report 20250628T190643Z. We changed the SRCL offset on 6/26, which had a small effect on the sensing function, enough that Joe and I (with input from Sheila) decided to push a new calibration. With the change in the sensing function, I tagged a previous report on 6/26 with epoch-sensing and epoch-hfrl. When I regenerated the report, I set is_pro_spring to empty, since the previous iteration of the report showed that there was very little spring in the DARM sensing measurement, at least down to 10 Hz. I confirmed with Joe that the resulting corner plots for the sensing, which show very poor fits for F_spring and Q, are ok; this is because the pipeline is unable to fit any appreciable spring in the sensing function.
I checked the GDS filter results by eye, and confirmed they all looked flat. I then went ahead to push this new calibration, following steps we took last time, specifically:
pydarm commit 20250628T190643Z --valid
pydarm export --push 20250628T190643Z
pydarm upload 20250628T190643Z
pydarm gds restart
Then, we waited ten minutes to begin the calibration measurement. This is where I made an error: I checked that the GDS calib strain channels all looked sensible, and I saw some lines updating on grafana, so I assumed we were good to go. Corey began a calibration suite, which starts with a broadband measurement. However, the broadband results were not very good. We lost lock right at the end of the measurement. This was my mistake; I never checked whether kappaC and kappaTST had settled, and it looks like they hadn't. So, I think we need to relock and check the calibration again. If it still looks poor, we can revert to the previous calibration. This must be done before we go back to observing.
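For next time, a quick script could confirm the time-dependent correction factors have settled before starting the measurement. A minimal gwpy sketch, assuming the usual TDEP kappa channel names and placeholder GPS times (both should be checked against the actual channel list):

# Sketch of a kappa settling check before a calibration measurement.
from gwpy.timeseries import TimeSeriesDict

# Channel names are assumptions; verify against the live channel list.
channels = [
    "H1:CAL-CS_TDEP_KAPPA_C_OUTPUT",
    "H1:CAL-CS_TDEP_KAPPA_TST_REAL_OUTPUT",
]
start, end = 1435000000, 1435000600  # placeholder GPS times, ~10 min window
data = TimeSeriesDict.get(channels, start, end)

for name, ts in data.items():
    drift = abs(float(ts.value[-1]) - float(ts.value[0]))
    print(f"{name}: latest = {ts.value[-1]:.4f}, drift over window = {drift:.4f}")
    if drift > 0.005:  # illustrative threshold, ~0.5%
        print("  -> still settling, wait before starting the measurement")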
Follow up edit below here:
We relocked after the push above, remeasured, and saw the calibration was even worse than before (orange trace). We think this may be due to a fit error in the L2 actuation function, but we're not sure. Joe helped me revert the calibration to the previous version, from 20250610T224009Z. Corey and I ran an early broadband measurement and saw the error was better (red trace). Still not the best, but we were not thermalized yet. Hopefully we can try a new push next week with a better cal.
Sheila, Matt, Corey, Elenna
I reverted the IM2 and IM3 osem positions in order to bring the beam on IM4 trans QPD back to its position before the mystery shift reported in alog 85486. I started by moving IM2 pitch, and noticed that bringing it back towards its previous position did help return the beam on IM4 trans to its previous spot. It was also very strongly cross coupled with yaw, and shifted the yaw position slightly as well. I then moved IM3 pitch and further brought the pitch offset on IM4 trans back. I tried adjusting IM2 and IM3 yaw slightly, but the adjustments to bring the yaw osems back did not correspond with returning the yaw IM4 trans position to its previous value. Overall, the shift in yaw position is very small, so I chose to not make any more moves in yaw.
Corey reran input alignment, and I moved PR2 and IM4 by hand significantly in pitch (tens of microradians for both) to get the lock to catch.
Once we were locked, I reset the POP A offsets to well align PRM to the full lock alignment. This is SDFed in safe, but will be an observe diff too.
The attached ndscope shows the movement that I made. The y markers on the osem plots indicate where the suspensions were before the mystery shift. Note: IM1 also shows movement, but I chose not to adjust it.
Kevin, Matt, Sheila, Elenna, Corey
After yesterday's ETMY RH change to avoid the 10kHz PI 85514, we lost lock 3 times overnight due to 80kHz PIs.
Apparently the RH change also changed the violin mode damping phase for the ETMY 1 kHz mode (mode 20, 1000.307 Hz), 85526, which did not cause the earlier locklosses but has grown to the point where it is dominating the DCPD RMS in this lock; it responded well to Elenna's sign flip.
Kevin took a look at the 10 kHz higher order modes, second attachment. The top panel shows how the higher order modes have been thermalizing before the ring heater change; the bottom panel shows three locks since the ring heater change, which happened a few minutes before 2 hours into the lock, so these can be compared to the green trace in the top panel. The x arm higher order modes are sitting around 10.6 kHz at this point in the self-heating thermalization; the y arm modes were below the scale of this plot before the change and are now moving up to around 10480 Hz. If they gain another 20 Hz as the self heating keeps thermalizing, similar to the x arm, this would put them around 10500 Hz, which doesn't have a lot of acoustic modes visible here.
We don't want to revert the ring heater change from last night, as that setting had the y arm sitting right below the forest of acoustic modes. Matt estimates that we could move the y arm below this forest of acoustic modes by going to 1.5 or 1.6 W per segment, but that would mean that our two arm modes are more different from each other. First we tried lowering the power a little more, to 0.9 W per segment, to see if that helps the 80 kHz modes. Then Matt estimated that we could put the y arm 2nd order mode in a similar location to the x arm by using 0.6 W per segment, so we've now set them to that.
Matt took a reference here before the PI rang up; it looks like the PI is 80297 Hz. I got a reference that includes the peak and the lockloss transient, which shows the frequency as 80298 Hz. The signal used for the PI damping is the DCPDs downconverted at 80 kHz, with a bandpass that goes from 294 to 298.5 Hz, so our peak is within the bandpass. The PLL set frequency was 299 Hz; we lowered this to 297.5 Hz. It was being sent to ETMX for damping, which did appear to work once, but not as the mode grew. We think this is a y arm PI, since we have been changing the Y arm ring heaters, so I've changed the output matrix to send this to ETMY.
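As a sanity check of the numbers above, a rough numerical sketch of the downconversion (a synthetic tone, not real DCPD data; the sample rate is an arbitrary choice): an ~80298 Hz line mixed with an 80 kHz local oscillator lands near 298 Hz, inside the 294-298.5 Hz band.

import numpy as np

fs = 262144                                  # Hz, arbitrary rate for this illustration
t = np.arange(0, 8, 1 / fs)
tone = np.sin(2 * np.pi * 80298.0 * t)       # stand-in for the PI line
lo = np.exp(-2j * np.pi * 80000.0 * t)       # 80 kHz local oscillator
mixed = (tone * lo).real                     # difference frequency near 298 Hz

# Find the strongest component below 1 kHz of the downconverted signal.
spec = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
low = freqs < 1000
peak = freqs[low][np.argmax(spec[low])]
print(f"downconverted peak at {peak:.1f} Hz")
print("inside the 294-298.5 Hz bandpass:", 294 <= peak <= 298.5)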
I SDFed the ETMY ring heater change to 1.5 W in observe.
For this new Ring Heater power (1.5 W), the ETMY Mode20 violin mode started to ring up again with default settings (+30 deg phase, -1.0 gain).
Took the gain to +1.0, but this also rang up the violin mode.
It's been about an hour, but this setting has been damping this mode:
Have not updated lscparams since we are still figuring out a Ring Heater power which works for H1. Once we find a good Ring Heater setting, we should update lscparams (if necessary).
Thu Jul 03 10:11:40 2025 INFO: Fill completed in 11min 36secs
Sheila, Matt, Elenna
In the midst of trying to diagnose various PI issues, we noticed DCPD sum was slowly increasing, but not from PIs. We eventually figured out it was ETMY violin mode 20. The gain was set to -1; I flipped it to +1 and then the mode started damping down.
Sheila updated the ETMY ring heater again to 1.5 W, and it looks like ETMY mode 20 phase has flipped back, and a damping gain of -1 is working once again.
It appears that gains of both -1 and +1 are not damping the mode now. Corey is trying different phase filters.
Thought I had posted this before, but couldn't find it, so here it is. Attached plots compare L2L measurements of the HAM1 GS13s on May 21, during the corner pumpdown before adding the periscope viton, and June 6, the afternoon after we added the viton. The Q and frequency of the 71.8hz mode are somewhat reduced, but the neighboring 69.9hz mode is sharper now, so I'm not sure we gained much. The June 6 measurement was collected in air, so I would still like to collect a set of in-vac measurements. This could probably be done on a Tuesday if there isn't too much activity around HAM1.
I took 5-200hz matlab tfs this morning to compare to the 2 previous measurements above. It seems that the damping is quite effective now. I will try to look at the effect on the isolation filter design, maybe we can get some of the loop gain back. It would still be better to move these modes up above 100hz if possible.
TITLE: 07/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
H1 had (3) ~2-hr locks overnight (looks like no wake-up calls for Oli!). H1 is currently locking (just powered up to 25W). Winds stepped down at about 130amPDT (837utc).
Today is Thurs Commissioning from 8am-12pm PDT (15-19 UTC). H1 won't be thermalized until about 10:30-11am PDT, so no calibration...hopefully H1's short locks won't be an issue for calibration, where we need H1 thermalized for 3 hrs. Robert and Sheila will probably start their work.
Just want to add a note that all three locks from last night were nearly the exact same length, 1:51, and the locklosses are all tagged with "PI monitor". A ring heater was changed yesterday, 85514, which may have caused this problem.
Additionally, each of the 3 locklosses was from PI28/29 according to VerbalAlarms (the Ring Heater change yesterday was made to address PI24, which riddled our 7hr lock for the last hour before the ultimate lockloss).
While we have been running PIMON live on nuc25, it appears the data from the lockloss hasn't been saved. The newest file in the /ligo/gitcommon/labutils/pimon/locklosses folder is from December 2024. I'm not sure what's going on. We think our problem is an 80 kHz PI, so this would be useful data to have.
TITLE: 07/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Lost lock right as my shift started, and the wind kept H1 from locking until 4:25 UTC.
Dropped from Observing once for Xtreme_PI_Damping at 4:45 UTC, for 3 minutes.
H1 is currently Observing and looks good.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:47 | VAC | Gerardo | LVEA HAM3 | N | Banging vacuum equipment together next to HAM3. | 00:00 |
23:54 | ISS | Jennie W | Optics Lab | N | Putting away parts | 00:02 |
TITLE: 07/03 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 35mph Gusts, 20mph 3min avg
Primary useism: 0.11 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Lockloss as soon as my shift started, at 23:29:48 UTC, from a PI24 ring up.
Wind is picking up speed.
The Ring heater at ETMX was changed to hopefully help with the PI24 Ring ups.
Running Initial_Alignment now.
Jennie W, Rahul
Yesterday we made some measurements to calibrate the spot size on the QPD as we scan the beam position across it.
We used a connector Fil made for us to plug the OT301 QPD amplifier into a DC power supply, after checking that it contained voltage regulators that could cope with a voltage between 12 and 19 V (the unit says it expects a DC supply, but the previous supply we were using was AC with a 100 mA current rating and was getting too hot, so we assume that was the incorrect one). We hooked it up at 16 V (this draws about 150 mA of current). The QPD readout looks normal and does not have any of the strange sawtooth we saw with the original power cable.
We moved the M2MS beam measurement system out of the way of the translation stage.
To calibrate the QPD we need to change the lateral position of the M1 mirror and lens to change the yaw positioning on the QPD and measure the X and Y voltages from the QPD.
We need to check we are centred first. The QPD bullseye readout shows the beam is off by a tiny bit in yaw, but this was as good as we could get at centering the beam when we moved the QPD. All 8 PDs are reading about 4.6 V, so the beam is well centred in the array plane.
We measure 11000 counts on the bullseye qpd readout at this M1 position.
Translation Stage [inch] | QPD X (V) | QPD Y (V)
---|---|---
4.13 | 239e-3 | -1.77 |
4.14 | 252e-3 | -1.84 |
4.15 | 2.34 | -1.60 |
4.16 | 4.46 | -1.17 |
4.17 | 4.26 | -1.11 |
4.18 | 5.62 | -835e-3 |
4.19 | 7.80 | -600e-3 |
4.20 | 7.81 | -321e-3 |
4.21 | 8.45 | -222e-3 |
4.22 | 8.82 | +70.6e-3 |
4.23 | 9.19 | 771e-3 |
4.24 | 9.28 | 1.12 |
4.25 | 9.37 | 1.88 |
4.26 | 9.36 | 2.36 |
4.27 | 9.37 | 2.38 |
4.28 | 9.44 | 2.71 |
4.29 | 9.47 | 3.10 |
4.30 | 9.50 | 3.41 |
4.31 | 9.49 | 3.35 |
4.32 | 9.51 | 3.72 |
4.33 | 9.55 | 4.27 |
4.34 | 9.58 | 4.55 |
4.35 | 9.62 | 4.86 |
4.36 | 9.66 | 5.44 |
4.37 | 9.65 | 5.31 |
4.38 | 9.63 | 5.60 |
4.39 | 9.69 | 5.75 |
4.40 | 9.69 | 6.00 |
4.41 | 9.70 | 6.15 |
4.42 | 9.70 | 6.16 |
4.43 | 9.71 | 6.33 |
4.44 | 9.71 | 6.50 |
4.45 | 9.72 | 6.72 |
4.46 | 9.74 | 7.09 |
4.47 | 9.73 | 6.87 |
4.48 | 9.74 | 7.40 |
4.49 | 9.75 | 7.46 |
4.50 | 9.74 | 7.46 |
4.51 | 9.76 | 7.75 |
4.52 | 9.74 | 7.67 |
4.53 | 9.73 | 7.82 |
4.54 | 9.74 | 7.96 |
4.55 | 9.73 | 8.08 |
4.56 | 9.72 | 8.33 |
4.57 | 9.71 | 8.43 |
4.58 | 9.70 | 8.50 |
4.13 | 448e-3 | -1.98 |
4.12 | -1.02 | -2.34 |
4.11 | -1.17 | -2.14 |
4.10 | -2.92 | -2.43 |
4.09 | -4.63 | -3.30 |
4.08 | -5.91 | -3.18 |
4.07 | -6.97 | -3.40 |
4.06 | -8.17 | -4.24 |
4.05 | -8.13 | -4.28 |
4.04 | -8.52 | -4.47 |
4.03 | -8.76 | -4.77 |
4.02 | -8.89 | -5.27 |
4.01 | -9.01 | -5.45 |
4.0 | -9.08 | -5.44 |
3.99 | -9.10 | -5.85 |
3.98 | -9.11 | -5.91 |
3.97 | -9.11 | -5.93 |
3.96 | -9.12 | -6.16 |
3.95 | -9.12 | -6.18 |
3.94 | -9.13 | -6.34 |
3.93 | -9.13 | -6.49 |
3.92 | -9.12 | -6.49 |
3.91 | -9.13 | -6.61 |
3.90 | -9.11 | -6.55 |
3.89 | -9.11 | -6.70 |
3.88 | -9.10 | -6.46 |
I plotted the data from lowest reading on the translation stage to highest and fitted the linear region using Calibrate_QPD.m which is attached.
Data is shown in attached pdf.
The slope of the linear region is 112 V/inch, which means that if the beam moved 8.93e-3 inches on the QPD in yaw, the yaw readout would change by 1 Volt.
I altered the code to plot in mm and the constant is 4.4 V/mm.
D'oh I read the scale on the translation stage wrong so the x readings are actually lower by a factor of 10.
This makes the slope 44.1 V/mm which is more in line with the 65.11 V/mm Mayank and Shiva found for the QPD calibration here.
Ours could be different because we have a slightly different beam size and we moved the QPD in its housing to centre it which could have changed X to Y coupling in the QPD readout.
This implies our beam diameter on the QPD is around 0.4mm which makes a lot more sense considering the diode is 3mm!
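For reference, a minimal Python sketch of the linear-region fit described above (the posted analysis used the attached Calibrate_QPD.m; the points below are a subset of the table with the factor-of-10 scale correction applied, so the numbers are illustrative rather than authoritative):

import numpy as np

# Stage positions (corrected scale, i.e. table readings divided by 10) and the
# corresponding QPD X voltages from the roughly linear region of the scan.
stage_inch = np.array([0.414, 0.415, 0.416, 0.417, 0.418, 0.419, 0.420, 0.421, 0.422])
qpd_x_volts = np.array([0.252, 2.34, 4.46, 4.26, 5.62, 7.80, 7.81, 8.45, 8.82])

slope_v_per_inch, offset = np.polyfit(stage_inch, qpd_x_volts, 1)
slope_v_per_mm = slope_v_per_inch / 25.4

print(f"slope: {slope_v_per_inch:.0f} V/inch = {slope_v_per_mm:.1f} V/mm")
# 1 V of QPD X readout then corresponds to roughly 1/slope_v_per_mm mm of beam motion.
print(f"1 V corresponds to ~{1000 / slope_v_per_mm:.0f} um of beam motion")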
As a cross-check, we used the QPD 'bullseye' readout unit; Rahul changed the translation stage in yaw and we measured the beam dropping from 10400 counts in the middle of the QPD to 100s of counts at the edges.
Translation Stage [inch] | QPD Sum Counts |
---|---|
0.413 | 10400 |
0.365 | 500 |
0.413 | 10400 |
0.49 | 400 |
diode size ~ ((0.49-0.365)*0.0254*1000) = 3.175 mm.
I redid the graphs for the horizontal motion of the input beam to X motion on the QPD with better labels (first attached graph) and did a fit for the Y data on the QPD collected at each horizontal position of the input beam (second attached graph). The third graph attached is comparing both fits on one graph.
If we take into account the input beam horizontal axis is not aligned with the QPD, we can work out the resultant calibration relative to the mirror displacement as:
V change along mirror displacement axis = sqrt((change V in X)^2 + (change V in Y)^2)
Calibration = V change along mirror displacement axis / change in mirror position
= 4.644 V/mm.
angle of the QPD horizontal axis with the mirror displacement axis = tan^-1(voltage change in Y / voltage change in X) = 38.8 degrees.
I got the above calculation of the QPD calibration in the horizontal direction wrong, as I used the total change in voltage measured across the whole range of the horizontal scan and not just the linear region where the beam is close to centred on the QPD.
The horizontal beam scan calibration is actually:
sqrt(11.8^2 + 44.1^2) = 10.6 V/mm
with an angle of tan^-1(11.8/44.1) = 14.9 degrees to the X direction on the QPD.
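As a generic illustration of the quadrature combination used in the two comments above (the slopes here are made-up placeholders, not the measured values):

import math

# Placeholder per-axis responses to a horizontal mirror scan, in V/mm.
dv_x_per_mm = 3.0
dv_y_per_mm = 2.0

combined = math.hypot(dv_x_per_mm, dv_y_per_mm)             # V per mm of mirror motion
angle = math.degrees(math.atan2(dv_y_per_mm, dv_x_per_mm))  # scan direction w.r.t. QPD X axis

print(f"combined calibration: {combined:.2f} V/mm at {angle:.1f} deg to the QPD X axis")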
Jim, TJ, Robert
We damped the periscope on ISCT1 by removing the dog clamps one by one and inserting a strip of 1/16" viton between each dog clamp and the base of the periscope before retightening, making sure that the strips crossed the corner of the base. This is not the most effective way of damping the periscope, but it was the fastest, safest, and simplest damping we could do. Jim measured the Q to be around 1000, so we didn't have to do much to get an improvement. In Jim's first transfer functions after the damping, the peak looked a little wider.
This was HAM1 not ISCT1
FranciscoL, RickS
On April 4, 2024 we used DTT to measure the amplitudes of the Pcal lines used in the Pcal X/Y calibration comparison in both the Pcal end station Rx sensor outputs and the DARM_ERR signal. The attached plots show the peaks in the spectra, measured with 0.001 Hz requested BW, 50% overlap, and 10 averages during a long lock stretch. The second image is a page from Francisco's lab book.
The SNRs for the lines from the Pcal Rx sensors are about 5e-5, and the SNRs of the lines in the DARM_ERR signal are about 1350.
Typo: The SNRs for the lines from the Pcal Rx sensors is about 5e-5 --> 5e5
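For anyone repeating this without DTT, a gwpy sketch of the same kind of line-SNR estimate (channel name, GPS times, and line frequency are placeholders, not the actual measurement; 0.001 Hz BW corresponds to 1000 s FFTs):

import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 1396000000, 1396006000                    # placeholder GPS times (~100 min)
data = TimeSeries.get("H1:LSC-DARM_ERR", start, end)   # placeholder channel name

# 0.001 Hz bandwidth -> 1000 s FFTs; 50% overlap gives ~10 averages here.
asd = data.asd(fftlength=1000, overlap=500, method="median")

line_freq = 284.01                                     # Hz, placeholder Pcal line frequency
f = asd.frequencies.value
on_line = (f > line_freq - 0.05) & (f < line_freq + 0.05)
nearby = (f > line_freq - 1.0) & (f < line_freq + 1.0) & ~on_line

snr = asd.value[on_line].max() / np.median(asd.value[nearby])
print(f"line at {line_freq} Hz: peak-to-background ratio ~ {snr:.0f}")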