We have evidence that DARM noise in the 10-50 Hz region is modulated by residual DHARD_Y motion at 2.6 Hz.
At the same time, DHARD_Y is no longer limiting DARM through linear coupling above 10 Hz, thanks to the improved A2L decoupling (-30 dB).
So there's room to test a new DHARD_Y controller that could give us 2x more suppression at 2.6 Hz, at the price of about 2x more noise above 10 Hz.
Attached a proposed new controller that could be tested to see if DARM stationarity improves.
I uploaded the new controller in FM1 as a multiplicative change to the previous controller. So one can turn on this 2.6 Hz boost by engaging FM1 (5 second ramp) together with the other FMs while locked.
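For illustration, a resonant-gain ("boost") stage of the kind described, with unity gain away from the resonance and ~2x gain at 2.6 Hz, could be sketched as below. This is a toy sketch with made-up damping ratios, not the actual FM1 filter:

```python
import numpy as np
from scipy import signal

f0 = 2.6                      # boost frequency [Hz]
w0 = 2 * np.pi * f0
# Resonant gain: numerator and denominator share the resonance frequency
# but have different damping ratios; peak gain ~ z_num / z_den = 2,
# and the gain returns to ~1 away from f0.
z_num, z_den = 0.10, 0.05     # made-up damping ratios for illustration
boost = signal.TransferFunction([1, 2 * z_num * w0, w0**2],
                                [1, 2 * z_den * w0, w0**2])

f = np.array([0.1, 2.6, 10.0, 50.0])
_, resp = signal.freqresp(boost, 2 * np.pi * f)
gain = np.abs(resp)           # ~1 at 0.1 Hz, 2 at 2.6 Hz, ~1 above 10 Hz
```

Applied multiplicatively to an existing controller, such a stage doubles the loop gain near 2.6 Hz while leaving the rest of the loop shape essentially unchanged.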
Tested on and off a couple of times while in NLN; the transition is smooth. The effect is shown in the attached spectrum:
All in all, the DHARD_Y RMS is reduced to 0.64 times its value with the old controller.
I left this controller active in Observing and Cory accepted the SDF. We should take a look at DARM in the next few hours to see if 1) there is less non-stationary noise in the 20-50 Hz region 2) the higher DHARD_Y control noise doesn't affect us.
I have updated ISC_LOCK to engage this controller when in LOWNOISE_ASC. I did not reload the ISC_LOCK code.
Attached is the FM1 diff from Gabriele's change this afternoon.
Gabriele has also made a corresponding change in ISC_LOCK (line 4490) for this FM1 change. Next time H1 is OUT of OBSERVING, please hit LOAD on ISC_LOCK.
ISC_LOCK has been re-loaded.
The new controller greatly improved the DHARD_Y motion at 1.3 Hz and 2.6 Hz, reducing the RMS.
It looks like CHARD_Y could use a similar improvement, since it still has large peaks at 1 Hz and 2.6 Hz. The 1 Hz peak is due to gain peaking in the current design, and the loop has little gain at 2.6 Hz.
DARM still shows bicoherence with CHARD_Y
All pumps are running smoothly. Temps are within operating range.
Mon Jul 24 10:08:00 2023 INFO: Fill completed in 7min 56secs
Travis confirmed a good fill curbside.
FAMIS 19986
Jason's incursion last Tuesday shows clearly on the environmental trends; these settled just fine. There are also noticeable changes to some of the amp laser diodes, but the overall power output of each amp is about the same as before.
Both PMC transmitted and reflected power are lower following the incursion. RefCav transmission is higher and has remained stable.
TITLE: 07/24 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.04 μm/s
QUICK SUMMARY:
Handed a recently-put-to-OBSERVING H1 by RyanC. Sent Bubba a text regarding RO alarm. L1 reports logging until about noon-1pm PT. I also touched base with Dave about nuc34 (he plans to wait to reboot with Erik). Winds are mostly low (Corner Station is noticeably above the weeds.), and only 77degF outside this morn.
NOTE: There is a forecast of high winds from 11am-11pm today with gusts up to 30mph!
TITLE: 07/24 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
SHIFT SUMMARY:
LOG:
No log
Last Saturday, while damping violin modes, I wanted to check if DHARD_Y noise below 7-8 Hz was limited by test mass M0 Y damping, as suspected (71466).
I tried to reduce all test mass M0 Y damping loop gains from -1 to -0.5, but with that gain the peak at about 1 Hz was slowly rising, so I aborted that test.
I then reduced all M0 DAMP_Y loop gains from -1 to -0.7, and this configuration was stable:
You can see the reduction in the spectra clearly for the damping loop signals. It's harder to judge how much DHARD_Y improved, so the second plot shows the ratio of all signals when the gains were -0.7 to when they were -1. The reduction is evident and pretty close to the expected factor 0.7.
So in conclusion: DHARD_Y signal below a few Hz is dominated by test mass damping noise.
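The spectra-ratio comparison above can be reproduced in a few lines. This is a sketch with stand-in white noise rather than the real M0 damping / DHARD_Y channels, with the second dataset scaled by the expected factor 0.7:

```python
import numpy as np
from scipy import signal

fs = 256
rng = np.random.default_rng(0)
# Stand-in data: the real analysis used the damping-loop and DHARD_Y signals.
x_gain_m10 = rng.standard_normal(fs * 128)   # configuration with gains at -1
x_gain_m07 = 0.7 * x_gain_m10                # gains at -0.7, expected 0.7x motion

f, p10 = signal.welch(x_gain_m10, fs=fs, nperseg=fs * 8)
_, p07 = signal.welch(x_gain_m07, fs=fs, nperseg=fs * 8)
ratio = np.sqrt(p07 / p10)                   # amplitude-spectral-density ratio
```

With real data the ratio is noisy and frequency dependent; averaging many segments (large nperseg count) makes the comparison against the expected 0.7 meaningful.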
No clear reason for the lockloss
STATE of H1: Observing at 145Mpc
We've been locked for 8:33, everything's stable.
TITLE: 07/24 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
SHIFT SUMMARY:
23:17 Detector locked for 7:51 and in Observing
00:41 Moved itself out of Observing and into Commissioning, then immediately lost lock (71635). H1_MANAGER handled relocking well
1:36 Got into OMC_WHITENING
2:25 Moved into NOMINAL_LOW_NOISE
2:26 Got into Observing
3:10 On SEISMON_ALERT because of an incoming earthquake from Tonga
3:52 Back to CALM
4:09:01 Pushed out of Observe and into Commissioning by SDF_DIFF syscssqz but diffs immediately disappeared
4:09:10 I put us back into Observing
7:00 In Observing and Locked for 4:36
Relocking after the lockloss went smoothly and quickly.
ISC_LOCK went through CHECK_VIOLINS_BEFORE_POWERUP so quickly that Verbals didn't even register it.
Dust in the PSL got up to 440pcf >300nm around 2:40
LOG:
No log
At this time some of the SQZ GRDs changed. I looked into these syscssqz SDF diffs by searching the SDF all-channels table for recently changed diffs and trending them. It was H1:SQZ-FIBR_SERVO_COMGAIN and H1:SQZ-FIBR_SERVO_FASTGAIN that changed from 20 to 17 and 15 during this 04:09 UTC time, see attached. Unsure what this means. Tagging SQZ.
Naoki, Camilla
We looked at the different TTFSS channels, as Daniel and Vicky suggested that a PD could be reaching its threshold and making the TTFSS think it's unlocked or similar, but we saw nothing suspicious, see attached. The H1:SQZ-FIBR_EOMRMS channels show a change, but this could be a result of the gain change rather than its cause. It was also seen in 71653.
TITLE: 07/24 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 22mph Gusts, 14mph 5min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Got back into Observing from the lockloss (71635) a bit over an hour ago. Currently on SEISMON_ALERT because of an earthquake from Tonga, but the first part of the R-wave has passed and I think we'll be able to ride it out fine.
Lockloss at 00:41. Detector moved itself out of Observing and into Commissioning, and then immediately lost lock. Cause not yet known.
Got back to NOMINAL_LOW_NOISE at 2:25, and went back into Observing at 2:26. Damping violins for only 50 minutes!!!!
I wasn't able to find anything about what could have caused this lock loss, but I did notice something that I found interesting, although there might be a very obvious explanation for it.
Looking at the H1:ASC-AS_A_DC_NSUM_OUT16 and H1:SQZ-LO_SERVO_ERR_OUT_DQ channels (Attachment 1), the H1:SQZ-LO_SERVO_ERR_OUT_DQ channel first sees the lockloss only 22ms after it starts. This is still during the initial small change in slope of ASC-AS_A_DC that leads into the large slope and subsequent dropoff.
I've also attached the ISC_LOCK and SQZ_MANAGER logs from when the lockloss occurred. For the last four locklosses we have had (possibly more before that), SQZ_MANAGER has given us the 'SQZ ASC AS42 not on?? Please RESET_SQZ_ASC' message ~0.1 seconds after the lockloss started. At least looking at the ASC and LSC channels that come up in 'lockloss select', none seem to respond to a lockloss that quickly.
Nice alog Oli. If you use the H1:ASC-AS_A_DC_NSUM_OUT_DQ channel (sampled at 2kHz rather than 16Hz) and zoom in on the y-axis, you can see that the AS_A channel actually sees a discrete bump and runs away before the SQZ_LO_SERVO shows its glitch, see attached. The SQZ-LO is locked to the OMC TRANS 3MHz sideband, so this is probably a sign that the light at the OMC has changed.
The messages in the log about SQZ AS42 are because we have already lost lock but ISC_LOCK hasn't yet realized it: see the first "refined time" plot in the lockloss tool, where ISC_LOCK_STATE_N shows the lockloss happening nearly 1 second after H1:ASC-AS_A_DC_NSUM_OUT_DQ sees it; is this longer than usual? We expect SQZ AS42 not to be on when we are unlocked, tagging SQZ.
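As a toy illustration of the timing comparison (with synthetic unit-variance noise and an invented 45-sample delay standing in for the ~22 ms offset, not real IFO data), onset times can be estimated from the first large threshold crossing in each full-rate channel:

```python
import numpy as np

fs = 2048                           # full rate of ASC-AS_A_DC_NSUM_OUT_DQ
rng = np.random.default_rng(2)

def onset_index(x, thresh=10.0):
    """Index of the first sample exceeding |thresh| on a unit-variance baseline."""
    idx = np.flatnonzero(np.abs(x) > thresh)
    return int(idx[0]) if idx.size else None

n = 4 * fs
as_a = rng.standard_normal(n)       # stand-in for the AS port channel
sqz_lo = rng.standard_normal(n)     # stand-in for the SQZ LO error signal
as_a[3000:] += 50.0                 # AS port runs away first
sqz_lo[3045:] += 50.0               # SQZ LO glitches 45 samples (~22 ms) later

dt = (onset_index(sqz_lo) - onset_index(as_a)) / fs   # 45 / 2048 ≈ 0.022 s
```

On real channels the threshold would have to be set per channel from a quiet baseline, and using the highest-rate version of each channel (as above with the 2 kHz AS_A data) is what makes sub-16 Hz-sample timing comparisons possible.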
STATE of H1: Observing at 146Mpc
We've been locked for 9:20, no issues aside from another MY low temperature alarm.
Briefly dropped from Observing from 08:18:42 to 08:18:59 due to an unknown issue; whatever SDF diff or guardian node issue caused it resolved itself before I could open the MEDMs.
Using the commands 'guardctrl log -a "07/23/2023 08:17 UTC" -b "07/23/2023 08:20 UTC" IFO' and 'guardctrl log -a "07/23/2023 08:17 UTC" -b "07/23/2023 08:20 UTC" DIAG_SDF' I can see that it was syscssqz that had 2 SDF diffs. Trending as in 71652 showed it was H1:SQZ-FIBR_SERVO_COMGAIN and H1:SQZ-FIBR_SERVO_FASTGAIN that changed from 20 to 17 and 15, then back to 20, in ~0.3 seconds.
Oli also saw this 07/24 04:09UTC 71652, tagging SQZ, maybe we need to edit something or unmonitor these channels.
Earlier today I did some 2.6 Hz sine injection in DHARD_Y (71590) to understand the upconversion phenomenon seen previously (71505). This is maybe related to the non-stationary noise in the 20-50 Hz (71092) and the observed bicoherence (71005).
In summary: DARM noise between 20 and 80 Hz gets worse for positive excursions of DHARD_Y_IN1 from zero, but not for negative excursions.
It doesn't look like it is necessarily correlated with the 2.6 Hz peak (although that peak contains a lot of the RMS in normal conditions).
This might indicate that we have a static offset in DHARD_Y alignment.
The first attached spectrogram shows in the top panel a DARM spectrogram, whitened to the median of a quiet time, so that a value of 1 means normal noise levels. The bottom panel shows the DHARD_Y 2.6 Hz excitation amplitude, and the DHARD_Y error signal value. The most interesting feature is that noise in DARM is higher when the DHARD_Y value is positive and large, but not when it is large and negative. The second plot shows a zoom in time of the same spectrogram, which makes the correlation evident.
To better understand, I computed the DARM BLRMS in the 20 to 50 Hz region [using the spectrogram, averaged over 2-s-long windows with 1-s overlap], and compared it with the maximum value of DHARD_Y [in the corresponding 1-s window]. The third plot shows the correlation and the fourth plot is a time zoom showing the clear correlation with the positive peak value of DHARD_Y.
There is a clear correlation between the DARM BLRMS between 20 and 50 Hz and the maximum positive value of DHARD_Y_IN1, but not the maximum negative value. The last plot shows how the BLRMS is correlated with large positive excursions of DHARD_Y.
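A sketch of the BLRMS-vs-excursion computation described above, using synthetic data in which the noise is amplified only during positive excursions (the real analysis used DARM and DHARD_Y_IN1, and different window bookkeeping may apply):

```python
import numpy as np
from scipy import signal

fs = 256
n = fs * 600
rng = np.random.default_rng(1)

# Toy stand-ins: slow DHARD_Y-like motion, and DARM-like noise whose level
# grows only for positive excursions (the hypothesized asymmetric upconversion).
dhard = 3 * np.sin(2 * np.pi * 0.01 * np.arange(n) / fs)
darm = rng.standard_normal(n)
darm[dhard > 0] *= 3

# DARM spectrogram with 2 s windows and 1 s overlap, then the 20-50 Hz BLRMS.
f, t, sxx = signal.spectrogram(darm, fs=fs, nperseg=2 * fs, noverlap=fs)
band = (f >= 20) & (f <= 50)
blrms = np.sqrt(sxx[band].sum(axis=0) * (f[1] - f[0]))

# Maximum of the DHARD_Y-like signal in each corresponding 2 s window
# (t gives window centers, so the window starts 1 s earlier).
starts = ((t - 1) * fs).astype(int)
maxpos = np.array([dhard[s:s + 2 * fs].max() for s in starts])
```

Scatter-plotting blrms against maxpos (and against the windowed minimum) then shows the asymmetry: the BLRMS tracks large positive excursions but not negative ones.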
Possible causes for this behavior:
Some next useful tests:
As a side note, the RMS of DHARD_Y is dominated by peaks at 1.3 Hz and 2.6 Hz. The 2.6 Hz peak could be the high-frequency plant resonance, but the lowest plant resonance is at 1.05 Hz (71489). Previously I showed that there is some evidence that the test mass M0 damping could be responsible for most of the DHARD_Y motion below a few Hz (71466). So maybe another useful test would be to check the status of the test mass M0 damping, and do some noise injections to properly project the OSEM noise contribution to DHARD_Y.
On Saturday, while damping violin modes, I repeated this test, adding an offset to the DHARD_Y error point. The results are different than before: it looks like DARM is more noisy for negative excursions of the error signal from zero, instead of positive excursions as observed before. It appears that adding an offset that makes the error signal positive reduces the non-stationary noise in DARM.
It's not clear to me why this behavior is now different. Two ideas: 1) there are ASC error point offsets that change from lock to lock; 2) this test was performed while damping violin modes, with an IFO still warming up, while the previous test was with a thermalized IFO.
Daniel, Camilla
Following on from Ryan's 71420, we looked at the movement of ITMY (which only has ASC control) during DHARD_WFS and see that there are now larger spikes during locking than before the 29th; see the attached plot, where the H1:SUS-ITMY_L2_OSEMINF_{U,L}{L,R}_INMON spikes during locking are larger after the t-cursor (see the bottom purple plot for when the violins rang up).
Zooming in (attached plot), we can see that when the input to the DHARD_Y filter is turned on, the output of the DHARD_Y filter has a large transient lasting ~1 second, not seen in the DHARD_Y inmon. The only filter that is on during that time is FM6, and although it has a larger step response, there is a 5 s TRAMP, so Daniel doesn't see an issue with this.
Nothing obvious happened to DHARD on the 29th; the only alog around that date showing a change is 70928, and that change was reverted.
During relock on Tuesday we plan to increase this DHARD_Y ramp time (currently 5s) to see if we can reduce this transient that appears to have increased by a factor of 10 around June 29th. See attached plots of relock before and after the violins rang up.
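The intuition that a longer TRAMP shrinks the switch-on transient can be illustrated with a toy model: a unit input switched on with a linear ramp, driving a high-pass-like stage (made-up 0.1 Hz corner, not the real DHARD_Y filter). The peak transient scales roughly inversely with the ramp time:

```python
import numpy as np
from scipy import signal

fc = 0.1                                         # assumed low-frequency corner [Hz]
wc = 2 * np.pi * fc
hp = signal.TransferFunction([1, 0], [1, wc])    # high-pass-like stage

t = np.linspace(0, 60, 6001)

def peak_transient(tramp):
    """Peak output when a unit input is switched on with a linear ramp of tramp seconds."""
    u = np.clip(t / tramp, 0, 1)
    _, y, _ = signal.lsim(hp, u, t)
    return np.abs(y).max()

p5, p15 = peak_transient(5.0), peak_transient(15.0)   # p15 is ~3x smaller than p5
```

Under this model, tripling the ramp from 5 s to 15 s cuts the peak transient by roughly a factor of 3, consistent with the motivation for raising the TRAMP.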
There were some SDF Diffs (71670) for H1:ASC-DHARD_Y_TRAMP and H1:ASC-DHARD_P_TRAMP that I accepted, which bumped the ramp times up to 15 seconds.