Gabriele's Bruco scans with OM2 cold showed that we seem to have a slight increase in coherence with DHARD Yaw. At Sheila's suggestion, I did a quick tune-up of the ITMY A2L coefficients during commissioning this afternoon (we've been locked for a little over 2.5 hours). The new (only very slightly different) values are in lscparams. Yaw moved away from center (was -1.5, now -1.65, where center is 0 for yaw), while I think pitch moved closer to center (was -0.15, now 0, where center is 0.85 for pitch).
H1 has dropped observing for commissioning time in coordination with L1. Expected to last ~3 hours.
"BSC high freq noise is elevated for these sensor(s)!!!
ITMX_ST2_CPSINF_H1
ETMX_ST1_CPSINF_V2 "
RyanS noted this morning that the temperature change in the LVEA seems to have caused some alignment shifts to optics in the corner station (first screenshot, from RyanS), so he had to run an initial alignment this morning. I suspect that, since the temperature change primarily happened while we were locked, the ASC was able to follow along over the last day or so; however, when we lost lock (due to the earthquake) and the ASC histories got cleared (as they do at every lockloss), the residual alignment was too poor for lock acquisition. After initial alignment, things seem to be going smoothly.
Out of an abundance of caution, I ran the rubbing-check script, and all the optics in the corner station seem to be fine and clear of egregious rubbing (I didn't check the ETMs, TMSs, or FC2), all consistent with the last time the script was run by RyanC in alog 74522. I think this means there is no need for Facilities to change the setpoint of the LVEA. If the system can keep us where we are right now, then this seems to be a fine place to be.
I'll note that both RM1 and RM2 do have a little bit of fuzz in their spectra between 7-9 Hz. This fuzz was there in RyanC's checks on the 1st of Dec, but is not there in the reference time from the 20th of Nov. We don't have vertical control over the RMs, so we can't try moving them to see if that alleviates the fuzz. Overall we seem to be doing fine, but that fuzz zone may be something to keep an eye on. While the rubbing script looks at the L, P, and Y dofs, I also had a look at the individual OSEMs. The fuzz only shows up in Length in the dof view, which is consistent with my finding that it looks about the same in each individual OSEM (it cancels out in the P and Y dofs). The second screenshot shows one representative individual OSEM with the fuzz I'm looking at (green is a reference from 20th Nov 2023; blue is earlier today when the IFO was down, but the earthquake was mostly done).
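For context, here's a minimal sketch of the kind of reference-vs-now OSEM spectrum comparison involved; the channel name and GPS times are illustrative assumptions, not what the actual rubbing script does:

from gwpy.timeseries import TimeSeries

CHAN = 'H1:SUS-RM1_M1_OSEMINF_T1_OUT_DQ'  # assumed example OSEM channel
REF_START = 1384473618   # reference time, ~20 Nov 2023 (illustrative)
NOW_START = 1387216818   # "now", ~21 Dec 2023 (illustrative)
DUR = 600                # seconds of data for each spectrum

ref = TimeSeries.get(CHAN, REF_START, REF_START + DUR)
now = TimeSeries.get(CHAN, NOW_START, NOW_START + DUR)

# median-averaged ASDs to suppress glitches
ref_asd = ref.asd(fftlength=16, overlap=8, method='median')
now_asd = now.asd(fftlength=16, overlap=8, method='median')

# flag the 7-9 Hz band if it sits well above the reference
ratio = now_asd / ref_asd
band = (ratio.frequencies.value > 7) & (ratio.frequencies.value < 9)
if (ratio.value[band] > 2).any():
    print(f'{CHAN}: elevated 7-9 Hz noise vs reference (possible rubbing?)')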
Thu Dec 21 10:02:15 2023 Fill completed in 2min 13secs
I viewed the online MEDM trends at the time; it looked like a good fill. There is no DAQ data for this time, so no plot is attached.
Erik, Jonathan, Dave:
We upgraded cdslogin to deb12 at 09:45 PST this morning. This caused the EDC on h1susauxb123 to constantly restart. As a short-term solution, we removed all the channels served by cdslogin from the H1EDC.ini channel list (lock-loss-alert and remote-access channels) and restarted the DAQ. The EDC is running stably now.
The loss of EDC channels caused a secondary Guardian issue, which is now resolved.
The h1cdssdf system would not run, again because it connects to cdslogin lock-loss-alert channels. We have temporarily removed these channels from the monitor.req and safe.snap files. This also caused a secondary Guardian problem, which has now been resolved.
Current situation:
Alarms system is running
Lock loss alert system is offline.
EDC is missing all of its cdslogin channels
No slow channels from EDC in the DAQ between the times of 09:50:00 and 10:21:00
Erik is working on a deb11 container as a temporary solution to get the LLA code running again on cdslogin. Another possible solution is to move the code to another machine. We hope to get text and phone call alerting back online before tonight's operator owl shift.
Lock loss alerts are online again. I tested that Twilio texts and phone calls were working. I reset the settings to their 09:15 values by converting the safe.snap into a set of caput operations. At the next TOO we will add the LLA channels back into h1cdssdf to put them back under SDF control.
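For the record, a minimal sketch of that snap-to-caput conversion, assuming a BURT-style safe.snap (one "channel n_elements value" record per line outside the header); a real snap file may need more careful parsing:

import subprocess

in_header = False
with open('safe.snap') as f:
    for line in f:
        line = line.strip()
        if line.startswith('--- Start BURT header'):
            in_header = True
            continue
        if line.startswith('--- End BURT header'):
            in_header = False
            continue
        if in_header or not line:
            continue
        fields = line.split()
        if len(fields) < 3:
            continue  # skip anything that isn't a simple record
        channel, value = fields[0], fields[2]
        # caput from EPICS base; check=True raises if a write fails
        subprocess.run(['caput', channel, value], check=True)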
The text/phone alert system is now running in a Debian 11 container. The remote access IOC is also running in a Debian 11 container.
The crash of the alert system seen under Debian 12 (similar to the crash of the EDC) hasn't recurred under Debian 11, so we should try re-attaching the EDC to these systems.
TITLE: 12/21 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.39 μm/s
Secondary useism: 0.27 μm/s
QUICK SUMMARY:
A M6.3 earthquake from near the Aleutian Islands knocked H1 out of lock around an hour ago, and H1 has been properly waiting to try relocking until the seismic configuration leaves EQ mode. The alignment spot on the AS AIR camera looks quite bad, so I imagine I'll start with an initial alignment once the ground motion calms enough.
All other systems look good.
H1 back to observing at 19:20 UTC
TITLE: 12/21 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing at 155Mpc and have been Locked for 10 hours. We were briefly taken out of Observing earlier when the squeezer PZT hit its limit and unlocked SQZ, but it was able to relock itself within a few minutes. Pretty quiet night otherwise.
LOG:
00:00 Detector Observing and Locked for 2hours
00:33 Popped out of Observing and into Commissioning for SQZ
00:36 Back into Observing
07:14 SQZ unlocked and took us out of Observing due to SHG PZT hitting its limit
07:17 SQZ relocked itself and I put us back into Observing
Observing at 157Mpc and have been Locked for 6 hours now. Nothing to report.
We have made significant improvements to our low-frequency sensitivity since I last put a coarse Newtonian noise estimate on the same plot as DARM, back in 2015 (alog 22113), so this is an update to that plot. The highest-level conclusion is that Newtonian noise is still more than a factor of 10 below our current DARM sensitivity for both H1 and L1; in fact, it's about a factor of 25 below DARM at both sites.
In the attached plot I show the GDS-CALIB_STRAIN_CLEAN channel for both H1 and L1, along with an estimate of the Newtonian noise at each site (after Jim helped me find a missing 1/(2*pi) - thanks Jim!). Since this NN estimate comes entirely from an average of our ground seismometers (1 per building), we expect that it has not changed much now vs. back in 2015, and indeed it hasn't.
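For orientation, the coarse Saulson-style scaling that this sort of estimate uses looks something like the following (a hedged sketch, not necessarily the exact expression in the notebook):

h_NN(f) ~ beta * 4*pi*G*rho * xi(f) / (L * (2*pi*f)^2), with xi(f) = v(f)/(2*pi*f)

where beta is an order-unity geometric factor, rho is the ground density, xi(f) is the ground displacement ASD derived from the seismometer velocity ASD v(f), and L = 4 km is the arm length. The v(f) -> xi(f) conversion is exactly the sort of place a 1/(2*pi) can go missing.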
In the attached notebook (download it, replace the .txt extension with .ipynb, and run it), the final plot shows that Jan's estimate of NN from T1500284 indeed matches the estimate I get with more recent seismic data. As Jan noted back in 2015, a more proper estimate requires an array of sensors, but this coarse-grained estimate is fine for showing that NN is not responsible for our noise limitations at this time.
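As a rough illustration of what the notebook does (the channel name, calibration, times, and order-unity prefactor here are all assumptions on my part; the real notebook averages the ground seismometers, 1 per building, at each site):

import numpy as np
from gwpy.timeseries import TimeSeries

G, RHO, L, BETA = 6.674e-11, 1800.0, 3995.0, 1.0  # SI units; beta is O(1)

CHAN = 'H1:ISI-GND_STS_ITMY_Z_DQ'  # assumed ground STS channel, nm/s
START, DUR = 1387216818, 600        # illustrative quiet locked stretch
vel = TimeSeries.get(CHAN, START, START + DUR) * 1e-9  # nm/s -> m/s

v_asd = vel.asd(fftlength=64, overlap=32, method='median')
f = v_asd.frequencies.value[1:]          # drop the DC bin
xi = v_asd.value[1:] / (2 * np.pi * f)   # velocity ASD -> displacement ASD
h_nn = BETA * 4 * np.pi * G * RHO * xi / (L * (2 * np.pi * f) ** 2)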
Dave, Ansel, Camilla
Dave has written the script /opt/rtcds/userapps/release/cds/h1/scripts/hws_camera_control.py, which is running in a tmux session called "camera control" on the ITMX and ITMY HWS computers (h1hwsmsr/h1hwsmsr1). Now we will be able to see HWS data for lockloss, locking, and power-up without contaminating DARM with combs while observing; more details in 74915. FRS4559 updated.
It takes two arguments (optic_name and cam_id) and runs in an infinite loop: every minute it gets the camera status and the IFO lock state. It monitors whether H1 is locked and turns the cameras on and off accordingly (via an external triggering setting): off if ISC_LOCK > 580, on if ISC_LOCK < 580.
It logs its actions to the screen. Each cycle it writes the camera status to a file, /opt/rtcds/lho/h1/hws/{optic_name}, with the future plan to write this to a trendable channel. For now we can tell whether the camera is on or off by checking if the HWS data is updating, see attached.
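For reference, a minimal sketch of what such a loop could look like; I'm assuming pyepics and Guardian's numeric state channel H1:GRD-ISC_LOCK_STATE_N, and set_camera_trigger() is a hypothetical stand-in for however the script actually toggles the external trigger:

import sys
import time
import epics

def set_camera_trigger(cam_id, enabled):
    # Hypothetical stand-in for the real external-trigger control.
    pass

def main(optic_name, cam_id):
    status_file = f'/opt/rtcds/lho/h1/hws/{optic_name}'
    while True:
        state = epics.caget('H1:GRD-ISC_LOCK_STATE_N')
        want_on = state is not None and state < 580
        set_camera_trigger(cam_id, want_on)
        status = 'ON' if want_on else 'OFF'
        print(f'{time.ctime()} {optic_name}: camera {status} '
              f'(ISC_LOCK state {state})')
        with open(status_file, 'w') as f:
            f.write(status + '\n')
        time.sleep(60)

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])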
Looks like a success; no sign of the 7 Hz comb in DARM on Dec 20th. Pre/post plots attached to demonstrate the improvement.
Overnight, Dave's code successfully ran on ITMX, ITMY, and ETMX; plot attached. The couple of times there is a jump in spherical power are when the camera got re-requested to be off, which doesn't matter. The logs in the tmux session show the same.
e.g. ITMX on h1hwsmsr:
Naoki, Sheila, Camilla
After our SQZ data taking 74935, we adjusted H1:SQZ-ADF_OMC_TRANS_PHASE from 80 to 78 to have the SQZ_ANGLE_ADJUST servo keep the sqz angle slightly lower, sacrificing some high-frequency squeezing for more squeezing in the DARM bucket. We can already see this has improved the 135 Hz BLRMS; the yellow 350 Hz BLRMS is unchanged. Plot attached.
Note that the SQZ_ANGLE_ADJUST servo wasn't on in the two previous locks 74918, so that may slightly affect the SQZ angle.
Oli dropped us out of observing at 00:35 UTC so Naoki and I could revert this change. The SQZ angle had changed by 15 degrees (normally ~5 deg) to 185, and the SQZ BLRMS had gotten worse, see attached.
[Jenne, RyanS, JoeB remote at LLO]
We've started the 45 min stochastic magnetic injection set after having recently reacquired lock.
We turned on the amplifier in the LVEA by clicking the ON button on H1:CDS-PULIZZI_ACPWRCTRL_VERTEX0_OUTLET_1 (sitemap -> CDS -> CDS AC Power Control), then put the same GPS time (1387146396) as LLO into the script userapps/pem/h1/scripts/inject_mag_10to40.py, which uses waveform = 'CorrMagInj_timestream_2700sec.dat'.
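For the record, a sketch of how such a timed injection might be driven; this assumes the standard awgstream tool, and both the excitation channel name and the sample rate here are assumptions on my part, not necessarily what inject_mag_10to40.py does:

import subprocess

CHANNEL = 'H1:PEM-CS_GDS_0_EXC'  # assumed excitation channel name
RATE = 16384                      # assumed waveform sample rate [Hz]
WAVEFORM = 'CorrMagInj_timestream_2700sec.dat'
GPS_START = 1387146396            # same start time as LLO
SCALE = 1.0

# awgstream <channel> <rate> <file> <gps-start> [scale]
subprocess.run(
    ['awgstream', CHANNEL, str(RATE), WAVEFORM, str(GPS_START), str(SCALE)],
    check=True)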
The attached screenshot shows the vertex magnetometers and a (not necessarily well-calibrated) version of DARM, with references from before the injections and 'live' traces from during the injection.
This will run for about 45 mins, then we'll turn off the amplifier and go to Observing for the night.
This is finished. I turned off the amplifier, and reset the gain H1:PEM-CS_GDS_0_GAIN back to its nominal value of 1.0, then RyanS took us to Observing.
Injection was successful and is coherently recovered in the magnetometers as well as in h(t). A sketch of the coherence computation follows the channel and time lists below.
Figs 1 & 2: Hx magnetometer - Lx magnetometer: coherence/CSD, before, during, and after injection
Figs 3 & 4: H strain - L strain: coherence/CSD, before, during, and after injection
Channels used:
Hx mag = H1:PEM-CS_MAG_LVEA_VERTEX_X_DQ
Lx mag = L1:PEM-CS_MAG_LVEA_VERTEX_X_DQ
H strain = H1:GDS-CALIB_STRAIN
L strain = L1:GDS-CALIB_STRAIN
Times used:
Before: start: 1387126878 (Dec 20 - 17:01:00 UTC) - duration: 43min, 10 sec fft, 50% overlap (since no good period of coincident lock just before injection, I took the last part of previous lock)
Injection: start gps = 1387146456 (Dec 20 - 22:27:18 UTC) - duration: 43min, 10 sec fft, 50% overlap
After: start: (Dec 21 - 02:01:00 UTC) - duration: 43 min, 10 sec fft, 50% overlap (since no good period of coincident lock just after injection, I took the first part of next lock)
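A minimal sketch of the magnetometer coherence computation with gwpy (times follow the injection segment above; the plotting details are illustrative):

from gwpy.timeseries import TimeSeries

HX = 'H1:PEM-CS_MAG_LVEA_VERTEX_X_DQ'
LX = 'L1:PEM-CS_MAG_LVEA_VERTEX_X_DQ'
START, DUR = 1387146456, 43 * 60  # injection segment, Dec 20 22:27:18 UTC

hx = TimeSeries.get(HX, START, START + DUR)
lx = TimeSeries.get(LX, START, START + DUR)

# coherence with 10 s FFTs and 50% overlap, as in the figures
coh = hx.coherence(lx, fftlength=10, overlap=5)
plot = coh.plot()
plot.show()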
TJ, Ansel, Dave, Camilla
After finding that the 1 Hz and 5 Hz DARM combs were caused by the HWS camera sync frequency in 74617, we've turned off all HWS at 21:00 UTC so we can check that nothing else from the HWS is affecting DARM.
Stopped all HWS code (ETMY was already off 73915). Turned off all cameras and RCX CLinks. Turned off the ITM SLEDs; the ETMX laser will still be on, but it was tested in 69648 not to affect DARM.
As long as the computers aren't restarted, I think the H1:TCS-E{OPTIC}_HWS_ channels will keep reading their last values; if not, Dave might need to "green-up" the CDS overview.
I temporarily took out the DIAG_MAIN notifications for all HWS. We will have to remember to put them back in though when we are done turning them on and off.
Unfortunately the 7 Hz comb is still present in DARM (figure 1) and in the magnetometer channel where we were tracking it (figure 2).
Extra: just to check that this really is a 7 Hz comb that started after the camera sync frequencies were moved to 7 Hz on Dec 5 (74617), and not something pre-existing, I re-plotted the magnetometer data for the time period when both were set to 5 Hz. This time period avoids any obscuring effects of the 1 Hz comb in the magnetometer data. (The 1 Hz comb is much weaker than 7 Hz in DARM, so it shouldn't really be an issue, but in the magnetometer data both are clear and there's a little more ambiguity; anyway, it's easy to check.) Indeed, there is no sign of 7 Hz during that hour except at frequencies that are also 5 Hz multiples (figure 3).
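Just to illustrate the kind of check involved, here's a sketch that compares the ASD at 7 Hz multiples against the local background, skipping frequencies that are also 5 Hz multiples; the channel and times are illustrative assumptions:

import numpy as np
from gwpy.timeseries import TimeSeries

CHAN = 'H1:PEM-CS_MAG_LVEA_VERTEX_X_DQ'
START, DUR = 1385000000, 3600  # assumed hour with 5 Hz camera sync

data = TimeSeries.get(CHAN, START, START + DUR)
asd = data.asd(fftlength=100, overlap=50, method='median')  # 0.01 Hz bins

for n in range(1, 15):
    f0 = 7 * n
    if f0 % 5 == 0:
        continue  # also a 5 Hz multiple, so ambiguous
    freqs = asd.frequencies.value
    peak = asd.value[np.argmin(np.abs(freqs - f0))]
    background = np.median(asd.crop(f0 - 0.5, f0 + 0.5).value)
    print(f'{f0:3d} Hz: peak/background = {peak / background:.1f}')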
I put modified HWS() notifications back into DIAG_MAIN now that Dave has a running script that turns the HWS camera/code off within 60 seconds of ISC_LOCK reaching state 580 and on within 60 s of the IFO being DOWN 74951. DIAG_MAIN may give the warning 'HWS code stopped for ITMX / HWS code stopped for ITMY' for up to 60 s as soon as we lose lock; if this is annoying, we can make DIAG_MAIN cleverer (a sketch of such a check is below). svn link
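For illustration, a hypothetical sketch in the style of a DIAG_MAIN test, assuming the checks are generator functions that yield warning strings; the two helpers here are stand-ins, not the real implementation:

import time

GRACE = 60  # the camera-control script gets 60 s to react after lockloss

def last_lockloss_time():
    # Stand-in: return the time of the most recent lockloss,
    # e.g., from trending the ISC_LOCK state channel.
    return 0.0

def hws_code_running(optic):
    # Stand-in: e.g., check the status file or whether HWS data updates.
    return True

def HWS():
    if time.time() - last_lockloss_time() < GRACE:
        return  # too soon after lockloss; suppress the warning
    for optic in ('ITMX', 'ITMY'):
        if not hws_code_running(optic):
            yield f'HWS code stopped for {optic}'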