TITLE: 11/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.35 μm/s
QUICK SUMMARY:
IFO is LOCKING at PRMI_ASC
When I arrived, Guardian had recently finished an initial alignment and was at CHECK_IR. Microseism is much lower than it has been the last few days.
TITLE: 11/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
Started the shift by beginning locking for H1. Had a couple of locklosses (as posted earlier), but made it to NLN on the 3rd attempt. Had a lockloss a few minutes ago due to an M6.7 EQ from the South Pacific. Have been touching base with TJ (who will be on OWL); he wanted to receive OWL alerts, and if locking is rough overnight he'll switch to the PMC/FSS tests.
LOG:
After tonight's Initial Alignment, H1 has been fairly good at getting through the bulk of ISC_LOCK, but has had 2 consecutive locklosses a couple of states after MAX POWER:
Locking Notes:
0003-0043 INITIAL ALIGNMENT (w/ ALSy needing touch up by hand)
0044 LOCK#1
LOCK #2: DRMI looked its usual ugly self. Needed to run CHECK MICH FRINGES; the BS was definitely the culprit for the poor alignment and was fixed. PRMI & DRMI both locked immediately afterward.
Will continue locking for the next 2.5-3hrs, and then take H1 to IDLE and leave FSS & PMC -ON- for the night.
But if H1 makes it to NLN, will contact Louis or Joe B for a ~15min calibration check.
Vicky, Ryan, Sheila
Zooming in on this 2:05 UTC lockloss, MC2 trans dropped about 150ms after the IFO lost lock, so we think this was not due to the usual PSL/IMC issue, even though there was a glitch in the FSS right before the lockloss.
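For reference, a minimal sketch of this kind of timing check, assuming gwpy and NDS access from a control-room machine; the channel names and GPS time below are placeholders for illustration, not the exact ones used for this lockloss:

# Sketch: see which signal drops first around a lockloss (channel names/time assumed)
from gwpy.timeseries import TimeSeriesDict

gps = 1415582777                                  # hypothetical lockloss GPS time
channels = ['H1:IMC-MC2_TRANS_SUM_OUT_DQ',        # assumed MC2 trans channel name
            'H1:LSC-POP_A_LF_OUT_DQ']             # assumed IFO power channel name
data = TimeSeriesDict.get(channels, gps - 1, gps + 1)

for name, ts in data.items():
    ref = ts.crop(gps - 1, gps - 0.5).mean().value    # pre-lockloss level
    dropped = ts.times[ts.value < 0.5 * ref]          # times below half the reference
    if len(dropped):
        print(name, 'dropped at GPS %.3f' % dropped[0].value)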
Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
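As an illustration of the kind of DARM/microphone coherence check described above (a sketch only, not the exact analysis; the microphone channel name and times are assumptions):

# Sketch: broadband coherence between DARM and a microphone (names/times assumed)
from gwpy.timeseries import TimeSeries

start, end = 1415500000, 1415500600                   # hypothetical 10-minute stretch
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
mic = TimeSeries.get('H1:PEM-CS_MIC_LVEA_BS_DQ', start, end)   # assumed BS mic channel

# As noted above, broadband coherence with acoustic sources is usually low;
# tracking the peak's time-frequency path is often more telling than this number.
# (Resample one series first if the two sample rates differ.)
coh = darm.coherence(mic, fftlength=10, overlap=5)
print('max coherence %.2f at %s' % (coh.max().value, coh.frequencies[coh.argmax()]))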
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
TITLE: 11/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 15mph Gusts, 10mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.68 μm/s
QUICK SUMMARY:
H1 was down for troubleshooting all of the day shift, and at the end of TJ's shift he started an Initial Alignment, which is running now.
The goal for today is similar to last night's shift: will try locking from 4:30 to 9pm.
Operator NOTE from TJ: When restoring for locking, the order is PMC -> FSS -> ISS (as they are listed on the Ops Overview).
Environmental Notes: µseism is worse than last night (clearly higher than the 95th percentile and touching "1 count" on the FOM). It had been windy most of the day, but has become calmer in the last hour.
Initial Alignment Note (it just completed while I've been trying to write this alog for the last 45min!): ALSy wasn't great after INCREASE FLASHES; Elenna touched up ETMy by hand and this immediately helped, even with the high microseism!
TITLE: 11/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY: The entire shift was dedicated to troubleshooting the laser glitching issue. We also had high winds and high useism, so it was good timing. The wind has died down and the troubleshooting has ended for the day so we are starting initial alignment.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:41 | SAF | Laser | LVEA | YES | LVEA is laser HAZARD | 08:21 |
16:22 | ISC | Sheila | LVEA | yes | Unplugging cable for IMC feedback | 16:42 |
16:47 | TCS | Camilla | LVEA | yes | Power cycle TCSY chassis | 17:04 |
17:44 | EE | Fernando | MSR | n | Continuing work on the ISC backup computer | 22:38 |
18:05 | PSL | Patrick, Ryan, Fil | LVEA | yes | Power cycle PSL Beckhoff computer | 18:26 |
18:39 | PSL/CDS | Sheila, Fil, Marc, Richard | LVEA | yes | 35MHz swap or wiring | 19:09 |
19:18 | PSL | Ryan | LVEA | yes | Reset noise eater | 19:49 |
19:58 | TCS | TJ, Camilla | LVEA | YES | Checking TCSY cables | 20:58 |
19:58 | PSL | Vicky | LVEA | YES | Setting up PSL scope & poking cables | 20:58 |
22:43 | PSL | Jason, Sheila | LVEA | yes | PMC meas. | 00:20 |
22:44 | PSL | Ryan | LVEA | yes | PMC meas | 23:14 |
22:44 | PSL | Vicky | LVEA | yes | Setting up sr785 | 00:21 |
Vicky and Jason measured the PMC OLG, and I grabbed the data from the SR785 for them. The plots are attached. The second measurement is a zoomed-in version of the first.
Looks like the feature above 5 kHz is around the same frequency as the peak we are seeing in the intensity and frequency noise (alogs 80603, 81230).
These are the steps I took to get the data:
> cd /ligo/gitcommon/psl_measurements
> conda activate psl
> python code/SRmeasure.py -i 10.22.10.30 -a 10 --getdata -f data/name
This will save your data in the data folder as "name_[datetime string].txt"
To confirm connection before running, try
> ping 10.22.10.30
You should get something like:
PING 10.22.10.30 (10.22.10.30) 56(84) bytes of data.
64 bytes from 10.22.10.30: icmp_seq=1 ttl=64 time=1.26 ms
64 bytes from 10.22.10.30: icmp_seq=1 ttl=64 time=1.54 ms (DUP!)
64 bytes from 10.22.10.30: icmp_seq=2 ttl=64 time=0.748 ms
64 bytes from 10.22.10.30: icmp_seq=2 ttl=64 time=1.03 ms (DUP!)
64 bytes from 10.22.10.30: icmp_seq=3 ttl=64 time=0.730 ms
64 bytes from 10.22.10.30: icmp_seq=3 ttl=64 time=1.02 ms (DUP!)
^C
--- 10.22.10.30 ping statistics ---
3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.730/1.054/1.538/0.282 ms
Press Ctrl-C to exit. If you don't get responses like these, the SR785 is probably not plugged in properly.
To plot the data (assuming that you are measuring transfer functions), use
> python code/quick_tf_plot.py data/name_[datetime str].txt
Craig has lots of great options in this code to make nice labels, save the plot in certain places, etc., if you want to get fancy. He also has other scripts that will plot spectra, multiple spectra, or multiple TFs.
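For a quick look without the site scripts, a minimal sketch along these lines also works, assuming the saved file is whitespace-separated columns of frequency, magnitude, and phase (check the header of your particular file; this column layout is an assumption):

# Sketch: bare-bones TF plot from an SR785 text dump (column layout assumed)
import numpy as np
import matplotlib.pyplot as plt

freq, mag, phase = np.loadtxt('data/name_DATETIME.txt', unpack=True, comments='#')

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(freq, mag)
ax_mag.set_ylabel('Magnitude [dB]')
ax_ph.semilogx(freq, phase)
ax_ph.set_ylabel('Phase [deg]')
ax_ph.set_xlabel('Frequency [Hz]')
plt.show()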
If you want to measure from the control room, there are yaml templates that run different types of measurements, such as CARM or IMC OLGs.
We restarted the calibration pipeline with a new configuration ini such that it no longer subtracts the 60 Hz line (and its harmonics up to 300 Hz). The configuration change is recorded in this commit: https://git.ligo.org/Calibration/ifo/H1/-/commit/53f2e892a38cfb18815912c33b1f1b8385cfff62
I restarted the pipeline at around 9:35 am PST, around the same time this was done at LLO (LLO:74051). The IFO was down at the time, so I left a request with the H1 operators to contact both me and Joe B. when H1 is back at NLN but before going to Observing mode, so that we (whoever is first) can confirm that the GDS restart is behaving as expected. Initial checks at LLO indicate that things are working properly, which is promising.
Joe B., Louis D., Corey G.
Corey called as soon as H1 reached NLN. The gstlal-calibration pipeline restart with 60 Hz subtraction turned off looks like it's behaving as expected, so we gave Corey the green light from Cal to go into Observing. The 60 Hz line and its harmonics up to 300 Hz look good (i.e. NOLINES looks identical to STRAIN, since subtraction for those lines was turned off).
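For illustration, one way to spot-check the line subtraction from a Python session (a sketch only; the NOLINES channel name and the times are assumptions, not necessarily what Cal actually ran):

# Sketch: compare STRAIN and NOLINES ASDs at the 60 Hz harmonics (names/times assumed)
import numpy as np
from gwpy.timeseries import TimeSeriesDict

start, end = 1415720000, 1415720600        # hypothetical 10 minutes at NLN
data = TimeSeriesDict.get(
    ['H1:GDS-CALIB_STRAIN', 'H1:GDS-CALIB_STRAIN_NOLINES'], start, end)

asds = {name: ts.asd(fftlength=16, overlap=8) for name, ts in data.items()}

# With 60 Hz subtraction off, the two ASDs should agree at 60/120/180/240/300 Hz
for f0 in (60, 120, 180, 240, 300):
    vals = {name: asd.value[np.argmin(abs(asd.frequencies.value - f0))]
            for name, asd in asds.items()}
    print(f0, vals)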
Ryan, Jason, Patrick, Filiberto
As part of troubleshooting the PSL, we hardware power cycled the PSL Beckhoff computer in the diode room this morning, along with all of the associated diode power supplies and a chassis in the LVEA. I had guessed that everything would autostart, but I was wrong, so I took the opportunity to set it up to do so. This required putting a shortcut to the EPICS IOC startup script in the C:\TwinCAT\3.1\Target\StartUp directory (see attached screenshots) and selecting an option in the TwinCAT Visual Studio project to autostart the TwinCAT runtime.
We software restarted the computer again to test this. After logging in, the Beckhoff runtime and PLC code started, along with the EPICS IOC, but the visualization did not. I found documentation pointing to the location of the executable that starts the visualization and added a shortcut to that to the startup directory as well. We didn't have time to restart the computer again to see if that would autostart correctly.
For some reason there seemed to be issues with processes reconnecting to the EPICS IOC channels. I tested running caget on the Beckhoff computer itself and got a message about connecting to two different instances of the channel, plus a couple of pop-up windows related (I think) to allowing network access, which I allowed. caget worked, although it gave a blank space for the value, so I tried it again with an invalid channel name, which it correctly gave an error for. On the Linux workstation we were using, the MEDM screens were not reconnecting, even after closing and reopening them, but again caget worked. We had to restart the entire medm process for it to reconnect. The EDCU and SDF also had issues reconnecting and had to be restarted too.
As Patrick mentioned, channel access clients which had been connected to the IOC on h1pslctrl0 would not reconnect after its restart.
The EDC stayed in its disconnected state for almost an hour, even though cagets on h1susauxb123 itself were connecting, albeit with "duplicate list entry" warnings:
(diskless)controls@h1susauxb123:~$ caget H1:SYS-ETHERCAT_PSL_INFO_TPY_TIME_HOUR
Warning: Duplicate EPICS CA Address list entry "10.101.0.255:5064" discarded
H1:SYS-ETHERCAT_PSL_INFO_TPY_TIME_HOUR 18
The restart of the DAQ EDC did not go smoothly: I had added a missing channel to H1EPICS_CDSRFM.ini (WP12195) in preparation for next Tuesday's maintenance, so the EDC came back with a different channel list from the rest of the DAQ. I reverted this file change and a second EDC restart was successful.
11:38:35 h1susauxb123 h1edc[DAQ]
11:46:17 h1susauxb123 h1edc[DAQ]
The slow controls h1pslopcsdf system was also unable to reconnect to the 4 PSL WD channels it monitors. It was restarted at 12:08 PST on 14 Nov 2024.
Erik found that MEDM on some workstations continued to show white screens for h1pslctrl0 channels, and a full restart of MEDM was needed to resolve this.
TJ, Vicky, Camilla
Vicky set up an oscilloscope on a cable similar to the PMC mixer, and we watched the second trend on Sheila's PSL ndscope. The largest cause of repeatable glitches from touching cables was the ISS AOM 80MHz cable, which we found loose and tightened; it sounds like the PSL team found the PSL side of this cable loose and tightened it on Tuesday too. Times are noted below in case we want to go back and look at the raw data.
Thu Nov 14 10:15:06 2024 INFO: Fill completed in 15min 3secs
Jordan confirmed a good fill curbside.
New Vacuum section on CDS Overview shows CP1 LLCV percentage open as a red bar. Bar limits are between 40% and 100% open so normally you won't see any red in this widget.
Sheila, Vicky, Elenna, TJ, Marc, Filiberto, Richard, Daniel, Jason, Ryan Short
The IMC did not stay locked overnight; after Corey left, it was locked at 2W with the ISS on (screenshot from TJ). Vicky noticed that the SQZ 35MHz LO monitor sometimes sees something going on before the IMC loses lock (screenshot from Elenna). A few days ago Nutsinee flagged this squeezer 35MHz LO monitor channel: it shows increased noise when the FSS is unlocked, which doesn't make sense, but it seems this is at least partially due to cross talk (when we intentionally unlock the FSS, there is extra noise in this channel).
A bit before 8:29 I unplugged the cable from the IMC servo to the PSL VCO, and we left the FSS and PMC locked with the ISS off. At 8:31 Pacific time the FSS came unlocked, and glitches were visible in the PMC mixer and HV (screenshot). The new channel plugged in on Tuesday, H1:PSL-PWR_HPL_DC_OUT_DQ, might show some extra noise, more visible in the zoomed-in screenshot. There are some small glitches seen in the SQZ 35MHz LO monitor at the time of the reference cavity glitches; the squeezer 35MHz LO is shared by the SQZ and PSL PMCs.
At 8:44 we unlocked the FSS and sat with only the PMC locked; a few seconds later we had a few glitches in the mixer and HV (PMC-alone screenshot).
The PSL was powered down to restart the Beckhoff from a few minutes before 11 until 11:30 or so.
A few minutes before 11 Pacific time, Marc, Filiberto, and Richard went to the CER, measured the power out of the 35MHz source (11.288 dBm), and adjusted the Marconi setting so that the power measured on the RF meter was 11.313 dBm. There is a 10MHz signal, locked to GPS through the timing system, plugged into the back of the Marconi; Daniel says he thinks the Marconi is locked to that source if it is plugged in.
At 19:33 UTC (11:33 Pacific) the PMC was relocked after the Beckhoff reboot with the 35MHz source changed.
TITLE: 11/14 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 33mph Gusts, 23mph 3min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.66 μm/s
QUICK SUMMARY: The IMC dropped lock many times overnight. I imagine this will shape our plans for the day once we have time to discuss next steps.
TITLE: 11/14 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Had a nice 5hr lock for H1. Running an IMC-only lock overnight to see if there are any FSS/IMC glitches.
LOG:
The plan for tonight's shift was to see how locking would go for H1. Well... TJ handed over a locked H1, and then it stayed locked for 5hrs of the 6hr Eve shift!
Vicky mentioned that the lockloss at 0459utc looked like a possible FSS glitch, but it looked "different". She is going to post her plot about that.
This lockloss (1415595557, 2024-11-13 20:58:59 PT) looks related to the IMC (/FSS/ISS/PSL)... 10-15 seconds before the LL, the FSS starts to see some glitches, then more glitches starting ~5 sec before the LL. ~30 second trends here.
Then zooming in to ~100ms before the LL (plot), maybe a small glitch is seen on the ISS second loop PDs, and then the IMC loses lock. At the same time the IMC loses lock, the lockloss pulse appears and the AS port and LSC_REFL see the lockloss. So it seems like an IMC lockloss with some FSS glitching beforehand.
H1 was dropped out of OBSERVING due to the TCS ITMy CO2 laser unlocking at 0118utc. The TCS_ITMY_CO2 guardian relocked it within about 2 minutes.
It was hard to see the reason at first (there were no SDF diffs), but eventually I saw a User Message via GRD IFO (on the Ops Overview) pointing to something wrong with TCS_ITMY_CO2. Oli was also here and mentioned seeing this, along with Camilla, on Oct 9th (alog); that was the known issue of the TCSy laser nearing the end of its life. The laser was replaced a few weeks later on Oct 22nd (alog).
Here are some of the lines from the LOG:
2024-11-13_19:43:31.583249Z TCS_ITMY_CO2 executing state: LASER_UP (10)
2024-11-14_01:18:56.880404Z TCS_ITMY_CO2 [LASER_UP.run] laser unlocked. jumping to find new locking point
.
.
2024-11-14_01:20:12.130794Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] ezca: H1:TCS-ITMY_CO2_PZT_SET_POINT_OFFSET => 35.109375
2024-11-14_01:20:12.196990Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] ezca: H1:TCS-ITMY_CO2_PZT_SET_POINT_OFFSET => 35.0
2024-11-14_01:20:12.297890Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] timer['wait'] done
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 EDGE: RESET_PZT_VOLTAGE->ENGAGE_CHILLER_SERVO
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 calculating path: ENGAGE_CHILLER_SERVO->LASER_UP
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 new target: LASER_UP
CO2Y has only unlocked/relocked once since we power cycled the chassis on Thursday 14th (t-cursor in attached plot).
0142: ~30 min later, had another OBSERVING drop due to a CO2Y laser unlock.
While it is normal for the CO2 lasers to unlock from time to time, whether it's from running out of range of their PZT or just generically losing lock, this is happening more frequently than normal. The PZT doesn't seem to be running out of range, but it does seem to be running away for some reason. Looking back, it's unlocking itself ~2 times a day, but we haven't noticed since we haven't had a locked IFO for long enough lately.
We aren't really sure why this would be the case; the chiller and laser signals all look as they usually do. Just to try the classic "turn it off and on again", Camilla went out to the LVEA and power cycled the control chassis. We'll keep an eye on it today; if it happens again and we have time to look further into it, we'll see what else we can do.
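One way to quantify the "~2 times a day" estimate is to trend the PZT set-point channel that shows up in the guardian log above (a sketch, assuming gwpy/NDS access; the time window is arbitrary):

# Sketch: trend the CO2Y PZT set point over several days (channel name from the log above)
from gwpy.timeseries import TimeSeries

# For a multi-day stretch, the minute-trend version ('.mean,m-trend') would be lighter
pzt = TimeSeries.get('H1:TCS-ITMY_CO2_PZT_SET_POINT_OFFSET', 'Nov 11 2024', 'Nov 15 2024')

plot = pzt.plot()
plot.gca().set_ylabel('CO2Y PZT set point offset')
plot.savefig('co2y_pzt_trend.png')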
Locklosses overnight:
1415716992: not a PSL/IMC problem; the IMC loses lock 260ms after the IFO, from observing. Not sure what it was.
1415703738: also not a PSL/IMC problem; the IMC loses lock 240ms after the IFO, from observing. Not sure what caused it.
1415697803: not a PSL/IMC problem; there was a small earthquake while we were in the ESD transitions. This lockloss is listed as being from state 558 (the one where Elenna increased a ramp time to avoid locklosses, 81260), but the lockloss actually happened in the state before, when DARM was still controlled by the ITMX ESD. That is just a less robust state for large ground motion.
Ibrahim looked into the long time between the earthquake and relocking: about an hour of that was waiting in READY for the ground motion to come down (which the guardian now does independently), then Ibrahim saw that there were 20-something PRMI locklosses in a row, all about 2 seconds after PRMI locked. (We will look into this more.)
We also looked back at Corey's shift, and think that the lockloss during the ETM transitions at 2:05 was not a PSL/IMC glitch (noted as LOCK #2 in Corey's alog). So we think we have had 17 hours or so without a lockloss due to the PSL. We will wait and see how today and the weekend go.
As a reminder, we saw FSS glitches yesterday morning, then did several things before this 17 hour stretch without glitch locklosses.