Derek, Iara, Zach
Following up on Vicky's request to determine whether the repeated turning on and off of the ESD drive in response to PI ringups is causing noise. We examined 28 June 2023 at 07:25 UTC and 19 July 2023 at 16:19 UTC, when the ESD drive was repeatedly toggled. We referenced strain and Omicron segments for the same times and found no significant increase in glitches or in average strain, so we find no additional noise from engaging the ESD drive. We also see no significant difference between the periods when the ESD drive is on and when it is off. Attached are images of the PI ringups and resulting ESD damping, and the corresponding times in strain and Omicron.
Andy, Marissa, Gaby, Brennan, Tabata, Shania
We investigated glitches in Calib strain on June 27 2023 (see red box in the glitch-gram and the attached omegascan) that are related to the OM2 TSAMS, which is used to improve mode matching and increase BNS range (see aLog 70886). The glitches appear around 300 Hz while the temperature is changing (see attached TSAMS plots) and are also witnessed by OMC-ASC_QPD_{A/B}_YAW (see attached omegascan). Once the temperature is stable, the glitches are no longer present. Similar glitches also occurred at Livingston (March 15 2023) when its TSAMS was used (see aLog 63983 for the TSAMS plot and the attached L1 glitch-gram).
Now that we're back to our lower noise situation with OM2 hot, we're redoing the test that Oli ran in 71304 to look at how much different our noise is with and without the calibration lines on. We expect that this will primarily show that the noise right around the line frequencies is reduced, as suggested by Gabriele in 71614. I don't think we expect any major changes other than right around the lines, but if we do, that would be very interesting.
Since the low frequency calibration lines are back on this week (alog 71706), it took more 'doing' than normal to turn off the calibration lines. ISC_LOCK's NLN_CAL_MEAS does not currently turn off the CAL_AWG_LINES guardian-controlled lines, so I selected LINES_OFF in that guardian. However, that didn't actually stop all of the lines, so I also did an awg clear 8 * to stop the lines going to the DARM1_EXC. TJ is looking into ensuring that NLN_CAL_MEAS takes care of the awg lines.
I'm a little surprised, but I'm not seeing much of a difference in the spectra when the lines are on vs. off. All 4 panels are the same 4 traces (noted above in the bullet points), but zoomed differently. These traces are all of the GDS-CALIB_STRAIN_NOLINES, so for times that the calibration lines are on, they've been subtracted out of the data here (note that the 'new' CAL_AWG_LINES are not yet subtracted, so those are still present in this channel). I expected this channel to show some noise around the calibration lines for times when the Cal lines were on. But, I'm not really seeing anything.
I'm not sure if the PCALY_DARM lines are coming back on at the same height each time or not. It's possible that they are, and it's just that the attached ndscope can only look at 16 Hz channels, and so the beating between lines / aliasing is causing it to look like they are not. But, just to flag that we should check to ensure that the lines are coming on with awg at the amplitude requested.
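To illustrate the aliasing point: a line above the 8 Hz Nyquist of a 16 Hz channel folds down to a low apparent frequency, so two lines with slightly different true frequencies show up as slow beating in the trace. A quick sketch (the line frequencies here are made up, not the actual PCAL line frequencies):

```python
def apparent_frequency(f_line, fs=16.0):
    """Frequency at which a line of true frequency f_line appears
    after folding into the Nyquist band of a channel sampled at fs."""
    f_folded = f_line % fs
    return min(f_folded, fs - f_folded)

# Two hypothetical lines 0.05 Hz apart, both above the 8 Hz Nyquist
# of a 16 Hz channel: each folds to a low apparent frequency, and
# their small difference appears as slow beating on screen.
print(apparent_frequency(33.40), apparent_frequency(33.45))
```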
Lock loss 1374437133
Caused by commissioning activities.
Wed Jul 26 10:07:00 2023 INFO: Fill completed in 6min 56secs
Travis confirmed a good fill curbside
Yesterday we added h1digivideo3 to the CDS hosts stats system, and I noticed that the MEDM was missing some of the recently added network ports, specifically the totals and the loopbacks.
I have updated my script which generates the H1CDS_HOST_STATS MEDM window, the new version is shown in the attachment.
This MEDM can be accessed either from the CDS Overview, as the "COMPUTERS" button lower-right, or from the SITEMAP from CDS->"CDS Machine Stats"
Shivaraj sent Rick and I a message about some noise found on H1:PCALX_TX_PD and H1:PCALX_RX_PD.
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230703/cal/pcal_x/
This particular noise took place on 3 July 2023, when LHO was empty of people except an operator. Clear examples of this type of noise on these channels can also be found on the 4th, 5th, and 17th of July.
Checking the same channels on EY:
There is some form of noise on H1:PCALY_TX_PD on the 11th, 12th, and 13th of this month that looks similar, though it occurs less often and less intensely than the noise found on PCALX, and it does not always appear on both the PCALY TX and RX PDs at the same time, as it does at EX.
I have reached out to Shivaraj to learn more about this and to see whether it's a problem for DARM; it doesn't seem to be, according to what he saw in Bruco.
This noise could point to a problem with our PCAL lasers, since it's in both the TX and RX PDs at EX. But it could also be the AOM, or the OFS being saturated or otherwise interacting with changes in temperature or humidity.
This could also be a DAQ issue, such as a chassis or board, because it shows up on both channels at the same time at EX. Shivaraj mentioned that there might be "cross talk between different channels in a board, and if the glitches are in the light and seen by both PD's they would also show up in other channels, which we could likely use to our advantage."
I took this issue to the Noise Sprint on Wednesday, and Adrian Helmling-Cornell, Jane Glanzer, and Vishal Yalla took up the project.
Dave Barker and Erik also looked into this, and by Wednesday lunchtime there was some sharing of information.
The Noise Sprint group started a google doc where we put all the information that we were gathering:
https://docs.google.com/document/d/127y-9zX6So-zWHxpziH0cU9SAjMjV1lUiJrwKRHdB4A/edit
That may not be a clickable link so here is the content:
alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=71725
PCAL X Noise found by Shivaraj
PCAL Background:
-> PCAL = Photon Calibrator
Calibrates the interferometer by applying a known photon radiation-pressure force to the test masses at the end stations
PCAL chassis layout: https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?.submit=Identifier&docid=S1400489&version=5
Potential Causes:
Tasks:
Channel Names:
(Might have correlation between calibration channels; however chassis channels are not resolving without calibration channels)
Calibration channels:
Link to GstLAL: if you find a time when the kappas have some weird signals, then check out the GstLAL data for those times as well.
Chassis documentation: https://dcc.ligo.org/LIGO-D1400153
Other instances:
June 9
June 10 - few lines
June 11 - few lines
June 19
June 24
July 3
July 4
July 5
July 11
July 17
July 18
July 19
July 20
July 21 - few lines
July 25 - few lines
I have narrowed one instance down to between two GPS times:
Between 1372456038 and 1372456158
Calibration channels:
July 4th, 2023: 16:00:00 - 18:00:00 UTC (1372521618 GPS)
H1:CAL-CS_TDEP_KAPPA_PUM_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.
H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.
H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT, raw,start: 2023-07-04 16:00:00 (1372521618) len: 2:00:00.
July 3rd, 2023: 14:00:00 - 16:00:00 UTC (1372428018 GPS)
H1:CAL-CS_TDEP_KAPPA_PUM_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.
H1:CAL-CS_TDEP_KAPPA_UIM_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.
H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT, raw,start: 2023-07-03 14:00:00 (1372428018) len: 2:00:00.
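As a sanity check on the UTC/GPS times quoted above, here is a minimal converter (assuming the 2023 GPS-UTC offset of 18 leap seconds; use a proper library such as gwpy/lal for anything earlier):

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS-UTC offset, valid since 2017

def utc_to_gps(utc):
    """Convert an aware UTC datetime to integer GPS seconds."""
    return int((utc - GPS_EPOCH).total_seconds()) + LEAP_SECONDS

print(utc_to_gps(datetime(2023, 7, 4, 16, 0, tzinfo=timezone.utc)))  # 1372521618
print(utc_to_gps(datetime(2023, 7, 3, 14, 0, tzinfo=timezone.utc)))  # 1372428018
```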
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20230703/cal/pcal_x/
More searching is needed to ensure that this is resolved.
TITLE: 07/26 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 8mph Gusts, 6mph 5min avg
Primary useism: 0.23 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY: Austin handed me an IFO that was just about up to max power when we had another aftershock roll through and break the lock. Starting again, still fully auto relocking.
TITLE: 07/26 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 144Mpc
SHIFT SUMMARY:
- 9:16 - intention bit went to COMMISSIONING, CDS_CA_COPY/CAMERA_SERVO froze, but I was able to move it back to OBSERVING @ 9:19 - Tagging ISC
- Lockloss @ 12:51
- Leaving H1 to TJ with the IFO currently relocking at POWER_10W
LOG:
No log for this shift.
H1 has been observing for 10 hours. All subsystems appear to be stable.
Plots of this week's (07/26/2023) SUS charge measurement attached.
You can see from the in-lock charge plots that V_eff changed the direction of its trend at certain times, roughly listed below. In the attached plot, the bias voltage applied to the DC ESD changed at certain times (68123, 67698). I wanted to see whether these times agreed (shown in G1600699). More investigation is needed, as the current plots only go back as far as March, but this could explain the trend in ITMY V_eff. Not shown in the plot: the EX bias was off from 10 March to 05 April 2023 (68446).
| Optic | V_eff changed direction (plots above, estimated dates) | Bias voltage applied to the DC ESD changed (plot attached) |
| ITMX  | Feb/March 2023     | No change       |
| ITMY  | April 2023         | 23rd April 2023 |
| ETMX  | No change          | 28th Feb 2023   |
| ETMY  | end of April 2023  | 28th Feb 2023   |
TITLE: 07/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 147Mpc
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 1:14 UTC. Nothing of note to report.
See the lockloss alog for details on the 23:54 UTC lockloss (nothing remarkable). See the midshift alog for details on lock acquisition (nearly fully automatic).
Weekly SUS Charge (see alog 71721)
LOG:
TITLE: 07/26 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 144Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 12mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
- H1 locked and in observing for just over 6 hours
- SEI/CDS/DMs ok
IFO is in OBSERVING as of 01:14 UTC
We lost lock at 23:54 UTC (alog) with no apparent cause
Lock Acquisition:
1. IR was not found on ALS_DIFF, so I moved the DIFF offset slider from 397 to 417 and IR was caught immediately. No further touching was needed all the way to NLN, and there was no wait between OMC_WHITENING and NLN.
2. There were ASC SDF diffs (screenshot below), but they showed a difference of 0 (though they initially showed up as red). I think hitting the "observe" intention too early on my part stopped guardian from clearing these, because as soon as I toggled it back, all guardian nodes were ready for observing.
The SDF diff screen in the screenshot isn't showing the real diffs: it is set to "FULL TABLE" and "SORT ON SUBSTRING: DHARD_Y", which is how I left it yesterday. To see the actual diffs, change the dropdown boxes to "SETTING DIFFS" and "SHOW ALL". I expect the diffs were from waiting for the ADS cameras to converge.
Lockloss from NLN (and Observing) at 23:54 UTC
No known reason for the lockloss. Re-locking now.
Jenne, Camilla
In 71559, Ryan C and Daniel found a DHARD_Y transient during the DHARD_WFS state that has been larger since June 29th, coinciding with the time the violins started to ring up (71404).
Yesterday TJ and Oli increased the DHARD_P and _Y TRAMPs from 5 s to 15 s in this state (71672), and in the two relocks since then we've only had to stay in OMC_WHITENING damping violins for <15 minutes (71676). The transient is much smaller after this TRAMP increase, but it is still there and a factor of 2 larger than pre-June 29th.
The step response of the only filter on at the time (FM6 "newcntrl") is very large when it's turned on, but over in 1 s (attached). Currently FM6 is turned on, and then 1 second later the input and gain are turned on (plot attached). Since the step response starts when the input is turned on, Jenne suggests we turn the input on together with FM6 and then, after 1 second, ramp the gain up. I've added this to ISC_LOCK and loaded. Hopefully this will remove the transient, and then we can reduce the TRAMP back to 5 s.
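A sketch of the sequencing change, using a made-up stand-in for guardian's ezca just to show the ordering (the real change lives in ISC_LOCK, and a guardian state would use a timer rather than a sleep):

```python
import time

class FakeEzca:
    """Minimal stand-in for guardian's ezca, recording actions in order."""
    def __init__(self):
        self.log = []
    def switch(self, filt, *args):
        self.log.append(('switch', filt) + args)
    def __setitem__(self, chan, value):
        self.log.append(('write', chan, value))

ezca = FakeEzca()

# Old order: FM6 on, then (1 s later) input and gain together.
# New order: input on together with FM6, then ramp the gain up
# 1 second later, after the FM6 step response has settled.
ezca.switch('ASC-DHARD_Y', 'FM6', 'INPUT', 'ON')
time.sleep(1)  # placeholder for a guardian timer state
ezca['ASC-DHARD_Y_GAIN'] = 1.0
```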
Daniel pointed out that FM4, 7, and 9 are turned off when FM6 is turned on. These filters all have 5-second ramp times, so we are probably getting the transient from these filters still ramping down as the gain is ramped up. I've accepted these as OFF in the safe.snap SDF (see attached), so they should be reverted on an SDF revert.
We saw a similar transient in DHARD_P but haven't looked at it yet so we should repeat this process with other filters.
Most of the individual filters in this module have a 2-5 s ramp time. For whatever reason, FM4/5/7/9 are all on just before DHARD_Y gets engaged. They then get turned off while FM6 is turned on, a short time before the input to the module is turned on. However, there is not enough time for these filters to ramp off, so they are still on during the initial gain ramping.
This transient is no longer visible in the optics (see attached). I'll put the TRAMP back to 5 s, but ISC_LOCK will need to be reloaded when we are out of observe. Opened and closed FRS 28647.
Brina, Genevieve, and Lance are going to look for transients from other filters and at other times; Elenna suggested checking DHARD_P.
Summary:
This is a continuation of the single bounce beam analysis. In the past we've done OM2 hot/cold measurements for ITMX (alogs 70502 and 71100); this time we did something different: OM2 cold, with ITM CO2 off/on, using the ITMY beam.
When the ITM CO2 was off, the OMC scan looked like the first attachment (for Jennie: about 16:46:35 - 16:47:58 UTC). The 20 peak is ~1.0 while the 00 peak is ~16 (off the scale in the plot).
With CO2 heating of 1 W (started ~16:51:13), the 20 peak started decreasing, but much more slowly than we expected.
At around 17:25:00 UTC we had to stop due to other maintenance tasks. The last usable scan before this (for Jennie: 17:23:08-17:24:38 UTC) is shown in the second attachment. The 20 peak was still slowly decreasing, but at that moment it was down to 0.6.
Given this slow time constant, Daniel points out that maybe we should have waited longer after the IFO unlocked before starting the single bounce scan (both for today and for the past measurements). FYI IFO was unlocked at about 15:07 UTC.
I'll do my mode matching simulation as soon as Jennie gets the 20/(20+00) numbers.
What was done:
10W into IMC, ITMY single bounce. ASC-AS_A and AS_B DC centering (DC3 and DC4) were on. RF sidebands were turned off.
Manually locked OMC (OMC guardian auto, asked for prep-omc-scan, then go manual, scan the OMC-PZT2_EXC to find 00 peak, stop scan and adjust the PZT2_OFFSET so we're on the 00 resonance, ask for OMC_LSC_ON, then OMC_Locked and go AUTO, that's what I kind of remember).
Manually refined the alignment using OM3 and OMCS. Disabled the OMC LSC, OMC guardian DOWN, and started scanning. We ended up using 0.01Hz Ramp signal with 110V amplitude (PZT2_OFFSET zero) to make sure to use the full range of the PZT.
OM2 was cold throughout the scan (H1:AWC-OM2_TSAMS_THERMISTOR_1_TEMPERATURE=21.748 to 21.749, H1:AWC-OM2_TSAMS_THERMISTOR_2_TEMPERATURE= 22.149 to 22.147)
TCS was off at first. The first scan (16:45:35-16:47:58) was about 1h 40min after the lock loss.
TCS central heating of 1W was turned on at about 16:51:13.
Daniel restored the RF SBs and brought all settings back.
How to turn off RFSBs.
Disconnect the cable for 118MHz on the patch panel at the bottom of the PSL rack (1st picture).
On top of the patch panel there's a 24MHz amplifier, don't turn it off.
On top of the 24MHz thing, there are amplifiers for 9MHz and 45MHz. You will turn off the output of both (2nd picture showing the 45MHz unit with the RF output switch in OFF position).
If we just believe the TCS frontend simulation, H1:TCS-SIM_ITMY_SUB_DEFOCUS_FULL_SINGLE_PASS_OUTPUT was ~17.05 uD during the last OMC scan before we gave up.
We might be able to use this to distinguish between the two patches in the MM parameter space (update in https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=71477) but I'll wait for the OMC scan fitting results.
Executive Summary: The mode mis-match with no central heating on the ITM is 8.2%, the mode mis-match with central heating on the ITM is 3.6%.
For the first scan:
T0 = 1373734011
delta T = 87s
OMC scan is shown in the first png image.
Fitted C20/02 peak is shown in the first pdf.
We expect the HOM spacing to be 0.588 MHz as per this entry and DCC T1500060 Table 25.
The measured mode spacing is 149.388 - 148.796 = 0.592 MHz.
The ratio of second order to zeroth order carrier is (0.575 + 0.853)/(0.575 + 0.853 + 15.90) = 0.082 = 8.2 % mode mis-match
To run the code, check out the /dev git branch of labutils and run the measurement:
python OMCscan_nosidebands3.py 1373734011 87 "Sidebands off, 10W input, cold ITM + OM2" "single bounce" --verbose -m -o 2
and for the split peak fitting:
python fit_two_peaks_no_sidebands3.py
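The first-scan arithmetic above can be reproduced in a few lines (peak values copied from the fit output):

```python
# Fitted peak powers for the first (cold ITM) scan
p20, p02, p00 = 0.575, 0.853, 15.90

# Second-order HOM spacing from the two fitted peak positions (MHz),
# to be compared with the expected 0.588 MHz
hom_spacing = 149.388 - 148.796

# Mode mis-match estimate: second-order power over total carrier power
mismatch = (p20 + p02) / (p20 + p02 + p00)
print(f"{hom_spacing:.3f} MHz, {mismatch:.1%} mis-match")
```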
For the second scan:
T0 = 1373736206
delta T = 90s
OMC scan is shown in second png image.
Fitted C20/02 peak is shown in the second pdf.
The mode spacing is 149.338 - 148.741 = 0.597 MHz.
The ratio of second order to zeroth order carrier is (0.201 + 0.428)/(0.201 + 0.428 + 17.02) = 0.036 = 3.6 % mode mis-match
Run the following on the same git branch.
python OMCscan_nosidebands4.py 1373736206 90 "Sidebands off, 10W input, hot ITM + cold OM2" "single bounce" --verbose -m -o 2 -p 0.01
and for the split peak fitting:
python fit_two_peaks_no_sidebands4.py
data is in labutils/omc_scan/data/2023-07-18
files in labutils/omc_scan
figures in labutils/omc_scan/figures/2023-07-18
Summary:
Incorporated the fit results and updated the plot. Original analysis is in alog 71145.
In the attachment there are two pairs of patches, each pair comprising yellow and lighter blue, representing the previous measurement (alog 71145, where ITMX single bounce was used with no TCS, OM2 hot/cold), and two pairs, each comprising greenish blue and darker blue, representing the measurement done this time (ITMY single bounce, cold OM2, TCS ON/OFF).
Since it's impossible for the beam parameters of the ITMX single bounce beam on the OMC to be totally different from those of the ITMY single bounce, you can just look at the distance between the pairs and judge which ones represent reality. In this case, the patches in the left half plane are the clear winners.
Details and caveats:
Calculation done for the ITMY single bounce is exactly the same as ITMX except that the measured losses are different and the mode actuator is ITMY central TCS instead of OM2.
As for the TCS optical power I used H1:TCS-SIM_ITMY_SUB_DEFOCUS_FULL_SINGLE_PASS_OUTPUT~17uD for the central heating (zero for no heating). I simply doubled the number for double-pass effect. If this is grossly off the result might look different.
Since the 1st order HOM power was not negligible in the ITMY single bounce scan, as a first order approximation I used P2/(P0+P1+P2) as the measured mode matching loss, where P0, P1 and P2 are the powers of the 00, 1st order and 2nd order modes (for the 2nd order mode, 20 and 02 were resolved by the fit code). I've done the same for the ITMX single bounce scan, for consistency.
If the model is perfect and has everything, the difference between yellow "X, OM2" and greenish-blue "Y, OM2 Cold/TCS OFF" should be explained by the difference in the ITM ROC, substrate lensing/heating including the TCS (and IFO heating prior to lock loss, since we haven't waited for hours and hours after the IFO was unlocked). It would be interesting to see if ITM difference will make the plot look any different.
However, the model doesn't have separate ITMX and ITMY; it's just a single ITM at the average location. Though it's easy to implement that feature in principle, I suspect that the numbers used in the past for the ITM substrate lens effect could be off, and I've contacted GariLynn. We'll wait for the conclusion of that discussion.
A big caveat is that you cannot quickly draw conclusions about the full IFO mode matching from this. At the very least, you have to take into account that the arm mode is primarily determined by the HR surfaces and that the carrier coming to the OMC from inside the arms only experiences the ITM lensing once(-ish).
Another big caveat is that the ADC was railing for the 00 mode peak; look at the bottom of the 2nd attachment, where H1:OMC-DCPD_B_STAT_MIN = -(2^19). It's not as bad as the finesse measurement (alog 71888), as this scan was slower, but if we want better data we need to redo it with lower power or without the x10 gain.
Last attachment shows what happens when you change OM2 (left, 1 step in the plot = maximum range of the T-SAMS) or ITM heating (right, 1 step = 10uD single pass).
Perhaps unsurprisingly given its previous history, the strong 1.6611 Hz comb that disappeared (alog 69791) in late May has resurfaced. It shows up clearly in Fscans; I did some additional digging and it looks like the first traces appear on June 27th in the 12:00-14:00 UTC range. This corresponds in time with some of the work described in alog 70849, but OM2 heater changes don't account for the previous disappearance of the comb; Sheila confirms that heater wasn't on earlier in May. So it's still not clear what's going on.
Update: it's coherent with H1_PEM-EX_VMON_ETMX_ESDPOWER48_DQ and H1_PEM-EX_VMON_ETMX_ESDPOWER18_DQ, and *not* with CS or EY VMON channels.
(Last time we tried to hunt this comb down, I think we didn't have high resolution coherence plots generated to high enough frequencies for these channels.)
Plots attached. The gray dots are harmonics of a separate 99.9989 Hz comb.
It looks like the behaviour of this comb changed again on July 13, shifting slightly in frequency, before disappearing again on July 14. It is as yet unclear what caused the changes. The attached weekly averaged Fscan from July 12 - 19 shows these changes especially around 280 Hz.
This comb seems to reappear between 7:30 and 9:00 UTC on July 19, 2023. Hopefully this time range can point to something that specifically changes. See attached daily Fscan image