The attached plot shows both the DARM offset change and the modulation index changes. Strangely, decreasing the DARM offset seems to increase the reflected power on REFL_B, but much less so on REFL_A.
As kappa_C is changing with the OM2 TSAMS being turned on (70849), we took a Broadband Calibration Measurement at 14:24 UTC. A screenshot of the IFO beforehand is attached.
bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20230627T142443Z.xml
this xml can be plotted using /ligo/home/louis.dartez/projects/20230627/plot_cal_bb.py
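To generate the plot, the invocation is presumably something like the following (assuming the script takes the xml path as its argument; I haven't checked its options):

    python /ligo/home/louis.dartez/projects/20230627/plot_cal_bb.py \
        /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20230627T142443Z.xml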
The name of the default conda environment on control room workstations is now 'lhocds'.
The name change to 'lhocds' from 'cds' was needed in order to pin the workstation environments to a specific version. 'cds' will continue to be updated to the latest conda environment, while 'lhocds' will be updated after discussion, work permits etc.
'lhocds' and 'cds' are identical for the time being. Installed packages have not been changed.
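Switching between them in a terminal uses the standard conda commands:

    conda env list          # list available environments
    conda activate lhocds   # pinned control-room environment
    conda activate cds      # rolling environment, tracks the latest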
Control room workstations were updated and rebooted, except zotws17. Opslogin0 (nomachine) was updated and rebooted as well. These were operating system package updates, not an update to any conda environment.
Out of Observing from 12:00 to 12:05 UTC for the DARM Offset Steps described in 70835; back in Observing now with H1:AWC-OM2_TSAMS_POWER_SET set to 4.6, SDF accepted for the 2 hour test.
As OM2 warms up, our kappa_C is dropping and the reported range is increasing, see plot. Our SQZ BLRMs are reporting better squeezing around 100 Hz, visible in DARM too, see attached. Unsure if this is real or due to the calibration changing...
A no-SQZ time with hot OM2 was taken from 14:11 to 14:16 UTC. Plot attached.
The optical gain seems to have decreased by ~2%, and the cavity pole is ~3 Hz higher.
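For reference, in the usual calibration parameterization the sensing function scales as

    C(f) \propto \kappa_C / (1 + i f / f_cc)

with \kappa_C the optical gain scale factor and f_cc the coupled-cavity pole, so a ~2% drop in \kappa_C rescales the whole response while a ~3 Hz shift in f_cc changes its shape around the pole frequency.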
Looking closer at the sqz vs. no-sqz times from Camilla, at hot vs. cold OM2 settings, here are some things I notice:
Coherences for a time with lower range (OM2 cold) and higher range (OM2 hot):
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_GDS_1371896710_lower_range/
https://ldas-jobs.ligo-wa.caltech.edu/~gabriele.vajente/bruco_GDS_1371906344_higher_range/
Coherence with jitter is reduced with the hot OM2. There is also some broadband improvement.
SRCL coherence is slightly larger when the range is higher, so there might be even more improvement to be gained with a retuned FF.
Also, CHARD_P has larger coherence, while CHARD_Y is slightly better, so I guess the optimal A2L must have changed.
Tagging CAL.
This is when the OM2 TSAMS heater first gets turned ON during O4.
As Camilla indicates, the "ON" button was hit -- i.e. H1:AWC-OM2_TSAMS_POWER_SET was set to 4.6 V at
2023-06-27 12:04:02 UTC (05:04:02 PDT)
-- yes, at 5 am on the Tuesday morning prior to Maintenance day; she was on OWL shifts during O4, prior to when owl shifts became remote/on-call.
We were back in observing by 2023-06-27 12:05:30 UTC (05:05:30 PDT).
The thermistors on the TSAMS heater unit took much longer to thermalize, with
- H1:AWC-OM2_TSAMS_THERMISTOR_1_TEMPERATURE taking *days* to reach equilibrium at 33.0 [deg C].
- H1:AWC-OM2_TSAMS_THERMISTOR_2_TEMPERATURE taking *hours* to reach equilibrium at 56.60 [deg C].
Indeed, the calibration measurements taken Tuesday evening (2023-06-28 01:50 UTC / 2023-06-27 18:50 PDT -- see LHO:70902 and LHO:70908), during the lock and observation stretch *after* the above-mentioned turn-on, fell in the middle of the THERMISTOR_2_TEMPERATURE thermalization.
The TSAMS heater remained ON until 15 days later on 2023-07-12 14:48:47 UTC -- LHO:71285.
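If anyone wants to reproduce those thermalization trends, a minimal gwpy sketch (assuming NDS access from a workstation; minute trends would be faster to fetch, but this shows the idea):

    from gwpy.timeseries import TimeSeriesDict

    # thermistor channels on the OM2 TSAMS heater unit (quoted above)
    chans = ['H1:AWC-OM2_TSAMS_THERMISTOR_1_TEMPERATURE',
             'H1:AWC-OM2_TSAMS_THERMISTOR_2_TEMPERATURE']

    # a few days around the 2023-06-27 12:04 UTC turn-on, to capture both the
    # hours-scale (thermistor 2) and days-scale (thermistor 1) settling
    data = TimeSeriesDict.get(chans, '2023-06-27', '2023-07-01')

    plot = data.plot()
    plot.gca().set_ylabel('Temperature [deg C]')
    plot.savefig('om2_tsams_thermalization.png')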
On Wednesday 21st June we adjusted the NLN input power to 60 W (70648). Since then, these have been our locklosses from NLN:
I checked the 'lockloss select' LSC and ASC plots for all "unknown" locklosses and couldn't see any ring-ups. The IMC unlocks after light falls off AS_A_DC in each of these. The DARM_IN1 signal as each loses lock is attached; 06/26 03:31 UTC has the quickest change in signal.
We are getting a 'bash: python: command not found' error on all the workstations. Python3 is working fine and is located at /usr/bin/python3. I think this must be a new change from today, as I checked zotws17, where Sheila is logged in with a script running (70835), and that computer has a valid python path at /var/opt/conda/base/envs/cds/bin/python that doesn't exist on my workstation.
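For anyone debugging this on another workstation, the standard shell checks (nothing LHO-specific) are:

    type -a python python3   # show everything named python/python3 on PATH
    echo $CONDA_DEFAULT_ENV  # which conda environment is currently active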
TITLE: 06/27 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY: In NLN for 5h45. ITMY mode 5/6 violins are slightly high.
Dust monitors, SUS, SEI, CDS, VAC okay. Our DMT "Observe" data on online.ligo.org isn't updating.
ITMY modes 5 and 6 damped fine once I engaged the nominal damping settings (IY05 @ -0.04). As the modes rang up, the VIOLIN_DAMPING guardian had set the IY mode 5 gain to the "max_gain" value of 0; we could change this to -0.04 or -0.01. Tagging SUS.
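For reference, setting such a damping gain by hand would look something like this (a sketch only; the exact SUS filter-bank channel name should be checked against the VIOLIN_DAMPING guardian before use):

    from ezca import Ezca

    ezca = Ezca(ifo='H1')
    # assumed channel name for the ITMY mode-5 violin damping gain
    ezca['SUS-ITMY_L2_DAMP_MODE5_GAIN'] = -0.04  # nominal setting quoted above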
TITLE: 06/26 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
SHIFT SUMMARY:
- H1 had a power glitch, but has since been recovered
- During an initial alignment post-recovery, I had to move SRM by 20 urad in P and 13 in Y to get SRY to catch - something we haven't had to do for a while
- Lockloss - due to a commissioning error
- Upon relocking, I had an issue getting any light on ALS X. After trying to move the EX suspensions to no avail, I tried a slider revert for ETMX and TMSX to the last time the green arms were locked - no dice
- Held at OMC whitening to damp violins for ~10 minutes
- Acquired NLN @ 1:22, OBSERVE @ 1:23
- S230627c @ 1:53
- EX saturation @ 2:55
- Leaving H1 to Camilla with H1 still going strong :)
LOG:
No log for this shift.
H1 has been locked and in observing for just under 2 hours. Ground motion and wind are low, and the IFO appears stable. Event @ 1:53 UTC.
Since we are back to a lower and more stable operating power, I reduced the power step-up time in the MAX_POWER guardian state. Each 5 W step up from 25 W took 60 seconds; that time is now down to 45 seconds. Tomorrow I'd like to try a 30 second step time, which I think should be fine since the ASC is much more stable up to 60 W. This has already reduced the relocking time by almost 2 minutes, and will reduce it more.
Today I reduced this timer to 30 seconds. We just had a successful power up! This is now guardianized. Should shave 3.5 minutes off the locking process.
Note: the ASC loops are notoriously unreliable. I encourage operators to pay attention to the power-up process; if there are problems with ASC instabilities during the final power up, increasing this timer will likely be a good solution. Tagging ops.
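For anyone curious, the dwell time is just a Guardian state timer; a minimal sketch of the pattern (illustrative only -- this is not the real MAX_POWER code, and the power-request channel name is made up):

    from guardian import GuardState

    STEP_DWELL = 30  # seconds to dwell after each 5 W step (was 60, then 45)

    class MAX_POWER(GuardState):
        def main(self):
            self.targets = list(range(30, 65, 5))  # 5 W steps from 25 W to 60 W
            self.timer['step'] = 0                 # expires immediately

        def run(self):
            if not self.timer['step']:
                return False    # still dwelling; let the ASC settle
            if not self.targets:
                return True     # at full power, state is complete
            ezca['PSL-POWER_REQUEST'] = self.targets.pop(0)  # hypothetical channel
            self.timer['step'] = STEP_DWELL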
When relocking (currently at MAX_POWER), Elenna and I noticed a possible error on the violin mode overview. The overview was reading that some of the 500 Hz monitor filters, particularly ITMY 5/6 and ETMX 18/19, are showing anomalously high values. However, DARM shows that the peaks, while not great, are nowhere near as bad as the violin overview says - both screenshots attached. In addition, the DCPD overview is actually converging, and we are getting no verbal saturations anywhere. This is the first we're seeing of this, so we're not really sure what to make of it (Elenna theorized that it could have been a faulty impulse response). We'll keep monitoring as we continue up; curious what will happen with the VIOLIN_DAMPING guardian once ISC_LOCK gets to DAMP_VIOLINS_FULL_POWER.
(Jordan V., Gerardo M.)
Today at 21:06 UTC we removed the BSC5 AIP controller and replaced it with a "new" one. This was done in an attempt to solve the oscillation noted on the BSC5 readback channel, see attachment.
Unfortunately, after looking at the trend data it appears that the oscillation remains despite the new controller. We will continue to look into this.
I've adapted Camilla's script for squeezer measurements into a measurement that we plan to do early in the morning before the maintenance window.
The test will chop the DARM offset up and down twice, sitting for a minute at each DARM offset. Once it has finished, it should set the DARM offset back to the original value and then set the OM2 ring heater to maximum. The DARM offset step will be an SDF difference that will take us out of observing; after the steps are done (by 5:05 am) we should be able to go back to observing, but will have to unmonitor or accept the OM2 setting. The script will then wait 2 hours and repeat the DARM offset changes at 7 am.
The script is attached here, and at /ligo/home/sheila.dwyer/OMC/script_to_move_DARM_offset_move_TSAMS.py
to run it: python script_move_DARM_offset_move_TSAMS.py -s 1371902418 where the gps time is the start time. This script is already running on ZOTWS17.
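The core of the chop logic looks roughly like this (a sketch, not the actual script; the DARM offset channel name and relative step sizes here are assumptions for illustration):

    import time
    from ezca import Ezca

    ezca = Ezca(ifo='H1')

    OFFSET_CHAN = 'OMC-READOUT_X0_OFFSET'  # assumed DC-readout DARM offset channel
    DWELL = 60                             # seconds at each offset

    nominal = ezca[OFFSET_CHAN]
    for scale in [1.1, 0.9, 1.1, 0.9]:     # chop up/down twice around nominal
        ezca[OFFSET_CHAN] = nominal * scale
        time.sleep(DWELL)
    ezca[OFFSET_CHAN] = nominal

    # then turn the OM2 TSAMS heater on
    ezca['AWC-OM2_TSAMS_POWER_SET'] = 4.6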
For the owl shift operator (Corey), all that you should need to do is wait for the script to run, then when it finishes at around 5:05, accept the SDF difference for H1:AWC-OM2_TSAMS_POWER_SET. At a few minutes after 7 am it will again change the DARM offset and kick us out of observing.
I caused a lockloss from 2W DC readout testing this script, because the DARM offset value I used there was too extreme. Sorry!
Tagging CDS, please do not restart zotws17 until this script has finished running.
All the HEPI and ISI watchdogs tripped at both Ends and we saw the lights flicker in the control room. No issues on CDS, I tried to untrip them after seeing the values were below the trip points, but the HEPIs retripped after a few minutes and the ISIs stayed untripped.
Attached mainsmon channels for EX show a glitch at 13:39 PDT
Here is a zoomed-in plot of the 3 phases for each building, with about 5 cycles of the 60 Hz shown. I think I can convince myself that the CS glitch was not as great as the EX and EY glitches.
Storms were just SW of us at the time of the lights flickering, and although it didn't look crazy, the radar showed more severe thunderstorms in the area than anticipated. A series of them appeared to move east and a bit north within the hour.
The transient is considerably reduced when the IFO is kept at 25 W with annular CO2s for 40 minutes, see plot. Is this caused by the annular CO2s being on, or by the 25 W circulating arm power for 40 minutes before the final power-up to NLN?
Dan's next suggested test is to not turn off CO2s on lockloss and try to lock with them.
Leaving the CO2s on all the time should get rid of this 5+ hour transient we are seeing when we get to max power, as Camilla's plots are nicely hinting at. The uniform heating time constant is much faster, so we'd expect to see the arm powers reach a steady state in about 1-1.5 hours.
Plots attached for the surface and substrate spherical power change over time, ring heater included for reference.
Plots show the spherical power change from 1 W of uniform (scaled down by 0.05) and annular CO2. There are two longer time constants when switching the CO2 on, due to the CP eventually radiating onto the ITM AR surface. One is from the ITM substrate eventually developing a thermal lens which opposes the annular CO2 CP lens, hence the overshoot. The other is the thermo-elastic deformation of the ITM HR surface from the radiating CP. Both effects are still changing the IFO over about 10 hours, with the thermal lens taking the longest to level off.
As can be seen in the steady state, watt for watt the CO2 provides about 40% of the RoC actuation that 1 W of RH does. In terms of substrate lensing, the annular CO2 generates a lens that is about 30% larger than the RH's.
Today we are applying 4 W of annular CO2 on X and 1.7 W on Y. From a cavity eigenmode perspective this is roughly equivalent to applying 1.6 W of RH on X and 0.68 W of RH on Y. However, the ratio of RoC to substrate lensing is different, so we can't just swap to a constant ITM RH power and expect the same result if we plan to have constant CO2.
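Spelling out that watt-for-watt scaling (the ~40% RoC equivalence from the previous paragraph):

    P_RH,equiv \approx 0.4 \times P_CO2:  0.4 \times 4 W = 1.6 W (X),  0.4 \times 1.7 W = 0.68 W (Y)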
These 5+ hour time constants also make picking the CO2 powers a pain (as with everything TCS related), as the fast CO2 changes we have been doing only show us the CP substrate lensing effect on the IFO.
In alog 66700 I show that leaving the ring heaters on didn't have a large effect on the 2 hour transient, though we only had one true lockloss and relock with the CO2s remaining on.