I made a comparison of DARM_IN1_DQ and CAL-DELTAL_EXTERNAL_DQ in the nominal VS the new DARM offloading scheme (the new scheme itself is explained in alog 74887). Data for the NEW_DARM configuration was taken from Dec 21 (alog 74977), when Louis and Jenne successfully transitioned but the calibration did not make sense.
The main thing to look at is the bottom left panel, red and blue, i.e. the coherence between DARM_IN1 and CAL-DELTAL_EXTERNAL in the NEW (red) VS the old (blue) configuration. The blue trace is almost 1, as it should be, but the red one drops sharply between 20Hz and 200Hz.
This does not make any sense, because CAL-DELTAL_EXTERNAL is ultimately a linear combination of DARM_IN1 and DARM_OUT (see https://dcc.ligo.org/G1501518). Since DARM_OUT is a linear function of DARM_IN1, no matter where and how the noise is generated and no matter how the signal is redistributed in the ETM chain, CAL-DELTAL_EXTERNAL should always be a linear function of DARM_IN1, and therefore the coherence should be almost 1.
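To illustrate the point, here is a minimal toy sketch (numpy/scipy; the filters are arbitrary stand-ins, not the real sensing/actuation models): the coherence between a signal and any linear combination of filtered copies of that signal comes out as essentially 1, whatever the filter shapes.

# Toy illustration only, not the actual calibration pipeline.
import numpy as np
from scipy import signal

fs = 16384
x = np.random.randn(fs * 64)                       # stand-in for DARM_IN1
b1, a1 = signal.butter(2, 50.0, 'high', fs=fs)     # fake "actuation" path
b2, a2 = signal.butter(2, 200.0, 'low', fs=fs)     # fake "inverse sensing" path
darm_out = signal.lfilter(b1, a1, x)               # DARM_OUT is linear in DARM_IN1
deltal = 3.0 * signal.lfilter(b2, a2, x) + 0.5 * signal.lfilter(b1, a1, darm_out)
f, coh = signal.coherence(x, deltal, fs=fs, nperseg=fs)
print(coh[(f > 20) & (f < 200)].min())             # ~1 across 20-200 Hz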
So what's the issue here?
The only straightforward possibility I see is that somehow excessive numerical noise is generated in the calibration model, even with the frontend's double precision math. Maybe something is aggressively low-passed and then high-passed, or vice versa, that kind of thing.
It is not an artefact of the single precision math of DTT. Both CAL-DELTAL_EXTERNAL and DARM_IN1 are already well whitened, and they're entirely within the dynamic range of single precision. For example, the RMS of the red CAL-DELTAL_EXTERNAL_DQ trace is ~7E-5 cts. From that number, I'd expect the noise floor due to single precision to be very roughly O(7E-5/10**7/sqrt(8kHz)) ~ O(1E-13) cts/sqrtHz if it's close to white, give or take some depending on details, but the actual noise floor is ~10E-8 cts/sqrtHz. The same can be said for DARM_IN1.
It's not numerical noise in the DARM filter, as the coherence between DARM_IN and SUS-ETMX_L3_LSCINF_L_IN1 (which is the same thing as DARM_OUT for coherence purposes) is 1 from 1Hz to 1kHz for both configurations (old -> brown, new -> green). (It looks as if the coherence goes down above 1kHz for the old config, but that's irrelevant for this discussion, and anyway it's an artefact of DTT's single precision math. See e.g. the top left blue trace (old config DARM_OUT) with an RMS of 20k counts, corresponding to a single precision noise floor of O(2E-5)/sqrtHz, give or take. See where the actual noise floor is.)
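For reference, the back-of-the-envelope arithmetic behind both of those estimates looks like this (a rough sketch only, mirroring the round numbers above: ~1E-7 relative resolution for single precision, noise spread roughly white over an ~8 kHz band; it is not a careful quantization-noise calculation):

import math

def sp_floor(rms_counts, bandwidth_hz=8e3):
    # very rough single-precision noise floor estimate, in counts/sqrtHz
    return rms_counts * 1e-7 / math.sqrt(bandwidth_hz)

print(sp_floor(7e-5))   # CAL-DELTAL_EXTERNAL_DQ (red): ~8E-14, i.e. O(1E-13)
print(sp_floor(2e4))    # old-config DARM_OUT (blue):   ~2E-5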
It's not a glitch; the noise level of the CAL-DELTAL_EXTERNAL spectrum didn't change much from one FFT to the next over the entire window (I used N=1 exponential averaging to confirm this).
Note that there's also a possibility that excessive noise is generated in the SUS frontend too, polluting DARM_IN1 for real, not just in the calibration model. I cannot tell if that's the case for now. The difference between the green (new) and brown (old) DARM_IN1 spectra in the top left panel could just be a difference in gain peaking due to a different DARM loop shape.
I'll see if the double precision channels (recorded as double) in the calibration model are useful to pinpoint the issue. Erik modified the test version of DTT so it handles double precision numbers correctly without casting to single, but it's crashing on me at the moment.
Some more time windows to look into while we were in the NEW DARM state are listed at LHO:75631.
The purge air compressors continue to run as VAC works on getting ready to vent; the VAC team hopes to get doors off tomorrow (Thurs 18th) or Friday (19th).
After looking at the beeps and error lights on h0epics2, it looks like we have already migrated all the jobs that were on it. We powered off h0epics2 at 11:12am local time.
We found one MEDM screen that wanted channels from an IOC on h0epics2: h0video, which controlled some of the analog camera system. We have moved that to h0epics and are inquiring whether we can just stop running it altogether. Our old wiki pages also point to an h0tidal IOC, but its channels have not been in the frames since 2015 and the IOC itself is no longer in the shared drive. So, having migrated h0video to h0epics, we will keep h0epics2 off.
We are power cycling the FMCS EPICS computer (fmcs-epics-cds) as a first try to regain stability.
After the first cell phone alarms were sent, I bypassed them for a couple of hours.
Jonathan, Patrick, Dave:
The FMCS IOC computer is back online. The restart code is running again. The cell phone alarm bypass has been removed.
Wed Jan 17 10:04:57 2024 INFO: Fill completed in 4min 53secs
TCs started high, trip temps were -60C for this fill.
Late entry. The activities carried out on 1-16:
- 44" GVs: GV2 and GV7 closed nicely, without issues; GV5 was a bit stubborn, however - it needed 50 psi and some time to close. The wiring was also messed up; Fil corrected it.
- Other GVs: the GVs of the relay tube (RV1, RV2), the GVs between HAM7 and BSC3 (FCV1, FCV2), the GVs along the FCT after BSC3 (FCV3, FCV4), and the GVs before HAM8 (FCV7, FCV8) are closed.
- The RGAs for the corner (OMC, HAM6, HAM7) are being pumped; RGA scans on 1-17.
- The leaky HAM7 fiber feedthrough was leak checked, see details here: https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=30141
- The Kobelco is running, seemingly without any issues.
So far everything went as planned.
For the VAC team to bag and leak check the leaky HAM7 fiber feedthrough (FRS30141), yesterday afternoon I unplugged the HAM7 FC 532nm fiber from the feedthrough, covered it with a plastic endcap, and then replugged it when the VAC team was finished.
Tagging EPO for fiber feedthru pics.
Patrick, Jonathan, Erik, Dave:
While the fmcs ioc continues to be unstable, I wrote an auto-restart script which restarts the IOC if its EPICS values flatline for more than 9 minutes.
In order to control the IOC code we moved it from a screen environment to a procServ, and converted the code to a systemd service.
The auto-restart code runs as david.barker on cdsmanager. Every minute it gets the value of the EX chiller yard water temperature channel H0:FMC-EX_CY_H2O_SUP_DEGF.
If the value of this channel does not change for 9 successive minutes, the code restarts the fmcs_ioc.service on fmcs-epics-cds via:
ssh root@fmcs-epics-cds 'systemctl restart fmcs_ioc.service'
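For reference, a minimal sketch of what the watchdog loop does (assuming pyepics and a one-minute polling cadence; the actual script on cdsmanager may differ in detail):

import subprocess
import time
from epics import caget

CHANNEL = 'H0:FMC-EX_CY_H2O_SUP_DEGF'
FLATLINE_MINUTES = 9

last_value = None
stale_count = 0
while True:
    value = caget(CHANNEL)
    if value is not None and value == last_value:
        stale_count += 1
    else:
        stale_count = 0
    last_value = value
    if stale_count >= FLATLINE_MINUTES:
        # value has not changed for 9 successive one-minute polls
        subprocess.run(['ssh', 'root@fmcs-epics-cds',
                        'systemctl restart fmcs_ioc.service'])
        stale_count = 0
    time.sleep(60)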
I started the auto-restart code at 23:12 PST Tuesday night; since that time there have been 3 auto-restarts:
Tue 16 Jan 2024 11:35:41 PM PST
Wed 17 Jan 2024 02:29:52 AM PST
Wed 17 Jan 2024 04:32:58 AM PST
Full details can be found in the wiki page h0fmcsbacnet
TITLE: 01/17 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 2mph Gusts, 1mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.25 μm/s
QUICK SUMMARY:
Gerardo found that h0epics2, rack12 in the MSR, was beeping. The beeping later stopped, but the error light is now blinking at approx. 3 Hz.
Power supplies are both green. I was able to log in. I couldn't find any errors on the system. The EDC has no disconnections.
TITLE: 01/16 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY:
The O4 Break officially started at the beginning of the shift!
Lots of activity today, mainly in prep for upcoming chamber incursions. No volumes were vented (this could possibly start tomorrow).
H1 taken to PLANNED ENGINEERING via Observatory Mode!
LOG:
Per the plan, the O4 break was paused today at both LHO and LLO in order to execute a pre-planned list of repairs, maintenance, updates, measurements, and commissioning tasks.
The schedule is shown in the attached PDF and is an estimate based on work loads and task duration.
Today, many preparations were made which push towards venting the isolatable volumes:
Corner chambers
HAM7 chamber
Relay Tube between HAM5 and HAM7
HAM8 chamber
EX
We plan to do in-chamber work in HAM6, 7, 3, and 8, and port work on a few different chambers, some at EX.
FAMIS26165
These haven't been checked since the middle of December, according to the paper logs. I added 350mL to TCSX and 180mL to TCSY. Filters were in good shape.
Picket fence was updated to the latest version. As part of the update, the LAIR station was dropped for the DING station.
Excellent! I just checked the display and it looks as intended. Thank you very much Erik!
STATE of H1: Observing at 152Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7mph Gusts, 5mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.44 μm/s
QUICK SUMMARY:
IMC_LOCK Issue:
IMC_LOCK is stuck in a loop between CLOSE_ISS (50) and LOCKED (100):
IMC_LOCK [CLOSE_ISS.run] USERMSG 0: Diffracted power jumped too much, toggling secondloop
Found Camilla's alog about this issue: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=71357
But it seems like it resolved itself for her relatively quickly when she ran into this issue.
Tried to take IMC_LOCK to CLOSE_ISS and to MCWFS_OFFLOADED, but I still ended up in the same loop.
Camilla suggested maybe changing the IMC_LOCK code (line 529 in IMC_LOCK.py) to allow for a bigger diffracted power jump. I wanted to try this, but was hesitant.
Called up Jason, who mentioned that Keita would know more about the specifics of this particular issue.
Rang Keita, who said changing that line is likely not the real solution.
While we were making a plan to resolve the issue, it resolved itself after about 2 hours of not working properly.
The issue was very likely that the electronic offset in the board has changed enough that the combination of the 2ndLoop REF Servo (H1:PSL-ISS_SECONDLOOP_REFERENCE_SERVO_OUT16) and the 3rdLoop offset can no longer compensate for it. Tagging PSL team.
Note to other operators!:
If H1 has a lockloss, contact Jason to touch up the RefCav, as the RefCav transmission is too low and likely a contributing factor to the IMC issue.
FMCS Issues: The FMCS IOC keeps going down, so the FirePump alerts are going off and the temperature sensors are not giving us any data through MEDM or NDSCOPES.
Dave is aware of this and has been actively trying to find a way to automate the restart of that IOC to limp us along until tomorrow. So far it has failed twice during this shift, at 9:47 UTC and 10:45 UTC. If you have gotten an alert, it is likely due to the FMCS IOC errors we have been having. Tagging CDS and FMCS.
See my annotation of the infinite loop of 2nd loop enabling/disabling due to the mismatch between the electronics offset and the 3rd loop offset. The 1st loop is REALLY slow to respond to the change in the 2nd loop board DC output (because the 2nd loop output is added to the already whitened 1st loop sensor signal), which doesn't help either.
The way this works is that the diffraction average is measured just before the board output is enabled; then the guardian waits for 10 seconds (for the reference servo to take care of any remaining electronics offset that the third loop offset could not counteract), enables the 2nd loop servo, waits for a while, measures the diffraction again, and if that's close enough to the original number it's satisfied.
In this case, it seems that the electronics offset drifted enough that 10 seconds is not quite enough for the reference servo to take care of it. The solution would be to tune the 3rd loop offset.
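Schematically, the check described above looks something like this (this is NOT the actual IMC_LOCK.py guardian code; the channel names, threshold, and wait times here are placeholders):

import time
from epics import caget, caput

DIFFRACTION_CH = 'H1:PSL-ISS_DIFFRACTION_AVG'        # placeholder readback
ENABLE_CH = 'H1:PSL-ISS_SECONDLOOP_OUTPUT_SWITCH'    # placeholder switch

def close_second_loop(jump_threshold=2.0):
    # measure diffraction, enable the 2nd loop, check the diffraction didn't jump
    diff_before = caget(DIFFRACTION_CH)
    # wait for the reference servo to absorb whatever electronics offset
    # the 3rd loop offset could not counteract
    time.sleep(10)
    caput(ENABLE_CH, 1)
    time.sleep(5)
    diff_after = caget(DIFFRACTION_CH)
    if abs(diff_after - diff_before) > jump_threshold:
        # "Diffracted power jumped too much, toggling secondloop"
        caput(ENABLE_CH, 0)
        return False
    return True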
After the conclusion of O4a, I adjusted H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET (was originally 949, now 959.5).
The procedure is really simple.
1. Put the ISS 2nd loop in the configuration shown in the 1st screenshot. The power into the IMC doesn't matter, this is only about electronics, but it's better if the 1st loop is working so that you can confirm that your work is not doing any harm. (When I came into the control room the ISS was already in a good state to start the work.)
2. Confirm that H1:PSL-ISS_SECONDLOOP_EXC_MON is not changing much at DC by making a trend. If it's still trending up/down, wait for two minutes. Ignore changes of ~0.1/minute; we're talking more about 1/minute.
3. Read the H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET value (it was 949). Add the average DC value of H1:PSL-ISS_SECONDLOOP_EXC_MON (about 11) to make a new number (i.e. 960 in this case; see the sketch after this list).
4. If this is to be done while the IFO is locked, you might want to set H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET_TRAMP to 100. I did that even though it wasn't necessary in this case, just to show that doing so makes this procedure almost transparent to the 1st loop (2nd attachment, t~-40min; see the 1st loop readout of the 2nd loop output and the diffraction).
5. Put the new number (960 in this case) into H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET.
6. Wait for ~2 minutes to see that H1:PSL-ISS_SECONDLOOP_EXC_MON goes down to less than 1. You can also fine-adjust it, but it will drift after some time. If it's less than 1, it's already pretty good.
7. If you changed H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET_TRAMP to 100, bring it back to 3.
8. Be happy.
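A minimal sketch of the arithmetic in step 3 (assuming pyepics; in practice you would average H1:PSL-ISS_SECONDLOOP_EXC_MON over a trend rather than take a single reading):

from epics import caget, caput

current_offset = caget('H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET')   # e.g. 949
exc_mon_dc = caget('H1:PSL-ISS_SECONDLOOP_EXC_MON')            # e.g. ~11 (use a trend average)
new_offset = current_offset + exc_mon_dc                       # e.g. ~960

caput('H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET_TRAMP', 100)  # optional slow ramp while locked
caput('H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET', new_offset)
# wait ~2 minutes, check that EXC_MON settles below 1, then restore the ramp time
caput('H1:PSL-ISS_THIRDLOOP_OUTPUT_OFFSET_TRAMP', 3)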
[Measurements attached]
FAMIS LINK: 25973
BSC CPS: Received following from CPS script terminal output:
NOTE: If one were to follow the above channel as an example, it would seem several other BSCs had CPSs with spectra similarly high by eye:
HAM CPS: Looks good.