The baffle installation team is currently out for lunch and has been making great progress. OMC work in HAM6, TCS table work, and leak checks are currently ongoing in the LVEA.
Wed Jan 31 10:11:43 2024 INFO: Fill completed in 11min 39secs
Could somebody have a look? Pressing buttons won't change the screen.
TITLE: 01/31 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.62 μm/s
QUICK SUMMARY:
- HAM 3 baffle work continues
- TCS work will continue today
- HAM 7 work slated for sometime this week or the beginning of next
Today's activities:
- The HAM3 Y+ door has been removed.
- The pumpdown at EX has been started with the Hepta. By the end of the day the pressure was in the low E+0s - a.k.a. a few Torr - so we are transitioning to the turbo tomorrow. The purge air won't be switched off until the leak tests.
- In parallel with the HAM3 door removal, the purge air was hooked up to the X-manifold as well (it is now purging at the input beam, at the X-manifold, and at HAM7).
- BSC8 AIP railed - possibly there is a leak between the main volume and the annulus volume; this will be investigated further and taken care of tomorrow.
Robert, Mitchell, Corey, Tony, TJ, Alena, Eddie, Betsy, the SLiC team.
The baffle by HAM2 can, at 5 degrees, specularly reflect some scattered light back to beam spots and has been shown to produce scattering noise (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=74772). Today we changed the angle of the baffle to 10 degrees using new hardware designed at CIT. The new hardware and the procedure worked well. The figure shows photos of the work.
TITLE: 01/30 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
Took over for Tony @ 1145 to allow him to work with the HAM 3 baffle installation team.
Currently FARO work and tour ongoing in the LVEA.
Travis and Janos at EX running pump checks.
LVEA is currently laser safe.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:04 | FAC | Chris S | LVEA FCES | No | Replacing Eye wash stations | 19:17 |
| 16:12 | FAC | Kim | LVEA | No | Technical Cleaning | 17:58 |
| 16:13 | FAC | Karen | LVEA | No | Technical cleaning | 17:13 |
| 16:13 | VAC | Jordan | LVEA | No | VAC work | 16:22 |
| 17:11 | VAC | Jordan & Travis | LVEA | No | HAM3 Door work | 20:34 |
| 17:13 | OMC | Sheila & Preet | LVEA HAM6 | no | Gate valve and curtain near HAM6 & 7 | 19:38 |
| 17:14 | VAC | Betsy | LVEA | yes | LVEA walk around checking status. | 19:03 |
| 17:14 | FAC | Karen | Mid Y | No | Technical cleaning | 19:24 |
| 17:22 | SQZ | Camilla | LVEA HAM 6& 7 | YES | Helping with OMC and SQZ work | 17:38 |
| 17:29 | Beckhoff | Daniel | CRTL RM | N | Restarting Beckhoff. | 19:22 |
| 18:17 | EE | Fil | End X | No | Working on a scroll pump for the Vac team | 20:13 |
| 18:29 | FAC | Kim | End X | No | Technical Cleaning | 20:03 |
| 18:59 | FAC | Tyler | FCES | No | Check safety & fire suppression equipment? | 19:24 |
| 19:03 | VAC | Betsy | LVEA HAM6 | no | Checking on HAM6 status | 19:24 |
| 19:18 | VAC | Jordan & Tyler | LVEA HAM3 | NO | Taking the HAM3 Door off | 20:35 |
| 19:22 | Surveying | Jason & Ryan C | LVEA | No | Surveying the LVEA floor. | 20:21 |
| 19:27 | VAC | Betsy | LVEA HAM3 | No | Hovering | 20:55 |
| 19:38 | ISC | TJ | HAM 3 | N | Lock ISI | 20:53 |
| 19:39 | SUS | Rahul | HAM 6 | N | Measurements | 20:48 |
| 20:30 | ISC | Tony | HAM 3 | N | Join HAM crew | 20:53 |
| 20:54 | ISC | Robert/Mitch/Corey/Tony/TJ | HAM 3 | N | Baffle installation | 23:54 |
| 21:01 | VAC | Jordan | LVEA | N | Purge air inspection | 21:22 |
| 21:24 | EE | Marc/Jabari | MY | N | Install accelerometers | 23:15 |
| 21:39 | FARO | Jason/Ryan C | W Bay | N | FARO work | ?? |
| 22:08 | SUS | Rahul | CR | N | TF measurements | 22:38 |
| 22:39 | OPS | Betsy | HAM 3 | N | Check on team | 22:52 |
| 22:47 | ISC | Julian | Optics lab | N | Parts testing | 0:02 |
| 22:57 | VAC | Travis/Janos | EX | N | Pump checks | ?? |
| 23:39 | EPO | Mike + 2 | LVEA | N | Tour | ?? |
Here are the GPS time windows we've spent in the NEW DARM state. This is primarily useful for Keita, Sheila, and me as we try to better understand why we weren't able to stay in the state in early January.
Times are based on Sheila's investigative work from earlier this month: LHO:75308 and LHO:75348. The table of NEW DARM periods below does not include failed attempts; I only included successful transitions to the NEW DARM state (GRD STATE 710), in hopes of following up on LHO:75432. In addition, I've checked the CAL-CS filterbank and gain states for each of the NEW DARM transitions in the table below. All but the last entry contain at least 10 minutes of "undisturbed" time during which CAL-CS was not changed. During the last window (GPS 1387237749-1387239372), CAL-CS saw many changes.
| Start GPS | End GPS | Duration | Notes |
|---|---|---|---|
| 1386536646 | 1386544194 | 02:06:09 | more info in LHO:75348; no CAL-CS changes during stretch |
| 1386619903 | 1386623884 | 00:35:41 | more info in LHO:75348; no CAL-CS changes during stretch |
| 1387033311 | 1387033995 | 00:11:10 | successful transition, more info in LHO:75348; no CAL-CS changes during stretch |
| 1387140065 | 1387141308 | 00:20:51 | more info in LHO:75348; no CAL-CS changes during stretch |
| 1387237749 | 1387239372 | 00:26:49 | more info in LHO:75348, initially discussed in LHO:74977 CAL-CS changed throughout. see NEW_DARM_scope_1387237749.png |
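The "undisturbed time" bookkeeping can be sketched as follows. This is an illustrative helper only, not code used for the entry above: it assumes the SWSTAT/GAIN readbacks have already been fetched as (gps, value) pairs, and the function name is made up.

```python
def longest_undisturbed(samples, start, end):
    """Longest (t0, t1) span inside [start, end] during which a sampled
    channel (e.g. a CAL-CS SWSTAT or GAIN readback) did not change.
    EPICS-style assumption: a value holds until a new sample changes it."""
    pts = sorted((t, v) for t, v in samples if start <= t <= end)
    if not pts:
        return (start, end)  # no samples in window: treat as unchanged
    best = (start, start)
    span_start, last_val = start, pts[0][1]
    for t, v in pts:
        if v != last_val:  # a change: close out the current span
            if t - span_start > best[1] - best[0]:
                best = (span_start, t)
            span_start, last_val = t, v
    if end - span_start > best[1] - best[0]:
        best = (span_start, end)
    return best

# Toy example: value changes once at t=30, then holds until the end
t0, t1 = longest_undisturbed([(10, 1), (20, 1), (30, 2), (90, 2)], 0, 100)
print(t0, t1)  # -> 30 100
```

A window would then count as having 10 minutes of undisturbed time if `t1 - t0 >= 600` for every monitored CAL-CS channel.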
ndscope H1:GRD-ISC_LOCK_STATE_N . H1:CAL-CS_DARM_FE_ETMX_L1_LOCK_L_SWSTAT , H1:CAL-CS_DARM_FE_ETMX_L1_DRIVEALIGN_L2L_SWSTAT , H1:CAL-CS_DARM_FE_ETMX_L1_LOCK_L_GAIN H1:CAL-CS_DARM_FE_ETMX_L1_DRIVEALIGN_L2L_GAIN . H1:CAL-CS_DARM_FE_ETMX_L2_LOCK_L_SWSTAT , H1:CAL-CS_DARM_FE_ETMX_L2_DRIVEALIGN_L2L_SWSTAT , H1:CAL-CS_DARM_FE_ETMX_L2_LOCK_L_GAIN H1:CAL-CS_DARM_FE_ETMX_L2_DRIVEALIGN_L2L_GAIN . H1:CAL-CS_DARM_FE_ETMX_L3_LOCK_L_SWSTAT , H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_SWSTAT , H1:CAL-CS_DARM_FE_ETMX_L3_LOCK_L_GAIN H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN . H1:LSC-DARM1_SWSTAT , H1:LSC-DARM1_GAIN
In LHO:75560, Dana found that cal report 20230505T200419Z was not thermalized. As such, it should not be included in the GPR and uncertainty budget calculations. I've removed the valid tag in that report.
As soon as the VAC team removed the HAM3 West door, Betsy and I went in to lock the ISI. I locked them in reverse order (D,C,B,A) due to access, climbing into the beamtube to get to B and A.
This work follows up on LHO:73735. I've written a script that retroactively fixes the pyDARM parameter issues discussed in LHO:73735. The script lives here on the CDS workstations: /ligo/home/louis.dartez/projects/20240124_script_fix_bad_inis_from_alog_LHO73735/fix_bad_site_inis.py. Since the pyDARM parameter INI files were initially written in error, anyone trying to process old data corresponding to the affected reports' times would be using the wrong IFO models. As such, I've used the script above to fix the affected reports in situ. The original reports have been copied into /ligo/groups/cal/H1/reports/archive/reports_preserved_from_fix_for_LHO73735.
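The fix pattern (back up each report, then rewrite the bad parameters in place) can be sketched roughly as follows. This is not the actual fix_bad_site_inis.py; the section and key names are made up for illustration.

```python
import configparser
import shutil

def fix_ini(path, backup_dir, section, key, good_value):
    """Back up an INI file into backup_dir, then overwrite one bad
    parameter in place (hypothetical helper, not the real script)."""
    shutil.copy2(path, backup_dir)      # preserve the original report file
    cfg = configparser.ConfigParser()
    cfg.read(path)
    cfg[section][key] = good_value      # apply the corrected value
    with open(path, "w") as f:
        cfg.write(f)
```

The real script presumably applies the corrected pyDARM model parameters to every affected report directory; the point here is only the back-up-then-edit-in-situ structure.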
WP11646 New h1sqz model
Daniel, Dave:
A new h1sqz model was installed. DAQ restart was required
WP11651 Add New SQZ_PMC Guardian node to DAQ
Vicky, Camilla, Daniel, Dave:
The new SQZ_PMC GRD node was added to H1EPICS_GRD.ini. DAQ + EDC restart required
DAQ Restart
Dave:
The DAQ was restarted for the above changes. Sequence was 0-leg, EDC, 1-leg.
No major problems with the restart; both GDS daqd processes had to be restarted a second time for channel list synchronization.
DAQ Changes:
key- <channame> <datatype 4=float> <datarate>
Fast Channels Removed
none
Fast Channels Added
< H1:SQZ-PMC_REFL_LF_OUT_DQ 4 2048
< H1:SQZ-PMC_REFL_RF35_I_NORM_DQ 4 16384
< H1:SQZ-PMC_REFL_RF35_Q_NORM_DQ 4 2048
< H1:SQZ-PMC_SERVO_CTRL_OUT_DQ 4 16384
< H1:SQZ-PMC_SERVO_ERR_OUT_DQ 4 16384
< H1:SQZ-PMC_SERVO_SLOW_OUT_DQ 4 2048
< H1:SQZ-PMC_TRANS_LF_OUT_DQ 4 16384
Slow Channels Removed
> H1:SQZ-FIBR_PD_AWHITEN_SET1 4 16
> H1:SQZ-FIBR_PD_AWHITEN_SET2 4 16
> H1:SQZ-FIBR_PD_AWHITEN_SET3 4 16
> H1:SQZ-FIBR_PD_LF_MASK 4 16
Slow Channels Added
< H1:GRD-SQZ_PMC_ACTIVE 4 16
< H1:GRD-SQZ_PMC_ARCHIVE_ID 4 16
< H1:GRD-SQZ_PMC_CONNECT 4 16
< H1:GRD-SQZ_PMC_ERROR 4 16
< H1:GRD-SQZ_PMC_EXECTIME 4 16
< H1:GRD-SQZ_PMC_INTENT 4 16
< H1:GRD-SQZ_PMC_LOAD_STATUS 4 16
< H1:GRD-SQZ_PMC_MODE 4 16
< H1:GRD-SQZ_PMC_NOMINAL_N 4 16
< H1:GRD-SQZ_PMC_NOTIFICATION 4 16
< H1:GRD-SQZ_PMC_OK 4 16
< H1:GRD-SQZ_PMC_OP 4 16
< H1:GRD-SQZ_PMC_PV_TOTAL 4 16
< H1:GRD-SQZ_PMC_READY 4 16
< H1:GRD-SQZ_PMC_REQUEST_N 4 16
< H1:GRD-SQZ_PMC_SPM_CHANGED 4 16
< H1:GRD-SQZ_PMC_SPM_MONITOR 4 16
< H1:GRD-SQZ_PMC_SPM_TOTAL 4 16
< H1:GRD-SQZ_PMC_STALLED 4 16
< H1:GRD-SQZ_PMC_STATE_N 4 16
< H1:GRD-SQZ_PMC_STATUS 4 16
< H1:GRD-SQZ_PMC_TARGET_N 4 16
< H1:GRD-SQZ_PMC_VERSION 4 16
Restart/Reboot log:
Tue 30 Jan 2024
LOC TIME HOSTNAME MODEL/REBOOT
08:57:21 h1susb123 h1iopsusb123 <<< Recovery from Monday Dolphin Glitch
08:57:35 h1susb123 h1susitmy
08:57:49 h1susb123 h1susbs
08:58:03 h1susb123 h1susitmx
08:58:17 h1susb123 h1susitmpi
09:00:02 h1sush2a h1iopsush2a
09:00:16 h1sush2a h1susmc1
09:00:30 h1sush2a h1susmc3
09:00:44 h1sush2a h1susprm
09:00:47 h1sush34 h1iopsush34
09:00:58 h1sush2a h1suspr3
09:01:01 h1sush34 h1susmc2
09:01:15 h1sush34 h1suspr2
09:01:29 h1sush34 h1sussr2
09:02:37 h1sush56 h1iopsush56
09:02:56 h1sush56 h1sussrm
09:03:10 h1sush56 h1sussr3
09:03:24 h1sush56 h1susifoout
09:03:38 h1sush56 h1sussqzout
09:37:30 h1lsc0 h1sqz <<< New sqz model
09:40:45 h1daqdc0 [DAQ] <<< 0-leg restart
09:40:58 h1daqfw0 [DAQ]
09:40:58 h1daqtw0 [DAQ]
09:40:59 h1daqnds0 [DAQ]
09:41:07 h1daqgds0 [DAQ]
09:41:33 h1susauxb123 h1edc[DAQ] <<< EDC restart for GRD node
09:42:06 h1daqgds0 [DAQ] <<< 2nd gds0 restart
09:44:58 h1daqdc1 [DAQ] <<< 1-leg restart
09:45:07 h1daqfw1 [DAQ]
09:45:08 h1daqtw1 [DAQ]
09:45:09 h1daqnds1 [DAQ]
09:45:18 h1daqgds1 [DAQ]
09:45:51 h1daqgds1 [DAQ] <<< 2nd gds1 restart
day 1 (alog 75548), day 2 (alog 75557), day 3 (alog 75575), day 4 (alog 75601)
Recovery from reboots
After the reboots, we found that the suspension slider offsets for all the OMs, SRM, and ZM5 had reverted to old-ish numbers. Camilla and I manually restored them.
Camilla saw that the beam was not quite back to the old position on one of the HAM7 QPDs, but it wasn't that bad.
The beam was already on ASC-AS_C but not centered, so I centered it using SRM.
Following that, I had to make minor tweaks to OM1/2/3 to quickly recenter the OMC QPDs.
| | SRM | OM1 | OM2 | OM3 |
|---|---|---|---|---|
| PIT slider (yesterday/today) | 1944.6/1904.6 | 20/90 | 0/-80 | -590/-550 |
| YAW slider (yesterday/today) | -2940.6/-2944.6 | 650/610 | 760/760 | 60/-74 |
| DAC max (yesterday/today) | didn't care/57k (18-bit DAC) | 11k/7.1k | 7k/4.5k | 9k/6.7k |
HAM6 irises are centered
Preet and Sheila recentered the two irises on HAM6. From this point on, these irises are a fiducial for the IFO beam.
OMC trans video beam and OMCR beam dump will be done later this week
LVEA was transitioned to laser safe for HAM3 door removal. We'll continue suspension work in HAM6.
Replaced the Endevco Accelerometer Power Conditioners with LIGO Accelerometer Power Conditioners. WP11653
Order in the rack:
U19 - Accelerometer Power Conditioner S2300062
U16-U17 AA Chassis S1300101
U14 - Accelerometer Power Conditioner S2300063
U13 - Accelerometer Power Conditioner S2300065
U12 - Accelerometer Power Conditioner S2300069
U11 - Accelerometer Power Conditioner S2300064
Still have EX, EY, MX, MY, FCES to complete.
M. Pirello, J. Jimerson. R. Schofield
At Mid Y we were not able to connect power; we installed the chassis and will revisit.
Installed the accelerometer chassis at EX; cables are connected and the chassis is powered up. One of the accelerometer cable connectors is loose and will be repaired later this week, but it is functional.
Signal Order Asbuilt:
Slot = Cable Number = Accelerometer Number
1 = 4 = PEM EY BSC10
2 = 7 = PEM EY EBAY
3 = empty = empty
4 = 1 = PEM EY OPLEV
5 = empty = empty
6 = empty = PEM EY BSC6
7 = 6 = Black Cable
8 = 2 = PEM EY BSC10
9 = 3 = PEM EY TRN
10 = 5 = PEM EY BSC ACC X
We have no slot 10 on the new chassis, so we moved cable 5 to slot 5 and moved the PEM EY BSC ACC X cable to match on the back.
Once we had matched the cables, we rearranged the signals to match the numbering on the front, i.e. cable 1 goes to slot 1, cable 2 goes to slot 2, etc.
Slot = Cable Number = Accelerometer Number
1 = 1 = PEM EY OPLEV
2 = 2 = PEM EY BSC10
3 = 3 = PEM EY TRN
4 = 4 = PEM EY BSC10 (this cable had a loose connector, but registered as an accelerometer)
5 = 5 = PEM EY BSC ACC X (this cable did not register as an accelerometer)
6 = 6 = Black Cable (this cable is unlabeled)
7 = 7 = PEM EY EBAY
8 = empty (This needs AA and signal)
9 = empty (This needs AA and signal)
J. Jimerson, M. Pirello
Further investigation into EY Accelerometers
Slot = Cable Number = Accelerometer Number
1 = 1 = PEM EY OPLEV
2 = 2 = PEM EY BSC10_Y (Should be BSC10_ACC_X)
3 = 3 = PEM EY TRN (should be BSC10_ACC_Y)
4 = 4 = PEM EY BSC10_Z
5 = 5 = PEM EY BSC ACC X (should be TRN_TBL_ACC_Y)
6 = 6 = Black Cable (this cable is unlabeled) * same issue as EX
7 = 7 = PEM EY EBAY
8 = empty (This needs AA and signal)
9 = empty (This needs AA and signal)
Same as at EX: CH2 moves to CH3, CH3 moves to CH5, and CH5 moves to CH2.
Removed S2300069 from CS PEM rack, this is not supposed to be installed here.
Dana, Louis
We analyzed every measurement of the sensing function taken between the start of O4 and October 27th to assess their reliability; the results are summarized in the table below:
| Report ID | GPS time [s] | Time locked prior to measurement [h] |
|---|---|---|
| 20230504T055052Z | 1367214670 | 6+ |
| 20230505T012609Z | 1367285187 | 5.2 |
| 20230505T174611Z | 1367343989 | 5.2 |
| 20230505T200419Z | 1367352277 | 0.2 |
| 20230506T182203Z | 1367432541 | 4.7 |
| 20230508T180014Z | 1367604032 | 6+ |
| 20230509T070754Z | 1367651292 | 5.8 |
| 20230510T062635Z | 1367735213 | 3.5 |
| 20230517T163625Z | 1368376603 | 6+ |
| 20230616T161654Z | 1370967432 | 3.4 |
| 20230620T234012Z | 1371339630 | 2.9 |
| 20230621T191615Z | 1371410193 | 2.1 |
| 20230621T211522Z | 1371417340 | 4.0 |
| 20230628T015112Z | 1371952290 | 4.8 |
| 20230716T034950Z | 1373514608 | 6+ |
| 20230727T162112Z | 1374510090 | 6+ |
| 20230802T000812Z | 1374970110 | 2.6 |
| 20230817T214248Z | 1376343786 | 6+ |
| 20230823T213958Z | 1376862016 | 4.3 |
| 20230830T213653Z | 1377466631 | 3.7 |
| 20230906T220850Z | 1378073348 | 3.9 |
| 20230913T183650Z | 1378665428 | 6+ |
| 20230928T193609Z | 1379964987 | 6+ |
| 20231004T190945Z | 1380481803 | 4.7 |
| 20231018T190729Z | 1381691267 | 6+ |
| 20231027T203619Z | 1382474197 | 6+ |
Ideally, the detector should be locked for at least three hours before a sensing function measurement is made, to ensure the thermalization process is complete. However, a couple of measurements were made when the detector had only been locked for about two hours (06/21, 08/02), and one particularly problematic measurement was made when the detector had only been locked for about 10 minutes (05/05). This last measurement should certainly not be included in the GPR calculation.
The code used to obtain the detector lock state and history given a report ID is attached below. Note: To run this code, you will need access to pydarm, so run the following command in the terminal before executing the file: source /ligo/groups/cal/local/bin/activate
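The attached code is not reproduced here, but the report-ID-to-GPS conversion underlying the table can be sketched as follows. This is a minimal stand-in, not the attached pydarm-based script; the 18 s leap-second offset is valid for post-2017 dates.

```python
from datetime import datetime, timezone

# GPS epoch is 1980-01-06 00:00:00 UTC. GPS time does not apply leap
# seconds, so we add the accumulated leap-second offset (18 s since 2017).
GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # valid for dates after 2017-01-01

def report_id_to_gps(report_id: str) -> int:
    """Convert a cal report ID like '20230504T055052Z' to GPS seconds."""
    utc = datetime.strptime(report_id, "%Y%m%dT%H%M%SZ").replace(
        tzinfo=timezone.utc)
    return int((utc - GPS_EPOCH).total_seconds()) + LEAP_SECONDS

# Cross-check against the first row of the table above:
print(report_id_to_gps("20230504T055052Z"))  # -> 1367214670
```

This reproduces the GPS times in the table, e.g. 20231027T203619Z maps to 1382474197.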
report 20230505T200419Z changed to 'invalid' in LHO:75629.
[Julian, Naoki, Camilla, Sheila, Vicky]
Summary to get SQZ alignment beam: Launched 76mW into seed fiber, ~25 mW incident on opo cavity, ~0.85 mW transmitted through opo cavity. Had to find opo transmission past the VIP, for this we used green SK path as a reference. This ~0.85 mW opo transmission was bright on an IR card at the HAM5 gate valve, and enough to iris the SQZ beam in HAM7 and HAM6 (for OMC work 75512). DC 3/4 centering loops engaged easily, then OMC A/B QPD's saw the sqz beam.
----------- Notes from today ------------------------------------------
Launched power into SEED fiber (SQZT0): 76 mW
OPO IR REFL (CLF_TRIG_REFL_DC_POWERMON @ SQZT7): 24.8mW (when opo is dither-locked).
Fiber rejected power PD in HAM7 (CLF_REFL_REJ) is 5.3 mW.
--> Seed fiber coupling: ~34% of the seed fiber launched power was incident on the opo cavity.
--> 40% coupling through fiber, ~6% mispolarized and rejected after fiber. This is similar to recent fiber alignment 75344, even after more recent on-table work 75486.
We had to find the OPO IR transmitted beam after the VIP. Nothing at first despite restoring suspensions 75502. Notes to self on what worked to find the SQZ beam post-vent:
OPO IR TRANS (OPO_IR_PD_DC_POWERMON @ SQZT7): 0.85 mW -- Just before opening the HAM7/5 gate valves. Opened the beam diverter, SQZ beam was bright on an IR card held at the HAM5 gate valve.
After opening gate valve, immediately saw the beam on AS A/B/C QPDs.
ASC-AS_A/B_DC_SUM_OUTPUT ~ 60. ASC-AS_C_NSUM_OUT16 ~ 0.65-0.67.
HAM6 crew irising SQZ beam, 75512.
HAM7 crew irising SQZ beam. Julian has some photos of the HAM6 and HAM7 irises. Sheila -- looks like there is some clipping on the VIP (we have not totally optimized FC_REFL path slider alignments post-vent, just found the beam). Revisit this FC REFL alignment later.
Engaged DC 3/4 centering. It just worked. Control signals near 0.
We see the beam on the OMC QPD's, power is consistent with ASC-AS_C power, around 300-400e-6 on each OMC QPD A/B. Power goes away when SQZ beam diverter is closed. See omc powers screenshot with as_wfs powers.
TO-DO SQZ work later:
tagging for EPO
Accepted ZM1,2,3,4,5,6, FC1,2 OPTICALIGN sliders in sdf. Attached is the photo so we'll know where to bring them back to after pumpdown.
ZM4,5,6 are not monitored - should remember why that is....
Camilla, Naoki, Vicky
Next day 1/23, we tried to help check OMC alignment, but after turning the SQZ laser back on, we didn't find the beam past the VIP at first.
To re-find the sqz beam, we had to move FC1 quite a bit (pitch slider by 100 counts, yaw slider by 65 counts). See FC1 SDF's of today's move, compared to what Camilla just accepted in SDF after we first found the beam yesterday 75517.
Naoki checked FC1 and ZM1-2-3-4-5-6 SUS, and did not see any anomalous movements of the optics.
We left FC1 with an alignment that maximizes signal on both HAM7 FC WFS (RLF QPD's). Both QPD's are now saturated with 75mW into the fiber. With 5mW into the fiber to un-saturate, both QPD's are close-ish to centered.
Hopefully this is enough to use AS A/B WFS centering tomorrow. Today when we tried it, DC3 worked, but DC4 railed as the beam wasn't hitting AS_B well.
Sheila, Camilla, Vicky
We re-found the SQZ beam in HAM6 this morning after opening the ham5 gate valve. Steps taken this morning 1/24, after yesterday using FC1 to align onto the HAM7 FC WFS QPD's:
For convenience while vented, I made the following guardian changes so far:
All guardian edits (+ sqz angle servo flag in sqzparams.py) committed to svn revision 27088.
Attached are Julian's photos of the iris locations on HAM6 and HAM7.
I made a comparison of DARM_IN1_DQ and CAL-DELTAL_EXTERNAL_DQ in the nominal vs the new DARM offloading scheme (the new scheme itself is explained in alog 74887). Data for the NEW_DARM configuration was taken from Dec 21 (alog 74977), when Louis and Jenne successfully transitioned but with calibration that did not make sense.
The main thing to look at is the bottom left panel, red and blue: the coherence between DARM_IN1 and CAL-DELTAL_EXTERNAL in the NEW (red) vs the old (blue) configuration. The blue trace is almost 1, as it should be, but the red drops sharply between 20 Hz and 200 Hz.
This does not make any sense because CAL-DELTAL_EXTERNAL is ultimately a linear combination of DARM_IN1 and DARM_OUT (see https://dcc.ligo.org/G1501518). Since DARM_OUT is linear to DARM_IN1, no matter where and how the noise is generated and no matter how you redistribute the signal in the ETM chain, CAL_DELTAL_EXTERNAL should always be linear to DARM_IN1, therefore coherence should be almost 1.
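This linearity argument can be checked numerically with synthetic data. The toy sketch below (not IFO data) builds y as a linear combination of white noise x and a filtered copy of x; as argued above, the estimated coherence stays near 1 regardless of the filtering.

```python
import numpy as np

def coherence(x, y, nperseg=1024):
    """Welch-averaged magnitude-squared coherence (non-overlapping
    segments, Hann window)."""
    n = (len(x) // nperseg) * nperseg
    w = np.hanning(nperseg)
    X = np.fft.rfft(w * x[:n].reshape(-1, nperseg), axis=1)
    Y = np.fft.rfft(w * y[:n].reshape(-1, nperseg), axis=1)
    pxx = np.mean(np.abs(X) ** 2, axis=0)
    pyy = np.mean(np.abs(Y) ** 2, axis=0)
    pxy = np.mean(X * np.conj(Y), axis=0)
    return np.abs(pxy) ** 2 / (pxx * pyy)

rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 16)
# y = linear combination of x and a low-passed (FIR-smoothed) copy of x,
# loosely analogous to DELTAL_EXTERNAL being built from DARM_IN1 and DARM_OUT
y = 2.0 * x + np.convolve(x, np.ones(8) / 8, mode="same")
c = coherence(x, y)
print(np.median(c))  # close to 1, as the linearity argument predicts
```

No matter how the filtered component is shaped, each frequency bin of y is a fixed multiple of the same bin of x, so coherence only departs from 1 through estimation leakage.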
So what's the issue here?
The only straightforward possibility I see is that somehow excessive numerical noise is generated in the calibration model even with the front end's double precision math. Maybe something is aggressively low-passed and then high-passed, or vice versa, that kind of thing.
It is not an artefact of the single precision math of DTT. Both CAL_DELTAL_EXTERNAL and DARM_IN1 are already well whitened, and they're entirely within the dynamic range of single precision. For example, the RMS of the red CAL-DELTAL_EXTERNAL_DQ trace is ~7E-5 cts. From that number, I'd expect the noise floor due to single precision to be very roughly O(7E-5/10**7/sqrt(8kHz)) ~ O(1E-13) cts/sqrtHz if it's close to white, give or take some depending on details, but the actual noise floor is ~10E-8 cts/sqrtHz. The same can be said for DARM_IN1.
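The back-of-the-envelope estimate above can be written out explicitly. This is just a sketch of the same arithmetic, not code from the original entry; eps here is the float32 machine epsilon (~1.2e-7), which is where the rough "divide by 10**7" comes from.

```python
import numpy as np

def single_precision_floor(rms, fs):
    """Rough white-noise ASD floor [units/sqrtHz] set by float32
    quantization of a signal with the given RMS, spread over the
    Nyquist band fs/2."""
    eps = np.finfo(np.float32).eps  # ~1.19e-07
    return rms * eps / np.sqrt(fs / 2.0)

# CAL-DELTAL_EXTERNAL_DQ example from the text: RMS ~7e-5 cts at 16 kHz
est = single_precision_floor(7e-5, 16384)
print(est)  # order 1e-13 cts/sqrtHz, far below the observed floor
```

Since the observed floor sits several orders of magnitude above this estimate, single-precision quantization in DTT cannot explain the excess.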
It's not numerical noise in the DARM filter either, as the coherence between DARM_IN and SUS-ETMX_L3_LSCINF_L_IN1 (which is the same thing as DARM_OUT for coherence purposes) is 1 from 1 Hz to 1 kHz for both configurations (old -> brown, new -> green). (It looks as if the coherence goes down above 1 kHz for the old config, but that's irrelevant for this discussion, and anyway it's an artefact of DTT's single precision math. See e.g. the top left blue trace (old config DARM_OUT) with an RMS of 20k counts, corresponding to a single-precision noise floor of O(2E-5)/sqrtHz, give or take some; see where the actual noise floor is.)
It's not a glitch; the noise level of the CAL_DELTAL_EXTERNAL spectrum didn't change much from one FFT to the next over the entire window (I used N=1 exponential averaging to confirm this).
Note that there's also a possibility that excessive noise is generated in the SUS front end too, polluting DARM_IN1 for real, not just in the calibration model. I cannot tell whether that's the case for now. The difference between the green (new) and brown (old) DARM_IN1 spectra in the top left panel could just be a difference in gain peaking due to the different DARM loop shapes.
I'll see if the double precision channels (recorded as doubles) in the calibration model are useful to pinpoint the issue. Erik modified the test version of DTT so it handles double precision numbers correctly without casting to single, but it's crashing on me at the moment.
Some more time windows to look into while we were in the NEW DARM state are listed at LHO:75631.
Louis, Jenne, TJ, Sheila
Today we continued to try to transition to the new DARM configuration, which we had succeeded in doing in December but weren't able to repeat last week (75204).
In our first attempt today we tried a faster ramp time, 0.1 seconds. This caused immediate saturation of ETMX ESD. We struggled to relock because of the environment.
Because Elenna pointed out that the problem at roughly 3 Hz in the earlier transition attempts might have been the soft loops, we tried doing the transition before the soft loops are engaged, after the other ASC is on. We first tried this before transitioning to DC readout, which wouldn't work because of the DARM filter changes. Then we made a second attempt at DC readout. We again lost lock due to a 2 Hz oscillation, even without the soft loops on.
Some GPS times of transitions and attempts:
Adding two more times to this list:
The second screenshot here shows the transitions from Dec 13th, 14th, and 19th. These are three slightly different configurations of the UIM filters and variations on which PUM boosts were on when we made the transition. On the 14th the oscillation was particularly small; this was with our new UIM filter (FM2 + FM6) and with both PUM boosts (L2 LOCK FM1,2,10) already on during the transition. This is the same configuration that failed multiple times in the last two weeks.
Today I went back to three of these transitions: December 14th (1386621508, successful, no oscillation) and Jan 4 (1388444283) + Jan 5th (1388520329), which were unsuccessful attempts. It also seems as though the only change to the filter file since the Dec 14th transition is a change to copy the Qprime filter into L1 drivealign, which has not been used in any of these attempts (it can't be used because tidal is routed through drivealign).
In short, it doesn't seem that we made a mistaken change to any of these settings between December and January that caused the transition to stop working.
| Filter bank | SWSTAT | Filters engaged |
|---|---|---|
| L1 DRIVEALIGN L2L | 37888 | no filters on |
| L1 LOCK L | 37922 | FM2,6 (muBoostm, aL1L2) |
| L2 DRIVEALIGN L2L | 37968 | FM5,7 (Q prime, vStopA) |
| L2 LOCK L | 38403 | FM1,2,10 (boost, 3.5, 1.5:0^2, cross); on the 5th FM1+2 were ramping while we did the transition |
| L3 DRIVEALIGN L2L | 37888 | no filters on |
| L3 LOCK L | 268474240 | FM8, FM9, FM10, gain ramping for 5 seconds (vStops 8+9, 4+5, 6+7) |
| ETMX L3 ISCINF L | 37888 | no filters on |
| DARM2 | 38142 | FM2,3,4,5,6,7,8 |
| DARM1 | 40782 | FM2,3,4,7,9,10 |
I added start and end time windows for the successful transitions in LHO:75631.