Ibrahim, Oli, Betsy, Arnaud, Fil
Context: In December ('23) we were having issues confirming that the damping, OSEMs, electronics, and model were working (or rather, determining which of them wasn't working).
I have more thorough details elsewhere but in short:
Eventually, we were able to go through Jeff and Oli's alog 74142. Here is what was found:
All "crude push around offsets" in the test bank yielded positive drives in the damp channels. These are the ndscope screenshots. Different offsets were needed to make the offset change more apparent in the motion (such as with L). A minimum of 1,000 was arbitrarily chosen and was usually enough.
Transfer Functions: where it gets interesting... (DTT Screenshots)
In these DTTs, the reference traces (black) are the transfer functions without damping, while the red traces are with damping.
All "translation" degrees of freedom (L, V, T) showed correct damping, peak location and resonance
All "rotation" degrees of freedom (P, R, Y) showed completely incorrect damping, usually showing shifted peaks to the right (higher freq).
In trying to figure out why this is, we asked:
(In)conclusion:
It seems that whenever the OSEMs push in the same direction, everything goes as planned, which is why all of the translation damping works. When we ask the OSEMs to push in opposing directions with respect to one another, though, they seem to freak out. This is the main "discovery" from finally getting the transfer functions.
This is the "for now" update - will keep trying to find out why until expertise becomes available.
Rahul, Ibrahim, Austin
Context: After hopping off the TS call, where we decided to try the Pitch TF again with reduced damping gain, I met with Rahul and Austin in the control room and we decided to check some more basic OSEM health first.
Something I forgot: when Oli and I were in the control room taking the transfer functions the first time around, I noticed that for the rotational degrees of freedom (P, Y, R) the OSEM outputs were railing immediately (both visibly in the output values and on the overflow page). I wondered whether I should re-do the TFs without the saturations by empirically reducing the gain until it no longer overflowed. I ultimately kept the nominal gain of -1 in order to report the initial "this is how bad it is" results. This will become relevant later.
Rahul was concerned that the spectra for the OSEMs on M1 were too noisy, so we took spectra of the OSEMs themselves to see if this was the case... and it was. These are the screenshots below. We took them with and without damping to check whether the damping works, and it does not seem to be working exactly as it should. Additionally, the <10Hz noise is 1-2 orders of magnitude too high according to Rahul. This is a much more "up the chain" (down the chain?) issue and could produce the weirdness we're seeing at the TF level. Why is this?
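For anyone repeating these spectra offline, a minimal gwpy sketch along the following lines could be used; the channel name and time spans below are placeholders, not the ones we actually measured:

from gwpy.timeseries import TimeSeries

chan = 'H1:SUS-HXTS_M1_DAMP_L_IN1_DQ'  # placeholder OSEM/damp channel
undamped = TimeSeries.get(chan, '2023-12-20 20:00', '2023-12-20 20:10')
damped   = TimeSeries.get(chan, '2023-12-20 21:00', '2023-12-20 21:10')

# 0.1 Hz resolution ASDs to resolve the <10 Hz excess
asd_off = undamped.asd(fftlength=10, overlap=5)
asd_on  = damped.asd(fftlength=10, overlap=5)

plot = asd_off.plot(label='damping off')
ax = plot.gca()
ax.plot(asd_on, label='damping on')
ax.set_xlim(0.1, 100)
ax.legend()
plot.show()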
The Plan
Following these quick checks - once I'm out of the Staging Building:
Minor tasks also include:
Updates incoming.
TITLE: 01/03 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
- Commissioning period took place from 20:00 - 23:00 UTC
- EX saturation @ 22:58
- 15:02 - Temperature alert for the chillers at MX
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:30 | FAC | Karen | OLab/Vac prep | N | Tech clean | 17:55 |
18:33 | FAC | Karen | MY | N | Tech clean | 19:33 |
19:08 | FAC | Kim | MX | N | Tech clean | 21:08 |
21:09 | CDS | Erik | EX/EY | N | Swap HWS server | 23:09 |
22:10 | VAC | Janos | MX | N | VAC checks | 22:35 |
One of the things we had on our to-do list with the cold OM2 was to check if there was a different OMC alignment that would improve our optical gain. I moved the OMC QPD offsets around a bit, and I can certainly make the optical gain worse. I think I found a place where we've got about 0.5% more optical gain (kappa_c went from 1.010 to 1.015-ish), so I've accepted those QPD offsets in both our Observe and safe.snap files (see the observe.snap screenshot attached).
The second attachment shows that, while I didn't raster, I went both directions with pit and yaw on both the A and B QPDs, and there weren't any dramatically better places. The one peak where the optical gain poked up as high as 1.018 seems to just be a fluctuation; we've been sitting in the same alignment (according to both the QPD offsets and the OM3 and OMC OSEMs) and haven't seen that again. Despite the fluctuations, our average now seems to consistently be above where it was before today's commissioning period began.
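As a quick cross-check of that last claim, something like the following gwpy snippet could be used to compare the average kappa_c before and after the change; the channel name is my assumption for the CAL-CS time-dependent optical gain, and the time spans are placeholders:

from gwpy.timeseries import TimeSeries

chan = 'H1:CAL-CS_TDEP_KAPPA_C_OUTPUT'  # assumed kappa_c channel
before = TimeSeries.get(chan, '2024-01-03 18:00', '2024-01-03 20:00')
after  = TimeSeries.get(chan, '2024-01-03 23:00', '2024-01-04 01:00')

print('mean kappa_c before:', before.mean())
print('mean kappa_c after: ', after.mean())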
WP11598 Upgrade HWS computer hardware
Jonathan, Erik, TJ, Camilla, Dave:
Yesterday Jonathan and Erik replaced the original h1hwsmsr computer with a spare V1 computer. They moved the bootdisk and the /data RAID disks over to the new computer, and restored the /data NFS file system for the ITMY HWS code (h1hwsmsr1). At the time the new computer was not connecting to the ITMX HWS camera.
This morning Camilla worked on initializing the camera connection, and we were then able to control the camera and see images from it.
This afternoon at 1pm, during the commissioning period, we stopped the temporary HWS ITMX IOC on cdsioc0 and Camilla started the actual HWS ITMX code on h1hwsmsr. We verified that the code is running correctly, images are being taken, and settings were restored from SDF.
During the few minutes between stopping the dummy IOC and starting the actual IOC, both the EDC and the h1tcshwssdf SDF had disconnected channels, which then reconnected over the subsequent minutes after the channels were restored.
Erik is building a new h1hwsex computer and will install it at EX in the next hour.
h1hwsex had crashed and only needed a reboot.
I installed the "new" V1 server as h1hwsey at EY. It's physically connected and running, but is not on the network. It requires some more in-person which we'll do Friday or earlier when out of observe.
I've restarted the camera control software on h1hwsex (ETMX) and h1hwsmsr (ITMX). All the HWS cameras are now off (external trigger mode).
As Naoki did in 75023, with SQZ_ANG_ADJUST in DOWN, I adjusted the OPO temperature and H1:SQZ-ADF_OMC_TRANS_PHASE to improve SQZ in the yellow BLRMS (350 Hz region). Our SQZ angle has changed from 167 to 180 degrees. Since I did this, the servo has moved the angle away from and back towards this optimum of 180 degrees; we will continue to watch.
BLRMS at 350 Hz and below improved from this change; the other BLRMS remained the same. The change in SQZ angle came from Jenne's adjustments of the OMC QPD offsets, affecting the SQZ ASC (expected) and then the SQZ angle servo.
Plots of SQZ angle and SQZ ASC attached.
When I changed the OPO temp and SQZ angle, AS42_SUM_NORM almost halved; I'm unsure whether that is expected (see upper left of the plot).
I ran a broadband calibration suite at 20:00:40 UTC using pydarm measure --run-headless bb
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240103T200048Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240103T200048Z.xml saved
diag> quit
EXIT KERNEL
INFO | bb measurement complete.
INFO | bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240103T200048Z.xml
INFO | all measurements complete.
After this completed I ran the simulines sweep using gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/src/simulines/simulines/settings_h1.ini;gpstime
GPS start: 1388347800
GPS stop: 1388349124
2024-01-03 20:31:46,637 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240103T200944Z.hdf5
2024-01-03 20:31:46,656 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240103T200944Z.hdf5
2024-01-03 20:31:46,668 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240103T200944Z.hdf5
2024-01-03 20:31:46,680 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240103T200944Z.hdf5
2024-01-03 20:31:46,691 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240103T200944Z.hdf5
Attached is a screenshot of the calibration monitor, and the pydarm report.
H1 is still locked; we will begin commissioning at 20:00 UTC, running until 23:00 UTC in coordination with LLO. The main things going on during this period are a calibration suite, SQZ work, and a CAL DARM measurement.
Wed Jan 03 10:06:32 2024 INFO: Fill completed in 6min 29secs
Gerardo confirmed a good fill curbside
TITLE: 01/03 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 1mph Gusts, 0mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.43 μm/s
QUICK SUMMARY:
- H1 currently on a 16 hour lock
- Seismic activity is low, CDS/DMs ok
Yesterday John Z restarted the BLRMS GDS monitor; the control room SEIS BLRMS FOM on nuc5 started displaying data from 9pm onwards.
Verbal Alarms, which had been reporting leaps.py issues, started working normally yesterday afternoon. We currently don't understand the original error or why it cleared up.
TITLE: 01/03 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Other than the one hitch related to the Leap Second, this was a fairly nice and smooth shift post-Maintenance.
LOG:
After the glitches last night, I went to HAM3 to do the usual fixes. I powered off the corner 2 CPS, unplugged the boards in the satellite chassis at the chamber, put it all back together, and powered everything back on. The CPS haven't glitched since then, so it seems like things may be fine now.
The 65-100 Hz BLRMS are a good witness of the glitches for a couple of hours before they started tripping the HAM3 ISI. The attached trend shows about 5 hours before the first HAM3 trip on Monday. The top row is the raw H2 and V2 CPS in counts, the middle row is the watchdog state, and the bottom row is the 65-100 Hz BLRMS for corner 2 and corner 3. The H2 and V2 CPS start seeing glitches that don't trip the ISI about 3 hours before the first trip; these glitches don't really show up in the corner 3 CPS. The glitches also don't coincide with locklosses, as long as the ISI doesn't trip. Under normal circumstances these BLRMS are well below 10 nm; the first few glitches reach up to 600 nm, while a glitch of ~1000 nm causes the ISI to trip. There haven't been any glitches since I touched the CPS yesterday, so I think we are in the clear for now.
I'm still not sure of the right way to alarm on this, but some sort of days-long timeseries trend of these BLRMS whenever an ISI trips on CPS would probably be a good place to start.
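As a starting point, here is a sketch of what such a check could look like; the BLRMS channel name, threshold, and times are placeholders, and the real corner-2 65-100 Hz CPS BLRMS channel would be substituted:

import numpy as np
from gwpy.timeseries import TimeSeries

chan = 'H1:ISI-HAM3_BLRMS_X_CPS_65_100'  # hypothetical channel name
blrms = TimeSeries.get(chan, '2024-01-01 12:00', '2024-01-01 20:00')

# Normal level is well below 10 nm; glitches reach ~600 nm and ~1000 nm trips
# the ISI, so flag anything above 100 nm as a glitch candidate.
threshold = 100
mask = blrms.value > threshold
times = blrms.times.value[mask]

print(len(times), 'samples above', threshold)
if len(times):
    print('first excursion at GPS', times[0])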
At 23:37 Sun 31 Dec 2023 PST the h1hwsmsr computer crashed. At this time: the EDC disconnect count went to 88, the Slow Controls SDF (h1tcshwssdf) discon_chans count = 15, and GRD DIAG_MAIN could not connect to the HWS channel.
The main impact on the IFO is that the ITMX HWS camera cannot be controlled and is stuck in the ON state (taking images at 7Hz).
Time line for camera control:
23:22 Sun 31 Dec 2023 PST | Lock Loss, ITMX and ITMY cams = ON |
23:37 Sun 31 Dec 2023 PST | h1hwsmsr computer crash, no ITMX cam control |
04:37 Mon 01 Jan 2024 PST | H1 lock, ITMY cam = OFF, ITMX stuck ON |
Tagging DetChar in case the 7Hz comb reappears since the ITMX HWS camera was left on for the observing stretch starting this morning at 12:41 UTC.
I also removed ITMX from the "hws_loc" list in the HWS test in DIAG_MAIN and restarted the node at 18:08 UTC so that DIAG_MAIN could run again and clear the SPM diff (tagging OpsInfo). This did not take H1 out of observing.
Similar to what I did on 23 Dec 2023 when we lost h1hwsex, I have created a temporary HWS ITMX dummy IOC, which is running under a tmux session on cdsioc0 as user=ioc. All of its channels are zero except for the 15 being monitored by h1tcshwssdf, which are set to the corresponding OBSERVE.snap values.
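For the record, here is a minimal sketch of one way such a dummy IOC can be stood up in Python with pcaspy (not necessarily how this one was built; the PV names and values are placeholders standing in for the 15 SDF-monitored channels):

from pcaspy import SimpleServer, Driver

prefix = 'H1:TCS-ITMX_HWS_'            # hypothetical channel prefix
pvdb = {
    'CAM_FRAME_RATE': {'prec': 1, 'value': 7.0},    # placeholder OBSERVE.snap value
    'CAM_STATE':      {'type': 'int', 'value': 1},  # placeholder OBSERVE.snap value
}

class DummyDriver(Driver):
    """Serves static values only; there is no hardware behind these PVs."""
    pass

if __name__ == '__main__':
    server = SimpleServer()
    server.createPV(prefix, pvdb)
    driver = DummyDriver()
    while True:          # serve Channel Access requests (run inside tmux)
        server.process(0.1)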
EDC and SDF are back to being GREEN.
The H1:PEM-CS_MAG_LVEA_OUTPUTOPTICS_Y_DQ channel (74900) shows that the 7 Hz comb has been present since 07:37 UTC 01 Jan 2024, when the h1hwsmsr computer crashed. The plan is to restart the code that turns the camera off during locks (74951) during commissioning today.
In 75124 Jonathan, Erik, and Dave replaced the computer, and today we were again able to communicate with the camera (we needed to use the alias init_hws_cam='/opt/EDTpdv/initcam -f /opt/EDTpdv/camera_config/dalsa_1m60.cfg'). At 18:25-18:40 UTC we adjusted the frame rate from 7 Hz to 5 Hz, then off, and left it back at 7 Hz. We plan to stop Dave's dummy IOC and restart the code later today. Once this is successful, the CDS team will look at replacing h1hwsex (75004) and h1hwsey (73906). Erik has WP 11598.
From 23:35 UTC these combs are gone (75159).
STATE of H1: Observing at 154Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.94 μm/s
QUICK SUMMARY:
In-lock SUS charge measurements likely caused the lockloss this morning. There was a PI message at the same time, but the PI didn't seem high enough to break the lock.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1387641172
Relocking started before 1600 UTC.
16:45 UTC NOMINAL_LOW_NOISE reached and OBSERVING at 16:51 UTC
SQZ manager dropped us into commissioning at 17:22 UTC
Back to Observing at 17:24 UTC
N2 truck arrived at Y end around 17:34.
I missed the time that the N2 truck left; I believe it was shortly after 1900 UTC.
The temperatures in the VPW ranged from 66F to 73F. I'm not sure what the correct range should be, but the sensor that read 73F was nearest the warm server exhaust on that side of the room.
Attached is a plot of today's noisy events related to the LN2 delivery to the tank for CP7. Since the IFO was locked, I'm flagging the respective groups.
We lost lock during the SETUP step of ESD_EXC_ETMX (plot attached). ETMX_L3_DRIVEALIGN_L2L (bottom right) had an output, but I don't think it should have, as the feedback is on ITMX at this point and the excitation hadn't started yet. We should check this before next Tuesday.
ESD_EXC_ETMX log before lockloss:
2023-12-26_15:52:32.263999Z ESD_EXC_ETMX [SETUP.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_TRAMP => 1
2023-12-26_15:52:32.264996Z ESD_EXC_ETMX [SETUP.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 1
2023-12-26_15:52:33.394157Z ESD_EXC_ETMX [SETUP.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_SW1S => 0
2023-12-26_15:52:33.645345Z ESD_EXC_ETMX [SETUP.main] ezca: H1:SUS-ETMX_L3_DRIVEALIGN_L2L => ONLY ON: OUTPUT, DECIMATION
Looking at a successful ETMX SETUP state, you can see that there is still a short period where ETMX_L3_DRIVEALIGN_L2L has an output; the IFO just survives the glitch. This happens when the gain is changed from 0 to 1 (to allow the excitation through) but before the INPUT is turned off. I've swapped the order of these two lines and added a 1 second sleep between them to make sure the input is turned off before the gain is ramped to 1. The edit has been saved and will be reloaded during the next commissioning period.
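For clarity, here is a sketch of what the reordered SETUP logic looks like; it mirrors the channels in the log above but is not a verbatim copy of the guardian code:

import time
from ezca import Ezca

ezca = Ezca()  # inside guardian this object is provided automatically

ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_TRAMP'] = 1
# Turn the bank INPUT off first so no control signal can leak through...
ezca.switch('SUS-ETMX_L3_DRIVEALIGN_L2L', 'INPUT', 'OFF')
time.sleep(1)
# ...and only then ramp the gain to 1 to let the excitation through.
ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN'] = 1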