Two photos of SQZT0: (1) shows the existing EOM on the green path, and (2) shows the location for the new SHG EOM.
At the start of commissioning at 19:01 UTC, we went out of observing to turn CO2X power up from 1.53 to 1.67 W. The power had dropped 7% since May, see 72943. I've edited lscparams and reloaded the TCS_ITMX_CO2_PWR guardian, expecting we'll want to keep this change.
Over the last 2 weeks CO2X power has dropped 0.04 W (2%), plot attached. We should keep an eye on this and maybe bump up the requested power again in the coming weeks. We plan to replace the CO2X chiller when a new one arrives. We may also want to replace the laser with the re-gased laser.
H1 has dropped observing as of 19:12 UTC for scheduled commissioning in coordination with L1.
H1 has been locked for just over 1 hour.
FAMIS 25957, last checked in alog 72829
BSC:
The script reports that ITMX_ST2_CPSINF_H3 & H1 are elevated
HAM:
HAM7_CPSINF_V3 & V2 have reduced at high frequency
Everything else looks nominal
In short - I've added a bit to the ALS_DIFF node in the FIND_IR and SAVE_OFFSET states. The goal was to have it handle the case seen the other night where it goes to an offset and there is the smallest bit of light above the noise, but it's still below threshold and won't fine-tune. Just lowering the threshold won't work since this noise seems to vary a bit, so I added in a few other checks.
I didn't fully test this since we've already been down for a few hours from an earthquake and I would have had to restart locking. If this causes any problems, please give me a call or revert ALS_DIFF.py in SVN. I already have a copy saved.
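For illustration only, here is a minimal sketch of the kind of check that was added; this is not the actual ALS_DIFF.py code, and the threshold values and noise estimate are placeholders:

```python
# Illustrative sketch only -- not the real ALS_DIFF.py logic.
# Threshold values and the noise estimate are placeholders.

FINE_TUNE_THRESHOLD = 0.02   # nominal buildup needed to start fine tuning
NOISE_MARGIN = 3.0           # how far above the noise floor counts as "real" light

def light_found_at_offset(transmission, recent_dark_samples):
    """Return True if the IR buildup at this offset looks like real light.

    Accept either the usual case (buildup above the fine-tune threshold)
    or the marginal case seen the other night: a small but consistent
    signal that clearly stands above the measured noise floor.
    """
    noise_floor = max(recent_dark_samples)   # crude noise estimate
    above_threshold = transmission > FINE_TUNE_THRESHOLD
    above_noise = transmission > NOISE_MARGIN * noise_floor
    return above_threshold or above_noise
```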
I had a chance to test it out a bit more thoroughly after a lock loss around 1:30 PT. This time I forced it into a situation where it would overshoot resonance and have a very small amount of light in the signal. It then jumped back and got close enough to cross the fine-tuning threshold, moving the node into the next state to fine-tune.
This situation has been the most frequent cause of the node not finding IR resonance this year, so hopefully this helps reduce assistance-required requests. Next steps for this node are to make a "long search" state that will search through the full range for good buildups, and to fix the cases where it fine-tunes indefinitely, circling around the correct value. This latter issue should just be a minor tuning change, but it's challenging to test.
Wed Sep 20 10:08:30 2023 INFO: Fill completed in 8min 26secs
Jordan confirmed a good fill curbside. Note that because the outside temp is low (58F, 14C) the TCs flatline at a higher temp: TCA=-175C, TCB=-166C. We will monitor this to see when the trip temp needs to be raised.
Hepi pumps are running smoothly. No changes.
Dust monitors are running smoothly and within temp range.
This was a good test of our new Picket Fence monitor EPICS channels. The PKT circle on the CDS Overview is red, and the Picket Fence MEDM (attached) shows that the last update was many minutes ago (it should never exceed 4 seconds).
I'm taking a look at the server and will restart it.
No need to restart the Picket Fence process: after almost exactly 20 minutes, at 09:53 PDT, the service sprang back to life and is updating normally now, with nothing done at our end.
Good to know that the automatic restart is working! Also, very nice detailed interface, I'm a big fan.
It's a bit awkward that it took almost 20 minutes to come back. I'd prefer it to be closer to 5 minutes; I'll check the code to see if there's anything amiss. There were no alerts/logs on the copy we're running at Stanford, so I assume this is not a problem on the USGS side.
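A minimal sketch of the kind of staleness check the PKT indicator implements, assuming a channel that publishes the time of the last update (the channel name and time convention here are made up, not the deployed monitor code):

```python
# Rough sketch of a staleness check like the PKT indicator on the CDS Overview.
# The channel name is a placeholder, not the real Picket Fence EPICS record.
import time
from epics import caget  # pyepics

STALE_AFTER = 4.0  # seconds; updates should never be older than this

def picket_fence_is_stale(channel="H1:EXAMPLE-PKT_LAST_UPDATE"):
    """Return True if the monitor's last update looks stale or unreadable."""
    last_update = caget(channel)     # assumed to hold a UNIX time of last update
    if last_update is None:          # channel unreachable -> treat as stale
        return True
    return (time.time() - float(last_update)) > STALE_AFTER
```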
TITLE: 09/20 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 10mph 5min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.30 μm/s
QUICK SUMMARY: H1 recently lost lock at 14:38 with an ETMX saturation immediately before (looking into it), but relocking is challenging as SEI_ENV just entered earthquake mode. USGS doesn't report any notable quakes, but Picket Fence certainly is seeing activity.
I started the well pump to replenish the fire water tank. The pump will automatically shut off in 4 hours.
TITLE: 09/19 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
- Reacquired NLN @ 16:35 UTC, OBSERVE @ 16:55
LOG:
No log for this shift.
H1 has now been locked for 3.5 hours, range currently at 150 Mpc. Seismic motion is low, winds are moderate (~20 mph), but otherwise a quiet night so far.
Follow-up investigation to Ryan's alog from this morning, indicating that recently we have had issues getting IR in the arms and subsequently losing lock at CARM to TR on reacquisitions. In addition, it has been noted that despite recently running initial alignments, ALS (particularly Y) would have to go through increase flashes for it to catch. I did a bit of digging, and these are my findings.
I decided to trend back some channels to look for potential drift on the Y arm, particularly in the past 24 hours. Plot 1 is a graphical representation of the Y arm witness sensors and oplev P/Y/sums for E/ITMY, in addition to some wind and temperature plots to see if I could spot any coupling. It's a bit hard to read, but the point of this plot is mostly to see if anything immediately sticks out as direct evidence of the cause of our Y arm problems. From this plot, nothing really stands out; there are no temperature swings in the VEA, nor any consistently high wind speeds that could explain this recent issue, and no egregious shifts in any of the ETMY or ITMY channels either.
However, I wanted to narrow down some of these channels to take a closer look, and in doing so, found something interesting. Plot 2 shows the oplev sum counts for the Y arm suspensions, and I found that the sum counts have been steadily decreasing over the past week, with ITMY being the worse of the two. According to the plot, the sum counts for ETMY have dropped by roughly 350 counts, while ITMY has dropped by 1350 counts. Plot 3 shows the same channels, but with the oplev P/Y readbacks added. Interestingly, despite the drop in oplev sum counts, the pitch and yaw for both Y arm oplevs don't seem to have moved much, at least not in parallel with the sum count drop. However, something did appear to happen yesterday (~9/19 2:25 PDT / 9:25 UTC), as denoted by the vertical lines in plot 3, where there was a hop in the P/Y readbacks for the Y arm oplevs followed by a more drastic drop in sum counts (at a higher rate than the original drop leading up to it). So what happened during this time? A lockloss! (Note: a lockloss is expected to drastically change the sum/pitch/yaw counts, but the point I'm trying to make is that the oplev P/Y counts jumped and stayed at those increased values for an extended period of time, which is not seen in other locklosses.)
It does look like the P/Y oplev readbacks eventually made their way back to a roughly nominal state, but the process took roughly a day (could this be why we had alignment issues when relocking yesterday?). Plot 4 takes a closer look at the "hop" I mentioned earlier, using the ETMY/ITMY witness sensors, and from what I see, there is some shift, but not a whole lot. However, we again see both the witness sensors and oplev values remaining at the elevated value for a period of at least 12 hours, not dropping back down to nominal. Plot 5 shows the Y arm slider values following the lockloss, and again, not a whole lot of movement. It might be noteworthy that ITMY P has moved ~1 microradian since the lockloss event, which is a lot for ITMY.
All this to say, I'm not sure I can call this weird movement/drift the smoking gun, but it might be worth looking into. A quick search through recent alogs of commissioning/site events that could potentially relate to this issue also turned up nothing. I think the ITMY oplev sum counts dropping as much as they did is pretty suspicious, but then again, it could just be a coincidence (or the oplev laser itself dying). I do think it's possible that that particular lockloss kicked something more aggressively than normal, which could then have caused misalignment issues when reacquiring lock in the Y arm. Or maybe it's just a red herring and not related at all. Maybe all this information is a red herring. Who knows ¯\_(ツ)_/¯
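For anyone who wants to reproduce the oplev sum trend, a sketch along these lines should work; the channel names are illustrative guesses (not checked against the H1 channel list) and gwpy is assumed to be available:

```python
# Sketch of pulling the week-long oplev sum trend with gwpy.
# Channel names are illustrative guesses -- verify before use.
from gwpy.timeseries import TimeSeriesDict

channels = [
    "H1:SUS-ITMY_L3_OPLEV_SUM_OUT16",   # guessed names
    "H1:SUS-ETMY_L3_OPLEV_SUM_OUT16",
]
data = TimeSeriesDict.get(channels, "2023-09-13", "2023-09-20")

for name, series in data.items():
    drop = series.value[0] - series.value[-1]
    print(f"{name}: dropped {drop:.0f} counts over the week")

plot = data.plot()
plot.gca().set_ylabel("Oplev sum [counts]")
plot.savefig("oplev_sum_trend.png")
```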
The X-End RGA was powered off today at 18:05 UTC; this unit was a bit noisy (fan noise). Also, it is known that this unit has an internal leak located on its main isolation valve.
Around the same time, I found a wireless phone that was ON. It was located at the termination slab, on the west side of the beam tube, next to the magnetic coil. The power block was connected to an outlet, and the phone was powered on and complaining about not being able to find its base. The power block was disconnected and the batteries were removed from the phone. See this aLOG for a visual location of the magnetic coil.
Erik, Camilla
The code described by Huy-Tuong Cao in 72229 was already installed on ETMY. Today Erik installed conda on h1hwsex and we got the new code running. I had to take a new ETMX reference while the IFO was hot (old py2 pickle files have a different format/encoding than py3), so next Tuesday I should re-take this reference with a cold IFO. This update can be done on the ITMs tomorrow.
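For reference, the usual workaround for reading a Python-2-era pickle from Python 3 looks roughly like this; whether the HWS reference files are plain pickles of this form is an assumption, and re-taking the references under py3 (as done here) sidesteps the issue entirely:

```python
# Rough sketch of loading a Python-2-era pickle from Python 3.
# Whether the HWS references are plain pickles like this is an assumption.
import pickle

def load_py2_reference(path):
    """Load an old py2 pickle; latin-1 decoding handles py2 byte strings."""
    with open(path, "rb") as f:
        return pickle.load(f, encoding="latin1")
```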
To install conda or add it to the path, Erik ran command: '/ligo/cds/lho/h1/anaconda/anaconda2/bin/conda init bash'
Then after closing/reopening a terminal I can pull the hws-server/-/tree/fix/python3 code:
Stop and restart the code after running 'conda activate hws' to use the correct python paths.
Had a few errors that needed to be fixed before the code ran successfully:
New code now running on all optics. Took new references on ITMs and ETMX.
New ETMX reference taken at 16:21 UTC with ALS shuttered, after the IFO had been down for 90 minutes. Installed at ITMX, new reference taken at 16:18 UTC. Installed at ITMY, new reference taken at 16:15 UTC. I realized that where I previously edited code in /home/controls/hws/HWS/, I should have just pulled the new fix/python3 HWS code, so I went back and did this for the ITMs and ETMX.
ITMX SLED power has quickly decayed, see attached, so I adjusted the frame rate from 5 Hz to 1 Hz, instructions in the TCS wiki page. This SLED was recently replaced (71476) and was a 2021 SLED, so not particularly old. LLO has seen similar issues (66713), but over hours rather than weeks. We need to decide what to do about this; we could try to touch the trimpoint or contact the company. The ITMY SLED is fine.
There were two separate errors on ITMY that stopped the code in the first few minutes, a "segmentation fault" and a "fatal IO error 25". We should watch that this code continues to run without issues.
ITMY code keeps stopping and has been stopped for the last few hours. I cannot access the computer; maybe it has crashed? There is an orange light on h1hwsmsr.
ITMX spherical power is very noisy. This appears to be because the new ITMX reference has a lot of dead pixels; I've noted them but haven't yet added them to the dead pixel files, as I cannot edit the read-only file (TCS wiki link).
ITMY - TJ and I restarted h1hwsmsr1 in the MSR and I restarted the code. We were getting a regular "out of bounds" error, attached, but the data now seems to be running fine. The fix/python3 code didn't have all the master commits in it, so when we restarted msr1 some of the channels were not restarted, found by Dave in 73013. I've updated the fix/python3 code, and we should pull this commit and kill/restart the ioc and hws itmy code tomorrow.
ITMX - the bad pixels were added and the data is now much cleaner; updated instructions in the TCS wiki.
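The kind of masking involved is roughly the following; this is a generic numpy sketch, and the actual HWS dead-pixel file format and handling are not reproduced here:

```python
# Generic sketch of masking known-dead pixels in a reference frame with numpy;
# the real HWS dead-pixel file format is not shown here.
import numpy as np

def mask_dead_pixels(frame, dead_pixels):
    """Replace listed (row, col) dead pixels with NaN so they are ignored."""
    masked = frame.astype(float).copy()
    for row, col in dead_pixels:
        masked[row, col] = np.nan
    return masked

# Example: a noisy 4x4 frame with two dead pixels flagged
frame = np.random.rand(4, 4)
clean = mask_dead_pixels(frame, dead_pixels=[(0, 3), (2, 1)])
```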
Pulled the new python3 code with all channels to both ITM and ETMX computers (already on ETMY). Killed and restarted the softIoc on h1hwsmsr1 for ITMY, following the instructions in 65966. The 73013 channels are now running again.
Took new references on all optics at 20:20 UTC after the IFO had been unlocked and the CO2 lasers off for 5 hours. RH settings nominal: IX 0.4 W, IY 0.0 W, EX 1.0 W, EY 1.0 W.
Detchar, please tell us if the 1.66Hz comb is back.
We changed the OM2 heater driver configuration from what was described in alog 72061.
We used a breakout board with jumpers to connect all OM2 thermistor readback pins (pin 9, 10, 11, 12, 22, 23, 24, 25) to Beckhoff at the back of the driver chassis. Nothing else (even the DB25 shell on the chassis) is connected to Beckhoff.
Heater voltage inputs (pin 6 for positive and 19 for negative) are connected to the portable voltage reference powered by a DC power supply to provide 7.15V.
BTW, somebody powered the OM2 heater off at some point in time, i.e. OM2 has been cold for some time but we don't know exactly how long.
When we went to the rack, half of the power supply terminal (which we normally use for 9V batteries) was disconnected (1st picture), and there was no power to the heater. Baffling. FYI, if it's not clear, the power terminal should look as in the second picture.
Somebody must have snagged the cables hard enough to disconnect it, and didn't even bother to check.
Next time this happens, reconnecting is NOT good enough: read alog 72286 to learn how to set the voltage reference to 7.15V and turn off the auto-turn-off function, then do it, and tell me you did it. I promise I will thank you.
Thermistors are working.
There is an OSEM pitch shift of OM2 at the end of the maintenance period 3 weeks ago (Aug 29)
Having turned the heater back on will likely affect our calibration. It's not a bad thing, but it is something to be aware of.
Indeed it now seems that there is a ~5Mpc difference in the range calculations between the front-ends (SNSW) and GDS (SNSC) compared to our last observation time.
It looks like this has brought back the 1.66 Hz comb. Attached is an averaged spectrum for 6 hours of recent data (Sept 20 UTC 0:00 to 6:00); the comb is the peaked structure marked with yellow triangles around 280 Hz. (You can also see some peaks in the production Fscans from the previous day, but it's clearer here.)
To see if one of the Beckhoff terminals for thermistors is kaput, I disconnected thermistor 2 (pins 9, 11, 22 and 24) from Beckhoff at the back of the heater driver chassis.
For a short while the Beckhoff cable itself was disconnected but the cable was connected back to the breakout board at the back of the driver chassis by 20:05:00 UTC.
Thermistor 1 is still connected. Heater driver input is still receiving voltage from the voltage reference.
I checked a 3-hour span starting at 04:00 UTC today (Sept 21) and found something unusual. There is a similar structure peaked near 280 Hz, but the frequency spacing is different. These peaks lie on integer multiples of 1.1086 Hz, not 1.6611 Hz. Plot attached.
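A quick way to check which spacing fits a set of measured peak frequencies, assuming the peak list is already in hand; the peak values below are made up for illustration and chosen to lie near 280 Hz:

```python
# Check whether measured comb peaks sit on integer multiples of a candidate
# spacing. Peak frequencies below are illustrative, not the measured values.
import numpy as np

def comb_rms_residual(peaks_hz, spacing_hz):
    """RMS distance of each peak from the nearest integer multiple of the spacing."""
    peaks = np.asarray(peaks_hz, dtype=float)
    harmonics = np.round(peaks / spacing_hz)
    return np.sqrt(np.mean((peaks - harmonics * spacing_hz) ** 2))

peaks = [277.15, 278.26, 279.37, 280.48]   # made-up peaks near 280 Hz
for spacing in (1.6611, 1.1086):
    print(f"{spacing} Hz spacing: RMS residual = {comb_rms_residual(peaks, spacing):.4f} Hz")
```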
Detchar, please see if there's any change in the 1.66 Hz comb.
At around 21:25 UTC, I disconnected OM2 thermistor 1 (pins 10, 12, 23, 25 of the cable at the back of the driver chassis) from Beckhoff and connected thermistor 2 (pins 9, 11, 22, 24).
Checked 6 hours of data starting at 04:00 UTC Sept 22. The comb structure persists with spacing 1.1086 Hz.
Electrical grounding of the Beckhoff systems has been modified as a result of this investigation -- see LHO:73233.
Corroborating Daniel's statement that the OM2 heater power supply was disrupted on Tuesday Aug 29th 2023 (LHO:72970), I've zoomed in on the pitch *OSEM* signals for both (a) the suspected time of the power disruption (first attachment) and (b) the time when the power and function of OM2 was restored (second attachment). One can see that upon power restoration and resuming the HOT configuration of the TSAMS on 2023-09-19 (time (b) above), OM2 pitch *decreases* by 190 [urad] over the course of ~1 hour, with a characteristic "thermal time constant" exponential shape to the displacement evolution. Then, heading back to 2023-08-29 (time (a) above), we can see a similarly shaped event that causes OM2 pitch to *increase* by 160 [urad] over the course of ~40 minutes. At the 40-minute mark, IFO recovery from maintenance begins, and we see the OM2 pitch *sliders* adjusted to account for the new alignment, as had been done several times before with the OM2 ON vs. OFF state changes. I take this to be consistent with: the OM2 TSAMS heater was inadvertently turned OFF and COLD on 2023-08-29 at 18:42 UTC (11:42 PDT), and the OM2 TSAMS heater was restored ON and HOT on 2023-09-19 18:14 UTC (11:14 PDT).
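The "thermal time constant" shape can be quantified by fitting a simple first-order step response to the trended pitch OSEM data; a sketch is below, with synthetic placeholder data standing in for the real trend:

```python
# Sketch of fitting a first-order thermal step response to an OSEM pitch trend.
# t (seconds) and pitch (urad) would come from the trended data; here they are
# synthetic placeholders roughly matching the 2023-09-19 restoration event.
import numpy as np
from scipy.optimize import curve_fit

def thermal_step(t, y0, dy, tau):
    """First-order thermal response: y0 + dy * (1 - exp(-t/tau))."""
    return y0 + dy * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 3600.0, 200)
pitch = thermal_step(t, 0.0, -190.0, 900.0) + np.random.normal(0.0, 2.0, t.size)

popt, _ = curve_fit(thermal_step, t, pitch, p0=(0.0, -150.0, 600.0))
print(f"fitted step: {popt[1]:.0f} urad, time constant: {popt[2]:.0f} s")
```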