TITLE: 05/30 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 124Mpc
SHIFT SUMMARY:
Lock#1:
Started off with an initial alignment after seeing some drifting on the OPLEVs & OSEMs (esp. SR3 pitch, which seems to drift a large amount over only 4 hours). There doesn't appear to be a clear temperature correlation?
Both arms went through increase flashes; Yarm went smoothly and locked at a high 90%, while Xarm struggled a bit and took more than twice as long. I tapped ETMX in pitch by 0.3 microrads, which was enough for it to catch after it finished increase flashes while still struggling on ENABLE_WFS. Good beatnotes: -4.5 (X) & -6 (Y).
SRM was badly aligned. After lots of SRM saturations, and with the ASC-AS_A signal low (~2 instead of 4-5), I paused INIT_ALIGN and brought ALIGN_IFO to offload SR2. I then went to PREP_FOR_SRY and adjusted SRM, mostly in pitch, to make AS_AIR more symmetrical and to get higher highs and lower lows on the H1:ASC-AS_A_DC_NSUM_OUT16 trace. I then unpaused INIT_ALIGN and re-requested SRC_ALIGN, which went smoothly on the 2nd try.
Lost it at FIND_IR; Yarm looked a little glitchy just before the lockloss.
Lock#2:
Lost it at FIND_IR again, it seems like Yarm is killing it?
Lock#3:
Lost it at LOCKING_ALS, Yarm again I think
Lock#4:
COMM IR was found pretty easily; DIFF took a while. Went through PRMI and got it, then it took a little while to get DRMI despite good-looking flashes.
Lockloss at MOVE_SPOTS 20:57, DCPD saturation right beforehand
Lock#5:
Couldn't get DRMI, so went through PRMI again; I had to tap PRM in pitch for it to catch, then in DRMI_1F I adjusted the BS in pitch.
Had to stop at OMC_WHITENING for the violins to damp. IX12 & IY8 have changed phase: a negative gain damped IX12 down (-2, max of -4) instead of the +2 it's set to in lscparams. IY8 was also rising; I changed its gain from 0.08 to -0.08 and it started to fall - Tagging SUS
Acquired NLN at 22:25 UTC; we had to wait for ADS to converge for CAMERA_SERVO as usual and for some SEI SDF diffs to be checked
In Observing at 22:39 UTC
DARM was glitching out a bit around 22:55 on the FOM
LOG:
| Start Time | System | Name | Location | Laser Haz | Task | End Time |
|---|---|---|---|---|---|---|
| 15:02 | FAC | Tyler+Sierra | CS | N | Infloor smoke detectors | 17:10 |
| 15:08 | FAC | Chris+Contractors | LVEA | N | Vinyl work | 18:46 |
| 15:09 | FAC | Kim | EndY | N | Technical Cleaning | 16:11 |
| 15:12 | FAC | Tyler+Johnson | EndY | N | Fire alarm testing | 16:09 |
| 15:21 | VAC | Jordan | EndX | N | Set up a pump, cp8 | 16:15 |
| 15:43 | VAC | Travis | EndX | N | Purge air compressor | 17:31 |
| 15:48 | FAC | APS | CS | N | Door access, FCES, High bay | 18:45 |
| 15:48 | EE | Fill | Ends | N | HEPI cabling, rack temps | 18:51 |
| 15:58 | CDS | Erik | CS | N | ADC card work, h1seih16 | 16:42 |
| 16:10 | FAC | Tyler+Johnson | OSB, Mids | N | Fire alarm tests | 22:00 |
| 16:12 | FAC | Kim | EndX | N | Quick checks, Garb check | 16:49 |
| 16:29 | FAC | Betsy | LVEA | N | Container checks, noise checks | 16:44 |
| 16:37 | CDS | Johnathan | MSR | N | DMT work | ?? |
| 16:00 | SEI | Jim | REMOTE | N | ISI BLND work | 17:22 |
| 16:50 | FAC | Kim | LVEA | N | Technical cleaning | 19:01 |
| 16:58 | VAC | Jordan, Gerardo | LVEA HAM5 | N | FTIR sample | 17:40 |
| 17:08 | FAC | Cindi | LVEA | N | Tech clean | 18:43 |
| 17:20 | FAC | Christina | OSB | N | Recycling items | 17:40 |
| 17:44 | VAC | Gerardo | Arms, Y2-8, X2-8 | N | Solar panel checks | 18:51 |
| 18:29 | FAC | Chris | LVEA | N | Check with vinyl crew, and pest traps | 19:01 |
| 18:32 | FAC | Betsy | LVEA | N | Sweep | 18:39 |
| 18:39 | VAC | Jordan | EndX | N | Turn off pump | 19:02 |
| 18:40 | FAC | Richard | LVEA | N | Check with APS | 18:46 |
| 20:15 | VAC | Jordan & Travis | MidX | N | Set up a pump | 20:35 |
| 21:00 | EE | Fill | MidY | N | Parts | 21:38 |
I'm not fully sure that it's meaningful, and I've let Betsy and TJ know since they're doing a deeper dive into our temperatures and suspensions, but it seems that today (~5 hours ago, while we were unlocked) we had more motion around the microseism for several of our optics, including all 4 quads for some of their top mass degrees of freedom. However, the BLRMS of our microseism don't seem to have increased by the ~order of magnitude that some of these suspension degrees of freedom seem to have moved. I don't have any evidence that this is related to our locking difficulties, but it seemed worth writing down.
Arnaud just pointed out to me that several of the oplevs on the summary page (including SR3 and ITMX, for example) all see this increased motion, and it was for a finite duration of time. So, probably unrelated to our locking difficulties, and just a result of temporary excess ground motion (perhaps the road construction, as TJ conjectured?). EDIT: Arnaud again to the rescue, it's not motion seen in the ground seismometers (which is consistent with the lack of elevated motion in the BLRMS). So, some other maintenance-related activity.
So, my plots of elevated motion are just a case of a poorly chosen time of day (I was looking for a time when we were DOWN, to make an easier-to-interpret set of plots to check for rubbing). Thanks Arnaud!
We've been having issues during initial alignment with the ALIGN_IFO node falsely convinced that SRY is locked when SRM is actually very misaligned. This was due to the SRCL trigger turning on too early (see H1:LSC-SRCL_TRIG_MON). Sheila and I looked at the LSC SRCL trig thresholds and decided to bump these up to ON:0.3 OFF:0.2 (last two changes of this - alog54799 and alog63678). I also added a trigger delay of 0.5s since it seems to flash above this threshold even at times that we are very misaligned. This trigger looks at the POP A DC signal, so we aren't sure why the amount of light has changed on this PD. These settings worked for the one time we tried it with a well aligned SRM, so we should keep an eye on it the next few times we do alignments.
This second alog63678 reminds future us that we should recommission SRY to use the AS WFS rather than the REFL WFS. I'll remind future, future us the same thing.
Had this issue again. Described with symptoms in alog 70028.
I've bumped the trigger thresholds up a bit more, as it was still triggering for very poor alignments. Using the last week or two as a reference, I changed the enable threshold to 0.35 and the disable to 0.23. During an initial alignment this morning I tried it out by heavily misaligning SRM by ~70 urads to check the trigger values, then brought it back. All seemed good. These values are loaded into ALIGN_IFO and committed to the svn.
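As a minimal sketch (not the actual guardian/front-end code), the trigger logic described above amounts to hysteresis plus an enable delay: the trigger only turns ON after the signal has stayed above the ON threshold for the delay time, and turns OFF when it drops below the lower OFF threshold. The threshold and delay values below are the ones quoted in this entry; the sample signals are made up.

```python
# Hedged illustration of an ON/OFF trigger with hysteresis and an enable
# delay, as described above (ON 0.35, OFF 0.23, 0.5 s delay). This is a
# sketch, not site code; the sample data are invented.

def triggered(samples, dt, on=0.35, off=0.23, delay=0.5):
    """Return trigger state per sample: turn ON only after the signal has
    stayed above `on` for `delay` seconds; turn OFF when it dips below `off`."""
    state = False
    above = 0.0  # time spent continuously above the ON threshold
    out = []
    for x in samples:
        if not state:
            above = above + dt if x > on else 0.0
            if above >= delay:
                state = True
        elif x < off:
            state = False
            above = 0.0
        out.append(state)
    return out

# A brief flash above threshold (0.2 s) should not trigger; a sustained one
# should, and hysteresis then keeps it on between the OFF and ON thresholds.
flash = [0.4] * 2 + [0.1] * 8   # dt = 0.1 s, so the flash lasts only 0.2 s
hold = [0.4] * 6 + [0.3] * 4    # above ON for 0.6 s, then between OFF and ON
print(triggered(flash, 0.1)[-1])  # False: flash shorter than the delay
print(triggered(hold, 0.1)[-1])   # True: stays latched above the OFF threshold
```

The delay is what rejects the misaligned-state flashes mentioned above, since they cross the ON threshold only briefly.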
Sheila, Jim, TJ, Jenne
TLDR: PR3 optical lever calibration seems wrong by a factor of 4; for some reason it didn't witness the ISI problem of last week.
Last week we had difficulty with alignment (69924, 69907 and others) before realizing that there was an accelerating drift of the HAM ISIs (for which Jim has now installed a fix, 70006). One thing that has been bothering me is why we didn't catch this faster, especially because we looked at PR3 optical lever trends many times and didn't see a drift.
The first attached screenshot shows the time when the ISI was reset, resulting in an 8 urad shift of the ISI that appears as about a 2 urad shift in the optical lever. (ISI channels are in nrad, SUS channels in urad.) When the suspension slider was moved by 2 urad after that, the optical lever saw a shift of 0.5 urad, so the optical lever seems to be seeing both of these types of shifts, just with a factor of 4 calibration error.
The second attachment shows the longer-term trend over 1 month, where the ISI is drifting by 8 urad. We should expect to see a corresponding 2 urad drift in the optical lever, but the optical lever doesn't show this trend and has less than 1 urad of long-term drift. I don't understand this.
The third attachment shows the weekend incident where Tony had to adjust PR3 alignment again, 69980. The ISI, HEPI, and the top mass yaw don't show a shift, but the optical lever does show that when Tony set PR3 to restore the ALS COMM beatnote, he brought PR3 back closer to the alignment it had been at on Friday when we aligned that table. Betsy points out that the PR3 top mass OSEMs are moving by 20um with changes in the LVEA temperature, which might have contributed to this.
See also TJ's check of LVEA temperature and suspension drifts: 70000
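The factor-of-4 inference above is just a ratio of the two quoted shifts; a back-of-envelope check (plain arithmetic, not any site tool) shows both events imply the same scale error:

```python
# Back-of-envelope check of the suspected factor-of-4 PR3 oplev
# miscalibration, using the two shifts quoted above (all values in urad).

isi_shift, oplev_seen_isi = 8.0, 2.0   # ISI reset: 8 urad real, 2 urad in oplev
sus_shift, oplev_seen_sus = 2.0, 0.5   # slider move: 2 urad real, 0.5 urad in oplev

factor_isi = isi_shift / oplev_seen_isi
factor_sus = sus_shift / oplev_seen_sus
print(factor_isi, factor_sus)  # both 4.0, consistent with a single scale error
```

Since both an ISI shift and a slider move give the same ratio, a single miscalibrated counts-to-urad factor in the oplev readback would explain both, though not the missing long-term drift.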
We set up an ISP250 to pump on the dewar jacket for CP6 (Mid-X). The pump was turned on at 1:23 PM. The pump was placed on two pieces of foam, see attached picture.
Will add a comment to this report when the pump is turned off today.
We haven't turned this pump off at MX after all. Instead, we let it run at CP6 until Wednesday (5/31) morning, then moved it to CP5. Currently, it is still pumping.
WP 11173
Rack 1 – Electronics Bay
Rack 2 – FE Rack (Receiving area)
Continued with troubleshooting of the EY rack temperatures. On May 4th (alog 69322), we swapped the field rack 1 and rack 2 cabling on the End Link Chassis; the issue followed the field cabling, not the Beckhoff channel. This morning I reverted the cable swap and moved the cables from ports 1&2 to 3&4. The internal cabling from the rear panel to the EL3202 terminal was re-wired.
All connections on both temperature sensors (RTD PT100) were re-landed to make sure all connections were making solid contact. A new field cable and temperature sensor were staged in the electronics bay in case they are needed for more troubleshooting.
The M12 connectors (ports 1&2) were replaced on the End Link Chassis. The field and internal chassis cabling was moved from ports 3&4 back to ports 1&2.
WP11227 h1seih16 2nd ADC timing error
Jim, Fil, Erik, Jonathan, EJ, Dave:
This morning Erik worked on the h1seih16 IO Chassis to see what we should try next (timing hardware or PCIe). He found that the second Adnaco backplane's link to the front-end computer had status LED issues, so we chose to move all the ADC/DAC cards from the second backplane (A2) to the empty third backplane (A3). The ribbon cables are tight, but they reach. The updated as-built drawing is attached.
I turned off my program which was clearing the ADC1 TIM errors; any errors will now be latched.
At time of writing, 3.5 hours in, we have had no errors. But, over the weekend we had a day with only 2 errors total, so we will have to wait at least a couple of days before declaring victory.
Restarts
Quiet maintenance, no model changes, no DAQ restarts
Tue30May2023
LOC TIME HOSTNAME MODEL/REBOOT
09:22:22 h1seih16 ***REBOOT***
09:23:50 h1seih16 h1iopseih16
09:38:52 h1seih16 ***REBOOT***
09:40:24 h1seih16 h1iopseih16
09:40:37 h1seih16 h1hpiham1
09:40:50 h1seih16 h1hpiham6
09:41:03 h1seih16 h1isiham6
All LVEA activities concluded so I performed the walkthru at noon today per T1500386.
The end station activities were minor and lights were verified to be off via camera. Wireless access confirmed off via the H1CDS_UNIFI_WAP_CUST.adl screen (end station ethernet cables likely still disconnected at the rack from me last week).
* Outstanding: The 2 VE racks need some door-closing work, which Gerardo and I couldn't get to at the stroke of noon without taking a lot more time. Will get to them next Tues with better planning now that I've seen the issues.
I think I've fixed the issue with the bad blends we found on Friday, so we shouldn't have any more slow drifts in RZ CPS on the HAMs. The first attached screenshot shows the zeros and poles as read by foton for the new filter (top row, installed as SUPERSENS5 on all the tables) and the old bad blend (bottom row, SUPERSENS4). The issue with the bad blend was 3 poles at "0 Hz" and 3 zeros at "0 Hz" in the high pass; the low passes appear to be identical. I missed a step doing a "minimal realization" of the filters in my design script, and it wasn't caught by the checks I usually do, so I'm going to look at adding some alarms for that. These erroneous poles and zeros don't show up in any normal bode plot or step response (second image).
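For illustration only (this is not the actual design script, and the filter values below are invented): a "minimal realization" in this sense just cancels pole/zero pairs that coincide, like the three spurious s = 0 pairs described above, which leave the transfer function unchanged but can still cause numerical trouble in the realized filter.

```python
# Hedged sketch of pole/zero cancellation as done by a minimal realization.
# The example zeros/poles are made up to mimic the bad blend's three
# coincident pairs at "0 Hz"; this is not site code.

def minreal(zeros, poles, tol=1e-9):
    """Remove matching zero/pole pairs (within tol); return the reduced lists."""
    poles = list(poles)
    kept_zeros = []
    for z in zeros:
        for p in poles:
            if abs(z - p) < tol:
                poles.remove(p)   # cancel this zero against this pole
                break
        else:
            kept_zeros.append(z)  # no matching pole: keep the zero
    return kept_zeros, poles

# Three spurious s=0 pairs plus one genuine pole:
zeros, poles = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0, -2.0]
print(minreal(zeros, poles))  # ([], [-2.0]): the coincident pairs cancel
```

Because the cancelled pairs contribute nothing to the frequency response, a bode plot or step response looks identical before and after, which is consistent with why the normal checks above didn't catch it.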
For now, I've installed the fixed filters on all chambers in the SUPERSENS5 path, set and tested the blend guardians to use the new filter, and accepted the settings in SDF for all of the chambers. I've left the old filters in, but they aren't being used; I'll remove them in a couple weeks, but until then I want to use them as a check against the filter I installed today. I'm also going to leave the DIAG_MAIN test in, but once I'm convinced the problem is definitely fixed I'd like to remove it.
Since this fix, the RZ RESIDUAL monitors have been rock solid, oscillating with low, normal levels of noise around 0.0 nrad. This indicates that the tables are no longer slowly drifting away in yaw, and that this fixed the problem. The attached plot shows the long-term trend of the RESIDUALMON channels from all HAM platforms before the fix and for the few weeks since. The crosshair pinpoints the date/time of the installed fix. (The nastiness seen in HAM7, shown in brown, a few days later was the unrelated overnight issue with that chamber's ISI interface chassis; see LHO:70117.) Nice work Jim!
Tue May 30 10:06:55 2023 INFO: Fill completed in 6min 55secs
Changing HVAC settings, HVAC cooling issues, large diurnal outside temperatures, and probably some other things have had our LVEA temperatures moving more than usual lately. I've trended some oplevs and osems to see how much this has our suspensions moving. I'm not sure I can say that this has stopped us from locking or is the cause of our short locks, but our optics are clearly moving and I'm sure that has made relocking more challenging and lengthy.
Attachment 1 - Two-week trend of LVEA zones 1A, 1B, 4, 5 (vertex, output arm, input arm) and the PR3 and SR3 oplevs. Before the HVAC changes were made, zone 4 had the largest diurnal changes (0.6F p2p), and this is seen in SR3 yaw* (0.6 urads p2p). After the changes, more of the zones had large diurnal swings, up to 1F. Richard tells me that zone 4 is a smaller volume, so it's more susceptible to the outside changes and harder to keep from having larger swings.
Attachment 2 - Comparing the PR3 and SR3 oplevs with their top mass OSEM positions. They agree with each other in some regards, but the oplevs often seem to be more sensitive to temperature. I could imagine the oplevs having a bit of temperature dependence of their own, since their hardware is external to the vacuum.
Attachment 3 - 45 day trend of LVEA temps, outside temps, PR3 P oplev, SR3 Y oplev. This shows how our temps and optics are in a different state compared to how they have been.
* It seems a bit odd to me that we're seeing yaw change the most here with temperature changes.
WP11227
Jim, Erik, Fil, EJ, Dave:
h1seih16 is currently powered down. Erik has found a possible issue with the second Adnaco backplane in the IO Chassis; he is in the process of moving all the cards from the 2nd to the 3rd Adnaco backplane.
EJ confirmed that a communication issue with the second backplane (which holds A1, A2, A3, and D1) would raise a timing error on A1 but not on the other cards.
1 DAC and 3 ADCs were moved from adnaco PCIe expansion board 2 to board 3. h1seih16 was powered back up and is running. Board 2 appears to be working again with no IO cards plugged into it.
CS:
It looks like FAN4_170_1&2 were shut off almost 4 days ago, and FAN5_170_1 is a little high at 0.4
Out:
Looks good, apart from EY_FAN1_470_1, which is a little noisy at 0.4
TITLE: 05/30 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 6mph Gusts, 3mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
FAMIS 19978
No major events of note on these trends.
One interesting note: roughly 4 days ago (these trends were taken on Monday; this comment posted on Tuesday) the LVEA temperature as measured by the PSL temperature sensor on the outside of the PSL enclosure (H1:PSL-ENV_LVEA_TEMP_DEGF on the Weekly Env plot) dropped from ~69 °F to ~67.5 °F (moving between 67 °F and 68 °F). This appears to be a real temperature drop, as it can be seen in a multitude of PSL sensors: Table North and South and AC North and South in the PSL Laser Room, the PSL Anteroom, the Amp/DB cooling water, the amplifier and diode box temperatures, the LVEA control box, and the output power from the 1st and 2nd amplifiers. No info in the alog re: HVAC work from last Friday (5/26), but this could coincide with Ryan's Saturday alog trending 5 days of LVEA temperatures (note the drop shortly after -1 day) and some Saturday morning messages in the LHO Remote Control Room on Mattermost (discussing CS z-axis motion improvement and temperature variations from Friday, 5/26).
Looks like the 3IFO large Container #3 needs to be checked for any purge line issues. Although I'm not sure of the unit, this readout does not look like the others, at "160 H2O ppm" while the others are ~10-50 ish. A trend shows this on a slight rise.
While inspecting, Randy and I found that the gauge ball on container unit #2 looked stuck (also reading very low on the screen), so he dialed down the pressure and tapped it to unstick it before resetting the pressure. The trend of this container shows a pretty flat, hashy line for the last 100 days, and now it is back to trending something. Hopefully it is showing flow now.
The #3 unit still reads "higher" than the others when glancing at the MEDM values, but a longer one-year trend shows it goes up and down. Likely all of these channels need some recalibration, since the units and signs are not obvious.
Betsy, Jason, Sheila, Fil, Marc, Travis, Adrian
This morning, a few of us made some walk-thrus of the Corner and End station VEAs to check and turn off items utilized during non-Observing run times, per T1500386 (bold were items which needed attention this time around). All of the following were checked off.
• Make sure no one is in the LVEA
• Cranes in their "parking spots" & their lights are OFF
• Monitors/work stations are turned OFF (except VAC computers) - Powered down ITMX camera setup computer
• Phones unplugged (wall-warts & RJ11 plugs) & batteries pulled from handsets (Phone locations here)
• Confirm no mechanical shorts onto HEPI.
• Cleanrooms OFF
• PSL in Science Mode - bit of an audible hum in the LVEA after everything was turned off; kind of seemed like it was more in the South bay, maybe fans or HVAC still need to be checked in this area
• ISC Table fans OFF
• Confirm wifi access points are unplugged (instructions)
• Electronics racks (i.e., make sure no test equipment is connected to a rack unless there's a work permit for it) - A few unconnected cables hanging in the PSL, ISC, and SQZ racks, but all determined to be not an issue (some used for temp needs). Added termination plugs to unused RF plugs in the SQZ racks, and 1 in the PSL rack. Lots of PEM BNC cables still run to various areas from the PEM racks. O-scope connected and powered on near the West bay corner for PEM coil use. Adrian/Robert confirm all PEM is in a nominal run configuration. Will spend another Tuesday with folks to finish cleaning up and stowing cables.
Temp dust mon at HAM2 unplugged.
End stations - HWS camera power supplies under IIET upgrade WIP, so temporarily plugged into wall power.
EX weather station equipment (and PS) in rack on VEA floor removed by Fil.
HAM6 RGA ion pump controller sitting on a cart at the end of HAM6 chamber will be left on, but RGA/fan was turned off.
• Forklift NOT connected to charger
• Unplug unused power supplies/extension cords - Unplugged some
• Lights OFF (for end stations check lights via webcams)
• Unplug power supplies for Valcom Paging System and 48V DC H1PSL Phone in Communications Room 163. Also, there is a mouse in room 163. The animal kind.
• ALOG the LVEA has been swept.
LVEA not as silent as we remember -
After all of the sweeping to unplug items, etc., the LVEA was not as quiet as many of us recall from O3. There is a quietish high-pitched hum (like a fan) somewhere, but after Jason, TJ, Gerardo, and I listened for a bit, we couldn't tell specifically where it was coming from. It isn't a fan from an ISC table, nor is it the equipment in the PSL-area emergency egress closet. Maaaybe it's the SQZ racks between HAM4 and 5, but you can also hear it when walking from there to the PSL. The SQZ racks are all new this time around, however. Or, Gerardo suggests checking whether it is the dust monitor pump in the mech room, which is a bit loud. There is a temp power supply under the HAM4 HWS table, but it has a slightly different, quieter hum.
Richard also reminded us that the LVEA VAC rack along the Y-manifold area now has the back door removed and may be noisier than before. Will look into this next Tues.
We never got a picture of the gustmeter instrumentation when I set them up and they came up in the pre-O4 sweep. We are leaving the EY gustmeter setups in place, plugged in and taking data near the emergency door in the EY VEA. A picture of the current setup is attached. The gustmeter on ADC channel 12 has failed at some point.
The FCES was swept this morning before the run start. All well there with the above items looked at.
A noise source was identified in alog 69927, namely a loud dust mon pump.