TITLE: 06/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 134Mpc
SHIFT SUMMARY:
Handing off to Camilla
Lock#1:
The failed coil driver was replaced around 17:00 UTC (FRS ticket) and I began relocking. I quickly tried to see if it could get past DRMI, but with no luck, so I went into an initial alignment. During the IA, TJ adjusted SRM during SRC align as it was not well aligned.
NLN reacquired @ 18:45 UTC with no interventions. In observing @ 18:58 UTC
EX saturation @ 19:33, 20:51, 20:52, 21:07 UTC
I cautiously tried to find a better setting for IY5/6. I first tried stepping up the gain to -0.04, which brought it down more than twice as quickly (~30%/hour)
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 13:47 | | Richard | EX, EY | N | Rocker Switch Investigation, Chiller Search | 14:47 |
| 15:59 | EE | Fil | CS | N | Run some transfer functions then replace the coildriver | 16:54 |
| 16:30 | FAC | Randy | EndY | N | Checks | 17:00 |
| 16:30 | FAC | Karen | MidY | N | Technical cleaning | 17:34 |
| 16:39 | CAL | Jenne | REMOTE | N | CALIB_STRAIN_CLEAN tests | 17:20 |
| 16:55 | EE | Fil | EndX | N | Install new coildriver | 17:08 |
| 16:55 | FAC | Betsy | LVEA | N | Parts | 17:00 |
| 18:31 | FAC | Karen | Wood shop | N | Tech clean | 19:00 |
| 21:04 | VAC | Janos | MidX | N | Check on pumps | 21:23 |
EX PUM Coil Driver Chassis pulled by R. McCarthy for troubleshooting. Differences noted between spare and installed unit:
Replaced failed capacitor on EX unit (Channel 2, C24). Capacitor was shorting +18V rail to ground. Ran transfer functions on all channels in both Low Pass and Acquire mode. Using CH1 as a baseline, all channels gave same result. Unit reinstalled.
Spare unit will be modified per ECR 2100204.
PUM EX- S1000343
PUM Spare - S1102649
F. Clara, R. McCarthy
Thu Jun 01 10:07:58 2023 INFO: Fill completed in 7min 57secs
Jordan confirmed a good fill curbside.
WP11227 FRS27187
Jonathan, Fil, EJ, Erik, Dave:
We have had zero ADC1-TIM errors on h1seih16 since Erik moved all the cards from the second Adnaco backplane over to the unused third backplane 48 hours ago. Previously we had at least one error per day, so it looks like this problem was with the computer-to-Adnaco link and presented itself as a timing issue with the first ADC present on the backplane, ADC1 in this case.
Note this most probably means that the original ADC1, which was removed Tue 23 May 2023, is a good ADC. SN=110124-06
WP and FRS have been closed.
TITLE: 06/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Corrective Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 5mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Taking over from Ibrahim
TITLE: 06/01 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Corrective Maintenance
SHIFT SUMMARY:
EX PUM Coil Driver Failure Story Summary and Findings
See above for more detailed LOG of events.
At 12:44 UTC the ISI and PUM watchdogs tripped, seemingly out of nowhere.
From the above details, we can safely conclude that the PUM coil driver failure is what caused both the ISI and PUM watchdogs to trip. It also explains why the PUM WD button "doesn't work" - the WD is just being re-tripped as the driver continues to fail and flail. Richard called Fil and he's on the way to do the needed maintenance. The IFO is DOWN and the OPS mode is in "corrective maintenance" until the issue is resolved.
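As a toy illustration of that retrip behavior (a generic latching-watchdog sketch in Python, not the actual SEI/SUS front-end logic; the threshold and signal values are made up):

```python
class ToyWatchdog:
    """Latching watchdog: trips when |signal| exceeds a threshold and
    stays tripped until reset."""
    def __init__(self, threshold):
        self.threshold = threshold  # hypothetical trip level
        self.tripped = False

    def update(self, sample):
        if abs(sample) > self.threshold:
            self.tripped = True     # latch on any excursion
        return self.tripped

    def reset(self):
        self.tripped = False

# While the coil driver keeps glitching, every reset is undone by the
# very next large sample, so the reset button appears to "do nothing":
wd = ToyWatchdog(threshold=1.0)
for glitch in [0.1, 5.0, 4.2, 6.1]:   # a still-failing driver
    wd.reset()                        # operator presses the button...
    wd.update(glitch)                 # ...and the driver re-trips it
print(wd.tripped)                     # True
```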
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 13:47 | | Richard | EX, EY | N | Rocker Switch Investigation, Chiller Search | 14:47 |
IFO STATUS: IFO is at NLN and in OBSERVING as of 10:13 UTC
TITLE: 06/01 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 131Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 9mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 7:01 UTC
TITLE: 06/01 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 132Mpc
SHIFT SUMMARY: One lockloss from the wind, one from an unknown cause (length glitch beforehand), just got back to Observing.
LOG:
Relocking was hard, with 30 mph wind and trouble again with SRM in initial alignment (70013); this time it was swinging around! After touching PRM, we lost lock at ENGAGE_DRMI_ASC. I then ran an unsuccessful initial alignment: ALIGN_IFO SRC alignment wasn't working and was saturating SRC. So I skipped it by taking ALIGN_IFO to DOWN and touched SRM by hand in DRMI locking, knowing that everything else was already well aligned.
Closes FAMIS 25068 and FAMIS 25066.
All plots are attached. We are only seeing some slow V_eff drifts. This agrees with what Ryan and Rahul are seeing in the oplev charge measurements (69432, 69252). We can compare them with the script Ryan C made (instructions in 69252); see attached plot. Both in-lock and oplev trends show V_eff moving away from zero; we wonder if a sign is incorrect in ETMX.
Two FAMIS tasks were closed, as Austin was unable to run the May 16th measurement analysis because the log file hadn't been clearing correctly. I've corrected the logfiles and edited the RUN_ESD_EXC.py code to correctly clear the logfile, now using logfile.clear() rather than logfile = [] (69233).
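For reference on that fix (a generic Python illustration of the two idioms, not the actual RUN_ESD_EXC.py code): `logfile = []` rebinds the local name and leaves the old list object, and any other reference to it, untouched, while `logfile.clear()` empties the shared object in place.

```python
logfile = ["run1", "run2"]
alias = logfile        # e.g. a reference held elsewhere in the script
logfile = []           # rebinds the name only; the old list survives
print(alias)           # ['run1', 'run2'] -- not actually cleared

logfile = ["run1", "run2"]
alias = logfile
logfile.clear()        # mutates the shared list in place
print(alias)           # [] -- cleared for every holder of a reference
```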
These charge measurements have been causing a lockloss just before Tuesday maintenance (69437) so I've lengthened the final ramp time back from 20 seconds to 60 seconds (69234).
We were observing at 135+ Mpc at the end of a 5h30 lock until a lockloss at 01:19 UTC, 1369617587. We had wind gusts of 30+ mph and a little ground motion from a small earthquake, which I think caused this lockloss. There were some 2-4 Hz wiggles beforehand.
Currently at state MOVE_SPOTS after an unsuccessful initial alignment. INIT_ALIGN SRC alignment wasn't working so I skipped it and touched SRM by hand in DRMI locking.
I tried a few permutations of turning on the retrained cleaning that I described in alogs 70043 and 70023. In the end, I'm just leaving on the updated jitter subtraction. The LSC subtraction worked well, but then we updated some parameters (see Elenna's alog 70052), and it now needs to be retrained. The laser noise subtraction is quite confusing, and so is not on.
The first attachment shows all 3 updated subtractions (LSC, jitter, laser noise) on. You can see that the low freqs (below ~500 Hz) show improvement, as do the very high freqs (above 5 kHz). But the middle-high freqs (500 Hz to 5 kHz) have injected noise. The green trace is the contribution from the laser noise subtraction, and you can see that it is the dominant term above 500 Hz. This implies that something strange is going on, like a sign flip around 5 kHz somewhere. It's possible that this is due to insufficient thermalization, but I suspect that we're just missing some term or sign or filter phase somewhere.
The second attachment shows just the LSC subtraction, and it does very well. This plot was taken with all old settings (old SRCL offset, old SRCL FF), so is just illustrative of what we should be able to achieve.
The third attachment shows just the jitter subtraction, and it also does very well, except that I've had to put a minus sign in the filter bank gain. A difference between what I had been using for jitter subtraction and what is now currently running: what had been in use was trained on a measured, exported-from-DTT transfer function between the OAF-NOISE_EST channel and GDS-CALIB_STRAIN_CLEAN, while today I'm using the filters that should be in use in the GDS calibration pipeline. Those two things differ by a minus sign, which is why today I have a minus sign in the filter bank gain.
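As a toy illustration of why that overall sign matters (generic numpy, not the GDS pipeline code; the 0.1 residual level is made up): if the trained filter predicts the witness contribution with the opposite sign convention, subtracting its output doubles that contribution instead of cancelling it, and a -1 gain on the filter bank restores the cancellation.

```python
import numpy as np

rng = np.random.default_rng(0)
witness = rng.standard_normal(4096)                  # jitter witness channel
target = witness + 0.1 * rng.standard_normal(4096)   # strain-like channel

prediction = -witness   # filter trained with the opposite sign convention

bad = target - prediction             # witness term doubles: std ~ 2.0
good = target - (-1.0) * prediction   # -1 bank gain cancels it: std ~ 0.1
print(np.std(bad), np.std(good))
```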
Since the SRCL FF was changed after these plots were taken, the LSC cleaning now needs to be retrained. I'll get some data from overnight tonight, and retrain the LSC tomorrow.
The fourth attachment is the range for our 3 versions of CALIB_STRAIN from the summary page. Around 2300 UTC, there is a marked difference between pink and blue - that's with both LSC and jitter subtraction on, after Sheila tuned up the squeezer. Then the last time the range is high in this plot, around 2330 UTC, only the jitter subtraction is on, but Elenna had turned on the updated SRCL FF filter, which is why our overall pink and orange range is higher than at 2300 UTC. Hopefully once I've retrained the LSC cleaning, we'll again get blue up a little higher.
I've accepted all the associated SDFs for these in both the safe and observe snap files.
I took some recent times at 60 W and 76 W and made power trend plots of the first three hours of the lock. The 60 W time is from a long lock around March 29th, and the 76 W time is from a long lock around April 29th. Unfortunately, during the April 29th lock the POP beam diverter was closed, so we are missing the sideband trend during thermalization.
I found a shorter lock at 76W on April 20th where the POP beam diverter was open. Trends attached.
I have attached another set of plots, this time adding in a 60 W lock trend from early January.
One major effect we have considered when looking at these plots is the trend of the 9 MHz sideband as the IFO thermalizes. If you look at the first plot in both the main alog and the two comments, the bottom right corner has two plots showing the POPAIR RF18 and RF90 trends, effectively the 9 and 45 MHz buildups. When we lose our 9 MHz sideband, PRCL becomes unstable. Currently, the thermalization guardian works to correct exactly this, by increasing the PRCL digital gain to maintain a ~30 Hz UGF. We have mainly seen this as an issue with operating at 75 W. However, these power trends are showing us that the 9 MHz loss is present at 60W too, and Evan and Dan held the IFO at 25W a few months ago and demonstrated that this loss occurs even at low power.
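A minimal sketch of that gain-compensation idea, assuming (as a simplification) that the PRCL open-loop gain falls off roughly as 1/f near the UGF, so that loop gain and UGF scale together; this is not the actual thermalization guardian code, and the 24 Hz example value is made up:

```python
def prcl_gain_for_target_ugf(current_gain, measured_ugf_hz, target_ugf_hz=30.0):
    """Scale a loop gain to move the UGF back to target, assuming the
    open-loop gain magnitude falls off ~1/f around the UGF."""
    return current_gain * (target_ugf_hz / measured_ugf_hz)

# If 9 MHz sideband loss has dragged the UGF from 30 Hz down to 24 Hz,
# the digital gain needs a ~1.25x bump to hold a ~30 Hz UGF:
print(prcl_gain_for_target_ugf(1.0, 24.0))
```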
The new plot I have added is an interesting comparison. On the POPAIR RF18 plot, you see that the blue trace (76 W lock from April 20) shows that we power up and then lose significant 9 MHz power. The red trace shows something similar from Jan 6 in a 60 W lock. We also see this loss in a March 60 W lock. I included the red trace to compare with the green trace because in the green trace we stopped at 25 W for long enough to lose some 9 MHz, and then continued up in power. The green trace is also from after we improved the mode matching with new ring heater settings. It appears that, buildup-wise, our current situation is very similar to where we were in January.
However, this is further proof that we see 9 MHz loss no matter which operating power. I will look for more locks to compare and see if the difference between the red and green traces at 60 W is a result of the TCS change. One thing to note in particular is that the ring heater changes reduce the buildups, so from January (red) to March (green), we reduced the power recycling gain at 60 W.
During the commissioning window this afternoon we got some data for different squeezer configurations.
Here is a summary of times:
First, a set of data at the current SRCL offset, summarized in the first screenshot, references are saved in /ligo/home/sheila.dwyer/SQZ/SQZ_vs_SRCL/SQZ_vs_SRCL_DARM_FOM.xml.
Then we tried to repeat a scan of sqz angle and SRCL offsets similar to what Vicky and Elenna did here; however, the impact wasn't as noticeable as in that alog, perhaps because we don't have a lot of anti-sqz right now. The second attached screenshot shows the summary of this.
Then a repeat of the same data set as above but taken at a SRCL offset of -165, as shown in the third attachment.
We have recently reduced the ETMX ring heater by 0.1 W to avoid the 80 kHz PI. As a result, our buildups have increased slightly and we have been operating closer to 440 kW. We have also had much shorter locks than we had in the past operating around 430 kW. Looking at the power and lock trends, our longest locks occur when the circulating power is slightly lower. This small variation in circulating power can result from slightly different input powers every lock, from how the ISS second loop closes. Jenne and I think this is enough evidence to drop the PSL input power by 1 W. We will now operate at 75 W input from the PSL, which should bring us closer to the 430 kW operating power that works best for us.
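A back-of-the-envelope check of the expected effect, assuming circulating arm power scales linearly with PSL input power (a sketch using the numbers quoted above, not a measured result):

```python
input_power_w = 76.0    # current PSL input power
circulating_kw = 440.0  # current thermalized circulating arm power
kw_per_input_watt = circulating_kw / input_power_w   # ~5.8 kW per input W

new_input_w = 75.0
print(new_input_w * kw_per_input_watt)   # ~434 kW, near the 430 kW sweet spot
```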
The requested power level for NLN in lscparams is now 75W. I have loaded the ISC_LOCK, IMC_LOCK and LASER_PWR guardians accordingly.
Tagging CAL. This power decrease should be covered by TDCFs, but let's be sure.
This will be a catch-up aLOG as it dates back to last week. It started with trying to adjust the airflows (per Robert S.) and the rising temps in the LVEA, so I started investigating and found a large amount of condensate on the floor of AHU-1. We (Tyler, Randy, Chris and I) cleaned up all of the water with wet/dry vacuums and then adjusted mechanical linkages on Fans 1 & 2 to achieve the desired airflows. At that time, it was discovered that cooling coil 4 was much warmer than the other coils. I was not sure what other adjustments had been made by others, so I continued to monitor.

I took Friday off but was checking coil 4 and talking with Richard throughout the day. Towards the end of the day coil 4 was still much warmer than normal and allowing warm air into the LVEA. I decided to turn Fan 4 completely off to see if I could get the temps to stabilize. This strategy helped.

Yesterday, being maintenance day, I took the opportunity to investigate a little more into why coil 4 was not cooling. After checking functionality of all the valves, I decided to open the strainer and found that it was very full of debris. I cleaned the strainer, reinstalled it, and turned Fan 4 back on. The coil is now running at normal temperature and the LVEA is coming back to a stable temperature.

Because of the rising temps last week, many of the zone temperatures were lowered to try and help with the warmer areas. These setpoints have all been returned to the original desired temps. A work permit will be put in soon to inspect and clean all remaining strainers during the next Tuesday maintenance period.
Adding plots of LVEA temperature over the last 10, 16 and 30 days, with cross-hairs showing the dates temperatures changed. This plot is available with the command: ndscope /opt/rtcds/userapps/release/cds/h1/scripts/fom_startup/nuc32/VEA_temperatures.yaml
Adding some quantitative times and information to this aLOG series.

First, the HVAC system and the channels I'm using as metrics of the LVEA HVAC system. The LVEA has two giant cooling actuators, or "air handlers," that live in the Mechanical Room, +X -Y of the beam splitter. These air handlers add together to cool the LVEA as a common actuator. There are separate, local *heaters* in each zone of the LVEA that serve as "differential" actuators. The system is depicted pretty well in the MEDM overview screen found under sitemap > "FMCS OVERVIEW" > "Air Handler 1,2". The channels that are useful metrics of the system are listed below with their function and units.

External, outside-the-building temperature:
- H1:PEM-CS_TEMP_ROOF_WEATHER_DEGF -- Temperature on the LVEA building roof, in deg F (look for corresponding "DEGC" channels for deg C)

LVEA temperature sensors << our primary "canary in the coal mine" indicator that the suspensions will likely misalign:
- H0:FMC-CS_LVEA_ZONE1A_DEGF -- Temperature, in deg F, around BSC2 (think H1 beamsplitter and ITMs)
- H0:FMC-CS_LVEA_ZONE1B_DEGF -- ", around the 3-IFO area (think old H2 beamsplitter)
- H0:FMC-CS_LVEA_ZONE4_DEGF -- ", output arm (think SR3)
- H0:FMC-CS_LVEA_ZONE5_DEGF -- ", input arm (think PR3)

Local, corresponding heater units (percentage of heating power being applied to the zone):
- H0:FMC-CS_LVEA_HEATER_ZONE1A_PC
- H0:FMC-CS_LVEA_HEATER_ZONE1B_PC
- H0:FMC-CS_LVEA_HEATER_ZONE4_PC
- H0:FMC-CS_LVEA_HEATER_ZONE5_PC

Common HVAC Air Handler 1 -- often referred to as just "AHU1." These channels are human-controllable via the HVAC control system:
- H1:FMC-CS_LVEA_AH_DAMPER_1_PC -- "Percentage open" of the intake air reduction "damper" valve (0% fully closed, 100% fully open)
- H0:FMC-CS_LVEA_AH_COOLTEMP_1_DEGF, H0:FMC-CS_LVEA_AH_COOLTEMP_2_DEGF -- Temperature of the cooling coils
- H0:FMC-CS_LVEA_AH_AIRFLOW_1, H0:FMC-CS_LVEA_AH_AIRFLOW_2 -- Output air flow into the LVEA in cubic feet per minute (cfm, or CFM)

Common HVAC Air Handler 2 -- often referred to as just "AHU2," with the same human-controllable channels:
- H0:FMC-CS_LVEA_AH_DAMPER_2_PC
- H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF, H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF
- H0:FMC-CS_LVEA_AH_AIRFLOW_3, H0:FMC-CS_LVEA_AH_AIRFLOW_4

The story starts earlier than the adjustment to the air handler units to reduce the overall cooling airflow into the LVEA. Here's the timeline of things changing, given that IFO problems with temperature started in late April 2023. Here's every event.

Apr 26 2023 10:56 PDT (Wednesday, mid-morning)
- Air handler 2's damper has a step change in behavior, going from diurnal fluctuations between 50% and 80% open to between 60% and fully 100% open.
- This only causes a minor "glitch" in the LVEA temperatures, mostly in the diurnal fluctuations getting "reset," changing the diurnal pattern but with no overall average temperature change.

May 02 2023 07:57 PDT (Tuesday, first thing, minutes before the official start of maintenance day)
- Air handler 2's damper and corresponding cooling temperatures 3 & 4 glitch for 30 minutes. For the most part, the damper returns to the normal 50%-80% open behavior.
- This only causes a minor change in the LVEA temperatures, again in the diurnal fluctuation pattern, with no overall average temperature change.

May 10 2023 13:46 PDT (Wednesday afternoon) -- aLOG LHO:69516
- Air handler unit 2, AHU2, fails and turns off. Channels showing this:
  - H0:FMC-CS_LVEA_AH_DAMPER_2_PC goes all the way closed at 0%
  - H0:FMC-CS_LVEA_AH_AIRFLOW_3 and H0:FMC-CS_LVEA_AH_AIRFLOW_4 go to zero, as airflow ceases
  - H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF and H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF go up from a stable 50-60 deg F to a really high ~65 deg F
- This causes a major mean-value excursion in LVEA temperature, but after the air handler function is restored, all zones come "back to normal" by May 11 2023 12:38 PDT, ~24 hours later.

May 11 2023 12:38 PDT
- With Air Handler 1 restored, the LVEA temperatures return to normal, but the airflow is now much larger, with Fan 1 producing ~12500 cfm worth of flow where it used to put out only 6000 cfm. Channel that shows this: H0:FMC-CS_LVEA_AH_AIRFLOW_1.
- This doesn't change the LVEA temperature, but means that there's a lot more air flowing into the LVEA. Maybe this is what caused Robert to pay attention?

May 17 2023 09:42 PDT (Saturday morning)
- The damper for Air Handler Unit 2 starts to open up, constantly at 100%, and has stayed like this since, only occasionally coming off the rails to 80% in the first few days. Channel showing this: H0:FMC-CS_LVEA_AH_DAMPER_2_PC.
- This causes a marked change in Zone 4, a 0.5 deg F decrease in temperature. That's not a lot, but worth calling out. The diurnal pattern of Zone 4 changes as well.
- Further, the cooling coils for AHU2 also start to run warmer, increasing from (diurnal fluctuations around) 51 deg F to (diurnal fluctuations around) 53 deg F.

Then we get to where Bubba says "It started with trying to adjust the airflows (per Robert S.)":

May 24 2023 08:48 PDT (Tuesday maintenance)
- In reference to LHO:69894, where Bubba says: "Fans 1 & 2 [of Air Handler Unit 1] supplying the LVEA were readjusted to [reduced] airflows for the Observation Run. The **total** CFM for AHU 1 was reduced from ~26,000 to ~11,200 CFM." With a bit more detail:
  - Fan 1 (H0:FMC-CS_LVEA_AH_AIRFLOW_1) is reduced from ~12500 cfm to the level it was at prior to the May 11 2023 fix of Air Handler 1, now back to ~6000 cfm.
  - Fan 2 (H0:FMC-CS_LVEA_AH_AIRFLOW_2) is reduced dramatically from the value it had held for a long time, ~11000 cfm, down to 4500 cfm.
  - Thus Bubba's statement about the total: from (12500 + 11000 =) 23500 cfm to (6000 + 4500 =) ~10500 cfm.
- As a result of this, we land on Bubba's statement "At that time, it was discovered that cooling coil 4 was much warmer than the other coils" (from the May 17 2023 change in damper behavior), because you see both coil 3 and coil 4 from Air Handler 2 (H0:FMC-CS_LVEA_AH_COOLTEMP_3_DEGF and H0:FMC-CS_LVEA_AH_COOLTEMP_4_DEGF) at the higher, ~53 deg F level.
- This dramatic airflow change causes the LVEA temperatures to merge high for a bit until May 24 18:38 PDT -- Zone 4 (think SR3), the output arm, gets warmer to meet the other zones.

May 24 2023 22:30 PDT to May 25 2023 06:52 PDT (late night Tuesday to early morning Wednesday)
- Air Handler 1's cooling coils slowly step up in temperature (by the control system? by a human?) from 46 deg F to as much as 54 deg F.
- Somehow, this slowly starts to bring all LVEA zone temperatures *down* together until the next morning, May 25 2023 06:45 PDT.

May 25 2023 06:52 PDT (early morning Wednesday)
- Air Handler 1's cooling coils 1 and 2 get restored back to 46 deg F, and the heater in Zone 1A kicks on for 2.5 hours (by the control system? by a human?).
- This brings all temperatures *up* in the LVEA to a relatively high value of 69 deg F until another change on May 26 2023 15:56 PDT.

May 26 2023 15:56 PDT (Friday afternoon)
- Air Handler 2 Fan 4's airflow drops to zero, and cooling coil 3 and 4 temperatures drop from 54 deg F to 49 deg F. These are Bubba's actions when he says "Towards the end of the day [Friday afternoon, May 26th] coil 4 was still much warmer than normal and allowing warm air into the LVEA. I decided to turn Fan 4 completely off to see if I could get the temps to stabilize. This strategy helped."
- This brings the temperatures collectively *down* in the LVEA from ~69 deg F to 67 deg F, closer to the "normal" values for the LVEA, though Zone 5 (think PR3) and Zone 1B (think 3IFO) are lower than their "normal" values.

May 30 2023 09:50-11:51 PDT (Tuesday, maintenance day)
- All hell breaks loose with the HVAC as the fire alarm system overrides the HVAC controls -- only really mentioned in the operator log during maintenance that day (see LHO:70003) -- and drives TJ to investigate drifts in temperature (LHO:70000), thinking "it's happening again!!"
- Also, in the meanwhile, Bubba's action that he mentions: "Yesterday, being maintenance day, I took the opportunity to investigate a little more into why coil 4 was not cooling. After checking functionality of all the valves, I decided to open the strainer and found that it was very full of debris. I cleaned the strainer, reinstalled and turned Fan 4 back on. The coil is now running at normal temperature and the LVEA is coming back to a stable temperature."
- However, upon restoration of the HVAC settings, Air Handler 2's "Fan 4 is turned off" setting from May 26 2023 (Friday afternoon) gets lost, and Fan 4 comes back on at full blast at 12000 cfm. Further, Air Handler 1's cooling coils come back on much hotter than they've ever come on before, changing from 45 deg to 54 to 65 deg.
- Once AHU2 Fan 4 comes back on and airflow to the LVEA re-increases, the temperatures in the LVEA begin to drop again, tanking down as low as 66 deg F.

May 31 2023 06:23 PDT (early Wednesday morning)
- The Zone 4 and Zone 5 heaters come into play (by a human? or by the control system?), turning on for the first time with Zone 4 at 40 percent and Zone 5 left to oscillate diurnally between 20% and 40%.
- This brings all LVEA temperatures back up into a really tight cluster, for the first time in months, around 67 deg F. This is great; we should hold here if we can.

June 04 2023 18:34 PDT (5 days later, Sunday evening)
- For reasons I can't determine (no other visible change in the metrics I've been using), Zone 1B (think 3IFO) jumps up in temperature 0.5 deg F, from 67.2 deg F to 67.7 deg F. This is a pretty inconsequential change in temperature that shouldn't affect suspensions.

June 06 2023 09:53 PDT (2 days later, Tuesday maintenance day)
- The fire alarm maintenance team is back at it, and this causes the intake damper of Air Handler 2 (H0:FMC-CS_LVEA_AH_DAMPER_2_PC) to drop to zero.
- This causes the entire LVEA temperature to rise again from its stable 67 deg F period (in place since May 31 2023 06:23 PDT), with zones going as high as 69 deg F.

June 06 2023 11:52 PDT (same day)
- The operations team realizes the fire alarm closed the AHU2 damper again, calls in facilities, and re-opens it. The temperatures come back down, but overshoot because the Zone 1B and Zone 4 configuration got lost.

June 06 2023 17:02 PDT (same day, later in the evening)
- Someone, or something, turns the Zone 1B and Zone 4 heater configuration back on. Temperatures return to the "good" May 31 2023 06:23 PDT configuration.
We've been having issues during initial alignment with the ALIGN_IFO node falsely convinced that SRY is locked when SRM is actually very misaligned. This was due to the SRCL trigger turning on too early (see H1:LSC-SRCL_TRIG_MON). Sheila and I looked at the LSC SRCL trig thresholds and decided to bump these up to ON:0.3 OFF:0.2 (last two changes of this - alog54799 and alog63678). I also added a trigger delay of 0.5s since it seems to flash above this threshold even at times that we are very misaligned. This trigger looks at the POP A DC signal, so we aren't sure why the amount of light has changed on this PD. These settings worked for the one time we tried it with a well aligned SRM, so we should keep an eye on it the next few times we do alignments.
That second alog (63678) reminds future-us that we should recommission SRY to use the AS WFS rather than the REFL WFS. I'll remind future-future-us of the same thing.
Had this issue again. Described with symptoms in alog 70028.
I've bumped the trigger thresholds up a bit more, as it was still triggering on very poor alignments. Using the last week or two as a reference, I changed the enable threshold to 0.35 and the disable threshold to 0.23. During an initial alignment this morning, I tried it out by heavily misaligning SRM by ~70 urad to check the trigger values, then brought it back. All seemed good. These values are loaded into ALIGN_IFO and committed to the svn.
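A sketch of how the ON/OFF threshold pair plus delay behaves (a generic hysteresis comparator in Python, not the actual front-end trigger logic; the threshold numbers are the ones quoted above):

```python
def srcl_trigger(pop_dc_samples, dt, on=0.35, off=0.23, delay_s=0.5):
    """Yield the trigger state for each POP A DC sample. The signal must
    stay above the ON threshold for delay_s before the trigger asserts,
    so brief flashes are rejected; once engaged, it only drops when the
    signal falls below the lower OFF threshold (hysteresis)."""
    engaged = False
    time_above = 0.0
    for x in pop_dc_samples:
        if engaged:
            if x < off:
                engaged = False
                time_above = 0.0
        else:
            time_above = time_above + dt if x > on else 0.0
            if time_above >= delay_s:
                engaged = True
        yield engaged

# A ~0.2 s flash above threshold (sampled at 16 Hz) never engages:
flash = [0.4] * 3 + [0.1] * 13
print(any(srcl_trigger(flash, dt=1 / 16)))   # False
```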
TITLE: 05/24 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY:
Lock#1
Couldn't lock PRMI, lockloss
Lock#2
Went right into an initial alignment; OM1 & OM2 were saturated during PRC align in initial alignment
DRMI's lock didn't seem great on the buildups but the spot looked ok
OMC locked first try on its own
NLN @ 08:44. There were 30 ASC SDF diffs, but they were all for the camera servos; I waited for ADS to converge for the camera servos to turn on (~16 mins), which cleared these diffs
Observing mode @ 09:02 while we thermalize
Out of observing at 11:05 UTC for a new FF filter test/measurement and then a calibration suite; the new FF filter was applied at 11:07 UTC
I used Corey's template (/ligo/home/corey.gray/Templates/dtt/DARM_05232023.xml), for which I had to enable "read data from tape" for it to run. The measurement started at 11:10 UTC and finished at 11:12 UTC. My DTT session then immediately glitched and crashed before I could save it, great... I restarted the measurement at 11:15 UTC and it finished at 11:16 UTC. It wouldn't let me save it (error: Unable to open output file), but I added it as ref 27 on the previously mentioned xml file in Corey's directory, though this might not have saved due to that issue. I grabbed a screenshot of it.
I switched back to the old filters to take the calibration suite at 11:21 UTC; I wasn't sure if we wanted them on or not for this, apologies if we did want them on.
Lockloss @ 11:25, possibly from PI29, but on NUC25's scope guardian appeared to be successfully damping it? It was tapering down when the DCPDs saturated and we lost lock; it also coincided with a ground motion spike from that 5.5 from NZ. I then stepped the EX ring heaters down to 1.2 using the console commands Sheila provided in her alog.
Lock#3:
The Y arm's power was drastically lower after the lockloss and looked clipped on the camera. INCREASE_FLASHES ran twice and wasn't able to get it above 50%; I stepped in and still wasn't able to even get it to 80%. I gave guardian another shot after this, and INCREASE_FLASHES ran another 2 times without getting it; I tried again to lock and was unsuccessful. Lockloss at LOCKING_ALS after some more rounds of adjusting and trying to lock.
Lock#4-11
ALS locklosses
Lock#12
Beatnotes aren't great, -19 & -20, so I'm starting another initial alignment. There were lots of SRM saturations during SRC align; trending SRM's OSEMs, there doesn't appear to be any unusual motion in the past 20 hours.
After a suggestion from Betsy and Jenne, I checked PR3 and it seems to have drifted a bit. I moved PR3 -0.8 microradians in yaw, which increased the COMM beatnote. I was able to lock ALS after this but got no flashes on PRMI, so I started another initial alignment, but it didn't do the SRC align correctly again; TJ's going back to try and fix this.
Handing off to TJ
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 14:23 | FAC | Betsy | FCES | N | Closeout checks | 14:38 |
| 14:56 | EE | Ken | Carpenter shop | N | | 15:56 |
I've attached a screenshot of the DARM spectrum and coherences with MICH and SRCL during the feedforward test. This was done 2.5 hours after power up, the second attached screenshot shows where we were on the thermalization transient.
It seems that the MICH FF is worse, while SRCL is better, similar to the test done at the start of a lock here: 69813
If anyone wants to use this template for a future feedforward check, they can find it at /ligo/home/sheila.dwyer/LSC/DARM_FOM_LSC_FF_check.xml
I think this recent attempt at 80 kHz PI damping, which was our first try with this Monday's guardian changes 69800, might have been somewhat successful?
From the screenshot, the DTT shows that when the 80 kHz PI damping started, the HOMs were in the same place as those that have recently caused locklosses (compare the pink/blue vs. black traces). And we see its aliased-down 14.76 kHz peak, which visibly shifts down by a few Hz over several averages. Maybe this is a result of our PI damping; we've seen the mode move around before from driving it (68165). And playing the DTT forward in time, you can see the PI doesn't run away like it normally does. It seems likely to me that the 80 kHz didn't cause this lockloss.
From the ndscopes in the screenshot: the first scope shows a recent 80 kHz PI lockloss from Monday, where after the guardian starts damping, the mode's RMS grows ~1e4x in 5 minutes. Then on Wednesday, from when the guardian first starts damping, the mode only grew ~100x in ~8 minutes! Between Mon/Weds, we started damping with a stronger PI ESD drive in the guardian (69800) (10x DAMP_GAIN, from 5000 --> 50,000), still driving the coils differentially (it has been like this since 69759).
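For reference, the folding arithmetic behind seeing an ~80 kHz mode at 14.76 kHz (a sketch assuming a 65536 Hz acquisition rate for the monitor channel; the exact mode frequency used here is inferred from the aliased peak, not quoted in the entry):

```python
def aliased_freq(f_hz, fs_hz):
    """Frequency at which a tone above Nyquist appears after sampling."""
    f_folded = f_hz % fs_hz
    return min(f_folded, fs_hz - f_folded)

fs = 65536.0                        # assumed monitor-channel sample rate
print(aliased_freq(80300.0, fs))    # 14764.0 Hz, i.e. the ~14.76 kHz peak
```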
The ALS issues reported in this aLOG were symptomatic of the true problem: the HAM-ISIs were all drifting off in Yaw (RZ) slowly, but surely, for weeks -- see LHO:69934.
Tagging CAL. It's not super explicit, but this is the aLOG where the ETMX (EX) ring heater power was reduced from 1.3 W to 1.2 W on both segments. It was later revealed that this has increased the level of 1064 nm main laser power in the arm cavities, from ~435 kW thermalized to ~440 kW thermalized (LHO:70042). This may have changed the optical gain, cavity pole frequency, and the SRCL cavity detuning. The first two should be measured and corrected for via the TDCF system, but we should confirm. We should measure more sensing functions and/or turn on the CAL_AWG_LINES low-frequency calibration lines to confirm whether anything has changed in the SRC detuning.
Although we did not get any inlock charge measurements today, we did get a new value for ETMY last week so I reran my compare code after making some updates to it.
If anyone else wants to use it, it's located at /ligo/home/ryan.crouch/Desktop/Charge_measurements/compare.py, and you should just be able to run it as a regular python script. The only thing one might need to modify in the code would probably be the plotting range at the top, if you want to change it from the 6 months it's currently set to.
E.g.:
python3 /ligo/home/ryan.crouch/Desktop/Charge_measurements/compare.py
Note that you need to be in directory /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/LHO_ESD_coeff_data to run this script.
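A minimal convenience wrapper that satisfies that working-directory requirement (a sketch using only the paths quoted above):

```python
import subprocess

DATA_DIR = ("/opt/rtcds/userapps/release/sus/common/scripts/quad/"
            "InLockChargeMeasurements/LHO_ESD_coeff_data")
SCRIPT = "/ligo/home/ryan.crouch/Desktop/Charge_measurements/compare.py"

# Run compare.py from the directory it expects to find the coefficient data in.
subprocess.run(["python3", SCRIPT], cwd=DATA_DIR, check=True)
```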