Mon Nov 06 10:08:39 2023 INFO: Fill completed in 8min 35secs
Travis confirmed a good fill curbside.
TITLE: 11/06 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY: Locked for 9.5 hours. No current alarms; temps are very stable. Our range looks to be slowly decreasing; I'll look into it.
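A quick way to quantify "slowly decreasing" is to fit a slope to the range trend. This is an illustrative sketch only (the samples are made up; real minute trends would come from NDS), not an actual site tool:

```python
# Hypothetical sketch: estimate the drift of the BNS range from trend samples.
# The data below are fabricated for illustration.
def range_slope(samples, dt_minutes=1.0):
    """Least-squares slope (Mpc per minute) of evenly spaced range samples."""
    n = len(samples)
    t = [i * dt_minutes for i in range(n)]
    t_mean = sum(t) / n
    y_mean = sum(samples) / n
    num = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, samples))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return num / den

# Example: a range drifting down by 0.05 Mpc per minute over an hour
samples = [158.0 - 0.05 * i for i in range(60)]
print(round(range_slope(samples), 3))  # -0.05
```

A slope of a few hundredths of a Mpc per minute over an hour-long window would confirm a slow, steady decline rather than noise.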
TITLE: 11/06 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
LOG:
No log for this shift.
Lockloss @ 05:33 UTC - no obvious cause.
Back to observing at 06:32 UTC
State of H1: Observing at 157Mpc
H1 has been locked and observing for just over 4 hours. Quiet evening so far; microseism is slowly falling.
Genevieve, Lance, Tyler, Robert
In August, we determined that the 52 Hz peak in DARM could be dramatically reduced or eliminated by turning off the chilled water pump CHWP-2 at EX (72331), which was running at 52 Hz. On Friday I decided to check if the water pump was lining up in frequency with a mechanical resonance that made the coupling worse. So I used the variable frequency drive to move the pump frequency around. The figure shows that I found three frequencies where the water pump did not produce an obvious peak in DARM and one where it produced a bigger peak in DARM.
After consulting with Tyler, I have left the frequency at 58 Hz (98% on the FMCS screen). Of course, whatever resonance was associated with the peak in DARM at 52 Hz will still be excited by non-pump seismic motion, so the 52 Hz peak may not go away completely. I may move the pump frequency around some more this week to better understand the coupling.
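The bookkeeping for a sweep like this is simple: step the VFD through candidate frequencies, record the DARM peak amplitude at each, and keep the settings with no obvious peak. A hedged sketch with made-up amplitudes (the real measurement is the DARM spectrum, not these numbers):

```python
# Illustrative sketch of the VFD sweep bookkeeping described above.
# The amplitudes are fabricated; only the procedure is real.
def quiet_frequencies(peak_amplitudes, threshold):
    """Pump frequencies (Hz) whose DARM peak stays below `threshold` (arb. units)."""
    return sorted(f for f, amp in peak_amplitudes.items() if amp < threshold)

# Made-up example: three quiet settings and one that made the peak bigger
sweep = {52.0: 5.0, 54.0: 0.2, 56.0: 9.0, 58.0: 0.1, 60.0: 0.3}
print(quiet_frequencies(sweep, threshold=1.0))  # [54.0, 58.0, 60.0]
```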
TITLE: 11/05 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 9mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.29 μm/s
QUICK SUMMARY: H1 just reached NLN and waiting for ADS to converge.
TITLE: 11/05 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
- Started off with running an IA, which ran through with no issue
- Went through the rotating call list: contacted Daniel, left a message, then called Jim who noticed that all 4 quad ISI SC guardians had not been reset since the master timing was swapped
- After the issue was solved, relocking was automated, back to NLN @ 19:47/OBSERVE @ 20:10
- H1 went into COMMISSIONING @ 21:50 to adjust SQZ temperature/check SQZ angle - attached SDF
- Lockloss @ 22:58, cause unknown
- Relocking:
LOG:
No log for this shift.
Adding onto Austin's note about the TCS powers: the CO2Y laser power is lower than in previous lock stretches (trend attached). Current power is around 1.59W, and DIAG_MAIN shows a message when it drops below 1.6W. This appears to have started with today's lock, after the laser was down for most of last night.
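The check involved is just a threshold comparison. A minimal sketch of that kind of test (function name, message text, and values are illustrative, not the actual DIAG_MAIN guardian code):

```python
# Hypothetical sketch of a DIAG_MAIN-style low-power check.
# Threshold from the entry above; the rest is illustrative.
CO2Y_MIN_W = 1.6

def co2y_message(power_w, threshold=CO2Y_MIN_W):
    """Return a warning string when the power is below threshold, else None."""
    if power_w < threshold:
        return f"CO2Y power low: {power_w:.2f} W < {threshold:.1f} W"
    return None

print(co2y_message(1.59))  # CO2Y power low: 1.59 W < 1.6 W
print(co2y_message(1.72))  # None
```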
Lockloss @ 22:58, looks like ASC-AS_A_DC saw motion first. Also noticed that the IMC unlocked right before the lockloss (does this usually happen?)
H1 has just made it back to NLN (@ 19:47 UTC) after 18 hours of being down due to issues relocking. The problem was diagnosed as a combination of high microseism and the ISI SC guardians not being in the right state for all 4 quads after the master timing was swapped on Friday. The ISI SC issue has since been fixed and microseism is finally starting to trend down.
Sun Nov 05 10:10:16 2023 INFO: Fill completed in 10min 13secs
Last night, while LLO was down and before we lost lock, I used 1/2 hour to inject 13.1 Hz ground currents onto the chassis of AC1 in the CER, and also imitated the 13.1 Hz sound AC1 made using the large speaker. Neither the acoustic nor the electronic ground injections produced noise in DARM at the same level as AC1 had.
TITLE: 11/05 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.43 μm/s
QUICK SUMMARY:
- Issues still ongoing from last night with a bad COMM/locklosses at TR_CARM, will start trying to troubleshoot
- Microseism is on the rise, making ALS hard to lock
- CDS/DMs ok
Lots of issues keeping GREEN_ARMS locked (maybe from microseism) and finding IR (it keeps missing DIFF); we're unable to survive past CARM_TO_TR. The common mode servo board output signals don't look abnormal; there was some glitching, but trending back shows it usually does this. I tried holding at START_TR_CARM and PREP_TR_CARM, where we can survive, and I looked at a bunch of the signals there: MICH, ADS, IMC/LSC REFL_SERVO all seemed normal. When I tried to move on, I got the "NO IR in arms" message and we lost lock. Other times in the same state we lost lock at various stages of the CARM gain reduction.
Both arms have been pretty unstable the whole shift (Y more so), but the WFS have managed to keep them locked for a decent amount of the time; it got worse towards the end. While the microseism is high, we've been able to lock with it at the same levels before, so it's probably not the sole issue, but it certainly isn't helping.
All of the test mass BSC ST2 sensor correction guardians were complaining of being in the wrong state. The gains were all zero for these, so all of the test masses were moving more than intended at 0.5 Hz. Init-ing these guardians recovered the gains; then I init'd the SEI_ENV guardian to get everything back to managed. This allowed Austin to get past DRMI and finally to NLN. I should have glanced at this after hearing about the front-end crashes on Friday; all of the ST2 SC guardians had yellow notifications, so it was easy to spot. When we restart an ISI front-end, we should probably cycle SEI_ENV through maintenance then back to CALM to make sure everything is in the right state.
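The check that caught this reduces to "which quads have a sensor-correction gain stuck at zero?" A hedged sketch of that logic (quad names are real; the function and the data structure are illustrative stand-ins for the actual EPICS channels and Guardian code):

```python
# Illustrative sketch: flag quads whose ST2 sensor-correction gain is zero
# and therefore need their guardian re-initialized. Data are fabricated.
def stale_sc_gains(gains):
    """Return the quads whose sensor-correction gain is zero."""
    return [quad for quad, g in gains.items() if g == 0.0]

# After the front-end restarts, all four quads were in this state:
gains = {"ITMX": 0.0, "ITMY": 0.0, "ETMX": 0.0, "ETMY": 0.0}
print(stale_sc_gains(gains))  # ['ITMX', 'ITMY', 'ETMX', 'ETMY']
```

A check like this, run automatically after any ISI front-end restart, would have flagged the problem before the 18 hours of failed relocking.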
I've done some injections today but I'll alog about that later.
After I was done with the injections, the breakout board was removed from the back of the OM2 heater driver and the Beckhoff cable was fully connected again. This happened at around 21:40 UTC today.
Yesterday during the maintenance, Fernando and Daniel changed the Beckhoff configuration such that the Beckhoff terminal won't switch between thermistor 1 and thermistor 2 (alog 73886). Now it's only reading thermistor 1 (cold one) w/o switching. Thermistor 2 (hot one) is not read out. Detchar, please see if you can see the comb.
Good news- no sign of the 1.66 Hz comb in Fscan daily spectra after Nov 1 (checked the 2nd-4th).
Further change: Beckhoff was rewired to use 2 separate terminals to measure the 2 thermistors.
Took MICH FF measurements following instructions in lsc/h1/scripts/feedforward/README.md but running MICHFF_excitation_ETMYpum.xml. Data saved in lsc/h1/scripts/feedforward/ as .txt files. Last tuned MICH in 73420.
I saved a new filter in FM6 as 27-10-23-b (red trace) but it made the MICH coupling worse between 20 and 80Hz so we left the original (pink trace). We could try to re-fit the data to load in Wednesday's commissioning period.
I re-fit this data and think i have a better filter saved (not loaded) in FM3 as "27-10-23-a". We could try this during a commissioning period this week if we wanted to try to further improve MICH.
Tried this FM3 FF from 2023/11/14 16:04:30 UTC to 16:06:30 UTC. It did not cause a lockloss. I did not run the MICH comparison plot, but DARM looked slightly worse. Plot attached.
From 16:07:05, I tried FM6 which is the currently installed MICH FF (FM5) without the 17.7Hz feature 74139.
I tried to test a new MICH FF FM3 that Camilla made. First I measured the current MICH FF FM5, as shown in the attached figure. The pink and black curves are the current MICH FF FM5 on 20231027 and 20231103, respectively; the coupling got worse between 30 and 80 Hz over that week. The MICH FF on 20231103 was measured 6.5 hours into the lock. Then I ramped the MICH FF gain to 0, turned off FM5, and turned on FM3. After I ramped the MICH FF gain back to 1, a lockloss happened immediately.
Sorry that this caused the 1383077917 lockloss.
Unsure why this FM3 would be unstable. The lockloss occurred 10 seconds after the MICH FF had finished ramping on (13 s minus the 3 s ramp time). FM3 MICH_FF looks to be outputting a factor of ~2 higher than the current FM5 filter. I don't see any obvious instabilities in the 10 seconds before the lockloss.
LSC and ASC plots attached. I wonder if the lockloss was just badly timed. We could attempt to repeat this before our Tuesday Maintenance period.
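For anyone repeating the test, the swap sequence described a few entries up (ramp the gain to 0, toggle FM5 off and FM3 on, ramp back to 1) can be sketched as follows. This is purely illustrative state bookkeeping; the real switches are filter-bank buttons and ramped EPICS gains, not a Python dict:

```python
# Hypothetical sketch of the MICH FF filter-swap sequence. The dict stands in
# for the real filter bank; ramping is not modeled, only the end states.
def swap_ff(state):
    s = dict(state)
    s["gain"] = 0.0   # first ramp the MICH FF gain to zero
    s["FM5"] = False  # turn off the old filter
    s["FM3"] = True   # turn on the new filter
    s["gain"] = 1.0   # then ramp the gain back to one
    return s

print(swap_ff({"gain": 1.0, "FM5": True, "FM3": False}))
# {'gain': 1.0, 'FM5': False, 'FM3': True}
```

If the lockloss was indeed badly timed rather than caused by FM3, the same sequence during a quiet stretch before Tuesday maintenance should distinguish the two.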