At 06:17 UTC (11:17 pm PT), H1 dropped out of OBSERVING due to a fast SQZ SDF diff (I took it back within 45 seconds).
Tony and I were chatting at the time; when I mentioned the situation, he pointed me to some recent (within the last week) similar instances logged by Camilla in alogs 71653 & 71652.
Attached is a plot showing the same culprits (H1:SQZ-FIBR_SERVO_COMGAIN & H1:SQZ-FIBR_SERVO_FASTGAIN) dropping to a lower value for 0.5 s and then returning to their normal values.
Do we need to stop monitoring these channels? Should we make an FRS?
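For future reference, a brief glitch like this can be cross-checked offline by scanning the channel's history for short-lived excursions from the nominal value. A minimal sketch (the function name, sample rate, and threshold are my own assumptions; real data would come from NDS2 rather than a hard-coded list):

```python
def find_transient_dips(samples, nominal, min_drop=0.5, rate_hz=16):
    """Return (start_index, duration_s) for runs where the channel
    drops below nominal by at least min_drop and then recovers."""
    dips, start = [], None
    for i, v in enumerate(samples):
        below = v <= nominal - min_drop
        if below and start is None:
            start = i                       # dip begins
        elif not below and start is not None:
            dips.append((start, (i - start) / rate_hz))  # dip ends
            start = None
    return dips

# e.g. a gain channel sitting at 18 that dips to 12 for 8 samples (0.5 s at 16 Hz)
trace = [18] * 10 + [12] * 8 + [18] * 10
print(find_transient_dips(trace, nominal=18))  # [(10, 0.5)]
```
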
At 00:00:02, received the Verbal: "Wap On in LVEA, EX, EY"
Not really sure what this was about. When I went to the sitemap -> CDS -> unifi wap screen, I saw that all of the WAPs were off (except the MSR). NDSCOPE showed that for a brief time they went from a value of 2 to 0 and back to 2. Not clear what caused this, but I found it odd and worth noting.
Closes FAMIS 25077. Last done in alog 71721.
Only have results for 3 of the 4 test masses (the ETMx measurement was not available).
SUMMARY:
This evening we coordinated going out of Observing for L1 & H1 to run two calibration measurements (~30 min). Unfortunately, I was not able to run the broadband measurement, but I did run the simuLines measurement. It also took me extra time to return to Observing: it was not clear how to run this measurement with the new H1 Manager, and H1 did not restore smoothly, with excitations left running that kept us out for a bit.
Attempt To Run Broadband PCAL measurement:
Vladimir and Jenne coordinated running 30 min of calibration measurements at 5 pm PT. I spent my first hour getting everything set up so I could quickly run the measurements.
corey.gray@cdsws13:~$ pydarm measure --run-headless bb
INFO | config file: /ligo/groups/cal/H1/ifo/pydarm_cmd_H1.yaml
INFO | available measurements:
pcal: PCal response, swept-sine (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_SS__template_)
ERROR: measurement 'pcal' template file not found: /ligo/groups/cal/H1/ifo/templates/PCALY2DARM_SS__template_
corey.gray@cdsws13:~$
After consulting with Jenne & Vladimir, we canceled attempting this measurement and moved on to the simuLines measurement.
simuLines script
Attached is a screenshot of the terminal and the testpoints which were on the board while it was running.
Delays In Restoring H1 & Returning to Observing:
Once simuLines was complete, I thought I could have H1_MANAGER restore H1 back to NLN, but after an attempt (or two), that didn't work. So I set ISC_LOCK to AUTO, selected NLN, and then took it to MANAGED. I then ran an INIT on H1_MANAGER, and I believe this restored H1. I'm still not sure whether this is the preferred way.
Next, I followed the wiki instructions to work on CAL_AWG_LINES. It was in IDLE. But the wiki instructions said "Make sure CAL_AWG_LINES guardian is in state 10, LINES_ON, and you see its PCALY lines ON between 5 and 25 Hz." I believe I saw the PCALY lines, but we were in IDLE, so I toggled the lines on/off via guardian per the instructions. But I could not go to Observing due to an SDF diff and CAL_AWG_LINES being in a bad state. Soooo...
But I still had excitations running! (see attached) I spent the next few minutes figuring out how to kill them, and then finally returned to OBSERVING.
Since I'm not totally sure of the best way H1 should be restored in this sort of situation, I will not update that part of the wiki with regards to my experience tonight, due to:
Apologies -- with the amount of software work needed to convert CAL_AWG_LINES to front-end oscillators -- I forgot the most important part: human coordination! CAL_AWG_LINES is now deprecated with the addition of the new front-end oscillators as of yesterday, Aug 01 2023 (2023-08-01) -- check out LHO:71881. That means one no longer has to do any funny business finagling CAL_AWG_LINES to drive the collection of low-frequency lines at 8, 11, 15, and 24 Hz. I've now updated the wiki page that Corey cites -- TakingCalibrationMeasurements no longer has any mention of CAL_AWG_LINES. Indeed, you (Corey) did the right thing by reverting the PCALY_DARM_GAIN; that was the path through which CAL_AWG_LINES drove the above-mentioned 4 lines. I'll make sure there are no more guardians toggling this gain today.
Smooth sailing for H1. Ran a calibration measurement (though we couldn't run the broadband one)---will alog specifics separately. H1 also rode through a magnitude 5.7 Tonga earthquake, which is nice.
TITLE: 08/01 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 7 mph gusts, 5 mph 5-min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
Receiving an H1 that has been locked for almost 2 hrs (and Observing for over an hour).
Also preparing to bump out of Observing shortly to run a calibration measurement coincident with LLO.
There are at least three cameras (ETMX, ETMY, & ISCT6_AS_AIR on nuc35 & nuc30) which flicker blue.
Winds are under 20 mph and the temperature is 91 °F.
As noted in my maintenance summary alog, while FW0 was being restarted this morning, FW1 spontaneously restarted itself at the same time. This is the first time we have seen this, and as far as we can tell it is a complete coincidence. Previous spontaneous FW restarts have happened at random times, typically within 30 minutes of the original restart.
I have written a python script to report the individual missing frame files resulting from a FW restart, and to report an error if these times coincide between FW0 and FW1.
Output for today:
FW0 Missing Frames [1374951040, 1374951104]
FW1 Missing Frames [1374950784, 1374950848, 1374951040, 1374951104]
ERROR: no frame written for GPS 1374951040!!!
ERROR: no frame written for GPS 1374951104!!!
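The coincidence check the script performs reduces to a set intersection: a GPS frame time is truly lost only if it is missing from both frame writers. A hedged sketch of that logic (function names and the 64 s frame length are my assumptions, not the actual script):

```python
FRAME_LEN = 64  # s; assumed raw frame length

def missing_frames(written_gps, start_gps, end_gps):
    """GPS start times in [start_gps, end_gps) with no frame file on disk."""
    have = set(written_gps)
    return [t for t in range(start_gps, end_gps, FRAME_LEN) if t not in have]

def unrecoverable(missing_fw0, missing_fw1):
    """Frames absent from BOTH FW0 and FW1 -- these data are truly lost."""
    return sorted(set(missing_fw0) & set(missing_fw1))

fw0 = [1374951040, 1374951104]
fw1 = [1374950784, 1374950848, 1374951040, 1374951104]
for gps in unrecoverable(fw0, fw1):
    print(f"ERROR: no frame written for GPS {gps}!!!")
```

With today's numbers this reproduces the two ERROR lines above: the two earlier frames missing from FW1 alone were still captured by FW0.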
h1digivideo3, the computer serving the flickering cameras, ran out of memory. Camera 15 was taking up the bulk of it.
Before any action could be taken, the OS killed the Camera 15 server, which later restarted with a much smaller memory footprint, resolving the immediate issue.
Logs for Camera 15 were filled with queue overflow messages for both udpsink, the streaming portion of the server, and appsink, the centroid calculation. These might indicate the cause of the memory leak.
Other camera servers are also using large amounts of memory. We might want to restart these servers every Tuesday.
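One way to watch for this before the OOM killer steps in is to poll each camera server's resident memory. A rough Linux-only sketch (the function names, process list, and the limit are all assumptions, not the actual monitoring setup):

```python
import os
import re

def rss_mb(pid):
    """Resident set size of a process in MB, read from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        m = re.search(r"VmRSS:\s+(\d+)\s+kB", f.read())
    return int(m.group(1)) / 1024 if m else 0.0

def flag_heavy(pids, limit_mb=2048):
    """Return the pids whose memory footprint exceeds limit_mb."""
    return [p for p in pids if rss_mb(p) > limit_mb]

# e.g. check this process against a deliberately tiny limit
print(flag_heavy([os.getpid()], limit_mb=1))
```

A cron job running something like this each Tuesday could decide which camera servers actually need the restart, rather than restarting all of them unconditionally.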
TITLE: 08/01 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
SHIFT SUMMARY:
Decently busy maintenance day: model changes (PCAL, ISIs), DAQ restart, CP1 thermocouple, turbopump test, SQZT0 table work, OMC measurement... Handing off to Corey.
Lockloss at 15:16 for the start of maintenance activities.
Lock#1:
Y-arm power was lower than usual and oscillating a bit, but we were still experiencing some shaking from local EQs. During Find_IR it complained that the Y-arm alignment was bad and would not search because of it; I tried to touch up the Y-arm, but it lost lock.
Lock#2:
The Y-arm went through increasing flashes and locked at a normal-looking power, and we had no issues finding DIFF_IR.
We got back into NLN at 21:30 UTC and had to wait for: ADS to converge, a potential issue with the new CAL lines to be investigated, and a squeezing issue to be investigated. Back into Observing at 22:02 UTC.
We dropped out of Observing again at 22:11 UTC for a PCAL Y issue that was noticed; back in Observing at 22:16 UTC.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:04 | FAC | Karen | EndX | N | Tech clean | 16:04 |
| 15:12 | FAC | Cindy | HAM shack | N | Tech clean | 16:05 |
| 15:16 | VAC | Travis | LVEA | N | CP1 thermocouple | 16:41 |
| 15:27 | VAC | Jordan | LVEA | N | Join Travis CP1 | 17:25 |
| 15:28 | EE | Fil | LVEA | N | Check with Travis | 16:30 |
| 15:38 | ISC | Jason | FCES | N | Find alignment equipment | 15:48 |
| 15:44 | FAC | Richard | LVEA | N | check out VAC racks | 16:04 |
| 15:45 | CAL | Tony | PCAL lab | LOCAL | PCAL work | 22:35 |
| 15:52 | | Rick +1 | LVEA | N | Quick tour | 17:06 |
| 15:54 | VAC | Gerardo | LVEA | Y | Join CP1 thermocouple crew | 18:23 |
| 16:05 | FAC | Cindy | High bay | N | Tech clean | 16:31 |
| 16:05 | SEI | Jim | LVEA | N | HAM1 checks | 16:41 |
| 16:13 | SQZ | Sheila, Briana | LVEA, SQZT0 | LOCAL | Table work | 17:10 |
| 16:17 | FAC | Christina | OSB, Ends, FCES | N | Move water pallets | 17:49 |
| 16:29 | ISC | Keita | LVEA | N | Check board connections, connect if necessary | 16:36 |
| 17:05 | CDS | Erik | Ends, X then Y | N | Stage a setup | 18:08 |
| 17:10 | SQZ | Sheila | LVEA | N | Laser hazard transition | 17:29 |
| 17:23 | EE | Fil + APS | LVEA | Y | Bring APS around H2, output arm | 19:05 |
| 17:29 | SQZ | Sheila, Naoki, Genevieve, Briana | LVEA, SQZT0 | Y | Table work, alignment | 19:27 |
| 17:30 | SAF | LVEA IS LASER HAZARD | LVEA | Y | LVEA IS LASER HAZARD | 19:45 |
| 17:50 | FAC | Christina | LVEA | Y | Checks | 18:02 |
| 17:06 | FAC | Cindy | LVEA | Y | Tech clean | 18:07 |
| 18:33 | VAC | Jordan | LVEA | Y | Turn off turbopumps | 18:40 |
| 18:34 | ISC | Keita | LVEA | Y | Undo connection | 18:36 |
| 19:31 | FAC | TJ | LVEA | Y | Sweep | 19:48 |
| 20:02 | SQZ | Sheila | LVEA | N | Undo changes | 20:07 |
| 21:08 | VAC | Gerardo | MidX | N | Turn on a compressor | 21:32 |
| 20:50 | SEI | Jim | LVEA | N | Grab something by HAM1 | 21:09 |
| 21:42 | | Jim & Mitch | Mids | N | Parts hunt | 22:25 |
These plots show the Guardian states of H1 and L1, and the lock time of H1 compared to the (maximum) input power. The Guardian state plot is similar to this one on the summary pages, but specifically shows what percentage of time the Guardian state was ≥600 (≥2000 for L1), regardless of whether or not we were observing. Based on the input power plot, it looks like there may be a bit of improvement in the Guardian state after the power was decreased.
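The "percentage of time at or above a Guardian state" statistic is straightforward to compute from a uniformly sampled state channel; a minimal sketch (channel fetching omitted, names are my own):

```python
def fraction_in_state(states, threshold):
    """Fraction of (uniformly sampled) Guardian state values at or above
    threshold, e.g. >=600 for H1 or >=2000 for L1."""
    if not states:
        return 0.0
    return sum(s >= threshold for s in states) / len(states)

# e.g. a day sampled once per minute: 1080 min at state 600, 360 min at 100
print(fraction_in_state([600] * 1080 + [100] * 360, 600))  # 0.75
```
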
Out of the corner of my eye, I noticed some of the cameras flickering to blue (frozen). It's certainly happening to the OMC trans camera (which isn't so critical), but I think it was also happening to some of the cameras on nuc35, although I can't say which of them were flickering. If either ETM camera or the BS camera freezes for a long time, we'll get pushed out of Observing, as the camera servo guardian will transition us back to ADS. I pinged Dave and Patrick in the control room chat channel to see if they had any thoughts.
I've seen the ETM cameras go blue for a second or two a couple of times today; they go blue and then come back at the same time.
Erik, Tony, and Patrick were having a look at this earlier (and noted it in alog 71893). It seems to all be fine for now. It does look like the OMC trans camera (which is the camera 15 that they noticed restarting) came back with much higher than usual exposure. I have set H1:VID-CAM15_EXP_REQ back to its nominal (according to ndscope) value of 500, since we're out of Observe anyway for calibration measurements.
J. Kissel, D. Barker

As of today Dave helped me install the new front-end, EPICS-controlled oscillators discussed in LHO:71746. Then, after crafting a few new MEDM screens (see comments below), I've turned ON some of those oscillators in order to replace the unstable function of the CAL_AWG_LINES guardian. So, there are no "new" calibration lines (not since we turned CAL_AWG_LINES back ON last week at 2023-07-25 22:21:15 UTC -- see LHO:71706) -- but they're now driven by front-end, EPICS-controlled oscillators rather than by guardian using the python bindings for awg (which was unstable across computer crashes and other connection interruptions). This is true as of the first observation segment today: 2023-08-01 22:02 UTC.

However, due to a mishap with me misunderstanding the state of the PCALY SDF system (see LHO:71879), I accidentally overwrote the PCALXY comparison line at 284.01 Hz, and we went into observe. Thus, the short observation segment between 22:02 - 22:11 UTC is out of nominal configuration, because there's no PCALY line contributing to the PCALXY comparison. This was rectified by the second observation segment starting on 2023-08-01 22:16 UTC.

Also, because of these changes the subtraction team should switch their witness channel for the DARM_EXC frequencies to H1:LSC-CAL_LINE_SUM_DQ. The PCALY witness channel remains the same, H1:CAL-PCALY_EXC_SUM_DQ, as the newly used oscillators sum into the same channel. Below, I define which oscillator number is assigned to which frequency.
Here's the latest list of calibration lines:

| Freq (Hz) | Actuator | Purpose | Channel that defines Freq | Changes Since Last Update (LHO:69736) |
|---|---|---|---|---|
| 8.825 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC1_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 8.925 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC5_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 11.475 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC2_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 11.575 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC6_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 15.175 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC3_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 15.275 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC7_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 24.400 | DARM (via ETMX L1,L2,L3) | Live DARM OLGTFs | H1:LSC-DARMOSC4_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 24.500 | PCALY | Live Sensing Function | H1:CAL-PCALY_PCALOSC8_OSC_FREQ | Former CAL_AWG_LINE, now driven by FE OSC; THIS aLOG |
| 15.6 | ETMX UIM (L1) | SUS \kappa_UIM excitation | H1:SUS-ETMY_L1_CAL_LINE_FREQ | No change |
| 16.4 | ETMX PUM (L2) | SUS \kappa_PUM excitation | H1:SUS-ETMY_L2_CAL_LINE_FREQ | No change |
| 17.1 | PCALY | actuator kappa reference | H1:CAL-PCALY_PCALOSC1_OSC_FREQ | No change |
| 17.6 | ETMX TST (L3) | SUS \kappa_TST excitation | H1:SUS-ETMY_L3_CAL_LINE_FREQ | No change |
| 33.43 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC4_OSC_FREQ | No change |
| 53.67 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC5_OSC_FREQ | No change |
| 77.73 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC6_OSC_FREQ | No change |
| 102.13 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC7_OSC_FREQ | No change |
| 283.91 | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC8_OSC_FREQ | No change |
| 284.01 | PCALY | PCALXY comparison | H1:CAL-PCALY_PCALOSC4_OSC_FREQ | Off briefly between 2023-08-01 22:02 - 22:11 UTC, back on as of 22:16 UTC |
| 410.3 | PCALY | f_cc and kappa_C | H1:CAL-PCALY_PCALOSC2_OSC_FREQ | No change |
| 1083.7 | PCALY | f_cc and kappa_C monitor | H1:CAL-PCALY_PCALOSC3_OSC_FREQ | No change |
| n*500+1.3 (n=[2,3,4,5,6,7,8]) | PCALX | Systematic error lines | H1:CAL-PCALX_PCALOSC1_OSC_FREQ | No change |
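For convenience, the n*500+1.3 comb in the last entry expands to the following frequencies (a trivial enumeration of the stated formula):

```python
# PCALX systematic-error comb: n*500 + 1.3 Hz for n = 2..8
comb_hz = [round(n * 500 + 1.3, 2) for n in range(2, 9)]
print(comb_hz)  # [1001.3, 1501.3, 2001.3, 2501.3, 3001.3, 3501.3, 4001.3]
```
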
As a part of deprecating CAL_AWG_LINES, I've updated the ISC_LOCK guardian to use the new main switches for the DARM_EXC lines for the transitions between NOMINAL_LOW_NOISE and NLN_CAL_MEAS. That main switch channel is H1:LSC-DARMOSC_SUM_ON, which enables excitations to flow through to the DARM error point when set to 1.0 (and blocks them when set to 0.0). I've committed the new version of ISC_LOCK to the userapps repo, rev 26039.
Here are the updated
/opt/rtcds/userapps/release/lsc/common/medm/
LSC_OVERVIEW.adl
LSC_DARM_EXC_OSC_OVERVIEW.adl
LSC_CUST_DARMOSC_SUM_MTRX.adl
The new DARM oscillators screen (LSC_DARM_EXC_OSC_OVERVIEW.adl) is linked in the top-middle of the LSC_OVERVIEW.adl. The only sub screen on the LSC_DARM_EXC_OSC_OVERVIEW.adl is the summation matrix (LSC_CUST_DARMOSC_SUM_MTRX.adl).
I have not yet gotten to adding all the new PCAL oscillators to their MEDM screens, but I'll do so in the fullness of time.
detchar-request git issue for tracking purposes.
I found a bug in the
/opt/rtcds/userapps/release/lsc/common/medm/
LSC_DARM_EXC_OSC_OVERVIEW.adl
where DARMOSC1's TRAMP field was errantly displayed as all 10 oscillators' TRAMPs, a residual of the copy-paste I did during screen generation.
Fixed it. Now committed to the above location as of rev 26170.
Finally got around to updating the PCAL screens. Check out
/opt/rtcds/userapps/release/cal/common/medm/
PCAL_END_EXC.adl
CAL_PCAL_OSC_SUM_MATRIX.adl
as of userapps repo rev 26179.
See attached screenshots.
Sheila, Naoki, Briana, Genevieve
Pump AOM alignment
Today we went to SQZT0 and aligned the pump AOM to get more pump power. First we set the ISS drivepoint to 0 so that only the 0th-order beam was present, and maximized the AOM throughput by aligning the AOM. The pump power after the AOM was 23 mW before alignment and 37 mW after. We also aligned the pump fiber. Although the SHG output power is only 49 mW, the pump going to the fiber is 31 mW and the ISS can be locked with an OPO trans of 80. Since the pump power is increased, we adjusted the OPO temperature. The fiber alignment might not be optimized and we will check it soon.
Issue of SHG demod signal
To check whether the SQZ laser has a mode hop, we scanned the SHG and checked the trans signal. We found no evidence of a mode hop, but we did find strange behavior in the SHG demod signals: the I signal, which is used for the SHG lock, is much smaller than Q. We changed the demod phase, but the Q signal was always larger than the I signal.
To use the Q signal for the SHG lock, we swapped the I and Q cables, maximized the Q signal by optimizing the demod phase, and flipped the sign of the servo. Although we should have a roughly 10 times larger signal with Q than with I, the UGF of the SHG loop with the Q signal is around 250 Hz, almost the same as with the I signal. We suspect that the demod board has an issue and will check it soon.
Finally, we switched back to the nominal I signal for the SHG lock. We increased the SHG gain from -11 to 5 to put the UGF around 1 kHz. The SHG guardian has been updated.
Sheila, Naoki, Genevieve, Briana
Here are some images from altering the phase of the SHG demod signal. The first image is where we began, with the phase unaltered (0 degrees added); in the second image it was moved by 45 deg, in the third by 90 deg, and in the fourth by 180 deg. (Yellow is the I signal, blue is the Q signal.)
Channel one has a 50 Ω terminator enabled. Not sure it matters because the signal is small, but it will have some effect. Probe setup on channel 2?
We ran the functionality test on the main turbopumps in the corner station during Tuesday maintenance (8/1/23). The scroll pump is started to take the pressure down to low 10^-2 Torr, at which point the turbopump is started; the system reaches low 10^-8 Torr after a few minutes. The turbopump system is then left ON for about 1 hour, after which it goes through a shutdown sequence.
No issues were encountered while performing the functionality test on these 3 stations.
Output Tube Turbo: Note: The flex hose connecting the turbo exhaust to the WTCB1 header line has been physically disconnected.
Bearing Life: 100%
Turbo Hours: 5617
Scroll Pump Hours: 5558 - Needs Tip Seal Swap
XBM Turbo: Note: The flex hose connecting the turbo exhaust to the WTCB1 header line has been physically disconnected.
Bearing Life: 100%
Turbo Hours: 782
Scroll Pump Hours: 780.5
YBM Turbo:
Bearing Life: 100%
Turbo Hours: 599
Scroll Pump Hours: 1875
Closes WP 11340
Summary:
Scanned the frequency of the PSL instead of the OMC PZT to measure the OMC finesse.
The idea is to inject into the IMC servo (i.e. ultimately the AOM) while the OMC PZT voltage is left close to the 00-mode resonance. We cannot do this very quickly (there is a 1.6 Hz pole in the VCO), unfortunately, but if the OMC length doesn't change much during the injection, this is a good alternative to the PZT scan.
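To put a rough number on the 1.6 Hz VCO pole limitation: a sinusoidal drive through a single real pole is attenuated by 1/sqrt(1 + (f/f_pole)^2). A quick sketch (the 0.5 Hz example is the injection frequency used later in this entry; the function name is my own):

```python
import math

def single_pole_mag(f_hz, f_pole_hz):
    """Magnitude response of a single real pole at f_pole_hz."""
    return 1.0 / math.sqrt(1.0 + (f_hz / f_pole_hz) ** 2)

# at a 0.5 Hz scan, the 1.6 Hz pole costs only ~5% amplitude,
# but scanning much faster would attenuate the drive significantly
print(round(single_pole_mag(0.5, 1.6), 3))  # 0.954
```
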
The analysis will come later.
What was done:
Injected into H1:IMC-L_EXC at 0.5 Hz, 40000 cts pp. (At first we connected H1:LSC-EXTRA_AO_2_EXC to the EXCB on the IMC board to inject into the fast path, but we didn't have much range because of the 1/10 attenuation in the EXC input. IMC_L was better.)
H1:IMC-F_OUT_DQ calibration is 59.97 dB Hz/ct (i.e. F_OUT is already pretty well calibrated in kHz).
This was measured by injecting at very low frequency (0.05 Hz) so we could use the counter (H1:IMC-VCO_FREQUENCY) as our frequency readback. The dtt template used is /ligo/home/keita.kawabe/PSL_VCO_CAL_20230801.xml.
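As a sanity check on the quoted value: interpreting 59.97 dB as 20*log10 of the linear factor (my assumption about the convention) gives roughly 1 kHz per count, consistent with "already pretty well calibrated in kHz":

```python
# 59.97 dB (amplitude convention, 20*log10) -> linear Hz per count
cal_hz_per_ct = 10 ** (59.97 / 20)
print(round(cal_hz_per_ct, 1))  # ~996.6 Hz/ct, i.e. ~1 kHz per count
```
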
We used the single bounce beam from ITMY. ASC-AS_A and ASC-AS_B DC centering was turned on (DC3 and DC4). OMC ASC was on OMC QPDs.
We adjusted H1:OMC-PZT2_OFFSET to bring the OMC roughly to its 00 carrier resonance (1st attachment).
After everything was adjusted properly we left it from 11:30:50 to 11:31:15 local time (2023/08/01/18:30:50 - 18:31:15 UTC).
Just from looking at the close-up (2nd attachment), the scan might have been a bit too fast, but not terribly so, as the transmission peaks in DCPD_SUM always rose faster than they fell regardless of the sign of, e.g., the slope of IMC_F_OUT_DQ.
Multiplying the frequency by 2 to account for the AOM double pass and fitting the XY data gives a finesse of 405. The fit error is small; the systematic error probably is not.
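The finesse follows from the measured linewidth as FSR/FWHM once the frequency axis has been doubled for the AOM double pass. A minimal sketch (the FSR and FWHM values below are purely illustrative; the entry does not state them):

```python
def finesse(fsr_hz, fwhm_hz):
    """Cavity finesse from free spectral range and full-width half-maximum."""
    return fsr_hz / fwhm_hz

# illustrative only: a cavity with a 261 MHz FSR and a ~644 kHz FWHM
# (after the factor-of-2 frequency correction) gives finesse ~405
print(round(finesse(261e6, 644.4e3)))  # 405
```
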
It seems that both DCPDs were railing (B was worse), which is why the peaks are not symmetric around the resonance.
H1:OMC-DCPD_A_GAINSET=0 (and H1:OMC-DCPD_B_GAINSET=0) means HIGH gain. We must repeat this with LOW gain.