Ran out of time working on the End Y HEPI pump controller upgrade and did not get to WP 11250.
Wrapping up Tuesday Maintenance. Main activities were:
The temperature swing shown in this aLOG is relatively small (~ +/-0.4 deg F), entirely explained by the maintenance day activities on the HVAC system, and recovered to 67 +/- 0.2 deg F by 17:00 PDT. See LHO:70428.
Daniel, Sheila
We turned off the 9 MHz, 45 MHz, and 117 MHz sidebands in order to do an OMC loss measurement. We used a single-bounce beam off ITMX, with 10 W input from the PSL. We spent some time trying to improve the alignment before making OMC scans.
locked: 1370711576 (OMC REFL avg 3.51 mW, OMC DCPD sum 15.23 mA)
unlocked: 1370711782 (OMC REFL avg 24.73 mW, OMC DCPD sum 0.078 mA)
OMC scan start: 1370712036, duration 100 seconds (2nd order modes are roughly 8% of the 00 mode)
shutter blocked: 1370712337 (OMC REFL avg -0.030 mW, OMC DCPD sum 8e-4 mA)
Jennie Wright plans to analyze this data to estimate OMC losses.
Here are the plots of ASC-AS_C_NSUM, OMC-QPD_A_NSUM, OMC-QPD_B_NSUM, and OMC-REFL_A_LF during these measurements. ASC-AS_C_NSUM shows between 22.8 and 32.1 mW, OMC-QPD_A_NSUM 23.4 mW, OMC-QPD_B_NSUM 23.0 mW, and OMC-REFL_A_LF 24.8 mW. According to Keita, OMC-REFL_A_DC has an incorrect calibration and shows 25.2 mW. The average of the two QPDs would be 23.2 mW, which is about 6.5% lower than 24.8 mW.
The second screenshot shows a time when the IMC was unlocked. The DC offsets are in the tens of µW at most.
Using data from the scan, I adapted the labutils OMCscan class to plot the fitted scan and adapted labutils/fit_two_peaks.py to fit a sum of two Lorentzian functions to distinguish the carrier 20/02 modes.
The first graph is the OMC scan plot; the second is the curve fit for the second-order carrier modes.
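The fit itself is a small least-squares problem. A minimal sketch of the approach (not the actual fit_two_peaks.py code; function names, data arrays, and initial guesses here are illustrative):

    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(f, f0, fwhm, amp):
        # single peak: height amp, center f0, full width at half max fwhm
        return amp * (fwhm / 2)**2 / ((f - f0)**2 + (fwhm / 2)**2)

    def two_lorentzians(f, f1, f2, fwhm, a1, a2):
        # two peaks sharing one cavity linewidth, as expected for C20 and C02
        return lorentzian(f, f1, fwhm, a1) + lorentzian(f, f2, fwhm, a2)

    # freq (MHz) and height would be the calibrated scan segment around the
    # second-order carrier modes; the p0 values are order-of-magnitude guesses
    # popt, _ = curve_fit(two_lorentzians, freq, height, p0=[0.0, 0.1, 0.05, 1.0, 1.0])
    # f1, f2, fwhm, a1, a2 = popt
    # print(f"peak separation: {abs(f2 - f1):.3f} MHz; heights: {a1:.2f}, {a2:.2f}")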
We expect the HOM spacing to be 0.588 MHz as per this entry and DCC T1500060 Table 25.
The measured spacing of the modes is 0.592 MHz.
From the heights of the two peaks, this suggests the mode mismatch into the OMC is (C20+C02)/(C00+C20+C02) = (0.83+1.158)/(15.32+0.83+1.158) = 11.5%.
From the locked/unlocked powers on the OMC REFL PD, the visibility on resonance is 1 - (3.51+0.03)/(24.73+0.03) = 85.7%.
If the total loss is 14.3%, this implies that the other, non-mode-matching losses are roughly 2.8%.
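For the record, the arithmetic behind those three numbers, as a plain Python transcription (values copied from above):

    # fitted peak heights from the scan (arbitrary units)
    c00, c20, c02 = 15.32, 0.83, 1.158
    mm = (c20 + c02) / (c00 + c20 + c02)   # mode mismatch -> ~11.5%

    # OMC REFL powers in mW; the -0.030 mW dark offset is subtracted
    p_locked, p_unlocked, p_dark = 3.51, 24.73, -0.030
    vis = 1 - (p_locked - p_dark) / (p_unlocked - p_dark)   # -> ~85.7%

    print(f"mode mismatch: {mm:.1%}")
    print(f"visibility: {vis:.1%}")
    print(f"non-mode-matching loss: {(1 - vis) - mm:.1%}")  # -> ~2.8%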
To run the OMC scan code, go to /ligo/gitcommon/labutils/omc_scan/ and run

python OMCscan_nosidebands.py 1370712036 100 "Sidebands off, 10W input" "single bounce" --verbose --make_plot -o 2

To do the double-peak fitting, run:

python fit_two_peaks_no_sidebands.py

Both should be run in the labutils conda environment and on git branch dev.
These scans were done with OM2 cold.
For comparison with new OMC measurements I used Sheila's code to process the visibility, but updated it to use nds2utils instead of gwpy, as I was having trouble getting data with gwpy.
The code is attached and should be run in the nds2utils conda environment on the CDS workstations.
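For anyone repeating this, a minimal sketch of the data fetch using the low-level nds2 client (nds2utils wraps calls like these); the server, port, and channel names below are my assumptions, not taken from the attached code:

    import nds2

    # locked stretch, from the times listed above
    start, duration = 1370711576, 30
    conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
    buffers = conn.fetch(start, start + duration,
                         ['H1:OMC-REFL_A_LF_OUT_DQ', 'H1:OMC-DCPD_SUM_OUT_DQ'])
    for buf in buffers:
        print(buf.channel.name, buf.data.mean())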
Power on refl diode when cavity is off resonance: 24.757 mW
Incident power on OMC breadboard (before QPD pickoff): 25.239 mW
Power on refl diode on resonance: 3.525 mW
Measured efficiency ((DCPD current / responsivity if QE = 1) / incident power on OMC breadboard): 70.4 %
Assumed QE: 100 %
Power in transmission (for this QE): 17.760 mW
HOM content inferred: 13.472 %
Cavity transmission inferred: 82.111 %
Predicted efficiency (R_inputBS * mode_matching * cavity_transmission * QE): 70.367 %
OMC efficiency for 00 mode (including pickoff BS, cavity transmission, and QE): 81.323 %
Round-trip loss: 1605 ppm
Finesse: 371.769
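As a rough cross-check of the first few lines, the transmitted power and measured efficiency follow from the DCPD current and an ideal-photodiode responsivity at 1064 nm. This back-of-envelope is mine, not the attached code; small differences from the quoted values come from rounding and the exact constants used:

    # ideal responsivity R = e * lambda / (h * c), i.e. QE = 1
    e, h, c, lam = 1.602e-19, 6.626e-34, 2.998e8, 1064e-9
    resp = e * lam / (h * c)            # ~0.858 A/W

    i_dcpd = 15.23e-3                   # A, DCPD sum while locked (above)
    p_trans = i_dcpd / resp             # ~17.75e-3 W transmitted
    p_incident = 25.239e-3              # W, on the OMC breadboard

    print(f"responsivity: {resp:.3f} A/W")
    print(f"transmitted power: {p_trans * 1e3:.2f} mW")
    print(f"measured efficiency: {p_trans / p_incident:.1%}")  # compare 70.4% quoted above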
Tue Jun 13 10:10:41 2023 INFO: Fill completed in 10min 40secs
Gerardo confirmed a good fill curbside.
As one of the last steps of WP 11245 we formally added the new raw trend archive directory to the nds server's puppet config. Today I ran puppet in verbose and noop mode on all the daqd systems to see what differences there were (we expected a difference on nds0). I then reviewed the differences and updated the puppet config to match what is in production. This was a good run to do; we found a few things that we had changed in production but had not put in puppet:
* nds0 - added the new archive raw trend directory.
* nds1 - found that we had not added the last archive raw trend directory to puppet, so added that.
* tw0/1 - put into puppet a smaller circular buffer size that we had added to the systems to deal with larger channel lists.
* gds1 - updated the firewall settings to match gds0; we had done some experimentation in the past to troubleshoot items.
Now puppet matches reality and is a faithful record of the daqd configs. In addition, I removed the mount to /opt/cdscfg from both the daqd systems and the daqd puppet. This is brought in by the puppet due to front-end needs, but it is not required for daqd and just causes issues when we switch boot servers.
The EY Beckhoff controller has the temporary name h1hpipumpctrley1 (10.105.0.64/24) and I have enabled port 14 of the EY VEA vacuum rack switch (sw-ey-aux) for this computer.
This morning Tyler, Chris, and I cleaned the strainer on coil 3 of the chilled water system. This strainer was not as bad as coil 4's, which we cleaned two weeks ago, but it did need cleaning and cleaning it did improve the flow of the cooling coil. We plan to clean the other four strainers as time permits on Tuesdays. We also found the condensate drain plugged on AHU 1 again and are working to unplug that drain now. The F/B damper on AHU 1 was completely closed, so I have manually opened it to 70% for the time being to see if we can reduce some of the condensate from the coils. Coils 1 & 2 may need cleaning sooner rather than later if the condensate does not start going away soon.
AHU 2 Cooling Coil 3's strainer cleaning has had minimal impact on the LVEA temperature value or its fluctuations. Excellent! (Note: we've been calling these "fans," but I found out in talking with Bubba and Tyler today that this is a liquid strainer that filters the line feeding the cooling coils. The *numbering* is still legit; they cleaned AHU 2 Cooling Coil 4's strainer on May 30 2023, and today they cleaned AHU 2 Cooling Coil 3's strainer.)

Separately, the change to hold the AHU 1 damper at 60% is OK (Bubba had adjusted it from the 70% he mentions in his above aLOG to 60% shortly after posting; comments on its effect are in the timeline below). After unplugging the drain of the air handler and holding the damper open at 60% in "manual" mode (as opposed to servo-controlled "auto" mode), AHU 1's cooling coils 1 and 2 have now been restored to the much better/cooler, normal temperatures (~47 deg F) that we had lost by May 30 2023 (the fateful fire-alarm chaos day; see timeline in LHO:70284). Excellent!

LVEA temperature excursions in Zone 1A (BSC2 / Beam Splitter), Zone 4 (Output Arm), and Zone 5 (Input Arm) never exceeded 0.4 deg F outside of the 67 deg F range, and they restored to normal ~67 deg F temperatures with small diurnal fluctuations within 7 hours. Excellent!

Slowly but surely, I think these maintenance activities are good: they are not only restoring expected system behavior but making it better. We now have a much tighter collection of temperatures in the LVEA, the cooling coils are operating at a nice low temperature, and more zone heaters are coming alive such that we have the expected "constant heat, constant cool in order to keep the LVEA temperature nice and controlled" behavior. I now have much more confidence that we can continue to do this kind of maintenance and not have it impact the IFO.

Timeline of today's work, all within today's Tuesday Maintenance (see today's trend compared against a 7-day and a 21-day trend):

Jun 13 2023 08:26 PDT - Bubba and Tyler start work, bringing down Air Handler 2, valving out AHU 2's Cooling Coil 3 strainer, and cleaning it. Understandably, all zone temperatures start to rise from 67 +/- 0.1 deg F, but they max out at only 67.4 +/- 0.1 deg F. While out there, this is when they find that Air Handler 1 (AHU 1) is flooded "again."

Jun 13 2023 09:03 PDT - Within a half hour, they're done with the AHU 2 strainer cleaning and the AHU 1 drain de-clogging; they turn AHU 2 back on, and temperatures begin to drop accordingly. Upon restoration, though, back at the control room workstation, they still see AHU 1's cooling coil temperatures high (~61 deg F), as they have been since May 30 2023. Indeed, the damper for AHU 1 is also closed at 0%, as it has been doing diurnally since May 30 2023: the HVAC servo opens AHU 1's damper each day (around when outside temperatures exceed 70 deg F), then gradually ramps it closed again with nightfall's temperature drop. Prior to May 30 2023, the AHU 1 damper stayed around ~35% open throughout the day and night.

Jun 13 2023 09:03 PDT - Bubba intuits that too much condensation is gathering in AHU 1 because its damper is closed too often, and that the condensation doesn't drain out because its drain is clogged. As a mitigation attempt, instead of having the HVAC servo drive the damper open percentage, he switches over to manual mode and holds it at 70%, as described in his aLOG. This drops cooling coils 1 and 2's temperatures from 61 down to ~47 deg F. Awesome. But this starts to scare the wimpy scientists (me) who are paying too close attention too quickly, because they see the temperatures in the LVEA drop below the 67 deg F set point.

Jun 13 2023 10:02 PDT - Zone 4's (output arm) zone heater turns on and begins to bring the Zone 4 temperature back in line.

Jun 13 2023 11:00 PDT - Bubba makes a further adjustment of the AHU 1 damper from 70% to 60% open to try to decrease the cooling in the LVEA.

Jun 13 2023 12:07 PDT - Zone 1A's zone heater collection turns on (human controlled? servo controlled?) for the first time, much like the Zone 4 and 5 heaters came on for the first time after AHU 2's Cooling Coil 4 strainer was cleaned, ramping up to ~70% in 5 minutes by 12:15 PDT. This really starts to turn the LVEA temperatures around for the better; the temperatures bottom out, turn back up, overshoot a bit, and settle to yesterday's mid-day value.

Jun 13 2023 17:00 PDT - Temperatures are all restored to ~67 +/- 0.2 deg F, and again, they never exceeded +/- 0.4 deg F. This "natural experiment" reveals that the impulse response of Zone 1A is about 5 hours, as long as the zone heater for that zone comes on!
This morning I remotely tweaked the beam alignment into the FSS RefCav, as the TPD has been drifting down since last Thursday (6/8). Once done I ended up with a TPD of ~0.91 V. One note: while I was finishing up the alignment tweak, the TPD jumped from ~0.88 V to ~0.90 V; this jump coincided with the IMC_In power increasing from 2 W to 10 W (see attachment). At the last on-table beam alignment the TPD was at 0.96 V, so I wasn't able to get it back to where it was last time. This is an indication that an on-table alignment may be needed. I'll monitor the TPD throughout the week, and if it starts dropping again I will make plans to do the on-table alignment.
After the jump here, we resynchronized the atomic clock with GPS.
The fault codes listed correspond to: 0x16 - reboot alert; 0x07 - CBT signal degradation. So this looks like a reminder that we have had the clock running for a long time and it is getting older.
TITLE: 06/13 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY: Starting Maintenance day. Ryan has taken SEI_CONF to Maintenance; the IFO has stayed locked so far.
Dust monitors, VAC, SUS, SEI, CDS okay
TITLE: 06/13 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
SHIFT SUMMARY:
Lock #1
Lock #2
Both arms went through increasing flashes; the X arm took a while (10 mins)
Couldn't catch DRMI or PRMI, and it was clear from AS AIR that something was badly misaligned, so I ran an initial alignment. DRMI still struggled a bit despite good flashes and took about 5 minutes to lock.
NLN at 14:51
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 13:10 | CDS | Erik | Remote | N | Restarting NUCs OPSLogin0 | 13:22 |
| 14:24 | FAC | Tyler | MidY | N | Slowly move snorkel lift to MidY | Ongoing |
Lockloss at 13:16 UTC; we were getting hit by an M5.4 from Papua New Guinea, and seismic motion was starting to increase. Verbal did not say whether SEI_CONF went to EQ mode.
Control room workstations were updated and rebooted, including opslogin0 (NoMachine). Only the operating systems were updated; conda environments were not affected.
LOCKLOSS @ 0:07, had an SRM saturation right before the lockloss. Seeing some movement in INP1 P - scope attached