Wed Jun 12 10:12:30 2024 INFO: Fill completed in 12min 25secs
Gerardo confirmed a good fill curbside.
TITLE: 06/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY: Locked for 11 hours, much quieter environment today. Just within the last 30 min the range seems to be going down. The 1-2kHz DARM BLRMS are moving up, but perhaps not appreciably. I'll keep an eye on this and run a few checks.
TITLE: 06/12 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 8mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.24 μm/s
SHIFT SUMMARY:
1:09 UTC got to PREP_DC_READOUT_TRANSITION
Noticed about 15 minutes later that we had been in this state for much longer than usual.
OMC should be trying to get to READY_W_NO_WHITENING, but it is saying:
[SET_WHITENING.run] USERMSG 0: DCPD gains not equal after whitening switch -- yikes!
I tried requesting a few different states, but it seemed to get stuck at the same point each time.
Ryan told me he had seen this before; I found Ryan's alog and read through it. I'm not sure if I have the right channels or not.
H1:OMC-DCPD_{A,B}0_GAIN is what I scoped back to June 6th, and I saw that they were indeed changed that day.
Since I couldn't find the full names of those channels on the OMC screen, I did a caget and caput and was greeted with a failure-to-write error:
CA.Client.Exception...............................................
Warning: "Channel write request failed"
Context: "op=1, channel=H1:OMC-DCPD_A0_GAIN, type=DBR_STRING, count=1, ctx="H1:OMC-DCPD_A0_GAIN""
Source File: ../oldChannelNotify.cpp line 159
Current Time: Tue Jun 11 2024 18:48:43.327036542
I'm not sure why that was, but Jenne D told me to try:
cdsutils write CHANNEL_NAME SetPoint
cdsutils write H1:OMC-DCPD_A0_GAIN 1 worked for both the A and B channels and got the OMC going again.
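As a hedged illustration of that workaround (not the exact commands run from the control room), the same read-back and write could be scripted around the command-line tools named above, caget from EPICS and cdsutils write:

import subprocess

# Sketch only: wraps the command-line tools mentioned in this entry.
for pd in ("A", "B"):
    chan = f"H1:OMC-DCPD_{pd}0_GAIN"
    subprocess.run(["caget", chan], check=True)                    # read current gain
    subprocess.run(["cdsutils", "write", chan, "1"], check=True)   # set gain to 1
    subprocess.run(["caget", chan], check=True)                    # confirm the write took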
2:24 UTC Reached OMC_WHITENING, waiting for violins to come down.
Incoming EQ from northern East Pacific Rise
https://earthquake.usgs.gov/earthquakes/eventpage/us7000mrv4/executive
3:17 UTC Nominal_LOW_NOISE reached
Made it to NLN but not into Observing because SQZ_MAN got stuck in FC_WAIT_FDS:
I requested SCAN_SQZANG to see if that would let it cycle through the stages, but it got stuck there again.
Naoki jumped on the chance to help the SQZ manager make it to FDS.
I saved some SUS spot gain SDFs and an LSC SDF.
4:02 UTC Observing was reached!
H1 has been locked and OBSERVING for 4.5 hours.
[Jenne, Sheila, Tony]
DRMI ASC had been pulling the SRC out of alignment for the last few days, from where initial alignment leaves things. This is pretty clearly seen in POP90's rise when DRMI ASC is engaged.
Since the AS72 offsets have been working well in EngageASCforFullIFO and bringing POP90 back down, I have put the same AS72 offsets into DRMI ASC. This has worked well twice now (although we're not getting all the way to DC readout, likely because of the high wind). I've also moved where in EngageASCforFullIFO the AS72 offsets come on, so that it's using those offsets all the time (rather than how I had it in alog 78333 where the offset came on later).
- The EX GV20 AIP had not been functional since Sunday (for details, see aLog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78339). The controller was replaced, and since the pump had lost functionality, the pressure in the annulus system built up, so it was pumped out with a Pfeiffer aux cart. After reaching ~2E-5 Torr, the pump came alive again. The pumping cart was switched off and disconnected, and the AIP was able to hold and further decrease the pressure.
- The HAM2 AIP seemingly had not been functional since last Wednesday (06/05). The issue was found to be with the communication cable. The cable was disconnected, then reconnected, and the signal came back.
I tried the FF filters Gabriele put in 78307: the new SRCL filter was on at 17:36 (while Jennie W was adjusting the A2L), and the new MICH feedforward was on at 17:45, with quiet time from 17:45-18:10. In this configuration the SRCL coherence was high, so I switched it back to the old SRCL FF (with a gain of 1, rather than 1.14) at 18:12, and this is how we've been observing.
The PRCL coherence is the highest of the LSC coherences, with MICH and SRCL lower than PRCL but still large from 15-25Hz.
The attached screenshot shows that we still have large ASC coherences with DARM, as well as jitter coherences.
Edit: Adding a second screenshot that shows the coherence of SRCL, MICH and PRCL with DARM at 17:45 UTC, when the new SRCL FF was on with a gain of 1 (and also the new MICHFF).
One potential explanation is that something happened last Tuesday that means we need an offset in AS72 now, but we were running with SRM misaligned last week before we realized this, which changed the feedforward. Now that we have a hand-tuned offset in AS72, the SRCL and MICH couplings are more similar to what they were before last week, so we are better off with the old feedforward filters in.
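For reference, the kind of coherence check described above can be reproduced offline with gwpy; this is only a sketch, with placeholder GPS times, and the LSC channel names below are illustrative guesses rather than verified DAQ names:

from gwpy.timeseries import TimeSeriesDict

# Placeholder GPS span and illustrative channel names (not verified).
start, end = 1402162000, 1402163000
chans = ['H1:CAL-DELTAL_EXTERNAL_DQ', 'H1:LSC-SRCL_IN1_DQ',
         'H1:LSC-MICH_IN1_DQ', 'H1:LSC-PRCL_IN1_DQ']
data = TimeSeriesDict.fetch(chans, start, end)
darm = data['H1:CAL-DELTAL_EXTERNAL_DQ'].resample(256)
for name in chans[1:]:
    coh = darm.coherence(data[name].resample(256), fftlength=16, overlap=8)
    band = coh.crop(15, 25)   # the 15-25 Hz region discussed above
    print(name, band.value.max())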
WP11914, WP11918 Install ADC in h1iscex for h1pemex readout
Fil, Marc, Erik, Jonathan, Dave
In preparation for testing the new LIGO DAC card in h1susex, an ADC was installed in h1iscex which will read the new DAC's signals.
This ADC belongs to PEM; it was purchased to expand the number of accelerometers.
The ADC, its ribbon cable and interface card were installed in the h1iscex IO Chassis (see drawing). A PEM AA-Chassis was installed and connected to the ADC.
Two models were changed:
h1iopiscex: add 5th ADC
h1pemex: read out the 5th ADC, route all 32 channels into filter modules with the names H1:PEM-EX_ADC_4_CHAN_n where n=0-31
DAQ restart was needed.
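As a trivial illustration of the channel naming produced by that model change (a sketch, not generated from the model itself), the 32 new filter-module test points could be enumerated as:

# Enumerate the new PEM filter-module channel names described above.
pem_ex_adc4_channels = [f"H1:PEM-EX_ADC_4_CHAN_{n}" for n in range(32)]
print(len(pem_ex_adc4_channels))          # 32
print(pem_ex_adc4_channels[0], pem_ex_adc4_channels[-1])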
Add WAP channels to DAQ
Erik, Dave:
We took the DAQ restart as an opportunity to use the latest H1EPICS_WAP.ini which adds the missing corner station WAPs. EDC and DAQ restart needed.
Add IRIG-B signals back to end station CAL models
Keita, Fil:
The CNS-II clock GPS receivers' independent IRIG-B signals were reattached to the h1isc[ex,ey] AA chassis.
DAQ Restart
Dave, Jonathan:
The DAQ was restarted for the above model and EDC changes. There were several issues:
gds0 needed a second restart
FW1 spontaneously restarted itself after 728 seconds.
Tue11Jun2024
LOC TIME HOSTNAME MODEL/REBOOT
09:00:01 h1iscex ***REBOOT*** <<< Add 5th ADC
09:01:42 h1iscex h1iopiscex <<< new model
09:01:55 h1iscex h1pemex <<< new model
09:02:08 h1iscex h1iscex
09:02:21 h1iscex h1calex
09:02:34 h1iscex h1alsex
09:11:04 h1daqdc0 [DAQ] <<< 0-leg restart
09:11:15 h1daqfw0 [DAQ]
09:11:15 h1daqtw0 [DAQ]
09:11:16 h1daqnds0 [DAQ]
09:11:24 h1daqgds0 [DAQ]
09:12:07 h1daqgds0 [DAQ] <<< 2nd GDS0 restart
09:12:12 h1susauxb123 h1edc[DAQ] <<< EDC for WAP channels
09:16:20 h1daqdc1 [DAQ] <<< 1-leg restart
09:16:31 h1daqfw1 [DAQ]
09:16:32 h1daqtw1 [DAQ]
09:16:33 h1daqnds1 [DAQ]
09:16:41 h1daqgds1 [DAQ]
09:27:57 h1daqfw1 [DAQ] <<< Spontaneous FW1 restart
IRIG-B code was restored at gps=1402156961 for EX and 1402159619 for EY.
I checked H1:CAL-PCALX_IRIGB_DQ and H1:CAL-PCALY_IRIGB_DQ starting at 1402159619 for 20 seconds using /ligo/svncommon/CalSVN/aligocalibration/trunk/Common/Scripts/Timing/irig_check.sh, and the IRIG-B code agreed with the timestamp of the data after the leap-second correction (which the script applies automatically).
H1 was running w/o independent IRIG-B from gps=1400946722 to 1402156961 for EX, from gps=1400947702 to 1402159619 for EY.
IRIG-B codes were confirmed good right before disconnection of the IRIG-B cables and right after the restoration.
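For anyone repeating this kind of check, the GPS-to-UTC bookkeeping (including the leap-second offset mentioned above) can be done with gwpy; a minimal sketch using the restoration times quoted above:

from gwpy.time import from_gps

# Convert the IRIG-B restoration times to UTC; from_gps handles the
# GPS-UTC leap-second offset (currently 18 s).
for gps in (1402156961, 1402159619):   # EX and EY restoration times
    print(gps, from_gps(gps))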
The timing system showed no suspicious behavior during the IRIG-B outage; see the attached plot. Top: X- and Y-end timing error. 2nd from the top: duotone zero-crossing check (not the full duotone fit) for h1iopiscex/ey and h1iopsusex/ey. 3rd from the top: duotone zero-crossing check for h1iopomc0. Bottom: h1iopiscex and ey DAC.
The glitch of the OMC duotone on day 7 (Tue June 04) should be Jeff's test during maintenance (alog 78238), and the glitch at EX on day 14 is the reboot described in the parent alog of this entry.
During this period, we had 3 significant LVK GW public alerts, S240531bp, S240601aj and S240601co.
TITLE: 06/11 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 30mph Gusts, 20mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY:
H1 is currently trying to lock in windy conditions... It would be nice if the wind would calm down.
TITLE: 06/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Longer maintenance day today, with activities wrapping up an hour or two ago. We've made it through initial alignment but there was no green light in the arms when Oli and I first started locking. We ran the baffle align scripts for the TMS and ITMs but ended up finding the arms by hand. The rest of initial alignment was hands off. The wind has increased and ALS is now having a rough time staying locked, so we'll see how the rest of locking goes.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:03 | SAF | HAZARD | LVEA | YES | LVEA is Laser HAZARD | 15:46 |
15:09 | VAC | Jordan | LVEA | n | Setup RGAs on output arm | 18:11 |
15:10 | FAC | Karen, Kim, Nelly | LVEA | n | Tech clean | 16:18 |
15:14 | VAC | Gerardo | LVEA | n | Input tube AIP work (climbing on tube) | 16:31 |
15:15 | CDS | Erik | EX | n | ADC install | 15:53 |
15:19 | SAF | Oli | LVEA | n | Transition LVEA to laser SAFE | 15:53 |
15:21 | IAS | Jason, TJ O., Tyler, Ryan C | LVEA | yes | FARO work | 21:58 |
15:28 | Film | Mike, Richard, film crew | LVEA | YES | Filming | 22:47 |
15:32 | OMC | Jeff | LVEA - HAM6 | - | DCPD electronics testing | 19:39 |
15:51 | CDS | Erik, Fil | EX | n | Add ADC to h1iscex | 16:18 |
15:52 | DAQ | Dave | remote | n | DAQ RESTART for EX work | 16:21 |
15:52 | VAC | Janos, Isaiah | EX, LVEA | n | Pump testing | 17:34 |
15:54 | FAC | Ken | LVEA | n | Cable tray install on FC tube area | 18:27 |
16:08 | FAC | Chris | LVEA, FCES, EX, EY | n | FAMIS tasks | 18:00 |
16:10 | PEM | Sheila | LVEA | n | Magnetometer move | 16:10 |
16:21 | CDS | Fil, Marc | EY | n | Reconnect IRIGb, install PEM AA chassis but not plug in | 17:01 |
16:32 | VAC | Norco | EY | n | CP7 fill | 19:24 |
16:36 | VAC | Travis | MX, EX | n | Turbo testing | 19:24 |
17:11 | SEI | Jim | Ends | n | Look for regulator | 18:18 |
17:19 | FAC | Kim | EX | n | Tech clean | 18:18 |
17:20 | FAC | Karen, Nelly | EY | n | Tech clean | 18:11 |
17:44 | PCAL | Francisco | EX | LOCAL | PCAL meas. | 19:34 |
17:58 | SAF | HAZARD | EX | YES | EX is Laser HAZARD | 19:34 |
18:11 | VAC | Norco | CS | n | CP2 fill | 20:04 |
18:14 | VAC | Gerardo, Jordan, Fil, Marc | LVEA | n | Pull cable from Y BM to mech room | 18:57 |
19:00 | VAC | Janos, Isaiah | EX | yes | Pump testing | 20:28 |
19:09 | CDS | Fil | High bay | n | Clearing out for genie lift to fit | 19:36 |
19:21 | SQZ | Kar Meng | Opt Lab | local | SHG work | 20:05 |
19:46 | CDS | Jonathan, Jamie, Dan | Remote | n | GDS/DMT restarts | 20:59 |
21:48 | VAC | Gerardo | LVEA | yes | Looking at VPs | 22:12 |
21:54 | VAC | Janos | EX | n | AIP check | 22:12 |
23:13 | Film | Film crew | FCTE | n | Filming the FCTE | 00:13 |
A consequence of WP 11919 today is that we should be able to see the range calculation (e.g. on the control room wall) for the GDS-CALIB_STRAIN-type channels for more of the time. Now, rather than needing the overall IFO to be ready for Observing (i.e. at NomLowNoise and no SDF diffs), there should be a range calc (for, e.g., the thick red line on the range plot on the control room wall) throughout much of the locking sequence, just like the range that is calculated from CAL-DELTAL_EXTERNAL (the thin line on the control room wall range plot).
No other configurations have changed, but this will be something that looks different from how it had been, so it's worth noting in the alog.
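For context, the range number in question is the standard BNS inspiral range computed from the calibrated strain; a minimal offline sketch with gwpy (GPS times below are placeholders, not a specific lock stretch) would look like:

from gwpy.timeseries import TimeSeries
from gwpy.astro import inspiral_range

# Placeholder GPS span; any stretch where H1:GDS-CALIB_STRAIN exists will do.
strain = TimeSeries.fetch('H1:GDS-CALIB_STRAIN', 1402300000, 1402300600)
psd = strain.psd(fftlength=16, overlap=8, method='median')
print(inspiral_range(psd))   # BNS (1.4+1.4 Msun) range, in Mpc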
FranciscoL, [Remote: RickS]
After one week of having the inner beam centered (alog 78247), on June 11, we moved the beam by 5 mm on the Rx sensor.
The Rx side cover was compromised due to work being done by the vacuum team (images attached for reference). Fortunately, the back cover was accessible and the measurements were done by accessing the Rx input aperture through there. There might be a minor uncertainty in the alignment of the target: it was done by "projecting" the front of the power sensor using the front camera of my phone, instead of the usual alignment by eye. Since the procedure followed a similar flow to the movements done in previous weeks, the uncertainty of the target alignment should not be problematic for our measurements.
Attachment 'AlignedTarget' shows the alignment with the beam height gauge. Attachment 'BothBeamsBefore' shows the beams before making any changes. The pdf 'EndStationLog.pdf' lists the voltage values after each significant step of procedure T2400163. The steps represent writing down a voltage value after a particular change to the beam position. Some steps were recorded multiple times after minor changes.
The 'Initial' measurement is *equal* to the last voltage measurement from the previous movement, done on May 21 (alog 78247). The initial and final voltage measurements during today's procedure remained the same despite the increased offset of the inner beam with respect to the target.
We expect this movement to be symmetric (equal in magnitude, opposite in sign) to the one in alog 77840.
J. Kissel

TIA D2000592: S/N S2100832_SN02
Whitening Chassis D2200215: S/N S2300003
Accessory Box D1900068: S/N S1900266
SR785: S/N 77429

I've finally got a high-quality, trustworthy, no-nonsense measurement of the OMC DCPD transimpedance amplifiers' frequency response. For those who haven't seen the saga leading up to today, see the 4-month-long story in LHO:77735, LHO:78090, and LHO:78165. For those who want to move on with their lives, like me: I attach a collection of plots showing the following for each DCPD:

Page 1 (DCPDA) and Page 2 (DCPDB)
- 2023-03-10: The original data set of the previous OMC DCPDs via the same transimpedance amplifier
- 2024-05-28: The last, most recent data set before this, where I *thought* it was good, even though the measurement setup was bonkers
- 2024-06-11: Today's data
Page 3 (the Measurement Setup)
- The ratio of the measurement setup from 2023-03-10 to 2024-06-11

With this good data set, we see that
- there's NO change between the 2023-03-10 and 2024-06-11 data sets at high frequencies, which matches the conclusions from the remote DAC-driven measurements (LHO:78112), and
- there *is* a 0.3%-level change in the frequency response at low frequency, which also matches the conclusions from the remote DAC-driven measurements.
Very refreshing to finally have agreement between these two methods.

OK -- so -- what's next? Now we can return to the mission of fixing the front-end compensation and balance matrix such that we can
- reduce the impact on the overall systematic error in the calibration, and
- reduce the frequency-dependent imbalance,
both of which were discovered in Feb 2024 (see LHO:76232).

Here's the step-by-step:
- Send the data to Louis for fitting.
- Create/install new V2A filters for the A0 / B0 banks.
- Switch over to these filters and accept in SDF.
- Update the pydarm parameter file with the new super-Nyquist poles and zeros.
- Measure the compensation performance with a remote DAC-driven measurement of TIA*Wh*AntiWh*V2A; confirm better-ness / flatness.
Once the IFO is back up and running (does it need to be thermalized?):
- Measure the balance matrix (remember -- SQZ OFF); confirm better-ness / flatness.
- Install the new balance matrix.
- Accept the balance matrix in SDF.
Once the IFO is thermalized:
- Grab a new sensing function.
- Push a new, updated calibration.
The data gathered for this aLOG lives in:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Common/Electronics/H1/DCPDTransimpedanceAmp/OMCA/S2100832_SN02/20240611/Data/

# Primary measurements, with the DCPD TIA included in the measurement setup (page 1 of the main entry's attachment measurement diagrams)
20240611_H1_DCPDTransimpedanceAmp_OMCA_DCPDA_mag.TXT
20240611_H1_DCPDTransimpedanceAmp_OMCA_DCPDA_pha.TXT
20240611_H1_DCPDTransimpedanceAmp_OMCA_DCPDB_mag.TXT
20240611_H1_DCPDTransimpedanceAmp_OMCA_DCPDB_pha.TXT

# DCPD TIA excluded, "measurement setup" alone (page 2 of the main entry's attachment measurement diagrams)
20240611_H1_MeasSetup_ThruDB25_PreampDisconnected_OMCA_DCPDA_mag.TXT
20240611_H1_MeasSetup_ThruDB25_PreampDisconnected_OMCA_DCPDA_pha.TXT
20240611_H1_MeasSetup_ThruDB25_PreampDisconnected_OMCA_DCPDB_mag.TXT
20240611_H1_MeasSetup_ThruDB25_PreampDisconnected_OMCA_DCPDB_pha.TXT
Here are the fit results for the TIA measurements:

DCPD A:
Fit Zeros: [6.606 2.306 2.482] Hz
Fit Poles: [1.117e+04 -0.j  3.286e+01 -0.j  1.014e+04 -0.j  5.764e+00-22.229j  5.764e+00+22.229j] Hz

DCPD B:
Fit Zeros: [1.774 6.534 2.519] Hz
Fit Poles: [1.120e+04 -0.j  3.264e+01 -0.j  1.013e+04 -0.j  4.807e+00-19.822j  4.807e+00+19.822j] Hz

A PDF showing plots of the results is attached as 20240611_H1_DCPDTransimpedanceAmp_report.pdf. The DCPD A and B data and their fits (left column) next to their residuals (right column) are on pages 1 and 2, respectively. The third page is a ratio between the DCPD A and DCPD B datasets. Again, they're just overlaid on the left for qualitative comparison and the residual is on the right.

I used iirrational. To reproduce, activate the conda environment I set up specifically to run iirrational:
activate /ligo/home/louis.dartez/.conda/envs/iirrational
Then run:
python /ligo/groups/cal/common/scripts/electronics/omctransimpedanceamplifier/fits/fit_H1_OMC_TIA_20240617.py
A full transcript of my commands and the script's output is attached as output.txt. On gitlab the code lives at https://git.ligo.org/Calibration/ifo/common/-/blob/main/scripts/electronics/omctransimpedanceamplifier/fits/fit_H1_OMC_TIA_20240617.py
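As a quick sanity check on those numbers, the quoted zeros and poles can be evaluated into a frequency response with scipy; this is only a sketch, with the overall gain left arbitrary and the usual left-half-plane sign convention assumed for the values quoted in Hz:

import numpy as np
from scipy import signal

# DCPD A fit from above, quoted in Hz; assume s = -2*pi*f for each feature.
zeros_hz = np.array([6.606, 2.306, 2.482])
poles_hz = np.array([1.117e4, 3.286e1, 1.014e4,
                     5.764 - 22.229j, 5.764 + 22.229j])
z = -2 * np.pi * zeros_hz
p = -2 * np.pi * poles_hz
k = 1.0   # overall gain not quoted above; response below is shape-only

f = np.logspace(0, 4, 500)                       # 1 Hz to 10 kHz
w, h = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f)
mag = np.abs(h) / np.abs(h[0])                   # normalized to the 1 Hz point
phase_deg = np.angle(h, deg=True)
print(mag[np.argmin(np.abs(f - 100))], phase_deg[np.argmin(np.abs(f - 100))])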
Here's what I think comes next in four quick and easy steps:
1. Install new V2A filters (FM6 is free for both A0 and B0) but don't activate them.
2. Measure the new balance matrix element parameters (most recently done in LHO:76232).
3. Update L43 in the pyDARM parameter file template at /ligo/groups/cal/H1/ifo/pydarm_H1.ini (and push to git). N.B. doing this too soon without actually changing the IFO will mess up reports! Best to do this right before imposing the changes to the IFO to avoid confusion.
4. When there's IFO time, ideally with a fully locked and thermalized IFO:
4.a move all DARM control to DCPD channel B (double the DCPD_B gain and bring the DCPD_A gain to 0)
4.b activate the new V2A filter in DCPD_A0 FM6 and deactivate the current one
4.c populate the new balance matrix elements for DCPD A (we think it's the first column but this remains to be confirmed)
4.d move DARM control to DCPD channel A (bring both gains back to 1, then do the reverse of 4.a)
4.e repeat 4.b and 4.c for DCPD channel B, then bring both gains back to 1 again
4.f run simulines (in NLN_CAL_MEAS) and a broadband measurement
4.g generate a report, verify, and if all good then export it to the front end (make sure to do step 3 before generating the report!)
4.h restart the GDS pipeline (only after marking the report as valid and uploading it to the LHO ldas cluster)
4.i twiddle thumbs for about 12 minutes until GDS is back online
4.j take another simulines and broadband (good to look at gds/pcal)
4.k back to NLN and confirm the TDCFs are good.
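As an illustration of step 4.a only (a sketch, not a procedure: in practice the gain changes would be ramped and done on a locked, supervised IFO), the gain swap could be scripted around the cdsutils write command and the DCPD gain channels named earlier in this log:

import subprocess

def cds_write(channel, value):
    """Thin wrapper around the 'cdsutils write' command-line tool."""
    subprocess.run(["cdsutils", "write", channel, str(value)], check=True)

# Step 4.a: move all DARM control onto DCPD B before touching the A-path filters.
cds_write("H1:OMC-DCPD_B0_GAIN", 2)   # double the B gain
cds_write("H1:OMC-DCPD_A0_GAIN", 0)   # zero the A gain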
On Friday 06/07/2024, Dave Barker sent an email to the vacuum group noting 3 spikes in the pressure of the main vacuum envelope. I took a closer look at the 3 different events and noticed that they correlated with the IFO losing lock. I contacted Dave, and together we contacted the operator, Corey, who made others aware of our findings.
The pressure "spikes" were noted by different components integral to the vacuum envelope. Gauges registered the sudden rise in pressure, and almost at the same time the ion pumps reacted to it. The outgassing was noted at all stations: very noticeable at the mid stations, with less effect at both end stations, and in both cases with a delay.
The largest spike for all 3 events was seen at the gauge near HAM6; we do not have a gauge at HAM5 or HAM4. The gauge near HAM6 is the one on the relay tube that joins HAM5/7 (PT152), behind the restriction of the relay tube; the next gauge is at BSC2 (PT120), but the spike there is not as "high" as the one near HAM6.
A list of aLOGs made by others related to the pressure anomalies and their findings:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78308
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78320
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78323
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78343
Note: the oscillation visible on the plot of the outer stations (Mids and Ends) is the diurnal cycle, nominal behavior.
Of the live working gauges, PT110 appears closest to the source based on time signature. This is on the HAM5-7 relay tube and only indirectly samples HAM6*. It registered a peak of 8e-6 Torr with a decay time of order 17 s. Taking a HAM as the sample volume (optimistic), this indicates at least 0.08 torr-liters of "something" must have been released at once. The strong visible signal at the mid and end stations suggests it was not entirely water vapor, as that should have been trapped in the CPs.
For reference, a mirror in a 2e-6 Torr environment intercepts about 1 molecular monolayer per second. Depending on sticking fraction, each of these gas pulses could deposit of order 100 monolayers of contaminant on everything.
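A back-of-the-envelope check of that monolayer number (a sketch only, assuming a room-temperature N2-like gas and ~1e15 sites/cm^2 per monolayer):

import numpy as np

kB = 1.380649e-23      # J/K
amu = 1.66054e-27      # kg
P = 2e-6 * 133.322     # 2e-6 Torr converted to Pa
m = 28 * amu           # N2-like molecular mass (assumption)
T = 295.0              # K

# Kinetic-theory impingement rate onto a surface, molecules / m^2 / s
flux = P / np.sqrt(2 * np.pi * m * kB * T)
monolayers_per_s = flux / 1e19        # ~1e15 cm^-2 = 1e19 m^-2 per monolayer
print(f"{monolayers_per_s:.2f} monolayers per second")   # ~1, consistent with the statement above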
The observation that the IFO still works is comforting; maybe we should feel lucky. However it seems critical to understand how (for example) the lock loss energy transient could possibly hit something thermally unstable, and to at least guess what material that might be. Recall we have previously noted evidence of melted glass on an OMC shroud.
Based on the above order-of-magnitude limits, similar gas pulses far too small to see on pressure gauges could be damaging the optics.
It would be instructive to compare before/after measures of arm, MC, OMC, etc. losses, to at least bound any acquired absorption
*corrected, thanks Gerardo
Corner RGA scans were collected today during maintenance, using the RGA on the output tube. The RGA volume has been open to the main volume since the last pumpdown (~March 2024), but the electronics head/filament had been turned off because the small fan on the electronics head was not spinning during observing. We were unable to connect to the HAM6 RGA, either through the RGA computer in the control room or locally at the unit with a laptop, so only the output tube RGA was available at this time.
A small aux cart and turbo were connected to the RGA volume on the output tube, then the RGA volume was isolated from the main volume and the filament turned on. The filament had warmed for ~2 hours prior to the RGA scans being collected.
RGA Model: Pfeiffer PrismaPlus
AMU Range: 0-100
Chamber Pressure: 1.24E-8 torr on PT131 (BSC3) and 9.54E-8 torr on PT110 (HAM6). NOTE: the cold cathode gauge interlocks tripped during filming activities in the LVEA today, so the BSC2 pressure was not recorded.
Pumping Conditions: 4x 2500 l/s Ion Pumps and 2x 10^5 l/s cryopumps, HAM6 IP and HAM7/Relay tube
SEM voltage: 1200V
Dwell time: 500ms
Pts/AMU: 10
RGA volume scans were collected with the main volume valve closed, pumping only with the 80 l/s turbo aux cart
Corner scans collected with main volume valve open, and aux cart valve closed
Comparison to March 2024 scan provided as well.
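For a rough sense of the scan duration implied by the parameters above (a simple estimate that ignores instrument overhead):

# Rough per-scan acquisition time from the listed parameters.
amu_range = 100        # 0-100 AMU
pts_per_amu = 10
dwell_s = 0.5          # 500 ms dwell per point
scan_time_s = amu_range * pts_per_amu * dwell_s
print(f"~{scan_time_s:.0f} s (~{scan_time_s/60:.0f} minutes) per scan")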
Richard posting from Robert S.
I had a work permit to remove viewports, so I opened the two viewports on the -Y side of HAM6. I used one of the bright LED arrays at one viewport and looked through the other viewport so everything was well lit. I looked for any evidence of burned spots, most specifically on the fast shutter or in the area where the fast shutter directs the beam to the cylindrical dump. I did not see a damaged spot, but there are a lot of blocking components, so that is not surprising. I also looked at OM1, which is right in front of the viewports. I looked for burned spots on the cables etc. but didn't see any. I tried to see if there were any spots on the OMC shroud, or around OM2 and OM3, on the portions that I could see. I didn't see anything, but I think it's pretty unlikely that I could have seen something.
Repeated 0-100AMU scans of the corner today, after filament had full 24 hours to warm up. Same scan parameters as above June 11th scans. Corner pressure 9.36e-9 Torr, PT 120.
Dwell time 500 ms
Attached is comparison to yesterday's scan, and compared to the March 4th 2024 scan after corner pumpdown.
There is a significant decrease in AMU 41, 43 and 64 compared to yesterday's scan.
Raw RGA text files stored on DCC at T2400198
Naoki, Sheila
To investigate the origin of the whistles and RMS increase in the FC IR signal reported in 78263 and 78344, we turned off the FC green VCO lock after the FC IR transition.
In the TRANSITION_IR_LOCKING state of the SQZ_FC guardian, the FC CMB servo is disabled after the FC IR transition. Since we use the green QPD at FC trans for beam spot control, we also turned off the beam spot control. The SDF is accepted as shown in the attachment.
After this guardian change, the FC lost lock after the IR transition. I reverted the guardian change and the FC lock is fine now. The FC CMB input is ON now; it was OFF from 2024/06/10 16:44:06 UTC to 2024/06/11 15:07:58 UTC. The beam spot control is still OFF.
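For readers unfamiliar with how such a change looks in Guardian code, the sketch below shows the general shape of the edit described above; the state name comes from this entry, but the channel names are placeholders, not the actual H1 SQZ channels:

from guardian import GuardState

class TRANSITION_IR_LOCKING(GuardState):
    def main(self):
        # 'ezca' is provided by the Guardian runtime for channel access.
        # Disable the FC CMB servo input after the IR transition
        # (placeholder channel name, for illustration only).
        ezca['SQZ-FC_CMB_INPUT_ENABLE'] = 0
        # Also stop the green-QPD-based beam spot control
        # (placeholder channel name, for illustration only).
        ezca['SQZ-FC_BEAMSPOT_CONTROL_ON'] = 0
        return True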
P. Baxi, J. Kissel
Finished characterizing Channels 1 to 4 of the new 524 kHz Analog OMC Anti-Alias Chassis, S2300162.
BodePlot_Channel_1_4_freq_start_102.4KHz_Freq_stop_1KHz_Steps_100.pdf (raw data - available in DCC S2300162)
- We see the frequency response has minimal phase impact in the gravitational wave band, with 1.5 deg of phase loss at 1000 Hz.
ASD_SR785_Channel_1_4_Freq_span_4to32K_32to256K_128to1024K.pdf (raw data - available in DCC S2300162)
- We see the noise of each of these 4 representative channels is ~20 nV/rtHz.
For the bode: We see the frequency response has minimal phase impact in the gravitational wave band, with 1.5 deg of phase loss at 1000 Hz.