LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 00:00, Wednesday 24 April 2024 (77373)
OPS Eve Shift Summary

TITLE: 04/24 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C - OWL CANCELLED
SHIFT SUMMARY:

IFO is LOCKING and in ENGAGE_DRMI_ASC

5:24 UTC - Reached NLN but waiting as SQZ team (Naoki) tries to better align squeezer

5:40 UTC - Observing

6:39 UTC - Lockloss DRMI

LOG:

Start Time System Name Location Lazer_Haz Task Time End
00:18 STROLL Patrick MY N Walkin' 01:26
01:26 PEM Robert LVEA N Ungrounding SUS Racks 01:56
Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 20:43, Tuesday 23 April 2024 (77371)
OPS Eve Midshift Update

IFO is in LOCKING_ARMS_GREEN and still locking after maintenance.

There is a LOT of ongoing and past troubleshooting mostly summarized by Jenne in alog 77368. Since this was posted, I looked at the HAM6 Picomotors and determined that there was no suspicious activity (esp in the on/off switching) since locking issues began (approx 15 hrs ago now).

We just got to OMC_WHITENING but the violins were horrendous. We stayed there for 30 minutes before an unknown lockloss from DRMI. Sheila is on TS helping with SQZ alignment instructions.

H1 ISC
jenne.driggers@LIGO.ORG - posted 19:30, Tuesday 23 April 2024 - last comment - 14:45, Wednesday 24 April 2024(77368)
IFO very unhappy, something likely wrong with alignment, AS_C needs strange offsets

[Jenne, Ibrahim, RyanS, Sheila, Robert, TJ, Elenna, Jennie, Oli, Jim, others]

Something is different with the IFO today, and it's not great.  Since we haven't pinpointed what the problem is, we don't really know if it is related to our mediocre range last night and struggles to relock after that, or if it is new from maintenance day.  So far, the solution to get back to (almost) NLN has been to put in *massive* offsets in AS_C.  This did enable us to get locked (and avoided the ASC ringup that was causing locklosses this afternoon), but it left us with a large scatter shelf in DARM.  Robert had the good suggestion that we see if, once locked, we could walk the offsets back to their nominal values of zero; doing this caused an ASC ringup, and it's probably the same thing that we'd been seeing throughout the day.  So, indeed going toward the nominal offset of zero for pit and yaw on AS_C is not an okay place right now.

We began to narrow in on AS_C since, during initial alignment, it is in-loop for SR2 and SRY alignments, and it was causing SR2 to be pulled very far in yaw.  We were seeing this visually on the AS_AIR camera, which looked really unusually terrible. In the end, Sheila hand-aligned SRY, and then put offsets into AS_C such that the ASC now servos to that point.  But, the offsets used are +0.46 in pit and -0.85 in yaw, so really, really big. However, with these in place, we were able to let DRMI ASC run, and it ran fine. 

Since that worked (the large AS_C offsets), we let the IFO relock the rest of the way, and it kept on working. Per a suggestion from Elenna from earlier in the afternoon, after we completed REDUCE_RF45 I manual-ed to ADJUST_POWER and increased the power 5W at a time (waiting for ASC to mostly converge between steps) until we were at 60W, then I manual-ed back to LOWNOISE_ASC.  After that, I just selected NomLowNoise and guardian finished things.  We ended up 'stuck' in OMC_WHITENING, although the violins were coming down with the damping that guardian was doing.  It was around here that I tried ramping down the AS_C yaw offset with 10 sec ramp times to see if it would reduce the scatter shelf that we saw in DARM.  See first attachment, with the scatter shelf circled.  My first step was to -0.80, second step was to -0.70, and we had an ASC ringup and lockloss at this step.  I wasn't sure after the first step, but definitely as we were ramping to the second step (before we lost lock) RyanS, Robert, and I all agreed that the scatter shelf was getting worse, not better.  But, we lost lock before I could put the offset back.
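
For reference, this kind of offset walk-back can be scripted; below is a minimal sketch using pyepics, where the channel names, step sizes, and wait times are assumptions for illustration only (at the site this would normally go through guardian/ezca and be done while watching the ASC and DARM signals):

    # Sketch only: step an ASC QPD offset toward zero, letting each ramp finish.
    # Channel names below are assumed, not verified against the real ASC model.
    import time
    from epics import caput

    OFFSET_CH = 'H1:ASC-AS_C_YAW_OFFSET'   # assumed offset channel
    TRAMP_CH  = 'H1:ASC-AS_C_YAW_TRAMP'    # assumed ramp-time channel
    RAMP_S    = 10                         # 10 s ramps, as used above

    caput(TRAMP_CH, RAMP_S)
    for value in [-0.80, -0.70]:           # the two steps tried in this entry
        caput(OFFSET_CH, value)
        time.sleep(RAMP_S + 5)             # ramp time plus some settling
        # in practice: check DARM / ASC signals here and back off if they ring up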

We still don't understand *why* we need these large offsets in AS_C, just that we do.  Since I hadn't yet SDF-ed them, we had 2 locklosses from ENGAGE_DRMI_ASC when the nominal zero-offsets were used in AS_C.  I have since saved the AS_C offset values, and the offset switch being on, in both the safe.snap and observe.snap SDF files.  The second and third attachments show these.

We've been trying to think through what from maintenance day could possibly have had anything to do with this, and the only things we've come up with are the h1asc reboot and the grounding of SUS racks.  From Jeff's alog about the h1asc changes, it seems quite clear that those are just additions of monitor points, and that shouldn't have had any effect. While we were unlocked after our one successful attempt, Robert went into the LVEA and un-grounded those two racks that Fil grounded earlier today.  Robert is quite sure that he's checked the grounding of them in the past without any consequence to our ability to lock, but just in case we've got them undone for tonight. So far, that does not seem to have changed the fact that we need these crazy offsets in AS_C.  Just in case, Robert suggests we consider rebooting h1asc tomorrow to back out other changes that happened today (even though those shouldn't have anything to do with anything).

For right now (as of ~7:20pm), my best plan is to see if we can get to Observing, and should a GW candidate come through, have the detchar and parameter estimation folks exclude all data below ~80 Hz. This is a very poor solution though, since this huge scatter shelf will be quite detrimental to our data quality and could cause our search pipelines to trigger on the scattering. 

Related thought, but not yet thoroughly investigated (at least by me), is whether our problems actually started sometime yesterday or last night, and aren't at all related to maintenance. As partial 'evidence' in this direction, I'll note that kappa_c has been dramatically low for the last two Observing segments last night.  We haven't gotten all the way to NLN tonight yet (OMC_WHITENING didn't finish before we lost lock), but kappa_c looks like it might be even lower now than yesterday.  The 4th attachment shows kappa_c for last night's locks (screenshot taken before we locked today).  So, maybe we've had some clipping or severe degradation of alignment in the last day or so.   Jim checked all of the ISIs, and nothing seems suspicious there.
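
(For anyone wanting to reproduce the kappa_c comparison, a quick sketch with gwpy; the channel name and GPS span are assumptions for illustration:)

    # Sketch: pull the kappa_c trend covering the last few locks.
    # Channel name and times are assumed for illustration.
    from gwpy.timeseries import TimeSeries

    chan = 'H1:CAL-CS_TDEP_KAPPA_C_OUTPUT'    # assumed kappa_c channel
    start, end = 1397860000, 1397950000       # rough GPS span, Mon night - Tue
    kappa_c = TimeSeries.get(chan, start, end)
    print('mean kappa_c over span:', kappa_c.value.mean())
    kappa_c.plot().savefig('kappa_c_trend.png')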

Last thing - since we had run the dark offsets script earlier this afternoon (probably a red herring) and I saved all of the SDFs in the safe.snap, we will have diffs for Observe.snap.  All diffs that have _OFFSET in them should be accepted (and screenshot-ed). 

 

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 19:32, Tuesday 23 April 2024 (77370)

Oh, also, the squeezing was not at all good when we locked. I suspect that it's because the squeezer alignment isn't matched to this bad-but-working alignment of the rest of the IFO.  Sheila should be able to log on in a little while and can hopefully take a quick look to see if there's anything to do.

sheila.dwyer@LIGO.ORG - 06:56, Wednesday 24 April 2024 (77372)

Adding a screenshot that summarizes/supports what Jenne is saying above. 

A little over 1 day ago, there was a drop in optical gain.  There isn't an indication of a shift in alignment of SR2, or the OMs at that time.  AS_C QPD is in loop, so that signal being zero only shows that we weren't using any offsets at the time.  The drives on OM1 + OM2 show that their alignment also didn't shift at the time.  Since they are used to center the beam on AS_A and AS_B, their drives would need to shift if there is a large shift in the real beam position on AS_C.  You can see that for today's lock, where there were large offsets on AS_C, the alignment of SR2 is different, as well as the alignment of OM1 + 2 (indicating that the alignment onto AS_C is different).  We might be clipping on the OFI, or elsewhere at the AS port.

Here's an image of the AS camera:

You can toggle through these three in different browser tabs to see that the alignment didn't seem to shift until we added offsets.  So all indications are that these offsets aren't good, and we should not be using them, except that they appear to have allowed us to lock the IFO.

Editing to add another screenshot: as Ibrahim checked several times last night, the OSEMs indicate that SR3 + PR3 haven't moved.  In the early Tuesday lock where the optical gain first dropped, the power recycling gain was unchanged; with the introduction of offsets last night it seems slightly lower (and the optical gain also took another step down). This attachment shows that the offsets moved OM3 + OMC, which I did not expect (since the AS centering beams fix the beam axis arriving on OM3, I wouldn't have expected the AS_C offsets to have moved these optics).  But they didn't move for the Tuesday morning low optical gain lock.

Images attached to this comment
jennifer.wright@LIGO.ORG - 11:01, Wednesday 24 April 2024 (77382)

Jennie W, Sheila

Sheila suggested I use the oplevs for SR3, BS, ITMY, ITMX, ETMY, and ETMX to compare our alignment out of lock right before we started locking the main IFO (i.e. after locking the green arms). I chose two times we were in FIND_IR (state 16 in ISC_LOCK); a sketch of the comparison is included after the results below.

One of these times was after a lockloss from a very high range lock from Monday afternoon (~157 Mpc).

GPS reference time in lock = 1397864624 (16:43:26 PDT)

GPS reference time in FIND_IR presumably before our current weird alignment/locking problems = 1397868549 (17:48:51 PDT)

The other time was in our last locking sequence Tuesday morning before maintenance Tuesday started. We did not get to NLN but fell out before this at ENGAGE_ASC_FOR_FULL_IFO (state 430 in ISC_LOCK).

GPS reference time in FIND_IR = 1397918790 (07:46:12 PDT)

The main oplevs which changed by more than 1 microradian were: ITMY PIT (down by 1.24 microradians)

ETMY PIT (up by 1.02 microradians)

ETMY YAW (down by 1.14 microradians)

ETMX PIT (up by 2.35 microradians)

ETMX YAW (up by 3.21 microradians)
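
As mentioned above, a sketch of the comparison method (averaging each oplev signal for a minute at the two FIND_IR reference times with gwpy); the oplev channel names and their calibration in microradians are assumptions for illustration:

    # Sketch: difference of averaged oplev readings between two reference times.
    # Channel names (and their being calibrated in urad) are assumed.
    from gwpy.timeseries import TimeSeries

    T_GOOD = 1397868549    # FIND_IR before the problems (Monday evening)
    T_BAD  = 1397918790    # FIND_IR Tuesday morning
    AVG    = 60            # seconds to average over

    channels = ['H1:SUS-ITMY_L3_OPLEV_PIT_OUT16',
                'H1:SUS-ETMX_L3_OPLEV_YAW_OUT16']   # ...etc for each optic/DOF

    def mean_at(chan, t0):
        return TimeSeries.get(chan, t0, t0 + AVG).value.mean()

    for chan in channels:
        diff = mean_at(chan, T_BAD) - mean_at(chan, T_GOOD)
        print(f'{chan}: changed by {diff:+.2f} urad')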

jenne.driggers@LIGO.ORG - 11:27, Wednesday 24 April 2024 (77384)

While SRY was locked (last step in initial alignment), we moved the OFI as much as we could, by putting large offsets in the actuator filter banks.  No discernible power level changes are visible.  This isn't surprising given how weak the OFI actuators are, but it seemed worth eliminating the possibility.

Images attached to this comment
H1 AOS
robert.schofield@LIGO.ORG - posted 19:05, Tuesday 23 April 2024 (77369)
New grounding disconnected

Because of the locking problems, I disconnected the grounds that were installed earlier today (77350).

H1 AOS (DetChar)
robert.schofield@LIGO.ORG - posted 18:58, Tuesday 23 April 2024 (77367)
False alarm S240420aw caused by M 2.8 earthquake about 24 km from site

Corey, Robert

Derek reported that S240420aw was likely caused by scattering noise so I looked into the problem. The figure shows that a large seismic pulse (~2 orders of magnitude above background) started something swinging at 1.44 Hz, with a Q of about 300. The scattering shelf's cutoff slowly dropped in frequency over ten minutes,  so there were plenty of chirps reaching different frequencies.  It is tempting to guess that the source of scattering is a TMS, which have transverse resonances of about 1.4 Hz, but I checked and the Qs of TMS motion were much lower than the Q of 300 of the scatter source.  The time of the seismic pulse matches that of the M2.8 earthquake in Richland on Friday night, 24 surface km from the site and 8 km deep. There is quite a bit of light scattered, but at normal ground motion levels I would expect the scattering noise shelf to reach only about 5 Hz, so I don’t think this scattering noise source affects us during normal operation. Andy L. and Beverly B. noticed similar scattering noise back in 2017 ( 37947 ) for a smaller nearby quake.
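
For context on why the shelf reached so high: the shelf cutoff tracks the scatterer's peak velocity, roughly f_max = 2 * v_max / lambda, with v_max = 2*pi*f0*A for sinusoidal swinging. A quick sketch of that estimate; the 1.44 Hz comes from the analysis above, while the swing amplitude is an assumed number purely to illustrate the scaling:

    # Sketch: scattered-light shelf cutoff for a scatterer swinging at f0.
    import math

    LAMBDA = 1.064e-6    # m, main laser wavelength
    f0     = 1.44        # Hz, ring-up frequency from the figure
    A      = 5e-6        # m, assumed swing amplitude (illustrative only)

    v_max = 2 * math.pi * f0 * A
    f_max = 2 * v_max / LAMBDA
    print(f'shelf cutoff ~ {f_max:.0f} Hz for A = {A*1e6:.1f} um')   # ~85 Hz here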

Non-image files attached to this report
H1 ISC (Lockloss, OpsInfo)
jennifer.wright@LIGO.ORG - posted 16:31, Tuesday 23 April 2024 (77365)
Measured ASC MICH_P and DHARD_P transfer functions

Jennie W, Elenna, Jenne, TJ

 

We have been having trouble locking and keep falling out during the MAX_POWER state of ISC_LOCK as we power up (a and b are examples showing MICH and DHARD during two of these four locklosses).

During some of these we noticed that MICH or DHARD could be ringing up.

We measured the MICH open loop gain with Elenna's template, a version of which I have saved in /ligo/home/jennifer.wright/Documents/ASC/MICH_P_OLG_broadband_shaped.xml. We found the gain to be about 10% too low, so we increased the loop gain from -2.4 to -2.7 in the loop servo. We also measured the DHARD open loop gain with the template DHARD_2W_P_OLG_broadband_shaped.xml in the same folder and found it looks normal.
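
(For the record, the gain rescaling is just the inverse of the measured open-loop-gain deficit at the intended UGF; a trivial sketch, where the 0.89 measured magnitude is an assumed example consistent with "about 10% low":)

    # Sketch: scale the servo gain so the OLG crosses unity where intended.
    measured_olg_mag = 0.89              # assumed |OLG| at the intended UGF
    old_gain = -2.4
    new_gain = old_gain / measured_olg_mag
    print(round(new_gain, 2))            # ~ -2.70, matching the value installed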

Investigations ongoing...

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:21, Tuesday 23 April 2024 (77364)
OPS Eve Shift Start

TITLE: 04/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 8mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY:

Still in post-maintenance LOCKING

Attempting to re-lock after maintenance though facing some issues as per alog 77363.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:19, Tuesday 23 April 2024 (77363)
Ops Day Shift End

TITLE: 04/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Started an initial alignment just after 1130 local time. Had no issues during the alignment, but when we tried to start main locking we would lose lock around the power up. This happened 4 times in total. Each lock loss would ring the violins up, so we would have to damp them at 2 W. After the 3rd one, during the time we were waiting for violins to damp, we ran some OLG measurements of MICH P and DHARD P (the two channels where we saw extra motion before the lock loss). This had us bump up the MICH P gain, but we lost lock again. We then decided to run the dark offsets and another initial alignment. We're currently noticing that initial alignment looks way off in MICH Bright and SR2 alignment. Working on that.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
15:01 FAC Ken, Fil LVEA n Cables by BSC3 16:50
15:02 CDS Dave CR n Model restarts and DAQ restart 15:15
15:05 FAC Tyler EY n Ladder inspections 15:49
15:10 FAC Karen,Kim FCES n Tech clean 16:28
15:34 PCAL Fransisco, Dripta EX Yes PCAL meas. 17:49
15:42 ISC Jeff LVEA n OMC electronics meas. @HAM6 racks 18:39
15:45 SEI Jim, Mitchell LVEA n Checking on 3IFO equipment 17:51
15:49 FAC Tyler LVEA n Talk with Jim 17:48
15:50 IAS Jason LVEA n Faro surveying 18:12
15:50 - Mike, Brian Smith LVEA n Tour, scouting 16:53
15:50 FAC Kim EX Yes Tech clean 17:02
15:51 FAC Karen EY n Tech clean 16:49
15:59 VAC Janos, Jordan, Gerardo EY, MY, MX, EX, CS, FCES n Pump cart wheels 18:20
16:03 - Betsy EY n Measurements 16:28
16:29 - Richard LVEA, FCTE, FCES n Check on cable trays, etc 16:53
16:50 FAC Ken FCES, FCTE n Cable tray supports 21:51
17:18 FAC Chris, Eric EY, EX, CS n Axial fan lubrication 19:41
17:20 PEM Robert LVEA n Setup ground injector 18:07
17:23 CDS Marc CER n Accelerometer troubleshooting 17:50
17:32 FAC Karen, Kim LVEA n Tech clean 18:36
17:32 CDS Fil LVEA n Ground cable trays 18:28
18:05 SEI Jim EX, MX, EY, MY n Inventory search 19:05
18:08 PEM Robert EX n Shaker setup 18:45
18:13 - Mitchell Ends, Mech room n FAMIS checks 18:53
18:26 - Mike, Brian Smith OSB roof, Yarm, EY n Tour 20:24
19:11 PEM Robert LVEA n Setup meas. 19:51
22:17 PEM Robert CER n Check on electronics 22:29
H1 ISC
jenne.driggers@LIGO.ORG - posted 15:43, Tuesday 23 April 2024 (77362)
Ran Dark Offsets script (excluding sqz, as usual)

TJ, Jenne, Jennie

We ran the dark offset script today, to see if there were hidden dark offsets that got reverted during today's h1asc boot.  TJ pointed out that we'd been having quite similar locklosses also this morning before maintenance, so it's not super likely that this will magically fix things, but it probably won't hurt.

I saved all the offsets I could find in SDF, but I didn't screenshot them.  I did capture a screenshot of the OMC DCPDs' offsets, since those seemed to have quite round numbers in place, not values that would have been put in by a script.

I ran /opt/rtcds/userapps/release/isc/h1/scripts/dark_offsets/dark_offsets_exe.py after TJ had taken the IMC to offline, and the script did all the heavy lifting (measured and wrote values).
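
Conceptually, the script averages each photodiode's output with no light on it and writes the negated average as that channel's offset. A minimal sketch of that idea (this is NOT the site script; the channel names are assumptions for illustration):

    # Sketch of the dark-offset idea, not dark_offsets_exe.py itself.
    import time
    from epics import caget, caput

    PDS = ['H1:ASC-AS_A_DC_SEG1', 'H1:ASC-AS_A_DC_SEG2']   # assumed example PDs

    def measure_dark(chan, n=100, dt=0.1):
        samples = []
        for _ in range(n):
            samples.append(caget(chan + '_INMON'))
            time.sleep(dt)
        return sum(samples) / len(samples)

    for pd in PDS:
        caput(pd + '_TRAMP', 2)          # short ramp for the offset write
        dark = measure_dark(pd)
        caput(pd + '_OFFSET', -dark)     # cancel the measured dark level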

Images attached to this report
H1 CDS
jonathan.hanks@LIGO.ORG - posted 14:48, Tuesday 23 April 2024 (77360)
WP 11826
As per WP 11826 I reconfigured the firewall on h1daqgds0 and h1daqgds1 to allow transfer of the live CDS data to a new nds2 node in the LDAS cluster.  This was a simple update of the config with no impact on the daqd or existing data streams.

When the new nds2 node (nds-4) is ready I will enable the data stream.  There is no additional configuration required for this on the h1daqgds machines.
H1 ISC (Lockloss)
camilla.compton@LIGO.ORG - posted 14:48, Tuesday 23 April 2024 - last comment - 09:19, Monday 06 May 2024(77359)
Locklosses at state TRANSITION_FROM_ETMX

Jennie, TJ, Camilla

The Operator team has been seeing more locklosses at state 557 TRANSITION_FROM_ETMX, more so when the wind is high. Times: 1397719797, 1397722896, 1397725803, 1397273349, 1397915077

Last night we had a lockloss from 558 LOWNOISE_ESD_ETMX with a 9Hz ITMX L3 oscillation, see attached. Compare to a successful transition (still has glitch).

Note that there is a glitch ~30s before state 558 in both cases.  H1:SUS-ETMX_L3_DRIVEALIGN_L2L and _L3_LOCK_L filter changes happen here. Are the ramp times finished before these changes?

The timing of the glitch is 2 m 55 s after we get to state 557; this is the same time as in 4 of the last 6 state 557 locklosses.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 16:42, Tuesday 23 April 2024 (77366)

Louis, Camilla. Investigations ongoing, but the timing of this glitch is suspiciously close to when the H1:SUS-ETMX_L1_LOCK_L gain and filters are changed: the tramp is 1 second, but the FM2 and FM6 foton filters have a 3 second ramp. There is an INPUT to this filter bank before the gain and filters are turned on. Plot attached.
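
For context on the ramp-time concern, a minimal sketch of the conservative check one could add (the ramp values come from this comment; the helper is illustrative and not how the guardian actually sequences its steps):

    # Sketch: after switching filters and ramping a gain, wait for the
    # slowest ramp (plus margin) before the next step.
    import time

    GAIN_TRAMP_S = 1.0     # bank TRAMP used for the gain change
    FOTON_RAMP_S = 3.0     # internal ramp of the FM2 / FM6 foton filters

    def wait_for_ramps(*ramp_times, margin=0.5):
        time.sleep(max(ramp_times) + margin)

    # e.g. after turning on FM2/FM6 and changing the SUS-ETMX_L1_LOCK_L gain:
    wait_for_ramps(GAIN_TRAMP_S, FOTON_RAMP_S)   # waits 3.5 s, not just 1 s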

Images attached to this comment
camilla.compton@LIGO.ORG - 09:19, Monday 06 May 2024 (77642)

In 77640 we show that the DARM1 filter change is the cause of the glitches/locklosses.

H1 OpsInfo (PSL)
ryan.short@LIGO.ORG - posted 14:07, Tuesday 23 April 2024 (77357)
New PSL Chiller Verbal Alarm

I've added a test to Verbal Alarms that will notify when the PSL chiller is "alarming," which most likely means the chiller's water level is low. This is being added on top of the already existing DIAG_MAIN notification for the same event. When the H1:PSL-LASER_CHILLER1_ALARM channel flips to 1, Verbal Alarms will say "Check PSL chiller," then wait four hours before saying it again if the alarm channel has flipped again since then. The reasoning behind this is that the chiller can run for a while (unsure of exactly how long, but past examples have shown the alarm running for up to almost 24 hours) before it will actually turn off, taking the NPRO and amplifiers down with it.
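
A sketch of the announce-and-holdoff logic described above (this is not the Verbal Alarms code itself; speak() is a hypothetical stand-in for the audio announcement):

    # Sketch only: announce on a 0->1 flip of the chiller alarm channel,
    # then hold off for four hours; re-announce only if it flipped again.
    import time
    from epics import caget

    ALARM_CH  = 'H1:PSL-LASER_CHILLER1_ALARM'
    HOLDOFF_S = 4 * 3600

    def speak(msg):
        print(msg)        # placeholder for the real announcement

    last_announced = None
    flip_pending = False
    previous = 0
    while True:
        state = caget(ALARM_CH)
        if state == 1 and previous != 1:          # channel just flipped to 1
            flip_pending = True
        holdoff_over = (last_announced is None
                        or time.time() - last_announced > HOLDOFF_S)
        if flip_pending and holdoff_over:
            speak('Check PSL chiller')
            last_announced = time.time()
            flip_pending = False
        previous = state
        time.sleep(10)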

If operators receive this warning, they can relay the message to anyone on the PSL team or refill the reservoir themselves if on a weekend, making sure to note how much water was added.

H1 General
mitchell.robinson@LIGO.ORG - posted 13:27, Tuesday 23 April 2024 (77356)
Monthly Dust Monitor Vacuum Pump Check

EY dust monitor pump 0715002111 failed. Replaced with 11001311. The pump that failed was left at the end station to cool down. I will retrieve and fix the pump next Tuesday.

EX and corner station pumps are running smoothly and within temp range.

H1 DetChar (DetChar)
nicholas.baker@LIGO.ORG - posted 13:23, Tuesday 23 April 2024 (77355)
Seismic Results from Atlas Agro Test Drops
Nick, Robert, Mike

In December of 2023, Atlas Agro, who plans on building a fertilizer production plant that is net carbon neutral on DOE land, made some test seismic noise by dropping a 1600-lb block into an excavated hole. Atlas Agro made nearly 100 test drops. I was asked to see if any of the seismic noise appeared on any of our seismometers. I have been working with Robert Schofield, and have not seen any of the seismic noise. I have attached a DCC link that contains a PDF with some of the time series of the drops and includes links to the rest of the time series.

https://dcc.ligo.org/LIGO-T2400124
H1 ISC (CDS)
jennifer.wright@LIGO.ORG - posted 13:21, Tuesday 23 April 2024 (77348)
Setting up DC7 centering loop for IM moves

Jennie W, Camilla

Today Camilla and I were trying to move the IM1 and IM2 mirrors in turn and try to clip the beam going through the Faraday Isolator, to see how centred we are.

Details in a log from Camilla.

As part of this we need to make a move with either IM1 or IM2 and bring the beam back onto the centre of IM4 TRANS with IM3 pitch or yaw moves.

Jenne had the idea we could use a DC centering loop to speed this up by moving IM3 for us.

We used the DC7 centering path after trending its outputs back to make sure it was not used during locking or in nominal low noise - we checked back at least 70 days and the outputs were zero.

I checked the DC1 centering loop filters and copied over the three that are on during lock for both yaw and pitch into the DC7 foton files, putting all three filters in place of the FM2 module (previously called OFF and having a value of gain(0)) in ASC.adl DC7_P and DC7_Y.

Then we altered the ASC input and output matrices so that DC7 takes IM4 TRANS QPD as an input and moves IM3 to centre on this.
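
For the record, that routing change boils down to writing a couple of input/output matrix elements plus the loop gain; a rough pyepics sketch, where the matrix channel names and element indices are guesses for illustration only, not the real H1 ASC matrix channels:

    # Sketch only: route IM4 TRANS QPD into the DC7 centering loop and send
    # its output to IM3.  All channel names here are assumed/illustrative.
    from epics import caput

    # input matrix: DC7_P/Y driven by IM4 TRANS pit/yaw
    caput('H1:ASC-INMATRIX_P_DC7_IM4_TRANS', 1.0)
    caput('H1:ASC-INMATRIX_Y_DC7_IM4_TRANS', 1.0)

    # output matrix: DC7_P/Y actuating on IM3
    caput('H1:ASC-OUTMATRIX_P_IM3_DC7', 1.0)
    caput('H1:ASC-OUTMATRIX_Y_IM3_DC7', 1.0)

    # loop gains then get set by hand while watching IM4 TRANS
    caput('H1:ASC-DC7_P_GAIN', -10.0)
    caput('H1:ASC-DC7_Y_GAIN', -10.0)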

We tried different gains up to +/-50 in the yaw loop but never managed to make the yaw of IM4 TRANS converge to zero.

After talking to Sheila we realised that the DC1 loop servo will not match the plant of the IM3 suspension so to use this for IM moves we would have to put more thought into the filters.

 


NB: My changing and saving of the ASC.txt foton file led to some confusion with Dave B. et al., who were trying to track down a rounding error in the DHARD filter in the same foton file. But this has been resolved.

Images attached to this report
LHO FMCS
eric.otterman@LIGO.ORG - posted 13:10, Tuesday 23 April 2024 (77353)
End X temperature increase
There was an increase in space temperature at End X, which is evidenced in the trend. This increase was due to our disabling the fans for quarterly greasing. 

Initially, the air handlers were configured to raise their supply air set points when in Start-Up mode; this mode occurs whenever the program loses fan status, such as during unoccupied periods, so that upon start-up, the system is not overloaded trying to pull the supply air temperature down as fast as possible. Since the only time the fans are theoretically off is when we grease them, which only takes thirty seconds, there is no need to gradually reduce supply air temperature.

I have changed the Start-Up supply air temperatures at EX and EY from 61 to 55, which is roughly the supply air temperature the air handlers normally produce. This should reduce any space temperature increases after quarterly greasing tasks. 
H1 SQZ
karmeng.kwan@LIGO.ORG - posted 09:15, Tuesday 23 April 2024 - last comment - 13:54, Tuesday 23 April 2024(77343)
OPO fiber rejected power high, SQZT0 waveplate adjust

Karmeng, Camilla. H1:SQZ-SHG_FIBR_REJECTED_DC_POWERMON power was high. Adjusted the 1/2 and 1/4 waveplate to reduce rejected power from 0.3 to 0.03. Instruction

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 13:54, Tuesday 23 April 2024 (77358)

Vicky questioned if we need to realign the pump AOM / SHG fiber, e.g. 74479.

With the OPO guardian DOWN, OPO REFL DC POWER is 2.26V, our max is around 2.9V 75270.

With SQZ MANAGER DOWN, SHG fiber shutter closed and H1:SQZ-OPO_ISS_DRIVEPOINT at 0V:
SHG_LAUNCH + SHG_REJECTED = 37 + 7 = 44mW, with 106mW out of SHG. ~75% of SHG output should go to pump (other 25% to FC launch). So we expect 106mW * 75% = 80mW. We are only getting 44mW so 55%.
Launched power has been increasing for the same green REFL; plot attached.

We should plan on realigning AOM and fiber next week.

Images attached to this comment
LHO VE
jordan.vanosky@LIGO.ORG - posted 09:53, Wednesday 17 April 2024 - last comment - 15:06, Tuesday 23 April 2024(77241)
LN2 Dewar Inspection 4/16/24 and Vacuum Jacket Pressures

During yesterday's (4/16) maintenance period, a Norco tech came to the site to inspect the 8 LN2 dewars that feed the cryopumps. Inspection report will be posted to Q2000008 once received.

The vacuum jacket pressures were also measured during inspection:

Dewar                  Pressure (microns / mTorr)
CP1                    68 (gauge fluctuated, may need replacing; lowest value recorded)
CP2                    26
CP3                    68
CP4 (not in service)   110
CP5                    9
CP6                    33
CP7                    48
CP8                    77 (gauge fluctuated, may need replacing; lowest value recorded)

 

Comments related to this report
janos.csizmazia@LIGO.ORG - 15:06, Tuesday 23 April 2024 (77361)
Comparison between the last pressure check for all the dewar jackets:

- CP1: Jun 26th, 2023: 7 mTorr; difference: 68-7=61 mTorr; speed of pressure growth: 61 mTorr / 302 days = 0.202 mTorr/day
- CP2: Jun 26th, 2023: 5 mTorr; difference: 26-5=21 mTorr; speed of pressure growth: 21 mTorr / 302 days = 0.070 mTorr/day
- CP3: Jun 16th, 2023: 4 mTorr; difference: 68-4=64 mTorr; speed of pressure growth: 64 mTorr / 312 days = 0.205 mTorr/day
- CP4: no data, not in service
- CP5: Jun 16th, 2023: 4 mTorr; difference: 9-4=5 mTorr; speed of pressure growth: 5 mTorr / 312 days = 0.016 mTorr/day
- CP6: Jun 2nd, 2023: 5 mTorr; difference: 33-5=28 mTorr; speed of pressure growth: 28 mTorr / 326 days = 0.086 mTorr/day
- CP7: Jun 30th, 2023: 4 mTorr; difference: 48-4=44 mTorr; speed of pressure growth: 44 mTorr / 298 days = 0.148 mTorr/day
- CP8: Jul 8th, 2023: 5 mTorr; difference: 77-5=72 mTorr; speed of pressure growth: 72 mTorr / 290 days = 0.248 mTorr/day
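
(The rates above are easy to reproduce; a small sketch of the calculation, with the values copied from the two inspections and CP8 using the 72 mTorr difference:)

    # Sketch: vacuum-jacket leak-up rate in mTorr/day for each dewar.
    jackets = {
        # name: (old mTorr, new mTorr, days between checks)
        'CP1': (7, 68, 302),
        'CP2': (5, 26, 302),
        'CP3': (4, 68, 312),
        'CP5': (4, 9, 312),
        'CP6': (5, 33, 326),
        'CP7': (4, 48, 298),
        'CP8': (5, 77, 290),
    }

    for name, (old, new, days) in jackets.items():
        rate = (new - old) / days
        print(f'{name}: {new - old} mTorr over {days} days = {rate:.3f} mTorr/day')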