H1 ISC (DetChar, ISC)
gabriele.vajente@LIGO.ORG - posted 14:03, Friday 25 August 2023 - last comment - 14:20, Friday 25 August 2023(72430)
Retuned MICH feedforward

As the title says, we retuned the MICH feedforward, and the new filter performs better at all relevant frequencies.

Guardian has been updated to engage FM9 instead of FM8.

Quoting Elenna: "It's been 0 days since we retuned the LSC FF"

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:20, Friday 25 August 2023 (72431)

I have accepted the SDF diff in both OBSERVE and SAFE. Forgot to screenshot both times, sorry.

Process for accepting in SAFE:

Select "SDF_TO_SAFE" guardian state in ISC_LOCK

Wait for SDF table to switch to safe

Search for my SDF diff in the LSC table by sorting on substring

Accept diff

Confirm

Select "Nominal Low Noise" in ISC_LOCK guardian

H1 FMP (DetChar, ISC)
jeffrey.kissel@LIGO.ORG - posted 13:41, Friday 25 August 2023 - last comment - 12:01, Monday 28 August 2023(72428)
Chilled Water Pump Has Failed for EY HVAC Air Handlers
J. Kissel, for T. Guidry, R. McCarthy

Just wanted to get a clear, separate aLOG entry in regarding what Corey mentioned in passing in his mid-shift status, LHO:72423:

The EY HVAC Air Handler's chilled water pump 1 of 2 failed this morning, 2023-08-25 at 9:45a PDT, so the EY HVAC system was shut down for repair at 17:35 UTC (10:35 PDT). The YVEA temperature is therefore rising as it equilibrates with the outdoor temperature; thus far from 64 deg F to 67 deg F.

Tyler, Richard, and an HVAC contractor are on it, actively repairing the system, and I'm sure we'll get a full debrief later.

Note -- we did not stop our OBSERVATION INTENT until 2h 40m later, at 2023-08-25 20:18 UTC (13:18 PDT), when we went out to do some commissioning.
Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 13:53, Friday 25 August 2023 (72429)

The work that they've been doing so far today to diagnose this issue has been in the 'mechanical room'.  Their work should not add any significant noise over what already occurs in that room at all times, so I do not expect any data quality issues as a result of this work.  But, we shall see (as Jeff points out) whether there are any issues from the temperature change itself.

corey.gray@LIGO.ORG - 15:51, Friday 25 August 2023 (72437)FMP

They are done for the weekend and temperatures are returning to normal values. 

Chiller Pump #2 is the pump we are now running.

Chiller Pump #1 will need to be looked at some more (Tyler mentioned the contractor will return on Tuesday).

Attached is a look at the last 4+ years for both EY chillers (1 = ON & 0 = OFF).

Images attached to this comment
jeffrey.kissel@LIGO.ORG - 12:01, Monday 28 August 2023 (72484)DetChar, ISC, SUS
See Tyler's LHO:72444 for a more accurate and precise description of what had happened to the HVAC system.
H1 CAL
louis.dartez@LIGO.ORG - posted 13:01, Friday 25 August 2023 - last comment - 13:16, Friday 25 August 2023(72422)
DARM OLG UGF now
J. Kissel, L. Dartez

I'm attaching a plot of the DARM UGF that compares two measurements taken ~3.5 months apart. The blue trace is taken from the calibration sweep that Ryan C. took a few days ago (LHO:72392) and the orange trace is from a similar sweep taken back in May (20230506T170817Z). 

The DARM loop UGF has moved from ~58Hz to ~66.4Hz and the phase margin has increased by about a degree since May. 

There is no immediate need to adjust the DARM open loop gain (DRIVEALIGN_L2L gain, as mentioned in LHO:72416).


The fact that we don't see the loop dip near 20Hz is a rough indicator that the [actuation stage] crossovers are stable. I'll be following up with a more in-depth look at that.


The script used to generate the attached plot lives at: /ligo/home/louis.dartez/projects/20230825/plot_olg/plot_olg_meas.py
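For anyone wanting to reproduce a comparison like this, here is a minimal sketch (not the script referenced above; the file names, paths, and assumed three-column text export are hypothetical) of overlaying two OLG sweeps and reading off their UGFs:

    # Minimal sketch only -- not the script referenced above.
    # Assumes each sweep is exported as text with columns: freq [Hz], real(OLG), imag(OLG).
    import numpy as np
    import matplotlib.pyplot as plt

    def load_olg(path):
        f, re, im = np.loadtxt(path, unpack=True)
        return f, re + 1j * im

    def ugf(f, olg):
        # Frequency of the point closest to unity gain (good enough for a quick look)
        return f[np.argmin(np.abs(np.abs(olg) - 1.0))]

    for label, path in [("2023-05-06", "olg_20230506.txt"),   # hypothetical file names
                        ("2023-08-22", "olg_20230822.txt")]:
        f, olg = load_olg(path)
        plt.loglog(f, np.abs(olg), label=f"{label}: UGF ~ {ugf(f, olg):.1f} Hz")

    plt.axhline(1.0, color="k", ls="--", lw=0.5)
    plt.xlabel("Frequency [Hz]")
    plt.ylabel("|DARM OLG|")
    plt.legend()
    plt.savefig("darm_olg_comparison.png")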
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:16, Friday 25 August 2023 (72427)ISC
Tagging ISC, and adding a copy of Louis' script.
Non-image files attached to this comment
LHO General
corey.gray@LIGO.ORG - posted 12:57, Friday 25 August 2023 (72423)
Friday Day Mid Shift Status

H1 is currently almost 50.5 hrs into a lock (the current record is ~54 hrs).

We are approaching 1pm with the start of 2hrs of commissioning.

We are also dealing with increasing temperatures at EY due to HVAC chilled water issues.  A contractor is on their way.

Images attached to this report
H1 CAL
anthony.sanchez@LIGO.ORG - posted 12:34, Friday 25 August 2023 - last comment - 09:04, Thursday 05 October 2023(72420)
Double PCAL EX End Station Measurement


First ENDX Station Measurement:
During the Tuesday maintenance, the PCAL team (Rick Savage & Tony Sanchez) went to ENDX with the Working Standard Hanford, aka WSH (PS4), and took an end station measurement.
However, the upper PCAL beam had been moved to the left by 5 mm last week. See alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72063.
We liked the idea of doing a calibration measurement with the beam off to the left, just to try to see the effects of the offset on the calibration.

Because of limitations of our analysis tool, which names files with a date stamp, the folder name for this non-nominal measurement is tD20230821 even though it actually took place on Tuesday 2023-08-22.

Beam Spot Picture of the Upper Beam 5 mm to the Left on the aperture
Martel_Voltage_Test.png
Document***
WS_at_TX.png
WS_at_RX.png
TX_RX.png
LHO_ENDX_PD_ReportV2.pdf
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_EndX/tD20230821/


We then moved the PCAL BEAM back to the center, which is its NOMINAL position.
We took pictures of the beam spot.

Second NOMINAL End Station Measurement:
Then we did another ENDX Station measurement as we would normally do, which is appropriately documented as tD20230822.
The second ENDX Station Measurement was carried out according to the procedure outlined in Document LIGO-T1500062-v15, Pcal End Station Power Sensor Responsivity Ratio Measurements: Procedures and Log, and was completed by noon.
We took pictures of the Beam Spot.

Martel:
We started by setting up a Martel voltage source to apply voltage into the PCAL chassis's Input 1 channel, and we recorded the times that -4.000 V, -2.000 V, and 0.000 V signals were sent to the chassis. The analysis code that we run after we return uses the GPS times, grabs the data, and creates the Martel_Voltage_Test.png graph. We also measured the Martel's voltages in the PCAL lab to calculate the ADC conversion factor, which is included in the document.
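As a rough illustration of that ADC conversion factor calculation (not the actual analysis code; the readback counts below are made-up placeholders), a linear fit of recorded ADC counts against the applied Martel voltages gives the counts-per-volt factor:

    # Illustrative sketch only. Applied voltages match those listed above;
    # the ADC readback values are hypothetical placeholders.
    import numpy as np

    applied_volts = np.array([-4.000, -2.000, 0.000])    # Martel settings [V]
    adc_counts = np.array([-26214.0, -13107.0, 0.0])     # hypothetical readback [cts]

    # Least-squares slope = ADC conversion factor [counts/V]
    slope, offset = np.polyfit(applied_volts, adc_counts, 1)
    print(f"ADC conversion factor ~ {slope:.1f} counts/V (offset {offset:.1f} counts)")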

After the Martel measurement, the procedure walks us through the steps required to make a series of plots while the Working Standard (PS4) is in the Transmitter Module. These plots are shown in WS_at_TX.png.

Next, the WS is placed in the Receiver Module; these plots are shown in WS_at_RX.png.

This is followed by TX_RX.png, which contains plots of the Transmitter Module and the Receiver Module operating without the WS in the beam path at all.

All of this data is then used to generate LHO_ENDX_PD_ReportV2.pdf, which is attached and is a work in progress in the form of a living document.

All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_EndX/tD20230822/

PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) Back-Front Responsivity Ratio Measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages.pdf
avg_voltages.pdf
raw_ratios.pdf
avg_ratios.pdf

All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LabData/PS4_PS5/
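For orientation only, a toy sketch of how raw and averaged responsivity ratios could be formed from two synchronized power-sensor voltage records (the file name, column layout, and any corrections the real analysis applies are assumptions here):

    # Toy sketch only -- not the PCAL analysis code.
    # Assumes a two-column text file of simultaneous PS4 and PS5 photodetector
    # voltages recorded while both view the same laser power.
    import numpy as np

    ps4_v, ps5_v = np.loadtxt("ps4_ps5_voltages.txt", unpack=True)  # hypothetical file

    raw_ratio = ps4_v / ps5_v                     # cf. raw_ratios.pdf
    avg_ratio = raw_ratio.mean()                  # cf. avg_ratios.pdf
    unc = raw_ratio.std(ddof=1) / np.sqrt(raw_ratio.size)

    print(f"PS4/PS5 responsivity ratio ~ {avg_ratio:.6f} +/- {unc:.6f}")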


I switched the order of the lab measurements this time, doing the Front-Back measurement last, to see if it changed the relative difference between the FB and BF measurements.
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) Front-Back Responsivity Ratio Measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems.
raw_voltages2.pdf
avg_voltages2.pdf
raw_ratios2.pdf
avg_ratios2.pdf

This adventure has been brought to you by Rick Savage & Tony Sanchez.

 

Images attached to this report
Non-image files attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 10:56, Tuesday 29 August 2023 (72516)

After speaking to Rick and Dripta,
Line 10 in pcal_params.py needs to be changed from:
PCALPARAMS['WHG'] = 0.916985 # PS4_PS5 as of 2023/04/18

To:
PCALPARAMS['WHG'] = 0.9159 #PS4_PS5 as of 2023-08-22

This change would reflect the changes we have observed in the PS4_PS5 responsivity ratio measurements taken in the lab, which affect the plots of Rx calibration in sections 14 and 22 of the LHO_EndY_PD_ReportV2.pdf.

Investigations have shown that PS4 has changed, but not PS5 or the Rx calibration.
 

anthony.sanchez@LIGO.ORG - 09:04, Thursday 05 October 2023 (73285)
Non-image files attached to this comment
H1 CAL (ISC)
jeffrey.kissel@LIGO.ORG - posted 12:05, Friday 25 August 2023 - last comment - 13:14, Friday 25 August 2023(72416)
Why is DELTAL_EXTERNAL BNS Range Reporting So Much Higher Than GDS-CALIB_STRAIN BNS Range? Test Mass Actuation Strength Has Drifted by 8% And CAL-DELTAL Doesn't Compensate For It; GDS-STRAIN Does.
J. Betzwieser, L. Dartez, J. Kissel

Ryan Short recently updated the control room FOM for the BNS range (LHO:72415) which now shows -- with a clear legend -- the range computed using CAL-DELTAL_EXTERNAL_DQ vs. GDS-CALIB_STRAIN_CLEAN for both H1 and L1 observatories -- see example attached. 

This makes it dreadfully obvious that "L1's DELTAL_EXTERNAL range is right on top of the CALIB_STRAIN range -- but H1's is not, and DELTAL_EXTERNAL is *higher*." -- see First Attachment from the control room FOM screenshots.

The natural questions to ask then are "why?" "is something wrong with H1's calibration?"

No, there's nothing wrong.***
The discrepancy between DELTAL_EXTERNAL and GDS-CALIB_STRAIN at H1 is because the static test-mass stage actuation strength hasn't been updated since 2023-05-04 -- before the observing run started -- and it has slowly drifted due to test mass ESD charge accumulation; it is now 8% larger than the 2023-05-04 value. See the current value for the past 24 hours and a trend of the whole run thus far. L1's ESD strength has *not* drifted as much (see the similar L1 trend), and they also regularly "fudge" their DELTAL_EXTERNAL actuator strength gains in order to make DELTAL_EXTERNAL more accurate (and they do so in a way that doesn't impact GDS-CALIB_STRAIN). H1 has chosen not to, to date.

This drift is tracked and accounted for in our "time dependent correction factor" or TDCF system for that test-mass stage actuation strength, \kappa_T -- and GDS-CALIB_STRAIN (and STRAIN_NOLINES, and STRAIN_CLEANED) all have this correction in place. Check out the Second attachment from the same day's "CAL" > "h(t) generation" summary page, and walk with me:
This plot shows the ASD ratio (roughly analogous to the magnitude of a transfer function) between all of the various stages of the calibration pipeline.
    - GDS-CALIB_STRAIN, GDS-CALIB_STRAIN_NOLINES, and GDS-CALIB_STRAIN_CLEANED are all the same from this perspective. Thus the ratio of these three channels with DELTAL_EXTERNAL in the denominator highlights that DELTAL_EXTERNAL is a preliminary product, NOT corrected for TDCFs, and thus there's a huge ~16% systematic difference between the two "stages" of product.
    - Recall that *all* four paths of the calibration -- UIM, PUM, TST, and Sensing -- are summed, and the cross-over frequencies for these sums all culminate around 50-200 Hz, where there is a factor of 2x to 3x of gain peaking (see e.g. Figure 4 of P1900245) -- and thus the 8% drift in the TST stage strength means a ~16% systematic error in the DELTAL_EXTERNAL calibration (see the toy sketch after this list).
    - However, the front-end version of the preliminary product that *is* corrected for TDCFs is also shown in the plot -- CFTD-DELTAL_EXTERNAL. The ASD ratio against this channel shows MUCH less systematic discrepancy -- indicating that correcting for time-dependence (get it? CFTD!) does a LOT of the heavy lifting of accounting for this 8% TST drift.
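As a toy numerical illustration of that scaling (the 2x factor below is just a representative gain-peaking value from the Figure-4-style plots cited above, not a measured number):

    # Toy illustration: near the actuation/sensing crossover, a fractional error
    # in the TST-stage strength is amplified by the path-summation gain peaking,
    # so an 8% kappa_T drift can show up as ~16% in DELTAL_EXTERNAL.
    kappa_tst_drift = 0.08   # fractional TST actuation-strength drift
    gain_peaking = 2.0       # representative 2x-3x peaking near 50-200 Hz (assumed)

    deltal_error = gain_peaking * kappa_tst_drift
    print(f"Approximate DELTAL_EXTERNAL systematic error: {deltal_error:.0%}")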

Of course, these ratios of different portions of the calibration pipeline don't *really* tell you if you've *really* done the right thing in an absolute sense. They only tell you what changes from step to step. (And indeed, the CFTD-DELTAL_EXTERNAL to GDS-CALIB_CLEANED ratio still shows *some* discrepancy.)

The fact that the fifth attachment -- from the archive showing the constant *direct measurement* of the systematic error in the calibration against the absolute reference, the PCALs -- is nice and low (i.e. the transfer function is close to unity magnitude and zero phase) indicates that all of the correction for time-dependence is doing the right thing.

*** Yet. In O3, L1 suffered a lot from TST strength drift. Joe has shown repeatedly that if you let an actuator TDCF drift too far beyond 10%, the approximation we use to calculate these TDCFs breaks down (and see Aaron's work discussing it as a motivation for P2100107). In addition, since the real ESD strength is changing -- which is corroborated, I think, by the in-lock charge measurements; see the highlighted red region of the sixth attachment from LHO:72310 -- that means the DARM open loop gain TF is also changing.

This may impact the DARM loop stability (see e.g. LLO aLOGs 50900 and 50639). So, *eventually* we should resurrect the things we did in O3:
    (1) Reset the model of the static actuation strength for the TST stage to a more current value (and thus start a new calibration epoch).
    (2) Potentially change the actual DARM loop by adjusting the DRIVEALIGN_L2L gain.
    (3) Work up a solution to mitigate the drift -- perhaps doing something similar to what was done in O3, and play games with turning on the ESD bias voltage with the opposite sign when we're not in observing.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 13:14, Friday 25 August 2023 (72426)
Louis has plotted DARM open loop gain transfer functions from May 2023 vs. Aug 2023 in LHO:72422. The comparison concludes we do NOT need to adjust the actual DARM loop (as suggested in item (2) above). In fact, the DARM OLG TF from August is *more* stable than it was in May (but not by much). This is indicative of a good robust loop design -- that ~10% level drifts don't impact the stability of the loop.

We discussed further actions (1) and (3) based on the OLG TF results, and conclude the actions can wait until next week. But... probably not 2 weeks, again because the drift is close enough to the TDCF calculation's approximation breakdown point that we need to take action and "reset" the TST stage actuation strength.
LHO VE
david.barker@LIGO.ORG - posted 10:51, Friday 25 August 2023 (72419)
Fri CP1 Fill

Fri Aug 25 10:12:06 2023 INFO: Fill completed in 12min 2secs

Travis confirmed a good fill curbside.

Images attached to this report
H1 AOS
david.barker@LIGO.ORG - posted 10:49, Friday 25 August 2023 (72418)
EY Chiller Yard cell phone alarms bypassed for chiller work

Richard has requested EY chiller alarms be bypassed while he is working on this system.

Bypass will expire:
Fri 25 Aug 2023 04:47:46 PM PDT
For channel(s):
    H0:FMC-EY_CY_H2O_SUP_DEGF
    H0:FMC-EY_CY_H2O_PUMPSTAT
 

H1 OpsInfo
ryan.short@LIGO.ORG - posted 10:12, Friday 25 August 2023 (72415)
Inspiral Range FOM Updated

This is a late entry noting my updates over the past week to the Inspiral Range FOM in the control room on nuc27's top screen; see attached screenshot. The DMT Viewer template now shows both range calculations from LHO and LLO (from GDS-CALIB_STRAIN_CLEAN, or "GDS", and from CAL-DELTAL_EXTERNAL, or "the front-ends"), with the GDS range in a bolder line. It's worth noting that the GDS trace only exists while in OBSERVE, which makes including the front-ends range necessary as well.

The DMT Viewer range template lives in: /opt/rtcds/userapps/release/cds/h1/scripts/fom_startup/nuc27/HLV-Range.xml

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 08:23, Friday 25 August 2023 - last comment - 10:39, Monday 28 August 2023(72413)
Ops Day Shift Start

TITLE: 08/25 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 11mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked for 46 hours.

Comments related to this report
corey.gray@LIGO.ORG - 10:39, Monday 28 August 2023 (72482)

Just a note:  I was the Day Operator, but arrived late due to Route 10 being CLOSED. :(

H1 General
oli.patane@LIGO.ORG - posted 00:12, Friday 25 August 2023 - last comment - 10:16, Friday 25 August 2023(72412)
Ops EVE Shift End

TITLE: 08/25 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Detector still Observing and has been Locked for 37hrs 37mins.

Throughout my shift there were a few instances where a GW candidate came in on GraceDB with the EARLY_WARNING tag (but a very high FAR), and possible subthreshold GRB candidates that came in shortly after would cause Verbals to announce them.

Also, around 8/25 4:36ish UTC, I noticed that the lock clock said to stand down, but there had been no alert on Verbals, and going to GraceDB, there hadn't been any GRB events in the previous couple of hours.

 

23:00UTC Detector in Observing and Locked for 29.5 hours

23:19 SubGRBTargeted on Verbals E432118
23:22 SubGRBTargeted on Verbals E432119
     - A bunch more SubGRBTargeted on verbals, along with some getting repeated alerts a few minutes later

23:24 Earthquake mode activated for earthquake from Japan (and from off Oregon Coast)
23:24 Back to CALM

23:39 Earthquake mode activated from another earthquake from off the coast of Oregon
23:49 Back to CALM

4:36ish - 4:43UTC lock clock says to Stand Down


LOG:

no log

Images attached to this report
Comments related to this report
genevieve.connolly@LIGO.ORG - 10:16, Friday 25 August 2023 (72417)

Around the same time as the lock clock saying to Stand Down, the summary pages report an increase in SNR ≥5 glitches in the 10 Hz - 5 kHz range.

Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 20:36, Thursday 24 August 2023 (72411)
Ops EVE Midshift Update

We're going on 34 hours of being Locked and Observing at 147Mpc.

H1 PEM (AOS, DetChar)
lanceanderson.blagg@LIGO.ORG - posted 16:56, Thursday 24 August 2023 (72408)
Potential Scattering Noise at HAM3

Genevieve and I did about a dozen scattering injections in the LVEA on July 20th and 21st. For this round of injections, we inserted the retro-reflector into the guillotine port for 10 seconds, removed it for 10 seconds, and repeated 6 times. We are still working through the data, but it appears we had some scattering noise on HAM3 in the left guillotine port: there is an increased noise source present when the reflector is inserted. I did more scattering injections yesterday on HAM3 and HAM6 during commissioning and will parse through that data soon. Spectrograms of an accelerometer on HAM3 and DARM:
-Acc
-DARM
-Stacked, with red lines around the time the reflector was inserted and noise was present

Images attached to this report
H1 General (VE)
oli.patane@LIGO.ORG - posted 15:58, Thursday 24 August 2023 (72409)
Ops EVE Shift Start

TITLE: 08/24 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 9mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

Taking over for Ryan C. Detector in Observing and has been Locked for 29.5 hours. The EX Pirani pressure gauge, VAC-EX_X2_PT524B_PRESS_TORR, is still down even though it should have come back up by now (tagging vacuum team even though they know already).

H1 CAL
jeffrey.kissel@LIGO.ORG - posted 15:58, Thursday 24 August 2023 (72407)
The Coherence Threshold for Front-end Computed TDCFs is 5% (not 0.5%)
L. Dartez, J. Kissel

We're investigating some issues with time-dependent correction factors (TDCFs) as computed by the front-end vs. GDS, and reminding ourselves of the parameters that define the "smoothing out" of the computation -- in particular, the uncertainty threshold against which the live uncertainty is compared.

Apparently, all front-end thresholds are set at 5% uncertainty.

    H1:CAL-CS_TDEP_CAVITY_POLE_F_C_GATE_UNC_THRESH       0.05
    H1:CAL-CS_TDEP_CAVITY_POLE_KAPPA_C_GATE_UNC_THRESH   0.05
    H1:CAL-CS_TDEP_D2N_SPRING_F_S_GATE_UNC_THRESH        0.05
    H1:CAL-CS_TDEP_D2N_SPRING_Q_S_GATE_UNC_THRESH        0.05
    H1:CAL-CS_TDEP_KAPPA_PUM_GATE_UNC_THRESH             0.05
    H1:CAL-CS_TDEP_KAPPA_TST_GATE_UNC_THRESH             0.05
    H1:CAL-CS_TDEP_KAPPA_UIM_GATE_UNC_THRESH             0.05
(Remember, the live uncertainty is defined by taking each calibration line's coherence, coh, with DARM_ERR, converting it to uncertainty via sqrt([1-coh]/[2*Navg*coh]), where Navg is determined by the FFT length -- see LHO:69175 for a recent review of this calculation.)
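Written out as a quick sketch (the gating logic here is a simplified paraphrase of what the pipeline does, not its actual code):

    # Sketch of the uncertainty calculation quoted above, plus threshold gating
    # in the spirit of the *_GATE_UNC_THRESH channels.
    import numpy as np

    def line_uncertainty(coh, n_avg):
        """Relative uncertainty from a calibration line's coherence with DARM_ERR."""
        return np.sqrt((1.0 - coh) / (2.0 * n_avg * coh))

    def gate(new_value, last_good, coh, n_avg, thresh=0.05):
        """Hold the last good value if the line uncertainty exceeds the threshold."""
        return new_value if line_uncertainty(coh, n_avg) < thresh else last_good

    # Example: coherence of 0.9 with 10 averages gives ~7.5% uncertainty,
    # which would be gated against a 5% threshold (old value held).
    print(line_uncertainty(0.9, 10))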

I could make up a story from my foggy memory as to why the front-end thresholds are set at 5% (and not 0.5%, like we expected -- and like what we *think* the GDS pipeline's threshold is), but I'll spare you. They've been this way since Apr 2021.

Attached is the time-series comparison between GDS (in red) and front-end (in gray) computed \kappa_C for the 24 hour period after July 31 2023 00:00 UTC. The gray, front-end trace shows the characteristic ~3 hour thermalization transient we've come to know and love from our 76W IFO (see e.g. LHO:69796). The red trace doesn't show this; it's in fact frozen at the "last good value" by the GDS gating system -- because its uncertainty threshold is much lower than 5%.

We presume that this is because the detector noise was pretty DARM bad during this time period -- see 24-hour statistics on the sensitivity in second attachment.

Images attached to this report
H1 PEM (DetChar, FMP, OpsInfo, PEM)
lanceanderson.blagg@LIGO.ORG - posted 11:40, Thursday 24 August 2023 - last comment - 13:07, Friday 25 August 2023(72404)
Potential Noise in DARM from Garbage Truck at LSB

Following up on TJ's alog from 8/17 (72293), it was noted that a garbage truck at LSB was quite loud. The seismometers in the LVEA clearly picked up the noise, and it seems to coincide with noise in DARM. It's hard to be certain with only one signal, but it probably warrants further investigation.


Spectrograms attached of one seismometer and DARM for a 2-minute time span around when the noise was reported:
-Of signal
-Zoomed in with boxes on noise regions
-Stacked with some boxes around correlated noise

Images attached to this report
Comments related to this report
lanceanderson.blagg@LIGO.ORG - 13:07, Friday 25 August 2023 (72425)DetChar, FMP, OpsInfo
Oli relayed to Genevieve that the garbage truck left at 16:05:00 local time yesterday (8/24). No noise was reported on site, but approximately 2 minutes before the truck left we see a signal in the LVEA seismometers similar to that from the truck last week, and the signal once again shows up in DARM.
Images attached to this comment