Multidimensional mSUGRA likelihood maps
We calculate the likelihood map in the full 7 dimensional parameter space of
the minimal supersymmetric standard model (MSSM) assuming universal boundary
conditions on the supersymmetry breaking terms. Simultaneous variations of m_0,
A_0, M_{1/2}, tan beta, m_t, m_b and alpha_s(M_Z) are applied using a Markov
chain Monte Carlo algorithm. We use measurements of b -> s gamma, (g-2)_mu and
Omega_{DM} h^2 in order to constrain the model. We present likelihood
distributions for some of the sparticle masses, for the branching ratio of
B_s^0 -> mu^+ mu^- and for m_{stau}-m_{chi_1^0}. An upper limit of 2 x 10^{-8} on
this branching ratio might be achieved at the Tevatron, and would rule out 29%
of the currently allowed likelihood. If one allows for non-thermal neutralino
components of dark matter, this fraction becomes 35%. The mass ordering allows
the important cascade decay squark_L -> chi_2^0 -> slepton_R -> chi_1^0 with a
likelihood of 24+/-4%. The stop coannihilation region is highly disfavoured,
whereas the light Higgs region is marginally disfavoured.
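The engine behind such a scan is a Metropolis random walk through the parameter space. Below is a minimal, runnable sketch of that step; the Gaussian log-likelihood is only a stand-in for the real physics likelihood (which in the paper combines b -> s gamma, (g-2)_mu and Omega_{DM} h^2 predictions from a spectrum calculator), and the seven coordinates are dimensionless placeholders for m_0, A_0, M_{1/2}, tan beta, m_t, m_b and alpha_s(M_Z).

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    # Stand-in for the physics likelihood: a unit Gaussian keeps the
    # sketch self-contained and runnable.
    return -0.5 * float(theta @ theta)

def metropolis(theta0, step, n_steps):
    """Random-walk Metropolis over the parameter vector theta."""
    chain = [np.asarray(theta0, dtype=float)]
    logl = log_likelihood(chain[0])
    for _ in range(n_steps):
        proposal = chain[-1] + step * rng.standard_normal(chain[0].size)
        logl_new = log_likelihood(proposal)
        # accept with probability min(1, L_new / L_old)
        if np.log(rng.uniform()) < logl_new - logl:
            chain.append(proposal)
            logl = logl_new
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Seven directions standing in for the seven scanned parameters.
chain = metropolis(np.zeros(7), step=0.5, n_steps=5000)
print(chain.shape)
```

Marginalising the resulting chain over six of the seven coordinates is what produces the one-dimensional likelihood distributions quoted in the abstract.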
Sampling using a 'bank' of clues
An easy-to-implement form of the Metropolis Algorithm is described which,
unlike most standard techniques, is well suited to sampling from multi-modal
distributions on spaces with moderate numbers of dimensions (order ten) in
environments typical of investigations into current constraints on
Beyond-the-Standard-Model physics. The sampling technique makes use of
pre-existing information (which can safely be of low or uncertain quality)
relating to the distribution from which it is desired to sample. This
information should come in the form of a ``bank'' or ``cache'' of space points
of which at least some may be expected to be near regions of interest in the
desired distribution. In practical circumstances such ``banks of clues'' are
easy to assemble from earlier work, aborted runs, discarded burn-in samples
from failed sampling attempts, or from prior scouting investigations. The
technique equilibrates between disconnected parts of the distribution without
user input. The algorithm is not led astray by ``bad'' clues, but there is no
free lunch: performance gains will only be seen where clues are helpful.
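A minimal sketch of the idea, under illustrative assumptions of our own (a two-mode Gaussian target, a three-point bank containing one deliberately useless clue, and arbitrary tuning constants): proposals are drawn either as local random-walk steps or as jumps to jittered bank points, and the full Metropolis-Hastings ratio, including the non-symmetric proposal density, keeps the sampler exact.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_logpdf(x, mu, sigma):
    d = x.size
    return (-0.5 * float(np.sum((x - mu) ** 2)) / sigma**2
            - d * np.log(sigma) - 0.5 * d * np.log(2 * np.pi))

def proposal_logpdf(x_to, x_from, bank, sigma, p_bank):
    # Density of the mixture proposal: with probability p_bank, a jitter
    # around a uniformly chosen bank point; otherwise a local step.
    dens_bank = np.mean([np.exp(gauss_logpdf(x_to, b, sigma)) for b in bank])
    dens_local = np.exp(gauss_logpdf(x_to, x_from, sigma))
    return np.log(p_bank * dens_bank + (1 - p_bank) * dens_local)

def bank_metropolis(logl, x0, bank, sigma=0.3, p_bank=0.2, n_steps=2000):
    chain, cur_logl = [np.asarray(x0, dtype=float)], logl(x0)
    for _ in range(n_steps):
        x = chain[-1]
        if rng.uniform() < p_bank:
            x_new = bank[rng.integers(len(bank))] + sigma * rng.standard_normal(x.size)
        else:
            x_new = x + sigma * rng.standard_normal(x.size)
        # Full Metropolis-Hastings acceptance: the bank proposal is not
        # symmetric, so both proposal densities must appear in the ratio.
        log_alpha = (logl(x_new) - cur_logl
                     + proposal_logpdf(x, x_new, bank, sigma, p_bank)
                     - proposal_logpdf(x_new, x, bank, sigma, p_bank))
        if np.log(rng.uniform()) < log_alpha:
            chain.append(x_new)
            cur_logl = logl(x_new)
        else:
            chain.append(x)
    return np.array(chain)

# Bimodal toy target; the bank holds rough clues near both modes plus
# one deliberately bad clue that the acceptance step simply rejects.
def logl(x):
    return np.logaddexp(-0.5 * np.sum((x - 3.0) ** 2),
                        -0.5 * np.sum((x + 3.0) ** 2))

bank = np.array([[2.5, 3.2], [-3.4, -2.8], [10.0, 10.0]])
chain = bank_metropolis(logl, np.array([3.0, 3.0]), bank)
print((chain[:, 0] < 0).mean())   # both modes are visited
```

A plain random walk with the same step size would stay trapped in the starting mode; the bank jumps are what let the chain equilibrate between the disconnected regions without user input.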
Weighing wimps with kinks at colliders: invisible particle mass measurements from endpoints
We consider the application of endpoint techniques to the problem of mass
determination for new particles produced at a hadron collider, where these
particles decay to an invisible particle of unknown mass and one or more
visible particles of known mass. We also consider decays of these types for
pair-produced particles and in each case consider situations both with and
without initial state radiation. We prove that, in most (but not all) cases,
the endpoint of an appropriate transverse mass observable, considered as a
function of the unknown mass of the invisible particle, has a kink at the true
value of the invisible particle mass. The co-ordinates of the kink yield the
masses of the decaying particle and the invisible particle. We discuss the
prospects for implementing this method at the LHC.
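A toy numerical illustration of the kink (our own construction, not the paper's derivation): parents of mass 200 GeV decay to a massless visible particle and a 100 GeV invisible one, with the decay confined to the transverse plane and a random ISR-like transverse boost applied. The endpoint of the transverse mass, viewed as a function of the trial invisible mass chi, changes gradient at chi = 100, where it equals the parent mass.

```python
import numpy as np

rng = np.random.default_rng(2)

M_PARENT, M_INVIS = 200.0, 100.0   # illustrative masses in GeV

def simulate_events(n):
    """Parent -> massless visible + invisible(M_INVIS). For simplicity the
    decay is confined to the transverse plane and the parent receives a
    random ISR-like transverse boost along x."""
    p_star = (M_PARENT**2 - M_INVIS**2) / (2 * M_PARENT)
    e_inv = (M_PARENT**2 + M_INVIS**2) / (2 * M_PARENT)
    cos_t = rng.uniform(-1.0, 1.0, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    beta = rng.uniform(0.0, 0.9, n)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    pvis_x = gamma * (p_star * cos_t + beta * p_star)   # massless: E* = p*
    pvis_y = p_star * sin_t
    pinv_x = gamma * (-p_star * cos_t + beta * e_inv)
    pinv_y = -p_star * sin_t
    return pvis_x, pvis_y, pinv_x, pinv_y

def mt_endpoint(chi, ev):
    """Maximum over events of m_T evaluated with trial invisible mass chi
    (the invisible particle's true pT plays the role of the missing pT)."""
    pvx, pvy, pix, piy = ev
    et_vis = np.hypot(pvx, pvy)
    et_inv = np.sqrt(chi**2 + pix**2 + piy**2)
    mt_sq = chi**2 + 2.0 * (et_vis * et_inv - pvx * pix - pvy * piy)
    return float(np.sqrt(mt_sq.max()))

ev = simulate_events(50000)
for chi in (50.0, 100.0, 150.0):
    print(chi, round(mt_endpoint(chi, ev), 1))
# The endpoint curve is continuous but its gradient jumps at chi = M_INVIS,
# where the endpoint equals M_PARENT: the kink's co-ordinates give both masses.
```

The kink arises because different events dominate the endpoint envelope on either side of the true mass, so the maximum-over-events curve is steeper above chi = M_INVIS than below it.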
Transverse masses and kinematic constraints: from the boundary to the crease
We re-examine the kinematic variable m_T2 and its relatives in the light of
recent work by Cheng and Han. Their proof that m_T2 admits an equivalent, but
implicit, definition as the `boundary of the region of parent and daughter
masses that is kinematically consistent with the event hypothesis' is
far-reaching in its consequences. We generalize their result both to simpler
cases (m_T, the transverse mass) and to more complex cases (m_TGen). We further
note that it is possible to re-cast many existing and unpleasant proofs (e.g.
those relating to the existence or properties of "kink" and "crease" structures
in m_T2) into almost trivial forms by using the alternative definition. Not
only does this allow us to gain better understanding of those existing results,
but it also allows us to write down new (and more or less explicit) definitions
of (a) the variable that naturally generalizes m_T2 to the case in which the
parent or daughter particles are not identical, and (b) the inverses of m_T and
m_T2 -- which may be useful if daughter masses are known and bounds on parent
masses are required. We note the implications that these results may have for
future matrix-element likelihood techniques.
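For the simplest case, m_T, the equivalent definition is easy to exhibit in code. In this sketch (toy W-like numbers of our own choosing), the set of (parent mass, invisible mass) pairs kinematically consistent with an event is exactly {(M, chi) : m_T(chi) <= M}, so the m_T curve is the boundary of the allowed region.

```python
import numpy as np

def mt(m_vis, pvis, chi, ptmiss):
    """Transverse mass of a visible system plus an invisible particle of
    trial mass chi carrying the missing transverse momentum."""
    et_vis = np.sqrt(m_vis**2 + pvis @ pvis)
    et_inv = np.sqrt(chi**2 + ptmiss @ ptmiss)
    return np.sqrt(m_vis**2 + chi**2 + 2.0 * (et_vis * et_inv - pvis @ ptmiss))

def consistent(m_parent, chi, m_vis, pvis, ptmiss):
    """Cheng-Han style consistency: (m_parent, chi) is kinematically
    compatible with the event iff m_T(chi) <= m_parent, so the m_T curve
    is the boundary of the consistent-mass region."""
    return bool(mt(m_vis, pvis, chi, ptmiss) <= m_parent)

# W-like toy event: massless lepton with 40 GeV pT, balanced missing pT.
pvis = np.array([40.0, 0.0])
ptmiss = np.array([-40.0, 0.0])
print(mt(0.0, pvis, 0.0, ptmiss))                 # → 80.0
print(consistent(80.5, 0.0, 0.0, pvis, ptmiss))   # → True
print(consistent(79.5, 0.0, 0.0, pvis, ptmiss))   # → False
```

The same boundary picture generalizes to m_T2 and m_TGen, which is what makes the otherwise unpleasant kink and crease proofs nearly trivial.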
Constrained invariant mass distributions in cascade decays. The shape of the "m(qll)-threshold" and similar distributions
Considering a cascade of successive two-body decays in which each massive
particle in the chain radiates a massless particle, we determine for the first
time the shape of the distribution of the invariant mass of the three massless
particles for the sub-set of decays in which the invariant mass of the last two
particles in the chain is (optionally) constrained to lie inside an arbitrary
interval. An example of an experimentally important distribution of this kind
is the ``m(qll) threshold'' -- which is the distribution of the combined
invariant mass of the visible standard model particles radiated from the
hypothesised decay of a squark to the lightest neutralino via successive two
body decays, squark -> q chi_2^0 -> q l slepton -> q l l chi_1^0, in which the
experimenter additionally requires that the dilepton invariant mass be greater
than a chosen cut value. The location of the ``foot'' of this distribution is
often used to constrain sparticle mass scales. The new results presented here
permit the location of this foot to be better understood, as the shape of the
distribution is derived. The effects of varying the position of the cut(s) may
now be seen more easily.
Measuring masses of semi-invisibly decaying particle pairs produced at hadron colliders
We introduce a variable useful for measuring masses of particles pair
produced at hadron colliders, where each particle decays to one particle that
is directly observable and another particle whose existence can only be
inferred from missing transverse momenta. This variable is closely related to
the transverse mass variable commonly used for measuring the W boson mass at
hadron colliders, and like the transverse mass our variable extracts masses in
a reasonably model-independent way. Without considering either backgrounds or
measurement errors, we consider how our variable would perform in measuring
the mass of selectrons in an mSUGRA SUSY model at the LHC.
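The variable in question, m_T2, can be evaluated by brute force directly from its definition: the minimum, over all ways of splitting the missing transverse momentum between the two invisible particles, of the larger of the two transverse masses. The grid scan below is an illustrative sketch only (production codes use analytic or bisection minimisers), applied to a hand-built event whose m_T2 is known by symmetry.

```python
import numpy as np

def mt_sq(m_vis, pvx, pvy, chi, qx, qy):
    """Squared transverse mass for a visible system and a trial invisible
    transverse momentum (qx, qy) with trial invisible mass chi."""
    et_vis = np.sqrt(m_vis**2 + pvx**2 + pvy**2)
    et_inv = np.sqrt(chi**2 + qx**2 + qy**2)
    return m_vis**2 + chi**2 + 2.0 * (et_vis * et_inv - pvx * qx - pvy * qy)

def mt2(vis1, vis2, ptmiss, chi, n_grid=401, scale=500.0):
    """Brute-force m_T2: minimise the larger of the two transverse masses
    over every split of the missing transverse momentum between the two
    invisible particles. A grid scan keeps the definition transparent."""
    m1, p1x, p1y = vis1
    m2, p2x, p2y = vis2
    qx, qy = np.meshgrid(np.linspace(-scale, scale, n_grid),
                         np.linspace(-scale, scale, n_grid))
    side1 = mt_sq(m1, p1x, p1y, chi, qx, qy)
    side2 = mt_sq(m2, p2x, p2y, chi, ptmiss[0] - qx, ptmiss[1] - qy)
    return float(np.sqrt(np.maximum(side1, side2).min()))

# Hand-built event: two massless visible systems, each with 100 GeV of pT
# in the same direction, recoiling against 200 GeV of missing pT. By
# symmetry the optimum splits the missing pT equally between the two
# invisible particles, and m_T2 = 200 exactly.
vis1 = (0.0, 100.0, 0.0)   # (mass, px, py)
vis2 = (0.0, 100.0, 0.0)
print(mt2(vis1, vis2, np.array([-200.0, 0.0]), chi=0.0))   # → 200.0
```

Because m_T2 never exceeds the true parent mass for the correct trial invisible mass, the endpoint of its event-by-event distribution is what carries the mass information.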
mTGen: mass scale measurements in pair-production at colliders
We introduce a new kinematic event variable, m_TGen, which can provide
information relating to the mass scales of particles pair-produced at hadronic
and leptonic colliders. The variable is of particular use in events with a
large number of particles in the final state when some of those particles are
massive and not detected, such as may arise in R-parity-conserving
supersymmetry.
Improving estimates of the number of "fake" leptons and other mis-reconstructed objects in hadron collider events: BoB's your UNCLE
We consider current and alternative approaches to setting limits on new
physics signals having backgrounds from misidentified objects; for example jets
misidentified as leptons, b-jets or photons. Many ATLAS and CMS analyses have
used a heuristic matrix method for estimating the background contribution from
such sources. We demonstrate that the matrix method suffers from statistical
shortcomings that can adversely affect its ability to set robust limits. A
rigorous alternative method is discussed, and is seen to produce fake-rate
estimates and limits of better quality, but is found to be too costly to
use. Having investigated the nature of the approximations used to derive the
matrix method, we propose a third strategy that is seen to marry the speed of
the matrix method to the performance and physicality of the more rigorous
approach.
This is the final published version. It is available online from Springer in the Journal of High Energy Physics: http://link.springer.com/article/10.1007/JHEP11(2014)031
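The heuristic matrix method being critiqued reduces, in the single-lepton case, to inverting a 2x2 linear system relating loose and tight event counts to real and fake yields. A sketch with toy numbers of our own choosing:

```python
def matrix_method(n_loose, n_tight, eff_real, eff_fake):
    """Heuristic single-lepton matrix method.

    n_loose / n_tight are counts passing a loose / tight lepton selection;
    eff_real and eff_fake are the probabilities that a real or a fake
    loose lepton also passes the tight cut. Inverting
        n_tight = eff_real * n_real + eff_fake * n_fake
        n_loose = n_real + n_fake
    gives the expected number of fake leptons in the tight sample.
    """
    n_fake_loose = (eff_real * n_loose - n_tight) / (eff_real - eff_fake)
    return eff_fake * n_fake_loose

# Toy numbers (illustrative only): 1000 loose events, 830 tight, with
# real/fake tight efficiencies of 0.9 and 0.2.
print(matrix_method(1000, 830, 0.9, 0.2))   # ~20 fakes expected in tight
# The statistical fragility criticised above is visible here: whenever
# n_tight fluctuates above eff_real * n_loose, the estimate goes negative.
```

The unphysical negative estimates in the sparse-count regime are one of the shortcomings that motivate the more rigorous likelihood-based treatment described in the abstract.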
Biased bootstrap sampling for efficient two-sample testing
The so-called 'energy test' is a frequentist technique used in experimental particle physics to decide whether two samples are drawn from the same distribution. Its usage requires a good understanding of the distribution of the test statistic, T, under the null hypothesis. We propose a technique which allows the extreme tails of the T-distribution to be determined more efficiently than possible with present methods. This allows quick evaluation of (for example) 5-sigma confidence intervals that otherwise would have required prohibitively costly computation times or approximations to have been made. Furthermore, we comment on other ways that T computations could be sped up using established results from the statistics community. Beyond two-sample testing, the proposed biased bootstrap method may provide benefit anywhere extreme values are currently obtained with bootstrap sampling.
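As a point of reference, here is a sketch of a two-sample energy-test statistic T with a Gaussian kernel (one common choice) and a plain, unbiased permutation null, on toy one-dimensional samples of our own; the paper's actual contribution, the biased bootstrap for reaching the extreme tail, is not reproduced here.

```python
import numpy as np

def energy_T(a, b, sigma=1.0):
    """Two-sample energy-test statistic with a Gaussian kernel
    psi(d) = exp(-d^2 / (2 sigma^2)); larger T means less compatible."""
    def kernel_sum(x, y):
        d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        return float(np.exp(-d2 / (2.0 * sigma**2)).sum())
    n, m = len(a), len(b)
    t_aa = (kernel_sum(a, a) - n) / 2.0   # drop self-pairs, count i<j once
    t_bb = (kernel_sum(b, b) - m) / 2.0
    t_ab = kernel_sum(a, b)
    return t_aa / n**2 + t_bb / m**2 - t_ab / (n * m)

def permutation_pvalue(a, b, n_perm=500, seed=0):
    """Plain permutation estimate of P(T >= T_obs) under the null; the
    paper's biased bootstrap targets the extreme tail of this same
    distribution far more efficiently."""
    rng = np.random.default_rng(seed)
    t_obs = energy_T(a, b)
    pooled = np.vstack([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        t = energy_T(pooled[perm[:len(a)]], pooled[perm[len(a):]])
        count += t >= t_obs
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, size=(100, 1))
b_same = rng.normal(0.0, 1.0, size=(100, 1))
b_shift = rng.normal(2.0, 1.0, size=(100, 1))
print(permutation_pvalue(a, b_shift))   # small: distributions differ
print(permutation_pvalue(a, b_same))    # large: samples are compatible
```

The limitation the paper addresses is visible in the structure of this sketch: with n_perm permutations the smallest reachable p-value is about 1/n_perm, so probing a 5-sigma tail this way is prohibitively costly.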
Determining SUSY model parameters and masses at the LHC using cross-sections, kinematic edges and other observables.
We address the problem of mass measurements of supersymmetric particles at
the Large Hadron Collider, using the ATLAS detector as an example. By using
Markov Chain sampling techniques to combine standard measurements of kinematic
edges in the invariant mass distributions of decay products with a measurement
of a missing cross-section, we show that the precision of mass
measurements at the LHC can be dramatically improved, even when we do not
assume that we have measured the kinematic endpoints precisely, or that we have
identified exactly which particles are involved in the decay chain causing the
endpoints. The generality of the technique is demonstrated in a preliminary
investigation of a non-universal SUGRA model, in which we relax the
requirements of mSUGRA by breaking the degeneracy of the GUT scale gaugino
masses. The model studied is compatible with the WMAP limits on dark matter
relic density.
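The gain from folding an extra observable into the fit can be seen in a one-parameter toy (entirely hypothetical numbers, not the paper's model): two linear ``edge'' observables plus a cross-section-like rate, combined in a Gaussian chi^2, give a visibly narrower Delta chi^2 <= 1 interval than the edges alone.

```python
import numpy as np

# Hypothetical toy model: a single mass scale m predicts two kinematic
# edge positions and a falling cross-section-like rate.
def predictions(m):
    return np.array([0.8 * m, 0.5 * m, 5000.0 / m**2])

measured = np.array([80.0, 50.0, 0.5])   # two edges (GeV) and a rate
errors = np.array([5.0, 5.0, 0.05])      # invented uncertainties

def chi2(m, use_xsec=True):
    resid = (predictions(m) - measured) / errors
    if not use_xsec:
        resid = resid[:2]
    return float(np.sum(resid**2))

# Width of the Delta chi^2 <= 1 region, without and with the rate:
grid = np.linspace(90.0, 110.0, 2001)
for use_xsec in (False, True):
    vals = np.array([chi2(m, use_xsec) for m in grid])
    allowed = grid[vals <= vals.min() + 1.0]
    print(use_xsec, round(allowed[-1] - allowed[0], 2))
```

In the paper the same combination is explored with Markov Chain sampling over the full model parameter space rather than a one-dimensional scan, but the mechanism by which the cross-section tightens the mass constraint is the same.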
- …