Boosted Decision Trees and Applications

Decision trees are a machine learning technique increasingly used in high energy physics, while they have long been widely used in the social sciences. After introducing the concepts of decision trees, this article focuses on their application in particle physics.


Introduction
Decision trees are a machine learning technique increasingly used in high energy physics, while they have long been widely used in the social sciences. The technique was first developed in the context of data mining and pattern recognition, and gained momentum in various fields, including medical diagnosis, insurance and loan screening, and optical character recognition (OCR) of handwritten text.
It was developed and formalised by Breiman et al. [1], who proposed the CART algorithm (Classification And Regression Trees) with a complete and functional implementation of decision trees.
The basic principle is rather simple: it consists in extending a simple cut-based analysis into a multivariate technique by continuing to analyse events that fail a particular criterion. Many, if not most, events do not have all the characteristics of either signal or background. If that were the case, then an analysis with a few criteria would make it easy to extract the signal. The concept of a decision tree is therefore not to reject right away events that fail a criterion, but instead to check whether other criteria may help to classify these events properly.
In principle a decision tree can deal with multiple output classes, each branch splitting into many subbranches. In these proceedings only binary trees will be considered, with only two possible classes: signal and background.
Section 2 describes how a decision tree is constructed and what parameters can influence its development. Section 3 provides some insight into the intrinsic limitations of a decision tree and how to address some of them. One possible extension of decision trees, boosting, is introduced in Section 4, and other techniques trying to reach the same goal as boosting are presented in Section 5. Conclusions are summarised in Section 6 and references to available decision tree software are given in Section 7.
While starting with this powerful multivariate technique, it is important to remember that before applying it to real data, it is crucial to have a good understanding of the data and of the model used to describe them. Any discrepancy between the data and the model will provide an artificial separation that the decision trees will use, misleading the analyser. The hard part (and interest) of the analysis is in building the proper model, not in extracting the signal. But once this is properly done, decision trees provide a very powerful tool to increase the significance of any analysis.

Growing a tree

Starting from a root node containing all training events, a tree is grown by recursively applying the following algorithm to each node:
1. If the node satisfies any stopping criterion, declare it as terminal (that is, a leaf) and exit the algorithm.
2. Sort all events according to each variable in x.
3. For each variable, find the splitting value that gives the best separation between two children, one with mostly signal events, the other with mostly background events (see Section 2.4 for details). If the separation cannot be improved by any splitting, turn the node into a leaf and exit the algorithm.
4. Select the variable and splitting value leading to the best separation and split the node into two new nodes (branches), one containing events that fail the criterion and one with events that satisfy it.
5. Apply recursively from step 1 on each node.
At each node, all variables can be considered, even if they have been used in a previous iteration: this makes it possible to find intervals of interest in a particular variable, instead of limiting oneself to using each variable only once.
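The recursion above can be sketched in a few lines of Python. This is a minimal illustration, not taken from any decision tree package: the names (grow_tree, best_split, gini) are invented, events are plain tuples of variables, labels are +1/-1, and a two-class Gini impurity is assumed as the separation measure.

```python
def gini(labels):
    """Two-class Gini index 2p(1-p) for a list of +1/-1 labels (unit weights)."""
    if not labels:
        return 0.0
    p = sum(1 for y in labels if y == +1) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_split(events, labels):
    """Scan every variable and candidate cut; return (var, cut, gain)."""
    parent, n = gini(labels), len(labels)
    best = (None, None, 0.0)
    for v in range(len(events[0])):
        # candidate cuts: midpoints between distinct sorted values of variable v
        values = sorted(set(e[v] for e in events))
        for lo, hi in zip(values, values[1:]):
            cut = 0.5 * (lo + hi)
            left = [y for e, y in zip(events, labels) if e[v] < cut]
            right = [y for e, y in zip(events, labels) if e[v] >= cut]
            # decrease of impurity: i(t) minus the weighted child impurities
            gain = parent - len(left) / n * gini(left) - len(right) / n * gini(right)
            if gain > best[2]:
                best = (v, cut, gain)
    return best

def purity(labels):
    return sum(1 for y in labels if y == +1) / len(labels)

def grow_tree(events, labels, min_leaf=2):
    # stopping criteria: too few events to split, or perfect separation
    if len(labels) < 2 * min_leaf or gini(labels) == 0.0:
        return {"purity": purity(labels)}
    var, cut, gain = best_split(events, labels)
    if var is None:  # no split improves the separation: make this node a leaf
        return {"purity": purity(labels)}
    passing = [(e, y) for e, y in zip(events, labels) if e[var] < cut]
    failing = [(e, y) for e, y in zip(events, labels) if not e[var] < cut]
    return {"var": var, "cut": cut,
            "pass": grow_tree([e for e, _ in passing], [y for _, y in passing], min_leaf),
            "fail": grow_tree([e for e, _ in failing], [y for _, y in failing], min_leaf)}
```

Note how every variable is rescanned at every node, so the same variable can be cut on again deeper in the tree.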
It should be noted that a decision tree is human readable: one can trace exactly which criteria an event satisfied in order to reach a particular leaf. It is therefore possible to interpret a tree in terms, e.g., of physics, defining selection rules, rather than only as a mathematical object.
In order to make the whole procedure clearer, let us take the tree in Fig. 1 as an example. Consider that all events are described by three variables: the transverse momentum p_T of the leading jet, the reconstructed top quark mass M_t, and the scalar sum H_T of the p_T's of all reconstructed objects in the event. All signal and background events make up the root node.
Following the above algorithm, one should first sort all events according to each variable. Using some measure of separation, one may find the best splitting value for each of the three variables, and conclude that the best overall split is H_T < 242 GeV; one would then create two new nodes, the left one with events failing this criterion and the right one with events satisfying it. One can now apply the same algorithm recursively on each of these new nodes. As an example, consider the right-hand-side node with events that satisfied H_T < 242 GeV. After sorting again all events in this node according to each of the three variables, it was found that the best criterion was M_t > 162 GeV, and events were split accordingly into two new nodes. This time the right-hand-side node satisfied one of the stopping conditions and was turned into a leaf. From the signal and background training events in this leaf, the purity was computed as p = 0.82 (see the next Section).

Decision tree output
The decision tree output for a particular event i is defined by how its variables x_i behave in the tree:
1. Starting from the root node, apply the first criterion on x_i.
2. Move to the passing or failing branch depending on the result of the test.
3. Apply the test associated to this node and move left or right in the tree depending on the result of the test.
4. Repeat step 3 until the event ends up in a leaf.
5. The decision tree output for event i is the value associated with this leaf.
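The steps above can be sketched as a short traversal function. The nested-dictionary tree format is purely illustrative (internal nodes carry "var", "cut" and "pass"/"fail" children; leaves carry their purity); it is not the format of any particular package.

```python
def tree_output(tree, x):
    """Decision tree output for one event x (a tuple of variable values)."""
    node = tree
    # repeat the node test until the event ends up in a leaf
    while "purity" not in node:
        node = node["pass"] if x[node["var"]] < node["cut"] else node["fail"]
    return node["purity"]
```

An event thus follows a single path from the root to exactly one leaf, and the leaf value is the output.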

02004-p.3 EPJ Web of Conferences
There are several conventions used for the value attached to a leaf. It can be the purity p = s/(s+b), where s (b) is the sum of weights of signal (background) events that ended up in this leaf during training. It is then bound to [0, 1], close to 1 for signal and close to 0 for background. It can also be a binary answer, signal or background (mathematically typically +1 for signal and 0 or −1 for background), depending on whether the purity is above or below a specified critical value (e.g. +1 if p > 1/2 and −1 otherwise). Looking again at the tree in Fig. 1, the leaf with purity p = 0.82 would give an output of 0.82, or +1 as signal if choosing a binary answer with a critical purity of 0.5.

Tree parameters
The number of parameters of a decision tree is relatively limited. The first one is not specific to decision trees and applies to most techniques requiring training: how to normalise signal and background before starting the training? Conventionally, the sums of weights of signal and background events are chosen to be equal, giving the root node a purity of 0.5, that is, an equal mix of signal and background.
Other parameters concern the selection of splits. One first needs a list of questions to ask, like "is variable x_i < cut_i?", requiring a list of discriminating variables and a way to evaluate the best separation between signal and background events (the goodness of the split). Both aspects are described in more detail in Sections 2.4 and 2.5.
The splitting has to stop at some point, declaring such nodes as terminal leaves. Conditions to satisfy can include:
• a minimum leaf size. A simple way is to require at least N_min training events in each node after splitting, in order to ensure statistical significance of the purity measurement, with a relative statistical uncertainty of order 1/√N_min. It becomes a little more complicated with weighted events, as is normally the case in high energy physics applications. One may then want to use the effective number of events instead, N_eff = (Σ_{i=1}^N w_i)² / Σ_{i=1}^N w_i², for a node with N events associated with weights w_i (N_eff = N for unweighted events). This ensures a proper statistical uncertainty.
• having reached perfect separation (all events in the node belong to the same class).
• an insufficient improvement with further splitting.
• a maximal tree depth. One can decide that a tree cannot have more than a certain number of layers (for purely computational reasons or to obtain trees of similar size).
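The effective-number-of-events formula in the minimum-leaf-size condition above is a one-liner; here is a small hedged sketch of it (the function name is invented):

```python
def n_effective(weights):
    """Effective number of events: (sum w_i)^2 / sum w_i^2."""
    s = sum(weights)
    s2 = sum(w * w for w in weights)
    return s * s / s2
```

For unit weights it reduces to N, while a few large weights among many small ones drive N_eff well below N, flagging a statistically fragile node.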
Finally a terminal leaf has to be assigned to a class.This is classically done by labelling the leaf as signal if p > 0.5 and background otherwise.

Splitting a node
The core of a decision tree algorithm resides in how a node is split into two. Consider an impurity measure i(t) for node t, which describes to what extent the node is a mix of signal and background. Desirable features of such a function are that it should be:
• maximal for an equal mix of signal and background (no separation).
• minimal for nodes with either only signal or only background events (perfect separation).

IN2P3 School Of Statistics, Autrans
• symmetric in signal and background purities, as isolating background is as valuable as isolating signal.
• strictly concave in order to reward purer nodes.This tends to favour end cuts with one smaller node and one larger node.
A figure of merit can be constructed with this impurity function, as the decrease of impurity for a split S of node t into two children t_P (pass) and t_F (fail): Δi(S, t) = i(t) − p_P · i(t_P) − p_F · i(t_F), where p_P (p_F) is the fraction of events that passed (failed) split S.
The goal is to find the split S* that maximises the decrease of impurity Δi(S, t). It will result in the smallest residual impurity, which minimises the overall tree impurity. Some decision tree applications use an alternative definition of the decrease of impurity [2], which may perform better for, e.g., particle identification. A stopping condition can be defined using the decrease of impurity: one may decide not to split a node if Δi(S*, t) is less than some predefined value. One should nonetheless always be careful when using such early-stopping criteria, as sometimes a seemingly very weak split may allow child nodes to be powerfully split further (see Section 3.1 about pruning).
For signal (background) events with weights w_i^s (w_i^b), the purity is defined as p = Σ_{i∈signal} w_i^s / (Σ_{i∈signal} w_i^s + Σ_{j∈background} w_j^b). Simplifying this expression, one can write the signal purity (or simply purity) as p_s = p = s/(s+b) and the background purity as p_b = b/(s+b) = 1 − p_s = 1 − p. Common impurity functions (exhibiting most of the desired features mentioned previously) are illustrated in Fig. 2:
• the misclassification error: 1 − max(p, 1 − p),
• the (cross) entropy [1]: −Σ_{i=s,b} p_i log p_i,
• the Gini index of diversity.
For a problem with any number of classes, the Gini index [3] is defined as Gini = Σ_{i≠j} p_i p_j, where i and j run over the classes. The statistical interpretation is that if one assigns a random object to class i with probability p_i, the probability that it is actually in class j is p_j, and the Gini index is the probability of misclassification.
In the case of two classes, signal and background, i = s and j = b, with p_s = p = 1 − p_b, and the Gini index becomes Gini = 2p(1 − p) = 2sb/(s+b)². The Gini index is the most popular in decision tree implementations. It typically leads to performance similar to the entropy.
Other measures are also used sometimes, which do not satisfy all criteria listed previously but attempt at optimising signal significance, usually relevant in high energy physics applications:
• cross section significance: −s²/(s+b),
• excess significance: −s²/b.
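The three common impurity functions listed above can be written out directly as functions of the node purity p (two-class case). A minimal sketch, not from any particular package:

```python
import math

def misclassification_error(p):
    return 1.0 - max(p, 1.0 - p)

def cross_entropy(p):
    # -sum_{i=s,b} p_i log p_i, with the convention 0 log 0 = 0
    return -sum(q * math.log(q) for q in (p, 1.0 - p) if q > 0.0)

def gini(p):
    # two-class Gini index: 2 p (1 - p) = 2sb/(s+b)^2
    return 2.0 * p * (1.0 - p)
```

All three peak at p = 0.5 (equal mix), vanish at p = 0 and p = 1 (perfect separation), and are symmetric under p ↔ 1 − p, as required of an impurity measure.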

Variable selection
As stated before, the data and model have to be in good agreement before starting to use a decision tree (or any other analysis technique).This means that all variables used should be well described.
Forgetting this prerequisite will jeopardise the analysis. Overall, decision trees are very resilient to most features associated with variables. They are not much affected by the "curse of dimensionality", which forbids the use of too many variables in most multivariate techniques: for decision trees the CPU consumption scales as nN log N with n variables and N training events. It is not uncommon to encounter decision trees using tens or even hundreds of variables.
A decision tree is immune to duplicate variables: the sorting of events according to each of them would be identical, leading to the exact same tree. The order in which variables are presented is completely irrelevant: all variables are treated equally. The order of events in the training sample is also irrelevant.
If variables are not discriminating, they will simply be ignored and will not add any noise to the decision tree. The final performance will not be affected; they will only come with some CPU overhead during both training and evaluation.
Decision trees can deal easily with both continuous and discrete variables, simultaneously. Another typical task before applying a multivariate technique is to transform the input variables, for instance making them fit in the same range or taking the logarithm to regularise a variable. This is totally unnecessary with decision trees, which are completely insensitive to the replacement of any subset of input variables by (possibly different) arbitrary strictly monotone functions of them. The explanation is trivial. Let f : x_i → f(x_i) be a strictly monotone function: if x > y then f(x) > f(y). Then any ordering of events by x_i is the same as by f(x_i), which means that any split on x_i will create the same separation as a split on f(x_i), producing the same decision tree. This also means that decision trees have some immunity against outliers.
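The invariance argument above is easy to check numerically: a strictly monotone transformation preserves the ordering of events, so the partition induced by any cut on x equals the partition induced by the transformed cut on f(x). A quick illustrative check (the data values are arbitrary):

```python
import math

x = [0.5, 3.0, 1.2, 8.0, 2.5]
fx = [math.log(v) for v in x]  # f = log, strictly monotone on positive values

# the ordering of events is identical under x and f(x)
order_x = sorted(range(len(x)), key=lambda i: x[i])
order_fx = sorted(range(len(fx)), key=lambda i: fx[i])
assert order_x == order_fx

# the partition induced by "x < 2.0" equals that of "log(x) < log(2.0)"
left_x = {i for i in range(len(x)) if x[i] < 2.0}
left_fx = {i for i in range(len(fx)) if fx[i] < math.log(2.0)}
assert left_x == left_fx
```

Since a decision tree only ever asks ordering questions, the grown tree is identical up to the relabelled cut values.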
Until now the splits considered have always answered questions of the form "Is x_i < c_i?", while it is also possible to make linear combinations of input variables and ask instead "Is Σ_i a_i x_i < c_i?", where a = (a_1, ..., a_n) is a set of coefficients such that ||a||² = Σ_i a_i² = 1. One would then choose the optimal split S*(a*) and the set of linear coefficients a* that maximise Δi(S(a), t). This is in practice rather tricky to implement and very CPU intensive. This approach is also powerful only if strong linear correlations exist between variables. If this is the case, a simpler approach would consist in first decorrelating the input variables and then feeding them to the decision tree. Even without this decorrelation, a decision tree will anyway find the correlations, but in a very suboptimal way, by successive approximations, adding complexity to the tree structure.
It is possible to rank variables in a decision tree.To rank variable x i one can add up the decrease of impurity for each node where variable x i was used to split.The variable with the largest decrease of impurity is the best variable.
There is nevertheless a shortcoming with variable ranking in a decision tree: variable masking. Variable x_j may be just a little worse than variable x_i and may end up never being picked in the decision tree growing process. Variable x_j would then be ranked as irrelevant, and one would conclude that it has no discriminating power; yet if x_i were removed, x_j would become very relevant.
There is a solution to this feature, called surrogate splits [1].For each split, one compares which training events pass or fail the optimal split to which events pass or fail a split on another variable.The split that mimics best the optimal split is called the surrogate split.One can then take this into consideration when ranking variables.This has applications in case of missing data: one can then replace the optimal split by the surrogate split.
All in all, variable rankings should never be taken at face value.They do provide valuable information but should not be over-interpreted.

Tree (in)stability
Despite all the nice features presented above, decision trees are known to be relatively unstable.If trees are too optimised for the training sample, they may not generalise very well to unknown events.This can be mitigated with pruning, described in Section 3.1.
A small change in the training sample can lead to drastically different tree structures, rendering the physics interpretation a bit less straightforward.For sufficiently large training samples, the performance of these different trees will be equivalent, but on small training samples variations can be very large.This doesn't give too much confidence in the result.
Moreover a decision tree output is by nature discrete, limited by the purities of all leaves in the tree.This means that to decrease the discontinuities one has to increase the tree size and complexity, which may not be desirable or even possible.Then the tendency is to have spikes in the output distribution at specific purity values, as illustrated in Fig. 3, or even two delta functions at ±1 if using a binary answer rather than the purity output.Section 3.2 describes how these shortcomings can for the most part be addressed by averaging.

Pruning a tree
When growing a tree, each node contains fewer and fewer events, leading to an increase of the statistical uncertainty on each new split. The tree will tend to become more and more specialised, focusing on the specific features of the training sample. A first approach to mitigate this effect, sometimes referred to as pre-pruning, has already been described in Section 2, using stopping conditions. They included requiring a minimum number of events in each node or a minimum amount of separation improvement. The limitation is that requiring too big a minimum leaf size or too much improvement may prevent further splitting that could be very beneficial.
Another approach consists in building a very large tree and then cutting irrelevant branches by turning an internal node and all its descendants into a leaf, removing the corresponding subtree.This is post-pruning, or simply pruning.
Why would one wish to prune a decision tree? It is possible to get a perfect classifier on the training sample: mathematically, the misclassification rate can be made as small as one wants on training events. For instance, one can build a tree such that each leaf contains only one class of events (down to one event per leaf if necessary). The training error is then zero. But when passing events through the tree that were not seen during training, the misclassification rate will most likely be non-zero, a sign of overtraining. Pruning helps avoid such effects by eliminating subtrees (branches) that are deemed too specific to the training sample.
There are many different pruning algorithms available. Only three of them, among the most commonly used, are briefly presented here.
Expected error pruning was introduced by Quinlan [4] in the ID3 algorithm. A full tree is first grown. One then computes the approximate expected error for a node, using the Laplace error estimate E = (n_e + n_c − 1)/(N + n_c), where n_e is the number of misclassified events out of the N events in the node and n_c is the number of classes (2 for binary trees), and compares it to the weighted sum of expected errors from its children. If the expected error of the node is less than that of the children, then the node is pruned. This algorithm is fast and does not require a separate sample of events for the pruning, but it is also known to be too aggressive: it tends to prune large subtrees containing branches with good separation power.
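The expected-error pruning test just described fits in a few lines. A hedged sketch (the function names are invented; the children are summarised by their event and error counts):

```python
def laplace_error(n_misclassified, n_events, n_classes=2):
    """Laplace error estimate E = (n_e + n_c - 1) / (N + n_c)."""
    return (n_misclassified + n_classes - 1) / (n_events + n_classes)

def should_prune(node_err, node_n, children):
    """children: list of (n_misclassified, n_events) pairs, one per child."""
    expected_node = laplace_error(node_err, node_n)
    # weighted sum of the children's expected errors
    expected_children = sum(n / node_n * laplace_error(e, n) for e, n in children)
    return expected_node <= expected_children
```

For example, a node with 1 misclassified event out of 10 has E = 2/12 ≈ 0.17; if its children's weighted expected errors sum to more than that, the split is not worth keeping and the node is turned into a leaf.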
Reduced error pruning was also introduced by Quinlan [4], and requires a separate pruning sample. Starting from the terminal leaves, the misclassification rate on the pruning sample for the full tree is compared to the misclassification rate when a node is turned into a leaf. If the simplified tree has better performance, the subtree is pruned. This operation is repeated until further pruning increases the misclassification rate.

Cost-complexity pruning is part of the CART algorithm [1].The idea is to penalise complex trees (with many nodes and/or leaves) and to find a compromise between a good fit to training data (requiring a larger tree) and good generalisation properties (usually better achieved with a smaller tree).
Consider a fully grown decision tree, T_max. For any subtree T (with N_T nodes) of T_max with a misclassification rate R(T), one can define the cost complexity R_α(T) = R(T) + α N_T, where α is a complexity parameter. When trying to minimise R_α(T), a small α favours T_max (no cost for complexity) while a large α keeps only the root node, T_max being fully pruned. The optimally pruned tree is somewhere in between.
In a first pass, for terminal nodes t_P and t_F emerging from the split of node t, by construction R(t) ≥ R(t_P) + R(t_F). If these quantities are equal, one can prune off t_P and t_F. Now, for a node t and its subtree T_t, by construction R(t) > R(T_t) if t is non-terminal. The cost complexity of the single node {t} is R_α({t}) = R(t) + α. As long as R_α(T_t) < R_α({t}), the branch T_t has smaller cost-complexity than the single node {t} and should be kept. But for a critical value α = ρ_t, obtained by solving R_{ρ_t}(T_t) = R_{ρ_t}({t}), that is ρ_t = (R(t) − R(T_t)) / (N_{T_t} − 1), pruning the tree and making t a leaf becomes preferable. The node with the smallest ρ_t is the weakest link and gets pruned. The algorithm is applied recursively on the pruned tree until it is completely pruned, leaving only the root node.
This generates a sequence of decreasing cost-complexity subtrees.For each of them (from T max to the root node), one then computes their misclassification rate on the validation sample.It will first decrease, and then go through a minimum before increasing again.The optimally pruned tree is the one corresponding to the minimum.
It should be noted that the best pruned tree may not be optimal when part of a forest of trees, such as those introduced in the next Sections.

Averaging several trees
Pruning was shown to be helpful in maximising the generalisation potential of a single decision tree. It nevertheless doesn't address other shortcomings of trees, like the discrete output or the sensitivity of the tree structure to the training sample composition. A way out is to average several trees, with the added potential bonus that the discriminating power may increase, as briefly described in the 2008 proceedings of this school [5].
Such a principle was introduced from the beginning with the so-called V-fold cross-validation [1], a useful technique for small samples. After dividing a training sample L into V subsets of equal size, L = ∪_{v=1..V} L_v, one can train a tree T_v on the L − L_v sample and test it on L_v. This produces V decision trees, whose outputs are combined into a final discriminant, typically the average D(x) = (1/V) Σ_{v=1}^V T_v(x). Following this simple approach, many other averaging techniques have been developed, after realising the enormous advantage it provided. Bagging, boosting and random forests are such techniques and will be described in the following Sections.
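The V-fold construction can be sketched generically, with the classifier training step abstracted away (the function name and the `train` callback are placeholders, not from any library):

```python
def v_fold_discriminant(sample, v_folds, train):
    """Split `sample` into V folds, train one classifier per complement
    L - L_v via `train`, and return a function averaging their outputs."""
    folds = [sample[v::v_folds] for v in range(v_folds)]
    classifiers = []
    for v in range(v_folds):
        rest = [event for u, fold in enumerate(folds) if u != v for event in fold]
        classifiers.append(train(rest))  # T_v trained on L - L_v
    return lambda x: sum(t(x) for t in classifiers) / v_folds
```

Each event of L is held out of exactly one training set, so every classifier can be tested on its own held-out fold L_v before the outputs are averaged.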

Boosting
As will be shown in this section, boosting has turned into a very successful way of improving the performance of any type of classifier, not only decision trees. After a short history of boosting in Section 4.1, the generic algorithm is presented in Section 4.2 and a specific implementation (AdaBoost) is described in Section 4.3. Boosting is illustrated with a few examples in Section 4.4.
Finally other examples of boosting implementations are given in Section 4.5.

A brief history of boosting
Boosting appeared quite recently. The first provable boosting algorithm was proposed by Schapire in 1990 [6]. It worked in the following way:
• train a classifier T_1 on a sample of N events;
• train T_2 on a new sample with N events, half of which were misclassified by T_1;
• build T_3 on events where T_1 and T_2 disagree.
The boosted classifier was defined as a majority vote on the outputs of T_1, T_2 and T_3.
In 1995, Freund followed up on this idea [7], introducing boosting by majority. It consisted in combining many learners with a fixed error rate. This was an impractical prerequisite for a viable automated algorithm, but it was a stepping stone to the proposal by Freund and Schapire of the first functional boosting algorithm, called AdaBoost [8].
Boosting, and in particular boosted decision trees, has become increasingly popular in high energy physics. The MiniBooNE experiment first compared the performance of different boosting algorithms and artificial neural networks for particle identification [9]. The D0 experiment was the first to use boosted decision trees for a search, which led to the first evidence (and then observation) of single top quark production [10].

Boosting algorithm
Boosting is a general technique which is not limited to decision trees, although it is often used with them. It can apply to any classifier, e.g., neural networks. It is hard to make a very good discriminant, but it is relatively easy to make simple ones which are certainly more error-prone but still perform at least marginally better than random guessing. Such discriminants are called weak classifiers [5]. The goal of boosting is to combine such weak classifiers into a new, more stable one, with a smaller error rate and better performance.
Consider a training sample T_k containing N_k events, where the i-th event is associated with a weight w_i^k, a vector of discriminating variables x_i and a class label y_i = +1 for signal, −1 for background. The pseudocode for a generic boosting algorithm is:

  initialise the original training sample T_1
  for k = 1 to N_tree:
      train classifier T_k on sample T_k
      assign a weight α_k to classifier T_k
      modify the event weights to create the next sample T_{k+1}

The boosted output is some function F(T_1, ..., T_{N_tree}), typically a weighted average F(i) = Σ_{k=1}^{N_tree} α_k T_k(i). Thanks to this averaging, the output becomes quasi-continuous, mitigating one of the limitations of single decision trees.

AdaBoost
One particularly successful implementation of the boosting algorithm is AdaBoost, introduced by Freund and Schapire [8]. AdaBoost stands for adaptive boosting, referring to the fact that the learning procedure adjusts itself to the training data in order to classify it better. There are many variations on the same theme for the actual implementation, and it is the most common boosting algorithm. It typically leads to better results than without boosting, up to the Bayes limit, as will be seen later.
An actual implementation of the AdaBoost algorithm works as follows. After having built tree T_k, one should check which events in the training sample T_k are misclassified by T_k, hence defining the misclassification rate R(T_k). In order to ease the math, define I : X → I(X) such that I(X) = 1 if X is true and 0 otherwise. One can now define a function that tells whether an event is misclassified by T_k. In the decision tree output convention of returning only {±1} it gives isMisclassified_k(i) = I(y_i × T_k(i) ≤ 0), while in the purity output convention (with a critical purity of 0.5) it leads to isMisclassified_k(i) = I(y_i × (T_k(i) − 0.5) ≤ 0). The misclassification rate is now ε_k = R(T_k) = Σ_{i=1}^{N_k} w_i^k isMisclassified_k(i) / Σ_{i=1}^{N_k} w_i^k. This misclassification rate can be used to derive a weight associated to tree T_k: α_k = β × ln((1 − ε_k)/ε_k), where β is a free boosting parameter adjusting the strength of boosting (it was set to 1 in the original algorithm). The core of the AdaBoost algorithm resides in the following step: each event in T_k has its weight changed in order to create a new sample T_{k+1} such that w_i^k → w_i^{k+1} = w_i^k × e^{α_k × isMisclassified_k(i)}. This means that properly classified events are unchanged from T_k to T_{k+1}, while misclassified events see their weight increased by a factor e^{α_k}. The next tree T_{k+1} is then trained on the T_{k+1} sample. This next tree will therefore see a different sample composition with more weight on misclassified events, and will try harder to classify properly the difficult events that tree T_k failed to identify correctly. The final AdaBoost result for event i is T(i) = (1/Σ_{k=1}^{N_tree} α_k) × Σ_{k=1}^{N_tree} α_k T_k(i).
As an example, assume for simplicity the case β = 1. A not-so-good classifier with a misclassification rate ε = 40% would have a corresponding α = ln(0.6/0.4) ≈ 0.4. All misclassified events would therefore get their weight multiplied by e^0.4 ≈ 1.5, and the next tree will have to work a bit harder on these events. Now consider a good classifier with an error rate ε = 5% and α = ln(0.95/0.05) ≈ 2.9. Now misclassified events get a boost of e^2.9 ≈ 19 and will contribute decisively to the structure of the next tree! This shows that being failed by a good classifier brings a big penalty: it must be a difficult case, so the next tree will have to pay much more attention to this event and try to get it right.
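The weight-update loop just described can be sketched from scratch. This is an illustrative toy, not the proceedings' implementation: the weak classifiers are one-variable threshold "stumps" on a single feature, labels are in {−1, +1}, β = 1, and all names are invented.

```python
import math

def train_stump(xs, ys, ws):
    """Pick the threshold and sign with the smallest weighted error."""
    best = None
    for cut in sorted(set(xs)):
        for sign in (+1, -1):
            err = sum(w for x, y, w in zip(xs, ys, ws)
                      if (sign if x >= cut else -sign) != y)
            if best is None or err < best[0]:
                best = (err, cut, sign)
    _, cut, sign = best
    return lambda x: sign if x >= cut else -sign

def adaboost(xs, ys, n_trees=5, beta=1.0):
    ws = [1.0 / len(xs)] * len(xs)        # start from uniform event weights
    stumps, alphas = [], []
    for _ in range(n_trees):
        stump = train_stump(xs, ys, ws)
        # weighted misclassification rate eps_k of this weak classifier
        eps = sum(w for x, y, w in zip(xs, ys, ws) if stump(x) != y)
        eps = min(max(eps, 1e-10), 1.0 - 1e-10)      # guard against 0 or 1
        alpha = beta * math.log((1.0 - eps) / eps)   # tree weight alpha_k
        # boost misclassified events by e^alpha, then renormalise the sample
        ws = [w * math.exp(alpha) if stump(x) != y else w
              for x, y, w in zip(xs, ys, ws)]
        total = sum(ws)
        ws = [w / total for w in ws]
        stumps.append(stump)
        alphas.append(alpha)
    norm = sum(alphas)
    # final boosted output: alpha-weighted average of the weak classifiers
    return lambda x: sum(a * s(x) for a, s in zip(alphas, stumps)) / norm
```

Each pass reweights exactly as in the formulas above, so later stumps concentrate on the events their predecessors got wrong.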
It can be shown [11] that the misclassification rate ε of the boosted result on the training sample is bounded from above: ε ≤ Π_{k=1}^{N_tree} 2√(ε_k(1 − ε_k)). If each tree has ε_k ≠ 0.5, that is to say, if it does better than random guessing, then the conclusion is quite remarkable: the error rate on the training sample falls to zero for a sufficiently large N_tree! A corollary is that the training data will be overfitted.
Overtraining is usually regarded as a negative feature. Does this mean that boosted decision trees are doomed because they are too powerful on the training sample? Not really. What matters most is not the error rate on the training sample, but rather the error rate on a testing sample. This may well decrease at first, reach a minimum and increase again as the number of trees increases. In such a case one should stop boosting when this minimum is reached. It has been observed that quite often boosted decision trees do not go through such a minimum, but rather tend towards a plateau in testing error. One can then decide to stop boosting after having reached this plateau.
In a typical high energy physics problem, the error rate may not even be what one wants to optimise. A good figure of merit on the testing sample would rather be the significance. Figure 4 (top left) illustrates this behaviour, showing how the significance saturates with an increasing number of boosting cycles. One could argue that one should stop earlier and save resources, but at least the performance does not deteriorate with increased boosting.
Another typical curve one tries to optimise is signal efficiency vs. background efficiency. Figure 4 (top right) clearly exemplifies an interesting property of boosted decision trees. The performance is clearly better on the training sample than on the testing sample (the training curves get very close to the upper left corner of perfect separation), with a single tree or with boosting, a clear sign of overtraining. But the boosted trees still perform better than the single tree on the testing sample, proof that they do not suffer from this overtraining.
People have wondered why boosting leads to such features, with typically no loss of generalisation performance due to overtraining. No clear explanation has emerged yet, but some ideas have come up. It may have to do with the fact that during the boosting sequence the first tree is the best, while the others are successive minor corrections, which are given smaller weights. This is shown at the bottom of Fig. 4, where the misclassification rate of each new tree taken separately is actually increasing, while the corresponding tree weight is decreasing. This is no surprise: during boosting the successive trees specialise on specific event categories, and therefore cannot perform as well on other events. So the trees that lead to a perfect fit of the training data contribute very little to the final boosted decision tree output on the testing sample. When boosting decision trees, the last tree is not an evolution of the first one that performs better, quite the contrary: the first tree is typically the best, while the others bring dedicated help for misclassified events. The power of boosting does not lie in the last tree of the sequence, but rather in combining a suite of trees that focus on different events.
Finally, a probabilistic interpretation of AdaBoost was proposed [12], which gives some insight into the performance of boosted decision trees. It can be shown that a sufficiently flexible boosted output T(x) tends towards half the log-likelihood ratio, (1/2) ln( p(y = +1 | x) / p(y = −1 | x) ), in which one recognises the Bayes discriminant [13]. This means that the AdaBoost algorithm will tend towards the Bayes limit, the maximal separation one can hope to reach.

Boosting practical examples
The examples of this section were produced using the TMVA package [14].

A simple 2D example
This example starts from code provided by G. Cowan [15]. Consider a system described by two variables, x and y, as shown in Fig. 5.
Building a first tree leads to the result shown in Fig. 6 (top), as a decision tree (left) and with criteria applied in the x : y plane (right). This single decision tree is already performing quite well. Its output is either +1 (signal) or −1 (background), so the only way to use the output is to keep testing candidates classified as +1 (or −1). After applying boosting, the separation between signal and background can be improved further. The boosted decision tree output is shown in Fig. 6 (bottom left). The bottom right plot in this figure shows the background rejection vs. signal efficiency curves for the first decision tree, for the boosted decision trees and for a Fisher discriminant analysis (for comparison), all run on the same testing events. The boosted decision trees perform best, with more freedom than on a single tree to choose a working point (either a signal efficiency or a background rejection).
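A rough equivalent of this comparison can be written with scikit-learn (a stand-in for the TMVA setup used here; the correlated-Gaussian toy data below are hypothetical):

```python
# Sketch with scikit-learn (stand-in for TMVA): a single tree, boosted
# trees and a Fisher-like linear discriminant on hypothetical
# correlated-Gaussian data, compared on an independent test sample.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(1)
n = 3000
sig = rng.multivariate_normal([1, 1], [[1, 0.5], [0.5, 1]], n)
bkg = rng.multivariate_normal([-1, -1], [[1, -0.5], [-0.5, 1]], n)
X = np.vstack([sig, bkg])
y = np.hstack([np.ones(n), np.zeros(n)])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=2)

tree = DecisionTreeClassifier(max_depth=3).fit(Xtr, ytr)
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                         n_estimators=100, random_state=3).fit(Xtr, ytr)
fisher = LinearDiscriminantAnalysis().fit(Xtr, ytr)

# Area under the background rejection vs. signal efficiency curve:
auc_tree = roc_auc_score(yte, tree.predict_proba(Xte)[:, 1])
auc_bdt = roc_auc_score(yte, bdt.decision_function(Xte))
auc_fisher = roc_auc_score(yte, fisher.decision_function(Xte))
```

The boosted and Fisher outputs are quasi-continuous, so any working point along the efficiency/rejection curve can be chosen, unlike the single tree's few discrete output values.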

The XOR problem
Another way of showing how a decision tree can handle complicated inputs is the XOR problem, a small version of the checkerboard, illustrated in Fig. 7. With enough statistics (left column), even a single tree is already able to find more or less the optimal separation, so boosting cannot actually do much better. On the other hand, this type of correlation is a killer for a Fisher discriminant, which fares as badly as random guessing.
One can repeat the exercise, this time with limited statistics (right column in Fig. 7). Now a single tree is not doing such a good job anymore, and the Fisher discriminant is completely unreliable. Boosted decision trees, on the other hand, are doing almost as well as with full statistics, separating signal and background almost perfectly. This illustrates very clearly how the combination of weak classifiers (see for instance the lousy performance of the first tree) can generate a high performance discriminant with a boosting algorithm.

Circular correlation
This example is derived from a dataset generated with the create_circ macro from $ROOTSYS/tmva/test/createData.C. The 1D and 2D representations of the two variables used are shown in Fig. 8.
Several generic properties can be tested with this dataset: the impact of a longer boosting sequence (adding more and more trees), the meaningfulness of overtraining, the sensitivity to the splitting criterion and the difference between boosting many small trees or few big trees.
Figure 9 compares the performance of a Fisher discriminant (for reference), a single decision tree and boosted decision trees with an increasing number of trees (from 5 to 400). All other TMVA parameters are kept at their default values. One can see how a Fisher discriminant is not appropriate for non-linear correlation problems. The performance of the single tree is not so good, as expected since the default TMVA parameters make it very small, with a depth of 3 (it should be noted that a single bigger tree could solve this problem easily). Increasing the number of trees improves the performance until it saturates in the high background rejection and high signal efficiency corner. It should be noted that adding more trees does not seem to degrade the performance; the curve stays in the optimal corner. Looking at the contour plot one can however see that it wiggles a little for larger boosted decision trees, as they tend to pick up features of the training sample. This is overtraining.
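The saturation with the number of trees can be reproduced in a small sketch (scikit-learn assumed; the ring-shaped toy data below only mimic the create_circ dataset):

```python
# Test performance as a function of the number of boosted trees:
# it improves, then saturates rather than degrading.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.RandomState(6)

def circ_sample(n):
    # Crude hypothetical stand-in for create_circ: two overlapping rings.
    t = rng.uniform(0, 2 * np.pi, n)
    sig = np.c_[np.cos(t), np.sin(t)] * rng.normal(1.0, 0.15, n)[:, None]
    bkg = np.c_[np.cos(t), np.sin(t)] * rng.normal(1.5, 0.15, n)[:, None]
    X = np.vstack([sig, bkg])
    y = np.hstack([np.ones(n), np.zeros(n)])
    return X, y

Xtr, ytr = circ_sample(2000)
Xte, yte = circ_sample(2000)

bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                         n_estimators=400, random_state=7).fit(Xtr, ytr)
# Accuracy on the test (and training) sample after each boosting stage:
test_acc = list(bdt.staged_score(Xte, yte))
train_acc = list(bdt.staged_score(Xtr, ytr))
```

Typically `test_acc` rises quickly and then flattens, while `train_acc` keeps creeping upwards: the gap is the overtraining discussed above, with little or no cost in test performance.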
Another sign of overtraining appears in Fig. 10, showing the output of the various boosted decision trees for signal and background, both on the training and testing samples: larger boosted decision trees tend to show discrepancies between the two samples, as they adjust to peculiarities of the training sample that are not found in an independent testing sample. One can also see how the output acquires a "better" shape with more trees, becoming quasi-continuous, which allows one to cut at a precise efficiency or rejection.
Both figures do exhibit clear signs of overtraining, but is it really an issue? As mentioned before, what really matters in the end is the performance in data analysis. One way to evaluate this is to compute the significance s/√(s+b). It is shown as the green curve in Fig. 11 for the same boosted decision trees as shown in Fig. 10, with increasing number of trees. The best significance is actually obtained with the 400-tree boosted decision tree! To be fair, the performance is already very similar with 10 trees. Now, comparing the outputs, if one is interested in a smoother result, 10 trees might not be enough, but 50 would probably do, without the overhead of eight times more trees. Such a choice should in any case not be made based on overtraining statements, but rather on the final expected performance. This example can also be used to illustrate the performance of each tree in a boosting sequence. Figure 12 shows the rapid decrease of the weight α_k of each tree, while at the same time the corresponding misclassification rate ε_k of each individual tree increases rapidly towards just below 50%, that is, random guessing. It confirms that the best trees are the first ones, while the others are only minor corrections.
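The working-point choice can be made concrete with a small significance scan; the yields and efficiency curves below are purely hypothetical placeholders, not the Fig. 11 results:

```python
# Hypothetical significance scan: given signal/background efficiency
# curves vs. the classifier output cut, maximise s/sqrt(s+b).
import numpy as np

n_sig, n_bkg = 100.0, 10000.0         # hypothetical expected yields
cuts = np.linspace(-1, 1, 201)
# Toy monotone efficiency curves (illustration only, not a BDT output):
eff_sig = 1.0 / (1.0 + np.exp(5 * (cuts - 0.2)))
eff_bkg = 1.0 / (1.0 + np.exp(5 * (cuts + 0.4)))
s = n_sig * eff_sig
b = n_bkg * eff_bkg
significance = s / np.sqrt(s + b)
best_cut = cuts[np.argmax(significance)]
```

The optimum sits between the loosest cut (too much background) and the tightest cut (too little signal), which is exactly the trade-off plotted in Fig. 11.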
One can measure the impact of the choice of splitting criterion in Fig. 13. Results are shown for cross entropy, Gini index, Gini index with Laplace correction, misclassification error and s/√(s+b) as separation criterion, while all other TMVA parameters are left at their default values. The top left plot shows that they all have similar performance on this problem, even when zooming in on the high efficiency side. One slight exception is the use of significance, which is also the only measure that does not respect some of the criteria suggested in Section 2. The other plots show the signal efficiency, background efficiency, signal purity and significance s/√(s+b) as a function of the boosted decision tree output, which confirms that all criteria reach similar performance, while the tree optimised with significance actually performs slightly worse than the others.
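A reduced version of this comparison can be run with scikit-learn, which only exposes the Gini index and cross entropy (`'gini'` and `'entropy'`); TMVA's other measures have no direct equivalent there, and the circular toy data are hypothetical:

```python
# Compare splitting criteria for the weak learners of a boosted ensemble:
# on typical problems the resulting performance is very similar.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.RandomState(8)
n = 2000
X = rng.normal(0, 1, (2 * n, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.2).astype(int)  # circular boundary

scores = {}
for crit in ("gini", "entropy"):
    bdt = AdaBoostClassifier(
        DecisionTreeClassifier(max_depth=3, criterion=crit),
        n_estimators=50, random_state=9).fit(X[:n], y[:n])
    scores[crit] = bdt.score(X[n:], y[n:])
# Both criteria give very similar test accuracy on this problem.
```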
A final illustration concerns the impact of the size of each tree, in terms of number of leaves and size of each leaf, and their relation to the number of trees. In order to test this, the same create_circ macro was used, but to produce a much larger dataset such that statistics do not interfere with the test. All combinations of boosted decision trees with 20 or 400 trees, a minimum leaf size of 10 or 500 events and a maximum depth of 3 or 20 were tested, and results are shown in Fig. 14 as the decision contour and in terms of background rejection vs. signal efficiency curves. One can see an overall very comparable performance.
As is often the case for boosted decision trees, such optimisation of the number and size of trees, size of leaves or splitting criterion depends on the use case. To first approximation it is fair to say that almost any default will do, and optimising these aspects may not be worth the time required to test all configurations.

Other boosting algorithms
AdaBoost is but one of many boosting algorithms. It is also referred to as discrete AdaBoost to distinguish it from other AdaBoost flavours. The Real AdaBoost algorithm [12] defines each decision tree output as T_k(i) = (1/2) ln( p_k(i) / (1 − p_k(i)) ), where p_k(i) is the purity of the leaf on which event i falls. Events are reweighted as w_i → w_i e^{−y_i T_k(i)}, and the boosted result is T(i) = Σ_{k=1}^{N_tree} T_k(i). Gentle AdaBoost and LogitBoost (with a logistic function) [12] are other variations. ε-Boost, also called shrinkage [16], consists in reweighting misclassified events by a fixed factor e^{2ε} rather than the tree-dependent α_k factor of AdaBoost. ε-LogitBoost [9] reweights them with a logistic function e^{−y_i T_k(i)} / ( 1 + e^{−y_i T_k(i)} ). ε-HingeBoost [9] deals only with misclassified events. Finally one can cite the adaptive version of the "boost by majority" [7] algorithm, called BrownBoost [17]. It works in the limit where each boosting iteration makes an infinitesimally small contribution to the total result, modelling this limit with the differential equations that govern Brownian motion.
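The Real AdaBoost update described above can be sketched in a few lines (Python/scikit-learn illustration on a hypothetical Gaussian toy sample, not code from the references):

```python
# Real AdaBoost sketch: each tree outputs T_k(i) = 0.5*ln(p_k/(1-p_k))
# from its leaf purity, events are reweighted by exp(-y_i*T_k(i)),
# and the tree outputs are summed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(10)
n = 1000
# Hypothetical toy sample with labels y in {-1, +1}.
X = np.vstack([rng.normal(-0.7, 1.0, (n, 2)), rng.normal(0.7, 1.0, (n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])

w = np.ones(2 * n) / (2 * n)
T = np.zeros(2 * n)
for k in range(20):
    tree = DecisionTreeClassifier(max_depth=2).fit(X, y, sample_weight=w)
    # Leaf purity p_k(i), clipped away from 0 and 1 to keep the log finite.
    p = np.clip(tree.predict_proba(X)[:, 1], 1e-3, 1 - 1e-3)
    Tk = 0.5 * np.log(p / (1 - p))
    w *= np.exp(-y * Tk)
    w /= w.sum()
    T += Tk

acc = np.mean(np.sign(T) == y)   # training accuracy of the summed output
```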

Other averaging techniques
As mentioned in Section 3.2, the key to improving a single decision tree's performance and stability is averaging. Techniques other than boosting exist, some of which are briefly described below. As with boosting, the name of the game is to introduce statistical perturbations to randomise the training sample, hence increasing the predictive power of the ensemble of trees [5].
Bagging (Bootstrap AGGregatING) was proposed by Breiman [18]. It consists in training trees on different bootstrap samples drawn randomly with replacement from the training sample. Events that are not picked for the bootstrap sample form an "out of bag" validation sample. The bagged output is the simple average of all such trees. Random forests are bagging with an extra level of randomisation [19]: before splitting a node, only a random subset of input variables is considered. The fraction can vary for each split, for yet another level of randomisation.
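Both techniques are available off the shelf; the sketch below (scikit-learn assumed, Gaussian toy data hypothetical) shows bagging with its out-of-bag estimate and a random forest with per-split feature subsampling:

```python
# Bagging averages trees grown on bootstrap replicas; the random forest
# additionally draws a random feature subset at every split.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier

rng = np.random.RandomState(11)
n = 1500
X = np.vstack([rng.normal(-0.6, 1.0, (n, 4)), rng.normal(0.6, 1.0, (n, 4))])
y = np.hstack([np.zeros(n), np.ones(n)])

bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                        oob_score=True, random_state=12).fit(X, y)
forest = RandomForestClassifier(n_estimators=50, max_features="sqrt",
                                oob_score=True, random_state=13).fit(X, y)

# Events left out of each bootstrap sample give a built-in validation:
oob_bag, oob_forest = bag.oob_score_, forest.oob_score_
```

The out-of-bag scores provide a validation estimate without setting aside a separate testing sample, one of the practical attractions of bagging.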
Trimming is not exactly an averaging technique per se, but can be used in conjunction with another technique, in particular boosting, to speed up the training process. After some boosting cycles it is possible that very few events with very high weight make up most of the total training sample weight. One may then decide to ignore events with very small weights, hence introducing again some minor statistical perturbations while speeding up the training. ε-HingeBoost is such an algorithm. These techniques, as was the case for boosting, are actually not limited to decision trees: bagging applies to any training, random forests apply to any classifier where an extra level of randomisation is possible during the training, and trimming applies in particular to any boosting algorithm on any classifier. One could for instance build a boosted random forest with trimming.
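A minimal trimming step might look as follows (pure illustration; the exponential weight distribution and the 10%-of-average threshold are arbitrary choices, not from the references):

```python
# Trimming sketch: drop the lightest events while keeping almost all
# of the total training weight, to speed up subsequent boosting cycles.
import numpy as np

rng = np.random.RandomState(14)
w = rng.exponential(1.0, 10000)   # stand-in for boosted event weights
w /= w.sum()

threshold = 0.1 / len(w)          # 10% of the average event weight
keep = w > threshold
kept_weight = w[keep].sum()
n_dropped = (~keep).sum()
```

A sizeable number of events can be dropped while the surviving events still carry nearly all of the total weight, so the next training iteration is cheaper at little cost in information.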

Conclusion
Decision trees have many appealing features: they produce results (not black boxes) with possible interpretation by a physicist, and can deal easily with all sorts of variables and with many of them, with in the end relatively few parameters.
Decision trees are, however, not perfect, and suffer from the piecewise nature of their output and a high sensitivity to the content of the training sample. These shortcomings are for a large part addressed by averaging the results of several trees, each built after introducing some statistical perturbation in the training sample. Among the most popular such techniques, boosting (and its AdaBoost incarnation) was described in detail, providing ideas as to why it seems to perform so well while being very resilient against overtraining. Other averaging techniques were briefly described.
Boosted decision trees have now become quite fashionable in high energy physics. Following in the steps of MiniBooNE for particle identification and D0 for the first evidence and observation of single top quark production, other experiments and analyses are now using them, in particular at the LHC. This warrants a word of caution. Despite recent successes in several high profile results, boosted decision trees cannot be thought of as the best multivariate technique around. Most multivariate techniques will in principle tend towards the Bayes limit, the maximum achievable separation, given enough statistics, time and information. But in real life resources and knowledge are limited, and it is impossible to know a priori which method will work best on a particular problem. The only way is to test them. Figure 15 illustrates this situation for the single top quark production evidence at D0 [10]. In the end boosted decision trees performed only marginally better than Bayesian neural networks and the matrix elements technique, and all three analyses were very comparable, as shown by their power curves. The boosted decision tree analysis profited, however, from the fast turnaround of decision tree training to perform many valuable cross-checks.
Finally, it cannot be stated often enough that using multivariate techniques is only the very last step of an analysis and is meaningful if and only if a proper model has been built that describes the data very well in all the variables one wishes to feed into the analysis.

Software
Many implementations of decision trees exist on the market. Historical algorithms like CART [1], ID3 [20] and its evolution C4.5 [21] are available in many different computing languages. The original MiniBooNE [9] code is available at http://www-mhp.physics.lsa.umich.edu/∼roe/, as is the StatPatternRecognition [2] code at http://sourceforge.net/projects/statpatrec, and LHC experiments have various implementations in their software.
I would recommend a different approach: use an integrated solution able to handle decision trees as well as other techniques and flavours, allowing one to run several of them to find the best suited to a given problem. Weka is an open-source data mining package written in Java, with a very good published manual. It was not written for HEP but is very complete. Details can be found at http://www.cs.waikato.ac.nz/ml/weka/.
Another recent development, now popular in HEP, is TMVA [14]. It is integrated in recent ROOT releases, which makes it convenient to use, and comes with a complete manual. It is also available at http://tmva.sourceforge.net.

EPJ Web of Conferences. Owned by the authors, published by EDP Sciences, 2013.

Figure 1. Graphical representation of a decision tree. Blue ellipses and disks are internal nodes with their associated splitting criterion; green leaves are terminal nodes with purity p.

Figure 2. Various popular impurity measures as a function of signal purity.

Figure 3. Comparison of signal, backgrounds and data for one of the decision tree outputs in an old D0 single top quark search. The discrete output of a decision tree is clearly visible, but data are well reproduced by the model.

Figure 4. Behaviour of boosting. Top left: significance as a function of the number of boosted trees. Top right: signal efficiency vs. background efficiency for single and boosted decision trees, on the training and testing samples. Bottom left: misclassification rate of each tree as a function of the number of boosted trees. Bottom right: weight of each tree as a function of the number of boosted trees.

Figure 5. x : y correlation of input variables used to illustrate how decision trees work.Signal is in blue, background in red.

Figure 7. The XOR problem. Signal is in blue, background in red. The left column uses sufficient statistics, while the right column has a limited number of training events. The top plots show the signal and background distributions as well as the criteria used by the first decision tree. The middle left plot shows the first decision tree for the large statistics case. The middle right plot shows the boosted decision tree output for the limited statistics case. The bottom plots show the background rejection vs. signal efficiency curves for the first decision tree (red), for the boosted decision trees (black) and for a Fisher discriminant analysis (green), all run on the same testing events.

Figure 9. Circular correlation example. Left: background rejection vs. signal efficiency curves for a Fisher discriminant (black), a single decision tree (dark green) and boosted decision trees with an increasing number of trees (5 to 400). Right: decision contour corresponding to the previous discriminants.

Figure 10. Circular correlation example. Comparison of the output on training (markers) and testing (histograms) samples for boosted decision trees with 1, 5, 10, 50, 100 and 400 trees (from top left to bottom right).

Figure 11. Circular correlation example. Signal efficiency, background efficiency, signal purity and significance s/√(s+b) for boosted decision trees with 1, 5, 10, 50, 100 and 400 trees (from top left to bottom right).

Figure 12. Circular correlation example, with a 400-tree boosted decision tree. Left: boost weight of each tree. Right: error fraction of each tree.

Figure 13. Circular correlation example: impact of the choice of splitting function. Top left: background rejection vs. signal efficiency. From top middle to bottom left: signal efficiency, background efficiency, signal purity and significance s/√(s+b) for cross entropy, Gini index, Gini index with Laplace correction, misclassification error and s/√(s+b).

Figure 14. Circular correlation example with a larger dataset and boosted decision trees with 20 or 400 trees, a minimum leaf size of 10 or 500 events and a maximum depth of 3 or 20. Left: decision contour for each combination. Right: background rejection vs. signal efficiency for each combination.

Figure 15. Comparison of several analysis techniques used in the D0 search for single top quark production [10]. Left: signal vs. background efficiency curves for random guessing, a cut-based analysis, artificial neural networks, decision trees, Bayesian neural networks, the matrix elements technique and boosted decision trees. Right: power curves (p-value of the signal+background hypothesis vs. p-value of the background-only hypothesis) for the boosted decision trees, Bayesian neural networks and matrix elements.