TMVA is a tool to run a multivariate analysis on a ROOT tree. It is included in ROOT from release 5.34/11 onwards. More information can be found in:
When using the neural network method MLP, you might need a newer ROOT release (for example 5.34.26) to have a larger buffer for the XML reader, for example:
. /afs/cern.ch/sw/lcg/app/releases/ROOT/5.34.26/x86_64-slc6-gcc48-opt/root/bin/thisroot.sh
The parameters and options of the MVA method can be optimized from the default settings for better performance, see the reference page.
For the BDT, the important parameters are the learning rate, the number of boost steps and the maximal tree depth:
AdaBoostBeta=0.5 : learning rate, smaller (~0.1) is better, but takes longer
nTrees=800 : number of boost steps, too large mainly costs time and can cause overtraining
MaxDepth=3 : maximum tree depth, ~2-5 depending on interaction of the variables
nCuts=20 : grid points in variable range to find the optimal cut in node splitting
MinNodeSize=5% : minimum percentage of training events required in a leaf node
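As a sketch, the parameters above are passed to TMVA as a colon-separated option string when booking the method. The factory setup below is a hypothetical minimal skeleton (file, tree and variable names are placeholders, and the ROOT 5.34 Factory API without a DataLoader is assumed); it will only compile inside a ROOT installation with TMVA enabled.

```cpp
// bdt_book.C -- minimal booking sketch, assuming ROOT 5.34 with TMVA.
// File/tree/variable names are hypothetical placeholders.
#include "TFile.h"
#include "TMVA/Factory.h"
#include "TMVA/Types.h"

void bdt_book() {
   TFile* out = TFile::Open("TMVA_BDT.root", "RECREATE");
   TMVA::Factory factory("TMVAClassification", out,
                         "!V:AnalysisType=Classification");
   // ... AddVariable / AddSignalTree / AddBackgroundTree /
   //     PrepareTrainingAndTestTree calls go here ...

   // Book the BDT with the parameters discussed above:
   factory.BookMethod(TMVA::Types::kBDT, "BDT",
       "!H:!V:NTrees=800:MaxDepth=3:AdaBoostBeta=0.5:nCuts=20:MinNodeSize=5%");

   factory.TrainAllMethods();
   factory.TestAllMethods();
   factory.EvaluateAllMethods();
   out->Close();
}
```

The same option-string mechanism is used for every TMVA method, so tuning a parameter only means editing the string and retraining.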
Important MLP parameters to tune are the number of neurons in each hidden layer, the learning rate and the activation function.
HiddenLayers=N,N-1 : number of nodes in each hidden layer, for N input variables:
  N = one hidden layer with N nodes
  N,N = two hidden layers
  N+2,N = two hidden layers, with N+2 nodes in the first
LearningRate=0.02
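The MLP is booked the same way as the BDT, with these parameters in the option string. The line below is a sketch only (it assumes a TMVA::Factory named `factory` has already been set up as in a standard classification macro, and ROOT 5.34 TMVA; `NeuronType` selects the activation function mentioned above):

```cpp
// Sketch: booking the MLP with the tuning parameters discussed above.
// "N" in HiddenLayers expands to the number of input variables.
factory.BookMethod(TMVA::Types::kMLP, "MLP",
    "!H:!V:HiddenLayers=N,N-1:LearningRate=0.02:NeuronType=tanh");
```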