Classification Decision Forest

The decision forest classifier is a special case of the Decision Forest model.

Details

Given:

  • \(n\) feature vectors \(X = \{x_1 = (x_{11}, \ldots, x_{1p}), \ldots, x_n = (x_{n1}, \ldots, x_{np}) \}\) of size \(p\);

  • their non-negative sample weights \(w = (w_1, \ldots, w_n)\);

  • the vector of class labels \(y = (y_1, \ldots, y_n)\) that describes the class to which the feature vector \(x_i\) belongs, where \(y_i \in \{0, 1, \ldots, C-1\}\) and \(C\) is the number of classes.

The problem is to build a decision forest classifier.

Training Stage

The decision forest classifier follows the algorithmic framework of decision forest training with the Gini impurity metric as the impurity metric [Breiman84]. If sample weights are provided as input, the library uses a weighted version of the algorithm.

The Gini index is an impurity metric calculated as follows:

\[{I}_{Gini}\left(D\right)=1-\sum _{i=0}^{C-1}{p}_{i}^{2}\]

where

  • \(D\) is a set of observations that reach the node;

  • \(p_i\) is specified in the table below:

Decision Forest Classification: impurity calculations

  • Without sample weights: \(p_i\) is the observed fraction of observations that belong to class \(i\) in \(D\).

  • With sample weights: \(p_i\) is the observed weighted fraction of observations that belong to class \(i\) in \(D\):

    \[p_i = \frac{\sum_{d \in \{d \in D \mid y_d = i\}} w_d}{\sum_{d \in D} w_d}\]
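
To make the formulas above concrete, here is a small Python sketch (not part of the library) that computes the Gini index for the observations in a node, with and without sample weights; the function name and array-based interface are illustrative only.

```python
import numpy as np

def gini_index(labels, weights=None, n_classes=None):
    """Gini impurity I_Gini(D) = 1 - sum_i p_i^2 for the observations in a node.

    If weights are given, p_i is the weighted fraction of observations of
    class i; otherwise it is the plain observed fraction.
    """
    labels = np.asarray(labels)
    if n_classes is None:
        n_classes = int(labels.max()) + 1
    if weights is None:
        weights = np.ones(len(labels), dtype=float)
    weights = np.asarray(weights, dtype=float)

    # Weighted count of observations per class, then normalize to fractions p_i.
    class_weight = np.bincount(labels, weights=weights, minlength=n_classes)
    p = class_weight / class_weight.sum()
    return 1.0 - np.sum(p ** 2)

# Examples: a uniform three-class node, and a weighted two-class node.
print(gini_index([0, 1, 2, 0, 1, 2]))            # ~0.6667
print(gini_index([0, 0, 1], weights=[1, 1, 2]))  # 0.5
```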

Prediction Stage

Given a decision forest classifier and vectors \(x_1, \ldots, x_r\), the problem is to calculate the labels for those vectors. To solve the problem for each given query vector \(x_i\), the algorithm traverses each tree in the forest down to the leaf node that gives that tree's classification response. The forest then chooses the label \(y\) that receives the majority of votes from the trees in the forest.
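
A minimal sketch of this voting rule, assuming each trained tree is represented by a callable that maps a query vector to a class label (an illustrative stand-in, not the library's model representation):

```python
import numpy as np

def forest_predict(trees, x, n_classes):
    """Majority vote over the per-tree predictions for a single query vector x.

    `trees` is any sequence of callables, each mapping x to a class label in
    {0, ..., n_classes - 1}; here they stand in for traversing the trees of
    the trained forest down to a leaf.
    """
    votes = np.zeros(n_classes, dtype=int)
    for tree in trees:
        votes[tree(x)] += 1
    return int(np.argmax(votes))  # label with the majority of votes

# Toy usage: three "trees" that always return fixed labels.
trees = [lambda x: 1, lambda x: 1, lambda x: 0]
print(forest_predict(trees, x=[0.3, 0.7], n_classes=2))  # 1
```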

Out-of-bag Error

The decision forest classifier follows the algorithmic framework for calculating the decision forest out-of-bag (OOB) error. The out-of-bag predictions of all trees are aggregated, and the OOB error of the decision forest is calculated, as follows:

  • For each vector \(x_i\) in the dataset \(X\), predict its label \(\hat{y_i}\) as the label that receives the majority of votes from the trees that contain \(x_i\) in their OOB set.

  • Calculate the OOB error of the decision forest \(T\) as the average of misclassifications:

    \[OOB(T) = \frac{1}{|D^{\prime}|} \sum_{y_i \in D^{\prime}} I\{y_i \ne \hat{y_i}\}, \quad \text{where } D^{\prime} = \bigcup_{b=1}^{B} \overline{D_b}.\]
  • If the OOB error value per observation is required, calculate the prediction error for \(x_i\): \(OOB(x_i) = I\{y_i \ne \hat{y_i}\}\).
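
The two steps above can be sketched as follows; the per-tree prediction arrays and OOB masks are illustrative inputs, not the library's internal representation.

```python
import numpy as np

def oob_error(y, tree_predictions, oob_masks, n_classes):
    """Out-of-bag error of a forest, following the two steps above.

    y                -- true labels, shape (n,)
    tree_predictions -- tree_predictions[b][i] is tree b's label for x_i
    oob_masks        -- oob_masks[b][i] is True if x_i is out-of-bag for tree b
    """
    y = np.asarray(y)
    n = len(y)
    per_sample_error = np.full(n, np.nan)  # OOB(x_i); NaN if x_i is never out-of-bag

    for i in range(n):
        votes = np.zeros(n_classes, dtype=int)
        for pred_b, oob_b in zip(tree_predictions, oob_masks):
            if oob_b[i]:
                votes[pred_b[i]] += 1
        if votes.sum() == 0:
            continue  # x_i appears in every bootstrap sample, so it is not in D'
        y_hat = np.argmax(votes)              # majority vote over the OOB trees
        per_sample_error[i] = float(y_hat != y[i])

    in_d_prime = ~np.isnan(per_sample_error)
    return per_sample_error[in_d_prime].mean(), per_sample_error

# Toy usage with two trees and three observations.
err, per_obs = oob_error(
    y=[0, 1, 1],
    tree_predictions=[[0, 0, 1], [1, 1, 1]],
    oob_masks=[[True, True, False], [False, True, True]],
    n_classes=2,
)
print(err, per_obs)  # 0.333..., [0., 1., 0.]
```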

Variable Importance

The library computes the Mean Decrease Impurity (MDI) importance measure, also known as the Gini importance or Mean Decrease Gini, using the Gini index as the impurity metric.
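
A rough sketch of how MDI can be computed from per-split statistics is shown below; the tuple-based node representation is purely illustrative and does not reflect the library's model layout.

```python
import numpy as np

def mean_decrease_impurity(trees, n_features):
    """Mean Decrease Impurity (Gini importance) per feature.

    Each tree is given as a list of its split nodes; every split node is a
    tuple (feature_index, node_fraction, impurity_decrease), where
    node_fraction is the fraction of training samples reaching the node and
    impurity_decrease is the Gini impurity reduction produced by the split.
    """
    importance = np.zeros(n_features)
    for split_nodes in trees:
        tree_importance = np.zeros(n_features)
        for feature_index, node_fraction, impurity_decrease in split_nodes:
            tree_importance[feature_index] += node_fraction * impurity_decrease
        importance += tree_importance
    return importance / len(trees)  # average over the trees of the forest

# Toy usage: two trees, two features.
trees = [
    [(0, 1.0, 0.30), (1, 0.4, 0.10)],
    [(0, 1.0, 0.25)],
]
print(mean_decrease_impurity(trees, n_features=2))  # [0.275, 0.02]
```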

Usage of Training Alternative

To build a Decision Forest Classification model using methods of the Model Builder class of Decision Forest Classification, complete the following steps:

  • Create a Decision Forest Classification model builder using a constructor with the required number of classes and trees.

  • Create a decision tree and add nodes to it:

    • Use the createTree method with the required number of nodes in a tree and a label of the class for which the tree is created.

    • Use the addSplitNode and addLeafNode methods to add split and leaf nodes to the created tree. See the note below describing the decision tree structure.

    • After you add all nodes to the current tree, proceed to creating the next one in the same way.

  • Use the getModel method to get the trained Decision Forest Classification model after all trees have been created.

Note

Each tree consists of internal nodes (called non-leaf or split nodes) and external nodes (leaf nodes). Each split node denotes a feature test that is a Boolean expression, for example, f < featureValue or f = featureValue, where f is a feature and featureValue is a constant. The test type depends on the feature type: continuous, categorical, or ordinal. For more information on the test types, see Decision Tree.

The induced decision tree is a binary tree, meaning that each non-leaf node has exactly two branches: true and false. Each split node contains featureIndex, the index of the feature used for the feature test in this node, and featureValue, the constant for the Boolean expression in the test. Each leaf node contains a classLabel, the predicted class for this leaf. For more information on decision trees, see Decision Tree.

Add nodes to the created tree in accordance with the pre-calculated structure of the tree. Check that the leaf nodes do not have child nodes and that the split nodes have exactly two children each.
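
To make the note above concrete, the following sketch models such a binary tree with hypothetical SplitNode and LeafNode classes; these types are illustrative only and do not mirror the Model Builder API.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class LeafNode:
    class_label: int            # predicted class for this leaf

@dataclass
class SplitNode:
    feature_index: int          # index of the feature tested in this node
    feature_value: float        # constant in the Boolean test f < feature_value
    true_branch: Union["SplitNode", LeafNode]   # child taken when the test holds
    false_branch: Union["SplitNode", LeafNode]  # child taken when it does not

def predict(node, x):
    """Walk the tree down to a leaf and return its class label."""
    while isinstance(node, SplitNode):
        node = node.true_branch if x[node.feature_index] < node.feature_value else node.false_branch
    return node.class_label

# A 3-node tree: one split on feature 0 and two leaves.
tree = SplitNode(feature_index=0, feature_value=0.5,
                 true_branch=LeafNode(class_label=0),
                 false_branch=LeafNode(class_label=1))
print(predict(tree, [0.2]))  # 0 (the test 0.2 < 0.5 is true)
print(predict(tree, [0.9]))  # 1 (the test is false)
```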

Batch Processing

Decision forest classification follows the general workflow described in Decision Forest and Classification Usage Model.

Training

In addition to the parameters of a classifier (see Classification Usage Model) and decision forest parameters described in Batch Processing, the training algorithm for decision forest classification has the following parameters:

Training Parameters for Decision Forest Classification (Batch Processing)

  • algorithmFPType

    Default value: float

    The floating-point type that the algorithm uses for intermediate computations. Can be float or double.

  • method

    Default value: defaultDense

    The computation method used by decision forest classification.

    For CPU:

      • defaultDense - default performance-oriented method

      • hist - inexact histogram computation method

    For GPU:

      • hist - inexact histogram computation method

  • nClasses

    Default value: Not applicable

    The number of classes. A required parameter.
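
For reference, a minimal training and prediction call through the daal4py Python binding to oneDAL might look as follows; this assumes daal4py is installed, and the exact parameter names should be checked against the daal4py documentation, since they can differ from the C++ parameters listed above.

```python
import numpy as np
import daal4py as d4p  # assumes the daal4py Python binding is available

# Toy training data: 100 observations, 3 features, 5 classes.
rng = np.random.default_rng(0)
train_data = rng.random((100, 3))
train_labels = rng.integers(0, 5, size=(100, 1)).astype(np.float64)

# nClasses is required; the other parameters are left at their defaults here.
train_algo = d4p.decision_forest_classification_training(nClasses=5, nTrees=10)
train_result = train_algo.compute(train_data, train_labels)

# Prediction on new data with the trained model.
predict_algo = d4p.decision_forest_classification_prediction(nClasses=5)
predict_result = predict_algo.compute(rng.random((5, 3)), train_result.model)
print(predict_result.prediction)  # predicted class label for each query vector
```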

Output

Decision forest classification calculates the results of classification and decision forest. For more details, refer to Batch Processing and Classification Usage Model.

Prediction

For the description of the input and output, refer to Classification Usage Model.

In addition to the parameters of a classifier, decision forest classification has the following parameters at the prediction stage:

Prediction Parameters for Decision Forest Classification (Batch Processing)

  • algorithmFPType

    Default value: float

    The floating-point type that the algorithm uses for intermediate computations. Can be float or double.

  • method

    Default value: defaultDense

    The computation method used by decision forest classification. The only prediction method supported so far is the default dense method.

  • nClasses

    Default value: Not applicable

    The number of classes. A required parameter.

  • votingMethod

    Default value: weighted

    A flag that specifies which method is used to compute probabilities and class labels:

    • weighted:

      • The probability for each class is computed as the sample mean of estimates across all trees, where each estimate is the normalized number of training samples of this class that were recorded in the leaf node reached by the current input.

      • The algorithm returns the label of the class with the maximal sample-mean probability.

    • unweighted:

      • Probabilities are computed as the normalized distribution of votes across all trees of the forest.

      • The algorithm returns the label of the class that receives the majority of votes across all trees of the forest.
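
The difference between the two voting methods can be illustrated with the following sketch; the per-tree leaf class distributions are assumed inputs standing in for what the trained trees record in their leaves.

```python
import numpy as np

def predict_weighted(leaf_distributions):
    """weighted: average the normalized per-leaf class distributions over
    all trees and return the class with the largest mean probability."""
    probabilities = np.mean(leaf_distributions, axis=0)
    return int(np.argmax(probabilities)), probabilities

def predict_unweighted(leaf_distributions):
    """unweighted: each tree casts one vote for its own most likely class;
    the probabilities are the normalized vote counts over all trees."""
    votes = np.argmax(leaf_distributions, axis=1)
    counts = np.bincount(votes, minlength=np.shape(leaf_distributions)[1])
    return int(np.argmax(counts)), counts / counts.sum()

# Per-tree class distributions recorded in the leaf reached by one query vector
# (3 trees, 2 classes), already normalized per tree.
leaf_distributions = np.array([[0.9, 0.1],
                               [0.4, 0.6],
                               [0.4, 0.6]])
print(predict_weighted(leaf_distributions))    # (0, [0.5667, 0.4333])
print(predict_unweighted(leaf_distributions))  # (1, [0.3333, 0.6667])
```

Note that the two methods can disagree, as in the toy example above: the weighted mean favors class 0 because one tree is very confident, while the unweighted vote favors class 1 because two of the three trees predict it.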