Basic concepts for the MIL Classification module

The basic concepts and vocabulary conventions for the MIL Classification module are:

  • Classes. The finite set of unique categories that represent the possible conclusions to a classification problem. Fusilli and macaroni, for example, represent 2 classes for categorizing pasta. Classes can hold supplementary data that is typically used as a visual aid, such as an identifying icon image or color. You can add classes to a dataset context. Classes are also known as class definitions, labels, or outputs.

  • Classification. Applying mathematical learning processes, such as deep learning or machine learning, to solve identification and categorization issues known as classification problems. Typically, such problems are not easily solvable using traditional image processing techniques.

  • Classifier. The mathematical architecture that must be trained to perform classification (predict the class to which a target belongs). You can have a predefined CNN classifier or a tree ensemble classifier. A classifier is also known as a network or a model; it can also be referred to as a CNN or a tree ensemble.

  • Classifier context. A MIL object that stores a classifier and its settings. The terms classifier context and classifier are often used interchangeably.

  • Dataset. A type of database containing a labeled series of entries (images or features) with which to train a classifier.

  • Dataset context. A MIL object that stores a dataset and its settings. The terms dataset context and dataset are often used interchangeably.

  • Predicting. Using a trained classifier to identify the class to which the target belongs. The target is typically an image (image classification) or a set of features (feature classification), though it can also be a dataset. Predicting and prediction are used interchangeably and are also known as classification or inference.

  • Training. The process in which the classifier learns to predict the class to which the data belongs.

  • Training context. A MIL object that stores the training settings with which to train a classifier. You can have a CNN training context or a tree ensemble training context.

  • Tree ensemble. A classifier that uses multiple decision trees and bootstrap aggregating. After training this classifier with a features dataset, you can use it to perform feature classification. (A simple voting sketch follows this list.)
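
As a rough illustration of how a tree ensemble reaches a decision, the following self-contained C sketch (not the MIL API; the toy trees and thresholds are invented for this example) has each decision tree vote for a class and reports the most-voted class:

  /* Illustrative sketch only (not the MIL API): a tree ensemble
     predicts by majority vote over its decision trees. */
  #include <stdio.h>

  #define NUM_CLASSES 3

  typedef int (*TreePredictFn)(const double *features);

  /* Three toy "trees": each maps a feature set to a class index. */
  static int Tree0(const double *f) { return f[0] > 0.5 ? 1 : 0; }
  static int Tree1(const double *f) { return f[1] > 0.2 ? 1 : 2; }
  static int Tree2(const double *f) { return (f[0] + f[1]) > 0.9 ? 1 : 0; }

  /* The ensemble's prediction is the class with the most votes. */
  static int EnsemblePredict(TreePredictFn trees[], int numTrees,
                             const double *features)
  {
     int votes[NUM_CLASSES] = {0};
     for (int t = 0; t < numTrees; t++)
        votes[trees[t](features)]++;

     int best = 0;
     for (int c = 1; c < NUM_CLASSES; c++)
        if (votes[c] > votes[best])
           best = c;
     return best;
  }

  int main(void)
  {
     TreePredictFn trees[] = { Tree0, Tree1, Tree2 };
     double features[] = { 0.7, 0.3 };  /* One feature set (dataset entry). */
     printf("Predicted class: %d\n", EnsemblePredict(trees, 3, features));
     return 0;
  }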

Additional basic concepts and vocabulary conventions, more specifically related to classifiers, datasets, training, and predicting, are listed below.

Classifiers

The basic concepts and vocabulary conventions related to classifiers are:

  • FCNet. Refers to a fully convolutional network. Matrox uses this term to name the CNNs that it has defined and that you can specify when you allocate a predefined CNN classifier context; for example, M_FCNET_M.

  • Predefined CNN. A classifier that uses a convolutional neural network (CNN) that was predefined by Matrox, and that must be trained with an images dataset. Once the CNN is trained, you can use it to perform image classification (classify an entire image) or coarse segmentation (a coarse pixel-level classification of an image).

  • Receptive field. The portion of the input image that is visible to the CNN when it makes a prediction. The size of the receptive field is equivalent to the source layer's image size. (A worked size calculation follows this list.)

  • Source layer. The input (initial) layer of a predefined CNN.

  • Weights. Hidden, internal parameters of the classifier that transform data. Classifiers can have millions of weights. During training, modifications to the training mode controls (hyperparameters) can affect how the weights are established. Once a classifier is trained, its weights no longer change. Weights are sometimes referred to as learnable parameters.
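
The following self-contained C sketch illustrates the general receptive field arithmetic for a stack of convolutional layers; the kernel sizes and strides below are hypothetical and do not describe an actual FCNet architecture:

  /* Generic receptive field arithmetic (illustrative only; the
     layer parameters are made up, not those of a Matrox FCNet). */
  #include <stdio.h>

  typedef struct { int kernel; int stride; } ConvLayer;

  int main(void)
  {
     /* A hypothetical stack of convolutional layers. */
     ConvLayer layers[] = { {3, 1}, {3, 2}, {3, 1}, {3, 2} };
     int numLayers = (int)(sizeof(layers) / sizeof(layers[0]));

     int rf   = 1;  /* Receptive field of one output unit, in input pixels. */
     int jump = 1;  /* Input-pixel distance between adjacent output units.  */

     for (int i = 0; i < numLayers; i++)
     {
        rf   += (layers[i].kernel - 1) * jump;
        jump *= layers[i].stride;
     }

     /* An rf x rf region of the input image influences one prediction. */
     printf("Receptive field: %d x %d pixels\n", rf, rf);
     return 0;
  }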

Datasets

The basic concepts and vocabulary conventions related to datasets are:

  • Augmentation. Creating plausible variations of a source dataset entry to increase the number of entries. The source dataset entry, and all variations made from it (the additional entries), are considered part of the augmentation.

  • Dataset entry. One row of fields in a dataset. An entry's fields define the class (label) to which the entry belongs (the ground truth), an image (file name and path) or a set of features (numerical values) representing that class, and a UUID (key) that uniquely identifies the entry. Dataset entries are also known as samples or inputs; each entry in a features dataset is also known as a set of features or a feature set.

  • Development dataset. The dataset used to evaluate the performance of the classifier's training and to regulate overfitting. Entries in the development dataset, training dataset, and testing dataset should be unique to their set. Tree ensemble classifiers typically do not require a development dataset; such classifiers use out-of-bag entries, which can evaluate the training's performance (provided there are no augmentations in the training dataset).

  • Features dataset. A dataset that holds the features with which to train a tree ensemble classifier.

  • Ground truth. The class to which a dataset entry belongs. You should determine this with direct human observation or, in some cases, assisted labeling.

  • Images dataset. A dataset that holds the images with which to train a CNN classifier.

  • Labeling. Specifying the class definition (ground truth) that a dataset entry represents. Training requires labeled entries.

  • Source dataset. A dataset that holds all the data with which to train a classifier. Typically, you will split your source dataset into a training dataset, a development dataset, and a testing dataset. (A splitting sketch follows this list.)

  • Testing dataset. An optional dataset that serves as a final check to determine whether the classifier is fully trained. Entries in the testing dataset, training dataset, and development dataset should be unique to their set.

  • Training dataset. The dataset that trains the classifier (the training dataset entries update the classifier's weights). Entries in the training dataset, development dataset, and testing dataset should be unique to their set.

  • UUID. A universally unique identifier used as a key to identify a dataset entry.
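
As an illustration of the recommended split, the following self-contained C sketch (not the MIL API) shuffles the indices of a source dataset's entries and divides them into training, development, and testing subsets, so that each entry is unique to its set; the 80/10/10 ratio is an arbitrary example:

  /* Illustrative 80/10/10 split of a source dataset's entry indices
     into training / development / testing subsets (not the MIL API). */
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define NUM_ENTRIES 100

  int main(void)
  {
     int idx[NUM_ENTRIES];
     for (int i = 0; i < NUM_ENTRIES; i++)
        idx[i] = i;

     /* Fisher-Yates shuffle so the split is random. */
     srand((unsigned)time(NULL));
     for (int i = NUM_ENTRIES - 1; i > 0; i--)
     {
        int j = rand() % (i + 1);
        int tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
     }

     int numTrain = NUM_ENTRIES * 80 / 100;
     int numDev   = NUM_ENTRIES * 10 / 100;

     /* Each entry lands in exactly one subset. */
     printf("Training:    shuffled entries [0..%d)\n", numTrain);
     printf("Development: shuffled entries [%d..%d)\n", numTrain, numTrain + numDev);
     printf("Testing:     shuffled entries [%d..%d)\n", numTrain + numDev, NUM_ENTRIES);
     return 0;
  }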

Training

The basic concepts and vocabulary conventions related to training are:

  • Bootstrap aggregating (bagging). Randomly selecting the entries with which to train each tree of a tree ensemble classifier; entries are either in-the-bag or out-of-bag (per tree). MIL uses bootstrapping, bagging, randomness, and multiple learning algorithms to maximize the accuracy and performance of a tree ensemble classifier. (A sampling sketch follows this list.)

  • Complete training. A training mode that resets the classifier's weights. Use this mode to completely restart the training of a CNN classifier, or to train an untrained CNN classifier.

  • Confusion matrix. A table, in matrix format, that presents, for each class, how many entries were correctly classified during training and how many were confused with other classes. A confusion matrix is also known as an error matrix.

  • Epoch. One full cycle of training a CNN classifier; that is, one complete pass through the entire training dataset.

  • Fine tuning. A training mode designed for a CNN classifier that is mostly trained. You can fine tune a classifier if it was previously trained to solve a similar problem, and you are using the same image size and the same number of classes.

  • In-the-bag. The dataset entries that MIL randomly selects (per tree) during bootstrap aggregating, to train a tree ensemble classifier. The random selection is done with replacement; the randomly selected entries are available for reselection.

  • Loss. The result of a mathematical loss (or cost) function that MIL uses to evaluate the lack of confidence (doubt) associated with the classification during training. After each epoch, MIL uses the loss value to adjust the classifier's weights to help achieve a lower loss at the end of the next epoch.

  • Mini-batch. A subset of entries in an images dataset. Since such datasets typically contain numerous entries, which require a considerable amount of memory to manage, the training process randomly splits them into groups, or mini-batches, to improve efficiency.

  • Out-of-bag. The dataset entries that are not in-the-bag (per tree). MIL uses these entries to evaluate the performance of the classifier's training and to regulate overfitting.

  • Overfitting. When a classifier is trained to classify the training data so specifically that it performs poorly when given other, similar data. To prevent overfitting and develop a properly generalized classifier, MIL uses a development dataset or out-of-bag entries.

  • Training mode. The process by which MIL trains a CNN classifier, and the extent to which the training uses previously learned information. You can train with a complete process, a transfer learning process, or a fine tuning process. Changing the training mode affects how the training process establishes and evolves the classifier's weights. Training mode controls are also known as hyperparameters.

  • Transfer learning. A training mode in which the weights of the feature extraction layers are taken from the previously trained CNN classifier. This mode is typically used when the quantity of training data is limited.
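
The following self-contained C sketch (not the MIL API) illustrates the bootstrap sampling behind bagging for a single tree: entries are drawn with replacement, so some are selected several times (in-the-bag) while, on average, roughly a third are never selected (out-of-bag):

  /* Illustrative bootstrap sampling for one tree of an ensemble
     (not the MIL API): draws with replacement determine which
     entries are in-the-bag and which are out-of-bag. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define NUM_ENTRIES 10

  int main(void)
  {
     int drawn[NUM_ENTRIES] = {0};

     srand((unsigned)time(NULL));

     /* Draw as many samples as there are entries, with replacement:
        the same entry can be selected more than once. */
     for (int i = 0; i < NUM_ENTRIES; i++)
        drawn[rand() % NUM_ENTRIES]++;

     for (int e = 0; e < NUM_ENTRIES; e++)
     {
        if (drawn[e] > 0)
           printf("Entry %d: in-the-bag (drawn %d time(s))\n", e, drawn[e]);
        else
           printf("Entry %d: out-of-bag\n", e);
     }
     return 0;
  }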

Predicting

The basic concepts and vocabulary conventions related to predicting are:

  • Assisted labeling. Performing the prediction operation on unlabeled dataset entries, and then using the predictions that have a very high score as the ground truth (label) of those entries. This is also known as active learning.

  • Score. A measure, expressed as a percentage, of how strongly the target belongs to a class. This is also known as accuracy, or accuracy score. (A generic scoring sketch follows this list.)

  • Target. The image or set of features that the prediction operation classifies.
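
MIL computes scores internally; as a generic illustration of the idea of a percentage score per class, the following self-contained C sketch converts hypothetical raw per-class outputs into percentages with a softmax and reports the best class:

  /* Generic scoring illustration (not MIL's internal computation):
     a softmax turns raw per-class outputs into percentage scores. */
  #include <stdio.h>
  #include <math.h>

  #define NUM_CLASSES 3

  int main(void)
  {
     double raw[NUM_CLASSES] = { 2.0, 0.5, -1.0 };  /* Hypothetical outputs. */
     double scores[NUM_CLASSES];
     double sum = 0.0;

     for (int c = 0; c < NUM_CLASSES; c++)
     {
        scores[c] = exp(raw[c]);
        sum += scores[c];
     }

     int best = 0;
     for (int c = 0; c < NUM_CLASSES; c++)
     {
        scores[c] = 100.0 * scores[c] / sum;  /* Score as a percentage. */
        if (scores[c] > scores[best])
           best = c;
        printf("Class %d score: %.1f%%\n", c, scores[c]);
     }
     printf("Predicted class: %d\n", best);
     return 0;
  }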