
Discriminant analysis

A multidimensional classifier can be generated with the analySIS “discriminant analysis” add-in.

The add-in is routinely applied, among other places, at a major pharmaceutical company’s cancer research division to distinguish proliferating from non-proliferating cancer cells.



Example



The three different shapes (triangle, square and circle) should be automatically classified. This is not possible with a single parameter such as the form factor. With the aid of discriminant analysis, however, it is quite simple: parameters such as circumference, circularity, Feret max and min, and aspect ratio can be combined by means of a learning routine, as the sketch below illustrates.
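The add-in’s internal learning routine is not reproduced here; as a rough sketch of the idea, the following Python example trains a linear discriminant classifier on invented shape measurements with scikit-learn. All feature values, and the expected prediction, are hypothetical and for illustration only:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training data: one row per object, columns are the
# shape parameters mentioned above (circumference, circularity,
# Feret max, Feret min, aspect ratio). Values are illustrative only.
X_train = np.array([
    [3.1, 0.60, 1.2, 0.9, 1.33],   # triangle
    [3.0, 0.58, 1.1, 0.8, 1.38],   # triangle
    [4.0, 0.79, 1.4, 1.0, 1.40],   # square
    [4.1, 0.80, 1.5, 1.1, 1.36],   # square
    [3.5, 1.00, 1.1, 1.1, 1.00],   # circle
    [3.6, 0.99, 1.2, 1.2, 1.00],   # circle
])
y_train = ["triangle", "triangle", "square", "square", "circle", "circle"]

# The "learning routine": fit the discriminant functions to the sample.
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# Classify a new, unlabelled object from its measured parameters.
new_object = np.array([[3.4, 0.98, 1.15, 1.12, 1.03]])
print(lda.predict(new_object))   # expected: ['circle']
```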


Discriminant analysis separates the triangles (red) from the circles (green) and the squares (blue). The further apart the areas are, the better the classes can be separated.


In this case, the three shapes can be easily distinguished from one another and thus, classified.


The carbon black analysis add-in is an example of shape classification through discriminant analysis.



Notes on discriminant analysis

Definition of the discriminant functions

The discriminant analysis add-in classifies objects using linear discriminant analysis:

Discriminant analysis is a class of statistical procedures whose aim is to assign objects to one of several a priori determined classes, populations or categories on the basis of their features and properties, or to find the features that are important for such a classification. The classification rule is derived from a sample of already classified objects. This universal approach allows discriminant analysis procedures to be applied to practical tasks in a wide variety of areas, such as medicine (diagnostics), biology (systematics, automatic counting of colonies on Petri dishes), industrial fabrication (quality control), security systems, character recognition or military intelligence (object recognition).

The discriminant functions produced by discriminant analysis allow predictions for cases that have not yet been assigned to a group.

If there are only two groups, a single discriminant function A is sufficient for the separation. The discriminant function A can be represented as a linear combination of the variables Xi in the following form:

A = v0 + v1X1 + v2X2 + ... + viXi

whereby:
A, B = discriminant functions (B is needed only with three or more groups, see below)
Xi = feature variables
vi, wi = discriminant coefficients of the feature variable Xi
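As a small worked example (all coefficients and feature values invented for illustration), evaluating A is a simple weighted sum:

```python
import numpy as np

# Hypothetical coefficients v0..v3 and feature values X1..X3.
v0 = -1.5                        # constant term
v = np.array([0.8, -0.4, 2.1])   # discriminant coefficients v1..v3
X = np.array([2.0, 1.5, 0.3])    # feature variables X1..X3

A = v0 + v @ X                   # A = v0 + v1*X1 + v2*X2 + v3*X3
print(A)                         # ≈ 0.13
```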

In the case of three or more groups, a single discriminant function is no longer sufficient to separate the groups satisfactorily. As a rule, some “discriminatory potential” (large overlapping areas between the groups) remains after the first discriminant axis has been computed, so that additional discriminant axes have to be defined. With G groups, G−1 discriminant functions can be formulated; the number of discriminant functions can also be no greater than the number of feature variables. In practice, not all of these functions contribute significantly to the separation of the groups, and not every additional function noticeably reduces the overlap between them, so extracting only a few is sufficient. Empirical experience shows that two discriminant functions usually suffice (Cooley & Lohnes 1971, p. 244; Backhaus et al. 1996, p. 213). This add-in therefore uses only two discriminant functions for the separation (listed in the discriminant analysis results table as non-standardized discriminant function coefficients):

A = v0 + v1X1 + v2X2 + ... + viXi
B = w0 + w1X1 + w2X2 + ... + wiXi

Together, these two equations span a two-dimensional plane in which the groups are spatially separated from one another. This plane is what the scatter diagram shows.
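Continuing the hypothetical scikit-learn sketch from the first example above (reusing lda, X_train and y_train, which are illustrative, not the add-in’s data), the projection onto this plane and a corresponding scatter diagram can be produced as follows:

```python
import numpy as np
import matplotlib.pyplot as plt

# Project the objects onto the plane spanned by the first two
# discriminant functions A and B.
scores = lda.transform(X_train)            # column 0 = A, column 1 = B
labels = np.asarray(y_train)

for group in np.unique(labels):
    pts = scores[labels == group]
    plt.scatter(pts[:, 0], pts[:, 1], label=group)
plt.xlabel("Discriminant function A")
plt.ylabel("Discriminant function B")
plt.legend()
plt.show()
```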

The importance of discriminant functions

In discriminant analysis, several functions are calculated that describe the correlation between the grouping variable and the independent variables. These are the so-called canonical discriminant functions. The number of functions is either the number of independent variables or the number of groups minus one, whichever is smaller. The functions, which define relationships between the various group characteristics, serve to predict the value of the grouping (dependent) variable. Some of them are more effective than others. This effectiveness is expressed by the size of a function’s eigenvalue: the higher the eigenvalue, the more of the grouping variable’s variance can be explained by the corresponding function. The functions are listed in order of their eigenvalues, and the eigenvalue shares (variance percentages) add up to 100%.
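In the same hypothetical scikit-learn sketch, these eigenvalue shares are exposed directly, so the ranking can be inspected like this:

```python
# Share of the between-group variance explained by each discriminant
# function, sorted in descending order; the shares sum to 100%.
for i, share in enumerate(lda.explained_variance_ratio_, start=1):
    print(f"Function {i}: {share:.1%} of the variance")
```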

The separating force of discriminant functions

Two metrics are listed here to describe the separating force of discriminant functions.

Canonical correlation (C.C.): This is a specific measure for the dependence of the grouping variable on the given function. The closer C.C. is to 1, the better the groups are separated by the discriminant function. C.C. is calculated from the function’s eigenvalue (EV) as follows: C.C. = √(EV / (EV + 1))
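For example, an eigenvalue of EV = 4 gives C.C. = √(4/5) ≈ 0.894 (good separation), while EV = 0.1 gives C.C. ≈ 0.30 (poor separation):

```python
import math

def canonical_correlation(ev: float) -> float:
    # Canonical correlation from a discriminant function's eigenvalue.
    return math.sqrt(ev / (ev + 1.0))

print(canonical_correlation(4.0))   # 0.894... (strong separation)
print(canonical_correlation(0.1))   # 0.301... (weak separation)
```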

Wilks’ Lambda: This is a cumulative value used to calculate the chi-square and p values of the functions. Wilks’ Lambda for a given function is obtained by subtracting the squared canonical correlation from one and multiplying the results for that function and all subsequent functions. The lower Wilks’ Lambda is, the better the groups are separated by the discriminant function (the more different the groups are).
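A minimal sketch of this cumulative calculation (the eigenvalues are invented for illustration; note that 1 − C.C.² simplifies to 1/(1 + EV)):

```python
def wilks_lambda(eigenvalues):
    # Wilks' Lambda for function k is the product of (1 - C.C.^2),
    # i.e. 1 / (1 + EV), over function k and all subsequent functions.
    lambdas = []
    for k in range(len(eigenvalues)):
        lam = 1.0
        for ev in eigenvalues[k:]:
            lam *= 1.0 / (1.0 + ev)
        lambdas.append(lam)
    return lambdas

print(wilks_lambda([4.0, 0.5]))   # [0.133..., 0.666...]: lower = better separation
```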

Object classification

The distance concept is implemented in this add-in for the assignment of objects of unknown group affiliation to the predetermined groups.

According to the distance concept, an object is assigned to the group whose center (centroid) presents the smallest distance from the object.

Only the first two functions are considered in this add-in, so the distances are calculated on a two-dimensional plane, the same plane shown in the scatter diagram. The distances from an object to the individual group centroids are measured on this plane.
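Continuing the hypothetical scikit-learn sketch (reusing lda, X_train, y_train and new_object from above), the distance concept on this plane reduces to a nearest-centroid rule:

```python
import numpy as np

# Group centroids in the two-dimensional discriminant plane.
scores = lda.transform(X_train)
labels = np.asarray(y_train)
centroids = {g: scores[labels == g].mean(axis=0) for g in np.unique(labels)}

# Project the unknown object and assign it to the nearest centroid.
new_score = lda.transform(new_object)[0]
distances = {g: np.linalg.norm(new_score - c) for g, c in centroids.items()}
print(min(distances, key=distances.get))
```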

The Mahalanobis distances between the groups (upper right corner of the matrix) are a generalized distance measure. The Mahalanobis distance measures the spacing of the groups based on the values of all discriminant functions taken together. A large Mahalanobis distance means that the groups are separated satisfactorily by the functions, so that an instance belonging to a group is unlikely to have been wrongly classified. Conversely, a short distance indicates fuzzy separation.
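As a hedged sketch of how such a between-group Mahalanobis distance can be computed (the scores and the pooled-covariance simplification are illustrative, not the add-in’s exact procedure):

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Hypothetical 2-D discriminant scores for two groups.
group_a = np.array([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.0], [0.0, 0.3]])
group_b = np.array([[2.9, 1.1], [3.2, 0.8], [3.0, 1.0], [3.1, 1.2]])

# Pooled within-group covariance (equal group sizes assumed here),
# inverted to serve as the Mahalanobis metric.
pooled = (np.cov(group_a.T) + np.cov(group_b.T)) / 2.0
vi = np.linalg.inv(pooled)

# Distance between the two group centroids: the larger it is,
# the better the groups are separated.
d = mahalanobis(group_a.mean(axis=0), group_b.mean(axis=0), vi)
print(d)
```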