Module A3SOM

A3SOM(input_dim, n_classes, n=8, n_hidden=3, Tmax=1.0, Tmin=0.5,
      dropout_rate=0.0, act='none', abstained=False, dense_block=[],
      normalization='none')

Implementation of an abstained semi-supervised self-organizing map, built on Keras models. The data is first projected onto a SOM, where the best-matching unit (BMU), i.e. the closest neuron, is found for each data point. The activation pattern of the neurons is then fed into fully-connected dense layers to estimate class-membership probabilities. The loss function combines the SOM error and the classification error.
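
A minimal usage sketch (the import path a3som is an assumption; adapt it to wherever the module actually lives):

from a3som import A3SOM  # hypothetical import path

# 30 input features, 3 classes, an 8x8 map
model = A3SOM(input_dim=30, n_classes=3, n=8, n_hidden=3,
              Tmax=1.0, Tmin=0.5, dropout_rate=0.1, act='relu')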

Arguments

input_dim: int.

Number of features in the input data.

n_classes: int.

Number of classes in the input data.

n: int, default=8.

Number of neurons along each axis of the self-organizing map (SOM), so the map contains n*n neurons in total. Values between 4 and 15 are recommended.

n_hidden: int, default=3.

Number of hidden layers in the dense block.

Tmax: float, default=1.0.

Starting temperature (radius) of the neighborhood function for the SOM. Temperature is slowly reduced using exponential decay.

Tmin: float, default=0.5.

End temperature (radius) of the neighborhood function for the SOM. Must satisfy Tmin <= Tmax.

dropout_rate: float, default=0.

Rate of dropout applied after dense layers.

act: {'none', 'relu', 'sigmoid', 'softmax', 'softplus', 'softsign', 'tanh', 'selu', 'elu', 'exponential'}, default='none'.

Activation function to apply to the output of the SOM layer.

abstained: bool, default=False.

If True, the abstained mode of A3SOM is used. If False, the standard classification mode is used.

dense_block: list of layers, default=[].

Replaces the predefined stack of dense layers with a custom list of successive layers (see the sketch at the end of this argument list). Overrides n_hidden and dropout_rate.

normalization: {'none', 'batch', 'layer', 'both'}, default='none'.

Apply batch normalization, layer normalization, or both after each dense layer.
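
For example, the predefined dense block can be replaced by passing a custom stack of Keras layers as dense_block (a sketch; layer sizes are illustrative):

from tensorflow.keras.layers import Dense, Dropout

custom_block = [Dense(64, activation='relu'),
                Dropout(0.2),
                Dense(32, activation='relu')]
model = A3SOM(input_dim=30, n_classes=3, dense_block=custom_block)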

Methods

compile()

compile(self, learning_rates={'som': 0.001, 'dense': 0.0001}, loss_weights={'gamma': 0.6, 'eta': 0.0001}, metrics=['accuracy'], **kwargs)

Configures the model for training.

Args:
learning_rates: dict, default={'som': 0.001, 'dense': 0.0001}.

Dictionary with the learning rates to use for the SOM's optimizer ('som') and the dense layers' optimizer ('dense').

loss_weights: dict, default={'gamma': 0.6, 'eta': 0.0001}.

Optional dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of the model's outputs. gamma is the weight associated with the SOM loss (distortion), and eta is the weight associated with the regularization term. Both are scaled relative to the categorical cross-entropy loss, which has a fixed weight of 1. The loss minimized by the model is then the weighted sum of these individual losses.

metrics: list, default=['accuracy'].

List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics.
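
Putting the pieces together, the total loss minimized during training is crossentropy + gamma * som_distortion + eta * regularization. A compile call with explicit learning rates and loss weights might look like this (a sketch; the values shown are just the defaults):

model.compile(learning_rates={'som': 0.001, 'dense': 0.0001},
              loss_weights={'gamma': 0.6, 'eta': 0.0001},
              metrics=['accuracy'])
# total loss = crossentropy + 0.6 * som_distortion + 0.0001 * regularization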

fit()

fit(self, X, y, **kwargs)

Trains the model for a fixed number of epochs (iterations on a dataset).

Args:
X: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

y: Target data.

Like the input data X, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with X (you cannot have Numpy inputs and tensor targets, or vice versa). If X is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from X).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer.

Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: 'auto', 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. 'auto' defaults to 1 for most cases, but to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g., in a production environment).

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. If both validation_data and validation_split are provided, validation_data will override validation_split.

validation_data:

Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Note that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers such as noise and dropout. validation_data overrides validation_split. validation_data should be a tuple (x_val, y_val) of Numpy arrays or tensors.

class_weight:

Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

workers: Integer.

Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
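
A typical fit call (a sketch; X_train and y_train are placeholder names for a Numpy feature matrix and its targets — one-hot encoded targets are assumed here, given the categorical cross-entropy loss):

history = model.fit(X_train, y_train,
                    batch_size=32,
                    epochs=100,
                    validation_split=0.2,
                    verbose=2)
print(history.history['loss'][-1])  # final training loss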

predict()

predict(self, X, distances=False, **kwargs)

Predicts class-membership probabilities for the samples in X.

Args:
X: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

distances: bool, default = False.

If False, only proba_classes is returned. If True, a tuple is returned: the first element is proba_classes, and the second is distances, the matrix of distances between the data points and the SOM neurons.

Returns:
proba_classes:

A list of predicted probabilities for each class, for each sample in X.

distances:

A matrix of distances between each sample in X and the SOM neurons. Only returned if distances=True.
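
For example (a sketch; X_test is a placeholder for a Numpy array with input_dim features per sample):

import numpy as np

proba_classes, distances = model.predict(X_test, distances=True)
y_pred = np.argmax(proba_classes, axis=1)  # most probable class per sample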

summary()

summary(self)

Returns the summary of the model.

get_params()

get_params(self)

Returns the parameters used to train the model.