MNEflow API

mneflow.utils.MetaData class

class mneflow.utils.MetaData

Class containing all metadata required to run model training, prediction, and evaluation. Produced by mneflow.produce_tfrecords, saved to “path”, and can be restored if you need to rerun training on existing TFRecords.

See mneflow.utils.load_meta() docstring.

path

A path where the output TFRecord files (path + /tfrecords/), models (path + /models/), and the corresponding metadata file (path + data_id + meta.pkl) will be stored.

Type:

str

data_id

Filename prefix for the output files and the metadata file.

Type:

str

input_type

Type of input data.

‘trials’ - treats each of n inputs as an iid sample, produces dataset with dimensions (n, 1, t, ch)

‘seq’ - treats each of n inputs as a sequence of shorter segments, produces dataset with dimensions (n, seq_length, segment, ch)

‘continuous’ - treats inputs as a single continuous sequence, produces dataset with dimensions (n*(t-segment)//aug_stride, 1, segment, ch)

Type:

str {‘trials’, ‘continuous’, ‘seq’, ‘fconn’}

target_type

Type of target variable.

‘int’ - for classification, ‘float’ - for regression problems, ‘signal’ - for regression or classification of continuous (possibly multichannel) data. Requires a “transform_targets” function to be applied to the target variables.

Type:

str {‘int’, ‘float’, ‘signal’}

n_folds

Number of folds to split the data for training/validation/testing. One fold of the n_folds is used as a validation set. If test_set == ‘holdout’, one extra fold is generated and used as the test set. Defaults to 5.

Type:

int, optional

test_set

Defines if a separate holdout test set is required. ‘holdout’ saves 50% of the validation set. ‘loso’ produces an additional TFRecord so that each input file can be used as a test set in leave-one-subject-out cross-validation. None does not produce a separate test set. Defaults to None.

Type:

str {‘holdout’, ‘loso’, None}, optional

fs

Sampling frequency, required only if inputs are not mne.Epochs.

Type:

float, optional

Notes

See mneflow.produce_tfrecords and mneflow.utils.preprocess for more details and available options.
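
Examples

A minimal sketch of creating metadata and restoring it later (the epochs variable and paths here are placeholders):

>>> meta = mneflow.produce_tfrecords(epochs, path='./tfr/', data_id='demo',
...                                  input_type='trials', target_type='int')
>>> meta = mneflow.utils.load_meta('./tfr/', data_id='demo')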

__init__()
explore_components(sorting='output_corr', integrate=['vars', 'folds'], info=None, sensor_layout='Vectorview-grad', class_names=None)

Plots the weights of the output layer.

Parameters:
  • pat (int [0, self.specs['n_latent'])) – Index of the latent component to highlight

  • t (int [0, self.h_params['n_t'])) – Index of timepoint to highlight

Returns:

An imshow figure of shape [n_latent, y_shape].

Return type:

figure

make_fake_evoked(topos, sensor_layout)
plot_combined_pattern()
plot_spectra(method='output_corr', log=True, savefig=False)
Relative power spectra of a given latent component before and after applying the convolution.

Parameters:
  • patterns_struct – instance of patterns_struct produced by model.compute_patterns

  • ax (axes) –

  • fs (float) – Sampling frequency.

  • log (bool) – Apply log-transform to the spectra.

plot_topos(topos, sensor_layout='Vectorview-mag', class_subset=None)

Plot any spatial distribution in the sensor space. TODO: Interpolation??

Parameters:
  • topos (np.array) – [n_ch, n_classes, …]

  • sensor_layout (str, optional) – Sensor layout name. Defaults to ‘Vectorview-mag’.

  • class_subset (np.array, optional) –

Return type:

None.

plot_waveforms()
restore_model()

Restores a previously saved model from metadata.

save(verbose=True)

Saves the metadata to self.data[‘path’] + self.data[‘data_id’]

update(data=None, preprocessing=None, train_params=None, model_specs=None, patterns=None, results=None, weights=None)

Updates metadata file

mneflow.models.BaseModel class

class mneflow.models.BaseModel(meta=None, dataset=None, specs_prefix=False)

Bases: object

Parent class for all MNEflow models.

Provides fast and memory-efficient data handling and a simplified API. Custom models can be built by overriding the _build_graph and _set_optimizer methods.
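
For example, a minimal custom architecture could be sketched as follows (layer sizes are placeholders; self.X is the input placeholder described under build_graph below):

>>> class MyModel(mneflow.models.BaseModel):
...     def build_graph(self):
...         dmx = mneflow.layers.DeMixing(size=32)       # spatial filters
...         fc = mneflow.layers.FullyConnected(size=2)   # e.g. 2 output classes
...         return fc(dmx(self.X))                       # y_pred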

__init__(meta=None, dataset=None, specs_prefix=False)
Parameters:
  • dataset (mneflow.Dataset) – Dataset object.

  • specs (dict) – Dictionary of model-specific hyperparameters. Must include at least model_path, the path for saving a trained model. See Model subclass definitions for details. Unless otherwise specified, default hyperparameters are used for each implemented model.

build(optimizer='adam', loss=None, metrics=None, mapping=None, learn_rate=0.0003)

Compile a model.

Parameters:
  • optimizer (str, tf.optimizers.Optimizer) – Defaults to “adam”.

  • loss (str, tf.keras.losses.Loss) – Defaults to MSE if target_type is “float” and “softmax_crossentropy” if target_type is “int”.

  • metrics (str, tf.keras.metrics.Metric) – Defaults to RMSE if target_type is “float” and “categorical_accuracy” if target_type is “int”.

  • learn_rate (float) – Learning rate, defaults to 3e-4

  • mapping (str) –
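
Examples

A usage sketch, assuming meta and dataset were created as shown elsewhere in this reference:

>>> model = mneflow.models.BaseModel(meta=meta, dataset=dataset)
>>> model.build(optimizer='adam', learn_rate=3e-4)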

build_graph()

Build computational graph using defined placeholder self.X as input.

Can be overridden in a subclass for a customized architecture.

Returns:

y_pred – Output of the forward pass of the computational graph. Prediction of the target variable.

Return type:

tf.Tensor

evaluate(dataset=False)
Returns:

  • losses (list) – model loss on a specified dataset

  • metrics (np.array) – metrics evaluated on a specified dataset

permutation_p_value(dataset=None, n_perm=1000)
plot_hist()

Plot loss history during training.

predict(dataset=None)
Returns:

  • y_true (np.array) – ground truth labels taken from the dataset

  • y_pred (np.array) – model predictions
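
Examples

For instance, evaluating and predicting on the default datasets (see the signatures above):

>>> losses, metrics = model.evaluate()
>>> y_true, y_pred = model.predict()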

predict_sample(x)
prune_weights(increase_regularization=3.0)
save()

Saves the model and, optionally, patterns and confusion matrices.

shuffle_weights()
train(n_epochs=10, eval_step=None, min_delta=1e-06, early_stopping=3, mode='single_fold', prune_weights=False, collect_patterns=False, class_weights=None)

Train a model

Parameters:
  • n_epochs (int) – Maximum number of training epochs.

  • eval_step (int, None) – Iterations per epoch. If None, each epoch passes over the training set exactly once.

  • early_stopping (int) – Patience parameter for early stopping. Specifies the number of epochs during which the validation cost is allowed to rise before training stops.

  • min_delta (float, optional) – Convergence threshold for validation cost during training. Defaults to 1e-6.

  • mode (str, optional) – can be ‘single_fold’, ‘cv’, ‘loso’. Defaults to ‘single_fold’

  • collect_patterns (bool) – Whether to compute and store patterns after training each fold.

  • class_weights (None, dict) – Whether to apply custom weights for each class.
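
Examples

A sketch of a typical training call (all values are illustrative):

>>> model.train(n_epochs=100, eval_step=100, early_stopping=5, mode='single_fold')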

update_log(rms=None, prefix='')

Logs experiment to self.model_path + self.scope + ‘_log.csv’.

If the file exists, appends a line to the existing file.

update_results()

mneflow.Dataset

class mneflow.data.Dataset(meta, train_batch=50, test_batch=None, split=True, class_subset=None, pick_channels=None, decim=None, rebalance_classes=False, **kwargs)

Bases: object

Dataset constructed from TFRecord files using the metadata.

__init__(meta, train_batch=50, test_batch=None, split=True, class_subset=None, pick_channels=None, decim=None, rebalance_classes=False, **kwargs)

Initializes tf.data.TFRecordDataset objects.

Parameters:
  • meta (MetaData) – Instance of MetaData, output of mneflow.utils.produce_tfrecords. See mneflow.utils.produce_tfrecords and mneflow.MetaData for details.

  • train_batch (int, None, optional) – Training mini-batch size. Defaults to 50. If None, equals the whole training set size.

  • test_batch (int, None, optional) – Test/validation mini-batch size. Defaults to None. If None, equals the whole test/validation set size.

  • split (bool) – Whether to split the dataset into training and validation folds based on h_params[‘folds’]. Defaults to True. Can be False if the dataset is imported for evaluating performance on the held-out set or for visualizations.

  • class_subset (list of int) – Pick a subset of the classes. For example, in a 5-class classification problem class_subset=[0, 2, 4] will filter the dataset to discriminate between these classes, without changing the parameters of the whole dataset (e.g. y_shape=5).

  • pick_channels (array of int) – Pick a subset of channels

  • decim (int) – Apply decimation in time. Note this feature does not check for aliasing effects.

  • rebalance_classes (bool) – Apply rejection sampling to oversample underrepresented classes. Defaults to False.
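
Examples

For example, restricting a hypothetical 5-class dataset to three of its classes:

>>> dataset = mneflow.Dataset(meta, train_batch=100, class_subset=[0, 2, 4])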

class_weights()

Computes class weights that take class proportions into account.

mneflow.utils

Specifies utility functions.

@author: Ivan Zubarev, ivan.zubarev@aalto.fi

class mneflow.utils.MetaData

Bases: object

See the mneflow.utils.MetaData class documentation above for the full description of attributes and methods.

mneflow.utils.cont_split_indices(data, events, n_folds=5, segments_per_fold=10)
Parameters:
  • data (ndarray) – 3d data array (n, ch, t)

  • events (ndarray) – array of target variables (labels)

  • n_folds (int) – number of folds

  • segments_per_fold (int) – minimum number of different (non-contiguous) data segments in each fold

Returns:

  • data (ndarray) – 3d data array (n, ch, t)

  • events (ndarray) – labels

  • folds (list of ndarrays) – indices for each fold

mneflow.utils.cosine_similarity(y_true, y_pred)
mneflow.utils.import_data(inp, array_keys={'X': 'X', 'y': 'y'})

Import epoch data into X, y data/target pairs.

Parameters:
  • inp (list, mne.epochs.Epochs, str) – List of mne.epochs.Epochs or strings with filenames. If input is a single string or Epochs object, it is first converted into a list.

  • array_keys (dict, optional) – Dictionary mapping {‘X’: ‘data_matrix’, ‘y’: ‘labels’}, where ‘data_matrix’ and ‘labels’ are names of the corresponding variables, if the input is paths to .mat or .npz files. Defaults to {‘X’: ‘X’, ‘y’: ‘y’}

Returns:

data, targets – data.shape = [n_epochs, channels, times]

targets.shape = [n_epochs, y_shape]

Return type:

ndarray
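
Examples

A sketch for .mat files that store the data under custom variable names (file paths are placeholders):

>>> data, targets = mneflow.utils.import_data(['s01.mat', 's02.mat'],
...                                           array_keys={'X': 'data', 'y': 'labels'})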

mneflow.utils.load_meta(path, data_id='')

Load a metadata file.

Parameters:

path (str) – Path to the TFRecord folder.

data_id (str, optional) – Filename prefix of the metadata file. Defaults to ‘’.

Returns:

meta – Metadata file

Return type:

MetaData

mneflow.utils.plot_confusion_matrix(cm, classes=None, normalize=True, title=None, cmap=<matplotlib.colors.LinearSegmentedColormap object>, vmax=None)

This function prints and plots the confusion matrix. Normalization can be applied by setting normalize=True.

mneflow.utils.preprocess(data, events, sample_counter, input_type='trials', n_folds=5, scale=False, scale_interval=None, crop_baseline=False, segment=False, aug_stride=None, seq_length=None, segment_y=False)

Preprocess input data. Applies scaling, segmenting/augmentation, and defines the split into training/validation folds.

Parameters:
  • data (np.array, (n_epochs, n_channels, n_times)) – input data array

  • events (np.array) – input array of target variables (n_epochs, …)

  • input_type (str {trials, continuous}) – See produce_tfrecords.

  • n_folds (int) – Number of folds defining the train/validation/test split.

  • sample_counter (int) – Number of training examples in the dataset.

  • scale (bool, optional) – Whether to perform scaling to baseline. Defaults to False.

  • scale_interval (NoneType, tuple of ints or floats, optional) – Baseline definition. If None (default) scaling is performed based on all timepoints of the epoch. If tuple, then baseline is data[tuple[0] : tuple[1]]. Only used if scale == True.

  • crop_baseline (bool, optional) – Whether to crop baseline specified by ‘scale_interval’ after scaling (defaults to False).

  • segment (bool, int, optional) – If specified, splits the data into smaller segments of specified number of time points. Defaults to False

  • aug_stride (int, optional) – If specified, sets the stride (in time points) for ‘segment’, allowing extraction of overlapping segments. Has to be <= segment. Only applied within each fold to prevent data leakage. Only applied if ‘segment’ is not False. If None, it is set equal to the length of ‘segment’, returning non-overlapping segments. Defaults to None.

  • seq_length (int or None) – Length of segment sequence.

  • segment_y (bool) – Whether to segment the target variable in the same way as the data. Only used if segment != False.

Returns:

  • X (np.array) – Data array of dimensions [n_epochs, n_seq, n_t, n_ch]

  • Y (np.array) – Label arrays of dimensions [n_epochs, *(y_shape)]

  • folds (list of np.arrays)
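
Examples

A hedged sketch for epoched input (the baseline interval is illustrative, and sample_counter here assumes one training example per epoch):

>>> X, Y, folds = mneflow.utils.preprocess(data, events, sample_counter=len(data),
...                                        input_type='trials', n_folds=5,
...                                        scale=True, scale_interval=(0, 100))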

mneflow.utils.preprocess_realtime(data, decimate=False, picks=None, bp_filter=False, fs=None)

Implements minimal preprocessing for convenient real-time use.

Parameters:
  • data (np.array, (n_epochs, n_channels, n_times)) – input data array

  • picks (np.array) – indices of channels to pick

  • decimate (int) – decimation factor for downsampling

  • bp_filter (tuple of ints) – Band-pass filter cutoff frequencies

  • fs (int) – sampling frequency. Only used if bp_filter is used

mneflow.utils.preprocess_targets(y, scale_y=False, transform_targets=None)
mneflow.utils.produce_labels(y, return_stats=True)

Produce labels array from e.g. event (unordered) trigger codes.

Parameters:
  • y (ndarray, shape (n_epochs,)) – Array of trigger codes.

  • return_stats (bool) – Whether to return optional outputs.

Returns:

  • inv (ndarray, shape (n_epochs)) – Ordered class labels.

  • total_counts (int, optional) – Total count of events.

  • class_proportions (dict, optional) – {new_class: proportion of new_class in the dataset}.

  • orig_classes (dict, optional) – Mapping {new_class_label: old_class_label}.
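
Examples

For instance, assuming classes are numbered in sorted order of the original trigger codes:

>>> import numpy as np
>>> inv, total, props, orig = mneflow.utils.produce_labels(np.array([13, 21, 13, 34]))
>>> inv
array([0, 1, 0, 2])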

mneflow.utils.produce_tfrecords(inputs, path, data_id, fs=1.0, input_type='trials', target_type='int', array_keys={'X': 'X', 'y': 'y'}, n_folds=5, predefined_split=None, test_set=False, scale=False, scale_interval=None, crop_baseline=False, segment=False, aug_stride=None, seq_length=None, overwrite=False, transform_targets=False, scale_y=False)

Produce TFRecord files from input, apply (optional) preprocessing.

Calling this function will convert the input data into the TFRecords format that is used to efficiently store and run TensorFlow models on the data.

Parameters:
  • inputs (mne.Epochs, list of str, tuple of ndarrays) – Input data.

  • path (str) – A path where the output TFRecord and corresponding metadata files will be stored.

  • data_id (str) – Filename prefix for the output files.

  • fs (float, optional) – Sampling frequency, required only if inputs are not mne.Epochs

  • input_type (str {'trials', 'continuous', 'seq', 'fconn'}) –

    Type of input data.

    ’trials’ - treats each of n inputs as an iid sample, produces dataset with dimensions (n, 1, t, ch)

    ’seq’ - treats each of n inputs as a sequence of shorter segments, produces dataset with dimensions (n, seq_length, segment, ch)

    ’continuous’ - treats inputs as a single continuous sequence, produces dataset with dimensions (n*(t-segment)//aug_stride, 1, segment, ch)

  • target_type (str {'int', 'float'}) –

    Type of target variable.

    ’int’ - for classification, ‘float’ - for regression problems, ‘signal’ - for regression or classification of continuous (possibly multichannel) data. Requires a “transform_targets” function to be applied to the target variables.

  • n_folds (int, optional) – Number of folds to split the data for training/validation/testing. One fold of the n_folds is used as a validation set. If test_set == ‘holdout’, one extra fold is generated and used as the test set. Defaults to 5.

  • predefined_split (list of lists, optional) – Pre-defined split of the dataset into training/validation folds. Should match exactly the size and type of MetaData.data[‘folds’], the size of the dataset, and contain n_folds.

  • test_set (str {'holdout', 'loso', None}, optional) – Defines if a separate holdout test set is required. ‘holdout’ saves 50% of the validation set. ‘loso’ saves the whole dataset in original order for leave-one-subject-out cross-validation. None does not produce a separate test set. Defaults to None.

  • segment (bool, int, optional) – If specified, splits the data into smaller segments of the specified number of time points. Defaults to False.

  • aug_stride (int, optional) – Sliding-window augmentation stride parameter. If specified, sets the stride (in time points) for ‘segment’, allowing extraction of overlapping segments. Has to be <= segment. Only applied within each fold to prevent data leakage. Only applied if ‘segment’ is not False. If None, it is set equal to the length of ‘segment’, returning non-overlapping segments. Defaults to None.

  • scale (bool, optional) – Whether to perform scaling to baseline. Defaults to False.

  • scale_interval (NoneType, tuple of ints, optional) – Baseline definition. If None (default) scaling is performed based on all timepoints of the epoch. If tuple, then baseline is data[tuple[0] : tuple[1]]. Only used if scale == True.

  • crop_baseline (bool, optional) – Whether to crop the baseline specified by ‘scale_interval’ after scaling. Defaults to False.

  • array_keys (dict, optional) – Dictionary mapping {‘X’: ‘data_matrix’, ‘y’: ‘labels’}, where ‘data_matrix’ and ‘labels’ are names of the corresponding variables if the input is paths to .mat or .npz files. Defaults to {‘X’: ‘X’, ‘y’: ‘y’}.

  • transform_targets (callable, optional) – Custom function used to transform target variables.

  • seq_length (int, optional) – Length of segment sequence.

  • overwrite (bool, optional) – Whether to overwrite the metafile if it already exists at the specified path.

Returns:

meta – Metadata associated with the processed dataset. Contains all the information about the dataset required for further processing with mneflow. Whenever the function is called, a copy of the metadata is also saved to data_path/meta.pkl so it can be restored at any time.

Return type:

mneflow.MetaData

Notes

Pre-processing functions are implemented mostly for convenience when working with array inputs. When working with mne.Epochs, using the corresponding MNE functions is preferred.

Examples

>>> meta = mneflow.produce_tfrecords(input_paths, **import_opts)
mneflow.utils.pve(y_true, y_pred)
mneflow.utils.r2_score(y_true, y_pred)
mneflow.utils.regression_metrics(y_true, y_pred)
mneflow.utils.scale_to_baseline(X, baseline=None, crop_baseline=False)

Perform global scaling based on a specified baseline.

Subtracts the mean of each channel and divides by the standard deviation of all channels during the specified baseline interval.

Parameters:
  • X (ndarray) – Data array with dimensions [n_epochs, n_channels, time].

  • baseline (tuple of int, None) – Baseline definition (in samples). If baseline is set to None (default) the whole epoch is used for scaling.

  • crop_baseline (bool) – Whether to crop the baseline after scaling is applied. Only used if baseline is specified.

Returns:

X – Scaled data array.

Return type:

ndarray
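
Examples

For example, scaling each epoch to a 100-sample baseline and cropping the baseline afterwards (X is a placeholder array of shape [n_epochs, n_channels, time]):

>>> X_scaled = mneflow.utils.scale_to_baseline(X, baseline=(0, 100), crop_baseline=True)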

mneflow.layers

Defines mneflow.layers for mneflow.models.

@author: Ivan Zubarev, ivan.zubarev@aalto.fi
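
As a sketch, these layers can be composed like standard Keras layers (sizes are placeholders, and x stands for an input tensor following the (n, 1, t, ch) convention used by input_type=’trials’):

>>> from mneflow.layers import DeMixing, LFTConv, TempPooling, FullyConnected
>>> dmx = DeMixing(size=32)                            # spatial demixing
>>> tconv = LFTConv(size=32, filter_length=17, pooling=2)  # temporal convolution
>>> pool = TempPooling(pooling=2, stride=2)            # temporal pooling
>>> fc = FullyConnected(size=2)                        # output layer
>>> y_pred = fc(pool(tconv(dmx(x))))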

class mneflow.layers.BaseLayer(*args, **kwargs)

Bases: Layer

__init__(size, nonlin, specs, **args)
class mneflow.layers.DeMixing(*args, **kwargs)

Bases: BaseLayer

Spatial demixing Layer

__init__(scope='dmx', size=None, nonlin=<function identity>, axis=-1, specs={}, **args)
build(input_shape)

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Parameters:

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x, training=None)
classmethod from_config(config)

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Parameters:

config – A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.

class mneflow.layers.FullyConnected(*args, **kwargs)

Bases: BaseLayer, Layer

Fully-connected layer

__init__(scope='fc', size=None, nonlin=<function identity>, specs={}, **args)
build(input_shape)

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Parameters:

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x, training=None)

FullyConnected layer currying, to apply layer to any input tensor x

classmethod from_config(config)

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Parameters:

config – A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.

class mneflow.layers.LFTConv(*args, **kwargs)

Bases: BaseLayer

Stackable temporal convolutional layer, interpretable (LF)

__init__(scope='tconv', size=32, nonlin=<function relu>, filter_length=7, pooling=2, padding='SAME', specs={}, **args)
build(input_shape)

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Parameters:

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x, training=None)
classmethod from_config(config)

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Parameters:

config – A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.

class mneflow.layers.LSTM(*args, **kwargs)

Bases: LSTM

__init__(scope='lstm', size=32, nonlin='tanh', dropout=0.0, recurrent_activation='tanh', recurrent_dropout=0.0, use_bias=True, unit_forget_bias=True, kernel_regularizer=None, bias_regularizer=None, return_sequences=True, stateful=False, unroll=False, **args)

build(input_shape)

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Parameters:

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

classmethod from_config(config)

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Parameters:

config – A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.

class mneflow.layers.SquareSymm(*args, **kwargs)

Bases: BaseLayer

SquaredSymmetric Layer

__init__(scope='ssym', size=None, nonlin=<function identity>, axis=1, specs={}, **args)
build(input_shape)

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Parameters:

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x, training=None)
classmethod from_config(config)

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Parameters:

config – A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.

class mneflow.layers.TempPooling(*args, **kwargs)

Bases: BaseLayer

__init__(scope='pool', stride=2, pooling=2, specs={}, padding='SAME', pool_type='max', **args)
build(input_shape)

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Parameters:

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x)
get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.

class mneflow.layers.VARConv(*args, **kwargs)

Bases: BaseLayer

Stackable temporal convolutional layer

__init__(scope='tconv', size=32, nonlin=<function relu>, filter_length=7, pooling=2, padding='SAME', specs={}, **args)
build(input_shape)

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Parameters:

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x, training=None)
classmethod from_config(config)

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Parameters:

config – A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.