elektronn.training package

elektronn.training.CNNData module

elektronn.training.CNNData.plotTrainingTarget(img, lab, stride=1)[source]

Plots a raw image vs. its label to check whether valid batches are produced. The raw data is also shown overlaid with the labels.

Parameters:
img: 2d array

raw image from batch

lab: 2d array

labels

stride: int

stride of labels
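
A minimal usage sketch (d is a hypothetical CNNData instance, see the class below; getbatch with strided=False yields full-resolution labels matching stride=1):

>>> data, label = d.getbatch(batch_size=1, strided=False)
>>> plotTrainingTarget(data[0, 0], label[0], stride=1)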

class elektronn.training.CNNData.CNNData(patch_size=None, stride=None, offset=None, n_dim=2, n_lab=None, anistropic_data=False, mode='img-img', zchxy_order=False, border_mode='crop', pre_process=None, upright_x=False, float_label=False, affinity=False)[source]

Bases: object

Patch creation and data handling interface for image-like training data

Parameters:
patch_size: 2/3-tuple

Specifying CNN input shape of a single example, without channels: (x,y)/(z,x,y)

stride: 2/3-tuple

Specifying CNN output stride. May be None for scalar labels

offset: 2/3-tuple

Specifying overall CNN convolution border. May be None for scalar labels

n_dim: int

2 or 3, CNN dimension

n_lab: int

Number of distinct classes/labels. If not provided (None), this is inferred automatically (slow!)

anistropic_data: Bool

If True, 2d slices are only cut and rotated along the z-axis; otherwise all 3 alignments are used

mode: str

Mode describing the kind of data and labels: img-img or img-scalar. If the labels are scalar but the data is a stack of many examples (along the z-axis), the scalar labels should be stacked into a vector. For vect-scalar training use the TrainData class instead.

zchxy_order: Bool

If the data files are already in memory layout (z,ch,x,y)/(z,x,y), this option must be set to True, which makes data loading faster.

border_mode: string

For img-scalar training: specifies how to treat images that don’t match a valid CNN input size

upright_x: Bool

If True, image augmentation leaves the upright position of natural images intact, e.g. they are only mirrored horizontally, not vertically. Note: the horizontal direction corresponds to ‘y’, because ‘x’ comes before ‘y’ and the vertical axis comes before the horizontal in numpy!

float_label: Bool

Whether to return labels as float32 (for regression) or int16 (for classification)

affinity: str/False

False/’affinity’/’malis’: ’malis’ additionally returns the segmentation IDs
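
A minimal construction sketch for 2d img-img training (all values are illustrative; stride and offset must match the actual CNN architecture):

>>> from elektronn.training.CNNData import CNNData
>>> d = CNNData(patch_size=(123, 123), stride=(8, 8), offset=(30, 30),
>>>             n_dim=2, n_lab=2, mode='img-img')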

addDataFromFile(d_path, l_path, d_files, l_files, cube_prios=None, valid_cubes=[], downsample_xy=False)[source]
Parameters:
d_path/l_path: string

Directories to load data from

d_files/l_files: list

List of data/label files in <path> directory (must be in the same order!). Each list element is a tuple of the form (<Name of h5-file>, <Key of h5-dataset>)

cube_prios: list

(Not normalised) list of sampling weights to draw examples from the respective cubes. If None, the cube sizes are used as priorities.

valid_cubes: list

List of indices of cubes (from the file lists) to use as validation data and exclude from training; may be an empty list to skip performance estimation on validation data.
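
A usage sketch with hypothetical file names and h5 keys:

>>> d.addDataFromFile(d_path='~/data/', l_path='~/data/',
>>>                   d_files=[('raw_cube0.h5', 'raw'), ('raw_cube1.h5', 'raw')],
>>>                   l_files=[('lab_cube0.h5', 'lab'), ('lab_cube1.h5', 'lab')],
>>>                   valid_cubes=[1])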

addDataFromNdarray(d_train, l_train, d_valid=[], l_valid=[], cube_prios=None)[source]
Parameters:
d_train: list of numpy arrays

the input data for Training

l_train: list of numpy arrays

the labels for Training

d_valid: list of numpy arrays

the input data for validation [OPTIONAL]

l_valid: list of numpy arrays

the labels for validation [OPTIONAL]

cube_prios: list of floats

Default: None –> probability of sampling Training data from a cube is proportional to its size
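
A sketch with random toy arrays (the shapes are assumptions; raw data and labels must be compatible with patch_size and offset):

>>> import numpy as np
>>> x = np.random.rand(300, 300).astype(np.float32)           # raw image (hypothetical shape)
>>> y = np.random.randint(0, 2, (300, 300)).astype(np.int16)  # dense labels
>>> d.addDataFromNdarray(d_train=[x], l_train=[y])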

getbatch(batch_size=1, source='train', strided=True, flip=True, grey_augment_channels=[], ret_info=False, warp_on=False, ignore_thresh=0.0, ret_example_weights=False)[source]

Prepares a batch by randomly sampling, shifting and augmenting patches from the data

Parameters:
batch_size: int

Number of examples in batch (for CNNs often just 1)

source: string

Data set to draw data from: ‘train’/’valid’

strided: Bool

If True, the labels are sub-sampled according to the CNN output stride. Non-strided labels require MFP in the CNN!

flip: Bool

If True examples are mirrored and rotated by 90 deg randomly

grey_augment_channels: list

List of channel indices to apply grey-value augmentation to

ret_info: Bool

If True, additional information for each batch example is returned. Currently two info arrays indicating the labelling mode are implemented. The first dimension of these arrays is the batch_size!

warp_on: Bool/Float(0,1)

Whether warping/distortion augmentations are applied to examples (slow -> use multiprocessing). If this is a float, warping is applied to that fraction of examples, e.g. 0.5 -> every other example.

ignore_thresh: float

If the fraction of negative labels in an example patch exceeds this threshold, this example is discarded (Negative labels are ignored for training [but could be used for unsupervised label propagation]).

Returns:
data:

[bs, ch, x, y] or [bs, z, ch, x, y] for 2d and 3d CNNs

label:

[bs, x, y] or [bs, z, x, y]

info1:

(optional) [bs, n_lab]

info2:

(optional) [bs, n_lab]
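
A typical call inside a training loop (argument values are illustrative):

>>> data, label = d.getbatch(batch_size=1, source='train', flip=True,
>>>                          grey_augment_channels=[0], warp_on=0.5)
>>> data.shape   # (bs, ch, x, y) for a 2d CNN
>>> label.shape  # (bs, x, y), sub-sampled by the stride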

getExampleWeights(raw_rec, lab, gain=2.0, blurr=False)[source]

elektronn.training.config module

class elektronn.training.config.MasterConfig[source]

Bases: object

class elektronn.training.config.DefaultConfig[source]

Bases: elektronn.training.config.MasterConfig

class elektronn.training.config.Config(config_file, gpu, trainer_file, use_existing_dir=False, override_MFP_to_active=False, imposed_input_size=None)[source]

Bases: object

Configuration object to manage the parameters of trainingInstance

The monitor_batch_size is automatically fixed to a multiple of the batch_size. An attribute dimensions of type Net.netutils.CNNCalculator is created; it checks whether the CNN architecture (combination of filter sizes and poolings) is valid and determines the input shape closest to the desired_input. The top-level script (trainer_file), the config, the Net module and the Training module are automatically backed up into the CNN directory. The backup is intended to contain all code needed to reproduce the CNN Training.

Parameters:
config_file: string

Path to a CNN config file

gpu: int

ID of the GPU to initialise for usage, e.g. 1 -> “gpu1”. None will initialise gpu0; False will not initialise any GPU. This only works if “device” is not set in .theanorc or if theano has not been imported yet. If the initialisation fails, an error is printed but the script will not crash.

trainer_file: string

Path to the NetTrainer-script (or any other top level script that drives the Training). The path is needed to backup the script in the CNN directory

use_existing_dir: Bool

Do not create a new directory for the CNN if True

override_MFP_to_active: Bool

If true, activates MFP in all layers where possible, ignoring the configuration in the config file. This is useful for prediction using a config file from training. (only for CNN)

imposed_input_size: tuple or None

Similar to the above, this can be used to impose an input size other than the one specified in the config file. (only for CNN)

mandatory_vars = ['SGD_params', 'batch_size', 'max_runtime', 'monitor_batch_size', 'n_steps', 'n_lab', 'save_name', 'mode']
mandatory_data = ['d_files', 'data_path', 'l_files', 'label_path']
mandatory_cnn = ['desired_input', 'filters', 'n_dim', 'nof_filters', 'pool']
mandatory_mlp = ['MLP_layers']
parseConfig()[source]
fixValues()[source]
typeChecks()[source]
checkConfig(custom_dict)[source]
backupScripts(trainer_file)[source]

Saves all python files into the folder specified by self.save_path Also changes working directory to the save_path directory

levenshtein(s1, s2)[source]

Computes the Levenshtein distance between the strings s1 and s2. Taken from: http://en.wikibooks.org/wiki/Algorithm_Implementation/Strings/Levenshtein_distance#Python
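
Usage example (a distance of 1 means one insertion, deletion or substitution):

>>> levenshtein('batch_size', 'batchsize')
1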

elektronn.training.parallelisation module

class elektronn.training.parallelisation.SharedMem[source]

Bases: object

Utilities to share np.arrays between processes

static shm2ndarray(mp_array, shape=None)[source]
Parameters:
mp_array: a mp.Array
shape: (optional) the returned np.ndarray is reshaped to this shape, flat otherwise
Returns:
array: np.ndarray

Can be used like a normal array, but changes are reflected in the shared memory.

Note: the returned array still points to the shared memory, so the data might be changed by another process!
static ndarray2shm(np_array, lock=False)[source]
Parameters:
np_array: np.ndarray

array of arbitrary shape

lock: Bool

Whether to create a multiprocessing.Lock

Returns:
handle: mp.Array

A flat array with the data from the ndarray copied into it
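
A round-trip sketch between the two static methods (dtype and shape are illustrative):

>>> import numpy as np
>>> a = np.arange(6, dtype=np.float64)
>>> handle = SharedMem.ndarray2shm(a)               # flat mp.Array, data copied in
>>> b = SharedMem.shm2ndarray(handle, shape=(2, 3))
>>> b[0, 0] = 42.0                                  # visible to every process holding handle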

puthandle(dtype, shape, data=None, lock=False)[source]

Creates new shared memory and puts it on the queue. Other sub-processes can write to it.

Parameters:
dtype: np.dtype

Type of data to store in array

shape: tuple

Shape of the shared mem to be created

data: np.ndarray

(optional) values to fill shared array with

lock: Bool

Whether to create a multiprocessing.Lock on the shared variable

Returns:
sharedmem handle: mp.Array
class elektronn.training.parallelisation.Proc(mp_arrays, shapes, events, target, target_args, target_kwargs, profile)[source]

Bases: multiprocessing.process.Process

A reusable and configurable background process that does the same job every time events['new'] is set, and signals that it has finished one iteration by setting events['ready']

run()[source]

Method to be run in sub-process; can be overridden in sub-class

class elektronn.training.parallelisation.BackgroundProc(target, dtypes=None, shapes=None, n_proc=1, target_args=(), target_kwargs={}, profile=False)[source]

Bases: elektronn.training.parallelisation.SharedMem

get()[source]

This gets the next result from a background process and blocks until the corresponding proc has finished.

shutdown()[source]

Must be called to free memory if the background tasks are no longer needed

reset()[source]

Should be called after an exception (e.g. by pressing ctrl+c) was raised.
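
A usage sketch, assuming the same target contract as documented for SharedQ (the target receives mp_arrays and shapes and writes its result into the shared arrays; check the source for the exact contract):

>>> import numpy as np
>>> def fill_batch(mp_arrays, shapes):         # hypothetical target
>>>     arr = SharedMem.shm2ndarray(mp_arrays[0], shapes[0])
>>>     arr[...] = np.random.rand(*shapes[0])  # write the result into shared mem
>>> bg = BackgroundProc(fill_batch, dtypes=[np.float32], shapes=[(1, 100, 100)], n_proc=2)
>>> batch = bg.get()                           # blocks until a worker has finished
>>> bg.shutdown()                              # always free the shared memory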

class elektronn.training.parallelisation.SharedQ(n_proc=0, default_target=None, default_args=(), default_kwargs={}, profile=False)[source]

Bases: elektronn.training.parallelisation.SharedMem

FIFO Queue to process np.ndarrays in the background (also pre-loading of data from disk)

Procs must accept a list of mp.Arrays and convert the items to np.ndarrays using SharedQ.shm2ndarray; for this the shapes are required as well. The target requires the signature:

>>> target(mp_arrays, shapes, *args, **kwargs)

where mp_arrays and shapes are added automatically internally

All parameters are optional:

Parameters:
n_proc: int

If larger than 0, a message is printed if too few processes are running

default_target: callable

Default background proc callable

default_args: tuple

Default positional arguments for the background proc

default_kwargs: dict

Default background proc kwargs

profile: Bool

Whether to print timing results in terminal

Examples

Automatic use:

>>> Q = SharedQ(n_proc=2)
>>> Q.startproc(dtypes=, shapes=, target=, target_args=, target_kwargs=)
>>> Q.startproc(dtypes=, shapes=, target=, target_args=, target_kwargs=)
>>> for i in xrange(5):
>>>   Q.startproc(dtypes=, shapes=, target=, target_args=, target_kwargs=)
>>>   item = Q.get() # starts as many new jobs as needed to maintain n_proc
>>>   dosomethingelse(item) # processes work in background to pre-fetch data for the next iteration
startproc(dtypes, shapes, target=None, target_args=(), target_kwargs={})[source]

Starts a new process

Procs must accept a list of mp.Arrays and convert the items to np.ndarrays using SharedQ.shm2ndarray; for this the shapes are required as well. The target requires the signature:

target(mp_arrays, shapes, *args, **kwargs)

where mp_arrays and shapes are added automatically internally

get()[source]

Gets the first result in the queue and blocks until the corresponding proc has finished. If an n_proc value is defined, new procs must be started beforehand to avoid a warning message.

elektronn.training.predictor module

elektronn.training.predictor.create_predncnn(config_file, n_ch, n_lab, gpu=None, override_MFP_to_active=False, imposed_input_size=None, param_file=None)[source]

Creates and compiles a CNN/NN as specified in a config file (used for training). Loads the last parameters from the training directory.

The CNN/NN object is returned

Parameters:
config_file: string

Path to a CNN config file

n_ch: int

Number of input channels, for a MLP this is the dimensionality of the input vectors, for CNNs this is the number of channels in an image/volume (e.g. 1 for plain gray value images)

n_lab: int

Number of distinct labels/classes

gpu: int

ID of the GPU to initialise for usage, e.g. 1 -> “gpu1”. None will initialise gpu0; False will not initialise any GPU. This only works if “device” is not set in .theanorc or if theano has not been imported yet. If the initialisation fails, an error is printed but the script will not crash.

override_MFP_to_active: Bool

If true, activates MFP in all layers where possible, ignoring the configuration in the config file. This is useful for prediction using a config file from training. (only for CNN)

imposed_input_size: tuple or None

Similar to the above, this can be used to impose an input size other than the one specified in the config file. (z,x,y)!!! (only for CNN)

param_file: string/None

If other parameters than “*-Last.param” should be loaded, this can specify the param file.
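
A prediction sketch (paths and sizes are hypothetical; the prediction method belongs to the returned Net object and its exact name should be checked there):

>>> cnn = create_predncnn('~/CNN_Training/3D/MyCNN/config.py', n_ch=1, n_lab=2,
>>>                       gpu=0, override_MFP_to_active=True,
>>>                       imposed_input_size=(10, 200, 200))  # note the (z,x,y) order
>>> # e.g. prob = cnn.predictDense(raw_img)  # hypothetical call on the returned object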

elektronn.training.traindata module

elektronn.training.traindata.sort_human(file_names)[source]

Sort the given list in the way that humans expect.

class elektronn.training.traindata.Data(n_lab=None)[source]

Bases: object

Load and prepare data; base object

getbatch(batch_size, source='train')[source]
createSplitPerm(size, subset_ratio=0.8, seed=None)[source]
createCVSplit(data, label, n_folds=3, use_fold=2, shuffle=False, random_state=None)[source]
splitData(data, label, valid_size, split_no=0)[source]
makeExampleSubset(subset_ratio=0.8, seed=None)[source]
makeFeatureSubset(subset_ratio=0.8, seed=None)[source]
class elektronn.training.traindata.BalancedData[source]

Bases: elektronn.training.traindata.Data

getbatch(batch_size, source='train', balanced=False)[source]
getbatch_balanced(batch_size)[source]
class elektronn.training.traindata.QueueData[source]

Bases: elektronn.training.traindata.Data

queueget(n)[source]
queueupdate(nlls, iteration)[source]
queuereset()[source]
save(path='data')[source]
class elektronn.training.traindata.AdultData(path='~/devel/data/adult.pkl', create=False)[source]

Bases: elektronn.training.traindata.Data

class elektronn.training.traindata.MNISTData(path=None, convert2image=True, warp_on=False, shift_augment=True, center=True)[source]

Bases: elektronn.training.traindata.Data

download()[source]
convert_to_image()[source]

For MNIST / flattened 2d, single-layer, square images

getbatch(batch_size, source='train')[source]
class elektronn.training.traindata.BuzzData(path='~/devel/data/Buzz/Twitter/twitter.pkl', norm_targets=True, target_scale=9999, fold_no=0)[source]

Bases: elektronn.training.traindata.Data

getbatch(batch_size, source='train')[source]
class elektronn.training.traindata.PianoData(path='~/devel/data/PianoRoll/Nottingham_enc.pkl', n_tap=20, n_lab=58)[source]

Bases: elektronn.training.traindata.Data

getbatch(batch_size, source='train')[source]
class elektronn.training.traindata.GeneData(path='~/devel/data/GEMLeR_GeneExpression/Breast_Colon.pkl', fold_no=0)[source]

Bases: elektronn.training.traindata.Data

getbatch(batch_size, source='train')[source]

elektronn.training.trainer module

class elektronn.training.trainer.Trainer(config=None)[source]

Bases: object

Object that manages Training of a CNN.

Parameters:
config: trainutils.ConfigObj
Container for all configurations

Examples

All necessary configuration information is contained in config:

>>> T = Trainer(config)
>>> T.loadData()
>>> T.createNet()
>>> T.run() # The Training loop

If the config options print_status and plot_on are set, the CNN progress can be monitored.

Control during iteration can be exercised by ctrl+c which evokes a commandline. There are various shortcuts displayed but in principle all attributes of the CNN can be accessed:

>>> CNN MENU
>> Debug_Run <<
Shortcuts:
'q' (leave interface),          'abort' (saving params),
'kill'(no saving),      'save'/'load' (opt:filename),
'sf'/' (show filters)',         'smooth' (smooth filters),
'sethist <int>',        'setlr <float>',
'setmom <float>' ,      'params' print info,
Change Training mode :('SGD','CG', 'RPROP', 'LBFGS')
For EVERYTHING else enter your command in the command line
>>> user@cnn: setlr 0.01 # Change learning rate of SGD
>>> user@cnn: CG # Change Training mode CG (Optimizer will be compiled on demand)
Changing Training mode...
>>> user@cnn: self.config.savename # Access an attribute of ``trainerInstance``.
# Inputs containing '(' or '=' will result in a print of the value
'Debug_Run'
>>> user@cnn: print cnn.getDropoutRates() # To see the return of function add 'print'
[0.5, 0.5]
>>> user@cnn: cnn.setOptimizerParams(CG={'max_step': 0.1}) # change CG-'max_step'
>>> user@cnn: q # leave interface
Continuing Training
Compiling CG
  Compiling done - in 7.206 s!
reset()[source]

Resets all history of NLLs etc., randomises the CNN weights, and sets the optimiser hyper-parameters to their initial values from the config

run()[source]

Runs the Training loop until termination. Control during iteration can be exercised by ctrl+c which evokes a commandline. There are various shortcuts displayed but in principle all attributes of the CNN can be accessed:

Examples

Using the command line

>>> CNN MENU
>> Debug_Run <<
Shortcuts:
'q' (leave interface),            'abort' (saving params),
'kill'(no saving),        'save'/'load' (opt:filename),
'sf'/' (show filters)',           'smooth' (smooth filters),
'sethist <int>',          'setlr <float>',
'setmom <float>' ,        'params' print info,
Change Training mode :('SGD','CG', 'RPROP', 'LBFGS')
For EVERYTHING else enter your command in the command line
>>> user@cnn: setlr 0.01 # Change learning rate of SGD
>>> user@cnn: CG # Change Training mode CG (Optimizer will be compiled on demand)
Changing Training mode...
>>> user@cnn: self.config.savename # Access an attribute of ``trainerInstance``.
# Inputs containing '(' or '=' will result in a print of the value
'Debug_Run'
>>> user@cnn: print cnn.getDropoutRates() # To see the return of function add 'print'
[0.5, 0.5]
>>> user@cnn: cnn.setOptimizerParams(CG={'max_step': 0.1}) # change CG-'max_step'
>>> user@cnn: q # leave interface
Continuing Training
Compiling CG
  Compiling done - in 7.206 s!
loadData()[source]
createNet()[source]

Creates CNN according to config

debugGetCNNBatch()[source]

Executes getbatch with un-strided labels and always returns info. The first batch example is plotted and the whole batch is returned for inspection.

testModel(data_source)[source]

Computes NLL and error/accuracy on batch with monitor_batch_size

Parameters:
data_source: string

‘train’ or ‘valid’

Returns:
NLL, error:
predictAndWrite(raw_img, number=0, export_class='all', block_name='', z_thick=5)[source]

Predict and save a slice as a preview image

Parameters:
raw_img : np.ndarray

raw data in the format (ch, x, y, z)

number: int/float

consecutive number for the save name (i.e. hours, iterations etc.)

export_class: str or int

‘all’ writes images of all classes, otherwise only the class with index export_class (int) is saved.

block_name: str

Name/number to distinguish different raw_imgs

previewSliceFromTrainData(cube_i=0, off=(0, 0, 0), sh=(10, 400, 400), number=0, export_class='all')[source]

Predict and save a selected slice from the training data as a preview

Parameters:
cube_i: int

index of source cube in CNNData

off: 3-tuple of int

start index of slice to cut from cube (z,x,y)

sh: 3-tuple of int

shape of cube to cut (z,x,y)

number: int

consecutive number for the save name (i.e. hours, iterations etc.)

export_class: str or int

‘all’ writes images of all classes, otherwise only the class with index export_class (int) is saved.

previewSlice(number=0, export_class='all', max_z_pred=5)[source]

Predict and save data from a separately loaded file as a preview

Parameters:
number: int/float

consecutive number for the save name (i.e. hours, iterations etc.)

export_class: str or int

‘all’ writes images of all classes, otherwise only the class with index export_class (int) is saved.

max_z_pred: int

approximate maximal number of z-slices to produce (depends on CNN architecture)

malisPreviewSlice(batch, name='A')[source]

elektronn.training.trainutils module

elektronn.training.trainutils.import_variable_from_file(file_path, class_name)[source]
elektronn.training.trainutils.parseargs(gpu)[source]
elektronn.training.trainutils.parseargs_dev(args, config_file, gpu)[source]

Parses the commandline arguments if elektronn-train is called as:

“elektronn-train [config=</path/to_file>] [ gpu={auto|false|<int>}]”

elektronn.training.trainutils.get_free_gpu(wait=0, nb_gpus=-1)[source]
elektronn.training.trainutils.initGPU(gpu)[source]
elektronn.training.trainutils.xyz2zyx(shapes)[source]

Swaps dimension order for list of (filter) shapes. This is needed to allow users to specify 2d and 3d filters in the same order.

elektronn.training.trainutils.plotInfoFromFiles(path, save_name, autoscale=True)[source]

Create the plots from backup files in the CNN directory (e.g. if plotting was not enabled during training). The plots are saved as pngs in the current working directory and are not displayed interactively.

Parameters:
path: string

Path to CNN-folder

save_name: string

name of cnn / file prefix

autoscale: Bool

If True, the axes are optimised for reading off values; if False, matplotlib's default scaling is used

elektronn.training.trainutils.saveHist(timeline, history, CG_timeline, errors, save_name)[source]
elektronn.training.trainutils.plotInfo(timeline, history, CG_timeline, errors, save_name, autoscale=True)[source]

Plot graphical info during Training

elektronn.training.trainutils.previewDiffPlot(names, root_dir='~/CNN_Training/3D', block_name=0, c=1, z=0, number=1, save=True)[source]

Visualisation tool to compare the predictions of two or more CNNs.

Parameters:
names: list of str

Folder/Save names of the CNNs

root_dir: str

path in which the CNN folders are located

block_name: int/str

Number/Name of the prediction preview example (“…pred_<i>_c..”)

elektronn.training.trainutils.pprintmenu(save_name)[source]

Print the menu string

elektronn.training.trainutils.userInput(cnn, history_freq)[source]
elektronn.training.trainutils.pickleSave(data, file_name)[source]

Writes one or many objects to a pickle file

Parameters:
data:

A single object to save, or an iterable of objects to save. For an iterable, all objects are written to the file in this order.

file_name: string

path/name of the destination file
elektronn.training.trainutils.pickleLoad(file_name)[source]

Loads all objects that are saved in the pickle file. Multiple objects are returned as a list.
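
A round-trip sketch (the saved objects are illustrative):

>>> pickleSave([params, history], 'train_state.pkl')  # params, history: any picklable objects
>>> params, history = pickleLoad('train_state.pkl')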

elektronn.training.trainutils.h5Save(data, file_name, keys='None', compress=True)[source]

Writes one or many arrays to an h5 file

Parameters:
data:

A single array to save, or an iterable of arrays to save. For an iterable, all arrays are written to the file.

file_name: string

path/name of the destination file

keys: string / list thereof

For a single array this is a single string used as the name of the data set. For multiple arrays, each dataset is named by the corresponding key. If keys is None, the dataset names are created by enumeration: data%i

compress: Bool

Whether to use lzf compression; defaults to True. Most useful for label arrays.
elektronn.training.trainutils.h5Load(file_name, keys=None)[source]

Loads data sets from an h5 file

Parameters:
file_name: string

path/name of the file to load

keys: string / list thereof

Load only the data sets specified in keys and return them as a list in the order of keys. For a single key the data is returned directly, not as a list. If keys is None, all datasets listed in the keys attribute of the h5 file are loaded.
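
A round-trip sketch for the two h5 helpers (array names and keys are illustrative):

>>> h5Save([raw, lab], 'cube0.h5', keys=['raw', 'lab'])
>>> raw, lab = h5Load('cube0.h5', keys=['raw', 'lab'])
>>> lab = h5Load('cube0.h5', keys='lab')  # single key: returned directly, not as a list
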
elektronn.training.trainutils.timeit(foo, n=1)[source]

Decorator: decorates foo such that its execution time is printed upon call
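
A usage sketch as a decorator (the decorated function is hypothetical):

>>> @timeit
>>> def forward_pass():
>>>     pass  # some expensive computation
>>> forward_pass()  # prints the execution time upon call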

elektronn.training.warping module

elektronn.training.warping.warp2dJoint(img, lab, patch_size, rot, shear, scale, stretch)[source]

Warp image and label data jointly. Non-image labels are ignored, i.e. lab must be 3d to be warped

Parameters:
img: array

Image data. The array must be 3-dimensional (ch,x,y) and larger than or equal to the patch size

lab: array

Label data (with offsets subtracted)

patch_size: 2-tuple

Patch size excluding channel for the image: (px, py). The warping result of the input image is cropped to this size

rot: float

Rotation angle in deg for rotation around z-axis

shear: float

Shear angle in deg for shear w.r.t xy-diagonal

scale: 3-tuple of float

Scale per axis

stretch: 4-tuple of float

Fraction of perspective stretching from the center (where stretching is always 1) to the outer border of the image per axis. The entries correspond to:

  • X stretching depending on Y
  • Y stretching depending on X
Returns:
img, lab: np.ndarrays

Warped image and labels (cropped to patch_size)

elektronn.training.warping.warp3dJoint(img, lab, patch_size, rot=0, shear=0, scale=(1, 1, 1), stretch=(0, 0, 0, 0), twist=0)[source]

Warp image and label data jointly. Non-image labels are ignored, i.e. lab must be 3d to be warped

Parameters:
img: array

Image data. The array must be 4-dimensional (z,ch,x,y) and larger than or equal to the patch size

lab: array

Label data (with offsets subtracted)

patch_size: 3-tuple

Patch size excluding channel for the image: (pz, px, py). The warping result of the input image is cropped to this size

rot: float

Rotation angle in deg for rotation around z-axis

shear: float

Shear angle in deg for shear w.r.t xy-diagonal

scale: 3-tuple of float

Scale per axis

stretch: 4-tuple of float

Fraction of perspective stretching from the center (where stretching is always 1) to the outer border of the image per axis. The 4 entries correspond to:

  • X stretching depending on Y
  • Y stretching depending on X
  • X stretching depending on Z
  • Y stretching depending on Z
twist: float

Dependence of the rotation angle on z in deg from center to outer border

Returns:
img, lab: np.ndarrays

Warped image and labels (cropped to patch_size)
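
A usage sketch with mild warping parameters (img, lab and all values are illustrative; img must be at least the required input size, cf. getRequiredPatchSize below):

>>> img_w, lab_w = warp3dJoint(img, lab, patch_size=(10, 100, 100),
>>>                            rot=10, shear=5, scale=(1.0, 1.05, 0.95),
>>>                            stretch=(0.1, 0.1, 0.0, 0.0), twist=5)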

elektronn.training.warping.getCornerIx(sh)[source]

Returns array-indices of corner elements for n-dim shape

elektronn.training.warping.getRequiredPatchSize(patch_size, rot, shear, scale, stretch, twist=None)[source]

Given desired patch size and warping parameters: return required size for warping input patch

elektronn.training.warping.getWarpParams(patch_size, amount=1.0)[source]

To be called from CNNData. Get warping parameters + required warping input patch size.

elektronn.training.warping.maketestimage(sh)[source]
elektronn.training.warping.test()[source]

Module contents