grakel.MultiscaleLaplacianFast

class grakel.MultiscaleLaplacianFast(n_jobs=None, normalize=False, verbose=False, random_state=None, L=3, P=10, gamma=0.01, heta=0.01, n_samples=50)[source]

The Multiscale Laplacian Graph Kernel (fast variant), as proposed in [KP16].

Parameters
random_state : RandomState or int, default=None

A random number generator instance or an int used to initialize a RandomState as a seed.

L : int, default=3

The number of neighborhoods.

gamma : float, default=0.01

A smoothing parameter (a float value).

heta : float, default=0.01

A smoothing parameter (a float value).

P : int, default=10

The maximum number of eigenvalues kept from the eigenvalue decomposition.

n_samples : int, default=50

The number of vertex samples.

Attributes
random_state_ : RandomState

A RandomState object handling all randomness of the class.

_data_level : dict

A dictionary containing the feature-basis information needed for each level's calculation on transform.
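
Examples

A minimal usage sketch (not taken from the GraKeL documentation): it assumes a GraKeL version that exposes grakel.MultiscaleLaplacianFast, as this page documents, and uses grakel.datasets.fetch_dataset, which downloads the MUTAG benchmark on first use. The parameter values are illustrative only.

>>> from grakel import MultiscaleLaplacianFast
>>> from grakel.datasets import fetch_dataset
>>> # MUTAG: 188 small node-labelled molecular graphs (downloaded on first use)
>>> MUTAG = fetch_dataset("MUTAG", verbose=False)
>>> G, y = MUTAG.data, MUTAG.target
>>> # modest P and n_samples keep the low-rank approximation cheap for this sketch
>>> mlf = MultiscaleLaplacianFast(random_state=42, L=2, P=10, n_samples=20)
>>> K = mlf.fit_transform(G)
>>> K.shape
(188, 188)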

Methods

diagonal(self)

Calculate the kernel matrix diagonal of the fit/transformed data.

fit(self, X[, y])

Fit a dataset for a transformer.

fit_transform(self, X)

Fit and transform on the same dataset.

get_params(self[, deep])

Get parameters for this estimator.

initialize(self)

Initialize all transformer arguments that need initialization.

pairwise_operation(self, x, y)

FLG calculation for the fast Multiscale Laplacian.

parse_input(self, X)

Fast ML Graph Kernel.

set_params(self, **params)

Call the parent method.

transform(self, X)

Calculate the kernel matrix between the given and the fitted dataset.


__init__(self, n_jobs=None, normalize=False, verbose=False, random_state=None, L=3, P=10, gamma=0.01, heta=0.01, n_samples=50)[source]

Initialise a multiscale_laplacian kernel.

diagonal(self)[source]

Calculate the kernel matrix diagonal of the fit/transformed data.

Parameters
None.
Returns
X_diag : np.array

The diagonal of the kernel matrix of the fitted data. This consists of each element calculated with itself.

Y_diag : np.array

The diagonal of the kernel matrix of the transformed data. This consists of each element calculated with itself.
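
As a hedged illustration, continuing the sketch near the top of this page where mlf was fitted on G, both diagonals can be read off after a transform and used, for example, for manual normalization (the row/column order in the comment is an assumption):

>>> K_sub = mlf.transform(G[:20])    # kernel between 20 graphs and the fitted data
>>> X_diag, Y_diag = mlf.diagonal()  # self-kernel value of each fitted / transformed graph
>>> # a manually normalized entry would be K_sub[i, j] / np.sqrt(Y_diag[i] * X_diag[j])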

fit(self, X, y=None)[source]

Fit a dataset for a transformer.

Parameters
X : iterable

Each element must be an iterable with at least one and at most three elements. The first, which is obligatory, is a valid graph structure (an adjacency matrix or an edge_dictionary); the second is node_labels and the third edge_labels (both matching the given graph format). The train samples.

y : None

A transformer does not need a target, yet the pipeline API requires this parameter.

Returns
self : object
Returns self.
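
A hedged train/test sketch of fit followed by transform; the split indices are arbitrary and G is the MUTAG list from the sketch near the top of this page:

>>> G_train, G_test = G[:150], G[150:]
>>> mlf = MultiscaleLaplacianFast(random_state=0, n_samples=20).fit(G_train)
>>> K_test = mlf.transform(G_test)   # rows: test graphs, columns: fitted train graphs
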
fit_transform(self, X)[source]

Fit and transform on the same dataset.

Parameters
X : iterable

Each element must be an iterable with at least one and at most three elements. The first, which is obligatory, is a valid graph structure (an adjacency matrix or an edge_dictionary); the second is node_labels and the third edge_labels (both matching the given graph format). If None, the kernel matrix is calculated upon the fit data. The input samples.

y : None

A transformer does not need a target, yet the pipeline API requires this parameter.

Returns
K : numpy array, shape = [n_targets, n_input_graphs]

The kernel matrix, calculated between all pairs of graphs of the target and the fitted dataset.
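
Because the returned matrix is a precomputed kernel, it plugs directly into scikit-learn estimators that accept kernel="precomputed". A hedged sketch, reusing the G_train/G_test split and the MUTAG targets y from the sketches above:

>>> from sklearn.svm import SVC
>>> y_train, y_test = y[:150], y[150:]
>>> K_train = mlf.fit_transform(G_train)   # square train-vs-train kernel matrix
>>> K_test = mlf.transform(G_test)         # test-vs-train kernel matrix
>>> clf = SVC(kernel="precomputed").fit(K_train, y_train)
>>> y_pred = clf.predict(K_test)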

get_params(self, deep=True)[source]

Get parameters for this estimator.

Parameters
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params : mapping of string to any

Parameter names mapped to their values.
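
get_params and set_params (documented further below) are the standard scikit-learn parameter accessors, so the kernel can be inspected and reconfigured without re-instantiating it; a small hedged sketch:

>>> mlf = MultiscaleLaplacianFast()
>>> params = mlf.get_params()                  # plain dict of the constructor parameters
>>> mlf = mlf.set_params(gamma=0.1, heta=0.1)  # set_params returns self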

initialize(self)[source]

Initialize all transformer arguments that need initialization.

pairwise_operation(self, x, y)[source]

FLG calculation for the fast Multiscale Laplacian.

Parameters
x, y : tuple

A tuple containing the inverse of the S matrix (as an np.array) and its log determinant (for the calculation of the S matrices, see Algorithm 1 of the supplementary material of [KP16]).

Returns
kernel : number

The FLG core kernel value.
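
To make this concrete, here is a rough, non-authoritative sketch of the Bhattacharyya-type quantity this operation computes, assuming each input tuple holds the pair (inverse of S, log determinant of S) for one graph as described above; flg_sketch is a hypothetical helper reflecting a reading of [KP16], not GraKeL's internal code:

>>> import numpy as np
>>> def flg_sketch(x, y):
...     # x, y: (inverse of S, log-determinant of S) for two graphs
...     S_inv_x, logdet_x = x
...     S_inv_y, logdet_y = y
...     # Bhattacharyya kernel of two zero-mean Gaussians with covariances S_x, S_y:
...     # |(S_x^-1 + S_y^-1)/2|^(-1/2) * |S_x|^(-1/4) * |S_y|^(-1/4)
...     _, logdet_mean_inv = np.linalg.slogdet((S_inv_x + S_inv_y) / 2.0)
...     return np.exp(-0.5 * logdet_mean_inv - 0.25 * (logdet_x + logdet_y))
>>> S = 2.0 * np.eye(3)
>>> x = (np.linalg.inv(S), np.linalg.slogdet(S)[1])
>>> k = flg_sketch(x, x)   # identical inputs give a value of (approximately) 1.0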

parse_input(self, X)[source]

Fast ML Graph Kernel.

See the supplementary material of [KP16], Algorithm 1.

Parameters
X : iterable

For the input to pass validation, each element must be an iterable with at least one and at most three elements: the first, which is obligatory, is a valid graph structure (an adjacency matrix or an edge_dictionary); the second is node_labels and the third edge_labels (corresponding to the given graph format). Graph-type objects are also valid input.

Returns
out : list

A list of tuples containing the inverses of the S matrices and their fourth-root determinants.

set_params(self, **params)[source]

Call the parent method.

transform(self, X)[source]

Calculate the kernel matrix between the given and the fitted dataset.

Parameters
X : iterable

Each element must be an iterable with at least one and at most three elements. The first, which is obligatory, is a valid graph structure (an adjacency matrix or an edge_dictionary); the second is node_labels and the third edge_labels (both matching the given graph format). If None, the kernel matrix is calculated upon the fit data. The test samples.

Returns
K : numpy array, shape = [n_targets, n_input_graphs]

The kernel matrix, calculated between all pairs of graphs of the target and the fitted dataset.

Bibliography

KP16

Risi Kondor and Horace Pan. The Multiscale Laplacian Graph Kernel. In Advances in Neural Information Processing Systems, 2990–2998. 2016.