Weisfilerlehman
neps.optimizers.bayesian_optimization.kernels.weisfilerlehman#
WeisfilerLehman#
WeisfilerLehman(
h: int = 0,
base_type: str = "subtree",
se_kernel: Stationary = None,
layer_weights=None,
node_weights=None,
oa: bool = False,
node_label: str = "op_name",
edge_label: tuple = "op_name",
n_jobs: int = None,
return_tensor: bool = True,
requires_grad: bool = False,
undirected: bool = False,
**kwargs
)
Bases: GraphKernels
Weisfeiler-Lehman kernel using grakel functions.
Parameters#
h: int: the number of Weisfeiler-Lehman iterations.
base_type: str: defines the base kernel of the WL iteration. Possible types are 'subtree' (default), 'sp' (shortest path) and 'edge' (the latter two are untested).
se_kernel: Stationary: defines a stationary vector kernel to be used for successive embedding (i.e. the kernel function on which the vector-embedding inner products are computed). If None, the default linear kernel is used.
node_weights
oa: whether the optimal-assignment variant of the Weisfeiler-Lehman kernel should be used.
node_label: the key node attribute.
edge_label: the key edge attribute; only relevant when base_type == 'edge'.
n_jobs: parallelisation to use. The current version does not support parallel computing.
return_tensor: whether to return a torch tensor. If False, a numpy array is returned.
kwargs
Source code in neps/optimizers/bayesian_optimization/kernels/weisfilerlehman.py
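For intuition, the 'subtree' base kernel counts the labels produced by iterated WL relabelling, where each node's label is refined by the multiset of its neighbours' labels. The following is a minimal, self-contained sketch of that refinement, not the NePS/grakel implementation; the adjacency-dict graph format and the `wl_iteration`/`wl_features` helpers are illustrative only:

```python
# Minimal sketch of Weisfeiler-Lehman label refinement (illustrative only;
# NOT the NePS implementation, which delegates to grakel).
from collections import Counter


def wl_iteration(adjacency, labels):
    """One WL step: each node's new label is its old label joined with
    the sorted multiset of its neighbours' labels."""
    new_labels = {}
    for node, neighbours in adjacency.items():
        neighbour_labels = sorted(labels[n] for n in neighbours)
        new_labels[node] = labels[node] + "|" + ",".join(neighbour_labels)
    return new_labels


def wl_features(adjacency, labels, h):
    """Collect label counts over h WL iterations (h=0 keeps only the raw
    labels), mirroring what a subtree-style base kernel counts."""
    features = Counter(labels.values())
    for _ in range(h):
        labels = wl_iteration(adjacency, labels)
        features.update(labels.values())
    return features


# A 3-node path graph a-b-c with op_name-style labels.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
labs = {"a": "conv", "b": "relu", "c": "conv"}
feats = wl_features(adj, labs, h=1)
```

Two graphs are then compared via the inner product of their feature-count vectors (or a stationary kernel on them, when `se_kernel` is set).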
change_se_params#
change_se_params(params: dict)
Change the kernel parameter of the successive embedding kernel.
Source code in neps/optimizers/bayesian_optimization/kernels/weisfilerlehman.py
feature_map#
Get the feature map as a dict of {encoding (position in the feature index): feature string}.
Parameters#
flatten: whether to flatten the dict (originally, the result is layered by h, the number of WL iterations).
Returns#
Source code in neps/optimizers/bayesian_optimization/kernels/weisfilerlehman.py
feature_value#
Given a list of architectures X_s, compute their WL embedding of size N_s x D, where N_s is the length of the list and D is the number of training set features.
RETURNS | DESCRIPTION
---|---
embedding | torch.Tensor of shape N_s x D, described above
names | list of length D, which has a 1-to-1 correspondence with the columns of the embedding matrix above
Source code in neps/optimizers/bayesian_optimization/kernels/weisfilerlehman.py
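To illustrate the N_s x D embedding described above, here is a sketch of how per-graph feature counts could be arranged against a fixed training-set feature index. The `feature_index` dict and `embed` helper are hypothetical names for illustration, not the NePS code:

```python
# Illustrative sketch (not the NePS implementation): turn per-graph WL
# feature counts into an N_s x D embedding, where D is the number of
# features seen in the training set.
import numpy as np

# Hypothetical feature index built on the training set: feature string -> column.
feature_index = {"conv": 0, "relu": 1, "conv|relu": 2}


def embed(graph_features, feature_index):
    """Map a list of per-graph feature-count dicts to an N_s x D matrix.
    Features unseen during training are dropped, since the index is fixed."""
    X = np.zeros((len(graph_features), len(feature_index)))
    for i, feats in enumerate(graph_features):
        for feat, count in feats.items():
            j = feature_index.get(feat)
            if j is not None:
                X[i, j] = count
    return X


# Two "architectures": the second contains a feature ("pool") unknown to
# the training index, so its count is ignored.
X_s = embed([{"conv": 2, "relu": 1}, {"conv": 1, "pool": 3}], feature_index)
```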
forward_t#
Forward pass, but in tensor format.
Parameters#
gr1: single networkx graph
Returns#
K: the kernel matrix
x2 or y: the leaf variable(s) with requires_grad enabled. This allows a future Jacobian-vector product to be computed efficiently.
Source code in neps/optimizers/bayesian_optimization/kernels/weisfilerlehman.py
transform#
transform(gr: list)
transpose: by default, grakel produces output of shape len(y) x len(x2). Use transpose to reshape it to the more conventional shape.
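A small sketch of the transpose convention mentioned above, with assumed shapes for illustration (this does not reproduce the grakel internals):

```python
# Illustration of the transpose option: suppose the library returns a
# cross-kernel of shape len(y) x len(x2) (here 4 training graphs vs
# 2 query graphs) and we want the conventional len(x2) x len(y) layout.
import numpy as np

len_y, len_x2 = 4, 2
K_raw = np.arange(len_y * len_x2, dtype=float).reshape(len_y, len_x2)
K = K_raw.T  # conventional layout: one row per query graph
```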