smac.epm.gaussian_process.kernels.boing

Functions

construct_gp_kernel(kernel_kwargs, ...)

Construct a GP kernel with the given kernel initialization arguments and the cont_dims and cat_dims of the problem.

Classes

FITCKernel(base_kernel, X_inducing, ...[, ...])

FITCMean(covar_module[, batch_shape])

MixedKernel(cont_kernel, cat_kernel)

A special form of ProductKernel.

class smac.epm.gaussian_process.kernels.boing.FITCKernel(base_kernel, X_inducing, likelihood, X_out, y_out, active_dims=None)[source]

Bases: gpytorch.kernels.kernel.Kernel

forward(x1, x2, diag=False, **kwargs)[source]

Compute the kernel function.

Return type

LazyTensor

num_outputs_per_input(x1, x2)[source]

Number of outputs per input. If x1 is of size n x d and x2 is of size m x d, then the size of the kernel matrix will be (n * num_outputs_per_input) x (m * num_outputs_per_input).

Parameters
  • x1 (torch.Tensor) – the first input of the kernel

  • x2 (torch.Tensor) – the second input of the kernel

Returns

res – For base kernels such as Matérn or RBF kernels, this value needs to be 1.

Return type

int

posterior_mean(inputs)[source]

The posterior mean of the FITC kernel; it serves as the prior mean of the dense kernel.

Parameters

inputs (torch.Tensor(N_inputs, D)) – inputs to the FITC kernel

Returns

res – The posterior mean of the FITC kernel

Return type

torch.Tensor(N_inputs, 1)
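
A minimal usage sketch, assuming illustrative tensor shapes, a Matérn base kernel, and a Gaussian likelihood (none of these are prescribed by this module):

    import torch
    from gpytorch.kernels import MaternKernel
    from gpytorch.likelihoods import GaussianLikelihood
    from smac.epm.gaussian_process.kernels.boing import FITCKernel

    # Observations outside the current subspace and inducing points that summarize them
    X_out = torch.rand(50, 3)
    y_out = torch.rand(50)
    X_inducing = torch.rand(10, 3)

    fitc_kernel = FITCKernel(
        base_kernel=MaternKernel(nu=2.5, ard_num_dims=3),
        X_inducing=X_inducing,
        likelihood=GaussianLikelihood(),
        X_out=X_out,
        y_out=y_out,
    )

    # Posterior mean of the FITC approximation at new inputs; this is what serves
    # as the prior mean of the dense kernel.
    X_in = torch.rand(5, 3)
    prior_mean = fitc_kernel.posterior_mean(X_in)  # torch.Tensor of shape (5, 1)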

class smac.epm.gaussian_process.kernels.boing.FITCMean(covar_module, batch_shape=torch.Size([]), **kwargs)[source]

Bases: gpytorch.means.mean.Mean

forward(input)[source]

Compute the posterior mean from the cached values of the FITC kernel.

Parameters

input (torch.Tensor(N_xin, D)) – input torch Tensor

Returns

res – the posterior mean of the FITC GP model

Return type

torch.Tensor(N_xin)
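
A sketch of how FITCMean can be wired together with a FITCKernel in a gpytorch model; the DenseGP class below is illustrative and not SMAC's actual model, and it assumes the fitc_kernel object from the sketch above:

    import gpytorch
    from smac.epm.gaussian_process.kernels.boing import FITCMean

    class DenseGP(gpytorch.models.ExactGP):
        def __init__(self, train_x, train_y, likelihood, fitc_kernel):
            super().__init__(train_x, train_y, likelihood)
            # The FITC posterior mean, computed from values cached in fitc_kernel,
            # acts as the prior mean of this dense GP.
            self.mean_module = FITCMean(covar_module=fitc_kernel)
            self.covar_module = fitc_kernel

        def forward(self, x):
            covar_x = self.covar_module(x, x)
            mean_x = self.mean_module(x)
            return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)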

class smac.epm.gaussian_process.kernels.boing.MixedKernel(cont_kernel, cat_kernel)[source]

Bases: gpytorch.kernels.kernel.ProductKernel

A special form of ProductKernel. It is composed of a cont_kernel and a cat_kernel that work with continuous and categorical parameters, respectively. Its forward pass accepts an additional parameter that determines whether only cont_kernel is applied to the input.

forward(x1, x2, diag=False, cont_only=False, **params)[source]

Compute the kernel values. If cont_only is True, the categorical kernel is omitted.

Return type

LazyTensor
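
A sketch of composing a MixedKernel by hand; the kernel choices are illustrative (in particular, the RBF kernel merely stands in for a kernel suited to categorical parameters), since SMAC normally builds the sub-kernels via construct_gp_kernel below:

    import torch
    from gpytorch.kernels import MaternKernel, RBFKernel
    from smac.epm.gaussian_process.kernels.boing import MixedKernel

    cont_dims = [0, 1]  # columns holding continuous hyperparameters
    cat_dims = [2]      # column holding a categorical hyperparameter

    cont_kernel = MaternKernel(nu=2.5, ard_num_dims=2, active_dims=cont_dims)
    cat_kernel = RBFKernel(active_dims=cat_dims)  # stand-in for a categorical kernel

    kernel = MixedKernel(cont_kernel=cont_kernel, cat_kernel=cat_kernel)

    X = torch.rand(4, 3)
    k_full = kernel(X, X).evaluate()                  # product of both sub-kernels
    k_cont = kernel(X, X, cont_only=True).evaluate()  # categorical kernel omitted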

smac.epm.gaussian_process.kernels.boing.construct_gp_kernel(kernel_kwargs, cont_dims, cat_dims)[source]

Construct a GP kernel with the given kernel initialization arguments and the cont_dims and cat_dims of the problem. Since a subspace might not have the same number of dimensions as the global search space, the kernel needs to be reconstructed every time a new subspace is generated.

Parameters
  • kernel_kwargs (Dict[str, Any]) –

    kernel kwargs. Arguments to initialize the kernels. It needs to contain the following items:

    cont_kernel: type of continuous kernels
    cont_kernel_kwargs: additional arguments for continuous kernels, for instance, length constraints and prior
    cat_kernel: type of categorical kernels
    cat_kernel_kwargs: additional arguments for categorical kernels, for instance, length constraints and prior
    scale_kernel: type of scale kernels
    scale_kernel_kwargs: additional arguments for scale kernels, for instance, length constraints and prior

  • cont_dims (np.ndarray) – dimensions of continuous hyperparameters

  • cat_dims (np.ndarray) – dimensions of categorical hyperparameters

Returns

kernel – the constructed kernel

Return type

Union[Kernel, SKLKernels]
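
A sketch of assembling kernel_kwargs and calling construct_gp_kernel; the concrete kernel classes, constraints, and dimension indices are illustrative assumptions rather than SMAC defaults:

    import numpy as np
    from gpytorch.constraints import Interval
    from gpytorch.kernels import MaternKernel, ScaleKernel
    from smac.epm.gaussian_process.kernels.boing import construct_gp_kernel

    kernel_kwargs = {
        "cont_kernel": MaternKernel,
        "cont_kernel_kwargs": {"nu": 2.5, "lengthscale_constraint": Interval(1e-3, 1e3)},
        # Stand-in: in practice a kernel suited to categorical parameters is given here.
        "cat_kernel": MaternKernel,
        "cat_kernel_kwargs": {"nu": 2.5},
        "scale_kernel": ScaleKernel,
        "scale_kernel_kwargs": {"outputscale_constraint": Interval(1e-3, 1e3)},
    }

    cont_dims = np.array([0, 1])  # indices of the continuous hyperparameters
    cat_dims = np.array([2])      # indices of the categorical hyperparameters

    kernel = construct_gp_kernel(kernel_kwargs, cont_dims=cont_dims, cat_dims=cat_dims)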