The release log for BoTorch.
- Enable creating a new Ax release
- Support `post_processing_func` in `optimize_with_nsgaii` for post-processing optimization results, e.g., to round discrete dimensions to valid values (#3215).
- Require GPyTorch>=1.15.2 and linear_operator>=0.6.1 (#3182).
- Fix BAxUS bugs (#3204).
- Fix q-dim bug in `ScalarizedPosteriorMean` (#3191).
- Project `candidate_set` in `qMultiFidelityMaxValueEntropy.__init__` before passing to `super().__init__`, allowing `candidate_set` to be either `n x d` (without fidelity dims) or `n x (d + s)` (with fidelity dims) (#3205).
- Fix bug in `optimize_acqf_mixed_alternating` that may produce candidates with invalid values when using parameter constraints on discrete parameters (#3212).
- Make `d` and `target_fidelities` required arguments in `project_to_target_fidelity` to eliminate silent bugs (#3203).
- Add test harness infrastructure for acquisition function testing (#3190, #3192).
- Fix typos and docstring issues across botorch (#3211).
- Fix docstring issues in analytic acquisition functions (#3208).
- Require Python>=3.11 and PyTorch>=2.2 (#3152).
- Require GPyTorch>=1.15.1 (#3181).
- Efficient leave-one-out cross-validation for Gaussian processes (#3098) and ensemble models (#3103).
- `TrajectoryPlanningProblem` test problem (#3182).
- Feasibility-driven Trust Region Bayesian Optimization tutorial (#3048).
- HIPE tutorial notebook (#3102).
- Optimization help example notebook (#3148).
- Fix the shape of `NoisyExpectedHypervolumeMixin._initial_hvs` (#3090).
- Fix `KroneckerMultiTaskGP.posterior` transforming inputs twice (#3132).
- Fix minor bugs in `safe_math.py` (#3172).
- Make SAAS prior sampleable (#3105).
- Add retries to `optimize_with_nsgaii` (#3116).
- Allow `PosteriorTransform` API to optionally accept features `X` (#3117).
- Clean up `get_constants_like` usage in `botorch/utils/probability` (#3131).
- Improve gradient stability in BVN (#3143).
- Enable evaluation of `MultiTaskGP` with unobserved tasks (#3145).
- Simplify task value remapping API (#3163).
- Update `optimize_acqf_homotopy` to support mixed optimization (#3165).
- Support custom data covariance modules in `SingleTaskMultifidelityGP` (#3167).
- Add task-level means to `SaasFullyBayesianMultiTaskGP` (#3175).
- Use `HadamardGaussianLikelihood` in `HeterogeneousMTGP` for inferred noise (#3176).
- Add multi-task mixin support to `PyroModel` hierarchy (#3177).
- Remove deprecated APIs for v0.17 (#3134).
  - `get_fitted_map_saas_ensemble`
  - `qMultiObjectiveMaxValueEntropy`
  - `FullyBayesianPosterior`
  - `task_feature` parameter from `SingleTaskGP.construct_inputs`
  - `fixed_features` argument from `optimize_acqf_homotopy`
- HIPE acquisition function (#3083, #3108).
- Clamp to 0 to avoid sqrt of negative numbers in `mvn_hellinger_distance` (#3109).
- Add support for styles to PFN models (#3099, #3114).
- Batched NEI with PFNs, enable more styles in botorch PFNs (#3113).
- New default parameterization for `MultiTaskGP` (#3049). See discussion: meta-pytorch#3065
- Add `PositiveIndexKernel` (#3047).
- Add `HeterogeneousMTGP` for transfer learning between different search spaces (#3073).
- Add missing int cast in `MCSampler` (#3062).
- Fix shape error in `qNegIntegratedPosteriorVariance` (#3068).
- Add taus to the model state dict for `EnsembleMapSaasSingleTaskGP` for consistency when loading state dict (#3069).
- Fix confusing warning when using input transforms with deterministic models (#3078).
- Structure `options` arguments in `gen_candidates_torch` (#3019).
- Set default `cache_root` based on whether the model supports it (#3075).
- Use `nanmean` and `nanstd` in `Standardize` outcome transform (#3072).
- Require GPyTorch>=1.14.2 (#3055).
- Add `EnsembleMapSaasSingleTaskGP` (#3035, #3038, #3040).
- Allow different inferred noise levels for each task in `MultitaskGP` (#2997).
- Allow `LatentKroneckerGP` model to support different `T` values at train and test time (#3032, #3037).
- Allow `qHypervolumeKnowledgeGradient` to return log values for better numerical stability (#2974, #2976, #2979).
- Add `NumericToCategoricalEncoding` input transform (#2907).
- Add `MatheronPathModel`, a `DeterministicModel` returning a Matheron path sample (#2984).
- Project points generated in acquisition function optimization to the feasible space (#3010).
- Add support for non-uniform model weights in `EnsembleModel` and `EnsemblePosterior` (#2993).
- Allow optimizers to support negative indices for fixed features (#2970).
- Add worst known feasible value to constrained test problems (#3016).
- Fix `optimize_acqf_mixed_alternating` initialization with categorical features (#2986).
- Use `IIDNormalSampler` for `PosteriorList` by default to fix issue with correlated Sobol samples (#2977).
- Fix `condition_on_observations` to correctly apply input transforms and properly add data to `train_inputs` (#2989, #2990, #3034).
- Fix handling of input transforms for `AdditiveMapSaasSingleTaskGP` (#3042).
- Preserve train inputs and targets through transforms (#3044).
- Improve how `qNEHVI` handles pending points to avoid duplicate suggestions when initial pending points are passed (#2985).
- Add support for missing tasks in multi-task GP models (#2960).
- Add input constructor for `LogConstrainedExpectedImprovement` (#2973).
- Improve error handling and update documentation for inter-point constraints (#3003).
- Make `AnalyticAcquisitionFunction._mean_and_sigma()` return output dim consistently (#3028).
- Improve initialization with continuous relaxation in `optimize_acqf_mixed_alternating` (#3041).
- Implement `ContextualDataset.__eq__()` (#3005).
- Check shape of state dict when comparing input transforms (#3051).
- Add `py.typed` file to prevent tools complaining about type stubs (#2982).
- Improve best feasible objective computation; point user to use probability of feasibility (#3011).
- Deprecate `get_fitted_map_saas_ensemble()` in favor of `EnsembleMapSaasSingleTaskGP` (#3036).
- Add `MCAcquisition` support to `PFNModel` (#3031).
- Add copula-based multivariate posterior for `PFNModel` (#3045).
- Allow `PFNModel` to load checkpoints from trainings done with `automl/PFNs` (#3017).
- Add support for Kaiming/He initialization for the VBLL mean (#3053).
This is a compatibility release, coming only one week after 0.15.0.
- Enable optimizing a sequence of acquisition functions in `optimize_acqf` (#2931).
- NP Regression Model w/ LIG Acquisition (#2683).
- Fully Bayesian Matern GP with dimension scaling prior (#2855).
- Option for input warping in non-linear fully Bayesian GPs (#2858).
- Support for `condition_on_observations` in `FullyBayesianMultiTaskGP` (#2871).
- Improvements to `optimize_acqf_mixed_alternating`:
  - Support categoricals in alternating optimization (#2866).
  - Batch mixed optimization (#2895).
  - Non-equidistant discrete dimensions for `optimize_acqf_mixed_alternating` (#2923).
  - Update syntax for categoricals in `optimize_acqf_mixed_alternating` (#2942).
  - Equality constraints for `optimize_acqf_mixed_alternating` (#2944).
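The alternating scheme behind `optimize_acqf_mixed_alternating` cycles between improving the continuous dimensions with the discrete values held fixed, and picking the best discrete values with the continuous point held fixed. A toy pure-Python sketch of that idea follows (`alternating_minimize`, its arguments, and the greedy continuous step are all illustrative assumptions, not BoTorch's implementation):

```python
def alternating_minimize(f, x_cont, disc_choices, x_disc, iters=20, step=0.1):
    """Toy alternating optimization over one continuous and one discrete variable.

    Holds the discrete value fixed and takes greedy continuous steps of size
    `step`, then holds the continuous value fixed and picks the best discrete
    choice by enumeration. Purely illustrative of the alternating pattern.
    """
    for _ in range(iters):
        # Continuous phase: greedy hill descent with a fixed step size.
        for delta in (step, -step):
            while f(x_cont + delta, x_disc) < f(x_cont, x_disc):
                x_cont += delta
        # Discrete phase: exhaustive enumeration of the (small) choice set.
        x_disc = min(disc_choices, key=lambda d: f(x_cont, d))
    return x_cont, x_disc
```

With `f(c, d) = (c - 1.5)**2 + (d - 2)**2`, starting from `(0.0, 0)`, the scheme settles near `c = 1.5` (to within the step size) and `d = 2`.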
- Multi-output acquisition functions and related utilities:
  - Multi-Output Acquisition Functions (#2935).
  - Utility for greedily selecting an approximate hypervolume maximizing subset (#2936).
  - Update optimize with NSGA-II (#2937).
  - Add utility for running pymoo NSGA-II (#2868).
- Batched L-BFGS-B for more efficient acquisition function optimization (#2870, #2892).
- Pathwise Thompson sampling for ensemble models (#2877).
- ROBOT tutorial notebook (#2883).
- Add community notebooks to the botorch.org website (#2913).
- Fix model paths in prior fitted networks (#2843).
- Fix a bug where input transforms were not applied in fully Bayesian models in train mode (#2859).
- Fix local `Y` vs global `Y_train` in `generate_batch` function in TuRBO tutorial (#2862).
- Fix CUDA support for `FullyBayesianMTGP` (#2875).
- Fix edge case with NaNs in `is_non_dominated` (#2925).
- Normalize for correct fidelity in `qLowerBoundMaxValueEntropy` (#2930).
- Fix bug where the `botorch_community` `VBLLModel` posterior doesn't work with a single-value tensor (#2929).
- Fix variance shape bug in Riemann posterior (#2939).
- Fix input constructor for `LogProbabilityOfFeasibility` (#2945).
- Fix `AugmentedRosenbrock` problem and expand testing for optimizers (#2950).
- Improved documentation for `optimize_acqf` (#2865).
- Fully Bayesian Multi-Task GP cleanup (#2869).
- `average_over_ensemble_models` decorator for acquisition functions (#2873).
- Changes to I-BNN tutorial (#2889).
- Allow batched fixed features in `gen_candidates_scipy` and `gen_candidates_torch` (#2893).
- Refactor of `MultiTask`/`FullyBayesianMultiTaskGP` to use `ProductKernel` and `IndexKernel` (#2908).
- Various changes to PFNs to improve Ax compatibility (#2915, #2940).
- Eliminate expensive indexing in `separate_mtmvn` (#2920).
- Added reset method to `StoppingCriterion` (#2927).
- Simplify closure dispatch (#2947).
- Add `BaseTestProblem.is_minimization_problem` property (#2949).
- Simplify `NdarrayOptimizationClosure` (#2951).
- Prior Fitted Network (PFN) surrogate model integration (#2784).
- Variational Bayesian last-layer models as surrogate models (#2754).
- Probabilities of feasibility for classifier-based constraints in acquisition functions (#2776).
- Helper for evaluating feasibility of candidate points (#2733).
  - Check for feasibility in `gen_candidates_scipy` and error out for infeasible candidates (#2737).
  - Return a feasible candidate if there is one and `return_best_only=True` (#2778).
- Allow for observation noise without a provided `evaluation_mask` in `ModelListGP` (#2735).
- Implement incremental `qLogNEI` via an `incremental` argument to `qLogNoisyExpectedImprovement` (#2760).
- Add utility for computing AIC/BIC/MLL from a model (#2785).
- New test functions:
  - Multi-fidelity test functions with discrete fidelities (#2796).
  - Keane bump function (#2802).
  - Mixed Ackley test function (#2830).
  - LABS test function (#2832).
- Add parameter types to test functions to support problems defined in mixed / discrete spaces (#2809).
- Add input validation to test functions (#2829).
- Add `[q]LogProbabilityOfFeasibility` acquisition functions (#2815).
- Remove hard-coded `dtype` from `best_f` buffers (#2725).
- Fix `dtype`/`nan` issue in `StratifiedStandardize` (#2757).
- Properly handle observed noise in `AdditiveMapSaasSingleTaskGP` with outcome transforms (#2763).
- Do not count STOPPED (due to specified budget) as a model fitting failure (#2767).
- Ensure that `initialize_q_batch` always includes the maximum value when called in batch mode (#2773).
- Fix posterior with observation noise in batched MTGP models (#2782).
- Detach tensor in `gen_candidates_scipy` to avoid test failure due to new warning (#2797).
- Fix batch computation in Pivoted Cholesky (#2823).
- Add optimal values for synthetic constrained optimization problems (#2730).
- Update `max_hv` and reference point for Penicillin problem (#2771).
- Add optimal value to SpeedReducer problem (#2799).
- Update `nonlinear_constraint_is_feasible` to return a boolean tensor (#2731).
- Restructure sampling methods for info-theoretic acquisition functions (#2753).
- Prune baseline points in `qLogNEI` by default (#2762).
- Misc updates to MES-based acquisition functions (#2769).
- Pass option to reset submodules in train method for fully Bayesian models (#2783).
- Put outcome transforms into train mode in model constructors (#2817).
- `LogEI`: select `cache_root` based on model support (#2820).
- Remove Ax dependency from BoTorch tutorials and reference Ax tutorials instead (#2839).
- Remove deprecated `gp_sampling` module (#2768).
- Remove `qMultiObjectiveMaxValueEntropy` acquisition function (#2800).
- Remove model converters (#2801).
- BoTorch website has been upgraded to utilize Docusaurus v3, with the API reference being hosted by ReadTheDocs. The tutorials now expose an option to open with Colab, for easy access to a runtime with modifiable tutorials. The old versions of the website can be found at archive.botorch.org (#2653).
- `RobustRelevancePursuitSingleTaskGP`, a robust Gaussian process model that adaptively identifies outliers and leverages Bayesian model selection (paper) (#2608, #2690, #2707).
- `LatentKroneckerGP`, a scalable model for data on partially observed grids, like the joint modeling of hyper-parameters and partially completed learning curves in AutoML (paper) (#2647).
- Add MAP-SAAS model, which utilizes the sparse axis-aligned subspace priors (paper) with MAP model fitting (#2694).
- Require GPyTorch==1.14 and linear_operator==0.6 (#2710).
- Remove support for anaconda (official package) (#2617).
- Remove `mpmath` dependency pin (#2640).
- Updates to optimization routines to support SciPy>1.15:
  - Use `threadpoolctl` in `minimize_with_timeout` to prevent CPU oversubscription (#2712).
  - Update optimizer output parsing to make model fitting compatible with SciPy>1.15 (#2667).
- Add support for priors in OAK Kernel (#2535).
- Add `BatchBroadcastedTransformList`, which broadcasts a list of `InputTransform`s over batch shapes (#2558).
- `InteractionFeatures` input transform (#2560).
- Implement `percentile_of_score`, which takes inputs `data` and `score`, and returns the percentile of values in `data` that are below `score` (#2568).
- Add `optimize_acqf_mixed_alternating`, which supports optimization over mixed discrete & continuous spaces (#2573).
- Add support for `PosteriorTransform` to `get_optimal_samples` and `optimize_posterior_samples` (#2576).
- Support inequality constraints & `X_avoid` in `optimize_acqf_discrete` (#2593).
- Add ability to mix batch initial conditions and internal IC generation (#2610).
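The `percentile_of_score` semantics described above can be sketched in a few lines of pure Python (BoTorch's version operates on tensors; this scalar sketch is illustrative only):

```python
def percentile_of_score(data, score):
    """Percentage of values in `data` that lie strictly below `score`."""
    if not data:
        raise ValueError("`data` must be non-empty")
    below = sum(1 for value in data if value < score)
    return 100.0 * below / len(data)
```

For example, `percentile_of_score([1, 2, 3, 4], 2.5)` gives `50.0`, since two of the four values fall below the score.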
- Add `qPosteriorStandardDeviation` acquisition function (#2634).
- TopK downselection for initial batch generation (#2636).
- Support optimization over mixed spaces in `optimize_acqf_homotopy` (#2639).
- Add `InfeasibilityError` exception class (#2652).
- Support `InputTransform`s in `SparseOutlierLikelihood` and `get_posterior_over_support` (#2659).
- `StratifiedStandardize` outcome transform (#2671).
- Add `center` argument to `Normalize` (#2680).
- Add input normalization step in `Warp` input transform (#2692).
- Support mixing fully Bayesian & `SingleTaskGP` models in `ModelListGP` (#2693).
- Add abstract fully Bayesian GP class and fully Bayesian linear GP model (#2696, #2697).
- Tutorial on BO constrained by probability of classification model (#2700).
- Fix error in decoupled_mobo tutorial due to torch/numpy issues (#2550).
- Raise error for MTGP in `batch_cross_validation` (#2554).
- Fix `posterior` method in `BatchedMultiOutputGPyTorchModel` for tracing JIT (#2592).
- Replace hard-coded double precision in test_functions with default dtype (#2597).
- Remove `as_tensor` argument of `set_tensors_from_ndarray_1d` (#2615).
- Skip fixed feature enumerations in `optimize_acqf_mixed` that can't satisfy the parameter constraints (#2614).
- Fix `get_default_partitioning_alpha` for >7 objectives (#2646).
- Fix random seed handling in `sample_hypersphere` (#2688).
- Fix bug in `optimize_objective` with fixed features (#2691).
- `FullyBayesianSingleTaskGP.train` should not return `None` (#2702).
- More efficient sampling from `KroneckerMultiTaskGP` (#2460).
- Update `HigherOrderGP` to use new priors & standardize outcome transform by default (#2555).
- Update `initialize_q_batch` methods to return both candidates and the corresponding acquisition values (#2571).
- Update optimization documentation with LogEI insights (#2587).
- Make all arguments in `optimize_acqf_homotopy` explicit (#2588).
- Introduce `trial_indices` argument to `SupervisedDataset` (#2595).
- Make optimizers raise an error when provided negative indices for fixed features (#2603).
- Make input transforms `Module`s by default (#2607).
- Reduce memory usage in `ConstrainedMaxPosteriorSampling` (#2622).
- Add `clone` method to datasets (#2625).
- Add support for continuous relaxation within `optimize_acqf_mixed_alternating` (#2635).
- Update indexing in `qLogNEI._get_samples_and_objectives` to support multiple input batches (#2649).
- Pass `X` to `OutcomeTransform`s (#2663).
- Use mini-batches when evaluating candidates within `optimize_acqf_discrete_local_search` (#2682).
- Remove `HeteroskedasticSingleTaskGP` (#2616).
- Remove `FixedNoiseDataset` (#2626).
- Remove support for legacy format non-linear constraints (#2627).
- Remove `maximize` option from information theoretic acquisition functions (#2590).
- Update most models to use dimension-scaled log-normal hyperparameter priors by
  default, which makes performance much more robust to dimensionality. See
  discussion #2451 for details. The only models that are not changed are the
  fully Bayesian models and `PairwiseGP`; for models that utilize a composite kernel, such as multi-fidelity/task/context, this change only affects the base kernel (#2449, #2450, #2507).
- Use `Standardize` by default in all the models using the upgraded priors. In addition to reducing the amount of boilerplate needed to initialize a model, this change was motivated by the change to default priors, because the new priors will work less well when data is not standardized. Users who do not want to use transforms should explicitly pass in `None` (#2458, #2532).
- Unpin NumPy (#2459).
- Require PyTorch>=2.0.1, GPyTorch==1.13, and linear_operator==0.5.3 (#2511).
- Introduce `PathwiseThompsonSampling` acquisition function (#2443).
- Enable `qBayesianActiveLearningByDisagreement` to accept a posterior transform, and improve its implementation (#2457).
- Enable `SaasPyroModel` to sample via NUTS when training data is empty (#2465).
- Add multi-objective `qBayesianActiveLearningByDisagreement` (#2475).
- Add input constructor for `qNegIntegratedPosteriorVariance` (#2477).
- Introduce `qLowerConfidenceBound` (#2517).
- Add input constructor for `qMultiFidelityHypervolumeKnowledgeGradient` (#2524).
- Add `posterior_transform` to `ApproximateGPyTorchModel.posterior` (#2531).
- Fix `batch_shape` default in `OrthogonalAdditiveKernel` (#2473).
- Ensure all tensors are on CPU in `HitAndRunPolytopeSampler` (#2502).
- Fix duplicate logging in `generation/gen.py` (#2504).
- Raise exception if `X_pending` is set on the underlying `AcquisitionFunction` in prior-guided `AcquisitionFunction` (#2505).
- Make affine input transforms error with data of incorrect dimension, even in eval mode (#2510).
- Use fidelity-aware `current_value` in input constructor for `qMultiFidelityKnowledgeGradient` (#2519).
- Apply input transforms when computing MLL in model closures (#2527).
- Detach `fval` in `torch_minimize` to remove an opportunity for memory leaks (#2529).
- Clarify incompatibility of inter-point constraints with `get_polytope_samples` (#2469).
- Update tutorials to use the log variants of EI-family acquisition functions, don't make tutorials pass `Standardize` unnecessarily, and other simplifications and cleanup (#2462, #2463, #2490, #2495, #2496, #2498, #2499).
- Remove deprecated `FixedNoiseGP` (#2536).
- More informative warnings about failure to standardize or normalize data (#2489).
- Suppress irrelevant warnings in `qHypervolumeKnowledgeGradient` helpers (#2486).
- Cleaner `botorch/acquisition/multi_objective` directory structure (#2485).
- With `AffineInputTransform`, always require data to have at least two dimensions (#2518).
- Remove deprecated argument `data_fidelity` to `SingleTaskMultiFidelityGP` and deprecated model `FixedNoiseMultiFidelityGP` (#2532).
- Raise an `OptimizationGradientError` when optimization produces NaN gradients (#2537).
- Improve numerics by replacing `torch.log(1 + x)` with `torch.log1p(x)` and `torch.exp(x) - 1` with `torch.special.expm1` (#2539, #2540, #2541).
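The motivation for the `log1p`/`expm1` substitutions: when |x| is far below machine epsilon, `1 + x` rounds to exactly `1.0`, so the naive forms return `0.0` and lose all information in `x`, while the dedicated functions retain full relative accuracy near zero. The effect is easy to see with the Python standard library (stdlib `math` here stands in for the torch equivalents):

```python
import math

x = 1e-18  # far below double-precision epsilon (~2.2e-16)

# Naive forms: 1 + x rounds to exactly 1.0, so the result collapses to 0.0.
naive_log = math.log(1 + x)   # 0.0
naive_exp = math.exp(x) - 1   # 0.0

# Dedicated functions evaluate the same quantities accurately near zero.
stable_log = math.log1p(x)    # ~1e-18
stable_exp = math.expm1(x)    # ~1e-18
```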
- Pin NumPy to <2.0 (#2382).
- Require GPyTorch 1.12 and LinearOperator 0.5.2 (#2408, #2441).
- Support evaluating posterior predictive in `MultiTaskGP` (#2375).
- Infinite width BNN kernel (#2366) and the corresponding tutorial (#2381).
- An improved elliptical slice sampling implementation (#2426).
- Add a helper for producing a `DeterministicModel` using a Matheron path (#2435).
- Stop allowing some arguments to be ignored in acqf input constructors (#2356).
- Reap deprecated `**kwargs` argument from `optimize_acqf` variants (#2390).
- Delete `DeterministicPosterior` and `DeterministicSampler` (#2391, #2409, #2410).
- Removed deprecated `CachedCholeskyMCAcquisitionFunction` (#2399).
- Deprecate model conversion code (#2431).
- Deprecate `gp_sampling` module in favor of pathwise sampling (#2432).
- Fix observation noise shape for batched models (#2377).
- Fix `sample_all_priors` to not sample one value for all lengthscales (#2404).
- Make `(Log)NoisyExpectedImprovement` create a correct fantasy model with non-default `SingleTaskGP` (#2414).
- Various documentation improvements (#2395, #2425, #2436, #2437, #2438).
- Clean up `**kwargs` arguments in `qLogNEI` (#2406).
- Add a `NumericsWarning` for Legacy EI implementations (#2429).
See 0.11.3 release. This release failed due to mismatching GPyTorch and LinearOperator versions.
- Implement `qLogNParEGO` (#2364).
- Support picking best of multiple fit attempts in `fit_gpytorch_mll` (#2373).
- Many functions that used to silently ignore arbitrary keyword arguments will now raise an exception when passed unsupported arguments (#2327, #2336).
- Remove `UnstandardizeMCMultiOutputObjective` and `UnstandardizePosteriorTransform` (#2362).
- Remove correlation between the step size and the step direction in `sample_polytope` (#2290).
- Fix pathwise sampler bug (#2337).
- Explicitly check timeout against `None` so that `0.0` isn't ignored (#2348).
- Fix boundary handling in `sample_polytope` (#2353).
- Avoid division by zero in `normalize` & `unnormalize` when lower & upper bounds are equal (#2363).
- Update `sample_all_priors` to support wider set of priors (#2371).
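The division-by-zero fix for `normalize`/`unnormalize` concerns the degenerate case where a dimension's lower and upper bounds coincide. A scalar sketch of such a guard (pure Python, illustrative only; BoTorch's functions are tensor-valued and its exact handling of degenerate bounds may differ):

```python
def normalize(x, lower, upper):
    """Map x from [lower, upper] onto [0, 1], guarding the degenerate case."""
    span = upper - lower
    if span == 0:
        # Bounds coincide: the affine map is degenerate, so return a fixed
        # value instead of dividing by zero (an assumed choice for this sketch).
        return 0.0
    return (x - lower) / span


def unnormalize(u, lower, upper):
    """Inverse map from [0, 1] back to [lower, upper]."""
    return lower + u * (upper - lower)
```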
- Clarify `is_non_dominated` behavior with NaN (#2332).
- Add input constructor for `qEUBO` (#2335).
- Add `LogEI` as a baseline in the `TuRBO` tutorial (#2355).
- Update polytope sampling code and add thinning capability (#2358).
- Add initial objective values to initial state for sample efficiency (#2365).
- Clarify behavior on standard deviations with <1 degree of freedom (#2357).
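On the `is_non_dominated` clarification: for maximization, a point is non-dominated when no other point is at least as good in every objective and strictly better in at least one, and NaN coordinates make a point incomparable. A small pure-Python sketch of these semantics (treating any NaN-containing point as dominated is an assumption made here for illustration, not necessarily BoTorch's exact rule):

```python
import math


def is_non_dominated(points):
    """Return one flag per point: True if no other point dominates it (maximization)."""

    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    flags = []
    for i, p in enumerate(points):
        if any(math.isnan(v) for v in p):
            # Assumed convention: NaN points never count as non-dominated.
            flags.append(False)
            continue
        flags.append(not any(dominates(q, p) for j, q in enumerate(points) if j != i))
    return flags
```

Note that NaN comparisons are always `False`, so a NaN-containing point also never dominates anything.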
- Require Python >= 3.10 (#2293).
- SCoreBO and Bayesian Active Learning acquisition functions (#2163).
- Fix non-None constraint noise levels in some constrained test problems (#2241).
- Fix inverse cost-weighted utility behaviour for non-positive acquisition values (#2297).
- Don't allow unused keyword arguments in `Model.construct_inputs` (#2186).
- Re-map task values in MTGP if they are not contiguous integers starting from zero (#2230).
- Unify `ModelList` and `ModelListGP` `subset_output` behavior (#2231).
- Ensure `mean` and `interior_point` of `LinearEllipticalSliceSampler` have correct shapes (#2245).
- Speed up task covariance of `LCEMGP` (#2260).
- Improvements to `batch_cross_validation`, support for model init kwargs (#2269).
- Support custom `all_tasks` for MTGPs (#2271).
- Error out if scipy optimizer does not support bounds / constraints (#2282).
- Support diagonal covariance root with fixed indices for `LinearEllipticalSliceSampler` (#2283).
- Make `qNIPV` a subclass of `AcquisitionFunction` rather than `AnalyticAcquisitionFunction` (#2286).
- Increase code-sharing of `LCEMGP` & define `construct_inputs` (#2291).
- Remove deprecated args from base `MCSampler` (#2228).
- Remove deprecated `botorch/generation/gen/minimize` (#2229).
- Remove `fit_gpytorch_model` (#2250).
- Remove `requires_grad_ctx` (#2252).
- Remove `base_samples` argument of `GPyTorchPosterior.rsample` (#2254).
- Remove deprecated `mvn` argument to `GPyTorchPosterior` (#2255).
- Remove deprecated `Posterior.event_shape` (#2320).
- Remove `**kwargs` & deprecated `indices` argument of `Round` transform (#2321).
- Remove `Standardize.load_state_dict` (#2322).
- Remove `FixedNoiseMultiTaskGP` (#2323).
- Introduce updated guidelines and a new directory for community contributions (#2167).
- Add `qEUBO` preferential acquisition function (#2192).
- Add Multi Information Source Augmented GP (#2152).
- Fix `condition_on_observations` in fully Bayesian models (#2151).
- Fix for bug that occurs when splitting single-element bins, use default BoTorch kernel for BAxUS (#2165).
- Fix a bug when non-linear constraints are used with `q > 1` (#2168).
- Remove unsupported `X_pending` from `qMultiFidelityLowerBoundMaxValueEntropy` constructor (#2193).
- Don't allow `data_fidelities=[]` in `SingleTaskMultiFidelityGP` (#2195).
- Fix `EHVI`, `qEHVI`, and `qLogEHVI` input constructors (#2196).
- Fix input constructor for `qMultiFidelityMaxValueEntropy` (#2198).
- Add ability to not deduplicate points in `_is_non_dominated_loop` (#2203).
- Minor improvements to `MVaR` risk measure (#2150).
- Add support for multitask models to `ModelListGP` (#2154).
- Support unspecified noise in `ContextualDataset` (#2155).
- Update `HVKG` sampler to reflect the number of model outputs (#2160).
- Release restriction in `OneHotToNumeric` that the categoricals are the trailing dimensions (#2166).
- Standardize broadcasting logic of `q(Log)EI`'s `best_f` and `compute_best_feasible_objective` (#2171).
- Use regular inheritance instead of dispatcher to special-case `PairwiseGP` logic (#2176).
- Support `PBO` in `EUBO`'s input constructor (#2178).
- Add `posterior_transform` to `qMaxValueEntropySearch`'s input constructor (#2181).
- Do not normalize or standardize dimension if all values are equal (#2185).
- Reap deprecated support for objective with 1 arg in `GenericMCObjective` (#2199).
- Consistent signature for `get_objective_weights_transform` (#2200).
- Update context order handling in `ContextualDataset` (#2205).
- Update contextual models for use in MBM (#2206).
- Remove `(Identity)AnalyticMultiOutputObjective` (#2208).
- Reap deprecated support for `soft_eval_constraint` (#2223). Please use `botorch.utils.sigmoid` instead.
- Pin `mpmath <= 1.3.0` to avoid CI breakages due to removed modules in the latest alpha release (#2222).
Hypervolume Knowledge Gradient (HVKG):
- Add `qHypervolumeKnowledgeGradient`, which seeks to maximize the difference in hypervolume of the hypervolume-maximizing set of a fixed size after conditioning the unknown observation(s) that would be received if X were evaluated (#1950, #1982, #2101).
- Add tutorial on decoupled Multi-Objective Bayesian Optimization (MOBO) with HVKG (#2094).
Other new features:
- Add `MultiOutputFixedCostModel`, which is useful for decoupled scenarios where the objectives have different costs (#2093).
- Enable `q > 1` in acquisition function optimization when nonlinear constraints are present (#1793).
- Support different noise levels for different outputs in test functions (#2136).
- Fix fantasization with a `FixedNoiseGaussianLikelihood` when `noise` is known and `X` is empty (#2090).
- Make `LearnedObjective` compatible with constraints in acquisition functions regardless of `sample_shape` (#2111).
- Make input constructors for `qExpectedImprovement`, `qLogExpectedImprovement`, and `qProbabilityOfImprovement` compatible with `LearnedObjective` regardless of `sample_shape` (#2115).
- Fix handling of constraints in `qSimpleRegret` (#2141).
- Increase default sample size for `LearnedObjective` (#2095).
- Allow passing in `X` with or without fidelity dimensions in `project_to_target_fidelity` (#2102).
- Use full-rank task covariance matrix by default in SAAS MTGP (#2104).
- Rename `FullyBayesianPosterior` to `GaussianMixturePosterior`; add `_is_ensemble` and `_is_fully_bayesian` attributes to `Model` (#2108).
- Various improvements to tutorials including speedups, improved explanations, and compatibility with newer versions of libraries.
- Re-establish compatibility with PyTorch 1.13.1 (#2083).
- Additional "Log" acquisition functions for multi-objective optimization with better numerical behavior, which often leads to significantly improved BO performance over their non-"Log" counterparts:
  - `qLogEHVI` (#2036).
  - `qLogNEHVI` (#2045, #2046, #2048, #2051).
- Support fully Bayesian models with `LogEI`-type acquisition functions (#2058).
- `FixedNoiseGP` and `FixedNoiseMultiFidelityGP` have been deprecated, their functionalities merged into `SingleTaskGP` and `SingleTaskMultiFidelityGP`, respectively (#2052, #2053).
- Removed deprecated legacy model fitting functions: `numpy_converter`, `fit_gpytorch_scipy`, `fit_gpytorch_torch`, `_get_extra_mll_args` (#1995, #2050).
- Support multiple data fidelity dimensions in `SingleTaskMultiFidelityGP` and (deprecated) `FixedNoiseMultiFidelityGP` models (#1956).
- Add `logsumexp` and `fatmax` to handle infinities and control asymptotic behavior in "Log" acquisition functions (#1999).
- Add outcome and feature names to datasets, implement `MultiTaskDataset` (#2015, #2019).
- Add constrained Hartmann and constrained Gramacy synthetic test problems (#2022, #2026, #2027).
- Support observed noise in `MixedSingleTaskGP` (#2054).
- Add `PosteriorStandardDeviation` acquisition function (#2060).
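The role of `logsumexp` in the "Log" acquisition functions is to evaluate `log(sum(exp(v_i)))` without overflow by factoring out the maximum term. A pure-Python sketch of that standard trick (illustrative only; BoTorch's versions, including the smooth-max `fatmax` variant, operate on tensors and handle additional edge cases):

```python
import math


def logsumexp(values):
    """Numerically stable log(sum(exp(v))) over a non-empty list of floats."""
    m = max(values)
    if math.isinf(m) and m < 0:
        return m  # all inputs are -inf, so the sum of exps is 0
    # Shifting by the max keeps every exponent <= 0, so no exp overflows.
    return m + math.log(sum(math.exp(v - m) for v in values))
```

Naively, `math.exp(1000.0)` overflows a double, while `logsumexp([1000.0, 1000.0])` returns `1000.0 + log(2)` without issue.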
- Fix input constructors for `qMaxValueEntropy` and `qMultiFidelityKnowledgeGradient` (#1989).
- Fix precision issue that arises from inconsistent data types in `LearnedObjective` (#2006).
- Fix fantasization with `FixedNoiseGP` and outcome transforms and use `FantasizeMixin` (#2011).
- Fix `LearnedObjective` base sample shape (#2021).
- Apply constraints in `prune_inferior_points` (#2069).
- Support non-batch evaluation of `PenalizedMCObjective` (#2073).
- Fix `Dataset` equality checks (#2077).
- Don't allow unused `**kwargs` in input_constructors except for a defined set of exceptions (#1872, #1985).
- Merge inferred and fixed noise LCE-M models (#1993).
- Fix import structure in `botorch.acquisition.utils` (#1986).
- Remove deprecated functionality: `weights` argument of `RiskMeasureMCObjective` and `squeeze_last_dim` (#1994).
- Make `X`, `Y`, `Yvar` into properties in datasets (#2004).
- Make synthetic constrained test functions subclass from `SyntheticTestFunction` (#2029).
- Add `construct_inputs` to contextual GP models `LCEAGP` and `SACGP` (#2057).
- Hot fix (#1973) for a few issues:
  - A naming mismatch between Ax's modular `BotorchModel` and BoTorch's acquisition input constructors, leading to outcome constraints in Ax not being used with single-objective acquisition functions in Ax's modular `BotorchModel`. The naming has been updated in Ax and consistent naming is now used in input constructors for single and multi-objective acquisition functions in BoTorch.
  - A naming mismatch in the acquisition input constructor `constraints` in `qNoisyLogExpectedImprovement`, which kept constraints from being used.
  - A bug in `compute_best_feasible_objective` that could lead to `-inf` incumbent values.
- Fix setting seed in `get_polytope_samples` (#1968).
- Merge `SupervisedDataset` and `FixedNoiseDataset` (#1945).
- Constrained tutorial updates (#1967, #1970).
- Resolve issues with missing pytorch binaries with py3.11 on Mac (#1966).
- Require linear_operator == 0.5.1 (#1963).
- Require Python >= 3.9.0 (#1924).
- Require PyTorch >= 1.13.1 (#1960).
- Require linear_operator == 0.5.0 (#1961).
- Require GPyTorch == 1.11 (#1961).
- Introduce `OrthogonalAdditiveKernel` (#1869).
- Speed up LCE-A kernel by over an order of magnitude (#1910).
- Introduce `optimize_acqf_homotopy`, for optimizing acquisition functions with homotopy (#1915).
- Introduce `PriorGuidedAcquisitionFunction` (PiBO) (#1920).
- Introduce `qLogExpectedImprovement`, which provides more accurate numerics than `qExpectedImprovement` and can lead to significant optimization improvements (#1936).
- Similarly, introduce `qLogNoisyExpectedImprovement`, which is analogous to `qNoisyExpectedImprovement` (#1937).
- Add constrained synthetic test functions `PressureVesselDesign`, `WeldedBeam`, `SpeedReducer`, and `TensionCompressionString` (#1832).
- Support decoupled fantasization (#1853) and decoupled evaluations in cost-aware utilities (#1949).
- Add `PairwiseBayesianActiveLearningByDisagreement`, an active learning acquisition function for PBO and BOPE (#1855).
- Support custom mean and likelihood in `MultiTaskGP` (#1909).
- Enable candidate generation (via `optimize_acqf`) with both `non_linear_constraints` and `fixed_features` (#1912).
- Introduce `L0PenaltyApproxObjective` to support L0 regularization (#1916).
- Enable batching in `PriorGuidedAcquisitionFunction` (#1925).
- Deprecate `FixedNoiseMultiTaskGP`; allow `train_Yvar` optionally in `MultiTaskGP` (#1818).
- Implement `load_state_dict` for SAAS multi-task GP (#1825).
- Improvements to `LinearEllipticalSliceSampler` (#1859, #1878, #1879, #1883).
- Allow passing in task features as part of X in `MTGP.posterior` (#1868).
- Improve numerical stability of log densities in pairwise GPs (#1919).
- Python 3.11 compliance (#1927).
- Enable using constraints with `SampleReducingMCAcquisitionFunction`s when using `input_constructor`s and `get_acquisition_function` (#1932).
- Enable use of `qLogExpectedImprovement` and `qLogNoisyExpectedImprovement` with Ax (#1941).
- Enable pathwise sampling modules to be converted to GPU (#1821).
- Allow `Standardize` modules to be loaded once trained (#1874).
- Fix memory leak in Inducing Point Allocators (#1890).
- Correct einsum computation in `LCEAKernel` (#1918).
- Properly whiten bounds in MVNXPB (#1933).
- Make `FixedFeatureAcquisitionFunction` convert floats to double-precision tensors rather than single-precision (#1944).
- Fix memory leak in `FullyBayesianPosterior` (#1951).
- Make `AnalyticExpectedUtilityOfBestOption` input constructor work correctly with multi-task GPs (#1955).
- Support inferred noise in `SaasFullyBayesianMultiTaskGP` (#1809).
- More informative error message when `Standardize` has wrong batch shape (#1807).
- Make GIBBON robust to numerical instability (#1814).
- Add `sample_multiplier` in EUBO's `acqf_input_constructor` (#1816).
- Only do checks for `_optimize_acqf_sequential_q` when it will be used (#1808).
- Fix an issue where `PairwiseGP` comparisons might be implicitly modified (#1811).
- Require GPyTorch == 1.10 and linear_operator == 0.4.0 (#1803).
- Polytope sampling for linear constraints along the q-dimension (#1757).
- Single-objective joint entropy search with additional conditioning, various improvements to entropy-based acquisition functions (#1738).
- Various updates to improve numerical stability of `PairwiseGP` (#1754, #1755).
- Change batch range for `FullyBayesianPosterior` (1176a38352b69d01def0a466233e6633c17d6862, #1773).
- Make `gen_batch_initial_conditions` more flexible (#1779).
- Deprecate `objective` in favor of `posterior_transform` for `MultiObjectiveAnalyticAcquisitionFunction` (#1781).
- Use `prune_baseline=True` as default for `qNoisyExpectedImprovement` (#1796).
- Add `batch_shape` property to `SingleTaskVariationalGP` (#1799).
- Change minimum inferred noise level for `SaasFullyBayesianSingleTaskGP` (#1800).
- Add `output_task` to `MultiTaskGP.construct_inputs` (#1753).
- Fix custom bounds handling in test problems (#1760).
- Remove incorrect `BotorchTensorDimensionWarning` (#1790).
- Fix handling of non-Container-typed positional arguments in `SupervisedDatasetMeta` (#1663).
- Add BAxUS tutorial (#1559).
- Various improvements to tutorials (#1703, #1706, #1707, #1708, #1710, #1711, #1718, #1719, #1739, #1740, #1742).
- Allow tensor input for `integer_indices` in `Round` transform (#1709).
- Expose `cache_root` in qNEHVI input constructor (#1730).
- Add `get_init_args` helper to `Normalize` & `Round` transforms (#1731).
- Allow custom dimensionality and improved gradient stability in `ModifiedFixedSingleSampleModel` (#1732).
- Improve batched model handling in `_verify_output_shape` (#1715).
- Fix qNEI with Derivative Enabled BO (#1716).
- Fix `get_infeasible_cost` for objectives that require X (#1721).
- Require PyTorch >= 1.12 (#1699).
- Introduce pathwise sampling API for efficiently sampling functions from (approximate) GP priors and posteriors (#1463).
- Add `OneHotToNumeric` input transform (#1517).
- Add `get_rounding_input_transform` utility for constructing rounding input transforms (#1531).
- Introduce `EnsemblePosterior` (#1636).
- Inducing Point Allocators for Sparse GPs (#1652).
- Pass `gen_candidates` callable in `optimize_acqf` (#1655).
- Adding `logmeanexp` and `logdiffexp` numerical utilities (#1657).
- Warn if inoperable keyword arguments are passed to optimizers (#1421).
- Add `BotorchTestCase.assertAllClose` (#1618).
- Add `sample_shape` property to `ListSampler` (#1624).
- Do not filter out `BoTorchWarning`s by default (#1630).
- Introduce a `DeterministicSampler` (#1641).
- Warn when optimizer kwargs are being ignored in BoTorch optim utils `_filter_kwargs` (#1645).
- Don't use `functools.lru_cache` on methods (#1650).
- More informative error when someone adds a module without updating the corresponding rst file (#1653).
- Make indices a buffer in `AffineInputTransform` (#1656).
- Clean up `optimize_acqf` and `_make_linear_constraints` (#1660, #1676).
- Support NaN `max_reference_point` in `infer_reference_point` (#1671).
- Use `_fast_solves` in `HOGP.posterior` (#1682).
- Approximate qPI using `MVNXPB` (#1684).
- Improve filtering for `cache_root` in `CachedCholeskyMCAcquisitionFunction` (#1688).
- Add option to disable retrying on optimization warning (#1696).
- Fix normalization in Chebyshev scalarization (#1616).
- Fix `TransformedPosterior` missing batch shape error in `_update_base_samples` (#1625).
- Detach `coefficient` and `offset` in `AffineTransform` in eval mode (#1642).
- Fix pickle error in `TorchPosterior` (#1644).
- Fix shape error in `optimize_acqf_cyclic` (#1648).
- Fixed bug where `optimize_acqf` didn't work with different batch sizes (#1668).
- Fix EUBO optimization error when two Xs are identical (#1670).
- Bug fix: `_filter_kwargs` was erroring when provided a function without a `__name__` attribute (#1678).
- This release includes changes for compatibility with the newest versions of linear_operator and gpytorch.
- Several acquisition functions now have "Log" counterparts, which provide better numerical behavior for improvement-based acquisition functions in areas where the probability of improvement is low. For example, `LogExpectedImprovement` (#1565) should behave better than `ExpectedImprovement`. These new acquisition functions are:
  - `LogExpectedImprovement` (#1565).
  - `LogNoisyExpectedImprovement` (#1577).
  - `LogProbabilityOfImprovement` (#1594).
  - `LogConstrainedExpectedImprovement` (#1594).
- Bug fix: Stop `ModelListGP.posterior` from quietly ignoring `Log`, `Power`, and `Bilog` outcome transforms (#1563).
- Turn off `fast_computations` setting in linear_operator by default (#1547).
- Require linear_operator == 0.3.0 (#1538).
- Require pyro-ppl >= 1.8.4 (#1606).
- Require gpytorch == 1.9.1 (#1612).
- Add `eta` to `get_acquisition_function` (#1541).
- Support 0d-features in `FixedFeatureAcquisitionFunction` (#1546).
- Add timeout ability to optimization functions (#1562, #1598).
- Add `MultiModelAcquisitionFunction`, an abstract base class for acquisition functions that require multiple types of models (#1584).
- Add `cache_root` option for qNEI in `get_acquisition_function` (#1608).
- Docstring corrections (#1551, #1557, #1573).
- Removal of `_fit_multioutput_independent` and `allclose_mll` (#1570).
- Better numerical behavior for fully Bayesian models (#1576).
- More verbose Scipy `minimize` failure messages (#1579).
- Lower-bound noise in `SaasPyroModel` to avoid Cholesky errors (#1586).
- Error rather than failing silently for NaN values in box decomposition (#1554).
- Make `get_bounds_as_ndarray` device-safe (#1567).
This release includes some backwards incompatible changes.
- Refactor `Posterior` and `MCSampler` modules to better support non-Gaussian distributions in BoTorch (#1486).
  - Introduced a `TorchPosterior` object that wraps a PyTorch `Distribution` object and makes it compatible with the rest of the `Posterior` API.
  - `PosteriorList` no longer accepts Gaussian base samples. It should be used with a `ListSampler` that includes the appropriate sampler for each posterior.
  - The MC acquisition functions no longer construct a Sobol sampler by default. Instead, they rely on a `get_sampler` helper, which dispatches an appropriate sampler based on the posterior provided.
  - The `resample` and `collapse_batch_dims` arguments to `MCSampler`s have been removed. The `ForkedRNGSampler` and `StochasticSampler` can be used to get the same functionality.
  - Refer to the PR for additional changes. We will update the website documentation to reflect these changes in a future release.
- #1191 refactors much of `botorch.optim` to operate based on closures that abstract away how losses (and gradients) are computed. By default, these closures are created using multiply-dispatched factory functions (such as `get_loss_closure`), which may be customized by registering methods with an associated dispatcher (e.g. `GetLossClosure`). Future releases will contain tutorials that explore these features in greater detail.
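The closure pattern can be illustrated without BoTorch. In the hypothetical sketch below (`make_quadratic_closure` and `gradient_descent` are illustrative names, not BoTorch API), the optimizer only ever sees a closure returning a loss and its gradient, so how those are computed is fully abstracted away:

```python
from typing import Callable, List, Tuple

# A loss closure maps parameters to (loss, gradient).
LossClosure = Callable[[List[float]], Tuple[float, List[float]]]

def make_quadratic_closure(target: List[float]) -> LossClosure:
    """Factory: builds a closure for the loss ||x - target||^2."""
    def closure(x: List[float]) -> Tuple[float, List[float]]:
        loss = sum((xi - ti) ** 2 for xi, ti in zip(x, target))
        grad = [2.0 * (xi - ti) for xi, ti in zip(x, target)]
        return loss, grad
    return closure

def gradient_descent(closure: LossClosure, x0: List[float],
                     lr: float = 0.1, steps: int = 200) -> List[float]:
    """Optimizer that only sees the closure, not the loss definition."""
    x = list(x0)
    for _ in range(steps):
        _, grad = closure(x)
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

x_opt = gradient_descent(make_quadratic_closure([1.0, -2.0]), [0.0, 0.0])
print(x_opt)  # converges to ~[1.0, -2.0]
```

Swapping in a different factory (e.g. one that computes gradients via autograd) requires no change to the optimizer loop, which is the point of the refactor.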
- Add mixed optimization for list optimization (#1342).
- Add entropy search acquisition functions (#1458).
- Add utilities for straight-through gradient estimators for discretization functions (#1515).
- Add support for categoricals in Round input transform and use STEs (#1516).
- Add closure-based optimizers (#1191).
- Do not count hitting maxiter as optimization failure & update default maxiter (#1478).
- `BoxDecomposition` cleanup (#1490).
- Deprecate `torch.triangular_solve` in favor of `torch.linalg.solve_triangular` (#1494).
- Various docstring improvements (#1496, #1499, #1504).
- Remove `__getitem__` method from `LinearTruncatedFidelityKernel` (#1501).
- Handle Cholesky errors when fitting a fully Bayesian model (#1507).
- Make eta configurable in `apply_constraints` (#1526).
- Support SAAS ensemble models in RFFs (#1530).
- Deprecate `botorch.optim.numpy_converter` (#1191).
- Deprecate `fit_gpytorch_scipy` and `fit_gpytorch_torch` (#1191).
- Enforce use of float64 in `NdarrayOptimizationClosure` (#1508).
- Replace deprecated np.bool with equivalent bool (#1524).
- Fix RFF bug when using FixedNoiseGP models (#1528).
- #1454 fixes a critical bug that affected multi-output `BatchedMultiOutputGPyTorchModel`s that were using a `Normalize` or `InputStandardize` input transform and trained using `fit_gpytorch_model/mll` with `sequential=True` (which was the default until 0.7.3). The input transform buffers would be reset after model training, leading to the model being trained on normalized input data but evaluated on raw inputs. This bug had been affecting model fits since the 0.6.5 release.
- #1479 changes the inheritance structure of `Model`s in a backwards-incompatible way. If your code relies on `isinstance` checks with BoTorch `Model`s, especially `SingleTaskGP`, you should revisit these checks to make sure they still work as expected.
- Require linear_operator == 0.2.0 (#1491).
- Introduce `bvn`, `MVNXPB`, `TruncatedMultivariateNormal`, and `UnifiedSkewNormal` classes / methods (#1394, #1408).
- Introduce `AffineInputTransform` (#1461).
- Introduce a `subset_transform` decorator to consolidate subsetting of inputs in input transforms (#1468).
- Add a warning when using float dtype (#1193).
- Let Pyre know that `AcquisitionFunction.model` is a `Model` (#1216).
- Remove custom `BlockDiagLazyTensor` logic when using `Standardize` (#1414).
- Expose `_aug_batch_shape` in `SaasFullyBayesianSingleTaskGP` (#1448).
- Adjust `PairwiseGP` `ScaleKernel` prior (#1460).
- Pull out `fantasize` method into a `FantasizeMixin` class, so it isn't so widely inherited (#1462, #1479).
- Don't use Pyro JIT by default, since it was causing a memory leak (#1474).
- Use `get_default_partitioning_alpha` for NEHVI input constructor (#1481).
- Fix `batch_shape` property of `ModelListGPyTorchModel` (#1441).
- Tutorial fixes (#1446, #1475).
- Bug-fix for Proximal acquisition function wrapper for negative base acquisition functions (#1447).
- Handle `RuntimeError` due to constraint violation while sampling from priors (#1451).
- Fix bug in model list with output indices (#1453).
- Fix input transform bug when sequentially training a `BatchedMultiOutputGPyTorchModel` (#1454).
- Fix a bug in `_fit_multioutput_independent` that failed mll comparison (#1455).
- Fix box decomposition behavior with empty or None `Y` (#1489).
- A full refactor of model fitting methods (#1134).
  - This introduces a new `fit_gpytorch_mll` method that multiple-dispatches on the model type. Users may register custom fitting routines for different combinations of MLLs, Likelihoods, and Models.
  - Unlike previous fitting helpers, `fit_gpytorch_mll` does not pass `kwargs` to `optimizer` and instead introduces an optional `optimizer_kwargs` argument.
  - When a model fitting attempt fails, `botorch.fit` methods restore modules to their original states.
  - `fit_gpytorch_mll` throws a `ModelFittingError` when all model fitting attempts fail.
  - Upon returning from `fit_gpytorch_mll`, `mll.training` will be `True` if fitting failed and `False` otherwise.
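The dispatch-on-model-type idea can be sketched with `functools.singledispatch`. The `ToyGP` classes and `fit_model` function below are hypothetical stand-ins to show the registration pattern, not the actual `fit_gpytorch_mll` implementation:

```python
from functools import singledispatch

class ModelFittingError(Exception):
    """Raised when no fitting routine applies (mirrors the behavior above)."""

class ToyGP:
    def __init__(self):
        self.training = True  # mirrors mll.training semantics described above

class ToyMixtureGP(ToyGP):
    pass

@singledispatch
def fit_model(model) -> str:
    # Fallback for types with no registered routine.
    raise ModelFittingError(f"No fitting routine for {type(model).__name__}")

@fit_model.register
def _(model: ToyGP) -> str:
    model.training = False  # fitting succeeded -> model left in eval mode
    return "default routine"

@fit_model.register
def _(model: ToyMixtureGP) -> str:
    # A user-registered routine for a more specific model type wins.
    model.training = False
    return "specialized routine"

print(fit_model(ToyGP()))         # default routine
print(fit_model(ToyMixtureGP()))  # specialized routine
```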
- Allow custom bounds to be passed in to `SyntheticTestFunction` (#1415).
- Deprecate weights argument of risk measures in favor of a `preprocessing_function` (#1400).
- Deprecate `fit_gpytorch_model`; to be superseded by `fit_gpytorch_mll`.
- Support risk measures in MOO input constructors (#1401).
- Fix fully Bayesian state dict loading when there are more than 10 models (#1405).
- Fix `batch_shape` property of `SaasFullyBayesianSingleTaskGP` (#1413).
- Fix `model_list_to_batched` ignoring the `covar_module` of the input models (#1419).
- Pin GPyTorch >= 1.9.0 (#1397).
- Pin linear_operator == 0.1.1 (#1397).
- Implement `SaasFullyBayesianMultiTaskGP` and related utilities (#1181, #1203).
- Support loading a state dict for `SaasFullyBayesianSingleTaskGP` (#1120).
- Update `load_state_dict` for `ModelList` to support fully Bayesian models (#1395).
- Add `is_one_to_many` attribute to input transforms (#1396).
- Fix `PairwiseGP` on GPU (#1388).
- Require python >= 3.8 (via #1347).
- Support for python 3.10 (via #1379).
- Require PyTorch >= 1.11 (via #1363).
- Require GPyTorch >= 1.9.0 (#1347).
  - GPyTorch 1.9.0 is a major refactor that factors out the lazy tensor functionality into a new `LinearOperator` library, which required a number of adjustments to BoTorch (#1363, #1377).
- Require pyro >= 1.8.2 (#1379).
- Add ability to generate the features appended in the `AppendFeatures` input transform via a generic callable (#1354).
- Add new synthetic test functions for sensitivity analysis (#1355, #1361).
- Use `time.monotonic()` instead of `time.time()` to measure duration (#1353).
- Allow passing `Y_samples` directly in `MARS.set_baseline_Y` (#1364).
- Patch `state_dict` loading for `PairwiseGP` (#1359).
- Fix `batch_shape` handling in `Normalize` and `InputStandardize` transforms (#1360).
- Require GPyTorch >= 1.8.1 (#1347).
- Support batched models in `RandomFourierFeatures` (#1336).
- Add a `skip_expand` option to `AppendFeatures` (#1344).
- Allow `qProbabilityOfImprovement` to use batch-shaped `best_f` (#1324).
- Make `optimize_acqf` re-attempt failed optimization runs and handle optimization errors in `optimize_acqf` and `gen_candidates_scipy` better (#1325).
- Reduce memory overhead in `MARS.set_baseline_Y` (#1346).
- Fix bug where `outcome_transform` was ignored for `ModelListGP.fantasize` (#1338).
- Fix bug causing `get_polytope_samples` to sample incorrectly when variables live in multiple dimensions (#1341).
- Add more descriptive docstrings for models (#1327, #1328, #1329, #1330) and for other classes (#1313).
- Expanded on the model documentation at botorch.org/docs/models (#1337).
- Require PyTorch >=1.10 (#1293).
- Require GPyTorch >=1.7 (#1293).
- Add MOMF (Multi-Objective Multi-Fidelity) acquisition function (#1153).
- Support `PairwiseLogitLikelihood` and modularize `PairwiseGP` (#1193).
- Add in transformed weighting flag to Proximal Acquisition function (#1194).
- Add `FeasibilityWeightedMCMultiOutputObjective` (#1202).
- Add `outcome_transform` to `FixedNoiseMultiTaskGP` (#1255).
- Support Scalable Constrained Bayesian Optimization (#1257).
- Support `SaasFullyBayesianSingleTaskGP` in `prune_inferior_points` (#1260).
- Implement MARS as a risk measure (#1303).
- Add MARS tutorial (#1305).
- Add `Bilog` outcome transform (#1189).
- Make `get_infeasible_cost` return a cost value for each outcome (#1191).
- Modify risk measures to accept `List[float]` for weights (#1197).
- Support `SaasFullyBayesianSingleTaskGP` in `prune_inferior_points_multi_objective` (#1204).
- BotorchContainers and BotorchDatasets: Large refactor of the original `TrainingData` API to allow for more diverse types of datasets (#1205, #1221).
- Proximal biasing support for multi-output `SingleTaskGP` models (#1212).
- Improve error handling in `optimize_acqf_discrete` with a check that `choices` is non-empty (#1228).
- Handle `X_pending` properly in `FixedFeatureAcquisition` (#1233, #1234).
- PE and PLBO support in Ax (#1240, #1241).
- Remove `model.train` call from `get_X_baseline` for better caching (#1289).
- Support `inf` values in `bounds` argument of `optimize_acqf` (#1302).
- Update `get_gp_samples` to support input / outcome transforms (#1201).
- Fix cached Cholesky sampling in `qNEHVI` when using `Standardize` outcome transform (#1215).
- Make `task_feature` as required input in `MultiTaskGP.construct_inputs` (#1246).
- Fix CUDA tests (#1253).
- Fix `FixedSingleSampleModel` dtype/device conversion (#1254).
- Prevent inappropriate transforms by putting input transforms into train mode before converting models (#1283).
- Fix `sample_points_around_best` when using 20 dimensional inputs or `prob_perturb` (#1290).
- Skip bound validation in `optimize_acqf` if inequality constraints are specified (#1297).
- Properly handle RFFs when used with a `ModelList` with individual transforms (#1299).
- Update `PosteriorList` to support deterministic-only models and fix `event_shape` (#1300).
- Add a note about observation noise in the posterior in `fit_model_with_torch_optimizer` notebook (#1196).
- Fix custom botorch model in Ax tutorial to support new interface (#1213).
- Update MOO docs (#1242).
- Add SMOKE_TEST option to MOMF tutorial (#1243).
- Fix `ModelListGP.condition_on_observations`/`fantasize` bug (#1250).
- Replace space with underscore for proper doc generation (#1256).
- Update PBO tutorial to use EUBO (#1262).
- Implement `ExpectationPosteriorTransform` (#903).
- Add `PairwiseMCPosteriorVariance`, a cheap active learning acquisition function (#1125).
- Support computing quantiles in the fully Bayesian posterior, add `FullyBayesianPosteriorList` (#1161).
- Add expectation risk measures (#1173).
- Implement Multi-Fidelity GIBBON (Lower Bound MES) acquisition function (#1185).
- Add an error message for one shot acquisition functions in `optimize_acqf_discrete` (#939).
- Validate the shape of the `bounds` argument in `optimize_acqf` (#1142).
- Minor tweaks to `SAASBO` (#1143, #1183).
- Minor updates to tutorials (24f7fda7b40d4aabf502c1a67816ac1951af8c23, #1144, #1148, #1159, #1172, #1180).
- Make it easier to specify a custom `PyroModel` (#1149).
- Allow passing in a `mean_module` to `SingleTaskGP`/`FixedNoiseGP` (#1160).
- Add a note about acquisitions using gradients to base class (#1168).
- Remove deprecated `box_decomposition` module (#1175).
- Bug-fixes for `ProximalAcquisitionFunction` (#1122).
- Fix missing warnings on failed optimization in `fit_gpytorch_scipy` (#1170).
- Ignore data related buffers in `PairwiseGP.load_state_dict` (#1171).
- Make `fit_gpytorch_model` properly honor the `debug` flag (#1178).
- Fix missing `posterior_transform` in `gen_one_shot_kg_initial_conditions` (#1187).
- Implement SAASBO - `SaasFullyBayesianSingleTaskGP` model for sample-efficient high-dimensional Bayesian optimization (#1123).
- Add SAASBO tutorial (#1127).
- Add `LearnedObjective` (#1131), `AnalyticExpectedUtilityOfBestOption` acquisition function (#1135), and a few auxiliary classes to support Bayesian optimization with preference exploration (BOPE).
- Add BOPE tutorial (#1138).
- Use `qKG.evaluate` in `optimize_acqf_mixed` (#1133).
- Add `construct_inputs` to SAASBO (#1136).
- Fix "Constraint Active Search" tutorial (#1124).
- Update "Discrete Multi-Fidelity BO" tutorial (#1134).
- Use `BOTORCH_MODULAR` in tutorials with Ax (#1105).
- Add `optimize_acqf_discrete_local_search` for discrete search spaces (#1111).
- Fix missing `posterior_transform` in qNEI and `get_acquisition_function` (#1113).
- Add `Standardize` input transform (#1053).
- Low-rank Cholesky updates for NEI (#1056).
- Add support for non-linear input constraints (#1067).
- New MOO problems: MW7 (#1077), disc brake (#1078), penicillin (#1079), RobustToy (#1082), GMM (#1083).
- Support multi-output models in MES using `PosteriorTransform` (#904).
- Add `Dispatcher` (#1009).
- Modify qNEHVI to support deterministic models (#1026).
- Store tensor attributes of input transforms as buffers (#1035).
- Modify NEHVI to support MTGPs (#1037).
- Make `Normalize` input transform input column-specific (#1047).
- Improve `find_interior_point` (#1049).
- Remove deprecated `botorch.distributions` module (#1061).
- Avoid costly application of posterior transform in Kronecker & HOGP models (#1076).
- Support heteroscedastic perturbations in `InputPerturbations` (#1088).
- Make risk measures more memory efficient (#1034).
- Properly handle empty `fixed_features` in optimization (#1029).
- Fix missing weights in `VaR` risk measure (#1038).
- Fix `find_interior_point` for negative variables & allow unbounded problems (#1045).
- Filter out indefinite bounds in constraint utilities (#1048).
- Make non-interleaved base samples use intuitive shape (#1057).
- Pad small diagonalization with zeros for `KroneckerMultitaskGP` (#1071).
- Disable learning of bounds in `preprocess_transform` (#1089).
- Fix `gen_candidates_torch` (4079164489613d436d19c7b2df97677d97dfa8dc).
- Catch runtime errors with ill-conditioned covar (#1095).
- Fix `compare_mc_analytic_acquisition` tutorial (#1099).
- Require PyTorch >=1.9 (#1011).
- Require GPyTorch >=1.6 (#1011).
- New `ApproximateGPyTorchModel` wrapper for various (variational) approximate GP models (#1012).
- New `SingleTaskVariationalGP` stochastic variational Gaussian Process model (#1012).
- Support for Multi-Output Risk Measures (#906, #965).
- Introduce `ModelList` and `PosteriorList` (#829).
- New Constraint Active Search tutorial (#1010).
- Add additional multi-objective optimization test problems (#958).
- Add `covar_module` as an optional input of `MultiTaskGP` models (#941).
- Add `min_range` argument to `Normalize` transform to prevent division by zero (#931).
- Add initialization heuristic for acquisition function optimization that samples around best points (#987).
- Update initialization heuristic to perturb a subset of the dimensions of the best points if the dimension is > 20 (#988).
- Modify `apply_constraints` utility to work with multi-output objectives (#994).
- Short-cut `t_batch_mode_transform` decorator on non-tensor inputs (#991).
- Use lazy covariance matrix in `BatchedMultiOutputGPyTorchModel.posterior` (#976).
- Fast low-rank Cholesky updates for `qNoisyExpectedHypervolumeImprovement` (#747, #995, #996).
- Update error handling to new PyTorch linear algebra messages (#940).
- Avoid test failures on Ampere devices (#944).
- Fixes to the `Griewank` test function (#972).
- Handle empty `base_sample_shape` in `Posterior.rsample` (#986).
- Handle `NotPSDError` and hitting `maxiter` in `fit_gpytorch_model` (#1007).
- Use `TransformedPosterior` for subclasses of `GPyTorchPosterior` (#983).
- Propagate `best_f` argument to `qProbabilityOfImprovement` in input constructors (f5a5f8b6dc20413e67c6234e31783ac340797a8d).
- Require GPyTorch >=1.5.1 (#928).
- Add `HigherOrderGP` composite Bayesian Optimization tutorial notebook (#864).
- Add Multi-Task Bayesian Optimization tutorial (#867).
- New multi-objective test problems from (#876).
- Add `PenalizedMCObjective` and `L1PenaltyObjective` (#913).
- Add a `ProximalAcquisitionFunction` for regularizing new candidates towards previously generated ones (#919, #924).
- Add a `Power` outcome transform (#925).
- Batch mode fix for `HigherOrderGP` initialization (#856).
- Improve `CategoricalKernel` precision (#857).
- Fix an issue with `qMultiFidelityKnowledgeGradient.evaluate` (#858).
- Fix an issue with transforms with `HigherOrderGP` (#889).
- Fix initial candidate generation when parameter constraints are on different device (#897).
- Fix bad in-place op in `_generate_unfixed_lin_constraints` (#901).
- Fix an input transform bug in `fantasize` call (#902).
- Fix outcome transform bug in `batched_to_model_list` (#917).
- Make variance optional for `TransformedPosterior.mean` (#855).
- Support transforms in `DeterministicModel` (#869).
- Support `batch_shape` in `RandomFourierFeatures` (#877).
- Add a `maximize` flag to `PosteriorMean` (#881).
- Ignore categorical dimensions when validating training inputs in `MixedSingleTaskGP` (#882).
- Refactor `HigherOrderGPPosterior` for memory efficiency (#883).
- Support negative weights for minimization objectives in `get_chebyshev_scalarization` (#884).
- Move `train_inputs` transforms to `model.train`/`model.eval` calls (#894).
- Require PyTorch >=1.8.1 (#832).
- Require GPyTorch >=1.5 (#848).
- Changes to how input transforms are applied: `transform_inputs` is applied in `model.forward` if the model is in `train` mode, otherwise it is applied in the `posterior` call (#819, #835).
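A minimal sketch of this rule, using toy classes rather than BoTorch's actual `Model`/`InputTransform` hierarchy: in train mode the transform runs inside `forward`, while at evaluation time it runs inside `posterior`, so each call path applies it exactly once.

```python
class Normalize:
    """Toy input transform: min-max scale to [0, 1] given fixed bounds."""
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi
    def __call__(self, x: float) -> float:
        return (x - self.lo) / (self.hi - self.lo)

class ToyModel:
    def __init__(self, transform: Normalize):
        self.transform = transform
        self.training = True  # train mode by default
    def forward(self, x: float) -> float:
        if self.training:          # transform applied here in train mode
            x = self.transform(x)
        return self._raw(x)
    def posterior(self, x: float) -> float:
        x = self.transform(x)      # transform applied here at evaluation time
        return self._raw(x)
    def _raw(self, x: float) -> float:
        return 2.0 * x             # stand-in for the actual GP computation

model = ToyModel(Normalize(0.0, 10.0))
# Both paths see the transformed input exactly once:
print(model.forward(5.0), model.posterior(5.0))  # 1.0 1.0
```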
- Improved multi-objective optimization capabilities:
  - `qNoisyExpectedHypervolumeImprovement` acquisition function that improves on `qExpectedHypervolumeImprovement` in terms of tolerating observation noise and speeding up computation for large `q`-batches (#797, #822).
  - `qMultiObjectiveMaxValueEntropy` acquisition function (913aa0e510dde10568c2b4b911124cdd626f6905, #760).
  - Heuristic for reference point selection (#830).
  - `FastNondominatedPartitioning` for Hypervolume computations (#699).
  - `DominatedPartitioning` for partitioning the dominated space (#726).
  - `BoxDecompositionList` for handling box decompositions of varying sizes (#712).
  - Direct, batched dominated partitioning for the two-outcome case (#739).
  - `get_default_partitioning_alpha` utility providing heuristic for selecting approximation level for partitioning algorithms (#793).
  - New method for computing Pareto Frontiers with less memory overhead (#842, #846).
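For the two-outcome case, the dominated hypervolume these partitioning utilities compute reduces to a simple sweep over the sorted Pareto front. A plain-Python sketch (maximization; not BoTorch's batched implementation):

```python
def hypervolume_2d(pareto_pts, ref):
    """Area dominated by a 2-d Pareto front relative to a reference point.
    Points are assumed mutually non-dominated; both objectives maximized."""
    # Sort by first objective descending; second objective then increases.
    pts = sorted(pareto_pts, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        hv += (x - ref[0]) * (y - prev_y)  # slab between consecutive y-levels
        prev_y = y
    return hv

front = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0)]
print(hypervolume_2d(front, ref=(0.0, 0.0)))  # 3*1 + 2*1 + 1*1 = 6.0
```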
- New `qLowerBoundMaxValueEntropy` acquisition function (a.k.a. GIBBON), a lightweight variant of Multi-fidelity Max-Value Entropy Search using a Determinantal Point Process approximation (#724, #737, #749).
- Support for discrete and mixed input domains:
  - `CategoricalKernel` for categorical inputs (#771).
  - `MixedSingleTaskGP` for mixed search spaces (containing both categorical and ordinal parameters) (#772, #847).
  - `optimize_acqf_discrete` for optimizing acquisition functions over fully discrete domains (#777).
  - Extend `optimize_acqf_mixed` to allow batch optimization (#804).
- Support for robust / risk-aware optimization:
  - Risk measures for robust / risk-averse optimization (#821).
  - `AppendFeatures` transform (#820).
  - `InputPerturbation` input transform for risk averse BO with implementation errors (#827).
  - Tutorial notebook for Bayesian Optimization of risk measures (#823).
  - Tutorial notebook for risk-averse Bayesian Optimization under input perturbations (#828).
- More scalable multi-task modeling and sampling:
  - `KroneckerMultiTaskGP` model for efficient multi-task modeling for block-design settings (all tasks observed at all inputs) (#637).
  - Support for transforms in Multi-Task GP models (#681).
  - Posterior sampling based on Matheron's rule for Multi-Task GP models (#841).
- Various changes to simplify and streamline integration with Ax:
  - Handle non-block designs in `TrainingData` (#794).
  - Acquisition function input constructor registry (#788, #802, #845).
- Random Fourier Feature (RFF) utilities for fast (approximate) GP function sampling (#750).
- `DelaunayPolytopeSampler` for fast uniform sampling from (simple) polytopes (#741).
- Add `evaluate` method to `ScalarizedObjective` (#795).
- Handle the case when all features are fixed in `optimize_acqf` (#770).
- Pass `fixed_features` to initial candidate generation functions (#806).
- Handle batch empty pareto frontier in `FastPartitioning` (#740).
- Handle empty pareto set in `is_non_dominated` (#743).
- Handle edge case of no or a single observation in `get_chebyshev_scalarization` (#762).
- Fix an issue in `gen_candidates_torch` that caused problems with acquisition functions using fantasy models (#766).
- Fix `HigherOrderGP` `dtype` bug (#728).
- Normalize before clamping in `Warp` input warping transform (#722).
- Fix bug in GP sampling (#764).
- Modify input transforms to support one-to-many transforms (#819, #835).
- Make initial conditions for acquisition function optimization honor parameter constraints (#752).
- Perform optimization only over unfixed features if `fixed_features` is passed (#839).
- Refactor Max Value Entropy Search Methods (#734).
- Use Linear Algebra functions from the `torch.linalg` module (#735).
- Use PyTorch's `Kumaraswamy` distribution (#746).
- Improved capabilities and some bugfixes for batched models (#723, #767).
- Pass `callback` argument to `scipy.optim.minimize` in `gen_candidates_scipy` (#744).
- Modify behavior of `X_pending` in multi-objective acquisition functions (#747).
- Allow multi-dimensional batch shapes in test functions (#757).
- Utility for converting batched multi-output models into batched single-output models (#759).
- Explicitly raise `NotPSDError` in `_scipy_objective_and_grad` (#787).
- Make `raw_samples` optional if `batch_initial_conditions` is passed (#801).
- Use powers of 2 in qMC docstrings & examples (#812).
- Require PyTorch >=1.7.1 (#714).
- Require GPyTorch >=1.4 (#714).
- `HigherOrderGP` - High-Order Gaussian Process (HOGP) model for high-dimensional output regression (#631, #646, #648, #680).
- `qMultiStepLookahead` acquisition function for general look-ahead optimization approaches (#611, #659).
- `ScalarizedPosteriorMean` and `project_to_sample_points` for more advanced MFKG functionality (#645).
- Large-scale Thompson sampling tutorial (#654, #713).
- Tutorial for optimizing mixed continuous/discrete domains (application to multi-fidelity KG with discrete fidelities) (#716).
- `GPDraw` utility for sampling from (exact) GP priors (#655).
- Add `X` as optional arg to call signature of `MCAcquisitionObjective` (#487).
- `OSY` synthetic test problem (#679).
- Fix matrix multiplication in `scalarize_posterior` (#638).
- Set `X_pending` in `get_acquisition_function` in `qEHVI` (#662).
- Make contextual kernel device-aware (#666).
- Do not use an `MCSampler` in `MaxPosteriorSampling` (#701).
- Add ability to subset outcome transforms (#711).
- Batchify box decomposition for 2d case (#642).
- Use scipy distribution in MES quantile bisect (#633).
- Use new closure definition for GPyTorch priors (#634).
- Allow enabling of approximate root decomposition in `posterior` calls (#652).
- Support for upcoming 21201-dimensional PyTorch `SobolEngine` (#672, #674).
- Refactored various MOO utilities to allow future additions (#656, #657, #658, #661).
- Support `input_transform` in `PairwiseGP` (#632).
- Output shape checks for `t_batch_mode_transform` (#577).
- Check for NaN in `gen_candidates_scipy` (#688).
- Introduce `base_sample_shape` property to `Posterior` objects (#718).
Contextual Bayesian Optimization, Input Warping, TuRBO, sampling from polytopes.
- Require PyTorch >=1.7 (#614).
- Require GPyTorch >=1.3 (#614).
- Models (LCE-A, LCE-M and SAC) for Contextual Bayesian Optimization (#581).
- Implements core models from: High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization. Q. Feng, B. Letham, H. Mao, E. Bakshy. NeurIPS 2020.
- See Ax for usage of these models.
- Hit and run sampler for uniform sampling from a polytope (#592).
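The hit-and-run idea: pick a random direction, find the feasible chord through the current point, and jump to a uniform point on it. Below is a hypothetical plain-Python sketch for a polytope {x : Ax <= b} (the BoTorch sampler is more general and more careful numerically); the unit square is used purely for illustration.

```python
import random

def hit_and_run(A, b, x0, n_steps=200, seed=0):
    """Markov chain whose stationary distribution is uniform on {x : Ax <= b}."""
    rng = random.Random(seed)
    x = list(x0)  # x0 must be strictly inside the polytope
    d = len(x)
    for _ in range(n_steps):
        # Random direction on the unit sphere (normalized Gaussian).
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = sum(ui * ui for ui in u) ** 0.5
        u = [ui / norm for ui in u]
        # Feasible chord {x + t*u}: intersect with each halfspace a.x <= b_i.
        t_lo, t_hi = -1e12, 1e12
        for a_row, b_i in zip(A, b):
            au = sum(ai * ui for ai, ui in zip(a_row, u))
            slack = b_i - sum(ai * xi for ai, xi in zip(a_row, x))
            if au > 1e-12:
                t_hi = min(t_hi, slack / au)
            elif au < -1e-12:
                t_lo = max(t_lo, slack / au)
        # Jump to a uniform point on the chord.
        t = rng.uniform(t_lo, t_hi)
        x = [xi + t * ui for xi, ui in zip(x, u)]
    return x

# Unit square: x >= 0 and x <= 1 in both dims, written as A x <= b.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
sample = hit_and_run(A, b, x0=[0.5, 0.5])
print(sample)  # a point in [0, 1]^2
```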
- Input warping:
- Core functionality (#607).
- Kumaraswamy Distribution (#606).
- Tutorial (8f34871652042219c57b799669a679aab5eed7e3).
- TuRBO-1 tutorial (#598).
- Implements the method from Scalable Global Optimization via Local Bayesian Optimization. D. Eriksson, M. Pearce, J. Gardner, R. D. Turner, M. Poloczek. NeurIPS 2019.
- Fix bounds of `HolderTable` synthetic function (#596).
- Fix `device` issue in MOO tutorial (#621).
- Add `train_inputs` option to `qMaxValueEntropy` (#593).
- Enable gpytorch settings to override BoTorch defaults for `fast_pred_var` and `debug` (#595).
- Rename `set_train_data_transform` -> `preprocess_transform` (#575).
- Modify `_expand_bounds()` shape checks to work with >2-dim bounds (#604).
- Add `batch_shape` property to models (#588).
- Modify `qMultiFidelityKnowledgeGradient.evaluate()` to work with `project`, `expand` and `cost_aware_utility` (#594).
- Add list of papers using BoTorch to website docs (#617).
Maintenance Release
- Add `PenalizedAcquisitionFunction` wrapper (#585)
- Input transforms
  - Reversible input transform (#550)
  - Rounding input transform (#562)
  - Log input transform (#563)
  - Differentiable approximate rounding for integers (#561)
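The idea behind differentiable approximate rounding can be sketched with a temperature-controlled sigmoid; this mirrors the concept rather than the exact `Round` transform, and the temperature parameter `tau` here is illustrative.

```python
import math

def approx_round(x: float, tau: float = 0.1) -> float:
    """Smooth surrogate for round(): differentiable in x, approaching the hard
    round as the temperature tau -> 0."""
    floor_x = math.floor(x)
    frac = x - floor_x
    # Smooth step from 0 to 1 centered at frac = 0.5.
    smoothed = 1.0 / (1.0 + math.exp(-(frac - 0.5) / tau))
    return floor_x + smoothed

print(approx_round(2.3, tau=0.01))  # ~2.0
print(approx_round(2.7, tau=0.01))  # ~3.0
```

Because the surrogate is smooth, gradients can flow through it during acquisition optimization, unlike the hard `round()`.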
- Fix sign error in UCB when `maximize=False` (a4bfacbfb2109d3b89107d171d2101e1995822bb)
- Fix batch_range sample shape logic (#574)
- Better support for two stage sampling in preference learning (0cd13d0cb49b1ac8d0971e42f1f0e9dd6126fd9a)
- Remove noise term in `PairwiseGP` and add `ScaleKernel` by default (#571)
- Rename `prior` to `task_covar_prior` in `MultiTaskGP` and `FixedNoiseMultiTaskGP` (8e42ea82856b165a7df9db2a9b6f43ebd7328fc4)
- Support only transforming inputs on training or evaluation (#551)
- Add `equals` method for `InputTransform` (#552)
Maintenance Release
- Constrained Multi-Objective tutorial (#493)
- Multi-fidelity Knowledge Gradient tutorial (#509)
- Support for batch qMC sampling (#510)
- New `evaluate` method for `qKnowledgeGradient` (#515)
- Require PyTorch >=1.6 (#535)
- Require GPyTorch >=1.2 (#535)
- Remove deprecated `botorch.gen` module (#532)
- Fix bad backward-indexing of `task_feature` in `MultiTaskGP` (#485)
- Fix bounds in constrained Branin-Currin test function (#491)
- Fix max_hv for C2DTLZ2 and make Hypervolume always return a float (#494)
- Fix bug in `draw_sobol_samples` that did not use the proper effective dimension (#505)
- Fix constraints for `q>1` in `qExpectedHypervolumeImprovement` (c80c4fdb0f83f0e4f12e4ec4090d0478b1a8b532)
- Only use feasible observations in partitioning for `qExpectedHypervolumeImprovement` in `get_acquisition_function` (#523)
- Improved GPU compatibility for `PairwiseGP` (#537)
- Reduce memory footprint in `qExpectedHypervolumeImprovement` (#522)
- Add `(q)ExpectedHypervolumeImprovement` to nonnegative functions [for better initialization] (#496)
- Support batched `best_f` in `qExpectedImprovement` (#487)
- Allow returning the full tree of solutions in `OneShotAcquisitionFunction` (#488)
- Added `construct_inputs` class method to models to programmatically construct the inputs to the constructor from a standardized `TrainingData` representation (#477, #482, 3621198d02195b723195b043e86738cd5c3b8e40)
- Acquisition function constructors now accept catch-all `**kwargs` options (#478, e5b69352954bb10df19a59efe9221a72932bfe6c)
- Use `psd_safe_cholesky` in `qMaxValueEntropy` for better numerical stability (#518)
- Added `WeightedMCMultiOutputObjective` (81d91fd2e115774e561c8282b724457233b6d49f)
- Add ability to specify `outcomes` to all multi-output objectives (#524)
- Return optimization output in `info_dict` for `fit_gpytorch_scipy` (#534)
- Use `setuptools_scm` for versioning (#539)
Multi-Objective Bayesian Optimization
- Multi-Objective Acquisition Functions (#466)
- q-Expected Hypervolume Improvement
- q-ParEGO
- Analytic Expected Hypervolume Improvement with auto-differentiation
- Multi-Objective Utilities (#466)
- Pareto Computation
- Hypervolume Calculation
- Box Decomposition algorithm
- Multi-Objective Test Functions (#466)
- Suite of synthetic test functions for multi-objective, constrained optimization
- Multi-Objective Tutorial (#468)
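For intuition on the hypervolume utilities above: with two objectives (maximization), the hypervolume dominated by a Pareto front relative to a reference point can be computed with a simple sweep. A self-contained sketch (BoTorch's `Hypervolume` handles arbitrary dimensions and tensor inputs):

```python
def hypervolume_2d(pareto_front, ref_point):
    """Area dominated by a 2-d Pareto front (maximization) above ref_point.
    Assumes the front contains only mutually non-dominated points."""
    # Sweep points by decreasing first objective; each point contributes
    # a rectangle between the previous y-level and its own y-value.
    front = sorted(pareto_front, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref_point[1]
    for x, y in front:
        hv += (x - ref_point[0]) * (y - prev_y)
        prev_y = y
    return hv
```

For example, the front `[(2, 1), (1, 2)]` with reference point `(0, 0)` dominates the union of two unit-overlapping rectangles with total area 3.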
- Abstract ConstrainedBaseTestProblem (#454)
- Add `optimize_acqf_list` method for sequentially and greedily optimizing one candidate from each provided acquisition function (d10aec911b241b208c59c192beb9e4d572a092cd)
- Fixed re-arranging mean in MultiTask MO models (#450).
- Move gpt_posterior_settings into models.utils (#449)
- Allow specifications of batch dims to collapse in samplers (#457)
- Remove outcome transform before model-fitting for sequential model fitting in MO models (#458)
Bugfix Release
- Fixed issue with broken wheel build (#444).
- Changed code style to use absolute imports throughout (#443).
Bugfix Release
- There was a mysterious issue with the 0.2.3 wheel on PyPI, where part of the `botorch/optim/utils.py` file was not included, which resulted in an `ImportError` for many central components of the code. Interestingly, the source dist (built with the same command) did not have this issue.
- Preserve order in `ChainedOutcomeTransform` (#440).
- Utilities for estimating the feasible volume under outcome constraints (#437).
Pairwise GP for Preference Learning, Sampling Strategies.
- Require PyTorch >=1.5 (#423).
- Require GPyTorch >=1.1.1 (#425).
- Add `PairwiseGP` for preference learning with pair-wise comparison data (#388).
- Add `SamplingStrategy` abstraction for sampling-based generation strategies, including `MaxPosteriorSampling` (i.e. Thompson Sampling) and `BoltzmannSampling` (#218, #407).
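`MaxPosteriorSampling` implements Thompson sampling: draw a joint sample from the model posterior over a candidate set and select the argmax. Over a discrete candidate pool the idea reduces to the following sketch (`sample_fn` stands in for a posterior sampler and is not a BoTorch API):

```python
def max_posterior_sampling(candidates, sample_fn, num_samples=1):
    """Thompson-sampling-style selection: draw one joint posterior sample
    over the candidate set and pick the argmax, num_samples times."""
    selected = []
    for _ in range(num_samples):
        values = sample_fn(candidates)  # one joint draw of function values
        best = max(range(len(candidates)), key=lambda i: values[i])
        selected.append(candidates[best])
    return selected
```

Because each draw is a fresh posterior sample, repeated calls naturally balance exploration and exploitation.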
- The existing `botorch.gen` module is moved to `botorch.generation.gen` and imports from `botorch.gen` will raise a warning (an error in the next release) (#218).
- Fix & update a number of tutorials (#394, #398, #393, #399, #403).
- Fix CUDA tests (#404).
- Fix Sobol maxdim limitation in `prune_baseline` (#419).
- Better stopping criteria for stochastic optimization (#392).
- Improve numerical stability of `LinearTruncatedFidelityKernel` (#409).
- Allow batched `best_f` in `qExpectedImprovement` and `qProbabilityOfImprovement` (#411).
- Introduce new logger framework (#412).
- Faster indexing in some situations (#414).
- More generic `BaseTestProblem` (9e604fe2188ac85294c143d249872415c4d95823).
Require PyTorch 1.4 and Python 3.7; new features for active learning and multi-fidelity optimization, plus a number of bug fixes.
- Require PyTorch >=1.4 (#379).
- Require Python >=3.7 (#378).
- Add `qNegIntegratedPosteriorVariance` for Bayesian active learning (#377).
- Add `FixedNoiseMultiFidelityGP`, analogous to `SingleTaskMultiFidelityGP` (#386).
- Support `scalarize_posterior` for m>1 and q>1 posteriors (#374).
- Support `subset_output` method on multi-fidelity models (#372).
- Add utilities for sampling from simplex and hypersphere (#369).
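One standard way to sample uniformly from the probability simplex, as the utilities above provide for (e.g.) random scalarization weights, is to normalize i.i.d. exponential draws. A sketch (BoTorch's version returns tensors and supports batching):

```python
import random

def sample_simplex(d, rng=random):
    """Draw one point uniformly from the (d-1)-simplex by normalizing
    i.i.d. Exponential(1) draws (equivalent to sorted-uniform spacings)."""
    draws = [rng.expovariate(1.0) for _ in range(d)]
    total = sum(draws)
    return [v / total for v in draws]
```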
- Fix `TestLoader` local test discovery (#376).
- Fix batch-list conversion of `SingleTaskMultiFidelityGP` (#370).
- Validate tensor args before checking input scaling for more informative error messages (#368).
- Fix flaky `qNoisyExpectedImprovement` test (#362).
- Fix test function in closed-loop tutorial (#360).
- Fix `num_output` attribute in BoTorch/Ax tutorial (#355).
- Require output dimension in `MultiTaskGP` (#383).
- Update code of conduct (#380).
- Remove deprecated `joint_optimize` and `sequential_optimize` (#363).
Minor bug fix release.
- Add a static method for getting batch shapes for batched MO models (#346).
- Revamp qKG constructor to avoid issue with missing objective (#351).
- Make sure MVES can support sampled costs like KG (#352).
- Allow custom module-to-array handling in fit_gpytorch_scipy (#341).
Max-value entropy acquisition function, cost-aware / multi-fidelity optimization, subsetting models, outcome transforms.
- Require PyTorch >=1.3.1 (#313).
- Require GPyTorch >=1.0 (#342).
- Add cost-aware KnowledgeGradient (`qMultiFidelityKnowledgeGradient`) for multi-fidelity optimization (#292).
- Add `qMaxValueEntropy` and `qMultiFidelityMaxValueEntropy` max-value entropy search acquisition functions (#298).
- Add `subset_output` functionality to (most) models (#324).
- Add outcome transforms and input transforms (#321).
- Add `outcome_transform` kwarg to model constructors for automatic outcome transformation and un-transformation (#327).
- Add cost-aware utilities for cost-sensitive acquisition functions (#289).
- Add `DeterministicModel` and `DeterministicPosterior` abstractions (#288).
- Add `AffineFidelityCostModel` (f838eacb4258f570c3086d7cbd9aa3cf9ce67904).
- Add `project_to_target_fidelity` and `expand_trace_observations` utilities for use in multi-fidelity optimization (1ca12ac0736e39939fff650cae617680c1a16933).
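Conceptually, `project_to_target_fidelity` replaces the fidelity coordinates of each candidate with their target (typically highest-fidelity) values before the objective model is evaluated. A list-based sketch of that mapping (the real utility operates on batched tensors):

```python
def project_to_target_fidelity(X, target_fidelities):
    """Pin the fidelity columns of each point to their target values.
    target_fidelities maps column index -> target fidelity value;
    all other columns pass through unchanged."""
    return [[target_fidelities.get(j, v) for j, v in enumerate(x)] for x in X]
```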
- New `prune_baseline` option for pruning `X_baseline` in `qNoisyExpectedImprovement` (#287).
- Do not use approximate MLL computation for deterministic fitting (#314).
- Avoid re-evaluating the acquisition function in `gen_candidates_torch` (#319).
- Use CPU where possible in `gen_batch_initial_conditions` to avoid memory issues on the GPU (#323).
- Properly register `NoiseModelAddedLossTerm` in `HeteroskedasticSingleTaskGP` (671c93a203b03ef03592ce322209fc5e71f23a74).
- Fix batch mode for `MultiTaskGPyTorchModel` (#316).
- Honor `propagate_grads` argument in `fantasize` of `FixedNoiseGP` (#303).
- Properly handle `diag` arg in `LinearTruncatedFidelityKernel` (#320).
- Consolidate and simplify multi-fidelity models (#308).
- New license header style (#309).
- Validate shape of `best_f` in `qExpectedImprovement` (#299).
- Support specifying observation noise explicitly for all models (#256).
- Add `num_outputs` property to the `Model` API (#330).
- Validate output shape of models upon instantiating acquisition functions (#331).
- Silence warnings outside of explicit tests (#290).
- Enforce full sphinx docs coverage in CI (#294).
Knowledge Gradient acquisition function (one-shot), various maintenance improvements.
- Require explicit output dimensions in BoTorch models (#238)
- Make `joint_optimize` / `sequential_optimize` return acquisition function values (#149) [note deprecation notice below]
- `standardize` now works on the second-to-last dimension (#263)
- Refactor synthetic test functions (#273)
- Add `qKnowledgeGradient` acquisition function (#272, #276)
- Add input scaling check to standard models (#267)
- Add `cyclic_optimize`, convergence criterion class (#269)
- Add `settings.debug` context manager (#242)
- Consolidate `sequential_optimize` and `joint_optimize` into `optimize_acqf` (#150)
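The sequential mode that `optimize_acqf` absorbs from `sequential_optimize` chooses candidates one at a time, each maximizing the acquisition value given the points already selected. Over a discrete pool the greedy loop looks like this sketch (names are illustrative; the real optimizer searches continuous bounds with multi-start gradient optimization):

```python
def optimize_sequential(acq_fn, candidates, q):
    """Greedy sequential q-batch selection from a discrete candidate pool:
    each step picks the point maximizing the joint acquisition value
    together with the points selected so far."""
    selected, pool = [], list(candidates)
    for _ in range(q):
        best = max(pool, key=lambda x: acq_fn(selected + [x]))
        selected.append(best)
        pool.remove(best)
    return selected
```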
- Properly pass noise levels to GPs using a `FixedNoiseGaussianLikelihood` (#241) [requires GPyTorch >0.3.5]
- Fix q-batch dimension issue in `ConstrainedExpectedImprovement` (6c067185f56d3a244c4093393b8a97388fb1c0b3)
- Fix parameter constraint issues on GPU (#260)
- Add decorator for concatenating pending points (#240)
- Draw independent sample from prior for each hyperparameter (#244)
- Allow `dim > 1111` for `gen_batch_initial_conditions` (#249)
- Allow `optimize_acqf` to use `q>1` for `AnalyticAcquisitionFunction` (#257)
- Allow excluding parameters in fit functions (#259)
- Track the final iteration objective value in `fit_gpytorch_scipy` (#258)
- Error out on unexpected dims in parameter constraint generation (#270)
- Compute acquisition values in `gen_` functions without gradients (#274)
- Introduce `BotorchTestCase` to simplify test code (#243)
- Refactor tests to have monolithic CUDA tests (#261)
Compatibility & maintenance release
- Updates to support breaking changes in PyTorch to boolean masks and tensor comparisons (#224).
- Require PyTorch >=1.2 (#225).
- Require GPyTorch >=0.3.5 (itself a compatibility release).
- Add `FixedFeatureAcquisitionFunction` wrapper that simplifies optimizing acquisition functions over a subset of input features (#219).
- Add `ScalarizedObjective` for scalarizing posteriors (#210).
- Change default optimization behavior to use L-BFGS-B for box constraints (#207).
- Add validation to candidate generation (#213), making sure constraints are strictly satisfied (rather than just up to the numerical accuracy of the optimizer).
- Introduce `AcquisitionObjective` base class (#220).
- Add `propagate_grads` context manager, replacing the `propagate_grads` kwarg in model `posterior()` calls (#221).
- Add `batch_initial_conditions` argument to `joint_optimize()` for warm-starting the optimization (ec3365a37ed02319e0d2bb9bea03aee89b7d9caa).
- Add `return_best_only` argument to `joint_optimize()` (#216). Useful for implementing advanced warm-starting procedures.
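`FixedFeatureAcquisitionFunction` pins a subset of input columns to fixed values so the optimizer only searches over the remaining features. The wrapping idea, sketched for a plain function on lists (the real wrapper operates on tensors and supports batched fixed values):

```python
def fix_features(f, d, columns, values):
    """Wrap a function on d-dimensional points so the given columns are
    pinned to fixed values; the wrapper takes only the free columns."""
    fixed = dict(zip(columns, values))
    free = [j for j in range(d) if j not in fixed]

    def wrapped(x_free):
        # Reassemble the full point from fixed and free coordinates.
        x = [0.0] * d
        for j, v in fixed.items():
            x[j] = v
        for j, v in zip(free, x_free):
            x[j] = v
        return f(x)

    return wrapped
```

An optimizer can then work in the lower-dimensional free space while the wrapped function evaluates the original one.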
Maintenance release
- Avoid a [PyTorch bug](https://github.com/pytorch/pytorch/issues/22353) resulting in bad gradients on GPU by requiring GPyTorch >=0.3.4
- Fixes to resampling behavior in MCSamplers (#204)
- Linear truncated kernel for multi-fidelity Bayesian optimization (#192)
- `SingleTaskMultiFidelityGP` for GP models that have fidelity parameters (#181)
API updates, more robust model fitting
- Rename `botorch.qmc` to `botorch.sampling`, move MC samplers from `acquisition.sampler` to `botorch.sampling.samplers` (#172)
- Add `condition_on_observations` and `fantasize` to the Model level API (#173)
- Support pending observations generically for all `MCAcquisitionFunction`s (#176)
- Add fidelity kernel for training iterations/training data points (#178)
- Support for optimization constraints across `q`-batches (to support things like sample budget constraints) (2a95a6c3f80e751d5cf8bc7240ca9f5b1529ec5b)
- Add ModelList <-> Batched Model converter (#187)
- New test functions
  - basic: `neg_ackley`, `cosine8`, `neg_levy`, `neg_rosenbrock`, `neg_shekel` (e26dc7576c7bf5fa2ba4cb8fbcf45849b95d324b)
  - for multi-fidelity BO: `neg_aug_branin`, `neg_aug_hartmann6`, `neg_aug_rosenbrock` (ec4aca744f65ca19847dc368f9fee4cc297533da)
- More robust model fitting
  - Catch GPyTorch numerical issues and return `NaN` to the optimizer (#184)
  - Restart optimization upon failure by sampling hyperparameters from their prior (#188)
  - Sequentially fit batched and `ModelListGP` models by default (#189)
  - Change minimum inferred noise level (e2c64fef1e76d526a33951c5eb75ac38d5581257)
- Introduce optional batch limit in `joint_optimize` to increase scalability of parallel optimization (baab5786e8eaec02d37a511df04442471c632f8a)
- Change constructor of `ModelListGP` to comply with GPyTorch's `IndependentModelList` constructor (a6cf739e769c75319a67c7525a023ece8806b15d)
- Use `torch.random` to set the default seed for samplers (rather than `random`) to make sampling reproducible when setting `torch.manual_seed` (ae507ad97255d35f02c878f50ba68a2e27017815)
- Use `einsum` in `LinearMCObjective` (22ca29535717cda0fcf7493a43bdf3dda324c22d)
- Change default Sobol sample size for `MCAcquisitionFunction`s to be base-2 for better MC integration performance (5d8e81866a23d6bfe4158f8c9b30ea14dd82e032)
- Add ability to fit models in `SumMarginalLogLikelihood` sequentially (and make that the default setting) (#183)
- Do not construct the full covariance matrix when computing the posterior of a single-output `BatchedMultiOutputGPyTorchModel` (#185)
- Properly handle `observation_noise` kwarg for `BatchedMultiOutputGPyTorchModel`s (#182)
- Fix an issue where `f_best` was always max for `NoisyExpectedImprovement` (de8544a75b58873c449b41840a335f6732754c77)
- Fix bug and numerical issues in `initialize_q_batch` (844dcd1dc8f418ae42639e211c6bb8e31a75d8bf)
- Fix numerical issues with `inv_transform` for qMC sampling (#162)
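For context on the `LinearMCObjective` change above: the objective reduces each multi-output MC sample `y` to the scalar `w^T y`, and the `einsum` change computes this as a single batched contraction. The reduction itself is just a dot product per sample, as in this sketch:

```python
def linear_mc_objective(samples, weights):
    """Reduce each multi-output MC sample y to the scalar w^T y."""
    return [sum(w * y for w, y in zip(weights, s)) for s in samples]
```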
- Bump GPyTorch minimum requirement to 0.3.3
First public beta release.