MATLAB Answers

How can I use the bayesopt function to predict the optimal parameters for an experiment?

xu supeng
xu supeng on 23 Jun 2020
Answered: xu supeng on 6 Jul 2020
I have a list of X values (experimental parameters) and corresponding Y values (the experimental results), as well as the ranges of the X variables. How can I use the bayesopt() function to predict the optimal X parameters that give the best Y value? Most of the examples on the internet are about finding the optimal hyperparameters of an algorithm, which is a little different from my problem. I don't know how to deal with the objective function, since I don't have one: my data are just experimental parameters and experimental results, and I hope to use the existing data to predict the optimal parameters. Based on the examples I found, I wrote the program below, but it does not work as well as I expected:
X = unifrnd(0,10,30,3);
Y = unifrnd(0,2000,30,1);
sat1 = optimizableVariable('s1',[0,10],'Type','real');
sat2 = optimizableVariable('s2',[0,10],'Type','real');
sat3 = optimizableVariable('s3',[0,10],'Type','real');
vars = [sat1, sat2, sat3]; % avoid shadowing the built-in var function
initialXList = table;
initialXList.s1 = X(:,1);
initialXList.s2 = X(:,2);
initialXList.s3 = X(:,3);
initialObjList = Y;
dummyFunc = @(Tbl)0; % placeholder objective: returns 0 for every new point bayesopt evaluates
bayesObject = bayesopt(dummyFunc,vars, ...
    'InitialX',initialXList, ...
    'InitialObjective',initialObjList, ...
    'MaxObjectiveEvaluations',50, ...
    'Verbose',1);

  1 Comment

xu supeng
xu supeng on 24 Jun 2020
I have a thought, but maybe it's not easy to realize. First, use a group of X and corresponding Y values to train a model with optimal hyperparameters, e.g. with the fitcknn algorithm. Then use the model as the objective function and use bayesopt() to select the next 20 X values from the X ranges, obtain the corresponding Y values, and use those 20 groups of [X, Y] as experimental data to train the algorithm again. Repeat this process again and again to finally get the optimal results. I will try it; if anyone can give me suggestions I would be very thankful, since I am not familiar with these algorithms, and the choice of algorithm should have quite an influence on the final results.
Note: the [X, Y] values are purely experimental results, so there is no obvious relationship between them.
Here is a simple case, but it seems the bayesopt() function can't use the trained model Mdl. Why? What's the problem?
rng default
X = unifrnd(0,10,30,3);
Y = unifrnd(0,2000,30,1);
Mdl = fitcknn(X,Y,'OptimizeHyperparameters','auto', ...
    'HyperparameterOptimizationOptions', ...
    struct('AcquisitionFunctionName','expected-improvement-plus'));
sat1 = optimizableVariable('s1',[0,10],'Type','real');
sat2 = optimizableVariable('s2',[0,10],'Type','real');
sat3 = optimizableVariable('s3',[0,10],'Type','real');
vars = [sat1, sat2, sat3];
Fun = @(z)predict(Mdl,[z.s1, z.s2, z.s3]);
bayesObject = bayesopt(Fun,vars, ...
    'MaxObjectiveEvaluations',20, ...
    'Verbose',1);
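One possible explanation, sketched below under the assumption that regression is what is intended: fitcknn is a classifier, so with a continuous Y every distinct value becomes its own class. A regression learner such as fitrgp (which this thread switches to later) returns a numeric prediction that bayesopt can minimize directly.
rng default
X = unifrnd(0,10,30,3);
Y = unifrnd(0,2000,30,1);
Mdl = fitrgp(X,Y);                           % Gaussian process regression surrogate
Fun = @(z)predict(Mdl,[z.s1, z.s2, z.s3]);   % z is the 1-row table bayesopt passes in
sat1 = optimizableVariable('s1',[0,10],'Type','real');
sat2 = optimizableVariable('s2',[0,10],'Type','real');
sat3 = optimizableVariable('s3',[0,10],'Type','real');
bayesObject = bayesopt(Fun,[sat1,sat2,sat3], ...
    'MaxObjectiveEvaluations',20, ...
    'Verbose',0);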


Accepted Answer

Alan Weiss
Alan Weiss on 24 Jun 2020
It sounds to me as if there are two steps to your problem:
  1. Fit a parameterized function to some data. So you might have some function like y = A(1)*exp(-A(2)*x + A(3)*x^2), and you first need to fit your A values to the given (x,y) data. It really is up to you to decide on a suitable parameterized function.
  2. Minimize the resulting function. Once you have a parameterized curve or surface y = f(x) with known function f, use the normal optimization procedures to find the location of the minimum.
You can use bayesopt for either or both steps, or use some other optimizer for either or both steps.
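A minimal sketch of those two steps, assuming the example model above and synthetic placeholder (x,y) data in place of real measurements (lsqcurvefit requires Optimization Toolbox):
rng default
% Step 1: fit the parameters A of y = A(1)*exp(-A(2)*x + A(3)*x.^2) to data.
xdata = linspace(0,2,30)';
ydata = 3*exp(-1.5*xdata + 0.2*xdata.^2) + 0.05*randn(30,1); % placeholder measurements
model = @(A,x) A(1)*exp(-A(2)*x + A(3)*x.^2);
Afit = lsqcurvefit(model,[1 1 0],xdata,ydata);               % nonlinear least squares
% Step 2: minimize the fitted function over the range of interest.
[xmin,fmin] = fminbnd(@(x)model(Afit,x),0,2);                % 1-D here; use fmincon for several variables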
Good luck,
Alan Weiss
MATLAB mathematical toolbox documentation

  6 Comments

xu supeng
xu supeng on 1 Jul 2020
Finally, I wrote a program to optimize the best parameters for the experiment, but it seems not to work well. I am not asking a technical question here; I just wonder why the program doesn't work as I expected. By that I mean: generally it can't give me the minimum Y value; the minimum estimated value sits at the location of the minimum in the experimental data, and once I change the experimental data, the optimal estimated Y value moves to the new experimental minimum. So the optimization gives no obvious improvement in the Y value. It's a little complicated, but that is why I think it doesn't work. Maybe you can give me some suggestions, since you are familiar with the bayesopt() function.
The process is: first, train a fitrgp() model on the experimental data and obtain Md1 with the best hyperparameters. Second, use Md1 as the objective function with bayesopt() to get the next [Xi, Yi], put [Xi, Yi] into the [X, Y] arrays, train fitrgp() again to get a new model with the best hyperparameters, and feed the new model into bayesopt() to get the next point. Repeat this again and again. This is also what most tutorials describe.
Here is the program; any suggestions would be much appreciated.
clc;
clear;
tic;
%%% Columns 1-6 are the X parameters; column 7 holds the Y values
Data =[5.6207 3.4326 0.99076 -0.5681 2.2269 -0.76352 -2560.6
9.4475 5.1669 8.8368 -1.5091 4.6748 -1.2324 -738.19
6.2045 3.1342 5.3344 -1.0461 -4.807 -2.4016 -3044.6
9.1965 5.4567 2.741 4.9886 -2.2476 -1.2381 -427.06
5.5178 2.9224 0.47097 -4.7451 -4.3422 2.582 -1363.8
1.4587 6.82 9.6364 -0.55175 -4.2558 4.4485 -5265.8
9.003 1.3252 0.91798 3.956 1.3036 2.8979 -788.03
3.658 4.7395 9.7259 3.8048 -0.8229 0.097753 -2241.8
3.661 9.6049 9.3967 -0.52114 1.8493 -3.1426 -2594.9
7.538 6.3236 3.6793 -0.27248 -2.4987 2.9187 -3992.8
2.8826 2.9379 7.6082 -4.3078 3.5267 -0.76454 -1637.5
2.5475 0.41993 2.1689 3.0568 -0.5199 -2.6993 -971.66
9.8318 9.7541 7.1499 -1.069 -0.10125 2.48 -3309.2
5.0904 0.4727 6.0548 -3.1414 -1.1608 2.6665 -2494.9
5.3166 9.3089 3.4795 1.1064 1.6673 3.2222 -3272.3
4.8304 0.35989 1.7593 4.4904 -4.7422 1.9583 -421.03
5.5926 7.7423 7.4898 1.6932 3.9333 3.2528 -911.6
8.5082 8.2249 6.6189 4.0599 1.9319 -0.82999 -6993
7.0123 4.0614 2.0719 -2.5927 -3.013 -1.769 -1800.4
3.6205 2.0072 4.4781 -1.2097 3.811 2.8131 -1086.1];
X = Data(:,1:6);
Y = Data(:,7);
for ii=1:100 %%% 100 outer loops; each loop adds one new point to the previous X, Y data (the initial [X Y] values count toward MaxObjectiveEvaluations)
% disp([X,Y]);
disp(ii);
Len=length(Y);
% disp(Len);
%%% train fitrgp to find the optimal hyperparameters for X, Y
Md1 = fitrgp(X,Y,'OptimizeHyperparameters','all', ...
    'HyperparameterOptimizationOptions',struct('ShowPlots',false,'Verbose',0));
% disp([Md1.BasisFunction,' ',Md1.KernelFunction,' ',num2str(Md1.Sigma)]);
if mod(ii,10)==0
figure(ii);
plot(Y);
hold on;
plot(predict(Md1,X)); %% validate the trained model with real data
end
sat1 = optimizableVariable('s1',[0,10],'Type','real');
sat2 = optimizableVariable('s2',[0,10],'Type','real');
sat3 = optimizableVariable('s3',[0,10],'Type','real');
delta1 = optimizableVariable('d1',[-5,5],'Type','real');
delta2 = optimizableVariable('d2',[-5,5],'Type','real');
delta3 = optimizableVariable('d3',[-5,5],'Type','real'); %% define the optimizable variables
vars = [sat1 sat2 sat3 delta1 delta2 delta3];
initialXList = table;
initialXList.s1 = X(:,1);
initialXList.s2 = X(:,2);
initialXList.s3 = X(:,3);
initialXList.d1 = X(:,4);
initialXList.d2 = X(:,5);
initialXList.d3 = X(:,6);
initialObjList = Y; %%% seed bayesopt with the existing X, Y data
bayesObject = bayesopt(@(tbl)mdlfun(tbl,Md1),vars, ...
    'MaxObjectiveEvaluations',Len+1, ...
    'InitialX',initialXList, ...
    'InitialObjective',initialObjList, ...
    'PlotFcn',{}, ...
    'Verbose',0);
disp([bayesObject.XAtMinObjective array2table(bayesObject.MinObjective)]);
disp([bayesObject.XAtMinEstimatedObjective array2table(bayesObject.MinEstimatedObjective)]);
Results(ii,:) = [bayesObject.XAtMinEstimatedObjective array2table(bayesObject.MinEstimatedObjective)];
X=table2array(bayesObject.XTrace);
Y=bayesObject.ObjectiveTrace;
end
% disp([bayesObject.XAtMinObjective array2table(bayesObject.MinObjective)]); %%% optimal observation
% disp([bayesObject.XAtMinEstimatedObjective array2table(bayesObject.MinEstimatedObjective)]); %%% optimal estimate
figure;
plot(table2array(Results(:,7)),'o');
toc;
function f = mdlfun(tbl,mdl)
% Evaluate the trained surrogate model at the single point in the 1-row table tbl
sat1 = tbl.s1;
sat2 = tbl.s2;
sat3 = tbl.s3;
delta1 = tbl.d1;
delta2 = tbl.d2;
delta3 = tbl.d3;
vars = [sat1 sat2 sat3 delta1 delta2 delta3];
f = predict(mdl,vars);
end
Alan Weiss
Alan Weiss on 1 Jul 2020
I think that you are not letting bayesopt run long enough. You have a 6-D space to search. That is a lot of volume to cover, and you let it search for very few steps, so it probably does not get close to the minimum.
In fact, I think that it is a mistake to use bayesopt to look for a minimum of the objective function. I'd use fmincon starting from a bunch of initial points, or maybe do that automatically using MultiStart. But if you don't have Global Optimization Toolbox, just use fmincon. Or patternsearch if things are not smooth and you do have Global Optimization Toolbox.
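A minimal sketch of that suggestion, assuming the fitted surrogate Md1 and the variable bounds from the code above (MultiStart and createOptimProblem require Global Optimization Toolbox):
obj = @(x)predict(Md1,x);                 % x is a row vector [s1 s2 s3 d1 d2 d3]
lb = [0 0 0 -5 -5 -5];
ub = [10 10 10 5 5 5];
problem = createOptimProblem('fmincon','objective',obj, ...
    'x0',(lb+ub)/2,'lb',lb,'ub',ub);
ms = MultiStart;
[xbest,fbest] = run(ms,problem,50);       % run fmincon from 50 start points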
Good luck,
Alan Weiss
MATLAB mathematical toolbox documentation


More Answers (1)

xu supeng
xu supeng on 6 Jul 2020
Hi guys,
I already fixed this problem last week. The problem was that I tried to use the fitted objective function to estimate the next Y values, but the function is really a prediction function: it can only tell you the next optimal Xi, the place where you have the largest chance of finding the optimal Yi value. You then need to do an experiment or simulation at Xi to get the real Yi value, put it into the database, and update the fitted objective function. After that it works quite well. Here I share the code and the results.
clc;
clear;
tic;
%%% Columns 1-6 are the X parameters; column 7 holds the Y values
Data =[5.6207 3.4326 0.99076 -0.5681 2.2269 -0.76352 -2560.6
9.4475 5.1669 8.8368 -1.5091 4.6748 -1.2324 -738.19
6.2045 3.1342 5.3344 -1.0461 -4.807 -2.4016 -3044.6
9.1965 5.4567 2.741 4.9886 -2.2476 -1.2381 -427.06
5.5178 2.9224 0.47097 -4.7451 -4.3422 2.582 -1363.8
1.4587 6.82 9.6364 -0.55175 -4.2558 4.4485 -5265.8
9.003 1.3252 0.91798 3.956 1.3036 2.8979 -788.03
3.658 4.7395 9.7259 3.8048 -0.8229 0.097753 -2241.8
3.661 9.6049 9.3967 -0.52114 1.8493 -3.1426 -2594.9
7.538 6.3236 3.6793 -0.27248 -2.4987 2.9187 -3992.8
2.8826 2.9379 7.6082 -4.3078 3.5267 -0.76454 -1637.5
2.5475 0.41993 2.1689 3.0568 -0.5199 -2.6993 -971.66
9.8318 9.7541 7.1499 -1.069 -0.10125 2.48 -3309.2
5.0904 0.4727 6.0548 -3.1414 -1.1608 2.6665 -2494.9
5.3166 9.3089 3.4795 1.1064 1.6673 3.2222 -3272.3
4.8304 0.35989 1.7593 4.4904 -4.7422 1.9583 -421.03
5.5926 7.7423 7.4898 1.6932 3.9333 3.2528 -911.6
8.5082 8.2249 6.6189 4.0599 1.9319 -0.82999 -6993
7.0123 4.0614 2.0719 -2.5927 -3.013 -1.769 -1800.4
3.6205 2.0072 4.4781 -1.2097 3.811 2.8131 -1086.1];
X = Data(:,1:6);
Y = Data(:,7);
for ii=1:500 %%% 500 outer loops; each loop adds one new experimentally evaluated point to the previous X, Y data
% disp([X,Y]);
disp(ii);
Len=length(Y);
% disp(Len);
%%% train fitrgp to find the optimal hyperparameters for X, Y
Md1 = fitrgp(X,Y,'OptimizeHyperparameters','all', ...
    'HyperparameterOptimizationOptions',struct('ShowPlots',false,'Verbose',0));
% disp([Md1.BasisFunction,' ',Md1.KernelFunction,' ',num2str(Md1.Sigma)]);
sat1 = optimizableVariable('s1',[0,10],'Type','real');
sat2 = optimizableVariable('s2',[0,10],'Type','real');
sat3 = optimizableVariable('s3',[0,10],'Type','real');
delta1 = optimizableVariable('d1',[-5,5],'Type','real');
delta2 = optimizableVariable('d2',[-5,5],'Type','real');
delta3 = optimizableVariable('d3',[-5,5],'Type','real'); %% define the optimizable variables
vars = [sat1 sat2 sat3 delta1 delta2 delta3];
initialXList = table;
initialXList.s1 = X(:,1);
initialXList.s2 = X(:,2);
initialXList.s3 = X(:,3);
initialXList.d1 = X(:,4);
initialXList.d2 = X(:,5);
initialXList.d3 = X(:,6);
initialObjList = Y; %%% seed bayesopt with the existing X, Y data
bayesObject = bayesopt(@(tbl)mdlfun(tbl,Md1),vars, ...
    'MaxObjectiveEvaluations',Len+1, ...
    'InitialX',initialXList, ...
    'InitialObjective',initialObjList, ...
    'PlotFcn',{}, ...
    'Verbose',0);
if mod(ii,5)==0
disp([bayesObject.XAtMinObjective array2table(bayesObject.MinObjective)]);
disp([bayesObject.XAtMinEstimatedObjective array2table(bayesObject.MinEstimatedObjective)]);
end
Results(ii,:) = [table2array(bayesObject.XAtMinObjective) bayesObject.MinObjective table2array(bayesObject.XAtMinEstimatedObjective) bayesObject.MinEstimatedObjective];
X=table2array(bayesObject.XTrace);
Yplusone=Updates(X(end,:)); %%% run the real experiment/simulation at the new point Xi to get the true Y value (Updates is the user's own function, not shown here)
if Yplusone>0
Yplusone=0;
end
Y=bayesObject.ObjectiveTrace;
Y(end)=Yplusone;
end
figure;
plot(Results(:,7),'LineWidth',1.5);
hold on
plot(Results(:,14),'o');
xlabel('Number of iteration','FontSize',20);
ylabel('Slope','FontSize',20);
set(gca,'FontSize',20,'LineWidth',1.5);
toc;
function f = mdlfun(tbl,mdl)
% Evaluate the trained surrogate model at the single point in the 1-row table tbl
sat1 = tbl.s1;
sat2 = tbl.s2;
sat3 = tbl.s3;
delta1 = tbl.d1;
delta2 = tbl.d2;
delta3 = tbl.d3;
vars = [sat1 sat2 sat3 delta1 delta2 delta3];
f = predict(mdl,vars);
end
Both fitrgp and fitrlinear work well, but fitrlinear is much faster than fitrgp.
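For reference, a minimal sketch of the fitrlinear variant mentioned above: swap the surrogate fit inside the loop and leave everything else unchanged.
% Linear regression surrogate; much faster to refit on every loop iteration
Md1 = fitrlinear(X,Y,'OptimizeHyperparameters','auto', ...
    'HyperparameterOptimizationOptions',struct('ShowPlots',false,'Verbose',0));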

  0 Comments

