Optimization is an important tool in making decisions and in analyzing physical systems. In mathematical terms, an optimization problem is the problem of finding the best solution from among the set of all feasible solutions.
Introduction to Genetic Algorithms
A genetic algorithm is a search heuristic that is inspired by Charles Darwin’s theory of natural evolution. This algorithm reflects the process of natural selection where the fittest individuals are selected for reproduction in order to produce offspring of the next generation.
Five phases are considered in a genetic algorithm.
- Initial population
- Fitness function
- Selection
- Crossover
- Mutation
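The five phases above can be sketched in a few lines of MATLAB. This is a minimal illustrative sketch on a toy problem (maximizing a simple one-variable function); the population size, mutation step, and fitness function are arbitrary choices for the example, not part of any particular GA implementation.

```matlab
% Minimal GA sketch: maximize f(x) = x*sin(10*pi*x) + 1 on [0, 1]
fitness = @(x) x.*sin(10*pi*x) + 1;    % fitness function
pop = rand(20,1);                      % 1) initial population
for gen = 1:100
    f = fitness(pop);                  % 2) evaluate fitness
    [~,idx] = sort(f,'descend');
    parents = pop(idx(1:10));          % 3) selection: keep the fittest half
    % 4) crossover: average randomly chosen pairs of parents
    children = (parents(randi(10,10,1)) + parents(randi(10,10,1)))/2;
    % 5) mutation: small random perturbation, clipped back to [0, 1]
    children = min(max(children + 0.05*randn(10,1), 0), 1);
    pop = [parents; children];         % next generation
end
[best_f, k] = max(fitness(pop));       % best individual found
```

Each pass through the loop is one generation; the population gradually concentrates around high-fitness regions of the search space.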
Advantages of Genetic Algorithms
- Parallelism
- Global optimization
- Search over a larger solution space
- Requires less problem-specific information
- Provides multiple optimal solutions
- Probabilistic in nature
- Genetic representations using chromosomes
Disadvantages of Genetic Algorithms
- The need for special definitions
- Hyper-parameter tuning
- Computational complexity
How do genetic algorithms differ from traditional algorithms?
- A search space is the set of all possible solutions to a problem. Traditional algorithms maintain a single point in the search space, whereas genetic algorithms work with a whole population of points (compare feature selection with R.F.E versus genetic algorithms).
- Traditional algorithms require more information to perform a search, whereas genetic algorithms need only an objective function to calculate the fitness of an individual.
- Traditional algorithms cannot work in parallel, whereas genetic algorithms can, because the fitness evaluations of the individuals are independent of one another.
- One big difference is that instead of operating directly on candidate solutions, genetic algorithms operate on their representations (or encodings), often referred to as chromosomes.
- Traditional algorithms produce a single solution at the end, whereas genetic algorithms can yield multiple optimal solutions from different generations.
- Neither class of algorithm is guaranteed to find the global optimum, but genetic algorithms are more likely to reach it because operators such as crossover and mutation help them escape local optima.
- Traditional algorithms are deterministic in nature, whereas genetic algorithms are probabilistic and stochastic.
- Real-world problems are often multi-modal (they contain multiple locally optimal solutions). Traditional algorithms do not handle such problems well, whereas genetic algorithms, with the right parameter settings, can handle them very well because they explore a large portion of the solution space.
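The multi-modality point can be seen directly in MATLAB: a local solver such as fminsearch converges to whichever minimum is nearest its starting point, while ga (from the Global Optimization Toolbox) samples the whole search interval. The test function below is an arbitrary multi-modal example chosen only for illustration.

```matlab
% Multi-modal test function: many local minima, global minimum near x = 0
f = @(x) x.^2 + 10*sin(5*x);
x_local = fminsearch(f, 3);              % traditional local search, started at x0 = 3
x_ga    = ga(f, 1, [],[],[],[], -5, 5);  % GA searches the whole interval [-5, 5]
% fminsearch typically stays in the basin of its starting point, whereas
% ga is far more likely to land near the global minimum of the interval.
```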
AIM: To write a MATLAB program that optimizes the stalagmite function and finds its global maximum.
Procedure:
- The stalagmite function is defined in a separate file; this function is later called from the main GA program.
- The x and y inputs are defined using the linspace command.
- The input vectors are combined into arrays [xx, yy] with meshgrid, so that the function of two variables can be evaluated and plotted as a three-dimensional mesh/surface plot.
- The grid values are then passed as inputs to the stalagmite function by calling it; this gives the value of the stalagmite function at the various values of x and y.
- With the function evaluated over the grid, the maximum is located using the ga command, an in-built MATLAB function.
- The total number of times the GA needs to run is set with num_cases; a for loop runs the GA for each case and stores the result in an array.
- Because MATLAB's built-in ga minimizes a function, and we need the maximum, the stalagmite function is inverted by introducing a negative sign.
- The total study time is measured using the tic and toc commands placed at the start and end of the GA for loop, respectively.
- The results are then plotted using the surf and plot commands.
- The GA is run for three studies, each with a different configuration.
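The separate file mentioned in the first step is not reproduced in this report. A commonly used form of the stalagmite benchmark is sketched below; the exact coefficients in the author's file may differ, so treat these values as an assumed example. Note the negation in the last line, matching the procedure step about inverting the function for ga.

```matlab
function [f] = stalagmite(input_vector)
    % Stalagmite benchmark: product of sinusoidal peaks and Gaussian envelopes
    x = input_vector(1);
    y = input_vector(2);
    f1x = (sin(5.1*pi*x + 0.5))^6;
    f1y = (sin(5.1*pi*y + 0.5))^6;
    f2x = exp(-4*log(2)*((x - 0.0667)^2)/0.64);
    f2y = exp(-4*log(2)*((y - 0.0667)^2)/0.64);
    % Negated so that minimizing with ga finds the maximum of the function
    f = -(f1x*f2x*f1y*f2y);
end
```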
Program
Study 1: num_cases = 50
clear all
close all
clc
%defining our search space
x=linspace(0,0.6,150);
y=linspace(0,0.6,150);
[xx,yy]=meshgrid(x,y);
%evaluating the stalagmite function over the grid
for i=1:length(xx)
    for j=1:length(yy)
        input_vector(1)=xx(i,j);
        input_vector(2)=yy(i,j);
        f(i,j)=stalagmite(input_vector);
    end
end
%Study 1-statistics behavior
num_cases=50;
tic
for i=1:num_cases
[inputs,f_opt(i)]=ga(@stalagmite,2);
x_opt(i)=inputs(1);
y_opt(i)=inputs(2);
opt_f(i)=-f_opt(i);
end
study1_time=toc;
figure(1)
subplot(2,1,1)
hold on
surfc(x,y,f)
shading interp
plot3(x_opt,y_opt,f_opt,'marker','o','markersize',5,'markerfacecolor','r')
title('unbounded inputs')
subplot(2,1,2)
plot(opt_f)
xlabel('iterations')
ylabel('Function maximum')
Study 2
%Study 2-statistical behaviour with lower and upper bounds
tic
for i=1:num_cases
    [inputs,f_opt(i)]=ga(@stalagmite,2,[],[],[],[],[0;0],[0.6;0.6]);
    x_opt(i)=inputs(1);
    y_opt(i)=inputs(2);
    opt_f(i)=-f_opt(i);
end
study2_time=toc;
figure(2)
subplot(2,1,1)
hold on
surfc(x,y,f)
shading interp
plot3(x_opt,y_opt,f_opt,'marker','o','markersize',5,'markerfacecolor','r')
title('bounded inputs')
subplot(2,1,2)
plot(opt_f)
xlabel('iterations')
ylabel('Function maximum')
Output:
Study 3
%Study 3-statistical behaviour with bounds and an increased population size
options=optimoptions('ga');
options=optimoptions(options,'PopulationSize',170);
tic
for i=1:num_cases
    [inputs,f_opt(i)]=ga(@stalagmite,2,[],[],[],[],[0;0],[0.6;0.6],[],[],options);
    x_opt(i)=inputs(1);
    y_opt(i)=inputs(2);
    opt_f(i)=-f_opt(i);
end
study3_time=toc;
figure(3)
subplot(2,1,1)
hold on
surfc(x,y,f)
shading interp
plot3(x_opt,y_opt,f_opt,'marker','o','markersize',5,'markerfacecolor','r')
title('bounded inputs with increased population size')
subplot(2,1,2)
plot(opt_f)
xlabel('iterations')
ylabel('Function maximum')