Artificial bee colony algorithm

Dervis Karaboga (2010), Scholarpedia, 5(3):6915. doi:10.4249/scholarpedia.6915, revision #91003

Curator: Dervis Karaboga

The Artificial Bee Colony (ABC) algorithm is a swarm-based meta-heuristic algorithm that was introduced by Karaboga in 2005 (Karaboga, 2005) for optimizing numerical problems. It was inspired by the intelligent foraging behaviour of honey bees. The algorithm is specifically based on the model proposed by Tereshko and Loengarov (2005) for the foraging behaviour of honey bee colonies. The model consists of three essential components: employed foraging bees, unemployed foraging bees, and food sources. The first two components, the employed and unemployed foraging bees, search close to their hive for rich food sources, the third component. The model also defines two leading modes of behaviour which are necessary for self-organization and collective intelligence: recruitment of foragers to rich food sources, resulting in positive feedback, and abandonment of poor sources by foragers, causing negative feedback.

In ABC, a colony of artificial forager bees (agents) searches for rich artificial food sources (good solutions for a given problem). To apply ABC, the optimization problem under consideration is first converted to the problem of finding the parameter vector which minimizes an objective function. The artificial bees then randomly discover a population of initial solution vectors and iteratively improve them by employing two strategies: moving towards better solutions by means of a neighbour search mechanism, and abandoning poor solutions.



Global Optimization Problem

A global optimization problem can be defined as finding the parameter vector \(\vec x\) that minimizes an objective function \(f (\vec x )\ :\)

\[\tag{1} {\rm minimize~} f(\vec x),~~\vec x = (x_1 , x_2, \ldots ,x_i, \ldots,x_{n-1}, x_n ) \in \mathbb{R}^n \]


which is constrained by the following inequalities and/or equalities:

\[\tag{2} {\rm ~~~~~~~~~~~~~~~~~~~~~~}l_i \leq x_i\leq u_i, ~~~i=1,\ldots,n \]


\[\tag{3} {\rm subject~to:~}~~~g_j( \vec{x} ) \leq 0,~{\rm for} ~~j=1,\ldots,p \]


\[\tag{4} {\rm ~~~~~~~~~~~~~~~~~~~} h_j(\vec{x})=0,~ {\rm for}~ ~j=p+1,\ldots,q \]


\(f (\vec x )\) is defined on a search space, \(S\ ,\) which is an \(n\)-dimensional rectangle in \(\mathbb{R}^n\) (\(S \subseteq \mathbb{R}^n\)). The variable domains are limited by their lower and upper bounds given in (2).

This problem is also known as a constrained optimization problem. If it is an unconstrained optimization problem, then both \(p=0\) and \(q=0\ .\)
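For concreteness, a small hypothetical instance of (1)-(4) written in Python follows; the objective, bounds and constraint are chosen purely for illustration and are not part of the ABC formulation:

import numpy as np

n = 2
l = np.array([-5.0, -5.0])     # lower bounds l_i in (2)
u = np.array([ 5.0,  5.0])     # upper bounds u_i in (2)

def f(x):                      # objective function (1): a simple quadratic
    return float((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)

def g_1(x):                    # one inequality constraint (3), feasible when g_1(x) <= 0
    return 1.0 - x[0] - x[1]

# Here p = 1 (one inequality constraint) and q - p = 0 (no equality constraints).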

The Artificial Bee Colony Meta-heuristic

In ABC, the colony of artificial bees contains three groups of bees: employed bees associated with specific food sources, onlooker bees watching the dance of employed bees within the hive to choose a food source, and scout bees searching for food sources randomly. Both onlookers and scouts are also called unemployed bees. Initially, all food source positions are discovered by scout bees. Thereafter, the nectar of the food sources is exploited by employed bees and onlooker bees, and this continual exploitation will ultimately cause the sources to become exhausted. The employed bee whose food source has been exhausted then becomes a scout bee and searches for new food sources once again. In ABC, the position of a food source represents a possible solution to the problem and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The number of employed bees is equal to the number of food sources (solutions), since each employed bee is associated with one and only one food source.

The general scheme of the ABC algorithm is as follows:

Initialization Phase
REPEAT
    Employed Bees Phase
    Onlooker Bees Phase
    Scout Bees Phase
    Memorize the best solution achieved so far
UNTIL (Cycle = Maximum Cycle Number or a maximum CPU time is reached)

Initialization Phase

All the vectors of the population of food sources, \(\vec{x_{m}}\) (\(m=1,\ldots,SN\), where \(SN\) is the population size), are initialized by scout bees and the control parameters are set. Since each food source, \(\vec{x_{m}}\ ,\) is a solution vector to the optimization problem, each \(\vec{x_{m}}\) vector holds \(n\) variables (\(x_{mi},\ i=1,\ldots,n\)) which are to be optimized so as to minimize the objective function.

The following definition might be used for initialization purposes (5):

\[\tag{5} x_{mi}=l_{i}+rand(0,1)*(u_i-l_i)\]


where \(l_i\) and \(u_i\) are the lower and upper bound of the parameter \(x_{mi}\ ,\) respectively.
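A minimal sketch of this initialization in Python (NumPy); the population size, dimension and bounds are illustrative values, not prescribed by the algorithm:

import numpy as np

rng = np.random.default_rng()

SN, n = 20, 5                          # population size and problem dimension (assumed values)
l = np.full(n, -5.0)                   # lower bounds l_i
u = np.full(n,  5.0)                   # upper bounds u_i

# Eq. (5): x_mi = l_i + rand(0,1) * (u_i - l_i) for every food source m and variable i
foods = l + rng.random((SN, n)) * (u - l)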

Employed Bees Phase

Employed bees search for new food sources (\(\vec{\upsilon_{m}}\)) having more nectar within the neighbourhood of the food source (\(\vec{x_{m}}\)) in their memory. They find a neighbour food source and then evaluate its profitability (fitness). For example, they can determine a neighbour food source \(\vec{\upsilon_{m}}\) using the formula given by equation (6):

\[\tag{6} \upsilon_{mi} = x_{mi} + \phi_{mi}(x_{mi} - x_{ki})\]


where \(\vec{x_k}\) is a randomly selected food source, \(i\) is a randomly chosen parameter index and \(\phi_{mi}\) is a random number within the range \([-a,a]\ .\) After producing the new food source \(\vec {\upsilon_m}\ ,\) its fitness is calculated and a greedy selection is applied between \(\vec{\upsilon_{m}}\) and \(\vec{x_{m}}\ .\)

The fitness value of the solution, \(fit_m(\vec{x_m})\ ,\) might be calculated for minimization problems using the following formula (7)

\[\tag{7} fit_m (\vec{x_m})= \begin{cases} \dfrac{1}{1 + f_m (\vec{x_m})} & {\rm if}~~f_m(\vec{x_m}) \ge 0 \\ 1 + \left| f_m (\vec{x_m}) \right| & {\rm if}~~f_m (\vec{x_m}) < 0 \end{cases}\]


where \(f_m(\vec{x_m})\) is the objective function value of solution \(\vec{x_m}\ .\)
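The following Python fragment sketches one employed-bee move for a single food source, combining equations (6) and (7) with the greedy selection; the sphere objective and the choice \(a=1\) are assumptions made only for this sketch:

import numpy as np

rng = np.random.default_rng()

def objective(x):                                # illustrative objective: sphere function
    return float(np.sum(x ** 2))

def fit(fx):                                     # Eq. (7): fitness for minimization
    return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

SN, n, a = 20, 5, 1.0
foods = rng.uniform(-5.0, 5.0, (SN, n))          # current food sources x_m
f_vals = np.array([objective(x) for x in foods])
trials = np.zeros(SN, dtype=int)                 # counters of failed improvements

m = 0                                            # the employed bee of source m
k = rng.choice([j for j in range(SN) if j != m]) # random partner source, k != m
i = rng.integers(n)                              # random parameter index

v = foods[m].copy()
v[i] = v[i] + rng.uniform(-a, a) * (v[i] - foods[k, i])   # Eq. (6)

# Greedy selection between v_m and x_m
if fit(objective(v)) > fit(f_vals[m]):
    foods[m], f_vals[m], trials[m] = v, objective(v), 0
else:
    trials[m] += 1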

Onlooker Bees Phase

Unemployed bees consist of two groups: onlooker bees and scouts. Employed bees share their food source information with onlooker bees waiting in the hive, and the onlooker bees then probabilistically choose their food sources depending on this information. In ABC, an onlooker bee chooses a food source depending on probability values calculated from the fitness values provided by the employed bees. For this purpose, a fitness-based selection technique can be used, such as the roulette wheel selection method (Goldberg, 1989).

The probability value \(p_m\) with which \(\vec{x_m}\) is chosen by an onlooker bee can be calculated by using the expression given in equation (8) :

\[\tag{8} p_m = \frac{{fit_m(\vec{x_m}) }}{{\sum\limits_{m = 1}^{SN} {fit_m (\vec{x_m})} }} \ .\]


After a food source \( \vec{x_m}\) for an onlooker bee is probabilistically chosen, a neighbourhood source \( \vec{\upsilon_m}\) is determined by using equation (6), and its fitness value is computed. As in the employed bees phase, a greedy selection is applied between \(\vec{\upsilon_{m}}\) and \(\vec{x_{m}}\ .\) Hence, more onlookers are recruited to richer sources and positive feedback behaviour appears.
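A sketch of the roulette-wheel selection of equation (8) in Python; the fitness values below are placeholder numbers standing in for values computed with (7):

import numpy as np

rng = np.random.default_rng()

# Fitness values fit_m of the SN food sources (illustrative numbers).
fits = np.array([0.9, 0.4, 0.7, 0.2, 0.5])
SN = len(fits)

# Eq. (8): selection probability proportional to fitness.
p = fits / fits.sum()

# Each of the SN onlookers picks a source index m with probability p_m and then
# searches its neighbourhood with Eq. (6), exactly as in the employed bees phase.
chosen = rng.choice(SN, size=SN, p=p)
print(chosen)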

Scout Bees Phase

The unemployed bees who choose their food sources randomly are called scouts. Employed bees whose solutions cannot be improved after a predetermined number of trials, specified by the user of the ABC algorithm and called the “limit” or “abandonment criterion” herein, become scouts and their solutions are abandoned. The converted scouts then start to randomly search for new solutions. For instance, if solution \(\vec{x_m}\) has been abandoned, the new solution discovered by the scout which was previously the employed bee of \(\vec{x_m}\) can be defined by (5). Hence those sources which are initially poor or have been made poor by exploitation are abandoned, and negative feedback behaviour arises to balance the positive feedback.
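Putting the four phases together, the following is a minimal, self-contained Python sketch of the basic ABC scheme described above. The test function (the sphere function), the bounds and the control-parameter values are illustrative assumptions, not part of the original formulation:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem: minimize the sphere function on the box [-5, 5]^n.
n = 5
lower, upper = np.full(n, -5.0), np.full(n, 5.0)

def objective(x):
    return float(np.sum(x ** 2))

def fit(fx):                                   # Eq. (7)
    return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

# The three control parameters of ABC: population size, limit, maximum cycle number.
SN, limit, max_cycles = 20, 20 * n, 500

# Initialization Phase, Eq. (5).
foods = lower + rng.random((SN, n)) * (upper - lower)
f_vals = np.array([objective(x) for x in foods])
trials = np.zeros(SN, dtype=int)
best_x, best_f = None, np.inf

def neighbour(m, a=1.0):                       # Eq. (6), clipped to the bounds
    k = rng.choice([j for j in range(SN) if j != m])
    i = rng.integers(n)
    v = foods[m].copy()
    v[i] += rng.uniform(-a, a) * (v[i] - foods[k, i])
    return np.clip(v, lower, upper)

def greedy(m, v):                              # greedy selection between v_m and x_m
    fv = objective(v)
    if fit(fv) > fit(f_vals[m]):
        foods[m], f_vals[m], trials[m] = v, fv, 0
    else:
        trials[m] += 1

for cycle in range(max_cycles):
    for m in range(SN):                        # Employed Bees Phase
        greedy(m, neighbour(m))
    fits = np.array([fit(fv) for fv in f_vals])
    p = fits / fits.sum()                      # Eq. (8)
    for _ in range(SN):                        # Onlooker Bees Phase
        m = int(rng.choice(SN, p=p))
        greedy(m, neighbour(m))
    m = int(np.argmax(trials))                 # Scout Bees Phase
    if trials[m] > limit:
        foods[m] = lower + rng.random(n) * (upper - lower)
        f_vals[m], trials[m] = objective(foods[m]), 0
    m = int(np.argmin(f_vals))                 # memorize the best solution so far
    if f_vals[m] < best_f:
        best_f, best_x = f_vals[m], foods[m].copy()

print("best objective value:", best_f)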

In summary, the ABC algorithm,

1) is inspired by the foraging behaviour of honeybees,

2) is a global optimization algorithm,

3) was initially proposed for numerical optimization (e.g., Karaboga, 2005),

4) can also be used for combinatorial optimization problems (e.g., Pan et al., 2010),

5) can be used for unconstrained and constrained optimization problems (e.g., Karaboga and Akay, 2009; Karaboga and Basturk, 2007; Domínguez, 2009),

6) employs only three control parameters (population size, maximum cycle number and limit) that are to be predetermined by the user,

7) is quite simple, flexible and robust (some of the relevant publications expressing these merits of the ABC algorithm are Rao et al, 2008; Kang et al, 2009; Singh, 2009; Karaboga, 2009, included in the References list).

Applications of ABC

An Unconstrained Optimization Problem: Neural Network Training for the XOR problem

Training an artificial neural network is an optimization task since it is desired to find the optimal set of weights of a neural network in the training process. The goal is to optimize the network weights by minimizing an objective function such as the mean square error (MSE) given by (9):

\[\tag{9} E(\vec{w}(t)) = \frac{1}{n}\sum\limits_{j = 1}^n {\sum\limits_{k = 1}^K {(d_k - o_k )^2 } } \ ,\]


where \(E(\vec{w}(t)) \) is the error at the \(t\)th iteration; \(\vec{w}(t) \) is the vector of the connection weights at the \(t\)th iteration; \(d_k\) is the desired value and \(o_k\) the actual value of the \(k\)th output node; \(K\) is the number of output nodes; and \(n\) is the number of training patterns.

Neural networks have been successfully applied to problems in pattern classification, function approximation, optimization, pattern matching and associative memory.

Exclusive-OR (XOR) is a difficult classification problem mapping two binary inputs to a single binary output: (0 0; 0 1; 1 0; 1 1) → (0; 1; 1; 0). This classical benchmark problem is also a hard task for neural networks.

Figure 1: An example of neural network structure for the XOR problem

In the simulations, three networks were used: a 2-2-1 feed-forward neural network with six connection weights and no biases (six parameters, XOR6), a 2-2-1 feed-forward neural network with six connection weights and three biases (nine parameters, XOR9), and a 2-3-1 feed-forward neural network with nine connection weights and four biases (thirteen parameters, XOR13).
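To make the objective function (9) concrete for the smallest configuration (XOR6), here is a Python sketch that computes the MSE of a 2-2-1 feed-forward network with six weights and no biases; the sigmoid activation and the weight ordering are assumptions of this sketch, not details fixed by the article:

import numpy as np

# XOR training patterns: two binary inputs -> one binary output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([0.0, 1.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse_xor6(w):
    # Eq. (9) for a 2-2-1 network with 6 connection weights and no biases (XOR6).
    # Assumed layout: w[0:4] input-to-hidden weights (2x2), w[4:6] hidden-to-output weights.
    w = np.asarray(w)
    W1 = w[:4].reshape(2, 2)
    W2 = w[4:6]
    hidden = sigmoid(X @ W1)               # hidden-layer activations
    out = sigmoid(hidden @ W2)             # network output o_k for each pattern
    return float(np.mean((D - out) ** 2))

# ABC would minimize mse_xor6 over the 6-dimensional weight vector, for example:
print(mse_xor6(np.random.default_rng(0).uniform(-1.0, 1.0, 6)))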

In Table 1, the mean MSE values of 30 runs of each configuration are reported for ABC and for the standard Particle Swarm Optimization (PSO) algorithm (Kennedy and Eberhart, 1995); each run of the algorithms was started from a random population generated with a different seed. The population size, SN, was set to 50 and the limit value was set to SN*n, where n is the dimension of the weight vector.

Table 1: Mean MSE of 30 runs of the ABC and standard PSO algorithms in the ANN training process

        XOR6       XOR9       XOR13
ABC     0.007051   0.006956   0.006079
PSO     0.097255   0.057676   0.014549

From the results presented in Table 1, it is clear that the basic ABC algorithm is more successful than the standard PSO at training neural networks on the XOR benchmark problem in all cases (Karaboga et al., 2007).

A Constrained Optimization Problem: Welded Beam Design

The welded beam design is a real-life application problem. As illustrated in Fig. 2, the problem consists of dimensioning a welded steel beam and the welding length so as to minimize its cost subject to constraints on the shear stress, \(\tau\ ,\) the bending stress in the beam, \(\sigma\ ,\) the buckling load on the bar, \(P_c\ ,\) the end deflection of the beam, \(\delta\ ,\) and side constraints. There are four design variables, \(x_1, x_2, x_3, x_4\ ,\) which in structural engineering are commonly symbolized by the letters shown in Fig. 2 (\( h, l, t, b\)). Structural analysis of this beam leads to the following nonlinear objective function subject to five nonlinear and two linear inequality constraints, as given below:

\[\tag{10} {\rm minimize~} f(\vec{x}) = 1.10471x_1^2x_2 + 0.04811x_3x_4(14.0 + x_2) \]


\[ \begin{array}{ll} {\rm subject~to:} & g_1 (\vec{x}):~\tau (\vec{x}) - \tau _{\max } \le 0 \\ & g_2 (\vec{x}):~\sigma (\vec{x}) - \sigma _{\max } \le 0 \\ & g_3 (\vec{x}):~x_1 - x_4 \le 0 \\ & g_4 (\vec{x}):~0.10471x_1^2 + 0.04811x_3 x_4 (14.0 + x_2 ) - 5.0 \le 0 \\ & g_5 (\vec{x}):~0.125 - x_1 \le 0 \\ & g_6 (\vec{x}):~\delta (\vec{x}) - \delta _{\max } \le 0 \\ & g_7 (\vec{x}):~P - P_c (\vec{x}) \le 0 \\ \end{array} \]


The optimum solution is located on the boundaries of the feasible region, and the ratio of the feasible region to the entire search space is quite small for this problem, which makes it a truly difficult problem for any optimization algorithm.
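A partial Python sketch of this problem follows, giving the objective (10) and the constraints that are fully specified above (\(g_3\), \(g_4\), \(g_5\)); the stress, deflection and buckling terms \(\tau(\vec{x})\), \(\sigma(\vec{x})\), \(\delta(\vec{x})\) and \(P_c(\vec{x})\) are not reproduced in this article, so they are only indicated as placeholders:

import numpy as np

# Design variables x = (x1, x2, x3, x4) = (h, l, t, b).

def cost(x):
    # Eq. (10): welding cost plus material cost.
    return 1.10471 * x[0] ** 2 * x[1] + 0.04811 * x[2] * x[3] * (14.0 + x[1])

def g3(x):   # g3: x1 must not exceed x4
    return x[0] - x[3]

def g4(x):   # g4 as given above
    return 0.10471 * x[0] ** 2 + 0.04811 * x[2] * x[3] * (14.0 + x[1]) - 5.0

def g5(x):   # g5: minimum value of x1
    return 0.125 - x[0]

# g1, g2, g6 and g7 involve tau(x), sigma(x), delta(x) and P_c(x); their expressions
# are not given in this article and would be taken from the cited literature.

x_best = np.array([0.20573, 3.470489, 9.036624, 0.20573])   # best solution from Table 2
print(cost(x_best))                                          # approximately 1.724852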

Figure 2: The Welded Beam Design Problem

Generally, a constraint handling technique needs to be incorporated into optimization algorithms that were proposed for solving unconstrained problems. Therefore, in order to handle the constraints of this problem, the ABC algorithm employs Deb’s rules, which are used instead of the greedy selection employed between \(\vec{\upsilon_{m}}\) and \(\vec{x_{m}}\) in the version of ABC proposed for unconstrained optimization problems (Karaboga and Basturk, 2007). Deb’s method uses a tournament selection operator, where two solutions are compared at a time by applying the following criteria (Deb, 2000), sketched in code after the list:

  • Any feasible solution satisfying all constraints is preferred to any infeasible solution violating any of the constraints,
  • Among two feasible solutions, the one having better fitness value is preferred,
  • Among two infeasible solutions, the one having the smaller constraint violation is preferred.
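This pairwise comparison can be sketched in Python as follows; representing each candidate by its objective value and its total constraint violation is an assumption made only for the sketch:

def deb_better(a, b):
    """Return True if solution a is preferred to solution b under Deb's rules.

    a and b are dicts with keys:
      'f'          objective value (lower is better),
      'violation'  sum of constraint violations (0 means feasible).
    """
    a_feasible, b_feasible = a['violation'] == 0, b['violation'] == 0
    if a_feasible and not b_feasible:            # rule 1: feasible beats infeasible
        return True
    if b_feasible and not a_feasible:
        return False
    if a_feasible and b_feasible:                # rule 2: both feasible -> better objective
        return a['f'] < b['f']
    return a['violation'] < b['violation']       # rule 3: smaller constraint violation

# In the constrained ABC, this comparison replaces the greedy selection between v_m and x_m:
print(deb_better({'f': 2.0, 'violation': 0.0}, {'f': 1.0, 'violation': 0.3}))   # True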

Table 2 presents the values of the variables and the constraints for the optimum solution found by ABC. Table 3 shows the summary statistics obtained from 30 runs of the ABC algorithm compared to those of the \((\mu+\lambda)\)-ES method (Mezura-Montes and Coello, 2005). The low mean and standard deviation values show the robustness of the ABC algorithm.

Table 2: Parameter and constraint values of the best solution obtained by the ABC algorithm

x1        x2         x3         x4        g1   g2       g3   g4        g5        g6        g7   f(x)
0.20573   3.470489   9.036624   0.20573   0    -2E-06   0    -3.43298  -0.08073  -0.23554  0    1.724852

Table 3: Statistics of the results obtained by the ABC and \((\mu+\lambda)\)-ES algorithms

                                                        Best       Mean       Std. Dev.   Evaluations
ABC                                                     1.724852   1.741913   3.1E-02     30000
\((\mu+\lambda)\)-ES (Mezura-Montes and Coello, 2005)   1.724852   1.777692   8.8E-02     30000

Other Approaches Inspired from Honeybee Foraging Behaviour

Approaches for Combinatorial Optimization

In 2001, Lucic and Teodorovic introduced a Bee System based on the foraging behaviour of bee colonies for solving difficult combinatorial optimization problems (Lucic and Teodorovic, 2001). Teodorovic and Dell'Orco (2005) proposed the Bee Colony Optimization (BCO) meta-heuristic and applied it to the ride-matching problem. Another algorithm simulating foraging behaviour is the BeeAdHoc model described by Wedde and Farooq (2005), which is an energy-efficient routing method for mobile ad hoc networks. Drias et al. (2005) described a meta-heuristic named Bees Swarm Optimization and adapted it to the features of the MAX-W-SAT problem. Chong et al. (2006) proposed a bee colony optimization algorithm based on dance durations to select a new path; the algorithm was applied to job shop scheduling. Quijano and Passino (2007) introduced a model of honey bee social foraging for solving a class of optimal resource allocation problems.

Approaches for Numerical Optimization

Yang (2005) developed a virtual bee algorithm (VBA) to solve numerical optimization problems; the original algorithm works with only two variables. Pham et al. (2005) described the Bees Algorithm, which performs a kind of neighbourhood search combined with random search and can be used for both combinatorial and numerical optimization.

Current Trends

The initial applications of ABC were in the area of numerical optimization since it was originally proposed for these kinds of problems (Karaboga, 2005).

Current research topics include the extension of ABC to the optimization of hybrid functions; to the solution of integer programming and engineering design problems (Rao et al., 2008; Singh, 2009; Karaboga, 2009); to the solution of combinatorial (Pan et al., 2010) and multi-objective optimization problems (Omkar et al., 2010); and to the solution of clustering (Karaboga and Ozturk, 2010), neural network training (Karaboga and Ozturk, 2009) and image processing (Xu and Duan, 2010) problems.

Several papers reporting the research related to ABC and its applications can be found at http://mf.erciyes.edu.tr/abc

References

Chong, C. S., Sivakumar, A. I., Malcolm Low, Y. H., Gay, K. L. (2006). A bee colony optimization algorithm to job shop scheduling. In Proceedings of the 38th conference on Winter simulation WSC '06, pages 1954-1961, California.

Deb, K. (2000). An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering, 186(2- 4):311–338, Elsevier, Netherlands.

Domínguez, O. C. (2009), An adaptation of the scout bee behavior in the Artificial Bee Colony algorithm to solve constrained optimization problems, Laboratorio Nacional de Informática Avanzada (LANIA), MsC, Thesis, Supervisor: Efrén Mezura-Montes.

Drias, H., Sadeg, S., Yahi, S. (2005). Cooperative bees swarm for solving the maximum weighted satisfiability problem. In Computational Intelligence and Bioinspired Systems, volume 3512/2005 of LNCS: 318-325, Springer, Berlin.

Goldberg, D.E. (1989), Genetic algorithms in search, optimization and machine learning, Addison-Wesley Professional, ISBN: 0201157675.

Kang, F., Li, J., Xu, Q. (2009), Structural inverse analysis by hybrid simplex artificial bee colony algorithms, Computers & Structures, 87 (13:14), 861-870, Elsevier, Netherlands.

Karaboga, D. (2005). An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department.

Karaboga, D., Basturk, B. (2007), Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems, Advances in Soft Computing: Foundations of Fuzzy Logic and Soft Computing, volume 4529/2007 of LNCS: 789-798, Springer, Berlin.

Karaboga, D., Basturk, B., Ozturk, C. (2007), Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks, Modeling Decisions for Artificial Intelligence, volume 4617/2007 of LNCS: 318-319, Springer, Berlin.

Karaboga, D., Akay,B. (2009), A Comparative Study of Artificial Bee Colony Algorithm, Applied Mathematics and Computation, 214, 108-132, Elsevier, Netherlands.

Karaboga, D., Ozturk, C. (2009),Neural Networks Training by Artificial Bee Colony Algorithm on Pattern Classification, Neural Network World, 19 (3), 279-292, Institute of Computer Science AS CR, v. v. i., Czech Republic.

Karaboga, D., Ozturk, C. (2010), A novel clustering approach: Artificial Bee Colony (ABC) algorithm, Applied Soft Computing, Elsevier, Netherlands, In Press.

Karaboga, N. (2009), A new design method based on artificial bee colony algorithm for digital IIR filters, Journal of Franklin Institute, 346 (4): 328-348, Elsevier, Netherlands.

Kennedy, J., Eberhart, R.C. (1995), Particle Swarm Optimization, In Proceedings of 1995 IEEE International Conference on Neural Networks, 4: 1942–1948, Perth, Western Australia.

Lucic, P. and Teodorovic, D. (2001). Bee system: Modeling combinatorial optimization transportation engineering problems by swarm intelligence. In Preprints of the Tristan IV Triennial Symposium on Transportation Analysis, pages: 441-445, SaoMiguel, Azores Islands, Portugal.

Mezura-Montes, E., Coello Coello, C. (2005), Useful infeasible solutions in engineering optimization with evolutionary algorithms. Advances in Artificial Intelligence, volume 3789/2005 of LNCS: 652–662, Springer, Berlin.

Omkar, S.N., Senthilnath, J. , Khandelwal, R., Narayana Naik, G., Gopalakrishnan, S. (2010), Artificial Bee Colony (ABC) for multi-objective design optimization of composite structures, Applied Soft Computing, Elsevier, Netherlands, In Press.

Pan, Q-K., Tasgetiren, M. F., Suganthan, P. N., Chua, T. J. (2010), A Discrete Artificial Bee Colony Algorithm for the Lot-streaming Flow Shop Scheduling Problem, Information Sciences, Elsevier, Netherlands, In Press.

Pham, D. T., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S., Zaidi, M. (2005). The bees algorithm. Technical report, Manufacturing Engineering Centre, Cardiff University, UK.

Quijano, N. and Passino, K. (2007). Honey bee social foraging algorithms for resource allocation, part i: Algorithm and theory, In Proceedings of American Control Conference, 2007. ACC '07, pages 3383-3388, New York.

Rao, R. S., Narasimham, S.V.L., Ramalingaraju, M. (2008), Optimization of distribution network configuration for loss reduction using artificial bee colony algorithm, International Journal of Electrical Power and Energy Systems Engineering (IJEPESE), 1 (2): 708-714, World Academy of Science, Engineering and Technology.

Singh, A. (2009), An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem, Applied Soft Computing, 9 (2), 625-631, Elsevier, Netherlands.

Tereshko, V., Loengarov, A. (2005), Collective decision-making in honey bee foraging dynamics, Computing and Information Systems, 9 (3): 1-7, University of the West of Scotland, UK.

Teodorovic, D., Dell'Orco, M. (2005). Bee colony optimization - a cooperative learning approach to complex transportation problems, In Proceedings of the 10th EWGT Meeting and 16th Mini EURO Conference, pages: 51-60.

Vries, H., Biesmeijer J.C. (1998), Modelling collective foraging by means of individual behaviour rules in honey-bees, Behavioral Ecology and Sociobiology , 44: 109-124, Springer, Berlin.

Wedde, H., Farooq, M. (2005). The wisdom of the hive applied to mobile ad-hoc networks. In Proceedings of the Swarm Intelligence Symposium 2005, pages: 341-348, Pasadena, California.

Xu, C., Duan, H. (2010), Artificial bee colony (ABC) optimized edge potential function (EPF) approach to target recognition for low-altitude aircraft, Pattern Recognition Letters, Elsevier, Netherlands, In Press.

Yang, X. S. (2005). Engineering optimizations via nature-inspired virtual bee algorithms. In Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach, volume 3562/2005 of LNCS: 317-323, Springer, Berlin.


Appendix -- Foraging Behaviour of Honey Bees

A honey bee colony is an interesting example of a swarm, performing its tasks within and outside of the hive in an intelligent manner. Since the availability of the nectar sources around the hive varies in space and time, the colony of bees has to adapt its foraging behaviour to the changes in the environment. For instance, an appropriate division of the workers’ efforts between exploring new sources and exploiting the available ones is very important for the maintenance of the colony (Vries and Biesmeijer, 1998). As stated before, the minimal model of forage selection that leads to the emergence of collective intelligence in honey bee swarms consists of three essential components: food sources, employed foragers, and unemployed foragers (Tereshko and Loengarov, 2005).

(i) Food sources: The “profitability” of a food source is related to several factors such as its closeness to the nest, richness of energy, and the ease of extracting the energy from the source. In the minimal model, the “profitability” may be defined by one of these quantities.

(ii) Employed foragers: These foragers are associated with a specific food source they exploit. They carry information to the hive and share it with other foragers waiting in the hive by dancing.

(iii) Unemployed foragers: These foragers consist of scouts and onlookers. The scouts randomly search the environment surrounding the hive for new food sources, and the onlooker bees waiting in the hive detect a food source by means of the information presented to them by the employed foragers (Tereshko and Loengarov, 2005).

The exchange of information among the foragers is very important for the formation of collective knowledge. The most important part of the hive for exchanging information is the dancing area where different types of dances are performed: Waggle dance, Round dance, Tremble dance, etc. Communication among bees related to the quality of food sources is called the waggle dance. Since information about all the current rich sources is available to onlooker bees on the dance floor, they watch numerous dances and direct themselves to profitable sources. Employed foragers share their information in proportion to the profitability of their food sources. As the information circulating about them increases, the probability of the onlooker bees choosing the more profitable sources also increases (Tereshko and Loengarov, 2005).

External Links

http://mf.erciyes.edu.tr/abc : This is the official web site dedicated to the ABC Algorithm.

See also

Algorithm, Swarm intelligence, Ant colony optimization, Flocking and swarming
