


JOURNALS 







The benchmark functions and some of the algorithms proposed for the
special session on real parameter optimization of the 2005
IEEE Congress on Evolutionary Computation (CEC'05) have played
and still play an important role in the assessment of the state of
the art in continuous optimization. In this note, we first show
that, if bound constraints are not enforced for the final reported
solutions, state-of-the-art algorithms produce infeasible best
candidate solutions on a majority of the CEC'05 benchmark functions. This
is the case even though the optima of the CEC'05 functions are
within the specified bounds. This observation has important
implications for algorithm comparisons, and this note also draws
attention to the fact that authors may have drawn wrong conclusions
from experiments using the CEC'05 problems.
@article{LiaMolMonStu14,
author = {Tianjun Liao and Daniel Molina and Marco A. {Montes de
Oca} and Thomas St{\"u}tzle},
title = {A Note on Bound Constraints Handling for the {IEEE} {CEC}'05 Benchmark Function Suite},
journal = {Evolutionary Computation},
year = {2014},
volume = {22},
number = {2},
pages = {351--359}
}







In this article, we propose UACOR, a unified ant colony optimization (ACO)
algorithm for continuous optimization. UACOR includes algorithmic components
from ACOR, DACOR and IACOR-LS, three ACO algorithms for continuous optimization
that have been proposed previously. Thus, it can be used to instantiate each of
these three earlier algorithms; in addition, from UACOR we can also generate new
continuous ACO algorithms that have not been considered before in the
literature. In fact, UACOR allows the usage of automatic algorithm configuration
techniques to automatically derive new ACO algorithms. To show the benefits of
UACOR's flexibility, we automatically configure two new ACO algorithms, UACOR-s
and UACOR-c, and evaluate them on two sets of benchmark functions from a recent
special issue of the Soft Computing (SOCO) journal and the IEEE 2005 Congress on
Evolutionary Computation (CEC'05), respectively. We show that UACOR-s is
competitive with the best of the 19 algorithms benchmarked on the SOCO benchmark
set and that UACOR-c performs better than IPOP-CMA-ES and statistically
significantly better than five other algorithms benchmarked on the CEC'05 set.
These results show the high potential ACO algorithms have for continuous
optimization and suggest that automatic algorithm configuration is a viable
approach for designing state-of-the-art continuous optimizers.
@article{LiaStuMonDor14,
author = {Tianjun Liao and Thomas St{\"u}tzle and Marco A. {Montes de
Oca} and Marco Dorigo},
title = {A Unified Ant Colony Optimization Algorithm for Continuous
Optimization},
journal = {European Journal of Operational Research},
year = {2014},
volume = {234},
number = {3},
pages = {597--609}
}






In this paper, we introduce ACOMV, an ant colony
optimization (ACO) algorithm that extends the ACOR algorithm
for continuous optimization to tackle mixed-variable optimization
problems. In ACOMV, the decision variables of an optimization
problem can be explicitly declared as continuous, ordinal, or
categorical, which allows the algorithm to treat them adequately.
ACOMV includes three solution generation mechanisms: a continuous
optimization mechanism (ACOR), a continuous relaxation
mechanism (ACOMV-o) for ordinal variables, and a categorical
optimization mechanism (ACOMV-c) for categorical variables.
Together, these mechanisms allow ACOMV to tackle mixed-variable
optimization problems.
We also define a novel procedure to generate artificial mixed-variable
benchmark functions and we use it to automatically
tune ACOMV's parameters. The tuned ACOMV is tested on
various real-world continuous and mixed-variable engineering
optimization problems. Comparisons with results from the literature
demonstrate the effectiveness and robustness of ACOMV
on mixed-variable optimization problems.
@article{LiaSocMonStuDor14,
author = {Tianjun Liao and Krzysztof Socha and Marco A. {Montes de Oca} and Thomas St{\"u}tzle and Marco Dorigo},
title = {Ant Colony Optimization for Mixed-Variable Optimization Problems},
journal = {IEEE Transactions on Evolutionary Computation},
year = {2014},
volume = {18},
number = {4},
pages = {503--518}
}






In this article, we apply an automatic algorithm configuration tool to improve
the performance of the CMA-ES algorithm with increasing population size (iCMA-ES),
the best-performing algorithm on the CEC'05 benchmark set for continuous function
optimization. In particular, we consider a separation between tuning and test sets
and, thus, tune iCMA-ES on a different set of functions than those of the CEC'05
benchmark set. Our experimental results show that the tuned iCMA-ES improves significantly
over the default version of iCMA-ES. Furthermore, we provide some further analysis
of the impact of the modified parameter settings on iCMA-ES performance and a comparison
with recent results of algorithms that use CMA-ES as a subordinate local search.
@article{LiaMonStu13,
author = {Tianjun Liao and Marco A. {Montes de Oca} and Thomas St{\"u}tzle},
title = {Computational Results for an Automatically Tuned {CMA-ES} with Increasing
Population Size on the {CEC}'05 Benchmark Set},
journal = {Soft Computing},
year = {2013},
volume = {17},
number = {6},
pages = {1031--1046}
}






The problem of inferring the structure of a network of interactions between
biological variables from a set of observations is called reverse engineering.
In this paper, we propose an optimization algorithm, called MORE, that tackles
the two components of this problem: a discrete component, which represents the
topology of the underlying molecular interaction network, and a continuous
component, which represents the strength of the interactions. The model used to
generate candidate time series patterns is a sparse system of nonlinear
differential equations, complex enough to realistically describe the dynamics of
a biological system. The goal of the optimization process is to minimize the
error between the model’s dynamics and the observed data. Decomposing the
problem into the two aforementioned components allows us both to enforce system
sparsity, by globally constraining the number of edges in the network, and to
easily integrate a priori information about the structure of the underlying
interaction network. The design of MORE is based on thorough experimentation
with simulated and real-world networks of up to 10 nodes, which is larger than
the size of the networks used in the majority of the current proposals of
reverse engineering algorithms based on sufficiently complex interaction models.
@article{SamMonDiCTofStu12,
author = {Francesco Sambo and Marco A. {Montes de Oca} and Barbara {Di Camillo} and Giovanna Toffolo and Thomas St{\"u}tzle},
title = {{MORE}: {M}ixed {O}ptimization for {R}everse {E}ngineering.
An application to modeling biological networks response via sparse systems of
nonlinear differential equations},
journal = {IEEE/ACM Transactions on Computational
Biology and Bioinformatics},
year = {2012},
volume = {9},
number = {5},
pages = {1459--1471}
}






The performance of optimization algorithms, including those
based on swarm intelligence, depends on the values assigned to their parameters.
To obtain high performance, these parameters must be fine-tuned.
Since many parameters can take real values or integer values from
a large domain, it is often possible to treat the tuning problem as a continuous
optimization problem. In this article, we study the performance
of a number of prominent continuous optimization algorithms for parameter
tuning using various case studies from the swarm intelligence
literature. The continuous optimization algorithms that we study are
enhanced to handle the stochastic nature of the tuning problem. In particular,
we introduce a new post-selection mechanism that uses F-Race
in the final phase of the tuning process to select the best among elite
parameter configurations. We also examine the parameter space of the
swarm intelligence algorithms that we consider in our study and we show
that by fine-tuning their parameters one can obtain substantial improvements
over default configurations.
@article{ZhiMonBirStu12,
author = {Zhi Yuan and Marco A. {Montes de Oca} and Mauro Birattari and Thomas St{\"u}tzle},
title = {Continuous Optimization Algorithms for Tuning Real and Integer Parameters of Swarm Intelligence Algorithms},
journal = {Swarm Intelligence},
year = {2012},
volume = {6},
number = {1},
pages = {49--75}
}

Dorigo M., Floreano D., Gambardella L. M., Mondada F., Nolfi S., Baaboura T.,
Birattari M., Bonani M., Brambilla M., Brutschy A., Burnier D., Campo A., Christensen A. L., Decugnière A., Di Caro G.,
Ducatelle F., Ferrante F., Förster A., Martinez Gonzales J., Guzzi J., Longchamp V., Magnenat S., Mathews N.,
Montes de Oca M. A., O'Grady R., Pinciroli C., Pini G., Rétornaz P., Roberts J., Sperati V., Stirling T.,
Stranieri A., Stützle T., Trianni V., Tuci E., Turgut A. E. and Vaussard F.
Swarmanoid: A Novel Concept for the Study of Heterogeneous Robotic Swarms.
IEEE Robotics and Automation Magazine, 20(4):60-71. 2013.





Swarm robotics systems are characterised by decentralised control, limited
communication between robots, use of local information and emergence of global
behaviour. Such systems have shown their potential for flexibility and
robustness. However, existing swarm robotics systems are by and large still
limited to displaying simple proof-of-concept behaviours under laboratory
conditions. It is our contention that one of the factors holding back swarm
robotics research is the almost universal insistence on homogeneous system
components. We believe that swarm robotics designers must embrace heterogeneity
if they ever want swarm robotics systems to approach the complexity required of
real-world systems.
@article{Swarmanoid,
author = {Marco Dorigo and Dario Floreano and Luca Maria Gambardella and Francesco Mondada and Stefano Nolfi and
Tarek Baaboura and Mauro Birattari and Michael Bonani and Manuele Brambilla and Arne Brutschy and Daniel Burnier and
Alexandre Campo and Anders Lyhne Christensen and Antal Decugni{\`e}re and Gianni Di Caro and Frederick Ducatelle and
Eliseo Ferrante and Alexander F{\"o}rster and Javier Martinez Gonzales and Jerome Guzzi and Valentin Longchamp and
St{\'e}phane Magnenat and Nithin Mathews and Marco A. {Montes de Oca} and Rehan O'Grady and Carlo Pinciroli and Giovanni Pini and
Philippe R{\'e}tornaz and James Roberts and Valerio Sperati and Timothy Stirling and Alessandro Stranieri and Thomas St{\"u}tzle
and Vito Trianni and Elio Tuci and Ali Emre Turgut and Florian Vaussard},
title = {Swarmanoid: A novel concept for the study of heterogeneous robotic swarms},
journal = {IEEE Robotics \& Automation Magazine},
year = {2013},
volume = {20},
number = {4},
pages = {60--71}
}







Collective decision-making is a process whereby the members of a group decide on
a course of action by consensus. In this paper, we propose a collective
decision-making mechanism for robot swarms deployed in scenarios in which robots
can choose between two actions that have the same effects but that have
different execution times. The proposed mechanism allows a swarm composed of
robots with no explicit knowledge about the difference in execution times
between the two actions to choose the one with the shorter execution time. We
use an opinion formation model that captures important elements of the scenarios
in which the proposed mechanism can be used in order to predict the system's
behavior. The model predicts that when the two actions have different average
execution times, the swarm chooses with high probability the action with the
shorter average execution time. We validate the model's predictions through a
swarm robotics experiment in which robots must choose one of two paths of
different length that connect two locations. Thanks to the proposed mechanism, a
swarm made of robots that do not measure time or distance is able to
choose the shorter path.
@article{MonFerSchPinBirDor11,
author = {Marco A. {Montes de Oca} and Eliseo Ferrante and Alexander Scheidler and Carlo Pinciroli and Mauro Birattari and Marco Dorigo},
title = {Majority-Rule Opinion Dynamics with
Differential Latency: A Mechanism for
Self-Organized Collective Decision-Making},
journal = {Swarm Intelligence},
year = {2011},
volume = {5},
number = {3--4},
pages = {305--327}
}







The design cycle of high-performance optimization algorithms requires the
designer to make several decisions. These decisions range from implementation
details to the setting of parameter values for testing intermediate designs.
Proper parameter setting can be crucial for the effective assessment of
algorithmic components because a bad parameter setting can make a good algorithmic
component perform poorly. This situation may lead the designer to discard
promising components that just happened to be tested with bad parameter
settings. Automatic parameter tuning techniques are being used by practitioners
to obtain peak performance from already designed algorithms. However, automatic
parameter tuning also plays a crucial role during the design cycle of optimization
algorithms. In this paper, we present a case study of a tuning-in-the-loop
approach for redesigning a particle swarm-based optimization algorithm
for tackling large-scale continuous optimization problems. Rather than
just presenting the final algorithm, we describe the whole redesign process.
Lastly, we study the scalability behavior of the final algorithm in the context
of this special issue.
@article{MonAydStu11,
author = {Marco A. {Montes~de~Oca} and Dogan Aydin and Thomas St{\"u}tzle},
title = {An Incremental Particle Swarm for Large-Scale Continuous Optimization Problems: An Example of Tuning-in-the-Loop (Re)Design of Optimization Algorithms},
journal = {Soft Computing},
volume = {15},
number = {11},
pages = {2233--2255},
year = {2011}
}






Incremental social learning (ISL) was proposed as a way to improve the
scalability of systems composed of multiple learning agents. In this article, we
show that ISL can be very useful to improve the performance of populationbased
optimization algorithms. Our study focuses on two particle swarm optimization
(PSO) algorithms: a) the incremental particle swarm optimizer (IPSO),
which is a PSO algorithm with a growing population size in which the initial
position of new particles is biased toward the best-so-far solution, and
b) the incremental particle swarm optimizer with local search (IPSOLS),
in which solutions are further improved through a local search
procedure.
We first derive analytically the probability density function induced by the
proposed initialization rule applied to new particles. We then compare the
performance of IPSO and IPSOLS on a set of benchmark functions with that of
other PSO algorithms (with and without local search) and a random restart local
search algorithm. Finally, we measure the benefits of using incremental social
learning on PSO algorithms by running IPSO and IPSOLS on problems with different
fitness distance correlations.
@article{ISLPSMdeO,
author = {Marco A. {Montes~de~Oca} and Thomas St{\"u}tzle and Ken {Van den Enden} and Marco Dorigo},
title = {Incremental Social Learning in Particle Swarms},
journal = {IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics},
volume={41},
number={2},
pages={368--384},
year={2011}
}
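The biased initialization of new particles mentioned in the abstract can be sketched as follows. This is an illustrative assumption, not IPSO's exact rule: the convex-combination form and the `bias` parameter name are mine, chosen only to show how a random point can be pulled toward the best-so-far solution.

```python
import random

def init_new_particle(dim, lower, upper, best, bias=None):
    """Sketch of biased initialization for a new particle (illustrative;
    the exact rule in IPSO may differ). A uniformly random point is
    pulled toward the best-so-far solution 'best' by a factor 'bias'."""
    x = [random.uniform(lower, upper) for _ in range(dim)]  # unbiased random point
    b = random.random() if bias is None else bias           # pull strength in [0, 1)
    return [xi + b * (bi - xi) for xi, bi in zip(x, best)]  # convex move toward best

# A new particle in a 3-D search space, pulled halfway toward the swarm's best
particle = init_new_particle(3, -5.0, 5.0, best=[1.0, 2.0, 3.0], bias=0.5)
```

With `bias = 0` the particle is purely random; with `bias = 1` it starts exactly at the best-so-far solution.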







During the last decade, many variants of the original particle swarm optimization (PSO) algorithm have been proposed. In
many cases, the difference between two variants can be seen as an algorithmic component being present in one variant
but not in the other. In the first part of the paper, we present the results and insights obtained from a detailed empirical study
of several PSO variants from a component difference point of view. In the second part of the paper, we propose a new PSO
algorithm that combines a number of algorithmic components that showed distinct advantages in the experimental study
concerning optimization speed and reliability. We call this composite algorithm Frankenstein's PSO in an analogy to
the popular character of Mary Shelley's novel. Frankenstein's PSO performance evaluation shows that by integrating
components in novel ways effective optimizers can be designed.
@article{FPSOMdeO09,
author = {Marco A. {Montes~de~Oca} and Thomas St{\"u}tzle and Mauro Birattari and Marco Dorigo},
title = {Frankenstein's {PSO}: {A} Composite Particle Swarm Optimization Algorithm},
journal = {IEEE Transactions on Evolutionary Computation},
volume = {13},
number = {5},
pages = {1120--1132},
year = {2009}
}






Particle swarm optimization is a populationbased stochastic approach for solving continuous and discrete optimization problems.
In particle swarm optimization, simple software agents, called particles, move in the solution space of an optimization problem. The position of a
particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing
their velocity according to rules originally inspired by behavioral models of bird flocking.
Particle swarm optimization belongs to the class of swarm intelligence techniques that are used to solve optimization problems.
@article{PSODorigo08,
author = "Marco Dorigo and Marco A. {Montes~de~Oca} and Andries P. Engelbrecht",
title = "Particle Swarm Optimization",
journal = "Scholarpedia",
volume = "3",
number = "11",
pages = "1486",
year = "2008"
}
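The velocity-update rule sketched in the entry can be written down concretely. The inertia-weight formulation and the parameter values below (w, c1, c2) are one common choice, not the only formulation of PSO:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration for a single particle: the new velocity blends
    inertia with stochastic attraction toward the particle's own best
    position (pbest) and the swarm's best position (gbest)."""
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)  # cognitive component
             + c2 * random.random() * (gb - xi)  # social component
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

# A stationary particle at the origin is pulled toward the best positions
x, v = pso_step([0.0, 0.0], [0.0, 0.0], pbest=[1.0, 1.0], gbest=[2.0, 2.0])
```

Iterating this step over a population of particles, updating `pbest` and `gbest` whenever better positions are found, yields the basic algorithm described above.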


  

Class-clustering algorithms based on the collective behavior of some species of social
insects are knowledge-discovery tools that rely on stigmergic communication,
that is, on indirect communication between agents through local modifications of the environment.
Inspired by some aspects of the phenomenon of trophallaxis, the exchange of liquid food between
insects, we study two strategies for direct communication between agents in this kind of algorithm: (i)
memory sharing and (ii) sharing of environment maps. The goal of the information exchange
between agents is to improve the quality of the final clustering. The effect of including these
communication mechanisms is evaluated by comparing the quality of the final clustering obtained
with direct communication against that obtained without communication (other than the stigmergic one).
We also show that the results depend on the density of agents in the environment, as well as on the usefulness of the exchanged information.
@article{ComDirectMdeO05,
author = "Marco A. {Montes~de~Oca} and Leonardo Garrido and Jos\'e L. Aguirre",
title = "Efectos de la comunicaci\'on directa entre agentes en los algoritmos de agrupaci\'on de
clases basados en el comportamiento de insectos sociales",
journal = "Inteligencia Artificial",
volume = "9",
number = "25",
pages = "59--69",
year = "2005"
}



BOOK CHAPTERS 







Swarm intelligence is the collective problem-solving behavior of
groups of animals and artificial agents. Often, swarm intelligence is the result
of self-organization, which emerges from the agents' local interactions with one
another and with their environment. Such local interactions can be positive,
negative, or neutral. Positive interactions help a swarm of agents solve a
problem. Negative interactions are those that block or hinder the agents'
task-performing behavior. Neutral interactions do not affect the swarm's
performance. Reducing the effects of negative interactions is one of the main
tasks of a designer of effective swarm intelligence systems. Traditionally, this
has been done through the complexification of the behavior and/or the
characteristics of the agents that comprise the system, which limits
scalability and increases the difficulty of the design task. In collaboration
with colleagues, I have proposed a framework, called incremental social learning
(ISL), as a means to reduce the effects of negative interactions without
complexifying the agents' behavior or characteristics. In this paper, I describe
the ISL framework and three instantiations of it, which demonstrate the
framework's effectiveness. The swarm intelligence systems used as case studies
are the particle swarm optimization algorithm, the ant colony optimization
algorithm for continuous domains, and the artificial bee colony optimization algorithm.
@InCollection{Mon12,
author = {Marco A. {Montes~de~Oca}},
title = {Incremental Social Learning in
Swarm Intelligence Algorithms for Continuous Optimization},
booktitle = {Computational Intelligence.
Revised and Selected Papers of the International Joint Conference, IJCCI 2011.},
publisher = {Springer},
pages = {31--45},
address = {Berlin, Germany},
year = 2013,
editor = {Kurosh Madani and others}
}

Montes de Oca M. A., Cotta C., and Neri F.
Local Search. Handbook of Memetic Algorithms. F. Neri, C. Cotta, and P. Moscato (Eds.) pp. 29-41. 2012. Springer. Berlin, Germany.





In memetic algorithms, the improvement of individual solutions through
local search is of crucial importance for attaining high performance. In this chapter,
we give a concise overview of the main issues that arise in the definition of
local search algorithms, their implementation and their integration into a memetic
algorithm. Our overview covers local search algorithms for both combinatorial and
continuous optimization problems.
@InCollection{MonCotNer11,
author = {Marco A. {Montes~de~Oca} and Carlos Cotta and Ferrante Neri},
title = {Local Search},
booktitle = {Handbook of Memetic Algorithms},
publisher = {Springer},
pages = {29--41},
address = {Berlin, Germany},
year = 2012,
editor = {F. Neri and C. Cotta and P. Moscato}
}

Stützle T., López-Ibáñez M., Pellegrini P., Maur M., Montes de Oca M. A., Birattari M., and Dorigo M.
Parameter Adaptation in Ant Colony
Optimization. Autonomous Search. Y. Hamadi, E. Monfroy, and F. Saubion (Eds.) pp. 191-215. 2011. Springer. Berlin, Germany.





This chapter reviews the approaches that have been studied for the online
adaptation of the parameters of ant colony optimization (ACO) algorithms, that is,
the variation of parameter settings while solving an instance of a problem. We classify
these approaches according to the main classes of online parameter-adaptation
techniques. One conclusion of this review is that the available approaches do not
exploit an in-depth understanding of the effect of individual parameters on the behavior
of ACO algorithms. Therefore, this chapter also presents results of an empirical
study of the solution quality over computation time for Ant Colony System
and MAX-MIN Ant System, two well-known ACO algorithms. The first part of
this study provides insights on the behaviour of the algorithms in dependence of
fixed parameter settings. One conclusion is that the best fixed parameter settings
of MAX-MIN Ant System depend strongly on the available computation time. The
second part of the study uses these insights to propose simple, pre-scheduled parameter
variations. Our experimental results show that such pre-scheduled parameter
variations can dramatically improve the anytime performance of MAX-MIN Ant
System.
@InCollection{Stuetzle11PAACO,
author = {Thomas St{\"u}tzle and Manuel L{\'o}pez-Ib{\'a}{\~n}ez and Paola Pellegrini and Michael Maur and Marco A. {Montes~de~Oca} and Mauro Birattari and Marco Dorigo},
title = {Parameter Adaptation in Ant Colony Optimization},
booktitle = {Autonomous Search},
pages = {191--215},
publisher = {Springer},
address = {Berlin, Germany},
year = 2011,
editor = {Y. Hamadi and E. Monfroy and F. Saubion},
}

Dorigo M., Montes de
Oca M. A., Oliveira S., and Stützle T. Ant Colony
Optimization. Wiley Encyclopedia of Operations Research and Management
Science. J. J. Cochran et al. (Eds.) Vol. 1, pp. 114-125. 2011. John Wiley & Sons, Inc. New
York.





Ant Colony Optimization (ACO) is a class of algorithms
for tackling optimization problems that is inspired by the pheromone trail laying and
following behavior of some ant species. While foraging, ants leave on the ground a
chemical substance, called pheromone, that attracts other fellow nestmates. The
pheromone trail laying and following behavior of the ants induces a positive feedback
process whereby trails with high concentration of pheromones become more and more
attractive as more ants follow them. As a result, whenever two paths to the
same food source are discovered, the colony is more likely to select the shortest one because
ants will traverse it faster and thus it will have a higher pheromone concentration
than the longer one.
ACO algorithms exploit a mechanism analogous to the one that allows colonies of
real ants to find shortest paths. In ACO, (artificial) ants construct candidate solutions to
the problem instance under consideration. Their solution construction is stochastically
biased by (artificial) pheromone trails, which are represented in the form of numerical
information that is associated with appropriately defined solution components, and
possibly by heuristic information based on the input data of the instance being solved.
A key aspect of ACO algorithms is the use of a positive feedback loop implemented by
iterative modifications of the artificial pheromone trails that are a function of the ants'
search experience; the goal of this feedback loop is to bias the colony towards the most
promising solutions.
The ACO metaheuristic is a highlevel algorithmic framework for applying the
above ideas to the approximate solution of optimization problems. When applied to a
specific optimization problem, this ACO framework needs to be concretized by taking
into account the specifics of the problem under consideration and possibly by adding
additional techniques such as problemspecific solution improvement procedures. The
development of effective ACO algorithm variants has been one of the most active research
directions in ACO. This chapter gives an overview of some of the most important developments
in this direction.
@incollection{DorMonOliStu2011eorms,
Author = {Marco Dorigo and Marco A. {Montes de Oca} and Sabrina Oliveira and Thomas St{\"u}tzle},
Booktitle = {Wiley Encyclopedia of Operations Research and Management Science},
Editor = {Cochran, James J. and Cox, Louis A. and Keskinocak, Pinar and Kharoufeh, Jeffrey P. and Smith, J. Cole},
Pages = {114--125},
Publisher = {John Wiley \& Sons, Inc.},
Title = {Ant Colony Optimization},
Volume = {1},
Year = {2011}}
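The pheromone-biased solution construction and the positive-feedback trail update described above can be sketched in a few lines. This is a generic sketch: the random-proportional choice rule and the parameter names alpha, beta, and rho are conventional in the ACO literature, and the exact update differs between variants such as Ant Colony System and MAX-MIN Ant System.

```python
import random

def choose_component(candidates, tau, eta, alpha=1.0, beta=2.0):
    """Pick the next solution component with probability proportional to
    pheromone^alpha * heuristic^beta (a conventional ACO choice rule)."""
    weights = [tau[c] ** alpha * eta[c] ** beta for c in candidates]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return c
    return candidates[-1]

def evaporate_and_deposit(tau, solution, quality, rho=0.1):
    """Positive feedback: evaporate all trails, then reinforce the
    components used by a solution, proportionally to its quality."""
    for c in tau:
        tau[c] *= (1.0 - rho)  # evaporation weakens unused trails
    for c in solution:
        tau[c] += quality      # good solutions attract future ants

# Reinforcing component 'a' makes it more likely to be chosen by later ants
tau = {'a': 1.0, 'b': 1.0}
evaporate_and_deposit(tau, ['a'], quality=0.5)
```

Repeated over many ants and iterations, this evaporate-then-deposit cycle implements the feedback loop that biases the colony toward the most promising solutions.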



PEER-REVIEWED CONFERENCES / WORKSHOPS / SYMPOSIA






In this paper, we present a continuous-time binary consensus protocol
whereby entities connected via a directed ring topology solve the
one-dimensional density classification problem. In our model, the
participating entities behave as non-ideal relays, that is, they have
memory of the trajectory of an internal state variable, which gives
them hysteretic properties. We show that this feature is necessary for
the system to reach consensus on the state shared by the initial
majority. The connections between this protocol and collective
decision-making mechanisms in swarm intelligence systems are also
discussed.
@incollection{MonRos14,
Author = {Marco A. {Montes de Oca} and Louis F.\ Rossi},
Booktitle = {Proceedings of the International Conference on the Synthesis and Simulation of Living Systems (ALIFE 2014)},
Title = {A Continuous-Time Binary Consensus Protocol with Hysteretic Units Applied to the Density Classification Problem},
Editor = {H.\ Sayama and others},
Address={Cambridge, MA},
Publisher={MIT Press},
Note = {To appear},
Year = {2014}
}






In previous work, we introduced a novel swarming interpolation framework and
validated its effectiveness on static fields. In this paper, we show that a
slightly revised version of this framework is able to track fields that
translate, rotate, or expand over time, enabling interpolation of both static
and dynamic fields. Our framework can be used to control autonomous mobile
sensors into flexible spatial arrangements in order to interpolate values of a
field in an unknown region. The key advantage to this framework is that the
stable sensor distribution can be chosen to resemble a Chebyshev distribution,
which can be optimal for certain ideal geometries.
@incollection{KirMonSenRosShe13,
Author = {Joshua Kirby and Marco A. {Montes de Oca} and Steven Senger and Louis F.\ Rossi and ChienChung Shen},
Booktitle = {Proceedings of the IEEE Conference on Self-Adaptive and Self-Organizing Systems (SASO 2013)},
Title = {Tracking Time-Dependent Scalar Fields with Swarms of Mobile Sensors},
Editor = {T.\ Holvoet and others},
Address={Los Alamitos, CA},
Publisher={IEEE Computer Society Press},
Pages = {159--168},
Year = {2013}
}






Automated algorithm configuration methods have proven to be instrumental
in deriving high-performing algorithms and such methods are increasingly
often used to configure evolutionary algorithms. One major challenge in
devising automatic configuration techniques is to handle the inherent
stochasticity in the configuration problems. This article analyses a
post-selection mechanism that can also be used for this task. The
central idea of the post-selection mechanism is to generate in a first
phase a set of highquality candidate algorithm configurations and then
to select in a second phase from this candidate set the (statistically)
best configuration. Our analysis of this mechanism indicates its high
potential and suggests that it may be helpful to improve
automatic algorithm configuration methods.
@incollection{YuaStuLauBirMon13,
Author = {Zhi Yuan and Thomas St{\"u}tzle and Hoong Chuin Lau and
Mauro Birattari and Marco A. {Montes de Oca}},
Booktitle = {Proceedings of Genetic and Evolutionary Computation Conference (GECCO 2013)},
Title = {An Analysis of Post-Selection in Automatic Configuration},
Editor = {Christian Blum},
Pages = {1557--1564},
Year = {2013},
Address={New York, NY},
Publisher={ACM Press}
}

Montes de Oca M. A.,
Ferrante E., Scheidler A., and Rossi L. F.
Binary Consensus via Exponential Smoothing. Proceedings
of the Second International Conference on Complex Sciences: Theory and Applications (COMPLEX 2012).
K. Glass et al. (Eds.) Santa Fe, NM, USA. December 2012 pp. 244-255. LNICST
126. Copyright Springer.





In this paper, we reinterpret the most basic exponential smoothing equation as a
model of social influence. Exponential smoothing has been used as a time-series
forecasting and data filtering technique since the 1950s. The most basic
exponential smoothing equation, $S^{\,t+1} = (1-\alpha)S^{t} + \alpha X^{t}$, is
used to estimate the value of a series at time $t+1$, denoted by $S^{\,t+1}$, as
a convex combination of the current estimate $S^{t}$ and the actual observation
of the time series $X^{t}$. In our work, we interpret the variable $S^{t}$ as an
agent's tendency to imitate the observed behavior of another agent, which is
represented by a binary variable $X^{t}$. We study the dynamics of the resulting
system when the agents exhibit a behavior for a period of time, called latency,
whose duration is stochastic. Latency allows us to model real-life situations
such as product adoption, or action execution. When different latencies are
associated with the two different behaviors, a bias is produced. This bias makes
all the agents in a population adopt one specific behavior. This phenomenon has
applications in domains where it is desired that a group of agents adopts a
behavior by consensus.
@incollection{MonFerSchRos12,
Author = {Marco A. {Montes de Oca} and Eliseo Ferrante and Alexander Scheidler and Louis F. Rossi},
Booktitle = {LNICST 126. Proceedings of the Second International
Conference on Complex Sciences: Theory and Applications (COMPLEX 2012)},
Title = {Binary Consensus via Exponential Smoothing},
Editor = {Kristin Glass and others},
Pages={244--255},
Year = {2013},
Address={Berlin, Germany},
Publisher={Springer}
}
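The smoothing recurrence in the abstract can be turned into a toy simulation. Everything here (agent count, $\alpha$, the random-peer observation scheme, the function names) is an illustrative assumption, not the paper's exact protocol:

```python
import random

def smooth(s, x, alpha):
    """One exponential smoothing step: S(t+1) = (1 - alpha)*S(t) + alpha*X(t)."""
    return (1.0 - alpha) * s + alpha * x

def simulate(n_agents=50, steps=100, alpha=0.5, seed=42):
    """Each agent repeatedly observes the binary behavior of a random peer
    and smooths its tendency S (to imitate behavior 1) toward the observation."""
    rng = random.Random(seed)
    tendency = [0.5] * n_agents                        # S^0 for every agent
    behavior = [rng.random() < 0.5 for _ in range(n_agents)]
    for _ in range(steps):
        for i in range(n_agents):
            observed = behavior[rng.randrange(n_agents)]  # X^t: a peer's behavior
            tendency[i] = smooth(tendency[i], 1.0 if observed else 0.0, alpha)
        behavior = [rng.random() < s for s in tendency]   # act on the tendency
    return tendency
```

Because each update is a convex combination of values in [0, 1], every tendency stays in [0, 1]; the positive feedback (acting on the tendency one smooths toward) is what drives the population toward one behavior.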

Liao T.,
Molina D., Stützle T., Montes de Oca M. A., and Dorigo M.
An
ACO Algorithm Benchmarked on the BBOB Noiseless
Function Testbed. Proceedings of the Workshop on Black-Box
Optimization Benchmarking of the Genetic and Evolutionary Computation Conference (GECCO 2012).
A. Auger, N. Hansen, V. Heidrich-Meisner, O. Mersmann, P. Posik, and M. Preuss (Eds.)
Philadelphia, PA, USA. July 2012. pp. 159-166. Copyright ACM.





ACO_{R} is an ant colony optimization algorithm for continuous domains. In this
article, we benchmark ACO_{R} on the BBOB noiseless function testbed and compare
its performance to PSO, ABC, and GA algorithms from previous BBOB workshops. Our
experiments show that ACO_{R} performs better than PSO, ABC, and GA on the moderate
functions, ill-conditioned functions, and multimodal functions. Among 24
functions, ACO_{R} solved 19 in dimension 5, 9 in dimension 20, and 7 across
dimensions from 2 to 40. Furthermore, in dimension 5, we present the results of
ACO_{R} when it uses variable correlation handling. The latter version is
competitive with (1+1)-CMA-ES and BIPOP-CMA-ES on the five-dimensional functions.
@incollection{LiaMolStuMonDor12,
Author = {Tianjun Liao and Daniel Molina and Thomas St{\"u}tzle and Marco A. {Montes de Oca} and Marco Dorigo},
Booktitle = {Proceedings of the Workshop on Black-Box Optimization Benchmarking of the Genetic and Evolutionary
Computation Conference (GECCO 2012)},
Editor = {A. Auger and N. Hansen and V. Heidrich-Meisner and O. Mersmann and P. Posik and M. Preuss},
Title = {An {ACO} Algorithm Benchmarked on the {BBOB} Noiseless Function Testbed},
Year={2012},
Pages = {159--166},
Address={New York, NY},
Publisher={ACM Press}
}






In this paper, we describe a novel swarming framework that guides autonomous
mobile sensors into a flexible arrangement to interpolate values of a field in
an unknown region. The algorithm is devised so that the locations of the sensors
approximate a Chebyshev distribution, which can be optimal for certain ideal
geometries. The framework is designed to dynamically adjust to changes in the
region of interest, and operates well with very little a priori knowledge of the
given region. For comparison, we interpolate a variety of nontrivial fields
using a standard swarming algorithm that produces a uniform distribution and our
new algorithm. We find that our new algorithm interpolates fields with greater
accuracy.
@incollection{KirMonSenRosShe12,
Author = {Josh Kirby and Marco A. {Montes de Oca} and Steven Senger and Louis F. Rossi and ChienChung Shen},
Booktitle = {Proceedings of the Eighth International Conference on Swarm Intelligence (ANTS 2012)},
Title = {Swarm Interpolation Using an Approximate Chebyshev Distribution},
Editor = {M. Dorigo and others},
Pages = {324--331},
Address={Berlin, Germany},
Publisher={Springer},
Year = {2012}
}






We modify an artificial bee colony algorithm to (i) make the
population size grow over time, and (ii) apply local search on strategically
selected solutions. The modified algorithm obtains very good results
on a set of large-scale continuous optimization problems. This is not the
first time we see that these algorithmic components make an initially
non-competitive algorithm produce state-of-the-art results. In previous
work, we have shown that the same modifications substantially improve
the performance of particle swarm optimization (PSO) and ant colony
optimization (ACO) algorithms. Altogether, these results suggest that
population growth and local search are key components for obtaining
high-quality results.
@incollection{AydLiaMonStu11,
Author = {Dogan Aydin and Tianjun Liao and Marco A. {Montes de Oca} and Thomas St{\"u}tzle},
Booktitle = {LNCS 7401. Proceedings of the International Conference on Artificial Evolution (EA 2011)},
Title = {Improving Performance via
Population Growth and Local Search: The Case of the Artificial Bee Colony Algorithm},
Pages = {85--96},
Address = {Berlin, Germany},
Publisher = {Springer},
Year={2012}
}






Sep-G-CMA-ES is a variant of G-CMA-ES with lower time
complexity. In this paper, we evaluate the impact various
ways of tuning have on the performance of Sep-G-CMA-ES
on large scalable continuous benchmark problems. We
extracted seven parameters from Sep-G-CMA-ES and
tuned them across training functions with various search
landscapes using an automatic algorithm configuration tool, Iterated
F-Race. Considering different compositions of training
functions, the best performance of the tuned Sep-G-CMA-ES
was obtained using mixed dimensional problems. Our
comparative study on large scalable benchmark functions
also shows that the default Sep-G-CMA-ES outperforms G-CMA-ES.
Moreover, the tuned Sep-G-CMA-ES significantly improves
over both G-CMA-ES and the default Sep-G-CMA-ES.
@incollection{LiaMonStu11,
Author = {Tianjun Liao and Marco A. {Montes de Oca} and Thomas St{\"u}tzle},
Booktitle = {Proceedings of the Workshop on Scaling Behaviours of
Landscapes, Parameters and Algorithms of the Genetic and Evolutionary
Computation Conference (GECCO 2011)},
Editor = {Ender {\"O}zcan and Andrew J. Parkes and Jonathan Rowe},
Title = {Tuning Parameters across Mixed Dimensional Instances: A Performance Scalability Study of Sep-G-CMA-ES},
Year={2011},
Pages = {703--706},
Address={New York, NY},
Publisher={ACM Press}
}






ACO_{R} is one of the most popular ant colony optimization
algorithms for tackling continuous optimization problems.
In this paper, we propose IACO_{R}-LS, a variant
of ACO_{R} that uses local search and features a growing
solution archive. We experiment with Powell's conjugate
directions set, Powell's BOBYQA, and Lin-Yu Tseng's
Mtsls1 methods as local search procedures. Automatic parameter
tuning results show that IACO_{R}-LS with Mtsls1
(IACO_{R}-Mtsls1) is not only a significant improvement over
ACO_{R}, but is also competitive with the state-of-the-art
algorithms described in a recent special issue of the Soft
Computing journal. Further experimentation with IACO_{R}-Mtsls1
on an extended benchmark function suite, which
includes functions from both the special issue of Soft Computing
and the IEEE 2005 Congress on Evolutionary Computation,
demonstrates its good performance on continuous
optimization problems.
@incollection{LiaMonDogStuDor11,
Author = {Tianjun Liao and Marco A. {Montes de Oca} and Dogan Aydin and Thomas St{\"u}tzle and Marco Dorigo},
Booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2011)},
Editor = {N. Krasnogor and others},
Title = {An Incremental Ant Colony Algorithm with Local Search
for Continuous Optimization},
Year={2011},
Pages = {125--132},
Address={New York, NY},
Publisher={ACM Press}
}






Positive feedback and a consensus-building procedure
are the key elements of a mechanism that allows a
population of agents to select the fastest-to-execute action from
a set of two alternatives. This self-organized decision-making
mechanism can be seen as a collective learning algorithm
because even though individual agents do not directly compare
the available alternatives, the population selects the action that
improves the efficiency of the system. However, when a large
population is involved, the time required to achieve consensus
on one of the available choices may render such a
decision-making mechanism impractical.
In this paper, we tackle this problem by applying the incremental
social learning approach, which consists of a growing
population size coupled with a social learning mechanism.
The experimental results obtained show that by using the
incremental social learning approach, the collective learning
process can be accelerated substantially. The conditions under
which this is true are described.
@incollection{MdeO:ISLSASO10,
Author = {Marco A. {Montes de Oca} and Thomas St{\"u}tzle and Mauro Birattari and Marco Dorigo},
Booktitle = {Proceedings of the Fourth IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2010)},
Editor = {I. Gupta and others},
Title = {Incremental Social Learning Applied to a Decentralized Decision-Making Mechanism: Collective Learning Made Faster},
Pages={243--252},
Year={2010},
Note = {Preliminary version available at \url{http://iridia.ulb.ac.be/IridiaTrSeries/IridiaTr2010018r001.pdf}}
}






In this paper, we study how an opinion dynamics model can be the core of a
collective decision-making mechanism for swarm robotics.
Our main result is that when opinions represent action choices, the opinion associated
with the action that is the fastest to execute spreads in the population. Moreover, the
spread of the best choice happens even when only a minority is initially advocating for
it. The key elements involved in this process are consensus building and positive feedback.
A foraging task that involves collective transport is used to illustrate the potential of
the proposed approach.
@incollection{MdeO:OpinionDynamicsANTS10,
Author = {Marco A. {Montes~de~Oca} and Eliseo Ferrante and Nithin Mathews and Mauro Birattari and Marco Dorigo},
Booktitle = {LNCS 6234. Proceedings of the Seventh International Conference on Swarm Intelligence (ANTS 2010)},
Editor = {M. Dorigo and others},
Title = {Opinion Dynamics for Decentralized Decision-Making in a Robot Swarm},
Pages = {252--263},
Address={Berlin, Germany},
Publisher={Springer},
Year = {2010}
}






Designing high-performing swarm intelligence algorithms frequently
requires choosing appropriate parameter settings. Often, the
parameters to be set are either continuous or have a large integer domain.
For such tasks, it therefore seems attractive to apply state-of-the-art
continuous optimization algorithms instead of using a tedious and error-prone
hands-on approach. In this article, we study the performance of various
continuous optimization algorithms on this algorithm configuration task, using
several case studies of algorithms to be configured from the swarm intelligence
literature.
@incollection{Yuan:ParameterTuningANTS10,
Author = {Zhi Yuan and Marco A. {Montes~de~Oca} and Thomas St{\"u}tzle and Mauro Birattari},
Booktitle = {LNCS 6234. Proceedings of the Seventh International Conference on Swarm Intelligence (ANTS 2010)},
Editor = {M. Dorigo and others},
Title = {Modern Continuous Optimization Algorithms for Tuning Real and Integer Algorithm Parameters},
Pages = {204--215},
Address={Berlin, Germany},
Publisher={Springer},
Year = {2010}
}

   

In nature, there are examples of large groups of animals that are capable of
making optimal collective-level decisions without the need for global control or
information. Understanding the underlying mechanisms of such decentralized
decision-making processes may help us to design artificial systems that exhibit
some of the desirable properties, like scalability or fault tolerance, that are
usually observed in these natural systems. In this paper, we show how a simple
social influence mechanism, based on the binary particle swarm optimization
algorithm, can make a whole population of agents achieve consensus on one of two
possible choices in a completely decentralized way. Furthermore, we show that,
if the conditions for achieving consensus are met and each choice is bound to an
action that takes time to perform, the population converges to the choice
associated with the shortest execution time. We illustrate the applicability of
the decision-making mechanism presented in this paper on an example scenario in
swarm robotics.
@incollection{MdeO:ECAL09Workshop,
Author = {Marco A. {Montes de Oca} and Eliseo Ferrante and Nithin Mathews and Mauro Birattari and Marco Dorigo},
Booktitle = {Proceedings of the Workshop on Organisation, Cooperation and Emergence in Social Learning Agents of the European
Conference on Artificial Life (ECAL 2009)},
Editor = {D. Curran and C. O'Riordan},
Title = {Optimal Collective Decision-Making through Social Influence and Different Action Execution Times}
}

Spanevello P. and Montes de Oca M. A.
Experiments on Adaptive Heterogeneous PSO Algorithms. Proceedings of the Doctoral Symposium on Engineering Stochastic Local Search
Algorithms (SLS-DS 2009). F. Hutter and M. A. Montes de Oca (Eds.) Brussels, Belgium. September 2009. pp. 36-40.

  

Particle swarm optimization (PSO) algorithms are used for solving real parameter
optimization problems that may be characterized by discontinuities, plateaus,
multiple local optima, and other features. In most PSO variants, the swarm is
homogeneous, that is, all particles use the same search mechanism; however,
given the possible complex nature of the solution space, a single search
mechanism can be more effective in certain areas of the search space than in
others. Thus, particle homogeneity can negatively affect the performance of PSO
algorithms. A possible way to tackle this issue is to use heterogeneous PSO
algorithms, which are composed of particles that use different search
mechanisms. In this paper, we propose a number of different strategies to allow
particles to select, from a set of two candidates, their search mechanism
according to information obtained during the optimization process. We show
preliminary results and outline future research.
@incollection{Spenevello:SLSDS09,
Author = {Paolo Spanevello and Marco A. {Montes de Oca}},
Booktitle = {Proceedings of SLS-DS 2009: Doctoral Symposium on Engineering Stochastic Local Search Algorithms},
Editor = {F. Hutter and M. A. {Montes de Oca}},
Title = {Experiments on Adaptive Heterogeneous PSO Algorithms},
Pages={36--40},
Publisher = {IRIDIA, Universit{\'e} Libre de Bruxelles},
Address = {Brussels, Belgium},
Note={Technical Report No. TR/IRIDIA/2009-024}
}

   

Inferring gene regulatory networks from expression profiles is a challenging problem that has been tackled using many different approaches. When posed as an optimization problem, the goal is often to minimize the value of an error measure, usually the mean squared error or the relative squared error, between the real profiles and those generated with a model whose parameters are the variables being optimized. In this paper, we use recurrent neural networks to model regulatory interactions and study systematically the ``fitness landscape'' that results from measuring the relative squared error between the target and the inferred gene expression profiles. Although the results of the study indicate that the generated landscapes have a positive fitness-distance correlation, the error values span several orders of magnitude over very short distance variations. This suggests that the fitness landscape features extremely deep valleys, which can make general-purpose state-of-the-art continuous optimization
algorithms exhibit very poor performance. Further results, obtained from an analysis based on perturbations of different magnitude to the optimal network topology, support approaches in which the spaces of network topologies and of network parameters are decoupled.
@incollection{SamboGRN:EA,
Author = {Francesco Sambo and Marco A. {Montes de Oca} and Barbara {Di Camillo} and Thomas St{\"u}tzle},
Booktitle = {LNCS 5975. Proceedings of the 9th International Conference on Artificial Evolution (EA 2009)},
Editor = {P. Collet and others},
Title = {On the Difficulty of Inferring Gene Regulatory Networks: A Study of the Fitness Landscape Generated by Relative Squared Error},
Pages = {74--85},
Address={Berlin, Germany},
Publisher={Springer},
Year = {2009}
}

Montes de Oca M. A., Peña J., Stützle T., Pinciroli C., and Dorigo M. Heterogeneous Particle Swarm Optimizers.
IEEE Congress on Evolutionary Computation (CEC 2009). P. Haddow et al. (Eds.) Trondheim, Norway. May 2009. pp. 698-705. Copyright
IEEE.

  

Particle swarm optimization (PSO) is a swarm intelligence technique
originally inspired by models of flocking and of social influence that
assumed homogeneous individuals. During its evolution to become
a practical optimization tool, some heterogeneous variants have been
proposed. However, heterogeneity in PSO algorithms has never been
explicitly studied and some of its potential effects have therefore been
overlooked. In this paper, we identify some of the most relevant types
of heterogeneity that can be ascribed to particle swarms. A number of
particle swarms are classified according to the type of heterogeneity
they exhibit, which allows us to identify some gaps in current knowledge
about heterogeneity in PSO algorithms. Motivated by these observations,
we carry out an experimental study of a couple of heterogeneous particle
swarms, each of which is composed of two kinds of particles. Directions
for future developments on heterogeneous particle swarms are outlined.
@incollection{MonPenStuetal2009:cec,
Address = {Piscataway, NJ},
Author = {Marco A. {Montes de Oca} and Jorge Pe{\~n}a and Thomas St{\"u}tzle and Carlo Pinciroli and Marco Dorigo},
Booktitle = {IEEE Congress on Evolutionary Computation (CEC 2009)},
Editor = {P. Haddow and others},
Pages = {698--705},
Publisher = {IEEE Press},
Title = {Heterogeneous Particle Swarm Optimizers},
Year = {2009}}

  


We present an algorithm that is inspired by theoretical and
empirical results in social learning and swarm intelligence research. The
algorithm is based on a framework that we call incremental social learning.
In practical terms, the algorithm is a hybrid between a local search
procedure and a particle swarm optimization algorithm with growing
population size. The local search procedure provides rapid convergence
to good solutions while the particle swarm algorithm enables a comprehensive
exploration of the search space. We provide experimental evidence
that shows that the algorithm can find good solutions very rapidly
without compromising its global search capabilities.
@incollection{IPSOLSMdeO08,
author = "Marco A. {Montes~de~Oca} and Ken {Van~den~Enden} and Thomas St{\"u}tzle",
title = "Incremental Particle Swarm-Guided Local Search for Continuous Optimization",
booktitle = "LNCS 5296. Proceedings of the International Workshop on Hybrid Metaheuristics (HM 2008)",
editor = "M. J. Blesa and others",
pages = "72--86",
address="Berlin, Germany",
publisher="Springer",
year = "2008"
}
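The hybrid described in the abstract above (a growing population plus local search on selected solutions) can be sketched generically. The following toy on a sphere function is an illustrative reconstruction under assumed parameters, not the authors' algorithm; the "social learning" step is approximated by initializing newcomers near the current best:

```python
import random

def sphere(x):
    """A standard continuous test function: f(x) = sum of squares."""
    return sum(v * v for v in x)

def local_search(x, f, step=0.1, iters=20, rng=random):
    """Crude hill-climber: accept uniform perturbations that improve f."""
    best, fbest = list(x), f(x)
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in best]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

def incremental_search(f, dim=5, max_pop=10, rounds=20, seed=3):
    """Grow the population one solution per round; newcomers are drawn
    near the current best (a stand-in for social learning), and local
    search refines the best solution each round."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)]]
    for _ in range(rounds):
        pop.sort(key=f)
        pop[0], _ = local_search(pop[0], f, rng=rng)
        if len(pop) < max_pop:
            pop.append([v + rng.gauss(0, 1.0) for v in pop[0]])
    return min(pop, key=f)
```

The design point the abstract makes is visible here: local search supplies fast convergence on the incumbent, while the growing population keeps injecting diversity around it.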


  

Social learning is a mechanism that allows individuals to
acquire knowledge from others without incurring the costs of
acquiring it individually. Individuals that learn socially can
thus spend their time and energy exploiting their knowledge
or learning new things. In this paper, we adapt these ideas
for their application to both optimization and multi-agent
learning. The approach consists of a growing population of
agents that learn socially as they become part of the main
population. We find that learning socially in an incremental
way can speed up the optimization and learning processes,
as well as improve the quality of the solutions and strategies
found.
@inproceedings{ISLMdeO08,
author = "Marco A. {Montes~de~Oca} and Thomas St{\"u}tzle",
title = "Towards Incremental Social Learning in Optimization and Multiagent Systems",
booktitle = "Proceedings of the Evolutionary Computation and Multi-Agent Systems and Simulation (ECoMASS) Workshop of the Genetic and Evolutionary Computation Conference (GECCO 2008)",
editor = "W. Rand and S. G. Ficici and R. Riolo",
pages = "1939--1944",
publisher="ACM",
year = "2008"
}


  

The fully informed particle swarm optimization algorithm (FIPS) is
very sensitive to changes in the population topology. The velocity update
rule used in FIPS considers all the neighbors of a particle to update its
velocity, instead of just the best one as is done in most variants. It
has been argued that this rule induces a random behavior of the particle
swarm when a fully connected topology is used. This argument could
explain the often observed poor performance of the algorithm under that
circumstance.
In this paper we study experimentally the convergence behavior of
the particles in FIPS when using topologies with different levels of
connectivity. We show that the particles tend to search a region whose size
decreases as the connectivity of the population topology increases. We
therefore put forward the idea that spatial convergence, and not a random
behavior, is the cause of the poor performance of FIPS with a fully
connected topology. The practical implications of this result are explored.
@inproceedings{FIPSMdeO08,
author = "Marco A. {Montes~de~Oca} and Thomas St{\"u}tzle",
title = "Convergence Behavior of the Fully Informed Particle Swarm Optimization Algorithm",
booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2008)",
editor = "M. Keijzer and others",
pages = "71--78",
publisher="ACM",
year = "2008"
}
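The fully informed velocity rule discussed above can be sketched per dimension: every neighbor's previous-best position contributes a randomly weighted pull, rather than only the single best neighbor. This is a hedged illustration, not the authors' code; χ ≈ 0.7298 and φ = 4.1 are the commonly used constriction values:

```python
import random

CHI, PHI = 0.7298, 4.1  # standard constriction coefficient and acceleration sum

def fips_velocity(v, x, neighbor_bests, rng=random):
    """Fully informed update: each of the K neighbor best positions pulls the
    particle, weighted by an independent coefficient drawn from U[0, PHI/K]."""
    k = len(neighbor_bests)
    new_v = []
    for d in range(len(x)):
        social = sum(rng.uniform(0, PHI / k) * (p[d] - x[d])
                     for p in neighbor_bests)
        new_v.append(CHI * (v[d] + social))
    return new_v
```

With a fully connected topology, `neighbor_bests` contains the whole swarm, and the many pulls tend to average out toward the swarm's centroid, which is consistent with the spatial-convergence argument made in the abstract.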


  

Many automatically synthesized programs have, like their handmade
counterparts, numerical parameters that need to be set properly before
they can show an acceptable performance. Hence, any approach to the
automatic synthesis of programs needs the ability to tune numerical
parameters efficiently.
Grammatical Evolution (GE) is a promising grammar-based genetic
programming technique that synthesizes numbers by concatenating digits.
In this paper, we show that a naive application of this approach can lead
to a serious number length bias that in turn affects efficiency. The root of
the problem is the way the context-free grammar used by GE is defined.
A simple, yet effective, solution to this problem is proposed.
@inproceedings{GEMdeO08,
author = "Marco A. {Montes~de~Oca}",
title = "Exposing a Bias Toward Short-Length Numbers in Grammatical Evolution",
booktitle = "LNCS 4971. Proceedings of the Eleventh European Conference on Genetic Programming (EuroGP 2008)",
editor = "M. O'Neill and L. Vanneschi and S. Gustafson and A. I. Esparcia-Alcázar and I. De Falco and A. Della Cioppa and E. Tarantino",
pages = "278--288",
address="Berlin, Germany",
publisher="Springer",
year = "2008"
}
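The length bias described in the abstract is easy to reproduce with a toy derivation routine. The grammar and the uniform rule choice below are an illustrative reconstruction, not GE's actual genotype-to-phenotype mapping:

```python
import random

def derive_number(rng):
    """Derive a number from the recursive grammar
         <num>   ::= <digit> | <digit><num>
         <digit> ::= 0 | 1 | ... | 9
       choosing between the two <num> productions uniformly, as a naive
       digit-concatenation scheme would."""
    digits = ""
    while True:
        digits += str(rng.randrange(10))
        if rng.random() < 0.5:   # production 1: terminate with this <digit>
            return digits

rng = random.Random(0)
lengths = [len(derive_number(rng)) for _ in range(10000)]
# The resulting lengths are geometrically distributed: roughly half of all
# derived numbers have a single digit, so long numbers are rarely synthesized.
```

This geometric fall-off in number length is the bias the paper attributes to the grammar definition itself, independent of the search operators.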

Montes de Oca M. A., Stützle T., Birattari M., and Dorigo M. Composing Particle Swarm Optimization Algorithms. Proceedings of the Doctoral Symposium on Engineering Stochastic Local Search Algorithms (SLS-DS 2007). E. Ridge, T. Stützle, M. Birattari, and H. H. Hoos (Eds.) Brussels, Belgium. September 2007. pp. 6-10. IRIDIA, Université Libre de Bruxelles, Technical Report No. TR/IRIDIA/2007-014
   

Particle Swarm Optimization (PSO) is primarily used for solving continuous optimization problems. Over the years and as a result of the interest on PSO, many different variants of the original algorithm have been proposed and, more often than not, it is claimed that the new variant is superior in some way. It follows that if two variants differ only by some algorithmic components, then their difference in performance can be ascribed to differences in these components. A question then arises: Can we compose a highperforming PSO variant using components present in other variants? In this paper, we present an attempt to answer this question. The performance of the resulting composite variant suggests that the answer is positive. We believe that algorithm composition can help in devising effective optimization algorithms.
@incollection{MdO:SLSDS07,
Author = {Marco A. {Montes de Oca} and Thomas St{\"u}tzle and Mauro Birattari and Marco Dorigo},
Booktitle = {Proceedings of the Doctoral Symposium on Engineering Stochastic Local Search Algorithms (SLS-DS 2007)},
Editor = {E. Ridge and T. St{\"u}tzle and M. Birattari and H. H. Hoos},
Title = {Composing Particle Swarm Optimization Algorithms},
Pages = {6--10},
Publisher = {IRIDIA, Universit{\'e} Libre de Bruxelles},
Address = {Brussels, Belgium},
Note={Technical Report No. TR/IRIDIA/2007-014}
}

   


In this paper we present an estimation of distribution particle
swarm optimization algorithm that borrows ideas from recent developments
in ant colony optimization which can be considered an estimation
of distribution algorithm. In the classical particle swarm optimization
algorithm, particles exploit their individual memory to explore
the search space. However, the swarm as a whole has no means to exploit
its collective memory (represented by the array of previous best
positions or pbests) to guide its search. This causes a re-exploration of
already known bad regions of the search space, wasting costly function
evaluations. In our approach, we use the swarm's collective memory to
probabilistically guide the particles' movement towards the estimated
promising regions in the search space. Our experiments show that this
approach is able to find similar or better solutions than the canonical
particle swarm optimizer with fewer function evaluations.
@inproceedings{EDAPSOMdeO06,
author = "Mudassar Iqbal and Marco A. {Montes~de~Oca}",
title = "An Estimation of Distribution Particle Swarm Optimization Algorithm",
booktitle = "LNCS 4150. Proceedings of the Fifth International Workshop on Ant Colony Optimization and Swarm Intelligence (ANTS 2006)",
editor = "Marco Dorigo and Luca M. Gambardella and Mauro Birattari and Alcherio Martinoli and Riccardo Poli and Thomas St{\"u}tzle",
pages = "72--83",
address="Berlin, Germany",
publisher="Springer",
year = "2006"
}
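The core idea of the abstract above (using the swarm's collective memory to guide sampling) can be sketched as fitting a simple per-dimension Gaussian over the array of previous-best positions and drawing a new candidate from it. This is a hedged illustration under assumed function names, not the paper's estimation-of-distribution model:

```python
import random
import statistics

def sample_from_pbests(pbests, rng=random):
    """Estimate a per-dimension Gaussian from the swarm's previous-best
    positions (its collective memory) and sample one new position from it."""
    dims = len(pbests[0])
    new_pos = []
    for d in range(dims):
        column = [p[d] for p in pbests]      # all pbest values in dimension d
        mu = statistics.fmean(column)
        sigma = statistics.pstdev(column)
        new_pos.append(rng.gauss(mu, sigma))
    return new_pos
```

Sampling near the distribution of pbests concentrates new candidates in regions the swarm already found promising, which is how such a scheme avoids re-exploring known bad regions.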

   

In this paper we report an empirical comparison of some
of the most influential Particle Swarm Optimization (PSO) algorithms
based on run-length distributions (RLDs). The advantage of our approach
over the usual report pattern (average iterations to reach a predefined
goal, success rates, and standard deviations) found in the current
PSO literature is that it is possible to evaluate the performance of an
algorithm on different application scenarios at the same time. The RLDs
reported in this paper show some of the strengths and weaknesses of the
studied algorithms and suggest ways of improving their performance.
@inproceedings{PSORLDMdeO06,
author = "Marco A. {Montes~de~Oca} and Thomas St{\"u}tzle and Mauro Birattari and Marco Dorigo",
title = "A Comparison of Particle Swarm Optimization Algorithms Based on Run-Length Distributions",
booktitle = "LNCS 4150. Proceedings of the Fifth International Workshop on Ant Colony Optimization and Swarm Intelligence (ANTS 2006)",
editor = "Marco Dorigo and Luca M. Gambardella and Mauro Birattari and Alcherio Martinoli and Riccardo Poli and Thomas St{\"u}tzle",
pages = "1--12",
address="Berlin, Germany",
publisher="Springer",
year = "2006"
}

   

Communication among agents in swarm intelligent systems,
and more generally in multi-agent systems, is crucial in order to coordinate
agents' activities so that a particular goal at the collective level is
achieved. From an agent's perspective, the problem consists in establishing
communication policies that determine what, when, and how to communicate
with others. In general, communication policies will depend on
the nature of the problem being solved. This means that the solvability
of problems by swarm intelligent systems depends, among other things,
on the agents' communication policies, and setting an incorrect set of
policies into the agents may result in finding poor solutions or even in
the unsolvability of problems. As a case study, this paper focuses on the
effects of letting agents use different communication policies in ant-based
clustering algorithms. Our results show the effects of using different communication
policies on the final outcome of these algorithms.
@inproceedings{EffectsMdeO05,
author = "Marco A. {Montes~de~Oca} and Leonardo Garrido and Jos\'e L. Aguirre",
title = "Effects of Inter-agent Communication in Ant-based Clustering Algorithms: A Case Study on Communication Policies in Swarm Systems",
booktitle = "LNAI 3789. Proceedings of the Fourth Mexican International Conference on Artificial Intelligence (MICAI 2005)",
editor = "Alexander Gelbukh and Alvaro {de~Albornoz} and Hugo Terashima",
pages = "254--263",
address="Berlin, Germany",
publisher="Springer",
year = "2005"
}

   

Conventional ant-based clustering algorithms and growing
neural gas networks are combined to produce an unsupervised
classification algorithm that exploits the strengths of
both techniques. The ant-based clustering algorithm detects
existing classes in a training data set and, at the same time,
trains several growing neural gas networks. In a second
stage, these networks are used to classify previously unseen
input vectors into the classes detected by the ant-based algorithm.
The proposed algorithm eliminates the need to change
the number of agents and the dimensions of the
environment when dealing with large databases.
@inproceedings{HybridizationMdeO05,
author = "Marco A. {Montes~de~Oca} and Leonardo Garrido and Jos\'e L. Aguirre",
title = "An hybridization of an ant-based clustering algorithm with growing neural gas networks for classification tasks",
booktitle = "Proceedings of the 20th Annual ACM Symposium on Applied Computing (SAC 2005)",
editor = {Hisham Haddad and Lorie M. Liebrock and Andrea Omicini and Roger L. Wainwright},
pages = "9--13",
publisher="ACM",
year = "2005"
}

Montes de Oca M. A., Garrido L., and Aguirre J.L. A first approach
to study the effects of direct information exchange between agents in
ant-based clustering. Proceedings of the First World Congress on
Lateral Computing. Bangalore, India. December 2004. World Federation on Lateral Computing.
[Winner of the best paper award in the Swarm Intelligence/Evolutionary Computing session]
   

This paper presents a first approach to studying direct
information exchange between agents in ant-based clustering
by comparing the clustering process development generated
by agents that maintain a short-term memory against the
one generated by agents that share their memory with their
peers whenever an encounter occurs in the environment. The
information exchange can allow an agent to "change its mind"
regarding the most favorable dropping location on the grid, based
on the knowledge of another agent.
Our experimental evidence shows that this simple information
exchange strategy improves the quality of the resulting clustering.
This holds true, however, only for a small number of agents.
This suggests that there is a critical number of agents that can
usefully exchange information and that, when this number is surpassed,
the effects could even be detrimental.
@inproceedings{FirstApproachMdeO04,
author = "Marco A. {Montes~de~Oca} and Leonardo Garrido and Jos\'e L. Aguirre",
title = "A first approach to study the effects of direct information exchange between agents in ant-based clustering",
booktitle = "Proceedings of the First World Congress on Lateral Computing (WCLC 2004)",
editor = {Suthikshn Kumar and Ajith Abraham and Jens Harnisch and Antony Satyadas},
publisher="World Federation on Lateral Computing",
note = "Proceedings published on CD-ROM",
year = "2004"
}

Montes de Oca M. A., Garrido L., and Aguirre J. L. On the effects of
direct communication between agents in ant-based clustering algorithms.
Fifth Iberoamerican Workshop on Multi-Agent Systems (Iberagents 2004).
Puebla, México. November 2004.

Ant-based clustering algorithms are distributed, self-organized knowledge discovery tools
inspired by the collective behavior of insect colonies. They are based on stigmergic communication
among participating agents, that is, on the non-intentional, indirect influence on the agents' behavior
through local modifications of the environment. Inspired by the oral trophallaxis phenomenon observed
in some ant species, two different knowledge exchange strategies between agents in ant-based clustering
algorithms are investigated. The impact on the final clustering quality is evaluated by comparing the clustering
process development generated by each strategy. The investigated knowledge exchange strategies are: simple
local memory sharing and the shared use of environment maps. It is shown that benefits to the final clustering
are directly related to the usefulness of the exchanged knowledge and to the number of participating agents.
@inproceedings{EffectsMdeO04,
author = "Marco A. {Montes~de~Oca} and Leonardo Garrido and Jos\'e L. Aguirre",
title = "On the effects of direct communication between agents in ant-based clustering algorithms",
booktitle = "Proceedings of the Fifth Iberoamerican Workshop on Multi-Agent Systems (Iberagents 2004)",
note = "Proceedings delivered only to workshop participants",
year = "2004"
}



THESES 

Montes de Oca M. A. Incremental Social Learning in Swarm Intelligence Systems. Ph.D. thesis. Université Libre de Bruxelles, Brussels, Belgium. July 2011.





A swarm intelligence system is a type of multi-agent system with the following
distinctive characteristics: (i) it is composed of a large number of agents,
(ii) the agents that comprise the system are simple with respect to the
complexity of the task the system is required to perform, (iii) its control
relies on principles of decentralization and self-organization, and (iv) its
constituent agents interact locally with one another and with their
environment.
Interactions among agents, either direct or indirect through the environment in
which they act, are fundamental for swarm intelligence to exist; however, there
is a class of interactions, referred to as interference, that actually
blocks or hinders the agents' goal-seeking behavior. For example, competition
for space may reduce the mobility of robots in a swarm robotics system, or
misleading information may spread through the system in a particle swarm
optimization algorithm. One of the most visible effects of interference in a
swarm intelligence system is the reduction of its efficiency. In other words,
interference increases the time required by the system to reach a desired state.
Thus, interference is a fundamental problem which negatively affects the
viability of the swarm intelligence approach for solving important, practical
problems.
We propose a framework called incremental social learning (ISL) as a
solution to the aforementioned problem. It consists of two elements: (i) a
growing population of agents, and (ii) a social learning mechanism. Initially, a
system under the control of ISL consists of a small population of agents. These
agents interact with one another and with their environment for some time before
new agents are added to the system according to a predefined schedule. When a
new agent is about to be added, it learns socially from a subset of the agents
that have been part of the system for some time, and that, as a consequence, may
have gathered useful information. The implementation of the social learning
mechanism is application-dependent, but the goal is to transfer knowledge from a
set of experienced agents that are already in the environment to the newly added
agent. The process continues until one of the following criteria is met: (i) the
maximum number of agents is reached, (ii) the assigned task is finished, or
(iii) the system performs as desired. Starting with a small number of agents
reduces interference because it reduces the number of interactions within the
system, and thus, fast progress toward the desired state may be achieved. By
learning socially, newly added agents acquire knowledge about their environment
without incurring the costs of acquiring that knowledge individually. As a
result, ISL can make a swarm intelligence system reach a desired state more
rapidly.
We have successfully applied ISL to two very different swarm intelligence
systems. We applied ISL to particle swarm optimization algorithms. The results
of this study demonstrate that ISL substantially improves the performance of
these kinds of algorithms. In fact, two of the resulting algorithms are
competitive with state-of-the-art algorithms in the field. The second system to
which we applied ISL exploits a collective decision-making mechanism based on an
opinion formation model. This mechanism is also one of the original
contributions presented in this dissertation. A swarm robotics system under the
control of the proposed mechanism allows robots to choose from a set of two
actions the action that is fastest to execute. In this case, when only a small
proportion of the swarm is able to concurrently execute the alternative actions,
ISL substantially improves the system's performance.
@phdthesis{Mon11,
author = {Marco A. {Montes de Oca}},
title = {Incremental Social Learning in Swarm Intelligence Systems},
school = {Universit{\'e} Libre de Bruxelles},
address = {Brussels, Belgium},
month = {July},
year = {2011}
}
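The ISL control loop described in the abstract can be sketched generically; the agent representation, the schedule, and the function names below are illustrative placeholders, not the dissertation's actual implementations:

```python
import random

def incremental_social_learning(new_agent, learn_socially, interact,
                                max_agents, steps_between_additions, done):
    """Generic ISL loop: start with a small population, let it interact,
    and add agents on a schedule, each newcomer learning socially from an
    experienced agent before joining."""
    population = [new_agent()]
    while len(population) < max_agents and not done(population):
        for _ in range(steps_between_additions):
            interact(population)  # normal system dynamics with few agents
        newcomer = new_agent()
        # transfer knowledge from an experienced agent to the newcomer
        learn_socially(newcomer, random.sample(population, k=1))
        population.append(newcomer)
    return population

# Toy usage: agents are dicts; "interacting" accumulates knowledge, and a
# newcomer copies the knowledge of one experienced agent.
pop = incremental_social_learning(
    new_agent=lambda: {"knowledge": 0.0},
    learn_socially=lambda agent, models: agent.update(knowledge=models[0]["knowledge"]),
    interact=lambda agents: [a.update(knowledge=a["knowledge"] + 1.0) for a in agents],
    max_agents=5, steps_between_additions=10, done=lambda agents: False)
```

The newcomer starts from the knowledge its model already gathered instead of accumulating it individually, which is the cost-saving the framework relies on.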

Montes de Oca M. A. On the Performance of Particle Swarm Optimizers. Diplôme d'Études Approfondies en Sciences Appliquées Thesis. Université Libre de Bruxelles. Brussels, Belgium. September 2006.





Since the introduction of the first Particle Swarm Optimization algorithm by
James Kennedy and Russell Eberhart in the mid-1990s, many variants of the
original algorithm have been proposed. However, there is no general agreement
on which variant(s) could be considered the state of the art in the field. This
is, in part, due to a general lack of cross-comparisons among variants in the
literature.
The work reported in this document was carried out with the goal of identifying
the best-performing particle swarm optimization algorithms. For practical
reasons, we could not compare all the available algorithmic variants. Instead,
we focused on the most widely used variants. We also considered algorithms
that incorporate some of the latest developments in the field.
The comparison of the chosen particle swarm optimization algorithms was
based on run-length and solution-quality distributions. These analytical tools
allow the comparison of stochastic optimization algorithms in terms of the probability
of finding a solution to an optimization problem within some minimum
acceptable solution quality and time bounds. This kind of analysis is helpful for
measuring the solution-quality vs. computational-effort tradeoff each algorithm
presents.
Particle swarm optimizers are population-based optimization algorithms that
rely on a population structure referred to as a topology. We conduct an analysis
of the effects that different population topologies (the most commonly used
ones) have on performance. We show how some topologies are better suited
for application scenarios in which only short runs are allowed or affordable, and
how other topologies are better suited for situations in which the only important
factor is solution quality.
Run-length and solution-quality distributions also serve as diagnostic tools
that help in the design of improved algorithms. After analyzing the behavior
of some variants with these tools, we found opportunities for improvement, and
the resultant variants are presented and discussed.
@mastersthesis{PSOPerfMdeO06,
author = "Marco A. {Montes~de~Oca}",
title = "On the Performance of Particle Swarm Optimizers",
school = "Universit\'e Libre de Bruxelles",
address = "Brussels, Belgium",
month = "September",
note = "Dipl\^ome d'\'Etudes Approfondies en Sciences Appliqu\'ees",
year = "2006"
}
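Run-length distributions of the kind used in the thesis can be estimated empirically from independent runs; this sketch (all names are illustrative) computes, for each evaluation budget, the fraction of runs that reached a target solution quality:

```python
def run_length_distribution(run_traces, target_quality, budgets):
    """Empirical RLD: fraction of runs reaching target_quality (minimization)
    within each budget. A trace is a list of (evaluations, best_so_far)."""
    def first_hit(trace):
        for evaluations, best_so_far in trace:
            if best_so_far <= target_quality:
                return evaluations
        return float("inf")  # this run never reached the target

    hits = [first_hit(trace) for trace in run_traces]
    return [sum(h <= b for h in hits) / len(hits) for b in budgets]

# Three runs of a hypothetical stochastic optimizer, target quality 1.0
traces = [[(10, 5.0), (20, 0.5)], [(10, 0.9)], [(30, 2.0)]]
rld = run_length_distribution(traces, target_quality=1.0, budgets=[10, 20, 40])
```

Plotting such fractions against the budgets gives the qualified run-time distribution curves used to compare the solution-quality vs. effort tradeoffs of different optimizers.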

Montes de Oca M. A. Effects on clustering quality of direct and indirect communication among agents in ant-based clustering algorithms. Master's Thesis. Instituto Tecnológico y de Estudios Superiores de Monterrey, Campus Monterrey. Monterrey, N.L., México. May 2005.





Ant-based clustering algorithms are knowledge discovery tools inspired by the collective
behavior of social insect colonies. In these algorithms, insects are modeled as software agents
that communicate with each other indirectly through the environment. This particular kind
of communication is known as stigmergic communication.
In the classic ant-based clustering algorithm, a group of agents that exhibit the same
behavior move randomly over a toroidal square grid. In the environment there are data
objects that were initially scattered in a random fashion. The objects can be picked up,
moved, or dropped in any free location on the grid. An object is picked up with high
probability if it is not surrounded by similar objects, and it is dropped with high probability if
an agent's neighborhood is densely populated by similar objects and the agent's location is free.
Here, stigmergy occurs when an object is placed next to another: the resultant structure
makes it much more attractive for agents to drop other similar objects nearby.
However, stigmergy is not the only way social insects interact with each other. In most
species, trophallaxis, or liquid food exchange among members of the same colony, plays a key
role in their social organization. Consider the case of some termite species that require
intestinal protozoa to derive benefits from cellulose. Their early instar nymphs are fed either
by oral or anal trophallaxis. The latter infects them with symbiotic protozoa or bacteria
contained in the proctodeal liquid. The subsocial association resulting from this codependence
has evolved into a complex social and morphological structure.
Inspired by the trophallaxis phenomenon observed in some ant and termite species,
two different communication strategies among agents in ant-based clustering algorithms are
investigated: (i) direct and (ii) indirect communication. The impact on the final clustering
quality is evaluated by comparing the development of the clustering process generated by
each strategy. It is shown that benefits on the final clustering are directly related to the
usefulness of the exchanged information, its use, and the number of participating agents.
@mastersthesis{EffectsMscMdeO05,
author = "Marco A. {Montes~de~Oca}",
title = "Effects on clustering quality of direct and indirect
communication among agents in ant-based clustering algorithms",
school = "Instituto Tecnol\'ogico y de Estudios Superiores de Monterrey, Campus Monterrey",
address = "Monterrey, N.L., M\'exico",
month = "May",
year = "2005"
}
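The pick-up and drop rules in the abstract are commonly formalized as threshold functions of the local similarity density f in [0, 1]; the Lumer-Faieta-style form below, with illustrative constants k1 and k2, is one widely used formulation rather than necessarily the exact one used in the thesis:

```python
def pick_probability(f, k1=0.1):
    """High when an object is NOT surrounded by similar objects (small f)."""
    return (k1 / (k1 + f)) ** 2

def drop_probability(f, k2=0.15):
    """High when the agent's neighborhood is dense with similar objects."""
    return (f / (k2 + f)) ** 2
```

An isolated object (f near 0) is almost surely picked up, while a neighborhood full of similar objects (f near 1) makes dropping likely, which is what lets clusters reinforce themselves stigmergically.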



WORK IN PROGRESS (SUBMITTED ARTICLES) 

Montes de Oca M. A. and Rossi L. Density Classification with a Continuous-Time Binary Consensus Protocol with Hysteretic Units. Submitted.




In this paper, we present a continuous-time binary consensus protocol whereby
entities connected via a directed ring topology solve the one-dimensional
density classification problem. In our model, entities behave as non-ideal
relays, that is, they have memory of the trajectory of an internal state
variable, which gives them hysteretic properties. We show that this feature is
necessary for the system to reach consensus on the state shared by the initial
majority.
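A heavily simplified, discretized toy version of such a unit can illustrate the non-ideal-relay ingredient; all parameters are illustrative, and this sketch does not reproduce the paper's continuous-time dynamics or its density-classification guarantee:

```python
def hysteretic_ring_step(out, x, theta=0.5, gain=0.1):
    """One Euler-style step on a directed ring: each unit integrates its
    predecessor's binary output (+1/-1) into an internal state x[i]; its own
    output flips only when x[i] crosses a hysteresis threshold, so the unit
    remembers the trajectory of its state like a non-ideal relay."""
    n = len(out)
    for i in range(n):
        x[i] += gain * out[(i - 1) % n]
        if x[i] >= theta:
            out[i], x[i] = 1, theta      # saturate at the upper threshold
        elif x[i] <= -theta:
            out[i], x[i] = -1, -theta    # saturate at the lower threshold
    return out, x

# All-agree initial condition: the consensus state is absorbing.
out, x = [1, 1, 1, 1, 1], [0.0] * 5
for _ in range(100):
    out, x = hysteretic_ring_step(out, x)
```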



OTHERS 

Montes de Oca M. A. Social Reinforcement for Collective
Decision-Making Over Time. Abstract Proceedings of the 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making. RLDM 2013. Princeton, NJ. To appear.




Social interactions underpin collective decision making in all animal societies.
However, the actual mechanisms that animals use to achieve a collective decision
differ among species. For example, ants use pheromones to bias the decisions of
other ants; birds observe and match the velocity of their neighbors; humans
match the speed of other people while driving (even above speed limits). Despite
the differences among these (and other) collective decision-making mechanisms,
some basic principles underlie them, such as the tendency to conform to the
actions or opinions of others. In our examples, by following pheromone trails
ants effectively follow in their nestmates' steps, birds in a flock are more
likely to fly in the same direction, and we humans, while we do not always
agree, do not like to be in permanent conflict with others and eventually
seek ways to compromise. Therefore, this principle's basic operating mechanism
is that an individual who is exposed to the actions or opinions of others tends
to perform the same actions, or hold the same opinions, as the observed individuals.
In this communication, I describe two basic collective decision-making
mechanisms based on the principle outlined above. These mechanisms are tested in
a setting that simulates a robotics scenario in which a group of robots must
collectively find the shorter of two alternative paths between two areas without
measuring travel times or distances. First, I describe a mechanism in which
robots form teams of three robots (or a greater odd number of robots) that
decide which path to use by locally applying the majority rule. Then, I describe a
mechanism in which individual robots increase their tendency to choose
either path based on the path recently taken by another robot. In both cases,
the group collectively chooses the shorter path with high probability. These
mechanisms have been proposed as swarm intelligence mechanisms for optimal
collective decision-making.
@incollection{Mon13b,
Author = {Marco A. {Montes~de~Oca}},
Booktitle = {Proceedings of the 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making},
Title = {Social Reinforcement for Collective Decision-Making Over Time},
Year = {2013}
}
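The first mechanism (teams applying the local majority rule, with the time a robot is away from the decision pool coupled to the chosen path's length) can be sketched as a toy simulation; all names and parameters are illustrative assumptions, not those of the cited work:

```python
import random

def simulate_majority_rule(n_robots=30, team_size=3, latencies=(1.0, 2.0),
                           steps=5000, dt=0.1, seed=42):
    """Teams of an odd number of robots adopt the local majority opinion.
    Opinion 0 (the shorter path) keeps robots busy for less time, so its
    adherents rejoin the decision pool sooner and are over-represented in
    future teams, biasing the swarm toward the shorter path without any
    robot measuring travel times or distances."""
    rng = random.Random(seed)
    opinions = [rng.randrange(2) for _ in range(n_robots)]
    busy_until = [0.0] * n_robots  # time at which each robot is free again
    t = 0.0
    for _ in range(steps):
        free = [i for i in range(n_robots) if busy_until[i] <= t]
        if len(free) >= team_size:
            team = rng.sample(free, team_size)
            majority = round(sum(opinions[i] for i in team) / team_size)
            for i in team:
                opinions[i] = majority
                busy_until[i] = t + latencies[majority]  # travel the chosen path
        t += dt
    return sum(opinions) / n_robots  # fraction still preferring the longer path

frac_long = simulate_majority_rule()
```

Because the team size is odd, the majority is always well defined, and once every robot holds the same opinion that consensus is absorbing.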

Dorigo M., Birattari M., O'Grady R.,
Gambardella L. M., Mondada F., Floreano D., Nolfi S., Baaboura T., Bonani M., Brambilla M.,
Brutschy A., Burnier D., Campo A., Christensen A., Decugnière A., Di Caro G., Ducatelle F.,
Ferrante E., Martinez Gonzales J., Guzzi J., Longchamp V., Magnenat S., Mathews N.,
Montes de Oca M. A., Pinciroli C., Pini G., Rétornaz P., Rey F., Roberts J.,
Rochat F., Sperati V., Stirling T., Stranieri A., Stützle T., Trianni V., Tuci E., Turgut A. E.,
and Vaussard F. Swarmanoid, The Movie.
AAAI-11 Video Proceedings, 2011.
[Winner of the best video award]

Brutschy A., Montes de Oca M. A., and Pinciroli C. ANTS 2008.
KI - Zeitschrift Künstliche Intelligenz 23(2), 2009. pp. 64.

Montes de Oca M. A., Stützle T., Birattari M., and Dorigo M. On the Performance Analysis of Particle Swarm Optimisers.
AISB Quarterly No. 124. The Society for the Study of Artificial Intelligence and Simulation of Behaviour. Brighton, United Kingdom.
Autumn/Winter 2006. pp. 6-7.

Montes de Oca M. A., Garrido L., and Aguirre J. L. A first approach
to study the effects of direct information exchange between agents in
ant-based clustering. Abstract Compilation of the XXXV Congreso de
Investigación y Desarrollo Tecnológico. Tecnológico de Monterrey.
Monterrey, N.L., México. January 2005. pp. 280. Not available online.

