Here you will find detailed information about the algorithms implemented in iMOPSE. Note that single-objective algorithms can work with multi-objective problems by using weighted fitness, with the weights provided in the algorithm's configuration.
Genetic algorithms (GAs) are a type of evolutionary algorithm that uses techniques inspired by natural evolution, such as selection, crossover, and mutation. GAs optimize complex problems by iteratively improving a population of candidate solutions.
The process begins with the initialization of a population of random individuals, each representing a potential solution to the problem at hand. These individuals are encoded in a way suitable for the problem domain, such as binary strings, real-valued vectors, or integer permutations.
Each individual in the population is evaluated using a fitness function, which is defined within the problem class. This function quantifies how well an individual solves the problem and provides a measure of its performance.
The core of the genetic algorithm involves an iterative process where the population evolves over generations. In each generation, a subset of individuals is selected based on their fitness, with fitter individuals having a higher probability of being chosen. This selection process mimics the principle of survival of the fittest.
To create new (child) solutions, parents are selected and crossover (or recombination) is performed to produce offspring. Crossover combines the genetic information of two parent individuals to create one or more children. This process promotes the exchange of beneficial traits and helps explore new areas of the solution space.
Mutation is then applied to the offspring to introduce random variations. Mutation alters one or more genes in an individual's chromosome, providing genetic diversity and helping to avoid premature convergence to local optima.
After crossover and mutation, the offspring are evaluated using the fitness function. The new generation is then formed from the newly created offspring, which helps the population maintain diversity.
The iterative process of selection, crossover, mutation, and evaluation continues until a predefined number of generations is completed.
The final output of the genetic algorithm is the best solution found during the evolutionary process. This solution represents the optimal or near-optimal answer to the problem being solved.
initialize population
evaluate fitness of each individual
while stopping criterion not met:
    select parents
    crossover to produce offspring
    mutate offspring
    evaluate fitness of offspring
return best solution
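The steps above can be sketched in Python. This is a minimal generational GA with binary encoding, tournament selection, one-point crossover, and bit-flip mutation; all names and parameter values are illustrative, not iMOPSE defaults:

```python
import random

def genetic_algorithm(fitness, chromosome_length, population_size=20,
                      generations=50, crossover_rate=0.9, mutation_rate=0.05):
    # Initialize a random binary population.
    population = [[random.randint(0, 1) for _ in range(chromosome_length)]
                  for _ in range(population_size)]

    def tournament(pop):
        # Binary tournament selection: the fitter of two random individuals wins.
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        offspring = []
        while len(offspring) < population_size:
            p1, p2 = tournament(population), tournament(population)
            if random.random() < crossover_rate:
                # One-point crossover: the child takes p1's head and p2's tail.
                point = random.randrange(1, chromosome_length)
                child = p1[:point] + p2[point:]
            else:
                child = p1[:]
            # Bit-flip mutation introduces random variation.
            child = [g ^ 1 if random.random() < mutation_rate else g for g in child]
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)

# Usage: maximize the number of ones in the chromosome (OneMax).
random.seed(0)
best = genetic_algorithm(sum, chromosome_length=20)
```

Because the problem defines only the fitness function, the same loop works for any encoding once the crossover and mutation operators are swapped accordingly.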
Particle Swarm Optimization (PSO) is a computational method used for optimizing a problem by iteratively improving a candidate solution with regard to a given measure of quality. PSO solves problems by having a population of candidate solutions, called particles, move around in the search space according to simple mathematical formulas. Each particle adjusts its position based on its own experience and the experience of neighboring particles, making use of both local and global information.
The process begins with the initialization of a swarm of particles with random positions and velocities in the search space. Each particle represents a potential solution to the optimization problem. The quality of each particle's position is evaluated using a fitness function, which provides a measure of how well the particle solves the problem.
During each iteration of the algorithm, the velocity of each particle is updated based on three components: its current velocity, the difference between its current position and its best-known position, and the difference between its current position and the global best-known position found by any particle in the swarm. This update mechanism guides the particles towards better solutions over time.
After updating its velocity, each particle's position is updated by adding the new velocity to the current position. The fitness of the new position is then evaluated. If a particle's new position is better than its previous best-known position, the new position is recorded as the particle's best-known position. Similarly, if a particle's new position is better than the global best-known position, the global best-known position is updated.
The iterative process of updating velocities, positions, and best-known positions continues until a stopping criterion is met. The stopping criterion can be a predefined number of iterations, a convergence threshold, or an acceptable fitness level. The final output of the PSO algorithm is the best solution found by the swarm.
initialize swarm with random positions and velocities
evaluate fitness of each particle
while stopping criterion not met:
    for each particle:
        update velocity
        update position
        evaluate fitness
        update personal and global best positions
return best solution
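A compact sketch of the velocity and position update described above, assuming a box-constrained minimization problem (the inertia weight and acceleration coefficients are illustrative):

```python
import random

def pso(objective, dim, bounds, swarm_size=30, iterations=100,
        w=0.7, c1=1.5, c2=1.5):
    # Minimizes `objective` over a box-constrained search space.
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive (personal) + social (global) components.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function, whose optimum is at the origin.
random.seed(1)
best, value = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```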
Simulated Annealing (SA) is a probabilistic technique for approximating the global optimum (best solution) of a given function (problem). The method is inspired by the annealing process in metallurgy, where materials are heated and then slowly cooled to remove defects and optimize their structure.
The process begins with a single initial solution and a high temperature. The temperature controls the likelihood of accepting worse solutions as the algorithm progresses. Initially, the temperature is high, allowing the algorithm to explore the search space freely. As the temperature decreases, the algorithm gradually focuses on refining the best solutions found.
During each iteration, a new solution is generated by making a small random modification to the current solution. The change in the objective function value (ΔE) is calculated. If the new solution is better (ΔE < 0), it is accepted. If the new solution is worse (ΔE > 0), it is accepted with a probability that depends on the temperature and the magnitude of ΔE. This probability is given by exp(-ΔE / T), where T is the current temperature.
The temperature is gradually decreased according to a cooling schedule, which can be linear, geometric, or adaptive. The process continues until a stopping criterion is met, such as a fixed number of iterations or a minimum temperature.
initialize solution and temperature
while stopping criterion not met:
    generate new solution by modifying current solution
    calculate change in objective function
    if new solution is better, accept it
    else accept it with a probability dependent on temperature
    decrease temperature
return best solution
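The acceptance rule exp(-ΔE / T) and a geometric cooling schedule can be sketched as follows (a simplified illustration; the move operator and parameter values are arbitrary):

```python
import math
import random

def simulated_annealing(objective, neighbor, initial, t0=10.0, alpha=0.95,
                        steps_per_temp=50, t_min=1e-3):
    current, current_val = initial, objective(initial)
    best, best_val = current, current_val
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            candidate = neighbor(current)
            delta = objective(candidate) - current_val  # ΔE
            # Accept improvements (ΔE < 0) unconditionally;
            # accept worse moves with probability exp(-ΔE / T).
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_val = candidate, current_val + delta
                if current_val < best_val:
                    best, best_val = current, current_val
        t *= alpha  # geometric cooling schedule
    return best, best_val

# Usage: minimize a 1-D function; the move is a small random perturbation.
random.seed(2)
best, value = simulated_annealing(
    objective=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    initial=0.0,
)
```

At high T the acceptance probability is close to 1 even for large ΔE, which is what lets the early phase escape local optima.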
Differential Evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution. DE uses float-based encoding and simple mathematical operations to combine the positions of existing candidate solutions to explore new candidate solutions.
The process begins with an initial population of candidate solutions. During each iteration, for each individual in the population, three distinct individuals are randomly selected, and a mutant vector is created by adding the weighted difference between two of these individuals to the third.
Crossover is then performed between the mutant vector and the target vector (the current individual) to create a trial vector. This trial vector is compared to the target vector, and the one with the better fitness is selected to be part of the next generation. This process ensures that the population evolves towards better solutions over time.
initialize population
while stopping criterion not met:
    for each individual:
        select three distinct individuals
        create mutant vector
        perform crossover to create trial vector
        if trial vector is better, replace target vector
return best solution
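The scheme described above (commonly written DE/rand/1/bin) might look like this in Python. This is a generic sketch, not the iMOPSE code; the F and CR values are illustrative:

```python
import random

def differential_evolution(objective, dim, bounds, pop_size=20,
                           generations=100, f=0.8, cr=0.9):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fitness = [objective(x) for x in pop]

    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals other than the target.
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            # Mutant vector: a + F * (b - c), clamped to the bounds.
            mutant = [min(max(pop[a][d] + f * (pop[b][d] - pop[c][d]), lo), hi)
                      for d in range(dim)]
            # Binomial crossover; j_rand guarantees at least one mutant gene.
            j_rand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < cr or d == j_rand)
                     else pop[i][d] for d in range(dim)]
            trial_val = objective(trial)
            # Greedy selection between the trial and target vectors.
            if trial_val <= fitness[i]:
                pop[i], fitness[i] = trial, trial_val
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]

# Usage: minimize the sphere function.
random.seed(3)
best, value = differential_evolution(lambda x: sum(v * v for v in x),
                                     dim=5, bounds=(-5, 5))
```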
Ant Colony Optimization (ACO) is a probabilistic technique inspired by the behavior of ants in finding paths to food. ACO is used for solving computational problems that can be reduced to finding good paths through graphs. Currently, ACO is implemented only for problems with permutation-based encoding.
The process begins with the initialization of pheromone levels on the paths of the graph. During each iteration, a number of artificial ants construct solutions by moving through the graph. The probability of choosing a particular path is influenced by the pheromone level on that path and a heuristic value that depends on the problem (currently the TTP).
Once all ants have constructed their solutions, the quality of these solutions is evaluated. The pheromone levels are then updated based on the quality of the solutions, with better solutions depositing more pheromone. This process biases the search towards promising areas of the solution space.
A pheromone evaporation mechanism is also employed to avoid premature convergence to suboptimal solutions. Over time, the algorithm converges to a high-quality solution.
initialize pheromone levels
while stopping criterion not met:
    for each ant:
        construct solution
        evaluate solution
    update pheromones
return best solution
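A toy sketch of the loop above on a small TSP instance, where the heuristic value is the inverse distance. The iMOPSE implementation targets the TTP and differs in detail; all parameters here are illustrative:

```python
import random

def ant_colony(dist, n_ants=10, iterations=50, alpha=1.0, beta=2.0,
               evaporation=0.5, q=1.0):
    # Solves a small TSP instance given a symmetric distance matrix.
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(iterations):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                # Transition weight ~ pheromone^alpha * (1 / distance)^beta.
                weights = [pheromone[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                tour.append(random.choices(choices, weights=weights)[0])
            tours.append(tour)
        # Evaporate, then deposit pheromone proportional to tour quality.
        for i in range(n):
            for j in range(n):
                pheromone[i][j] *= (1.0 - evaporation)
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += q / length
                pheromone[b][a] += q / length
    return best_tour, best_len

# Usage: four cities on a unit square; the optimal tour is the perimeter (length 4).
random.seed(4)
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 or 1e-9
         for bx, by in cities] for ax, ay in cities]
tour, length = ant_colony(dist)
```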
NSGA-II is a popular multi-objective optimization algorithm that aims to find a set of solutions representing the trade-offs among multiple objectives. It uses a fast non-dominated sorting approach to classify solutions and a crowding distance mechanism to ensure diversity.
The algorithm starts with an initial population and evaluates the fitness of each individual. Non-dominated sorting is performed to classify individuals into different fronts based on dominance relations.
The crowding distance is calculated for individuals in the same front to ensure diversity. Parents are selected based on tournament selection, considering both rank and crowding distance.
Selected parents undergo crossover and mutation to produce offspring. The offspring are evaluated, and the best individuals are selected for the next generation based on their rank and crowding distance.
This process continues until the stopping criterion is met, which can be a predefined number of generations or a convergence threshold. The final output of the NSGA-II algorithm is the Pareto front approximation, representing the best found trade-offs among the objectives.
initialize population
evaluate fitness of each individual
while stopping criterion not met:
    fast non-dominated sort
    calculate crowding distance
    select parents
    crossover to produce offspring
    mutate offspring
    evaluate fitness of offspring
return Pareto front approximation
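The two mechanisms that distinguish NSGA-II, fast non-dominated sorting and crowding distance, can be sketched as follows for a minimization problem (a simplified illustration, not the iMOPSE implementation):

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objectives):
    # Returns fronts as lists of indices; front 0 is the Pareto front.
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    domination_count = [0] * n             # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        next_front = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    next_front.append(j)
        fronts.append(next_front)
        k += 1
    return fronts[:-1]

def crowding_distance(objectives, front):
    # Larger distance = less crowded; boundary solutions get infinite distance.
    distance = {i: 0.0 for i in front}
    for m in range(len(objectives[0])):
        ordered = sorted(front, key=lambda i: objectives[i][m])
        distance[ordered[0]] = distance[ordered[-1]] = float("inf")
        span = objectives[ordered[-1]][m] - objectives[ordered[0]][m] or 1.0
        for prev, cur, nxt in zip(ordered, ordered[1:], ordered[2:]):
            distance[cur] += (objectives[nxt][m] - objectives[prev][m]) / span
    return distance

# Usage: minimize both objectives; (3, 4) is dominated by (1, 4) and (2, 3).
points = [(1, 4), (2, 3), (4, 1), (3, 4)]
fronts = fast_non_dominated_sort(points)
d = crowding_distance(points, fronts[0])
```

Selection then compares rank first and, on ties, prefers the larger crowding distance.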
NTGA2 is an advanced multi-objective genetic algorithm that employs non-dominated tournament selection to maintain a diverse set of Pareto front approximation solutions. It focuses on improving the spread and convergence of solutions in the objective space.
The algorithm starts with an initial population and evaluates the fitness of each individual. Non-dominated tournament selection is used to select parents, where pairs of individuals are compared based on their dominance relations.
Selected parents undergo crossover and mutation to produce offspring. The offspring are evaluated, and the best individuals are selected for the next generation based on non-domination and crowding distance.
This process continues until the stopping criterion is met, which can be a predefined number of generations or a convergence threshold. The final output of the NTGA2 algorithm is the Pareto front approximation, representing the best found trade-offs among the objectives.
initialize population
evaluate fitness of each individual
while stopping criterion not met:
    perform non-dominated tournament selection
    select parents
    crossover to produce offspring
    mutate offspring
    evaluate fitness of offspring
return Pareto front approximation
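A hedged sketch of non-dominated tournament selection for a minimization problem. The exact tie-breaking rules in iMOPSE may differ; here a random non-dominated candidate wins when no candidate dominates all others:

```python
import random

def dominates(a, b):
    # Minimization: a dominates b if no worse everywhere and better somewhere.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_tournament(population, objectives, k=2):
    # Draw k candidates; discard any candidate dominated by another candidate,
    # then pick one of the remaining non-dominated candidates at random.
    candidates = random.sample(range(len(population)), k)
    non_dominated = [i for i in candidates
                     if not any(dominates(objectives[j], objectives[i])
                                for j in candidates if j != i)]
    return population[random.choice(non_dominated)]

# Usage: (0, 0) dominates (5, 5), so it must win a 2-way tournament.
population = ["solution_a", "solution_b"]
objectives = [(0, 0), (5, 5)]
winner = non_dominated_tournament(population, objectives, k=2)
```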
MOEAD decomposes a multi-objective optimization problem into several single-objective subproblems and optimizes them simultaneously. It uses a neighborhood structure to share information among subproblems and improve convergence and diversity.
The algorithm starts with an initial population and decomposes the problem into a set of subproblems, each with its own weight vector. The fitness of each individual is evaluated based on the decomposition strategy.
Parents are selected from the neighborhood of each subproblem, and crossover and mutation are applied to produce offspring. The offspring are evaluated, and the best individuals are selected for the next generation based on the decomposition strategy.
This process continues until the stopping criterion is met, which can be a predefined number of generations or a convergence threshold. The final output of the MOEAD algorithm is the Pareto front approximation, representing the best found trade-offs among the objectives.
initialize population
decompose the problem into subproblems
assign weight vectors to subproblems
evaluate fitness of each individual
while stopping criterion not met:
    for each subproblem:
        select parents from neighborhood
        crossover to produce offspring
        mutate offspring
        evaluate fitness of offspring
        update neighborhood with offspring if better
return Pareto front approximation
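One common decomposition strategy is the weighted Chebyshev (Tchebycheff) scalarization, which turns each weight vector into a single-objective subproblem. A sketch for a bi-objective problem (the decomposition actually used by iMOPSE may differ):

```python
def tchebycheff(objectives, weights, ideal):
    # Weighted Chebyshev scalarization: max over objectives of w_m * |f_m - z*_m|,
    # where z* is the ideal point. Smaller values are better.
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

def weight_vectors(n, m=2):
    # Evenly spread weight vectors for a bi-objective problem; neighboring
    # vectors define the neighborhood structure used for mating and updates.
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]

# Usage: under weights (0.5, 0.5) and ideal point (0, 0),
# the point (2, 4) scores max(0.5 * 2, 0.5 * 4) = 2.0.
score = tchebycheff((2, 4), (0.5, 0.5), (0, 0))
vectors = weight_vectors(3)
```

An offspring replaces a neighbor's current solution whenever it scores better on that neighbor's scalarized subproblem.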
SPEA2 improves upon its predecessor by incorporating a fine-grained fitness assignment strategy and an enhanced archive truncation method. It maintains an archive of non-dominated solutions to guide the search process towards a diverse set of Pareto front approximation solutions.
The algorithm starts with an initial population and an empty archive. The fitness of each individual is evaluated, and non-dominated individuals are copied to the archive.
If the archive exceeds its capacity, it is truncated based on a clustering mechanism to maintain diversity. Fitness is assigned to individuals based on their dominance relations and distances to archive members.
Parents are selected based on their fitness, and crossover and mutation are applied to produce offspring. The offspring are evaluated, and the best individuals are selected for the next generation based on their fitness.
This process continues until the stopping criterion is met, which can be a predefined number of generations or a convergence threshold. The final output of the SPEA2 algorithm is the Pareto front approximation, representing the best found trade-offs among the objectives.
initialize population and archive
evaluate fitness of each individual
while stopping criterion not met:
    copy non-dominated individuals to archive
    truncate archive if necessary
    assign fitness based on archive
    select parents
    crossover to produce offspring
    mutate offspring
    evaluate fitness of offspring
    select the best individuals for the next generation
return Pareto front approximation
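The fine-grained fitness assignment can be sketched as follows: each solution's strength is the number of solutions it dominates, and its raw fitness is the summed strength of its dominators (a simplified illustration omitting SPEA2's density term):

```python
def dominates(a, b):
    # Minimization: a dominates b if no worse everywhere and better somewhere.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def spea2_raw_fitness(objectives):
    # Strength S(i): number of solutions i dominates
    # (population and archive pooled together).
    n = len(objectives)
    strength = [sum(dominates(objectives[i], objectives[j]) for j in range(n))
                for i in range(n)]
    # Raw fitness R(i): sum of strengths of all dominators of i;
    # R(i) == 0 means i is non-dominated.
    return [sum(strength[j] for j in range(n)
                if dominates(objectives[j], objectives[i]))
            for i in range(n)]

# Usage: (3, 4) is dominated by (1, 4) and (2, 3), each with strength 1,
# so its raw fitness is 2; the non-dominated points get 0.
raw = spea2_raw_fitness([(1, 4), (2, 3), (4, 1), (3, 4)])
```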
Binary Non-dominated Tournament Genetic Algorithm (BNTGA) is a multi-objective genetic algorithm that uses a selection process based on gap selection by random dimension to maintain a diverse set of Pareto front approximation solutions. BNTGA focuses on balancing the spread and convergence of solutions in the objective space.
The algorithm begins with an initial population of individuals, each representing a potential solution. The fitness of each individual is evaluated based on the problem's objectives.
Gap selection by random dimension is employed to select pairs of parents for crossover. This method calculates gap sizes for parent individuals and selects them based on tournament selection or as random neighbors. The genetic material of two parents is then combined to produce offspring. Mutation is applied to introduce random variations in the offspring.
Offspring are evaluated, and the best individuals are selected for the next generation based on their dominance relations. This iterative process of selection, crossover, mutation, and evaluation continues until a stopping criterion is met, such as a maximum number of generations or a convergence threshold.
The final output of the BNTGA algorithm is the Pareto front approximation, representing the best found trade-offs among the objectives.
initialize population
evaluate fitness of each individual
while stopping criterion not met:
    perform gap selection by random dimension
    select parents
    crossover to produce offspring
    mutate offspring
    evaluate fitness of offspring
return Pareto front approximation
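One plausible reading of gap selection by random dimension, sketched under the assumption that it favors the pair of front members bounding the largest gap along a randomly chosen objective. The exact iMOPSE mechanism (including the tournament and random-neighbor variants) may differ; this only illustrates how the idea steers search toward sparsely covered regions of the front:

```python
import random

def gap_selection_by_random_dimension(front_objectives):
    # Assumption: pick a random objective dimension, sort the front along it,
    # and return the indices of the neighboring pair bounding the largest gap.
    dim = random.randrange(len(front_objectives[0]))
    order = sorted(range(len(front_objectives)),
                   key=lambda i: front_objectives[i][dim])
    gaps = [front_objectives[b][dim] - front_objectives[a][dim]
            for a, b in zip(order, order[1:])]
    k = max(range(len(gaps)), key=lambda g: gaps[g])
    return order[k], order[k + 1]

# Usage: the largest gap in this front lies between points 1 and 2
# along either objective.
front = [(0, 10), (1, 9), (5, 5), (6, 4)]
pair = gap_selection_by_random_dimension(front)
```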