TY - JOUR
T1 - Genetic Learning Particle Swarm Optimization
AU - Gong, Yue Jiao
AU - Li, Jing Jing
AU - Zhou, Yicong
AU - Li, Yun
AU - Chung, Henry Shu Hung
AU - Shi, Yu Hui
AU - Zhang, Jun
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2016/10
Y1 - 2016/10
N2 - Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in the genetic algorithm (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, whereas research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for 'learning.' This leads to a generalized 'learning PSO' paradigm, the ∗L-PSO. The paradigm is composed of two cascading layers: the first for exemplar generation and the second for particle updates, as in a standard PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific, novel ∗L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed in this paper. In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of the particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of the particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
AB - Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in the genetic algorithm (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, whereas research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for 'learning.' This leads to a generalized 'learning PSO' paradigm, the ∗L-PSO. The paradigm is composed of two cascading layers: the first for exemplar generation and the second for particle updates, as in a standard PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific, novel ∗L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed in this paper. In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of the particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of the particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
KW - Exemplar construction
KW - genetic algorithm (GA)
KW - hybrid method
KW - learning scheme
KW - particle swarm optimization (PSO)
UR - http://www.scopus.com/inward/record.url?scp=84941894209&partnerID=8YFLogxK
U2 - 10.1109/TCYB.2015.2475174
DO - 10.1109/TCYB.2015.2475174
M3 - Article
C2 - 26394440
AN - SCOPUS:84941894209
SN - 2168-2267
VL - 46
SP - 2277
EP - 2290
JO - IEEE Transactions on Cybernetics
JF - IEEE Transactions on Cybernetics
IS - 10
M1 - 7271066
ER -
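
For readers of this record, the sketch below is a rough, non-authoritative Python illustration of the two-layer idea described in the abstract: genetic operators (crossover, mutation, selection) construct exemplars from the particles' historical best information, and a standard PSO update then lets each particle learn from its exemplar. The function name gl_pso, the operator details, and the parameter values (w, c, pm, swarm size, bounds) are illustrative assumptions, not the paper's exact specification.

import numpy as np

def gl_pso(f, dim, n_particles=40, iters=1000, w=0.7298, c=1.49618,
           pm=0.01, lb=-100.0, ub=100.0, seed=0):
    """Minimal GL-PSO-style sketch (assumed operators, not the paper's exact
    algorithm): genetic operators build exemplars from personal/global bests;
    particles then learn from those exemplars via a standard PSO update."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    fx = np.apply_along_axis(f, 1, x)
    pbest, pfit = x.copy(), fx.copy()                 # personal bests
    g = np.argmin(pfit)
    gbest, gfit = pbest[g].copy(), pfit[g]            # global best
    exemplar, efit = pbest.copy(), pfit.copy()        # constructed exemplars

    for _ in range(iters):
        for i in range(n_particles):
            # Crossover: mix particle i's pbest with gbest, dimension-wise
            # (one plausible operator choice for illustration).
            mask = rng.random(dim) < 0.5
            off = np.where(mask, pbest[i], gbest)
            # Mutation: randomly reset a few dimensions within the bounds.
            mut = rng.random(dim) < pm
            off[mut] = rng.uniform(lb, ub, int(mut.sum()))
            # Selection: keep the offspring as the exemplar only if it improves.
            ofit = f(off)
            if ofit < efit[i]:
                exemplar[i], efit[i] = off, ofit
        # PSO layer: particles learn from their constructed exemplars.
        r = rng.random((n_particles, dim))
        v = w * v + c * r * (exemplar - x)
        x = np.clip(x + v, lb, ub)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pfit
        pbest[improved], pfit[improved] = x[improved], fx[improved]
        g = np.argmin(pfit)
        if pfit[g] < gfit:
            gbest, gfit = pbest[g].copy(), pfit[g]
    return gbest, gfit

# Example usage: minimize the sphere function in 30 dimensions.
best, val = gl_pso(lambda z: float(np.sum(z * z)), dim=30)
print(val)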