Journal of Guangdong University of Technology, 2023, Vol. 40, Issue (03): 38-45. doi: 10.12052/gdutxb.210131


A Multi-factor Evolutionary Algorithm Using the Maximum Mean Discrepancy Method

Lai Yu-fang, Wang Zhen-you

  1. School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou 510520, China
  • Received: 2021-09-06; Online: 2023-05-25; Published: 2023-06-08
  • Corresponding author: Wang Zhen-you (born 1979), male, professor, Ph.D.; research interests: computational optimization and medical statistical analysis. E-mail: zywang@gdut.edu.cn
  • About the first author: Lai Yu-fang (born 1996), female, master's student; research interest: multi-objective optimization
  • Funding: Guangdong Basic and Applied Basic Research Foundation (2020B1515310001)

Abstract: This work studies and improves the multi-factor evolutionary algorithm (MFEA). The mixed probability distribution of the offspring population, optimized with the maximum mean discrepancy (MMD) method, is used as the metric criterion of the algorithm. MFEA-MMD builds on the random mating probability (RMP) matrix of the MFEA-II algorithm and, to the greatest extent possible, avoids negative transfer, the most common problem in multi-factor evolutionary algorithms. The improved algorithm converges faster than MFEA-II; its computation speed is 29% higher and its degree of inter-task knowledge transfer is 35% higher than those of MFEA-II.

Key words: multi-factor evolutionary algorithm, maximum mean discrepancy, negative transfer, knowledge transfer
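
For background, the maximum mean discrepancy named in the keywords is the standard kernel two-sample statistic. The abstract does not spell out the exact form used inside MFEA-MMD, so the biased empirical estimator between two offspring sub-populations X = {x_1, ..., x_n} and Y = {y_1, ..., y_m} with kernel k is given here only as a conventional reference, not as the authors' formulation:

\mathrm{MMD}^2(X, Y) = \frac{1}{n^2}\sum_{i,j} k(x_i, x_j) + \frac{1}{m^2}\sum_{i,j} k(y_i, y_j) - \frac{2}{nm}\sum_{i,j} k(x_i, y_j)

A minimal numerical sketch of this estimator with a Gaussian kernel follows; the function name mmd2 and the bandwidth sigma are illustrative choices and do not come from the paper.

import numpy as np

def mmd2(X, Y, sigma=1.0):
    # Biased squared-MMD estimate between samples X (n x d) and Y (m x d)
    def gram(A, B):
        # Pairwise squared Euclidean distances mapped through a Gaussian kernel
        d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
        return np.exp(-d2 / (2.0 * sigma**2))
    n, m = X.shape[0], Y.shape[0]
    return gram(X, X).sum() / n**2 + gram(Y, Y).sum() / m**2 - 2.0 * gram(X, Y).sum() / (n * m)

# A large MMD between the offspring of two tasks indicates dissimilar search
# distributions and hence a higher risk of negative transfer when mating across tasks.
X = np.random.rand(50, 10)
Y = np.random.rand(60, 10) + 0.5
print(mmd2(X, Y))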

CLC number: TP301