广东工业大学学报 (Journal of Guangdong University of Technology), 2021, Vol. 38, Issue (06): 1-8. DOI: 10.12052/gdutxb.210109
Gary Yen1, Li Bo2, Xie Sheng-li2
Abstract: Computational models in geophysical fluid dynamics are extremely expensive for tasks such as data assimilation and uncertainty quantification. Surrogate models have been proposed to alleviate this computational burden. Researchers have begun applying artificial intelligence and machine learning algorithms, in particular artificial neural networks, to build data-driven surrogate models for geophysical flows. The performance of a neural network depends heavily on its architecture design and the choice of hyperparameters (tuning). Typically, such networks are tuned manually through repeated trial and error to maximize their predictive performance, which requires expert knowledge of both the underlying network architecture and the domain-specific problem. This limitation can be addressed by using evolutionary algorithms to automatically design the network and select its optimal hyperparameters. In this paper, a genetic algorithm is applied to the design of an effective Long Short-Term Memory (LSTM) neural network, and a temperature forecasting model is built for the NOAA sea surface temperature dataset.
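As an illustration of the approach summarized in the abstract (not the authors' implementation), the sketch below shows a genetic algorithm evolving LSTM hyperparameters against a validation loss. Everything specific in it is an assumption made for the example: the gene layout (hidden units, number of layers, learning rate), the helper names LSTMRegressor, make_series, fitness and evolve, and the synthetic sine series standing in for the NOAA sea surface temperature data.

```python
# Minimal sketch of GA-driven LSTM hyperparameter search (Python/PyTorch).
# Assumptions: a synthetic sine series replaces the NOAA SST data, and the
# gene layout / GA operators are illustrative, not those used in the paper.
import random
import numpy as np
import torch
import torch.nn as nn

def make_series(n=600, lookback=8):
    """Build (train, validation) windows from a toy stand-in signal."""
    t = np.linspace(0, 60, n)
    x = np.sin(t) + 0.1 * np.random.randn(n)
    X = np.stack([x[i:i + lookback] for i in range(n - lookback)])
    y = x[lookback:]
    split = int(0.8 * len(X))
    return (torch.tensor(X[:split], dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y[:split], dtype=torch.float32),
            torch.tensor(X[split:], dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y[split:], dtype=torch.float32))

class LSTMRegressor(nn.Module):
    def __init__(self, hidden, layers):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)  # predict next value

def fitness(gene, data, epochs=30):
    """Train one candidate and return its validation MSE (lower is better)."""
    hidden, layers, lr = gene
    Xtr, ytr, Xva, yva = data
    model = LSTMRegressor(hidden, layers)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(Xtr), ytr)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(Xva), yva).item()

def random_gene():
    return (random.choice([8, 16, 32, 64]),       # hidden units
            random.choice([1, 2, 3]),             # stacked LSTM layers
            random.choice([1e-2, 3e-3, 1e-3]))    # learning rate

def evolve(pop_size=8, generations=5):
    data = make_series()
    pop = [random_gene() for _ in range(pop_size)]
    for g in range(generations):
        scored = sorted(((fitness(p, data), p) for p in pop), key=lambda s: s[0])
        parents = [p for _, p in scored[:pop_size // 2]]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple(random.choice(pair) for pair in zip(a, b))  # uniform crossover
            if random.random() < 0.3:                                 # point mutation
                i = random.randrange(3)
                child = tuple(random_gene()[k] if k == i else child[k] for k in range(3))
            children.append(child)
        pop = parents + children
        print(f"gen {g}: best val MSE {scored[0][0]:.4f}, gene {scored[0][1]}")
    return scored[0][1]  # best gene from the last evaluated generation

if __name__ == "__main__":
    print("selected hyperparameters:", evolve())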