Chinese Physics B ›› 2025, Vol. 34 ›› Issue (11): 110701-110701. doi: 10.1088/1674-1056/adfc43


Improved physics-informed neural networks incorporating lattice Boltzmann method optimized by tanh robust weight initialization

Chenghui Yang(杨程晖)1, Minglei Shan(单鸣雷)1,†, Mengyu Feng(冯梦宇)1, Ling Kuai(蒯玲)1, Yu Yang(杨雨)2, Cheng Yin(殷澄)1, and Qingbang Han(韩庆邦)1   

  1. College of Information Science and Engineering, Hohai University, Changzhou 213200, China;
  2. College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
  • Received: 2025-06-18  Revised: 2025-07-22  Accepted: 2025-08-18  Published: 2025-10-30
  • Contact: Minglei Shan, E-mail: shanml@hhu.edu.cn
  • Supported by: the National Natural Science Foundation of China (Grant Nos. 12474453, 12174085, and 12404530).


Abstract: Physics-informed neural networks (PINNs) have shown considerable promise for numerical simulation in fluid mechanics: they provide mesh-free, end-to-end approaches by embedding physical laws into their loss functions. However, when addressing complex flow problems, PINNs still face challenges such as activation saturation and vanishing gradients in deep network training, leading to slow convergence and insufficient prediction accuracy. To address these challenges, we present physics-informed neural networks incorporating the lattice Boltzmann method, optimized by tanh robust weight initialization (T-PINN-LBM). This approach fuses the mesoscopic lattice Boltzmann model with the automatic differentiation framework of PINNs, and implements a tanh robust weight initialization scheme derived from fixed-point analysis. The model effectively mitigates activation saturation and gradient decay in deep networks, improving convergence speed and data efficiency in multiscale flow simulations. We validate the effectiveness of the model on the classical benchmark of lid-driven cavity flow. Compared with the traditional Xavier-initialized PINN and PINN-LBM, T-PINN-LBM reduces the mean absolute error (MAE) by one order of magnitude at the same network depth and maintains stable convergence in deeper networks. The results demonstrate that the model can accurately capture complex flow structures without prior data, providing a feasible new pathway for data-free fluid simulation.
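The abstract does not state the closed form of the tanh robust weight initialization, only that it is derived from fixed-point analysis. As a heavily hedged illustration (not the authors' actual scheme), one standard way to apply fixed-point reasoning to a tanh network is to choose the per-layer weight standard deviation so that the pre-activation variance map q → fan_in · σ² · E[tanh(z)²] (z ~ N(0, q)) has the desired fixed point, preventing the layerwise variance decay that saturates tanh units. All function names below are ours, and the second moment is estimated by Monte Carlo:

```python
import numpy as np

def tanh_second_moment(q, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of E[tanh(z)^2] for z ~ N(0, q)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(0.0, np.sqrt(q), n_samples)
    return float(np.mean(np.tanh(z) ** 2))

def robust_tanh_std(fan_in, target_q=1.0):
    """Weight std sigma that makes target_q a fixed point of the
    variance map q -> fan_in * sigma^2 * E[tanh(z)^2], z ~ N(0, q)."""
    a = tanh_second_moment(target_q)
    return np.sqrt(target_q / (fan_in * a))

def init_layer(fan_in, fan_out, seed=1):
    """One fully connected layer initialized with the tanh-robust std."""
    rng = np.random.default_rng(seed)
    sigma = robust_tanh_std(fan_in)
    W = rng.normal(0.0, sigma, (fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b
```

Because E[tanh(z)²] < Var(z) for z ~ N(0, 1), this std comes out larger than the Xavier value √(2/(fan_in + fan_out)), compensating for the contraction of tanh and keeping deep-layer activations away from the saturated regime — consistent with the behavior the abstract attributes to T-PINN-LBM, though the paper's exact derivation may differ.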

Key words: lattice Boltzmann method, physics-informed neural networks, fluid mechanics, tanh robust weight initialization

PACS: 07.05.Mh (Neural networks, fuzzy logic, artificial intelligence); 02.60.Cb (Numerical simulation; solution of equations); 02.30.Jr (Partial differential equations); 84.35.+i (Neural networks)