Hierarchical deep-learning neural networks: finite elements and beyond



ORIGINAL PAPER

Lei Zhang1 · Lin Cheng2 · Hengyang Li2 · Jiaying Gao2 · Cheng Yu2 · Reno Domel3 · Yang Yang1 · Shaoqiang Tang1 · Wing Kam Liu2

Received: 5 July 2020 / Accepted: 8 September 2020

© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract

The hierarchical deep-learning neural network (HiDeNN) is systematically developed through the construction of structured deep neural networks (DNNs) in a hierarchical manner, and a special case of HiDeNN representing the finite element method (HiDeNN-FEM for short) is established. In HiDeNN-FEM, weights and biases are functions of the nodal positions, so the training process includes the optimization of the nodal coordinates. This is the spirit of r-adaptivity, and it increases both the local and global accuracy of the interpolants. By fixing the number of hidden layers and increasing the number of neurons during training, rh-adaptivity can be achieved, which further improves the accuracy of the solutions. Rational functions are generalized through the development of three fundamental building blocks for constructing deep hierarchical neural networks: linear functions, multiplication, and inversion. With these building blocks, the class of deep-learning interpolation functions is demonstrated for interpolation theories such as Lagrange polynomials, NURBS, isogeometric analysis, the reproducing kernel particle method, and others. In HiDeNN-FEM, enrichment through the multiplication of neurons is equivalent to the enrichment in standard finite element methods, that is, the generalized, extended, and partition-of-unity finite element methods. Numerical examples performed with HiDeNN-FEM exhibit reduced approximation error compared with the standard FEM. Finally, an outlook for generalizing HiDeNN to high-order continuity in multiple dimensions and to topology optimization is illustrated through the hierarchy of the proposed DNNs.

Keywords Neural network interpolation functions · Data-driven · r- and rh-adaptivity · Fundamental building block · Rational functions (i.e. RKPM, NURBS and IGA)
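To make the construction concrete, the sketch below writes a 1D linear finite element shape function as a small ReLU network whose weights and biases are functions of the nodal coordinates, so that r-adaptivity becomes ordinary gradient-based optimization of those coordinates. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the names hat, u_h, and target, the Gaussian target function, and the optimizer settings are all assumptions chosen for the example.

```python
# Minimal sketch of the HiDeNN-FEM idea in 1D (illustrative, not the paper's code):
# a linear FEM shape function expressed with ReLU neurons whose weights are
# functions of the nodal coordinates, so nodal positions are trainable parameters.
import torch

def hat(x, xl, xc, xr):
    # Piecewise-linear shape function: 0 outside [xl, xr], 1 at xc.
    # Three ReLU neurons; the weights 1/(xc-xl), 1/(xr-xc) depend on the nodes.
    return (torch.relu(x - xl) / (xc - xl)
            - torch.relu(x - xc) * (1.0 / (xc - xl) + 1.0 / (xr - xc))
            + torch.relu(x - xr) / (xr - xc))

def u_h(x, nodes, u):
    # FEM interpolant u_h(x) = sum_i u_i N_i(x); boundary values held at zero,
    # so only interior shape functions are summed.
    out = torch.zeros_like(x)
    for i in range(1, len(nodes) - 1):
        out = out + u[i - 1] * hat(x, nodes[i - 1], nodes[i], nodes[i + 1])
    return out

def target(x):
    # Target with a sharp local feature; r-adaptivity should cluster nodes near it.
    return torch.exp(-100.0 * (x - 0.5) ** 2)

x = torch.linspace(0.0, 1.0, 400)
interior = torch.linspace(0.1, 0.9, 9).requires_grad_(True)  # movable node positions
u = torch.zeros(9, requires_grad=True)                       # nodal values
opt = torch.optim.Adam([interior, u], lr=1e-2)

for step in range(1000):
    nodes = torch.cat([torch.zeros(1), interior, torch.ones(1)])
    loss = torch.mean((u_h(x, nodes, u) - target(x)) ** 2)   # L2 fitting error
    opt.zero_grad()
    loss.backward()
    opt.step()
    # A production code would also constrain node ordering to prevent
    # element inversion as the nodes move.
```

Because the interpolant is exactly the FEM shape-function expansion, optimizing u alone recovers the standard FEM fit on a fixed mesh, while optimizing the interior nodes as well is the r-adaptivity described above. The multiplication and inversion building blocks named in the abstract would extend this same pattern from polynomial to rational interpolants such as RKPM and NURBS.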

1 Introduction

Lei Zhang and Lin Cheng: co-first authors.
Reno Domel: Summer 2019 undergraduate research intern, Department of Mechanical Engineering, Northwestern University.


Shaoqiang Tang [email protected]
Wing Kam Liu [email protected]

1 HEDPS and LTCS, College of Engineering, Peking University, Beijing 100871, China

2 Department of Mechanical Engineering, Northwestern University, 2145 Sheridan Rd., Evanston, IL 60208-3111, USA

3 University of Notre Dame, Notre Dame, IN 46556, USA

Machine learning is a process by which computers, given data, create their own knowledge by identifying patterns in that data [1,2]. Deep learning, a subfield of machine learning, enables computers to understand challenging and complex concepts by building upon several simpler concepts [2] in a hierarchical fashion.