Document Type

Thesis

Degree

Master of Arts

Major

Mathematics

Date of Defense

7-22-2022

Graduate Advisor

Adrian Clingher

Committee

Adrian Clingher, Ph.D.

Qingtang Jiang, Ph.D.

Haiyan Cai, Ph.D.

Abstract

Artificial Neural Networks have gained considerable media attention in the last few years. New articles on Artificial Intelligence, Machine Learning, and Deep Learning appear every day, and both academia and industry are becoming increasingly interested in deep learning. Deep learning has innumerable uses, including autonomous driving, computer vision, robotics, security and surveillance, and natural language processing. The recent development and focus have primarily been made possible by the convergence of related research efforts and the introduction of APIs like Keras. The availability of high-speed compute resources such as GPUs and TPUs has also been instrumental in developing deep learning models.

While the development of APIs like Keras offers a layer of abstraction and makes model development convenient, the mathematical logic behind the working of neural networks is often misunderstood. The thesis focuses on the building blocks of a neural network in terms of mathematical terms and formulas. It also details the core parts of deep learning algorithms such as forward propagation, gradient descent, and backpropagation.
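For orientation, the standard gradient descent update that these algorithms revolve around can be written as follows; the notation here (learning rate $\eta$, loss $L$, parameter vector $w_t$) is a common convention rather than necessarily the thesis's own:

$$w_{t+1} = w_t - \eta \, \nabla_w L(w_t)$$

Forward propagation computes the loss $L$ for the current parameters, and backpropagation computes the gradient $\nabla_w L$ efficiently by applying the chain rule layer by layer.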

The research briefly covers the basic operations in Convolutional Neural Networks and a working example of a multi-class classification problem using the Keras library in R, of the kind sketched below. CNNs are a vast area of research in their own right, and covering all aspects of ConvNets is beyond the scope of this thesis. However, the discussion provides an excellent foundation for understanding how neural networks work and how a CNN applies the building blocks of a basic neural network to an image classification problem.
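A minimal sketch of such a multi-class image classifier in Keras for R might look as follows; the input shape, layer sizes, and ten-class output are illustrative assumptions, not the thesis's actual architecture:

library(keras)

# Illustrative CNN for a 10-class image classification task
# (input shape and layer sizes are assumptions, not the thesis's model)
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%  # convolution extracts local features
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%  # pooling downsamples the feature maps
  layer_flatten() %>%                            # flatten feature maps into a vector
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")  # one probability per class

# Softmax output pairs with categorical cross-entropy for multi-class problems
model %>% compile(
  optimizer = "adam",
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)

Training would then proceed with fit() on one-hot encoded labels (e.g., produced with to_categorical()), tying the softmax output layer back to the multi-class setting described above.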
