TensorFlow Recognition Application

Building a DL model such as a CNN from scratch using NumPy, as we did, gives a detailed understanding of how each layer works. For practical applications, however, such an implementation is not recommended. One reason is that it is computationally intensive and requires effort to optimize the code. Another is that it does not support distributed processing, GPUs, and many other features. On the other hand, there are existing libraries that support these features in a time-efficient manner, including TF, Keras, Theano, PyTorch, Caffe, and more. This chapter introduces the TF DL library from scratch by building and visualizing the computational graph for a simple linear model and a two-class ANN classifier. The computational graph is visualized using TensorBoard (TB). Using the TF-Layers API, a CNN model is then created to apply the previously discussed concepts to recognizing images from the CIFAR10 dataset.
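As a preview of the kind of computational graph the chapter builds, here is a minimal sketch of a simple linear model in TF. This assumes the TF 1.x-style graph/session API (available in TF 2.x under `tf.compat.v1`); the tensor names and the `./logs` directory are illustrative choices, not from the book.

```python
# Minimal sketch of a TF computational graph for y = w*x + b.
# Assumption: TF 1.x-style API, accessed via tf.compat.v1 on TF 2.x.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, name="x")    # input tensor
    w = tf.Variable(2.0, name="w")              # weight tensor
    b = tf.Variable(1.0, name="b")              # bias tensor
    y = tf.add(tf.multiply(w, x), b, name="y")  # operation nodes in the graph

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run(y, feed_dict={x: 3.0})    # 2.0*3.0 + 1.0 = 7.0
    # Writing the graph definition lets TensorBoard visualize it:
    tf.summary.FileWriter("./logs", graph=graph).close()
```

Running `tensorboard --logdir=./logs` then displays the graph of tensors and operations, which is exactly the visualization this chapter walks through.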

Introduction to TF

There are different programming paradigms, or styles, for building software programs. They include sequential, which builds a program as a series of lines executed from beginning to end; functional, which organizes code into a set of functions that can be called multiple times; imperative, which tells the computer every detailed step of how the program works; and more. One programming language may support several paradigms. These paradigms, however, share the disadvantage of being tied to the language the program is written in.

© Ahmed Fawzy Gad 2018 A. F. Gad, Practical Computer Vision Applications Using Deep Learning with CNNs, https://doi.org/10.1007/978-1-4842-4167-7_6

Chapter 6

Another paradigm is dataflow. Dataflow languages represent their programs as text instructions that describe the computational steps from receiving the data until returning the results. A dataflow program can be visualized as a graph showing the operations together with their inputs and outputs. Dataflow languages support parallel processing because it is much easier to deduce which independent operations can be executed at the same time.

The name "TensorFlow" consists of two words. The first is "tensor," the data unit that TF uses in its computations. The second is "flow," reflecting that it uses the dataflow paradigm. As a result, TF builds a computational graph consisting of data represented as tensors and the operations applied to them. To make things easier to understand, just remember that rather than variables and methods, TF uses tensors and operations. Here are some advantages of using dataflow with TF:

  • Parallelism: It is easier to identify the operations that can be executed in parallel.

  • Distributed Execution: The TF program can be partitioned across multiple devices (CPUs, GPUs, and TF Processing Units [TPUs]). TF itself handles the necessary work for communication and cooperation among devices.
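The parallelism point can be made concrete without TF at all. The toy graph below (plain Python; the operation names and the `ready_sets` helper are hypothetical, for illustration only) groups operations into "waves" of nodes whose dependencies are already computed; everything within one wave is independent and could run in parallel.

```python
# Toy dataflow graph: each key is an operation, each value lists the
# operations it depends on. Two nodes with no dependency path between
# them (e.g. square_a and square_b) can execute at the same time.
ops = {
    "a": [],                      # input
    "b": [],                      # input
    "square_a": ["a"],
    "square_b": ["b"],            # independent of square_a
    "sum": ["square_a", "square_b"],
}

def ready_sets(graph):
    """Group operations into waves; ops in the same wave are independent."""
    done, waves = set(), []
    while len(done) < len(graph):
        wave = [op for op, deps in graph.items()
                if op not in done and all(d in done for d in deps)]
        waves.append(wave)
        done.update(wave)
    return waves

print(ready_sets(ops))
# [['a', 'b'], ['square_a', 'square_b'], ['sum']]
```

This is essentially what a dataflow runtime like TF does when it schedules a graph across devices: the structure itself exposes which operations may run concurrently.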