Analysis on Applicability and Feasibility of Dynamic Programming on the Basis of Time Complexity and Space Through Case

Dynamic programming is a powerful and widely used technique in the programming field, which optimizes a problem by decomposing it into subproblems and then combining their solutions to obtain the global optimum. The optimized solution is crucial to answe



1 Introduction

In the world of competitive programming, dynamic programming serves as a powerful technique for solving mathematical problems to find the optimal solution [1]. The technique works by decomposing the actual problem into subproblems and remembering their solutions; this caching of results is called memoization [2]. Computed values are stored in a specific location, and when the program receives an input it has already seen, it accesses the stored value and outputs it to the user instead of recomputing it. Memoization is analogous to memorizing things in the real world. In the same spirit, the user analyzes the problem and checks whether its solution can be devised by dividing it into subproblems (and, in some cases, dividing those further) and then finally computing it with a suitable algorithm. In this process, values are computed up to some limit and stored in memory; to obtain the desired output, the program looks up the appropriate stored value and displays it. Tabulation, in contrast, computes higher cases by building upon lower ones until we reach our destination case [2]. Dynamic programming works on the basis of these two pillars: memoization and tabulation.
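The two pillars can be illustrated with a minimal Python sketch of the Fibonacci sequence (the function names `fib_memo` and `fib_tab` are our own, chosen for illustration):

```python
def fib_memo(n, cache=None):
    """Top-down memoization: decompose fib(n) into subproblems and
    store each computed value so it is never recomputed."""
    if cache is None:
        cache = {}
    if n in cache:          # value already stored: look it up and return it
        return cache[n]
    if n < 2:
        return n
    cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]


def fib_tab(n):
    """Bottom-up tabulation: compute higher cases by building upon
    the lower ones until the destination case n is reached."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Both functions return the same values; they differ only in whether the subproblem order is driven by recursion (memoization) or by an explicit loop over the table (tabulation).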

M. Dixit
Department of CSE and IT, Madhav Institute of Technology and Science, Gwalior 474005, India
e-mail: [email protected]

M. A. Khan (B)
Department of Electronics Engineering, Madhav Institute of Technology and Science, Gwalior 474005, India
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2019
R. K. Shukla et al. (eds.), Data, Engineering and Applications, https://doi.org/10.1007/978-981-13-6351-1_27


Fig. 1 Graph plotting the various time complexities based on Big O notation [4]

By analyzing this process, we see that the program must compute and store values consecutively, and therefore needs considerable space to hold them.

1.1 Time Complexity

The time complexity of an algorithm signifies the time required for it to complete its execution. A common way of denoting it is Big O notation [3], which is obtained by counting the number of elementary operations in the algorithm. We usually take the worst-case time complexity, because an algorithm's performance differs across different types of input data; the worst-case complexity is the maximum time in which the algorithm completes its execution on any type of input. A dynamic programming solution typically takes O(n²) or O(n³) time for problems whose naive solutions have exponential time complexity (Fig. 1).
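The gap between exponential and polynomial time can be made concrete by counting elementary recursive calls for the Fibonacci problem. The sketch below (our own instrumentation, not from the original text) compares the plain O(2ⁿ)-call recursion with a memoized version whose call count grows only linearly:

```python
def calls_naive(n, counter):
    # Plain recursion: overlapping subproblems are recomputed every time,
    # so the number of calls grows exponentially in n.
    counter[0] += 1
    if n < 2:
        return n
    return calls_naive(n - 1, counter) + calls_naive(n - 2, counter)


def calls_memo(n, counter, cache):
    # Memoized recursion: each subproblem above the base cases is
    # computed once, so the number of calls grows linearly in n.
    counter[0] += 1
    if n in cache:
        return cache[n]
    if n < 2:
        return n
    cache[n] = calls_memo(n - 1, counter, cache) + calls_memo(n - 2, counter, cache)
    return cache[n]


naive_count, memo_count = [0], [0]
calls_naive(20, naive_count)        # tens of thousands of calls
calls_memo(20, memo_count, {})      # only a few dozen calls
```

For n = 20 the naive version already makes orders of magnitude more calls than the memoized one, which is exactly the worst-case behavior Big O notation summarizes.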

1.2 Time–Space Trade-off

The intuition behind dynamic programming is to trade time for space: whenever the logic says that future outcomes depend on past ones, we store the past results so that we do not have to repeat calculations again and again [1].


So, instead of taking a lot of time to repeat calculations, we store the outputs, build further outputs on top of the past ones, and store those as well to