A multi-step approximant for fixed point problem and convex optimization problem in Hadamard spaces
Journal of Fixed Point Theory and Applications
Muhammad Aqeel Ahmad Khan and Prasit Cholamjiak
Abstract. The purpose of this paper is to propose and analyze a multi-step iterative sequence to solve a convex optimization problem and a fixed point problem in a Hadamard space. We aim to establish strong and Δ-convergence results for the proposed iterative sequence by employing suitable conditions on the control parameters and the structural properties of the underlying space. As a consequence, we compute an optimal solution, namely a minimizer of a proper convex lower semicontinuous function and a common fixed point of a finite family of total asymptotically quasi-nonexpansive mappings, in Hadamard spaces. Our results can be viewed as an extension and generalization of various corresponding results in the existing literature.

Mathematics Subject Classification. 47H09, 47H10, 65K10, 65K15.

Keywords. Convex optimization, lower semicontinuity, proximal point algorithm, total asymptotically quasi-nonexpansive mapping, common fixed point, asymptotic center.
1. Introduction

The theory of nonlinear analysis is mainly divided into three major areas, namely convex analysis, monotone operator theory, and fixed point theory. These theories have been developed largely in abstract settings with linear structure, such as Euclidean, Hilbert, and Banach spaces. The theory of optimization, in particular convex optimization, is prominent within convex analysis; it studies the properties of minimizers and maximizers of the functions under consideration. The analysis of such properties relies on various mathematical tools, topological notions, and geometric ideas. Convex optimization not only provides a theoretical setting for the existence and uniqueness of a solution to a given optimization problem, but also provides efficient iterative algorithms to construct the optimal solution of such a problem. As a consequence, convex optimization solves a variety
of problems arising in disciplines such as mathematical economics, approximation theory, game theory, optimal transport theory, probability and statistics, information theory, signal and image processing, and partial differential equations; see, for example, [1,19,20,28,42,43,46] and the references therein. One of the major problems in optimization theory is to find a minimizer of a convex function. The class of proximal point algorithms (PPA) contributes significantly to the theory of convex optimization as a means to compute a minimizer of a convex lower semicontinuous (lsc) function. Martinet [36] proposed and analyzed the initial form of the PPA as a sequence of successive approximations of resolvents. Rockafellar [41] established, more generally, convergence of the PPA to a zero of a maximal monotone operator in Hilbert spaces. Brezis and Lions [9] improved Rockafellar's algorithm un
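To make Martinet's scheme concrete, the following is a minimal Euclidean sketch (not the Hadamard-space algorithm of this paper): the PPA iterates x_{n+1} = argmin_y { f(y) + (1/(2λ)) |y − x_n|² }. The test function f(y) = (y − 3)², the step size λ, and the starting point are illustrative choices; for this f the proximal step has the closed form y = (x_n + 6λ)/(1 + 2λ).

```python
def prox_step(x, lam):
    # Closed-form proximal step for f(y) = (y - 3)^2:
    # minimize (y - 3)^2 + (1/(2*lam)) * (y - x)^2 over y.
    # Setting the derivative to zero gives y = (x + 6*lam) / (1 + 2*lam).
    return (x + 6.0 * lam) / (1.0 + 2.0 * lam)

def ppa(x0, lam=1.0, steps=50):
    # Proximal point algorithm: iterate the resolvent (proximal map).
    x = x0
    for _ in range(steps):
        x = prox_step(x, lam)
    return x

print(ppa(10.0))  # converges to the minimizer y = 3
```

Each step contracts the distance to the minimizer by the factor 1/(1 + 2λ), which is the linear-rate behavior the general theory predicts for strongly convex f; the Hadamard-space results of this paper replace the squared Euclidean distance by the squared geodesic metric.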