An Additive Approximation to Multiplicative Noise
R. Nicholson¹ · J. P. Kaipio²

Received: 13 March 2019 / Accepted: 21 July 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract

Multiplicative noise models are often used instead of additive noise models in cases in which the noise variance depends on the state. Furthermore, when Poisson distributions with relatively small counts are approximated with normal distributions, multiplicative noise approximations are straightforward to implement. Existing approaches for dealing with multiplicative errors have a number of limitations, such as requiring positivity of the multiplicative noise term. The focus of this paper is on high-dimensional (inverse) problems for which sampling-type approaches have too high a computational complexity. We propose an alternative approach that uses the Bayesian framework to carry out approximate marginalisation over the multiplicative error by embedding its statistics in an additive error term. The Bayesian framework allows the statistics of the resulting additive error term to be derived from the statistics of the other unknowns. As an example, we consider a deconvolution problem on random fields with different statistics of the multiplicative noise. The approach also allows for correlated multiplicative noise. We show that the proposed approach provides feasible error estimates in the sense that the posterior models support the actual image.

Keywords: Multiplicative noise · Additive approximation · Pre-marginalisation
1 Introduction

A ubiquitous problem in science and engineering is to infer some parameter of interest, say x ∈ Rⁿ, given noisy indirect measurements y ∈ Rᵐ. Suppose the parameter and measurements are linked by a parameter-to-observable map f : Rⁿ × Rᵖ × Rᵠ → Rᵐ, so that we can write

y = f(x, n, η),    (1.1)

where n ∈ Rᵖ and η ∈ Rᵠ denote uninteresting random variables, which can be interpreted as noise. Here, we consider the inference problem in the Bayesian framework [18,36,37], which naturally allows for the incorporation of uncertainties and prior knowledge, and results in a posterior distribution. In this framework, a natural first task would then be to marginalise over the uninteresting variables. However, the marginalisation process depends explicitly on how the noise is modelled. The most common model for f(x, n, η) is the additive error model [18,37]

y = A(x) + η,    (1.2)

where the mapping A : x → y is referred to as the forward map (problem). However, in several imaging modalities, including optical coherence tomography (OCT) [41,42], ultrasound [5,27], synthetic aperture radar (SAR) imaging [12,40], and electrical impedance tomography (EIT) [2,43], the noise can be proportional to the data. Multiplicative noise is also common in systems and control theory.

Corresponding author: R. Nicholson, [email protected]
¹ Department of Engineering Science, University of Auckland, Auckland, New Zealand
² Department of Mathematics, University of Auckland, Auckland, New Zealand
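The contrast between the two noise models, and the idea of embedding the multiplicative-error statistics in an induced additive term, can be illustrated with a minimal numerical sketch. This is not the paper's method itself, only an assumption-laden illustration: we take a hypothetical 1-D convolution as the forward map A (loosely mirroring the deconvolution example mentioned in the abstract), a zero-mean Gaussian multiplicative error ε with standard deviation `sigma_eps`, and note that, for a fixed x, the model y = (1 + ε)A(x) coincides with an additive model y = A(x) + η whose error η = ε·A(x) has zero mean and pointwise variance σ_ε² A(x)².

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward map A: 1-D convolution with a Gaussian kernel
# (names and sizes are illustrative, not from the paper).
n = 64
x = np.zeros(n)
x[20:30] = 1.0                                   # "true" image: a box function
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
kernel /= kernel.sum()
Ax = np.convolve(x, kernel, mode="same")         # A(x)

sigma_eps = 0.1                                  # std of multiplicative error
eps = sigma_eps * rng.standard_normal(n)

# Multiplicative noise model: y = (1 + eps) * A(x)
y_mult = (1.0 + eps) * Ax

# Induced additive model: y = A(x) + eta, with eta = eps * A(x),
# so eta has mean 0 and pointwise standard deviation sigma_eps * |A(x)|.
eta_std = sigma_eps * np.abs(Ax)
y_add = Ax + eta_std * rng.standard_normal(n)

# For fixed x the two parameterisations share the same second moments.
print(np.allclose(eta_std ** 2, sigma_eps ** 2 * Ax ** 2))
```

When x itself is uncertain, the variance of the induced additive term must instead be computed by averaging over the prior on x; that pre-marginalisation step is what the paper develops.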