Reasoning in Description Logic Ontologies for Privacy Management
DISSERTATION AND HABILITATION ABSTRACTS
Adrian Nuradiansyah¹
© The Author(s) 2020
Abstract This work is initially motivated by a privacy scenario in which confidential information about persons or their properties, formulated in description logic (DL) ontologies, should be kept hidden. We investigate procedures that use DL formalisms to detect whether this confidential information can be disclosed in a given situation. If the information can be deduced from the ontologies, meaning that certain privacy policies are not fulfilled, then one needs methods to repair these ontologies in a minimal way such that the modified ontologies comply with the policies. However, privacy compliance by itself is not enough if a possible attacker can also obtain relevant information from other sources, which, together with the modified ontologies, might still violate the privacy policy. This article summarizes the studies and results from Adrian Nuradiansyah's Ph.D. dissertation that correspond to the problems described above, with special emphasis on the worst-case complexities of these problems as well as the complexity of the procedures and algorithms that solve them.
* Adrian Nuradiansyah, adrian.nuradiansyah@tu-dresden.de

¹ Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany

1 Introduction

Ontologies are well known for sharing a common understanding of the structure of information in various application domains, e.g., the Semantic Web [7] or medicine [3]. Examples of real-world ontologies that have been published are [3, 14]. Ontologies are also used to provide additional semantics that formally describe the meaning of data. In contrast to relational databases, the information stored in ontologies is generally assumed to be incomplete, which means that we can deduce additional facts from the ontologies that are not explicitly stated there. Another difference from the database paradigm is that ontologies normally employ the open-world assumption: knowledge that is not explicitly stored in the ontologies and cannot be inferred from them is assumed to be neither true nor false. In addition to these semantic features, ontologies are formulated in languages that are much more expressive than database schema languages. One of the most common languages for formalizing ontologies is the Web Ontology Language (OWL)¹, which has been standardized by the W3C and is frequently used in many application domains. This standardization establishes a connection to a family of knowledge representation languages, called description logics (DLs) (see [1]), that are known to be fragments of first-order logic. However, all these attractive features of ontologies are still prone to privacy violations. Assume that there is a DL ontology O containing the following information:
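The contrast between the closed-world semantics of databases and the open-world semantics of ontologies can be illustrated with a minimal sketch (not taken from the dissertation; the facts and names are invented for illustration):

```python
# Illustrative sketch: closed-world vs. open-world query answering.
# Under CWA, anything not stored is false; under OWA, anything not
# stored (and not derivable) is unknown, i.e., neither true nor false.

facts = {("Oncologist", "dr_jones")}  # explicitly stored knowledge

def cwa_holds(atom):
    """Closed-world answer: absent facts are treated as false."""
    return atom in facts

def owa_status(atom):
    """Open-world answer: absent, underivable facts stay unknown.
    (Derivation of implicit facts is omitted in this sketch.)"""
    return "true" if atom in facts else "unknown"

query = ("Oncologist", "dr_smith")
print(cwa_holds(query))   # False under the closed-world assumption
print(owa_status(query))  # "unknown" under the open-world assumption
```

A database would thus deny that dr_smith is an oncologist, whereas an ontology-based system would simply report the fact as unknown.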
O := {∃seenBy.Oncologist ⊑ ∃suffer.Cancer, ∃seenBy.Oncologist(x)}.

The first axiom, which has the form of a general concept inclusion (GCI), states that every individual seen by an oncologist suffers from cancer, while the second axiom asserts that the individual x is seen by an oncologist.
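The privacy problem raised by this ontology can be made concrete with a hypothetical mini-reasoner (this is not the dissertation's algorithm, only a toy forward-chaining sketch for this particular example): it applies the GCI as a rule and then checks whether a confidential fact about x becomes entailed.

```python
# Toy sketch (illustrative only): forward-chaining over the example
# ontology O and checking a privacy policy against the entailed facts.

# Assertions: concept -> set of individuals it holds for.
assertions = {"seenBy.Oncologist": {"x"}}

# GCIs as simple subsumption rules: (lhs concept, rhs concept).
gcis = [("seenBy.Oncologist", "suffer.Cancer")]

def saturate(assertions, gcis):
    """Apply the GCIs as rules until no new facts are derived."""
    derived = {c: set(inds) for c, inds in assertions.items()}
    changed = True
    while changed:
        changed = False
        for lhs, rhs in gcis:
            for ind in derived.get(lhs, set()):
                if ind not in derived.setdefault(rhs, set()):
                    derived[rhs].add(ind)
                    changed = True
    return derived

# Policy: it must stay hidden that x suffers from cancer.
policy = ("suffer.Cancer", "x")

derived = saturate(assertions, gcis)
violated = policy[1] in derived.get(policy[0], set())
print(violated)  # True: O discloses the confidential fact about x
```

Although the ontology never states explicitly that x has cancer, the fact is deducible, so the policy is violated; repairing O then means minimally weakening it, e.g., removing or modifying one of the two axioms, so that the entailment no longer holds.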