Axiomatic fuzzy set theory-based fuzzy oblique decision tree with dynamic mining fuzzy rules




ORIGINAL ARTICLE

Yuliang Cai1 • Huaguang Zhang2 • Shaoxin Sun1 • Xianchang Wang3 • Qiang He4

Received: 22 April 2019 / Accepted: 22 November 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019

Abstract

This paper proposes a novel classification technique: the fuzzy rule-based oblique decision tree (FRODT). The neighborhood rough sets-based FAST feature selection (NRS_FS_FAST) is first introduced to reduce attributes. Within the axiomatic fuzzy set (AFS) theory framework, a fuzzy rule extraction algorithm is then proposed to dynamically extract fuzzy rules, and these rules serve as the decision functions during tree construction. The FRODT is built by expanding the unique non-leaf node in each layer of the tree, which yields a new tree structure with linguistic interpretability. Moreover, a genetic algorithm is applied to the threshold r to balance classification accuracy against tree size. A series of comparative experiments is carried out on 20 UCI data sets against five classical classification algorithms (C4.5, BFT, LAD, SC and NBT) and the recently proposed decision tree HHCART. The experimental results show that the FRODT achieves better classification accuracy and smaller tree size than the rival algorithms.

Keywords Fuzzy oblique decision tree • Fuzzy rule extraction • AFS theory • Decision function

1 Introduction

✉ Huaguang Zhang: [email protected]
Yuliang Cai: [email protected]
Shaoxin Sun: [email protected]
Xianchang Wang: [email protected]
Qiang He: [email protected]

1 College of Information Science and Engineering, Northeastern University, Shenyang, Liaoning, China

2 College of Information Science and Engineering, State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang, Liaoning, China

3 School of Sciences, Dalian Ocean University, Dalian, Liaoning, China

4 College of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China

Decision trees have received great attention on account of their significant potential applications, especially in statistics, machine learning and pattern recognition [1–3]. They have been widely used in classification problems due to the following three advantages: (1) their classification performance is close to, or even better than, that of other classification models; (2) they can handle different types of attributes, such as numeric and categorical; and (3) their results are easy to comprehend [4–6]. Decision trees grow in a top-down way, recursively dividing the training samples into segments with similar or identical outputs. To date, three types of decision trees have been developed: "standard" decision trees [7–10], fuzzy decision trees [11–15], and oblique decision trees [16–22]. "Standard" decision trees are the simplest. However, they are incapab
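The top-down, recursive splitting described above can be illustrated with a minimal sketch of a "standard" (axis-parallel) decision tree. This is not the paper's FRODT algorithm; it is a generic illustration using Gini impurity as the split criterion, with toy data and function names chosen for the example.

```python
# Minimal sketch of top-down decision tree induction: recursively split
# the training samples on the (feature, threshold) pair that minimizes
# weighted Gini impurity, until each leaf is pure.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(X, y):
    """Exhaustive search for the (feature, threshold) with lowest weighted Gini."""
    best = None  # (score, feature, threshold)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build(X, y):
    """Grow the tree top-down until nodes are pure; returns nested dicts."""
    if len(set(y)) == 1:               # pure node -> leaf
        return {"label": y[0]}
    split = best_split(X, y)
    if split is None:                  # no useful split -> majority leaf
        return {"label": max(set(y), key=y.count)}
    _, f, t = split
    li = [i for i, row in enumerate(X) if row[f] <= t]
    ri = [i for i, row in enumerate(X) if row[f] > t]
    return {"feature": f, "threshold": t,
            "left": build([X[i] for i in li], [y[i] for i in li]),
            "right": build([X[i] for i in ri], [y[i] for i in ri])}

def predict(node, x):
    """Route a sample down the tree to a leaf label."""
    while "label" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["label"]

# Toy data: two numeric features, two classes.
X = [[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]]
y = ["a", "a", "b", "b"]
tree = build(X, y)
print(predict(tree, [1.2, 2.1]))  # -> "a"
print(predict(tree, [5.5, 8.5]))  # -> "b"
```

Each internal node tests a single attribute against a threshold, which is what makes the tree "axis-parallel"; oblique trees instead split on linear combinations of attributes, and fuzzy trees replace the crisp test with a membership degree.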