8 Conclusion and Future Work

This chapter provides a summary of the book. We review the whole journey, which starts from the two schools of learning thought in the machine learning literature and then motivates the resulting combined learning thought, including the Maxi-Min Margin Machine, the Minimum Error Minimax Probability Machine, and their extensions. Following that, we present future perspectives both within the proposed models and beyond the developed approaches.

8.1 Review of the Journey

Two paradigms exist in the literature of machine learning: the school of global learning approaches and the school of local learning approaches. Global learning enjoys a long and distinguished history; it usually focuses on describing phenomena by estimating a distribution from data. Based on the estimated distribution, global learning methods can then perform inference, conduct marginalization, and make predictions. Although these approaches contain many good features, e.g. relatively simple optimization and flexibility in incorporating global information such as structure information and invariance, they have to assume a specific type of distribution a priori, and in general the assumption itself may be invalid. On the other hand, local learning methods do not estimate a distribution from data. Instead, they focus on extracting only the local information that is directly related to the learning task, i.e. classification in this book. Recent progress along this line has demonstrated that local learning approaches, e.g. the Support Vector Machine (SVM), outperform global learning methods in many aspects. Despite this success, local learning discards plenty of important global information on the data, e.g. structure information, and this restricts the performance of such learning schemes.
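To make the contrast concrete, the following sketch fits a global model (a plug-in Gaussian classifier that estimates class-conditional densities and predicts through the Bayes rule) and a local model (a linear SVM, whose decision boundary depends only on points near the margin) on the same synthetic data. This is only an illustration under assumed tooling: scikit-learn and the particular estimators named here are not part of this book.

    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Two classes whose covariance structures differ: global information
    # that a distribution-based method models and a margin-based method ignores.
    X_pos = rng.multivariate_normal([2.0, 2.0], [[1.0, 0.8], [0.8, 1.0]], size=100)
    X_neg = rng.multivariate_normal([-2.0, -2.0], [[1.0, -0.8], [-0.8, 1.0]], size=100)
    X = np.vstack([X_pos, X_neg])
    y = np.hstack([np.ones(100), -np.ones(100)])

    # Global learning: estimate a Gaussian per class, classify via the Bayes rule.
    global_model = QuadraticDiscriminantAnalysis().fit(X, y)

    # Local learning: maximize the margin; only boundary points shape the solution.
    local_model = SVC(kernel="linear").fit(X, y)

    print("global (Gaussian plug-in):", global_model.score(X, y))
    print("local (linear SVM):", local_model.score(X, y))

The global model commits to a Gaussian assumption and suffers when that assumption is invalid, whereas the SVM makes no distributional assumption but also makes no use of the class covariances. This is exactly the trade-off that motivates the hybrid framework reviewed next.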

Motivated by the investigation of these two types of learning approaches, we propose a hybrid learning framework: we should learn from data both globally and locally. Following this hybrid learning thought, we develop a hybrid model named the Maxi-Min Margin Machine (M4), which successfully combines two largely different but complementary paradigms. This new model is demonstrated to contain the appealing features of both global learning and local learning: it captures the global structure information of the data, while also providing a task-oriented scheme for the learning purpose and inheriting the superior performance of local learning. The model is theoretically important in the sense that M4 contains many important learning models as special cases, including the Support Vector Machine, the Minimax Probability Machine (MPM), and Fisher Discriminant Analysis; it is also empirically promising in that it can be cast as a sequential Second Order Cone Programming problem, yielding polynomial time complexity. The idea of learning from data locally and globally is also applicable in regression.
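For reference, the following is a sketch of the M4 optimization in its usual form; the notation (ρ for the worst-case margin, w and b for the separating hyperplane, Σx and Σy for the two class covariance matrices) restates the model as developed in the earlier chapters:

\[
\max_{\rho,\,\mathbf{w}\neq\mathbf{0},\,b}\ \rho
\quad \text{s.t.}\quad
\mathbf{w}^{T}\mathbf{x}_i + b \;\ge\; \rho\,\sqrt{\mathbf{w}^{T}\Sigma_{\mathbf{x}}\mathbf{w}},\quad i = 1,\dots,N_x,
\]
\[
-(\mathbf{w}^{T}\mathbf{y}_j + b) \;\ge\; \rho\,\sqrt{\mathbf{w}^{T}\Sigma_{\mathbf{y}}\mathbf{w}},\quad j = 1,\dots,N_y.
\]

Each training point contributes a local margin constraint, while the covariance terms on the right-hand side inject the global structure of each class. Setting Σx = Σy = I reduces the constraints to the standard SVM form, which is one way to see the special-case relationships listed above; for a fixed ρ the constraints are second order cone constraints, and a line search over ρ yields the sequential Second Order Cone Programming procedure mentioned here.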