Heuristic Acquisition for Data Science
Editorial for Special Issue of Information Systems Frontiers

Lydia Bouzar-Benlabiod 1 · Stuart H. Rubin 2

Published online: 2 September 2020

This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply 2020
1 Introduction

The Traveling Salesman Problem (TSP) was first formulated in 1930 and is one of the most studied problems in optimization. If the optimal solution to the TSP could be found in polynomial time, it would follow that every NP-hard problem could be solved in polynomial time, proving P = NP. We demonstrated an algorithm that finds P ~ NP with scale (Rubin et al. 2016, 2018). Using a δ-ε proof, it is straightforward to show that as the number of cities goes to infinity, P goes to NP (i.e., δ > 0). This was demonstrated using a quadratic number of parallel processors, because that speedup, by definition, is polynomial. A fastest parallel algorithm was defined. Using an arbitrary run of 5,000 cities, we obtained a tour within 0.00001063% of the optimal using 4,166,667 virtual processors (Intel Xeon E5-1603 @ 2.8 GHz). To save the calculated extra 209 miles would take a quantum computer over (5000!/(2^4978 × 22!)) × 267,782 centuries. Clearly, the automated acquisition of heuristics and the associated P ~ NP solutions constitute an important problem warranting attention. Machine learning through self-randomization was demonstrated in the solution of the TSP.

It is becoming increasingly rare for software to run in isolated environments. Rather, the software of the future will need to support government and industry in distributed, heterogeneous computing environments connected by secure networks. Ensuring the quality of that software provides information dominance.

* Stuart H. Rubin
[email protected]

Lydia Bouzar-Benlabiod
[email protected]

1
Laboratoire de Communication des Systèmes Informatiques (LCSI), École Nationale Supérieure d'Informatique (ESI), Oued Smar, Algeria

2 Naval Information Warfare Center (NIWC) Pacific, Intelligent Sensing Branch, San Diego, CA, USA
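The kind of heuristic tour construction discussed above can be illustrated by a construct-then-improve pipeline. The following is a minimal, self-contained sketch (nearest-neighbor construction followed by 2-opt local improvement); it is not the parallel algorithm of Rubin et al., only an illustration of how a cheap heuristic trades optimality for polynomial running time.

```python
import math
import random

def tour_length(tour, pts):
    """Total Euclidean length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts, start=0):
    """Greedy construction: always visit the closest unvisited city next."""
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    """Local improvement: reverse a segment whenever that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts):
                    tour, improved = cand, True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
greedy = nearest_neighbor(pts)
refined = two_opt(greedy, pts)
# 2-opt can never lengthen the tour it starts from.
assert tour_length(refined, pts) <= tour_length(greedy, pts)
```

Both phases run in polynomial time; the price paid is that the resulting tour is only near-optimal, which is precisely the P ~ NP trade-off the editorial describes.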
It follows that the automatic programming (software automation) of quality software is a necessary objective towards achieving information dominance. Randomization can provide the means by which the human on the loop realizes a high-level specification in effective code. Conventional compilers are essentially context-free translators: they are realizable with context-free grammars, with a few context-sensitive features translated through attribute grammars. Nevertheless, progress in automatic programming hinges on the use of context-sensitive specification languages (e.g., structured English). Such languages are effectively compiled through the actions of rule bases; in fact, the first compilers were rule-based. The design of rule-based compilers is not a theoretical problem. Rather, it is the economical acquisition of knowledge for these compilers that has evaded solution to date.
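The idea of compiling a structured-English specification through the actions of a rule base can be sketched very simply. The rules below (their patterns and target templates are hypothetical, chosen purely for illustration) each map one English phrase form to one line of target code; a real rule-based compiler would carry a far larger, knowledge-engineered rule base, and acquiring that knowledge economically is exactly the open problem the editorial identifies.

```python
import re

# Toy rule base: each rule pairs a structured-English pattern with a
# code template. These particular rules are illustrative, not a real
# specification language.
RULES = [
    (re.compile(r"set (\w+) to (\w+)"), r"\1 = \2"),
    (re.compile(r"add (\w+) to (\w+)"), r"\2 += \1"),
    (re.compile(r"print (\w+)"),        r"print(\1)"),
]

def translate(spec_lines):
    """Fire the first matching rule on each specification line."""
    out = []
    for line in spec_lines:
        stripped = line.strip()
        for pattern, template in RULES:
            if pattern.fullmatch(stripped):
                out.append(pattern.sub(template, stripped))
                break
        else:
            raise ValueError(f"no rule covers: {line!r}")
    return out

code = translate(["set total to 0", "add 5 to total", "print total"])
# code == ["total = 0", "total += 5", "print(total)"]
```

Because each rule is an independent pattern-action pair, extending the language means adding rules rather than rewriting a grammar, which is what makes knowledge acquisition, not compiler theory, the bottleneck.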