On-line data management for high-throughput experimentation

0894-LL09-07.1

Mikk Lippmaa (2,3), Shinya Meguro (1,2,3), Tsuyoshi Ohnishi (2,3), Hideomi Koinuma (1,3), and Ichiro Takeuchi (4)

1 National Institute for Materials Science (NIMS), Tsukuba 305-0044, Japan
2 Institute for Solid State Physics, University of Tokyo, Kashiwa 277-8581, Japan
3 Combinatorial Materials Exploration and Technology (COMET), NIMS, Tsukuba 305-0044, Japan
4 Department of Materials Science and Engineering, University of Maryland, College Park, MD 20742

ABSTRACT

We discuss visualization and data dependency issues related to managing sample characterization data from high-throughput experiments. We are developing software for on-line storage, sharing, and visualization of data from combinatorial or high-throughput experiments. The system is based on a centralized database and a Web interface that can be accessed from any networked computer. The greatest challenge facing us is the variety of data types and formats that such a system has to support so that data from many different sources, i.e., from synthesis and characterization instruments, can be combined. In order to minimize the software development effort, we have developed a generalized Extensible Markup Language (XML) schema for storing and processing experimental data. A standard data format such as XML, together with the relevant schema definitions, can be used to implement storage, processing, visualization, and other tools that are generic and thus greatly simplify the task of managing high-throughput experiments.

INTRODUCTION

High-throughput experiments (HTE) can be used in solid-state materials science studies to synthesize hundreds of samples in parallel in a single sample library.
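To illustrate the idea of a generalized XML format for experimental data, a measurement record might be wrapped in a structure like the one sketched below. This is a hypothetical example: the element and attribute names (`measurement`, `sample`, `point`, etc.) are illustrative assumptions, not the actual COMET schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch of a generic measurement record. Element and
# attribute names are illustrative assumptions, not the authors' schema.
record = ET.Element("measurement", instrument="XRD", library="LL09-A")
ET.SubElement(record, "sample", row="3", column="7")
data = ET.SubElement(record, "data", x_label="2theta", y_label="intensity")
for x, y in [(20.0, 105.0), (20.1, 98.0), (20.2, 412.0)]:
    # Each data point is stored uniformly, regardless of the instrument.
    ET.SubElement(data, "point", x=str(x), y=str(y))

xml_text = ET.tostring(record, encoding="unicode")
print(xml_text)
```

Because every instrument's output is reduced to the same point-list layout, a single storage and visualization path can serve all data sources.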
Parallel sample fabrication has two basic benefits: the number of compositions or process parameter ranges that can be mapped in a single experiment is greatly increased, and it is easier to detect systematic changes in materials properties as a function of composition, microstructure, or process variables. Experiments that quickly produce large numbers of samples also require efficient sample analysis and data processing. The primary data processing tasks involve collecting characterization data from analysis instruments, automated visualization of the raw data sets, and the ability to organize and share measurement results. Further processing may be necessary to extract relevant materials parameters from the measured data, but the timescale of these operations is usually longer than the sample turnover rate in the synthesis machines, such as thin film deposition chambers. Primary visualization of experimental data is therefore of great importance for providing fast feedback to the design of follow-up experiments. We have developed software tools that help with the management and visualization of HTE data. The system is built as a Web application with a central relational database and a user interface
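The benefit of a common storage format for the primary visualization step described above can be sketched as follows: one generic parsing routine extracts plot-ready arrays from any instrument's record, so no per-instrument import code is needed. The XML layout and names used here are hypothetical assumptions for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical measurement record in an assumed generic XML layout.
XML_DOC = """
<measurement instrument="resistivity">
  <data x_label="temperature_K" y_label="ohm_cm">
    <point x="300" y="0.012"/>
    <point x="250" y="0.015"/>
    <point x="200" y="0.021"/>
  </data>
</measurement>
"""

def extract_xy(xml_text):
    """Return axis labels and x/y arrays from any measurement record.

    One routine serves all instruments, because every data source is
    stored in the same generic point-list structure.
    """
    data = ET.fromstring(xml_text).find("data")
    xs = [float(p.get("x")) for p in data.findall("point")]
    ys = [float(p.get("y")) for p in data.findall("point")]
    return data.get("x_label"), data.get("y_label"), xs, ys

x_label, y_label, xs, ys = extract_xy(XML_DOC)
print(x_label, y_label, xs, ys)
```

The arrays returned by such a routine can be handed directly to a plotting layer, which is what makes automated, instrument-independent visualization of raw data sets possible.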
written using the PHP scripting language[1]. The emphasis is on rapid ac