A feature location approach for mapping application features extracted from crowd-based screencasts to source code

Parisa Moslehi, et al. [full author details at the end of the article]
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

Crowd-based multimedia documents such as screencasts have emerged as a source for documenting requirements, workflows, and implementation issues of open source and agile software projects. For example, users can show and narrate how they manipulate an application's GUI to perform a certain functionality, or a bug reporter can visually explain how to trigger a bug or a security vulnerability. Unfortunately, the streaming nature of programming screencasts and their binary format limit how developers can interact with a screencast's content. In this research, we present an automated approach for mining and linking the multimedia content found in screencasts to relevant software artifacts and, more specifically, to source code. We apply LDA-based mining approaches that take as input a set of screencast artifacts, such as GUI text and spoken words, to make the screencast content accessible and searchable to users and to link it to relevant source code artifacts. To evaluate the applicability of our approach, we report on results from case studies that we conducted on existing WordPress and Mozilla Firefox screencasts. We found that our automated approach can significantly speed up the feature location process. For WordPress, our approach using screencast speech and GUI text can successfully link relevant source code files within the top 10 hits of the result set, with median Reciprocal Rank (RR) values of 50% (rank 2) and 100% (rank 1). For Firefox, our approach can identify relevant source code directories within the top 100 hits using screencast speech and GUI text with a median RR of 20%, meaning that in more than 50% of the cases the first true positive is ranked 5 or better. Moreover, source code related to the frontend implementation, which handles high-level or GUI-related aspects of an application, is located with higher accuracy. We also found that term frequency rebalancing can further improve the linking results when scenarios are less noisy or when locating the less technical parts of a scenario's implementation. Comparing the original and weighted screencast data sources (speech, GUI text, and their combination) that yield the highest median RR values in both case studies shows that speech is an important information source and can achieve an RR of 100%.

Keywords Crowd-based documentation · Mining video content · Speech analysis · Feature location · Software traceability · Information extraction · Software documentation

Communicated by: Gabriele Bavota
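To make the ranked-retrieval setup and the Reciprocal Rank (RR) metric concrete, the following is a minimal, self-contained sketch rather than the authors' pipeline: it fits an LDA topic model over a few hypothetical source files, ranks them against a screencast-derived query (speech plus GUI text), and computes RR as 1/rank of the first relevant file. All file names, texts, and the ground-truth set are invented for illustration, and scikit-learn is used only as one possible implementation.

```python
# Minimal sketch of LDA-based linking of a screencast query to source files.
# File names, texts, and the "relevant" set are hypothetical examples.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: identifiers/comments extracted from source files.
source_docs = {
    "post-editor.php":  "post title content editor save draft publish",
    "media-upload.php": "media upload attach image gallery insert",
    "comment-list.php": "comment approve spam moderate reply list",
}
# Hypothetical screencast content: transcribed speech plus on-screen GUI text.
query = "click add new post enter title write content and publish the post"

names = list(source_docs)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(source_docs.values())        # term-document matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)                          # topic mix per file
query_topics = lda.transform(vectorizer.transform([query]))

# Rank source files by topic-space similarity to the screencast query.
scores = cosine_similarity(query_topics, doc_topics)[0]
ranking = sorted(zip(names, scores), key=lambda p: -p[1])

# Reciprocal Rank: 1 / rank of the first relevant file (assumed ground truth).
relevant = {"post-editor.php"}
rr = next(1.0 / (i + 1) for i, (name, _) in enumerate(ranking) if name in relevant)
print(ranking, "RR =", rr)
```

In this toy setting, an RR of 100% means the relevant file is ranked first, while an RR of 20% corresponds to rank 5, which is the interpretation used in the Firefox results above.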


1 Introduction

In traditional software development processes, software documentation has played a vital role in capturing information relevant to the various stakeholders and as assessment criteria for the maturity and qu