I am a Computing Innovation Fellow (CIFellow) in the College of Information & Computer Sciences at the University of Massachusetts Amherst (UMass), working with Dr. Yuriy Brun. My primary research community is Software Engineering, but my interests are broad and I enjoy solving important and interesting problems wherever they arise (knowledge has no boundaries, right? :)). My current research focus is listed below. I received my Ph.D. in Computer Science from the University of Southern California (USC), advised by Dr. Nenad Medvidović. Before USC, I received my Bachelor's degree in Software Engineering from the Harbin Institute of Technology (HIT) in China, advised by Dr. Zhongjie Wang (王忠杰).
PhD in Computer Science, 2020
University of Southern California, USA
BEng in Software Engineering, 2014
Harbin Institute of Technology, China
[Jan 2021] I drove cross-country from Los Angeles to Amherst! I'm starting my new chapter at UMass as a postdoc!
[Dec 2020] Our paper was rejected from ICSE 2021… Heh
[Nov 2020] My PhD dissertation, "Reducing user-perceived latency in mobile applications via prefetching and caching," is now published! If the title doesn't interest you that much, go to the Conclusion section, where I also discuss Software Testing and Open Science! Cool stuff :)
[Nov 2020] FrUITeR's presentations are now available! I am also serving on a panel at ESEC/FSE 2020. Hit me up!
[Oct 2020] I have successfully defended my Ph.D. thesis! I'm Dr. Zhao now!
see CV for the full list :)
UI testing is tedious and time-consuming because of the manual effort it requires. Recent research has explored opportunities for reusing existing UI tests from one app to automatically generate tests for other apps. However, evaluating such techniques currently remains manual, unscalable, and unreproducible, which can waste effort and impede progress in this emerging area. We introduce FrUITeR, a framework that automatically evaluates UI test reuse in a reproducible way. We apply FrUITeR to existing test-reuse techniques on a uniform benchmark we established, resulting in 11,917 test-reuse cases from 20 apps. We report several key findings missed by existing work, aimed at improving UI test reuse.
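At its core, test reuse hinges on mapping a GUI event from a source app onto the most similar widget in a target app. Below is a minimal, hypothetical sketch of that idea in Java; the `Widget` fields, the Jaccard token-overlap score, and all names are illustrative assumptions, not FrUITeR's or any evaluated technique's actual implementation.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Set;

public class WidgetMapper {

    // Minimal widget representation; field names are illustrative.
    record Widget(String resourceId, String text) {}

    // Map a source-app event's widget to the most similar widget in the target app.
    static Widget mapEvent(Widget source, List<Widget> targetWidgets) {
        Widget best = null;
        double bestScore = -1.0;
        for (Widget candidate : targetWidgets) {
            double score = jaccard(tokens(source), tokens(candidate));
            if (score > bestScore) {
                bestScore = score;
                best = candidate;
            }
        }
        return best;
    }

    // Crude token-overlap (Jaccard) score, standing in for the richer
    // similarity metrics that real test-reuse techniques employ.
    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
    }

    static Set<String> tokens(Widget w) {
        String s = (w.resourceId() + " " + w.text()).toLowerCase(Locale.ROOT);
        return new HashSet<>(Arrays.asList(s.split("\\W+")));
    }

    public static void main(String[] args) {
        Widget source = new Widget("btn_sign_in", "Sign In");
        List<Widget> target = List.of(
                new Widget("login_button", "Log In"),
                new Widget("btn_signin", "Sign in"),
                new Widget("btn_help", "Help"));
        System.out.println("Mapped to: " + mapEvent(source, target));
    }
}
```

Even this toy version hints at why automated, reproducible evaluation matters: small changes to the similarity function change which widget wins, so comparing techniques fairly requires running them on the same benchmark of mapping cases.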
Network latency in mobile software has a large impact on user experience, with potentially severe economic consequences. Prefetching and caching have proven effective at reducing latency in browser-based systems, but those techniques cannot be applied directly to the emerging domain of mobile apps because their network interactions differ. Moreover, there is a lack of research on prefetching and caching techniques suitable for the mobile-app domain, and it is unclear whether such techniques can be effective, or even feasible. This paper takes the first step toward answering these questions by conducting a comprehensive study of the characteristics of HTTP requests in over 1,000 popular Android apps. Our work focuses on the prefetchability of requests, identified via static program analysis, and the cacheability of the resulting responses. We find a substantial opportunity to leverage prefetching and caching in mobile apps, but suitable techniques must account for the nature of apps' network interactions and idiosyncrasies such as untrustworthy HTTP header information. Our observations provide guidelines for developers who wish to employ prefetching and caching in app development, and they motivate future research in this area.
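To make the "untrustworthy HTTP header information" point concrete, here is an illustrative probe, not the study's actual tooling, that fetches the same URL twice and compares the declared Cache-Control header against whether the response body actually changed. The URL and class name are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CacheabilityProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; pass a real endpoint as the first argument.
        String url = args.length > 0 ? args[0] : "https://example.com/api";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // Fetch the same resource twice.
        HttpResponse<String> first = client.send(req, HttpResponse.BodyHandlers.ofString());
        HttpResponse<String> second = client.send(req, HttpResponse.BodyHandlers.ofString());

        // What the server *claims* about cacheability vs. what the body *does*.
        String declared = first.headers().firstValue("Cache-Control").orElse("(none)");
        boolean bodyStable = first.body().equals(second.body());

        System.out.println("Declared Cache-Control: " + declared);
        System.out.println("Body identical across two fetches: " + bodyStable);
        // A header that says "no-store" while the body never changes, or that
        // claims a long max-age while the body churns, is exactly the kind of
        // untrustworthy metadata a caching scheme must not take at face value.
    }
}
```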
Reducing network latency in mobile applications is an effective way to improve the mobile user experience, with tangible economic benefits. This paper presents PALOMA, a novel client-centric technique that reduces network latency by prefetching HTTP requests in Android apps. Our work leverages string analysis and callback control-flow analysis to automatically instrument apps, using PALOMA's rigorous formulation of scenarios that address "what" and "when" to prefetch. PALOMA has been shown to yield significant runtime savings (several hundred milliseconds per prefetchable HTTP request), both on a reusable evaluation benchmark we developed and on real applications.
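The gist of client-centric prefetching can be sketched in a few lines: a trigger point starts the HTTP request early, and the later use point consumes the cached or in-flight response instead of waiting on the network. The following is a simplified sketch in plain Java, not PALOMA's actual callback-driven instrumentation; all class and method names are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class PrefetchCache {
    private final HttpClient client = HttpClient.newHttpClient();
    // URL -> response body, keyed so a use point can find its prefetched result.
    private final ConcurrentHashMap<String, CompletableFuture<String>> inFlight =
            new ConcurrentHashMap<>();

    // Trigger point: called when analysis predicts this URL will be needed soon
    // (e.g., from an earlier UI callback). Idempotent: repeated calls reuse the
    // same in-flight request.
    public void prefetch(String url) {
        inFlight.computeIfAbsent(url, u ->
                client.sendAsync(HttpRequest.newBuilder(URI.create(u)).GET().build(),
                                 HttpResponse.BodyHandlers.ofString())
                      .thenApply(HttpResponse::body));
    }

    // Use point: returns the prefetched body if available; otherwise falls back
    // to fetching on demand, so correctness never depends on the prediction.
    public String get(String url) throws Exception {
        prefetch(url); // no-op if already requested
        return inFlight.get(url).get();
    }

    public static void main(String[] args) throws Exception {
        PrefetchCache cache = new PrefetchCache();
        cache.prefetch("https://example.com/"); // trigger point (placeholder URL)
        // ... the app does other work; later, at the use point:
        System.out.println(cache.get("https://example.com/").length() + " chars served");
    }
}
```

The runtime saving comes from overlapping the network round trip with whatever the app does between the trigger point and the use point; the safe fallback at the use point is why a mispredicted prefetch costs only bandwidth, not correctness.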