I am a Computing Innovation Fellow (CIFellow) in the College of Information & Computer Sciences at the University of Massachusetts Amherst (UMass), working with Dr. Yuriy Brun. My primary research community is Software Engineering, though my interests are broad: I enjoy solving important and interesting problems across areas (knowledge has no boundaries, right? :)). My current research focus is listed below. I received my Ph.D. in Computer Science from the University of Southern California (USC), advised by Dr. Nenad Medvidović. Before USC, I received my Bachelor's degree in Software Engineering from the Harbin Institute of Technology (HIT) in China, advised by Dr. Zhongjie Wang (王忠杰).
PhD in Computer Science, 2020
University of Southern California, USA
BEng in Software Engineering, 2014
Harbin Institute of Technology, China
[Mar 2021] Our paper Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms was accepted to MOBILESoft 2021! With this great news, all the pieces of my Ph.D. dissertation are now published! YAY :D
[Jan 2021] I have driven across the country from Los Angeles to Amherst! I'm starting my new chapter at UMass as a postdoc!
[Dec 2020] Our paper was rejected from ICSE 2021… Heh
[Nov 2020] My Ph.D. dissertation Reducing user-perceived latency in mobile applications via prefetching and caching is now published! If the title doesn't interest you that much, jump to the Conclusion section, where I also discuss Software Testing and Open Science! Cool stuff :)
see CV for the full list :)
Prefetching web pages is a well-studied solution to reduce network latency by predicting users’ future actions based on their past behaviors. However, such techniques are largely unexplored on mobile platforms. Today's privacy regulations make it infeasible to explore prefetching with the usual strategy of amassing large amounts of data over long periods and constructing conventional, “large” prediction models. Our work is based on the observation that this may not be necessary: Given previously reported mobile-device usage trends (e.g., repetitive behaviors in brief bursts), we hypothesized that prefetching should work effectively with “small” models trained on mobile-user requests collected during much shorter time periods. To test this hypothesis, we constructed a framework for automatically assessing prediction models, and used it to conduct an extensive empirical study based on over 15 million HTTP requests collected from nearly 11,500 mobile users during a 24-hour period, resulting in over 7 million models. Our results demonstrate the feasibility of prefetching with small models on mobile platforms, directly motivating future work in this area. We further introduce several strategies for improving prediction models while reducing the model size. Finally, our framework provides the foundation for future explorations of effective prediction models across a range of usage scenarios.
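To make the idea of a "small" history-based prediction model concrete, here is my own minimal sketch (the class name and sample URLs are purely illustrative, and the models evaluated in the paper are more varied than this): a first-order most-frequent-successor model trained on one user's brief request history.

```python
from collections import Counter, defaultdict

class SmallPrefetchModel:
    """Illustrative first-order model: for each URL, remember which URL
    most often followed it in a user's short request history."""

    def __init__(self):
        self.successors = defaultdict(Counter)

    def train(self, requests):
        # requests: one user's URLs in chronological order (a brief session)
        for prev, nxt in zip(requests, requests[1:]):
            self.successors[prev][nxt] += 1

    def predict(self, current_url):
        # Return the most frequently observed next URL, or None if unseen
        counts = self.successors.get(current_url)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Hypothetical request history from a single short session
history = ["/home", "/feed", "/article/1", "/feed", "/article/1", "/settings"]
model = SmallPrefetchModel()
model.train(history)
print(model.predict("/feed"))  # "/article/1" -- a candidate to prefetch
```

Even a model this simple can be trained from minutes of data on-device, which is the kind of regime the study probes at scale.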
Prefetching and caching is a fundamental approach to reducing user-perceived latency, and it has been shown effective in various domains for decades. However, its application in today's mobile apps remains largely under-explored. This is an important but overlooked research area: mobile devices have become the dominant computing platform, a trend reflected in the billions of mobile devices and millions of mobile apps in use today. At the same time, user-perceived latency has been shown to have a large impact on the mobile-user experience and can carry significant economic consequences.

In this dissertation, I aim to fill this gap by providing a multifaceted solution that establishes the foundation for exploring prefetching and caching in the mobile-app domain. To that end, my dissertation consists of four major elements. First, I conducted an extensive study of the opportunities for applying prefetching and caching techniques in mobile apps, providing empirical evidence of their applicability and insights to guide future techniques. Second, I developed PALOMA, the first content-based prefetching technique for mobile apps using program analysis, which achieves significant latency reduction with high accuracy and negligible overhead. Third, I constructed HiPHarness, a tailorable framework for investigating history-based prefetching in a wide range of scenarios. Guided by today's stringent privacy regulations, which have limited access to mobile-user data, I further leveraged HiPHarness to conduct the first study of history-based prefetching with "small" prediction models, demonstrating its feasibility on mobile platforms and, in turn, opening up a new research direction.
Finally, to reduce the manual effort required to evaluate prefetching and caching techniques, I devised FrUITeR, a customizable framework for assessing test-reuse techniques, which automatically selects suitable test cases for evaluating prefetching and caching techniques without the real-user engagement required previously.
UI testing is tedious and time-consuming due to the manual effort required. Recent research has explored opportunities for reusing existing UI tests from an app to automatically generate new tests for other apps. However, the evaluation of such techniques currently remains manual, unscalable, and unreproducible, which can waste effort and impede progress in this emerging area. We introduce FrUITeR, a framework that automatically evaluates UI test reuse in a reproducible way. We apply FrUITeR to existing test-reuse techniques on a uniform benchmark we established, resulting in 11,917 test reuse cases from 20 apps. We report several key findings aimed at improving UI test reuse that are missed by existing work.
Reducing network latency in mobile applications is an effective way of improving the mobile-user experience and has tangible economic benefits. This paper presents PALOMA, a novel client-centric technique for reducing network latency by prefetching HTTP requests in Android apps. Our work leverages string analysis and callback control-flow analysis to automatically instrument apps using PALOMA's rigorous formulation of scenarios that address "what" and "when" to prefetch. PALOMA has been shown to yield significant runtime savings (several hundred milliseconds per prefetchable HTTP request), both on a reusable evaluation benchmark we developed and on real applications.
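The core mechanic behind prefetching of this kind can be sketched in a few lines (this is my own simplified illustration, not PALOMA's actual instrumentation, which works on Android bytecode): issue the likely-next HTTP request at an earlier callback, cache the response, and serve it from the cache when the user actually triggers the request, removing the network round-trip from perceived latency.

```python
class PrefetchCache:
    """Minimal sketch of prefetch-then-serve-from-cache."""

    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn  # the real network call in an app
        self.cache = {}

    def prefetch(self, url):
        # Called at a "trigger point" reached before the user's action
        if url not in self.cache:
            self.cache[url] = self.fetch_fn(url)

    def get(self, url):
        # Called when the user actually triggers the request; a cache hit
        # means no network wait at the moment the user is watching
        if url in self.cache:
            return self.cache.pop(url)
        return self.fetch_fn(url)

# Usage with a stand-in fetcher (hypothetical URL and response)
responses = {"https://api.example.com/item/1": b"item-1 body"}
pc = PrefetchCache(lambda url: responses[url])
pc.prefetch("https://api.example.com/item/1")    # e.g., when the list screen loads
print(pc.get("https://api.example.com/item/1"))  # served from cache on tap
```

The hard part, and PALOMA's contribution, is deciding "what" and "when" automatically via static analysis rather than hand-placing these calls.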