Incentive Management

People

Faculty: Mustafa al'Absi, Anind Dey, Deepak Ganesan, Santosh Kumar, Mani Srivastava
Post-docs: Andrew Raij
Students: Sasank Reddy

Motivation


Personal and participatory sensing platforms are enabling relatively unobtrusive data collection in the natural environment. However, reliable sensing platforms are not yet available for all personal data of interest. For example, it is difficult to automatically measure a person’s perceived stress level with today’s state-of-the-art sensing tools.

To collect data like perceived stress, manual tools such as experience sampling or ecological momentary assessment (EMA) are used [1]. In EMA, users periodically report information of interest as they go about their everyday lives. EMA is usually facilitated by handheld mobile devices that prompt the user to answer questions. While EMA is an effective tool, it can also place a significant burden on the user. Every time a participant is prompted, their day is interrupted and thus affected by the EMA. Poorly timed prompts, long questionnaires, and other factors [1, 2, 3, 4] all increase the burden of EMA. This burden can have negative consequences for user retention, compliance with the study protocol, and the quality of the data collected.

One approach to mitigating the effects of burden is incentivizing data collection. For example, users may be given money or course credit (for students) to attract them to a study and encourage them to stay until it ends. Traditionally, incentives are awarded once at the end of the study, or at predefined milestones (e.g., the end of each week in a 3-week study). This traditional approach works, but it is not always satisfactory because of the coarseness of the incentive structure. For example, participants could answer some EMA questionnaires at the beginning of the study and some at the end, yet still receive full compensation despite poor study compliance. In response to this problem, researchers are starting to employ fine-grained incentive structures that award micro-payments [2, 5] for individual data collection opportunities (such as EMAs). This approach is seeing increased use in EMA studies, where handheld devices can track the number of EMA questionnaires completed and thus determine how much to pay participants at the end of the study. While the micro-payment approach is seeing more use, to our knowledge no work explores how best to structure the incentives to maximize retention, compliance, and data quality.
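To make the contrast concrete, the sketch below compares a coarse milestone-based payout with a per-EMA micro-payment payout. The dollar amounts, completion counts, and function names are our own assumptions for illustration, not parameters from any particular study.

    # Hypothetical comparison of a coarse, milestone-based payout with per-EMA
    # micro-payments. All dollar amounts and counts below are illustrative only.
    MILESTONE_PAYMENT = 30.00   # paid for each week of enrollment
    MICRO_PAYMENT = 1.00        # paid for each answered EMA questionnaire

    def milestone_payout(weeks_enrolled):
        # Coarse structure: full payment per week of enrollment, regardless of
        # how many EMA prompts were actually answered during that week.
        return MILESTONE_PAYMENT * weeks_enrolled

    def micro_payout(answered_emas):
        # Fine-grained structure: payment tracks the number of questionnaires
        # the handheld device recorded as completed.
        return MICRO_PAYMENT * answered_emas

    # A participant who answers only 10 of 60 prompts over 3 weeks still earns
    # the full $90 under the milestone structure, but only $10 under micro-payments.
    print(milestone_payout(3), micro_payout(10))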


Plans


We have planned a series of user studies to evaluate various micro-payment incentive structures for personal and participatory sensing applications. The incentive structures we are investigating are listed below (a simple payout sketch for each follows the list):
  • Uniform - Participants receive the same incentive for each data collection opportunity (e.g., $1 / answered EMA).
  • Progressive - The incentive increases with each successive opportunity.
  • Variable - The incentive varies (randomly) from opportunity to opportunity.
  • Variable but Hidden - Similar to Variable, except the value of the incentive is hidden from the user until after the user provides the data.
  • Based on Data Quality - The incentive is tied to the quality of the data the user provides.
  • Based on Compliance - The incentive is tied to the quantity of the data the user provides.
  • Leaderboards - In addition to the active incentive scheme, the user is shown a ranking of their earnings compared to those of other participating users.
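The following is a minimal sketch of how each structure might compute the payout for a single data collection opportunity. The function names, dollar amounts, and the use of [0, 1] quality and compliance scores are assumptions made for illustration, not the final study parameters.

    import random

    BASE = 1.00  # e.g., $1 per answered EMA

    def uniform(opportunity_index):
        # Same incentive for every data collection opportunity.
        return BASE

    def progressive(opportunity_index, step=0.10):
        # Incentive grows with each successive opportunity.
        return BASE + step * opportunity_index

    def variable(opportunity_index, low=0.50, high=2.00):
        # Incentive drawn at random for each opportunity and shown up front.
        return round(random.uniform(low, high), 2)

    def variable_hidden(opportunity_index, low=0.50, high=2.00):
        # Same random draw as variable, but the amount is revealed to the user
        # only after the data has been provided.
        return round(random.uniform(low, high), 2)

    def quality_based(quality_score):
        # Incentive scales with a [0, 1] rating of the submitted data's quality.
        return BASE * quality_score

    def compliance_based(answered, prompted):
        # Incentive scales with the fraction of prompts answered so far.
        return BASE * (answered / prompted if prompted else 0.0)

    def leaderboard(earnings_by_user):
        # Ranking of total earnings, shown alongside whichever scheme is active.
        return sorted(earnings_by_user.items(), key=lambda kv: kv[1], reverse=True)

Under any of these schemes, the handheld device would accumulate the per-opportunity amounts to determine the total payment owed at the end of the study.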

Latest Work

Studies will begin in the first quarter of 2010. Please return to this page for updates on our progress.

Related Work


[1] S. Shiffman, A. Stone, and M. Hufford. Ecological Momentary Assessment. Annu. Rev. Clin. Psychol., 4:1–32, 2008.

[2] S. Consolvo and M. Walker. Using the experience sampling method to evaluate ubicomp applications. IEEE Pervasive Computing, pages 24–31, 2003.

[3] T. Piasecki, M. Hufford, M. Solhan, and T. Trull. Assessing clients in their natural environments with electronic diaries: Rationale, benefits, limitations, and barriers. Psychological Assessment, 19(1):25, 2007.

[4] D. McFarlane and K. Latorella. The scope and importance of human interruption in human-computer interaction design. Human-Computer Interaction, 17(1):1–61, 2002.

[5] B. Yu and M. Singh. Incentive mechanisms for peer-to-peer systems. In Agents and Peer-to-Peer Computing: First International Workshop (AP2PC 2002), page 77. Springer, 2003.

[6] P. Klasnja, B. Harrison, L. LeGrand, A. LaMarca, J. Froehlich, and S. Hudson. Using wearable sensors and real time inference to understand human recall of routine activities. In Proceedings of the 10th international conference on Ubiquitous computing, pages 154–163. ACM, 2008.

[7] G. Hsieh, I. Li, A. Dey, J. Forlizzi, and S. Hudson. Using visualizations to increase compliance in experience sampling. In Proceedings of the 10th international conference on Ubiquitous computing, pages 164–167. ACM, 2008.