
Here are representative quantitative projects in which I drew on prominent HCI and social psychology research to identify users' pain points with digital and wearable products, prototype design solutions, and conduct empirical research to validate the prototypes and make actionable recommendations.

Project 1. Using Similarity Cue to Promote Well-being in Social Media Users

Social media users feel depressed after seeing another user who appears better off (i.e., an upward social comparison target). To protect users' well-being, we designed a "similarity cue" that explicitly emphasizes the overall similarity between the user and the upward comparison target. A 0% similarity cue emphasized low overall similarity, whereas the 50% and 90% similarity cues emphasized moderate and high overall similarity, respectively. Our study found that the moderate and high overall similarity cues protected users' well-being after they compared themselves to the upward comparison target. Read more about the study here.

Skills

  • Website prototyping 

  • Survey design

  • Multivariate testing (i.e., a cross-sectional between-subjects experiment with 3 versions of a similarity cue: 0%, 50%, and 90%)

  • Quantitative data analysis (i.e., ANOVA, correlation, exploratory factor analysis, mediation analysis); a minimal analysis sketch is shown below
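To make the analysis steps above concrete, here is a minimal sketch of how the ANOVA, correlation, and mediation steps could be run with pandas and pingouin. The column names (cue_pct, perceived_similarity, well_being) and the simulated data are placeholders for illustration, not the study's actual dataset or code.

```python
# Minimal analysis sketch (illustrative only), assuming one row per participant.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
n_per_cell = 50
conditions = [0, 50, 90]  # similarity cue: 0%, 50%, 90%

df = pd.DataFrame({
    "cue_pct": np.repeat(conditions, n_per_cell),
    "perceived_similarity": rng.normal(4, 1, 3 * n_per_cell),
    "well_being": rng.normal(5, 1.2, 3 * n_per_cell),
})

# One-way between-subjects ANOVA: does the similarity cue affect well-being?
print(pg.anova(data=df, dv="well_being", between="cue_pct"))

# Zero-order correlations among the key variables
print(df[["cue_pct", "perceived_similarity", "well_being"]].corr().round(2))

# Simple mediation: cue -> perceived overall similarity -> well-being
# (treats the cue as a numeric predictor, a simplification for illustration)
print(pg.mediation_analysis(data=df, x="cue_pct", m="perceived_similarity",
                            y="well_being", n_boot=1000, seed=42))
```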

Recommendations for Social Media Platforms

  • Turn current features that tacitly emphasize the overall similarity between the user and the upward comparison target into features that emphasize it explicitly. Facebook's Mutual Friends feature shows the overall similarity only tacitly (e.g., users see which friends they have in common with another user under the lukewarm phrase "Mutual Friends"). Facebook could adopt the stronger phrasing used in our study: "You and Taylor are similar to each other by sharing 3 mutual friends" (see the sketch below).
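As a small illustration of this recommendation, here is a hypothetical helper that renders an explicit similarity cue from a mutual-friend count, using the phrasing tested in the study. The function name and signature are my own, not part of any platform's API.

```python
# Hypothetical helper: turn a tacit "Mutual Friends" count into an explicit
# similarity cue, using the phrasing tested in the study. Not production code.
def explicit_similarity_cue(target_name: str, mutual_friends: int) -> str:
    friend_word = "mutual friend" if mutual_friends == 1 else "mutual friends"
    return (f"You and {target_name} are similar to each other "
            f"by sharing {mutual_friends} {friend_word}")

print(explicit_similarity_cue("Taylor", 3))
# -> "You and Taylor are similar to each other by sharing 3 mutual friends"
```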

Figures: correlations among the major study outcome variables; the effect of similarity cues on key user outcomes after upward social comparison; the mediating role of perceived overall similarity.

Project 2. Reducing Imposter Syndrome via Chatbot Mentee

Many undergraduate students suffer from imposter syndrome (IP), and no formal interventions exist to support them. We examined the potential of a chatbot-based growth mindset intervention to reduce IP and prototyped 3 versions of the intervention, each presenting the chatbot in a different role (a mentee, a mentor, or a conversation partner). Our study showed that students talked more and reported a higher growth mindset with the chatbot mentee than with the chatbot mentor or the chatbot conversation partner. We also found that students' positive beliefs in effort predicted lower IP. You can read the study's short report here.

Skills

  • Chatbot prototyping and conversational script design

  • Survey design

  • Multivariate testing (i.e., a cross-sectional between-subjects experiment with 3 versions of a chatbot-based growth mindset intervention)

  • Quantitative data analysis (i.e., trend analysis, correlation, ANOVA, binomial regression, exploratory factor analysis, multiple regression); a minimal regression sketch is shown below
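Here is a minimal sketch of the multiple-regression step (do positive beliefs in effort predict lower imposter feelings?) using statsmodels. The column names and the simulated data are placeholders for illustration, not the study's data or code.

```python
# Illustrative regression sketch, assuming one row per student.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 150
effort_belief = rng.normal(5, 1, n)
df = pd.DataFrame({
    "effort_belief": effort_belief,
    "growth_mindset": rng.normal(4.5, 1, n),
    # simulate IP so that stronger effort beliefs track lower IP, as in the finding
    "imposter": 6 - 0.4 * effort_belief + rng.normal(0, 1, n),
})

# Multiple regression: imposter feelings on effort beliefs and growth mindset
model = smf.ols("imposter ~ effort_belief + growth_mindset", data=df).fit()
print(model.summary())
```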

Recommendations for Chatbot Interventions

  • Introduce a chatbot as a user's mentee. A chatbot should not take on an authoritative or neutral role, such as a user's mentor, therapist, or companion. 

  • Have the chatbot mentee ask the user to share a time when they believed their effort would help them succeed in class.

  • Have the chatbot mentee demonstrate that it has learned a lesson from the user:

    • Have the chatbot repeat back what the user has said about the lesson.

    • Include a notebook feature in which the chatbot records what it has learned from the user, and let the user check the notes at any time (see the sketch after this list).
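As a sketch of the notebook idea, here is a hypothetical data structure in which the chatbot mentee stores a lesson, repeats it back, and lets the user review the notes at any time. The class and method names are my own, for illustration only.

```python
# Hypothetical sketch of the "notebook" feature for a chatbot mentee.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MenteeNotebook:
    notes: list[str] = field(default_factory=list)

    def learn(self, lesson: str) -> str:
        """Store the user's lesson and repeat it back to show it was learned."""
        self.notes.append(f"{datetime.now():%Y-%m-%d} - {lesson}")
        return f'Got it! So the lesson is: "{lesson}". I wrote that down in my notebook.'

    def show_notes(self) -> str:
        """Let the user check, at any time, what the chatbot has learned so far."""
        if not self.notes:
            return "My notebook is empty so far."
        return "Here is what you've taught me:\n" + "\n".join(self.notes)

notebook = MenteeNotebook()
print(notebook.learn("Effort, not just talent, got me through my hardest class."))
print(notebook.show_notes())
```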

Figures: the number of words exchanged between participants and their chatbot partner; correlations among the study variables; the relationship between imposter syndrome and each predictor.

Project 3. Helping Students Accept Negative Feedback through Chatbot Tutor

Undergraduate students don't like to receive negative feedback, even though it is critical for their growth. As a result, they do not accept the feedback and progress slowly toward their potential. This study examined whether students were more likely to accept negative performance feedback from a chatbot tutor than from a human tutor. I hypothesized that a chatbot tutor's objectivity and lack of a mind would minimize the negative feedback's threat to students' self-esteem and public self-image, which in turn would lead to greater acceptance of the feedback. The quantitative data showed that students did not accept negative feedback from either a chatbot or a human tutor. The qualitative data showed that most students preferred to receive negative feedback from a chatbot tutor and suggested 2 features for a chatbot tutor. You can read the study's short report here.

Skills

  • Chatbot prototyping and conversational script design

  • Survey design

  • Multivariate testing (i.e., a cross-sectional between-subjects experiment with 9 versions of a chatbot tutor)

  • Quantitative data analysis (i.e., correlation, ANOVA, exploratory factor analysis, path analysis); a minimal path-analysis sketch is shown below
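Here is a minimal sketch of the hypothesized path (tutor type -> perceived threat -> feedback acceptance) estimated with two regressions in statsmodels. The variable names and the simulated data are placeholders for illustration, not the study's data or code.

```python
# Illustrative path-analysis sketch, assuming one row per student.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 180
is_chatbot = rng.integers(0, 2, n)            # 0 = human tutor, 1 = chatbot tutor
threat = 4 - 0.5 * is_chatbot + rng.normal(0, 1, n)
acceptance = 5 - 0.6 * threat + rng.normal(0, 1, n)
df = pd.DataFrame({"is_chatbot": is_chatbot, "threat": threat,
                   "acceptance": acceptance})

# Path a: does a chatbot tutor reduce the perceived threat of negative feedback?
path_a = smf.ols("threat ~ is_chatbot", data=df).fit()
# Path b (with direct effect c'): does lower threat predict greater acceptance?
path_b = smf.ols("acceptance ~ threat + is_chatbot", data=df).fit()

indirect = path_a.params["is_chatbot"] * path_b.params["threat"]
print(path_a.params, path_b.params, sep="\n")
print(f"Estimated indirect effect (a*b): {indirect:.3f}")
```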

Figures: participants' estimated means on the outcome measures by condition; correlations among the major study outcome variables.

Recommendations for AI-based Learning Platforms

  • Include positive emojis or introduce a "sandwich" technique (i.e., negative feedback is placed between two pieces of positive feedback) to tone down the negativity of the feedback (see the sketch after this list).

  • Users do not want to be reminded of how badly they are performing when they receive negative feedback, and they do not want to know how they are doing compared to other users. A learning platform can empower users to opt out of social competition features (e.g., Duolingo's Leaderboards).
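As a small illustration of the sandwich technique, here is a hypothetical helper that wraps negative feedback between two pieces of positive feedback, with an optional emoji. The function name and wording are my own, for illustration only.

```python
# Hypothetical sketch of the "sandwich" technique for delivering negative feedback.
def sandwich_feedback(positive_opener: str, negative_feedback: str,
                      positive_closer: str, add_emoji: bool = True) -> str:
    emoji = " 🙂" if add_emoji else ""
    return f"{positive_opener} {negative_feedback} {positive_closer}{emoji}"

print(sandwich_feedback(
    "Your essay has a clear structure.",
    "However, several claims are missing supporting evidence.",
    "With a few added sources, this could be a strong piece.",
))
```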

Project 4. Increasing Users' Attachment to Fitness Trackers through Customization

Problem: There is a high abandonment rate for fitness trackers. One reason for such a high rate is that users do not see the trackers as representing themselves. 

Recommendation: Offer many tracker customization options. With customization, users can modify the tracker so that it truly represents them and thus feel more attached to it, ultimately lowering tracker abandonment.

Read more about the study findings here.

Project 5. Increasing User Evaluation of Chatbot through Personalized Support

Problem: Many chatbots or AI companions (e.g., Siri) support users without considering their support needs. A user with an intrinsic need for emotional support may get instrumental support from a chatbot, which can dissatisfy the user. Is this current development practice benefitting users? 

Recommendation: Our data indicates the answer is 'no.' Users who received personalized support from a chatbot based on their support needs liked and trusted the chatbot more. 
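To make the recommendation concrete, here is a hypothetical sketch of personalizing a chatbot's reply to the user's stated support need (emotional vs. instrumental). The function and reply wording are my own illustration, not an existing chatbot API.

```python
# Hypothetical sketch: match the chatbot's reply style to the user's support need.
from typing import Literal

SupportNeed = Literal["emotional", "instrumental"]

def personalized_support(need: SupportNeed, problem: str) -> str:
    if need == "emotional":
        # Emotional support: validate feelings rather than jumping to solutions.
        return (f"That sounds really stressful. It makes sense to feel "
                f"this way about {problem}.")
    # Instrumental support: offer a concrete next step.
    return (f"Here's one concrete step for {problem}: break it into smaller "
            f"tasks and start with the easiest one.")

print(personalized_support("emotional", "your upcoming exam"))
```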

Read more about the study findings here.

Project 6. Identifying Attributes of Persuasive Reminders to Motivate Users to Log 

Problem: Continuously logging behaviours in mobile tracking apps is critical for meaningful self-reflection. This is why the apps send reminders to users, but users often ignore the reminders and forget to log their behaviours. What makes reminders persuasive?

Recommendation: We found 6 message attributes that can motivate users to log their behaviours in tracking apps. Reminders should look like they come from a close friend, specify users' health goals, be humorous, and more.
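As a small illustration, here is a hypothetical reminder template built from three of the attributes named above (a close-friend tone, the user's specific health goal, and light humour). The function and wording are my own, for illustration only.

```python
# Hypothetical sketch of a persuasive logging reminder using three of the attributes.
def persuasive_reminder(friend_name: str, health_goal: str) -> str:
    return (f"Hey, it's {friend_name}! Your goal of {health_goal} won't log itself "
            f"(trust me, I've tried). Take 10 seconds to log today's progress?")

print(persuasive_reminder("Sam", "walking 8,000 steps a day"))
```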

Read more about the study findings here.
