A simple philosophy app seemed ready to go, but was it aligning with user behaviors?
Testing validation & usability for increased success in a mobile app
Hooked-model / mobile app / philosophy
100 HOURS
Validation
Usability Testing
Strategy
THOMAS JEFFERSON UNIVERSITY, PHILADELPHIA, FEB - MAY 2023

How can learning philosophy be as easy as reading a text message?
The strategy for this project was to build a philosophy app that tapped into the mindless ease of reading a text message and, at a deeper level, was designed around the Hooked Model. The Hooked Model is a framework that taps into cognitive psychology and the power of habit, and it underpins many successful products.
Leveraging storyboards, Kano cards, and MoSCoW prioritization, a design strategy for Everyday Stoic had been hammered out. However, before significant investment in polishing and development could proceed, I had to measure risk through prototype testing methods. Concurrently, usability testing would help surface and prioritize significant problems early.
For this project I worked as a research team of one, relying on colleagues, industry experts, and stakeholders for constant feedback and guidance. My responsibilities included analyzing risky assumptions and building prototypes. I also planned and performed both validation and usability testing.
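As a sketch of how the Hooked Model's four phases might map onto this app: the phase names come from the model itself, but the feature assignments are my illustrative reading of this case study, not a confirmed specification.

```python
# The Hooked Model's four phases (trigger, action, variable reward,
# investment), mapped onto Everyday Stoic features. The feature
# assignments are illustrative assumptions based on this case study.
hooked_loop = {
    "trigger": "twice-daily SMS delivery at user-chosen times",
    "action": "read a short quote and daily intention",
    "variable_reward": "a different virtue and quote each day",
    "investment": "ratings, streaks, and a growing collection",
}

for phase, feature in hooked_loop.items():
    print(f"{phase}: {feature}")
```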

Research Process:
- Measuring Risk - Testing for uncertainty & exposure in design solutions ...
- Aligning with Users - Finding usability problems and prioritizing changes ...
Measuring Risk
Testing for uncertainty and exposure in design solutions
With a clear strategy, it was my responsibility to:
- Identify, categorize, and prioritize risky assumptions
- Plan methods, metrics, and recruitment to test the biggest risks
- Map the user experience to build testable prototypes
- Conduct validation tests with user-aligned participants
- Evaluate validation tests and report findings and next steps

The three biggest risky assumptions identified were:
- Users will engage with the app twice a day.
- The content and user investment will spur retention.
- Many Stoic learners would use an app.
To carry this out, I personally took action on:
- Affinity mapping risky assumptions - desirability, feasibility, and viability risks prioritized by impact and uncertainty
- Targeting the biggest risks for testing - based on impact, uncertainty, and how readily each risk could be measured
- Planning success metrics and test methods - setting number-based quantifiers of success/failure
- Implementing a recruitment strategy - finding high-value, user-aligned, accessible participants
- Building testable prototypes - fake-front-door and Instagram ads, physical and digital hybrid product experiences
- Conducting two concurrent validation tests - collecting data against the success metrics
- Analyzing test results and reporting findings - comparing results with success metrics and considering next steps


Participants set preferences for two SMS deliveries of content daily for 7 days.
1.
Thank you for signing up for the Everyday Stoic 7-day pilot. We value your opinion as it will help us improve the app!
Each day you will be sent two messages according to your preferences. We hope you can respond with a rating and feedback. To adjust your preferences, just let us know.

2.
Participants were encouraged to rank the content daily and provide feedback, which helped mark participation. Additionally, a closing survey provided confirmation.
Good morning
Today's virtue: Temperance
_____________________
"The happiness of your life depends on the quality of your thoughts" - Marcus Aurelius
Today, I will focus my thoughts through the lens of virtue and guard myself from the unreasonable.
"I will give this a 4, not because it's not excellent, but because I want to provide an honest ranking. And I LOVE the format. A morning thought followed by an evening reflection is genius!"
- P2
Strategy test - Prototype 1 - Minimum viable product
Results
The pilot validation test failed its success metric, raising questions about content strategy: 50% of participants engaged with the app an average of twice daily over the one-week period, against a goal of 60%. Depending on the results of the other hypothesis test, it's close enough that a second test with some tweaks to the design and prototype could be warranted.
Will the content spur engagement and retention?
Recommendations
Depending on the Fake Front Door hypothesis testing outcomes, another test could be run with changes:
- 12 Gold-level participants (gathered from the Fake Front Door test below)
- Integrating the sense of investment the full app will produce, giving users a feeling of achievement and collection
If the Fake Front Door test fails its benchmark, more significant changes would be needed.
Success Metrics: At least 60% of users in the pilot will stay active and engage with the content an average of 2 times a day over a one-week period.
Method: MVP, SMS texting prototype service, budget $20
Data Collection: 1-week pilot service, SMS interactions, and a personalized post-test feedback survey

Goal: 60% of participants retain engagement.
Result: 50% of participants retained engagement.
For testing the 3 biggest risks, I carried out two simultaneous prototype tests with participants:
- Minimum Viable Product Prototype Test - measuring engagement, ease-of-use, and retention
- Fake Front Door Prototype Test - measuring market desirability for such a niche product
Strategy test - Prototype 2 - Fake front door
Results
The validation test exceeded its benchmark: 0.63% of all ad reach subscribed for a preorder notification for the product with their email. This validates demand for the product concept when kept lean, and warrants a redesigned pilot focused on content interaction strategy and greater implementation of the Hooked Model.
Is there demand for a simple Stoic philosophy app?
Recommendations
- Rerun a larger, more robust, and strategically tweaked MVP hypothesis test, using the emails gained here to access gold-level participants who desire the product.
- Begin usability testing, building on the assets developed.
Success Metrics: At least 0.2% of Instagram users targeted by our ad will sign up for a Stoic App preorder notification on a landing page (email address required).
Method: Instagram ad, linking to a Mailchimp landing page with mockups and an email form
Data Collection: unique emails collected from the landing page, plus advertising and landing page engagement data
1.
Create an Instagram account, content posts, product mockups, a landing page, and an ad with a budget of $35.



Goal: 0.2% of reach subscribe
Result: 0.63% of reach subscribed
2.
Run the ad for 7 days with a link to the landing page. The landing page contains mockups, a value statement, and preorder notification sign-up via an email form. Unique emails help gauge real interest.

2,239 people reached
115 unique landing page visits
14 unique email subscribes
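The funnel above reduces to two conversion steps; a minimal sketch of the arithmetic (the three counts come from the test itself, and the 0.2% benchmark is the stated success metric; the rate names are my own labels):

```python
# Conversion funnel from the Fake Front Door test.
reached = 2239   # unique people reached by the Instagram ad
visits = 115     # unique landing page visits
subscribes = 14  # unique preorder email sign-ups

click_through = visits / reached          # ad -> landing page
landing_conversion = subscribes / visits  # landing page -> sign-up
overall = subscribes / reached            # ad reach -> sign-up

print(f"Click-through rate: {click_through:.2%}")       # ~5.14%
print(f"Landing conversion: {landing_conversion:.2%}")  # ~12.17%
print(f"Overall conversion: {overall:.2%}")             # ~0.63%
print("Benchmark met:", overall >= 0.002)               # True
```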

SEQ scores, time on task, initial responses, indirect navigation paths, and facial expressions all revealed that the core experience of the app was taking too long and demanding too much effort.
Users were drawn to and distracted by secondary controls during tasks.
Dense text, font sizes, contrast ratios, and poor hierarchy reduced accessibility and increased cognitive load on simple tasks.


Aligning with User Behavior
Discovering usability problems and prioritizing changes
Utilizing the developed prototype assets, I set out to:
- Plan usability research
- Conduct usability tests
- Build a roadmap of usability fixes
To carry this out, I personally took action on:
- Identifying key user flows for testing - examining important and complex navigation flows
- Writing the usability testing script - utilizing natural, unbiased tasks and testing templates
- Finalizing Figma prototypes - ensuring prototype fidelity would net usability problems
- Recruiting participants - user-aligned, with diverse technology skills (lower-skill users surface more problems)
- Conducting usability tests - in-person and remote, following scripted prompts and recording results
- Analyzing results - clustering frequent problems, flagging severe issues, and summarizing SEQ scores
- Prioritizing changes - creating a list of usability fixes ordered by severity rating (frequency and impact)
- Sharing results - providing findings and resources, and collaborating on a roadmap to address issues
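The prioritization step above can be sketched as a simple severity calculation. The issue names below and the frequency-times-impact scoring are illustrative assumptions, not the study's exact rubric:

```python
# Hypothetical usability issues: (name, frequency 1-5, impact 1-5).
# Severity = frequency * impact is one common scoring scheme;
# the actual study's rubric may differ.
issues = [
    ("Unclear back navigation", 4, 5),
    ("Low-contrast body text", 3, 4),
    ("Secondary controls distract from task", 5, 3),
]

# Rank issues by severity, highest first, to order the fix list.
ranked = sorted(issues, key=lambda i: i[1] * i[2], reverse=True)

for name, freq, impact in ranked:
    print(f"{name}: severity {freq * impact}")
```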

17 usability issues discovered with just 5 participants
6 high severity usability issues prioritized
Uncertainty in navigation, cognitive overload, design heuristics, and accessibility were among the top problems.



Outcomes
- The strategy of targeting a niche philosophy and building a simple, daily way to develop knowledge in that area was validated in terms of demand. However, building lasting engagement in the app will require changes in usability and a sense of investment that ties into behavioral psychology.
- 17 usability issues were discovered with just 5 participants, 6 of them high severity, greatly handicapping user goals and first impressions of the app. Priority recommendations and possible solutions were given for each issue.
Reflection
This project faced a problem common to many products as they undergo design: the implementation drifted from the values set out at the start. By testing throughout design and development, we can better align with both the strategic vision of the business and the behaviors of our users.
- There are two things I would change about this project. First, I would have reduced the overlap between the two validation tests. By taking the emails gathered in the demand hypothesis test (fake-front-door prototype), I would have netted the perfect pool of participants to recruit for the second test, increasing the accuracy of the engagement test. I also would have liked the engagement test to carry a greater sense of the investment the final product was going for. This could have been as simple as streaks.
- Usability testing was a great strength for me. Thinking through natural ways to approach tasks, avoiding bias, and conducting and analyzing the tests came naturally to me (though I know there is always room to improve). I find many parallels between user interviews and usability testing: both hinge on psychology, gathering real behavior, and well-written scripts.
Just like with user interviews, going into usability testing is all about collecting natural behaviors. This is accomplished through awareness of bias, understanding of the user flows, and well-written (and facilitated) prompts that set participants on a natural course.