Driving engagement for a B2C Mobile App


A B2C mobile application was seeking to increase its engagement metrics. Will users engage better with relevant content? This was the hypothesis that needed validation.

The existing process involved manually curating the topics around which users created content. Fafadia Tech delivered this project in three calendar months.

Fafadia Tech built a Persona Engine that was deployed and tested in production with substantial daily active users. Along the way, we empowered our client to create their first data pipeline.

Key Challenges

Some of the challenges tackled along the way included:

  1. Dealing with unstructured data, which had to be transformed before algorithms could do their jobs.
  2. Working with the client to define soft and hard success metrics.
  3. Building a scalable data pipeline that processed data daily.

Problem Definition

Given a user U and an action A (where A can be an action, e.g. Comment, Like, or Write),

we want to implement a recommendation model RECCO, defined as

RECCO(U, A, M) -> T

where U is the user, A is a vector of actions (i.e. the context), M is the model used to generate T, and T is a set of ranked topics.
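The interface above can be sketched in Python. The function and model names here are illustrative stand-ins, not the production API:

```python
from typing import Callable, Dict, List

# Hypothetical sketch of the RECCO interface; `toy_model` stands in for M.
def recco(user_id: str, actions: List[str],
          model: Callable[[str, List[str]], Dict[str, float]]) -> List[str]:
    """Return a ranked list of topics T for user U given action context A."""
    scores = model(user_id, actions)  # model M scores each candidate topic
    # rank topics by descending score
    return [topic for topic, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

# toy model: score topics by how often they appear in the action context
def toy_model(user_id: str, actions: List[str]) -> Dict[str, float]:
    scores: Dict[str, float] = {}
    for a in actions:
        scores[a] = scores.get(a, 0) + 1
    return scores

print(recco("u1", ["sports", "movies", "sports"], toy_model))
# → ['sports', 'movies']
```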

Project Plan

  1. Write scripts to get data from the DB/API into, say, MongoDB
  2. Implement non-NLP methods for recommendations
    1. Conditional probability method (using frequency counts)
    2. Set similarity metrics methods
    3. Collaborative filtering algorithm: LDA
  3. Implement NLP methods
    1. Clustering methods: we will be using a simple k-means algorithm
    2. Taxonomy-based suggestion: figure out sub-topics the user has interacted with and suggest siblings
  4. Evaluation of algorithms/techniques
  5. API for integration
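The conditional-probability method in step 2.1 can be sketched with plain frequency counts. The log format below is an assumption for illustration:

```python
from collections import Counter
from itertools import combinations

# Each entry is the set of topics one user engaged with (assumed log format).
logs = [
    {"cricket", "movies"},
    {"cricket", "politics"},
    {"cricket", "movies", "politics"},
]

topic_counts = Counter()
pair_counts = Counter()
for topics in logs:
    topic_counts.update(topics)
    pair_counts.update(combinations(sorted(topics), 2))

def p_given(b: str, a: str) -> float:
    """Estimate P(b | a) = count(a, b) / count(a) from frequency counts."""
    pair = tuple(sorted((a, b)))
    return pair_counts[pair] / topic_counts[a]

print(round(p_given("movies", "cricket"), 2))  # → 0.67
```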



Some of the components used to build the Persona Engine included:

  1. Flask: Framework for building APIs
  2. MongoDB: Datastore for storing user generated data and final results
  3. Python: Programming language to implement algorithms
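As a minimal sketch of how these pieces fit together, a Flask endpoint could serve precomputed topic suggestions. The route, payload shape, and in-memory store below are illustrative stand-ins (the real datastore was MongoDB):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for precomputed results that would live in MongoDB.
FAKE_STORE = {"u1": ["cricket", "movies"]}

@app.route("/recommendations/<user_id>")
def recommendations(user_id):
    # topics would come from the persona engine's precomputed ranked scores
    return jsonify({"user": user_id, "topics": FAKE_STORE.get(user_id, [])})

# app.run(port=5000) would serve this; here we exercise it with the test client
print(app.test_client().get("/recommendations/u1").get_json())
```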

Technical Solution

The project was executed in two phases:

  1. Exploratory Data Analysis (EDA)
  2. Implementation and Evaluation

During the EDA phase, basic statistics were computed to answer questions like:

  1. On average, how many topics does a user engage with?
  2. Who is our typical user?
  3. When do they not engage with the manually curated list of topics?
  4. What are some interesting topics users engaged with during the last 30 days?
  5. Is there a tribe that users belong to?
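Statistics like the first question can be computed directly from the raw engagement records. The record shape below is an assumed example:

```python
from statistics import mean

# Assumed shape of per-user engagement records pulled during EDA.
engagements = {
    "u1": {"cricket", "movies"},
    "u2": {"cricket"},
    "u3": {"politics", "movies", "startups"},
}

# average number of distinct topics each user engages with
avg_topics = mean(len(topics) for topics in engagements.values())
print(avg_topics)  # → 2
```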

During the implementation phase, the following classes of algorithms were implemented:

  1. Graph Based

For this category of recommendation algorithm, we created a similarity graph between items (based on the cosine distance of their co-occurrence vectors).

Suggestions for a user were generated by combining similarity scores. This algorithm emulates the “neighbourhood” method.

Alternatively, newer techniques like item2vec use skip-gram with negative sampling to learn vector representations of items; cosine scores between the vectors can then be used to compute similarity.
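A minimal sketch of the neighbourhood approach, assuming toy co-occurrence vectors (not production data):

```python
from math import sqrt

# Toy co-occurrence vectors for topics; counts are illustrative.
cooc = {
    "cricket":  [3, 1, 0],
    "football": [2, 1, 0],
    "cooking":  [0, 0, 4],
}

def cosine(u, v):
    """Cosine similarity between two co-occurrence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def neighbours(topic):
    """Rank other topics by cosine similarity to `topic`."""
    return sorted(
        (t for t in cooc if t != topic),
        key=lambda t: cosine(cooc[topic], cooc[t]),
        reverse=True,
    )

print(neighbours("cricket"))  # → ['football', 'cooking']
```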

  2. Frequency Based

Pairs of items can be sampled from the action logs, and their Pointwise Mutual Information (PMI) is a good proxy for how “related” they are.
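PMI can be computed from session-level frequency counts. The session format below is an assumption:

```python
from math import log2
from collections import Counter
from itertools import combinations

# Action logs: each entry is the set of items in one session (assumed format).
sessions = [
    {"cricket", "movies"},
    {"cricket", "movies"},
    {"cricket", "cooking"},
    {"movies", "politics"},
]

n = len(sessions)
item_counts = Counter()
pair_counts = Counter()
for s in sessions:
    item_counts.update(s)
    pair_counts.update(combinations(sorted(s), 2))

def pmi(a: str, b: str) -> float:
    """PMI(a, b) = log2( P(a, b) / (P(a) * P(b)) )."""
    p_ab = pair_counts[tuple(sorted((a, b)))] / n
    return log2(p_ab / ((item_counts[a] / n) * (item_counts[b] / n)))

print(round(pmi("cricket", "movies"), 2))
```

Positive PMI means the pair co-occurs more often than chance would predict; values near zero mean the items are roughly independent.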

  3. Content Based

Content-based recommendation looks at semantic associations between items; a few such techniques were evaluated.

  4. Matrix Factorization

Matrix factorization techniques try to predict the values of missing entries in a user-to-item matrix, i.e. how interesting a suggestion would be to a given user. Generally, these techniques are useful where there is a scaled rating (as opposed to binary signals, e.g. like/dislike). Several techniques from this class were evaluated.
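A minimal matrix-factorization sketch using stochastic gradient descent over the observed entries. The ratings, sizes, and hyperparameters are toy values, not the production setup:

```python
import random

random.seed(0)

# Observed (user, item) -> rating entries; missing pairs are what we predict.
ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (1, 1): 1.0}
n_users, n_items, k = 2, 2, 2  # k latent factors

# small random init for user factors P and item factors Q
P = [[random.uniform(0, 0.1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.uniform(0, 0.1) for _ in range(k)] for _ in range(n_items)]

def predict(u: int, i: int) -> float:
    """Predicted rating is the dot product of user and item factors."""
    return sum(P[u][f] * Q[i][f] for f in range(k))

lr, reg = 0.05, 0.01
for _ in range(2000):                     # SGD epochs over observed entries
    for (u, i), r in ratings.items():
        err = r - predict(u, i)
        for f in range(k):
            P[u][f] += lr * (err * Q[i][f] - reg * P[u][f])
            Q[i][f] += lr * (err * P[u][f] - reg * Q[i][f])

# Observed entries should now be reconstructed closely; unobserved (u, i)
# pairs would be the "missing" predictions used as suggestions.
print(round(predict(0, 0), 1))
```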

A few lessons and tweaks we picked up along the way:

  1. Using a time window to restrict candidates improved the runtime of the algorithms
  2. Freshness had to be factored in (recent topics had their scores boosted)
  3. Faster techniques like Locality Sensitive Hashing (LSH) were used instead of exact cosine scores
  4. Infrastructure code to serve and re-generate scores was built to run every 24 hours
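The freshness boost in tweak 2 can be sketched as an exponential decay on a topic's age. The half-life value and fixed "now" are assumptions for a reproducible example:

```python
from datetime import datetime, timedelta
from math import exp

HALF_LIFE_DAYS = 7.0                 # assumed: score halves every 7 days
NOW = datetime(2024, 1, 15)          # fixed "now" for reproducibility

def boosted(score: float, last_active: datetime) -> float:
    """Decay a topic's score exponentially by its age in days."""
    age_days = (NOW - last_active).days
    return score * exp(-age_days * 0.693 / HALF_LIFE_DAYS)  # 0.693 ≈ ln 2

fresh = boosted(1.0, NOW)                            # posted today
stale = boosted(1.0, NOW - timedelta(days=14))       # two half-lives old
print(round(fresh, 2), round(stale, 2))  # → 1.0 0.25
```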


Evaluation

Algorithms and techniques were evaluated in two ways:

  1. Use of theoretical metrics:
    1. F1 scores (a combination of precision and recall)
    2. For some algorithms, cross-validation scores (i.e. training error vs. testing error)
  2. Use of analytics:
    1. A/B testing
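F1 can be computed by comparing suggested topics against topics the user actually engaged with. The sets below are toy data:

```python
# Toy comparison of suggested vs. actually-engaged topics (assumed data).
suggested = {"cricket", "movies", "cooking"}
engaged = {"cricket", "politics"}

tp = len(suggested & engaged)          # true positives: correct suggestions
precision = tp / len(suggested)        # fraction of suggestions that were right
recall = tp / len(engaged)             # fraction of true interests we found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(round(f1, 2))  # → 0.4
```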


Results

The project didn’t achieve the hard goal of increasing engagement by 30% weekly (engagement did increase by 15%); however, a few softer results were achieved:

  1. UI tweaks were identified to educate users that suggestions were coming from algorithms
  2. Thinking gears were set in motion for the customer’s engineering team
  3. Users were found to show a higher level of engagement with Trending topics