I'm a UX researcher who studies what's actually driving behavior in products — and helps teams build around what they find.

I've spent the last decade studying how people make decisions — first in the brain scanner during my PhD, now in products used by millions. I do both foundational and evaluative research, and I care a lot about matching the rigor of a study to the stakes of the decision it's informing. The work I'm most proud of tends to come from collaborating with a team to define the question we hadn't thought to ask.

PhD, UCLA — Psychology & Computational Neuroscience
MSc, UCL — Cognitive Neuroscience (Distinction)
Previously: TikTok Shop · Perx Health · Kaiser Permanente
01
Case Study
When "repetitive" doesn't mean what you think
1 THE STUCK TEAM
Users said the feed felt repetitive. A previous fix had broken revenue. The team wasn't sure what to try next.

The first attempt was intuitive: show less repeated content. But when the ML team reduced repetition across the board, GMV dropped. Leadership got cautious, and the team was stuck — they couldn't ignore the complaints, but the only solution they'd tried made things worse.

I was brought in to figure out what was actually going on. Everyone was asking "how do we reduce repetition?" I had a feeling that wasn't the right question.

2 A DIFFERENT QUESTION
What if "repetitive" doesn't mean what we think it means?

When someone says "I keep seeing the same stuff," they might mean the content is actually identical. Or they might mean something subtler: the content feels irrelevant, and irrelevant content is more noticeable when it recurs.

So based on initial conversations with users, I reframed the question: does the experience of repetitiveness depend on how relevant the content is? If yes, the solution isn't less repetition — it's better relevance.

3 THE STUDY
Each user rated three carefully chosen videos — and the design is what made the findings credible.

I surveyed 5,000 TikTok Shop users, stratified by usage frequency. Each participant rated three videos that varied systematically in relevance — one from a category they'd actively searched for, one from a category they'd engaged with but never searched for, and one from a category they'd never interacted with.

This isolated the effect of relevance on perceived repetitiveness while controlling for actual exposure. The stakes warranted a large sample and a proper regression model rather than something lighter.

N = 5,000 survey · mixed-effects regression · cognitive walkthroughs · behavioral log analysis · A/B validation
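For readers who want the mechanics: a minimal sketch of what that model can look like in Python with statsmodels. It's illustrative, not the production analysis, and the column names (repetitive_rating, relevance_level, exposure_count, user_id) are placeholders.

```python
# Minimal sketch of the mixed-effects model, assuming long-format
# survey data with one row per (user, video) rating.
# Column names are illustrative placeholders, not a real schema.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings.csv")  # hypothetical export of the survey data

# A random intercept per user absorbs individual rating tendencies.
# The relevance x exposure interaction is the key test: does relevance
# change how exposure translates into perceived repetitiveness?
model = smf.mixedlm(
    "repetitive_rating ~ C(relevance_level) * exposure_count",
    data=df,
    groups=df["user_id"],
)
result = model.fit()
print(result.summary())
```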
4 WHAT THE DATA SHOWED
Relevance was the moderator — not frequency.

Content that matched a user's search intent was rated as significantly less repetitive — even when shown more often. Content from categories they'd never engaged with felt the most repetitive, even at lower exposure.

Users tolerate repeated content when it matches their intent
Mean repetitiveness rating (1–7 scale) by content relevance level

Repetition isn't really about seeing the same content. It's about seeing content that feels irrelevant — and irrelevant content becomes more annoying the more it shows up.

5 MAKING IT BUILDABLE
The ML team needed to know which signals to weight — not just that "relevance matters."

"Relevance matters" isn't something an ML engineer can implement. I ran a secondary analysis ranking which signals (identified through the qualitative work) best predict whether a user will experience content as repetitive.

Which signals best predict repetition tolerance?
Relative effect size from mixed-effects regression model

This became the weighting structure for new diversity controls. Instead of suppressing all repeated content, the algorithm would check whether it matched the user's inferred intent — and only intervene when it didn't.
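To make that logic concrete, here's a toy sketch of relevance-aware suppression. The signal names, threshold, and penalty are hypothetical placeholders, not TikTok Shop's actual ranking code.

```python
# Toy sketch of relevance-aware diversity control: penalize repeats
# only when the candidate doesn't match the user's inferred intent.
# All names, thresholds, and weights are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Candidate:
    category: str
    times_shown: int        # recent exposure count for this user
    relevance_score: float  # 0-1, from intent signals (search, engagement)

RELEVANCE_THRESHOLD = 0.6  # hypothetical cutoff for "matches intent"
REPEAT_PENALTY = 0.3       # hypothetical down-weight per extra exposure

def diversity_adjusted_score(c: Candidate, base_score: float) -> float:
    """Suppress repetition only for low-relevance candidates."""
    if c.relevance_score >= RELEVANCE_THRESHOLD:
        return base_score  # matches inferred intent: leave it alone
    return base_score - REPEAT_PENALTY * max(c.times_shown - 1, 0)
```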

6 WHAT CHANGED
From blanket content reduction to relevance-aware diversity controls.

An A/B test with tens of thousands of users validated the approach: a low single-digit GMV lift (the previous attempt had decreased GMV), a measurable drop in repetition complaints, and the lowest complaint rates for high-relevance content. The recommendation algorithm was permanently updated.

02
Case Study
Why loyal customers still don't trust you
1 THE BIG BET
The team was about to invest millions in loyalty programs. More purchases should build trust, right?

US retention on TikTok Shop was significantly lower than other markets. The working theory: users needed more reasons to come back. The roadmap was built around frequency-driven features — loyalty programs, promotions, gamification.

The implicit assumption: as users buy more, barriers go down. Trust builds. The habit forms.

I wasn't sure that was true. And given how much was about to be spent, we needed to be highly confident in our answer, quickly.

2 DESIGNING FOR FALSIFICATION
Before collecting any data, I worked with the team to design falsifiable research hypotheses.

The key hypothesis made a testable prediction: barriers should decline with purchase frequency. I designed the study around it, and before launching anything I aligned with the team on a decision rule: if barriers decline, the loyalty strategy is right; if they're flat, it's not. By agreeing upfront on what would change their minds, the team was ready to act on the findings in either direction.

The stakes warranted convergent evidence — 6,000 survey responses, ~20 depth interviews, and behavioral log analysis.

N = 6,000 survey · 20 depth interviews · behavioral log analysis · propensity score matching · AI-assisted thematic coding
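For the curious, here's a minimal sketch of the propensity-score-matching step: match frequent purchasers to otherwise-similar infrequent ones, then compare how often each group mentions a barrier. The dataset and column names are hypothetical.

```python
# Minimal sketch of propensity score matching for the segment
# comparison. Column names (tenure_days, total_spend, is_frequent,
# barrier_trust) are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("survey_with_logs.csv")  # hypothetical merged dataset
covariates = ["tenure_days", "total_spend", "sessions_per_week"]

# 1. Estimate each user's propensity to be a frequent purchaser.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["is_frequent"])
df["propensity"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Match each frequent purchaser to the infrequent purchaser
#    with the nearest propensity score.
treated = df[df["is_frequent"] == 1]
control = df[df["is_frequent"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["propensity"]])
_, idx = nn.kneighbors(treated[["propensity"]])
matched_control = control.iloc[idx.ravel()]

# 3. If barriers decline with frequency, this gap should be clearly
#    negative; a flat line shows up as a near-zero difference.
gap = treated["barrier_trust"].mean() - matched_control["barrier_trust"].mean()
print(f"Difference in trust-barrier mention rate: {gap:+.3f}")
```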
3 THE FLAT LINE
Barriers didn't decline. Not even a little.

Across every data source, the same pattern: barriers were flat. Seller trust, product quality, customer support — all virtually identical whether a user had 1 purchase or 15+.

The team expected the first chart. They got the second one.

What the team expected: barriers decline with purchase frequency
% of users mentioning each barrier, by purchase history
What I actually found: barriers persist regardless of purchase history
% of users mentioning each barrier, by purchase frequency segment
4 THE WHY
"Every purchase is a gamble."

Interviews explained the flat line. Users think of TikTok Shop as an impulse and discovery channel — not a shopping destination. They browse for entertainment, stumble on products, and take a chance. Sellers feel random. They buy different things each time, so no trust accumulates.

Each purchase is evaluated independently. No compounding trust. No habit forming.

You can't drive retention with frequency tactics when the platform's positioning in users' minds prevents trust from building — no matter how many times they buy.

41% of churned users were actively purchasing on competitor platforms. They hadn't stopped shopping. They just didn't trust TikTok Shop specifically.

5 WHAT CHANGED
The entire quarterly roadmap was redirected — in 30 days.

Before this research

Loyalty programs, promotions, gamification — all designed to drive purchase frequency.

After this research

TikTok Shop Protections with trust badges, seller quality vetting overhaul, and customer service improvements. A Q4 tracking study confirmed the pivot worked: trust metrics improved and retention closed the gap with other markets.

Research should be as rigorous as the decision needs it to be — and no more.

I don't believe in a one-size-fits-all research process. Some questions need a 6,000-person survey with propensity score matching. Others need five well-chosen interviews and a clear synthesis. The skill isn't knowing how to be maximally rigorous — it's knowing how much rigor this particular decision requires, given the stakes, the constraints, and what's already known.

What I do hold constant is intentionality. Every study I run, I can tell you what methodological tradeoffs I made and why. If I cut a corner, it was a deliberate choice — not an oversight.

I do both foundational and evaluative research. Some of the most valuable work I've done told a team not to build something — which is harder to quantify but just as important as pointing them toward the right thing. I care less about whether research was technically impressive and more about whether it actually changed what got built.

The cases above happened to require heavy rigor because the decisions were high-stakes. Plenty of my work doesn't look like that. But the thinking underneath is the same: understand the question, match the method to the stakes, make the tradeoffs explicit, and deliver something the team can act on.

Born in Morocco. Raised in Sweden. Curious everywhere in between.

I grew up between two cultures — born in Morocco and raised in Sweden — which probably explains why I ended up spending my career trying to understand how people think in different contexts. I've lived in Budapest, London, Los Angeles, and now I'm in Oak View, California, a small town between the mountains and the coast.

When I'm not thinking about research, I'm usually surfing. I have a place in Morocco that's mostly an excuse to be near the waves. Time in the water does something for my thinking that nothing else quite replicates — it's the one place where I'm genuinely not trying to solve a problem.

The thread that connects everything — the places I've lived, the work I do, what draws me outside of it — is curiosity about why people do what they do. That started long before I became a researcher. The PhD just gave me better tools.

Where I learned to question what looks obvious

Before I studied shopping behavior, I studied brains. My PhD at UCLA focused on decoded neuroreinforcement — a method where you decode someone's brain activity in real time and selectively reinforce specific neural patterns. In practice, this meant I could influence people's preferences without them being consciously aware it was happening.

It sounds exotic, but it taught me something practical: people's behavior is driven by processes they often can't articulate. What someone tells you in an interview is real, but it's incomplete. The gap between what people say and what actually drives their behavior is where the most interesting research questions live.

That instinct — to look beneath the surface explanation, to check whether the obvious answer holds up under scrutiny — is what I brought from neuroscience into UX research. It's not that I distrust users. It's that I've learned to take what they say seriously while also designing studies that can reveal what they can't easily tell me.

Research

  • Causal inference & experimentation
  • A/B and multivariate testing
  • Behavioral modeling & segmentation
  • Survey design at scale
  • Depth interviews & cognitive walkthroughs
  • Behavioral log analysis
  • Mixed-methods research
  • AI-assisted research workflows

Statistical & ML

  • Propensity score matching
  • Mixed-effects regression
  • Bayesian methods
  • Python (pandas, statsmodels, scikit-learn)
  • R (tidyverse, lavaan)
  • SQL
  • Reinforcement learning models

Domains

  • Recommendation systems & search
  • E-commerce & marketplace
  • ML model evaluation
  • User retention & trust
  • Digital health & therapeutics
  • Computational neuroscience
  • Content & ad systems

Let's connect!