Using AI in UX research: a structured and ethical approach
A structured approach for assessing the ethical & effective use of AI in UX Research

Since its release in November 2022, ChatGPT has dominated the zeitgeist in innovation, reaching 100 million users in record time.
That excitement also extended to the research and design communities. By our count, since ChatGPT’s release, the tool itself or the broader topic of AI has featured in roughly 15–20% of articles circulated in popular UX publications. Practitioners are asking how we could use ChatGPT in our work, how we ought to use AI tools more generally, and what AI means for the future of our profession.
However, the discourse hasn’t yet produced a framework — that is, a formal, structured way of thinking through these problems.
Let’s first examine what a good framework should include:
- Guiding principles — The question is no longer if AI will become a part of common UX research practice, but where and how we should use it. A framework can help provide guiding principles for why, how, and where AI can be most effectively and ethically used.
- Adaptability — A good framework can be applied to new systems and situations. Recently, ChatGPT has dominated conversations about AI. But this tool isn’t the final boss — newer AI technology with more robust capabilities will emerge, as will purpose-built UX applications. We need a structured way to think about new tools as they’re released, and how they fit into our practice.
- Practical applications — Finally, a proper framework will provide a structured way of holding specific examples against best practices, helping teams to evaluate and improve continually.
The PAIRED framework
After reviewing the relevant discourse, we propose the PAIRED framework, which defines six core principles for integrating AI tools into UX research: Principled, Accountable, Initiated, Reviewed, Enabled, and Documented.
