Have you found yourself in the fortunate position of being the only designer in an organization that doesn’t have any design assets or previous research? Lucky you! Now is the time to do things right and build a solid foundation for excellent research and design practices while also creating value as soon as possible. Here’s my approach to starting off on the right foot as a UX practitioner in an environment where you are breaking ground and creating UX assets from scratch.
Laying the groundwork and understanding goals.
The first thing I did was talk with my new team. What are their goals and what are their concerns? What did they expect me to do for them? What do they understand about the UX role in general, and how can I share what I do? Understanding expectations and where the team wants to go is important to the success of the partnership, and having these conversations lays the groundwork for good communication from the beginning. As a best practice, document the outcomes of these discussions; those notes become the raw material for your research plan later.
Map it out! Diagrams are your friend.
Next I wanted to get the lay of the land with regard to the digital assets I’d be working with. I like to start with a simple site map diagram showing the existing architecture of the site and how things work at a high level. Diagramming forces you to investigate and articulate how something is put together, which helps you learn how it works. It’s also great to document where things are when you begin so you can track changes over time. The theme of this blog post is documentation! Just kidding, but really it’s so important. Document as much as you can. It’s always helpful to have it when you need it.
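A site map like the one described above can start as something very lightweight before it ever becomes a polished diagram. Here's a minimal sketch in Python; the page names are placeholders, not the actual FastRuby.io structure:

```python
# A hypothetical high-level site map, captured as a nested dict.
# Page names are placeholders, not the actual site's structure.
site_map = {
    "Home": {
        "Services": {"Upgrades": {}, "Audits": {}},
        "Blog": {},
        "About": {"Team": {}},
        "Contact": {},
    }
}

def print_tree(node, depth=0):
    """Print the site hierarchy as an indented text tree."""
    for page, children in node.items():
        print("  " * depth + page)
        print_tree(children, depth + 1)

print_tree(site_map)
```

Keeping the map in a plain, versionable form like this makes it easy to diff against later snapshots and track how the architecture changes over time.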
Put together a ❤️research plan❤️.
Once I understand more about the areas of concern and the kind of impact the team is looking for, I plan a research effort to learn what’s going on with the app’s experience. For OmbuLabs and FastRuby.io, I decided to conduct a mixed-methods research effort including usability testing and product reaction cards. I often like to start with a moderated usability test to get familiar with how the sites work when people try to accomplish the specific tasks my team is worried about. I also included questions about impressions of the site and brand using an abbreviated set of Microsoft Product Reaction Cards, plus a few questions in other areas to map out how things were working and where desired impact and actual impact differed.
Next, I drafted my research plan. I always make sure the team feels I captured their areas of concern accurately and that everyone agrees on the plan before I begin. Ultimately, your team is the audience for these results, and you want them on board before research starts.
Recruiting is hard. Let your team help.
I relied on my team to help recruit participants, and I set up a screener survey to find people who met the criteria I was looking for. I like to simplify the research ops side a little and use a scheduling tool like Calendly so that anyone who passes my screener survey can schedule themselves into an available time slot.
Bring your team along on the journey.
Next, I set up a note-taking document for easy note collection, ran a pilot test, and invited team members to attend test sessions. I always send out an observer code-of-conduct guide so that everyone is on the same page about how observing sessions works, and I make sure observers know they will have dedicated time for questions. The internal team joins the research session about 10 minutes before the participant, and we review the note-taking document and observer’s guide before the session starts. For the note-taking document, I like a spreadsheet with a separate page for each notetaker, columns representing the participants, and rows for each task. Everyone brings unique and interesting observations; we all notice different things, and having those other perspectives is incredibly valuable. At the same time, I like to make sure the whole team feels like part of the research. It's also satisfying and interesting to see how people use the applications you've built.
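The note-taking layout above (one page per notetaker, columns for participants, rows for tasks) maps cleanly onto a simple data structure. Here's a hypothetical sketch; the notetaker, participant, and task names are all made up:

```python
# Sketch of the note-taking document described above:
# one page per notetaker, columns for participants, rows for tasks.
# All names here are hypothetical placeholders.
notetakers = ["Alice", "Bob"]
participants = ["P1", "P2", "P3"]
tasks = ["Find pricing page", "Request a quote", "Read a blog post"]

# notes[notetaker][task][participant] -> free-text observation
notes = {
    nt: {task: {p: "" for p in participants} for task in tasks}
    for nt in notetakers
}

# During a session, an observer records what they saw:
notes["Alice"]["Request a quote"]["P2"] = (
    "Hesitated at the form; unsure which field was required."
)

# After sessions, it's easy to pull every observation for one task:
for nt in notetakers:
    for p, obs in notes[nt]["Request a quote"].items():
        if obs:
            print(f"{nt} on {p}: {obs}")
```

Structuring the notes this way up front makes the debrief easier, since every observer's comments on the same task line up side by side.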
After I moderate the test sessions, I debrief with the observers, and we walk through and compare notes. I then do my own deep-dive review of the recordings to make sure I’ve captured all the details, like time on task and ease ratings. I also make video clips of interesting findings and other compelling moments in preparation for creating my report.
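Tabulating the quantitative details mentioned above can be as simple as averaging per task. Here's a minimal sketch with made-up numbers (time in seconds, ease rated on an assumed 1–7 scale):

```python
from statistics import mean

# Hypothetical per-participant measurements for each task:
# (seconds on task, ease rating on a 1-7 scale). Numbers are made up.
results = {
    "Request a quote": [(95, 5), (210, 3), (140, 4)],
    "Find pricing page": [(30, 7), (45, 6), (38, 7)],
}

for task, observations in results.items():
    times = [t for t, _ in observations]
    ratings = [r for _, r in observations]
    print(f"{task}: mean time {mean(times):.0f}s, mean ease {mean(ratings):.1f}/7")
```

Even rough numbers like these become the benchmark you measure the next round of research against.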
Reporting and Making Sense of Results
Next, I make a prioritization matrix so that issues can be organized into priority order based on a few key ratings: severity, estimated effort to fix for UX, and effort to fix for development (ask your development peers to give rough size estimates for each issue). In my report, I include this information both as a list and plotted into quadrants by severity and effort to fix. These items become my backlog.
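The quadrant classification described above can be sketched in a few lines. In this hypothetical example, the issue names and numbers are invented, both ratings use a 1–5 scale, and the threshold of 3 is an assumption:

```python
# Hypothetical usability issues as (name, severity, effort-to-fix),
# both rated on a 1-5 scale. Names and numbers are made up.
issues = [
    ("Quote form gives no error feedback", 5, 2),
    ("Blog navigation is hard to find", 3, 1),
    ("Pricing table unreadable on mobile", 4, 4),
    ("Footer links outdated", 1, 1),
]

def quadrant(severity, effort, threshold=3):
    """Classify an issue into one of four quadrants.
    High severity / low effort items are the quick wins."""
    sev = "high-severity" if severity >= threshold else "low-severity"
    eff = "low-effort" if effort < threshold else "high-effort"
    return f"{sev}/{eff}"

# Sort the backlog: most severe first, then cheapest to fix.
backlog = sorted(issues, key=lambda i: (-i[1], i[2]))
for name, sev, eff in backlog:
    print(f"[{quadrant(sev, eff)}] {name}")
```

Sorting by severity first and effort second is just one reasonable policy; your team may weigh development effort differently.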
Benchmarks and Backlogs
The metrics I collected help me make sure the site is improving in the areas where we want to see change. At the end of this research effort, I have a prioritized list of to-dos that align with the team’s goals, documentation of the current site architecture and its issues, and a research plan showing what our goals were during that period. This documentation helps define what success looks like and where things need to change. Who doesn’t love a robust, well-defined backlog of work targeted at solving proven pain points? Next, I'll talk about setting up a design environment, creating the beginnings of a design system, and doing the work to enable both design and development efforts.