UXR fact: finding many participants is not the same as finding the right participants. While both can be daunting – sometimes seemingly impossible – tasks, it is essential to prioritise the latter over the former to get the best possible results from your carefully designed UX study.
One great way to tune out the deluge of participants and receive feedback only from your target users is – drum roll – through screening questions! Screening questions, sometimes grouped into screener surveys, are an essential part of any recruitment process.
Let’s assume your study only requires American women aged 20-29: you may want to add screening questions assessing your participants’ gender, age and nationality, and filter out anyone who doesn’t belong to your intended group.
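The filtering logic above can be sketched in a few lines. This is a minimal illustration, not any platform’s actual API: the field names (`gender`, `age`, `nationality`) and the record format are assumptions made for the example.

```python
# Hypothetical participant records; field names are illustrative only.
def passes_screener(participant: dict) -> bool:
    """Return True only if the participant matches the target group:
    American women aged 20-29."""
    return (
        participant.get("gender") == "female"
        and participant.get("nationality") == "American"
        and 20 <= participant.get("age", 0) <= 29
    )

applicants = [
    {"gender": "female", "age": 24, "nationality": "American"},
    {"gender": "male", "age": 31, "nationality": "American"},
    {"gender": "female", "age": 45, "nationality": "Canadian"},
]

# Keep only participants who clear every criterion.
selected = [p for p in applicants if passes_screener(p)]
print(len(selected))  # 1
```

In practice your recruitment platform applies these rules for you; the point is that each screening question maps to one explicit, checkable criterion.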
When done well, screening questions can do the heavy lifting in your selection process and save you precious time otherwise spent on participant scouting and interview QA. When done badly, however, screening questions can become little more than a nuisance to career panellists, and an actual barrier to your intended group.
Therefore, it is essential to know how to get your screening questions right. And how can you make sure you nail them in your next study? We’ve asked Rucha Joshi, Market Research Recruit Lead at UserTribe, to share some of her insights and suggestions.
Through rigorous screening criteria. When our participants start a study, they first have to go through a screener. Generally, we try to have a screener survey instead of a qualifier. So instead of asking “do you use dental floss?”, we ask a more neutral, non-leading question like “which of the following oral care products do you use?”, followed by a list of different oral care products.
This means that if I were a career panellist – and there are quite a few of them, even for very niche studies – and I saw a question asking “do you use dental floss?”, I would assume the study is about dental floss, so I would answer “yes” and be accepted into the study. With the second type of question, there’s a much lower chance I’ll guess the “right” answer.
First off, use neutral questions instead of leading ones. If the goal is to find people who like a product a lot, don’t ask “on a scale of one to five, how much do you like this product?”; frame it as “what is your overall opinion of this product?” instead.
Of course! That works if your platform allows you to filter participants in or out based on the keywords they use. However, it could also be a multiple-choice or a radio-button question. After that, I would also add one or more multiple-answer questions with a lot of possible options – if participants choose the right one(s), they can move on.
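The multiple-answer gate Rucha describes can be sketched as a simple set check. The option labels and the qualifying set below are invented for illustration; a real platform would configure this in its screener builder rather than in code.

```python
# Hypothetical qualifying option(s) hidden among a neutral list of products.
QUALIFYING = {"dental floss"}

def qualifies(selected_options: set[str]) -> bool:
    """Participant moves on only if at least one ticked option
    is in the qualifying set."""
    return bool(selected_options & QUALIFYING)

print(qualifies({"toothpaste", "dental floss"}))  # True
print(qualifies({"mouthwash", "toothpaste"}))     # False
```

Because the qualifying option sits in a long, neutral list, a career panellist can’t tell which answer unlocks the study – which is exactly the point.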
We can’t ask participants 100 screening questions before the interview takes place: they will just get tired or quit, making our test unsuccessful. We want a screener survey, not a full questionnaire – that’s what the interview itself is for. With a screener survey, we’re simply trying to filter out the participants who are not in the target group.
Therefore, there must be a balance between the number of questions and how niche our target audience is going to be. Furthermore, no matter how detailed your screening questions are going to be, you must always save some time for video auditing, to double-check the participants are actually able to answer the questions.
Another pitfall UX’ers should be wary of is a lack of clarity in questions. Indeed, more often than not, good participants get kicked out of a study because of a screening question we phrased incorrectly. In one of my studies, I was looking for participants from a specific sector and function who were also using a service from a bank. After having assessed the former, I went ahead and asked “do you use the XYZ platform or service from this bank?”, and participants mistook one type of service offered by the bank for another.
So I had a lot of participants answering positively, only to find out after one or two interviews that they weren’t the intended users of the study. That meant I had to go back to my screening questions* and make them more explicit – i.e. describe the service in-depth and add a link to its web page for good measure.
*note: Rucha is using Sonar as a UX research platform to conduct her studies. With Sonar, you can pause your study, change any of the screening or study questions and resume it with just a couple of clicks. This grants exceptional flexibility in your study design, and prevents you from starting from scratch should you wish to make any changes!
Of course! This is quite a niche tip, though very useful to those who conduct multiple studies with the same target group. If you already have data about certain participants, provided you have received their approval to store and use their data, you can speed up your screening process and laser-focus your questions based on their previous answers.
Absolutely. If I have a certain user in mind and I know they responded positively during a previous study, I can go ahead and look directly at people matching my ideal user’s description.
Want to see how you can ramp up every step of your next UX study? Just click the button below!