Let’s face it: in UX research, you can never be absolutely certain your insights will hit the bullseye. This risk, combined with the big hurdle of convincing your stakeholders of the validity and utility of your qualitative studies, is a primary pain point for many UX’ers worldwide, especially when running small-scale studies with limited time or budget.
The solution to this conundrum? Well, you can take no immediate risks, run no study and carry on with tried-and-tested formulas – but no risk it, no biscuit. Or you can run a biased, limited study, relying on just one methodology and rolling the dice, hoping for the best. Alternatively, you can look at the problem with a fresh take and find a way to mitigate risks while extracting more data from the same sample. Triangulation helps you do just that.
Triangulation is a land surveying and nautical term referring to the location of an object on a map through different compass bearings. In social science research, triangulation is a method used to increase the credibility and validity of research findings by pinpointing them with different observations, data, and methodologies – the latter of which is especially dear to UX’ers!
In UX research, methodological triangulation – or cross-examination – allows researchers to overcome the bias stemming from using only one research method or a limited sample, and to gather insights from multiple perspectives, adding extra layers to one-dimensional findings through different methodologies.
The core idea behind methodological triangulation – or simply triangulation, for readability’s sake – is that by validating your insights through different methodologies you can:
– Mix the strengths of multiple research methods
– Supplement individual methodological weaknesses
– Increase the likelihood of making decisions with deep, meaningful and complete information
There are many different methodologies you can use to spice up your user experience research. For instance, you can mix A/B testing with usability tests, and blend in survey questionnaires to add a quantitative kick.
However, if both time and sample size are of the essence, you need to learn as much as possible from your users in one go. One fast and effective way to do so is to set up an unmoderated usability testing study that combines different qualitative and quantitative methodologies.
This is where the WH-questions framework comes to the rescue. As we’ve already discussed in a previous piece, you can obtain very different answers from your users by framing your questions ever so slightly differently. The same works if you ask these questions yourself. Drawing from our previous article, you could ask yourself these three questions:
– What are the most used features in my online shopping experience?
– How are users engaging with my online shopping experience?
– Why are users engaging with solution X rather than solution Y in my online shopping experience?
Each of these questions requires a different mindset in order to be answered correctly – or in UX research terms, a different methodology.
Starting off with the “What” question, you’re asking users to pinpoint specific elements and to give you close-ended answers. This is the ideal scenario for quantitative methodology. In usability testing, surveys are very effective at providing quantitative raw data. Alternatively, A/B testing is also effective, as it provides quantitative answers with behavioural connotations.
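To make the “What” concrete, here is a minimal sketch of how quantitative survey answers might be tallied to reveal the most-used features. The responses are entirely hypothetical, stand-in data for illustration:

```python
from collections import Counter

# Hypothetical survey responses to the "What" question: each
# participant selected the features they use most in the shop.
responses = [
    ["search", "wishlist"],
    ["search", "checkout"],
    ["search", "reviews", "checkout"],
    ["wishlist", "checkout"],
]

# Tally how often each feature was selected across all participants.
feature_counts = Counter(f for picks in responses for f in picks)

# Print features from most to least selected.
for feature, count in feature_counts.most_common():
    print(f"{feature}: {count}")
```

Even a simple tally like this gives you the close-ended, quantitative raw data that the “How” and “Why” methods can then build on.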
Both methods can be strengthened even further by observing your sample’s general behavioural patterns: by analysing “How” your users interact with your solutions, you can deduce pain points, gauge levels of appeal and identify potential fixes. Usability tests – when powered by a platform that records your users’ interactions – are among the most effective methods for drawing meaningful observations.
Closing the triangle, let’s dive right into the “Why”. You have quantitative data and expert observations based on your users’ behaviour. You now need to add depth and bolster the validity of your findings by giving your users a voice through attitudinal qualitative methods – the “Why”.
Indeed, the key objective of any qualitative study is to understand your users’ motivations, opinions and ways of thinking when they engage with your solutions. By asking users what they like or dislike, or what they think about your solution, with open-ended questions in an interview-like format, you give them the freedom to go off-script and give you much more information. You can find more tips on setting up an interview-like format in a remote usability testing scenario here.
By creating a UX study with What, How and Why methodologies, you can obtain powerful information nuggets that will help you define insights backed by considerable quantitative and qualitative proof.
And with Sonar, you can set up remote usability tests with all these components! Set up your study with survey/quantitative questions (What), make observations based on your users’ behaviour (How), label and earmark them to a specific user or video section, and give a voice to your users through open-ended questions, either structured or unstructured (Why).
Furthermore, with our platform you can, among other things:
– Present solutions at every stage of development to customers and gather first impressions, levels of appeal, and suggestions for improvement
– Get quantifiable insights on users’ attitudes through customisable survey question templates
– Leverage open qualitative questions to assess which aspects customers most like or dislike about each feature and why
– A/B test multiple solutions together to draw out comparisons and preferences
Want to know how to understand your customers even better? Just click the button below!