How do we source our respondents?

We have a proprietary database of over 250,000 people in 79 countries. What typically happens is we create a recruitment screener (most often brand-blind), then post the screener for our participants to apply. Our Project Managers review applicants against your specific screening criteria and hand-pick the respondents best suited for your study. Because our team hand-picks recruits, you won’t find professional respondents appearing over and over again in our studies, and we collect data on the back end so we know when each was last accepted. Respondents also create a profile video when they sign up, and our team uses those videos to get a feel for whether participants are a good fit. If, for some reason, a respondent is accepted to a study and you deem them unqualified, we will replace them free of charge.

Do you handle international studies?

We do! Beyond our database of 250,000 respondents in 79 countries, we have recruiting partners in key regions: Europe, LatAm, the Middle East, and Asia. We translate recruitment screeners, study questions, and study responses (using machines and humans, so the transcripts are accurate). Our local partners also help adapt the study questions so they don’t come across as awkwardly translated. Responses are typically in the country’s local language (our clients tend to prefer that), but we have completed global studies with English-speaking respondents on request.

How do we ensure quality respondents?

We are obsessed with the authenticity of respondents and have a number of ways to ensure they are high quality. Every participant in our database provides their age (not their date of birth, for privacy), gender, and location. They also complete a profile video, which helps our Project Managers identify the most qualified recruits for your study. We track when they apply to and are accepted into studies, so we know they are not professional respondents. We also guarantee the quality of our recruits: we’ll replace any respondent you deem unqualified, free of charge. We QA the incoming responses and will ask respondents to re-record answers that don’t meet our standards, or simply replace them.

What’s the right number of recruits to include in a study?

The short answer is 30-50. All of the academic research we could find points to the same goal for qualitative research: saturation, the point where adding one more recruit does not materially add to the insights generated. We have found that generative studies can succeed with smaller sample sizes, as can studies that include subject matter experts (SMEs’ responses tend to be dense with insight). The culture of the company sponsoring the research can also affect the right sample size; some companies will only look at qualitative studies with a minimum number of respondents.

Can you recruit B2B?

Yes. We have a partnership with a leading B2B recruitment firm and are able to find very specific B2B recruits. It’s not cheap, but our clients have been very pleased with the caliber of professionals we have been able to recruit.

Can you recruit kids?

Yes. For kids 13-17, we require the parent or guardian to appear in the first video response to expressly give their consent prior to their teen’s participation. For kids under 13, we require the parent to be present for all video responses.

Can I invite some of my own recruits into the study?

Yes. You can bring your own recruits to a study in one of two ways: (1) through our DIY platform (fabric.is), you can create and manage your own study, or (2) you can have our Project Managers assist you in creating a study and have them add your recruits. In both cases, our platform automatically sends the study invitation, fulfills incentives, and provides transcripts.

What types of studies is mindswarms mobile video ethnography best suited for?

There are a few ways to answer this question. Our clients tend to fall into two broad job types: messaging development (advertising, marketing, branding) and product development (UX, product marketing). We employ several methodologies in any given study: (1) understanding people’s baseline perceptions and attitudes, (2) having them show and tell on camera, (3) having them view some sort of digital stimulus (ads, packaging, video), and (4) responding from a retail environment. We are still finding the limits of our methodology, but because we are video based, we fit best in studies where seeing for yourself is important.

What are the best types of questions to ask?

See our Resource Center. As a quick tl;dr:

    • Perceptions and attitudes – these questions aim to get a read on baseline perceptions and attitudes from participants.
    • Prompt and react – these questions make use of stimulus material to get the initial, in-the-moment reactions from consumers.
    • Show and tell – these questions ask participants to show something that’s emotionally resonant, or that aligns with their perception of something. These make for awesome theater in a video highlight reel!

Is it possible to ask follow-up questions? IDIs

Yes. IDIs can be discussed at the outset of a project or added on if desired. Please note that we do not recruit just for IDIs; they are used only as a follow-up to a mindswarms study. We have found this has some advantages: (1) the mindswarms study allows us to tailor the discussion guide to findings that bubble up in the first research phase, and (2) the asynchronous portion allows us to tailor the discussion guide somewhat to each participant, rather than using a blanket guide for all.

Can the responses be used for ad campaigns/social media?

Generally, no. Participants retain full rights to their likeness. You would need to look into contacting and compensating each participant you wished to use in public-facing work.

Can a study include more than 10 questions?

No. We find that after 10 questions, participants feel fatigued, lowering the quality of their responses. Ten questions has served as the right balance point for us: longer and we risk fatigue; shorter and we may not hear everything we want from participants. Note that this means you will get 10 minutes of time per participant, which we have found is the average time a person speaks in a 90-minute focus group. We are happy to work with you to maximize the use of the 10 questions!

Can respondents screen share? Hug the laptop

We currently do not support screen sharing. We have a methodology for mobile in-app recording called ‘hug the laptop’. We’re happy to discuss the logistics of this over a call, or offer a demonstration.

How long does it take to conduct a study?

Studies generally take 10-12 days from kickoff to study completion for domestic recruits, and 20-25 days for international. These timeframes are general and can shift depending on the specificity of the recruit. The timelines listed are inclusive of first conversations, proposals, and the actual running of the study. We are also happy to discuss expediting options.

What’s the difference between a top line and a full report?

A topline report is an abbreviated summary of the findings. Most topline reports include the background of the research, an explanation of the methodology, a one-page executive summary, and a few pages of supporting detail for a management summary.

A full report includes a topline as well as detailed findings. Detailed findings are typically presented as a question-by-question summary with supporting verbatims, but can be adapted to other formats (such as journey mapping). A full report also explains how the question-by-question findings hang together, to paint an overall picture.

What do I get for $5,000 (the starting point)?

For $5,000, you receive a full report based on a manual analysis and summary of the participant responses. The process starts with an analysis check-in session once response collection is complete, with updates along the way as the report draft is built out. The report will be shared with you as a draft, which can be adjusted as needed for presentation purposes. This starting point does not include additional analysis exercises such as tally exercises, journey mapping, comparative analysis, or segmentation.

Do you have report samples?

Depending on your research objectives and reporting needs, we can share a report sample with you. We have report samples for standard toplines and full reports, as well as some more creative samples that integrate video and highlight reels.

How long does a report take to create?

Once response collection is complete, it typically takes 5-7 calendar days for an initial report draft. After the initial report draft is delivered, we are happy to work iteratively to refine the report further if needed.

What is our methodology? Patterns/anomalies, AI, Miro

For a typical manual analysis, we start by summarizing the participant responses per study question to draw out patterns across the sample. As we comb through the responses, we also note anomalies to investigate further, reviewing that participant’s responses across the survey as a whole to understand their outlying perceptions. Throughout this process, we may tally, categorize, and build verbatim maps to visualize our synthesis of the responses, which lets us build stories around the patterns and anomalies that arise. We use the Miro platform to collaborate as a team throughout the process and create visual artifacts of the summaries.

Additionally, we use Fabric AI to supplement our findings and to surface additional analysis starting points and directions. The AI combines NLP and sentiment analysis to generate a summary of each study question, serving up videos and verbatims that align with key topics, along with data that captures 8 primary emotions. It also tracks word mentions, when they appeared over time, and their relative priority.
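As a rough illustration of the mention-tracking idea (a minimal sketch, not Fabric AI's actual implementation; the sample segments, stopword list, and function names are invented for the example), counting content words across timestamped transcript segments and normalizing by the total gives a simple "relative priority":

```python
from collections import Counter

# Hypothetical timestamped transcript segments: (seconds into response, text)
segments = [
    (0, "I love the packaging and the colors"),
    (12, "the packaging feels premium"),
    (25, "price matters more than packaging to me"),
]

# Toy stopword list; a real pipeline would use part-of-speech tagging instead
STOPWORDS = {"i", "the", "and", "to", "me", "than", "more", "feels"}

def mention_counts(segments):
    """Count content-word mentions and record when each word first appeared."""
    counts, first_seen = Counter(), {}
    for ts, text in segments:
        for word in text.lower().split():
            if word in STOPWORDS:
                continue
            counts[word] += 1
            first_seen.setdefault(word, ts)
    return counts, first_seen

def relative_priority(counts):
    """A word's share of all content-word mentions across the sample."""
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

counts, first_seen = mention_counts(segments)
priority = relative_priority(counts)
# "packaging" is mentioned 3 times out of 8 content words, so it ranks highest
```

In this toy data, "packaging" recurs in every segment, so it surfaces as the top-priority topic, which is the kind of signal the AI summary uses to pick which videos and verbatims to serve up.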

How much is a video highlight reel? Starting at $3,500

A standard video highlight reel starts at $3,500, which includes collaborative outlining of the video edit (the mindswarms team will initiate a paper edit document) and professional video editing of the content for one 2-3 minute highlight reel. The scope and pricing of the video editing work can be adjusted from there to accommodate extra editing components (such as b-roll, music, length of video edit, or number of video edits).

How long does it take? 5-7 days

Once response collection is complete, it typically takes 5-7 days to get to an initial, rough cut of a standard video edit. The rough cut includes the selected clips cut and edited together. Once the content for the rough cut is approved, the editor will apply final edits to the video (smoothing out audio and transitions, adding b-roll and music).

What’s the process? Paper edit

Once response collection is complete, the team will connect with you to discuss objectives and audience for the video highlight reel, which will help to arrive at a storyline for the video. Once a story/narrative is decided upon, the team will initiate a paper edit document, which is a text outline with selected quotes from the research study. Once the paper edit is complete and approved by you (through a collaborative back-and-forth on the document), the team will brief our video editor on the paper edit and style requirements so that the editor can cut together the rough cut of the video highlight reel.

Are transcripts included?

Yes. Transcripts populate immediately via Google Speech. A human then reviews and corrects these transcripts for higher accuracy within 24 hours. We have found that the human + machine approach lets us deliver extremely accurate transcripts, which help both us and you comb through the responses to uncover key findings.

How are the transcripts created? Human + Machine

We use machine-generated transcripts as a first pass, and these show up nearly instantly; they are created through a Google Speech API. A human then corrects these transcripts for accuracy.
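The flow can be sketched as a tiny model (illustrative only; the class and field names below are invented, not mindswarms' actual schema): the machine draft is served immediately, and the human correction supersedes it once it lands.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transcript:
    """Machine draft first, human correction later (hypothetical model)."""
    machine_text: str                  # near-instant first pass from the speech API
    human_text: Optional[str] = None   # filled in by a reviewer within ~24 hours

    @property
    def text(self) -> str:
        """Serve the best version available at any moment."""
        return self.human_text if self.human_text is not None else self.machine_text

t = Transcript(machine_text="i'd bye this product")
draft = t.text                         # machine pass, shown immediately
t.human_text = "I'd buy this product"
final = t.text                         # human correction supersedes the draft
```

The point of the pattern is that clients never wait on the human pass: the platform always shows the most accurate transcript it has.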

How long does it take to get my transcripts?

Machine-generated transcripts populate nearly instantly after a response is submitted. Human-corrected transcripts are available within 24 hours, and often within 2-3 hours of submission. Note that for international studies, this 24-hour window does not apply: Google Speech transcripts will still populate instantly as a translation from the local language into English (if applicable), but human-corrected transcripts need to go through an additional layer of translation before they populate to the platform.

Do you provide video subtitling?

We are happy to use subtitles in a video highlight reel. Please let us know if this is something you would be interested in, and we can work with you!

We do not offer subtitle options for our in-platform responses. Our transcripts are already very accurate, and we can replace participants whose audio is not up to our quality standards.

Can I download videos? What format?

Each video response can be downloaded individually. If you click a response, you should see an icon to download. This will allow you to select the video, or the video + transcript before downloading your selections! The default video format is .MOV.

Can I view screener responses?

You can see a participant’s screener responses by clicking on any of their video responses, and then selecting “screener” in orange text in the lower left corner of the video response player.

What are score and magnitude?

Score measures how positive (or not) a statement is, and magnitude measures the strength of a statement. So the statement “I’m ok with X” spoken flatly will be fairly neutral in score and low on magnitude, while emphatic claims like “I hate X!” or “I love X!” will be scored quite negatively and positively, respectively, and both will have a high magnitude.
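To make the distinction concrete, here is a toy lexicon-based scorer (purely illustrative; the platform's actual model is far more sophisticated, and the lexicon values below are invented for the example):

```python
# Invented toy lexicon: word -> sentiment intensity in [-1.0, 1.0]
LEXICON = {"love": 0.9, "hate": -0.9, "ok": 0.1}

def score_and_magnitude(statement):
    """Score: net direction in [-1, 1]. Magnitude: total strength, sign ignored."""
    words = statement.lower().replace("!", "").split()
    hits = [LEXICON[w] for w in words if w in LEXICON]
    if not hits:
        return 0.0, 0.0
    score = sum(hits) / len(hits)          # positives and negatives average out
    magnitude = sum(abs(h) for h in hits)  # strength accumulates regardless of sign
    return score, magnitude

score_and_magnitude("I'm ok with X")  # near-neutral score, low magnitude
score_and_magnitude("I hate X!")      # strongly negative score, high magnitude
score_and_magnitude("I love X!")      # strongly positive score, high magnitude
```

A statement mixing "love" and "hate" would average toward a neutral score while still carrying a high magnitude, which is exactly the case the two numbers together disambiguate.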

Can I share the study? Links?

You can share the entire research matrix by copying the URL at the top of your browser so that other people in your org can see responses. You can share individual responses by clicking into them and clicking on “share response”.
Fabric DIY: You can add additional researchers (and give them permissions) via the study editor.

Can I see study progress?

Mindswarms: We will share a link with you to the research matrix once the first 1-2 participants have completed the question set. As more participants complete the study, the research matrix will populate with more completed response sets. No waiting until all responses have been collected to see the research assets!

Fabric DIY: You will see participants in progress as they respond to each of the questions. You’ll be able to know, for example, if somebody has stopped after the fifth question for some time, and be able to follow up with them.

Can I create segments?

Our Project Managers can create segments and sort participants into those segments for you. If you have segments in mind when developing a study, please let us know! We can also add segments at any time. Please note that segmentation is limited to one layer: we can have a segment for “coffee drinker” and one for “non-coffee drinker”, but we cannot have a segmentation like “coffee/tea drinker”, “non-coffee/tea drinker”, “coffee/non-tea drinker”, etc., as this would involve more than one layer.

What does your AI track exactly?

Our AI tracks emotion and sentiment, then maps that data to significant words and phrases pulled from the transcripts. It uses parts of speech and paralanguage (pitch, intonation, pace, etc.) to develop its insights, and was trained on a demographically representative US dataset.
