The short answer is 30-50. All of the academic research we could find on the topic points to saturation as the goal of qualitative research: the point where adding one more recruit does not add materially to the insights generated. We have found that generative studies can succeed with smaller sample sizes, as can studies that include subject matter experts (SMEs’ responses tend to be quite dense in their insight). We have also found that the culture of the company sponsoring the research can affect the right sample size; some companies will only look at qual studies with a minimum number of respondents.
Yes. For kids 13-17, we require the parent or guardian to appear in the first video response to expressly give their consent prior to their teen’s participation. For kids under 13, we require the parent to be present for all video responses.
There are a number of ways to answer this question. Our clients tend to fall into two broad job types: messaging development (advertising, marketing, branding) and product development (UX, product marketing). We employ several methodologies in any given study: (1) understanding people’s baseline perceptions and attitudes; (2) having them show and tell on camera; (3) having them view some sort of digital stimulus (ads, packaging, video); and (4) responding from a retail environment. At this point, we are still finding the breaking point for our methodology, but because we are video based, we fit best in studies where seeing for yourself is important.
See the Resource Center. As a quick tl;dr:
Yes. IDIs can be discussed at the outset of a project, or added on if desired. Please note that we do not recruit just for IDIs – IDIs are used only as a follow-up to a mindswarms study. We have found that this has some advantages: (1) the mindswarms study allows us to tailor the discussion guide to findings that bubble up in the first research phase; (2) the asynchronous portion allows us to tailor the discussion guide somewhat to each participant, rather than using a blanket discussion guide for all.
Generally, no. Participants retain full rights to their likeness. You would need to look into contacting and compensating each participant you wished to use in public-facing work.
No. We find that after 10 questions, participants feel fatigued, lowering the quality of their responses. 10 questions has really served as the perfect balance point for us – longer and we risk fatigue; shorter and we may not get everything we want to hear from participants. Note that this means you will get 10 minutes of time per participant – which we have found is the average time a person speaks in a 90-minute focus group. We are happy to work with you to maximize the use of the 10 questions!
We currently do not support screen sharing. We have a methodology for mobile in-app recording called ‘hug the laptop’. We’re happy to discuss the logistics of this over a call, or offer a demonstration.
Studies generally take 10-12 days from kick off to study completion for domestic recruits, and 20-25 days for international. These timeframes are very general, and can be shifted around depending on the specificity of the recruit. The timelines listed are inclusive of first conversations, proposals, and the actual running of the study. We are also happy to talk about expediting options.
A topline report is an abbreviated summary of the findings. Most topline reports include the background of the research, an explanation of the methodology, a one-page executive summary, and a few pages of supporting detail for a management summary.
A full report includes a topline as well as detailed findings. Detailed findings are typically outlined as a question by question summary with supporting verbatims, but can also be adjusted to other formats (such as journey mapping). A full report will also look to explain how the question by question findings hang together, to paint an overall picture.
For $5,000, you would receive a full report deliverable from a manual analysis and summary of the participant responses to the study. This process starts with an analysis check-in session once response collection is complete, with updates along the way as the report draft is being built out. The report deliverable will be shared with you as a draft, which can be adjusted as needed for presentation purposes. This starting point does not include additional analysis exercises such as tallies, journey mapping, comparative analysis, or segmentation.
Depending on your research objectives and reporting needs, we can share a report sample with you. We have report samples for standard toplines and full reports, as well as some more creative samples that integrate video and highlight reels.
Once response collection is complete, it typically takes 5-7 calendar days for an initial report draft. After the initial report draft is delivered, we are happy to work iteratively to refine the report further if needed.
For a typical manual analysis, we start by summarizing the participant responses per study question to draw out any patterns across the sample. As we comb through the responses, we also note anomalies to investigate further by reviewing that participant’s responses across the survey as a whole to understand their outlying perceptions. Throughout this process, we may utilize tallies, categorizations, and build out verbatim maps to visualize our synthesis of the responses, which allows us to build stories around the patterns and anomalies that arise. We utilize the Miro platform to collaborate as a team throughout the process and create visual artifacts of the summaries.
Additionally, we utilize Fabric AI to supplement our findings and investigate any additional analysis starting points and direction that the AI output might provide. The AI is a combination of NLP and sentiment analysis that generates a summary of each study question, serving up videos and verbatims that align with key topics, and data that captures 8 primary emotions. It also captures mentions, when words appeared over time, and their relative priority.
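As a rough illustration of what tracking "mentions over time" and "relative priority" can mean in practice, here is a minimal sketch that tallies word occurrences across timestamped transcript segments. The data and function names are made up for illustration; this is an assumption about the general technique, not Fabric's actual pipeline.

```python
# Hypothetical sketch of tallying "mentions over time" from timestamped
# transcript segments (made-up sample data, not Fabric's actual code).

segments = [  # (timestamp in seconds, transcript text)
    (0, "I love the packaging"),
    (30, "the packaging feels premium"),
    (60, "price matters more than packaging"),
]

def mentions_over_time(segments, word):
    """Count occurrences of `word` in each timestamped segment."""
    return [(t, text.lower().split().count(word.lower())) for t, text in segments]

def relative_priority(segments, words):
    """Rank tracked words by total mentions across the whole study."""
    totals = {w: sum(n for _, n in mentions_over_time(segments, w)) for w in words}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(mentions_over_time(segments, "packaging"))  # one entry per timestamp
print(relative_priority(segments, ["price", "packaging"]))
```

A real system would layer sentiment and emotion classification on top of counts like these; the sketch only shows the frequency-over-time idea.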
A standard video highlight reel starts at $3,500, which includes collaborative outlining of the video edit (the mindswarms team will initiate a paper edit document) and professional video editing of the content for one 2-3 minute highlight reel. The scope and pricing of the video editing work can be adjusted from there to accommodate extra editing components (such as b-roll, music, length of video edit, or number of video edits).
Once response collection is complete, it typically takes 5-7 days to get to an initial, rough cut of a standard video edit. The rough cut includes the selected clips cut and edited together. Once the content for the rough cut is approved, the editor will apply final edits to the video (smoothing out audio and transitions, adding b-roll and music).
Once response collection is complete, the team will connect with you to discuss objectives and audience for the video highlight reel, which will help to arrive at a storyline for the video. Once a story/narrative is decided upon, the team will initiate a paper edit document, which is a text outline with selected quotes from the research study. Once the paper edit is complete and approved by you (through a collaborative back-and-forth on the document), the team will brief our video editor on the paper edit and style requirements so that the editor can cut together the rough cut of the video highlight reel.
Yes. Transcripts populate immediately via Google Speech. These transcripts are then looked over by a human and corrected for higher accuracy within 24 hours. We have found that the human + machine approach allows us to deliver extremely accurate transcripts, which help us and you comb over the responses to uncover key findings.
We use machine generated transcripts as a first pass, and these show up nearly instantly. The machine generated transcripts are created through a Google Speech API. These transcripts are then corrected by a human for accuracy.
Machine generated transcripts populate nearly instantly after a response is submitted. Human corrected transcripts are available within 24 hours, and often within 2-3 hours of response submission. Note that for international studies, this 24 hour window does not apply. Google Speech transcripts will still populate instantly as a translation from local language to English (if applicable), but human corrected transcripts need to go through an additional layer of translation before they populate to the platform.
We are happy to use subtitles in a video highlight reel. Please let us know if this is something you would be interested in, and we can work with you!
We do not offer subtitle options for our in-platform responses. Our transcripts are already very accurate, and we can replace participants whose audio is not up to our quality standards.
Each video response can be downloaded individually. If you click a response, you should see an icon to download. This will allow you to select the video, or the video + transcript before downloading your selections! The default video format is .MOV.
You can see a participant’s screener responses by clicking on any of their video responses, and then selecting “screener” in orange text in the lower left corner of the video response player.
Score measures how positive (or not) a statement is, and magnitude measures the strength of a statement. So the statement “I’m ok with X” spoken flatly will be fairly neutral in score and low on magnitude, while emphatic claims like “I hate X!” or “I love X!” will be scored quite negatively and positively, respectively, and both will have a high magnitude.
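As a concrete (hypothetical) illustration of how score and magnitude can diverge, here is a toy lexicon-based sketch, not Fabric's actual model: score averages the signed sentiment of recognized words (direction), while magnitude sums their absolute values (strength).

```python
# Toy lexicon-based illustration of score vs. magnitude (hypothetical;
# not the actual model behind the platform's sentiment data).

LEXICON = {"love": 1.0, "hate": -1.0, "ok": 0.1, "great": 0.8, "awful": -0.8}

def sentiment(text: str) -> tuple[float, float]:
    """Return (score, magnitude) for a statement."""
    words = [w.strip("!.,?").lower() for w in text.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    if not hits:
        return 0.0, 0.0
    score = sum(hits) / len(hits)          # signed average: direction
    magnitude = sum(abs(h) for h in hits)  # unsigned sum: strength
    return score, magnitude

print(sentiment("I'm ok with X"))  # near-neutral score, low magnitude
print(sentiment("I hate X!"))      # strongly negative score, high magnitude
print(sentiment("I love X!"))      # strongly positive score, high magnitude
```

The key design point is that magnitude discards sign, so "I hate X!" and "I love X!" land at opposite ends of the score axis but at the same high magnitude.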
You can share the entire research matrix by copying the URL at the top of your browser so that other people in your org can see responses. You can share individual responses by clicking into them and clicking on “share response”.
FABRIC: You can add additional researchers (and give them permissions) via the study editor.
Mindswarms: We will share a link with you to the research matrix once the first 1-2 participants have completed the question set. As more participants complete the study, the research matrix will populate with more completed response sets. No waiting until all responses have been collected to see the research assets!
Fabric DIY: You will see participants in progress as they respond to each of the questions. You’ll be able to know, for example, if somebody has stopped after the fifth question for some time, and be able to follow up with them.
Our project managers can create segments and sort participants into those segments for you. If you have segments in mind when developing a study, please let us know! We can also add segments at any time. Please note that segmentation is limited to one layer – so while we can have a segment for “coffee drinker” and one for “non coffee drinker”, we cannot have a segmentation like: “coffee/tea drinker”; “non coffee/tea drinker”; “coffee/non tea drinker” etc. as this would involve more than one layer.
Our AI tracks emotion and sentiment, then maps that data to significant words and phrases pulled from the transcripts. It uses parts of speech and paralanguage (pitch, intonation, pace, etc.) to develop its insights, and was trained on a demographically representative US dataset.