Although the fundamentals of questionnaire design are fairly universal, taking into consideration key differences in the Japanese market will help ensure you get maximum value and insight from your study.
The Japanese Market Research Association (JMRA) recently released its Internet research quality guidelines, which outline key points that need to be carefully considered when conducting online research in Japan. Jun Uematsu, COO at dataSpring, provides a practical summary for global researchers and consultants conducting studies in Japan.
Jun Uematsu - Chief Operating Officer, dataSpring
Jun has over 15 years of experience in the market research industry with Infoplant / Yahoo Japan Value Insight, Inc. (currently Macromill, Inc.), and most recently he served as Director for dataSpring's parent company, Marketing Applications, Inc. Jun is a sales and business development veteran and seeks to further strengthen dataSpring's brand in Japan and throughout Asia.
What prompted the JMRA to issue these guidelines?
Like other developed countries, Japan saw online research expand rapidly during the early 2000s alongside the growth of PC and internet penetration. The industry's current questionnaire design best practices emerged from that period.
Around 2010, however, there was a major shift from PCs to smartphones, and questionnaire design guidelines did not keep pace. As a result, declining survey participation rates in Japan have become a serious issue. The JMRA put together a committee to study the issue and develop updated guidelines.
What are the key points and recommendations?
Although there are many details, there are three main recommendations to follow:
First, respect survey participants. This recommendation emphasizes that survey participation is voluntary, that respondents' privacy must be respected, and that they should be incentivized in proportion to the time and effort they put into the survey.
This point also reiterates the importance of quality recruiting, reputable panel sources, and good panel management, which help ensure quality responses and combat fraud.
Second, questionnaire design should emphasize the respondent experience. The industry as a whole really pushed the limits of questionnaire length with online surveys. We allowed clients to field surveys that ran 30 or more minutes, and this compromised data quality. It also led to lower participation rates, even for questionnaires of reasonable length.
Now the recommended maximum length of interview (LOI) is 10 minutes, which is in keeping with today's busy consumer. In addition to excessive LOI, surveys are regularly packed with huge matrix/grid questions and multiple open-ends, which cause drop-outs and irregular answers. The recommendation also cautions researchers against including unnecessary screening questions, which can be tedious for the respondent and also contribute to higher drop-out rates.
Third, the survey experience must be mobile-optimized. Survey participation on mobile cannot be ignored: we have found that close to 40% of panelists access surveys via their smartphones, but often the platform or questionnaire is not optimized for mobile. This can result in a compromised experience and poor data. The platform must be flexible enough to accommodate access from multiple devices.
Of course, the survey must take into account mobile questionnaire best practices as well as user experience considerations, such as a consistent scroll direction and minimizing response options to avoid long page scrolls. It is also advisable to avoid including too many videos, photos, or graphics that can be slow or difficult to download.
Based on your own experience at dataSpring, are there other things you'd recommend when designing a questionnaire for the Japanese market?
Yes, we do have additional recommendations. For the Japanese market, questionnaires must be in Japanese. This may seem like an obvious recommendation, but we have clients who want to run their surveys in English to cut down on translation and survey programming costs.
But English proficiency in Japan is not high enough to justify this shortcut. It biases the sample, so the study won't be representative. It also affects panelists' comprehension and leads to lower participation rates, which in the end can raise the cost.
Related to this, we do not recommend using Google Translate or other automated translation for Japan (or really any market). We strive not only to translate properly but also to localize our studies, which helps comprehension and response, improving data quality. Japanese is a very nuanced language, so questions and response options must be short and clear, without too much explanation.
Any other considerations for the Japanese market?
This relates more to data analysis, but in Japan, answers to scale questions tend to be more neutral to negative compared with other countries, especially other Asian countries. In keeping with Japanese culture, respondents are more reserved, so ratings can be somewhat depressed. For this reason, we recommend using even-numbered scales with no neutral mid-point to force a positive or negative response.
Thank you for your time and insights!
Certainly. This is a hot topic in Japan, and I am happy to help clients get the most out of their research here.