SSI Blog
Women remember faces better than men: eye-tracking research has uncovered a possible reason.

Women are known to be generally better at remembering faces than men. Researchers at Canada’s McMaster University have posited a reason for this.

Their research shows that when looking at a face, women focus on facial features 17 times during a five-second period, while men do so only 10 times in the same span. What this means is that women move their eyes between features more frequently. “The way we move our eyes across a new individual’s face affects our ability to recognize that individual later,” explains Jennifer Heisz, who co-authored the paper on this work. “More frequent scanning generates a more vivid picture in your mind,” she says.

The relevance for researchers doing advertising and other visual recall studies is that setting quotas by gender, and analyzing by this dimension, may be especially important in visual recall research – even if gender is not otherwise relevant to the product being studied, the research topic or the data being collected.
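
For researchers acting on this, a gender breakdown of recall data is straightforward to run. Below is a minimal sketch using pandas and SciPy; the DataFrame and its columns (gender, recalled_ad) are hypothetical placeholders, not data from the McMaster study.

```python
# Minimal sketch: testing whether visual recall differs by gender.
# Column names (gender, recalled_ad) are hypothetical placeholders;
# recalled_ad is 1 if the respondent recalled the ad, 0 otherwise.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "gender":      ["F", "F", "F", "F", "M", "M", "M", "M"],
    "recalled_ad": [  1,   1,   1,   0,   1,   0,   0,   1],
})

# Cross-tabulate recall by gender and test whether recall rates differ.
table = pd.crosstab(df["gender"], df["recalled_ad"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```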

Study results were originally published in Psychological Science, a journal of the Association for Psychological Science.

New Issue of Knowledgelink!


A must-read on mobile! Don’t miss out on the latest issue – read these articles and more inside.

When it comes to mobile research: Mind your language!


It is very rare to see the data collected in multi- or single-coded questions differ between PCs and mobile devices.

In the data set I use to present on the subject, this is also pretty much the case. Exactly the same data is gained doing the survey on a PC as on a mobile device for every item… with the odd exception of three data points.

Now I know this is not because they are in the middle of the list and may have been missed, because we randomised the list; and I know they were all fully visible on the screen, because I checked… In two of the three instances the PC sample recorded a higher reading; in the other it was the mobile sample that was higher. Both samples were quota-controlled to be demographically equal, so what could be going on?

The answer lies in the nature of the items themselves and the relationship between one behaviour (the propensity to take a survey on a mobile device, which may be linked to early adoption) and a second – the one we are trying to measure. The PC sample scored more highly on “visited a museum recently” and lower on “bought a smartphone recently”, so no surprises there.

The conundrum is the third one, “researched a product online” – this was scored more highly by the PC sample. But why would a supposedly early-adopter audience (the smartphone sample) be less likely to research products online? The answer may be simple – because they research their products “on their phone”! They may technically be “online”, but not as far as they are concerned. As ever in research, we need to be really careful to use the same language and the same concepts as the people we are researching.
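
A quick way to decide which PC/mobile gaps deserve this kind of explanation at all is a two-proportion test across the device samples. Below is a minimal sketch using statsmodels; the counts are illustrative placeholders, not the data from this presentation.

```python
# Minimal sketch: is the PC/mobile gap on one item larger than chance?
# The counts below are illustrative placeholders, not the study's data.
from statsmodels.stats.proportion import proportions_ztest

pc_yes, pc_n = 180, 500          # PC respondents answering "researched a product online"
mobile_yes, mobile_n = 140, 500  # mobile respondents answering the same

z, p = proportions_ztest(count=[pc_yes, mobile_yes], nobs=[pc_n, mobile_n])
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests a genuine device effect
```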

So, two lessons from a couple of innocuous-looking bits of data: don’t assume that preventing users on mobile devices from accessing your survey is without risk – you may be adding considerable bias to your survey; and, as your mother used to say, mind your language!

Much Food for Thought for Researchers in US Big Data Working Group Report


CASRO recently shared key points from the 79-page report of the US White House Big Data Working Group, which was released on May 1. The report, CASRO notes, is likely to influence future federal privacy regulation. Among its observations, findings and recommendations:

–The focus needs to move from restricting the collection of personal information to restricting its use and sharing.

–Is notice and consent realistic given the complexity of such legal notices?

–Is stripping data of its links to individuals realistic given advances in re-identification software?

–Are current health privacy laws strong enough to protect individuals from predictive healthcare analysis and its consequences, such as higher premiums or denied coverage? Similarly, with the rise of online education, is student information adequately protected? Further, does surveillance on behalf of law enforcement have a chilling effect on free speech?

–The report criticizes data brokers, whose activities are largely unregulated, and calls for a national law, rather than the current patchwork of state laws, requiring that individuals be notified of data breaches.

Much food for thought here for researchers, who gather and manage masses of sensitive information. Anonymity has always been a keystone of research, but anonymity isn’t realistic in the online world. How will evolving legislation impact the research industry? Expert legal knowledge is becoming an absolute necessity for any company handling data gathered from individuals.

How do you see the issues listed here evolving in the next few years? How are these issues impacting your business today?

 

 

What researchers are reading…

There’s a frightening vision of the future of research in Dave Eggers’ novel “The Circle”. Set on a Google-like campus, this satire about the insidious power of social media in our lives is a bit too close to reality to be a comfortable read. It features a young heroine who is co-opted to further the goals of her company, whose social media empire is intent on taking over the world. As part of their work responsibilities, employees have to answer research questions piped into headsets while they work. To indicate willingness to answer a question, they nod their heads – the signal for the voice in the headset to start asking the research questions. 1,200 questions a day… 1,500… the targets get ever higher. Looking out across acres of cubicles on the gorgeous, hi-tech, light-filled campus, our heroine sees her fellow workers looking like a vast herd of animals, heads nodding in unison all day long.

This is a chilling vision of a totally connected future where, in the cause of reducing crime and opening up the wonders of the world to everyone on the planet, tiny cameras are dropped everywhere across the globe; where, in the interest of open democracy and reducing corruption, politicians “go transparent”, meaning their every word and action is recorded; and where maintaining an adequate social media profile requires an exhausting daily output of comments, “zings” (likes) and breezy e-mails. There’s no time left for personal interaction in the world Eggers creates. In this world, all experiences must be shared, “privacy is theft” and no data can ever be erased.

Despite a few over-the-top plot details, the book is an entertaining, if disturbing, read. The issues it raises are spot-on for our industry today: in the era of mobile research, we now ask our respondents to submit photos and videos to record every minute detail of their experiences; in the era of big data, we match question responses with recorded data in the name of shorter questionnaires and better respondent experiences. Are we creeping towards a research future where we’ll one day ask our own respondents to “go transparent”?

Read the book? What did you think?

Looking for the Hard-to-Reach Professional


One thing we can count on in market research is poor data quality from a participant who is both unfamiliar with the topic and unable to answer the questions put to them. Under these conditions a participant is incapable of providing feedback in the open-ended questions, selects “don’t know” when it is offered, and is forced to choose an inaccurate answer when “don’t know” is not available. This can occur when someone qualifies for a survey that they were not supposed to receive. Some readers may ask themselves, “Why don’t these people simply drop out of the survey?” My answer to that is… why should they? We asked them to join our panel to take market surveys and receive rewards for doing so. We invite them to a specific survey opportunity and then they qualify for the study through a poor screening process. They don’t realize that the study is not for them; in fact, just the opposite – they believe the study is meant for them.

The solution to this problem is to prevent participants from receiving surveys they should not receive, through proper targeting and screening. Proper targeting and screening are a real challenge in a B2B market study. Although suppliers have hundreds, if not thousands, of variables to target on, those targeting criteria are often not in line with the specific population the researcher wants to interview. Why? Because there are so many different types of positions in today’s job market. In the medical field alone there are hundreds or even thousands of job types, and that is only one industry.

Researchers have a clear picture of who they want to interview, but may leave ambiguity in the screener section of their questionnaire or in their communication to their sample provider. It’s easy for suppliers to target workers in a specific industry, but often the business professionals they are looking for exist across fields. It’s easy for suppliers to target based on title, but title may not be representative of the specific population the researcher wishes to study. Let’s face it, each company and industry has its own unique titles for similar responsibilities. For this reason it’s important to have a robust screener section that clearly describes not only the type of position, but the activities that the professional in that position needs to be familiar with.

It’s not enough to assume that we have the right person based on their professional profile. For example, I may believe that the marketing manager at a company of over 3,000 employees has some budget for TV advertising. However, SSI has over 3,000 employees and does not use TV as a form of marketing. For this reason, it is necessary to use the behavior itself (purchasing TV ad time) as one of the criteria in the screener section of the questionnaire. Taking these steps to target the specific population needed for the survey will improve the overall quality of the data – not to mention the participant experience! Happy testing.
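
To make that concrete, here is a minimal sketch of a behavior-based screener in Python. The titles, field names and the qualifies() helper are all invented for illustration, not an actual SSI screener.

```python
# Minimal sketch of a behavior-based screener. Titles vary too much
# across companies to be trusted alone, so the behavior under study
# (purchasing TV ad time) is itself a screening criterion.
# All field names here are hypothetical.

RELEVANT_TITLES = {"marketing manager", "media buyer", "brand manager"}

def qualifies(respondent: dict) -> bool:
    """Qualify only respondents who report the behavior being studied."""
    has_relevant_title = respondent.get("title", "").lower() in RELEVANT_TITLES
    buys_tv_ad_time = respondent.get("purchased_tv_ad_time_last_12m", False)
    return has_relevant_title and buys_tv_ad_time

# A marketing manager whose company buys no TV time is screened out.
print(qualifies({"title": "Marketing Manager", "purchased_tv_ad_time_last_12m": False}))  # False
print(qualifies({"title": "Media Buyer", "purchased_tv_ad_time_last_12m": True}))         # True
```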

 


 

 

Understanding the Interaction of Fraud and Incidence in Market Research Business Studies


There’s more concern about fraud in business studies than in consumer studies.

Why is that?

The reason often given is that the higher rewards offered for business studies attract fraudulent respondents. There may be some truth to that, but the primary reason fraud is more prevalent in business studies is that business studies are almost always low-incidence studies. True frauds are a tiny percentage of a well-managed panel, but their impact can be magnified when the incidence is low.

Here’s an example to illustrate the point:

Imagine that 1% of a B2B panel is fraudulent, and that these frauds tend to over-qualify because they lie to get into the market research study. You have a project that needs 100 interviews at 100% incidence. Of the 100 interviews, one will be a fraud. This is unfortunate, but it doesn’t materially affect the data.

Now assume you need 100 interviews at 5% incidence. This means 2,000 people have to start the survey. Of these 2,000, 20 will be frauds (that’s 1%), and they lie to qualify. Now 20 of your 100 interviews are fraudulent, and your data really is going to be affected. Lower the incidence further, say to 2%, and that’s 5,000 starts, of which 50 are frauds… and so it goes on. As the incidence drops, the risk increases dramatically.
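
The arithmetic generalizes neatly. The sketch below keeps the legitimate qualifiers in the denominator, so its percentages run slightly lower than the rounded figures above, but it shows the same dramatic rise as incidence falls; the 1% fraud rate is the assumption from the example.

```python
# Minimal sketch of the arithmetic above. Assumes frauds always lie
# their way past the screener, while legitimate starts qualify at the
# true incidence rate; the post's rounder figures treat all frauds as
# landing inside the 100 completes.

def fraud_share(fraud_rate: float, incidence: float) -> float:
    """Expected fraction of completed interviews that are fraudulent."""
    frauds = fraud_rate                    # frauds qualify with probability ~1
    legit = (1 - fraud_rate) * incidence   # legitimate starts qualify at true incidence
    return frauds / (frauds + legit)

for incidence in (1.00, 0.05, 0.02):
    share = fraud_share(0.01, incidence)
    print(f"incidence {incidence:>4.0%}: ~{share:.0%} of completes are frauds")
```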

The fraud problem is not unique to business studies – it’s the same for any low-incidence study. The solution is to put powerful fraud controls in place throughout the market research process, from recruitment source selection to reward redemption. Examples used by SSI include:

1. Certification for all sources used

2. Source monitoring

3. Phone verification of respondent information given at join stage

4. Digital fingerprinting to prevent duplication

5. Data mining of profiling data (so, for example, a respondent reporting a company with only two computers among 200 employees might raise a red flag)

6. Matching panel join data with information panelists have shared on social media like LinkedIn (with their permission of course)

7. Monitoring throughout a panelist’s lifetime, as they earn credit for good performance

8. Reward claim controls, such as claim delays and verification via phone before rewards can be unlocked

9. Quality controls within the questionnaire – SSI recommends five, with disqualification only if two or more are failed (a simple version of this rule is sketched below)
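
To illustrate the rule in item 9, here is a minimal sketch of a two-or-more-failures disqualification check; the individual checks and their thresholds are hypothetical stand-ins, not SSI’s actual controls.

```python
# Minimal sketch of item 9: several in-survey quality checks, with
# disqualification only when two or more fail, so a single slip doesn't
# eject a good-faith respondent. Check names and thresholds are
# hypothetical, not SSI's actual controls.

QUALITY_CHECKS = {
    "speeding":       lambda r: r["duration_seconds"] < 120,
    "straightlining": lambda r: r["grid_variance"] == 0.0,
    "failed_trap":    lambda r: not r["passed_trap_question"],
    "gibberish_text": lambda r: r["open_end_score"] < 0.2,
    "contradictions": lambda r: r["contradiction_count"] > 1,
}

def disqualified(response: dict, threshold: int = 2) -> bool:
    """Disqualify only when `threshold` or more checks fail."""
    failures = sum(1 for check in QUALITY_CHECKS.values() if check(response))
    return failures >= threshold

sample = {
    "duration_seconds": 95,        # fails the speeding check...
    "grid_variance": 1.4,
    "passed_trap_question": True,
    "open_end_score": 0.8,
    "contradiction_count": 0,
}
print(disqualified(sample))  # False – one failure alone is forgiven
```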

The vast majority of market research study respondents act in good faith – our goal is to catch the tiny minority, while preserving a pleasant respondent experience for everyone else.

 

Subscribe to the SSI Blog for more thought-provoking views, time-proven tips and innovative research insights or visit our corporate home page to find out how we can help you with your next research project.