SSI Blog
The Power of graphical presentation …and a herd of Buffalo

Did you know that “buffalo buffalo buffalo buffalo buffalo buffalo buffalo buffalo” is a legitimate sentence in the English language? Would you like to read an explanation of how this could be true? Since the explanation would include multiple references to subordinate clauses, punctuation and arcane usages of the English language, the answer is no, you probably would just as soon NOT read such an explanation.

However, the infographic (below) illustrating this concept is almost irresistible. The infographic goes step-by-step through the process of arriving at this strange sentence and by the end you thoroughly understand its meaning.


http://www.prooffreader.com/2013/09/english-buffalo-buffalo-buffalo-buffalo.html

OK, it’s a bit of a stretch, even when all has been revealed, and it’s unlikely you’ll be using the sentence any time soon in general conversation. But it’s a good example of how a well-designed infographic can: a) draw us in and engage our interest in something we might have otherwise dismissed; b) succeed in making a confusing concept clear; and c) convince us of something we would never have previously thought could be true.

Have you been unexpectedly convinced of something by a graphical presentation of data?


Use Mechanical Turk Workers for Market Research?

Having been asked to think about M-Turk “workers” as a sample source for market research, I did some googling to catch up on M-Turk. Actually, I “bing’ed”, but I don’t think that’s a verb yet…

I found an article describing “how M-Turk is broken”* which referenced a lemon market, a term used by economists. As a student of economics in my younger days (not a good one, I might add), I was intrigued.

A lemon market is one where the quality of the goods cannot be assessed prior to purchase. In M-Turk you pay for HITs (Human Intelligence Tasks) and will get good workers and poorer workers, with no way of knowing in advance who is who. Because buyers cannot assess quality, they will only pay for the average quality they expect, which drives the price down; the good workers, whose work is worth more than that price, leave the market altogether, leaving only the poor – hence the assertion that M-Turk is “broken”.
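To make that dynamic concrete, here is a minimal sketch of the lemon-market logic. The worker “quality” values are invented purely for illustration and are not drawn from any real M-Turk data.

```python
# Toy illustration of a lemon market: buyers cannot see individual quality,
# so they only offer the average quality they expect. Workers whose work is
# worth more than the offer leave, and the average falls further.
# All quality values below are invented for illustration.

workers = [0.2, 0.4, 0.6, 0.8, 1.0]  # hypothetical value of each worker's output per HIT

round_no = 1
while True:
    offer = sum(workers) / len(workers)            # buyers pay the expected (average) quality
    stayers = [q for q in workers if q <= offer]   # better workers can earn more elsewhere, so they exit
    print(f"Round {round_no}: offer = {offer:.2f}, workers remaining = {len(stayers)}")
    if len(stayers) == len(workers):               # nobody left to drive out: market has settled
        break
    workers = stayers
    round_no += 1
```

Run to the end, only the lowest-quality worker remains – which is exactly the sense in which such a market is “broken”.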

M-Turk Workers May Create Low Quality Market Research Results

How much more so this must be when asking M-Turk workers to undertake market research projects as respondents. With many other HITs there is at least a quantifiable right and wrong answer to the task, so the overall quality of an individual’s work can be assessed post hoc. In market research this is much harder. Sure, you may suspect the overall answer you have got is “wrong” because the proportion of people giving it looks implausible, but which individual people gave the wrong answer and which gave the right one? There is no way to tell. Indeed, what should the answer be? If you knew that, you would not need to do the market research in the first place. Having never used M-Turk I cannot vouch for the accuracy of any data collected via it, but the “problem” of poor quality response must be just as likely there as in an online access panel.


Facebook’s Social Research: Track & Manipulate Your Mood

How does Facebook make you feel?

How often do you read the terms and conditions for communities such as Facebook or Twitter? It is often assumed that your photos and status updates are private, visible only to you and your friends, but in reality third-party researchers are also able to see them. And while most people may come to terms with the idea that someone else is “watching,” what happens when these methods of data collection are used to manipulate how you feel?

Questionable Methods of Data Collection

A recently released research paper is causing quite an uproar among Facebook users. Researchers from Cornell, the University of California and Facebook created a “mood” study in which they manipulated whether the posts a person saw were predominantly “positive” or “negative”, then measured the likelihood that the subject’s own posts would follow suit. They found that subjects exposed to more “negative” posts were slightly more likely to post something negative themselves than those exposed to more “positive” posts. Not only are people outraged that Facebook would use these methods of data collection and “read” their status updates, they are even more upset that Facebook’s social research project may have been responsible for a bad day they may or may not have had in early 2012 as a result of this experiment.

How do Facebook’s methods of data collection make you feel? Is this any different from conducting social experiments offline, where a stimulus is presented to an unaware subject and the outcome is then recorded and measured?

Women remember faces better than men: research via eye tracking software has uncovered a possible reason.

Women are known to be generally better at remembering faces than men, and researchers at Canada’s McMaster University have posited a reason for this.

Their research shows that when looking at a face, women fixate on facial features 17 times during a five-second period, while men do so only 10 times in the same period. In other words, women move their eyes between features more frequently. “The way we move our eyes across a new individual’s face affects our ability to recognize that individual later,” explains Jennifer Heisz, who co-authored the paper on this work. “More frequent scanning generates a more vivid picture in your mind,” she says.

The relevance for researchers doing advertising and other visual recall studies is that setting quotas by gender, and analyzing by this dimension, may be especially important – even if gender is not otherwise relevant to the product being studied, the research topic or the data being collected.

Study results were originally published in Psychological Science, a journal of the Association for Psychological Science.

New Issue of Knowledgelink!


A must-read on mobile! Don’t miss out on the latest issue: read these articles and more inside.

When it comes to mobile research: Mind your language!


It is very rare to see the data collected in multi- or single-coded questions differ between PCs and mobile devices.

In the data set I use when presenting on the subject this is also pretty much the case: the survey yields exactly the same data on a PC as on a mobile device for every item… with the odd exception of three data points.

Now I know this is not because they are in the middle of the list and may have been missed out, because we randomised the list, and I know they were all fully visible on the screen, because I checked… In two of the three instances the PC sample recorded a higher reading, in the other one it was the mobile sample that was higher. Both samples were quota controlled to be demographically equal, so what could be going on?
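Before reaching for a behavioural explanation, it is also worth checking that a gap like this is bigger than ordinary sampling noise would produce. Here is a minimal sketch of that check using a two-proportion z-test; the counts are invented for illustration, not the actual study data.

```python
from math import sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two independent sample proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)                # pooled proportion under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    return (p_a - p_b) / se

# Hypothetical counts: 240 of 600 PC respondents vs 180 of 600 mobile respondents
# ticked "researched a product online".
z = two_proportion_z(240, 600, 180, 600)
print(f"z = {z:.2f}")  # roughly, |z| > 1.96 means the gap is unlikely to be sampling noise alone
```

When differences survive a check like this across demographically matched samples, the explanation has to lie somewhere else.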

The answer lies in the nature of the items themselves and the relationship between one behaviour (the propensity to take a survey on a mobile device, which may be linked to early adoption) and a second – the one we are trying to measure. The PC sample scored more highly on “visited a museum recently” and lower on “bought a smartphone recently”, so no surprises there.

The conundrum is the third one, “researched a product online”, which was scored more highly by the PC sample. But why would a supposedly early-adopter audience (the smartphone sample) be less likely to research products online? The answer may be simple: because they research their products “on their phone”! They may technically be “online”, but not as far as they are concerned. As ever in research, we need to be really careful that we use the same language and the same concepts as the people we are researching.

So, two lessons from a couple of innocuous-looking bits of data: don’t assume that preventing users on mobile devices from accessing your survey is without risk, because you may be adding considerable bias to your results; and, as your mother used to say, mind your language!

Much Food for Thought for Researchers in US Big Data Working Group Report


CASRO recently shared key points from the 79-page report of the US White House Big Data Working Group, which was released on May 1. The report, CASRO notes, is likely to influence future federal privacy regulation. Among its observations, findings and recommendations:

–The focus needs to move from restricting the collection of personal information to restricting its use and sharing.

–Is notice and consent realistic given the complexity of such legal notices?

–Is stripping data of its links to individuals realistic given advances in re-identification software?

–Are current health privacy laws strong enough to protect individuals from predictive healthcare analysis and its consequences for premiums and coverage? Similarly, with the rise of online education, is student information adequately protected? Further, does surveillance on behalf of law enforcement have a chilling effect on free speech?

–The report criticizes data brokers, whose activities are largely unregulated, and calls for national rather than the current state-by-state laws requiring that individuals be notified of data breaches.

Much food for thought here for researchers, who gather and manage masses of sensitive information. Anonymity has always been a keystone of research, but anonymity isn’t realistic in the online world. How will the evolving legislation impact the research industry? Expert legal knowledge is becoming an absolute necessity for any company handling data gathered from individuals.
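To see why “anonymous” data is less anonymous than it sounds, here is a minimal sketch on a made-up five-record dataset: even with names removed, a handful of ordinary quasi-identifiers is often enough to single out an individual record. All fields and values are invented for illustration.

```python
from collections import Counter

# Invented "de-identified" survey records: names removed, but a few
# ordinary quasi-identifiers (postcode, birth year, gender) retained.
records = [
    {"postcode": "10001", "birth_year": 1985, "gender": "F"},
    {"postcode": "10001", "birth_year": 1985, "gender": "M"},
    {"postcode": "10002", "birth_year": 1990, "gender": "F"},
    {"postcode": "10003", "birth_year": 1978, "gender": "M"},
    {"postcode": "10003", "birth_year": 1978, "gender": "F"},
]

# Count how many records share each combination of quasi-identifiers.
combos = Counter((r["postcode"], r["birth_year"], r["gender"]) for r in records)
unique = sum(1 for n in combos.values() if n == 1)
print(f"{unique} of {len(records)} records are pinned down uniquely "
      "by postcode + birth year + gender alone")
```

Anyone who already knows those three facts about a respondent, from a voter roll, a social profile or a data broker, can re-attach the “anonymous” responses to a name – which is exactly the re-identification concern the report raises.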

How do you see the issues listed here evolving in the next few years? How are these issues impacting your business today?