Online Panel Research: A Data Quality Perspective with author Mario Callegaro


  • Author: Statistics Views
  • Date: 08 May 2014
  • Copyright: Image appears courtesy of Mr Callegaro and Google

This month, Wiley is proud to publish Online Panel Research: A Data Quality Perspective by Mario Callegaro, a survey research scientist based at Google.

This book provides new insights into the accuracy and value of online panels for completing surveys.

Over the last decade, there has been a major global shift in survey and market research towards data collection using samples selected from online panels. Yet despite their widespread use, remarkably little is known about the quality of the resulting data.

This edited volume is one of the first attempts to carefully examine the quality of the survey data being generated by online samples. It describes some of the best empirically-based research on what has become a very important yet controversial method of collecting data. Online Panel Research presents 19 chapters of previously unpublished work addressing a wide range of topics, including coverage bias, nonresponse, measurement error, adjustment techniques, the relationship between nonresponse and measurement error, impact of smartphone adoption on data collection, Internet rating panels, and operational issues.

The datasets used to prepare the analyses reported in the chapters are available on the book's accompanying website.

  • Covers controversial topics such as professional respondents, speeders, and respondent validation.
  • Addresses cutting-edge topics such as the challenge of smartphone survey completion, software to manage online panels, and Internet and mobile ratings panels.
  • Discusses and provides examples of comparison studies between online panels and other surveys or benchmarks.
  • Describes adjustment techniques to improve sample representativeness.
  • Addresses coverage, nonresponse, attrition, and the relationship between nonresponse and measurement error with examples using data from the United States and Europe.
  • Addresses practical questions such as motivations for joining an online panel and best practices for managing communications with panelists.
  • Presents a meta-analysis of determinants of response quantity.
  • Features contributions from 50 international authors with a wide variety of backgrounds and expertise.

This book will be an invaluable resource for opinion and market researchers, academic researchers relying on web-based data collection, governmental researchers, statisticians, psychologists, sociologists, and other research practitioners.

1. With an educational background in survey research and methodology from the University of Nebraska - Lincoln, when and how did you first become aware of statistics as a discipline?

I studied sociology as an undergraduate at the University of Trento in Italy. The sociology and social research department at Trento focuses on quantitative sociology, so I actually started taking courses in statistics right in my first year of college. That was how everything started.

2. You are currently a survey research scientist at Google, London. What led you to Google in the first place?

They recruited me via LinkedIn. Google’s quantitative marketing team was looking for a survey research scientist. They approached me, and I went through the whole interview process.

3. Can you please give us an idea of your everyday work at Google, e.g. the size of the datasets you work on and your collaboration with other departments, statisticians, etc.?

On my team, most of my colleagues are statisticians. Some people, like me, have more of a background in survey or market research. The datasets that I work on are not as huge as the ones my colleagues handle; survey data is generally not that massive. At the same time, I used to work on many international customer satisfaction surveys, which provided good-sized datasets because of their scale and international scope.

4. What is it that Google aims for with customer satisfaction and survey projects?  

Our customer satisfaction goals are to improve the products and listen to customers’ feedback. It’s a way to get a quantitative view of a product from the user’s perspective. For example, I used to work on AdWords customer satisfaction surveys. The goal was to understand what customers wanted and get that feedback to the Google engineering team that was developing the product.

A secondary goal is to listen to new ideas. We always include open-ended questions at the end of our surveys to enable customers to respond more freely. You receive lots of great ideas and suggestions, as well as complaints and challenges with which our customers are struggling.

Mario Callegaro

5. How does your own work at Google become integrated into the products that they produce as an organization?

When working on voice-of-the-customer surveys, my work eventually results in suggestions for product improvements—for example, training areas for the customer support team to focus on. In more traditional market research projects, the knowledge we acquire becomes integrated in the guidance we give advertisers in order to better understand users. And sometimes my work goes directly into a product. We just launched a new ad format, AdWords Consumer Ratings Annotations. That ad format provides consumer opinions collected via Google Consumer Surveys. I was part of the team that designed and launched the product and provided my survey research expertise.

6. Survey responses have been dropping in the past two decades. What are the key challenges in engaging the public in surveys?

That is a very good question. We have seen a drop in response rates in industrialised countries all over the world. It is not a European or American phenomenon; it is definitely international. Response rates have been dropping for years, and many studies show declines even with methods that have historically achieved high response rates, such as face-to-face interviews.

There are many reasons why they are dropping: different lifestyles and probably the fact that there are now many more surveys than in the past. Organisations are increasing their efforts to locate people, make more call-backs, send reminders, or offer incentives to increase response rates. In face-to-face and telephone surveys, it is difficult to contact the respondents, so nonresponse is higher because of noncontact.

In the specific case of web surveys, which are becoming more prominent, there are many challenges to obtaining a good response rate. A lot of web survey invitations are sent via email, so sometimes they end up in spam filters due to poor choices of keywords. A good practice is to always double-check the language you are using in the invitation text. Another challenge is that we receive many more emails than in the past, so the salience of the invitation is lower. That invitation might just go in the queue and be forgotten. At Google, we generally send two reminders because, in our experience, that doubles the response rate from the first invitation.

It is very challenging, and it is a problem for everybody! National statistics offices all over the world have the same issue. It is a constant struggle. The good news is that recent studies show that the relationship between bias and response rates is not as direct and linear as we first thought. It is important to look at response bias as well as the response rate. In other words, we can study whether the non-respondents differ from the respondents on some metrics. Sometimes they do; sometimes they don't.
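That point can be made concrete with the classic deterministic approximation for nonresponse bias. The numbers below are purely hypothetical, a minimal sketch rather than anything from the book or the interview:

```python
def nonresponse_bias(response_rate, mean_respondents, mean_nonrespondents):
    """Approximate bias of the respondent mean relative to the full sample:
    bias ~= (1 - response_rate) * (respondent mean - nonrespondent mean)."""
    return (1 - response_rate) * (mean_respondents - mean_nonrespondents)

# Low response rate, but respondents resemble nonrespondents: small bias.
low_rr_small_diff = nonresponse_bias(0.20, mean_respondents=5.0, mean_nonrespondents=5.1)

# Higher response rate, but respondents differ a lot: larger bias.
high_rr_big_diff = nonresponse_bias(0.60, mean_respondents=5.0, mean_nonrespondents=6.0)

print(low_rr_small_diff, high_rr_big_diff)
```

The first scenario has a third of the second's response rate yet only a fifth of its bias, which is exactly the point: how much respondents differ from non-respondents matters as much as the response rate itself.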

Another problem is the sample size. If you are targeting a niche population that is small in numbers, the response rate is important because you are already starting with few potential respondents. If the response rate is low, you have very few answers to analyse, so the statistical power of the data goes down very quickly.

With small populations, we switch modes and contact them via telephone to receive a higher response rate. In some studies, we are using mixed mode—e.g. we start via phone and then follow up with an email to get some extra boost. We also send an advance letter to everybody a few days before the phone call, to assure them that the survey is legitimate and that it will not take long to complete.

Business surveys can also be challenging because sometimes there are company policies that forbid employees from answering surveys. That's one reason why we always try to send an advance letter. Sometimes we have our dedicated customer support team promote the survey in advance, stressing that whilst the call is made by another company, they work on our behalf. We do everything that we can to increase salience and also to reduce the frequency. For many studies, there is no reason to ask questions on a quarterly basis when we can do it every six months instead.

We use many strategies to ensure a good response rate so we have enough respondents to do analysis at a finer level of detail. If you have a decent sample size but wish to break it down into multiple categories, very quickly you will have very few cases per cell, thus making all your statistical analysis very challenging.

7. Congratulations upon the publication of your book, Online Panel Research: A Data Quality Perspective. Could you please tell us what the book is about?

The process started in December 2010 when Wiley UK asked me to write it. The original idea was that I would write a book about online panels myself. I thought that even if I could have written a book on the topic, a) I wouldn't have the time and b) I would very likely not push the boundaries of scientific knowledge enough. I therefore turned the original request around and asked if I could lead an edited book, teaming up with other experts in the field as associate editors. We issued a call for book chapters in July 2011 and then selected the best ones, ensuring that they were unpublished and innovative. That way, we could produce new knowledge on the topic. One of the agreements with the authors was to ensure that the datasets would be public, so anyone could redo or improve the analyses if they so wish. The goal was to be as transparent as possible—and our book is setting an example. The end game is to really understand the quality of data we obtain from online panels, which are becoming the primary source of research for many companies, academia, and other organizations.

The book tries to answer many questions about the quality of such data. It is amazing that online panels have been used for the past 15 years and yet there is no textbook out there. This is the first book to focus on the quality of online panel data, something that everyone is struggling with—including Google. I am always asked, “How good is this data?” That is a very difficult question to answer. A lot of people have collaborated on this book, and I am very excited about its launch. It was a big enterprise to manage 49 authors! We have six editors and a very international team. We will launch the book at the conference of the American Association for Public Opinion Research (AAPOR) this May. That is the main conference for survey methods. We will see how receptive the research community is to this book.

8. What will be your next book-length undertaking?

In parallel, I was already working on a monograph on web survey methodology with two coauthors. We are submitting the manuscript to the publisher as we speak. The book will be out by the end of the year or early in 2015.

As you can imagine, I have been very busy lately, so I do not have any book projects for the next few years. I want to spend my time going back to primary research and following up on the two books to understand the reaction from readers and the research community. These are my first two books, so I am trying to learn as much as I can from the experience.

9. Your current research areas are web survey design, smart-phone surveys, telephone / cell phone surveys, and questionnaire design in which you have published numerous papers, book chapters and conference presentations. What are you working on currently and what do you hope to achieve through your research?

I am working on mobile web surveys, which are really the future. For some populations and in some countries, collecting data via smartphone surveys or even apps is going to be the only way to do surveys. In emerging countries, most people who were not online before are skipping the laptop/desktop internet connection and going straight from no internet to using a smartphone or a tablet. It's important not to forget the respondent's literacy level, which is needed not just for reading but also for being able to use the technology, whether it is a computer or a smartphone. Some new research is just starting to surface on this topic, and I expect to see far more.

I am also working on survey paradata, which is a very new and exciting topic. Wiley actually published a book on paradata last year, in which I have a chapter on web survey paradata. Paradata can give insights into the quality of the actual survey responses, especially in self-administered modes, because there is no interviewer we can talk to or debrief in order to assess issues with the questionnaire or other factors affecting data quality.

10. Do you think governments and learned societies can do more to raise awareness of statistics and stop people being afraid of numbers in terms of survey research and if so, could a global corporation like Google help?

I really like this question. I myself am very active within a number of organisations such as AAPOR. Since I moved to the UK, I have been a member of the Market Research Society and also of the Royal Statistical Society.

Our Chief Economist, Hal Varian, said in an interview that being a statistician was a sexy job. That got a lot of traction in the press and, I hope, inspired many students to study statistics. Google employs a lot of statisticians, so there is indeed awareness of the importance of the statistician's role. There is now so much data to analyse compared to what was available only a few years ago. I would definitely say that there is more data than we can analyse, or have the time to analyse.

In terms of helping people stop being afraid of numbers, this is a broad challenge that should be tackled from different angles. 2013 was the International Year of Statistics, and many initiatives were organised around the world by statistical associations. There is now a permanent website dedicated to the topic.

Another way is to educate journalists on how to interpret data and recognise the quality of data. I am a big fan of the Data Journalism Handbook, for example.

Finally, not being afraid of numbers is something that should be taught in schools from the very beginning. I see a lot of potential in schools that start teaching kids programming and writing code. Learning a programming language is closely tied to working with numbers. Maybe that is the way to introduce the younger generation to numbers in a more exciting way: by starting to write little apps in school.

11. Are there people or events that have been influential in your career?

The most influential person in my career is Professor Allan McCutcheon, who recruited me to the University of Nebraska-Lincoln to start a Master's and later a PhD in survey research. He completely changed my career. I met him here in the UK when he was teaching at the University of Essex summer school in social science data analysis. We kept in touch for years, and I moved to the US to study for my Master's and then PhD because of him. My first job as a survey research scientist was at Knowledge Networks, which is now part of GfK; two years later, I moved to Google.

In terms of events, the major event that always influences me—and one I always look forward to—is the AAPOR conference. I normally present a paper each time. It is a very exciting conference and despite the fact that it’s held over a few days, I learn so much each time I go. It is an opportunity to meet everyone in the field, including my former classmates and colleagues.

I am very involved within the Association itself, with different initiatives and taskforces. AAPOR produces many cutting-edge and up-to-date reports on new topics for members and non-members, such as the forthcoming reports on Social Media in Public Opinion Research and on Mobile Technologies for Conducting, Augmenting and Potentially Replacing Surveys.
