NCME celebrates 75 years: An interview with Professor Derek Briggs
- Author: Statistics Views
- Date: 29 Dec 2013
- Copyright: Image appears courtesy of University of Colorado Boulder
The National Council on Measurement in Education celebrates its 75th anniversary this year. To commemorate this event, as reported earlier in the year, Educational Measurement: Issues and Practice has put together a virtual issue looking back at some of the greatest changes and developments in the field of educational measurement, which remains available to read until the close of the year.
As the anniversary draws to a close, Statistics Views talks to the Editor-in-Chief Professor Derek Briggs, Professor and Program Chair, Research & Evaluation Methodology (REM) at the School of Education of the University of Colorado, about the Council’s anniversary and the important role of statistics in education.
1. Congratulations on the National Council on Measurement in Education reaching its 75th anniversary. How did you first become involved with the Journal, Educational Measurement: Issues and Practice?
Thank you. I first became involved with EMIP in the way that most NCME members become involved—as a reader. Then, my involvement changed as I became both a reader and occasional reviewer. The next stage was to have my own work published in EMIP and I also served on the editorial board from 2007-2009. I was nominated to serve as editor in the spring of 2012 and now here I am!
2. What makes Educational Measurement different from other journals in the field?
A distinguishing feature of EMIP is the focus on “issues and practice” and how they relate to the field of educational measurement. For any article submitted to EMIP I ask “what are the real-world implications?” There are other journals in the field where it is perfectly appropriate to publish manuscripts with the primary aim of building or challenging ideas that would only make sense to a very select group. I have published articles like that myself, where my primary audience is fellow psychometricians. (Note: a “psychometrician” is someone you can think of as a statistician who works with data from educational assessments or tests. Psychometrics can be defined as “the study of mental measurement.”) The aspiration of EMIP is to publish manuscripts with a much broader target audience. To that end we look for manuscripts that address big-picture issues in educational measurement with the potential to generalize beyond the setting of a single study. Another important distinction that I am trying to emphasize as editor is timeliness. EMIP should be the place to go for hot-topic debates prompted by changes to laws and shifts in policies that have an impact on educational measurement. At some journals it can take between 6 months and a year from first submission to a decision letter. At EMIP, if an important manuscript comes my way I want to make it possible for that manuscript to be out in the public domain within months, not years.
3. Who should be reading the Journal and why?
Well, certainly every member of NCME should be reading this journal. This membership is itself quite broad, and includes: psychometricians and test developers working in the testing industry, psychometricians working in academic settings, staff in schools, districts and state departments of education, researchers and analysts at universities and think-tanks, legislators and their staff, and interested citizens domestically (in the US) and internationally. However, not everyone in the roles just described is a member of NCME—they should be reading the journal as well! They should be reading the journal for some of the reasons I mentioned above. Specifically, they should expect EM:IP to keep them abreast of big-picture issues in the field. It should help people feel they are part of a professional, scholarly community, and it should facilitate debate and introspection.
4. There is current debate about the mechanics of peer review. What can a new author expect from the Journal’s policy?
Here’s what I see as the debate: there is a lot of arbitrariness in the peer review process. Peer reviewers are not randomly sampled but chosen based on the perception that they are qualified to evaluate the subject of the manuscript. But what happens when a manuscript takes a critical perspective on existing practice? It would be natural to have at least one (or more) peer reviewer who represents existing practice, and such reviewers are likely to be defensive. At this point instead of reviewing the manuscript on its own merits, the peer reviewers may well launch into a rebuttal. Some reviewers don’t always seem to appreciate that their role is not so much to critique the conclusion reached in a manuscript, but to critique the evidence and argument used to reach a given conclusion. My policy is on the one hand to try to find high quality reviewers and have their perspective drive the review process, but, on the other hand, I play the role of reviewer as well, not just as a “vote counter.” I read every submitted manuscript that gets sent out for peer review, and if I think the reviewers have missed the mark, I will confer with my Associate Editors and respond accordingly. So a prospective author submitting to EM:IP can be assured that their manuscript will receive a careful review and a decision letter that is constructively critical. A prospective author will never get a decision letter from me that simply says “Please see the comments from the reviewers and address all of them in your revision.” Who needs an editor for that?
6. What do you enjoy most about being Editor?
Making things better. I love seeing a manuscript progress from initial submission through revision to its final version knowing that I was able to play a role in taking a good idea and bringing it into sharper focus. Because of this I take a lot of pride in my decision letters. Even when I reject a manuscript I like to think that my feedback will give authors new ideas that can help them push their research in a positive direction. I feel like I’m having an influence on my field one manuscript at a time.
6. What are your main priorities/objectives for the Journal in the year ahead?
• Get even more efficient in turnaround time from initial submission to decision letter. Right now the average is about 3 months. In the past it was more like 4 to 5 so this is already an improvement. But I think I can get it down to 2.
• I want to see EM:IP transition to a much greater online interface with each published article. For example, we should have links to each article that contain more detailed and interactive information. In this next year I plan to pick a feature article for each issue and have the lead author make a video abstract summarizing not only what is in their article but also what they are doing now.
• Another example is the lack of colour graphics in the hard copy of the journal. That limitation doesn’t exist online. Pretty soon I think the hard copy of the journal will just serve as the advertisement that prompts readers to visit a website where they can get the deluxe version. In the next year I plan to make the cover graphics of each new issue available in colour at the EMIP website.
7. The Journal is sponsored by National Council on Measurement in Education. As Journal Editor, how does the relationship between the Journal and the Council work?
NCME gives the editor pretty free rein to impart his/her vision for EMIP while staying within certain parameters of the journal’s mission. NCME has a publication committee, and multiple members of that committee also happen to serve on my editorial board. When I want to make changes to things like author guidelines (for example, I recently imposed a page limit on manuscript submissions that didn’t previously exist), I just let the committee know. So far they have been very supportive.
8. Please tell us more about the virtual issue that has been put together to celebrate the 75th anniversary.
I chose the articles in this special issue after going through each issue of EM:IP one by one starting with the inaugural issue in 1982. I looked for themes that kept reappearing over the years and articles that in my own reading struck important notes. I chose 2007 as a cutoff since issues after this date are still likely to be "fresh" in people's minds and it is hard to judge the importance and relevance of some of the more recent articles. It was very hard to settle on a collection of 30 articles out of the ~400 that have been published to date, and this was not intended to represent the "best" EM:IP articles so much as it is intended to capture some important trends in educational measurement since the 1980s. The articles were organized by the following topic areas:
1) Debates & Controversies in Educational Measurement
2) Foundational Conceptions and Misconceptions about Educational Measurement
3) Teaching and Assessment
4) Using Tests for Educational Accountability
5) Performance Assessment
6) Computer-based Testing
I’m quite proud of the result, which is currently available for free download even for non-members of NCME at http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1745-3992/homepage/ncme_75th_anniversary_virtual_issue.htm. There is so much that can be learned when we recognize that history tends to repeat itself. So many of the “hot-button” issues of educational measurement were already hot-button issues 30 years ago!
9. From your experience, are there specific challenges in conveying or teaching statistics concepts within education? I ask because your other interests also include critical analyses of the statistical models used to make causal inferences about the effects of teachers, schools and other educational interventions on student achievement.
Teaching graduate students is a great challenge in most fields that teach statistics, particularly in the context of students who are interested in education. A lot of these students have had bad experiences with statistics or mathematics in the past. There is a hurdle to overcome in trying to make students understand why the subject matters and is exciting. If you can find a way to make the subject relevant to their own concerns, then you can overcome that hurdle. You have to be inventive in the way you connect the building blocks of statistical methods to the motivating interests of the students. When I start the course, I ask them “What comes to mind when you hear the word ‘statistics’?” and I give them a chance to tell me their horror stories (if they have any) and then I do what I can to win them over from there. Good statisticians are essentially very adept detectives, and being a good detective can be fun. If you can get students to make this connection, then they start to see the relevance of the topic to their applied interests. In education, the ongoing interest in making causal inferences about the effects of teachers and schools on student achievement (using so-called value-added models) makes it especially important for all educational researchers to be not only literate when it comes to the nuts and bolts of descriptive and inferential statistics, but to have a solid grasp on the fundamentals of experimental design and causal inference.
10. What has been the most exciting development that you have worked on in teaching quantitative methods and policy analysis during your career?
My research to date has focused on methodological issues germane to the measurement and evaluation of growth in student achievement. It turns out that making inferences about student growth is a lot harder than it sounds, just from a measurement standpoint. There is a famous aphorism: “if you want to measure change, don’t change the measure.” Yet in education we really have no choice—the knowledge, skills and abilities we expect students to master don’t stay the same, so the tests we give have to change as well. But if the tests change, just how certain can one be that score “increases” reflect growth as opposed to an easier test? One exciting development over the last 10 years has been a gradual shift in psychometrics toward a recognition that design and analysis have to be two sides of the same coin. So, for example, if you want to make inferences about student growth in any absolute sense, you need tests that are being designed according to hypotheses about student cognition and development. Without these hypotheses, test scores are unlikely to tell us anything that will be useful to students and their parents and actionable for teachers and schools. I think the interest in making inferences about student growth is going to lead to major advances in the way large-scale assessments are being designed and analyzed. Another obviously exciting development is the widespread availability of data and the tools for managing and processing data quickly and efficiently. When I was an undergraduate in college in 1992, I travelled all the way from Minnesota to Washington DC to collect the data I needed for my thesis paper in economics. I still recall having someone from the Bureau of Labor Statistics hand me a brown envelope with computer printouts and floppy disks containing data. The data could only be read with specialized software.
These days, I could obtain the same data in about 10 minutes over the internet, and I could analyze it for free using something like R, or for a relatively cheap cost using Stata or some other commercial software. Data democratization has made it that much easier to teach quantitative methods to my students because chances are, if there is a topic in education you care about, I can find you the data to investigate it.
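As an illustration of the kind of quick, free analysis described above, here is a minimal Python sketch computing descriptive statistics of the sort any no-cost tool (R, Python, or similar) makes trivial. The test scores and variable names below are invented for this example; they are not data from the interview.

```python
import statistics

# Hypothetical test scores for a small group of students (illustrative only)
scores = [62, 75, 75, 81, 88, 90, 94]

# Basic descriptive statistics using only the standard library
mean_score = statistics.mean(scores)      # arithmetic mean
median_score = statistics.median(scores)  # middle value
stdev_score = statistics.stdev(scores)    # sample standard deviation

print(f"mean={mean_score:.2f} median={median_score} stdev={stdev_score:.2f}")
```

The same few lines would work unchanged on a dataset downloaded minutes earlier from a public source, which is the point of data democratization for teaching: the barrier is no longer access or cost, but asking a good question.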
11. Are there people or events that have been influential in your career?
Although many of the qualities that (I hope) make me a good editor (curiosity, meticulousness, never accepting things at face value) are qualities I suspect I was born with, I’ve been very lucky throughout my education and career to have great role models. I have such a great appreciation for sharp, quick-witted thinkers who are great communicators. As an undergraduate at Carleton College these were my economics professors (Scott Bierman, Mark Kanazawa); as a graduate student at UC Berkeley these were the members of my dissertation committee (David Freedman, Mark Wilson, Paul Holland, David Stern); and as a young faculty member these were my more senior colleagues in the field, people like Lorrie Shepard, Henry Braun, Ed Haertel and Bob Brennan. These are all people with a gift for taking complicated topics, stripping away the jargon and communicating the essence with clarity in a way that others could not. I aspire to do the same both as a journal editor and a scholar.