[House Hearing, 112 Congress]
[From the U.S. Government Publishing Office]



                    EDUCATION RESEARCH: IDENTIFYING
                     EFFECTIVE PROGRAMS TO SUPPORT
                         STUDENTS AND TEACHERS

=======================================================================

                                HEARING

                               before the

                    SUBCOMMITTEE ON EARLY CHILDHOOD,
                   ELEMENTARY AND SECONDARY EDUCATION

                         COMMITTEE ON EDUCATION
                           AND THE WORKFORCE

                     U.S. House of Representatives

                      ONE HUNDRED TWELFTH CONGRESS

                             FIRST SESSION

                               __________

           HEARING HELD IN WASHINGTON, DC, NOVEMBER 16, 2011

                               __________

                           Serial No. 112-47

                               __________

  Printed for the use of the Committee on Education and the Workforce








                   Available via the World Wide Web:
      http://www.gpoaccess.gov/congress/house/education/index.html
                                   or
            Committee address: http://edworkforce.house.gov




                                _____

                  U.S. GOVERNMENT PRINTING OFFICE
71-259 PDF                WASHINGTON : 2012
-----------------------------------------------------------------------
For sale by the Superintendent of Documents, U.S. Government Printing 
Office Internet: bookstore.gpo.gov Phone: toll free (866) 512-1800; DC 
area (202) 512-1800 Fax: (202) 512-2104  Mail: Stop IDCC, Washington, DC 
20402-0001





                COMMITTEE ON EDUCATION AND THE WORKFORCE

                    JOHN KLINE, Minnesota, Chairman

Thomas E. Petri, Wisconsin
Howard P. ``Buck'' McKeon, California
Judy Biggert, Illinois
Todd Russell Platts, Pennsylvania
Joe Wilson, South Carolina
Virginia Foxx, North Carolina
Bob Goodlatte, Virginia
Duncan Hunter, California
David P. Roe, Tennessee
Glenn Thompson, Pennsylvania
Tim Walberg, Michigan
Scott DesJarlais, Tennessee
Richard L. Hanna, New York
Todd Rokita, Indiana
Larry Bucshon, Indiana
Trey Gowdy, South Carolina
Lou Barletta, Pennsylvania
Kristi L. Noem, South Dakota
Martha Roby, Alabama
Joseph J. Heck, Nevada
Dennis A. Ross, Florida
Mike Kelly, Pennsylvania

George Miller, California, Senior Democratic Member
Dale E. Kildee, Michigan
Donald M. Payne, New Jersey
Robert E. Andrews, New Jersey
Robert C. ``Bobby'' Scott, Virginia
Lynn C. Woolsey, California
Ruben Hinojosa, Texas
Carolyn McCarthy, New York
John F. Tierney, Massachusetts
Dennis J. Kucinich, Ohio
David Wu, Oregon
Rush D. Holt, New Jersey
Susan A. Davis, California
Raul M. Grijalva, Arizona
Timothy H. Bishop, New York
David Loebsack, Iowa
Mazie K. Hirono, Hawaii

                      Barrett Karr, Staff Director
                 Jody Calemine, Minority Staff Director

                    SUBCOMMITTEE ON EARLY CHILDHOOD,
                   ELEMENTARY AND SECONDARY EDUCATION

                  DUNCAN HUNTER, California, Chairman

John Kline, Minnesota
Thomas E. Petri, Wisconsin
Judy Biggert, Illinois
Todd Russell Platts, Pennsylvania
Virginia Foxx, North Carolina
Bob Goodlatte, Virginia
Richard L. Hanna, New York
Lou Barletta, Pennsylvania
Kristi L. Noem, South Dakota
Martha Roby, Alabama
Mike Kelly, Pennsylvania

Dale E. Kildee, Michigan, Ranking Minority Member
Donald M. Payne, New Jersey
Robert C. ``Bobby'' Scott, Virginia
Carolyn McCarthy, New York
Rush D. Holt, New Jersey
Susan A. Davis, California
Raul M. Grijalva, Arizona
Mazie K. Hirono, Hawaii
Lynn C. Woolsey, California














                            C O N T E N T S

                              ----------                              
                                                                   Page

Hearing held on November 16, 2011................................     1

Statement of Members:
    Holt, Hon. Rush D., a Representative in Congress from the 
      State of New Jersey........................................     3
    Hunter, Hon. Duncan, Chairman, Subcommittee on Early 
      Childhood, Elementary and Secondary Education..............     1
        Prepared statement of....................................     3

Statement of Witnesses:
    Fleischman, Steve, deputy executive officer, Education 
      Northwest; director, Regional Educational Laboratory 
      Northwest..................................................    18
        Prepared statement of....................................    19
    Hoxby, Dr. Caroline, Scott and Donya Bommer professor of 
      economics, Stanford University.............................    11
        Prepared statement of....................................    13
    Smith, Dr. Eric J., former Florida Commissioner of Education, 
      Florida Department of Education............................    23
        Prepared statement of....................................    25
    Whitehurst, Dr. Grover J. ``Russ,'' senior fellow and 
      director of the Brown Center on Education Policy, Brookings 
      Institution................................................     5
        Prepared statement of....................................     7

Additional submissions:
    Foxx, Hon. Virginia, a Representative in Congress from the 
      State of North Carolina, questions submitted for the record    45
    Mr. Holt:
        The Learning and Education Academic Research Network, 
          prepared statement of..................................    43
        Report, ``From Compliance to Service: Evolving the State 
          Role to Support District Data Efforts to Improve 
          Student Achievement,'' Internet address to.............    44
    Dr. Hoxby, response to questions submitted for the record....    45
    Chairman Hunter, questions submitted for the record:
        To Dr. Hoxby.............................................    44
        To Dr. Smith.............................................    47
        To Dr. Whitehurst........................................    48
    McCarthy, Hon. Carolyn, a Representative in Congress from the 
      State of New York, policy brief, ``Increasing Participation 
      in No Child Left Behind School Choice,'' Internet address 
      to.........................................................    36
    Dr. Smith, response to questions submitted for the record....    48
    Dr. Whitehurst, response to questions submitted for the 
      record.....................................................    49

 
                    EDUCATION RESEARCH: IDENTIFYING
                     EFFECTIVE PROGRAMS TO SUPPORT
                         STUDENTS AND TEACHERS

                              ----------                              


                      Wednesday, November 16, 2011

                     U.S. House of Representatives

                    Subcommittee on Early Childhood,

                   Elementary and Secondary Education

                Committee on Education and the Workforce

                             Washington, DC

                              ----------                              

    The subcommittee met, pursuant to call, at 10:01 a.m., in 
room 2175, Rayburn House Office Building, Hon. Duncan Hunter 
[chairman of the subcommittee] presiding.
    Present: Representatives Hunter, Petri, Platts, Foxx, 
Hanna, Barletta, Roby, Kelly, Payne, Scott, McCarthy, Holt, 
Davis, and Woolsey.
    Staff present: Jennifer Allen, Press Secretary; Katherine 
Bathgate, Press Assistant/New Media Coordinator; Heather Couri, 
Deputy Director of Education and Human Services Policy; Lindsay 
Fryer, Professional Staff Member; Krisann Pearce, General 
Counsel; Mandy Schaumburg, Education and Human Services 
Oversight Counsel; Dan Shorts, Legislative Assistant; Linda 
Stevens, Chief Clerk/Assistant to the General Counsel; Alissa 
Strawcutter, Deputy Clerk; Brad Thomas, Senior Education Policy 
Advisor; Kate Ahlgren, Investigative Counsel; Daniel Brown, 
Junior Legislative Assistant; John D'Elia, Staff Assistant; 
Jamie Fasteau, Deputy Director of Education Policy; Ruth 
Friedman, Director of Education Policy; Brian Levin, New Media 
Press Assistant; Kara Marchione, Senior Education Policy 
Advisor; Melissa Salmanowitz, Communications Director for 
Education; Laura Schifter, Senior Education and Disability 
Advisor; and Michael Zola, Senior Counsel.
    Chairman Hunter. Good morning. A quorum being present, the 
subcommittee will come to order.
    Welcome to today's subcommittee hearing. I would like to 
thank our witnesses for joining us today. We look forward to 
hearing your testimony.
    Providing more information about educational quality to 
families and communities is essential to improving K-12 schools 
in America. We are here today to discuss the value of education 
research, explore the appropriate level of federal involvement, 
and examine ways to improve current law to provide more 
immediate and relevant data to parents and educators.
    Since the enactment of the Education Sciences Reform Act, 
the federal government has played an important role in 
supporting research and program evaluations, and gathering data 
about educational practice and the nation's schools. Today, 
federal expert panels and research centers offer support to 
state and local organizations that perform education research.
    The responsibility for education research is shared by both 
federal and nonfederal organizations in an effort to examine 
the quality of existing programs, develop and test innovative 
practices, and ensure the effective use of taxpayer dollars.
    The resultant data allows teachers, parents, and officials 
to gain a greater understanding of successful interventions, 
school performance, and student achievement. For example, the 
Institute of Education Sciences established the What Works 
Clearinghouse to provide educators, policymakers, and the 
public with a central and trusted source of scientific evidence 
of what works in education.
    Information from the clearinghouse showed the ``I CAN 
Learn'' curriculum resulted in significant achievement gains 
for 8th grade math students. However, the What Works 
Clearinghouse needs improvement, especially in providing clear 
direction on applying research to classroom practices.
    Education research has also helped us identify programs 
that are not helping students succeed. Particularly in these 
times of trillion dollar deficits and record debt, 
congressional leaders must be careful stewards of taxpayer 
dollars.
    We can all agree on the need to dedicate federal education 
funds to the most effective programs; if research and data show 
a program is not working we should get rid of it. That is why 
my colleagues and I introduced legislation to eliminate more 
than 40 ineffective or duplicative programs as part of our K-12 
education reform package.
    Through the Education Sciences Reform Act and related 
initiatives we have made great strides in assessing the quality 
of K-12 schools, protecting taxpayers' investment, and 
identifying successful education practices. However, as we look 
toward reauthorization of this law we must acknowledge the 
challenges facing education research and the Institute of 
Education Sciences.
    For instance, we must find better ways to help states and 
school districts translate the best research principles into 
classroom practices. Existing research centers designed to 
provide technical assistance to states and districts need to do 
a better job sharing information to help local education 
officials identify and implement the practices and programs 
that are most likely to work for their students.
    Another challenge exists in establishing a more 
collaborative relationship between the director of the 
Institute of Education Sciences and the secretary of 
education. Maintaining the autonomy and independence of the IES 
is extremely important; the director's role must stay 
nonpolitical. However, more communication and data sharing 
between the two entities could ultimately lead to better, more 
effective federal education programs and initiatives.
    The witnesses here today have valuable insight into the 
ways we can ensure education research is beneficial to parents, 
teachers, and students. I look forward to a productive and 
informative discussion this morning.
    I will now recognize my distinguished colleague, Rush Holt, 
for his opening remarks.
    [The statement of Mr. Hunter follows:]

  Prepared Statement of Hon. Duncan Hunter, Chairman, Subcommittee on 
          Early Childhood, Elementary and Secondary Education

    Providing more information about educational quality to families 
and communities is essential to improving K-12 schools in America. We 
are here today to discuss the value of education research, explore the 
appropriate level of federal involvement, and examine ways to improve 
current law to provide more immediate and relevant data to parents and 
educators.
    Since the enactment of the Education Sciences Reform Act, the 
federal government has played an important role in supporting research 
and program evaluations, and gathering data about educational practice 
and the nation's schools. Today, federal expert panels and research 
centers offer support to state and local organizations that perform 
education research. The responsibility for education research is shared 
by both federal and non-federal organizations in an effort to examine 
the quality of existing programs, develop and test innovative 
practices, and ensure the effective use of taxpayer dollars.
    The resultant data allows teachers, parents, and officials to gain 
a greater understanding of successful interventions, school 
performance, and student achievement. For example, the Institute of 
Education Sciences established the What Works Clearinghouse to provide 
educators, policymakers, and the public with a central and trusted 
source of scientific evidence of what works in education. Information 
from the Clearinghouse showed the ``I CAN Learn'' curriculum resulted 
in significant achievement gains for 8th grade math students. However, 
the What Works Clearinghouse needs improvement, especially in providing 
clear direction on applying research to classroom practices.
    Education research has also helped us identify programs that are 
not helping students succeed. Particularly in these times of trillion-
dollar deficits and record debt, Congressional leaders must be careful 
stewards of taxpayer dollars. We can all agree on the need to dedicate 
federal education funds to the most effective programs; if research and 
data show a program is not working, we should get rid of it. That's why 
my colleagues and I introduced legislation to eliminate more than 40 
ineffective or duplicative programs as part of our K-12 education 
reform package.
    Through the Education Sciences Reform Act and related initiatives, 
we have made great strides in assessing the quality of K-12 schools, 
protecting taxpayers' investments, and identifying successful 
educational practices. However, as we look toward reauthorization of 
this law, we must acknowledge the challenges facing education research 
and the Institute of Education Sciences.
    For instance, we must find better ways to help states and school 
districts translate the best research principles into classroom 
practices. Existing research centers designed to provide technical 
assistance to states and districts need to do a better job sharing 
information to help local education officials identify and implement 
the practices and programs that are most likely to work for their 
students.
    Another challenge exists in establishing a more collaborative 
relationship between the Director of the Institute of Education 
Sciences and the Secretary of Education. Maintaining the autonomy and 
independence of the IES is extremely important; the Director's role 
must stay non-political. However, more communication and data sharing 
between the two entities could ultimately lead to better, more 
effective federal education programs and initiatives.
    The witnesses here today have valuable insight into the ways we can 
ensure education research is beneficial to parents, teachers, and 
students. I look forward to a productive and informative discussion 
this morning.
                                 ______
                                 
    Mr. Holt. Thank you, Mr. Chairman, and thank you for the 
hearing.
    And I am pleased to welcome the witnesses here today. I 
think we will learn a lot. I thank you for taking time to 
provide us with guidance on how to use data and research to 
improve educational practices at large and individual student 
performance.
    A few years back I visited an elementary and middle school 
in Union City, New Jersey to learn more about innovations in a 
school district that had been troubled, and how they used data 
to improve student achievement. Union City is what we call in 
New Jersey an Abbott school district. It is a dense urban 
district with an overwhelming majority of English language 
learner students, and yet, to the surprise of many education 
experts, the district is meeting or exceeding state standards 
now.
    They did a number of things to accomplish this, but one 
thing in particular they did was to provide frequent evaluation 
of all students and share the test data immediately with 
teachers. Union City teachers were then able to tailor their 
instruction to meet each student's individual needs. The data 
showed that teachers and administrators could identify trends 
that could be addressed systemically and individually, and this 
approach of continually using data to inform instruction helped 
the students do far better than previous classes of students 
had done.
    Now, each of us thinks we are an expert on education 
because we were students. We have to guard against that, and we 
have to remember that there are things that we can learn about 
how people learn. And we need data, we need evidence, we need 
research to help us understand how people learn and how we can 
improve instruction.
    The Education Sciences Reform Act was intended to provide 
for the improvement of federal education research, statistics, 
evaluation, and dissemination of data to inform education 
policy and education practice. It supports data-driven 
development and supports practitioners in understanding 
research and data from their schools.
    I really believe that it helps educators make decisions 
about their students' learning experiences, and it helps states 
use research to identify successful instructional programs. It 
helps teachers and principals implement proven school 
improvement strategies, and it would help us if we would use 
those data and if we would use that research. The federal 
government plays an important role in supporting the research. 
Educators across the country need reliable research to enable 
them to make evidence-based decisions in the classroom, and 
they need data-driven systems that support instruction.
    In reauthorizing the Elementary and Secondary Education 
Act, I hope we will maintain accountability for acceptable, 
adequate progress for all students. We can be more flexible 
with how students improve, how schools improve, and how we 
empower schools to use their data, if we make more use of 
evidence and drive evidence-based decisions. As the committee 
continues to work on the reauthorization of ESEA and ESRA I 
hope we will continue to pay attention to the role of research 
and data in improving student outcomes.
    I am going to reintroduce soon the Metrics Act to help 
improve data sharing and instruction at the local level. I 
think improved use of data can help all students do better, and 
I hope we will be able to include my legislation in any 
reauthorization of ESEA.
    Strongly held beliefs or ideological commitment should not 
trump data or evidence. If we want to make the best policy we 
need evidence-based research. At the individual level, if we 
really want to hold schools accountable for adequate progress 
for each student they have to use data, and we have to see that 
it is used in the most illustrative way.
    So continued federal investment in educational research 
will be necessary if we are to ensure that all students receive 
a quality education that prepares them for life and further 
study. I hope the testimony today will provide us with some 
recommendations on how we can strengthen the ESRA and the 
federal investment in education research.
    Thank you, Mr. Chairman.
    Chairman Hunter. Thank the gentleman.
    Pursuant to Committee Rule 7c, all subcommittee members 
will be permitted to submit written statements to be included 
in the permanent hearing record, and without objection, the 
hearing record will remain open for 14 days to allow 
statements, questions for the record, and other extraneous 
material referenced during the hearing to be submitted in the 
official hearing record.
    It is now my distinguished pleasure to introduce our panel 
of witnesses. Dr. Grover J. ``Russ'' Whitehurst is the director 
of the Brown Center on Education Policy at the Brookings 
Institution. Previously, he was the first director of the 
Institute of Education Sciences.
    Dr. Caroline Hoxby is the Scott and Donya Bommer Professor 
in Economics at Stanford University, the director of the 
Economics of Education program at the National Bureau of 
Economic Research, and a senior fellow of the Hoover 
Institution and the Stanford Institute for Economic Policy 
Research.
    Mr. Steve Fleischman is the deputy executive officer of 
Education Northwest, formerly known as Northwest Regional 
Educational Laboratory, the organization that has managed the 
REL Northwest Laboratory since 1966. He has also served as 
director of REL Northwest.
    Lastly, Dr. Eric Smith is the former commissioner of 
education for the state of Florida. Dr. Smith is currently a 
consultant to a number of state education chiefs and school 
districts on several education reform projects.
    Before I recognize each of you to provide your testimony 
let me briefly explain our lighting system. When you start it 
will be green, you will have 5 minutes; when you have 1 minute 
left it will turn yellow; and when it turns red we would ask 
you to wrap up as best as you can. After everyone has 
testified, the members will have 5 minutes to ask a question of 
the panel.
    I would now like to recognize Dr. Whitehurst for 5 minutes.

 STATEMENT OF DR. GROVER J. ``RUSS'' WHITEHURST, SENIOR FELLOW 
AND DIRECTOR OF THE BROWN CENTER ON EDUCATION POLICY, BROOKINGS 
                          INSTITUTION

    Mr. Whitehurst. Thank you, Mr. Chairman and members of the 
committee. I really appreciate the invitation to testify, and I 
am pleased that you have such a keen interest in education 
research and reauthorizing ESRA.
    Everyone in the room knows that education is important. It 
has been true in this country throughout its history. In fact, 
before we were a country, one of the first things a small 
colonial village would do, once it had enough kids to require 
schooling, was to set up a school.
    But in an age of globalization and the advent of a 
knowledge-based economy, the imperative for us to educate well 
is stronger than it has ever been. High quality education 
research is critical to the nation's effort to deliver better 
education and a future of opportunities to our citizens. 
Without good evidence we are destined to embrace education 
policies that move us forward, backwards, and sideways, and we 
are not even going to know in which of those directions we are 
heading.
    The Education Sciences Reform Act, which originated in 
this subcommittee in 2001, made great strides towards improving 
the quality and independence of federally sponsored education 
research. Prior to that legislation the federal stewardship of 
education research was widely viewed as a failure.
    Since then, we have seen considerable progress in the 
quality and relevance of that research, and evidence for that 
comes from a number of sources. Let me just give you a very 
short list of some things we know now that we did not know 10 
years ago that are a product of the federal investment in 
education research.
    On teachers, we know that teachers vary dramatically in 
their effectiveness. A very effective teacher, compared to a very 
ineffective one, can create achievement gains for a child in 
1 year that can wipe out a third of the achievement gap between 
white and black students, and you can see the effects of a very 
effective teacher in elementary school all the way into 
adulthood, in terms of college-going and job earnings.
    On the organization of schools, we know now that no excuses 
charter schools in urban areas do a dramatically better job 
than traditional public schools in raising student achievement.
    On standards, we have learned that the quality of state 
standards for what students should know, contrary to what seem 
to be reasonable assumptions, bears no relationship to student 
achievement. The states with the best standards can have low 
levels of achievement relative to states with weak standards, 
and vice-versa.
    On the effectiveness of federally funded education 
programs, we now know that a significant number of those 
programs are not achieving their intended effects.
    And finally, on basic learning and instructional processes 
we have a whole list of things we have learned, including the 
interesting fact that testing students on the content of their 
classroom assignments produces substantially more learning than 
the same amount of time spent restudying the material.
    So I could provide you a much longer list. There are many 
things we know now that we did not know 10 years ago. If 
knowledge is power, we are in much better shape than we used 
to be, and that augurs well for the future.
    ESRA is overdue for reauthorization. I will not take you 
through a to-do list for reauthorizing the law. Let me simply 
say it is a pretty good piece of legislation; I think it needs 
some fine-tuning, and that is about it.
    Finally, I want to address the federal role in 
incorporating the findings from research into program mandates. 
No Child Left Behind uses the phrase ``scientifically-based 
research'' 111 times--I counted--and it includes mandates for 
states and local education agencies to base their practices on 
research. The most extreme example is the now defunct program, 
Reading First, which dictated, at a very granular level, how 
early reading instruction was to be delivered in the classroom.
    It is a fundamental mistake, in my view, for Congress to 
dictate how states and LEAs should use findings from research. 
Research is seldom definitive. Its reflection in statute and 
on-the-ground implementation is typically flawed, and our 
knowledge advances at too fast a rate for legislation to keep 
up.
    Instead of telling states and local education agencies what 
they should do and appealing to research as a justification, in 
my view, Congress should focus on creating incentives for 
practitioners and policymakers to incorporate research findings 
into their programs. Those incentives should be based around 
the performance of schools.
    When my grandfather learned about research findings that 
would help him generate a higher yield from his farm he didn't 
need to be told by government that he had to utilize those 
findings; it was in his self-interest to do so and he did. 
Likewise, education providers will use research when it helps 
them do something for which they are accountable.
    There are two ways to fashion an accountability system that 
will create a demand for research findings. One is top-down 
regulatory accountability, as we have seen in No Child Left 
Behind. Washington says, ``Here are your targets for student 
achievement. If you don't meet them the following things will 
happen.''
    The other approach is bottom-up marketplace accountability. 
Parents are given choices of where to send their kids to 
school. They get good information on school performance. 
Funding follows kids. Schools that aren't performing well lose 
students and funding. The managers of those schools are 
motivated to improve their performance and seek solutions, 
including those from good research.
    I am in favor of a market-based approach to creating demand 
for research and I urge you to consider it in the context of 
the reauthorization of the Elementary and Secondary Education 
Act.
    In conclusion, as a result of rigorous and relevant 
education research, we know much more about what works and what 
doesn't in education than we did a few years ago. We still have 
a long way to go before we know enough to 
assure a good education to every student.
    We have started. We are making progress. I appreciate this 
committee's understanding of the importance of the work and the 
critical role the federal government plays in advancing it.
    Thank you, Mr. Chairman.
    [The statement of Dr. Whitehurst follows:]

Prepared Statement of Dr. Grover J. ``Russ'' Whitehurst, Senior Fellow 
    and Director of the Brown Center on Education Policy, Brookings 
                              Institution

    Mr. Chairman and Members of the Committee: I am Russ Whitehurst. I 
direct the Brown Center on Education Policy at the Brookings 
Institution. Prior to holding my present position, I was the founding 
director of the Institute of Education Sciences within the U.S. 
Department of Education. Before entering government service I had a 
long career as a researcher and academic administrator.
    Thank you for the invitation to testify. I am pleased that there is 
such interest and leadership in addressing the quality of education 
research in America.
    Everyone in this room knows that education is important. I expect 
that all of us have had an experience with a teacher, a class, an 
educational institution, or through independent learning that has 
changed our lives. I certainly have. The American dream of opportunity 
and advancement and the educational system of the United States 
inextricably connected. This has been true throughout our history. 
Indeed, well before the country was founded it was typical for colonial 
villages that had grown to more than a few hundred people to establish 
and fund a public school, with the first dating to 1639. Since that 
time, we have continued to value education and invest in it. But in an 
age of globalization and the advent of a knowledge-based economy, the 
imperative to educate and educate well is stronger than it has ever 
been. The evidence that nations with a better educated populace 
experience higher growth rates is compelling, and during the current 
economic downturn the unemployment rate in the U.S. for young adults 
with just a high school diploma has been three times the rate for those 
with a college degree.
    High quality education research is critical to the nation's effort 
to deliver better education and a future of opportunity to our 
citizens. Without good evidence on the condition of education, what 
works and what does not, fundamental processes of learning and 
instruction, and breakthrough instructional technologies we are 
destined to embrace education policies that move us forward, backward, 
and sideways without even knowing in which of those directions we're 
heading. Without good education research, our approaches to education 
reform are more akin to fashion and fancy--the width of a man's tie or 
the length of a woman's skirt--than to anything that is rational and 
benefits from a systematic examination of evidence.
    Think of what federal investments in agricultural research have 
accomplished. My grandparents were farmers during the transition from 
the way things had always been to farming based on the knowledge 
produced by agricultural research. I remember well my grandfather 
coming back from a meeting with an agricultural extension agent excited 
about what new seeds and new approaches to crop rotation could do for 
the family farm. And because he was an early and eager adopter of 
research-based approaches to farming, he was always ahead of his 
neighbors in wringing a living from his land. These days America is the 
breadbasket for the world, largely because we invested in agricultural 
research and figured out how to disseminate the knowledge derived from 
that research to those who farm. We are on the cusp of a transformation 
of education to an evidence-based field that will have many 
similarities to the changes in agriculture that my grandparents 
experienced. The actions this Committee takes as it shapes the 
federal role in education research will have far reaching effects on 
the quality and productivity of our schools, and through that on our 
economy and future.
    Mr. Chairman, the Education Sciences Reform Act, which originated 
in this subcommittee in 2001 and currently governs the education 
research enterprise at the Institute of Education Sciences within the 
U.S. Department of Education, made great strides towards improving the 
quality and independence of federally sponsored education research. 
Prior to that legislation, the federal stewardship of education 
research was widely viewed as a failure. To that point, in 1999 the 
National Academy of Sciences came to the conclusion that:
    One striking fact is that the complex world of education--unlike 
defense, health care, or industrial production--does not rest on a 
strong research base. In no other field are personal experience and 
ideology so frequently relied on to make policy choices, and in no 
other field is the research base so inadequate and little used.
    Since the National Academies report and as a direct result of 
Education Sciences Reform Act we have seen considerable progress in the 
quality and relevance of education research. Evidence for this comes 
from numerous sources, not the least of which is the Office of 
Management and Budget. OMB's most recent program assessment of the 
Institute of Education Sciences concluded that----
    Since its creation by the Education Sciences Reform Act of 2002, 
IES has transformed the quality and rigor of education research within 
the Department of Education and increased the demand for scientifically 
based evidence of effectiveness in the education field as a whole.
    Let me give you some examples of things we've learned from recent 
education research that are very important to improving America's 
schools and student achievement.
 On teachers
    Teachers vary dramatically in effectiveness--a very effective 
compared to a very ineffective teacher can create achievement gains for 
a child in one year that can wipe out a third of the achievement gap 
between white and black students.
    On-the-job performance is the single strongest predictor of how good a 
teacher will be in the future--almost every other observable 
characteristic of teachers is at best only weakly predictive of how 
they will perform in the classroom. For example, whether they are regularly 
certified or not, were trained in a school of education or not, got a 
high or low score on a certification exam, received a lot of 
professional development or a little, or were mentored as novices or 
not tells us almost nothing about how effective they will be as 
teachers.
    Most professional development programs for teachers are a waste of 
time and money.
 On the organization of schools, choice, and competition
    No excuses charter schools in urban areas do a dramatically better 
job than traditional public schools in raising student achievement.
    Armed with good information on school performance and the ability 
to choose schools, low-income parents choose better schools than the 
ones to which their school district would assign their children, and 
their children do better academically as a result.
    Schools that are subject to competition from other schools for 
students improve more than schools not subject to competition.
 On standards, accountability, and curriculum
    The quality of state standards for what students should know bears 
no relationship to student achievement--states with the best standards 
can have low levels of achievement relative to states with weak 
standards and vice-versa.
    No Child Left Behind-type accountability for schools and districts 
raises student achievement modestly, with the effects focused in 
mathematics in the earlier grades.
    Curriculum choices can make a sizable difference--for example the 
difference between using the most effective vs. the least effective 
elementary school mathematics curriculum, each costing about the same, 
is as much as a third of a year of learning over the course of one 
school year.
    Presently available educational technology programs as used in 
schools do not raise student achievement.
 On the effectiveness of federally funded education programs
    There is a long list of federal education programs that have no 
measurable effect on student outcomes, including:
      The 21st Century Community Learning Centers Program (afterschool)
      Even Start
      Head Start (for outcomes at the end of first grade)
      Upward Bound
      Reading First
 On basic learning and instructional processes
    Spacing the occasions when students are asked to study related 
content rather than massing the study of that content into a short time 
frame remarkably increases learning and retention.
    Testing students on the content of their classroom assignments 
produces substantially more learning than the same amount of time spent 
restudying the material.
    I could provide many more pages of examples of things we know now 
about education that we did not know 15 years ago. If knowledge is 
power, we're in much better shape than we used to be and that augurs 
well for the future.
    The Education Sciences Reform Act is overdue for reauthorization. I 
will not take you through a to-do list for reauthorizing the law, one 
reason being that the National Board for Education Sciences has already 
generated such a list and I'm supportive of the Board's 
recommendations. Let me simply suggest four principles that should 
underlie the reauthorization.
    1. If It Ain't Broke Don't Fix It--There are various groups, with 
the American Educational Research Association being the most prominent, 
that would have you make fundamental changes in the law that appeal to 
their interests. They would, for example, have you change the 
definitions of what constitutes rigorous research and evaluation to 
lower the methodological bar their members confront when trying to 
obtain federal grant money, and they would have you separate the 
National Center for Education Statistics from the Institute of 
Education Sciences in order to create another federal entity that they 
can try to influence and with which they could curry favor. The key 
question you should ask of advocates of any significant changes in the 
language in the bill is, ``What evidence do you have that the 
current language has had bad effects?'' ESRA is a pretty good piece of 
legislation, and most efforts to change it are going to come from 
organizations that want a return to the wonderful days of yesteryear 
when education research produced little of value except funding for 
education researchers.
    2. Independence Is Fundamental--One of the most important advances 
in the Education Sciences Reform Act was to create a greater degree of 
independence between the Department's research arm and the political 
leadership of the Department. I led the Department's research office 
for 8 years under two secretaries and multiple lesser political 
appointees. I had good relationships with the political leadership of 
the Department and we worked well together, but I needed every bit of 
independence granted me by statute along with a fair amount of grit to 
keep my office and its functions from being politicized. I think this 
is in the nature of the beast rather than the personalities or 
political parties involved. Anything you can do to further arm future 
IES directors with independence from political direction will be 
positive. At the same time, the IES director needs to be inside the 
tent in order for the Department to benefit from education research and 
to have education research informed by insights on federal policies.
    3. The Regional Educational Lab Program (the RELs) Is Broken and 
Should be Fixed--The REL program goes back to 1966 and the very first 
Elementary and Secondary Education Act. Since then, year in and year 
out, the RELs have pulled down a significant proportion of the total 
federal investment in education R&D with little to show of value from 
that investment and a lot to show that should be an embarrassment. I 
don't think any amount of tinkering with the legislative language that 
authorizes the RELs or aggressive intervention by the Institute of 
Education Sciences can fix what is wrong with the program. But there is 
a function the RELs are intended to serve that is desperately needed: 
helping states answer questions about the effectiveness and 
productivity of their own education programs using state administrative 
data. The goal of having statewide longitudinal education databases in 
every state was pursued vigorously in the George W. Bush 
administration. The Obama administration has added substantially to 
funding for this effort through the American Recovery and Reinvestment 
Act of 2009. In the near future all states will have data warehouses 
with longitudinal student achievement data linked to a variety of 
education input variables. However, having data available and being 
able to use it are two different things. Only a few states have the 
staff capacity within their state education office to conduct analyses 
of longitudinal data to address policy questions. This means that most 
policy initiatives fly blind, both in original design and subsequent 
appraisal. RELs might be assigned through legislation to carry out this 
task, but they have multiple masters (including the federal government, 
their own boards, the governors and state legislatures in their 
region), they vary substantially in their capabilities, and they have 
no easy way to prioritize among various claims on their resources. It 
would be much better in my view to eliminate the REL program and 
substitute for it a research voucher program for state education 
departments. The current REL budget would be split among states, taking 
some account of state population but making sure that smaller states 
receive a cut of the pie that is large enough to be useful. The states 
could spend their vouchers to contract for research on issues of high 
interest to them. The research plans and products would undergo 
methodological review at IES to assure quality, but would otherwise be 
independent of the Department. The current RELs could compete for this 
work. If they could do the work well they would prosper. If they could 
not they would have to go into another line of work. It is a 
marketplace solution to a problem that has proven intractable to 
previous legislative and administrative solutions.
    4. You Get What You Pay For--Although federal budgetary support for 
education research has increased in the last decade, it remains a 
pittance when compared with levels of investment in research, 
evaluation, and statistics in other areas of the economy. For example, 
more than 40% of the discretionary budget of the U.S. Department of 
Health and Human Services is invested in knowledge production and 
dissemination through the National Institutes of Health, the Centers 
for Disease Control, the Food and Drug Administration, and many other 
operational components. In the U.S. Department of Education, the 
corresponding investment is less than 1%. In education research and 
development, no less than in R&D in health or transportation or 
communication or energy or agriculture, the public gets what it pays 
for.
    Finally, I want to address the federal role in incorporating the 
findings from educational research into program mandates. NCLB uses the 
phrase ``scientifically-based research'' 111 times, and includes many 
mandates for states and local education agencies to base their 
practices on the findings from such research. The most extreme example 
is the now defunct program, Reading First, which dictated how early 
reading instruction was to be delivered at a very granular level based 
on research findings. There is no evidence that children are reading 
better as a result. It is a fundamental mistake, in my view, for 
Congress to dictate how states and LEAs should use findings from 
research. Even if the research were absolutely definitive, which it 
seldom is; even if Congress could translate it into legislation without 
distortion, which it can't; and even if bureaucrats in the U.S. Department of 
Education could implement it unimpeachably, which is unlikely--science 
is dynamic. We shouldn't accept a process that requires Congress to 
rewrite legislation in order to bring education practice in line with 
evolving research findings.
    Instead of telling states and local education agencies what they 
should do and appealing to research as the justification, Congress 
should focus on creating incentives for practitioners and policy makers 
to want to incorporate findings from the best research into their 
programs. Those incentives should be around the performance of schools. 
If those who are responsible for the management of schools are held 
accountable for schools' performance, and if research findings are both 
readily consumable and provide an obvious boost to school performance, 
then the research will be utilized. When my grandfather learned about 
research findings that would give him a leg up in the yield from his 
farm he didn't need to be told by big government that he had to base 
his practices on that research. It was in his self-interest to do so 
because he was accountable for earning a living from his farm. 
Likewise, education providers will use research when it helps them do 
something for which they're accountable.
    There are two ways to fashion an accountability system that will 
create a demand for research findings. One is top-down regulatory 
accountability as we've seen in NCLB--Washington says, ``Here are your 
targets for student achievement. If you don't meet them the following 
unpleasant things will happen.'' The other approach is bottom-up 
marketplace accountability--parents are given choices of where to send their 
children to school and good information on school performance. Funding 
follows the child. Schools that aren't performing well lose students 
and funding. The managers of those schools are motivated to improve 
their performance and seek solutions, including those from good 
research.
    I'm in favor of the market-based approach to creating demand for 
education research and I urge you to consider it in the context of the 
reauthorization of ESEA.
    We know much more about what works and what doesn't in education 
than we did 15 years ago as a result of advances in research, but our 
level of ignorance dwarfs our understanding by orders of magnitude. It 
has been so in the early years of the transformation of other fields to 
evidence-based practice. Moving education to a point at which our 
research base is sufficient to assure a good education for every 
student is the work of a generation, not of a few years. We've started 
and we're moving in the right direction. I appreciate this Committee's 
understanding of the importance of the work and the critical role the 
federal government plays in advancing it.
                                 ______
                                 
    Chairman Hunter. Thank you, Doctor.
    Dr. Hoxby is recognized for 5 minutes.

    STATEMENT OF DR. CAROLINE HOXBY, SCOTT AND DONYA BOMMER 
          PROFESSOR OF ECONOMICS, STANFORD UNIVERSITY

    Ms. Hoxby. Mr. Chairman and members of the committee, thank 
you very much for inviting me to testify. It is an honor.
    The United States faces a bleak future if we do not improve 
the education of our population. The American industries that 
are still growing quickly and exporting are those that are most 
dependent on having educated workers, and if our economy is to 
grow fast enough to solve our fiscal problems we really need to 
have a more productive education sector--in other words, 
achieve more with the same amount of spending.
    The Education Sciences Reform Act greatly transformed 
education research and moved it much closer to the 
successful models that we associate with the National Science 
Foundation and the National Institutes of Health. Crucially, 
ESRA stated that education research should meet high scientific 
standards. Before ESRA, much of U.S. Department of Education-
funded research was wasted on fairly unreliable studies that 
misinformed families and educators.
    The most acute problem prior to ESRA was that Department of 
Education-funded studies often made bold causal claims when 
they used unscientific methods that really could not support 
those claims. Claims of causation (such as stricter teacher 
licensure raising student achievement), were made when the 
study often showed nothing more than a correlation. And in that 
particular case, it turns out that the correlation is not 
particularly informative about the causal effect of teacher 
licensure on achievement.
    I want to make three main points. The first is that 
although IES has greatly improved education research, vigilance 
and continued improvements are needed. We must continue to 
raise, not relax, scientific standards.
    My second point is that the federal government, 
universities, and philanthropic organizations should share the 
responsibility for supporting education research. And my third 
point is that the research functions of the U.S. Department of 
Education should be the functions on which people can most 
easily agree, and this is because all markets work better when 
the people in them are informed, in this case parents, 
students, and educators.
    I think scientific research is one of our best hopes for 
improving American education quickly without our having to 
spend more money.
    So IES has greatly improved education research, but now is the time 
to further raise standards, not relax them. I don't think high 
scientific standards are so ingrained in the education research 
community that IES can afford to take its foot off the gas.
    Since its creation, IES has mainly promoted experimental 
and quasi-experimental methods. These methods tend to produce 
reliable results as long as they are used properly, and they 
are not terribly difficult to use properly.
    Perhaps something like 10 percent of randomization-based 
studies are unreliable, and that number might rise to about 25 
percent with quasi-experimental studies. That the mistakes are 
not corrected by the authors themselves does demonstrate, 
however, that even experimental studies are not dummy-proof.
    Moreover, there are many important questions that cannot be 
answered with experimental studies and the remaining evaluation 
methods require even more expertise to apply. This means that 
IES, if it is to be able to answer all the questions of 
interest to the American people, needs to develop greater 
expertise.
    Expert review panels are the main means by which IES 
maintains high standards. While IES reviewing is not yet quite 
the equal of NSF reviewing, in my experience it has made 
remarkable progress, and I would say that the institute is now 
in a virtuous cycle whereby good standards attract good 
reviewers, and the good reviewers attract good proposals. It is 
a virtuous cycle, but vigilance is needed because that can 
easily break down into a vicious cycle where poor standards 
attract poor proposals for research.
    Another thing that IES is doing well but that requires 
vigilance is data collection. IES has traditionally been very 
strong in collecting survey-based data, but now most top-notch 
education research is migrating away from survey data and 
towards administrative data sets based on schools' records. 
This is because most scientific methods now require the 
completeness and the large scale of administrative records.
    Unfortunately, our country is not at the frontier in this. 
Most South American countries and most Northern European 
countries have better administrative data sets than we do.
    This is a problem because researchers tend to migrate 
towards doing research on the things for which there is the 
best data. For instance, right now I could write a much better 
study of Dutch school choice reforms than I could of American 
school choice reforms. Their data are just better.
    The final thing that IES has done well, and that Dr. Whitehurst 
also mentioned, is to quite courageously contract for rigorous 
studies of high-profile programs. And I would cite as examples 
the evaluation of the D.C. Opportunity Scholarship Program and 
the evaluation of the 21st Century Community Learning Centers. 
It is not acceptable for taxpayers to continue to pay for 
programs year after year after year without having any rigorous 
evidence on whether the programs actually work.
    My second point is that responsibility for education 
research should be shared by the federal government, 
universities, and philanthropic organizations. Each one of 
these entities plays a distinct role.
    I have already mentioned that the federal government should 
collect data, but it also needs to be a supporter of 
university-based scholars, and I will return to this point. 
Philanthropic organizations also play a key role, but unlike 
the federal government, they should be mainly interested in the 
evaluation of speculative, innovative programs, and that is 
because they are using donors' money to evaluate programs 
rather than using taxpayers' money.
    Finally, universities: University-based researchers are 
primarily responsible for developing new scientific methods, 
validating them, and training people to use them. It is 
essential that these researchers interact with the federal 
government on a regular basis so that cutting edge methods are 
known by researchers at IES.
    Another important role for university researchers is to 
work on topics that are currently politically unpopular. If 
they had not been doing research on school choice in the 1990s 
we wouldn't know very much about it today, and they didn't make 
very many friends doing that research.
    Finally, I said that the research functions of the U.S. 
Department of Education should be the functions on which people 
can most easily agree. Americans really do disagree on the 
extent to which the federal government should mandate education 
standards and policies, and many Americans believe that it 
should be families who make most of the choices.
    But really no one argues that anyone--the families, 
educators, or policymakers--would be better off if they had 
less access to reliable information. That means one of our best 
hopes for improving education is to use scientific information 
to spend smarter rather than just spending more. 
Thank you.
    [The statement of Dr. Hoxby follows:]

   Prepared Statement of Dr. Caroline Hoxby, Scott and Donya Bommer 
              Professor of Economics, Stanford University

    Mr. Chairman and Members of the Committee: My name is Caroline 
Hoxby. I am the Scott and Donya Bommer Professor of Economics at 
Stanford University and the Director of Economics of Education at the 
National Bureau of Economic Research, the nation's leading nonprofit 
economic research organization. I served for several years as a 
presidential appointee to the National Board for Education Sciences. 
Over my career, first at Harvard and recently at Stanford, I have 
conducted research on a wide array of topics in elementary, secondary, 
and higher education including class size, charter schools, college 
tuition, school finance, and bilingual education. There is a common 
theme in my research and the research of the many Ph.D.s I have 
trained: we attempt to answer questions in education by applying the 
most reliable, most advanced, most scientific methods to the best 
available data.
    Thank you for the invitation to testify. It is an honor to address 
you, and I believe that today's topics are absolutely key to improving 
education in the United States.
    The United States faces a very bleak future if we do not figure out 
how to quickly and continuously improve the education of our 
population. The American industries that are still growing, thriving, 
and exporting are the industries that are most dependent on educated 
workers. If our economy is to grow fast enough to help solve our fiscal 
crisis, we must have a smarter, more productive education sector, not 
one that is simply more costly.
    If this sounds like an insurmountable challenge, it is only because 
Americans can point to so little educational improvement over the past 
four decades that we, as a nation, have begun to believe that very 
little improvement is possible. Contrast this with medicine or almost 
any other field of applied knowledge. If we were offered the choice 
between a medical procedure that relied on today's knowledge versus the 
knowledge of 1970, we would--all of us--choose today's. We would 
probably be ambivalent about today's schools versus the schools of 
1970.
    The difference between education and medicine is not that 
improvement is impossible in education but possible in medicine. It is 
not that all children are difficult to manage and all patients are easy 
to manage. The difference is that education has not, until recently, 
benefitted from rigorous, scientific research.
    The Education Sciences Reform Act (ESRA) of 2002 greatly 
transformed education research, moving it much closer to the successful 
models used by the National Science Foundation and the National 
Institutes of Health. ESRA stated unequivocally that the Institute of 
Education Sciences (IES) should facilitate research that met high, 
scientific standards in order that it produce reliable results. This 
was the crucial statement. Until ESRA, much of the U.S. Department of 
Education's budget for research was wasted on studies that were widely 
recognized to be unreliable. Not only was taxpayer money wasted, but 
the Department unintentionally endorsed and promoted poor research 
methods by funding low-standard studies.
    Prior to ESRA, there were two particularly acute problems with 
Department of Education-funded studies. The first was that they often 
employed subjective measures of what schools did and what students 
achieved. If a study relies on subjective measures, a researcher's 
ideology often dictates what the data says. The second and more 
pervasive problem was that Department-funded studies often made bold 
causal claims despite the fact that they used methods that could not 
possibly support such claims. Claims of causation--such as ``stricter 
teacher licensure rules raise student achievement''--were made when the 
study showed nothing more than a correlation. For instance, in my 
example, schools with a higher percentage of teachers who are licensed 
are schools that serve students who come from more advantaged 
backgrounds. These students tend to have higher achievement regardless 
of how their teachers are licensed. It turns out that the correlation 
between teacher licensure and achievement tells us literally nothing 
about the causal effect of teacher licensure on achievement. In short, 
prior to ESRA, Department of Education-funded research routinely 
provided misinformation to American families and schools.
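    To make the logic of the licensure example concrete, here is a 
minimal simulation sketch. It is purely illustrative and assumes 
hypothetical numbers and variable names throughout: a confounder 
(student advantage) drives both teacher licensure rates and student 
achievement, while the true causal effect of licensure is set to zero, 
yet the measured correlation comes out strongly positive.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical confounder: how advantaged each school's students are.
    advantage = rng.normal(size=n)
    # Advantaged schools also tend to employ more licensed teachers.
    pct_licensed = 0.7 + 0.1 * advantage + rng.normal(scale=0.05, size=n)
    # Achievement depends only on student advantage; licensure adds nothing.
    achievement = 50 + 10 * advantage + rng.normal(scale=5.0, size=n)
    # The naive correlation is strongly positive despite a zero causal effect.
    print(np.corrcoef(pct_licensed, achievement)[0, 1])

The positive number printed reflects only the shared dependence on 
student background, which is exactly why such a correlation says 
nothing about the causal effect of licensure.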
    I support the recommendations that the National Board for Education 
Sciences has already made regarding the reauthorization of ESRA. Those 
recommendations, however, are necessarily detail-oriented. In my 
remaining time, I wish to provide a ``big picture'' perspective on 
ESRA, IES, and--more broadly--the role of the federal government in 
education research.
    I have three main points.
    1. IES has greatly improved education research since the enactment 
of ESRA, but vigilance and continued improvements are needed. We cannot 
afford to relax standards now. Rather, even higher scientific standards 
should be the goal.
    2. The federal government, universities, and philanthropic 
organizations should share the responsibility for supporting education 
research. This mixed model, somewhat peculiar to the U.S., is 
essentially the right model. Each entity plays an important and 
distinct role.
    3. The data collection and research support functions of the U.S. 
Department of Education should be the functions on which people with 
diverse political views can agree. This is because no market functions 
better in the absence of information on which parents, students, and 
schools can make choices. Also, truly scientific research in education 
is probably our best hope for improving the skills of Americans 
quickly, with the expenditures we are already making.
    Again, my first point is that IES has greatly improved education 
research, but that now is the time to further raise, not relax, the scientific 
standards that are the crucial contribution of ESRA. We are not yet in 
the situation where high, scientific standards are so ingrained in the 
education research community that IES can take its ``foot off the 
gas.'' Since its creation, IES has consistently promoted scientific 
methods by favoring studies that employ experimental and quasi-
experimental methods such as randomized controlled trials, 
randomization built into pilot programs, and regression discontinuity. 
These methods produce reliable results when used properly. That is why 
they are also used in fields such as medicine and social program 
evaluation. Vigilance is needed, however, because even the best 
experiment is not ``dummy proof.'' IES should continue to raise the 
bar, insisting on even better training in issues like attrition and 
measurement that arise in experiments. Also, not all important 
questions can be answered with experimental or quasi-experimental 
methods, and IES therefore needs to develop greater expertise in other 
evaluation methods, methods that produce reliable results only when 
they are applied by researchers who are very highly trained.
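    As a concrete illustration of one quasi-experimental design named 
above, the following sketch estimates a regression discontinuity effect 
from entirely simulated data; the cutoff, bandwidth, and effect size 
are hypothetical. A program is assigned to students scoring below a 
cutoff, and the jump in outcomes at the cutoff estimates the program's 
local effect.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10000
    score = rng.uniform(0, 100, n)   # running variable (placement test)
    cutoff = 50.0
    treated = score < cutoff         # program assigned below the cutoff
    outcome = 20 + 0.3 * score + 4.0 * treated + rng.normal(scale=3.0, size=n)

    # Local linear fits within a 10-point bandwidth on each side of the cutoff.
    bw = 10.0
    left = (score >= cutoff - bw) & (score < cutoff)
    right = (score >= cutoff) & (score <= cutoff + bw)
    fit_left = np.polyfit(score[left], outcome[left], 1)
    fit_right = np.polyfit(score[right], outcome[right], 1)

    # The jump at the cutoff estimates the local effect (about 4.0 here).
    effect = np.polyval(fit_left, cutoff) - np.polyval(fit_right, cutoff)
    print(round(effect, 2))

This is only a sketch; a real regression discontinuity study would also 
test for manipulation of the running variable and check sensitivity to 
the choice of bandwidth.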
    Expert review panels are the key means by which IES gains access to 
expert opinion, maintains high research standards, and improves its own 
staff's knowledge of the latest methods and research. The Department of 
Education's expert panels have improved greatly since the enactment of 
ESRA. They now contain a sufficient percentage of well-trained experts 
that the panel process can be said to, very often but not always, fund 
research that produces reliable results. While IES reviewing is not yet 
equal in quality to the NSF reviewing I have experienced, IES has made 
remarkable progress. The Institute is only able to convene top experts 
and attract high quality proposals because researchers believe that the 
Department turned the corner with ESRA and now promotes scientifically-
grounded research. Top experts only participate in review processes in 
which they believe. Top researchers, who can devote themselves to 
issues other than education, only submit proposals to reviewers who are 
expert enough to judge proposals well. In other words, IES is currently 
in a virtuous cycle: higher scientific standards induce participation 
by more expert reviewers. This leads better researchers to submit 
higher quality proposals, and the cycle continues. Vigilance is 
necessary, however: the virtuous cycle can easily break down and become 
a vicious cycle in which poor standards lead to poor participation, at 
which point the review process attracts only poor proposals.
    Another thing that IES is doing well but that requires vigilance is 
data collection. IES, through its National Center for Education 
Statistics, has been collecting survey data on students and schools for 
decades. These data tend to be well-respected--this is one function of 
the Department's research arm that was high quality prior to ESRA. 
However, top-notch education research has migrated away from survey 
data and towards detailed administrative data. About 75 percent of 
studies published by top applied journals now rely on administrative 
data--datasets based on schools recording what a student does, what 
teachers and policies and classrooms he encounters, and what outcomes 
he attains, both in the short-term (test scores) and long-term (college 
graduation, earnings, and so on). The reason that research is migrating 
from survey to administrative data is that modern scientific methods 
that produce reliable estimates often require the large scale and 
completeness of administrative data. While the U.S. continues to have 
some of the world's best survey data on education, our country has 
fallen far behind the frontier in administrative data on education. 
Currently, most northern European countries and some South American 
countries have substantially better administrative data than the U.S. 
This matters because top researchers are motivated just as much by the 
availability of data that allow them to write excellent studies as they 
are motivated by funding. Thus, researchers are increasingly drifting 
away from the analysis of U.S. education policies and toward the 
analysis of other countries' education policies. To be concrete, I 
could now write a study of English, Dutch, or Swedish school choice 
reforms using better data than are available to me in the U.S. IES is 
making valiant efforts, which I praise, to create and sponsor stronger 
administrative databases, but this is another area in which continued 
exertion is needed. Integrating states' data and data from its own 
agencies (like the National Student Loan system) is probably the 
cheapest and quickest way for IES to improve education research.
    A final thing that IES has done well under ESRA is courageously 
contract for rigorous studies of high profile programs and programs on 
which the federal government already spends substantial money. I would 
cite, as examples, the evaluation of the D.C. Opportunity Scholarship 
Program, the evaluation of the 21st Century Community Learning Centers, 
and the evaluations of Professional Development programs in mathematics 
and reading. It simply does not make sense for U.S. taxpayers to fund 
programs year after year in the absence of scientific evidence of their 
effects, and findings from such rigorous studies should play an 
important role in any debate about their future. You may have observed 
that I said these contracts were courageous. They were. When one 
conducts a study using strong, scientific methods, one cannot know how 
it will turn out. It is always possible that some constituency will be 
angered by the results, but--then--that is the entire point of doing 
research. If we could accurately choose education programs simply by 
knowing ``in our hearts'' that they were right, we would already have 
very successful schools.
    There are a few areas in which IES has great intentions but is not 
having the effect for which it hopes. The Regional Education 
Laboratories and the What Works Clearinghouse are examples.
    My second point is that support and responsibility for education 
research should be shared by the federal government, universities, and 
philanthropic organizations. In the U.S., we have a successful model in 
which each of these entities plays an important and distinct role. 
While I would never argue that our model is perfect, I am routinely 
struck by how well it functions when I am abroad and experience other 
countries' education research. A similar mixed model of support is used 
for medical research.
    The federal government should play a few roles in education 
research. First, and most obviously, it should collect and make 
available accurate data on all aspects of education that can be 
measured: expenditures, revenues, achievement, personnel, curriculum, 
school policy, and so on. Because there are enormous economies of scale 
and scope in data collection and because cross-state comparisons are so 
important to research, it is important that the federal government and 
not just state governments collect data and make it available in a 
timely way.
    Second, the federal government should publish descriptive reports 
on American education. The word descriptive is important because such 
reports are part of the government's duty to disseminate data, rather 
than a duty to do causal research. A report that describes where 
English Learners enroll is descriptive. This must be distinguished from 
research that attempts to test a causal hypothesis such as whether 
bilingual education raises English Learners' achievement. The federal 
government is not in a good position to conduct causal research itself. 
This is because such research requires methods that need expert review, 
and the government cannot both convene the reviewers and be the entity 
that is reviewed. In the same way, we would not want an accused person 
to convene his own jury. A good review process requires independence.
    Third, the federal government should contract for highly reliable 
evaluations of the education programs it supports. These evaluations 
cost only a small fraction of what is spent on the programs themselves. 
For this small expenditure, a good evaluation can save taxpayers vast 
amounts of money, either by providing the evidence that improves a 
partially-successful program or by providing the evidence that gives 
Congress the grounds for abolishing an unsuccessful program. The 
federal government should be prepared to fund evaluations of its 
programs with little financial help from universities or 
philanthropies. This is because the goal of such evaluations is not to 
be innovative or to explore new questions. The goal is to produce clear 
answers to well-specified questions regarding established programs. The 
ideal evaluation should employ methods that are so well-validated that the 
evaluation is boring in every way except for the results. Fortunately, 
in the U.S., we have active competition for such contracts among a good 
number of organizations: Mathematica, Abt, Rand, Westat, AIR, MDRC, and 
so on.
    Fourth, the federal government should share in the support of (but 
not be the exclusive supporter of) research by university-based and 
similar scholars. These are the people who develop new methods, who ask 
questions that are still somewhat speculative, and who conduct ``basic 
research'' in education. I will return to this point.
    Philanthropic institutions also play a vital role in education 
research. In some ways, their role parallels the federal role except 
that philanthropies should focus more on trial programs that are 
innovative and less on established programs funded by the government. 
This is because the government uses money that taxpayers are obliged to 
pay while philanthropic organizations use money that their donors 
freely give. If a philanthropy spends money on a speculative 
educational program that does not succeed, the consequences fall on its 
donors--people who are affluent enough to accept this risk in return 
for the prospect of developing exciting new programs that benefit 
society. Philanthropies can obtain reliable evaluations by contracting 
with the same organizations that contract with the federal government. 
And, like the government, philanthropies should share in supporting 
research by university-based and similar scholars.
    Let me now turn to the role of university-based researchers. As I 
mentioned, university-based researchers are primarily responsible not 
only for developing new and more scientific methods of evaluation, but 
also testing them, validating them in an array of applications, and 
training people to use them. For instance, university researchers 
developed the cutting-edge methods to deal with attrition and non-
compliance in randomized controlled trials. They also developed the 
quasi-experimental methods that are currently the workhorses of 
evaluation. In addition, university-based researchers are almost 
entirely responsible for conducting basic research--research that has 
no immediate policy relevance but that provides fundamental information 
on which policies should be ultimately based. For example, I study peer 
effects--how students' achievement is affected by the other students 
who share the classroom with them. This basic research is a fundamental 
that we need to evaluate policies like school choice that affect which 
students are in each school. Another good example of basic research is 
the recent spate of studies that show (a) that different teachers have 
very different effects on achievement and (b) that a teacher's effect 
is not related to her credentials. This basic research is a fundamental 
we need for thinking about teacher pay incentives, teacher training, 
teacher tenure, and policies that affect which teachers end up in which 
schools. Finally, university researchers should be primarily 
responsible for investigating educational programs that are 
speculative, still under development, or implemented on a purely trial 
basis. University researchers must also do the uncomfortable work of 
analyzing programs that are currently unpopular with the administration 
and/or philanthropies. As an example of a purely trial program designed 
and investigated by university researchers, I would point to the recent 
study that shows that students are more likely to enroll in college if 
their family can automatically file the Free Application for Federal 
Student Aid when it files its taxes. As an example of unpopular 
research, I would point to studies of school choice from the 1990s. 
Researchers who worked on such topics did not win many friends in the 
education establishment, but we are now glad that the studies exist 
because they inform us about how to structure choice policies.
    I have said that the federal government and philanthropies should 
share in the support of university-based education research. Why? If 
the government and philanthropies do not have ``skin in the game'', 
they will not attract university researchers to study the policies or 
develop the methods that are important to them (the government and 
philanthropies). They will not attract top experts to review the 
contract-based studies they support. They will not learn about cutting-
edge research and cutting-edge methods in real time. It is the nature 
of cutting-edge work that you cannot learn about it just by reading an 
article after the fact. You need to interact with researchers--ask them 
questions, pose alternatives.
    Universities themselves should also share in supporting education 
research. Why? If we want university-based researchers to invent better 
methods and conduct basic research, they need to be rewarded for these 
activities. No one is better at generating these rewards than 
universities themselves. This is because universities' constituents 
give them incentives to create knowledge that is original and a public 
good, as all basic research is.
    By sharing in the support for educational research, the federal 
government, universities, and philanthropists also share in setting the 
research agenda. This is a good thing. Innovation never benefits from 
one entity having a monopoly on what questions are interesting.
    My third and final point is brief. The data collection and research 
support functions of the U.S. Department of Education should be the 
functions on which people can most easily agree. Americans tend to 
disagree on the degree to which the federal government should mandate 
educational standards and impose policies on schools. Many Americans 
believe that families and local communities should make education 
choices for themselves. But, it is hard to argue that anyone--families, 
communities, schools, or federal policy makers--will make better 
choices if they have less access to reliable information. As I stated 
at the outset, Americans badly need to be better educated--and soon--
because our economic growth and well-being depend on this. I truly 
believe that our best hope is to improve education by spending 
smarter--using scientific methods to identify which programs and 
policies are effective and which are counterproductive or just a waste 
of money.
                                 ______
                                 
    Chairman Hunter. Thanks. I think we can all agree, too, 
that 5 minutes really is not that long to talk, is it? Not very 
long.
    Mr. Fleischman?

 STATEMENT OF MR. STEVE FLEISCHMAN, DEPUTY EXECUTIVE OFFICER, 
                      EDUCATION NORTHWEST

    Mr. Fleischman. Chairman Hunter, Mr. Holt, members of the 
subcommittee, thank you for this opportunity to offer 
testimony. I think that what I say will continue in the theme 
of trying to provide better evidence for better decisions.
    I am Steve Fleischman. I am the deputy executive officer of 
Education Northwest. I have been involved in the promotion of 
evidence-based education for more than 15 years.
    I believe, however, that my most important qualification 
for offering testimony today is that I am a former middle and 
high school teacher. When I entered teaching as a second 
profession in the mid-1980s there was almost no evidence that I 
could find to help me manage my class better, teach my history 
lessons more effectively, improve the writing skills of my 
students, on and on. We have come a long way since then, but 
not far enough.
    Before becoming a teacher I was a business person, and I 
often think in market terms. To me, the challenge in building 
an effective education research enterprise is to create a 
market that has mechanisms to supply high quality research, 
create demand for it, and ease its use.
    Peter Drucker often observed that there is no business 
without a customer. Simply put, I believe that we will not have 
an education system in which reliable evidence is widely used 
to drive decision making unless and until we provide educators 
the research they want and need.
    Recent studies on research use by educators, including one 
conducted by my own organization, document this research-to-use 
gap. Three findings from our study, however, suggest important 
principles to narrow this gap.
    One: Research should be contextualized. The observation 
that all politics is local has its equivalent in the 
observation that all research is local. That is, participants 
in our study expressed a strong preference for research 
evidence that is linked to local contexts.
    Two: Research should be easy to read, absorb, and apply. 
Participants expressed preferences in how studies should be 
presented, including that reports should be brief and written in 
nontechnical language.
    Three: Research often requires translation and transmission 
by intermediaries. Intermediaries were identified by the 
participants as unbiased organizations and individuals that can 
help locate, sort, and prioritize the available evidence.
    IES has taken significant strides in promoting an increase 
in the amount of rigorous evidence available to educators. As 
well, regional educational labs and the What Works 
Clearinghouse have begun to move forward the relevance and 
usefulness agenda.
    Some of the promising practices and developments initiated 
by IES working with other program offices of the Department of 
Education include the production of so-called practice guides; 
the holding of REL bridge events; the Ask A REL information 
services; coordination across the Department of Education in 
fields such as research, development, and technical assistance 
projects; and an increased focus on meeting the real-world 
improvement needs of education stakeholders that I think is 
exemplified in the new REL competition statement of work.
    My suggestions regarding how ESRA can be improved in the 
next reauthorization result from many conversations, including 
those held by members of Knowledge Alliance, a trade 
association of leading education R&D organizations. These 
recommendations are: One, engage consumers. The most powerful 
way to increase research use is to engage the prospective 
consumers of evidence in defining the practical problems that 
should be analyzed, designing the modes in which findings will 
be presented, and supporting ways for the evidence to be 
applied effectively in the field.
    Two: Pay attention to implementation. Research consistently 
demonstrates that even the best programs fail to provide their 
intended benefits if poorly implemented. Therefore, greater 
focus should be devoted to learning more about how strong 
programs and practices can be implemented well.
    Three: Support intermediaries. As noted above, research 
consumers often turn to intermediaries who serve as trusted 
sources that help sort through the evidence. Many of these 
trusted sources represent projects and individuals either 
directly supported through the current federal research, development, 
and technical assistance infrastructure or that interact with this 
infrastructure.
    Fourth and finally: Promote the coordination of U.S. 
Department of Education program offices. Taking the point of 
view of consumers of evidence, education stakeholders should 
have a much clearer idea of who to contact and what services 
are available to meet their evidence needs.
    I believe that when Congress passed ESRA and created IES it 
had a vision that science, properly conducted and effectively 
applied, could be a significant engine in improving education 
in this country. As Mr. Holt has written and argued, recent 
history demonstrates that investments in R&D can drive the 
economy forward. Yet, the Department of Education spends less 
than 1 percent of its budget on R&D, one of the smallest 
investments of any federal agency.
    Ongoing federal investment in the education research enterprise 
will be required if we are to achieve the promise that all 
students will receive a quality education that prepares them 
for fulfilling lives as contributing citizens in our society.
    Thank you.
    [The statement of Mr. Fleischman follows:]

   Prepared Statement of Steve Fleischman, Deputy Executive Officer, 
    Education Northwest; Director, Regional Educational Laboratory 
                               Northwest

    Chairman Hunter, Ranking Member Kildee, and Members of the 
Subcommittee: Thank you for this opportunity to offer testimony as you 
consider how education research can help to promote the identification 
and use of effective programs to support students and teachers.
    I am Steve Fleischman, the deputy executive officer of Education 
Northwest. We are a nonprofit organization created in Oregon more than 
45 years ago to apply research to improve education in the Northwest, 
and across the country. Some of the projects that we conduct on behalf 
of the U.S. Department of Education, and which provide part of the 
experience base for my testimony include the Regional Educational 
Laboratory (REL) Northwest, Northwest Regional Comprehensive Center, 
and the Region X Equity Assistance Center.
    I have been involved in the promotion of evidence-based education 
for more than 15 years. In the last decade, with different 
organizations, I have participated in a variety of U.S. Department of 
Education projects to provide educators better evidence, including 
serving as the first communications director of the What Works 
Clearinghouse, director of a project to provide education 
decisionmakers with consumer reporting on the quality and effectiveness 
of school reform models, and senior leader of the Doing What Works 
project. Currently, I serve as director of REL Northwest. These and 
other projects in which I have been engaged have given me insight into 
the need for better evidence in education that helps identify and 
implement effective programs and practices. This need led to the 
passage of the Education Sciences Reform Act (ESRA) in 2002, and the 
creation of the Institute of Education Sciences (IES).
    I believe, however, that my most important qualification for 
offering testimony is that I am a former middle and high school 
teacher. When I entered teaching as a second profession in the mid-
1980s I did what most other new teachers do: scramble desperately for 
any support to help do my job. One of the places I turned to was 
research literature on best practices. There was almost no evidence I 
could find to help me manage my class better, teach my history lessons 
more effectively, improve the writing skills of my students, or do any 
of the other things I needed to do to be a good teacher. This 
experience has been the single most important one in helping to guide 
my actions for the past 15 years, as I've been increasingly involved in 
the education research enterprise. Although the situation is much 
better today than a quarter of a century ago, we have a long way to go 
before education research fulfills its promise as an engine of 
educational improvement.
    Before going further in my testimony on the topic, I would like to 
clarify how I will use the term ``education research.'' My experience 
is that when making decisions, educators in the field are focused on 
``evidence use'' which can include formal research, program 
evaluations, reviews of bodies of research, and various data. That is, 
educators turn to many sources of ``evidence'' when searching for 
guidance on policy and practice, formal research being only one of 
them. In this testimony, I will use this more expansive conception of 
``education research'' that encompasses the sources just mentioned.
Start with the consumer
    Before becoming a teacher, I was a business person, and I often 
think in market terms. To me, the challenge in building an effective 
education research enterprise is to create a market that has mechanisms 
to supply high quality research, create demand for it, and ease its 
use. Peter Drucker, the revered management thinker, often observed that 
there is no business without a customer. Simply put, I believe that we 
will not have an education system in which reliable evidence is widely 
used to drive decision making unless and until we provide educators the 
research that they want and need.
    The past decade has seen advances in increasing the supply of 
rigorous education research as well as some closing of the ``research-
to-use'' gap. In my testimony I will suggest ways that federal 
investments and action can help to further close this gap.
    Recent studies on research use by educators point to this ongoing 
challenge. For example, in a 2009 study that my organization and others 
conducted for the William T. Grant Foundation, a wide ranging group of 
education practitioners and policymakers observed that:
     There is a gulf between research design and real-world 
practice, which often results in findings that have limited 
applicability.
     They are challenged to apply research because of their own 
lack of knowledge and skills in acquiring and interpreting research.
     Numerous obstacles exist to research use, including ``time 
constraints, the volume of research evidence available, the format in 
which it is presented, and the difficulty in applying research to their 
own situations.''
     They are often skeptical about research and concerned that 
it is conducted and reported for ulterior motives or can be shaped to 
``say anything.''
     Research is often not timely.
    Most troubling is the fact that none of the study participants 
could identify any ``breakthrough'' research or ``cite any findings 
that they feel had a dramatic effect on practice or policy.''
Principles for increased research use
    Our study cited above and others point to current gaps, but also to 
ways to improve the connection between research and practice. Three 
findings from our study suggest important principles to narrow the 
``research-to-use'' gap:
    1. Research should be contextualized. The observation that ``all 
politics is local,'' has its education research equivalent, in which 
``all research is local.'' Participants in our study expressed a strong 
preference for research evidence that is linked to local contexts. 
Thus, for research to be seen as useful and to be used, it must be 
contextualized. One way to accomplish this is to involve education 
research consumers in studies from the very beginning: in setting the 
questions, designing the studies, and writing reports that answer 
questions of local interest.
    2. Research should be easy to read, absorb, and apply. Participants 
expressed preferences in how studies should be presented, including 
that reports should be brief and written in non-technical language. 
This principle suggests that much more attention needs to be paid to 
communicating research effectively. Otherwise, potentially important 
research findings might not be read at all.
    3. Research often requires ``translation'' and ``transmission'' by 
intermediaries. Intermediaries were identified by the participants as 
``unbiased organizations and individuals that can help locate, sort, 
and prioritize the available research.'' Among examples identified by 
participants were ``research institutions, professional associations, 
partners, coalitions, peers, networks, and constituents.'' A key 
implication is that it is important to find ways to strengthen the role 
of intermediaries by making sure they have the knowledge, skills, and 
resources to play this important role.
The IES track record on promoting research use
    Since the passage of ESRA nearly a decade ago, IES has taken 
significant strides in promoting an increase in the amount of rigorous 
evidence available to education decision makers. It has improved the 
quality of quantitative research and data through various mechanisms 
including grant competitions, sponsored research, and the operation of 
the National Center for Education Statistics, Regional Educational 
Laboratory (REL) system, and the What Works Clearinghouse (WWC). While 
some of these mechanisms have focused largely on increasing research 
and data rigor, others, particularly the RELs and the WWC, have begun to 
move forward the relevance and usefulness agenda necessary to meet 
consumer needs and desires for evidence.
    Some of the promising practices and developments initiated by IES, 
working with other program offices of the Department of Education, 
include:
     The production of Practice Guides. These guides, currently 
numbering 14 and largely produced by the WWC, offer practical 
recommendations based on the best available evidence. Developed by 
panels of nationally recognized researchers and practitioners, they 
offer actionable recommendations, strategies for overcoming potential 
practice roadblocks, and an indication of the strength of evidence 
supporting each recommendation. Topics range from turning around low-
performing schools and reducing high school dropouts, to using data to 
support instructional decision making and structuring out-of-school 
time to improve academic achievement.
     The holding of REL Bridge Events. These are in-person or 
webinar events held for education stakeholders across the nation by the 
10 RELs to share and discuss the recommendations of the Practice Guides 
and other rigorous and relevant evidence. The events have proven to be 
highly popular and represent a key mechanism to link educators to the 
``best available'' research-based guidance on critical topics of 
regional or local interest.
     Ask A REL information services. Every REL offers this free 
service that allows education stakeholders to call or e-mail with their 
questions of practice. These questions are posed by state officials, 
school board members, superintendents, principals, teachers, parents, 
and others seeking to find out ``what the research says'' on particular 
topics. The requests, which are turned around quickly, often result in 
research literature reviews that are then shared with other 
stakeholders.
     Coordination across the U.S. Department of Education 
research, development, and technical assistance infrastructure. Centers 
and projects sponsored by various Department program offices have come 
together more regularly than in the past to hold joint activities that 
provide stakeholders needed information. One example was a series of 
regional events on School Improvement Grants (SIG) jointly sponsored by 
the RELs and Comprehensive Centers this past year. In another recent 
example, REL Northwest worked with two regional 
comprehensive centers and the Center on Innovation and Improvement to 
bring together state officials and leaders from rural SIG schools in 
five states to learn about effective practices to turn around their 
schools.
     An increased focus of the REL system on meeting the 
improvement needs of education stakeholders. In a highly encouraging 
development, the current IES competition for new REL contractors that 
will launch a new five-year cycle of REL activity beginning in January 
2012 is focused on the creation of research and data-use partnerships 
with educators and policymakers in the field. These so-called 
``research alliances'' will be long-lasting, help to set the research 
agendas for the RELs so that they concentrate on real world ``problems 
of practice,'' and provide capacity building so that alliance partners 
are increasingly able to conduct their own research and data-analysis 
projects. Without sacrificing rigor, these alliances will go a long way 
in deeply engaging consumers of research in its production and use.
Considerations for ESRA reauthorization
    Discussions in the education research community regarding how ESRA 
can be improved in its next reauthorization have been ongoing in the 
field for several years. For example, Knowledge Alliance, a trade 
association of leading education research and development (R&D) 
organizations that I currently chair, has engaged its members and 
experts in the field in this discussion. As well, my own organization's 
Board of Directors, composed of nearly 30 education stakeholders across 
the states of Alaska, Idaho, Montana, Oregon, and Washington, has been 
discussing this issue over the past two years. The considerations below 
are suggestions that have arisen from these conversations. As Congress 
considers reauthorization of ESRA, I recommend that you keep in mind 
the following goals which might result in new mechanisms and practices 
or the strengthening of current ones to better connect evidence and 
practice:
     Engage consumers. The most powerful way to increase 
research use is to engage the prospective consumers of evidence in 
defining the practical problems that should be analyzed, designing the 
modes in which findings will be presented, and supporting ways for the 
evidence to be applied effectively in the field. This should include 
building consumer capacity to find, judge, and apply evidence that is 
provided at the federal level and beyond. Key consumers on which to 
focus capacity building efforts might be state education agency and 
local district staff who lead research and data analysis tasks. 
Finally, this effort might include studies and other efforts to 
determine how to better serve education consumers' evidence needs.
     Pay attention to implementation. The identification and 
sharing of effective programs and practices represents only part of an 
effort to promote an evidence-based education system. Research 
consistently demonstrates that even the best programs fail to provide 
their intended benefits if poorly implemented. Therefore, greater focus 
should be devoted to learning more about how strong programs and 
practices can be implemented well.
     Support intermediaries. As noted above, research consumers 
often turn to intermediaries who serve as trusted sources that help 
sort through the evidence to find that which is most relevant for 
consumer decision making needs. Many of these trusted sources represent 
projects and individuals either directly supported through the current 
federal research, development, and technical assistance infrastructure 
or that interact with this infrastructure. Examples of the latter are 
associations of state education officials, school boards, 
administrators, principals, teachers, and education journalists. These 
intermediary organizations must be engaged and supported systematically 
if we are to improve the connection between research and practice.
     Promote the coordination of U.S. Department of Education 
program offices. There are notable examples of how program offices such 
as IES, the Offices of Elementary and Secondary Education, Innovation 
and Improvement, Special Education Programs, and others work together 
to promote evidence use. However, there is much more that can be done 
to promote this coordination. Taking the point of view of consumers, 
education stakeholders should have a much clearer idea of who to 
contact and what services are available to meet their evidence needs. 
Applying this customer-based approach would require the U.S. Department 
of Education to structure its information and support activities in a 
more coordinated way to promote an evidence-based system.
Federal investments in education research can pay dividends
    This testimony has focused largely on the supply side of the 
research use equation, in the hopes that if research can be made more 
timely, relevant, and useful, it will more likely factor into decision 
making. However, there are other aspects of ``market building'' that I have 
only mentioned briefly in this testimony that require a federal role. 
For example, ongoing federal communication regarding the importance of 
evidence use sends a powerful signal in the system to promote its use. 
Emphasis in federal education technical assistance that increases the 
capacity and support provided for evidence use increases the likelihood 
that research and data will be used effectively.
    In the early 1950s, parents in this country had to worry about 
their child contracting Polio, the dreaded disease of the day. In 1952, 
the year before I was born, more than 3,000 children, a record number, 
died from the disease. Today, thanks to significant investments in 
scientific research and effective public health campaigns, Polio no 
longer exists in this country. However, what does still exist in 
America are far too many crippling conditions such as students who 
cannot read by grade three, drop out before completing high school, or 
reach college unprepared for success. Like Polio, these conditions 
demand a substantial investment in research and then translation of 
that research into practical action.
    I believe that when Congress passed ESRA and created the Institute 
of Education Sciences, it had a vision that science, properly conducted 
and effectively applied, could be a significant engine in improving 
education in this country. Further, as Mr. Holt, a member of this 
subcommittee, has argued, recent history ``demonstrates that 
investments in R&D can drive the economy forward.'' Yet, the Department 
of Education spends less than 1 percent of its budget on R&D, one of 
the smallest investments of any federal agency.
    Ongoing federal investment in the education research enterprise 
will be required if we are to achieve the promise that all students 
will receive a quality education that prepares them for fulfilling 
lives as contributing citizens in our society.
    In closing, thank you for this opportunity to offer testimony 
today.
                               references
Coburn, C.E., Honig, M.I., & Stein, M.K. (in press). What is the 
        evidence on districts' use of evidence? In J. Bransford, L. 
        Gomez, D. Lamn, & N. Vye (Eds.), Research and Practice: Towards 
        a Reconciliation. Cambridge: Harvard Education Press.
Fleischman, S. (2009). User-driven research in education: A key element 
        in promoting evidence-based education. In W. Böttcher, J.N. 
        Dicke, & H. Siegler (Eds.), Evidenzbasierte Bildung (pp. 69-
        82). Münster, Germany: Waxmann.
Fleischman, S., Harmon, J.A., & Darwin, M.J. (2007). Promoting 
        evidence-based practice and better student outcomes through 
        improved consumer reporting. Journal of Education for Students 
        Placed at Risk, 12(1), 1-7.
Fleischman, S. (2006). Moving to evidence-based professional practice. 
        Educational Leadership, 63(6), 87-90.
Nelson, S.R., Leffler, J.C., & Hansen, B.A. (2009) Toward a research 
        agenda for understanding and improving the use of research 
        evidence. Portland, OR: Northwest Regional Educational 
        Laboratory (now Education Northwest).
Tseng, V. (2010). Learning about the use of research to inform 
        evidence-based policy and practice: Early lessons and future 
        directions. William T. Grant Foundation 2009 Annual Report. 
        William T. Grant Foundation, New York, NY.
                                 ______
                                 
    Chairman Hunter. Thank you.
    I would now like to recognize Dr. Smith for 5 minutes.

  STATEMENT OF DR. ERIC SMITH, FORMER FLORIDA COMMISSIONER OF 
           EDUCATION, FLORIDA DEPARTMENT OF EDUCATION

    Mr. Smith. Thank you very much. Mr. Chairman, I'm honored 
to be before all of you this morning on what I consider to be 
an extraordinarily important issue. I will first state that I 
am not a researcher, but I am a consumer of research.
    I have been a classroom teacher, school administrator, 
district administrator, and superintendent in Newport News, 
Virginia, and a state commissioner, and I have had a strong 
belief in the work of people like Ron Edmonds, and others, who 
advanced the notion that when leadership chooses to make a 
difference in the outcomes of children, and when they are 
committed to making that difference, they will do so wherever 
it is important to them to do so.
    It is the basis of accountability. That 
accountability hinges on the ability to make smart decisions 
about what you do in schools and classrooms and districts and 
states. To call yourself a reformer requires that you have the 
ability to move systems in a way that brings with it progress 
and improves student achievement and student outcomes.
    Over the years I have had the opportunity not only to work 
as a consumer, but also to serve as the chairman of 
the Title 1 Review Committee. For a good number of years, I 
worked with Russ Whitehurst and others, and that was an 
exciting time for me.
    That was a time when I did see, in my view, dramatic 
improvement in the way we approach the question of research: 
What is important for a high school principal to do in their 
schools? What is important for a district superintendent who 
is under pressure on outcomes and achievement to change the 
way they deliver reading and mathematics?
    One of my questions that I had of Russ one day late in the 
afternoon was, in this great nation, can't we tell our 
educators what the most effective method of teaching 
mathematics is? Should it still be a question out there for 
those that have to and are expected to deliver every day?
    So it has been exciting to serve in those capacities: 
brilliant people and great passion around trying to find the 
right answers.
    For the consumer there are two big questions, though, that 
get divided--it is kind of inside baseball to me--divided 
between how research is conducted and the quality of the 
research, and then how that research is disseminated. To the 
superintendent, to the building principal, the classroom 
teacher, those lines are blurred, and it is kind of an inside 
discussion about how that works. All they know is they want an 
answer and they can't find it.
    We have seen dramatic, dramatic improvement, and part of my 
recommendation is to continue to fund the kind of research that 
has been done. But often the research comes out rigid: in order 
to get it right, to have all the controls in place, it becomes 
so different from the real world that it becomes hard for a 
practitioner to put it into place.
    What do I do with this? Those aren't my classrooms. That is 
not my district.
    And so that translation, whether it is the fault of the 
research and the way it is conducted or a failure of the 
translation and the dissemination of the research, there is 
still--we are better; we have farther to go. There is more work 
to be done, and those lines are blurred.
    So I would move, in my remaining 1 minute and 27 seconds, 
to talk specifically about some recommendations. One--I 
will start with my second recommendation in my written remarks 
first--is that I do think we need to be very thoughtful, very 
wise about broadening our scope on how we might gain 
information; I wouldn't say even conduct research, but gain 
information about effective practices. I think we can make them 
more relevant, more timely, and more cost effective by 
broadening our views.
    There are places and things that need to be under a 
rigorous scientific model and approach, but as I stated in my 
comments about the application of the PSAT, that insight came from a 
relational table on the back of a document produced by--that I 
get annually. It was translating that table that made a 
difference: we translated data, information about a 
product, and we made lives different for tens of thousands of 
children across America--dare I say, hundreds of thousands of 
children across America. So there are other ways of knowing 
what is important.
    On that note, on the research side it is important that 
practitioners, the consumers, help to drive the agenda with the 
problems they have today and the conditions that exist today, and that 
researchers help inform the best way to get at that answer but 
look at it from a broad view.
    Second, on dissemination, it is hard to get a 
superintendent or a principal's time, or a commissioner's time. 
They are swamped. And so we need the ability to go through existing 
channels where practitioners will already be showing up; if it 
is not important enough to be on the keynote panel, it probably isn't 
important to those out in the field. So the dissemination 
process, how that is done, is critically important.
    And I would just finally say that the question this 
committee is asking is everything. It is 
about reauthorization. It is about accountability. It is about 
school reform. It is about our nation's future.
    Our ability to know what works and how to get it in the 
classrooms is of critical importance today for children sitting 
in classrooms at this moment. Thank you very much.
    [The statement of Dr. Smith follows:]

Prepared Statement of Dr. Eric J. Smith, Former Florida Commissioner of 
               Education, Florida Department of Education

    I appreciate the opportunity to address this committee about a 
topic I find to be extraordinarily important to our nation's academic 
progress: research on the tools and strategies that we give our 
teachers to use in the classroom. I am speaking to you not as a 
researcher but as a consumer of research on educational strategies, 
tools and practices. In my career I have had the honor of serving 7 
years as a classroom teacher, 8 years as a high school administrator, 
17 years as a superintendent and 4 years as a state commissioner of 
education. Throughout my career, in each of these positions, I have 
been constantly searching for tools, strategies and practices that had 
some independent evidence that if properly used would result in 
positive outcomes for children. Said another way, I have always been 
searching for those tools and techniques that are unique, that can be 
used in a single classroom or used on a large scale and will generally 
result in positive achievement gains for children. To be blunt, it has 
been a frustrating search. There are numerous approaches to choose from, 
and as a consumer you will always be told that educational practices 
and tools are aligned to your standards and are research based, and you 
will always be shown data that is intended to demonstrate that an 
instructional approach is extremely successful in raising achievement 
levels. Unfortunately there is still far too little independent 
research or information on the impact of various approaches to student 
learning.
    My interest is in trying to find better information on the 
effectiveness of educational tools and approaches to help teachers, 
administrators and governing bodies to make more informed strategic 
decisions on how to improve student achievement. The question of 
effectiveness and impact is central to discussions of accountability, 
and should be part of the foundation in the development of reform 
strategies. School reform and accountability have as a premise that 
leadership can shape and control academic outcomes by thoughtful 
strategic planning and execution. There is also an implicit assumption 
that the needs of individual children can be addressed through the 
careful planning of practices and strategies as well. Common variables 
include conditions of time, resources and quality instruction using 
high quality materials. The primary classroom materials chosen and 
given to the teacher to deliver the level of instruction required are an 
essential component. An example of connecting instructional strategy to 
the needs of individual students is found in the emerging development 
of adaptive testing. Adaptive testing is showing great promise in 
helping educators to be much more student centric in the delivery of 
instruction and in meeting the individual needs of students. Reform 
strategies such as these should be built around the type of work that 
is to be done in the classroom. Such strategies should be framed by the 
selection of classroom practices and selection of primary and 
supporting classroom materials. Those that make the decisions about 
classroom instructional practices and materials should be held 
accountable for their decisions. I have been in classrooms where a 
school or district has selected an instructional strategy with 
supportive tools and you will see teachers who have so little 
confidence in the approach, that they secretively have hoards of other 
materials to do the job. The quality of instructional tools and 
approaches matters to teachers and matters to students. Some help, some 
don't offer much, and it can be assumed that some may do harm.
    So my interest in the question of what classroom practices and 
tools are effective resulted in me being selected to chair the Title 1 
Independent Review Panel. It was an extraordinary experience. My 
colleagues on the panel were both brilliant and passionate about the 
issue of instructional improvement. I credit the work of Russ 
Whitehurst and others for pioneering a new way to look at the process 
of educational research. It was bold and aggressive and had the intent 
to base findings on a scientifically rigorous research methodology. When 
I was superintendent in Charlotte, our children benefited from many of these 
early efforts to redefine the research. In Charlotte we had no district 
wide strategy for reading instruction. You could go into an elementary 
school and reading would be taught differently at the opposite ends of 
a hallway. Strategically we needed to go to a district adoption so all 
teachers could be supported through professional development and 
adequate materials. But the question was: what approach would be most 
helpful for students? National research helped us make that decision 
and it was the right decision: reading achievement went up 
dramatically. Down the road in Florida, at about the same time, the 
entire state was making decisions about reading. Those decisions were 
also being informed by quality research and the results over the last 
decade have also been extraordinary.
    But often a strict application of scientific research has 
significant challenges: the selection of the control group can be 
difficult if you are fairly certain that the intervention will be 
beneficial, there is also difficulty in maintaining the fidelity of the 
experimental group in a real situation, and the process is slow and 
expensive. The instructional strategies will ultimately be used in 
states, schools, and districts that don't have strict and rigid 
structures; kids come and go in classrooms as do teachers, schedules 
get interrupted, materials sometimes are in short supply, and 
professional development can be delivered with varying quality. As a 
result, the nature of the research often fails to mirror reality. The 
research methodology has the tendency to be cumbersome in its 
implementation and to lead to findings that are rigid and artificial. As a 
result, the research has limited relevance to the real conditions found 
in schools and classrooms.
    Research that is available is also proving difficult to disseminate 
and get into the hands of those who have the responsibility for making 
educational decisions. The regional labs are of widely different 
quality and unfortunately are not the ``go to'' place for information 
on meaningful research. Some of the labs do very good work, but the 
quality and reputation vary, and as a result they don't form a network 
of dissemination that provides national coverage. The What Works 
Clearinghouse is making good strides in dissemination, but it is 
limited in bridging the research-to-application challenge: research 
findings are slow to become available and, because of the nature of 
the research, often lack application in real situations.
    My recommendations going forward are threefold: (1) continue to 
support independent research on the quality of educational strategies, 
tools, and practices; (2) develop new methods to gain insight into the 
effectiveness of educational strategies, tools, and practices; and (3) 
expand and create new channels for the dissemination of educational 
research.
    My first point: the need for continued support for education 
research is critical because it is so central to all discussions of 
accountability and reform. I often say that schools don't fail, 
districts fail. The reason for that belief is that most of the 
important decisions relating to how a school operates are made at the 
district level: leadership, hours of instruction, calendar, staffing 
constraints, and, yes, selection of instructional tools and practices. The 
ability of a district to make sound strategic decisions about its 
selection of tools and practices is dependent on quality and timely 
information regarding the impact of the tools and practices. That 
should not be done district by district. States and the Federal 
Government have a responsibility to support independent research on the 
educational effectiveness of tools and practices. The research should 
be led in large part by practitioners, answering questions that are 
timely and relevant to their work with children.
    Regarding my second point, in my testimony I have cited two 
examples where children benefited from strategic decisions that were 
informed by quality research. I would also share that I have used 
other methods of gaining insight into the quality and effectiveness of 
educational practice that were not based on rigorous scientific 
methods, yet proved to be very timely and cost-effective and resulted 
in significant benefit for students. I will give you one example. In 
Charlotte, one of my staff noted that the correlation between a 
student's PSAT scores and AP performance could be built into a 
program, and rosters of students who had good potential for success in 
AP could be generated. These simple correlation tables provided 
valuable insight into the use of the PSAT. Knowing the correlation 
information and being able to apply it led to a significant increase 
in the college-level high school work offered to students and in the 
overall college readiness of students in Charlotte. A second example 
is from my work as Commissioner in Florida. As Commissioner, I was 
able to develop plans to expand our statewide database to include the 
primary instructional practices and tools used in each classroom. The 
intent was that we could develop relational information between 
instructional practices and tools and student achievement in a variety 
of different school settings. These findings would be made available 
to districts for use in their strategic planning process.
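    To make these two examples concrete, the kind of analysis described 
above can be sketched in a few lines of code. The sketch below is 
illustrative only: the student records, column names, the 1050 score 
cutoff, and the use of the pandas library are assumptions made for the 
sake of the example, not a description of the actual Charlotte program 
or the Florida statewide database.

    import pandas as pd

    # Hypothetical records for a handful of students. Every column
    # name, score, and cutoff here is an illustrative assumption, not
    # data from the Charlotte or Florida systems described above.
    students = pd.DataFrame({
        "student_id":   [1, 2, 3, 4, 5, 6],
        "psat_total":   [1210, 980, 1130, 870, 1320, 1040],
        "ap_score":     [4, 2, 3, 1, 5, 3],
        "reading_tool": ["A", "B", "A", "B", "A", "B"],
        "reading_gain": [12.0, 4.5, 9.8, 3.1, 14.2, 6.0],
    })

    # (1) A simple correlation between PSAT results and AP performance,
    # plus a roster of students whose PSAT scores suggest AP potential.
    print("PSAT/AP correlation:",
          round(students["psat_total"].corr(students["ap_score"]), 2))
    high_psat = students["psat_total"] >= 1050
    print("Candidates for AP coursework:",
          students.loc[high_psat, "student_id"].tolist())

    # (2) Relating the primary instructional tool recorded for each
    # classroom to student achievement, in the spirit of the statewide
    # database plan.
    print(students.groupby("reading_tool")["reading_gain"].mean())

    In practice such analyses would draw on complete student-level 
records, and the usual caution applies: a correlation of this kind can 
guide placement and planning decisions, but it does not by itself 
establish that a practice or tool caused the achievement it 
accompanies.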
    Finally, there needs to be a stronger link between educational 
research and real-world application. If there is a judgment call 
between strictly designed research and the real-world conditions of 
application, the call should favor the real-world conditions in every 
instance. Information that is disseminated needs to be timely, 
addressing the challenges the field has today, not yesterday. It needs 
to address broadly defined challenges--the big questions--not narrowly 
defined questions that have little relevance. And dissemination needs 
to utilize existing organizations to communicate findings, such as the 
Council of the Great City Schools, CCSSO, Chiefs for Change, AASA, and 
ASCD, to name a few. If the research findings are not of interest to 
these organizations, they won't be of interest to their members 
either, and dissemination will fail--fail because the research is not 
important.
    This committee is addressing an issue of great national importance, 
important to our country and also important for our children. I commend 
you for your work.
                                 ______
                                 
    Chairman Hunter. Thank you.
    I thank the panel, once again, for your testimony.
    I would like to start out by saying that it is 
interesting--I spent some time in the Marine Corps, so if you 
noticed, in the U.S. military there are four different 
uniforms, and when you go to combat there are four different 
uniforms--two or three, because the Navy wears the Marine Corps 
uniform now. But the problem is, if you look at it 
scientifically--and you should be able to using different light 
spectrums and so forth, and matching up the uniform with the 
surroundings and the environment in which you are fighting--
there has got to be one good uniform.
    There is obviously, if you test these uniforms using 
different spectrums of light and so forth, there is one uniform 
that protects the wearer better than any of the other uniforms 
do. But if that was true then we would all just have one 
uniform. We wouldn't have a Marine uniform, and an Army 
uniform, and an Air Force uniform, but that is what we have. 
And you would think--it is kind of sad--if we can't do it at 
that level, what makes us think we can take best practices and 
scientific research and data and use it at this level.
    And secondly, there seems to be some disparity between the 
ability to get the best practices and things that work and the 
best curriculums for teaching teachers that then transfer to 
teaching students, and then the implementation--there seems to 
be a disconnect. I don't necessarily think we are talking about 
that disconnect today or the implementation of research and 
your findings, but that has got to be, that is, that gap has to 
be bridged at some point, and that is going to be fairly 
difficult to do, I think.
    First, Dr. Whitehurst, you say it is a mistake for Congress 
to dictate how schools--how states and school districts should 
use findings from research. Can you provide some examples of 
this?
    Mr. Whitehurst. Yes, Mr. Chairman. I mentioned Reading 
First as an example of the federal government specifying at a 
fine level of detail how reading should be taught. There is no 
evidence indicating that kids are reading better as a result.
    We have in current federal policies and Race to the Top 
specifications an indication that there are four ways that a 
failing school should be turned around. How do we know that 
there aren't nine ways, or seven ways, or six ways?
    So to try to get down to the operational level, in terms of 
how a teacher should do his job, or how a district 
superintendent should do his job, through legislation seems to 
me to be a mistake. And if you go through the current version 
of the Elementary and Secondary Education Act you will find 
almost every section of the bill dictates that practice be 
based on research findings.
    Often, the research findings aren't there. I remember when 
Dr. Smith pulled me aside and said, ``Well, what math 
curriculum should I be using?'' And I said, ``I don't know.'' 
And he said, ``How could you be requiring me to meet the 
mandates of No Child Left Behind to use the scientifically-
based research and there is no research to tell me how to do 
that?''
    So, you know, it is easy to overreach at the federal level. 
Again, my point is if the research is done, if it is relevant, 
if we have good ways of transmitting, and if educators are held 
responsible for the results they will use it. You don't have to 
force them to do it.
    Chairman Hunter. Answer this, too: How do you make sure, 
then, if you have the data, and you have the best practices, 
and you have the research that shows what should be taught, how 
do you--if you don't want to get down in the weeds on 
implementation, because you don't want to because every--there 
is no silver bullet for--you could have two schools side by 
side on two different blocks and they would require different 
implementation. How do you guarantee, then, that the research 
is brought to bear in that school, or do you? Do you just let--
kind of let the--if they want to use it then they can use it; if 
they don't want to use it then they don't have to use it?
    Mr. Whitehurst. Well, you certainly raise a very great 
challenge. But there is research relevant to that challenge, 
and it is research on implementation. So we are developing a 
knowledge base about the ways you need to transmit knowledge, 
the ways you need to provide professional development around 
that knowledge, the way you need to monitor implementation to 
make sure that a program is being carried out well.
    So I think, you know, on the forefront of education 
programs that are being shown to be effective is a very strong 
component having to do with implementation. So you are not 
asking school personnel, you know, to take something off the 
shelf and figure out how to implement it. The implementation is 
built into the program--to the program itself.
    Chairman Hunter. Thank you very much.
    And as my time is expired, I would like to recognize Mr. 
Holt for 5 minutes.
    Mr. Holt. Thank you.
    Actually, we have a very broad topic today, or a collection 
of many topics. Of course, we must not forget that what 
underlies all this is that research communicated well and made 
relevant is our best protection here, and also the teachers' 
best protection, against allowing one's deeply held beliefs and 
ideologies to blind them--us--to reality and best 
practice. And we need to make that research practical.
    We are talking today about national research. We are 
talking about the National Center for Educational Statistics. 
We are talking about comprehensive centers. We are talking 
about the regional labs that are--I like to think of as akin to 
the Agricultural Extension Service that maybe your father the 
farmer actually used, because there are best practices that 
come from the federal level that a farmer would depart from 
only at his own peril.
    But we are also talking today about local data. I mentioned 
in my opening remarks that I will be reintroducing the Metrics 
Act to provide federal assistance to local agencies to apply 
data and use it locally. Let me start with Mr. Fleischman and 
then Mr. Smith to ask, what do you think is a useful federal 
role in supporting local data system development, and can you 
give me examples of how that has or how it could be used well? 
And then as time allows we will go to other questions and other 
witnesses.
    Mr. Fleischman. Yes. Thank you for that question.
    And first of all, I would say there are a couple of very 
good recent reports out of the Data Quality Campaign that I 
think are worth looking at. One came out last month and one 
just came out this month, and it looks at the connection 
between state data systems and how districts use that. I think 
they have a number of recommendations in there that are 
valuable to keep in mind, because in the end, the state data 
systems have to be used at the district level, the school 
level, and the classroom level. So the question is how to 
better connect all of those pieces of the system.
    Going back to this notion of focus on the consumer or the 
user of the data, I think it is really important--and the Data 
Quality Campaign cites some examples of how states have worked 
really well with their end users--to create the support 
mechanisms necessary so that data is not used for compliance 
purposes but for continuous improvement purposes. I will cite 
one specific example at the local level--for me a local level 
is both the state and the district--through our Regional 
Educational Laboratory work right now. We are using some of the 
framework that is provided by a number of practice guides. 
These are kind of taking best available evidence and then 
helping educators by providing them practical recommendations.
    We are working with several of those practice guides, 
including one on turnaround and one on data-driven decision 
making, with a set of local schools and local school districts 
in the Columbia Gorge area of Oregon to help them use their data 
in a rapid improvement cycle where they look at their data continuously 
for the purpose of improvement. So they take the action, they 
look at the data, they focus, and they----
    Mr. Holt. To give Mr. Smith some time to answer that----
    Mr. Fleischman. Sure.
    Mr. Holt [continuing]. Let me turn to him.
    Mr. Smith. Thank you very much. I think, you know, the 
driver--why do people in the field want to do anything? Why do 
they want to look at data? What data do they need, and so 
forth?
    And it has been my experience as commissioner and 
superintendent that the school leadership and district 
leadership is driven by data because of the issues around 
accountability and trying to find out if they are being 
effective or not, how do they benchmark their work over the 
course of the year, if they need to make corrective action. And 
so I think there is a--based on the structure we have in our 
nation, there is inherent desire on the part of districts and 
schools----
    Mr. Holt. My specific question is, can we help local 
educational agencies use data better, and do you have examples 
of that?
    Mr. Smith. I think--I don't know if that is--the federal 
government needs to be involved with that or that is more of a 
state and district issue. I think that in terms of----
    Mr. Holt. But it is not happening.
    Mr. Smith. I would share that there is a great deal of data 
analysis going on in schools every day trying to determine the 
effectiveness of instruction that takes place. Connecting that 
to what is effective--what do I do when I find that the 
work that is going on in the classroom or schools isn't working, 
isn't effective? What is the solution? Where do I go?
    That is where the breakdown is. It is not so much that I 
don't know that School X, as a commissioner, is failing and has 
failed. What do I do? What solution set do I bring to it and 
apply to it?
    Mr. Holt. Thank you.
    Chairman Hunter. Thank you, Mr. Holt.
    I would now like to recognize Mrs. Roby, from the great 
state of Alabama.
    Mrs. Roby. Thank you, Mr. Chairman. And I, too, know how 
quickly 5 minutes goes by, so we will just jump right in.
    Dr. Whitehurst, you talked with the chairman a little bit 
about implementation. I just want to expand on the fact that in 
your testimony you said that Congress should focus on creating 
incentives for practitioners to want to incorporate findings 
from the best research into their programs centered around the 
performance of schools.
    And we hear that word a lot in here--incentives--and rarely 
are we given the opportunity to hear specifics as to what those 
incentives might be and how the federal government actually 
offers those incentives. So could you expand on that a little 
bit?
    Mr. Whitehurst. Yes, I can. I think there are two 
categories of incentives. The one is top-down regulatory 
incentives, where, as has been the case in No Child Left 
Behind, states have to define targets for performance of 
schools. Schools that are not meeting those targets face 
various consequences. You have similar sort of mechanisms 
structured around positive incentives in Race to the Top.
    But somebody at either the state or the federal level is 
saying, ``This is what you need to do, and here are the things 
that are going to happen if you don't do those things well.'' 
There is decent evidence that that kind of top-down 
accountability has effects, and you will hear practitioners 
like Dr. Smith say, ``Well, we are concerned with 
accountability. We wanted to do something about the schools 
that were failing, as defined by the accountability system.''
    The other form of accountability is market-based 
accountability. Your school is failing not because you are not 
reaching some target set by the state; your school is failing 
because parents don't want to send their kids there and they 
have other places to send their kids----
    Mrs. Roby. And I guess that is the--and sorry for 
interrupting----
    Mr. Whitehurst. Yes.
    Mrs. Roby [continuing]. That is the problem, because not in 
every school district do you have that opportunity to make that 
choice. And that is the real rub is that if my school is 
failing and I don't have a choice to go anywhere but that 
school then that incentive doesn't exist.
    Mr. Whitehurst. Well, I think that it is possible for the 
federal government to do more to incentivize states to 
incentivize districts to allow at least public school choice. 
Now, if you live in a community in which there is only one 
elementary school obviously you are not going to have choice. 
But if you live in a community like Washington, D.C., in which 
there are hundreds of elementary schools, if you can choose 
among them based on good information on how you are performing, 
that sends a very strong accountability message.
    And again, I think that is a different form of 
accountability. It is not fundamentally incompatible with top-
down accountability, but I believe we need more of it. And the 
best evidence is that when that accountability is in place the 
schools that are subject to the loss of students improve, and 
that parents--low-income parents--given good information, will 
choose a better school than the school that the district 
assigns their child to, and their kids will do better as a 
result.
    So incorporating that kind of market-based approach in the 
accountability system, I think, is a way to go, and we could do 
more of that at the federal level.
    Mrs. Roby. And certainly we know that part of that 
challenge, too, is how to get that information into the hands 
of the parent, and that is a whole 'nother topic of 
conversation. But thank you for your answer.
    Dr. Hoxby, how can IES effectively partner with the private 
sector to conduct quality research and make it accessible to 
teachers in the classroom?
    Ms. Hoxby. Well, I think in many ways the best way to 
answer that question is to explain what happens abroad in other 
countries, because there is no partnership between the private 
sector and the government in most other countries, and as a 
result, their education research is very narrow. The government 
really has a monopoly on what are the interesting questions and 
what are the right ways to answer those questions. And also, 
they don't tend to have very much advancement in terms of their 
scientific methods for answering those questions.
    In the U.S. we do have a pretty effective partnership 
already between the government and philanthropic organizations 
and universities. And I think if we look at what something like 
the Gates Foundation does, it starts interesting, innovative 
programs, some of which are never brought to scale; it has 
those programs evaluated, sometimes by university researchers, 
sometimes by other private sector researchers, like 
Mathematica, a contracting organization, and then it makes 
decisions about which of these programs to continue and which 
of these programs should be discontinued.
    I think that is a fantastic role for philanthropic 
organizations because it is their money, and if they want the 
credit for being innovative they should take the risks of being 
innovative.
    I think universities play a much bigger role in the United 
States, as well. I don't know whether we consider that the 
private or the public sector. I suppose it depends on the 
university.
    But I think the key thing that the universities do is that 
they will do basic research, and basic research is important 
not because--basic research is research that doesn't apply 
immediately to policy, but it usually applies to policy just 
one step down the road. So as an example, all of the research 
that has come out recently on teachers--the effects that 
teachers have on students--some teachers have much more 
positive effects, some teachers have much worse effects on 
students.
    That is all basic research because all it tells us is that 
we know that teachers differ a great deal. It doesn't tell us 
how we ought to pay teachers, but we need to know that if we 
are going to then consider teacher pay policies.
    Mrs. Roby. Thank you so much. My time is expired.
    Chairman Hunter. Thank you.
    Mr. Scott is recognized for 5 minutes.
    Mr. Scott. Thank you, Mr. Chairman.
    And I want to welcome Dr. Smith. You just breezed by, Mr. 
Chairman--breezed by the highlight of his career, and that is 
superintendent of the Newport News, Virginia public schools.
    Dr. Smith, it is good to see you again. When you were 
superintendent I think that they had the research--federal 
research was under--I forget what it stands for, but it is 
OERI. Did you ever use any of that research in Newport News?
    Mr. Smith. No.
    Mr. Scott. In your other capacities have you used federal 
research in your--you asked Dr. Whitehurst for research on 
things you needed. Was the research there?
    Mr. Smith. No. We have. We got it and we have used it a 
lot, and--from a variety of sources, but research from the 
federal government, where available and applicable, I would--we 
have a--in Florida we have a very well developed reading office 
and we constantly stay up with the most current research on 
reading, and so forth.
    And I would say that, you know, in the field--and again, it 
varies a bit from state to district to school; perhaps it goes 
back to the question asked earlier about data. A lot of 
research can inform the work in general and overall. Day to 
day, a lot of schools--most schools I run into--do have data, 
though not necessarily good data, and they actually are doing 
research on their own.
    Mr. Scott. But that data and research are two different 
things. If you have done some research and found out what 
works, do you report back to whoever did the research to see if 
it worked in your locality? Because I suspect that some very 
successful programs would work in one setting and not in 
another.
    Mr. Smith. We do find that the application of what is 
learned can vary from setting to setting. And again, you know, 
sometimes information we will gain helps us with that; what 
practices work best with students that have limited experiences 
when they come to the classroom, or the converse.
    Adaptive testing is helping us now, because we have some 
work with adaptive testing that allows us to measure up and 
measure down, and so we can be more student-centric in our 
review and trying to find the right kind of solutions to, you 
know, based on the research, on what needs to be done.
    Mr. Scott. Thank you.
    Dr. Hoxby, you indicated the importance of making sure you 
get the best proposals. Does the Institute of Educational 
Research wait for proposals or do they put out RFPs of subjects 
that need to be studied?
    Ms. Hoxby. Both, and I think both are important. By putting 
out priorities IES does get researchers engaged in questions 
that are important for policy, especially federal policy 
makers, and certainly some of the priorities come right out of 
federal programs that are funded. I think those are very 
important priorities.
    But I don't think we want IES to be establishing all of the 
priorities simply because sometimes the most important 
innovative programs really come out of nowhere, or out of some 
educator's idea, out of a particular school, out of a 
particular school district that is doing something innovative, 
and then often those proposals flow into IES. So I think we 
have a pretty good balance at this point of establishing 
priorities and attracting researchers to them, but also 
allowing researchers to notice what is going on out there in 
the field and then bringing that into IES and saying, ``I can 
evaluate this.''
    Mr. Scott. And with that process do we have information--I 
mean, if we want to reduce the achievement gap and the school 
board gets together and says, that is our priority; we want 
to--is there somewhere you can go to get research on what they 
need to do?
    Ms. Hoxby. Well, I think that is a tricky question. Ideally 
they could go to the What Works Clearinghouse, which is part of 
IES, and look up something like, ``How do I close the 
achievement gap,'' but it is really not that straightforward. 
What the What Works Clearinghouse would tell you is maybe what 
reading curriculum works best, or what math curriculum works 
best, or it might give you a good sense of whether charter 
schools are doing better than public schools in a particular 
domain.
    So we still have a problem in that the school 
superintendent really has to put all of these pieces together, 
and I do think that is the gap everyone is identifying.
    Mr. Scott. So we have 15,000 superintendents back home making 
their own process and no central research to help them out. Is 
that what we have, Dr. Whitehurst?
    Mr. Whitehurst. Not exactly. We certainly have research to 
help them out. I agree with Dr. Hoxby that often a practitioner 
will come at the problem with different slices than the 
research community has, and so there is a challenge in putting 
it together and answering the practitioner's immediate problem.
    Part of this is simply a lack of knowledge. We have not 
been at this game seriously for very long, and one of the 
frustrations I had when I was the director of IES is people 
would ask me what to do and they would want an answer, and I 
could not give them an answer based on the knowledge base that 
we had created.
    So some of it is a problem of translating what we know more 
effectively. Some of it is a problem of our just not knowing 
yet the best way to go about doing what needs to be done.
    Chairman Hunter. Thank you.
    Mrs. Foxx is recognized for 5 minutes.
    Mrs. Foxx. Thank you, Mr. Chairman.
    And I want to say that this has been a very enlightening 
panel. I want to thank all of you for coming today. I have had 
a little experience in this area, and am very fascinated by the 
subject of research.
    Serendipitously, over the weekend I read an article from 
National Review Online--and I am sorry Mr. Scott left--it is 
called ``Closing the Achievement Gap.'' I don't know the people 
who wrote it; Reihan Salam and Tino Sanandaji are their names. 
But it is a fascinating article that brings up the issue of 
research and how different people can look at the same research 
in different ways. And I think that is an underlying issue that 
is pretty important.
    I want to make a couple of comments and then ask some 
questions. As I said, I have been in this field for a long 
time, and as you all were talking and as I have read your 
statements, I kept coming back to that statement, ``Everything 
I need to know I learned in kindergarten.''
    Dr. Whitehurst, while you said we are in this field only a 
short time, the comment you made about what we have learned 
from research, teachers vary dramatically in effectiveness. A 
very effective teacher compared to a very ineffective teacher 
can create achievement gains for a child in 1 year that can 
wipe out a third of the--haven't we always known that? I mean, 
did we need to do research to figure out that there are some 
good teachers and some not-so-good teachers? I mean, why did we 
have to have research to teach us that?
    And I guess the question that I would like to ask and 
quickly get a quick answer, if I could, from all of you--very 
quick answer: Is there research that has not been done that 
needs to be done? Just give me two or three words, each one of 
you, if you would. What don't we know that we should know?
    Dr. Whitehurst?
    Mr. Whitehurst. Well, there is a lot we don't know about 
effective curriculum, particularly how to deliver it digitally. 
We are moving into a digital age, and being able to use that 
medium would be extraordinarily important.
    Mrs. Foxx. Okay.
    Dr. Hoxby?
    Ms. Hoxby. I think the most obvious gap is that we don't 
know how well teacher incentives work for improving teaching in 
the classroom. Most of our studies are now from other 
countries, not from this country.
    Mrs. Foxx. Mr. Fleischman?
    Mr. Fleischman. I think we need more research on data use, 
how to use it effectively, and also, across the board, 
implementation--how to implement more effectively.
    Mrs. Foxx. Mr. Smith?
    Mr. Smith. Yes. I would say how to help classrooms to 
better adapt to the variability that comes to the teacher every 
day--the high flyer, the high performer--and still be able to 
adapt to the need of the child that is struggling on a given 
topic.
    Mrs. Foxx. I have one child. She is an average kid, and I 
always felt sorry for every teacher she had because she was 
always in a class--we were in a small community--where they had 
very, very bright kids and kids with major challenges, and a 
whole lot of kids right in the middle. And I felt sorry every 
year for those teachers because they had that range to deal 
with, and I think you have identified a very important point.
    The other thing I would like--Dr. Hoxby, you brought this 
up so let me direct the question first to you, and then if 
others want to respond I would be happy for you to do that. You 
mentioned the Gates Foundation and what they have been doing. 
Has the Gates Foundation been more effective in its 
implementation of what they have learned than the federal 
government has been, or other places like the Gates Foundation?
    Ms. Hoxby. I wouldn't say that they have been more 
effective. I would say they have looked more at speculative 
programs as opposed to established programs. I think that is a 
difference between the role of the philanthropic organization 
and the government.
    I would say they are also faster at shutting down 
unsuccessful programs. That is the other thing: When they 
figure out that something is unsuccessful it doesn't take them 
a couple of years to shut down; it takes them a couple of 
months.
    Mrs. Foxx. Thank you very much, Mr. Chairman.
    Chairman Hunter. Thank you.
    I would like to recognize Mrs. McCarthy for 5 minutes.
    Mrs. McCarthy. Thank you. Actually, I am finding this quite 
fascinating, and I have got 2 million questions in my head as 
we go through all this.
    One of the things that I have always been kind of looking 
at--you know, we have great people that want to be teachers. 
Yet we find when they get into a school to be teachers most of 
them are put into lower grades. I am just wondering if the 
research has been out there on what our teaching colleges are 
doing to make sure that teachers are well prepared to go into 
the lower grades. Because what I have found in talking to an 
awful lot of young teachers when they first start, they felt 
totally unprepared to be teachers. A lot of them have left 
within 5 years because they felt that they had the biggest 
responsibility to take the youngest and to give them the best, 
and yet they felt they were not capable of doing that.
    I would just be interested because I think when you talk 
about the digital age that is coming in--our younger people 
that are graduating, hopefully they are going to be more 
focused, because that is the way kids want to learn today. I 
think that is one of the problems that we are seeing in our 
schools, also.
    I guess the other question that I would have would be that 
when the data come in and if you have someone that is a 
superintendent or a principal that is not interested in data or 
doesn't even have time to look at data, is the state prepared 
to be able to get that information down when they see those 
schools are failing? I will throw that open to everybody.
    Oh, and before I forget, I have an article from RAND 
Education on some research that I would like to submit to the 
committee, because I am a supporter of charter schools. I also 
believe it is not the silver bullet that everybody is looking 
for. With that, Mr. Chairman----
    [The policy brief, ``Increasing Participation in No Child 
Left Behind School Choice,'' may be accessed at the following 
Internet address:]

      http://www.rand.org/pubs/research_briefs/RB9424/index1.html

                                 ______
                                 
    Chairman Hunter. Without objection.
    Mrs. McCarthy. Thank you.
    Mr. Whitehurst. I will go first, since I am on the right 
here. There are good survey data indicating that teachers in 
general have the reactions that you have just described. They 
feel badly underprepared for the jobs that they have to do. It 
does result in a lot of loss from the profession.
    Innovations around that are several. Some districts are 
setting up their own teacher preparation programs so that the 
practical experience is directly related to what the district 
wants to provide. We have programs like Teach for America that 
are providing alternative pathways in teaching that bypass the 
traditional school of education preparation routes.
    But clearly, we need to do a better job in preparing 
teachers for the jobs that they have to do. We are, for 
the first time--to Mrs. Foxx's issue--actually able to measure 
teacher effectiveness rather than just having the intuition 
that there are good and bad teachers. That allows us to tie the 
performance of classroom teachers to their preparation 
institutions, so for the first time the colleges of education 
can be held responsible for the quality of instruction that the 
teachers provide. So I will handle that question, and I will 
let my colleagues take on that or other ones, if they wish.
    Ms. Hoxby. Let me just follow up on that, and I won't 
repeat what Dr. Whitehurst said. But we do know that if you 
look now at data it does not appear to be the case that 
teachers who are educated in different channels are 
systematically better or worse than one another. Teachers who 
are educated through alternative teacher channels often look 
about the same, in terms of their performance, as teachers who 
go all the way through a traditional ed school and it takes 
them 6 years to get their degree.
    And that is somewhat disturbing because it means that 
whatever it is that we are doing in the training, it does not 
systematically work. I think these days we have to look 
backwards, the other way. Because we can identify teachers who 
are effective, we can look at the schools that are producing 
effective--the education schools that are producing effective 
teachers systematically.
    Another thing that we have learned is that effective 
teachers are good at spreading effective teaching around them. 
If you drop one effective teacher into an elementary school it 
turns out that the teachers who interact with her will also 
become more effective.
    So we are getting a better understanding of how teachers 
can learn, but it appears, to a large extent, that they learn 
from one another and that they learn from classroom practice, 
not so much just from getting a credential.
    Mr. Fleischman. I think we still have a ways to go in terms 
of what was just said, in terms of having teacher preparation 
institutions and other vehicles to prepare teachers to be ready 
to do the job. In part, having been a teacher, there is a lot 
of on-the-job training, and mentoring, and support you need 
once you get there, but there is no question that there could 
be better preparation.
    In fact, I mentioned before the Data Quality Campaign 
report, just out this month, and they looked at 10 state 
actions to support effective data use. Only one of them was 
implementing policies to ensure educators know how to use data 
appropriately once they have that in place. So there is a lot 
of work that needs to be done in the system to get to the issue 
that you just raised.
    Mr. Smith. I would just add, very quickly: one, on 
teacher quality issues, a lot of states aren't out pursuing the 
link between student achievement and the institution that 
prepared the teacher to enter the profession. There is some 
work being done by some organizations to gather more national 
information on teacher preparation, and they are having an 
extraordinarily difficult time getting state institutions to 
give that data up--having to actually go to a Freedom of 
Information request to get that information--so it is a very 
slow process, but a very important one.
    On the data side--you know, data management systems--again, 
a lot of schools have data, but it is not the data that they 
need to focus their attention on the things that are important. 
An area that I think we need to continue to do research on is 
what are the most effective data management systems out there, 
and the most informative, for school administrators and 
superintendents to use to help drive improvement.
    Mrs. McCarthy. Thank you.
    My time is up.
    Chairman Hunter. Thank you.
    Mr. Hanna is recognized for 5 minutes.
    Mr. Hanna. I would like to use the balance of my time and 
give it to Mrs. Foxx. Thank you, Chairman.
    Mrs. Foxx. I want to thank the gentleman from New York for 
yielding me his time.
    I could not let this panel get away without mentioning 
something that is a particular bone of contention with me, and 
so far three of you have sort of violated my norm on this. You 
used the term ``training'' in association with human beings. It 
is a shame to admit this, but I remember one thing from my 
doctoral program, and one of my professors said, ``You train 
animals and you educate people,'' and that has really stuck 
with me.
    And so especially when I am in education settings I try to 
point that out to people because I want you to think about the 
fact that we are educating people. Dr. Whitehurst, it is in 
your material that you put out.
    And, Dr. Hoxby, you and Mr. Fleischman both just said it.
    So I would like you to think about whether you want to use 
that term in conjunction with human beings, because I think 
that has something to do with our mindset in education. I 
really believe language is important, and I am sure you all 
would probably agree with that.
    There are a couple of things that came up. Dr. Whitehurst, 
in a time--we always have limited resources, and I know, as you 
say, in research this has been an area where we have used a 
very small amount of resources, and in some ways have come to 
it very lately, so I agree with you on that aspect of it.
    I would like to start with you again, particularly. Again, 
I asked this question a slightly different way; where could we 
best use our dollars? And something none of you have mentioned, 
which I would like you to think about as you answer that 
question, we are always focused on the teacher, and obviously 
that is important. The teacher is the person interacting most 
with the student.
    But what I have felt in all my life of being involved in 
education is that we never look enough at the structure of 
education. I believe that most of what we do in education is 
designed for the adults and not the children.
    For example, we have known for a long time that adolescents 
do a very poor job early in the morning, and yet, high schools 
begin at 8 a.m. We have ignored that research for the 
convenience of the adults.
    So would you make any recommendations in terms of research 
on structure of education, and would you make some comments 
about that; and again, very quickly so each person has a chance 
to make some comments about that?
    Mr. Whitehurst. Well, if you mean by structure the 
arrangement of the school day and the circumstances in which 
instruction is delivered, yes, I think we need policymakers and 
practitioners to pay attention first to the research that we 
already have. We know, for example, that investments in pre-K 
programs pay a large dividend, and yet they are typically 
underfunded.
    We have very strong research demonstrating that high school 
kids' learning is negatively affected by starting them before 
they are awake in the morning. We have a variety of research 
that rates the organization of the school day.
    And so, you know, I am in favor of--certainly we ought to 
use what we know when we can do that.
    Ms. Hoxby. I think that you are making a very important 
point. I often say to people that the problem in some areas of 
education is not that we don't know the answer but simply that 
the stakeholders will not listen to the answer or will not use 
the answer.
    An example of that, for instance, is the longer school year 
and the longer school day. These are just not popular with 
stakeholders, but it--the evidence suggests that they are very 
good for students. So that is a perfect example of where the 
structure gets in the way of improvement.
    Mrs. Foxx. Mr. Fleischman?
    Mr. Fleischman. Yes. Mrs. Foxx, first of all, thank you for 
that reminder about training. It was made to me by my 
colleagues last week. I didn't remember. Thank you.
    The one place, I think, where we need to do a lot more 
research--it is also on structure--is the connection between 
secondary school and college-going--college readiness, college 
attendance, college success. That is one of those places where 
there are two different structures coming together and we need 
to better understand how the secondary school can do a better 
job and how the institutions of higher education can do a 
better job to ensure the students' success.
    Mr. Smith. I would just agree with you. A longstanding 
belief of mine is that schools don't fail; districts fail--
districts are the ones responsible for setting the policies 
that drive much of what goes on in schools. And to fix schools 
school by school is extraordinarily challenging because the 
envelope that each school operates within is usually broken 
also.
    Mrs. Foxx. Again, Mr. Chairman, I want to thank you very 
much for your tolerance, and I want to thank the panel.
    You have been a----
    Chairman Hunter. I thank the gentlelady.
    I would now like to recognize my neighbor in San Diego, 
Mrs. Davis, for 5 minutes.
    Mrs. Davis. Thank you, Mr. Chairman.
    Thank you to all of you for being here. You have all had a 
great deal of experience in this area, and I am wondering what 
you see has been the best way that research has been 
disseminated to teachers.
    I think, Dr. Hoxby, you mentioned that we do know--and I 
have heard Bill Gates say--that putting new teachers or 
teachers who, perhaps, aren't doing as well in front of 
teachers who are doing very well and seeing how they engage 
with students, and how they get so much from students is 
beneficial. We don't see that enough.
    It seems to me we fail to do that. We fail to provide the 
resources so that we can have those really great teachers in 
front of new and unsuccessful teachers.
    Is that one way that we could do that? Have you seen that? 
How do we do that?
    Because going to a practicum 1 day is not going to do it. 
We know that. How do you think it works?
    Ms. Hoxby. Well, let me first say that one of the things we 
know because of IES is that a lot of the professional 
development programs that are highly regarded in the United 
States don't appear to have the effects that we--that they are 
intended to have. So simply putting teachers into a 
professional development program does not necessarily have big 
effects.
    I think that we--one thing that we lack in the United 
States that other countries have, and particularly England has, 
is a system of school inspectors. With these inspectors, it 
sounds like someone is just coming down to inspect your school, 
like inspecting your house, but that is not really what they do. 
They come into your classroom, they observe you for several 
days. They are experts; they have all of the data on what is 
happening in your classroom and the achievement of your 
students; they have the diagnostic data.
    And they sit down with the teacher at the end of the 
inspection and try to relay best practices to the person. And 
they have an intense experience that we simply do not have 
paralleled in the United States where a principal would often 
spend as little as maybe an hour in a teacher's classroom each 
year.
    Mrs. Davis. Right. Yes.
    Mr. Smith. I would just share a couple thoughts. One is 
that you have to get the information out to where people go, 
where they attend, whether it be through national conferences, 
or whatever, but there has to be a strong push to disseminate 
good quality research through the normal channels.
    The second, what I have learned from my experience as 
Florida's commissioner, is that there is, I think, a great deal 
of dissemination that could be done--I don't think it is being 
done yet, at least not consistently across the nation--by 
working with a combination of state departments of education 
and legislative committee staff in state government, where 
there is a keen interest in taking research findings, be it on 
the school day or on connecting teacher quality with teacher 
preparation, and trying to drive that into state policy and 
state statute.
    Mrs. Davis. So where--is that a federal role? Should there 
be some way--we are all familiar with the military and defense 
research, and others in environmental and energy areas.
    Mr. Smith. I think the dissemination--and again, you know, 
I would say that if some sources of information, be it the 
regional labs or whatever--my friend here--had to depend on 
checks coming in for how much service was provided, they might 
go broke within a month. And so I think that, again, there needs 
to be that consumer-driven process. This research is critically 
important to us.
    Mrs. Davis. Mr. Fleischman?
    Mr. Fleischman. What I would add is not to forget the human 
factor. Just in the same way that we are having a dialogue 
right now and we are learning about something, I think that a 
lot of the learning that takes place takes place in context 
with people doing their jobs and then having better data and 
better research to inform that.
    A good example of that through the Regional Educational 
Laboratory system are the so-called bridge events, where we 
take things like the practice guides, which are based on the 
best available research, and give practical recommendations, 
and then work with folks out in the field. We just recently 
held one on rural school turnaround where we were looking at 
the recommendations of rural school turnaround, looking at the 
school improvement grant models, working with rural school 
educators and state departments of education, and working 
through the process of learning how to apply that in real time 
for real problems.
    Mrs. Davis. If I might, but really--oh, looks like my time 
is up. I can't do that.
    Thank you, Mr. Chairman.
    Chairman Hunter. I thank the gentlelady.
    Mr. Barletta is recognized for 5 minutes.
    Mr. Barletta. Thank you, Mr. Chairman.
    Dr. Whitehurst, current law requires that education 
programs be supported by scientifically-based research. Based 
on your past experiences in the field of education policy and 
your current work at the Brookings Institution, how do you 
define scientifically-based research?
    Mr. Whitehurst. Research in education that draws on the 
methods that are the canon for the social and behavioral 
sciences is scientifically-based research. People are trained 
to do it. People who are trained to do it recognize it when 
they see it and recognize it when it is not happening.
    It is a moving game in that the methods improve and our 
ability to focus those methods on questions that are important 
changes over time. And I think there is, you know, a 
congressional role in mandating that federally funded research 
meets high standards for its scientific base. And it is also 
ultimately the role of the science community, the research 
community, to define specifically what that means, because 
again, it will change and advance over time.
    Mr. Barletta. And upon the reauthorization of ESEA, how do 
you think this definition needs to be revised?
    Mr. Whitehurst. The current definition, I think, is a 
pretty good one. I think you have a choice either to leave it 
out and leave the definition up to the research community or to 
take what is there and fine-tune it where necessary.
    I think it would be a mistake to take the current 
definition and water it down because that is a signal that we 
will be moving back to where we were 15 years ago, where what 
passed for education research was frequently a subject of 
derision in any department and any university except the 
education school.
    Mr. Barletta. Dr. Hoxby, same question: How do you define 
scientifically-based research, and do you think the definition 
needs to be redefined?
    Ms. Hoxby. I really define scientifically-based research in 
the way I would define it in medicine, or physics, or anything 
else. It is the use of the scientific method.
    And one of the ways that we know we are doing scientific 
research is that we should be able to come to conclusions that 
are based on the data and the logic as opposed to based on our 
presuppositions. Sometimes you should realize that the data 
overturns your presuppositions. That is the scientific method.
    I don't think that we need to take science out of ESRA 
reauthorization. I completely agree with Dr. Whitehurst that 
the situation we were in 15 years ago is so much worse than the 
situation we are in right now that we need to keep that 
scientific standard in the legislation.
    At the same time, it is almost impossible to define what 
scientific method is because it is a moving target, and that is 
a good thing, right? We wouldn't want it to be true in medicine 
that the science of today was the same as the science of 10 
years ago.
    And similarly in education, one of the great points of 
using the scientific method and requiring that it is used is 
that the methods actually improve because we realize we can't 
answer this sort of question so we need to have a new method to 
answer that sort of question, or this question has been 
answered very imperfectly so we need to develop a new method. 
We want to actually keep the development of methods so that 20 
years from today we are not just in a different place in terms 
of what we know on education but we are in a different place in 
terms of what we can know because we have better methods.
    Mr. Barletta. And, Dr. Smith, as a past classroom teacher 
and school principal, how do you define scientifically-based 
research?
    Mr. Smith. You know, I have worked with this a lot and I 
don't know--I don't think that the definition needs to be 
changed a lot. Because, again, I came out of the world when we 
didn't really have any research. Whatever felt good and seemed 
right and the adults were comfortable with seemed to be okay.
    And so I think we have made huge strides forward. I think 
the question is what drives the application of the definition, 
and is it being driven by--strictly by researchers that don't 
understand the connection and application in the real world or 
is it--is it working within that definition in a way that gives 
you real-world, timely answers?
    And I think, as in medicine, you can deal with an epidemic 
in a lot of ways--you can define it very narrowly, very 
rigorously, with controls, but by the time you have answers the 
epidemic has already taken its toll. Or you can find other ways 
of producing very scientific, highly respected results that are 
more practical--mirroring the conditions that exist at the 
moment, in a timely fashion.
    So I think how that gets gauged and who helps guide the 
structure of the research I think is the key.
    Mr. Barletta. Thank you.
    Thank you, Mr. Chairman.
    Chairman Hunter. I thank the gentleman.
    I think that is all the questions we have.
    I would like to thank the witnesses and finish by just 
saying this: I am optimistic because--not necessarily because I 
think we are all smart people and we can all handle this, but 
because technology, and especially adaptive learning 
technology, you know, it is going to be working and it is going 
to be implemented at some point, I would say, over the next 
decade or two, and--I mean, if they already have adaptive video 
games, things that work that way where the smartest kids get to 
learn as the smartest kids do and excel and the average kids 
get to have the education curriculum matched to them, and so 
forth for every learning possibility.
    So I am optimistic, one, and I do think that sunshine and 
data can create accountability. I think just the fact that if 
it is easy to consume and it is easy for all the players to be 
able to read it, and understand it, and see who is winning and 
who is not, and where they should send their kids to school and 
where the educators want to go to work at, I think that is a 
big motivator for everybody at every level for all the 
different stakeholders.
    So thank you, again, to our witnesses.
    And there being no further business, this subcommittee 
stands adjourned.
    [Additional submissions of Mr. Holt follow:]

            Prepared Statement of the Learning and Education
                       Academic Research Network

    As the panel considers reauthorization of the Education Sciences 
Reform Act of 2002 (ESRA), the Learning and Education Academic Research 
Network (LEARN Coalition) is pleased to submit this statement in 
support of this process and, in particular, to highlight the role of 
research-intensive colleges of education in fulfilling the potential of 
this landmark legislation.
    The LEARN Coalition was formed nearly seven years ago to advocate 
for quality education research at the federal level. Our institutions 
are dedicated to the most rigorous standards for designing and 
executing the critical research needed to inform better teaching, 
stronger schools, and, most importantly, higher performing students. 
The Department of Education, the National Science Foundation, and the 
National Institutes of Health are our primary agency partners in this 
endeavor. As a result of the investments that have been made in 
education research, new tools have been developed to inform teacher 
practice and impact student performance. Investments in research across 
the education spectrum are required to translate what we have learned 
through basic research on the brain, cognition, and learning into 
effective teacher preparation and practice, standards for learning and 
assessment, and curriculum development. Investments in 
educational research and rigorous evaluation systems lead to better 
educational programs, schools, effective teaching, and higher student 
achievement. It is a direct investment in our nation's economic 
competitiveness.
    Since LEARN's launch, we have witnessed significant expansion in 
the federal resources invested in education research. Moreover, there 
has been an ongoing and collaborative effort between institutions of 
higher education and government leaders to ensure that taxpayer 
resources are used to address the most important challenges for our 
schools and students. ESRA and the Department of Education's Institute 
of Education Sciences (IES) are critical building blocks in an 
increasingly robust education research system. In particular, IES 
facilitates the kind of research that enables the translation of theory 
into practice using systematic study of phenomena from small scale to 
large. LEARN member institutions contribute to the mission of IES by 
conducting research and building the knowledge base in a variety of 
areas, including: Teacher Performance Systems; Assessment Standards; 
Educational Interventions for Special Education Students; STEM 
Education; and English Language Learners.
    As the Committee moves forward with ESRA reauthorization, we 
encourage careful consideration of how IES and its programs can fully 
utilize peer-reviewed, high quality research capabilities, such as 
those found in the nation's higher education community, to drive 
student achievement. The benefits of this approach include:
    1. Innovation--higher education faculty are at the center of 
critical, creative thinking about the learning and teaching processes, 
including through interdisciplinary teams that combine insights across 
biological, environmental, and social factors;
    2. Evaluation--universities frequently work with state and local 
education agencies, as well as other stakeholders, to conduct field-
based research and evaluation that promotes timely understanding of 
what works; and
    3. Dissemination--through a variety of education, publication, and 
engagement tactics, higher education participants are a critical link 
for translating new knowledge into practice, on both a focused and 
larger scale.
    The LEARN members are prepared to provide the Committee with a 
comprehensive perspective on how research-intensive higher education 
institutions contribute to better student outcomes. The ESRA 
reauthorization process clearly is an opportunity to accelerate and 
expand the nation's efforts here through sound evidence development and 
use. Our institutions are committed to being at the forefront of 
producing these student performance solutions and to working with 
federal policymakers to improve student outcomes.
                       learn member institutions
Indiana University, W.W. Wright School of Education
Iowa State University, College of Human Sciences
New York University, Steinhardt School of Culture, Education, and Human 
        Development
Purdue University, College of Education
Rutgers University, Graduate School of Education
State University of New York at Buffalo, Graduate School of Education
Syracuse University, School of Education
Texas A&M University, College of Education and Human Development
The Ohio State University, College of Education and Human Ecology
University of California--Irvine, Department of Education
University of California--Santa Barbara, Gevirtz Graduate School of 
        Education
University of Florida, College of Education
University of Illinois Urbana-Champaign, College of Education
University of Iowa, College of Education
University of Maryland College Park, College of Education
University of Minnesota Twin Cities, College of Education and Human 
        Development
University of Pittsburgh, School of Education
University of Southern California, Rossier School of Education
University of Virginia, Curry School of Education
University of Washington, College of Education
Vanderbilt University, Peabody College of Education and Human 
        Development
                                 ______
                                 
    [The report, ``From Compliance to Service: Evolving the 
State Role to Support District Data Efforts to Improve Student 
Achievement,'' may be accessed at the following Internet 
address:]

                 http://dataqualitycampaign.org/files/
                  From%20Compliance%20to%20Service.pdf

                                 ______
                                 
    [Questions submitted for the record and their responses 
follow:]

                                             U.S. Congress,
                                  Washington, DC, December 5, 2011.
Dr. Caroline Hoxby,
Department of Economics, Stanford University, 579 Serra Mall, Stanford, 
        CA 94305.
    Dear Dr. Hoxby: Thank you for testifying before the Subcommittee on 
Early Childhood, Elementary and Secondary Education at the hearing 
entitled, ``Education Research: Identifying Effective Programs to 
Support Students and Teachers'' on Wednesday, November 16, 2011. I 
appreciate your participation.
    Enclosed are additional questions submitted by members of the 
Committee after the hearing. Please provide written responses no later 
than December 19, 2011 for inclusion in the final hearing record. 
Responses should be sent to Dan Shorts of the Committee staff who can 
be contacted at (202) 225-6558.
    Thank you again for your important contribution to the work of the 
Committee.
            Sincerely,
                                Duncan D. Hunter, Chairman,
         Subcommittee on Early Childhood, Elementary and Secondary 
                                                         Education.
                  representative duncan hunter (r-ca)
    1. Given that the focus of this hearing is to examine the most 
effective ways of utilizing student research to help teachers better 
understand students' instructional needs, it would be helpful to hear 
your thoughts on computer adaptive assessments. These assessments 
adjust automatically to each student's ability level, generating more 
difficult questions if the student is answering correctly and easier 
ones if the student is answering incorrectly. In doing so, these 
assessments enable teachers to pinpoint the proficiency level of each 
student across a range of subjects that correspond with the standards 
set by a state's curriculum.
    There are a few states that have already implemented computer 
adaptive assessments as a tool for measuring student achievement and 
growth--including Oregon--and a number of others that are interested in 
following suit, given that computer adaptive assessments provide 
essential and timely data that can more accurately illustrate student 
placement, student growth, and instructional needs.
    Can you provide the Committee with your views on computer adaptive 
assessments and whether they can be of benefit to teachers, 
administrators, parents, and ultimately students?
    2. How does education research play a role in providing reliable 
information to parents? How can the federal government aid states and 
school districts in improving these efforts?
    3. With such high standards for scientific evaluation, how can the 
federal government ensure that the research methodology is not overly 
cumbersome, leading to artificial results that are not relevant in a 
dynamic and fast-changing classroom?
                  representative virginia foxx (r-nc)
    4. During the hearing you mentioned that other countries 
(specifically in Europe and Latin America) have better administrative 
data sets than the United States, and you could therefore do better 
research in other countries. I think a specific example you cited was 
the Dutch school reform and choice movement. Why do other countries 
have better data sets? Is there something in the US prohibiting them 
from collecting the same data sets (e.g. student privacy concerns)? 
Please expand more on why other countries do a better job with 
administrative data sets.
                                 ______
                                 

       Dr. Hoxby's Response to Questions Submitted for the Record

                            chairman hunter
    1. Given that the focus of this hearing is to examine the most 
effective ways of utilizing student research to help teachers better 
understand students' instructional needs, it would be helpful to hear 
your thoughts on computer adaptive assessments. These assessments 
adjust automatically to each student's ability level, generating more 
difficult questions if the student is answering correctly and easier 
ones if the student is answering incorrectly.
    In doing so, these assessments enable teachers to pinpoint the 
proficiency level of each student across a range of subjects that 
correspond with the standards set by a state's curriculum.
    There are a few states that have already implemented computer 
adaptive assessments as a tool for measuring student achievement and 
growth--including Oregon--and a number of others that are interested in 
following suit, given that computer adaptive assessments provide 
essential and timely data that can more accurately illustrate student 
placement, student growth, and instructional needs.
    Can you provide the Committee with your views on computer adaptive 
assessments and whether they can be of benefit to teachers, 
administrators, parents, and ultimately students?

    Computer adaptive assessments are evaluation tools that are 
extremely helpful because they promote good decision-making at all 
levels: the classroom level, the school level, the district level, the 
state level, and the federal level. They prevent most cheating and 
crude ``teaching to the test.'' Because adaptive assessments put 
neither floors nor ceilings on the achievement of students, they allow 
students who are behind or ahead of their grade to be evaluated well. 
Every student can be appropriately challenged, and no student ever need 
face a ``dumbed down'' test. Computer adaptive assessments also allow 
tests from different states to be equated fairly easily so that states' 
performance can be compared well.
    Let me expand just slightly on some of these points.
    When taking a computer adaptive assessment, a student's answers to 
the initial questions affect whether he or she is offered more or less 
challenging questions from then on. This is the way in which the 
assessment adapts to the student's level of knowledge and skill. 
Because students spend their time answering questions that efficiently 
diagnose what they know and do not know, a computer adaptive assessment 
delivers a very accurate evaluation of a student's learning. In contrast, 
a student who is taking a pencil-and-paper test may find that most of 
the questions are very hard or very easy for him or her. For such 
students, even the best pencil-and-paper test delivers only a crude or 
imprecise evaluation. The results of computer adaptive assessments are 
available immediately, not months after the test is taken. This allows 
teachers to use the test results to modify their teaching immediately, 
in order to provide extra instruction in the areas in which the student 
was weak. Computer adaptive assessments also provide not only an 
overall score, which can be used for little else but overall 
evaluation, but also diagnostic information on exactly what knowledge 
and skills the student lacks. For instance, a teacher 
might learn that a student can add, subtract, and multiply fractions 
but does not know how to divide one fraction into another. Many 
assessments give teachers lesson plan suggestions as well as results. 
Thus, the teacher might receive suggested lessons, examples, and 
practice problems for helping students learn how to divide fractions.
    In short, computer adaptive assessments have at least five 
properties that make them very useful to policy makers at all levels: 
(i) they can be much more accurate than a pencil-and-paper test that 
occupies the same time; (ii) their results are available immediately; 
(iii) their results are useful for diagnosis, not merely for rewarding 
someone who does well overall or punishing someone who does poorly 
overall; (iv) they can quickly generate lesson plans to improve a 
student's learning; and (v) they deter cheating and crude ``teaching 
to the test,'' as discussed below.
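    To make the adaptive mechanism described above concrete, the 
following is a minimal, purely illustrative sketch in Python of a 
simple ``staircase'' selection rule, in which a correct answer raises 
the difficulty of the next item and an incorrect answer lowers it. The 
item bank, difficulty levels, and function names are hypothetical and 
are not drawn from the testimony or from any state's assessment 
system; real adaptive assessments use far more sophisticated 
psychometric models.

    import random

    # Hypothetical item bank: each difficulty level maps to a pool of items.
    ITEM_BANK = {
        1: ["2 + 3", "4 - 1"],
        2: ["12 x 3", "84 / 7"],
        3: ["3/4 + 1/8", "2/3 x 9"],
        4: ["divide 5/6 by 2/3", "solve 3x + 5 = 20"],
        5: ["solve x^2 - 5x + 6 = 0", "simplify (2/3) / (4/9)"],
    }

    def administer(answers_correctly, start_level=3, num_items=5):
        """Ask num_items questions, moving up one difficulty level after
        a correct answer and down one after an incorrect answer; the
        final level serves as a rough proficiency estimate."""
        level = start_level
        asked = []
        for _ in range(num_items):
            question = random.choice(ITEM_BANK[level])
            asked.append((level, question))
            if answers_correctly(level, question):
                level = min(level + 1, max(ITEM_BANK))   # harder item next
            else:
                level = max(level - 1, min(ITEM_BANK))   # easier item next
        return asked, level

    if __name__ == "__main__":
        # Simulated student who reliably answers items at difficulty 3 or below.
        student = lambda level, question: level <= 3
        items, estimate = administer(student)
        print("Items administered:", items)
        print("Estimated proficiency level:", estimate)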
    Computer adaptive assessments prevent outright cheating as well as 
``teaching to the test.'' It is easy to make a computer adaptive 
assessment far more secure than pencil-and-paper tests are at present. 
The main way in which people cheat on pencil-and-paper tests is by 
inputting or changing answers in the period before or after the legal 
testing time. This method 
of cheating requires no sophistication or cleverness, which is probably 
why it is the only common method. While such behavior is easily curbed 
by having proctors deliver the tests, remain during testing, and remove 
the tests, states have so far refused to use proctors, citing cost 
concerns. (Whether such cost concerns are legitimate is not at all 
clear, but the point remains that pencil-and-paper tests are not 
proctored and therefore not secure.) In contrast, computer adaptive 
assessments can easily be designed to be electronically available only 
during the legal testing period. While a very sophisticated hacker 
might possibly hack into a computerized assessment and enable people to 
cheat, we have little or no evidence that school staff are willing to try 
complex or difficult methods of cheating.
    Because computer adaptive assessments draw upon a very large bank 
of questions and no two students can be expected to take exactly the 
same test, these assessments strongly deter ``teaching to the test'' in 
its crude form where teachers literally train students to answer 
particular questions. Of course, computer adaptive assessments do not 
and should not prevent teachers from helping their students excel by 
having them learn the knowledge and skills likely to be tested by the 
assessment.

    2. How does education research play a role in providing reliable 
information to parents? How can the federal government aid states and 
school districts in improving these efforts?

    Education research can be a much more reliable source of 
information to parents than are schools themselves. This is mainly 
because researchers do not feel a strong need to defend existing 
policies or support proposed policies. They can afford to be objective. 
In addition, researchers often bring modern scientific methods to bear, 
and these methods are sometimes less familiar to school and district 
staff. However, in order to help parents, it is essential that research 
(i) be held to a high scientific standard, (ii) be as timely as 
possible, and (iii) be made available to parents in an easily interpretable 
form. The federal government can be helpful on all these dimensions. By 
setting high scientific standards for its grantees and contractors, the 
Institute of Education Sciences can strongly encourage the use of the 
most scientific methods. By encouraging schools, districts, and states 
to build databases that take fairly standard forms, the federal 
government can ensure that research is timely. This is because delays 
in getting data are the main cause of slow research. Most schools, 
districts, and states will build accurate, fairly standardized 
databases given sufficiently strong incentives: they are collecting the 
information anyway. Finally, the federal government can encourage 
federally funded researchers to publish a version of their research 
that is intended for parents and other non-researchers. Non-profit 
organizations often play this ``translation'' role as well, and it is 
very important.
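    As a purely hypothetical illustration of what ``fairly standard 
forms'' might mean in practice, the short Python sketch below writes 
student records in one agreed-upon comma-separated layout; the field 
names and values are invented for this example and do not reflect any 
actual state or federal data standard.

    import csv
    import io

    # Hypothetical common layout that every district would use.
    FIELDS = [
        "student_id",          # de-identified, stable across years
        "school_year",
        "grade_level",
        "school_code",
        "district_code",
        "math_scale_score",
        "reading_scale_score",
    ]

    sample_rows = [
        {"student_id": "A0001", "school_year": "2010-11", "grade_level": 4,
         "school_code": "0042", "district_code": "17",
         "math_scale_score": 512, "reading_scale_score": 498},
    ]

    # When every district writes records in the same layout, researchers
    # can combine files from many districts without months of reformatting.
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(sample_rows)
    print(buffer.getvalue())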

    3. With such high standards for scientific evaluation, how can the 
federal government ensure that the research methodology is not overly 
cumbersome, leading to artificial results that are not relevant in a 
dynamic and fast-changing classroom?

    High research standards really have no effect on how quickly we 
produce research. It takes no longer to evaluate a rigorously conducted 
randomized controlled trial than it takes to evaluate the same policy 
in a less scientific manner. In fact, many researchers would say that 
evaluating a randomized controlled trial is faster because it is 
easier. There are three things that do slow education research down, 
and the federal government can improve two of the three. The first 
thing that makes education research rather slow is simply that students 
change slowly. Even the best curriculum in the world does not 
immediately raise students' learning. Depending on the intervention, we 
may have to follow students for a year or several years, and there is 
nothing that we can do about the pace at which students change. The 
second thing that slows down education research is data collection. 
While evaluation itself is quite fast, data collection is slow. 
Researchers still obtain data through painfully slow processes, in 
which it is quite normal for researchers to spend months if not years 
soliciting (even begging) for data, making their way through layers of 
administrators, and getting approved in long-drawn-out processes. This 
process need not be slow at all. If schools, districts, and states kept 
their data in a standardized form, in central repositories, researchers 
would not be forced to go through this process. Researchers with strong 
track records could be given a blanket approval so that their data 
requests were fast-tracked. The third thing that slows down education 
research is the reluctance of many educators to provide data or allow 
randomized 
trials on the policies in which they believe most strongly. Their 
reluctance is based on the fear that the research will not validate 
their strong prior beliefs. Although this problem is not wholly 
solvable, any intervention that receives federal funding could be 
required to provide data to researchers. This would not only help to 
ensure that federally funded projects get evaluated well and quickly, 
it would also create a ``culture'' of evaluation that is still absent 
in education.
                  representative virginia foxx (r-nc)
    4. During the hearing you mentioned that other countries 
(specifically in Europe and Latin America) have better administrative 
data sets than the United States, and you could therefore do better 
research in other countries. I think a specific example you cited was 
the Dutch school reform and choice movement. Why do other countries 
have better data sets?
    Is there something in the US prohibiting them from collecting the 
same data sets (e.g. student privacy concerns)? Please expand more on 
why other countries do a better job with administrative data sets.

    Most European countries and several Central and South American 
countries have much better administrative data sets than the United 
States. This is largely because these countries have more centralized 
systems of education, and the central education ministry requires 
schools and districts to upload their data in a standardized format. In 
the U.S., in contrast, each district has enormous control over its own 
data and reports only a tiny share to its state government: the data 
elements required under its state's accountability program and under No 
Child Left Behind. While American databases are improving as states 
develop longitudinal databases, many states have dragged their feet or 
succumbed to political pressure so that they are still far from having 
good databases, let alone the comprehensive databases of the 
aforementioned countries. The resistance to databases comes from 
interest groups who are afraid that information will expose their lack 
of contribution to student learning.
    The evidence suggests that the independence of U.S. school 
districts is a good thing for their productivity and their management. 
If they were centrally managed and did not have to compete at all with 
one another, American school districts would likely produce 
substantially less learning than they do now. However, it does not 
promote efficiency to give each district the right to keep its data in 
its own way, measure things according to its own lights, and create its 
own idiosyncratic data access procedures. Such lack of standardization 
greatly inhibits competition and productivity because it makes 
comparing schools and evaluating policies very difficult. We have an 
analogous situation for firms. Although having firms that are 
independently managed improves competition and productivity, giving 
each firm the right to report data in a completely idiosyncratic way 
would not make the market better. It is important for investors that 
measures of income, for instance, are fairly standardized across firms. 
Since schools actually engage in a far less diverse range of activities 
than firms, there is no reason--except for fear of exposure--why they 
should resist standardized reporting much more than firms do.
                                 ______
                                 
                                             U.S. Congress,
                                  Washington, DC, December 5, 2011.
Dr. Eric Smith,
20 Eastern Avenue, Annapolis, MD 21403.
    Dear Dr. Smith: Thank you for testifying before the Subcommittee on 
Early Childhood, Elementary and Secondary Education at the hearing 
entitled, ``Education Research: Identifying Effective Programs to 
Support Students and Teachers'' on Wednesday, November 16, 2011. I 
appreciate your participation.
    Enclosed are additional questions submitted by members of the 
Committee after the hearing. Please provide written responses no later 
than December 19, 2011 for inclusion in the final hearing record. 
Responses should be sent to Dan Shorts of the Committee staff who can 
be contacted at (202) 225-6558.
    Thank you again for your important contribution to the work of the 
Committee.
            Sincerely,
                                Duncan D. Hunter, Chairman,
         Subcommittee on Early Childhood, Elementary and Secondary 
                                                         Education.
                  representative duncan hunter (r-ca)
    1. Given that the focus of this hearing is to examine the most 
effective ways of utilizing student research to help teachers better 
understand students' instructional needs, it would be helpful to hear 
your thoughts on computer adaptive assessments. These assessments 
adjust automatically to each student's ability level, generating more 
difficult questions if the student is answering correctly and easier 
ones if the student is answering incorrectly. In doing so, these 
assessments enable teachers to pinpoint the proficiency level of each 
student across a range of subjects that correspond with the standards 
set by a state's curriculum.
    There are a few states that have already implemented computer 
adaptive assessments as a tool for measuring student achievement and 
growth--including Oregon--and a number of others that are interested in 
following suit, given that computer adaptive assessments provide 
essential and timely data that can more accurately illustrate student 
placement, student growth, and instructional needs.
    Can you provide the Committee with your views on computer adaptive 
assessments and whether they can be of benefit to teachers, 
administrators, parents, and ultimately students?
    2. In your testimony, you talk about the fact that strict 
application of scientific research is often difficult for classroom 
teachers because of the dynamic nature of the classroom. Can you 
provide some examples of other types of research that are beneficial to 
districts and schools?
                                 ______
                                 

       Dr. Smith's Response to Questions Submitted for the Record

    1. Computer adaptive assessments (CAA) have great potential, and we 
should encourage the thoughtful expansion of their use. CAA can provide 
the opportunity to help teachers more accurately tailor instruction to 
individual students' needs for both remediation and acceleration. If 
designed correctly, an adaptive test can also be somewhat diagnostic, 
helping the teacher or a computer program to identify a student's skill 
deficiencies. Adaptive tests are best used as formative assessments 
that help in guiding instruction and support. The data from adaptive 
assessments should lead to a flexing of the instruction provided to a 
student so that the student will be able to pass the summative, 
standards-based exam by the end of the year.
    2. I believe there is a need to research practices that are proving 
successful in the ``real world'' over time. For example, as a 
superintendent in Charlotte I learned a great deal by sharing 
strategies and performance data with other superintendents who had 
similar student populations. I learned from them which strategies were 
making a difference in learning outcomes and which were not 
successful. Another example was in Florida, where I served as 
Commissioner. During that time we prepared to build a database that 
would correlate school performance data and teaching strategies. Again, 
our intent was to learn what conditions led to success and what 
conditions led to failure. This is not to discount more rigorous 
scientific research, but I believe we can gain a fuller picture by 
expanding our research strategies in the ``real world''.
                                 ______
                                 
                                             U.S. Congress,
                                  Washington, DC, December 5, 2011.
Dr. Grover J. ``Russ'' Whitehurst,
775 Massachusetts Ave. NW, Washington, DC 20036-2013.
    Dear Dr. Whitehurst: Thank you for testifying before the 
Subcommittee on Early Childhood, Elementary and Secondary Education at 
the hearing entitled, ``Education Research: Identifying Effective 
Programs to Support Students and Teachers'' on Wednesday, November 16, 
2011. I appreciate your participation.
    Enclosed are additional questions submitted by members of the 
Committee after the hearing. Please provide written responses no later 
than December 19, 2011 for inclusion in the final hearing record. 
Responses should be sent to Dan Shorts of the Committee staff who can 
be contacted at (202) 225-6558.
    Thank you again for your important contribution to the work of the 
Committee.
            Sincerely,
                                Duncan D. Hunter, Chairman,
         Subcommittee on Early Childhood, Elementary and Secondary 
                                                         Education.
                  representative duncan hunter (r-ca)
    1. Given that the focus of this hearing is to examine the most 
effective ways of utilizing student research to help teachers better 
understand students' instructional needs, it would be helpful to hear 
your thoughts on computer adaptive assessments. These assessments 
adjust automatically to each student's ability level, generating more 
difficult questions if the student is answering correctly and easier 
ones if the student is answering incorrectly. In doing so, these 
assessments enable teachers to pinpoint the proficiency level of each 
student across a range of subjects that correspond with the standards 
set by a state's curriculum.
    There are a few states that have already implemented computer 
adaptive assessments as a tool for measuring student achievement and 
growth--including Oregon--and a number of others that are interested in 
following suit, given that computer adaptive assessments provide 
essential and timely data that can more accurately illustrate student 
placement, student growth, and instructional needs.
    Can you provide the Committee with your views on computer adaptive 
assessments and whether they can be of benefit to teachers, 
administrators, parents, and ultimately students?
    2. How can the purpose and operation of the national research and 
development centers, the RELs, and comprehensive centers be improved 
upon? Are these entities actually serving regional and local needs and 
assisting states, school districts, schools, and teachers to improve 
student achievement?
    3. The Institute of Education Sciences is responsible for 
evaluating federal programs for their impact on improving student 
achievement. However, the Office of Management and Budget (OMB), the 
Department's Office of Planning, Evaluation, and Policy Development, the Government 
Accountability Office (GAO), and private entities also evaluate federal 
programs for their effectiveness. Is the current system working? Are 
each of these agencies using the same metrics in evaluating programs? 
Which agency is in the best position to evaluate federal programs?
                                 ______
                                 

    Dr. Whitehurst's Response to Questions Submitted for the Record

                            chairman hunter
    Computer adaptive assessment has already been incorporated into 
psychometrically advanced assessment programs, including those carried 
out by the U.S. Department of Education's National Center for Education 
Statistics. For example, the Early Childhood Longitudinal Studies, 
which follow a large sample of children through school, carry out all 
of their student achievement assessments using adaptive technologies. 
Adaptive testing shortens test times, allows children to get more 
questions that probe their understanding (rather than a lot of 
questions that are too easy or too hard), and requires the development 
of assessment scales that are more likely than traditional assessments 
to be aligned from grade to grade. The timeline for feedback to 
educators from computer adaptive testing is orders of magnitude shorter 
than the timeline for obtaining results from paper and pencil tests. 
Finally, the costs of computer adaptive testing when spread over a few 
years to amortize start-up investments in technology are lower than the 
costs of traditional testing. The federal government, in my view, 
should not stand in the way of the use of computer adaptive testing as 
it has done through the Department's interpretations of NCLB assessment 
requirements. And to the extent that discretionary funds are available, 
Congress should consider providing money to states to advance the use 
of this technology.
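    As a back-of-the-envelope illustration of the amortization point, 
the sketch below compares per-year costs when a one-time start-up 
investment is spread over several years; every dollar figure is 
invented for illustration and is not taken from the testimony or from 
any actual assessment program.

    # Hypothetical figures, chosen only to show the arithmetic.
    YEARS = 5
    startup_technology_cost = 400_000   # one-time hardware/software outlay
    adaptive_cost_per_year = 150_000    # licensing, scoring, administration
    paper_cost_per_year = 300_000       # printing, shipping, hand scoring

    adaptive_total = startup_technology_cost + adaptive_cost_per_year * YEARS
    paper_total = paper_cost_per_year * YEARS

    print("Adaptive testing, amortized per year:", adaptive_total / YEARS)
    print("Paper-and-pencil testing, per year:  ", paper_total / YEARS)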
    As I indicated in my testimony, the RELs are not working well in 
that much of the work they produce is of little relevance to the needs 
of those responsible for schools in their regions. This has been the 
case for 40 years. My recommendation is that in lieu of authorizing 
RELs, Congress should provide a voucher to state departments of 
education that could be used specifically to purchase data analytic 
services that use statewide longitudinal databases to address questions 
of immediate importance to decisions about education policy at the 
state level or among numerous school districts within the state. These 
analytic services could be obtained from any of a number of entities, 
including the existing RELs, that pass muster with the Institute of 
Education Sciences in terms of the quality of their research services. 
IES should retain a review function with respect to the analyses that 
are commissioned with the research vouchers to make sure that the 
conclusions reached are justified by the methods deployed.
    The national research and development centers serve an important 
function in providing for concentrated team-based research on education 
topics of national interest. However, it is a mistake, in my view, for 
Congress to dictate the topics on which the R&D centers should focus 
through authorization language per the current version of ESRA or the 
amounts that should be carved out for R&D centers vs. regular 
competitive grants per appropriations language. The director and 
professional staff of the Institute of Education Sciences, with the 
advice and consent of the National Board for Education Sciences, are in 
the best position to know when there is both need and capacity in the 
field for an R&D center on a particular topic. In its efforts to comply 
with Congressional intent, IES frequently has held competitions for R&D 
centers on particular topics that generated only a few applications and 
none of quality. This would not have happened if the hands of IES had 
not been tied on R&D centers through authorizing or appropriation 
language.
    Comprehensive centers are not part of ESRA and are not administered 
by IES, although frequently the contractor for a regional comprehensive 
center is the same as the contractor for the regional REL. The 
comprehensive centers are part of a patchwork of technical assistance 
providers that various offices of the Department contract with through a 
variety of program accounts. In my view the technical assistance 
entities that are funded through ED, including the comp centers, 
provide services of uncertain quality that are rarely driven by 
customer demand. Similar to my recommendation with regard to the RELs, 
I suggest that Congress consider shifting to a mechanism in which some 
portion of program funds that are appropriated pursuant to ESEA, IDEA, 
Perkins, and other big budget programs is reserved for use by state 
departments of education to purchase technical assistance for 
implementation of the federal education programs. The Department could 
be authorized to create a list of contractors who have demonstrated the 
capability of carrying out technical assistance on particular topics.
    There are two important distinctions that are relevant to answering 
this question. The first is between evaluations of impact vs. 
implementation. Impact evaluations address the question of whether a 
program has a causal effect on the outcomes it is intended to 
influence. For example, an impact evaluation of Reading First would ask 
whether the reading achievement of participants in the program is 
accelerated compared to similar students who are not participants. An 
implementation evaluation, in contrast, would ask whether the funds for 
the program were expended as dictated by legislation and regulation. 
For example, were Reading First funds deployed to provide professional 
development for teachers as required in NCLB?
    The second distinction is between primary evaluations, which are 
carried out through the collection and analysis of original data (e.g., 
assessments of students conducted by the evaluation contractor), and 
secondary evaluations, which summarize and provide recommendations and 
conclusions based on a synthesis of results from previously published 
studies and other data sources.
    OMB and GAO do not carry out impact evaluations and rarely engage 
in primary evaluations. Rather they summarize what is known from 
primary data collections and from simple investigatory techniques such 
as engaging in interrogatories of program participants or program 
implementers.
    The Department's Office of Planning, Evaluation, and Policy 
Development (OPEPD) has limited itself in recent years to 
implementation evaluations that are based on primary data collection 
and quick turn-around secondary evaluations that are of high relevance 
to the Secretary. OPEPD does not presently carry out impact 
evaluations, although it used to and nothing in statute prevents it 
from doing so.
    Private entities sometimes carry out impact evaluations of federal 
programs but these are typically conducted years after the program has 
been implemented and are based on available administrative data, e.g., 
existing school records, rather than primary data collection that is 
designed ahead of time to answer a range of planned questions. Thus the 
type of impact evaluation of a federal program that might be carried 
out, for example, by a university-based economist would only very 
rarely have the timeliness or the depth and breadth to answer questions 
that are important to Congress and the administration in decisions 
about program authorization and funding.
    Presently, only the Institute of Education Sciences carries out 
large scale impact evaluations of federal programs. None of the other 
entities listed in the question overlaps with IES in this function. 
This is a critical function that is being carried out well by IES.
    Presently, only IES and OPEPD carry out large scale primary 
implementation evaluations. OPEPD has generally carried out its 
implementation studies well, but there are significant inefficiencies in 
having two separate divisions of the Department involved in evaluating 
a single program. For example, program implementers may be required to 
answer a similar set of questions and respond to duplicative data 
requests from IES and OPEPD. Some of these problems of overlap cannot 
be solved by better coordination between IES and OPEPD because the 
activities are funded by different contracts that are awarded on 
different timetables to different contractors. Further, there is always 
a legitimate concern about whether an office, OPEPD, that develops 
policy for and with the Secretary and that has no independence from the 
Secretary should be charged with evaluating whether programs the 
Secretary is charged with implementing are being carried out as 
intended in statute. For these reasons, it is my recommendation that 
IES be given the sole authority by Congress to carry out impact and 
implementation evaluations that are either required or permitted in 
program legislation. This has been the historical drift both within 
legislation and in the division of responsibilities between IES and 
OPEPD as administratively determined by the Department. It would be 
wise to cement this division of labor legislatively, in my view. In 
doing so Congress should designate funds specifically for evaluation 
purposes rather than setting aside a percentage of funds in program 
authorizations to be used for ``national activities, including 
evaluation.'' The latter language is problematic in that it creates a 
competition between IES and the Department's program offices for funds 
from the same pot, and it empowers the Secretary to throttle funds for 
evaluation activities that might expose performance issues with 
programs with which the administration is identified politically. In my 
view, all education programs with an annual price tag above a threshold 
of $20 million should be subject to an implementation and impact 
evaluation before they are reauthorized. These evaluations should be 
carried out by IES with funds specifically targeted to that purpose by 
Congress.
                                 ______
                                 
    [Whereupon, at 11:27 a.m., the subcommittee was adjourned.]