How to Get the Most out of Ed Tech Data

With ed tech programs, devices and technologies proliferating around the country, school administrators and teachers face a difficult decision. Which program, device or technology works best for the needs of their schools? After all, a tool or program that works for students in one district may not work for students in another.

That’s where data comes in. Purchasing decisions based on data and analysis are essential to the success of any school district.

Gathering Data…

To assess the potential of an ed tech device, program or technology, you first have to know how to collect the right data. Here are a few questions to help you decide what kind of data to collect.

  • Is the technology supporting expected learning improvements, such as higher test scores or better engagement?
  • Is the technology being used by students the way the educators (and designers) intended?
  • Is the technology helping to improve students’ social-emotional learning?
  • Is technology making it easier for educators to pivot and individualize their lessons?


Currently, teachers and administrators use a blend of two types of tools to determine the success of a potential program and its effect on academic performance:

  • Digital tools, including digital gradebooks, Common Core assessments, test banks, dashboards and digital instruction tied to textbooks; and
  • Non-digital tools, including student observation, traditional gradebooks, in-class projects, student journals and in-class group discussions.

…and Evidence

Of course, you need evidence to support your purchasing decisions. How can you find the evidence you need? Mathematica suggests weighing the pros and cons of each type of evidence you gather.

  1. Anecdotes: These include personal experiences and testimonials that help tell the story. Mathematica recommends using anecdotes to identify which tools are worth researching and testing in greater detail.
  2. Descriptions: Often found in infographics and other marketing materials, this evidence measures a tool’s results over a specific period of time. Without a comparison group, however, it cannot establish a tool’s true effectiveness.
  3. Comparisons: Correlational evidence is a good starting point, but it cannot rule out other possible explanations for why, say, a tool raised the scores of one group but not those of a group without the tool. This evidence is easily misinterpreted, since people frequently treat correlation as proof of success.
  4. Causal Analysis: This is “the only way to determine effectiveness with confidence.” Think of it as an “apples to apples” comparison in which the only difference is the presence of the ed tech tool or program. The gold standard is a randomized controlled trial; a minimal sketch of the idea follows this list.
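To make the “apples to apples” idea concrete, here is a minimal sketch (in Python) of a randomized comparison. Everything in it is hypothetical – the student IDs, the score distributions and the assumed three-point boost – so treat it as an illustration of the logic, not a real evaluation.

    # A minimal sketch of a randomized comparison. Random assignment is
    # what makes it "apples to apples": on average, the two groups differ
    # only in whether they used the ed tech tool.
    import random
    import statistics

    # Hypothetical pre-test scores, keyed by student ID.
    students = {f"student_{i}": {"pre": random.gauss(70, 8)} for i in range(200)}

    # Randomly split students into a tool group and a control group.
    ids = list(students)
    random.shuffle(ids)
    tool_group, control_group = set(ids[:100]), set(ids[100:])

    # Simulate post-test scores, assuming the tool adds about 3 points.
    for sid in ids:
        bump = 3.0 if sid in tool_group else 0.0
        students[sid]["post"] = students[sid]["pre"] + random.gauss(2.0 + bump, 5.0)

    def mean_gain(group):
        return statistics.mean(students[s]["post"] - students[s]["pre"] for s in group)

    # With random assignment, the difference in average gains is a fair
    # estimate of the tool's effect.
    effect = mean_gain(tool_group) - mean_gain(control_group)
    print(f"Estimated effect of the tool: {effect:+.1f} points")

Because the groups are formed by chance, any competing explanation – stronger students, a better teacher – is equally likely to land in either group, which is exactly what the comparisons in item 3 cannot guarantee.
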
The Difficulty with Data

Educators increasingly turn to data to understand the link between instruction and learning. There’s just one small problem: data, for many people, can be quite intimidating and difficult to work with.

According to “Teachers Know Best: Making Data Work for Teachers and Students,” a study of more than 4,600 teachers by the Bill & Melinda Gates Foundation, many teachers find data-collecting tools (including dashboards and digital gradebooks) overwhelming, incompatible, inconsistent and too slow. The study highlights that an astounding 67 percent of teachers “are not fully satisfied with the effectiveness of the data and tools they have access to on a regular basis.”

There’s also no single way that teachers and administrators use data when researching potential ed tech programs – or even when they teach. “Teachers Know Best” draws attention to six teacher archetypes and the ways they use data such as test scores, class participation rates, student demographics (including gender and subsidized lunch status) and individual student goals.

  1. Data Mavens: Those who use data to develop individualized learning plans.
  2. Growth Seekers: Those who use data to adapt their teaching styles and methods.
  3. Aspirational Users: Those who believe in the power of data but find it intimidating.
  4. Scorekeepers: Those who use data to prepare students for tests and assessments.
  5. Perceptives: Those who rely on direct student observation in place of data.
  6. Traditionalists: Those who use grades as a primary data source.

Before you start analyzing a particular ed tech tool, it helps to think about what kind of educational data user you are – or want to be.

Complement, Not Substitute

What does a successful ed tech program look like? How can administrators and teachers be certain it’s working for their school district?

There are, of course, right and wrong ways to use data when measuring the success of a specific program. The important thing to remember is that data, however it’s gathered and however it’s used, is a tool. It should complement interpersonal strategies (interactions between teachers and administrators, or teachers and students) – not serve as a substitute.

There are three things to keep in mind when employing data.

  1. Data should be comprehensive.
  2. Data should be organized.
  3. Data should be accessible.

The Data Quality Campaign offers a tip sheet to help decision-makers use data in the most responsible, effective way possible. Some tips include:

  • using data as a continual tool for improvement, not just on a one-time basis;
  • engaging students with existing data to better solve challenges and set goals; and
  • making sure to stay up-to-date on new data trends through ongoing professional development.

Privacy, of course, is always a concern – especially when it involves sensitive data about individual students (including their demographics and academic scores). Any data collection and analysis should keep student privacy top of mind.

And data collection and analysis should always be about education, not marketing. As Forbes notes in an article on student data privacy and ed tech entrepreneurs, “ed tech companies should be in business to serve students, not the other way around.”

Step by Step

One new tool available to school districts is the Ed Tech Rapid Cycle Evaluation (RCE) Coach. Designed by the U.S. Department of Education’s Office of Educational Technology and Mathematica, the RCE Coach gives teachers and administrators a solid, helpful guide to navigating the complexities (and the intimidation) of using data to determine whether a particular ed tech tool is right for a school district’s needs.

The RCE Coach helps school administrators with the step-by-step tasks of collecting data, including:

  • how to determine the appropriate design for your research,
  • how to effectively collaborate with ed tech providers,
  • how to prepare your hard-earned data for analysis, and
  • how to calculate a tool’s effect on the outcomes your district cares about (a rough sketch of one such calculation follows this list).
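As an illustration of that last step, here is one common way to express a tool’s effect: Cohen’s d, the difference between group means scaled by their pooled standard deviation. This is a generic effect-size calculation, not the RCE Coach’s own method, and the scores below are made up.

    # One common effect-size measure: Cohen's d, the standardized
    # difference between two group means. Illustrative only - this is
    # not the RCE Coach's implementation.
    import statistics

    def cohens_d(treatment_scores, control_scores):
        """Mean difference divided by the pooled standard deviation."""
        n1, n2 = len(treatment_scores), len(control_scores)
        mean_diff = statistics.mean(treatment_scores) - statistics.mean(control_scores)
        # Pool the two sample variances, weighting each by its group size.
        pooled_var = (
            (n1 - 1) * statistics.variance(treatment_scores)
            + (n2 - 1) * statistics.variance(control_scores)
        ) / (n1 + n2 - 2)
        return mean_diff / pooled_var ** 0.5

    # Hypothetical reading scores from a small pilot.
    with_tool = [78, 82, 75, 88, 91, 70, 84, 79]
    without_tool = [72, 75, 70, 80, 85, 68, 77, 74]
    print(f"Cohen's d: {cohens_d(with_tool, without_tool):.2f}")

By convention, a d of roughly 0.2 reads as a small effect, 0.5 as medium and 0.8 as large – a useful yardstick when comparing tools evaluated on different tests.
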
Time Matters

Given the lightning-fast speed at which ed tech tools change, it’s more essential than ever that research and data-collection methods keep pace. A tool can be updated or changed in the blink of an eye.

Time, as always, is of the essence. While it’s important to cut down on research time, evaluations should not be too rushed. Research quickly, but effectively.

Missouri’s largest school district formed a focus group before fully implementing its 1:1 program. See how its Kajeet pilot program led to success and the deployment of a full mobile Internet program at record speed.
