97BN
According to UNESCO (2023), the annual financing gap in education funding from 2023 to 2030 in low- and lower-middle-income countries is estimated at USD 97 billion. Philanthropy is an important force in the global education sector. It can be a disruptor to the structures and silos of the global education community, with different ideas, perspectives and networks. It can build bridges and it can support innovation. And often, it can fund where others can't.
The International Education Funders Group (IEFG) is the largest global network of philanthropic actors funding education. We are all passionately engaged in
local, national and international grant-making within diverse organisations, with differing priorities and individual strategies but a shared belief in the power of education and a shared drive to improve the performance of education systems worldwide.
Visit us: https://iefg.org/
Follow us: https://www.linkedin.com/company/international-education-funders-group-iefg
IEFG BIG Series: Setting The Bar for Good EdTech
Welcome to the IEFG Brains in Gear series. In this episode, we explore the maze of standards for EdTech. Procurement decisions and the quality of EdTech hinge on standards, but who sets them, and how can they help all the stakeholders who use them? Join us as we unpack this.
Our hosts will be:
- Libby Hills from the Jacobs Foundation is the Co-Lead of the Learning Schools portfolio, which focuses on promoting evidence use in schools and EdTech globally through impact investing and grant-making.
- Gouri Gupta from the Central Square Foundation leads CSF's work on education technology. She works with a portfolio of EdTech products that bring high-quality, contextual solutions to young children from low-income backgrounds to support learning.
They will be joined by:
- Karl Rectanus: The founder and former CEO of LearnPlatform, bringing deep insights into the development and evaluation of educational technology platforms.
- Jairaj Bhattacharya: Co-Founder of ConveGenius, sharing his expertise on innovative edtech solutions and their impact on learning outcomes.
Here are some resources if you would like to learn more about standard-setting in EdTech:
- Learn more about EdTech Tulna here
- Learn more about LearnPlatform here
- ISTE Standards in EdTech
- EdTech Hub: EdTech Standards
This podcast was brought to you by the International Education Funders Group, curated and edited by Anjali Nambiar, with post-production by Sarah Miles. You can learn more about the IEFG at www.iefg.org.
Subscribe to the podcast so you never miss an episode! And don't forget to rate and recommend this podcast to your colleagues.
You can follow the IEFG on LinkedIn here. https://www.linkedin.com/company/international-education-funders-group-iefg
Holding Up the Bar for Good EdTech
Karl: The joke in education is standards are great because everybody has their own.
Jairaj: In any general public procurement process, the system is always incentivized for the lowest-price bidder. After certifications and quality standards came into play, procurement also looks at quality.
Gouri: We will need flexibility to ensure that these standards get adopted. Otherwise they will just be great standards that sit on a shelf.
Libby: Standards aren't really enough to actually move the dial. So how do we actually also provide support to help companies meet those standards? How can we open up access to certain services that might help them to progress up those levels?
Anjali: Welcome to the IEFG Brains in Gear series. In today's episode, we examine standards in EdTech: who sets them, and how they can help the stakeholders who use them. The hosts for today's episode are Libby Hills from the Jacobs Foundation and Gouri Gupta from the Central Square Foundation. They will be joined by Karl Rectanus, founder and former CEO of LearnPlatform, and Jairaj Bhattacharya from ConveGenius.
Libby: Hi, I'm Libby from the Jacobs Foundation, and today's episode is about: what's the bar for good EdTech, and how do we set it? When we're talking about the bar, what we mean is some way of defining what good looks like. So essentially standards, everyone's favorite topic, but we're going to make it exciting with the conversation we're going to have today. I'm really excited to be co-hosting this episode alongside Gouri from the Central Square Foundation, who also thinks a lot about standards, like we do at the Jacobs Foundation, but both of us have slightly different perspectives, which we're looking forward to unpacking a bit in this episode.
Gouri: Hi, I'm Gouri from the Central Square Foundation.
Thank you, Libby. Excited to co-host this episode. Today, we'll explain why we think a bar does need to be set to define what good edtech looks like, and explore some of the questions that we've been asking all along, including where the bar should be set, how it should be set, and who should apply it. With us today, we have Karl Rectanus, formerly the founder and CEO of LearnPlatform, a leader in evidence use in education decision making, and Jairaj Bhattacharya from ConveGenius, an edtech platform on a mission to change the way Bharat learns.
Jairaj: Great to be here. Thank you, Gouri. Looking forward to it.
Gouri: Let's explore the fundamental question: why do we need standards? Libby, what's your perspective, and can you give us a real-life example of where standards have been useful?
Libby: Yeah, absolutely. My real-life example is here with us on the podcast: Karl. A lot of Karl's work and thinking has really inspired us at the Jacobs Foundation, so I'm actually going to turn to you first, Karl, to hear your perspective on this. Maybe as a starting point, you could tell us a little bit about what you were doing at LearnPlatform, what motivated you to start it, and how it connects to this question about standards.
Karl: Yeah, thanks. And thanks for all the work that both of your organizations do, Libby and Gouri. For those who aren't familiar, LearnPlatform was used by school districts, states, and ultimately providers to figure out which edtech they were using, whether it was working, how, for which students, and at what cost.
We launched in 2014 because there was just not a lot of data. We built a technology we called Impact, a rapid-cycle evaluation engine. Essentially, it analyzed usage data, student achievement information, demographic data, teacher feedback, student experience, and cost; did the equivalent of a third-party evaluation, a control-based or comparative study or analysis; and created visualizations to make it much easier for decision makers to see what they were using, to understand the effects of those tools or interventions in their own situation, and to communicate that to their peers, partners, and others to inform operational, instructional, and financial decisions.
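To make the kind of analysis Karl describes a little more concrete, here is a minimal sketch in Python of a rapid-cycle comparative analysis: compare achievement for students who used a tool against a comparison group, and report a standardized effect size. This is a sketch of the general technique, not LearnPlatform's actual code or methodology, and the data and names are hypothetical.

```python
# Minimal sketch of a comparative analysis: difference in means plus a
# standardized effect size (Cohen's d) between tool users and a
# comparison group. Hypothetical data; not LearnPlatform's methodology.
from statistics import mean, stdev

def cohens_d(treated: list[float], comparison: list[float]) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treated), len(comparison)
    s1, s2 = stdev(treated), stdev(comparison)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treated) - mean(comparison)) / pooled

# Hypothetical post-test scores for users vs. non-users of an edtech tool.
users = [72.0, 68.5, 80.0, 75.5, 77.0, 69.0]
non_users = [65.0, 70.0, 66.5, 71.0, 64.0, 68.0]

print(f"mean difference: {mean(users) - mean(non_users):+.1f} points")
print(f"effect size (Cohen's d): {cohens_d(users, non_users):+.2f}")
```

An engine like the one Karl describes would layer usage, demographic, and cost data on top of a core contrast like this one, and wrap it in visualizations for decision makers.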
Libby: Great. Thanks, Karl. When you were tackling some of these questions about helping schools to figure out if what they were using works, how were you setting the bar?
Like, what standards were you using at LearnPlatform to do that?
Karl: You know, the joke in education is standards are great because everybody has their own. That's a bit of a problem, right? When "standards" is spelled with two S's, it's a challenge: everybody gets a chance to decide. In the research community, though, in education research, the gold standard has traditionally been the randomized controlled trial, the RCT.
And for decades, only the RCT counted. There are a number of issues with randomized controlled trials, but one is that, by definition, some students will get access to an intervention and some won't, chosen at random. That creates a challenge for education institutions that are committed to delivering effective education practices to all students. An RCT also takes a long time and is quite expensive. These RCTs have been the standard in the past. However, when we activated this work, we said: even with limited information, correlational, comparative, and control-based analyses are all valid to help inform decision making.
Libby: So the way you've described how you've been thinking about standards really relates to standards around evidence, which is quite an emphasis of how we apply standards at the Jacobs Foundation. But I know there are also different standards and different ways of thinking about standards. So Gouri, it'd be great to hear how this differs from how you think about them.
Gouri: Thanks, Libby. I'll just start with some statistics from India. In the next year, we have about 37 state governments who have expressed interest in using some form of edtech in over 110,000 schools, and they're keen to invest about 470 million on ICT.
And this is supported by a very dynamic supply of maybe 6,000 edtech products in India currently catering to K-12. Historically, the focus of procurement in India has always been on hardware, as it is the most capital-intensive and the easiest element to spec. The learning software has often been ignored, primarily because of the lack of awareness and standards on what good-quality edtech is, which makes it very difficult for decision makers in government to evaluate. To address this issue at the Central Square Foundation, where we work very closely with many state governments in India, we felt the need to create standards, across a variety of use cases, on what good edtech is, and hence had the opportunity to partner with IIT Bombay, a premier academic institution in India, to build them.
Tulna is the name of the set of standards we built together; it means "comparison" in Sanskrit. What Tulna is trying to do is allow for meaningful comparison between products, but more importantly, create a shared language for what good edtech looks like. Tulna as a framework primarily comments on whether a product has been designed right. There are three parts to the framework: the first is content quality, the second is pedagogical alignment, and the third is technology and design. The first use case we envisaged for Tulna was that it would aid governments in procurement decisions relating to edtech, and hence we needed to keep the framework fairly flexible. There are different ways in which governments have used Tulna.
While some governments have used it to filter out bad products, others have used it to inform the technical part of the RFP. And depending on how governments want to use it, the Tulna framework is fairly flexible, while the robustness of the framework remains intact. It's exciting, because we're hoping that with Tulna we can change the narrative in the Indian edtech ecosystem from push-based marketing to demand for quality. It's really exciting to have Jairaj on the podcast here with me. Jairaj leads one of the largest edtech organizations in India, supporting many state governments in bringing high-quality learning solutions into the hands of children. Jairaj, we'd love to understand your perspective on the need for these standards and how you have been able to use them.
Jairaj: We have been an active part of using Tulna right from when it was launched. Earlier, there was no way the state education departments could compare the software they were buying, or define what counts as quality software. For example, when we talk about personalized and adaptive learning, these are two very heavy words. The customer here, the education department, does not really have a standard template to compare how personalized and adaptive the learning in one product is versus another. So essentially, when you're doing large-scale procurement, you need templates that define what exactly personalized and adaptive learning means, right?
What does a digital classroom environment mean? So Tulna has played a role in, firstly, creating an input framework. I can give a couple of examples. In one, early in 2019, we used Tulna in a procurement for a large-scale intervention across almost 2,000 schools. Most recently, Tulna was also used for an outcome-based payment model by the think tank NITI Aayog, where Tulna served as the template for quality standards and the model itself was designed around outcomes. So there are many models in edtech now that follow Tulna as part of the procurement itself: you need Tulna as a certification, or Tulna applied by the technical committee, to qualify for an edtech procurement. So Tulna is being used now, and we are actually very happy that there is a focus on the quality of software as a standard.
Libby: A quick question: as a product company that's evolving so rapidly and scaling at such a pace, how are standards useful to you? Do they create incentives for the product to be built in a certain way? And what has been your experience so far?
Jairaj: So in any general public procurement process, the system is always incentivized for the lowest-price bidder. After certifications and quality standards came into play, procurement also looks at quality, right? How do we get a quality product that delivers impact? Tulna has helped us build a quality framework for software solution procurement at large scale. Not just that: right now we are also looking at projects where we can bid on outcome-based payment models for edtech in India with government schools. That has been possible because we have a quality framework like Tulna, which is completely unbiased, built with academic support from IIT, and equally fair to everyone.
Overall, I think creating an interoperable quality framework is very important. Right now we are operating at the input level: what are the real features required for a certain kind of edtech software, like PAL, personalized adaptive learning? What does a product actually need to have to qualify as PAL? Going forward, though, I think these quality frameworks will also be based on outputs and the integration of outputs: when you're delivering learning outcomes, those data streams can also be used as a framework to understand quality. And eventually on outcomes, once the definition of outcomes becomes clear and a shared taxonomy, an interoperable framework, is established between competitors.
Libby: So interesting to hear about the possible future trajectory for Tulna, with more of an emphasis on that output piece and outcome piece. That's actually where we've been concentrating quite a bit of our thinking around standards: how can we use standards to assess how much evidence a company has supporting the impact claims they're making?
We're using a similar framework to the one Karl was using at LearnPlatform, which has four levels defining different degrees of rigor of evidence, and we've been applying it at quite a large scale, across hundreds of companies, to really see the picture in terms of who has evidence. We've been doing that because, as many people listening will know, questions have been raised about what evidence-based edtech is, so we wanted to generate some data on what the status quo actually looks like. Applying these four levels, these standards, means looking at the studies companies have and asking: first, do they have a study at all? And if they do, where does it sit within this framework? Are they at the starting point of their evidence journey, with something like a logic model, or are they really quite advanced? Karl was talking about the idea of the RCT being the gold standard. So where do they sit in that framework?
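For listeners unfamiliar with the four evidence levels Libby and Karl keep referring to, the sketch below encodes the ESSA tiers as a simple lookup. The tier names are the ones defined in ESSA; the `classify` helper and the study-design labels are illustrative assumptions for this sketch, not an official tool.

```python
# The four ESSA evidence tiers as a lookup table, with an illustrative
# helper that maps a study design to its tier. Tier names are from ESSA;
# the design labels and classify() helper are assumptions for this sketch.
ESSA_TIERS = {
    1: "Strong Evidence (well-designed and implemented experimental study)",
    2: "Moderate Evidence (quasi-experimental study)",
    3: "Promising Evidence (correlational study with statistical controls)",
    4: "Demonstrates a Rationale (logic model informed by research)",
}

DESIGN_TO_TIER = {
    "rct": 1,
    "quasi_experimental": 2,
    "correlational": 3,
    "logic_model": 4,
}

def classify(design: str) -> str:
    tier = DESIGN_TO_TIER.get(design)
    if tier is None:
        return "No evidence yet; a logic model (Tier 4) is the starting point."
    return f"Tier {tier}: {ESSA_TIERS[tier]}"

print(classify("correlational"))
```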
Gouri, we'd love to hear more about some of the challenges you've experienced, some of the operational issues with actually using Tulna in practice.
Gouri: Yeah, there have been many. Very interestingly, when we started, we thought there would be huge barriers just in the willingness to adopt Tulna within government systems. But governments were very willing to start using the Tulna standards, because I do think they believed they needed some definition of what good edtech looks like.
It was just making their job easier, right? But how do you integrate these standards within the RFP process? There we've gone through multiple iterations with multiple governments. The questions that get asked are: at what stage do we introduce these standards? Do we introduce them as hygiene criteria, to filter out bad-quality products? Do we use them at the technical stage, when we are evaluating and scoring solutions? Do we use them just to know whether products meet a certain basic level of quality? That's been an iterative process, and what we've learned is that we will need flexibility to ensure that these standards get adopted. Otherwise, they will just be great standards that sit on a shelf. And it's been a tremendous effort on the part of IIT Bombay to work with different governments, and we have four government procurements with Tulna so far, to adapt these standards to the needs of the government.
So I think the flexibility is important, but I'd love to hear from you.
Libby: Yeah, Karl, I'd love to bring you in here, because I've been talking about this work we did at the foundation to assess evidence across hundreds of companies, but you were the one actually designing and doing this work with us. So what were some of the challenges you experienced?
Karl: You know, it's super interesting, because I think the experience of Tulna in India has a lot of similarities with the U.S. One of the largest accelerations of the market in the U.S. around evidence was the adoption of the four levels of evidence you described, from a logic model demonstrating a rationale all the way to strong evidence. The fastest acceleration came when that was adopted into policy within the Every Student Succeeds Act, ESSA. Just because it was adopted into policy, however, didn't mean it was real, or that it could get activated effectively. But by defining it in policy, organizations could have a shared definition of what it meant to be quality. That lowers the amount of confusion for users and buyers, and it lowers costs. As an example, for about eight years after we launched LearnPlatform, we were working only with buyers, districts and state buyers. The activation of this policy, the support around it, and the light it started to shine on the fact that buyers were asking for evidence was a game changer in engaging providers, those like Jairaj who are focused on building and delivering effective interventions.
We saw a massive increase in the amount of evidence building that providers started activating around 2021-2022 in the U.S. We also analyzed which of the top 100 edtech products used in the U.S. had published public research. In 2021-2022 that number was around a quarter: about 25 percent of the top 100 had published research that aligned with any of the ESSA levels, from level four, the lightest, all the way to the strongest. But most of that evidence was at the lightest levels, and it wasn't visibly active. If we fast-forward over the last 12 to 18 months, we've seen that number increase by 50 percent, so now more than a third of edtech providers are publishing evidence and sharing it with the public, and buyers, those consumers, understand it more, understand those levels. By shining a light on that, I think philanthropy and government leaders have done a great job. That's one of the benefits of having these standards: the market can take them and run with them. It has definitely led to an increase in the amount of evidence being created and distributed.
Libby: That's really interesting, Karl. So the benefit of standards isn't just "hey, let's define what good looks like"; having standards can actually motivate people to meet them and get better against that bar, which is super interesting. Some of our own thinking has been: okay, standards aren't really enough to actually move the dial. So how do we also provide support to help companies meet those standards? How can we open up access to certain services that might help them progress up those levels? And it's been great to see interest in that approach.
Karl: It has been game-changing for a lot of organizations at different stages too, whether they're early-stage companies, growth businesses, or existing unicorns. Traditionally, the value of that research has been limited to product development. But with the support of philanthropy and investors in this space, they're recognizing that it's great for adoption, sustainability, and growth, and obviously there's a high priority on getting the best outcomes and shared learning for the entire market.
Libby: How do you think about these standards evolving over time? How much have you had to change them to keep pace with how edtech is evolving, and with how the industry and buyers are thinking about edtech? We'd love to get both your and Jairaj's perspectives on how these standards have evolved and need to evolve.
Karl: I'll go first, and I'm definitely interested in Jairaj's perspective. In the U.S., with the four levels of evidence in ESSA, I think they did a really good job of acknowledging something you mentioned earlier, Gouri, which was flexibility. The approach taken was to make the levels relatively simple, not going too deep into details to start, and so to provide a framework that could be scaffolded and that will certainly evolve, especially as we think about, for example, AI, predictive assessment, or personalized learning, which traditional research hasn't been set up for. But the overall framework is very valid, and I think we'll see evolution within each of those four levels as interventions become more unique and valuable for students and teachers. Jairaj, I wonder about your perspective, especially in your work in Bharat.
Jairaj: Thanks, Karl. I think frameworks are important, but as we see in India, from the perspective of a product company, engaging in something like an RCT takes a lot of convincing. It's just very risky, right? I'd like to understand more about how it has worked in the U.S., but here in India it's been really challenging to convince even our competitors to do an RCT at scale. However, there is a possibility of creating AI-based models using the data coming out of edtech software, simple item-based models, where we could potentially get real-time access to edtech products' data. That data could evolve into some understanding of how good a product is, in terms of attributing impact. So for a country like India, and I'd definitely like to get Gouri's views on this: if you were to get a hundred companies to build evidence through RCTs, versus getting a hundred companies to integrate through some kind of API structure that gets data from their products, how would we really scale? What do you think? Right now we're at the input stage, basically looking at whether the features are available or not.
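One way to picture the API-based integration Jairaj is floating is a shared event schema that every product reports into, so an external evaluator can aggregate evidence across competitors. The sketch below is entirely hypothetical: none of these field names come from a real API, and ConveGenius's actual data model may look nothing like this.

```python
# Hypothetical shared schema for product usage/outcome events. A provider
# would serialize events like this to a common endpoint; an evaluator
# could then compare skill mastery across products. All field names are
# assumptions for illustration, not a real or proposed standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class LearningEvent:
    provider_id: str      # which edtech product emitted the event
    student_id: str       # pseudonymous learner identifier
    item_id: str          # question or activity attempted
    skill_tag: str        # curriculum skill the item maps to
    correct: bool         # whether the attempt was successful
    time_spent_s: float   # seconds spent on the item

event = LearningEvent("pal-product-x", "stu-001", "item-42",
                      "grade3.fractions", True, 31.5)
print(json.dumps(asdict(event)))
```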
Gouri: Karl and Libby, I think we discussed this towards the beginning of the podcast: today, Tulna is commenting on product design. And I think that comes from a context where not many impact evaluations have been conducted in EdTech in India. But we are looking at a scenario where the standards start incorporating, as Jairaj was just describing, the impact of these learning solutions. And Karl, the approach you have on rapid-cycle impact evaluations is, I think, the next step for the standards in India as well. On product design, there is literature and evidence to say that there is a relationship between a product being designed well and its impact on learning outcomes for the intended use case. And we would like to add the learning outcomes piece over time. So generally, I completely agree that the standards need to evolve, to be able to use product data to comment on learning outcome effects as well.
Karl: Jairaj, you mentioned a really important point, and I was probably ineloquent about it earlier, in particular around RCTs and the difficulty for organizations, both solution providers and education agencies, in running randomized controlled trials. They're expensive, they take too long, they're difficult to publish; all of these are challenges. What has been very useful in the implementation of the four levels of evidence is that it has equipped solution providers and their partners, the education agencies, to provide evidence at the correlational, comparative, or control-based levels without having to go all the way to RCTs. So now companies in the U.S. are running these comparative or correlational analyses and doing very rapid-cycle evaluations that don't reach all the way to randomized controlled trials. Some of them are doing RCTs, but in other cases it makes more sense not to; some are doing dozens of evaluations or experiments and reporting on their implementation across different contexts, and that's seen and valued differently in the market.
Now, because of the standards, because of this evolution and opportunity, I think it's been a unique and really critical point in evidence building in the U.S. It's helped us move toward that outcome-focused engagement, not just inputs and usage but outputs, and it lowers the barrier for organizations so they don't have to go all the way to an RCT, which, as you mentioned, and frankly no matter where you are in the world, is expensive and challenging in a number of different ways.
Libby: I love the passion in this conversation around standards and edtech research. I feel like I'm really with my people on this call. Someone shout if this isn't right, but I feel like we all agree that standards are valuable, and that it's not one-size-fits-all: it's the right standards for the right use case, the right need. With that in mind, it'd be great to talk about the role of philanthropy. What can we do? Maybe we can throw that to our guests first: what can we do as philanthropy to further the thinking around standards and the value that standards have? What do you think?
Karl: I think philanthropy has a unique and very powerful role in market adoption, both of evidence building and of building impact, because philanthropy is often the most flexible source of funding for both buyers and providers. There are a few things philanthropy can really do. One would be to adopt shared standards with peer groups. Each philanthropy may have its own processes and specific grant applications, but the reality is that creates discontinuity challenges for organizations. You can still keep your own processes but adopt shared standards, whether it's the four levels of evidence or the type of adoption you're seeing with Tulna. Two: reviewing your own grant-making to fund evidence-building against these standards, which can be valuable both for you and for other philanthropies, is super useful. And finally, the thing at this stage in the market that I think is really valuable for a lot of organizations is to celebrate the early adopters, the collaborations, the examples where this is getting outsized benefit, like the work Jairaj and Gouri mentioned earlier, but also what we're seeing across the board: early adopters who are really driving a focus on evidence. Philanthropy can do a good job of rewarding and celebrating that too. So those are ways philanthropy could engage.
Jairaj: I would add the sheer fact that philanthropic capital can reward product companies that are able to show evidence of learning outcomes, and move the core focus back to impact, which is what your software is supposed to do: actually deliver learning outcomes. Acting as a catalyst, philanthropic capital can add on top of existing projects that edtech providers are already delivering with state governments, as an additional reward: if you are able to deliver outcomes, then you have an incentive, which potentially doubles your profits, right? And that creates incentives for the right outcome, the right impact. Things like this can be done by philanthropic capital because they cannot be done by debt or equity capital. So I believe that will create a disruptive change in the market, positively influencing good-quality software.
Otherwise, founders are operating in a resource-constrained environment, often too cash-strapped to invest in the last-mile R&D needed to make things not just good enough but good enough to deliver outcomes. People generally don't choose to take that last mile unless evidence is being rewarded. There's also very little work done on creating narratives around evidence: going out there and creating a narrative that this is a company that will deliver outcomes. That could become a marketing initiative, and politicians would be rightly incentivized when narratives are built around evidence. That hasn't really happened so far, but it unlocks a lot of other possibilities.
Libby: Awesome stuff. I'm loving this, really thought-provoking, great food for thought, and I appreciate you both sharing it. It sounds like a good call to action: how can we activate evidence standards in a meaningful way? How can we incentivize people to align around and build towards those standards, particularly given some of the realities and challenges that companies face? That seems like a really great point to end on, but Gouri, do jump in.
Gouri: Couldn't agree more with all that was said. Celebrating early adopters, adopting shared standards amongst peer groups: these are all very exciting ideas, and I do believe there is a role we can all play in creating that narrative universally in the EdTech ecosystem, to build demand for quality amongst all users of EdTech. Thanks for listening, and do grab us at the next conference to talk about standards. We'll welcome it.
Anjali: Thank you very much for listening in on the conversation.
Standards are necessary for defining a common bar for EdTech across LMICs, and they aid various use cases, such as government procurement, catalyzing competition for the right outcomes in the edtech market, and ensuring that the right tools are being funded. But as we heard in the conversation, there are challenges around incentivizing the use of these standards by edtech developers and governments. Organizations need support systems to help them meet these standards, and governments need support to customize the standards for their needs without losing their essence. Philanthropy can play a crucial role in creating clear incentives for adoption, and in developing and evolving standards that are relevant and practical, by collaborating with the right partners: governments, edtech providers, and educational institutions.
This podcast was brought to you by the International Education Funders Group, curated and edited by Anjali Nambiar, with post-production by Sarah Miles. You can learn more about the IEFG at www.iefg.org, and do subscribe to the podcast for more thought-provoking conversations.