A first-pilot field report from a Chennai classroom — Pilot #1 of 10.


A 15-year-old in Grade 10 at The Rajavva Public School, Chennai, gave the Karkei Heart Module a 1 out of 5 for comfort. “Controls to be changed,” she wrote. Her focus rating: 2. Her overall rating: 2. By almost every question on our post-pilot survey, she is the student who liked the experience least.

She is also the student who learned the most.

Before our ten-minute demo, she scored 2 out of 4 on a baseline diagnostic about the human heart — the lowest score in her cohort. After the demo, she scored 3 out of 4. In absolute, objective terms, the girl who rated our interaction lowest walked away with the largest correction of a factual misconception in the room.

I keep coming back to her. Not because she is the inspirational one, but because she is the honest one. She is the reason we are publishing this.


Why we did this pilot

Karkei is a young company. We make experiential learning systems for Indian schools — 3D, AR, gesture-based, designed to be performed by the student rather than watched by them. Our first product is a Heart Module for Grades 8–10 Biology. Our thesis, reduced to a sentence: children don’t learn abstract concepts by reading about them; they learn by doing, and Indian classrooms have almost no way to let them do.

The internal framework we build around is called SPARK. It stands for Sensorial, Participative, Authentic, Relevant, Kinaesthetic. Each letter is a test — does the experience engage more than one sense, does the child act rather than watch, is the content rooted in something real, does it connect to their world, does the body move. When a product design passes all five, we think it works the way the human brain actually learns. When it passes only two or three, we keep building.

SPARK is a hypothesis. The only way to test a hypothesis about learning is to put it in front of actual learners. On 16 April 2026, we did that for the first time.

The school was The Rajavva Public School, Chennai. Grade 10 students, a Principal who said yes in a single meeting, five teachers with between 7 and 18+ years of experience sitting at the back. One hour. A ten-minute demo preceded and followed by hand-written surveys.

We expected to learn whether the thing worked. We learned something more useful than that.

What we measured, and what we could not

Every student filled a paired pre- and post-pilot survey. Every teacher did the same. The student instruments had nine Likert-style items plus four multiple-choice diagnostic questions about heart anatomy and circulation. The teacher instruments had twelve items covering observed engagement, curriculum fit, adoption intent, and open-ended feedback.

I want to name the limits of this data up front, because the rest of this article means less if I don’t.

N is eleven students and five teachers. One school. One grade. One classroom hour. The surveys were hand-written, scanned, and manually transcribed. Students filled the post-pilot survey with Karkei staff in the room — the social pressure to be nice was real. The diagnostic was four questions, which is a narrow slice of heart biology. This is a case study, not a clinical trial.

With those caveats sitting on the table, here is what we saw.

The headline

On the one question we asked both before and after — How well do you understand how the human heart works? — the mean student rating rose from 3.1 out of 5 to 4.1 out of 5. A full one-point lift on a five-point scale. On the overall-experience question, students rated Karkei 4.3 out of 5. On the question would you want more of your subjects taught this way, every one of the eleven students said yes. Seven said yes for all subjects; four said yes for Science and Biology.

The teachers were more granular. Five out of five would recommend Karkei to other teachers. Mean likelihood of school-level adoption: 4.4 out of 5. Observed engagement compared to a typical Biology class: 4.6 out of 5. Every teacher said they observed students who normally disengage come back into the room — three said “clearly, several students”, two said “somewhat, a few”.

The honest footnote: the objective diagnostic — the four multiple-choice questions on heart anatomy — moved only marginally. Mean score went from roughly 3.3 out of 4 to 3.4 out of 4. Three students made a clean gain, two appeared to regress on a single question (probably careless marking on the post-session form, based on their free-text answers), and three were already at ceiling going in. If I were writing this as a paper, I would say: self-reported comprehension rose substantially; objective performance on a narrow instrument did not, and the discrepancy is a finding in itself.
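If you are running a pilot like this yourself, the paired pre/post arithmetic is simple enough to sanity-check in a few lines. A minimal sketch in Python — the scores below are made-up placeholders shaped like our cohort (three gains, two regressions, three at ceiling), not our actual transcribed data:

```python
# Paired pre/post analysis for a small pilot.
# Hypothetical per-student scores, NOT the actual Karkei transcriptions.
# Each entry: (pre_score, post_score) on a 4-question diagnostic.
scores = [
    (2, 3), (4, 4), (3, 4), (4, 3), (3, 3),
    (4, 4), (3, 4), (2, 2), (4, 4), (3, 2), (3, 3),
]

pre_mean = sum(pre for pre, _ in scores) / len(scores)
post_mean = sum(post for _, post in scores) / len(scores)

# Classify each student's movement, separating out ceiling effects,
# which a mean alone would hide.
gained = sum(1 for pre, post in scores if post > pre)
regressed = sum(1 for pre, post in scores if post < pre)
at_ceiling = sum(1 for pre, post in scores if pre == 4 and post == 4)

print(f"mean: {pre_mean:.2f} -> {post_mean:.2f}")
print(f"gained: {gained}, regressed: {regressed}, at ceiling: {at_ceiling}")
```

The point of classifying movement per student, rather than reporting only the mean, is exactly the footnote above: a near-flat mean can coexist with real gains, real regressions, and a ceiling the instrument cannot see past.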

That discrepancy matters. It’s possible we lifted how confident students feel about the content more than how much they know — and for an advertising brochure that’s fine, but I don’t want to confuse you about what happened. What I think happened is that our four-question diagnostic was too narrow to register the kind of learning the session actually produced. When I read the free-text answers, students wrote about tricuspid valves, bicuspid valves, the difference between arteries and veins, the pumping rate of the heart under load, the effect of caffeine and meditation on BPM. Almost none of that shows up on a four-question Anatomy 101 multiple-choice quiz.

The next pilot will use a longer instrument.

The three students I can’t stop thinking about

Student 07 — the advanced learner we didn’t reach

He walked in with the lowest self-reported enjoyment of Biology in the cohort (2 out of 5) and the lowest confidence on a heart question (2 out of 5). Then he scored four out of four on the pre-pilot diagnostic. He already knew the answers. He just didn’t find his own Biology class interesting, and he didn’t feel confident about his knowledge — a classic pattern for smart kids whose affect and performance have come apart.

After the session, his engagement rating was 2 out of 5. His overall rating was 3 out of 5. In the space meant for what could be better, he wrote the words “nothing became clearer.”

For most of an hour I found this depressing. Then I realised what he was actually telling us. He was telling us our demo is calibrated for the middle of the class — and if a student has already internalised the parts of the heart, its chambers, and the direction of blood flow, a visualisation-heavy introduction is reviewing what they know. The lesson is not “our tool doesn’t work for everyone”. The lesson is “we need a depth toggle”.

In the next build, the teacher will be able to unlock a second layer mid-session — electrical conduction of the heart, heart-rate variability, the why-behind-the-what — so that a student like him has somewhere to go.

Student 10 — the mirror of Student 07

She is the girl I opened this article with. Lowest pre-pilot objective knowledge in the room. Biggest objective gain. Worst user experience ratings across every dimension we measured. She told us the controls didn’t work.

I want to be precise here, because it would be easy to turn her into a hero of the story and miss the point. Her knowledge gain was one MCQ correction — she learned that arteries carry blood away from the heart at high pressure, not veins. One correction does not equal transformational learning. What is striking is that she made that correction despite rating the experience a 2 out of 5. Something in the visualisation got through the broken UX.

That is a very specific kind of product signal. It means the core content — the 3D heart, the animation of ventricular ejection, the artery-versus-vein colour-coding — is doing work on its own, even when the gesture interaction gets in the way. It also means the single highest-leverage engineering bet we can make right now is cutting gesture-recognition latency. If we shave a third of a second off the response time, we don’t just make Student 10 happier; we let the content land for the students most likely to need it.

Students 06, 09, and 11 independently flagged speed in their free-text too. Two of the five teachers also raised it. Six of sixteen respondents, unprompted, described the same problem with different words. That is the strongest single signal in the entire pilot.

Student 11 — the SPARK layer landing

Student 11 rated the session 5 out of 5 on everything. She corrected her pre-pilot diagnostic mistake (vein → artery). In the box where students were asked to write the one thing they learned today that they did not know before, she wrote this sentence:

“Flow of the blood through the heart to all of the body, and also meditation, caffeine — that a thing related to the BPM — was known by this.”

Read that twice. A Grade 10 student connecting cardiac anatomy to the things that influence her own heart rate — coffee, meditation. Nothing in our ten-minute demo explicitly taught the relationship between caffeine and BPM. The content made that connection available to her, and she reached for it.

This is the part of the pilot I am most excited about. The SPARK framework has a “Relevant” and a “Sensorial” pillar. When you model them well, the child does the connecting themselves. You don’t have to teach that meditation lowers heart rate; you have to make the heart real enough, visible enough, attached to their body enough, that they start wondering. Student 11 started wondering. That is what the job actually is.

The English teacher who wrote our roadmap

Two of our five teacher observers had decades of experience. One was an English teacher with 18+ years behind her. She filled her pre-pilot survey in an English teacher’s handwriting — long clauses, generous parentheses — and told us what she was hoping the demo would be:

“Immersive experiential learning. Interactive board. Freedom for child to explore beyond the limit. Non screen-time addiction.”

I want you to notice what is in that sentence. She is not asking us to be louder or more entertaining. She is asking for something closer to the Montessori insight — that the child should direct their own exploration — delivered through technology that does not imitate the worst attention-economy habits of consumer apps. That is exactly the internal philosophy we use to design Karkei. She articulated it back to us before we had shown her anything.

After the session, she filled a full page of suggestions. Not criticism — not really — instructions. Paraphrased, her punch list: add audio, make the animations more biologically realistic, structure the session so students are briefed on the gestures before they start, build in a quiz that maps to the exam, give the child a “what if” button to ask their own questions. That last one — the “what if” button — is a product decision I hadn’t articulated before she did, and I’ve been chewing on it for four days.

In every pilot, you are looking for two things at once: data, and people. This teacher is one of the people. The data we can write on a slide; the person we have to keep in the room.

What we ship next

Three things, in priority order, each drawn directly from the data.

Audio. Three teachers out of five independently flagged it. A minimum viable audio layer — heartbeat sound, blood-flow swoosh, a short voice-over per interaction — is the single highest-return change we can make in the next two weeks. It is also, notably, not a hard engineering problem.

Gesture-recognition latency. The six-of-sixteen signal I mentioned above. This is the single largest engineering investment we will make in the next two months, and it is where we will spend the most time reading sensor data, not building features.

NCERT and board-exam alignment. Three of the five teachers brought this up, one of them rating our current alignment 1 out of 5. It is not a content problem — the heart module is reasonably close to the NCERT syllabus — it is a mapping and assessment problem. A chapter cross-walk, a five-question post-session quiz drawn from previous years’ board questions, and suddenly “delightful” turns into “useful for the exam their principal cares about.”

That is the list. Not a product roadmap I invented sitting in our studio in Chennai. A product roadmap written by 16 people in a classroom, and decoded from their handwriting over the course of a weekend.

Why we are publishing this

Three reasons, honest ones.

The first is that Indian edtech has a credibility problem, and some of us want to solve it. The category is full of metrics that nobody believes — engagement rates that don’t mean what they say, teacher testimonials recorded in the marketing studio, pre-post comparisons with no control and no methodology. If founders in this space won’t publish their honest first-pilot data, we deserve whatever polite scepticism we get from schools, investors, and grant committees. So here it is.

The second is that this work — a single-school pilot with a Principal who agreed in a meeting and teachers who stayed after class to fill a survey — is not something Karkei did alone. The Rajavva Public School Principal Mrs. Vijaya Vijaykumar made it happen. Her teachers gave us the hardest, most actionable feedback I have read in a year. The least we can do is tell that story publicly, with their name attached.

The third is more selfish. This is Pilot #1 of ten. Between now and the end of the year, we will run a version of this session in nine more schools across Tamil Nadu. If you are a teacher reading this and you want your school to be number two, or four, or seven, we’d like to hear from you. If you are a founder running a pilot and you want to swap notes on how you structured yours, we’d like to hear from you. If you are a parent or a grant officer or a fellow experiential-learning believer, we’d like to hear from you. The comment section is open. The email is below.


Karkei Experiential Technologies Pvt. Ltd. is a Chennai-based company building SPARK-compliant learning systems for Indian schools. Our first product, the Karkei Heart Module, is in limited classroom pilots. If you teach, lead a school, or work in a grant organisation that supports experiential learning in India, we would love to hear from you — you can reach us at vanakkam@karkei.in.

This is the first of a ten-part series. Pilot #2 writes itself next month.

Elango Raghupathy
Founder, Karkei · elango@karkei.com