A Manifesto for AI for Educational Design
We Stand at a Crossroads
Artificial intelligence has arrived in education. The question is not whether AI will transform teaching and learning, but how. Will we accept AI systems that hide their pedagogical assumptions in black boxes, making decisions for educators? Or will we demand tools that amplify human judgment, creativity, and values?
We choose amplification over automation. We choose transparency over obscurity. We choose design over default.
Our Principles
1. The Interface Is the Intervention
Why not just use ChatGPT? Because how we present AI capabilities directly shapes what educators can do with them. Every dropdown menu, every slider, every visualization, every layout either expands or constrains human agency. We must design interfaces that surface possibilities, not bury them.
2. Make the Implicit Explicit
Every AI system embodies pedagogical beliefs—about how learning works, whose knowledge matters, what "good" teaching looks like. These assumptions must become visible, adjustable parameters under human control. No more hidden defaults masquerading as neutral choices.
3. Teachers Are Designers, Not Users
Teaching has always been a design profession. Teachers design experiences, craft explanations, orchestrate discoveries. AI must extend these design capabilities, not replace them. We build tools that position teachers as directors of AI behavior, not consumers of AI output.
4. Honor Pedagogical Pluralism
There is no one right way to teach. Direct instruction serves some moments; inquiry serves others. Individual work has its place; so does collaboration. AI tools must respect this diversity, enabling educators to align technology with their values, their students, their contexts.
5. Lead with Passion & Inspiration
Backward design is beautiful in theory. Activity-first planning is common in practice. Rather than judging natural workflows, we meet educators where they are and build bridges to where they want to be. There's no wrong entry point to good pedagogy.
6. Fidelity and Autonomy Are Not Opposites
The old trade-off—implement with fidelity OR adapt with autonomy—is false. When AI helps educators implement their own vision consistently, we achieve both. High fidelity to educator intent, high autonomy in execution.
7. Surface Patterns, Enable Reflection
When educators see their pedagogical patterns—"You've chosen high-challenge options all week"—they become more intentional practitioners. AI should mirror teaching choices back to educators, fostering professional growth through gentle awareness.
8. Integration Over Isolation
Good design materials work with other materials. AI tools must export to Google Docs, connect with Canvas, play nicely with Kahoot. We build bridges, not walled gardens. The best tool is the one that enhances the tools educators already trust.
9. Values Matter
Individuals and communities hold differing beliefs about what constitutes good education. Rather than imposing Silicon Valley's assumptions globally, AI must enable individuals and communities to instantiate their own educational values. Technology should serve culture, not override it.
10. Extend Human Capability
The goal is not efficiency alone, but expanded possibility. What could teachers create with AI that they couldn't before? What questions could researchers explore? What learning experiences could students design? AI should make previously impossible pedagogies possible, not just accomplish the same things with less effort.
Our Vision
We envision educational AI that:
- Surfaces pedagogical decisions rather than hiding them
- Empowers educators to direct AI behavior rather than merely using it
- Respects diverse teaching philosophies rather than imposing one
- Integrates with existing workflows rather than replacing them
- Reflects patterns back to educators rather than judging them
- Enables new possibilities rather than just automating old ones
Our Call to Action
To Developers: Stop building black boxes. Create transparent, adjustable systems that put educators in control.
To Educators: Demand tools that respect your professionalism. You are designers, not operators.
To Researchers: Study not just whether AI works, but how it distributes power, surfaces values, enables agency.
To Leaders: Invest in AI that amplifies human judgment, not AI that replaces it.
In the coming months, you will see a few changes in our UI so we can more directly live out our values. If you have any thoughts or questions, please do not hesitate to get in touch: maya@questionwell.ai.