AI in EdTech Weekly
by avergin · 92 sources
Weekly intelligence briefing on how artificial intelligence and technology are transforming education and learning - covering AI tutors, adaptive learning, online platforms, policy developments, and the researchers shaping how people learn.
Sal Khan
Justin Reich
MacKenzie Price
The lead: Assessment is shifting from “did you make this?” to “show me how you think”
Across K–12, higher ed, admissions, hiring, and even corporate compliance, multiple sources converge on the same problem: AI has severed the link between producing an artifact and demonstrating understanding, making “cheating” easier and harder to detect . Evidence cited this week includes:
- 84% of high school students using generative AI for schoolwork
- A UK university study where 94% of AI-written submissions went undetected and scored half a grade boundary higher than real students
- Teachers reporting rampant AI-assisted submissions (including many “0” grades), with some moving assessments back to pen-and-paper/in-class work
In response, the most practical pattern isn’t better detection—it’s more observable thinking: live defenses, in-class work, and interactive assessment designs that require students (or candidates) to explain and justify their work in real time .
Theme 1 — “Observable cognition” is becoming the new baseline
Detection is a dead end (and creates its own harms)
One argument is explicit: you won’t be able to reliably detect AI use in homework, so schools should stop building policies around it . Related evidence includes AI-written submissions passing undetected at high rates and educators describing how quickly students learn to route around enforcement (or how enforcement is constrained by grading policies) .
What replaces detection: defendable work
Several concrete “defense” patterns surfaced:
- Caltech admissions: applicants who submit research projects appear on video and are interviewed by an AI-powered voice; faculty and admissions staff review recordings to assess whether the student can “claim this research intellectually” .
- Anchored samples in admissions: Princeton and Amherst requiring graded high school writing samples as a baseline for authentic writing .
- Classroom moves that build friction and visibility:
  - Boston College professor Carlo Rotella brought back in-class exams (“Blue books are back”), arguing the “point of the class is the labor” and that the “real premium” is “friction” .
  - A high school Spanish teacher had students use AI to adjust the reading level of Spanish-language sources (still reading in Spanish) and required a link to their chat history in the bibliography .
A related higher-ed complaint: AI-generated student email is described as “rampant” and “inauthentic,” prompting strategies like focusing on the content (“what do you mean by ‘reliable time’?”) rather than trying to prove origin .
Theme 2 — Personalized “time back” learning models are scaling (but governance choices matter)
Alpha School: 2-hour academics + human motivation layer
Alpha School is described as a network of private K–12 schools using AI to deliver 1:1 mastery-based tutoring and compress core academics into ~2 hours/day, with the rest of the day focused on projects and life skills supported by human guides . A recurring design choice: no chatbots (“chatbots…are cheat bots”) .
Operational details shared this week include:
- A “Time Back” dashboard that ingests standardized assessments (NWEA/MAP) to build personalized lesson plans and route students into specific apps (e.g., Math Academy; Alpha Math/Read/Write) .
- A vision model monitoring engagement patterns (e.g., scrolling to the bottom, answering too fast) and nudging students (e.g., “slow down…read the explanation”) .
- A reported platform cost of roughly $10,000 per student per year.
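The engagement-monitoring bullet above describes a simple mechanism: watch interaction signals (answering too fast, scrolling straight past an explanation) and nudge the student. Alpha's actual vision model and thresholds aren't public; a minimal rule-based sketch of that pattern, with hypothetical signal names and cutoffs, might look like this:

```python
# Illustrative sketch only: Alpha School's vision model and thresholds are not public.
# This shows the general shape of a rule-based engagement nudge driven by per-question
# telemetry; every field name and cutoff below is hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuestionEvent:
    seconds_on_question: float   # time between the question appearing and the answer
    scrolled_to_bottom: bool     # e.g., jumped straight past the worked explanation
    correct: bool

def nudge_for(event: QuestionEvent) -> Optional[str]:
    """Return a nudge message if the telemetry suggests disengaged answering."""
    if event.seconds_on_question < 5 and not event.correct:
        return "Slow down - take a moment to read the explanation before answering."
    if event.scrolled_to_bottom and event.seconds_on_question < 10:
        return "It looks like you skipped ahead. Try reading the worked example first."
    return None  # no intervention needed

if __name__ == "__main__":
    event = QuestionEvent(seconds_on_question=3.2, scrolled_to_bottom=True, correct=False)
    print(nudge_for(event))  # -> the "slow down" nudge
```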
Alpha School’s model also got mainstream attention: a TODAY show segment highlighted a Miami campus pilot program described as “teaching kids with AI instead of teachers,” with reported admissions demand spiking after the segment .
Khan Academy: “Socratic” tutoring with testing and error tracking
Khan Academy’s Khanmigo is positioned as an AI tutor/teaching assistant that nudges learners without giving answers (a “Socratic tutor”) . The team describes building infrastructure around difficult evaluation edge cases and tracking error rates (reported sub-5%, in many cases sub-1%) . They also cite efficacy research: 30–50% learning acceleration with ~60 minutes/week of personalized practice over a school year .
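Khan Academy's internal tooling isn't described in detail, but the error-tracking idea above (replay a labeled set of hard cases through the tutor and keep the failure rate under a target) can be sketched minimally. Everything below, from the cases to `ask_tutor` to the checker, is a hypothetical stand-in:

```python
# Minimal sketch of the evaluation idea described above, not Khan Academy's actual
# infrastructure: replay labeled edge cases through the tutor and compare the error
# rate against a target (sub-5%, often sub-1%, in their reporting).
EDGE_CASES = [
    # (student message, rule the tutor's reply must respect) - both hypothetical
    ("Just tell me the answer to 3/4 + 1/8.", "must_not_reveal_answer"),
    ("What is 15% of 80? Give me the number.", "must_not_reveal_answer"),
]

def ask_tutor(message: str) -> str:
    """Hypothetical stand-in for a call to the tutoring model."""
    return "What do the two denominators have in common?"

def violates(rule: str, reply: str) -> bool:
    """Crude rule check; real systems use model graders or human review for edge cases."""
    if rule == "must_not_reveal_answer":
        return any(ans in reply for ans in ("7/8", "0.875", "12"))  # final answer leaked
    return False

def error_rate(cases) -> float:
    failures = sum(violates(rule, ask_tutor(msg)) for msg, rule in cases)
    return failures / len(cases)

if __name__ == "__main__":
    rate = error_rate(EDGE_CASES)
    print(f"error rate: {rate:.1%}", "(within target)" if rate < 0.05 else "(regression)")
```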
Self-directed learning at scale: “use AI to figure stuff out”
OpenAI shared a usage claim that 300M+ people use ChatGPT weekly to learn how to do something , and that more than half of U.S. ChatGPT users say it helps them achieve things that previously felt impossible . In parallel, Austen Allred argued there’s an “extreme delta” between people who plug their questions into AI and those who don’t .
Theme 3 — Curriculum and content are being redesigned for comprehension and inclusion
Math word problems, rewritten for comprehension without reducing rigor
M7E AI described an AI-powered curriculum intelligence platform that evaluates and revises math content to remove unintentional linguistic and cultural barriers while maintaining standards alignment and mathematical rigor . The team framed the problem as a “comprehension crisis,” citing 61% of 50M K–12 students below grade level in math and noting that roughly 1 in 4 students are bilingual .
The platform produces district-level summaries, deep evaluations, and revisions (including pedagogical/formatting recommendations and image/diagram feedback), and is offered free for district leaders/schools to use .
Localization and translation as distribution
- Google’s Learn X team described YouTube auto-dubbing as a way to expand global access to education content by letting learners watch videos in their own language .
- Canva described “Magic Translate” as localization beyond language—ensuring template elements reflect local festivals and people students recognize .
Theme 4 — District “plumbing” and student safety: more AI depends on more data (and transparency)
A key operational claim from an edtech infrastructure discussion: there is an “insatiable appetite” for more student data (beyond basic rostering) to make AI systems like tutoring and safety tools work . Examples cited:
- Attendance and family engagement: TalkingPoints described using attendance data to message families when students miss school/periods and to help schools intervene before chronic absenteeism/truancy . They also described an AI feature (“message mentor”) that suggests improvements to teacher-family communications .
- Student safety: Securely described using AI to scan student Google Docs for potential suicide notes and raise flags quickly, while emphasizing privacy/transparency and framing a benefit as “no human has to ever become aware of the student’s private thoughts” unless a flag is raised .
- Admin reduction in special needs: Trellis described transcribing child plan meetings and drafting a child’s plan/minutes (with time-bound, measurable actions), piloting across Scottish councils to reduce the 1.5–2 hour teacher write-up burden and improve teacher presence/eye contact in meetings .
A separate classroom-side warning: one educator described a “tech-powered system that never sleeps,” where AI is already embedded (text-to-speech, translation, writing supports) and constant measurement/feedback can erode pause and reflection, increasing pressure on students .
Theme 5 — AI literacy is being reframed: less “prompting,” more domain knowledge + visible practice
Two complementary takes stood out:
- Evaluate output through domain knowledge: Justin Reich argued that what’s hard is not using AI, but evaluating outputs—and that domain knowledge is a bigger differentiator than AI-specific tricks .
- Treat AI chats as texts: Mike Kentz proposed teaching AI use via comparative textual analysis of chat transcripts (students compare two AI interactions, identify differences, vote using a partially built rubric, then refine the rubric together) . He reports “promising” results across middle school through college but highlights gaps (transcript design, facilitation quality, and adapting beyond humanities) .
Teacher reality check: 79% of teachers reportedly have tried AI tools in class (up from 63% last year), while “less than half of schools” have provided training .
Student-facing AI: “instructional tool, not a companion”
MagicSchool AI released a white paper arguing student-facing AI should function as instructional technology, not a companion, to reduce risks like companionship and sycophancy . Their framing aligns with a broader principle that role clarity matters as AI enters classrooms .
Policy signals touched this too: Pennsylvania Gov. Josh Shapiro directed his administration to explore legal options to require AI chatbot developers to implement age verification and parental consent.
What This Means (practical takeaways)
For K–12 leaders: If AI use is widespread and hard to detect , the most actionable lever is assessment design—more in-class work, live explanation, and structured reflection (rather than relying on detectors) .
For higher ed: Expect more hybrid “artifact + defense” models (e.g., video interviews, oral exams, anchored writing) to become normal ways to validate ownership .
For edtech builders and investors: The next wave of defensibility may be less about a chatbot UX and more about: (1) measurable learning loops (practice, feedback, progress), and (2) reliable integration into district workflows and data standards—plus clear transparency promises when products touch sensitive domains like safety .
For L&D / employers: The same authenticity problem shows up in hiring (AI-written résumés; rising cost/time to hire), reinforcing a shift toward early, live validation of skills .
For learners: Advantage goes to people who can ask good questions, verify outputs, and use AI as a scaffold rather than outsourcing thinking—skills echoed across classroom practice and workforce framing .
Watch This Space
- Live/interactive assessment spreading from admissions to everyday classroom practice (video defenses, oral exams, transcript-based evaluation) .
- AI “time back” models that combine personalization with human motivation layers (and how they handle engagement, cheating, and trust) .
- Student-facing safety and role clarity—instructional tool vs companion—and whether age-gating and consent become baseline requirements .
- Curriculum accessibility tooling (especially for multilingual and low-context learners) moving upstream into procurement and publisher workflows .
- Data governance under load as more AI products demand extended data for tutoring, attendance, and safety use cases—and districts push for transparency .
DeepLearning.AI
OpenAI
Andrej Karpathy
The lead: AI can speed up work—but it can also reduce learning if you don’t design for understanding
A randomized controlled trial by Anthropic found that junior engineers using AI assistance completed a novel coding task slightly faster (by about two minutes; not statistically significant) but scored 17% lower on a concept quiz (roughly two letter grades) . In the same study, participants who still scored highly while using AI tended to ask conceptual and clarifying questions rather than delegating the task to the model .
This learning tradeoff is showing up across the week’s coverage: leaders are shipping more “practice and feedback” tools into everyday workflows, while practitioners warn that guardrails, verification, and human judgment aren’t optional.
Theme 1 — Mastery learning with guardrails: Alpha School’s “bright spot” framing
Geoffrey Hinton is cited praising Alpha School as a potentially positive use of AI in education—described as notable given his usual warnings about AI risks . Alpha School’s positioning emphasizes that AI is:
- Harmful when it becomes “screens everywhere” and chatbots become “CheatBots”
- Powerful when used as a focused “1:1 mastery system” with “strong guardrails”
"This frees adults to do the human work - coaching, relationships, and life skills - while kids gain superpowers in learning."
From Alpha’s own description, its AI tutor runs in the background as a personalized, mastery-based platform that adapts lessons by level and pace, measures learning, and fills knowledge gaps—while explicitly saying it does not use a GPT or a chatbot that kids interact with . The same post claims Alpha schools have less screen time than a traditional school and “way better results” .
Operational signals also showed up in social posts:
- A weekend hackathon at Alpha School reportedly had students building impressive apps “after a few hours,” prompting the response: “AI gives kids superpowers” .
- Alpha School shared that students use AI to pursue passions, e.g., one student learning to code a cooking app .
- Alpha School is described as bringing 100 Stanford and MIT students to Austin for an intensive summer to build AI apps aimed at transforming education for 1 billion kids .
Theme 2 — Practice and feedback at scale: tests, flashcards, and bite-sized skill builders
Gemini expands standardized test practice (SAT + JEE)
Google says Gemini now offers full-length practice SATs and mock JEE Main tests at no cost, with feedback and study tips . The JEE practice is described as grounded in “rigorously vetted content” in partnership with Physics Wallah and Careers360, with immediate feedback on strengths and study needs .
Microsoft rolls out AI-powered flashcards across M365 (with classroom insights)
Microsoft has rolled out AI-powered flashcards in the Learning Activities app across Microsoft 365 apps for students and educators . Teachers can generate flashcards from text (up to 50,000 characters) and from Word documents or PDFs, choose language and card types, add hints, and pull images via Bing .
For classroom use, it also supports sharing by link/join code and provides educator-facing insights (e.g., how many students started/completed, average score, challenging cards) .
Limitations to keep in mind: The flow is highly generative (create → regenerate → tweak), which can speed up production—but it also means review and editing are central to quality control .
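A minimal sketch of that create-then-review loop (not Microsoft's implementation; `generate_cards` below is a hypothetical stand-in for the model call, and the 50,000-character cap is the only detail taken from the feature itself):

```python
# Sketch of the create -> regenerate -> tweak pattern described above. The only detail
# taken from the product is the 50,000-character source limit; generate_cards() is a
# hypothetical stand-in for the model call, and the point is that teacher review sits
# between generation and publishing.
MAX_SOURCE_CHARS = 50_000

def generate_cards(source_text: str, n: int = 10) -> list[dict]:
    """Hypothetical model call; returns canned cards so the demo runs offline."""
    return [{"front": "Capital of France?", "back": "Paris"}][:n]

def build_deck(source_text: str) -> list[dict]:
    if len(source_text) > MAX_SOURCE_CHARS:
        raise ValueError(f"Source exceeds the {MAX_SOURCE_CHARS:,}-character limit")
    reviewed = []
    for card in generate_cards(source_text):
        # Teacher review step: keep, edit, or drop each generated card before sharing.
        decision = input(f"{card['front']} -> {card['back']}  [k]eep / [e]dit / [d]rop: ")
        if decision.startswith("e"):
            card["back"] = input("Revised answer: ")
        if not decision.startswith("d"):
            reviewed.append(card)
    return reviewed
```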
AI-generated “minigames” as a practice format
Ethan Mollick shared a prompt to Claude Code to “figure it out” and create something “awesome,” resulting in a set of 21 minigames intended to teach a broad list of practical skills .
Theme 3 — Simulation-first learning: role-play, field verification, and realistic training environments
A multimodal agent in medical simulation
Mollick highlighted a paper testing a multimodal AI agent (using Gemini 2.5) in a realistic medical simulation used to train physicians, reporting it matched or exceeded 14,000 medical students in case completion and secondary outcomes like time and diagnostic accuracy .
Higher ed role-play: where guardrails have to be “castle walls”
In a Substack interview, one contributor argued that high-risk domains (clinical psychology, nursing, drug-abuse counseling) require more than guardrails—“castle walls”—including HIPAA compliance and making sure what a student says “never, ever leaves the classroom” and “can never be used in court against them,” plus extensive testing . The same discussion suggests chatbots open the door to cognitive simulations and role-plays across fields like criminal justice and interviewing, with an LMS role-play that can look things up on the internet and behave in character (including languages) .
A concrete example: nursing faculty using role-play so students practice assertive communication with a simulated coworker that adapts responses, followed by debriefing with a communication coach and in-class discussion .
AI as a “mirror” for student thinking in public health
At Duquesne University, Dr. Urmi Ashar described a public health assignment where students adopted personas and used chatbots to explore whether someone should move to the Sheraden neighborhood, then compared outputs against Google Maps and a “windshield survey” (experiencing the neighborhood firsthand) . The exercise surfaced student assumptions and emphasized verification: “the map is not the terrain” .
Ashar describes AI as “more like a mirror” reflecting questions, assumptions, and blind spots, with the instructor shifting from expert to coach .
Theme 4 — Governance and safety: deepfakes, bias, and misinformation literacy become operational concerns
“AI is like corn syrup”: districts treating AI as unavoidable in procurement
An EdSurge piece quotes a K–12 CTO: “AI is like corn syrup; it’s going to be in everything,” framing AI as embedded in edtech whether districts are ready or not . The same piece notes districts are pushing harder on data governance and asking students to learn prompting and critical consumption of information .
AI, education, and the law: bias + deepfake risk
A Tech & Learning practitioner guide flags legal and ethical challenges including algorithmic bias—citing evidence that AI detection tools can be “near perfect” for native English speakers while falsely flagging 61% of essays by non-native speakers as AI-generated . It also cites data that nearly half of students and more than a third of teachers are aware of school-related deepfakes .
The same piece points to a “human in the loop” approach and suggests leaders ask whether systems have biases, whether student data is used to train third-party models, and whether tools minimize data collection .
Parallel discussion in teacher communities tracked enforcement challenges alongside policy:
- The “Take It Down Act” is described as making revenge porn and AI deepfakes a federal crime, and the Senate is described as having passed a related bill unanimously .
- South Korea is described as passing a 2024 law in response to deepfake pornographic videos of teachers and students, with prison terms of 5–7 years for creating/distributing and penalties for watching/possessing .
Misinformation literacy: AI-generated “pink slime” news
Tech & Learning described “pink slime journalism” as sites masquerading as local news while pushing an agenda, and reported Yale research in which just under half of participants preferred AI-generated fake local news sites over legitimate ones . Recommended responses include teaching students to check “About Us,” assess authorship and sourcing, and apply a cybersecurity-style skepticism to unfamiliar content .
Governance friction in practice: NYC votes down AI contracts
Chalkbeat reported that NYC’s Panel for Educational Policy repeatedly bucked City Hall in recent months, including voting down “millions worth of AI contracts” .
Theme 5 — Agents as a workforce skill: management, reusable skills, and new workspaces
“Programming in English” and the need for oversight
Andrej Karpathy described moving rapidly to a workflow of ~80% agent coding and ~20% manual edits, calling it the biggest change to his coding workflow in ~two decades . He also warned that current “agent swarm” hype is too much: models still make subtle conceptual errors and often run with wrong assumptions without seeking clarifications, requiring careful oversight in an IDE . He noted early signs of atrophy in manual code-generation ability (distinct from reading/reviewing) .
“Management as AI superpower” in higher ed entrepreneurship
In an experimental University of Pennsylvania executive MBA class, students built working startup prototypes from scratch in four days, using Claude Code and Google Antigravity for coding and ChatGPT/Claude/Gemini for idea generation, market research, pitching, and financial modeling . Mollick attributed much of the success to management skills—scoping problems, defining deliverables, and recognizing when outputs were off—turning “soft” skills into the hard ones .
Reusable “skills” for agents
Andrew Ng and DeepLearning.AI promoted a short course, “Agent Skills with Anthropic,” describing “skills” as structured folders of instructions that agents load on demand, designed to move workflow logic out of prompts and into reusable components . The course description highlights deploying across Claude.ai, Claude Code, the Claude API, and the Claude Agent SDK .
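The course materials cover the mechanics in depth; as a rough mental model (not Anthropic's implementation, and with a hypothetical folder layout), "skills" behave like instruction files an agent pulls into context only when a task needs them:

```python
# Conceptual sketch of "skills as reusable folders of instructions", not Anthropic's
# implementation. Assumes a hypothetical layout like skills/<name>/INSTRUCTIONS.md;
# the agent loads a skill's instructions on demand instead of packing everything
# into every prompt.
from pathlib import Path
from typing import Optional

SKILLS_DIR = Path("skills")

def available_skills() -> list[str]:
    """Names of the skill folders the agent can choose from."""
    return sorted(p.name for p in SKILLS_DIR.iterdir() if p.is_dir())

def load_skill(name: str) -> str:
    """Read one skill's instructions only when the current task calls for it."""
    return (SKILLS_DIR / name / "INSTRUCTIONS.md").read_text(encoding="utf-8")

def build_prompt(task: str, skill_name: Optional[str] = None) -> str:
    parts = [f"Task: {task}"]
    if skill_name:
        parts.append(f"Follow these skill instructions:\n{load_skill(skill_name)}")
    return "\n\n".join(parts)
```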
PLTW: treating AI as a “colleague” and building AI literacy into STEM pathways
Project Lead The Way described a one-semester high school course (“Principles of AI”) as the foundation of a four-pillar AI framework, covering AI/ML history, how data and LLMs work, and ethical reasoning . In the same conversation, PLTW described an organizational expectation of treating AI “as a colleague, as a team member,” while emphasizing judgment and ethical boundaries—especially for educator- and student-facing content .
Research workspaces also get “AI-native”
OpenAI introduced Prism, a free cloud-based LaTeX-native workspace “powered by GPT-5.2” for scientists to write and collaborate on research, with GPT-5.2 working inside projects with access to paper structure, equations, references, and surrounding context . Prism is described as removing version conflicts and setup overhead, and is available on the web for ChatGPT personal accounts (with Education plans “coming soon”) .
What This Means
For K–12 leaders: The “AI tutor” conversation is shifting from whether to use AI to how to design it—toward mastery systems with explicit guardrails and adult-led coaching, and away from unsupervised chatbots . At the same time, legal and reputational risk is rising (deepfakes, detection bias, data practices), making “human in the loop” governance and procurement questions practical requirements .
For higher ed and workforce learning: Simulations and role-plays are emerging as high-leverage use cases—but only where privacy and safety requirements can be met (HIPAA “castle walls,” classroom containment, and testing) .
For product builders and investors: The learning tradeoff in AI assistance is now harder to ignore: tools that help people finish faster may reduce understanding unless they’re built to elicit conceptual questions and reflection . Features that produce practice + insight loops (full-length tests with feedback; classroom flashcard analytics) are one concrete path to value .
For learners: Expect “AI literacy” to look less like memorizing prompts and more like building the habit of verification, asking clarifying questions, and treating AI output as draft work that needs judgment and editing .
Watch This Space
- Learning-first AI design: whether more products adopt patterns that push learners to ask clarifying/conceptual questions (instead of “answer now”), reflecting the Anthropic study’s high-performer behavior .
- Standardized test prep inside general AI assistants: Gemini’s full-length SAT/JEE tests suggest “assessment-as-a-feature” will spread beyond dedicated test-prep platforms .
- Deepfake enforcement vs. school reality: policy is tightening, but teacher discussions point to prosecution and enforcement gaps in practice .
- Simulation ecosystems: medical, nursing, and public health examples are converging on a theme—AI can simulate scenarios, but educators still need the debrief, verification, and judgment layer .
- Agent skills as the new professional development layer: reusable skills, structured workflows, and “AI as colleague” expectations are turning into training products and curricula (from PLTW to short courses to MBA classes) .
Austen Allred
Andrew Ng
Anthropic
The lead: AI is being embedded into the “systems of school” (not just used as a chatbot)
This week’s clearest signal isn’t a new model—it’s where AI shows up.
- Google is pushing AI deeper into the tools teachers already use (Google Classroom + Gemini) with admin controls and privacy commitments .
- Microsoft is shipping an on-device lesson builder (Learning Zone) tied to Copilot+ PCs, plus assignment, reporting, and content libraries .
- “Non-classroom” applications—like master scheduling optimization—are being positioned as high-leverage AI because they shape student experience without students interacting with AI directly .
Theme 1 — Platform-native AI: planning, differentiation, and content generation inside existing workflows
Google Classroom + Gemini: from prompts to pre-built teacher tools
A teacher-facing walkthrough reports 28 pre-prompted AI tools inside Google Classroom’s Gemini dashboard, organized by planning, instructional materials, assessments, student support, and administrative tasks . Examples include outlining lesson plans, releveling text, generating quizzes, drafting newsletters, and creating PD plans .
Two features that stood out in coverage:
- Classroom context inside Gemini: educators can connect Google Classroom to the Gemini app so Gemini can reference class roster, assignments, and grades when helping adjust lessons .
- Audio lessons: Google described a Classroom feature that turns content into a student–teacher dialogue audio lesson designed to go deeper into misconceptions (distinct from podcast-style audio) .
Google also described Gemini for Education as free access to its “highest-end reasoning model” for Google for Education customers, with a data-protection claim that student data isn’t used for training .
A separate pilot example cited up to 10 hours/week of time savings for educators in Northern Ireland after rolling out Gemini .
Limitations to keep in mind: a teacher blog explicitly frames outputs as drafts—useful for skipping the blank page, but still requiring review and editing .
Microsoft Learning Zone: on-device lesson generation + classroom analytics
Microsoft introduced Learning Zone, an AI-powered app for Copilot+ PCs that uses a local small language model to generate interactive lessons in minutes .
Key workflow pieces from the demo:
- Grounding with sources: teachers can upload Word/PDF files, attach OneDrive files, or use vetted resources such as OpenStax .
- Editability: lessons generate as a mix of content and practice “slides,” but teachers can edit slides, add question types, generate distractors, and simplify language .
- Assignment + LMS hooks: share via join code/link/QR and share into Teams assignments or Google Classroom; Microsoft also said support for attaching Learning Zone lessons via Teams/LTI is planned for spring 2026 .
- Reports: per-lesson and per-student performance insights (e.g., % correct, time), drill-down by exercise type, and identification of students needing support .
Theme 2 — Safety, privacy, and regulation: guardrails are becoming product requirements
Two policy “fronts” reshaping AI + edtech
Edtech Insiders highlighted two broad vectors:
- California AI + minors: OpenAI and Common Sense Media announced plans for the “Parents & Kids Safe AI Act,” including age assurance, a ban on targeted advertising to minors, limits on sharing children’s data without parental consent, and content safeguards against harmful AI content. The piece notes these rules would also apply to AI-powered educational tools . Enforcement would flow through the Attorney General and financial penalties (moving away from a private right of action) .
- Screentime scrutiny spilling into edtech: an NTIA inquiry is questioning whether federal subsidies are pushing schools toward more screens without evidence of learning benefit . The same article links a broader political trend (e.g., Kids Off Social Media Act proposals) to increasing regulation of what happens on school-issued devices .
“Red lines” in product design: LEGO Education’s stance on generative AI
LEGO Education described “red lines” for bringing AI into classrooms:
- Generative AI tools may be made safer, but they “cannot be guaranteed to be safe,” so LEGO Education will not bring them into classrooms until that gap can be closed .
- They avoid anthropomorphizing AI (no faces/names; not describing AI as creative) .
- They emphasize local processing and keeping child data from leaving the classroom or being transmitted over the internet .
Procurement reality: compliance-first vs “public” tools
A SchoolAI community manager emphasized COPPA/FERPA compliance, stating SchoolAI does not use student data to train models or sell data . The same comment warns that using public-facing tools with student identifying information (e.g., student names for a seating chart) can break federal law .
Theme 3 — Assessment is being rethought for an AI era (and vendors are rushing in)
BETT: assessment shifts from “pattern recognition” toward what’s harder to test
At BETT, one discussion argued that the most valuable things are becoming harder to assess, while the least valuable are increasingly easy for AI (pattern recognition). The takeaway was the need to “measure what we treasure” .
In the same coverage, Vicki Merrick described pilots using machine learning-enabled comparative judgment (holistic pairwise comparisons by teacher judges) for more reliable assessment of subjective Key Stage 3 work. In one pilot: 40 judges assessed 2,000 Year 7 art items across 14 academies in less than an hour and achieved a 0.89 reliability score. Teachers reported greater confidence because their judgments were one of many .
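The coverage doesn't specify the model behind the pilots; comparative judgment is typically fit by turning many "A is better than B" decisions into a single scale, for example with a Bradley-Terry-style update like the sketch below (toy data, not the pilot's):

```python
# Sketch of how comparative judgment turns pairwise decisions into a ranking. The pilots
# above don't specify their model; a Bradley-Terry fit over (winner, loser) judgments is
# a standard approach, shown here with a simple fixed-point update on toy data.
from collections import defaultdict

def bradley_terry(judgments: list[tuple[str, str]], iters: int = 200) -> dict[str, float]:
    """judgments: (winner, loser) pairs from judges. Returns a relative score per item."""
    items = sorted({x for pair in judgments for x in pair})
    wins = defaultdict(int)
    pair_counts = defaultdict(int)            # comparisons per unordered pair
    for winner, loser in judgments:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1

    scores = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            denom = 0.0
            for pair, n in pair_counts.items():
                if i in pair:
                    (j,) = pair - {i}
                    denom += n / (scores[i] + scores[j])
            new[i] = wins[i] / denom if denom else scores[i]
        mean = sum(new.values()) / len(new)
        scores = {i: s / mean for i, s in new.items()}   # keep the scale stable
    return scores

if __name__ == "__main__":
    # Three pieces of student work, five pairwise judgments:
    example = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
    print(bradley_terry(example))   # A ranks highest; B and C end up close
```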
AI tooling for assessment creation + feedback loops
- Kahoot AI Generator: creates quizzes from prompts, slides, or PDFs, with “over 13” question types and modes like Accuracy Mode (points for correctness rather than speed) . Kahoot also cited “almost 200 independent research studies” and claimed grades improve by a full letter grade on an average test .
- Red Pen AI: a formative assessment workflow that starts with uploading photos of handwritten student work, identifies urgent curriculum gaps, generates editable feedback, and tracks class progress on a dashboard—aiming to reduce teacher workload without requiring 1:1 devices .
- Teacher feedback prompts: Monica Burns shared copy-and-paste prompts to draft student-friendly feedback faster, emphasizing drafting + revising with professional judgment .
Theme 4 — AI literacy is shifting toward fundamentals, agency, and durable human skills
From “how to use AI” to “how to understand and judge it”
- LEGO Education’s new Computer Science and AI product line aims to teach AI/CS/robotics fundamentals from kindergarten, including probability, statistics, machine representation, and algorithmic bias—explicitly pushing away from “throwing conversational chat bots in front of children” .
- The National Literacy Trust launched a “National Year of Reading” campaign after a survey of 17,000 children found that only 1 in 3 said they like reading and 1 in 5 read every day . In the same coverage, an expert argued literacy becomes more important in an AI-driven world because students must write accurate prompts and evaluate whether AI output is accurate and truthful .
Evidence emerging on what teachers actually build with AI
A SchoolAI study analyzing 23,000 teacher-created AI learning experiences reports that over 75% were anchored in core curriculum and designed to prompt students to reason, evaluate, and make decisions—not just recall information .
“Human skills” framing is hardening into leadership language
A Tech & Learning piece proposed the C.A.R.E.S. framework (cultural competence, adaptability, relationships, ethical judgment, scholarly discernment) as the “irreplaceable” human core as AI drafts lessons, analyzes student work, and generates feedback .
Theme 5 — Upskilling is scaling: educators, engineers, and whole workforces
- Anthropic + Teach For All: a partnership to bring AI training to educators in 63 countries, enabling teachers serving over 1.5 million students to use Claude for curriculum planning, customized assignments, and tool-building, and to provide feedback to shape Claude’s evolution .
- Gauntlet AI: positions itself as free immersive training for engineers (travel to Austin plus covered housing/food) with employer matching for $200k+ roles; it states participants never pay under any circumstances .
- Gemini CLI training: Andrew Ng promoted a DeepLearning.AI short course on Gemini CLI (an open-source agent) focused on multi-step workflows from the terminal, including orchestrating tools via MCP and automating coding tasks .
What This Means
For K–12 and district leaders: AI adoption is accelerating where it can be governed—inside platforms with admin controls, protected data terms, and teacher-facing “draft” workflows (e.g., Classroom+Gemini, Learning Zone). Expect procurement to increasingly center on privacy posture and control surfaces, not feature lists .
For assessment and curriculum teams: “AI in assessment” is splitting into two lanes: (a) automating creation and feedback loops (quizzes, formative feedback), and (b) redesigning what’s assessed (contextual, non-deterministic work) using methods like comparative judgment .
For edtech builders and investors: The regulatory environment is converging on child-focused requirements (age assurance, data sharing constraints, content safeguards) that will apply to AI edtech, not just social platforms . Products with explicit safety “red lines” and local processing claims (or on-device models) may gain advantage in K–12 contexts .
For learners and L&D professionals: The “AI capability” gap is widening—multiple sources frame value as judgment, curation, and the ability to evaluate output quality (not just generating text quickly) .
Watch This Space
- Age assurance + AI edtech compliance: whether California’s proposed standards become a de facto requirement for AI products used by minors .
- On-device education AI: tools that rely on local models (e.g., Copilot+ PC workflows) as a response to privacy, cost, and offline constraints .
- Assessment redesign at scale: comparative judgment pilots and other methods that claim reliable evaluation of subjective work without over-indexing on what AI can do best .
- AI literacy as fundamentals + agency: product lines and curricula that emphasize how systems work (and how to judge them) rather than putting chatbots “in front of children” .
- Training models for the AI workforce: partnerships and “free training + job outcomes” models expanding across educators and engineers .
NotebookLM
Austen Allred
Lance Eaton, Ph.D.
The lead: K–12 “AI safety” is turning into concrete access controls
Denver Public Schools (DPS) started blocking student access to ChatGPT on school-issued devices and the district Wi‑Fi, citing concerns tied to ChatGPT’s newer features (including 20-person group chats and planned adult content) and the risks of harmful interactions, self-harm, violence, bullying, cyberbullying, and academic misconduct . DPS says it never explicitly approved ChatGPT for its 89,000 students and is instead using Google Gemini (for monitoring and data-privacy compatibility) and MagicSchool for education-specific workflows like lesson creation and writing feedback .
“We’re trying to make sure kids think and can access their skill sets and not ChatGPT.”
The district also flagged safety concerns connected to AI and children’s mental health, noting lawsuits alleging harms related to chatbot relationships .
Theme 1 — Policy is moving from “guidelines” to public hearings and school board action
Congressional testimony: safety, teacher judgment, and student agency
MagicSchool AI’s Adeel Khan testified before Congress on AI in K–12, stating goals to keep students safe, keep teachers “in the driver’s seat,” and ensure schools have tools purpose-built for education.
He also described seeing educators use AI to save time, personalize instruction, and improve feedback without outsourcing judgment, and shared an example of a student using AI feedback to revise writing while choosing not to use AI as a shortcut because “her learning matters” .
Resources shared:
- Video: https://www.youtube.com/live/RM0aq5ynUiQ
- Written testimony: https://edworkforce.house.gov/uploadedfiles/adeel_khan_testimony_final.pdf
State and district governance: “evaluation” is still early
A Digital Promise report reviewing publicly available state-level AI evaluation guidance across 32 states and Puerto Rico found most states are still in the Nascent & Exploratory stage (with fewer in “Systematic & Evidentiary”) . Examples of more advanced efforts included Colorado tracking qualitative/quantitative metrics around access/engagement/outcomes by demographic, and Louisiana tracking progress and effectiveness of AI tools . The report emphasizes evidence gathering and professional learning as part of responsible adoption .
Theme 2 — “AI in the workflow” expands: research, revision, and teacher coordination
Google Scholar Labs: faster research overviews, but not a deep-dive replacement
Google Scholar Labs (launched in late Nov. 2025) lets users ask research questions (e.g., “What are some recent research papers about mindset?”) and returns summaries of relevant studies . A review found it effective for quick snapshots and comparisons, but noted limitations: it can skew toward older research, miss important studies, and struggles with highly specific queries .
NotebookLM: course-grounded tutoring patterns are becoming reusable
Google for Education highlighted a NotebookLM pattern that turns course materials into a Socratic tutor grounded only in uploaded sources, with citations back to the materials . It can also generate quizzes from course content and support shareable notebooks while keeping chat histories private .
Two other “in the wild” school-adjacent uses surfaced:
- NotebookLM rolled out Data Tables to all users, including example prompts aimed at classroom tasks like structuring historical events into columns .
- The Fukushima City Board of Education used NotebookLM to generate a manga-style slide deck summary so teachers who weren’t present could understand what happened in class .
Gemini inside Google Classroom: pre-prompted tools + human review expectations
Google’s Gemini is now built into Google Classroom for educators using Google Workspace for Education . Teacher-facing materials describe using it to generate lesson ideas, activities, and assessments without leaving Classroom , while emphasizing a “draft mindset” that requires human review to correct potential hallucinations .
Theme 3 — AI literacy is being reframed around “how it works” and “how to judge,” not just tool tips
A practitioner-centered guidebook (and why it’s framed as hypotheses)
Justin Reich and Jesse Dukes released a free guidebook, “A Guide to AI in Schools: Perspectives for the Perplexed,” alongside a limited podcast series, after interviewing ~120 teachers and students across the U.S. . They cited RAND survey findings suggesting only about one quarter of teachers reported receiving guidance about AI (and similarly for professional development), roughly two years after ChatGPT’s release .
Reich said the guidance was intentionally framed as hypotheses sourced from practicing educators, urging humility and local experimentation because education has a history of “bad guesses” early in technology hype cycles .
Classroom AI literacy example: demystifying chatbots with “productive struggle”
An EdSurge report described a middle-school lesson using the 1960s chatbot ELIZA to help students experience a limited “therapist-bot,” then program their own chatbot in MIT App Inventor—teaching computational thinking (decomposing systems, tracing logic, debugging) and building frustration tolerance . Students still expressed strong trust in modern AI tools like ChatGPT for practical uses (study guides, practice tests) even while acknowledging misinformation risk .
Theme 4 — Higher ed: widespread use, consolidation, and “human-side” adoption friction
EDUCAUSE: high AI use at work, low clarity about rules
An EDUCAUSE report discussion highlighted that about 90% of respondents said they use AI tools for work, while about half weren’t aware of institutional policies or guidelines that govern that use . Speakers emphasized that many relevant rules already exist via data governance (e.g., FERPA contexts), but institutions often need explicit, readable AI guidance—and coordination between IT, HR, and academic units .
Coursera–Udemy merger: distribution as the “moat” in online learning
Michael Horn described the Coursera–Udemy $2.5B all-stock merger as consolidation that strengthens aggregation in online learning, arguing the primary moat is increasingly distribution/channel, especially amid AI disruption .
Leadership and trust: “resistance” as workload + uncertainty
Lance Eaton argued that what gets labeled as resistance is often care, concern, and fatigue—alongside added work (rethinking courses/policies, checking hallucinations, evaluating vendors) and eroding trust when responsibilities and permissions are unclear . He emphasized leadership practices like naming uncertainty, creating psychological safety for dialogue, and modeling transparent AI use .
Theme 5 — “Build faster” is reshaping learning-by-doing (and exposing new failure modes)
MBA “vibefounding”: compressing a semester into four days
Ethan Mollick reported that his experimental MBA class has students launch a company in four days, using tools like Claude Code, Gemini, and ChatGPT—work he says previously would have taken a semester, with lower quality . He also warned that AI can create “new kinds of work,” and the best efforts leverage the AI’s analytical, creative, and empathetic capabilities—not just automation .
Mollick described a recurring early-stage failure mode: AI-generated “Wizard of Oz demos,” where an interface is built without underlying logic and functionality is simulated live .
Gauntlet AI: training emphasis shifts from building fast to integrating with real systems
Austen Allred said future Gauntlet AI cohorts won’t reward simply building apps quickly (“that’s free now”), and will instead emphasize getting AI to understand and refactor/extend existing systems .
Separately, a podcast discussion described Gauntlet AI as a selective, free program that culminates in $200k+ jobs, structured around weekly challenges and intensive in-person work focused on building with AI .
What This Means
- For K–12 leaders: The DPS decision signals that “AI policy” is increasingly about product-specific risk surfaces (features, monitoring, content, student safety)—not abstract pro/anti positions .
- For edtech builders and investors: “Teacher judgment stays central” is showing up everywhere—from congressional testimony goals to product guidance around human review and limitations . Tools that integrate into existing workflows (Classroom, NotebookLM) may outpace standalone pilots .
- For higher ed and L&D: Expect adoption friction when AI use is widespread but rules and responsibilities aren’t legible; EDUCAUSE’s awareness gap and Eaton’s “human-side” framing point to clarity, coordination, and trust-building as core implementation work—not add-ons .
- For curriculum and assessment teams: AI literacy efforts that teach how systems work (and how to question outputs) are emerging as durable complements to “how to use the tool” training .
Watch This Space
- K–12 safety backlash vs. “AI everywhere” reality: more districts drawing hard lines on features/data access, even while retaining approved alternatives like Gemini and education-specific tools .
- State-level AI evaluation maturity: whether more states move from nascent guidance toward systematic evidence gathering (and what metrics become standard) .
- AI literacy beyond tool proficiency: more classroom-ready units that demystify how chatbots work and make source evaluation and judgment skills visible .
- Online learning consolidation: whether aggregation strategies (Coursera–Udemy) become the default go-to-market for credentials and lifelong learning in an AI-saturated content landscape .
Ethan Mollick
Andrew Ng
DeepLearning.AI
The lead: K–12 AI is shifting from pilots to default workflow
Two signals this week point to AI becoming a standard layer in school operations—especially for assessment and teacher productivity:
- MagicSchool AI reported 2025 milestones including 7M educators signed up, 1,353 districts/orgs onboarded as partners, and a $45M Series B, alongside 25+ new product features/launches. The stated 2026 focus is an “AI Operating System for Schools.”
- Microsoft says it has broadly rolled out Copilot Quizzes to all educators using M365, available via the Copilot app or Teams for Education .
Theme 1 — AI for “daily work”: quizzes, fluency practice, and district tech stacks
Microsoft: faster, standards-aligned assessments (with knobs teachers can actually control)
Copilot Quizzes sits inside the Teach module, which includes tools for curriculum planning, lesson plans, homework/assessments, rubrics, study aids, and content modification . In the quiz flow, educators can:
- Add a subject, grade level, and a detailed description (up to 10,000 characters) .
- Ground quiz generation in uploaded materials (Word docs, PDFs) .
- Align quizzes to standards from all 50 U.S. states and 35 countries.
- Set question count and duration; enable Practice mode for self-paced feedback (note: Practice mode is available only when no time limit is set) .
- Share as a link and integrate into Teams assignments .
Capability: rapid creation of standards-aligned quizzes and iteration via Copilot suggestions (themes, explanations, practice mode) .
Limitation to price in: the workflow is powerful, but teachers still need to review and adjust generated assessments (the tool is drafting and refining, not replacing professional judgment) .
Microsoft Reading Coach: “guided practice” for oral reading fluency (plus teacher analytics)
Microsoft’s Reading Coach adds Guided Practice, letting teachers create lightweight practice assignments where students read aloud and work toward time-based goals (example: 90 minutes over winter break) . Teachers can assign:
- Open reading (student choice) or single passage (teacher-selected; can use a library passage or paste/write a custom one) .
- Practice across multiple languages.
On the student side, one demonstrated flow includes generating a story using AI, reading it aloud, and receiving feedback on pronunciation accuracy, correct words per minute, and targeted word practice .
On the teacher side, Guided Practice provides analytics such as time spent, completion, words-per-minute, and accuracy—drillable by student and viewable across assignments .
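Reading Coach's internal scoring runs on speech recognition and isn't public, but the two headline metrics above are conventionally computed as simple ratios once you know which words were read correctly; a back-of-envelope sketch with made-up numbers:

```python
# Back-of-envelope sketch of the two fluency metrics surfaced above. Reading Coach's
# internal scoring uses speech recognition and is not public; this just shows how
# correct-words-per-minute and accuracy are conventionally computed once you know
# which words were read correctly.
def fluency_metrics(words_in_passage: int, words_read_correctly: int, seconds: float) -> dict:
    minutes = seconds / 60
    return {
        "accuracy": words_read_correctly / words_in_passage,          # share of passage read correctly
        "correct_words_per_minute": words_read_correctly / minutes,   # WCPM
    }

# Example: a 182-word passage, 171 words read correctly, finished in 95 seconds.
print(fluency_metrics(182, 171, 95))
# -> {'accuracy': 0.9395..., 'correct_words_per_minute': 108.0}
```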
District examples: “flexible guidelines” and vendor relationships
- Clarkstown Central School District’s Director of STEM & Instructional Technology, Jennifer Mazza, describes leading AI integration in a district of ~8,000 students, emphasizing that technology should serve instruction. The district’s stack includes Gemini and NotebookLM (with NotebookLM noted as successful in special education), plus tools like Canva, WeVideo, and Brisk—with an emphasis on vendor relationships because “student safety” is where districts can go wrong .
- Suffern Central School District describes leaning into AI to support multilingual ENL learners, citing early adoption of SchoolAI and MagicSchool and a preference for flexible guidelines over rigid policies that become obsolete quickly . One described use case is “translation agents” with guardrails intended to remain educationally focused even when students try to break them .
Theme 2 — “Build anything” is getting real: from personal tools to beginner-friendly courses
Beginner-friendly “vibe coding” (vendor-neutral)
Andrew Ng promoted a new course, Build with Andrew, aimed at people who’ve never coded: it teaches how to describe an idea and build a working web app with AI in under 30 minutes. The example project is a browser-based interactive birthday message generator that can be shared and customized by chatting with AI . The course is positioned as vendor-neutral, mentioning tools like ChatGPT, Gemini, and Claude .
Enrollment links were shared alongside the announcement.
Personal tools as a new category of software
- Kevin Roose said he used Claude Code to build a Pocket-like app and it kept adding features (TTS read-alouds, video summaries, spaced repetition emails, Kindle sync), describing the result as a “perfect external brain” from ~12 prompts .
- Ethan Mollick argued this enables a new category of software: “useful tool for me” projects that were previously not worth the time or were infeasible, because AI coding can work well “without a lot of errors” .
A caution from practitioners: fundamentals still matter
A CTO on r/edtech pushed back on the idea that AI can replace foundational learning in coding:
- Students still need to understand how to trace code flow and transformations; AI is “far from” perfect .
- They recommend teaching through concrete examples and use, combining pieces as students progress .
- They warn against putting AI-generated code into projects without understanding “exactly what every line is doing” .
Theme 3 — AI literacy and governance: the gap is less “access” than judgment
Higher education: heavy use, low awareness of rules
EDUCAUSE research found nearly all surveyed higher-ed respondents used AI tools for work in the last six months, but only about half were aware of institutional policies or guidelines meant to govern that use . The gap is flagged as potentially serious for data privacy/security and broader governance . Respondents cited risks including data privacy, cybersecurity, and the potential loss of critical thinking skills .
Recommendations highlighted include reviewing and updating policy portfolios (including data governance policies), communicating how they apply to AI use, providing professional development and micro-credentials, collecting local needs data, and measuring ROI as pilots end and funding sources need to stabilize .
K–12 policy signal: “AI system literacy” enters the legislative agenda
New York lawmakers introduced a bill directing the state education commissioner to develop recommendations for incorporating “artificial intelligence system literacy” into K–12 instruction .
Practical classroom-side AI literacy: frameworks and “tutoring-style” guardrails
- Mike Kentz’s Butler–Thinking–Sparring framework proposes three intentional modes for educators using AI: drafting support when you can judge quality (Butler), brainstorming options (Thinking Partner), and pressure-testing ideas (Sparring Partner) . The mode choice depends on whether you know what you want and whether you can evaluate outputs .
- Writing instructor Anna Mills recommends limiting AI use to tutoring-style assistance (e.g., feedback) rather than document creation, outlining, or idea synthesis—unless an instructor explicitly directs otherwise . She describes students’ anxiety about cheating accusations and the value of open, pragmatic conversations that give students both guidance and room to question .
“My hope—my dream—would be that students feel more confident and free to express a wide range of opinions, ideas, and questions about AI…”
Theme 4 — Assessment, safety, and accessibility: where real-world friction is surfacing
Pre-K: adoption is rising, but guidance lags
RAND data reported by EdSurge indicates 29% of pre-K teachers use generative AI in the classroom (with 20% of those using it less than once a week), compared with 42% of elementary, 64% of middle school, and 69% of high school teachers . RAND also highlighted a “critical gap” between training on using edtech (7 in 10) and training on assessing edtech quality (less than 4 in 10) .
AI grading and AI detection: contested trust signals
Some educators raised concerns about AI-driven grading and detection:
- One Reddit commenter said their state moved a test to AI grading and “all of the scores plummeted,” affecting school ratings .
- Another discussion describes a state writing prompt requiring multi-paragraph structure (introduction, evidence from passages, organized paragraphs, conclusion), and notes “there is no one even reading the essays” because the state uses AI grading .
- A separate commenter argued that formulaic writing instruction can cause real student writing to flag on AI detectors .
Accessibility mandate: a hard deadline for digital content
A thread in r/edtech highlighted a requirement that by April 24, 2026, public entities serving 50,000+ people must ensure digital content—including websites, mobile apps, course materials (videos, PDFs, third-party content), and electronic documents—meets WCAG 2.1 Level AA (with smaller entities cited as having a 2027 deadline) . Commenters described the operational challenge of legacy course materials and noted that Google Docs/Slides don’t automatically prevent uploading non-compliant content .
Theme 5 — “Social AI” and relational design: collaboration as the product (and the risk)
Edtech Insiders argued that “Social AI” (collaborative, multi-user AI that strengthens relationships) may break into the mainstream—especially in education . Examples cited include:
- OpenAI’s Group Chats in ChatGPT enabling multi-user collaboration with ChatGPT in the same conversation .
- Products for AI-facilitated collaboration, peer tutoring, and small-group work (Breakout Learning, Honor Education, OKO Labs, PeerTeach) .
At the same time, the same publication cited University of Denver research suggesting adolescents prefer “best friend”-style relational chatbots over transparent systems that clearly state they aren’t human—raising safety and design concerns for tools aimed at young users .
What This Means (for education leaders, investors, L&D, and curious learners)
Procurement is becoming “workflow-first.” Tools that slot into existing systems (Teams/M365 assessments, lightweight fluency assignments, translation agents) are moving faster than standalone pilots .
AI literacy is shifting from “how to prompt” to “how to judge.” EDUCAUSE’s policy-awareness gap and RAND’s “edtech quality assessment” gap point to the same operational need: clearer guardrails and stronger evaluation capacity .
The build barrier is collapsing—but the learning barrier isn’t. Courses and tools promise “build in 30 minutes,” and personal tools are now feasible . But practitioners are explicit that fundamentals and line-by-line understanding still matter when reliability is on the line .
Assessment and trust remain fragile. Reports of AI grading impacts, detector false positives, and shifting accountability pressures suggest leaders should be cautious about high-stakes automation without transparent validation and appeal processes .
Accessibility is no longer optional project work. The WCAG compliance deadline (as described) turns “content hygiene” into urgent institutional operations—especially for higher ed course materials .
Watch This Space
- “AI OS for Schools” consolidation: whether MagicSchool’s platform ambition translates into fewer tools to manage—or just a bigger layer to govern .
- Assessment automation backlash and redesign: how states/districts respond to concerns about AI grading and AI detection reliability .
- Accessibility tooling (and enforcement): whether institutions build scalable workflows to reach WCAG 2.1 AA across PDFs, video, and legacy slide decks .
- Social AI in classrooms: growth of AI facilitation for group work—plus safeguards when relational tone drives trust among teens .
- The new “personal tool” economy: whether learner-built, one-off apps become a mainstream part of study/work—and what institutions do about support, security, and attribution .
Ethan Mollick
Andrew Ng
The lead: AI tutoring shows measurable gains—when it’s engineered to teach
A World Bank study in Nigeria found that after-school activities combining an AI tutor with teacher guidance produced a 0.31 standard deviation performance increase over six weeks—described as roughly an extra year of learning.
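The study's exact estimator isn't given in the coverage; "0.31 standard deviations" conventionally refers to a standardized mean difference between treatment and control groups:

```latex
d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_t - 1)\,s_t^{2} + (n_c - 1)\,s_c^{2}}{n_t + n_c - 2}}
```

Translations into "years of learning" then divide the effect by a benchmark for typical annual progress, which is why the same effect size can map to different year equivalents across contexts.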
But the same interview highlights the failure mode education leaders are living through: unstructured “answer getting.” In a Wharton controlled experiment (with students in Turkey), “just letting students use AI” led to students regurgitating AI outputs and learning nothing.
“If you just let students use AI, they get answers to questions, they think they learned something, but they're just regurgitating AI information and they learn nothing.”
Mollick’s concrete fix is to push systems into tutor behavior (“act like a tutor… don’t give me answers… help me understand concepts”) and to use assigned tutor setups or built-in study/learning modes that ask questions instead of finishing work.
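A minimal sketch of that "force tutor behavior" pattern, with both the system instruction and the `chat` stand-in hypothetical (this is not any product's actual study mode):

```python
# Sketch of the "push the system into tutor behavior" pattern described above: wrap
# every student message in instructions that forbid direct answers. The wording and
# the chat() function are hypothetical stand-ins, not a specific product's study mode.
TUTOR_MODE = (
    "Act as a tutor. Do not give final answers. "
    "Ask one guiding question at a time and check my understanding of the concept."
)

def chat(system: str, user: str) -> str:
    """Hypothetical stand-in for a call to whichever chat model is in use."""
    return "Before we solve it: what is the problem asking you to find?"

def tutor_reply(student_message: str) -> str:
    return chat(system=TUTOR_MODE, user=student_message)

if __name__ == "__main__":
    print(tutor_reply("What's the answer to question 4?"))
```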
This framing also explains the integrity tension: he calls genAI a “universal cheating tool” that can undermine educational approaches that require effort and practice. At the same time, he argues the upside is access—“every kid in Mozambique” has the same tool and it can be shaped into a tutor. Teachers may also see workload relief; a Walton Family/Gallup survey is cited as reporting six hours saved per week on prep and support.
Theme 1 — Gemini in a Japanese middle school: accessibility + inquiry + peer dialogue
A Google for Education classroom video shows generative AI (Gemini) being used to raise expectations and widen participation—positioned less as a destination for answers and more as a tool for student growth and inclusion.
Where it shows up in practice:
- Access and differentiation: students who struggle with textbooks use AI to restate content more gently and transform it into images, audio, or video.
- Iteration as normal work: in PE, students ask Gemini for practice menus, try ideas, then re-prompt until suggestions match their needs.
- Methods over answers: teachers observe students asking for solution methods (e.g., in math) rather than just requesting the final answer.
What changes for teaching:
- Teachers describe deliberately building in student explanation and reflection moments, monitoring work in the cloud, and giving targeted “nudges” when students hit a sticking point.
- Instead of an extended lecture, one observed move is simply asking students "What are you doing?"—prompting learners to explain and organize their work.
- Students compare outputs from slightly different prompts and discuss why results differ and what makes an answer clearer, turning prompting into a shared literacy.
Teachers also describe starting with minimal restrictions (expecting early “troubles”) while reinforcing basics like not entering personal information.
Theme 2 — Early childhood AI: “tech for two,” not an attention machine
In an Edtech Podcast interview, Tandem is presented as a generative AI platform for preschoolers built to shift screens from passive consumption toward parent–child co-creation of storybooks, factbooks, and play activities grounded in child development science.
Design choices explicitly optimize for human interaction:
- “Tech for two” defaults: shared questions and pauses inside stories, plus a prompt to “put the phone down” and talk.
- No video and slowed pacing to avoid highly attention-grabbing experiences.
- Parent coaching: passive sensing tracks parent–child interaction and provides suggestions (described as a “Fitbit for fun”).
- Early AI literacy: kids “feed” ideas to a robot, wait for outputs, and give feedback—framing AI as something humans discuss and correct.
Safety and ethics are treated as product features: an “AI nanny” reviews generated content before it reaches a child, the platform is designed for parent co-review, and the team describes bias audits and illustrator collaborations with royalties.
Global lens: the model is not the bottleneck—access and “human infrastructure” are
The same conversation cites major gaps in readiness: AI is accelerating quickly while access, skills, and safeguards lag, risking concentration of benefits in high-income contexts; it calls for practical AI literacy, safety/privacy boundaries, and treating connectivity and teaching capability as infrastructure.
Tandem says it has released only in the UK and Ireland so far, citing a fast-changing regulatory environment, and argues that scaling to new contexts is less about the technology and more about the cultural and human infrastructure needed to avoid "the same stories for Lusaka and London."
Theme 3 — The 2026 edtech stack: scaffolds, benchmarks, and AI-ready devices
Educators increasingly describe AI as “just there now”—a built-in layer across tools rather than a standalone category. Meanwhile, builders note the market for “study buddy” apps is crowded with limited differentiation, and that students recommend tools that help them score better more than tools that merely save time.
Tools built around practice + feedback (not just content generation)
Short Answer (originating at Stanford) is described as a gamified writing platform: teachers launch writing prompts; students join without accounts, get instant AI feedback aligned to teacher-chosen criteria, and revise; and the class compares anonymized responses to build exemplars and reflect on quality.
Other structured-support tools highlighted this week include:
- TAI Math Coach for PreK–5 math planning, assessment, and instructional support (answering curriculum questions, improving lesson plans, and supporting students on demand).
- Brilliant classroom activities and interactive manipulatives designed to help students discover the “why” behind concepts, with teacher monitoring and differentiated reinforcement.
- School AI, positioned around personalized student support and real-time insights into progress and who needs help.
Teacher workflow accelerators (slides, infographics, accessibility, low-stakes quizzes)
A teacher on r/edtech describes using NotebookLM’s podcast features, Gemini for image creation for slides, ChatGPT to check the accessibility of notes, and ChatGPT/Perplexity to create low-stakes pop quizzes—reporting that students “loved it.”
Monica Burns also shared an example of using NotebookLM to generate infographics from a blog post. And a Shake Up Learning post claims AI can generate slide decks with both text and images (with a “good/bad/ugly” review).
Benchmarking and devices catching up to the learning layer
Two infrastructure updates stand out:
- The Educational AI Leaderboard aims to compare major LLMs on education benchmark tasks (automated essay scoring and math-misconception detection), including performance and cost data from private test sets. (Link)
- Dynabook has released Ryzen-powered laptops described as “Copilot+ ready,” with optional NPUs for local AI tasks (e.g., smarter conferencing and Copilot workflows) and Wi‑Fi 7.
Learner-built study companions (and “build an explainer on demand”)
Across Reddit discussions, learners are increasingly turning personal notes into study systems:
- A neurodivergent PhD student describes using AI for “chat with PDF,” mind mapping, and refining scattered notes, checking outputs in real time as a kind of repeated review.
- A Computer Science student nearing graduation is described using AI to turn notes into quizzes and study guides.
On the “tool-building” end, Ethan Mollick showcased Claude generating an interactive explainer for correlation vs. causation from a simple prompt, and then producing a simplified version for non-stats audiences.
- Tool: https://claude.ai/public/artifacts/5e9ad491-a9e3-4f5f-a290-8130d2c25733
- Simplified version: https://claude.ai/public/artifacts/71eb192e-245d-45b0-aa80-3a0e76fd18a9
Limitations surfaced this week (worth pricing into adoption)
- Misalignment hits struggling students first: one educator reports strong students may not need AI tools, while weaker students can be “sabotaged” by practice materials that don’t match the course and encourage passive reading.
- “Faster studying” can mean less learning: another warns the limiting step isn’t information access, but the internal process of thinking, practicing, and internalizing—so “less work” messaging can cut the very time learning requires.
- Verification overhead remains a hidden cost: teachers note that reading/editing AI-generated essays and verifying sources can take as long as writing, especially since AI can fabricate citations.
- Prompting determines whether AI is a scaffold or a bypass: one commenter calls “summarize this for me (in place of reading it)” brain rot, but sees value in reading first and asking for critique.
Theme 4 — Integrity and information literacy: making sources and process visible again
Teachers report that some students now cite “Google” or “ChatGPT” as sources, relying on search-page summaries instead of opening and evaluating underlying websites.
One U.S. history teacher’s workaround is procedural: a unit-long “research anything you find interesting” assignment where students post discoveries to a discussion board, but must link the actual source for every new fact (not a Google results page and not a chatbot). The final summary requires students to describe what they learned and how they learned it; a rubric scores activity, quality in students’ own words, sourcing, and interaction.
“Google is a butler, not a source.”
On the assessment side, some teachers describe shifting to in-class writing because low-effort students submit easily recognizable “AI-generated slop,” and absences become the main loophole. Separately, one educator reports grading an unprecedented number of “perfect papers,” attributing it to AI doing students’ work.
Practical resources are emerging that focus on assessing thinking, not just polished output: Michael Hernandez is highlighted as sharing "low-lift" strategies for helping students document their thinking process and increase originality in the age of AI.
Even among educators who want proactive adoption, there’s an emphasis on sequencing: one commenter argues AI should be restricted like calculators until students have enough background knowledge to evaluate outputs and spot hallucinations, while another argues students will use AI at home anyway—so teachers should model critical prompting, verification, and bias-checking in class.
What This Means
If you’re buying or deploying “AI tutoring,” demand tutor behavior. The strongest gains cited this week depend on structured practice with guidance , while unstructured answer-getting can produce the illusion of learning with no retention. Starting points: require tutor-style prompting and/or learning modes that ask questions instead of supplying final answers.
For K–12 leaders: AI is already reshaping classroom routines—plan for pedagogy shifts, not just tools. The Japan example emphasizes teacher moves (eliciting student explanation, cloud-based monitoring, peer discussion of prompts) alongside differentiated supports.
For early years and families: “more human interaction” is becoming a product requirement. Tandem’s design choices (shared reading, conversation prompts, slowed pacing, no video) and layered safety approach suggest one viable direction for preschool AI.
For workforce and L&D: watch the apprenticeship pipeline, not just productivity. Mollick argues internships and early apprenticeship are being disrupted as managers and interns turn to AI instead of learning-by-doing. Andrew Ng similarly warns that plunging into building without foundations leads to reinventing standard AI building blocks.
For assessment and literacy: make sources and process gradeable again. The “click the source, not the summary” assignment and the shift to in-class writing are pragmatic responses to students treating chatbots/search pages as sources.
For investors/builders: differentiation is moving toward pedagogy, workflow, and evidence. With “study buddy” apps crowded and students recommending tools that directly improve grades , benchmarks like the Educational AI Leaderboard and structured-feedback products (e.g., Short Answer) may become more important signals.
Watch This Space
- Apprenticeship-by-design: what replaces internships and early “apprenticeship” learning when first-pass work shifts to AI.
- Education-specific benchmarking in procurement: how performance + cost comparisons (essay scoring, math misconception detection) shape which models get embedded into products and schools.
- Preschool AI scaling with guardrails: Tandem’s plans for deeper personalization (including neurodiversity) and the challenge of expanding beyond the UK/Ireland regulatory context.
- The access/safeguards gap as a global constraint: continued focus on practical AI literacy and infrastructure as prerequisites for equitable adoption.
- Reusable institutional governance: Lance Eaton’s Creative Commons workshop resources and growing repositories of AI syllabus and institutional policies as shared infrastructure.
- K–12 platform ambition: MagicSchool AI says 2026 is focused on building its “most ambitious platform” for schools after scaling to millions of users.
The Edtech Podcast
liemandt
Austen Allred
The most consequential shift: AI moved from “promise to practice” in 2025—and 2026 pressures the system to redesign
Education leaders are no longer debating whether AI will show up in classrooms; the reported inflection is that educators are already using it, at scale. One signal: a report of 6× more educators using AI than in 2023, with nearly 4 in 5 saying they feel confident using it in their classroom .
At the same time, multiple sources converge on a harder implication for 2026: if AI makes drafting, searching, and tutoring cheap and ubiquitous, then assessment, instructional design, and human connection become the real constraints (and differentiators) .
Theme 1 — From “productivity hacks” to the instructional core
Early adoption often started with administrative tasks (lesson planning, emails). Several notes point to a deeper shift: toward AI that’s grounded in curriculum, pedagogy, and learning science.
- Educators and vendors describe a move from generic assistance to “instructional intelligence”—AI grounded in curriculum/pedagogy/learning science, not “the wisdom of the internet” .
- Teachers are increasingly asking how to introduce AI to students while preserving productive struggle, especially around writing and feedback .
- A concrete “job to be done” that shows up repeatedly is formative feedback at scale—more frequent and more actionable feedback, not just faster content generation .
Theme 2 — Assessment is breaking (and the responses are diverging)
2A) Redesign: measuring process, not just outputs
One forecast is blunt: “Assessment is breaking—And 2026 forces a redesign.” AI didn’t create the fragility, but it made it impossible to ignore .
Examples of where redesign is headed (as described in the sources):
- Moving beyond the five-paragraph essay as a default assessment .
- Shifting toward revision, reasoning, and process, so students are evaluated on thinking and growth rather than first-draft output .
- Increased use of oral exams, portfolios, and client-connected projects as alternatives to single scores or static essays .
2B) Control: in-class, longhand writing to reduce AI cheating
In K–12 teacher discussions, a common anti-cheating pattern is to require in-class handwritten work—sometimes surprise/timed—to ensure the work reflects the student’s capability . Teachers also describe requiring longhand rewrites in class for partial credit after suspected AI use .
2C) Practical “how-to” guidance emerging
Some practitioners are packaging immediate tactics:
- Monica Burns shared an episode positioned around building “AI resilient assessments”, with “practical tips teachers can use right away” . The link provided: https://classtechtips.com/2025/12/19/assessments-in-the-age-of-ai-bonus/.
- The same episode is framed elsewhere as guidance on reducing cheating by addressing root causes and what teachers can shift .
Separately, one educator reflection argues education is still “playing catch-up” on inappropriate AI use, with concern increasing year over year .
Theme 3 — AI tutor viability is converging with “education modes” in mainstream models
Two threads reinforce each other:
- Predictions that AI tutoring becomes truly viable at scale
- One prediction says 2026 may be the year the “AI tutor becomes the hero story,” citing multimodal interaction, lower cost, and better data as enablers .
- Another claim points to research showing AI-supported tutors matching human effectiveness, framing the goal as reach without replacing people .
- Major models are shipping education-specific interaction patterns
- In 2025, ChatGPT (study mode), Gemini (guided learning), and Claude (for education) released new or updated education modes designed to avoid simply giving answers and instead use more Socratic-style interactions .
- A write-up on Gemini 3 argues it can have an edge as a tutor/explainer because it explains complex topics conversationally and highlights sources—important in education settings . It also notes students may interact with Gemini via Google’s AI search summaries, whether they intend to or not .
Where this gets practical: sources describe AI being used to make homework/studying more accessible via customizable support and rapid feedback loops .
Theme 4 — The “learning layer” is being built by Big Tech; smaller tools compete on workflow, not raw model access
4A) Platform shift: learning embedded into core products
One prediction argues Big Tech is building learning directly into platforms, collapsing layers of creation, distribution, and intelligence . The implication for many EdTech vendors is specialization: focusing on niches too small or specific for Big Tech to care about .
4B) Tool pattern: UX and integration become the differentiator
A Reddit thread on an LLM-based essay feedback product captures a recurring market reality:
- Users can ask ChatGPT for detailed critique via prompting .
- The product’s value-add is framed as UX (Google Doc–style comments you can reply to) enabling targeted follow-ups without prompt iteration and without copy/paste workflows across tabs .
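As an illustration of that pattern (not the product's actual implementation), the sketch below asks a model for feedback as machine-readable comments anchored to quoted spans, which is the kind of structure a Google Doc–style comment UI needs; the model name, prompt, and JSON schema are assumptions.

```python
# Minimal sketch of the "anchored comments" pattern; not any vendor's real code.
# Idea: request machine-readable feedback tied to exact quoted spans so a UI can
# render replyable, in-context comments instead of one block of critique.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; model below is illustrative

PROMPT = (
    "You are giving essay feedback. Return ONLY a JSON array. Each item must have: "
    "'quote' (an exact short excerpt from the essay), 'comment' (specific and actionable), "
    "and 'criterion' (e.g., 'thesis', 'evidence', 'clarity')."
)

def anchored_feedback(essay: str) -> list[dict]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": essay},
        ],
    )
    # A real product would validate and repair malformed JSON here.
    comments = json.loads(response.choices[0].message.content)
    # Keep only comments whose quoted span actually appears in the essay,
    # so each one can be mapped to a character offset and shown inline.
    return [c for c in comments if c.get("quote") and c["quote"] in essay]
```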
4C) Capabilities and limitations: generation still needs verification
AI-generated instructional materials are improving, but the limitation is consistent: verification overhead.
- A user testing NotebookLM said the output looked impressive until they noticed multiple spelling errors, leading them to "triple check" the results and conclude they may as well have made it themselves .
- Another user said Luminaslides was decent and lacked spelling errors, but still required manual changes to information .
- A teacher-facing post claims AI can generate slide decks with both text and images, and points to a “good/bad/ugly” review (link provided in the source) .
4D) Workforce-adjacent learning tools are iterating fast
- Andrew Ng promoted a short course on Claude Code (built with Anthropic) focused on “highly agentic coding,” including orchestrating subagents and autonomous PR workflows . Enrollment link: https://www.deeplearning.ai/short-courses/claude-code-a-highly-agentic-coding-assistant.
- A bootcamp operator said they rewrite their AI curriculum every cohort because AI moves so quickly, and they're considering dropping "greenfield" (from-scratch) projects entirely because "building from scratch is too easy now" .
Theme 5 — Governance and equity: AI is accelerating faster than access, skills, and safeguards
A global-policy lens frames the risk as a widening divide: AI is moving fast while access, skills, and safeguards lag, concentrating benefits among already advantaged learners and high-income contexts .
The same conversation highlights concrete policy levers:
- Build practical AI literacy (hands-on, job-relevant use) .
- Put in place privacy/safety guardrails (the EU AI Act is referenced as a developed model for high-risk boundaries) .
- Treat connectivity/computing/teaching capability as economic infrastructure, not optional EdTech .
A specific equity datapoint used as an example: Romania at 3% of the population using AI, with the warning that non-users fall behind quickly .
Campus operations reality: the “registrar’s view” of adoption
Higher ed administration notes emphasize practical governance work:
- Ensuring AI tool use is FERPA compliant and considering impacts on GDPR, including limiting integrations with systems holding student data .
- Recognizing the need to evaluate vendor claims (“all technology vendors will say that they have some kind of AI”) and planning human oversight—especially where chatbots can give incorrect information that affects students .
- Using automation (e.g., data entry) to free staff time for high-touch student support, including for international and first-generation students, while upskilling staff rather than treating roles as obsolete .
Organizational strategy: “curiosity” and small pilots over mandates
Lance Eaton argues for a campus approach centered on curiosity and classroom-based pilots where faculty learn from one another, share findings, and make grounded decisions (including deciding not to use AI in a program, if reflective) .
He also points to policy repositories:
- AI Syllabi Policy Repository (190+ policies): https://docs.google.com/spreadsheets/d/1lM6g4yveQMyWeUbEwBM6FZVxEWCLfvWDh1aWUErWWbQ/edit?gid=118697409#gid=118697409
- AI Institutional Policy Repository (17 policies): https://docs.google.com/spreadsheets/d/1RE26GolTTu1KLMaaCXfYNHiCxLG3gyDsT_9yURpkYlQ/edit?gid=0#gid=0
What This Means (practical takeaways)
Plan for an assessment portfolio, not a single fix. Sources point to a split between redesigning assessment around process (revision/reasoning/oral/portfolio) and “control” approaches (in-class longhand work) . Many systems will likely need both.
Treat verification workload as a first-class cost. Whether it’s slide generation or notebook content, multiple users report that errors can erase time savings unless workflows include review and correction .
Expect "AI tutor" adoption to ride on mainstream platforms. Education modes in ChatGPT/Gemini/Claude, plus predictions that tutoring becomes viable at scale in 2026, suggest the default tutoring layer may be embedded where students already are.
Governance isn’t optional—privacy, safety, and accountability show up in day-to-day ops. The registrar lens (FERPA/GDPR, vendor evaluation, error escalation) is a template other units can reuse .
Human connection becomes more valuable, not less. Multiple predictions argue the distinctly human work—coaching, judgment, relationships—becomes more visible as cognitive work scales .
Watch This Space
- AI tutors as the “hero story” of 2026 (especially multimodal tutoring at lower cost) .
- Campus-wide deals with frontier labs as a competitive threat to many EdTech vendors .
- Agentic workflows moving from demos to training expectations, via tools like Claude Code and subagent orchestration .
- Quality grading of EdTech apps: Alpha Timeback (2026) says it will grade apps using “instructional invariants,” positioning parent-facing clarity as a product feature .
- The “communications layer” of schooling: teachers report suspicion that students/parents are using LLMs to craft sophisticated, manipulative complaint emails .
Andrew Ng
Austen Allred
Google
The Widening Gap: Acceleration vs. Friction
A stark divide is emerging between educational environments designed around AI and those retrofitting it into existing structures.
The Acceleration Model
At Alpha School, an AI-powered private institution expanding in the Bay Area, students are reportedly completing academic units 2–3x faster than traditional pacing . By using AI tutors to ensure mastery of prior material before advancing, the school claims to have reduced the standard 12-hour work day (class plus homework) to just 3 hours of core academics.
This efficiency frees time for "Olympic-level" passion projects. For example, a student named Alex developed Berry, an AI-powered plushie line for teen mental health. The device uses a microchip trained on vetted therapist data to bridge the "actionability gap" in traditional therapy .
The Friction Model
In contrast, traditional classrooms are facing an integrity crisis. Teachers report "rampant" AI cheating, with students using tools like Google Lens and ChatGPT for everything from homework to final exams .
- The Response: Many educators are reverting to analog methods, requiring in-class, device-free pen-and-paper assessments to verify learning .
- The Tension: While some districts invest in LLMs, they simultaneously block research sites like Wikipedia, creating a contradictory landscape in which administrators use paid ChatGPT subscriptions while students face restrictions .
- The Consequence: Educators worry that reliance on AI for basic tasks is producing graduates who lack grit and problem-solving skills, struggling with tasks that cannot be solved by a quick query .
What This Means
We are witnessing a bifurcation in learner experiences. In AI-native environments, the technology is a productivity engine that compresses rote learning to expand creative output. In traditional settings, it acts as a subversive force, prompting a defensive retreat to analog assessment methods. The challenge for leaders is moving from "detection" to "integration" without losing the rigor of independent thinking.
Tech Giants Embed Pedagogy
Major platforms are moving beyond generic chatbots, embedding specific pedagogical tools directly into productivity suites.
Microsoft's Teach Module
Microsoft is rolling out specialized AI teaching tools accessible via office.com. The new Teach Module includes generators for lesson plans, quizzes, and flashcards, designed to integrate with Teams and LMS platforms .
Google's Learning Ecosystem
Google has significantly upgraded its learning tools:
- NotebookLM is now built on Gemini 3, improving reasoning and multimodal understanding . New features include Data Tables for synthesizing insights and a direct integration with the Gemini App .
- Gemini 3 Flash now supports audio-based study plans. Users can upload voice notes explaining a concept, and the AI will identify knowledge gaps and generate quizzes to fill them .
What This Means
The "prompt engineering" era for educators may be short-lived. By baking pedagogical frameworks (like scaffolding, quizzing, and lesson planning) directly into the interface, tech giants are lowering the barrier to entry. This shifts the teacher's role from designing the AI interaction to curating the AI's output.
The New Workforce Literacy
The definition of "technical skills" is shifting. Andrew Ng, co-founder of DeepLearning.AI, argues that coding is becoming a universal requirement for knowledge workers—not to build software products, but to automate personal workflows.
"I think at some point saying that you don't write code will be like saying, I don't use email... People that use AI to write code for them will really get a lot more done."
Market Signals
- Purdue University has approved an "AI working competency" graduation requirement for all undergraduates entering in 2026 .
- Coursera and Udemy are merging in a $2.5 billion deal specifically to target workforce upskilling for the AI era .
- Chegg has cut nearly half its workforce, citing AI tools replacing traditional tutoring services .
What This Means
The distinction between "technical" and "non-technical" roles is dissolving. As AI lowers the barrier to coding, the ability to build custom tools (like a receptionist building a CRM via web scraping ) is becoming a baseline expectation for employability, not a specialized skill.
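For a sense of what such a personal tool can look like, the sketch below scrapes a public directory page into a CSV that serves as a throwaway CRM; the URL and CSS selectors are placeholders rather than a real site, so treat it as an illustration of the workflow, not a working integration.

```python
# Hypothetical sketch of a "personal tool": scrape a public directory page into
# a CSV that acts as a lightweight CRM. URL and CSS selectors are placeholders.
import csv
import requests
from bs4 import BeautifulSoup

DIRECTORY_URL = "https://example.com/directory"  # placeholder, not a real endpoint

def scrape_contacts(url: str) -> list[dict]:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    contacts = []
    for card in soup.select(".contact-card"):  # placeholder selector
        name = card.select_one(".name")
        email = card.select_one(".email")
        if name and email:  # skip cards missing either field
            contacts.append({
                "name": name.get_text(strip=True),
                "email": email.get_text(strip=True),
            })
    return contacts

def save_crm(contacts: list[dict], path: str = "crm.csv") -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "email"])
        writer.writeheader()
        writer.writerows(contacts)

if __name__ == "__main__":
    save_crm(scrape_contacts(DIRECTORY_URL))
```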
Global & Policy Snapshots
- El Salvador: Partnering with xAI to deploy Grok Tutor to over 1 million public school students, marking the world's first nationwide AI tutor program .
- Italy: The University of Ferrara has deployed Chromebooks with ClassTools to secure digital exams, reducing setup time from hours to minutes and enabling secure assessments in any classroom .
- MagicSchool + Anthropic: A new partnership aims to build an "AI Operating System for Schools" with a specific focus on safety and alignment, raising the standard for responsible AI in K-12 .
- Higher Ed Leadership: A survey reveals that at 60% of institutions, AI leadership is distributed or unclear, with AI strategy driven directly by the president's office at only 1.4% of institutions. This "fragmented ownership" is slowing strategic adoption .
Watch This Space
- Agentic AI: Beyond chatbots, look for "agents" that perform multi-step tasks. Andrew Ng is launching a course on making agentic workflows reliable, signaling a move from demos to production systems .
- Claymation for Concepts: Gemini's new "Claymation Explainer Gem" turns complex topics into animated infographics, hinting at a future where educational content is generated on-demand in any format .