Assessment | By Marina Kidron | June 9, 2025

Reactions to Bloomberg, New York Magazine and Axios: Rethinking Assessment in Higher Ed

Three recent pieces make it clear: generative AI has exposed deep flaws in how we educate, and it will have a tremendous impact on the graduates entering the workforce. But this is also an opportunity for genuine innovation in evaluation, particularly through AI-based tools.

Bloomberg (May 27, 2025) asked “Does College Still Have a Purpose in the Age of ChatGPT?” and warned that students can now outsource homework entirely, producing a landscape of “computers grading papers written by computers, students and professors idly observing, and parents paying tens of thousands of dollars a year for the privilege.” The piece emphasized that “AI may prove to be a powerful pedagogical tool. But simply letting students outsource their homework isn’t the way.”

New York Magazine (May 7, 2025) ran the headline “Everyone is cheating their way through college.” A student they interviewed admitted, “I’d just dump the prompt into ChatGPT … AI wrote 80 percent of every essay I turned in… I’d just insert 20 percent of my humanity, my voice, into it.” A professor at Cal State Chico warned that “massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.”

Axios (May 26, 2025), under the headline “AI cheating surge pushes schools into chaos,” highlighted the surge in AI cheating and noted that AI-detection tools often get it wrong. Most teachers can’t even agree on what constitutes acceptable AI use in this new world.

What This Reveals About Learning

  • Assessment by output no longer works. If students can offload 80% of the work to AI, we need to shift from measuring output to measuring understanding.
  • Detection and deterrence are insufficient. Plagiarism detectors are error-prone and adversarial, and they are defensive by design: they look for problems rather than enabling learning. Meanwhile, AI itself can be a powerful addition to a student’s toolbox.
  • Educators lack frameworks for positive integration of AI tools. Bloomberg hints at AI’s promise as a pedagogical tool, and Axios cites American University’s initiative, but most schools remain reactive.

How Togeder Responds

Togeder offers a solution by shifting from homework to alternative assessments:

  • Real-time evaluation of learning. Instead of grading outputs, Togeder monitors in-session problem solving and critical thinking through collaborative work in small groups, something AI can’t fake.
  • Rich, human-centric insights. Automated reports give students personalized feedback and show instructors who contributed, how they reasoned, and how well they communicated, anchoring assessment in student thinking rather than in results alone.
  • Scalable and flexible. Togeder supports both small seminars and large lectures, overcoming the “scale problem” that plagues approaches like oral exams. Institutions can scale in-class evaluation, no matter the class size.

This approach offers something traditional assessments cannot: visibility into a student’s thinking as it unfolds. It’s not about policing behavior or outsmarting AI tools; it’s about designing assessments that AI can’t replace, by having students collaborate with their peers in ways that mirror real-world workplace scenarios.

Looking Ahead

If AI has changed the rules, then it’s time we change the game. Not by abandoning assessment, but by redesigning it so that education remains about deep thinking, clear expression, and meaningful growth. That is what college must continue to be about. And with the right tools, it still can be.
In short: AI didn’t break education; it highlighted its weaknesses. This is an opportunity for technology to redesign assessment around what really matters: student thinking in the moment, not polished AI-generated outputs.

Tags: AI in Education, Alternative Assessment, Academic Integrity, EdTech Innovation, Personalized Feedback