Leveraging AI for Smarter Student Feedback: Insights from New Research

StudentPulse Team
February 14, 2025

How can AI help educators make sense of student feedback? Universities rely on course evaluations to assess teaching quality, but analyzing hundreds of open-ended student comments is a challenge. A recent research study by Professor Euan Lindsay and colleagues at Aalborg University explores how AI-powered tools can synthesize student feedback into actionable insights. Their findings highlight both the promise and limitations of using large language models (LLMs) in education.

In this article, we break down the study’s key insights and discuss how they align with the work we’re doing at StudentPulse—where we’re actively developing AI-powered solutions that go beyond simple summarization to provide actionable recommendations linked to real student feedback.

Key Challenges Identified in the Research

While AI has shown promise in processing large amounts of qualitative feedback, the study by Lindsay and colleagues identified several key challenges:

  • Factuality and Hallucination – AI models sometimes generate insights that are not directly supported by the student feedback, particularly when working with small datasets. This results in hallucinated recommendations that can mislead educators.
  • Handling Contradictory Feedback – AI struggles to accurately reflect opposing opinions from students in the same course.

Where you have the challenge is where half the class thinks it’s too hard and a third of them say it’s too easy, and you get this contradictory kind of piece.

Euan Lindsay, Professor at Aalborg University

  • Non-Actionable Summaries – While AI-generated summaries highlight broad themes, they sometimes lack specific, actionable recommendations that educators can implement.
  • Trust and Transparency Issues – AI-generated feedback occasionally includes unintended elements, such as student names, despite explicit instructions to avoid personal identifiers. This raises concerns about how AI processes and presents sensitive information.
  • Reliability and Prioritization – AI models have difficulty determining which feedback is most relevant in large datasets, producing an overload of generic suggestions instead of a prioritized set of the most critical insights.

These challenges highlight why AI cannot function as a standalone solution for course evaluations and why human expertise remains essential in interpreting feedback effectively.

Addressing AI’s Limitations: The StudentPulse Approach

At StudentPulse, we are actively developing AI-powered solutions to not only summarize student feedback but also rank, prioritize, and generate actionable recommendations—all while ensuring transparency and reliability. Here’s how our AI approach addresses the challenges identified in the research:

1. AI-Driven Prioritization

Not all student comments are equally valuable. To ensure that AI-generated insights are meaningful, we rank student comments based on multiple factors:

  • Relevancy Scoring: Using proprietary machine learning techniques, we determine which comments contribute most to actionable feedback.
  • Sentiment Extremeness: The more extreme a sentiment (positive or negative), the more weight it carries in shaping recommendations.
  • AI Fitness Score: Some feedback is more structured and actionable than others. We assess how ‘AI-friendly’ a dataset is before generating suggestions.

This ranking ensures that our AI focuses on the most impactful feedback, rather than just summarizing everything indiscriminately.
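To make the idea concrete, here is a minimal sketch of how such a ranking could work. It is illustrative only, not StudentPulse's proprietary scoring: the `Comment` fields, the weights, and the assumption that a relevance score and a signed sentiment score are already available are all invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    relevance: float   # 0..1, e.g. from a relevance classifier (assumed available)
    sentiment: float   # -1..1, signed sentiment score (assumed available)

def rank_comments(comments, w_relevance=0.7, w_extremeness=0.3):
    """Order comments so the most actionable feedback comes first.

    Sentiment extremeness is the absolute value of the signed sentiment:
    strongly positive and strongly negative comments both outrank neutral ones.
    """
    def score(c):
        return w_relevance * c.relevance + w_extremeness * abs(c.sentiment)
    return sorted(comments, key=score, reverse=True)

comments = [
    Comment("The lectures were fine.", relevance=0.3, sentiment=0.1),
    Comment("Weekly quizzes helped me keep up.", relevance=0.8, sentiment=0.9),
    Comment("Assignment 2 instructions were unclear.", relevance=0.9, sentiment=-0.8),
]

for c in rank_comments(comments):
    print(c.text)
```

Note how the strongly negative but highly relevant comment about assignment instructions rises to the top, while the neutral, low-relevance remark sinks: exactly the behavior that keeps a summary from drowning in generic feedback.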

2. Transparent AI Recommendations Linked to Real Comments

A common concern with AI-generated insights is trust—how do educators know where recommendations come from? At StudentPulse, we directly link each AI-generated suggestion to real student comments, allowing institutions to trace feedback back to its original source. This transparency ensures that universities can verify insights and act with confidence.
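One simple way to picture that traceability is to store the IDs of the supporting comments alongside each suggestion. The sketch below uses a hypothetical `Recommendation` structure invented for this example, not our actual schema:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    suggestion: str
    source_comment_ids: list  # ids of the student comments that support it

def trace(recommendation, comments_by_id):
    """Return the original student comments behind an AI-generated suggestion."""
    return [comments_by_id[i] for i in recommendation.source_comment_ids]

comments_by_id = {
    "c1": "Assignment 2 instructions were unclear.",
    "c2": "I didn't understand what was expected in assignment 2.",
}

rec = Recommendation(
    suggestion="Clarify the assignment 2 brief with a worked example.",
    source_comment_ids=["c1", "c2"],
)

print(trace(rec, comments_by_id))
```

Because every suggestion carries its evidence, an educator who doubts a recommendation can read the underlying comments rather than take the AI's word for it.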

3. Human Validation: Continuous AI Improvement

We don’t just rely on AI—we also integrate human feedback to refine our models. Educators can like or dislike AI-generated recommendations, providing real-time feedback that helps us improve the system over time. This approach ensures that AI remains a collaborative tool rather than a black-box decision-maker.
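A like/dislike loop of this kind can be captured with something as small as the toy `FeedbackLog` below; the class and method names are invented for illustration and do not reflect our internal implementation:

```python
from collections import defaultdict

class FeedbackLog:
    """Collect educator reactions to AI-generated recommendations."""

    def __init__(self):
        self.votes = defaultdict(lambda: {"like": 0, "dislike": 0})

    def record(self, rec_id, liked):
        """Register one educator's like (True) or dislike (False)."""
        self.votes[rec_id]["like" if liked else "dislike"] += 1

    def approval(self, rec_id):
        """Share of likes among all votes, or None if no one has voted yet."""
        v = self.votes[rec_id]
        total = v["like"] + v["dislike"]
        return v["like"] / total if total else None

log = FeedbackLog()
log.record("rec-1", True)
log.record("rec-1", True)
log.record("rec-1", False)
print(log.approval("rec-1"))  # 2 of 3 educators approved
```

Approval rates like this can then feed back into model refinement, flagging recommendation patterns that educators consistently reject.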

4. Confidence Scoring & Trust

To address reliability concerns, we are actively exploring ways for AI to assess its own accuracy through confidence scoring. This means AI doesn’t just generate insights—it evaluates how reliable those insights are, helping institutions distinguish between strong recommendations and areas requiring additional human review.
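As a rough sketch of the idea (the formula, weights, and threshold below are illustrative assumptions, not our production scoring), a confidence score might combine how many comments support a recommendation with how well those comments agree:

```python
def confidence(support_count, sentiment_agreement, min_support=3):
    """Toy confidence score for an AI-generated recommendation.

    support_count: how many distinct comments back the suggestion.
    sentiment_agreement: 0..1, the share of supporting comments pointing
    in the same direction (contradictory feedback lowers confidence).
    min_support: below this many comments, coverage scales the score down.
    """
    coverage = min(support_count / min_support, 1.0)
    return round(coverage * sentiment_agreement, 2)

print(confidence(support_count=5, sentiment_agreement=0.9))  # 0.9
print(confidence(support_count=1, sentiment_agreement=1.0))  # 0.33: thin evidence
```

Even this crude version shows the intended behavior: a recommendation backed by a single comment scores low however unanimous it looks, flagging it for human review rather than automatic acceptance.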

5. Training Our Own AI Model on ‘Relevant’ Student Comments

Unlike generic AI solutions, we have trained our own AI model to identify the most relevant student comments, ensuring that our insights are fine-tuned for higher education settings. You can read more about how we developed this model in our article here.

The Future of AI in Student Feedback

Lindsay’s research highlights both the opportunities and challenges of AI in course evaluations. AI can process vast amounts of student feedback at scale, making it easier to spot trends and identify areas for improvement. However, ensuring accuracy, trust, and meaningful actionability remains a work in progress.

At StudentPulse, we are continuously refining our AI capabilities, integrating client feedback, and improving how AI ranks and processes student comments. Our goal is to make AI-driven student feedback not only insightful but also practical and actionable for institutions.

Curious to see the StudentPulse AI in action?

Book a free demo and be among the first to experience how AI can turn loads of data into instant, actionable insights.