Generative AI in Education and Industry: What the Data Really Says

A new generation of tools built on large language models, such as ChatGPT and GitHub Copilot, is rapidly changing how people learn, teach, and build software. While public buzz around Generative AI (GenAI) often swings between fear and fascination, a new report offers something far more grounded: data.

The 2024 study “Beyond the Hype: A Comprehensive Review of Current Trends in Generative AI Research, Teaching Practices, and Tools” draws on a wide mix of sources: surveys of educators and developers, systematic literature reviews, and interviews with toolmakers and researchers. It is backed by several major universities and was presented at the ACM ITiCSE conference.

What emerges from the findings is a picture of enthusiasm, tempered by friction, gaps, and open questions.

Educators See the Shift, but Are Still Catching Up

Ask educators if GenAI is changing the skillset needed to build software, and the answer is a resounding yes: 77% say it already has. But fewer than 40% have updated their teaching to match. Many are letting students use tools like Copilot or ChatGPT, but just 35.5% are actively designing courses around them.

This creates a disconnect. On the one hand, there is wide recognition that GenAI is becoming a core part of modern computing. On the other, there is hesitation, partly due to unclear policies and partly due to a lack of tested teaching methods.

The result? Some students are experimenting with GenAI unguided, while others are blocked from using it entirely.

Developers, On the Other Hand, Are All In

In industry, the adoption curve looks very different. Over 79% of surveyed developers now use GenAI tools at work. For more than half, it is a daily habit. They use it to generate boilerplate code, autocomplete functions, write documentation, and help with debugging.

Adoption is not without concern. About 33% of developers describe GenAI as “a little harmful,” mostly due to accuracy issues or bugs in generated output. But 81% say it makes them more efficient overall. In short, they are willing to live with the flaws.

One 2024 survey even found that one in five DevOps professionals now uses AI at every stage of the software pipeline. This trend is not slowing down.

Misalignment Between Classrooms and Workplaces

There is also a gap between what teachers think developers are doing with GenAI and what is actually happening.

For example, 79.2% of educators believe GenAI is mostly used to generate code. That is true, but many underestimate how often it is used to modify existing code or restructure legacy systems. Conversely, they tend to overestimate how often developers use GenAI for more complex tasks like algorithm modeling.

This misread can lead to courses that either over-trust or under-prepare students for real-world use. The solution, according to the study authors, is more dialogue between educators and industry professionals, plus better data sharing.

Results in the Classroom: Mixed, but Promising

The study did not stop at perception. It also looked at outcomes.

Out of 71 reviewed studies that tested GenAI in actual classrooms:

  • 86% saw positive results when using GenAI to generate learning materials
  • 80% reported gains in code comprehension
  • 58% saw improvement when students used GenAI to write code
  • Only 50% found it helpful for generating coding hints

This suggests GenAI can be a powerful tutor, but only for specific tasks. Using it to explain a block of code? Great. Using it to drop hints during a complex problem set? Not always reliable.

Equity Is Still a Problem

There is another catch: access.

Students at wealthier schools, or those who can afford paid subscriptions to tools like GitHub Copilot, are more likely to benefit. Instructors estimate that a three- to four-year degree could cost students up to $1,000 in GenAI tools alone, unless schools negotiate access or provide institutional licenses.
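As a rough sanity check, that figure is plausible under typical consumer pricing. The sketch below assumes a hypothetical $20/month subscription (a price point not taken from the study itself):

```python
def total_tool_cost(monthly_price: float, years: float, months_per_year: int = 12) -> float:
    """Cumulative cost of a GenAI tool subscription over a degree program."""
    return monthly_price * months_per_year * years

# Assumed $20/month, four-year degree: 20 * 12 * 4 = $960,
# close to the "up to $1,000" instructors estimate.
print(total_tool_cost(20, 4))  # 960.0
```

The exact number depends on which tools a student subscribes to and for how long; the point is simply that recurring subscription fees accumulate to a meaningful sum over a degree.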

Only 23.7% of educators surveyed said they teach at institutions serving minority populations. And 30.3% said they were not sure. That uncertainty speaks volumes.

Policies Are Evolving but Inconsistently

The academic response to GenAI is still taking shape. Most instructors, 77.6%, do not ban these tools outright. But among those who do, many are taking extra steps, like changing exam formats, rewriting assignments, and reviewing student code for signs of AI-generated content.

That lack of direction may be harming students more than helping. Without structure, they may rely too heavily on GenAI or use it in ways that undercut learning.

Looking Ahead: What Comes Next?

Educators are beginning to shift their focus. The old way of grading assignments line by line is fading. New tools are expected to assist with grading, suggest real-time feedback, and even help design better assessments.

Many educators now believe the future lies in teaching students how to work with AI tools, how to ask the right questions, debug responsibly, and evaluate what the model spits out. In other words, problem-solving, not syntax memorization.

This shift echoes what is already happening in the workforce.

Final Thought

The ACM study gives us a rare, well-researched window into how GenAI is actually being used, not just theorized about. It blends academic caution with practical insight.

And while the hype around artificial intelligence is not going anywhere, this report reminds us that the real work is quieter. It is happening in classrooms, at code reviews, and during late-night grading sessions.

“I do not need my students to become walking compilers. I need them to know how to think.”

By Randy Ferguson