How ChatGPT’s Performance on the FE Exam Is Exposing Major Flaws in Engineering Education


Artificial intelligence (AI) has made significant strides in recent years, and its application in education is sparking heated discussion across many fields. Engineering education, known for its complexity and rigorous assessments, is one area where AI is beginning to have a noticeable impact. ChatGPT, an AI language model, has performed impressively on many exams, including the Fundamentals of Engineering (FE) exam. But while its success is intriguing, the real story lies in what that performance reveals about the current structure of engineering education.

ChatGPT’s Success on the FE Exam: A Case Study

Several studies have tested ChatGPT’s performance on the FE exam, the comprehensive test that serves as the first step toward professional engineering licensure in the U.S. ChatGPT achieved moderate success, scoring between 63% and 76% depending on the exam version and question set. While this is impressive for an AI, the flaws in its performance, especially on complex, real-world engineering problems, highlight major shortcomings in the exam and, by extension, in engineering education itself.

Studies, including work shared through the American Society for Engineering Education (ASEE), found that while ChatGPT performed well on multiple-choice questions requiring formula-based answers, it struggled with more complex ones. When tasked with problem-solving scenarios that demanded a deeper understanding of engineering principles or creative thinking, it often gave confident but incorrect answers.

Why ChatGPT’s Success Highlights Engineering Education’s Flaws

The moderate success of ChatGPT on a professional engineering exam calls into question the current assessment methods in engineering education. Many exams, like the FE, rely heavily on formulaic, memorization-based questions. These kinds of questions are easily tackled by AI, but they don’t necessarily test the critical thinking or problem-solving skills that real-world engineering demands.

Engineering students are being trained to pass exams that assess their ability to recall formulas and apply them to well-defined problems. But outside of academia, engineers must solve open-ended, often ill-defined problems that require creativity, collaboration, and adaptability. As ChatGPT’s performance demonstrates, AI can easily pass exams that focus on standardized problem-solving but struggles with the nuances of real-world engineering practice.

Rethinking Engineering Education and Assessment

ChatGPT’s ability to pass portions of the FE exam should be a wake-up call for engineering educators. The feat itself may be impressive, but it raises a more important question: are our exams really measuring what we need future engineers to know? The heavy reliance on multiple-choice and formulaic questions needs to be re-evaluated.

To truly prepare engineers for the demands of the industry, education systems must embrace more experiential learning, project-based evaluations, and case studies that replicate real-world challenges. These types of assessments push students to apply their theoretical knowledge in practical ways, ensuring they develop the critical thinking and creative problem-solving skills essential for success in engineering.

Conclusion: The Future of Engineering Education

The findings from ChatGPT’s performance on the FE exam expose significant flaws in how engineering education currently assesses students. As AI continues to advance, these standardized assessments may become increasingly unreliable indicators of a student’s ability to succeed in the field. The focus must shift towards creating assessment models that evaluate not just technical proficiency but also the ability to innovate, collaborate, and tackle the complex challenges engineers face today.
