AI-Powered Chatbots Pass Certified Ethical Hacking Exams, Study Reveals

  • AI-driven chatbots like ChatGPT and Bard (Gemini) demonstrated competence in certified ethical hacking exams.
  • They effectively explained complex cybersecurity concepts such as man-in-the-middle attacks.
  • Bard showed slightly higher accuracy, while ChatGPT excelled in clarity and comprehensiveness of responses.
  • Both AI models corrected answers when prompted, reflecting responsiveness but also limitations.
  • Researchers emphasize these tools should complement, not replace, human expertise in cybersecurity strategy.

Main AI News:

Artificial intelligence (AI) has proven its mettle in the realm of cybersecurity, as demonstrated by recent research led by the University of Missouri’s Prasad Calyam and collaborators from Amrita University. Their study scrutinized the capabilities of two leading AI models, OpenAI’s ChatGPT and Google’s Bard (now Gemini), through a rigorous certified ethical hacking exam.

Certified ethical hackers, the virtuosos of the cybersecurity world, employ the same tactics as malicious hackers to pinpoint and rectify security vulnerabilities. These exams gauge proficiency across various attack vectors, defensive strategies, and breach response protocols.

ChatGPT and Bard (Gemini) belong to a class of sophisticated AI known as large language models. These models leverage vast neural networks to generate human-like responses, adept at answering queries and crafting content.

During testing, both AI systems successfully tackled challenges from the certified ethical hacking exam. For instance, they adeptly explained complex maneuvers like the man-in-the-middle attack—a technique where an intermediary intercepts communication between two systems. Notably, Bard exhibited marginally superior accuracy, while ChatGPT excelled in clarity, comprehensiveness, and conciseness of responses.
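To make the man-in-the-middle idea concrete, the sketch below shows a benign, lab-only stand-in for the "intermediary": a small TCP relay that accepts a client connection, forwards traffic to the real server, and logs everything it passes along. This is not code from the study; the host and port values are placeholders for a local test setup, and the point is simply why unencrypted traffic is readable by anything sitting on the path.

```python
# Minimal, illustrative TCP relay (lab use only): a benign stand-in for the
# "intermediary" in a man-in-the-middle scenario. It listens locally, forwards
# traffic to an upstream server, and prints each chunk it relays, showing why
# plaintext traffic is readable by an on-path party. Addresses are placeholders.
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8080)    # where the "victim" client connects
UPSTREAM_ADDR = ("127.0.0.1", 9090)  # the legitimate server in the lab setup

def pump(src: socket.socket, dst: socket.socket, label: str) -> None:
    """Copy bytes from src to dst, printing each chunk as it passes through."""
    try:
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break
            print(f"[{label}] {len(chunk)} bytes: {chunk[:60]!r}")
            dst.sendall(chunk)
    except OSError:
        pass  # the other direction closed the pair; nothing more to relay
    finally:
        src.close()
        dst.close()

def handle(client: socket.socket) -> None:
    """For each incoming client, open an upstream connection and relay both ways."""
    upstream = socket.create_connection(UPSTREAM_ADDR)
    threading.Thread(target=pump, args=(client, upstream, "client->server"), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client, "server->client"), daemon=True).start()

def main() -> None:
    with socket.create_server(LISTEN_ADDR) as listener:
        print(f"Relaying {LISTEN_ADDR} -> {UPSTREAM_ADDR} (lab use only)")
        while True:
            client, _ = listener.accept()
            handle(client)

if __name__ == "__main__":
    main()
```

Pointing a plaintext client at 127.0.0.1:8080 instead of the real server makes every request and response visible in the relay's log; that visibility is the core intuition behind the attack and behind defenses such as TLS, which the chatbots were asked to explain.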

Professor Prasad Calyam, the Greg L. Gilliom Professor of Cyber Security at Mizzou, emphasized the study’s findings. “While these AI tools demonstrated strong performance, including corrective responses to prompts like ‘are you sure?’,” Calyam noted, “they should complement, not replace, human expertise in devising robust cyber defense strategies.”

Indeed, AI-driven insights can serve as valuable starting points for investigation and training in cybersecurity. They offer foundational knowledge for IT professionals and small enterprises seeking rapid, informed assistance before consulting with specialized experts.

Looking ahead, Calyam remains optimistic about AI’s role in ethical hacking, asserting that ongoing advancements will enhance these models’ accuracy and utility. “As these models evolve,” he concluded, “they hold the promise to significantly bolster cybersecurity measures, safeguarding our digital landscape with greater efficacy.”

Conclusion:

The success of AI-powered chatbots in passing certified ethical hacking exams underscores their growing role in cybersecurity education and initial problem investigation. However, their reliance on vast datasets and inherent limitations in handling nuanced scenarios suggest they should be used as supportive tools alongside human cybersecurity experts. This dual approach could lead to more robust defense strategies in an increasingly complex digital landscape.

Source