Workers’ Secret Use of ChatGPT Poses Significant Risks for Tech Leaders

TL;DR:

  • Investment from big tech companies in AI and chatbots has created challenges for chief information security officers (CISOs).
  • OpenAI’s ChatGPT, Microsoft’s Bing AI, Google’s Bard, and Elon Musk’s chatbot project are making waves in generative AI.
  • Companies without their own GPT need to monitor employee usage of this technology.
  • Generative pretrained transformers (GPT) rely on large language models to produce human-like conversations.
  • CISOs should approach this technology with caution and implement necessary security measures.
  • Employees find generative AI useful for their work, regardless of IT approval.
  • Companies need to catch up with security measures related to generative AI.
  • CISOs can start with the basics of information security, including monitoring and licensing AI platforms.
  • Developing a customized GPT or hiring companies for this purpose is an option for companies.
  • Companies should ensure their GPT is based on unbiased and accurate data.
  • CISOs must be intentional about the information fed into the technology.
  • CISOs should focus on protecting confidential information and regulating data storage.

Main AI News:

The surge of investment in artificial intelligence and chatbots from prominent tech companies has caused a stir among chief information security officers (CISOs). Even as the industry weathers mass layoffs and slowing growth, OpenAI’s ChatGPT, Microsoft’s Bing AI, Google’s Bard, and Elon Musk’s plans for a chatbot of his own dominate the headlines. Generative AI is infiltrating workplaces, demanding caution and robust security measures from CISOs.

Generative pretrained transformers (GPTs) are built on large language models (LLMs), algorithms that generate human-like conversation for chatbots. Not every company possesses its own GPT, however, which makes monitoring employee use of the publicly available technology a necessity.

Michael Chui, a partner at the McKinsey Global Institute, likens the adoption of generative AI to how workers embraced personal computers and phones, even without the endorsement of IT departments. Chui emphasizes the significance of compelling technologies that individuals are willing to pay for, noting the historical examples of mobile phones and personal computers. Consequently, companies find themselves playing catch-up when it comes to implementing security measures.

CISOs, who are already grappling with burnout and stress, face numerous challenges, such as potential cybersecurity threats and increasing automation demands. As AI and GPT permeate the workplace, CISOs can begin by focusing on the fundamentals of information security.

Chui suggests that companies can acquire a license to use an existing AI platform to monitor employee interactions with chatbots, ensuring the protection of shared information. By implementing technical measures, such as licensing software and establishing enforceable legal agreements regarding data usage, companies can prevent employees from sharing confidential information with publicly accessible chatbots.
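As a minimal sketch of the kind of technical measure described here, a company-side filter could redact obviously sensitive patterns from a prompt before it ever reaches a public chatbot, logging which rules fired for later audit. The pattern names and rules below are illustrative assumptions, not a production data-loss-prevention policy:

```python
import re

# Illustrative patterns a company might flag before a prompt leaves the
# network. These rules are assumptions for the sketch, not a complete policy.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matches of each sensitive pattern with a placeholder tag.

    Returns the redacted prompt plus the names of the rules that fired,
    which a monitoring system could record for audit purposes.
    """
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits
```

In practice such a filter would sit in a licensed proxy or gateway in front of the chatbot, so that the redaction and audit logging happen centrally rather than relying on each employee’s discretion.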

Licensing software involves additional checks and balances, including the safeguarding of confidential information, regulation of data storage, and guidelines for employee software usage. Auditing the software becomes possible through an agreement, allowing companies to ensure that data is adequately protected.

Chui notes that most companies already follow this practice when storing information in cloud-based software, so providing employees with a company-sanctioned AI platform aligns with existing industry standards.

Another security option for companies is to create their own GPT, or to enlist firms specializing in this technology to develop a customized version. Sameer Penakalapati, CEO of Ceipal, an AI-driven talent acquisition platform, highlights the availability of platforms such as Ceipal and Beamery’s TalentGPT for specific functions like HR. Microsoft’s customizable GPT could also be a viable option. Despite the high costs involved, some companies may still choose to build their own technology.

By creating their own GPT, companies can ensure that the software provides employees with access to precise information. Alternatively, even when hiring an AI company to develop this platform, companies can securely feed and store information. Regardless of the chosen path, Penakalapati emphasizes that CISOs must remember that these machines operate based on their training. Therefore, it is crucial to be intentional about the data provided to the technology.

Penakalapati stresses the importance of using technology that relies on unbiased and accurate data, emphasizing that such technology is not created by accident. CISOs must ensure that the information fed into the system is carefully curated and reflective of the company’s values and objectives.

Conclusion:

The increasing investment in artificial intelligence and chatbot technologies by big tech companies, alongside the challenges faced by chief information security officers (CISOs), marks a significant shift in the market. The emergence of generative AI, powered by large language models, presents both opportunities and risks for businesses.

It is crucial for companies to approach this technology with caution and implement robust security measures to protect sensitive information. CISOs must stay proactive and ahead of the curve by monitoring employee usage, licensing AI platforms, and considering customized GPT solutions. By addressing these challenges and embracing the potential benefits of generative AI, businesses can position themselves for success in an evolving market landscape.
