Call for Licensing of AI Scientists in UK Sparks Debate in Industry Amidst Regulatory Review

TL;DR:

  • The British Computer Society is calling for licensing of AI scientists to ensure ethical considerations are met.
  • The CMA is conducting a review of the AI market amid concerns over big tech dominance.
  • A 2020 OECD study suggests ensuring quality standards for goods and services rather than setting standards for the professionals who provide them.
  • The CMA review is a “mapping” exercise and not a starting point for increased regulation.
  • Trust in AI services is becoming a public concern.
  • AI can be useful for modeling different business scenarios and their outcomes, but there needs to be confidence in the outputs.

Main AI News:

The professional body for tech workers has called for the licensing of scientists developing AI products. The CEO of the British Computer Society (BCS), Rashik Parmar, has urged that a register of computer scientists be established to ensure that those working on AI technologies in “critical infrastructure,” or on systems that “could potentially be harmful to human life,” are certified and working to a code of ethics. This follows the Competition and Markets Authority (CMA) review of the AI market, which has raised concerns about the dominance of big tech firms, such as Microsoft.

Mr. Parmar argued that IT professionals should not be allowed to build and deploy complex technologies without the same level of professionalism as surgeons, who must adhere to strict codes of ethics and be certified to operate. He believes that a certified level of professionalism is required in the AI industry to ensure that ethical considerations are met.

A 2020 study by the Organisation for Economic Co-operation and Development (OECD) found that “occupational entry regulations” reduced companies’ productivity by approximately 1.5% on average. The authors of the study suggested that licensing and certification requirements should be “lightened” to ensure certain quality standards for goods and services instead of setting standards for the professionals providing them.

The CMA’s review is currently a “mapping” exercise and not a starting point for increased regulation. However, there is a growing concern about the use of AI services to make decisions on behalf of humans. As technologies like ChatGPT and its derivatives become more embedded in everyday life, the public is increasingly worried about placing their trust in AI.

John Hill, the founder of process simulation company Silico, noted that while AI can be useful for modeling different business scenarios and their outcomes, there needs to be confidence in the outputs of the AI software. Hill believes the change is not just a shift toward trusting technology but toward using it for aspects of the decision-making process that humans “cannot achieve” on their own.

Conclusion:

The call for licensing AI scientists and the ongoing review by the CMA may have significant implications for the AI market. While licensing may raise professional and ethical standards, it could also reduce productivity and increase costs for companies. A shift toward ensuring quality standards for goods and services, rather than for practitioners, would also change how professionals in the industry are regulated.

As trust in AI services becomes an issue, companies may need to focus on building confidence in the outputs of AI software to ensure their continued use in the decision-making process. Overall, the future of the AI market remains uncertain as the industry continues to grapple with these challenges.
