AI will not be the destroyer of jobs, says Bank chief

Bank of England Governor Andrew Bailey (Image: Getty Images)

Artificial Intelligence (AI) will not be a “mass destroyer of jobs” and human workers will learn to work with new technologies, the governor of the Bank of England has told the BBC.

Governor Andrew Bailey said while there are risks with AI, “there is great potential with it”.

The Bank says businesses expect to see productivity benefits from the technology soon.

Almost a third of businesses told the Bank they had made significant investments in AI in the past year.

Mr Bailey added: “I’m an economic historian, before I became a central banker. Economies adapt, jobs adapt, and we learn to work with it. And I think you get a better result by people with machines than with machines on their own. So I’m an optimist…”

In its latest assessment of the UK economy the Bank’s business contacts said that automation and AI investment was already “containing recruitment and labour costs” in a tight labour market.

Mr Bailey’s comments come as a House of Lords committee says the UK should embrace the positives of AI rather than focus only on its risks.

The committee’s chair Baroness Stowell told the BBC that talk of “existential risks and sci-fi scenarios” should not get in the way of reaping the rewards of AI.

Baroness Stowell (Image: Getty Images)

AI goldrush

The country could “miss out on the AI goldrush,” her committee’s report said.

It said some of the “apocalyptic” warnings about AI’s dangers were exaggerated.

The Lords Communications and Digital Committee’s report focuses on large language models (LLMs), which are what power generative AI tools like ChatGPT.

They have captured people’s imaginations with their ability to, for example, give human-like responses to questions.

But they have also prompted concerns, including from various senior industry figures, that the technology could cause problems ranging from eliminating jobs to threatening humanity itself.

The UK hosted the world’s first AI Safety Summit in November 2023, where a global declaration on managing AI risks was announced.

But Baroness Stowell warned that the government needed to be careful the UK didn’t end up as “the safety people”.

“No expert on safety is going to be credible if we are not at the same time developers and part of the real vanguard of promoting and creating the progress on this technology”, she said.

Given there is no UK equivalent of ChatGPT, and to avoid another situation in which, as with other areas of tech, the industry giants are all clustered elsewhere, the Lords committee is essentially warning the country to go easy on the red tape.

The committee has also highlighted the issue of copyright, which is particularly contentious with AI.

That’s because LLMs rely on being fed information from material that already exists digitally, and there are questions over whether developers have properly sought permission to use it.

Photo agency Getty Images is currently taking legal action against Stability AI, claiming that the tech company has used its images without permission to train its picture generation tools.

The committee is calling on the government to provide clarity over what rules apply, saying it cannot “sit on its hands” while LLM developers “exploit” the works of rightsholders.

“The government needs to come out with its position,” Baroness Stowell told the BBC.

Secretary of State for Science, Innovation and Technology Michelle Donelan will give evidence to the Lords Communications and Digital Committee on Tuesday, where she is expected to be questioned on the government’s reaction to the report.

A Department for Science, Innovation and Technology spokesperson said: “We do not accept this – the UK is a clear leader in AI research and development, and as a government we are already backing AI’s boundless potential to improve lives, pouring millions of pounds into rolling out solutions that will transform healthcare, education and business growth, including through our newly announced AI Opportunity Forum.”

They added: “The future of AI is safe AI. It is only by addressing the risks of today and tomorrow that we can harness its incredible opportunities and attract even more of the jobs and investment that will come from this new wave of technology.

“That’s why we have spent more than any other government on safety research through the AI Safety Institute and are promoting a pro-innovation approach to AI regulation.”
