AI ‘Godfather’ Yoshua Bengio: We’re ‘creating monsters more powerful than us’


Whether you think artificial intelligence will save the world or end it, there’s no question we’re in a moment of great enthusiasm. And AI as we know it may not have existed without Yoshua Bengio.

Called the “godfather of artificial intelligence,” Bengio, 60, is a Canadian computer scientist who has devoted his research to neural networks and deep learning algorithms. His pioneering work has led the way for the AI models we use today, such as OpenAI’s ChatGPT and Anthropic’s Claude.

“Intelligence gives power, and whoever controls that power — if it’s human level or above — is going to be very, very powerful,” Bengio said in an interview with Yahoo Finance. “Technology in general is used by people who want more power: economic dominance, military dominance, political dominance. So before we create technology that could concentrate power in dangerous ways, we need to be very careful.”

In 2018, Bengio and two colleagues — former Google (GOOG) vice president Geoffrey Hinton (winner of the 2024 Nobel Prize in Physics) and Meta’s (META) chief AI scientist Yann LeCun — won the Turing Award, also known as the Nobel Prize of computing. In 2022, Bengio was the most cited computer scientist in the world. And Time magazine has named him one of the 100 most influential people in the world.

Despite helping to invent the technology, Bengio has now become a voice of caution in the AI world. That caution comes as investors continue to show a great deal of enthusiasm for the space and bid up shares of AI plays to fresh records this year.

AI chip darling Nvidia’s (NVDA) stock is up 172% year to date, for example, compared to the S&P 500’s (^GSPC) 21% gain.

The company is now valued at a staggering $3.25 trillion, according to Yahoo Finance data, trailing Apple (AAPL) slightly for the title of most valuable company in the world.

I interviewed Bengio about the possible threats of AI and which tech companies are getting it right.

The interview has been edited for length and clarity.

Yasmin Khorram: Why should we be concerned about human-level artificial intelligence?

Yoshua Bengio: If this falls into the wrong hands, whatever that means, that could be very dangerous. These tools could help terrorists pretty soon, and they could help state actors that wish to destroy our democracies. And then there is the issue that many scientists have been pointing out, which is the way that we’re training them now — we don’t see clearly how we could avoid these systems becoming autonomous and having their own preservation goals, and we could lose control of these systems. So we’re on a path to maybe creating monsters that could be more powerful than us.

OpenAI, Meta, Google, Amazon — which big AI player is getting it right?

Morally, I would say the company that’s behaving the best is Anthropic [whose major investors include Amazon (AMZN) and Google (GOOG)]. But I think they all have biases because of the economic structure in which their survival depends on being among the leading companies and ideally being the first to arrive at AGI [artificial general intelligence]. And that means a race — an arms race between corporations, where public safety is likely to be the losing objective.

Anthropic is giving a lot of signs that they care a lot about avoiding catastrophic outcomes. They were the first to propose a safety policy where there’s a commitment that if the AI ends up having capabilities that could be dangerous, then they would stop that effort. They are also the only ones, along with Elon Musk, who have been supporting [California’s AI regulation bill] SB 1047. In other words, saying, “Yes, with some improvements, we agree to having more transparency of the safety procedures and results and liability if we cause major harm.”

Thoughts on the huge run-up in AI stocks, like Nvidia?

What I think is very certain is the long-term trajectory. So if you’re in it for the long term, it’s a fairly safe bet. Except if we don’t manage to protect the public, … [then] the reaction could be such that everything could crash, right? Either because there’s a backlash from societies against AI in general or because really catastrophic things happen and our economic structure crumbles.

Either way, it would be bad for investors. So I think investors, if they were smart, would understand that we need to move cautiously and avoid the kind of mistakes and catastrophes that could harm our future collectively.

Thoughts on the AI chips race?

I think the chips clearly are becoming an important piece of the puzzle, and of course, it’s a bottleneck. It’s very likely that the need for humongous amounts of computation is not going to disappear with the kinds of scientific advances I can envision in the coming years, and so it’s going to be of strategic value to have high-end AI chip capabilities — and all the steps in the supply chain will matter. There are very few companies able to do it right now, so I expect to see a lot more investment and, hopefully, a bit of diversification.

What do you think about Salesforce introducing 1 billion autonomous agents by 2026?

Autonomy is one of the goals for these companies, and there’s a good economic reason for it. Commercially, this is going to be a huge breakthrough in terms of the number of applications it opens up. Think about all the personal assistant applications; they require a lot more autonomy than current state-of-the-art systems can provide. So it’s understandable they would aim for something like this. The fact that Salesforce (CRM) thinks it can reach that in two years is, for me, concerning. We need to have guardrails, both governmental and technological, before that happens.

Governor Newsom vetoed California’s SB 1047. Was that a mistake?

He didn’t give reasons that made sense to me, like wanting to regulate not only the big systems but all the small ones. … There’s a possibility that things can move quickly — we talked about a few years. And even if it’s a small possibility, like a 10% [chance of disaster], we need to be ready. We need to have regulation. We need to have companies already going through the moves of documenting what they’re doing in a way that’s going to be consistent across the industry.

The other thing is the companies were worried about lawsuits. I talked to a lot of these companies, but there’s already tort law, so there could be lawsuits anytime if they create harm. And what the bill was doing about liability was reducing the scope of lawsuits. … There were 10 conditions, and all of them needed to hold for the law to support a lawsuit. So I think it was actually helping. But there’s an ideological resistance against any involvement — anything that’s not the status quo, any more involvement of the state in the affairs of these AI labs.

Yasmin Khorram is a Senior Reporter at Yahoo Finance. Follow Yasmin on Twitter/X @YasminKhorram and on LinkedIn. Send newsworthy tips to Yasmin: [email protected]
