
‘We’re Not That Far Behind.’ Baidu’s Robin Li on China’s Push to Diffuse AI Throughout Society

On the wall of the entrance foyer of Chinese tech giant Baidu’s cavernous Beijing headquarters hangs a small wooden plaque embossed with the golden number “1417.” It was taken from the hotel room opposite Peking University where Robin Li founded the $50 billion company back in 2000.

Early on, Li was focused on cementing Baidu’s enduring position as China’s top search engine. But he had long been intrigued by artificial intelligence (AI), having taken undergraduate classes at Peking and Tsinghua universities. When he arrived in the U.S. for graduate school in 1991, however, that interest was put firmly on hold.


“I told my professor that I was interested in AI, but he told me: ‘Don’t, you will not be able to find a job if you do that!’” Li laughs.

Today, Li’s former teacher has been proven staggeringly wrong. The global AI market was estimated at $244 billion last year, while AI chip pioneer Nvidia is the world’s most valuable company, worth over $4 trillion. Li saw the trend early, and today Baidu is one of China’s top full-stack AI companies, offering everything from chips and cloud infrastructure to models, agents, applications, and consumer products.

TIME caught up with Li on the sidelines of November’s Baidu World conference in Beijing while reporting our Person of the Year special feature on the Architects of AI.

This interview has been condensed and edited for clarity.

When you started Baidu in 2000 did you have any idea that AI was going to play the role it is today?

No. When I founded Baidu, I realized that the Internet was going to be a big thing in China, and that search technology would be very important to the development of the Chinese Internet. But I could not link AI with search engines at that time. Around 2010, we realized that machine learning, which is a branch of AI, was starting to play a role in the ranking of search results, learning from how many people clicked on each link, and we started to invest in AI around that time. Then in 2012 we realized that deep learning was going to become big. It was able to recognize images much more precisely than the previous generation of technology. Baidu’s serious investment in AI started around 2012.

You’ve spoken about how numerous thresholds were broken last year in terms of integrating AI into various sectors of society and the economy. Do you feel that 2025 is a pivotal year for AI adoption?

In terms of adoption, yes. In 2023 and 2024, the main theme was the foundation model. The capability of foundation models keeps improving and the inference cost keeps going down. But going forward, people will have to think about the value-add at the application level. And over the past half-year or so, we’ve seen all kinds of scenarios in which this wave of AI is creating value at the application layer.

You recently unveiled Ernie Bot 5.0, which competes well with ChatGPT, DeepSeek, and other large language models on various metrics. But it’s a very competitive field. What makes you feel that Ernie Bot can stand out from the crowd?

We take an application-driven approach when we develop our own foundation model, namely Ernie Bot. Especially for the 5.0 version, we didn’t try to be everything for everyone. We have application areas that we care about a lot, for example search or digital humans. Whatever capabilities those areas require from the foundation model, we try to train the model to be good at those skills. For example, I think our model is very good at instruction following and also at creative writing; we were rated number one for creative writing. These kinds of capabilities can be used at the digital human layer, especially when you let digital humans do live-streaming e-commerce. To sell things, you need to come up with scripts that are really convincing, so that people are willing to pay for goods. So we try to optimize the model for those kinds of application scenarios. Going forward, we think no foundation model can be better than everyone at every aspect. OpenAI cannot do that; Google Gemini cannot achieve that. We cannot achieve that either, but we try to optimize the model so that it performs better in the directions our applications care about the most.

Do you feel that the foundation model space will soon coalesce around a couple of champions, much as other technologies have done?

Eventually, I think so. This is the case for the desktop internet, for mobile internet, and it’s going to be for AI, too. There will be only a few foundation models left, but at the application layer, there will be a lot of successful players in very different directions. I think that’s where the most opportunities are, because otherwise, this is going to be just a bubble. It’s going to burst sooner or later.

You’ve said that the concentration of value in chips is misguided and that the value-add at the application layer needs to increase. With Baidu being a full-stack AI company, are there applications where you see the growth and revenue coming from?

That’s my point. I think right now it’s a pyramid where most value is realized at the chip level, then the model level realizes probably one-tenth, and the application layer realizes even less. This is certainly unhealthy. The reason I said that is because we took a very different approach since the beginning of 2023 when we were working on the foundation model, when everyone’s attention was on Ernie Bot. I publicly said, “Don’t focus on models; focus on applications.” And that’s what we did over the past two, three years. Because I firmly believe that you have to create much more value at the application layer in order to sustain the investment in models, in chips, and so on.

Baidu proudly embraces an open-source model. Why do you think that’s best?

We’ve always embraced the open-source community, especially at the deep learning framework level, where we have tens of millions of developers using PaddlePaddle, which rivals TensorFlow and PyTorch. And at the model level, we also realize that when you open source a model, you get more attention, so people are more willing to try it and look at its effectiveness. Having said that, I don’t think open source is the key to value creation. And to be exact, it’s not really open source; it’s open weight. You know all the weights of those models, but you don’t know the data that trained them, so you cannot really replicate what the model players have achieved. But it doesn’t matter; the model that best serves your application is the best model. It could be a very small, open-source model. It could be a very large, expensive, closed-source model. But if it can create much more value than you pay for inference and training, then it’s worth it. We are at a stage where many, many models are launched almost every week or even daily. Developers have a lot of choices, and it’s going to settle down. It’s going to mature as more model developers shift to the application layer to develop agents for all kinds of application scenarios.

I feel the big difference now compared with the last time we spoke is that Baidu is now firmly embarked on international expansion, especially your Apollo Go robotaxis in the Middle East and Europe. How are you negotiating the regulatory and geopolitical challenges of that push?

It’s always been a challenge, either in China or elsewhere. The technology itself is at a tipping point where we are ready to massively deploy robotaxis in very congested urban areas. But many cities don’t allow self-driving on the road yet. We have about 22 cities running Apollo Go robotaxis, and we are quickly gaining scale and expanding, adding cars and serving more rides. So wherever regulators allow us to deploy the cars, we are very happy to do so. Sometimes we do it all by ourselves; sometimes we partner with Uber, with Lyft, with all kinds of local partners. We are very flexible, and I think our technology is ready for that. Also, because China has a very competitive supply chain, we can manufacture robotaxi cars at a lower cost than Western ones. So we can achieve positive economics in most cities of the world. That’s also the reason why we are ready to deploy wherever regulators allow.

China’s supply chain in terms of sensors and batteries and other EV components is very strong. But chips are one area where the U.S. seems to have a stranglehold. Baidu just announced your new M100 chip and you’re developing new chip clusters. Have we reached a point now where China has broken free of relying on U.S. chips?

No, in terms of GPUs or AI accelerators I think we are probably two, three generations behind the U.S. But that will not prevent us from developing very valuable applications. The chip layer is at the very bottom, and on top you have different frameworks, you have foundation models, then you have the applications. We are probably a few years behind on chips, but we’re not that far behind on the model level. And on top of the models, we have lots of application scenarios you cannot find elsewhere; U.S. people don’t even know that they need to solve these kinds of problems. That’s where the value gets created. So, I’m not so worried about the restrictions on the chip, although I’d very much like to get access to the most advanced Nvidia chips.

Policymakers in the U.S. talk about a “Manhattan Project” push for AGI. But in China, the policy framework is more about diffusing AI technology across society. Do you feel it’s helpful for the U.S. to be talking about AI in terms of an arms race?

On this topic, we do have very different views. The mainstream U.S. people do take it like a Manhattan Project. The state invests hugely to achieve so-called AGI, so that the U.S. will be ahead of China and all other countries. For us, we care more about applications. China is very strong in manufacturing, we have lots of factories, we have all kinds of products that we need to manufacture at a low cost, at a very high efficiency, and we need to use AI to solve those problems. That’s what we care more about. I’m not even convinced that there is so-called AGI that one model fits all and it’s better than anyone at every aspect. I think we do need to bear applications in mind. Even if you are [as smart as] Einstein, if you don’t even know something exists, it’s very hard for you to solve problems.

So are you putting AGI to one side and thinking only of applications?

I don’t think about AGI a lot. We are training our models, but the reason we train our model is to solve our application problems. I don’t think we should come up with a similar super smart AI that can be everything for everyone.

What are some of the biggest challenges in terms of trying to develop the applications and overcome the regulatory and other headwinds for mass AI adoption?

When you innovate, you almost always need to deal with things that have never been dealt with [before], especially when you try to deploy technology into the real world. For example, robotaxis: There are taxi drivers, and there are all kinds of human-driven cars on the road. It’s something new, and I think that overall the Chinese government is pro-innovation; they always say, “we support your innovation efforts.” But on the other hand, they also need to care about all kinds of concerns from stakeholders, and if no regulation says that you can have a self-driving car on the road, then it means it’s not [possible]. This is a little bit different from the U.S., where if no regulation says you cannot have a self-driving car on the road, then you are allowed. In states like Texas or Georgia, there’s no regulation at all for robotaxi operations. But in China, you have to, in a lot of cases, get permission from the regulators on AGI and foundation models. Back in early 2023, a lot of opinion leaders in the U.S. said, “AI is very dangerous, we need to regulate it. We need to delay the development of foundation models by six months so that we can be sure it’s safe or aligned with our value system.” But nothing like that really happened in China. We don’t talk a lot about regulating these kinds of things, but there are actually regulations guiding the development of new technologies.

It now seems that the push in the U.S. toward more safeguards and guardrails regarding AI has disappeared, and the U.S. is just going full speed trying to win the “AI race,” as they frame it. Do you feel that’s reckless? Do you think the U.S. should perhaps take a step back and do what China has done by putting in some proper regulation?

I would be very careful about that. On the one hand, I do think there should be guardrails; but on the other, because the technology moves so quickly and improves so fast, you need to be careful that regulations do not hurt the pace of innovation. It’s very hard to expect regulators to have a deeper understanding than the foundation model developers. Having a pre-emptive attitude is probably not good. You want to regulate from a half step back: watch how the technology evolves, then put in the right regulation. You don’t want to be a half step or a full step ahead of the technology, because that will be a speed bump for innovation.

What would you say to people who are worried about AI taking jobs and displacing humans? Do you understand those fears?

Yes, both China and the U.S. face a similar issue. In the longer term, I think there’s a consensus that new technology will create a lot more job opportunities for people. But in the near term, we do face a challenge—because of the productivity improvement caused by AI, there will be downward pressure for employment, and we need to find ways to handle this. In the U.S., people talk about UBI; here in China, we talk a lot about new job opportunities, like data labelling, this kind of thing. For Baidu, we helped many cities establish data labelling centers that hire thousands of people. And I think going forward, we will be able to create a lot more jobs we have never thought of.

Another big concern is a huge amount of energy which has been consumed for data centers. How do you see that problem being solved?

We faced that problem a long time ago, 10 or 15 years ago, when we started to build massive data centers and focused on so-called PUE [power usage effectiveness]. We spent a lot of effort to make sure they are energy efficient. We still run probably the most energy-efficient data centers here in China, and as AI scales up we do need much more compute and much more power. So this effort becomes more and more important, and also somewhat different. In the GPU age, there are a lot of different ways to save energy, but probably the most obvious one is to make your model smaller and make inference cheaper. If you can achieve that, it naturally requires less power. On this aspect, China is way ahead; we can come up with models whose inference cost is one-tenth or one-hundredth that of their U.S. counterparts. I think the U.S. is more focused on coming up with the most powerful AI models, while China, probably because we have less buying power and steeper competition, always has to drive down the inference cost. And as a side effect, that saves energy.

What do you make of the fact a company like DeepSeek can create its V3 model for $6 million, whereas Meta has plowed billions into AI with dubious results? Do you think that the U.S. is stuck in a bubble where they’re just throwing money at research and development without really focusing on the gains?

I think there are two different directions. On the Chinese side, we try to make models more efficient; we have to, because we don’t have access to the most advanced chips. On the U.S. side, you do have more advanced chips, and you are very willing to invest in advancing cutting-edge technology. I think that’s good, too. That helps humanity explore the ultimate possibilities. I’m very interested in watching that effort, and we are also following it very closely. And like I said, we probably cannot match the investment of Google and OpenAI in training models, but we are closer to the applications. We know what problems to solve. We hope to solve those problems before the U.S. people realize there’s that kind of problem.

What do you think is one of the least understood or more surprising ways that AI is going to transform society in say 10 years’ time?

There’s huge uncertainty there because the technology just evolves so quickly. It’s very hard to imagine 10 years down the road, because even one year ago I couldn’t have imagined that AI technology would look as powerful as it does today. So we’ll just watch what’s possible and try to find ways to leverage the innovation.
