Rana el Kaliouby on why AI needs a more human future
AI is moving fast. But are we really keeping humans at the center? AI scientist, founder of Affectiva, investor at Blue Tulip, and host of Pioneers of AI, Rana el Kaliouby makes the case that human-centric AI isn’t just a safety guardrail; it’s the key to thriving socially, economically, and emotionally. She also cuts through the noise on the buzziest AI myths, including whether we’re in an AI bubble.
This is an abridged transcript of an interview from Rapid Response recorded live at SXSW, hosted by former Fast Company editor-in-chief Robert Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today’s top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode.
You exited Affectiva in 2021. You’re an investor now at Blue Tulip. But you’re also the host of the podcast Pioneers of AI. Between the investing and the podcast, are these tools you’re using to try to shape where AI goes from here? What is your goal in that?
Affectiva was my baby. It was literally my third child. It really was a big part of what I did and my identity. When I sold it in 2021, I spent a lot of time thinking about, What do I want to do next? And I kept coming back to this idea/question that we absolutely need to build a future of AI that is human-centric, that prioritizes how these technologies are going to affect our everyday lives and our relationships. And I mean, I believe that AI has massive economic opportunity. It really does. And at the same time, it has this opportunity to unlock human potential. So my point of view is that AI should not replace our abilities. It should really amplify and augment what we can do. And ideally, we can harness AI and use it to solve really meaningful problems facing society today.
So that’s kind of my thesis around that. And then I was like, Okay, how do I shape that? How do I become a real player in that space, given my background? And I landed on three things. One is investing—backing founders who are building these generational, category-defining, human-centric AI companies. Two is storytelling, amplifying the voices in AI that you may not have heard from. There’s a very small set of companies that dominate the AI headlines, in my opinion, but there are a lot of innovators and thinkers and creators in the AI space. And I want to make sure that we are a platform to tell their stories and . . . be a door opener too. And the third one is being a convener, which is why I like to do these things. I love bringing people together with disparate backgrounds and perspectives and just seeing what magic unfolds.
You use this phrase about humanizing technology before it dehumanizes us. In the dialogue today about AI, I always wonder, for the practitioners, and you were one of the seminal ones, how much responsibility you feel you have for what the future of this technology ends up being. And how deep is that conversation in that community, as opposed to giving it lip service while thinking, I’ve just got to get ahead of the company next to me?
I feel a very strong responsibility. And I would actually argue we all have a responsibility as well because we get to vote with our [wallet] which AI tools we’re using every day. Who’s getting the $20-a-month subscription from all of us? And I think asking questions around, does this company care about the ethics of the technology? How is it being built? Are they thinking about bias, both data and algorithmic bias? Are they thinking about trust and security and privacy? Are they thinking about the use cases of this technology? Where should it be deployed and where should it really not be deployed? I think these are big questions that we all should be asking of the tools we’re using. And as an investor, there’s a set of questions. We have a rubric that we ask founders, and if the founders have not at all thought about it, if they’re not open, then we’re not investing in them.
Because there’s so much noise surrounding AI right now and so many myths, it’s hard to know what to pay attention to. I think we all feel that. So this game is called fact or fiction. I’m going to share a few video clips, some of which come from Pioneers of AI, the podcast, and each of them leads to a myth surrounding AI today. And I’ll be eager for your take about whether it’s mostly fact, mostly fiction, or somewhere in between. Are you ready?
Let’s do it.
So the first myth: We’re in an AI bubble. Is this fact or fiction?
I think actually it’s mostly fiction. I believe there are signs . . . of potentially a bubble. For example, the frothy valuation problem. There are a lot of companies raising hundreds of millions of dollars at billion-dollar valuations, but they’re pre-product and pre-revenue. That’s a red flag. And there are also some concerns around the circular money machine. You look at this handful of companies: They’re all investing in each other. They’re all buying chips from each other.
Nvidia gives money to OpenAI. OpenAI uses that money to buy chips from Nvidia.
Exactly. You kind of wonder, what is the net new value creation here? But the world I’m in every day, the ecosystem of founders building real products that are going to transform real industries, and companies that are really trying to figure out how to use AI to be more productive, this is real. And it’s very early days. So that’s where I focus my energy. And I think we’re in the very early days of massive, massive economic opportunities.
Maybe in the investment marketplace, there might be some bubble, which might be cautionary for all of us because we all have money in these companies now. But in the long run, you think the technology itself, we maybe are even undervaluing?
I think so, yeah. The technology itself is in very early days, and the applications of the technology are in very early days. . . . Our thesis is basically that AI is transforming every industry and vertical, but we focus on three in particular. One is how AI is driving this health span revolution. So think about sensors, data, AI, and how that can advance healthcare in every aspect. The other is the future of work. So how can we employ and deploy AI, whether it’s physical AI or AI coworkers and agentic AI, to transform businesses and especially antiquated industries? Often they’re very boring and unsexy, but there are lots of opportunities there. And the last is sustainable living. How can we use AI and apply it to planet health, whether it’s food innovation, rethinking manufacturing, climate, [or] energy?