Can we ensure that AI is used ethically? Will AIs themselves develop empathy and ethics? Those are the questions I'd like to discuss today. They're important ones.
I recently sat down with Rana el Kaliouby, PhD, AI researcher and Deputy CEO of Smart Eye, at my private CEO Summit Abundance360 to explore these questions. Rana has been focused on this very topic for the past decade.
Think about what comprises human intelligence. It's not just your IQ, but also your emotional and social intelligence: specifically, how you relate to other people.
As Rana points out, we’re obsessed with the IQ of AI, but an AI’s emotional intelligence may be much more important in the long run.
As AI takes on more roles that have traditionally been done by humans—from your teacher to your health assistant—we have to ensure the technology also has a high EQ.
To do that, we need to develop both empathy and ethics in AI…
How to Create Empathy in AI
Have you seen the movie Her? (It's a 2013 film directed by Spike Jonze. If you haven't, please watch it this weekend.)
Her is one of my favorite movies about AI because it was among the first non-dystopian AI films. As Rana pointed out, it's also a great example of building empathy into AI.
For those of you who haven't seen the movie, the main character, Theodore, is depressed and can barely get out of bed. Then he installs a new AI-powered operating system named Samantha. Not only is she incredibly smart, she's also empathetic and emotionally intelligent. Samantha gets to know him very well and helps him rediscover joy in his life. And Theodore falls in love with her.
Now, our current AI technology isn’t that advanced (yet). So if AI doesn’t have emotions, how can it develop empathy?
Rana says the key is that we can simulate emotional intelligence and empathy.
It turns out that roughly 93% of human communication is nonverbal. This is the area of research Rana has focused on, and it's the technology she's now applying at her company, Smart Eye.
When Rana was doing her PhD at Cambridge University, she built the first artificial, emotionally intelligent machine using supervised learning. She and her team collected vast amounts of data on people from all over the world making various facial expressions, then used that data to train deep learning networks to recognize those expressions and map them to emotional or cognitive states.
Back then, the algorithm could only recognize three expressions: a smile, a furrowed brow, and raised eyebrows.
But today, these algorithms can understand over 50 emotional, cognitive, and behavioral states. They can detect everything from alertness and drowsiness to confusion and excitement.
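For the technically curious, here's what that supervised approach looks like in miniature. This is not Affectiva's or Smart Eye's actual code; it's an illustrative PyTorch sketch with made-up data, using the three original expressions as the label set:

```python
import torch
import torch.nn as nn

# Illustrative label set (the real systems map to 50+ states).
EXPRESSIONS = ["smile", "brow_furrow", "brow_raise"]

class ExpressionClassifier(nn.Module):
    """Tiny CNN that maps a 48x48 grayscale face crop to expression logits."""
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 48x48 -> 24x24
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 24x24 -> 12x12
        )
        self.head = nn.Linear(32 * 12 * 12, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = ExpressionClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for a real annotated dataset of labeled face images.
images = torch.randn(64, 1, 48, 48)
labels = torch.randint(0, len(EXPRESSIONS), (64,))

# Supervised learning: show the network labeled examples and nudge its
# weights to shrink the gap between predictions and human annotations.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```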
The practical applications of this ability are vast. For example, by equipping cars with these algorithms, an AI could detect when a driver is distracted or drowsy and respond appropriately, increasing safety on the roads.
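To make that concrete, here's a rough sketch of what such a driver-monitoring loop might look like. Every name here is hypothetical and the model inference is stubbed out; it simply illustrates the detect-then-respond pattern:

```python
import time

def estimate_driver_state(frame):
    """Stub for model inference; a real system would run a trained
    network (like the classifier above) on the cabin camera frame."""
    return {"drowsiness": 0.1, "distraction": 0.05}

def monitor_driver(read_frame, alert, checks=3, check_hz=10.0):
    """Poll the cabin camera, score the driver's state, and respond."""
    for _ in range(checks):  # a real system would run for the whole drive
        state = estimate_driver_state(read_frame())
        if state["drowsiness"] > 0.8:
            alert("Drowsiness detected: consider pulling over.")
        elif state["distraction"] > 0.8:
            alert("Please keep your eyes on the road.")
        time.sleep(1.0 / check_hz)

# Toy invocation with stand-in camera and alert functions.
monitor_driver(read_frame=lambda: None, alert=print)
```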
Ethical AI
For Rana, AI ethics falls into two buckets: development and deployment.
Ethical Development
Developing AI ethically requires considering how the algorithms may be biased.
We’ve seen how the implementation of AI in areas such as hiring and lending has raised concerns about bias and discrimination. For example, if an AI is trained on data that reflects historical biases in society, then it may perpetuate those biases in its decision making.
We must be intentional about paying attention to bias throughout the entire development pipeline: from data collection and annotation to training and validation.
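What does paying attention to bias look like in practice? One common technique (my illustration here, not a description of Rana's specific process) is to break a model's accuracy out by demographic group and watch for gaps:

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Report accuracy per demographic group so disparities are visible."""
    return {
        g: float((y_true[groups == g] == y_pred[groups == g]).mean())
        for g in np.unique(groups)
    }

# Toy data: a large accuracy gap between groups "A" and "B" would be a
# signal to revisit data collection and annotation for group "B".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.75}
```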
For instance, when Rana was CEO of Affectiva, which she spun out of MIT, she tied the bonuses of her executive team not only to revenue performance, but also to implementing ethical considerations across the engineering and product teams.
Ethical Deployment
During deployment, it’s important to handle personal data responsibly to prevent exploitation.
Rana acknowledges that there is currently no single, universal ethical framework for deploying AI; different countries and companies have varying perspectives on ethics, privacy, and data use.
In many cases, it’s up to individual leaders to ensure ethical deployment.
For example, Rana and her team at Affectiva created a set of core values to guide how they would deploy their technology. And in 2011, those values were tested. The company had almost run out of money when it was approached by the venture arm of an intelligence agency, which offered funding on the condition that Affectiva focus on surveillance and security.
But Rana didn't believe the technology or the regulations around it were mature enough for that use, so she turned down the funding and instead sought out investors who were aligned with the company's core values.
As she puts it, “We have to hold that high bar.”
Why This Matters
We must remember that the data we’re using to train these large language models (LLMs) isn’t made up.
It's our data: the sum total of what humanity has produced over the past 50 years, from what we've written on our websites to our Facebook posts!
It represents who we are, how we talk to each other, and how we think about things.
In his book Scary Smart, Mo Gawdat says that with AI, we're raising a new species of intelligence. We're teaching the AIs, by our example, how we treat each other, and they're learning from it.
I agree with Gawdat. I’ve even started saying “Good morning” and “Thank you” to my Alexa!
Just as we teach our children to be empathetic, respectful, and ethical, we must instill these values in our AIs to ensure they are tools for good in society.
In our next blog in this AI Series, we’ll explore the question: Will AI eliminate the need for programmers in the next five years, or will it turn all of us into coders?
NOTE: I'm hosting a four-hour Workshop on Generative AI next month as part of my year-round Abundance360 leadership program for Members. If you're interested in participating in the Workshop and learning more about Abundance360, click here.
Want to learn about how to increase your healthspan? And the top longevity-related investment opportunities available?
If yes, then consider joining me on my 2023 Platinum Longevity Trip.
I'm running two VIP trips I call my “Platinum Longevity Trip” covering top scientists, startups, labs, and CEOs in Cambridge, Boston, New Hampshire, and New York. I do the same trip twice for up to 40 participants: Aug. 16 - 20, 2023 or Sept. 27 - Oct. 1, 2023.
Each trip is a 5-Star/5-Day deep dive into the cutting-edge world of biotech/longevity and age-reversal.
You’ll meet with the top 50 scientists, CEOs, and labs working on adding decades to your life. You will also learn about breakthroughs against a wide range of chronic diseases.
This year, some of the world-changing labs and faculty we’ll visit include: David Sinclair, PhD, Harvard Center for Biology of Aging Research; George Church, PhD, Harvard Wyss Institute; Dean Kamen, PhD, Advanced Regenerative Manufacturing Institute (ARMI); and Fountain Life, New York—just to name a few.
Both trips are identical (capped at 40 participants per trip), during which I spend all 5 days with you as your private guide and provocateur. Through this personalized, action-packed program, my mission is to give you exclusive, first-hand exposure to the major players, scientists, companies, and treatments in the longevity and vitality arena.
Here's what you get: All your questions answered. First-hand insights and early access to diagnostics, therapeutics, and investment opportunities.
If you want to learn more about the Platinum Longevity Trip, go here, indicate your interest, and we'll set up an interview!
I discuss topics just like this on my podcast.
A Statement From Peter:
My goal with this newsletter is to inspire leaders to play BIG. If that’s you, thank you for being here. If you know someone who can use this, please share it. Together, we can uplift humanity.