Learn English with Omar Sultan Al Olama. His Excellency Omar Sultan Al Olama, Minister of State for Artificial Intelligence of the UAE, addresses the India Global Forum 2023. He discusses the UAE’s advancements in AI, including the development of large language models and the importance of AI in driving global growth. Al Olama shares insights into AI’s impact on the Middle East, India, and Africa, its role in facing global challenges, and the need for effective AI regulation.
"Whoever leads the AI race will lead the future."
Transcript
Host: The world's first minister of artificial intelligence. He was recently listed as one of Time Magazine's 100 most influential people in artificial intelligence, alongside Elon Musk and ChatGPT creator Sam Altman. Please give a very warm IGF welcome to His Excellency Omar Al Olama. How are you? Nice to see you. So, Your Excellency, I asked ChatGPT to write a haiku about you. Is it okay if I read it?
Omar Sultan Al Olama: Please, yes.
Host: Omar guides the code, UAE's AI beacon, innovation's path. Not bad.
Omar Sultan Al Olama: I think humans are still not at risk of being replaced by AI, if that's the best it could do.
Host: Not too many syllables there for a haiku. So, Your Excellency, you said something in your interview with Time Magazine that I want to quote. You said, "When humanity used to depend on coal and wood fire for energy, there wasn't any minister of energy. When it became paramount to ensure energy production and energy distribution, every single government in the world appointed a minister for energy." You've said the same happened with telecommunications, and you believe that AI is at that same level now. So, give me an example of something you've seen just in the last few weeks that makes you believe that statement is true.
Omar Sultan Al Olama: I think that’s a very important question. And the reality is we don’t need to look at the past few weeks. Let’s look at the past few years. The honest truth is each and every single one of you, everyone in this room and most people beyond this room, cannot live without artificial intelligence. All our questions are being answered by AI. All our content is being fed to us by AI. Even our shopping today, if you shop digitally, it’s an AI engine that drives that for you. If anyone, whether it’s in the UAE or in another country, said that we’re going to stop or not allow you to use these tools, how will it impact your quality of life? It’s going to have a detrimental impact on your quality of life. This is the fact of the matter.
That's why countries like India, for example, took the non-conventional path of creating their own platforms, which I think is very smart and really the way that many countries of that size need to go. Now, in that sense, AI is driving the economy. AI is impacting society. And AI is today the technology that is enabling people to go into the 21st century in the right way. Governing it in the right way, regulating it in the right way, and developing certain products in the right way is really the only way to go. The UAE has worked on developing its own large language models. As you know, we have Falcon and Jais today, which are equivalent to ChatGPT or Llama, for example, from Facebook. And they are open source tools, because we do believe that all governments need to look at this seriously, and we need to lead the charge as the UAE.
Host: So I want to come back to some of those Arabic large language models in a moment. You were on stage recently with the Indian Minister for AI, a great friend of IGF, Rajeev Chandrasekhar, at the UK's AI Safety Summit. I wonder when you think we'll see an AI minister in every country in the Middle East and Africa.
Omar Sultan Al Olama: I don’t want to be the person that people point back to and say, “You made that statement and you were wrong.” So let’s look at it this way. In 2017, when I was appointed, it was very lonely. So if we had a convention for ministers of AI, I’d be talking to myself. Today, there are at least two other ministers of AI in the world. So there’s one in Spain, and there’s another one in the UK or an equivalent to that in the UK. I do believe that we are going to see it.
The issue is that the actual bureaucratic process in certain governments to appoint a minister for a new mandate differs from country to country and region to region. I have no doubt that this is going to be the most important thing that governments need to deal with and tackle in the future. However, I cannot give you a specific timeline. If it were up to me, I think we're already late. So if it were up to me, I'd say the clock is ticking and we need to see a lot more of them very, very soon.
Host: Interesting. Okay, so obviously trust, fear are some of the issues that have been bubbling around when we talk about AI. On a scale of one to ten, where one is, “I’m worried AI will enslave us all,” and ten is, “We have nothing to fear. Welcome our robot overlords.” Where do you put yourself?
Omar Sultan Al Olama: I would put myself as five.
Host: Okay. All right. Why is that?
Omar Sultan Al Olama: Because I can't afford to be anything lower or higher. Look, my job as a government official is to be in the middle, dead center: to try to look at all the opportunities and grasp them, and to try to look at all the challenges and regulate against them. If anyone who is a government leader looking at this mandate answers this question with anything under five or above five, I think you should be concerned.
Host: All right. So imagine a timeline whose beginning is now and whose end is when we have superhuman AI, AGI or beyond, artificial general intelligence matching or maybe exceeding human intellect. How fast do you think we'll get there? When do you think we'll see superhuman intelligence?
Omar Sultan Al Olama: So the catch here, or the thing that we need to look at, is first, what do we define as intelligence? And second, what capabilities are we defining here? If the capabilities are math equations, for example, then not just artificial intelligence but computing in general is already better than us at that. The ability to calculate, scientific calculation or whatever you want to call it, is already beyond most humans in the world, probably any human in the world. If we look at certain tests that we set for human beings, like, for example, the bar exam for legal associates and people working in the law field, or, for example, the medical exam, I think that these systems can do much better than human beings, and they might be seen as very, very smart. Whether that means they're better than humans is a question. My view is that I tend to believe AGI is still far away.
Host: How many years?
Omar Sultan Al Olama: Well, it depends on which timeline you're talking about. The UAE's timeline is its own. We as a country live in dog years: what we achieve in one year, most countries achieve in seven. That's what I see.
Host: Not to be humble.
Omar Sultan Al Olama: Well, I try to state facts. So the answer here is that if I tell you 10 years, those 10 years in our timeline might be 50 or 70 years in other timelines. We believe that it's at least 10 years out. We really plan out our years. We have, for example, a Vision 2071 for the next 50 years or so. And I think AGI is still, you know, on the horizon. I cannot see it coming closer. However, people working in the field have a better view, so we are engaging with them continuously to better understand what's happening. Will we have narrow-purpose artificial intelligence that is going to either wreak havoc or create incredible opportunities for people in the coming two to three years? Absolutely.
And our job is to engage as countries to ensure that people are not disrupted in India or the UAE or in the U.S. or other places. Because what people forget is disruption anywhere is, I think, negative for everyone. Because at the end of the day, we are connected. Whether we are connected through borders, whether we are connected through a digital landscape, or whether we are connected by our humanity, we need to ensure that we can work together.
Host: It's been quite a week in artificial intelligence. Sam Altman, CEO of OpenAI, the company behind ChatGPT, was in. He's out. He's back in. And beyond that drama, there's actually something quite important here: the issue of safety and how the OpenAI board looked at safety. I heard a version of the story which, in short, was that it was the capitalists versus the catastrophists, and the capitalists won. What's your opinion on what happened there?
Omar Sultan Al Olama: Well, OK, I don't want to give a personal opinion, because I actually do know some of these individuals personally. What I will say is that in corporate life, there is usually drama. This is a fact. When the cake is big enough, when the opportunity is big enough and the size of the challenge or the positive financial outcome is big enough, there's a lot more drama. So it's magnified. You can easily extrapolate a very negative future from what currently exists, because as humans, we tend to foresee or forecast the future on a very singular exponential trajectory: if things are the way they are today, this is how they're going to continue to go.
I believe that it's very difficult for us to extrapolate the future in that sense. We need to look at things in a more pragmatic way. So am I concerned? I am concerned that there is turbulence, definitely, especially in this sector, which is a very sensitive sector. But I do have trust, and maybe the IGF is the right platform for me to say this: Satya Nadella is an extremely capable CEO. Honestly, I think he might be, in the tech world, one of the top five CEOs in the world. And his ability to maneuver through this storm and get it all across the line in a very smooth and seamless manner gives me trust that things are not going to go south anytime soon, and that you have individuals who are able to steer these ships in the right way.
Host: I mean, speaking personally, it was worrying that two of the people who were booted from the board were women, and they’ve been replaced by two white men. And I know that diversity in AI is something that you think about quite carefully. So I’d love to hear your approach to inclusion and diversity in artificial intelligence.
Omar Sultan Al Olama: Well, look, on corporate boards generally, you have a lot more white men than women. This is a fact, right? And this is across most boards. Does that mean it's the right thing for us to have? Absolutely not. But it will take some time, I think, for the world to start to really balance things out in the right way. And I think there will be more women coming into STEM. There will be more women coming into technology. One of the things that reassures me is that Mira Murati from OpenAI is a woman, and she's actually a very, very capable technology leader today. We are seeing more women taking leadership roles in these companies, not because they are women, but because they are the most capable. And this is the sign that we need to look at when we look at where the future is heading. Now, with regards to the OpenAI board, I think we need to really look at what the board does in its first meeting, in its first year, before deciding whether that was a good move or a bad move. I don't want to talk about individuals.
With regards to how we look at diversity: one of the reasons why we volunteered to open source Falcon and Jais, for example, and our large language models, is because the biggest issue we have today is that when you try to eliminate bias, you are actually trying to eliminate our differences, and that matters especially for governments. Think about it this way. If I bring an American LLM and try to deploy it in a country like India, it makes absolutely no sense, because you're trying to remove all biases against people who are Indian in the US to make it more holistic, right? But there are some nuanced biases that are really good for these systems to understand. They need to understand how Indians live, what the culture is, how people are, for a model to be deployed in India. If it's a closed source tool, you can never do that. For it to be open source means that you will be able to train it in a way that ensures the positive biases for your culture, for your populace, are there. There's another thing as well. The weightings that determine whether the system is factual or creative, whether it actually takes in the data being fed to it or believes that whatever it had before is the fact, are important for us to look at, because I don't think there's going to be one tool that everyone's going to use.
ChatGPT is going to be a great general purpose LLM that people will go to for general questions. But I do believe that there are going to be models in India that are cutting edge. I do believe that there are going to be models in Africa that do the same. I think the UAE's view on this is, whether it's Jais, whether it's Falcon, or any of the others that we are developing, we want the world to be able to use them as their own, without us trying to impose our bias, or having the colonial mindset of saying, we know best, take that product and use it. Actually, what we want to say is, we want to work together. You can help us make it better, and let's build something for all of us.
Host: So, I think the approach you've taken there is really interesting, with Falcon and Jais, as you mentioned. Tell me some of the challenges you've seen as those large language models have been developed. I know dialects are one issue. What other challenges have you seen?
Omar Sultan Al Olama: There are so many challenges. I think there's a plethora of challenges. I don't want to go into all of them, but let me give you a few. The first is that there is a rate of diminishing returns as you develop these algorithms and these LLMs. You can develop one and be cutting edge today, but in a very short period of time, as technology advances, as computing advances, as even more people go into the field, you realize that others can close the gap between what you built and what already exists. So this is an issue: who can afford to keep building these? A sovereign state can. Maybe some of the largest corporations in the world can. But countries that don't have the resources the UAE has cannot. And that is why for us, again, going back to this, making it open source is paramount, because we understand that this is going to be a challenge that people are going to face. Another challenge is talent.
To be absolutely honest, there is not enough talent in the world working in this field. The third challenge is data. With data, everyone can go and scrape the Internet and use that data to train these large language models. However, I must ask, because I don't have the answer: how much of the Internet's content is Indian data or in Indian languages? I don't know. Is it enough? I'm sure it's not nearly as much as we want it to be. If we want these models to be the most capable, how do we ensure that people produce more content? There is a way for people to create data, like synthetic data, and then use it to train these models. But then there are some nuanced problems you're going to face when dealing with a tool that was trained on synthetic data, which might include gibberish, or content that is, let's say, vanilla: content for the sake of content rather than real content.
And for countries the size of India, when you have over a billion people, you can create content in your dialect, in your language, with your cultural requirements. But for smaller countries around the world, you cannot. So we will go through this phase of building tools that completely forget people from different corners of the world. I think that we need to be very cautious not to do that.
Host: You spoke about talent there, and I was looking at some of the pictures of the first graduating class at the University of Artificial Intelligence here in the UAE. I was delighted to see that the 2023 valedictorian was a woman, and that the graduating class comes from 25 different countries, including the UAE, China, and all across Africa. Tell me what your hopes are for developing talent through the university.
Omar Sultan Al Olama: So the Mohamed bin Zayed University of Artificial Intelligence is a university that really puts capability first, I think, as a requirement, but also diversity as a key enabler. Today, the model that we took is not a quantity model; it's a quality model. So the number of graduates is not as big as at most universities, but the quality is extremely high. One interesting thing is, if you look at the UAE specifically, and I can talk about the UAE example here, if you look at the graduates of our universities, we have the highest graduation rates for women of any country in the world. Most, if not all, women in the UAE graduate from high school and then go on to university; they're enrolled in university.
And if you look at the distribution, more women go into STEM in the UAE than men. So in a very short period of time, the same way that we saw, for example, a woman lead our Mars program and our space program, Her Excellency Sarah Al Amiri, and the same way we saw a woman lead the creation of Falcon, the large language model, Ebtesam Almazrouei, you see a lot more women actually leading the technological charge forward. I think in the region also, we are going to see more women come to power in technical spheres, which I think is naturally going to mean that this technology is going to be more inclusive. As a government, we need to push forward. So we are running programs specifically dedicated to women. But my view, and I'm saying this with humility, is that it's much easier to convince a woman to go into STEM than a man. And maybe a challenge that we are going to face in the near future is getting more men into STEM.
Host: All right, that’s fantastic.
Omar Sultan Al Olama: I don’t know if you should be optimistic about that.
Host: So let’s talk about regulation, because AI is a very broad field, right? It covers so many things from large language models to autonomous cars. Tell me how you view this issue of regulation and how regulation should be different for different sectors.
Omar Sultan Al Olama: So I've been calling for this. I've been saying for a few years that many calls for regulation of AI are non-starters. And the reason they're non-starters is that it's as if I told someone I'm going to regulate a field of computer science, or I'm going to regulate electricity. You don't regulate electricity. You regulate where electricity is used and what the outcomes are. AI is a field of computer science. It is very difficult to have one set of regulations that cuts across all its use cases. The second thing is that the impact of AI differs across geographies. I'm sure the stakeholders in India who are looking at the impact of AI there will see very different challenges from the ones I'm seeing in the UAE, because of demographic differences, because of differences in the job classes people are in and the types of jobs that they have, as well as the maturity of the technology.
So self-driving cars today are very mature, but they're not ready to go to market for many different reasons, whether it's infrastructure readiness, technology readiness, regulations, and so on and so forth. Large language models are here with us, and they're currently creating disruption. What does that mean for India? What does it mean for the UAE? These are questions we need to be asking ourselves. And then finally, you have specific tools that are so broad in their use cases that the question is, how do you regulate them? How do you regulate computer vision? I think you need to regulate where it's used. For certain things, it's actually really good. For other things, it might be controversial. So we need to have more nuanced conversations on the regulation side, and we need to look at output-focused regulation rather than very broad-based regulation.
And that's something that I've been calling for continuously. I'm happy that the UAE is included in many of the global dialogues, whether it's the UN Secretary-General's High-level Advisory Body on Artificial Intelligence, or the World Economic Forum's body on AI governance, or many of the other bodies. And this is what we are constantly calling for. We need to do more on that front.
Host: Are there certain areas of AI that should be agreed on, however, by the world? Like we regulate nuclear weapons, for example, or we try to regulate issues around climate change.
Omar Sultan Al Olama: Autonomous weapons are one of them. And I think we also need to look at the potential for harm. When the potential for harm crosses borders and is significant enough that all of us agree it will actually harm us if it's in someone else's hands, how do you look at it? It's not about whether you possess the capability. You think about it in this sense: if someone else possesses this capability and they can use it against me, will I allow it? And if the answer is no, we need to work on regulating that across the board.
Host: We're just days ahead of COP. There's been a lot of discussion about whether, and how much, AI can help us solve this climate issue. Where do you see the benefits of artificial intelligence in terms of helping us manage the damaging effects of climate change?
Omar Sultan Al Olama: Fantastic question. And I think there are a lot of synergies, or similarities, between climate change and AI. One, if we look at the similarities: climate change is not an issue that a single country can tackle on its own. Everyone needs to tackle it together. The same with artificial intelligence. I don't think the problem of AI governance, and the use of AI in the positive sense, can be tackled by a single country. We need to actually work together, because it crosses borders, in the same way that climate change crosses borders. It's also a time-sensitive issue.
The more time that passes, the bigger the problem gets. That's the same for climate change and for artificial intelligence. And then finally, it's a very difficult issue to tackle using human capabilities alone. With climate change, the amount of data being generated across the world by all these sensors cannot be analyzed by a human. It's absolutely impossible. Just think about it this way, and this is not on the climate front, I'm going to give you something on the engineering side: an engine from a Boeing 747, or any airplane for that matter, produces 10 gigabytes of data for every minute of flight. If you asked a human to analyze that data, it would take a very long time. Imagine wanting to do that for all of the airplanes in the world, to understand all of their emissions, how to make them more efficient or effective, and what the impacts are going to be.
It’s impossible for us. Even if we had all the talent in the world, it would be very difficult for us to do it without AI. With AI as well, the problem is the same. So as these tools learn and get better, the only way we’ll be able to audit them and actually ensure that we understand what they’re doing is by leveraging AI to oversee AI. And that is something that we’ve been calling for as the UAE. We think it needs to be a pragmatic approach, leveraging technology to combat the issues that we are seeing.
Host: We’ve seen a lot of hope coming from the first and second stages of the internet, social media. We’ve also seen some real disappointment. How can we ensure that the benefits of AI go to everyone, that we see it as an inclusive technology?
Omar Sultan Al Olama: Is this a global question or a local question?
Host: Well, I think about it for the poorest countries in the world. So how can we make sure that we extend the benefits of AI to include those at the end of the road?
Omar Sultan Al Olama: And by “we,” are you talking about humanity or are you talking about countries?
Host: I would say humanity.
Omar Sultan Al Olama: The reason I ask these questions is that if you want humanity to ensure that this technology is a positive technology for everyone, then all of humanity needs to be involved in this discussion. Today, unfortunately, the only way most people are on board is by seeing movies like The Terminator, movies that show you that AI is really bad. So the only way that they can engage is to tell you, "No, we're really scared. This is going to take our jobs. This is going to bring the apocalypse." And that's it. Not enough is being done to get people to understand what this technology technically is, what its capabilities are, and how it's going to change the world. I think we need to do a lot more of that. Now, ensuring that it's going to be a positive technology, that is a question for governments.
And for governments, the first answer I would give is this: if you want to make sure the technology is a positive technology, you need to eliminate ignorance, ignorance within the decision-making process of governments, so that if I take a decision, I understand the ramifications and repercussions it's going to have across generations and across the board. Now, I want to say two things here, relating to India and the UAE. When do you feel optimistic about the future? You feel optimistic about the future when you believe that the cohort of government leaders leading the charge are enlightened, are aware, and have the right amount of humility to understand when they don't know something and to tap into the private sector. India possesses that, right? And I've seen it. You mentioned Minister Chandrasekhar, who is a dear friend. His knowledge of technology is really something. I feel like he's an ocean of knowledge in this field, and he's able to really steer this discussion forward, whether it's through the AI Safety Summit or many of the other summits.
But it's not something unique to Minister Chandrasekhar. If you look at the Indian cabinet, whether it's Minister Jaishankar, Minister Vaishnaw, or some of the others I have interacted with, you realize that they have the sense of enlightenment to be able to steer the discussion. Now, my question... I don't want to talk about the UAE, because I think I'm biased and people know this... but my question is, how many other governments possess the same level of expertise and knowledge? If the answer is too few, then you should be concerned on that specific question. If the answer is many, then we should be optimistic.
Host: All right, that's good, because we've actually run out of time, but I want to give you a massive IGF thank you for joining us at IGF here in Dubai. Ladies and gentlemen, His Excellency.