
Sam Altman: The Father of ChatGPT

Learn English with Sam Altman. Watch as Sam Altman, CEO of OpenAI, delivers his compelling testimony before the Senate Judiciary Privacy, Technology and the Law Subcommittee. Held in Washington, DC on May 16, 2023, this monumental three-hour-long hearing explores the potential risks of generative AI, its implications for the jobs market, and the urgency for government regulation. As one of the leading figures in the AI field, Altman’s insights offer a critical perspective in the dialogue on AI’s future. This hearing is the first in a series, setting the stage for future discussions on the ethical, legal, and national security concerns surrounding AI. Sam Altman is the visionary CEO of OpenAI, an organization at the forefront of artificial intelligence research and development. Known for his thought leadership in tech entrepreneurship, Altman plays an instrumental role in shaping the AI industry.

Download the Full Transcript and Audio Here!

English Speeches provides these files free and downloadable, so you can learn English and improve your vocabulary!


Sam Altman | Quote


“AI is going to be the most significant development in human history.” – Sam Altman


Senator Richard Blumenthal: For several months now, the public has been fascinated with GPT, DALL·E, and other AI tools. These examples, like the homework done by ChatGPT or the articles and op-eds that it can write, feel like novelties. But the underlying advancements of this era are more than just research experiments. They are no longer fantasies of science fiction. They are real and present. The promises of curing cancer or developing new understandings of physics and biology or modeling climate and weather, all very encouraging and hopeful, but we also know the potential harms. And we’ve seen them already.

Weaponized disinformation, housing discrimination, harassment of women and impersonation fraud, voice cloning, deep fakes. These are the potential risks, despite the other rewards. And for me, perhaps the biggest nightmare is the looming new industrial revolution, the displacement of millions of workers, the loss of huge numbers of jobs, the need to prepare for this new industrial revolution in skill training. Mister Altman, we’re going to begin with you, if that’s okay. Thank you.

Sam Altman: Thank you, Chairman Blumenthal, Ranking Member Hawley, members of the Judiciary Committee. Thank you for the opportunity to speak to you today about large neural networks. It’s really an honor to be here, even more so in the moment than I expected. My name is Sam Altman. I’m the Chief Executive Officer of OpenAI. OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks we have to work together to manage. We’re here because people love this technology. We think it can be a printing press moment. We have to work together to make it so. OpenAI is an unusual company, and we set it up that way because AI is an unusual technology. We are governed by a nonprofit, and our activities are driven by our mission and our charter, which commit us to working to ensure the broad distribution of the benefits of AI and to maximize the safety of AI systems.

We are working to build tools that one day can help us make new discoveries and address some of humanity’s biggest challenges, like climate change and curing cancer. Our current systems aren’t yet capable of doing these things, but it has been immensely gratifying to watch many people around the world get so much value from what these systems can already do today. We love seeing people use our tools to create, to learn, to be more productive. We’re very optimistic that there are going to be fantastic jobs in the future, and that current jobs can get much better. We also love seeing what developers are doing to improve lives. For example, Be My Eyes used our new multimodal technology in GPT-4 to help visually impaired individuals navigate their environment. We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work, and we make significant efforts to ensure that safety is built into our systems at all levels.

Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model’s behavior, and implements robust safety and monitoring systems. Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming, and dangerous capability testing. We are proud of the progress that we made. GPT-4 is more likely to respond helpfully and truthfully, and refuse harmful requests than any other widely deployed model of similar capability. However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models. For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities. There are several other areas I mentioned in my written testimony where I believe that companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination.

And as you mentioned, I think it’s important that companies have their own responsibility here, no matter what Congress does. This is a remarkable time to be working on artificial intelligence. But as this technology advances, we understand that people are anxious, about how it could change the way we live. We are too. But we believe that we can and must work together to identify and manage the potential downsides, so that we can all enjoy the tremendous upsides. It is essential that powerful AI is developed with democratic values in mind, and this means that US leadership is critical. I believe that we will be able to mitigate the risks in front of us, and really capitalize on this technology’s potential to grow the US economy and the world, and I look forward to working with you all to meet this moment, and I look forward to answering your questions. Thank you.

Senator Richard Blumenthal: Should we consider independent testing labs to provide scorecards and nutrition labels, or the equivalent of nutrition labels, packaging that indicates to people whether or not the content can be trusted, what the ingredients are, and what the garbage going in may be, because it could result in garbage going out?

Sam Altman: Yeah, I think that’s a great idea. I think that companies should put their own, sort of, you know, here are the results of our test of our model before we release it. Here’s where it has weaknesses, here’s where it has strengths. But also independent audits for that are very important. These models are getting more accurate over time. You know, this is, as we have, I think, said as loudly as anyone, this technology is in its early stages. It definitely still makes mistakes. We find that users are pretty sophisticated and understand where the mistakes are likely to be, that they need to be responsible for verifying what the models say, that they go off and check it.

I worry that as the models get better and better, the users can have, sort of, less and less of their own discriminating thought process around it. But I think users are more capable than we often give them credit for in conversations like this. I think a lot of disclosures, which if you’ve used ChatGPT, you’ll see about the inaccuracies of the model, are also important. And I’m excited for a world where companies publish with the models information about how they behave, where the inaccuracies are, and independent agencies or companies provide that as well. I think it’s a great idea.

Senator Richard Blumenthal: I alluded in my opening remarks to the jobs issue, the economic effects on employment. I think you have said, in fact, and I’m going to quote, “Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity”. End quote. You may have had in mind the effect on jobs, which is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is and whether you share that concern.

Sam Altman: Like with all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict. If we look back at previous technological revolutions and the predictions people made about the jobs that would exist on the other side, you can go back and read the books of what people said at the time. It’s difficult. I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better. I think it’s important.

First of all, I think it’s important to understand and think about GPT-4 as a tool, not a creature, which is easy to confuse, and it’s a tool that people have a great deal of control over in how they use it. And second, GPT-4 and other systems like it are good at doing tasks, not jobs. And so you see already people that are using GPT-4 to do their job much more efficiently by helping them with tasks. Now, GPT-4 will, I think, entirely automate away some jobs, and it will create new ones that we believe will be much better. This happens, again, my understanding of the history of technology is one long technological revolution, not a bunch of different ones put together, but this has been continually happening. As our quality of life rises and as machines and tools that we create help us live better lives, the bar rises for what we do, and we spend our time going after more ambitious, more satisfying projects. So there will be an impact on jobs.

We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by government to figure out how we want to mitigate that. But I’m very optimistic about how great the jobs of the future will be. I think jobs and employment and what we’re all going to do with our time really matters. I agree that when we get to very powerful systems, the landscape will change. I think I’m just more optimistic that we are incredibly creative, and we find new things to do with better tools, and that will keep happening. My worst fears are that we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways. It’s why we started the company.

It’s a big part of why I’m here today and why we’ve been here in the past and we’ve been able to spend some time with you. I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.

Senator Josh Hawley: Help me understand here what some of the significance of this is. Should we be concerned about large language models that can predict survey opinion and then help organizations, entities, fine-tune strategies to elicit behaviors from voters? Should we be worried about this for our elections?

Sam Altman: Yeah. Thank you, Senator Hawley, for the question. It’s one of my areas of greatest concern, the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation. I think that’s like a broader version of what you were talking about, but given that we’re going to face an election next year and these models are getting better, I think this is a significant area of concern. I think there’s a lot of policies that companies can voluntarily adopt, and I’m happy to talk about what we do there. I do think some regulation would be quite wise on this topic.

Someone mentioned earlier, it’s something we really agree with, people need to know if they’re talking to an AI, if content that they’re looking at might be generated or might not. I think it’s a great thing to make that clear. I think we also will need rules, guidelines, about what’s expected in terms of disclosure from a company providing a model that could have these sorts of abilities that you talk about. So I’m nervous about it. I think people are able to adapt quite quickly.

When Photoshop came onto the scene a long time ago, for a while people were really quite fooled by Photoshop images, and pretty quickly developed an understanding that images might be Photoshopped. This will be like that, but on steroids. And the interactivity, the ability to really model, predict humans well, as you talked about, I think is going to require a combination of companies doing the right thing, regulation, and public education.

Senator Cory Booker: Do you have some concern about a few players with extraordinary resources and power, power to influence Washington? I mean, I see us, I’m a big believer in the free market, but the reason why I walk into a bodega and a Twinkie is cheaper than an apple, or a Happy Meal costs less than a bucket of salad, is because of the way the government tips the scales to pick winners and losers. So the free market is not what it should be when you have large corporate power that can even influence the game here. Do you have some concerns about that in this next era of technological innovation?

Sam Altman: Yeah, I mean, again, that’s so much of why we started OpenAI. We have huge concerns about that. I think it’s important to democratize the inputs to these systems, the values that we’re going to align to, and I think it’s also important to give people wide use of these tools. When we started the API strategy, which is a big part of how we make our systems available for anyone to use, there was a huge amount of skepticism over that, and it does come with challenges, that’s for sure, but we think putting this in the hands of a lot of people and not in the hands of a few companies is really quite important, and we are seeing the resultant innovation boom from that.

But it is absolutely true that the number of companies that can train the true frontier models is going to be small just because of the resources required, and so I think there needs to be incredible scrutiny on us and our competitors. I think there is a rich and exciting industry happening of incredibly good research and new startups that are not just using our models but creating their own, and I think it’s important to make sure that whatever regulatory stuff happens, whatever new agencies may or may not happen, we preserve that fire because that’s critical.

Senator Cory Booker: I’m a big believer in the democratizing potential of technology, but I’ve seen the promise of that fail time and time again where people said, oh, this is going to have a big democratizing force. My team works on a lot of issues about the reinforcing of bias through algorithms, the failure to advertise certain opportunities and certain zip codes, but you seem to be saying, and I heard this with Web3, that this is going to be decentralized, all these things are going to happen, but this seems to me not even to offer that promise because the people who are designing these, it takes so much power, energy, resources.

Are you saying that my dreams of technology further democratizing opportunity and more are possible within a technology that is ultimately, I think, going to be very centralized to a few players who already control so much?

Sam Altman: So this point that I made about use of the model and building on top of it, this is really a new platform, right? It is definitely important to talk about who’s going to create the models. I want to do that. I also think it’s really important to decide whose values we’re going to align these models to. But in terms of using the models, the people that build on top of the OpenAI API do incredible things. And it’s, you know, people frequently comment, like, I can’t believe you get this much technology for this little money.

And so the companies people are building, putting AI everywhere using our API, which does let us put safeguards in place, I think that’s quite exciting. And I think that is how it is being, not how it’s going to be, but how it is being democratized right now.

Senator Richard Blumenthal: I’m going to close the hearing, leave the record open for one week. In case anyone wants to submit anything, I encourage any of you who have either manuscripts that are going to be published or observations from your companies to submit them to us. And we look forward to our next hearing. This one is closed.
