Posted January 17, 2024
By Ray Blanco
AI Gets Political
The political faultlines of artificial intelligence are being drawn as we speak.
Was it inevitable? Sure.
Is it already well on its way? Of course.
But nothing may better represent the emerging Left vs. Right divide on the subject than the annual meeting of the World Economic Forum in Davos, currently underway.
The highly influential WEF has made AI a top priority at this year’s gathering, with discussions being held by key figures in artificial intelligence development, such as:
- Satya Nadella (CEO of Microsoft)
- Yann LeCun (Turing Award winner)
- Mustafa Suleyman (co-founder of DeepMind)
- Sam Altman (CEO of OpenAI)
Over the next few days, these industry leaders will work with representatives of world governments to potentially shape the future of AI use and regulation.
And their fingerprints will almost certainly be left permanently on this powerful and dynamic technology…
It’s already been announced that OpenAI is working with the Pentagon to develop new cybersecurity protocols using their AI.
The announcement came shortly after OpenAI removed language from its terms of service that effectively banned military and warfare applications.
While the company's military involvement is "defense only" (the ban on using AI to develop weaponry remains in place), the compromise has many people concerned, and it could signal just how consequential this week's meetings in Davos may be.
Deepfakes are likely the most pressing and disturbing implication of artificial intelligence that we’re currently facing.
Even now, the AI-produced images and audio imitations of real people aren’t always easy to tell from the real thing.
The potential impact of the widespread use of this advanced disinformation tool is enormous. Even prior to our introduction to this technology, our confidence in national elections was at an all-time low.
Earlier this week, OpenAI released a blog post on how they’re addressing the potential abuse of AI in elections across the globe in 2024.
In the years since the contentious 2020 U.S. election, few groups have been able to agree on even which areas of election security need to be addressed.
So a private company - especially one tied up in so much recent turmoil - writing the framework for how the process should be improved is sure to raise some eyebrows.
According to OpenAI’s outline, their emphasis is one that most people, relatively speaking, agree on...
We need transparency, both in the election process and results, and with AI itself.
For example, they are working towards making it easy for users to identify whether an image was generated using AI technology, saying in their post…
“Better transparency around image provenance—including the ability to detect which tools were used to produce an image—can empower voters to assess an image with trust and confidence in how it was made.”
As for how they’re addressing election transparency, the details are a little more vague…
“In the United States, we are working with the National Association of Secretaries of State (NASS), the nation's oldest nonpartisan professional organization for public officials. ChatGPT will direct users to CanIVote.org, the authoritative website on US voting information, when asked certain procedural election related questions—for example, where to vote.”
Voters across the political spectrum largely agree that truth and transparency have the ability to cure many of the issues we have faced in selecting our leaders.
It’s too early to say exactly how the WEF meetings will shape the future of AI, but we’ll be keeping a close eye on the news that comes out of Davos this week.
With that, we’d like to hear your thoughts. How do you think AI regulation should be handled, if at all? Let us know what you think at firstname.lastname@example.org.