The Case for a Deepfake Equities Process
The United States needs to create a government-wide process to carefully weigh if and when it would ever use deepfakes.