The context and possible implications of Advocate General Henrik Saugmandsgaard Øe’s opinion in Data Protection Commissioner v. Facebook Ireland.
The advocate general’s opinion sets out important new jurisprudence on how the EU may approach foreign intelligence surveillance in the future.
Since it took effect in 2018, the General Data Protection Regulation (GDPR) has become one of the toughest data privacy regimes in the world.
After last year’s passage of the Clarifying Lawful Overseas Use of Data Act (Cloud Act), officials and journalists in the European Union have ramped up criticism of the American desire for extraterritorial access to electronic evidence, with some accusing the United States of seeking to conduct economic espionage for the benefit of U.S. businesses.
As a proponent of baseline federal privacy legislation, I am encouraged that proposals that would have been poison pills not long ago, such as individual rights to see, correct and delete data as well as new authority for the Federal Trade Commission, are drawing wide support now. But some crucial and difficult issues remain wide open.
A Congress that begins with a government shutdown carrying over and a raft of subpoenas to the executive branch issued by incoming House committee chairs promises to be at least as polarized and partisan as its predecessor. Even so, legislators want to legislate, and will seek some opportunities for bipartisan agreement. One area where this may happen is federal legislation to protect personal information privacy.
Last June, in a 5-4 ruling in Carpenter v. United States, the Supreme Court extended Fourth Amendment protections to an individual’s cell phone location data for the first time.
Chinese human rights practices are in the news again. The White House is reportedly weighing sanctions against Chinese officials and companies that are engaged in or facilitating the mass surveillance and detention of Uighurs in the Xinjiang Uighur Autonomous Region (XUAR).
Back in February, we joined forces in this post to draw attention to the wide array of dangers to individuals and to society posed by advances in “deepfake” technology (that is, the capacity to alter audio or video to make it appear, falsely, that a real person said or did something). The post generated a considerable amount of discussion, which was great, but we understood we had barely scratched the surface of the issue.