The European Union has begun consideration of a new Data Protection Directive that is intended to protect personal information from unwarranted disclosure by corporations or governments. Among the more controversial aspects of the proposal is the idea of a "Right to be Forgotten" -- that is, a right to require data holders (like, say, Google) to delete information about you, even if the information is truthful. Some say the right is technically impossible to implement, while others contend it is essential to protecting privacy.
Into this debate steps my good friend David Hoffman, who has begun a new discussion of the policy implications of obscurity. He calls it the "Right to Fail," and it's worth a read. I'm not sure I agree with all of it, but he is headed in an interesting direction. From the introduction:
In the past, I have discussed the European Commission’s “Right to be Forgotten” proposal, and the issues with trying to provide a comprehensive right to wipe a record clean. I have argued individuals need a sphere of privacy where they know they can make mistakes, without those errors following them for the rest of their lives. Individuals will shy away from risky or provocative ideas and efforts, if they fear organizations will use those activities to discriminate against them forever. These provocative ideas challenge the status quo and are often what is needed to break away from conformity and innovate. Technology companies are familiar with this need for space to allow employees to innovate, and many structure their performance review systems to create the ability for individuals to take risks. I call the need for this space for innovation, “The Right to Fail”.