In a democratic society, questions of how much or how little freedom to allow in public demonstrations have to be debated and deliberated, not preordained in code. Organic law, by contrast with code-as-law (as Lawrence Lessig has described the operation of code in cyberspace), is based on judicial interpretation and real case studies. Unlike computer code, it is not a series of binary “yes/no” options; it is not physically coercive like “pass/no pass,” nor consecutively processed like “first in/last out.” Organic law accommodates a wide variety of opinions and circumstances on these issues, and the binary thinking of code and engineering is not the best mode to apply to the organic politics of real people in the real world.
For example, No. 8 of the Silicon Valley Principles ambitiously postulates that "human rights implications" can be recognized in code and that programmers (not states) should "ensure that technology is used in the exercise of fundamental freedoms and not for the facilitation of human rights abuses".
While the intentions may be noble, this is quite frankly an absurdity, and a dangerous one. No one has empowered technologists and their companies to make adjudications on matters of law -- much less to incorporate those adjudications into automatic, robotic processes that decide what is "good" or "bad". And as I’ve explained in my paper on “objections” to the principles, their commercial and corporate interests make them foxes guarding the henhouse.
Aside from this theological dispute about code versus law (don’t concede that code *is* law), there are wild disagreements even among technologists themselves about these issues, and it's wrong to set any of them in code.
Example: when the government of the UK raised the question of whether BlackBerry messaging services should be suspended in areas where looting and rioting were being deliberately planned through their use, Internet freedom advocates such as Rebecca MacKinnon issued cautionary warnings to David Cameron that the UK would become "like" China if it started blocking SMS messages.
Yet such an option, enforced by a liberal democratic state under the rule of law, would not have been “like” China, and certainly would not have deprived the public or individuals of their freedom of expression, given the many alternative venues besides BlackBerry messages in one locality. (They could go on Facebook or TV.) And when lives were at stake (people were injured and killed) and property and livelihoods were at risk, the intervention of the authorities in the face of "imminent violent action" seemed perfectly justified and consistent both with Article 19 and with British law protecting free speech.
Everywhere but in Silicon Valley’s cyberspace (where, ironically, the platforms maintain their own sometimes draconian speech regulations), “time, place, and manner” restrictions on freedom of speech are recognized as lawful. People will disagree about the best course here -- so how would "coding for human rights" work? Make it impossible for governments ever to shut off cell phones? Maintain an “always on” state that defies intervention no matter what?
Or take the disagreements about encryption and protection of satellite phones, given the possibility that they were used to target journalists for assassination. There are opposing views on this: some believe that it was not satellite phones but computer connections such as Skype that were to blame in the recent deadly attacks on reporters in Syria; others think that journalists should in fact make their communications obviously trackable and mappable, as a kind of protection through visibility -- although the good will of the very states doing the killing of journalists hardly seems something to rely on.
So again, what is "coding for human rights" -- making no encryption possible, or encryption that might inevitably be hacked? People want their own individual options kept open, and technologists could foreclose them with rigid notions of what constitutes "rights". (And I might add that in these principles, as in other manifestos from Silicon Valley, the concept of encryption is invoked selectively, arbitrarily, and opportunistically. When it came time to attempt encryption of digital content, always and everywhere the geeks told us it “could not be done” because of the ever-present possibility of hacking, even with obfuscation or with policies that strove merely for minimal compliance. When it comes time to secure their own privacy and their own realms, suddenly encryption is both a necessity and a technical feasibility. It is much like the readiness with which geeks concede that malware sites harming their own software and their own domains can be blocked, while refusing to apply the same logic to blocking obvious piracy sites.)
So while in Silicon Valley Principle No. 8 the companies were essentially urged not to do evil, in No. 13 they would insist on the platforms staying open no matter what, depriving governments and law enforcement of the ability to take appropriate action to stop assault and destruction of property, i.e. evil. How could that possibly be justified? The makers of electric lights, cars, phones, and faxes have never insisted on such right-of-way and eminent domain for their products, nor on such a “flexible” notion of good and evil.
No. 10 completely abdicates responsibility and in fact departs from the sort of basic TOS protections that all ISPs have -- there are court-established limits to how much you can harass, stalk, and bully a person online and still expect the ISP to shield you from the police. The recent case in New Jersey, in which a college student used a webcam and Twitter to bully his gay roommate, who then committed suicide, is an example of the limits communities and courts are putting on social media technology.
No. 11 gives away the utopian notion of "a borderless virtual world", which in fact defies the sovereignty of states. But even the virtual world of Second Life has to abide by the laws of the State of California on computer fraud, as some customers have found, and the platform providers are forced to ban gambling, pyramid schemes, simulated child pornography (because it is actionable in Europe), the use of Nazi symbols, and more, because they are not in an autonomous realm and face real-life law-enforcement response, if not cancellation of services by the credit-card companies. The notion of the “borderless virtual world” that now directly assaults Congress and other liberal democratic institutions to get its way must be challenged.
The principles are permeated with the notion that it is not states or international treaty bodies or courts that will decide the enforcement of human rights, but groups of "stakeholders" -- engineers, users, whoever. These groups have no status or credibility to set and monitor rights: they are selective about which rights they choose; they define them tendentiously to serve their own corporate interests; and they are revolutionaries who will not concede that sovereign states, with their checks and balances and parliaments and courts, decide law – not them.
I call this "the tyranny of who shows up" and the authoritarianism of innovation. These are platforms without the most basic tools of democracy – to cite but one small example with great implications, they are deliberately and ideologically structured to keep out "no" votes: you can "like" but not "dislike".