Vinton Cerf published a surprising essay today in the New York Times arguing against making Internet access a human right.
He's right, but for the wrong reasons: he is self-interested, writing on behalf of his company, Google, a massive multi-user online platform built on free search and paid ads.
Vinton Cerf, on behalf of his business interest Google -- essentially no different from any other oligarchic corporate interest -- wants to make sure that he does not become liable in any way -- by law or by moral pressure -- for securing access for his customers to the machine that makes his products. The Internet isn't just a work tool for Cerf, like a lathe; it's like wood or water or coal for other industries -- it's a utility. A utility that nobody -- not governments, not corporations -- has actually treated as a utility, but rather as an inexhaustible natural resource.
Cerf is right to take up that seemingly selfish position based on pure business interest, because the alternative isn't fair or just. No freely-associated entity -- business, union, nonprofit, club -- should be forced to do things it does not want: to provide goods for people at its own expense, to take members it does not want, to say things it does not believe -- unless there is a really compelling legal requirement, e.g. against incitement of imminent violence, theft, or fraud. Freedom of association is really important and shouldn't be trumped by freedom of expression -- that is, one group in society should not be making another group "do something," i.e. supply a social good like access, and the government should not compel this by law either. The inevitable question that arises is: who will pay for all that access? I've only got a regular Verizon DSL line. Shouldn't I be entitled to FiOS? How much broadband?
Cerf's assertive complaint comes at a time when Google is being asked, as a company, to take on various free-the-Internet chores: give secret aliases to revolutionaries, pseudonyms to dissident writers under authoritarian regimes, or cover to victims of domestic violence on Google+, its new social networking platform; stand up to authoritarian states like China that want to block search and people's free expression; lobby against various pieces of legislation that seem to "break" the Internet, like SOPA; and take on various other ethical obligations (as it seems to have done, but not really, in the GNI).
But Google doesn't have a budget to connect everybody to the Internet, and neither do most countries or the UN. Broadband, free or cheap service, guaranteed online time -- these are all social goods, but they are achieved not as "negative" rights, where the state merely has to get out of the way and not obstruct freedom, but as "positive" rights, where the state has to create policies and spend funds to secure them. The former belong to the realm of civil and political liberties; the latter to the social and economic rights that the UN and some countries, but not the US, recognize. The US believes that social goods are achieved through free politics, policy-making, and democratic budget-setting, not by constitutional law.
As the veteran civil libertarian Aryeh Neier once explained in a debate with the economic rights proponent Philip Alston, who has held a number of UN positions, if you make health care a human right, as South Africa did after the end of apartheid, then you will end up with cases like a demand for kidney dialysis to save a patient that the country cannot afford -- and the state deciding, through a court of law, not to save a life. Canada can afford to take care of dialysis patients; South Africa cannot.
Policies don't always become viable or visible at their extremes, and you could still argue for a state's obligation to provide a basic set of health services -- perhaps school vaccinations and pap smears and mammograms and EKG tests, or something along those lines. But these are *debates*, and enshrining international norms and constitutional norms about them in law doesn't work if you have a capitalist system with a free market. Which is what benefits Cerf and Google.
Vinton Cerf has adopted this anti-rights position with an ideological flair, not making the arguments that UN experts might about positive and negative rights -- or another argument, about justiciability, i.e. the ability to declare how a right is violated and to find a remedy through litigation. He has framed it instead as a matter of moral duty rather than law, because of the implications of litigation and liability.
Example: I had a lovely blog here with 10 points, written a few months ago when I read the report on the Internet by the UN Special Rapporteur on Freedom of Expression, Frank La Rue, where he basically calls on states to create a "right to the Internet". This idea is very attractive to NGOs who want to add it to the repertoire of international norms that they see themselves as securing (and thereby helping to put themselves in power). But the reason La Rue in fact stops short of an outright call is that there is no treaty supporting it per se, and demands to negotiate a new treaty among states would immediately bring in the "defamation of religion" debate that continues to trouble UN bodies and which hasn't been entirely put to rest by the US-Egyptian co-sponsored resolution at the UN Human Rights Council.
But my lovely blog was eaten by a browser crash, and the blogging software didn't save a copy as it is supposed to -- and voila, my Internet rights were violated. See how that works? When you make access to the Internet a right, like the right to water, then when God or Nature doesn't let it rain, or when your computer crashes, technology or other higher forces become the dispenser of that right.
When the US argues at the UN against economic rights, it says they are aspirations that should be the subject of politics, not law. Neither the US nor Somalia has ratified the Convention on the Rights of the Child, yet arguably children's rights are far, far better secured in the US than in Somalia, due to a host of factors in and outside of law.
Cerf, too, calls the concept of Internet access as a right aspirational -- he defines it as a goal, not an obligation. Furthermore, he makes the technologists' quintessential argument: technology is just a tool -- it's neutral! It's not a thing-in-itself, but only a bridge to other things. So don't hold technologists responsible for what happens with technology!
At a simple level, sure, Cerf is right. The Internet is just a series of tubes, remember? Or as I always say, a telephone hooked up to a truck. It's not special. It's not an Autonomous Region. It is not Second Life with fairy wings!
But at a more complex level, he's misleading. Because in the end, Cerf says "Improving the Internet is just one means, albeit an important one, by which to improve the human condition." Ah, so he does believe, after all, that this "tool" isn't neutral, but should have a do-good mission welded into it. He is articulating the Silicon Valley geek religion that technology is salvational -- that it betters mankind. That it is also an instrument that has led people to lose their teenage sons and daughters to suicide isn't a problem he accepts as his; and he certainly doesn't care if the Internet demolished the newspaper, book, music, and movie industries, and even government itself, in WikiLeaks. Oh no, technology is only a tool, and people in fact use it to worsen the world -- possibly because they don't, like Google, have a mantra: "Don't Be Evil".
Along the way, we get some other glimpses into the Cerfian metaverse:
Yet all these philosophical arguments overlook a more fundamental issue: the responsibility of technology creators themselves to support human and civil rights. The Internet has introduced an enormously accessible and egalitarian platform for creating, sharing and obtaining information on a global scale. As a result, we have new ways to allow people to exercise their human and civil rights.
In this context, engineers have not only a tremendous obligation to empower users, but also an obligation to ensure the safety of users online. That means, for example, protecting users from specific harms like viruses and worms that silently invade their computers. Technologists should work toward this end.
At one level this is corporate double-speak -- one of the Internet rights campaigners like Rebecca MacKinnon might well ask: but if you support human rights, then why not make Internet access one of them? Or at least call for a policy or a program that brings broadband to poor neighbourhoods, or champion the rights of dissidents imprisoned for their blogs.
But at another level, it's a vision of people who think they already rule the world, but can and should be persuaded to do it ethically -- at least, by their own notion of ethics. For Cerf, "safety first" here doesn't call to mind bullied teenage suicides or the loss of media jobs, but threats *to the technology itself*, like viruses. This is an excellent example of selectivity: technologists find worms to be a threat -- vandalism by other technologists -- but not piracy, the stealing of content. They will block the one and declare the fight against it an ethical goal; they will not do so with piracy. And that's one of the reasons you don't let people who tell you they are "empowering" you -- as if they could do that -- run everything, and take away or give you rights.
In fact, Cerf is also laying the groundwork -- endlessly laid already on LiberationTechnology, the subscription Stanford University list -- for some kind of engineer/geek/progressive-driven body or network (a wired state) that would "run things," because, oh, governments, corporations, individuals, etc. are all so corrupt and, well, not progressive.
Ultimately, the right to Internet access depends on them more than it does on governments.
Cerf finishes by talking about the IEEE -- the Institute of Electrical and Electronics Engineers:
It is engineers — and our professional associations and standards-setting bodies like the Institute of Electrical and Electronics Engineers — that create and maintain these new capabilities. As we seek to advance the state of the art in technology and its use in society, we must be conscious of our civil responsibilities in addition to our engineering expertise.
Sounds boring and benign, no?
I've spent many an hour fighting the IEEE's virtual worlds subsection as it sought to gut property rights, in obscure places like their discussion lists or the SL JIRA. At some point, when the edgy techno companies like Linden Lab and other virtual-world makers dropped out of attempting to make their worlds interoperable, of all things, the US military stepped in -- something I've protested mightily, but of course, alone, and in obscurity.
In the course of trying to understand what another standards-setting body -- the IETF -- is and does, I found out how they don't vote on things. There's no vote. It's not a democracy. They say they can't have one-man/one-vote because they never know who might show up at meetings. Say, hundreds of IBM engineers could show up all of a sudden and skew the vote. Or some company profoundly affected by this or that decision could rush through a vote, or whatever.
So they do something else: they hum. They hold meetings, and they hum. It's kind of a "voice vote," if you will. If people like an idea, they hum. They may then be intimidated into humming because their neighbour is humming -- that's the kind of thing that happens when you don't have a secret ballot! -- but no matter. The hummers have it! (It's kind of like the way the Occupy Wall Street urban campers wriggle their hands to signify approval or not, looking disturbingly like autistic children -- anything but one person/one vote and a secret ballot, eh guys? Democracy...)
Yes, they do indeed hum -- and you can see about this and many other geek-cultural dogmas here:
Another aspect of Working Groups that confounds many people is the fact that there is no formal voting. The general rule on disputed topics is that the Working Group has to come to "rough consensus", meaning that a very large majority of those who care must agree. The exact method of determining rough consensus varies from Working Group to Working Group. Sometimes consensus is determined by "humming" -- if you agree with a proposal, you hum when prompted by the chair; if you disagree, you keep your silence. Newcomers find it quite peculiar, but it works. It is up to the chair to decide when the Working Group has reached rough consensus.
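For what it's worth, the "rough consensus" procedure the Tao describes can be caricatured in a few lines of code. The threshold below is entirely my own invention -- in reality there is no number at all, and the chair simply decides when the majority of those who care is "very large" enough:

```python
# A toy sketch of the IETF "humming" procedure quoted above.
# The 80% cutoff is an illustrative assumption, not IETF policy:
# the real standard is the chair's judgment of "rough consensus".

def rough_consensus(hums, silences, threshold=0.8):
    """Return True if a 'very large majority' of those who care hummed.

    hums      -- participants who hummed when prompted (agree)
    silences  -- participants who kept their silence (disagree)
    threshold -- illustrative stand-in for the chair's judgment
    """
    voiced = hums + silences
    if voiced == 0:
        return False  # nobody cared either way; nothing to call
    return hums / voiced >= threshold

# 42 hums against 5 silences clears the bar; 20 against 15 does not.
print(rough_consensus(42, 5))   # True
print(rough_consensus(20, 15))  # False
```

Note what the model leaves out, which is exactly the author's point: there is no secret ballot, no registry of who may hum, and the deciding function is ultimately a person, not a rule.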