For a long time, I wasn’t sure how to feel when I read self-professed minimalists and zero-wasters talking about decluttering their lives thanks to the cloud. In a more or less direct way, these authors and lifestyle gurus were singing the praises of companies like Netflix and Google, Spotify and Pinterest. And at first it might make sense, right? Think of all the DVDs or CDs that won’t end up in the landfill thanks to streaming services. All the magazines and books that won’t be taking up room on your shelves. All the fossil fuels saved by not shipping them to the local big box store where you would have bought them.
What you, and honestly most people, don’t realize, though, is just how fundamentally dirty proprietary software, cloud services, and the associated information-gathering really are. Let’s start with the most obvious: server farms and data centers.
What do over one billion monthly active users look like when projected onto material reality? Almost 50 thousand Google searches per second? How much physical space and energy do our online activities require? According to Paul Wallbank, who was asked to research this for an ABC radio show way back in 2012, one can only guess:
Figuring out how much data is saved in computer systems is a daunting task in itself and in 2011 scientists estimated there were 295 exabytes stored on the Internet, desktop hard drives, tape backup and other systems in 2007.
2007? That’s ancient history as far as technology is concerned. We’ve likely quadrupled that amount since. He goes on:
In 2009 it was reported Google was planning to have ten million servers and an exabyte of information. It’s almost certain that point has been passed, particularly given the volume of data being uploaded to YouTube which alone has 72 hours worth of video uploaded every minute.
Facebook is struggling with similar growth and it’s reported that the social media service is having to rewrite its database. Last year it was reported Facebook users were uploading six billion photos a month and at the time of the float on the US stock market the company claimed to have over 100 petabytes of photos and video.
According to one of Microsoft’s blogs, Hotmail has over a billion mailboxes and “hundreds of petabytes of data”.
For Amazon, details are harder to find; in June 2012 Amazon’s founder Jeff Bezos announced their S3 cloud storage service was now hosting a trillion ‘objects’. If we assume the ‘objects’ – which could be anything from a picture to a database running on Amazon’s service – have an average size of a megabyte, then that’s an exabyte of storage.
The amount of storage is only one part of the equation; we have to be able to do something with the data we’ve collected, so we also have to look at processing power. This comes down to the number of computer chips or CPUs – Central Processing Units – being used to crunch the information.
Probably the most impressive data cruncher of all is the Google search engine that processes phenomenal amounts of data every time somebody does a search on the web. Google have put together an infographic that illustrates how they manage to answer over a billion queries a day in an average time of less than quarter of a second.
Google is reported to own 2% of the world’s servers. The company is very secretive about the numbers, but estimates based on power usage in 2011 put the number of servers it uses at around 900,000. Given Google invests about 2.5 billion US dollars a year in new data centres, it’s safe to say they have probably passed the one million mark.
How much electricity all of this equipment uses is a valid question. According to Jonathan Koomey of Stanford University, US data centres use around 2% of the nation’s power supply and globally these facilities use around 1.5%.
The numbers involved in answering the question of how much data is stored by web services are mind boggling and they are growing exponentially. One of the problems with researching a topic like this is how quickly the source data becomes outdated.
It’s easy to overlook the complexity and size of the technologies that run social media, cloud computing or web searches. Asking questions on how these services work is essential to understanding the things we now take for granted.
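Wallbank is right that the source data goes stale fast, but the figures above can at least be cross-checked against one another. Here’s a quick sketch of that arithmetic (the sketch is mine, not his; SI units assumed, so 1 MB = 10^6 bytes):

```python
# Cross-checking the figures quoted above.
# Assumption (mine): SI units, so 1 MB = 10**6 bytes, 1 PB = 10**15, 1 EB = 10**18.

MB, PB, EB = 10**6, 10**15, 10**18

# Search volume: ~50 thousand searches per second, converted to a daily total.
searches_per_day = 50_000 * 60 * 60 * 24
print(searches_per_day)         # -> 4320000000, i.e. ~4.3 billion,
                                # comfortably "over a billion queries a day"

# S3 storage: megabyte-sized objects, counted in billions vs. trillions.
print(10**9 * MB // PB)         # -> 1: a billion 1 MB objects is about a petabyte
print(10**12 * MB // EB)        # -> 1: an exabyte requires a trillion of them
```

At a 1 MB average size, the exabyte conclusion only works if S3 held a trillion objects, not a billion – and a trillion is, in fact, the milestone Amazon announced in mid-2012.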
Facebook users were uploading six billion photos per month back in 2011. How many photo albums, how much film, would that translate to? Ah, but it would never be a direct equivalence; as the Jevons paradox suggests, that number is only so large because it’s so easy to deal with photographs this way, compared to the relative “inefficiency” of dealing with film and photo paper in the pre-digital era.
Okay, so, 2% of US energy goes directly to processing search queries, broadcasting tweets, and generally keeping the internet alive 24/7. According to The New York Times, that 1.5% worldwide 2012 figure is equivalent to the output of 30 nuclear power plants. According to that same article:
A yearlong examination by The New York Times has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness.
Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show. Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centers can waste 90 percent or more of the electricity they pull off the grid, The Times found.
To guard against a power failure, they further rely on banks of generators that emit diesel exhaust. The pollution from data centers has increasingly been cited by the authorities for violating clean air regulations, documents show. In Silicon Valley, many data centers appear on the state government’s Toxic Air Contaminant Inventory, a roster of the area’s top stationary diesel polluters.
Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants, according to estimates industry experts compiled for The Times. Data centers in the United States account for one-quarter to one-third of that load, the estimates show.
“It’s staggering for most people, even people in the industry, to understand the numbers, the sheer size of these systems,” said Peter Gross, who helped design hundreds of data centers. “A single data center can take more power than a medium-size town.”
Energy efficiency varies widely from company to company. But at the request of The Times, the consulting firm McKinsey & Company analysed energy use by data centers and found that, on average, they were using only 6 to 12 percent of the electricity powering their servers to perform computations. The rest was essentially used to keep servers idling and ready in case of a surge in activity that could slow or crash their operations.
“This is an industry dirty secret, and no one wants to be the first to say mea culpa,” said a senior industry executive who asked not to be identified to protect his company’s reputation. “If we were a manufacturing industry, we’d be out of business straightaway.”
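The article’s numbers hang together arithmetically, which is easy to verify if we assume (my assumption, though it’s a standard rule of thumb) that a nuclear power plant outputs roughly one gigawatt:

```python
# Sanity-checking the Times' figures.
# Assumption (mine): a typical nuclear plant outputs roughly 1 gigawatt.

GW = 10**9                          # watts
worldwide_load = 30 * GW            # "about 30 billion watts" worldwide

print(worldwide_load / GW)          # -> 30.0 one-gigawatt plants

# US data centers: "one-quarter to one-third" of the worldwide load.
print(worldwide_load / 4 / GW)      # -> 7.5 GW
print(worldwide_load / 3 / GW)      # -> 10.0 GW

# McKinsey: only 6-12% of server electricity performs computation; the rest
# idles -- which matches the "90 percent or more" wasted figure above.
for used in (0.06, 0.12):
    print(f"{used:.0%} used -> {1 - used:.0%} idle")
# -> 6% used -> 94% idle
# -> 12% used -> 88% idle
```

In other words, the “90 percent or more” waste figure and McKinsey’s 6 to 12 percent utilization are two views of the same number.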
There you have it, folks, straight from the horse’s mouth. Data centers, from the coal burned to feed their substations to the rare earth metals in their equipment, are downright toxic operations, and no amount of greenwashing can change that. It is the nature of the digital beast.
“That’s all very interesting,” you might be thinking to yourself. “But what does this all have to do with proprietary software and surveillance?”
Good question! Now that I’ve established that data centers, the powerhouses behind our social media, our search engines, and our entire modern online existence, are fundamentally antagonistic to an environmentalist way of living, we can start talking about the ways that those data centers are used and identify the myriad ways that they are misused from an eco-justice perspective. Fasten your seat belts.
The title of this post names the two biggest problems in the centralized world of technology that the vast majority of us currently live in: namely, proprietary software and surveillance. Proprietary software can be defined as any kind of program, running on any kind of electronic device, that is not open-source. Opensource.com describes open-source products as “something that can be modified because its design is publicly accessible”.
While it originated in the context of computer software development, today the term “open source” designates a set of values—what we call the open source way. Open source projects, products, or initiatives are those that embrace and celebrate open exchange, collaborative participation, rapid prototyping, transparency, meritocracy, and community development.
What’s the difference between open source software and other types of software?
Some software has source code that cannot be modified by anyone but the person, team, or organization who created it and maintains exclusive control over it. This kind of software is frequently called “proprietary software” or “closed source” software, because its source code is the property of its original authors, who are the only ones legally allowed to copy or modify it. Microsoft Word and Adobe Photoshop are examples of proprietary software. In order to use proprietary software, computer users must agree (usually by signing a license displayed the first time they run this software) that they will not do anything with the software that the software’s authors have not expressly permitted.
“Computer users must agree that they will not do anything with the software that the software’s authors have not expressly permitted.”
To most people, this is a given. After all, if you rent a car from a rental company, there’s a reason you have to sign a contract outlining all the ways in which you won’t abuse this vehicle you’re borrowing. The car is not yours, after all; the car’s owner, the rental company, gets to make the rules, and if you don’t like it, then tough. But software isn’t actually anything like a rental car – or anything else you can physically own that has restrictions put on it. In fact, it’s not even like a car that you bought and paid for yourself and own outright.
Cracked.com used a similar analogy to explain a disturbing trend in video game publishing and distribution. This was back in 2011, but things are still on this path:
Imagine if every time you drove your car, you had to first check in with the car manufacturer to confirm that it’s you behind the wheel. Let’s say that this relies on an Internet connection, and if the connection is down, you can’t drive.
With a car, you can paint it, swap out parts, and fix it yourself all you want. You’re even allowed to take off the maker’s mark. You can’t do that with proprietary software, even if you did pay full price for it and own a physical copy. It’s against the law. Why is this?
Well, it’s the same argument used against digital piracy. First of all, “digital media piracy” is a misnomer – traditionally, a pirate was someone who committed theft and violent crime at sea, stealing physical goods and inflicting physical harm on victims. Applying the term to those who violate copyright has been in practice since before there even were copyright laws – since at least the early 1600s – and modern copyright-holders have been quick to jump on the bandwagon when it comes to describing peer-to-peer sharing. Yet equating copyright infringement with theft is an argument US courts have rejected since the mid-’80s (Dowling v. United States, 1985).
So if software isn’t comparable to a physical product, but is something that you are forced to purchase in order to “own”, then why are there restrictions on what you can do with that product, who you can give it to, and how you can give it? Especially when there isn’t even a physical, material original that can be stolen, gifted, or moved from one place to another, rather than reproduced indefinitely with no harm done to the original? If you weren’t allowed to modify your car, or lend it to someone, or in some cases even look under the hood, would you say that you truly owned that car? Probably not.
Richard Stallman, founder of the free software movement and original developer of the GNU operating system (the “GNU” in GNU/Linux), recalls the history of software since the ’80s to describe why this is a problem:
In 1983, the software field had become dominated by proprietary (ie nonfree) programs, and users were forbidden to change or redistribute them. I developed the GNU operating system, which is often called Linux, to escape and end that injustice. But proprietary developers in the 1980s still had some ethical standards: they sincerely tried to make programs serve their users, even while denying users control over how they would be served.
Does anyone remember Ello? It was that new, hip, social network that was supposed to be a Facebook-killer. It was in the spotlight for about 2 months late last year and I haven’t heard from it since… maybe that’s because I left Facebook at the time and headed for Diaspora* instead, and most of the discussion about Ello was happening there. Or maybe, it’s because Ello was doomed from day one, and has already fallen into obscurity. The thing about Ello was that it was trying to be Facebook… except without all the stuff everyone hates about Facebook; its selling point was that it was going to be the “first” ad-free social network out there, that its users weren’t a product to be sold to advertisers. But advertising is only just one of the problems with Facebook–and indeed all centralized social media platforms–and this was an uphill battle for the burgeoning website. In other words, they were trying to make a new kind of fire that only burns slightly less hot than normal fire, and brand it as safe to touch.
How would Ello make money, people asked. Servers don’t run on social capital and positive mentions on Twitter; programmers and designers don’t get paid, and venture capitalists don’t get rich, off of an adoring userbase alone. So amid the vocal criticisms taking over the blogosphere, Ello announced that it had finalized a manifesto. Ah, yes, a manifesto! Because we know that companies always keep their promises. Not.
Stallman continues (oh and hey, here’s more for that car analogy):
How far things have sunk. Developers today shamelessly mistreat users; when caught, they claim that fine print in EULAs (end user licence agreements) makes it ethical. (That might, at most, make it lawful, which is different.) So many cases of proprietary malware have been reported, that we must consider any proprietary program suspect and dangerous. In the 21st century, proprietary software is computing for suckers.
What sorts of wrongs are found in malware? Some programs are designed to snoop on the user. Some are designed to shackle users, such as Digital Rights Management (DRM). Some have back doors for doing remote mischief. Some even impose censorship. Some developers explicitly sabotage their users.
What kinds of programs constitute malware? Operating systems, first of all. Windows snoops on users, shackles users and, on mobiles, censors apps; it also has a universal back door that allows Microsoft to remotely impose software changes. Microsoft sabotages Windows users by showing security holes to the NSA before fixing them.
Apple systems are malware too: MacOS snoops and shackles; iOS snoops, shackles, censors apps and has a back door. Even Android contains malware in a nonfree component: a back door for remote forcible installation or deinstallation of any app.
What about nonfree apps? Plenty of malware there. Even humble flashlight apps for phones were found to be reporting data to companies. A recent study found that QR code scanner apps also snoop.
Apps for streaming services tend to be the worst, since they are designed to shackle users against saving a copy of the data that they receive, as well as making users identify themselves so their viewing and listening habits can be tracked.
The Free Software Foundation reports on many more cases of proprietary malware.
What about other digital products? We know about the smart TV and the Barbie doll that transmit conversations remotely. Proprietary software in cars that stops those we used to call “car owners” from fixing “their” cars. If the car itself does not report everywhere you drive, an insurance company may charge you extra to go without a separate tracker. Meanwhile, some GPS navigators save up where you have gone in order to report back when connected to update the maps.
Amazon’s Kindle e-reader reports what page of what book is being read, plus all notes and underlining the user enters; it shackles the user against sharing or even freely giving away or lending the book, and has an Orwellian back door for erasing books.
What’s to stop these proprietary developers from encroaching on our privacy and freedom even more than this? Well, nothing, really. Nothing except the resources necessary to take these companies to court and compel a judge or jury to side with the user. And even then, it would be too little, too late: the whole reason proprietary software exists is so that its code can’t be audited. And if it can’t be audited, then you have no control over – let alone knowledge of – what happens to you and your data when you use that service.
But wait, there’s more!
Closed-source software is protected by copyright law the world over, and many components of software fall under the category of intellectual property. (And these laws are getting tighter and tighter with every new international trade agreement that gets passed.) So unauthorized distribution and modification are prohibited. A partial answer to this has been Creative Commons, which lets content producers specify a custom license for their works that may permit modification, derivation, redistribution, and so on. Having those options spelled out in easy-to-understand language is helpful, sure, but I argue that all unfree creative works, and in this case, unfree, closed-source software, are bad.
Above, I’ve already shown that un-auditable software creates an environment that is at best indifferent to, and at worst encourages, abuses of users by both the companies themselves and third-party exploiters. But proprietary software also undermines the integrity of:
Creativity and Innovation
The legal structures that allow proprietary software to exist, patent and copyright law, do much more to stifle creativity and innovation than they do to encourage them. Thanks to these practices, Apple holds design patents covering things like rounded corners on phones. Many hurdles in the history of the electric car can be blamed on patent misuse as well. But as one user on Opensource.com points out, when it comes to software:
Software is mathematics. Mathematics is not patentable. The whole current process is irrational because the lawyers and judges and legislators WANT software to be patentable and so they keep trying to argue that some mathematics is mathematics and other mathematics is not mathematics. It is inherently contradictory and the current disaster is what we have.
Whom do proprietary software, copyright law, and patent law protect? In theory, they’re supposed to level the playing field for everyone, but in practice, they accomplish quite the opposite. Large companies can go out of their way to put their smaller competitors on the defensive, and in some cases acquire their patents once they’ve got them facing bankruptcy. The innovations that might have changed the industry for the better, and put the bigger company out of business, get buried.
Or what about the case of individuals who have come up with a creative work, a piece of art or software, only to later find out that a larger company has appropriated their work and is now making money from it on a mass scale? This has been the case with the development of large, highly visible projects like Minecraft, and it happens all the time to hobbyist artists across the web as they go to war against entities that run the gamut from random Chinese Etsy stores to Urban Outfitters. Many of these creators are young, low-income earners, and sometimes even minors; none of them have the ability to shut down every instance of “art theft”, or even any of them. Sometimes all they can afford to do is send emails. Going to court is completely out of reach, and most of the time the offending company would be able to run circles around these nobody plaintiffs in court appearances and legal fees anyway. In other words, there is often no recourse for the very people copyright and patent law is supposedly designed to protect the most.
I’m imagining it right now: “What in the heck does this all have to do with the environment? With zero waste??” you must be thinking. I know, I’ve been covering a lot of ground that looks irrelevant, but don’t worry! I’m just about there. But let’s summarize first:
- The physical infrastructure of digital goods and services is a disaster for the environment
- The legal infrastructure of digital goods and services is disastrous for humans’ natural inclination to be creative and innovate
- The principle of prioritizing complete control over digital goods and services is disastrous for the needs and safety of users
- Proprietary = surveillance
Let’s revisit the legal thing again.
Take a moment to think about the legal process, the machine, that allows proprietary software to exist: patent and copyright laws; intellectual property laws; private property laws; the judicial system to rule on these laws; the police and FBI to enforce these laws; the prison-industrial complex to deal with those who break any of these laws with sufficient frequency or severity; lawyer fees; court fees; consulting fees; resources to issue DMCAs and takedown requests; resources to file patents and copyrights; resources to file lawsuits; resources to defend against lawsuits; the physical and human infrastructure required to process all of that filing, all of that suing, all of that patenting.
Now, imagine what life could be like without copyright and patent laws. Okay, let me revise that; imagine a life without copyright laws and patent laws in a world with no profit motive. People would make things for the sheer joy of making things, just like how we all did when we were children. Somewhere along the way, though, we started being told not to copy other people, to produce things that were useful and not “frivolous”, to be happy with what we get when it comes to proprietary innovations and products that other people make.
But no. We live in a world where smart phones are put together using glue instead of screws because planned obsolescence is more lucrative than creating a product that’s built to last and easy to fix.
Enforcing laws that protect private property, that protect profits, is wasteful. This system eats up unimaginable amounts of paper, energy, e-waste, and fossil fuels. It squanders the creative drives of individuals who are yoked, willing and unwilling, to the pursuit of profit. It punishes innovations that don’t please the right people at the right time, and paves the way for what some argue to be a grand digital plan. With so many casualties in this war for the “right” to spy on users, the “right” to shackle users to toxic software, and the “right” to abuse users, all with impunity, shouldn’t we start seeking out alternatives? And if there’s one thing I know zero-wasters are really good at, it’s finding alternatives to wasteful products.
So where do we start?
Well, the first obvious solution is to make the switch to as much free and open-source software (FOSS) as possible. Use Diaspora* instead of Facebook and Twitter; use VLC instead of iTunes or Windows Media Player; use LibreOffice and Etherpad instead of MS Office and Google Docs; use Firefox and Tor instead of Chrome, IE, and Safari; use Calibre to manage your e-book collection; use Thunderbird to manage your emails; use Amaya or KompoZer to manage your websites; use Pidgin/Adium for instant messaging; use Jitsi for VoIP voice and video calling; use DuckDuckGo for your browser searches.
And so on. Simply by removing yourself from the ecosystem of profit-driven software and technology, even if just a little, you’ve reduced your footprint. Surveillance tech and data miners have one less opportunity to log information about you to store on some server farm someplace. One less opportunity to sell you something or sell you to someone else. One less opportunity to bully you using a rigged legal and judicial system if you fall out of line.
I believe it’s useful to think about the future that I want; to imagine that future in vivid, painstaking, technicolor detail.
I imagine it because I have to. I owe it not just to myself, but to the world around me, the ecosystems and species we’ve crippled, and my brown and black neighbors living on far-flung continents. I owe it to their children, too. Because if I can’t imagine the world that I want to live in, then I definitely can’t work toward it, and if I can’t work toward it, it will never become a reality.
Does that world involve the internet, though? Honestly, I don’t know. Without an oil industry, I’m not sure there’d be enough motivation to keep manufacturing plastics and silicon. Who would want to mine and refine coltan without a warlord holding their village hostage? I know I wouldn’t. I’ve got a million other things I’d rather do than dig pits looking for gray rock so that I could give that gray rock to someone else, who will give it to someone else, who will give it to someone else, who will eventually give it to someone who could then make a smartphone out of it.
It may surprise you to hear that my ideal world doesn’t really have any need for smart phones. Or server farms.
It may also surprise you to hear that the internet doesn’t need either of those things to survive. (That is, if it’s meant to survive.) The key concept here is decentralization.
A decentralized system is almost always more resilient than a centralized one. As an example, look to biology. A human is a very complex organism; we have a brain, the command center, that controls the rest of the body in exacting detail. If something goes wrong with that brain, say, head trauma, then the rest of the system can suffer tremendously… or be lost altogether. A head injury can easily be fatal, even if the rest of the system is perfectly healthy and undamaged. A tree, on the other hand, is tremendously resilient to damage. Lopping off the crown or any individual branch will rarely spell death for a strong tree. In many species, cutting down an entire specimen may not even be permanently fatal, as the roots will often send up new shoots around the stump.
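The resilience argument can be made concrete with a toy sketch (mine, and deliberately simplistic): model a centralized network as a star where everything routes through one hub, and a decentralized one as a mesh of peers, then knock out a single node in each.

```python
# Toy illustration of centralized vs. decentralized resilience.
# The networks below are invented examples, not real topologies.

def reachable(adjacency, start):
    """Return the set of nodes reachable from `start` (breadth-first search)."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for neighbor in adjacency.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return seen

def without(adjacency, dead):
    """Return the same network with one node removed."""
    return {n: [m for m in nbrs if m != dead]
            for n, nbrs in adjacency.items() if n != dead}

# Centralized: every peer talks through a single hub (the "brain").
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
# Decentralized: peers have several redundant links (the "tree").
mesh = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}

print(len(reachable(without(star, "hub"), "a")))  # -> 1: losing the hub strands everyone
print(len(reachable(without(mesh, "a"), "b")))    # -> 3: the rest stay connected
```

Kill the hub and the star collapses into isolated survivors; kill any one peer in the mesh and the remaining nodes still reach each other, like a tree regrowing around a lost branch.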
Of course, trees aren’t quite as complex as human bodies are. But who said complexity is a good thing? Trees are remarkably good at what they do; that is, communicating between their different parts, making and distributing nutrients, and figuring out the best way to survive given their location.
Shouldn’t that be what the internet does? Or rather, shouldn’t that be all the internet needs to do in an ideal world? Using the word “should” is always tricky when you’re an anti-authoritarian, but I do have to draw the line where the environment is concerned. Right now, the internet looks a lot more like a human body than a tree, and that needs to change if the internet is to survive in the future as a tool with a neutral carbon footprint. It has to be easily disassembled, easily fixed, lightweight (in terms of code), and less complex. Unfortunately that complexity currently manifests as cloud computing, as streaming services, as proprietary software that pulls strings from inside black boxes. Sorry, Netflix fans. I don’t think there’s any way to make your smart TV carbon-neutral.
When I do my imagining of that ideal future, the most technologically “advanced” communities look a lot like those of post-civilizationist thinkers. Computers and video games made with 100% reclaimed materials, powered by renewable energy, maybe reliable, maybe not, from local wind turbines and water wheels. I imagine a society where the internet is used more like a telephone or ham radio; that is, intermittently, peer-to-peer, and only when needed. Maybe it’ll resemble the newsgroups and message boards of the early days of “internet 1.0”. Maybe it won’t. Maybe we’ll have figured out how to fit much more meaningful information and communication into a smaller, simpler network of servers and terminals, winking on and off the map as the sky gets cloudy or the wind picks up, depending on where you are and what time of year it is.
Mostly, though? I long for a future where computers aren’t necessary, where internet access isn’t the only way to be truly present in your community. Pretty funny to hear this coming from a millennial who worked in video games for a while, huh? I honestly think that disconnection is the neurosis of the digital age. We’re disconnected from ourselves, from the people in our immediate communities, from our bioregion, from the plights of others far away.
We need that immediacy and presence back in our lives; our brains and bodies evolved to thrive on it, hundreds of thousands of years before the first drive wrote the first bit. The internet, as a concept broader than most folks, techies and laymen alike, care to imagine, isn’t wholly incompatible with that reality. I believe that there is hope for an internet divorced from industrial infrastructure and resource extraction. But the internet as it exists today is, sadly, mired in it.
“When somebody says ‘I’m going to store something in the cloud, we don’t need disk drives anymore’ — the cloud is disk drives,” said Mr. Victora, a professor of electrical engineering at the University of Minnesota. “We get them one way or another. We just don’t know it.”
Whatever happens within the companies, it is clear that among consumers, what are now settled expectations largely drive the need for such a formidable infrastructure.
“That’s what’s driving the massive growth — the end-user expectation of anything, anytime, anywhere,” said David Cappuccio, a managing vice president and chief of research at Gartner, the technology research firm. “We’re what’s causing the problem.”