Florian Sprenger
With the rise of smart media, the internet of things, and ubiquitous technologies in the last decade, the power of calculation has been transferred on a large scale from isolated, locally bound end devices into environments. 'Everyware', as Adam Greenfield termed these technologies, operates spatially independently within a network and is, ideally, context-sensitive on the basis of the large amounts of sensory data collected by end devices. Beginning with the establishment of mobile laptops and tablets, popularized globally with the smartphone, and projected onward with the rise of the internet of things, digital technologies gain more and more independence from geographical space and transform our environments into spatially distributed networks. At least, this is what companies tell us and what users experience. The infrastructural foundations of this process might suggest another outlook. Computers have evidently not only become devices of daily use, but migrate into more and more objects that communicate with each other. The technical permeation of our surroundings nevertheless depends upon external storage and centralized processing power, because miniaturization and automation foster the construction of ever smaller components with few applications but a high degree of interconnectedness. The centres of these processes are data centres.
In the face of the enormous amounts of data and the comforts of ubiquitous access, almost no new gadget abstains from cloud services and externalized data storage. The mechanisms of economic extraction connected to these technologies are based on a centralized analysis of collected user data. The foundation of this new dispositif of digital cultures is not only the infrastructures of distribution that enable mobile addressing and constant availability in the form of digital networks. Rather, an intensification and centralization of ever more ambitious services takes place in the background. Many developments of the internet of things abstain from local storage, and these services have no place and no time on the users' devices themselves. But it is not only users who draw on data centres, often without noticing. Many institutions and companies outsource storage capacities to external providers which have the necessary knowledge, promise data security, and in the end turn out to be economically more favourable than local, company-owned data centres. The service of data centres has become a global industry: big players can afford to build their own networks of data centres, while others rent capacities from external suppliers. Today, computing at the edge, that is, in spatial distribution, mobile and miniaturized, is possible only when it is accompanied by computing at the centre. Data centres turn out to be a signum of our present.
But what exactly happens with data in data centres remains opaque, not only because their operations are kept as industrial secrets, but also because the data at the core of a data centre are invisible. There is no public in a data centre: customers don't know each other and can only be connected by the providers. The windowless buildings with metal panelling, usually huge boxes with double walls for optimized heat insulation, prevent any view of the inside. On the rare occasions on which a company grants access to these hallways, only endless server racks and the measures for their protection become visible, secured against fire, earthquakes, and terrorist attacks. The huge server rooms never sleep and know no nighttime. It is thus all the more important to develop a conceptual language to cope with the geopolitical and media-technological dimensions of these interconnected phenomena. What makes them so decisive for our present and the near future is not only the new uses and applications, but a new entanglement of space and technology. The centralized storage of user data and their subsequent examination, the restructuring of software, and the ubiquitous availability of data raise a series of media-theoretical questions, reaching from possible surveillance to the reterritorialization of national territories through extraterritorial data centres.
In this sense, the triad of storage, transmission, and processing can help us to understand the operational modes of data centres, at least heuristically. That data centres may offer all three modes in a bundle, and that this combination accounts for their productivity, does not mean that the distinction between these three operations becomes obsolete. Rather, this triad, which Friedrich Kittler, drawing on the computer architecture developed by John von Neumann, introduced as the basis of his technologically oriented media theory, can guide an attempt to classify the different operational modes of data centres.
Storage and backup, availability and accessibility are only a customer-friendly offer that hides its materiality under the name of the cloud. Data centres are also used for the post-processing of data, for the analysis of big data, and for the local convergence of datasets between cooperating institutions or companies. Cloud-based software such as Microsoft Office 365 or Adobe's Creative Cloud transforms the supply of software into a service that is provided on centralized servers rather than on local computers. No social media, no online shopping, no video streaming, and no NSA surveillance without data centres and their capacity for data processing. But the transmission of data through digital networks also hinges upon corresponding infrastructures: every internet node is a data centre at which, in the act of packet switching, data is temporarily stored for further distribution (and possibly surveillance).
The centralization fostered by data centres is bound to the massive distribution of networked devices without storage, because storage takes space and is inert. The outsourcing of storage-, calculation-, and energy-intensive processes to data centres is one of the preconditions of the distribution of devices. The smart objects of the internet of things and cloud-based smartphones are bound to data centres and constantly exchange data with them. This centralization is also an economic concentration on the big five: Amazon, Apple, Facebook, Google, and Microsoft. The geopolitics of data is an element of the chains of value extraction of digital networks. It is no coincidence that Amazon is the largest supplier of cloud services for institutions and companies. Centralization, in this sense, is also an economic model. These processes of concentration undermine the common idea of democratization through digital networks and thus contradict the supposed levelling of geographical differences.
Accordingly, the material signum of our times is not only the mobile device, but also the data centre and the resulting separation of data collection and data processing. The billions of end devices in the hands of users are confronted by a few gigantic server farms. What appears as a cloud to users and makes services such as search engines, music and video streaming, online shopping, and social networks possible is a complex and capitalized ensemble of millions of servers with specialized software and often also self-designed hardware. Such data centres, called landhelds by Google because of their demand for land, determine the connectivity of handhelds. Bruce Sterling calls the actors and winners of this process of concentration stacks: vertically integrated companies whose business model consists in the constant economic utilization of data from users who use their infrastructure. To sustain this process, the stacks need both influence at the endpoints, in devices, gadgets, and sensors, and capacities for data analysis in the background of the cloud.
Data centres are places both of acting with data and of dealing with the possibility of acting with data. Depending upon their geographical location, their technical configuration, and their target groups, they offer different modes of operation with data. They can accomplish all the tasks that a personal computer performs on the data on its hard drive, but they also offer the chance of cross-connecting locally stored data, quite apart from their sheer storage capacity and processing speed.
For each of their different modes of operation, the productivity offered by data centres is based on the local storage of data, that is, on their collection at a centre and on the materiality of their availability. Data centres can exist without a network, without external access, and without the possibility of cross-connection. But the common denominator of all data centres is the local concentration of data of different provenance. This concentration is bound to the decentralization that digital networks brought with them and that reaches a new stage of escalation with ubiquitous media and the internet of things.
These different options offered by data centres to handle data cannot be separated from their spatial relations: data centres are centres at which decentrally distributed data are collected in one place. For this reason, the selection of an advantageous location is so important: on the one hand, the climatic conditions and the energy resources of the location; on the other, the connection to existing networks, for example undersea cables or important internet nodes. In this sense, a breakdown of the different uses and operations of data centres should include an analysis of the spatial relations of data. The infrastructural characteristic of a data centre consists in the fact that it offers access both here and there: as a cloud service or as software-as-a-service, as a platform for streaming or as a centre of calculation for the internet of things, as low-latency processing or big-data analysis. The centres at which data centres centre data are local in order to be global, accessible at all times from every location, downtimes and maintenance notwithstanding.
In this regard, the analysis of the spatial relations brought forth by data centres should include a discussion of the materiality and temporality of their infrastructures. The term data centre itself implies the materiality of data, since immateriality has no centre: it is everywhere at once. Such centrelessness is often attributed to supposedly immaterial digital networks that are in fact material through and through. Information seems to have no weight, to be independent of its location and of its distribution in time and space. But there is no data without a carrier, no message without a medium that binds it in space and makes it addressable in time. It is no coincidence that the growing general interest in infrastructures and their materialities goes hand in hand with the importance of data centres for digital cultures: thinking about the digital remains shallow without taking into account the infrastructures of its distribution.
Seen on this level, data centres are both centres of data, at which data are collected, and centres at which data are accessed and processed. They are, in other words, archives and information desks at the same time. They are centres of data and centres for data. In both cases they are infrastructural centres in a decentralized network. As centres in a network, they are the condition for the further diffusion of the network. While the historical development of this network was propelled by an imperative of decentralization and finally of distribution, as is obvious in Paul Baran's famous network diagram, the importance of data centres can be understood as a counter-movement towards centralization, and consequently also towards proprietarization, of data and infrastructures. From the start, the architecture of the internet was designed to ensure redundancy by the multiplication of nodes. The attempt to make data accessible from different locations meant optimizing the number of possible connections between nodes in a way that guaranteed low costs and stability. Though this did not result in the egalitarian distributed network imagined by Baran to withstand the nuclear destruction of central nodes, even the internet of the present, in which a few nodes gather large amounts of traffic while many small nodes remain insignificant, is formed by a spatial distribution that creates stability via redundancy. This structure is currently being transformed by a new geopolitics of data whose centres carry the contrary tendency in their very name (even though many providers promise to mirror data at different locations, so that data centres themselves are interconnected).
For the logic of the Cold War, which still lingers in the background of these developments, data centres seem anachronistic. Locations of centralized calculative power, such as Singapore or Hong Kong, make excellent targets for possible attacks on global infrastructures. Their destruction would result in a chain reaction of crises and cut off the global distribution of data. Even when governmental order is totally disrupted, the security of data is supposed to be assured. These examples are not intended to invoke the real danger of a nuclear war, but to help situate the current meaning of data centres for the architecture of global connectivity. With the new infrastructure of concentration, the old scenario of crisis returns, as becomes visible in the self-descriptions and advertisements on the homepages of data centres: the latent crisis, against whose background providers attempt to secure the redundancy of data, can quickly grow into an imagined apocalypse. Data centres derive their self-evidence from their promise of security: data centres are necessary because they can be destroyed, not because they cannot be destroyed. It is this destruction, in which the loss of all data would amount to the end of the world, that forces digital cultures to be constantly engaged in preparations.
Against the background of these introductory observations, a media-theoretical analysis of data centres can be oriented towards three complexes of questions:
1. What are the spatial relations that these technologies create? If data centres are both political institutions and digital infrastructures, then they realize new modes of power. How, in this sense, can we relate data centres and their networks to the relation of surroundings and surrounded that is central to current technologies?
2. What are the modes of operation of data centres? Under which conditions do they store, process and transmit data?
3. How can we describe the imaginary that goes hand in hand with data centres as dominant technologies? The fact that the world's largest data centre is run by the NSA deserves special attention in this context: with data centres, the dream of a transparent universal archive gains a new dimension. But this imaginary is at the same time haunted by the apocalypse.
These preliminary questions stake out the field on which a media-theoretical investigation of data centres could be based. Such investigations are possible only as collaborative projects: the digital cultures of the present cannot be reduced to either social or technological questions. The materialities of their infrastructures are bound to the imaginary of their evidence.