Conversation on digital archiving practice – June 2015

I was interviewed by Davide Georgetta and Valerio Nicoletti for their research in libraries, archives, and file sharing. I think my answers are interesting and worth sharing publicly.

Conversation on digital archiving practice


1. In which way are text-sharing platforms (repositories, digital libraries, p2p networks) becoming relevant in the process of knowledge disclosure nowadays?

It is important to separate out each of these systems in terms of their actual functioning and the affordances each provides. The first distinction is online versus offline. Within both of those broad categories there are a variety of systems and practices. For instance, in the offline category there are a variety of libraries being passed around, containing varying numbers of texts and organised to a greater or lesser degree. For example, it is quite common for people to share USB sticks with books on them. While the size of USB sticks grows every year, the larger ones are still expensive and thus less likely to be shared. At the other end, people will often share smaller USB sticks – ranging from 4 to 16 or even 32 gigabytes (GB) – freely, as their cost is relatively negligible.

There are also larger libraries on hard drives slowly making their way around the world, one being the “Alexandria Project”. Depending on how much it has been used and edited, it can vary from 30,000 to 50,000 books or more. The collection of 50,000 books ranges from 250 to 300 GB. Another interesting collection comes from a website that was selling hard drives at nominal cost containing its entire contents. Quite powerful libraries, in both cases, and not the only ones in circulation.

So, beyond students sharing USB sticks with a few to hundreds of eBooks and large hard drives filled with tens of thousands of eBooks, there are even larger systems. For example, one can procure a version of the Library Genesis collection – some 800,000+ books – on the Tor network. However, it is not a simple zip file of pdfs. It is much more complex than that and requires significant programming skills to make use of it, as well as one hell of a big hard drive, as the collection is (the last time I checked) about 14 terabytes (TB) in size. Such an array would be expensive, bulky, and difficult to use, much less share, offline.

This points to an extremely important component of the differentiation of text-sharing platforms: each has its own exigencies centred around scale. What is extraordinary is that even at the low end, say a 16 GB USB drive, we are dealing with incredible scales. From my research I have found that my collection of pdf files averages around 5 megabytes (MB) per file. This means that a 16 GB USB stick can hold about 3,200 books. Practically, if one reads one of these books a week, one would need about sixty-one and a half years to read all the books on that one tiny, cheap USB stick. A personal portable library for most any human being would be easily contained on a 32 GB USB stick, which can be bought for as little as fifteen dollars. So, if one knew in advance all the books one would read in one's lifetime, they would all fit on a USB stick that presently costs as much as a mediocre bottle of dinner wine.
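To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python, assuming the 5 MB average pdf and the one-book-a-week reading pace quoted above:

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
# Assumptions: an average pdf of ~5 MB and one book read per week.
STICK_GB = 16
AVG_BOOK_MB = 5
BOOKS_PER_WEEK = 1

books_on_stick = (STICK_GB * 1000) // AVG_BOOK_MB       # ~3,200 books
years_to_read = books_on_stick / (BOOKS_PER_WEEK * 52)  # ~61.5 years

print(f"A {STICK_GB} GB stick holds roughly {books_on_stick} books")
print(f"At one book a week, that is about {years_to_read:.1f} years of reading")
```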

A number of these USB sticks were released into the wild at the Transmediale festival in 2015, distributed using the DeadSwap system pioneered by the Telekommunisten collective. Each stick was organised around a centre of knowledge, so there was a stick for Political Theory, another for Philosophy, another for Art and Aesthetics, and so on. Collectively, they were one of my contributions to Transmediale as “Datafield3”. These sticks were hidden in a variety of locations in the Haus der Kulturen der Welt (HKW) in Berlin during the exhibition period. Participants, using their mobile phones, would text into the DeadSwap system, and instructions on how to find a USB stick would be sent back to them. Of the ten sticks, nine disappeared into circulation in the HKW. This demonstrated that Datafield3 USB sticks were a viable method of transmitting and managing research. All the Datafield3 sticks were self-indexed using the portable indexer Dropout, and each stick was curated to a particular discipline such as Science, Philosophy, Art and Aesthetics, Sociology, etc. This meant that a researcher using a Datafield3 stick was able to search inside the documents of a given field of knowledge using a variety of keywords and Boolean search queries. Another, very powerful, value of a Datafield3 stick is that, as it is an offline system, no corporate, private, or government entity has any idea what one is researching using these documents.
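For readers curious how such offline keyword and Boolean searching works in principle, here is a minimal sketch of an inverted index with a Boolean AND query over plain-text files. It only illustrates the general idea; it is not Dropout's actual implementation, and the mount path is hypothetical:

```python
# Minimal sketch of an offline inverted index with a Boolean AND query.
# Illustration of the general idea behind a portable indexer, not any
# specific tool's implementation.
import os
import re
from collections import defaultdict

def build_index(root):
    """Map each lowercased word to the set of files (under root) containing it."""
    index = defaultdict(set)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".txt"):
                continue  # assume OCR'd plain-text versions sit beside the pdfs
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for word in re.findall(r"[a-z']+", f.read().lower()):
                    index[word].add(path)
    return index

def search_and(index, *terms):
    """Return the files containing every term (a Boolean AND query)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

# Usage (hypothetical mount point for a stick):
# idx = build_index("/media/usb-stick")
# print(search_and(idx, "commons", "property"))
```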

Offline libraries are (as is so common with the offline) slowly realising their potential. One such potential is serving the knowledge needs of people with little access to the internet. Even a highly restricted system like Google Books or Amazon is of little value if your access to the internet is little better than a dial-up connection. Marginal communities – whether they are in Africa or the Arctic – are poorly served by the internet, and it is in such places that offline libraries could have a massive and positive impact on the knowledge and education needs of these liminal communities.

This style of research, of course, strikes at the corporate predispositions toward digitality, where all things should be monitored and related in an “internet of things”. That said, offline libraries operate at a glacial pace and along a highly fragmented trajectory of distribution. Online systems, when coupled with broadband access, create entirely new environments and experiences where knowledge can be acquired at great speed. Online systems are not necessarily oppositional to the offline – they are more complementary, and the two, offline and online, can and do operate in symbiosis.

In this way, various online libraries and communities are fed by personal offline libraries, and these offline libraries are assembled by people downloading material to them. The resilience and invisibility of offline systems and casual “sneakernet” file-sharing practices thus act as a support for the online systems. As the vectoralists go about shutting down online sharing systems, the next generation of online libraries exists in the thousands of hard drives collecting dust on scholars' and students' desktops. There are difficulties with this strategy of “whack-a-mole”, one being scale. A “mole” (a site deemed anathema by the proprietarian / vectoral interests) can be “whacked” (removed from the internet) instantly regardless of size. This was demonstrated in the destruction of an online library of over 800,000 books, which disappeared as quickly as an offending music blog of 10 records on BlogSpot would have been crushed. The difference, of course, is that the offending music blog can get itself rolling again in a few hours. That library was gone forever, and its founders were brought to court. The contents may have found their way back to the internet, but it took quite a long time. The largest online libraries face the same problem. Additionally, most users do not have symmetrical internet access, i.e., they cannot upload as fast as they can download. This also makes things very difficult if and when a large site is taken out of commission, as rebuilding can be very slow. The library was resurrected into other sites, but it took years, and they are still just as vulnerable and precarious as ever before. The proven precarity of the online and resilience of the offline creates the symbiosis between the two.

So, this analysis of the offline and online gives us a firmer material grasp of the repositories, digital libraries, p2p networks, and suchlike, and of their relevance in the process of contemporary knowledge disclosure.

To that end, the different systems have different audiences and use values. It is also important to realise that these systems exist parallel or orthogonal to any questions of ethics or economics. For example, there are many online repositories of books that operate outside of copyright considerations and many that don't. They all serve different interests in the Access to Knowledge frame. The sub rosa systems logically have an audience composed of students and scholars, many of whom may not have access to a university research library, or are paid so poorly – the plight of the adjunct professor is well known – that they can't afford to buy these books, which are often only somewhat less than extortionately expensive. In some cases, they don't have access to the university's library system, or their university is so strapped for cash that the university library itself can't afford the journals and books they need. These exigencies drive people to the sub rosa online libraries, which are a great boon to students and independent researchers, as they are often the least economically able to acquire these volumes in ways that comport with proprietarian demands.

On the other side are these journals and publishing systems themselves. Many people talk about the “darknet” as if it is some kind of a den of thieves. This isn’t entirely accurate. The biggest darknets are actually these journals and educational nets – they’re dark because they’re unsearchable outside their paywalls. Like a sub rosa library it might not be searchable by Google, but unlike a sub rosa library, membership is not free. There are a variety of such nets, and they tend to be rather expensive and their actual contents are invisible and/or unavailable to non-members. These systems are of great use value to a variety of professions – engineers, doctors, scientific researchers – they all have a variety of paywalled datalockers at their disposal, for a fee.

The vectoral proprietarians and publishers are loath to embrace models that question their raison d'être. Their grasp is strong and far-reaching – the events around Aaron Swartz's untimely demise are a brutal case in point, pointing directly at the very same publishers and proprietarians that own these journals and information networks. Still, for those who can afford access, these systems are amazing, as they can provide enormous amounts of data to their paying audiences.

So, all of these systems together have their limitations. However, societies that share knowledge the most are societies that function best. This puts paywalled and subscription-based knowledge services on the wrong side of history. Some sectors of academia have been better at responding to this than others. The sciences, for example, have arXiv and similar systems where work can be submitted for testing and the results can be read for free. These are good moves. The arts and humanities aren't quite so forthcoming, and are falling behind the STEM disciplines, although sites like Monoskop, Ubuweb, and others are encouraging. The important part of this is that everyone has a right to access knowledge. Rights to benefit from the production of knowledge are alienable, but the right to access itself is not alienable – it is fundamental.


2. How does a digital container influence its contents? Does the same book, if archived on different platforms (such as Internet Archive, The Pirate Bay, Monoskop Log, etc.), still remain the same cultural item?


Contrary to the predilection of the question's framing, digital contents are more influenced by their file type than by the ownership of the server they are located on. For example, let's take my book Radical Tactics of the Offline Library (2014). It can be downloaded as a pdf from the Internet Archive, the Institute of Network Cultures, and a variety of sub rosa file-sharing systems. It is an identical digital copy across all these platforms – the differences in collecting it from any of them are negligible. What makes a big difference is the file format. The most common formats are txt, pdf, and epub. The one favoured most by academics is pdf, as it is directly paginated. Txt does not do this at all, and epub's support of pagination is partial and fragmented at best. Epubs are popular on eReaders and tablets as their text can be made to flow and zoom easily. This is not useful for scholars and academics, where absolute page referencing is required.

PDF files also have the advantage of being lockable – once they are saved they are difficult to change, especially as pages are usually image scans from a book with an invisible text overlay for copy/paste purposes. This cuts both ways – a PDF can also be locked in such a way that its contents cannot be changed at all. So if a book has been scanned and locked with a password, with no OCR, then it is extremely difficult or impossible to OCR the book and give it copy-able content. PDF has a variety of these security levels, making it both extremely useful and, at the same time, opaque and useless, depending on its settings.

In contrast, epub files are basically zip archives full of HTML text documents, given the .epub file extension. These can be opened in a zip file reader, altered, and then saved. So, the politics and history of epub vs pdf aside, with the ability to be easily modified and corrupted and a sketchy relationship with pagination, epub is mostly used for trade fiction and is less useful for scholarly work. That said, my book, Radical Tactics…, is also available as an epub…
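Since an epub really is a zip archive, the point is easy to verify with nothing more than a standard library. A minimal sketch in Python (the filename is hypothetical):

```python
# An epub is a zip archive of (X)HTML and metadata files, so the standard
# zipfile module can open one directly.
import zipfile

def list_epub_contents(path):
    """Print every file packed inside an epub."""
    with zipfile.ZipFile(path) as epub:
        for name in epub.namelist():
            print(name)

def read_epub_member(path, member):
    """Return the raw bytes of one file inside the epub (e.g. a chapter's HTML)."""
    with zipfile.ZipFile(path) as epub:
        return

# Usage (hypothetical filename):
# list_epub_contents("Warwick,Henry-Radical Tactics of the Offline Library-(2014).epub")
```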

So, with the provision that the file type is a far greater determinant of how a “digital container” affects its contents, we can now turn our attention to the online presentation of texts as described in the question. There is also the relationship, or “play”, between online and offline versions, which will require some attention.

Different websites have different user interfaces (UI), and these can and do affect the reader of these texts, more specifically in how they go about acquiring them. How these texts can be read online varies from site to site and from machine to machine – a 15-inch Apple Retina display is going to be easier to read than a fuzzy, out-of-focus 15-inch VGA CRT screen. Most computers, tablets, and phones fall somewhere along that continuum. If a website has developed a mobile viewer for its content, then it will likely be much easier to navigate and read on a phone or tablet than otherwise. Given the shoestring budgets many of these sites have, mobile reading has not been a high priority – simply staying online is. For example, a few months prior to this writing, one such library lost its DNS and scrambled to find a new address in the .fail domain. At this writing (23 June 2015), Monoskop is offline for similar reasons, underscoring the precarity of these online systems.

Each of these systems is working on new and more interesting ways to present what are, ostensibly, pdf files, the presentation and experience of which depend more on the pdf reader or browser one is using. That said, different systems can create radically different reading experiences. For example, the Internet Archive vs. Google Books. We can look at a book that is clearly in the public domain as our exemplar – “Through the Looking Glass” by Charles Lutwidge Dodgson (Lewis Carroll). With Google Books, one must search on text, find where it occurs in the book, and then go to those pages to view it. The text is readable, but not selectable or copy-able. With the Internet Archive, one searches for the title and is directed to its page, where it can be downloaded as a searchable and copy-able pdf.

These are radically different appreciations of the text as an object. For while Google states “If it's in the public domain, you're free to download a PDF copy”, this is clearly not well implemented: one is told that Through the Looking Glass is in the public domain, yet there is no clear method of downloading it. In contrast, on the Internet Archive, the ability and method to download the pdf are very clear. The Archive is clearly more inviting. It provides several versions of the book for download, including one from Google(!?). One of the pdfs was also OCR'd, so text could be copied out of it. The basic decency of supplying links to downloadable versions is an essential aspect of the experience one has with a site, and one where Google fails, badly. As a consequence, the “cultural item” that is Through the Looking Glass is radically different in presentation between the Archive and Google. There are many pressings and printings of Through the Looking Glass. However, by working with the PDF file type, pagination is preserved and so one can use it in a scholarly context. Opposed to this would be, obviously, Amazon's preview of Through the Looking Glass and, oddly, Gutenberg's provision of it. Amazon's version is even more hobbled and limited than Google's, as they want you to buy the book from them. Gutenberg's version is less than useful as it is only provided in epub or txt formats – flowing text without pagination.

So, if one is a scholar or student and one wishes to study the topsy-turvy world of Alice's adventures in the looking glass, one would have to go to the Internet Archive or a sub rosa site to find an OCR'd version of the PDF – the mirror opposite of what one would expect in a civilised society where knowledge and creativity are nurtured and venerated. As a consequence, it is not so much that the digital container (website) determines a book as a cultural item, as that it attenuates, filters, and distorts our access to a given text.

These distortions lead to a “play” between online and offline, sub rosa and in lumine, text storage systems. For example, imagine a book is published, written by “Z”. Someone painstakingly scans each page of the book and uploads it to a sub rosa website, “A”. Someone downloads the PDF, and she wants to read it on her tablet, so she wants it as an epub. With the appropriate software, she converts the pdf to epub. It is now flowing text. The vectoralist proprietarians send in their flying monkeys and sub rosa site “A” ceases to exist. In the shuffle and over time, the woman who downloaded the book loses the pdf version, but she still has the epub version on her tablet. A few years later, sub rosa website “B” is operative and asking for donations. So, she uploads the epub version, figuring something is better than nothing. Another user of sub rosa “B” sees the book by Z she uploaded and downloads it. He needs page references, as he is writing a paper on Z for his first-year theory class. So, he copies the text into MS Word, inserts page numbers, adjusts the size of the pages, the margins, and the font to roughly what the book should be, and then outputs a paginated PDF. His page numbers are going to be off from the original, but the pages are, word for word, correct – they've just been re-arranged.

He feels confident – if his professor says “I didn’t find that text in the book, and I have the tree-killer version right here”, the student says “Well, I have a digital version – check it out” and he sends the digital version to the professor, and sure enough – all the text is correct, just re-arranged as a different edition.

The knowledge itself is correct, word for word, however it has been reframed and repurposed outside the interests of capital… but that is getting ahead of our discussion. Suffice to say, these are the kinds of “play” between online and offline that can and do occur.


3. The scanning of texts – for instance, out-of-print books – and their subsequent storage as digital files have actually developed a new figure: the amateur librarian. What are the features, responsibilities, and limits of this “role”, in your opinion?

4. Marcell Mars has drawn up a kind of vademecum for all possible contributors to this amateur librarianship [Why and How to Be(come) an Amateur Librarian]. Are there features that can define the ideal digital book from the operative perspective of the librarian? [esp. file formats and tools to be preferred, how to organise files and metadata, how to manage the distribution of and access to content, and so on]

These questions can be answered as one. I fully support Marcell Mars's efforts. He is blazing an important trail. While we differ in emphasis and direction, our fundamental interests in digital librarianship are very close. I would suggest that everything he wrote in Why and How to Be(come) an Amateur Librarian is excellent. I am not as big a fan of Calibre as he is, as Calibre copies files and creates its own method of labelling and storing them – a method I find less than useful for other systems. That said, Calibre is an excellent, if not the very best, system for an amateur librarian in managing eBooks. Marcell's Let's Share Books plug-in for Calibre is brilliant, as it transforms Calibre from a powerful file manager into a p2p book-sharing service. This is great, as it allows amateur librarians, scholars, and students to build their collections. The problems with Let's Share Books are the same as with any p2p system: asymmetry and asynchrony – asynchrony in machine availability and asymmetry in bandwidth, along with the variability of bandwidth in various locations. For example, in terms of bandwidth I have typical internet access for Toronto and can download at around 17–18 megabits per second (Mbps). Upload is a different story – I can only upload at 0.75 Mbps, a tiny fraction of the speed at which I can download. To get symmetrical upload and download speed is only somewhat less than extortionately expensive. So, the asymmetry of upload and download bandwidth is a major brake on p2p networks, thrusting them back into speeds not seen since dial-up. Asynchrony is the other problem – like any other p2p system, the computer has to be on and the p2p software running. This isn't always possible. These are the tools I have at hand, and compared to most of Canada, I'm in a very privileged sector – a point I will return to later. From this one can see both the strengths and shortcomings of p2p systems. My focus has been less on the transference of online data and more on the transference of offline data and its organisation and use value.
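To give a sense of what that asymmetry means in practice, here is a rough Python sketch using the speeds quoted above and a 300 GB library, the size of the hard-drive collections mentioned earlier. The figures are assumptions carried over from the text, and real sustained throughput would of course be lower:

```python
# Rough illustration of the download/upload asymmetry described above,
# using ~17.5 Mbps down, 0.75 Mbps up, and a 300 GB library.
def transfer_days(size_gb, speed_mbps):
    """Days needed to move size_gb at a sustained speed_mbps."""
    megabits = size_gb * 8 * 1000  # GB -> megabits (decimal units)
    seconds = megabits / speed_mbps
    return seconds / 86400

library_gb = 300
print(f"Download at 17.5 Mbps: ~{transfer_days(library_gb, 17.5):.1f} days")
print(f"Upload   at 0.75 Mbps: ~{transfer_days(library_gb, 0.75):.1f} days")
# Prints roughly 1.6 days down versus 37 days up, before any interruptions.
```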

Regarding optimal practices, I would suggest that the amateur librarian operate on a number of levels. I know doing all of these at once would be difficult and time consuming – a full-time job in itself. Ideally, an amateur library or librarian would:

1. Scan texts into PDF files
2. OCR the texts
3. Proof their work
4. Output the file as:
a. PDF with security measures off
b. EPUB file
c. txt file with page numbers marked as a triple paragraph break, with the page number in the second break

5. Name the file in a standard way:
lastName, firstName-Title-(year).fileType
Example: Warwick,Henry-Radical Tactics of the Offline Library-(2014).pdf
This enables librarians to assemble the files and organise them on a drive by author, A through Z, or in a series of directories (possibly based on fields of knowledge, say “Art” or “Science” or “Philosophy”) and then by author within those folders. This makes the library much more shareable. And useful. (A minimal sketch of organising files this way appears after this list.)

6. Back up their library multiple times and share it with other librarians to build resilience.

7. Upload all versions of the text to an online library.

8. Distribute copies of their library so that they are indexed in a portable indexer. This point is a complex one, and I will return to it later as well.
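As referenced in point 5, here is a minimal sketch of organising files named according to that convention into per-author folders. The regular expression and folder layout are illustrative assumptions, not part of any established tool:

```python
# Minimal sketch: sort files named "lastName, firstName-Title-(year).fileType"
# into per-author folders. Paths and the pattern are illustrative assumptions.
import os
import re
import shutil

PATTERN = re.compile(
    r"^(?P<last>[^,]+),(?P<first>[^-]+)-(?P<title>.+)-\((?P<year>\d{4})\)\.(?P<ext>\w+)$"
)

def organise(src_dir, dest_dir):
    """Move convention-named files into dest_dir/<Last, First>/ folders."""
    for name in os.listdir(src_dir):
        match = PATTERN.match(name)
        if not match:
            continue  # leave files that don't follow the convention alone
        author = f"{match['last'].strip()}, {match['first'].strip()}"
        target = os.path.join(dest_dir, author)
        os.makedirs(target, exist_ok=True)
        shutil.move(os.path.join(src_dir, name), os.path.join(target, name))

# Usage (hypothetical paths):
# organise("/media/usb-stick/incoming", "/media/usb-stick/library")
```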

Obviously, that is a lot to ask of amateurs – especially the scanning part. Scanning requires technology, patience, and a lot of time. I don’t even do that. The technology can be acquired – it’s the patience and “the time thing” I am very short on. That said: scanning is a bedrock and crucial point in contemporary amateur librarianship, and I respect people who do that very much. While scanning the book as an image is good, it is only the first step towards an optimal eBook, as outlined above. OCR is critical, as it generates the text for scholars in a PDF of the scan, and it forms the textual basis of other digital documentation formats, such as epub. How these digital texts can be built is well documented in the book From Print to Ebooks: a Hybrid Publishing Toolkit for the Arts by Joe Monk, Miriam Rasch, Florian Cramer and Amy Wu.


5. Does a way out of the debate between publishers and independent digital libraries (Monoskop, Ubuweb, and others) exist, in terms of copyright? An alternative solution able to resolve the issue and provide equal opportunities to everyone? Would publishers' fear of a possible reduction in income be legitimate if access to their digital publications were open and free?


In terms of a “way out”, the question has to be “a way out from what?” The conflict between capital and human needs? There is only a problem because the proprietarians insist on there being a problem. If they just “go away”, then there is no problem. However, that reveals a deeper issue, which is compensation for labour – the difference between wage and chattel slavery. The social geometries inherent to the workings of the mechanisms of file sharing (online or offline) and digital libraries imply, and indeed prefer, a different social order.

This kind of discussion is a necessary one, but only in the absence of socialism. Independent online libraries and repositories and offline personal libraries operate from a fundamental re-coding of the notion of property and a different vision of society. Entertaining proprietarian and vectoral arguments assumes they merit discussion as if the agents of these ideas have any real part in the desired future. They do not, and therefore, their ideology can and should be ignored.

In my view, there is very little need for private academic publishers. The state manages education, and the state should be responsible for the dissemination of knowledge in a free and public space. I think that would be a better use of public moneys than spending more than $100 million a copy on F-35 fighter planes that can't even fly in the rain or compete against cheaper planes made 40 years ago. For the price of a handful of such useless death machines, any country could easily fund peer-reviewed online journals and distribute them freely as PDFs. In terms of non-academic publishing, a universal minimum income would go a long way toward reducing the oppression of authors by the publishing cartels. Authors would have enough to put food on the table and write. This would change the power dynamic between publishers and authors – good authors would be in higher demand and attain greater rewards without the fear of starving in a garret. Less skilled authors could contribute and hone their craft, also without the fear of starving in a garret. Questions of copyright are thus obviated. The problem with copyright is one of political will and consciousness. The publishing cartels are vectoral organisations – they survive off the extractions around the artificial scarcity of knowledge, something the world long ago established as immoral and inimical to the ideals expressed in the United Nations Universal Declaration of Human Rights.

This battle between the needs of the many and the profits of the few is not lost on the proprietarian / vectoralists. From their perspective, digital production and file sharing do many bad things but also carry some positive value. For one thing, digital copies of books are much cheaper to produce than paper, so there is a reduction in the cost of distribution. This externalisation of distribution costs comes at the cost of a shrinking book-printing industry and the wholesale elimination of the bookstore system. That is “not their problem”. Secondly, by turning printed books (and journals) into a boutique or specialist system, the price of such items is dilated and can reach absurd values – stories of $200 or more per copy for textbooks are not uncommon, while, at the same time, people can self-publish books through Amazon for less than $10 a copy, retail. Digital books can be sold for even less, as an arbitrarily large number of them can be distributed for nearly the same cost as a single “copy”. It is at this juncture that digital libraries, repositories, and archives insert themselves, for the entry cost of these systems is low. In the meantime, given the present political economy, distributing books for free to all people would implode the publishing industry. This could be good, but there would be losses involved, as we would also lose the editors who help craft the books we finally read. Also, the Author would be put in a very similar position to that of the Musician, if not worse. A musician can perform their music, and music has a high re-use value, so there are income streams from radio, digital downloads, and a tiny bit from streaming services. Books have a very low re-use value. Most books are read only once, and if they are loved, maybe several times. People will cheerfully put a song on repeat and listen to it over and over. So, the Author is at a disadvantage even compared to the Musician, and the music industry has been slowly sinking since Napster, leaving musicians in a tighter spot than ever before.

Thus, the plight of the Author is an excellent case for the implementation of guaranteed minimum income (as discussed earlier) and the socialisation of the media sphere itself.

To evade this, you will see capital shift not back to books and vinyl records, but to dematerialise media itself and control access to it. This is already occurring in music – CD sales have collapsed, and even digital file sales of music have dropped as people sign up for streaming services: a complete centralisation and feudalisation of the music sphere. These services are not socialised and run in the public interest to the benefit of artists – they are private vectoral corporations extracting wealth. The material outcome is simple: external media is made redundant and ignored as computers and other devices become vectors of media consumption and wealth extraction. iPads and iPhones do not have USB ports that will see external hard drives. They can only be loaded through iTunes software. The new Macintosh laptops also lack standard USB ports, and such peripherals can only be accessed through an adaptor, which comes at extra charge. Yes, there are other computers than those designed in Cupertino, but Apple computers often set the market for future development. In fact, USB is a case in point. USB was invented by Intel and had been around for years. No manufacturer wanted to put a USB port on their computer because there were no USB peripherals. Apple broke ranks, replaced ADB with USB and SCSI with FireWire, and other manufacturers quickly followed suit.

It is in the vectoral interest to limit the range and closely meter both the quantity and character of the data flow to these new computers, which, especially in the case of cellphones and tablets, are little more than dumb terminals for media consumption. It is in the control of these flows that wealth is extracted in fees and subscriptions. If the media producers starve in garrets, it is of no concern to the vectoral interests, as they extract wealth using percentages of very large numbers. Siphoning off an even tinier percentage to a narrow group of media producers makes those producers very wealthy, thus providing the illusion that there is some kind of meritocracy in a flattened, technocratic media sphere, when, in fact, this is clearly not the case. This also acts as a rhetorical feint, presented as proof of the proprietarian claims of authorship and recompense for labour.

The interests of capital, in their present form, would see the elimination of offline libraries simply by design – nowhere to plug them in – and the elimination of online libraries through tighter control over the flows of online media. Some will argue “aah – but we will still have work-arounds and darknets”, and this will be true – but also irrelevant. Marginalisation of already liminal media distributions will continue. DRM will become unnecessary as the data flows of approved media simply squeeze out unapproved media. Network neutrality is a condition, a state, not an infrastructure. ICT is not built for or against it. Vigilance by the many is required. At the same time, this is no guarantee of success. Given the infrastructural changes – streaming media, dumb terminals (cellphones, tablets, automobiles, etc.), and restructured / reconfigured computers optimised for streaming media consumption (ever more “streamlined” laptops and the decline of the desktop computer) – there is great momentum against libraries, archives, and repositories of media, online or offline.

There is an alternative, but it is not a technological one. It is a matter of political economy, as discussed earlier.


6. After your answers, we would also like to receive suggestions from you. Do you notice any unresolved or emerging questions in the contemporary context of digital archiving practices and their relation to the publishing realm?


There are a number of topics worthy of discussion that logically flow from matters of digital libraries, online and offline. These range from the philosophical to the practical and in between. As noted in my previous answer, these libraries operate from a recoding of property theory. The present-day theory that informs copyright is based in a classic Lockean frame of a negative commons with a labour theory of property. In such a formula, “no one” owns the world, as it was given to humanity by God per instructions in sacred texts. That is a negative commons. One creates property by accumulating material from the natural world and exercising labour upon it, thus transforming it into property. This property can also be sold, as can the labour that produced it, and this is accomplished through a contract. In this way, one job of government is to enforce contracts and protect property. (Another job is the protection and projection of the interests of the ruling class, but that's a different discussion…) Given the galloping catastrophe of industrialism and the exigencies of the Anthropocene, I think it is safe to say that the idea of nature as a negative commons, and of property as a product of labour upon materials acquired from that negative commons, is clearly a bad idea done poorly. For all the glories of modern civilisation – moonshots, the internet, electronic music, lettuce in February, plastics, jello shots, and Adam Sandler movies (actually, skip that one – his movies are terrible…) – it is inconceivable to consider it all “worth it” in the face of the Sixth Great Extinction as its direct and necessary result.

However, that is exactly the place we find ourselves in – a civilisational cul de sac of geologically immense proportions. There is much to be undone – many basic notions, concepts, and presumptions. And there are tools we can use to point towards the necessary transformations inherent in their use. One of those is the digital library, especially of academic and practical knowledge. By wresting knowledge out of the hands of those who instill false and illegitimate scarcities, and distributing it freely, we are redefining one kind of property and a different kind of commons – one that is neither negative nor positive. These collections we are discussing are managed by groups of people. They can, if they so choose, shut them down at will. This is proven by their precarity – they are often shut down at someone else's will. So, in that sense, we can say that these collections are a positive commons – they are “owned” by someone(s). At the same time, anyone can make use of this knowledge at will, so in this way, the knowledge itself is a negative commons. A simple “compromise” reading of this situation would be that this helps define a “neutral” commons. I would disagree with such a formulation, as I see this as more of a polymorphic commons, of which a neutral commons is simply one particular variant.

As I noted earlier, much of this is predicated on a particular political condition. The present condition can be seen, as Mark Fisher describes it, as one of “Capitalist Realism”, but one exacerbated by the contemporary practice of capital as vectoral – dominating the economic agents inherent to the digital condition. As noted earlier, a shift to an emancipating and enabling socialism dispenses with many of the contradictions that are the root source of this very discussion.

With such a vision, we can use these resources as emancipating and enabling for first-world scholars and students and for deprived communities. In fact, that is the subject of research I am engaged in now – how to bring digital libraries to far-flung communities. The first step in this direction was with Geert Lovink, who brought a hard drive full of books to a university in Uganda. Recently, similar efforts have been revived in Myanmar. My focus is more towards the Arctic, where the privilege I enjoy in a technologically rich city like Toronto can be spread to communities throughout Canada, many of whom have computers but limited or no internet access. Such communities would benefit not from the online internet, but from offline meshnets and other hybrid systems.

Another important direction of research for me relates to the development of a cross-platform, portable, offline archive-indexing solution. Online, Google Books has the ability to search inside books, so if you know what you need you can find it. You can't copy it, but you can look at it. Offline indexing systems allow the user to index their libraries and search inside the documents themselves, and this transforms a “repository” into a research tool. If one collects enough books, then search algorithms can be used very creatively to weave together sets of ideas from a variety of sources, finding connections between texts that would otherwise go unnoticed. A cross-platform portable offline indexer is a complex undertaking. As this branch of my research is terrifically complex and underfunded, I am hoping others will take interest in it.
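As an illustration of the kind of creative cross-text searching described above, here is a rough sketch that surfaces pairs of documents sharing unusually rare terms – one crude way of finding unexpected connections across a collection. It assumes an inverted index mapping words to file paths (as in the earlier indexing sketch) and is an illustration only, not part of any existing indexer:

```python
# Sketch: rank document pairs by how many rare terms they share, as a crude
# way of surfacing unexpected connections across a collection. Assumes an
# inverted index of word -> set of file paths (see the earlier sketch).
from itertools import combinations
from collections import Counter

def rare_term_links(index, max_docs=3, top=10):
    """Return the most strongly linked document pairs, where a link is a
    shared term appearing in at most max_docs files."""
    links = Counter()
    for term, docs in index.items():
        if 1 < len(docs) <= max_docs:
            for a, b in combinations(sorted(docs), 2):
                links[(a, b)] += 1
    return links.most_common(top)

# Usage, reusing build_index from the earlier sketch:
# idx = build_index("/media/usb-stick")
# for (doc_a, doc_b), shared in rare_term_links(idx):
#     print(shared, doc_a, doc_b)
```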

In conclusion, thank you for the opportunity to discuss this topic. I enjoy, and often prefer, interview situations, as they provide a structure for discussion and a set of terms and ideas we can argue over, modify, or amplify. I expect that in the next decade we will no longer discuss the opposition between online and offline. There will be “line” and its hybrid variants.


About misterwarwick

I am an Associate Professor of Media Theory, Sound Synthesis, Audio Production, and "Digital Things". I am very much involved with issues of Archives, Access To Knowledge, and the pathetic predicament collectively understood as "Civilisation". I am also a composer of electronic music and I have an online music program called "Something Completely Different". I also like to play with digital imaging. I live in Toronto. It's a nice place.