Albums of 2015

 

So, I’ve been asked by several people what my faves of 2015 were. In typical fashion for me, it’s hard to say – this has been a great year for music. I haven’t been listening to very much popular music this year, as my show, Something Completely Different (on thescopeatryerson.ca every Sunday at 10PM), has eaten up most of my listening time. I limited myself to three in each category, just for the sake of imposing a limit of some kind, because that sharpens things.

 

Electronic / Ambient

Telepath – Window Druzy (USA)

Mining a similar vein of nostalgic gold as Boards of Canada, only a good bit more ambient. Really wonderful, warm, tape-hissy, warbly stuff.

 

Markus Reuter – Mundo Nuevo (GER)

Swirling, beautiful pieces driven by Markus’s custom-built Touch Guitars, this record is a journey and a half. At times rather dissonant, it had me thinking it belonged in the classical section. A great work by a great composer.

 

Jaja – Starfields (GER)

Jaja makes luscious, gentle, quiet ambient washes of space music – it’s all just her on one synth, a Roland JD-800, and a pile of effects. Beautiful stuff.

 

Folk

Marika Hackman – We Slept At Last (UK)

A deep and dark record. Angsty and wiry, but filled with washes and tones. The lyrics are often melancholic but always inspiring. For a folk record there is a lot of electronics on this album, which helps Marika push a lot more boundaries.

 

The Unthanks – Mount the Air (UK)

A gem of a record. Stunning. At times old-school folky, then drifting into a kind of loose jazz and wavering into Prog, but all of it sharp as can be. The Unthank sisters have great voices, and the raw honesty of their talent shows through.

 

Two Coats Colder – Unseen Highway (USA)

A bit traditional in presentation, this record has many charms. A few weak points – when they try to play “country” music, it’s a bit of a fail. But the rest of it is golden folkie stuff – bucolic and sweet.

 

Prog

Anekdoten – Until All the Ghosts Are Gone (SWE)

Possibly my favourite record this year. This record kicks so much ass, it’s hard to describe. For Anekdoten this is one of their very finest records, especially with Anna Sofi Dahlberg’s Mellotron! SLABS of Mellotron. SLABS! The vocals are good, but they really come alive in the instrumental sections. If you’re not aware of Anekdoten, imagine a mix of King Crimson, Gentle Giant, and Gabriel-period Genesis, only much more dire than the above, with the possible exception of King Crimson. Just, wow.
https://www.youtube.com/watch?v=Zzv69ZwyKPo

 

Spock’s Beard – The Oblivion Particle (USA)

Another band that listened to too much Gentle Giant and Tull and Crimson in high school. These guys are great – they’re prog, but with a goofy, smirking attitude. Not gut-busting like Zappa, but more like Cartoon. And this is a great record of theirs.
https://www.youtube.com/watch?v=RPh6FFPpLoo

 

Godspeed You! Black Emperor – Asunder, Sweet and Other Distress (CAN)

GY!BE can do little wrong in my book. This is yet another monumentally difficult album to digest from these Montrealers, and one of the best records of 2015.
https://www.youtube.com/watch?v=gCBoKAxuzpc

Rock and Pop

Hollerado – 111 Songs (CAN)

These guys wrote and recorded 111 songs. No, I’m not kidding. They put out the word that they would write you a song for $100, and they got 111 responses. Some of the songs are kind of silly and simplistic, but many of them are really great, and they demonstrate how these guys can turn on their creative jets and excel.
https://www.youtube.com/watch?v=rBFKOW393qg

 

Bossie – Meteor / There Will Be Time (CAN)

Anne Douris is one of the most overskilled people I know. If you’ve seen my video Radical Tactics of the Offline Library, she did the brilliant animation for that. She also plays keyboards for Hollerado these days, AND she has a pop music vehicle called Bossie. Her songs are so catchy and hook-laden you’ll be humming them for days. I know I have.
https://www.youtube.com/watch?v=T7QFr2U3FxA

 

Braids – Deep In The Iris (CAN)

Braids are a smaller band now (they are three these days), and I really think this is their best album. It is much more focused and clear. I find myself putting this on a lot more. Raphaelle’s voice is better than ever, and this record flips my crank.

 

Classical

Nils Frahm – Solos (GER)

Cool and spacious, this is acoustic piano work by Frahm – none of the electronic arpeggios – just a pure, simple connection between him, an instrument, and your ears. At times reminiscent of Harold Budd or Roger Eno, this is really unique and gentle music, brilliantly recorded.

 

So I’m An Islander – Bræuw (DEN)

So I’m an Islander is Søren Nissen Jørgensen who lives on an island east of Denmark. His music, usually just solo piano, is gentle, inviting, and direct. The recording quality isn’t the sharpest, but this adds to the mystery of it all.

 

Bang on a Can All-Stars – Field Recordings (USA)

This is a record of pieces built around field recordings or voice recordings, where the music interprets, and sometimes even slavishly follows, the recordings. Super smart. While it sometimes sounds more like prog rock or some distilled jazz, there is always a rigour and objectivity in their playing that gives it a distinctly classical feel.

 

Jazz

Moki McFly – Osirus I: Xenolinguistics (PHI)

Moki McFly makes a very peculiar form of Jazz using his laptop. Looping and processing jazz, he creates new jazz that is experimental and compelling. Sometimes rather lyrical, sometimes dissonant and glitch, sometimes straight up encompassing and inviting, he’s one of my favourite jazz people right now.
https://www.youtube.com/watch?v=zgUjrRaZGHA

 

The Cancel – No Way To Stay (UKR)

Like Moki McFly, The Cancel make computer-based jazz. They’re not as glitchy as Moki, but they have more in the way of live instrument recording, and a very deep, funky groove to their sound.
https://www.youtube.com/watch?v=MlDdVqQeYJc

 

Laura Jurd – Human Spirit (UK)

Laura’s a magnificent trumpet player – quick, precise and sensitive. Her compositions are powerful and uncompromising yet never over the edge and dismissive. This is a great record.

 

Experimental

Lindsay Dobbin – Tatamagouche (CAN)

Lindsay is a multi/interdisciplinary Métis artist who lives and works on unceded Mi’kmaq territory in Halifax, Nova Scotia. She previously worked as Broken Deer, and I’ve played her work a lot on Something Completely Different. Hesitating, lo-fi, gentle, mysterious, and utterly engrossing – her work is as powerful as it is tentative and moving. This is not from the record I mention, but from an earlier work by Broken Deer.

 

Rapoon – Dark Zero (UK)

Formerly of :zoviet*france:, Robin Storey has made another brilliant work in the Rapoon tradition. This work is ambient yet challenging, hypnotic yet gently jarring. Parts of it sound a lot like Mohnomishe by :z*f: and other Rapoon works, but there is a darkness and tautness in this record not found in many of his previous works.
https://www.youtube.com/watch?v=UGzn-Z-4BH4

 

√π- – Binary Code (NLD)

Glitchy, computer-based electronics with some drifty ambient textures – this work makes these kinds of sounds quite listenable as they flock around loose rhythms made of glitches and distortions. Fans of Emptyset might find this interesting. I couldn’t find a video for this.

 

Bonus – Alan Sondheim has been making amazing music for the past few years, and it borders on criminal how little exposure he gets. Go to his website and DL everything you can. It’s great, edgy, difficult music that can suddenly spin into great beauty.


Conversation on digital archiving practice – June 2015

I was interviewed by Davide Georgetta and Valerio Nicoletti for their research into libraries, archives, and file sharing. I think my answers are interesting and worth sharing publicly.

Conversation on digital archiving practice

 

1. In what way are text-sharing platforms (repositories, digital libraries, p2p networks) becoming relevant in the process of knowledge disclosure nowadays?

It is important to separate out each of these systems in terms of their actual functioning and the affordances each provides. The first distinction is online versus offline. Within both of those broad categories there are a variety of systems and practices. So, for instance, in the offline system there are a variety of libraries being passed around, with varying numbers of texts in them, organised in greater or lesser fashion. For example, it is quite common for people to share USB sticks with books on them. While the size of USB sticks grows every year, the larger ones are still expensive and thus less likely to be shared. At the other end, people will freely share smaller USB sticks – ranging from 4 to 16 or even 32 gigabytes (GB) – as their cost is relatively negligible.

There are also larger libraries on hard drives slowly making their way around the world, one being the “Alexandria Project”. Depending on how much it has been used and edited, it can vary from 30,000 to 50,000 books or more. The collection of 50,000 books ranges from 250 to 300 GB. Another interesting collection is from marxists.org who were selling hard drives at nominal cost that contained their website’s contents. Quite powerful libraries right there, in both cases, and not the only ones in circulation.

So, beyond students sharing USB sticks with a few to hundreds of eBooks and large hard drives filled with tens of thousands of eBooks, there are even larger systems. For example, one can procure a version of the Library Genesis collection – some 800,000+ books – on the Tor network. However, it is not a simple zip file of pdfs. It is much more complex than that and requires significant programming skills to make use of it, as well as one hell of a big hard drive, as the collection is (the last time I checked) about 14 terabytes (TB) in size. Such an array would be expensive, bulky, and difficult to use, much less share, offline.

This shows an extremely important component of the differentiation of text-sharing platforms, as each has its own exigencies centred around scale. What is extraordinary is that even at the low end of the scale, say a 16 GB USB drive, we are dealing with some incredible scales. From my research I have found that my collection of pdf files averages around 5 megabytes (MB) per file. This means that a 16 GB USB stick can hold about 3,200 books. Practically, if one reads one of these books a week, one would need about sixty-one and a half years to read all the books on that one tiny, cheap USB stick. A personal portable library for most any human being would be easily contained on a 32 GB USB stick, which can be bought for as little as fifteen dollars. So, if one knew in advance all the books one would read in one’s lifetime, they would all fit on a USB stick that presently costs as much as a mediocre bottle of dinner wine.
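
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python, assuming the 5 MB average file size and the one-book-a-week reading pace mentioned above:

    # Back-of-the-envelope storage arithmetic (assumed figures from the text).
    AVG_BOOK_MB = 5        # average size of a pdf in my collection
    BOOKS_PER_YEAR = 52    # one book a week

    for stick_gb in (4, 16, 32):
        books = stick_gb * 1000 // AVG_BOOK_MB
        years = books / BOOKS_PER_YEAR
        print(f"{stick_gb} GB stick: ~{books} books, ~{years:.1f} years of reading")

    # 16 GB stick: ~3200 books, ~61.5 years of reading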

A number of these USB sticks were released into the wild at the Transmediale festival in 2015, distributed using the DeadSwap system pioneered by the Telekommunisten collective. Each stick was curated to a particular discipline, so there was a stick for Political Theory, another for Philosophy, another for Art and Aesthetics, and so on. Collectively, they were one of my contributions to Transmediale, as “Datafield3”. These sticks were hidden in a variety of locations in the Haus der Kulturen der Welt (HKW) in Berlin during the exhibition period. Participants, using their mobile phones, would text into the DeadSwap system, and instructions would arrive on how to find a USB stick. Of the ten sticks, nine disappeared into circulation in the HKW. This demonstrated that Datafield3 USB sticks were a viable method of transmitting and managing research. All the Datafield3 sticks were self-indexed using the portable indexer Dropout, which meant that a researcher using a Datafield3 stick could search inside the documents of a given field of knowledge using a variety of keywords and Boolean search requests. Another, and very powerful, value of a Datafield3 stick is that, as it is an offline system, no corporate, private, or government entities would have any idea what one is researching using these documents.

Offline libraries are (as is so common with the offline) slowly reaching their potential. One such potential is to serve the knowledge needs of people with little access to the internet. Even a highly restricted system like Google Books or Amazon is of little value if your access to the internet is little better than a dial-up connection. Marginal communities – whether they are in Africa or the Arctic – are poorly served by the internet, and it is in such places that offline libraries could have a massive and positive impact on the knowledge and education needs of these liminal communities.

This style of research, of course, strikes at the corporate predispositions toward digitality, where all things should be monitored and related in an “internet of things”. That said, offline libraries operate at a glacial pace and along a highly fragmented trajectory of distribution. Online systems, when coupled with broadband access, create entirely new environments and experiences where knowledge can be acquired at great speed. Online systems are not necessarily oppositional to the offline – they are more complementary, and the two, offline and online, can and do operate in symbiosis.

In this way, various online libraries and communities are fed by personal offline libraries, and these offline libraries are assembled by people downloading material to them. The resilience and invisibility of offline systems and casual “sneakernet” file sharing thus act as a support for the online systems. As the vectoralists go about shutting down online sharing systems, the next generation of online libraries exists in the thousands of hard drives collecting dust on scholars’ and students’ desktops.

There are difficulties with this strategy of “whack-a-mole”, one being scale. A “mole” (a site deemed anathema by the proprietarian / vectoral interests) can be “whacked” (removed from the internet) instantly, regardless of size. This was demonstrated in the destruction of library.nu – an online library of over 800,000 books that disappeared as quickly as an offending music blog of 10 records on BlogSpot would have been crushed. The difference, of course, is that the offending music blog can get itself rolling again in a few hours. Library.nu was gone forever, and its founders were brought to court. The contents may have found their way back to the internet, but it took quite a long time. The largest online libraries face the same problem. Additionally, most users do not have symmetrical internet access, i.e., they cannot upload as fast as they can download. This makes things very difficult if and when a large site is taken out of commission, as rebuilding can be very slow. Library.nu was resurrected into other sites, but it took years, and they are still just as vulnerable and precarious as ever before. The proven precarity of the online and resilience of the offline creates the symbiosis between the two.

So, this analysis of the offline and online gives us a firmer material grasp of repositories, digital libraries, p2p networks, and suchlike, and their relevance in the process of contemporary knowledge disclosure.

Beyond that, the different systems have different audiences and use values. It is also important to realise that these systems exist parallel or orthogonal to any questions of ethics or economics. For example, there are many online repositories of books that operate outside of copyright considerations and many that don’t. They all serve different interests in the Access to Knowledge frame. The sub rosa systems logically have an audience composed of students and scholars, many of whom may not have access to a university research library, or are paid so poorly – the plight of the adjunct professor is well known – that they can’t afford to buy these books, as they are often only somewhat less than extortionately expensive. In some cases, they don’t have access to the university’s library system, or their university is so strapped for cash that the university library itself can’t afford the journals and books they need. These exigencies drive people to the sub rosa online libraries, which are a great boon to students and independent researchers, as they are often the least economically able to acquire these volumes in ways that comport with proprietarian demands.

On the other side are the journals and publishing systems themselves. Many people talk about the “darknet” as if it were some kind of den of thieves. This isn’t entirely accurate. The biggest darknets are actually these journals and educational nets – they’re dark because they’re unsearchable outside their paywalls. Like a sub rosa library, such a net might not be searchable by Google; but unlike a sub rosa library, membership is not free. There are a variety of such nets, and they tend to be rather expensive, and their actual contents are invisible and/or unavailable to non-members. These systems are of great use value to a variety of professions – engineers, doctors, scientific researchers – all of whom have a variety of paywalled datalockers at their disposal, for a fee.

The vectoral proprietarians and publishers are loath to embrace models that question their raison d’être. Their grasp is strong and far-reaching – the events around Aaron Swartz’s untimely demise are a brutal case that points directly at the very same publishers and proprietarians that own these journals and information networks. Still, for those who can afford access, these systems are amazing, as they can provide enormous amounts of data to their paying audiences.

So, all of these systems together have their limitations. However, the societies that share knowledge the most are the societies that function best. This puts paywalled and subscription-based knowledge services on the wrong side of history. Some sectors of academia have been better at responding to this than others. The sciences, for example, have arXiv and similar systems where work can be submitted for scrutiny and the results can be read for free. These are good moves. The arts and humanities aren’t quite so forthcoming, and are falling behind the STEM disciplines, although sites like Monoskop, Ubuweb, and Archive.org are encouraging. The important part of this is that everyone has a right to access knowledge. Rights to benefit from the production of knowledge are alienable, but the right of access itself is not alienable – it is fundamental.

 

2. How does a digital container influence its contents? Does the same book — if archived on different platforms (such as Internet Archive, The Pirate Bay, Monoskop Log, etc.) — still remain the same cultural item?

 

Contrary to the predilection of the question’s framing, digital contents are more influenced by their file type than by the ownership of the server they are located on. For example, let’s take my book Radical Tactics of the Offline Library (2014). It can be downloaded as a pdf from Internet Archive, the Institute of Network Cultures, and a variety of sub rosa file sharing systems. It is an identical digital copy across all these platforms – the differences in collecting it from any of them are negligible. What makes a big difference is the file format. The most common formats are txt, pdf, and epub. The one favoured most by academics is pdf, as it is directly paginated. Txt does not do this at all, and epub’s support of pagination is partial and fragmented at best. Epubs are popular on eReaders and tablets, as their text can be made to flow and zoom easily. This is not useful for scholars and academics, where absolute page referencing is required.

PDF files also have the advantage of being lockable – once they are saved they are difficult to change, especially as pages are usually image scans from a book with an invisible text overlay for copy/paste purposes. This cuts both ways – a PDF can also be locked in such a way that its contents cannot be changed at all. So if a book has been scanned and locked with a password, with no OCR layer, then it is extremely difficult, if not impossible, to OCR the book and give it copy-able content. PDF has a variety of these security levels, making it both extremely useful and opaque and useless at the same time, depending on its settings.
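
As an illustration of those security levels, here is a minimal sketch using the pikepdf library – one tool among several that can set PDF permission flags; the file names are hypothetical:

    # Lock a scanned PDF so it opens freely but forbids copying and editing.
    # (pip install pikepdf; file names here are hypothetical examples.)
    import pikepdf

    with pikepdf.open("scan.pdf") as pdf:
        pdf.save(
            "scan-locked.pdf",
            encryption=pikepdf.Encryption(
                owner="owner-password",  # required to change these settings later
                user="",                 # empty: anyone can open and read
                allow=pikepdf.Permissions(
                    extract=False,       # no copy/paste of text or images
                    modify_other=False,  # no editing of the content
                ),
            ),
        )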

In contrast, epub files are basically zip files full of HTML text documents, carrying an epub file extension. These can be opened in a zip file reader, altered, and then saved. So, the politics and history of epub vs. pdf aside, with the ability to be easily modified and corrupted and a sketchy relationship with pagination, epub is mostly used for trade fiction and is less useful for scholarly work. That said, my book, Radical Tactics… is also available as an epub…
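
You can verify the zip claim with nothing but Python’s standard library (the file name is, again, hypothetical):

    # An epub really is just a zip archive of (X)HTML files plus metadata.
    import zipfile

    with zipfile.ZipFile("book.epub") as epub:
        for name in epub.namelist():
            print(name)  # e.g. mimetype, META-INF/container.xml, chapter1.xhtml

Renaming book.epub to book.zip and opening it in any archive tool works just as well – which is exactly the ease of modification described above.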

So, with the provision that the file type is a far greater determinant of how a “digital container” affects contents, we can now turn our attention to the online presentation of texts as described in the question. There is also the relationship, or “play”, between online and offline versions, which will require some attention.

Different websites have different user interfaces (UI), and these can and do affect the reader of these texts, more specifically in how they go about acquiring them. How these texts can be read online varies from site to site and from machine to machine – a 15-inch Apple Retina display is going to be easier to read than a fuzzy, out-of-focus 15-inch VGA CRT screen. Most computers, tablets, and phones fall somewhere along that continuum. If a website has developed a mobile viewer for its content, then it will likely be much easier to navigate and read on a phone or tablet than otherwise. Given the shoestring budgets many of these sites have, mobile reading has not been a high priority – simply staying online is. For example, a few months prior to this writing, aaaaarg.org lost its DNS and scrambled to find a new address in the .fail domain. At this writing (23 June 2015) Monoskop is offline for similar reasons, underscoring the precarity of these online systems.

Each of these systems is working on new and more interesting ways to present what are, ostensibly, pdf files, the presentation and experience of which depend more on the pdf reader or browser one is using. That said, different systems can create radically different reading experiences – for example, archive.org vs. Google Books. We can look at a book that is clearly in the public domain as our exemplar: “Through the Looking Glass” by Charles Lutwidge Dodgson (Lewis Carroll). With Google Books, one must search on text, find where it occurs in the book, and then go to those pages to view it. The text is readable, but not selectable or copy-able. With archive.org, one searches for the title and is directed to its page, where it can be downloaded as a searchable and copy-able pdf.

These are radically different appreciations of the text as an object. For while Google states “If it’s in the public domain, you’re free to download a PDF copy”, this is clearly not well implemented: one is told that Through the Looking Glass is public domain, yet there is no clear method of downloading it. In contrast, on archive.org, the ability and method to download the pdf is very clear.

Archive.org is clearly more inviting. It provides several versions of the book for download, including one from Google(!?). One of the pdfs was also OCR’d, and so text could be copied out from it. The basic decency of supplying links to downloadable versions is an essential aspect of the experience one has with a site, and one where Google fails, badly. As a consequence, the “cultural item” that is Through the Looking Glass is radically different in presentation between Archive and Google. There are many pressings and printings of Through the Looking Glass. However, by working with the PDF file type, pagination is preserved and so one can use it in a scholarly context. Opposed to this would be, obviously, Amazon’s preview of Through the Looking Glass and, oddly, Gutenberg’s provision of Through the Looking Glass. Amazon’s version is even more hobbled and limited than Google’s as they want you to buy the book from them. Gutenberg.org’s version is less than useful as it is only provided in epub or txt formats – flowing text without pagination.

So, if one is a scholar or student and one wishes to study the topsy-turvy world of Alice’s adventures in the looking glass, one would have to go to Archive.org or a sub rosa site to find an OCR’d version of the PDF – the mirror opposite of what one would expect in a civilised society where knowledge and creativity are nurtured and venerated. As a consequence, it is not so much that the digital container (website) determines a book as a cultural item, as much as it attenuates, filters, and distorts our access to a given text.

These distortions lead to a “play” between online and offline, sub rosa and in lumine text storage systems. For example, imagine a book is published, written by “Z”. Someone painstakingly scans each page of the book and uploads it to a sub rosa website, “A”. Someone downloads the PDF, and she wants to read it on her tablet, so she wants it as an epub. With the appropriate software, she converts the pdf to epub. It is now flowing text. The vectoralist proprietarians send in their flying monkeys, and sub rosa site “A” ceases to exist. In the shuffle and over time, the woman who downloaded the book loses the pdf version, but she still has the epub version on her tablet. A few years later, sub rosa website “B” is now operative and is asking for donations. So, she uploads the epub version, figuring something is better than nothing. Another user of sub rosa “B” sees the book by Z she uploaded and downloads it. He needs page references, as he is writing a paper on Z in his first year class on ()theory. So, he copies the text into MSWord, inserts page numbers, adjusts the size of the pages, the margins, and the font to roughly what the book should be, and then outputs a paginated PDF. His page numbers are going to be off from the original, but the pages are, word for word, correct – they’ve just been re-arranged.

He feels confident – if his professor says “I didn’t find that text in the book, and I have the tree-killer version right here”, the student says “Well, I have a digital version – check it out” and he sends the digital version to the professor, and sure enough – all the text is correct, just re-arranged as a different edition.

The knowledge itself is correct, word for word; however, it has been reframed and repurposed outside the interests of capital… but that is getting ahead of our discussion. Suffice to say, these are the kinds of “play” between online and offline that can and do occur.

 

3. The scanning of texts — for instance, out-of-print books — and their subsequent storage as digital files has actually produced a new figure: the amateur librarian. What are the features, responsibilities, and limits of this “role”, in your opinion?
    and

4. Marcell Mars has drawn up a kind of vademecum for all possible contributors to this amateur librarianship [Why and How to Be(come) an Amateur Librarian, link]. Are there features that can define the ideal digital book from the operative perspective of the librarian? [esp. file formats and tools to be preferred, how to organize files and metadata, how to manage distribution of and access to content, and so on]

These questions can be answered as one. I fully support Marcell Mars’s efforts. He is blazing an important trail. While we differ in emphasis and direction, our fundamental interests in digital librarianship are very close. I would suggest that everything he wrote in Why and How to Be(come) an Amateur Librarian is excellent. I am not as big a fan of Calibre as he is, as Calibre copies book files and creates its own method of labelling and storing them – a method I find less than useful for other systems. That said, Calibre is an excellent, if not the very best, system for an amateur librarian managing eBooks. Marcell’s Let’s Share Books plug-in for Calibre is brilliant, as it transforms Calibre from a powerful file manager into a p2p booksharing service. This is great, as it allows amateur librarians, scholars, and students to build their collections. The problems with Let’s Share Books are the same as with any p2p system: asymmetry and asynchrony – asynchrony in machine availability, and asymmetry in bandwidth and the variability of bandwidth in various locations.

For example, in terms of bandwidth, I have typical internet access for Toronto and can download at around 17–18 megabits per second (Mbps). Upload is a different story – I can only upload at 0.75 Mbps, a tiny fraction of the speed I can download. Symmetrical upload and download speed is only somewhat less than extortionately expensive. So, the asymmetry of upload and download bandwidth is a major brake on p2p networks, thrusting them back into speeds not seen since dialup. Asynchrony is the other problem – like any other p2p system, the computer has to be on and the p2p system operating. This isn’t always possible. These are the tools I have at hand, and compared to most of Canada, I’m in a very privileged sector – a point I will return to later. From this one can see both the strengths and shortcomings of p2p systems. My focus has been less on the transference of online data and more on the transference of offline data and its organisation and use value.
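
To see why that asymmetry is such a brake, here is a quick sketch of the arithmetic, using my own (assumed typical) connection speeds and the ~300 GB, 50,000-book collection mentioned in the first answer:

    # Time to move a 300 GB library at asymmetric residential speeds.
    DOWN_MBPS = 17.5  # assumed typical Toronto download speed
    UP_MBPS = 0.75    # the corresponding upload speed

    def transfer_days(gigabytes, mbps):
        """Days needed to move `gigabytes` at a sustained rate of `mbps`."""
        megabits = gigabytes * 8 * 1000
        return megabits / mbps / 86_400  # seconds per day

    print(f"download: {transfer_days(300, DOWN_MBPS):.1f} days")  # ~1.6 days
    print(f"upload:   {transfer_days(300, UP_MBPS):.1f} days")    # ~37 days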

Regarding optimal practices, I would suggest that the amateur librarian would optimally operate on a number of levels. I know doing all of these at once would be difficult and time-consuming – a full-time job in itself. Ideally, an amateur library or librarian would:

1. Scan texts into PDF files.

2. OCR the texts.

3. Proof their work.

4. Output the file as:
a. PDF, with security measures off;
b. EPUB file;
c. txt file, with each page number set off as its own paragraph (a triple paragraph break, with the page number in the middle paragraph).

5. Name the file in a standard way (see the sketch after this list):
lastName, firstName-Title-(year).fileType
example: Warwick, Henry-Radical Tactics of the Offline Library-(2014).pdf
This enables librarians to assemble the files and organise them on a drive by author, A through Z, or in a series of directories (possibly based on fields of knowledge, say “Art” or “Science” or “Philosophy”) and then by author within those folders. This makes the library much more shareable. And useful.

6. Back up their library multiple times and share it with other librarians to build resilience.

7. Upload all versions of the text to an online library.

8. Distribute copies of their library so that they are indexed in a portable indexer. This point is a complex one, and I will return to it later as well.
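
Here is a minimal sketch of point 5 in Python – the helper names are my own, not part of any standard tool:

    # Build and shelve files per the naming convention above.
    from pathlib import Path

    def library_filename(last, first, title, year, ext):
        """Return 'lastName, firstName-Title-(year).fileType'."""
        return f"{last}, {first}-{title}-({year}).{ext}"

    def shelve(book: Path, field: str, root: Path) -> Path:
        """File a book under a field-of-knowledge directory, e.g. Philosophy/."""
        target_dir = root / field
        target_dir.mkdir(parents=True, exist_ok=True)
        target = target_dir / book.name
        book.rename(target)
        return target

    name = library_filename("Warwick", "Henry",
                            "Radical Tactics of the Offline Library", 2014, "pdf")
    # 'Warwick, Henry-Radical Tactics of the Offline Library-(2014).pdf'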

Obviously, that is a lot to ask of amateurs – especially the scanning part. Scanning requires technology, patience, and a lot of time. I don’t even do that. The technology can be acquired – it’s the patience and “the time thing” I am very short on. That said: scanning is a bedrock and crucial point in contemporary amateur librarianship, and I respect people who do that very much. While scanning the book as an image is good, it is only the first step towards an optimal eBook, as outlined above. OCR is critical, as it generates the text for scholars in a PDF of the scan, and it forms the textual basis of other digital documentation formats, such as epub. How these digital texts can be built is well documented in the book From Print to Ebooks: a Hybrid Publishing Toolkit for the Arts by Joe Monk, Miriam Rasch, Florian Cramer and Amy Wu.

 

5. Does a way out of the debate between publishers and independent digital libraries (Monoskop, Ubuweb, Aaaarg.org) exist, in terms of copyright? Is there an alternative solution able to resolve the issue and provide equal opportunities to everyone? Would publishers’ fear of a possible reduction of income be legitimate if access to their digital publications were open and free?

 

In terms of a “way out”, the question has to be “a way out from what?” The conflict between capital and human needs? There is only a problem because the proprietarians insist on there being a problem. If they just “go away”, then there is no problem. However, that reveals a deeper issue, which is compensation for labour – the difference between wage and chattel slavery. The social geometries inherent to the workings of the mechanisms of file sharing (online or offline) and digital libraries imply, and indeed prefer, a different social order.

This kind of discussion is a necessary one, but only in the absence of socialism. Independent online libraries and repositories and offline personal libraries operate from a fundamental re-coding of the notion of property and a different vision of society. Entertaining proprietarian and vectoral arguments assumes they merit discussion as if the agents of these ideas have any real part in the desired future. They do not, and therefore, their ideology can and should be ignored.

In my view, there is very little need for private academic publishers. The state manages education, and the state should be responsible for the dissemination of knowledge in a free and public space. I think that would be a better use of public moneys than spending more than $100 million a copy for F-35 fighter planes that can’t even fly in the rain or compete against cheaper planes made 40 years ago. For the price of a handful of such useless death machines, any country could easily fund peer-reviewed online journals and distribute them freely as PDFs. In terms of non-academic publishing, a universal minimum income would go a long way toward reducing the oppression of authors by the publishing cartels. Authors would have enough to put food on the table and write. This would change the power dynamic between publishers and authors – good authors would be in higher demand and attain greater rewards, and lesser-skilled authors could contribute and hone their craft, all without the fear of starving in a garret. Questions of copyright are thus obviated. The problem with copyright is one of political will and consciousness. The publishing cartels are vectoral organisations – they survive off of the extractions around artificial scarcity of knowledge – something the world long ago established as immoral and inimical to the ideals expressed in the United Nations Universal Declaration of Human Rights.

This battle between the needs of the many and the profits of the few is not lost on the proprietarian / vectoralists. From their perspective, digital production and file sharing does many bad things, but it also has some positive value. For one thing, digital copies of books are much cheaper to produce than paper, so there is a reduction in the cost of distribution. This externalisation of distribution costs comes at the cost of a shrinking book printing industry and the wholesale elimination of the bookstore system. That is “not their problem”. Secondly, by turning printed books (and journals) into a boutique or specialist system, the price of such items is dilated and can reach absurd values – stories of $200 or more per copy for textbooks are not uncommon, while, at the same time, people can self-publish books through Amazon for less than $10 a copy, retail. Digital books can be sold for even less, as an arbitrarily large number of them can be distributed for nearly the same cost as a single “copy”. It is at this juncture that digital libraries and repositories and archives insert themselves, for the entry cost of these systems is low.

In the meantime, given the present political economy, distributing books for free to all people would implode the publishing industry. This could be good, but there would be losses involved, as we would also lose the editors who help craft the books we finally read. Also, the Author would be put in a position very similar to that of the Musician, if not worse. A musician can perform their music, and music has a high re-use value, so there are income streams from radio, digital downloads, and a tiny bit from streaming services. Books have a very low re-use value. Most books are read only once, and if they are loved, maybe several times. People will cheerfully put a song on repeat and listen to it over and over. So, the Author is at a disadvantage even compared to the Musician – and the music industry has been slowly sinking since Napster, and musicians are in a tighter spot than ever before.

Thus, the plight of the Author is an excellent case for the implementation of guaranteed minimum income (as discussed earlier) and the socialisation of the media sphere itself.

To evade this, capital will shift not back to books and vinyl records, but to dematerialising media itself and controlling access to it. This is already occurring in music – CD sales have collapsed, and even digital file sales of music have dropped as people sign up for streaming services – a complete centralisation and feudalisation of the music sphere. These services are not socialised and run in the public interest to the benefit of artists – they are private vectoral corporations extracting wealth. The material outcome is simple: external media is made redundant and ignored as computers and other devices become vectors of media consumption and wealth extraction. iPads and iPhones do not have USB ports that will see external hard drives. They can only be loaded through iTunes software. The newest Macintosh laptops also lack standard USB ports, and such peripherals can only be accessed through an adaptor, which comes at extra charge. Yes, there are other computers than those designed in Cupertino, but Apple computers often set the market for future development. In fact, USB is a case in point. USB was invented by Intel and had been around for years. No manufacturer wanted to put a USB port on their computers because there were no USB peripherals. Apple broke ranks, replaced ADB with USB and SCSI with FireWire, and other manufacturers quickly followed suit.

It is in the vectoral interest to limit the range and closely meter both the quantity and character of the data flow to these new computers, which, especially in the case of cellphones and tablets, are little more than dumb terminals for media consumption. It is in the control of these flows that wealth is extracted in fees and subscriptions. If the media producers starve in garrets, it is of no concern to the vectoral interests, as they extract wealth using percentages of very large numbers. Siphoning off an even tinier percentage to a narrow group of media producers makes those producers very wealthy, thus providing the illusion that there is some kind of meritocracy in a flattened, technocratic media sphere, when, in fact, this is clearly not the case. This also acts as a rhetorical feint, presented as proof of the proprietarian claims of authorship and recompense for labour.

The interests of capital, in their present form, would see the elimination of offline libraries simply by design – nowhere to plug them in – and the elimination of online libraries through tighter control over the flows of online media. Some will argue, “aah – but we will still have work-arounds and darknets”, and this will be true – but also irrelevant. Marginalisation of already liminal media distributions will continue. DRM will become unnecessary, as the data flows of approved media will simply squeeze out unapproved media. Network Neutrality is a condition, a state, not an infrastructure. ICT is not built for or against it. Vigilance by the many is required. At the same time, this is no guarantee of success. Given the infrastructural changes – streaming media, dumb terminals (cellphones, tablets, automobiles, etc.), and restructured / reconfigured computers optimised for streaming media consumption (ever more “streamlined” laptops and the decreasing use of desktop computers) – there is great momentum against libraries, archives, and repositories of media, online or offline.

There is an alternative, but it is not a technological one. It is a matter of political economy, as discussed earlier.

 

6. Following your answers, we would also like to receive suggestions from you. Do you notice any unresolved or emerging questions in the contemporary context of digital archiving practices and their relation to the publishing realm?

 

There are a number of topics worthy of discussion that logically flow from matters of digital libraries, online and offline. These range from the philosophical to the practical and in between. As noted in my previous answer, these libraries operate from a recoding of property theory. The present-day theory that informs copyright is based in a classic Lockean frame of a negative commons with a labour theory of property. In such a formula, “no one” owns the world, as it was given to humanity by God per instructions in sacred texts. That is a negative commons. One creates property by accumulating material from the natural world and exercising labour upon it, thus transforming it into property. This property can be sold, as can the labour that produced it, and this is accomplished through a contract. In this way, one job of government is to enforce contracts and protect property. (Another job is the protection and projection of the interests of the ruling class, but that’s a different discussion…) Given the galloping catastrophe of industrialism and the exigencies of the Anthropocene, I think it is safe to say that the idea of nature as a negative commons, and of property as a product of labour upon materials acquired from that negative commons, is clearly a bad idea done poorly. For all the glories of modern civilisation – moonshots, the internet, electronic music, lettuce in February, plastics, jello shots, and Adam Sandler movies (actually, skip that one – his movies are terrible…) – it is inconceivable to consider it all “worth it” in the face of the Sixth Great Extinction as its direct and necessary result.

However, that is exactly the place we find ourselves in – a civilisational cul de sac of geologically immense proportions. There is much to be undone – many basic notions, concepts, and presumptions. And there are tools we use that point towards the necessary transformations inherent in their use. One of those is the digital library, especially of academic and practical knowledge. By wresting knowledge out of the hands of those who instill false and illegitimate scarcities, and by distributing it freely, we are redefining one kind of property and a different kind of commons – one that is neither negative nor positive. The collections we are discussing are managed by groups of people. They can, if they so choose, shut them down at will. This is proven by their precarity – they are often shut down at someone else’s will. So, in that sense, we can say that these collections are positive commons – they are “owned” by someone(s). At the same time, anyone can make use of this knowledge at will, so in this way the knowledge itself is a negative commons. A simple “compromise” reading of this situation would be that this helps define a “neutral” commons. I would disagree with such a formulation, as I see this as more of a polymorphic commons, of which a neutral commons is simply one particular variant.

As I noted earlier, much of this is predicated on a particular political condition. The present condition can be seen as what Mark Fisher describes as “Capitalist Realism”, but one exacerbated by the contemporary practice of capital as vectoral – dominating economic agents inherent to the digital condition. As noted earlier, a shift to an emancipating and enabling socialism dispenses with many of the contradictions that are the root source of this very discussion.

With such a vision, we can use these resources as emancipating and enabling for first world scholars and students and for deprived communities. In fact, that is the subject of research I am engaged in now – how to bring digital libraries to far-flung communities. The first step in this direction was taken by Geert Lovink, who brought a hard drive full of books to a university in Uganda. Recently, similar efforts have been revived in Myanmar. My focus is more towards the Arctic, where the privilege I enjoy in a technologically rich city like Toronto can be spread to communities throughout Canada, many of whom have computers but limited or no internet access. Such communities would benefit not from the online internet, but from offline meshnets and other hybrid systems.

Another important direction of research for me concerns the development of a cross-platform, portable, offline archive indexing solution. Online, Google Books has the ability to search inside books, so if you know what you need, you can find it. You can’t copy it, but you can look at it. Offline indexing systems allow the user to index their libraries and search inside the documents themselves, and this transforms a “repository” into a research tool. If one collects enough books, then search algorithms can be used very creatively to weave together sets of ideas from a variety of sources, finding connections between texts that would otherwise go unnoticed. A cross-platform portable offline indexer is a complex undertaking. As this branch of my research is terrifically complex and underfunded, I am hoping others will take an interest in it.
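
For the flavour of the idea, here is a toy inverted-index sketch – an illustration of the principle only, not the solution I am researching; the folder name is hypothetical:

    # Index a folder of .txt books and answer simple Boolean AND queries.
    from collections import defaultdict
    from pathlib import Path
    import re

    def build_index(library: Path) -> dict:
        """Map each word to the set of documents it appears in."""
        index = defaultdict(set)
        for doc in library.rglob("*.txt"):
            for word in re.findall(r"[a-z]+", doc.read_text(errors="ignore").lower()):
                index[word].add(doc)
        return index

    def search_all(index: dict, *terms: str) -> set:
        """Documents containing every term (a Boolean AND)."""
        sets = [index.get(t.lower(), set()) for t in terms]
        return set.intersection(*sets) if sets else set()

    index = build_index(Path("AlexandriaProject"))  # hypothetical folder
    for hit in search_all(index, "commons", "property"):
        print(hit)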

In conclusion, thank you for the opportunity to discuss this topic. I enjoy, and often prefer, interview situations, as they provide a structure for discussion and a set of terms and ideas we can argue over, modify, or amplify. I expect that in the next decade we will no longer discuss the opposition between online and offline. There will be “line” and its hybrid variants.


Designs for Future Paintings

I haven’t made any paintings in a very long time. So, as a kind of idle fantasy, I thought about what kind of paintings I would make if I did want to make them. I would want to keep them within my practice of designing them in software and then using the digital sketches as maps for the actual paintings, just much more realistic-ish than simple drawings. This is the method I was using in the early to mid 1990s, when I last had a burst of paintings. Since my paintings are always as much a conceptual exercise as they are paint on canvas, I thought about “what” they would be paintings of, and what kind of painting makes sense to me. My interests have changed greatly since the 1990s. I’m not the kind of person to stick with one method of working or one appearance, and so for some time I have considered the work of Vermeer. What would it take to make paintings like Vermeer, only in the 21st century, with contemporary subject matter? That’s when I came across the work of Emma Tooth. She is much more influenced by Caravaggio and Holbein, but she is definitely running in a similar direction. Her work is muscular and dark, while I am more interested in the space I am in than in the people in that space. This gave me a new appreciation for the photo-realists. However, their work is more photo than realist.
Then I noticed in Emma’s work the consistent reference to the works of the masters and how cleverly she quotes them. That set me on a different tangent. I began looking at contemporary uses of vernacular photography – selfies, cellphone photos, and such-like. I realised that these are all products of distortions; in fact, they are often run through filters to make them look like photos from other times and other media (slides, Ektachrome prints, etc.). I thought that was interesting and wondered how this would work using my previous method of taking abstract gestures in digital imaging apps like Photoshop and running them through various filters. That is how I designed all my paintings in the 90s. I thought: what happens if you use a photo of an artwork and then run it through filters? This series was the direct result of that experiment. Some of the results are fairly obvious, bordering on literal. Others, not so much. What I found was that, as in the mid 1990s when I used Photoshop’s wave filter extensively, the filters I am using in other apps (paint.net, for example) have their own results, and these results have a degree of predictability. The Photoshop wave filter process culminated a few years ago in my book, CODE.X, which can be found at Amazon.com. There I took a cruciform (X) and processed it using the wave filters in a very orderly and progressive algorithm. I see CODE.X as the logical conclusion of my paintings of the 1990s.
These new paintings had to be different. And they are. I decided I would use images of important works of art. How these works are determined to be historically important is completely patriarchal and so full of bullshit as to be really quite ludicrous, even on a cursory examination of the so-called canon. Other artists are doing an admirable job of deconstructing and disabusing us of the canon. I thought that it would be most valuable, and within my skillsets, to take a different approach, where the canon is transformed into something very different and identifiably my own.
Our day and age is one of massive info overkill and the over-presence of the image – to the point where making striking images is no longer really possible for artists – the most arresting images are not by painters or even photographers, but by citizens or news crews with cell phones and cameras who happen to be in the right place at the right time. Since the heroism of the image is no longer a viable practice, many photographers and painters have spent their time making paintings and photos of dreary minutiae and actively avoiding any kind of emotional engagement. That, to me, is not interesting. I would rather take the uninteresting and make it vital. I see that as an ethical decision. Life is short – terribly, terribly short. And to clutter one’s existence with dull images of drear seems unethical, or simply “Wrong”. I was looking at these “great paintings” and working from Benjamin’s dictum of the loss of aura of art in its reproduction (and the contradictory results, where the most reproduced works have the most aura and social cachet), and I thought it might be interesting to appropriate these images, “make them my own”, and use software to incorporate them into my production algorithm and visual practice. So, I collected these images and processed them. Then I thought about titles, and since the images were (usually) processed beyond any recognition, maintaining their titles was a pointless effort – they needed new titles. So I developed new titles for each, such that they provided a kind of linguistic and emotional resonance, a poetic and subjective interpretation. Rather than use a computer to process this, I used a very effective emotional computer – my own mental abilities and affective responses to the works and their old titles and cultural histories. This resulted in a kind of breadcrumbing back to the original. Sometimes the images themselves can be traced back to the original if the viewer has enough of an art history education. Otherwise they may seem opaque. I think that’s fine, as it allows the viewer to see these as completely new visual experiences, independent of their referencings.

I am more fond of some of these than others. What I am presenting here allows you to see my process of visual inquiry in this series and how it evolved over time. All those presented have “made the grade” in some way – I have ditched at least 10x as many as I have kept. Some of those you see here may get culled at a future date. Still, each of these has some value for me. They were all made in April 2015.

-Henry Warwick, June 2015.

LHOS

 

They’re not listening still.


 

I’m your national anthem


 

Asymmetric Contents


 

Amnesia


 

Jimi Hendrix


 

Vicky


 

You make me feel like I am free again.


 

Femme Fatale


 

Call me up and give me a reason for living.


 

helpless helpless helpless


 

Tarzhey


 

Gironde


 

Carbon County


 

Forbidden Colours


 

Wandering


 

Mare Tranquilitatis


 

This is not a painting


 

Quadrilateral


 

Trying too hard.


 

Their Memories


 

Escargot


 

Prime Numbers, 1-2-3


 

The Exhaustion


 

Catalonian bars


 

deuxième monde


 

Hackery.


 

Que faire quand on a tout fait.


 

Violence Completes the Partial Mind


 

7.7


 

Avalon


 

Modern Blues


 

Breakfast of Champions


 

Cloud of Unknowing


 

Messages Scratched In Sand


 

The Filter


 

At Play



The Alexandria Project Update

This is where I am at with the Alexandria Project – it’s been “dark” because I’ve been applying for grants.

 

Haven’t got any yet. However, one grant came VERY close. I got good feedback on why I didn’t get it, so I can have another run at it soon. Having never applied for a “Real Grant” before, I appreciated/needed the critique. I will re-apply for it very soon, as I believe it is a rolling grant.

I was inspired by Geert Lovink bringing the AP to Uganda, where, at one university, it held orders of magnitude more books than the university library itself. I don’t have a particular affinity for Africa, so I am looking at models closer to home… for example, Northern Canada (for geographical extremes) and, even closer to home, the First Nations reservations (for economic extremes). Very simply, these people have gotten fucked. Bad. Hard. And repeatedly. And they live nearby-ish.

There are reservations in Ontario that are barely third world in their poverty level. People die young there from diseases easily preventable or treatable elsewhere in Canada. In one town, Pikangikum, illiteracy is high, the teenage suicide rate is the highest IN THE WORLD, and there’s nothing to read anyway. The schools are a mess and the libraries few and far between. European capitalists and their vectoral copyright regimes have not done these people much good, at all. The present governments (nationally, the Conservatives led by Harper; provincially, in Ontario, the “Liberals” led by Wynne) have done little or nothing to help them, and Harper’s gang of thugs has been actively disuseful. For example, THOUSANDS of First Nations women have been disappeared in the past 20 years. Their bodies are found on occasion, murder victims. Harper refuses to engage this obvious genocide. At all. That’s the level of evil we’re talking about. At the same time, the TPP – the trade agreement designed to make the billionaire vectoral class even richer than ever before – is something Harper’s pushing really hard here in Canada, and his draconian internal spying law, C-51, which makes the Patriot Act look like the Magna Carta, passed with backing from the Liberal Party leader, Justin Trudeau. Since he backed C-51, Trudeau’s standing in the election polls has plummeted. This illustrates the unity of the two main parties in things most opposed to the interests of the average Canadian AND the First Nations of Canada.

 

Anyway – I’ve begun researching the situation, and will be finding out the location and size of every reservation and of its library system.

 

I’ve done this already with one: the library at Iqaluit in Nunavut. They have 60,000 items. Period. Obviously, something like the Alexandria Project would be a massive boon to the Iqaluit Library and the citizens of Nunavut. Iqaluit is one of many places like this (it’s simply the most remote – it’s only 3,000 km from the North Pole). Shipping anything to Iqaluit is VERY expensive – the biggest problem, of course, is weight. A book weighs as much as a tablet, but a tablet can hold hundreds of books. A handful of books weighs as much as a laptop, and the laptop, with the Alexandria Project, holds tens of thousands of books.

 

There are other things afoot with the AP. One is its structure. Obviously a small, focussed one can exist on a USB stick and be distributed easily. Clearly, a larger terabyte drive can also work, and hold huge quantities of documents. As I predicted in my dissertation, the capacity per dollar of a drive continues to plummet – I bought a 3 TB drive for CDN$140 a month ago. The problem, as I outlined in my dissertation, is one of usefulness. Having a zillion books is nice. No one can read that many. And because these are files on a drive, there’s no “wandering the aisles of books” effect to “find” something. This goes to my point about the necessity of indexing.


AP|Linux
This winter when I was in Berlin, I consulted with Dmytri Kleiner and Baruch Gottlieb about this. They were clear: the way to go would be to build a “computer on a drive” using Linux, probably the portable “Mint” distribution. This is also along the lines of lower-hanging fruit. The biggest problem is the difference in platforms – Mac, Windows, Linux. None of them is particularly fond of, or capable of, talking to the others, as each vector of interactivity gets treated like a cold-war domino-effect argument. This is where AP|Linux comes in handy – simply boot the computer (Mac or Windows) off the drive and work inside the Mint environment. AP|Linux would hold the Alexandria Project Library (*indexed* using a Linux desktop indexer), as well as software like LibreOffice or OpenOffice, so one could do one’s research directly on the AP|Linux drive itself.
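
For the curious, getting a bootable Mint environment onto a drive is a one-liner on a Mac or Linux box. A minimal sketch, assuming a downloaded Mint ISO (filename hypothetical) and a device path you have verified yourself – dd will cheerfully destroy the wrong disk:

import subprocess

ISO = "linuxmint-17.3-cinnamon-64bit.iso"  # hypothetical filename
DEVICE = "/dev/sdX"  # placeholder for the AP drive; check with lsblk (Linux) or diskutil list (Mac)

# GNU dd syntax; BSD dd on a Mac wants bs=4m instead of bs=4M.
subprocess.run(["dd", f"if={ISO}", f"of={DEVICE}", "bs=4M"], check=True)

A full AP|Linux build would go further – installing Mint onto the drive with persistent storage so the library and indexer live alongside the OS – but the principle is the same.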


The benefit of this is obvious – it’s transportable, platform independent, completely private, and has its own research facilities built into it. The downsides are also substantial. First, it assumes the user knows how to boot their computer off an external drive. Second, once so booted, the user has to navigate an unfamiliar computer UI (Linux Mint). That right there is a lot to ask of the average user, many of whom can’t even programme their own TV remote. This becomes especially problematic in more distant locations, where access to computers may not be universal and computer literacy varies greatly. There is also a strategic threat on the horizon to this approach – machines shipping with Windows 10 may, through UEFI “Secure Boot”, refuse to boot any operating system other than Windows, which would prevent an external boot drive like AP|Linux from working at all. Ostensibly this is being done for “security”; however, it is also very clear that Microsoft is trying to find a way to kill Linux….

My grant applications were to develop and distribute AP|Linux drives. Since I didn’t get the grants in the first round, I’ve been thinking “other thoughts”.
The problem isn’t the collection of books – that’s simple enough and, in this context, trivial. It could be a USB stick with 500 books or a terabyte drive with 50,000 books. Oddly, the collection itself isn’t that interesting. It’s what one can DO with it that makes it valuable. Seen this way, books as digital objects are collections of information, and the question is how one accesses that information quickly. This was also discussed in the book: indexing. So, this leaves:



AP|Win and AP|Mac

Since neither of these would have a native OS running on the drive itself, we face very much the same issues as before. Windows OS indexing is hideous – a categorical fail. There are fine third-party indexers, and as I noted in my dissertation and book, Dropout is free and works well. Unfortunately, it only works on Windows. The MacOS indexer (called Spotlight) is a much better native indexer than the Windows one, but it is not transportable. The AP|Mac would therefore simply be a collection of books: the user would copy the AP to their computer and tell Spotlight to index the AP directory. This creates a slower and more complex workflow for the user up front, but a much simpler AP.
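
On a Mac, the “tell Spotlight to index it” step can even be scripted. A minimal sketch using macOS’s mdimport and mdfind command-line tools, with a hypothetical path for the copied AP:

import subprocess

AP_DIR = "/Users/me/AP"  # wherever the user copied the collection

# Ask Spotlight to (re)import everything under the AP directory.
subprocess.run(["mdimport", AP_DIR], check=True)

# Search only within the AP directory, not the whole disk.
result = subprocess.run(["mdfind", "-onlyin", AP_DIR, "indexing"],
                        capture_output=True, text=True, check=True)
print(result.stdout)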


The AP|Win would also be a collection of books like AP|Mac, but it would additionally carry the Dropout indexer. A “normal” Windows drive is formatted in NTFS; Apple computers use HFS+. Macs don’t like NTFS, and Windows is no friend of HFS+. However, the AP|Win is formatted in ExFAT. Prior to ExFAT there was FAT32, a file system both Mac and Windows could read. USB sticks are commonly formatted in ExFAT, so any computer can use them. By formatting the AP|Win drive in ExFAT, anyone on any operating system gets read/write access to the files in the AP. ExFAT allows Dropout to run off the drive when it is attached to a Windows machine, and it also lets Apple’s Spotlight indexer read the files.
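
Formatting the drive is a one-liner on either platform. A sketch, with placeholder drive identifiers (confirm them first – formatting erases everything, and both commands need administrator rights):

import platform
import subprocess

system = platform.system()
if system == "Darwin":
    # macOS: erase the whole disk as ExFAT with the volume name APWIN.
    # Verify the disk identifier with `diskutil list` first.
    subprocess.run(["diskutil", "eraseDisk", "ExFAT", "APWIN", "/dev/disk2"], check=True)
elif system == "Windows":
    # Windows: quick-format drive E: as exFAT (format.com will prompt for confirmation).
    subprocess.run(["cmd", "/c", "format", "E:", "/FS:exFAT", "/Q", "/V:APWIN"], check=True)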

I got the news of the last grant refusal two weeks ago, and have since concluded that the AP|Linux system is too complex for most users to implement. Arguments of “Well, users should know (x)” are, in my opinion, arrogant bullshit. An AP|Mac system is too limited – without an indexer, Windows users would be stuck with a sluggish machine. However, the AP|Win formatted in ExFAT with Dropout has win written all over it, as Mac users can use Spotlight – a fine indexer that can be narrowcast to specific directories, preventing it from eating the operating system’s resources. Linux users could have access to the files at will, like the Mac users, and there are free indexers for Linux. And if the super Linux users don’t like them, they can write their own. Turnabout is fair play.


I will focus my production efforts on that particular configuration: AP|Win in ExFAT.


In terms of file arrangement, I am experimenting with another system, where matters of curation come to the fore. Around the time I wrote the book, I had come up with a number of categories of interest, and these, as directories, form a sorting system. I started with 25 or 30 categories; I anticipate I will have closer to 80 by the time this experiment is done. The categories are, at present:


ANTHROPOLOGY

ARCHITECTURE and URBANISM

ART – PHOTOGRAPHY

Art History

Art Instruction

Art MUSEUMS, curation, catalogues

ARTS, Art History and AESTHETICS

AUTO & BIOGRAPHY, memoirs

COMPUTER SCIENCE, Machines, and AI

DANCE

DIGITAL and INTERNET THEORY

ECOLOGY, Climate, Permaculture and Collapse

ECONOMICS, Business, labour relations

EDUCATION

ENERGY

FEMINISM and WOMENS STUDIES

FILM & VIDEO-practice

FILM & VIDEO-theory

FILM, THEATRE AND VIDEO SCRIPTS

FIRST NATION-ABORIGINAL

FOOD and COOK BOOKS

GENDER, QUEER and SEX studies

GEOGRAPHY

GEOLOGY-non-petrol

HISTORY and human evolution

LANGUAGE THEORY, LINGUISTICS – SEMIOTICS

LANGUAGES

LIBRARY and Archive theory

LITERATURE – Fiction, and Poetry

LITERATURE – THEORY

LITERATURE-How To, Style Manuals

Media, Advertising, and Communication Theory

MEDICINE human biology, bioethics and health

MILITARY, torture, espionage, surveillance, crypto

MUSIC – sheet and education

MUSIC AND AUDIO – Theory and history

MUSIC AND AUDIO-engineering

PHILOSOPHY and Rhetoric

Political Theory (Anarchist)

POLITICAL THEORY and activism

POST-ANTI-COLONIAL, globalism, 3rd World studies

PRACTICALITIES

PSYCHOLOGY and Neuroscience

RACE RELATIONS

RELIGION, Theology, Mythology, Atheism and Occult

SCIENCE and Math – Instructional and Engineering

SCIENCE and Math – RESEARCH

SCIENCE and Math – Theory_Philosophy


I am open to suggestions for categories. I imagine some of these will break into further categories. Like any curatorial project, the categories reflect my own personal interests. If someone doesn’t like them, they are free to re-arrange them to their heart’s content.
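
Standing up this directory skeleton on a freshly formatted drive is trivial. A sketch, with a hypothetical mount point and the category list abbreviated:

from pathlib import Path

AP_ROOT = Path("/Volumes/APWIN")  # e.g. Path("E:/") on Windows

CATEGORIES = [
    "ANTHROPOLOGY",
    "ARCHITECTURE and URBANISM",
    # ... the rest of the categories listed above ...
    "SCIENCE and Math - Theory_Philosophy",
]

for name in CATEGORIES:
    (AP_ROOT / name).mkdir(parents=True, exist_ok=True)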


——————————————————


So, I will re-write the grant proposals this summer and then see where things go from there. I will research a variety of reservations, talk to the people there, and see how the AP can help them.


There are three other audiences for the AP – scholars doing research, students doing research, and informed citizens who like to read cool stuff. I think the AP|Win variant will suffice for them for now. There is another plan involving Java, but that’s a whole ‘nother kettle of fish and outside the scope of this report.


Another dumb test

Tra la la la la. This is getting boring. I wish this shit would work.


Elsevier – the vectoral going ever more vertical

This is from Tyler Neylon at The Cost of Knowledge, in collaboration with SPARC:

“The key to all these issues is the right of authors to achieve easily-accessible distribution of their work.”
– The Cost of Knowledge

Last month, Elsevier made troubling changes to its sharing policy. Authors now have to wait up to 4 years before they can share an Elsevier-published manuscript through repositories; even then, the most restrictive Creative Commons license must be used.

Elsevier used to allow authors to share their manuscripts through repositories immediately upon publication.  The new changes make the content published in thousands of journals even more inaccessible and set a dangerous precedent for ever-increasing embargoes.

What can The Cost of Knowledge community do?  We can demonstrate that researchers and institutions oppose these new embargoes and encourage Elsevier to reconsider. Elsevier claims the new policy has only received “neutral-to-positive responses from research institutions and the wider research community.”  You can make your voice heard in a number of ways:
* Talk to colleagues about the new embargo changes when making publication decisions.
* Express your opinions publicly on campus, at meetings, and through social media.
* Join 1,800+ individuals in signing the COAR-SPARC statement against Elsevier’s sharing policy, and encourage your institution to sign as an organization:

https://www.coar-repositories.org/activities/advocacy-leadership/petition-against-elseviers-sharing-policy/

As always, your best contribution is the work you do every day – made available to anyone who might want to study it. To this end, you can publish and encourage others to publish in open access journals, such as those issued by PLOS or PeerJ. A comprehensive list of open access journals can be found at the Directory of Open Access Journals: https://doaj.org/

Thank you for helping us move toward a world where knowledge is free to all.

-Tyler Neylon at The Cost of Knowledge, in collaboration with SPARC


A New Way Of Doing This

So, I now have it set up so that as I write my texts in MS Word (Word), they will appear in WordPress (WP). This makes blogging a breeze. I’m much more comfortable typing in Word than in the crowded little pest of a box they give you in WP. Also, I can put images into my text and use Word’s text-wrapping features, which are much simpler than WP’s. Then the fun begins: I have enabled some sharing systems in WP, so when something arrives at WP, it is automatically sent to Facebook and Tumblr as well.
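
As far as I know, Word’s blog publishing talks to WP over the old metaWeblog XML-RPC interface that WP exposes at /xmlrpc.php. Either way, the same trick works from Python; a minimal sketch, with placeholder URL and credentials:

import xmlrpc.client

wp = xmlrpc.client.ServerProxy("https://example.com/xmlrpc.php")

post = {
    "title": "A New Way Of Doing This",
    "description": "Body of the post, written wherever you like.",
}

# metaWeblog.newPost(blog_id, username, password, content, publish)
post_id = wp.metaWeblog.newPost(1, "username", "password", post, True)
print("Published post", post_id)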

So, if I’m feeling gossipy, I can just hang out on FB and write comments on others’ posts or respond to same on my “wall”. If I feel I have something “worth saying” in a larger context, I can use MS Word to broadcast to WP, FB, and Tumblr all at once.

My next step is to find a way for Word to broadcast to Diaspora as well. Diaspora’s good at broadcasting, not so good at receiving. Diaspora has other problems, but that’s a different discussion.

Now, another test – will this get through to my target outlets? Nothing like a little SCIENCE to find out….
