02 September 2010

Daily Digest, 2010.08.31

Tuesday: Nisbet on open-access publishing and its costs; the ugly demise of Rice University Press; does Carr misrepresent the research he cites?; the social skills of dogs and wolves compared; The Mutationism Myth (conclusion); the "long tail" of language; a gracious response; a debate on language and/in the brain; 21st-century enlightenment; "Intentionality" at SEP; Koko the Clown sings St. James Infirmary Blues (with help from Cab Calloway); the wrong tool for a job I really want to do.

Scholarly Publishing

Matthew Nisbet at Big Think has begun a series of posts on open-access publishing.

Anyone interested in the topic should read the first post, "Strategies for Promoting Open-Access Publishing," which is full of good ideas, many of which are relevant to the humanities. (Nisbet's field is communication, which, like the humanities, is not typically supported by large grants from government agencies like the NIH or the NSF.)

Some quotes:
About 20% of journal articles published in the sciences, social sciences, and the humanities are open-access, meaning that only about 1 out of every 5 articles are immediately or eventually accessible online without paying an expensive journal subscription fee.  And although the open-access movement began in the early 1990s, survey studies find  that a majority of researchers still consider publishing at open-access journals as posing a risk to earning tenure and as potentially jeopardizing their chances at winning research grants.  As these findings suggest, there are not only financial and technological hurdles to making most scholarship open-access, there are also perceptual and cultural barriers.


One of the key advantages of open-access publishing is that research is far more likely to become part of online and face-to-face discussions among interested and relevant publics.  Not only does this sponsor wider attention and impact, but it turns academic publishing into a two-way conversation, providing the opportunity for valuable feedback from readers.   In many ways, open-access publishing enables an almost immediate “second-round of review,” crowd-sourcing insight, critiques, and follow-up suggestions for research.
Nisbet's list of "preliminary thoughts" about how to make open-access work is a must-read for anyone interested in starting an open-access project (that would be me, for one).

Nisbet follows up with a brief post on an event held last spring by the Scholarly Communication Program at Columbia University:  a panel discussion on open access including Mike Rossner, Executive Director of Rockefeller University Press, Ivy Anderson from the California Digital Library, and Bettina Goerner, manager of Open Access for Springer.  Nisbet links to a video of the event, which I have not yet watched.

According to Nisbet, Rossner claimed at this panel that the publication of a single online scientific journal article costs $10,000.  That seems absurdly high (especially for an article in which the content and the reviewing are provided for free).

In the comments to Nisbet's post, Donald R. Strong points out that the cost of publishing at PLoS One is around $1300 per article.  One imagines the costs for articles in some fields (the humanities, for example) would generally be even lower.

Christopher Kelty has a scathing post at Savage Minds on the demise of Rice University Press, which was attempting to reinvent itself as an all-digital, print-on-demand, open-access university press.  Kelty (who was on the board of RUP) makes a persuasive and damning case that the demise was entirely the fault of arrogant and stupid university administrators.  The press did not "fail" because of the all-digital model.

Kelty points out in a comment to his post that the "backend" for RUP was Connexions (RUP's publications through Connexions are here).  I'm just beginning to get acquainted with this site, but it looks like an important experiment in new models for publication in the digital age.  Worth a look.  For my musical readers, the series on Music Fundamentals by Terry B. Ewell may be a good introduction to the site, which seems at first glance to me to have plusses and minuses.


Mike Masnick at Techdirt writes that Nicholas Carr, author of the recent The Shallows: What the Internet Is Doing to Our Brains, seems to have misrepresented much of the research that he cites in his book.

I'm not surprised.  Based on the several reviews I've read, Carr's book seems to exemplify its own title.


Jason Goldman at The Thoughtful Animal has an excellent summary of a paper from 2003 comparing the relative human-oriented social skills of dogs and wolves that had been hand-raised in human homes.  The dogs were considerably better at human social skills overall, which is not surprising. However, there was considerable variation in the abilities of the wolves, which may have implications for the evolution of dogs.

The article is:
A. Miklósi, et al. (2003), "A simple reason for a big difference: wolves do not look back at humans, but dogs do," Current Biology. The article is freely available for download.
I ran into this work with the hand-raised wolves when taking Bruce Blumberg's course on dog cognition in 2008-9.


Arlin Stoltzfus, a guest blogger at Larry Moran's Sandwalk, has posted the final installment of his series "The Mutationism Myth," which began in March.  I'd been following the series, but had frankly lost the thread of the argument. 

But the last installment does a fine job of summarizing all that has come before, and it can be read independently of the previous posts in the series.  The series dissects the received triumphalist history of the Modern Synthesis in evolutionary theory, which, as Stoltzfus demonstrates, is utterly wrong.  He also makes clear that the Modern Synthesis is no longer an adequate explanation of how evolution has been shown empirically to happen.

My readers who don't follow the literature on evolution and think that Richard Dawkins is the fount of all wisdom on the topic will want to read what Stoltzfus has to say.

A quote:
With his signature over-the-top rhetoric, Dawkins insists that "mathematical genetics" has proven that evolutionary rates are not limited by mutation. Allowing for some exaggeration, this is an accurate representation of MS orthodoxy ca. 1959, the approximate vintage of Dawkins's views.


Melody Dye at Child's Play has a splendid post introducing in a very approachable way some of the basic concepts in the "probabilistic" approach to language learning and use.

In the bits of linguistics that I had accumulated sporadically and in a quite disorganized fashion over the years, I had been indoctrinated in the Chomskian nativist/rationalist paradigm, which was, so far as I knew, the only game in town (and I didn't have anyone to tell me different).  This was the case even as late as the first class I took at Harvard Extension in the fall of 2007, with Chomsky acolyte Cedric Boeckx, in what was billed as (of all things) an introduction to cognitive science.  (This would be the branch of cognitive science that avoids empirical investigation, apparently.)

It was only as I began to read widely on the evolution of music (and hence, as background, also on the evolution of language) that I began to realize that there is another way. So I'm very much looking forward to Dye's posts, which may help give me a more solid framework for the still rather scattered bits of knowledge I have of the empiricist approach.

Dye begins with a discussion of the classic work of George Kingsley Zipf, who discovered "that the frequency distribution of words in a given language follows an inverse power curve" (Zipf's Law). Very common words, like "the" and "of," account for a very large percentage of what we say, and there is a "long tail" of words that we use only infrequently or hardly ever.
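The rank-frequency pattern Zipf found is easy to see for yourself. Here's a minimal Python sketch (my own toy example, not anything from Dye's post) that counts the words in a short text and lists them by rank:

```python
from collections import Counter

# A toy corpus; real Zipf analyses use large corpora, but even a
# short text shows the skew toward a few very common words.
text = """the cat sat on the mat and the dog sat on the rug
the cat and the dog ran to the door and the cat ran back"""

counts = Counter(text.split())
ranked = counts.most_common()  # [(word, freq), ...] sorted by frequency

# Zipf's Law predicts frequency roughly proportional to 1/rank:
# the top word occurs about twice as often as the second, and so on.
for rank, (word, freq) in enumerate(ranked[:5], start=1):
    print(f"rank {rank}: {word!r} occurs {freq} times")
```

On a real corpus, plotting log-frequency against log-rank gives the roughly straight line that Zipf observed, with "the" and its kin at the head and the long tail of rare words trailing off.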

A quote:
The short of it is : Given that we can produce long and highly complex sentences that we have never heard before (and indeed, that may have never been spoken before), how can we accurately estimate the likelihood of such sentences?  How can our ‘probabilistic’ experience with language inform whether or not these sentences are informative, meaningful or ‘correct’?  In other words, how can we use statistics — rather than rules — to produce (and comprehend) something as seemingly structured as language?

This question is pressing because it isn’t simply a logical or computational one.  If we can’t nail down (or even approximate) the algorithms that would allow a human to do so, or if we think that the computational problem posed is far too hard, then perhaps there is little reason to believe that people could –even in principle– be learning or using language statistically.  Instead, we might suggest that they were learning it in a rule-based fashion, which effectively allowed them to shortcut the otherwise-insurmountable learning process.

However, which approach we determine to be best is a question to be resolved computationally.  In the series to follow, I will illustrate why a predictive, probabilistic account of language is a good fit — both empirically and theoretically –with what we understand about language acquisition and use; and show how (and why) many of the ‘logical’ problems parried against such an account turn out to not be so logical after all.
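To make the "statistics rather than rules" idea in Dye's quote concrete, here is a toy bigram model in Python (entirely my own illustration: the corpus and function names are invented, and real models use vastly larger corpora plus smoothing). The point is that it assigns a nonzero probability to a sentence it has never seen, so long as each adjacent word pair has been observed:

```python
from collections import Counter

# Tiny training corpus; a real model would be trained on millions of words.
corpus = "the dog chased the cat . the cat chased the mouse . the mouse ran ."
tokens = corpus.split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

def bigram_prob(w1, w2):
    # Maximum-likelihood estimate of P(w2 | w1): count(w1 w2) / count(w1).
    # (Assumes w1 was seen in training; real models smooth over gaps.)
    return bigrams[(w1, w2)] / unigrams[w1]

def sentence_prob(sentence):
    # Score a word sequence as a chain of bigram probabilities.
    words = sentence.split()
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= bigram_prob(w1, w2)
    return p

print(f"{sentence_prob('the dog chased the mouse'):.2f}")
```

The sentence "the dog chased the mouse" never occurs in the training text, yet it gets probability 0.08 because every adjacent pair in it was observed. That is a crude version of the point in the quote: statistical experience with word sequences can ground judgments about novel sentences, without appeal to explicit rules.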

Roy Peter Clark, a senior scholar at the Poynter Institute in St. Petersburg, Florida, has a gracious and humble response at Language Log to criticism that he has received at that site over the years.  A model of how to respond with grace and good humor to the sometimes overheated critical rhetoric that one often finds on the Web.

Clark has a recent book, The Glamour of Grammar: A Guide to the Magic and Mystery of Practical English.  Ammon Shea has a generally positive review in the 22 August edition of the New York Times Book Review.

Fedorenko & Kanwisher give a detailed response at Talking Brains to strong criticism of their work in last week's post "Neuroimaging of language: Why hasn't a clearer picture emerged?"  Lively discussion ensues in the comments.

I haven't grappled with all this yet, but it's a great example of how actual debate about substantive research issues can take place in the open, on a blog.


Reader Nancy Dale brings to my attention this remarkable video by Matthew Taylor of RSA, "21st Century Enlightenment."

The video is worth watching if only because of its remarkable use of stop-time animation technique.  The topic is weighty:  Taylor proposes to show, in 11 minutes and 11 seconds, how we can adapt the ideas of the Enlightenment to the realities of our 21st-century context.  I'm not sure I agree with everything he has to say, but the video is so dense with content that I would need to watch it at least a couple more times even to begin to come to grips with the argument.

I'd not run into RSA before (Taylor is the chief executive).  The initials stand for the Royal Society for the encouragement of Arts, Manufactures and Commerce, a British organization that has been around for 250 years.  Here is one of their blurbs:
For over 250 years the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA) has been a cradle of enlightenment thinking and a force for social progress.  Our approach is multi-disciplinary, politically independent and combines cutting edge research and policy development with practical action.

We encourage public discourse and critical debate by providing platforms for leading experts to share new ideas on contemporary issues.  Our projects generate new models for tackling the social challenges of today and our work is supported by a 27,000 strong Fellowship - achievers and influencers from every field with a real commitment to progressive social change.
Although the RSA website looks nice, it doesn't seem to be designed in a way to help the user figure out what it has to offer.

Pierre Jacob has just published a revision of his article "Intentionality" at the Stanford Encyclopedia of Philosophy.  "Intentionality" is one of those specialized terms in philosopher-speak of which I have had only a hazy grasp up to now (it doesn't mean "intention" in the everyday sense), and I should read this.  Really I should. 

Even the initial paragraph is helpful:
Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs. The puzzles of intentionality lie at the interface between the philosophy of mind and the philosophy of language. The word itself, which is of medieval Scholastic origin, was rehabilitated by the philosopher Franz Brentano towards the end of the nineteenth century. ‘Intentionality’ is a philosopher's word. It derives from the Latin word intentio, which in turn derives from the verb intendere, which means being directed towards some goal or thing.


My student Vonds brought to my attention this remarkable sequence from the Betty Boop cartoon "Snow White," from 1933, in which Koko the clown sings "St. James Infirmary Blues," voiced by Cab Calloway.  Some of Koko's movements in the number are said to be rotoscoped from footage of Cab Calloway.

The complete cartoon is also available at YouTube.  The plot makes no sense, but the animation is remarkable.  And the cartoon predates the Disney version by four years.

I'd love to see a thorough and thoughtful treatment of the use of African-American music in the first two decades or so of sound cartoons.  Daniel Goldmark's Tunes for 'Toons: Music and the Hollywood Cartoon, which I read a few months ago, has a chapter on the topic ("Jungle Jive: Animation, Jazz Music, and Swing Culture"), but in spite of some interesting detail, it isn't really adequate.


The category for this one might be "Not a Tool."

I read a lot for this blog, and very often I'd like to annotate what I'm reading. 

Unfortunately, so far I haven't found a good way to do this.  I've been using Zotero for a while, but it has many disadvantages for the kind of research I'm doing right now: as I've complained here previously, Zotero's annotation system requires storing a local copy of a web page, without specifying where that copy is kept (I still haven't found out) and apparently without letting the user decide when to resave that copy.

Now that Zotero has several times irretrievably lost extensive annotations (highlighting and notes) that I've added to a webpage, and because the Zotero help on the topic is utterly inadequate, I've given up on using Zotero for this function.

Instead, when I absolutely have to annotate, I "print" the webpage to pdf on my hard drive, and then use Skim or Preview (on the Mac) to mark it up.  But this method is overly complicated, and I end up using it only for especially long pages that I want to address in great detail...which ends up hardly ever happening because of the extra complication.

I keep wondering, then, why there is no tool that would allow me to mark up a web page directly in my browser.

The other day, I ran across Denote, which sounds as if it would do this job.  Their homepage promises:
No Software Installation Required

The Denote Toolbar is built with cross-compatible internet technologies. It works in your favorite browser, and is loaded by clicking a bookmark.

Leave Detailed Notes On Websites Without Making Screenshots!
So far, it seems to be a tool rather like Readability or Instapaper (which I use every day), with a button in my browser's bookmark bar that will allow me to mark up web pages directly in my browser.  Perfect!

But wait: Denote seems to be defined entirely in terms of use on "projects" in a business environment:
Denote is a simple and powerful tool that lets you leave notes and communicate directly on live Websites with your project teams. Share feedback, manage projects, save time.

Denote simplifies communication between you and your clients by giving comments, questions, and errors context within a live Website. The browser-based..
Hmmm.  That's not what I want.  Still, there's a free level of service, so maybe I can still use it to do what I want.

Oops, "Company Name" is a required field on the registration form.  Guess I'll have to make one up.

Hmmm, now I need to specify a "homepage" for my "project" (wait, I just want to read something and annotate it).

At this point I gave up.

So this isn't the tool for the job, but it's a really good idea for a tool.  So would somebody please create it?  Thanks. (Alternatively, the folks at Denote could realize that they have the wrong model for their technology, and they could...um...retool.)