This New York Times article by John Markoff and Edward Wyatt confirms what many of us have long suspected: Google plans to digitize large portions of the holdings of several of the world’s major research libraries in the coming years. Public domain materials will be available in their entirety, and the article hints that short excerpts from materials still under copyright will be available as well.
This is great news. Having this material available has the potential to place in the foreground the importance of the public domain. Further, Google may have the resources and incentive to figure out comprehensively which post-1923 books are already in the public domain for failure of formalities.
It will be interesting to see how Google and the libraries plan to justify digitizing entire copyrighted books without a license. Even if those books are never included in the public database, merely digitizing them involves making one or more unauthorized copies. I’m sure they have a plan. Mine would be to make deals with all the copyright holders I could find, then digitize everything and settle with any holders I couldn’t locate beforehand. But that might be expensive, even for Google.