A blog about various citizen science projects
A blog about various citizen science projects
Posted by Richard Akerman on July 31, 2007 at 11:52 AM in Science, Weblogs | Permalink | Comments (0) | TrackBack (0)
Daily Wireless excerpts a Newsweek article that struck me: Both Tele Atlas and competitor NAVTEQ already employ hundreds of "road warriors" in the field to gather data that satellites can't catch: traffic signs, one-way streets, points of interest...
"At the end of the day, there's no substitute for going out there and capturing the real world ourselves," says Navteq CEO Judson Green.
How to scale that? Build a crowdsourcing network, so that others can capture the real world for you — using devices that companies like TomTom (and Garmin, and Nokia) can build.
Geospatial crowdsourcing already has an impressive working example: OpenStreetMap.org, where you can (manually) upload your own GPS tracks to create a collective world road map or edit existing data. And now it turns out that Google has been piloting something similar in India
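To make the OpenStreetMap workflow concrete: uploaded traces are GPX files, which are just XML wrappers around latitude/longitude points. A minimal sketch of turning raw GPS fixes into a GPX track (the coordinates are invented for illustration):

```python
# Build a minimal GPX 1.1 track from (lat, lon) pairs — the format
# OpenStreetMap accepts for uploaded traces. Sample points are made up.
import xml.etree.ElementTree as ET

def points_to_gpx(points):
    """points: iterable of (lat, lon) tuples -> GPX XML string."""
    gpx = ET.Element("gpx", version="1.1", creator="example")
    trk = ET.SubElement(gpx, "trk")
    seg = ET.SubElement(trk, "trkseg")
    for lat, lon in points:
        ET.SubElement(seg, "trkpt", lat=f"{lat:.6f}", lon=f"{lon:.6f}")
    return ET.tostring(gpx, encoding="unicode")

track = points_to_gpx([(45.4215, -75.6972), (45.4216, -75.6970)])
```

A real upload would then POST this file to the OSM API; the point is only that the "sensor" side of the sensor web is this simple.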
...
This is where this week's under-reported acquisition plays a part: Nokia bought Twango. Twango is a media sharing web application for mobile devices. Unlike its main competitor Shozu, which offers to forward your media to your web sharing app of choice (such as YouTube, Flickr and others), Twango wants you to store all your media in one place, on its servers, and share it using its URLs as a destination. With Twango soon installed by default on new Nokia mobiles, Nokia will be making a play to capture the generated media of its hundreds of millions of mobile phone users, so that network effects are its to dispense — with uploaded videos and photos providing eyeballs to advertisers.
from Ogle Earth - The human sensor web
UPDATE: Via Map Room I see there was also a recent New York Times article, "With Tools on Web, Amateurs Reshape Mapmaking"
On the Web, anyone can be a mapmaker.
With the help of simple tools introduced by Internet companies recently, millions of people are trying their hand at cartography, drawing on digital maps and annotating them with text, images, sound and videos.
Posted by Richard Akerman on July 31, 2007 at 11:44 AM in Mapping, Technology Foresight | Permalink | Comments (0) | TrackBack (0)
July 22 2007: The town of Tewkesbury surrounded by floodwaters
Photograph: Daniel Berehulak/Getty Images
Above image from Guardian Unlimited - Severe flooding.
Another view of this iconic image was also on the front page of the Globe and Mail yesterday, above the story Human activity altering rainfall patterns
"It's the first time that we've detected in precipitation data a clear imprint of human influence on the climate system," Francis Zwiers, one of the lead authors of the study and director of the climate research division at Environment Canada, said in an interview Monday.
"Temperature changes we can cope with. But water changes are much more difficult to cope with. That will have economic impacts, and impacts on food production, and could ultimately displace populations."
The study, to appear Thursday in the science journal Nature, comes as record rainfalls wreak havoc in Britain and force thousands from their homes.
Nature has a news item Rainfall changes linked to human activity ( doi:10.1038/news070723-4 ).
The article itself is Detection of human influence on twentieth-century precipitation trends ( doi:10.1038/nature06025 ).
We used monthly precipitation observations over global land areas from the most recent version of the Global Historical Climatology Network (GHCN)21 to analyse precipitation trends in two twentieth-century periods (1925–1999 and 1950–1999), during which observational data are considered to be sufficient to describe global-scale land precipitation change. This data set has been carefully quality controlled.
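The paper's full detection-and-attribution method (comparing observed trends against model-simulated fingerprints) is well beyond a blog post, but the first step it describes — estimating a linear trend in a precipitation series — is straightforward. A sketch with invented numbers, not GHCN data:

```python
# Ordinary least-squares slope of an annual precipitation series.
# The data below are synthetic, constructed to have a 0.5 mm/yr trend;
# they are illustration only, not values from the paper.
def trend(years, values):
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den  # slope in mm per year

years = list(range(1950, 2000))
precip = [800 + 0.5 * (y - 1950) for y in years]  # synthetic series
slope = trend(years, precip)
```

Detection then asks whether such an observed slope is consistent with natural variability alone, which is where the climate models come in.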
You can follow more of the discussion in the Nature climate change blog, Climate Feedback.
Posted by Richard Akerman on July 25, 2007 at 10:34 AM in Climate Change, Science, Weblogs | Permalink | Comments (0) | TrackBack (0)
Michael Geist has a piece in the Ottawa Citizen and various other papers, as well as on his website on the high price of mobile (cellphone) data in Canada.
For example, Rogers - Canada's sole GSM provider and therefore the only telecom company currently equipped to offer the iPhone - offers a starter data plan that provides 1.5 megabytes of data per month for $15 (each additional MB is $21). Since that is not even enough data to download a single high-resolution photograph, most consumers presumably opt for more. The company's biggest data plan provides 500 MB, yet costs $210 per month - far beyond the reach of most consumers.
This pricing, which is comparable to plans found with Bell and Telus, is not close to competitive internationally.
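To put the quoted plans in perspective, the implied cost per megabyte is worth working out — a quick back-of-the-envelope calculation from the figures in Geist's example:

```python
# Effective cost per MB (CAD) of the Rogers plans quoted above.
starter_rate = 15 / 1.5    # $15 for 1.5 MB  -> $10.00 per MB
overage_rate = 21.0        # each additional MB on the starter plan
big_plan_rate = 210 / 500  # $210 for 500 MB -> $0.42 per MB

# The starter plan's overage is more than double its base rate, and
# the per-MB price spans a factor of ~24 between smallest and largest plan.
spread = starter_rate / big_plan_rate
```

That spread — roughly 24x between the per-MB price of the smallest and largest plans — is part of what makes the pricing structure so punishing for light users.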
He has additional commentary in his blog
For example, consumers in Lithuania can purchase an unlimited data plan for less than $3.00 per month, while similar plans can be had in the Netherlands and France for under $15.00 per month. In fact, Canada not only trails the U.S. and Western Europe, but Eastern European countries such as Poland and Romania, Asian countries such as Malaysia, and African countries such as Rwanda all offer unlimited monthly data plans for less than $50.
Previously:
June 26, 2007 outrageous cellphone data charges, the sequel: roaming
Posted by Richard Akerman on July 24, 2007 at 02:53 PM in Wireless Internet | Permalink | Comments (0) | TrackBack (0)
Librarian in Black responded to my article in Library Journal netConnect, and Richard Wallis' commentary thereon. She says
What makes me sad is that both Ackerman [sic] and Wallis have missed a key point: if the future is in web services, how can libraries take advantage of that with their current staff configurations? How many libraries in the U.S. have a honest-to-goodness computer programmer on staff? How many have staff with Computer Science degrees? How many staff do they have devoted to the library's hardware, software, and network? How many staff do they have devoted to web services?
In the smallest libraries, perhaps all of these are the same one person.
My article is about modern software engineering for libraries, not about how to staff them. However, in the very same issue of netConnect there's an article by Karen Coombs, "Digital Promise and Peril", calling for library staffing to reflect the digital content environment:
Many of these digital materials are in jeopardy of being lost because librarians have not yet found adequate ways to collect and manage them. In part, this is because roles and skill sets have been siloed in libraries. Materials preservation issues have typically been the purview of special collections and archives units within the library. In contrast, cataloging expertise has resided in technical services, and technology expertise has typically resided in systems. To collect and manage born digital objects adequately requires these roles and skill sets to come together.
So let me summarize some of the goals and targets of the article, as well as talk about the relationship to promising developments:
Where can we look?
Maybe the DLF project on ILS APIs will help.
Maybe the OASIS effort on standardising search services will be useful.
Maybe it happens by using OpenSearch and simple REST interfaces rather than custom library protocols.
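The OpenSearch option deserves a concrete illustration, because it really is this simple: a catalogue publishes a URL template, and a client just substitutes the search terms. The endpoint below is hypothetical; the `{searchTerms}` placeholder is the real OpenSearch convention.

```python
# Minimal OpenSearch-style client: fill the searchTerms slot in a
# published URL template. The catalogue host is invented.
from urllib.parse import quote

template = "http://catalogue.example.org/search?q={searchTerms}&format=rss"

def search_url(template, terms):
    return template.replace("{searchTerms}", quote(terms))

url = search_url(template, "open access journals")
```

Compare that with the effort of standing up a Z39.50 target, and the appeal to outside developers is obvious.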
UPDATE: There is another important piece, which is about libraries reaching out and speaking the right language. That's why you need to understand how to express things in terms of SOA, Web Services, and APIs. There is way more innovation capacity outside your walls than you can ever get inside, even if you have the perfect IT staffing policy and budget. From your local superpatrons, high school CS students, and local college and university computer science departments, to, basically, every CS student in the entire world. You can reach them with contests, with collaboration requests, with invitations to improve your systems... but here's the important part... if you speak their language. CS people love challenges and programming, but they're not going to learn obscure library jargon and usage like OPAC, Z39.50 and database (which means something completely different in CS). You can't say "hey, can you help us improve our OPAC because the Z39.50 doesn't federate across our databases". They're not going to know what the f*** you're asking. Learn the CS language, and a whole world of programmers will open up to you.
To me one of the single biggest missed opportunities is in the digital library community. Every year, lots of computer science groups, flush with energetic grad students, toil away and produce results that are presented at JCDL and ECDL. And, based on my experience at ECDL 2006, they then present those results entirely to a community of other computer scientists. Where are the librarians? Why are you all at library conferences talking to other librarians? Come to *CDL and ask the computer scientists to build stuff you need. Yes, it's a difficult transition from research to production, but at least join the conversation. ENDUPDATE
I don't have the answers and I have certainly asked again and again where people see all these frameworks and groups fitting together, with no response.
The good news is that there are lots of projects out there already - I don't think there's any shortage of activity. The fundamental point of my article is that for these projects, we have to use enterprise architecture, service-oriented architecture, web services / standard APIs and the whole toolkit of modern network-based standard-data software development. Because if we don't, WE WILL BUILD SILO SYSTEMS AGAIN.
That's what I said in my IATUL presentation, and in the short accompanying paper (Scribd), and what I've been saying over and over again in this blog.
[Sidebar on Scribd: be careful browsing around this document hosting site. Many of the profiles, profile images, and documents are unfortunately very not safe for work. Scribd really needs to put in a moderation / adult content filtering system.]
Why do I think it's important to talk about these topics? Because there really are lots of new developments in the library catalogue and OPAC world, including:
There are also some great modular browser tools out there, including
I'm very much hoping that these developments will open libraries up to be better network participants, with a broader community of developers able to build pieces, and with standards enabling libraries with limited development capability to simply plug-and-play.
Some links via The Ten Thousand Year Blog: eXtensible Catalog open source project, VuFind released as open source software.
Previously:
June 29, 2007 Casey Bisson on Scriblio and OpenLibrary
Posted by Richard Akerman on July 24, 2007 at 12:33 PM in Academic Library Future, OPAC, Open Source, Service-Oriented Architecture, Software Development, Web Services | Permalink | Comments (1) | TrackBack (0)
The good news is that the Globe and Mail, one of Canada's national newspapers, has a reasonable, balanced article about the open access issue. The bad news is that you can't read it, because the article itself is closed access (paywall). Irony, anyone?
Turning the ivory tower into an open book
This year, the University of Toronto's library system will spend $20-million on acquisitions. But less than one-third of that money will go to books. The majority will pay for the rising subscription costs of academic journals. "It's alarming," says Carole Moore, the university's chief librarian.
Along with colleagues across the country, she has watched the price of the latest research skyrocket, with top titles such as medical journal Brain Research now hitting $21,000 or more for annual subscriptions.
...
"This is very big money," says Jean-Claude Guedon, a professor of comparative literature at the University of Montreal who founded Canada's first electronic, open-access journal, Surfaces, in 1991. "This is research that is financed by government and the articles are paid for by libraries funded by government. Then there are these guys in the middle that extract profit."
To get around that, the open-access movement is attempting to establish high-quality publications to rival the titles of established houses. At the forefront is the Public Library of Science, a non-profit organization that has established several journals with the help of some deep-pocketed supporters, including Bill Gates.
There also are efforts to create collections of research apart from the traditional journal format. PubMed Central, maintained by the U.S. National Institutes of Health, is a digital library of peer-reviewed manuscripts. Several universities are following suit by posting archived faculty work.
But this research must be submitted by researchers themselves. And so far uptake has been weak. That has led some funding bodies - including the powerful NIH and the Canadian Institutes of Health Research - to consider requiring researchers to post work that they help pay for in an open-access format.
"I think of it as a democratic question. Open access is part of the public's right to know," says University of British Columbia professor John Willinsky.
Posted by Richard Akerman on July 21, 2007 at 05:12 PM in Open Access | Permalink | Comments (0) | TrackBack (0)
"curated" pages with various resources on a specific topic, e.g. Evidence-Based Laboratory Medicine
Previously:
June 28, 2007 Elsevier pre-announces Scirus Topic Pages
Posted by Richard Akerman on July 20, 2007 at 10:00 AM in Science, Searching, Web/Tech | Permalink | Comments (0) | TrackBack (0)
In a cleverly-named article Still plenty of wallflowers at the social, today's Globe and Mail looks at research on participation in online social networking sites
In particular, it draws on Forrester's research - I assume the work they are referencing is Charlene Li's report on How Consumers Use Social Networks, which is currently free (after guest account signup). The underlying message is that it can be difficult to get very active participation - only a small percentage of users actually create content - the so-called 1% rule. However, social networking is clearly a very important part of the engagement between organisations and their users, so you need to think about how you will use the various Social Software Building Blocks.
Previously:
January 7, 2007 audience and participation
Posted by Richard Akerman on July 19, 2007 at 01:23 PM in Web/Tech | Permalink | Comments (0) | TrackBack (0)
Library Journal netConnect for July is out (July 15, 2007). The cover theme is the "Social Catalogue". All of the content is free online as usual.
On this theme are an article I wrote about Library Web Services, advocacy by John Blyberg about the need for open APIs ("Always Pushing Information"), and an explanation of one way to Visualize Your Catalog using Grokker, by Kate Bouman et al.
I want to thank Jay Datema for this opportunity to share these concepts with a wider audience, and in particular for his patience during the extremely long genesis of my article.
I do want to add some supplementary information.
"web services also refers to one specific technology implementation" - by this I mean the industry Web Services stack, SOAP-based services. This shouldn't be confused with SOA, Service-Oriented Architecture, which is a conceptual framework. There is a short version of my sidebar on SOA in the article, but with Jay's permission, the full original version is attached below.
I did a set of supplementary bookmarks for the article, but somewhere along the line I forgot to include them in the article itself; in any case, they're available at
http://www.connotea.org/user/scilib/tag/ljwebservices
UPDATE: As it happens, Jon Udell blogged a plea today for the most basic of catalogue interfaces, a standard ISBN syntax in the URL. Via panlibus. ENDUPDATE
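Udell's ask is worth sketching, because it shows how small the "API" could be: a predictable URL keyed by ISBN, so any outside tool can link into any catalogue the same way. The base URL below is hypothetical; the ISBN-10 checksum rule (weighted digit sum divisible by 11) is the real standard.

```python
# Sketch of a predictable catalogue URL keyed by ISBN, the kind of
# interface Udell asks for. The library host is invented.
def isbn10_valid(isbn):
    """Check the ISBN-10 weighted checksum; 'X' counts as 10."""
    digits = [10 if c in "Xx" else int(c) for c in isbn if c not in "- "]
    if len(digits) != 10:
        return False
    return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

def catalogue_url(base, isbn):
    if not isbn10_valid(isbn):
        raise ValueError("bad ISBN-10 checksum")
    return f"{base}/isbn/{isbn.replace('-', '')}"

url = catalogue_url("http://library.example.org", "0-596-00797-3")
```

If every catalogue answered URLs of that shape, bookmarklets and mashups would work against all of them without per-vendor code.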
I will also try to finish the blog posting of a long-overdue review of the 4 SOA / Web Services books I recommended at the end of the article.
You can of course read more as well in my (sometimes overlapping) blog categories on Service-Oriented Architecture and Web Services.
As well, Peter Murray has been advocating for library SOA, and has set up an aggregator to gather various blog postings on the topic:
Finally, here's the full sidebar that I wrote:
Library SOA
In the library world, we are often bombarded with new technology terms and ideas.
In this article, I will discuss Service-Oriented Architecture (SOA) and Web Services, providing a framework for understanding their differences and complementary nature.
In this way, it should become evident which aspects will be of enduring value, and which will change as technology evolves.
Often SOA and Web Services are used interchangeably but they are actually quite different. SOA is a methodology for software architecture and for guiding software development. The library technology landscape is littered with towering monoliths of software and teetering siloed systems. We shouldn't feel too bad about this: when catalogues and digital libraries were being developed, these were reasonable implementations. It is only as the software engineering field has evolved that we have realized the problems caused by developing closed, inflexible systems.
SOA is a way of thinking about software, of breaking it down into appropriately-sized component parts, and minimizing the dependencies between those parts. It also helps to guide your thinking by emphasizing reusability. Having reusability as a core consideration reminds the developers that their code is not ephemeral - it will almost certainly be around for a long time, which means that it is critical to make it maintainable and extendable.
Web Services are a technology that can be used to implement a Service-Oriented Architecture. They have some characteristics that mesh well with the requirements of an SOA. But Web Services are just the latest technology to come, and are sure to be replaced or added to by future developments. As well, there is a potential danger of complexity creep, as more and more WS-* standards are developed.
So the important thing is to take the concept of Web Services, that of providing a well-defined interface, an API that exposes useful capabilities or data from within your organization, and select a technology as appropriate. This may be full Web Services with SOAP, it may be a simple HTTP REST interface, or both, or neither. It’s the idea that’s important, not the particular technology. In fact, what is a service, within an SOA? It’s simply a bundle of functionality with a well-defined interface.
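That definition — a service is a bundle of functionality with a well-defined interface — can be made concrete in a few lines. Here the same "holdings lookup" is a plain function, and a REST-style path is just one way to reach it; all names and data are hypothetical:

```python
# A service is functionality behind a stable interface; the transport
# (SOAP, REST, in-process call) is a separate choice. Data is invented.
HOLDINGS = {"0596007973": ["Main Library", "Science Branch"]}

def get_holdings(isbn):
    """The service proper: ISBN -> list of locations holding it."""
    return HOLDINGS.get(isbn, [])

# A REST-flavoured front door mapping path prefixes to services.
ROUTES = {"/holdings": get_holdings}

def dispatch(path):
    # e.g. "/holdings/0596007973" -> get_holdings("0596007973")
    prefix, _, arg = path.rpartition("/")
    return ROUTES[prefix](arg)

result = dispatch("/holdings/0596007973")
```

Swapping SOAP for REST (or whatever comes next) changes only the `dispatch` layer; the service definition survives the technology churn, which is the point of the paragraph above.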
When developing software, regardless of methodology or technology, it is important to have the concept of sustainability in mind. There is a balance of course between long-term benefits and short-term gain. Many will argue that too much design or planning interferes with organizational agility, that it is better to rapidly deploy something, than to tilt against the windmill of perfectly engineered software.
But the fact that we now have OPAC 2.0 and “new ILS” initiatives tells us that we need to learn the lesson of sustainable software. To put it bluntly: with the library catalogue, we slowly built a software system that no longer meets all of our needs. With mashups and quickly hacked APIs, we run the risk of rapidly building software systems that will not be able to meet our needs.
Service-Oriented Architecture is the latest, and by no means perfect, method of addressing enduring challenges of software design. It is a new way of describing and communicating the concepts behind the software you build; a way of maintaining the essential conceptual integrity needed to successfully build sustainable computer systems.
SOA is a framework that sits above the development process itself. SOA does not require a waterfall model of development. We know a failed approach is trying to do all the design up front and then, perhaps after substantial time has passed in the design phase, writing the code. In fact, I believe the best results will be obtained by iterating the architecture, that is, continuously refining the framework by going back and forth between implementing defined services, and updating service definitions based on the results. I think it is important to remember that modern software engineering is about the development of software as a conversation between design and implementation, not a one-way street.
Like any methodology, there is a risk that SOA can lead to an excess of complexity, and to trying to over-determine requirements that may change quickly. Be wary of elaborate SOA processes that are better matched to giant Fortune 500 companies than to the needs of basic library systems. But with all those caveats in mind, the core of SOA is quite simple: define what functions the business wants to perform, and then break those functions down into the smallest useful pieces.
This is the concept of service granularity: identify services that are large enough to provide substantial useful functionality, without being so large that they are in danger of becoming new silos. As luck would have it, we already have a fairly good idea of basic library functions, as defined by our current catalogues. We first need to break those services out of the ILS silos, and then look at adding new capabilities beyond what the ILS could offer.
There is a possibility for us to all escape the isolation of our OPAC islands, by working together to create common library service definitions, so that we move beyond the catalogue and are interoperable right from the start. I believe this is a tremendous opportunity for the library community, because standard services are the very foundations upon which innovation is built. We look to Amazon and want to capture some of the software development energy that builds upon Amazon’s services, but we have to realize that behind the scenes, Amazon has an internal SOA that helps to guide and sustain its service framework.
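A "common library service definition" in this sense is just a shared interface that every ILS back end implements, so client code never touches the silo directly. A minimal sketch — the class and record names are hypothetical:

```python
# Shared service definition: clients code against CatalogueSearch,
# and each vendor or open source ILS supplies its own implementation.
from abc import ABC, abstractmethod

class CatalogueSearch(ABC):
    @abstractmethod
    def search(self, query: str) -> list:
        """Return identifiers of matching records."""

class DemoCatalogue(CatalogueSearch):
    """A toy in-memory implementation standing in for a real ILS."""
    def __init__(self, records):
        self.records = records  # record id -> title

    def search(self, query):
        q = query.lower()
        return [rid for rid, title in self.records.items()
                if q in title.lower()]

cat = DemoCatalogue({"b1": "Open Access Primer", "b2": "Cataloguing Rules"})
hits = cat.search("open")
```

Two libraries running entirely different systems would then be interoperable at the interface level from day one — which is what "interoperable right from the start" means in practice.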
SOA is, in a way, like infrastructure. We benefit enormously from the fact that electrical, water, and other services in buildings are standardized, with uniform interfaces and attributes. I don’t think many people would want to work in an environment of “agile electricity”, where every wall socket had a different interface and electrical characteristics. That basic foundation of standards is what SOA can provide to the library world.
We all know that there are successful standards efforts, and there are those that languish. This is the moment, as libraries are starting to explore the world of Web Services, for us to work together as organizations to put in place the infrastructure, the simple set of basic standards, the library Service-Oriented Architecture that will form a platform for sustainable innovation for years to come.
Posted by Richard Akerman on July 16, 2007 at 07:46 AM in Academic Library Future, OPAC, Service-Oriented Architecture, Software Development, Web Services | Permalink | Comments (1) | TrackBack (0)
In 1973, the science fiction and fantasy writer Ursula K. Le Guin gave a talk at the Pacific Northwest Library Association conference, which turned into an article "Why Are Americans Afraid of Dragons?" in PNLA Quarterly 38, Winter 1974 (unfortunately not online). The essay is collected in The Language of the Night: Essays on Fantasy and Science Fiction.
Since we're in Harry Potter season, it seems appropriate to revisit the topic.
As you may know, the topic of "witchcraft" seems to have a thread running in American society from the days of Salem, including the strange idea (perhaps unique to a particular strand of American religion?) that books like Harry Potter are a bad influence. A thread strong enough for The Onion to satirise in the classic "Harry Potter Books Spark Rise In Satanism Among Children", which had enough of an impact to merit an entry in Snopes Urban Legends.
I was interested to see if this had also been the case over 30 years ago, but actually Le Guin's essay turns out to be a scathing critique of general American cultural conservatism - no mention is made of religion whatsoever. Well, she does connect to the Puritan work ethic, but not to the sort of fundamentalism behind some of the anti-Harry rhetoric in the US these days.
In wondering why Americans are afraid of dragons, I began to realize that a great many Americans are not only antifantasy, but altogether antifiction. We tend, as a people, to look upon all works of the imagination either as suspect, or as contemptible.
...
"Fairy stories are for kids. I live in the real world."
Who speaks so? ... It is, I fear, the man in the street--the hardworking, over-thirty American male--the men who run this country.
Such a rejection of the entire art of fiction is related to several American characteristics: our Puritanism, our work ethic, our profit-mindedness, and even our sexual mores.
To read War and Peace or The Lord of the Rings plainly is not "work"--you do it for pleasure. And if it cannot be justified as "educational" or "self-improvement," then, in the Puritan value system, it can only be self-indulgence or escapism. For pleasure is not a value, to the Puritan; on the contrary, it is a sin.
Equally, in the businessman's value system, if an act does not bring in an immediate, tangible profit, it has no justification at all.
Have these elements of American culture changed in the past thirty years? Or is it that capitalism, which must bring endless growth and at least the appearance of novelty, must eventually expand to encompass everything, every subculture? Or maybe it was Star Wars that gave it a push? Pulling in millions of dollars for a space opera in 1977 certainly would have gotten the attention of even the dullest businessman. But I think at most it became the case that it was more acceptable to use SF and fantasy to market to kids, not to adults. When did the real adult crossover start? When the Star Wars generation became adults?
It's clear that Harry Potter has been a major crossover success, to the point of having separate covers for adults. And the Lord of the Rings movies brought some wonderfully executed adult fantasy to the screen. Could the Lord of the Rings movies have had the same success in the 1970s? I know there was a phase of popularity coming out of the "hippie" days - was it the hippies that turned us all on to fantasy?
Did the return of Star Trek with the Next Generation in 1987 bring a ubiquitous presence to television science fiction? I know I can't switch a channel these days without coming across Trek or some other flavour of sci fi. Was it a cultural shift or was it just Paramount finding it can make piles of money?
With shows like Lost, Battlestar Galactica and Heroes, with complex plots aimed directly at adults, are we in a new era? Or is it just that the market has fragmented into so many pieces that what would have been niche programming now has an acceptable fraction of the audience?
And has this acceptance of fantasy penetrated to the ruling class that Le Guin describes? I don't have any impression that it has.
I think that a great many American men have been taught... to repress their imagination, to reject it as something childish or effeminate, unprofitable, and probably sinful.
Now I doubt that imagination can be suppressed. ... If it is rejected and despised, it will grow into wild and weedy shapes; it will be deformed. At its best, it will be mere ego-centered daydreaming; at its worst, it will be wishful thinking, which is a very dangerous occupation when it is taken seriously. ... Nowadays, with our secular Puritanism, the man who refuses to read novels because it's unmanly to do so, or because they aren't true, will most likely end up watching bloody detective thrillers on television, or reading hack westerns or sports stories, or going in for pornography... It is his starved imagination, craving nourishment, that forces him to do so. But he can rationalize such entertainment by saying that it is realistic--after all, sex exists, and there are criminals...and also by saying that it is virile, by which he means that it doesn't interest most women.
Hmm, ego-centered wishful thinking taken seriously? Does that sound like any of the country-runners these days? CSI and porn seem to be going strong too. Has the culture just grown to encompass more options? Or is there a mainstream culture that is much more diverse, and a "serious old men" culture that is still clinging to a distaste for dragons? When was the last time a politician said they were reading a science fiction or fantasy book? But they have no problem saying they faithfully watch 24.
Have the geeks gone mainstream, or is the mainstream just so big and ill-defined now that everything is mainstream?
UPDATE 2007-07-16: I ran across this Washington Post article
Harry Potter and the Death of Reading
More than half the adults in [the United States] won't pick up a novel this year, according to the National Endowment for the Arts. Not one. And the rate of decline has almost tripled in the past decade.
That statistic startles me, even though I hear it again and again. Whenever I confess to people who work for a living that I'm a book critic, I inevitably get the same response: "Imagine being able to sit around all day just reading novels!" Then they turn to each other and shake their heads, amazed that anything so effete should pass for a profession. (I can see it in their eyes: the little tufted pillow, the box of bonbons nearby.) "I don't read fiction," they say, suddenly serious. "I have so little time nowadays that when I read, I like to learn something." But before I can suggest what one might learn from reading a good novel, they pop the question about The Boy Who Lived: "How do you like 'Harry Potter'?"
Has reading Harry simply become a typical American mega-event, where participation is to some extent semi-mystical, the idea being that somehow being associated with a big success may cause some of the success to rub off on you?
Are we moving into a Long Tail world, or is Harry a tail-less dragon that's eaten up all the casual readers?
Do the library patron trends support the assertion of the decline in fiction reading? (I work in a science library, so it's hard to know what the fiction trends are.)
UPDATE 2009-05-02 There's nothing like going right to the source: Ursula K. Le Guin tells me if Americans are Still Afraid of Dragons.
Posted by Richard Akerman on July 14, 2007 at 11:25 AM in Books | Permalink | Comments (2) | TrackBack (0)
I've installed the new Twitter widget.
But it seems to me it kind of misses the point. Most people never see my blog page, they only see my feed.
Adding a new content stream should offer the option to merge it into your RSS feed, not just display it on your page; there is no indication that the new Twitter widget does this.
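Mechanically, "merge it into your RSS feed" just means interleaving items from two feeds by publication time. A sketch with feed items simplified to (timestamp, title) pairs — the data is invented:

```python
# Interleave two newest-first feeds into one combined newest-first feed.
# heapq.merge with reverse=True preserves descending order across inputs.
from heapq import merge

blog_items = [(1184300000, "blog: netConnect for July is out"),
              (1184200000, "blog: library SOA sidebar")]
tweets = [(1184250000, "tweet: installed the Twitter widget")]

combined = list(merge(blog_items, tweets, reverse=True))
```

A widget that only renders on the page does none of this, which is why feed-only readers never see the new stream.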
Posted by Richard Akerman on July 13, 2007 at 08:15 AM in Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
I've often said that if SOA is about technology then its another minor IT improvement that won't deliver a fraction of the benefit that the hype promises.
So why do I think that SOA is different from the way most people architect IT solutions and why do I think it will have an impact?
Because simply put I think that SOA is about changing the way people think, its about making them think about Business Services and being focused on business objectives first, and technology objectives second.
Service Architecture - Why SOA is different
Posted by Richard Akerman on July 13, 2007 at 07:59 AM in Service-Oriented Architecture | Permalink | Comments (0) | TrackBack (0)
I think there's a big problem: there's a set of presence-based technologies (IM, Twitter, Facebook) whose entire lifespan has coincided with a single generation of high school and university students.
These students are very mobile, what with skipping classes and going to the pub, so they never know where other students are. As well, their friendships and relationships are constantly shifting. It's not at all clear that these technologies will endure when they enter the work world and have a more stable daily pattern. My entire Facebook presence stream consists of "at work", "now at home", repeat, and my group of friends stays pretty much static year-over-year. I think that's a reality of the "over 30" life.
So while I'm not sure that Facebook will die, I'm not convinced that tools for dynamic lives are needed by people with settled routines.
Remember that Facebook was just invented in 2004 - and many other Web 2.0 tools are even newer. A point I tried to make in my Timeline slide in a Web 2.0 presentation is how very new all of this stuff is. We should be very wary of projecting these very recent developments far into the future.
Posted by Richard Akerman on July 10, 2007 at 02:17 PM in Web/Tech | Permalink | Comments (2) | TrackBack (0)
If you're passing through a dog park in Boston in the coming months and happen to catch a glimpse of a funny little device hanging off a pooch's collar, don't be surprised. A startup called SNIF Labs is gearing up to beta test a technology designed to help dogs--and their owners--become better acquainted.
SNIF Labs--the company's name is short for Social Networking in Fur--is developing what its website calls "a custom radio communications protocol" that allows special tags dogs wear on their collars to swap dog and owner information with other SNIF-tag users. When two dogs wearing tags come within range of each other, the tags start to swap dog and even owner information.
Once owners are back home and using the company's social-networking service, they can trade information about their dogs and themselves online.
from MIT Technology Review
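The behaviour the article describes is easy to model: each tag exchanges profiles with any tag that comes within radio range. Here is a toy sketch; the class name, the 10-metre range, and all the dogs and owners are made up for illustration, and the real SNIF protocol is of course a radio protocol, not a Python method call.

```python
import math

class SnifTag:
    """Toy model of a proximity tag: when two tags come within radio
    range, they exchange dog and owner profiles. The range threshold
    and all names here are invented for illustration."""
    RANGE_METRES = 10.0

    def __init__(self, dog, owner, x, y):
        self.dog, self.owner = dog, owner
        self.x, self.y = x, y
        self.met = []  # profiles collected from nearby tags

    def distance_to(self, other):
        return math.hypot(self.x - other.x, self.y - other.y)

    def maybe_swap(self, other):
        # Exchange profiles only when the two tags are within range
        if self.distance_to(other) <= self.RANGE_METRES:
            self.met.append((other.dog, other.owner))
            other.met.append((self.dog, self.owner))

rex = SnifTag("Rex", "Alice", 0, 0)
fido = SnifTag("Fido", "Bob", 3, 4)        # 5 m away: in range
rover = SnifTag("Rover", "Carol", 30, 40)  # 50 m away: out of range

rex.maybe_swap(fido)
rex.maybe_swap(rover)
print(rex.met)  # [('Fido', 'Bob')]
```

The social-networking service then has an easy job: each tag comes home with a list of the profiles it met at the park.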
Posted by Richard Akerman on July 10, 2007 at 01:58 PM | Permalink | Comments (0) | TrackBack (0)
The Nikon D2X, the company’s current top-of-the-line SLR model ($5,100), works with an optional MC-35 GPS Adapter cord ($139) that connects with a standard G.P.S. receiver (which you must also buy) to automatically save location coordinates with each photograph.
But G.P.S. is starting to show up among lower-priced cameras. The new Ricoh 500SE (about $1,000), a point-and-shoot model aimed at outdoor enthusiasts, has a built-in G.P.S. device. It’s even showing up on camera phones, including the Nokia N95, though the $749 price is still a bit steep.
New York Times - Snapshots That Do More Than Bore Friends - June 10, 2007
via EveryTrail Blog
Yes, to be fair, I have written about or linked to these high-end products in various places, but I have also tried repeatedly to emphasize that those of us without thousands of dollars to spare can accomplish great automatic photo geocoding with a $100 GPS logger. I think maybe I need to emphasize that point more.
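The logger approach works because the geocoding is just timestamp matching: for each photo, find the GPS track point nearest in time. A minimal sketch, with an entirely invented track (the coordinates and times are placeholders, and real tools also handle camera-clock offsets and write the result into the photo's EXIF tags):

```python
from bisect import bisect_left
from datetime import datetime, timedelta

def geocode_photo(track, photo_time):
    """Assign a (lat, lon) to a photo by picking the GPS track point
    nearest in time. `track` is a time-sorted list of
    (timestamp, lat, lon) tuples from a GPS logger."""
    times = [point[0] for point in track]
    i = bisect_left(times, photo_time)
    # Look at the track points just before and after the photo time
    # (clamped to the ends of the track), and keep the closer one.
    candidates = track[max(i - 1, 0):i + 1]
    nearest = min(candidates, key=lambda p: abs(p[0] - photo_time))
    return nearest[1], nearest[2]

# Hypothetical one-point-per-minute track
t0 = datetime(2007, 7, 3, 12, 0)
track = [(t0 + timedelta(minutes=m), 45.42 + m * 0.001, -75.69)
         for m in range(10)]

lat, lon = geocode_photo(track, t0 + timedelta(minutes=4, seconds=40))
print(lat, lon)
```

With the camera clock synced to the GPS clock, this nearest-point match is usually within a few metres of where the shutter was pressed.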
Previously:
June 28, 2007 Wired gets it wrong on geocoding photos
Posted by Richard Akerman on July 03, 2007 at 12:28 PM in Mapping, Photo | Permalink | Comments (0) | TrackBack (0)
CBS 5 KPIX San Francisco has a video report on photo geocoding (there's a brief ad at the beginning).
via EveryTrail Blog
Posted by Richard Akerman on July 03, 2007 at 12:08 PM in Links to Video, Mapping, Photo, Television, Travel | Permalink | Comments (0) | TrackBack (0)
Phil Bradley presented Practical uses for Web 2.0 in a library environment at Umbrella 2007 and then blogged Web 2.0 questions.
Posted by Richard Akerman on July 03, 2007 at 07:50 AM in Conference, Links to Presentations, Web/Tech | Permalink | Comments (0) | TrackBack (0)
OK, I've got no fingerprinty giant-screen iPhone, but my Sony K790 does have a few minor advantages, such as:
- half the price
- better camera
- camera has flash
- can communicate with a variety of Bluetooth devices, including external GPS
- user storage expansion using cheap M2 memory cards
- FM radio
- runs Java mobile apps
- removable battery
- available in unlocked versions / SIM(s) of your choice
Anyway, enjoy your expensive hypephones, if you've got 'em.
Think how happy you'll be next year when there's a cheaper, better model.
And yes, in case you're wondering, the K790 is also obsolete, soon to be replaced by the K850 with even more megapixels and additional features.
Posted by Richard Akerman on July 02, 2007 at 08:10 PM | Permalink | Comments (0) | TrackBack (0)
Honestly, what else can you get three tonnes of for only a hundred bucks?
(Offsets for my recent travel, from MyClimate.)
Note: MyClimate can be slow to load from Canada, but it is there.
Posted by Richard Akerman on July 02, 2007 at 03:00 PM in Carbon Offset, Travel | Permalink | Comments (0) | TrackBack (0)
This year somehow has had threads of Linnaeus, Lamarck and the Jardin des Plantes (botanical garden) running through it.
When I was visiting the Jardin in 2004, the rather grand building at the end of the promenade was of course unmissable, but I have to say I didn't really think about what it might be.
I made a point of returning to the garden in April 2007, but I still only used the building as a photo backdrop.
It wasn't until I read Paris to the Moon in May, post-visit, that I realized there were exhibits in the grand buildings, and it wasn't until later that month when I read Everything is Miscellaneous that I discovered that Lamarck had been a professor at the institution that includes the grand garden buildings, the Muséum national d'histoire naturelle.
A week after that, at the very end of May, I pointed to Linnaeus and his passion for order, in National Geographic.
Then in June I found myself at the IATUL conference in Sweden, with a day trip to Uppsala. Having not really paid attention to the details of the excursion beforehand, it wasn't until I got there that I realized there would be a great deal about Linnaeus, since he was Chair of Botany at Uppsala University (Uppsala universitet), and this year is the 300th anniversary of his birth. He's buried in Uppsala Cathedral (Uppsala domkyrka); the tomb, inset in the floor, is surrounded by small plants and flowers. There was also an exhibit at Carolina Rediviva, the main building of the Uppsala University library.
Although there weren't many display cases, the exhibit was most ably described by our guide, who, if I understood correctly, also happens to be the head of the library. I learned that as part of his system of the world, Linné also had a classification for minerals, but it didn't catch on.
Then we went to Linnaeus' Hammarby, which was his summer cottage. Unless you are particularly fascinated by rooms in a cottage, there's not that much to see there, although the grounds are pleasant.
You can read more about the Swedish celebrations of his life and work at
And as the conclusion of my Swedish Linnaeus thread, I received the book Carl Linnaeus as a speaker gift.
Back in Paris for a day, I walked to the Jardin des Plantes, determined to finally explore the natural history museum. To my surprise, I found myself on Linnaeus Street
The Grande Galerie de l'Évolution really is quite grand
You can see the Mineralogy Museum just to the left of it.
Inside the Gallery is a large open space, dimly lit, with animal reconstructions as its main focus.
The centre of the Gallery is given over to a grand procession of African animals.
There are also book displays (I don't know if they are the original books) of the writings of Lamarck (first, naturally) and then Darwin.
It's hard to convey a good sense of the interior, but I can at least say that on a warm day when tourists were presumably clambering over one another at the main Parisian attractions, the gallery was cool and (perhaps sadly) quiet.
Posted by Richard Akerman on July 02, 2007 at 02:41 PM in Photo, Science, Travel | Permalink | Comments (0) | TrackBack (0)