Google Checkout - A few places to shop
via Google Checkout Launched- Is this the paypal killer?
via Recently Furled
Posted by Richard Akerman on June 29, 2006 at 02:53 PM in E-Commerce | Permalink | Comments (0)
The latest round of articles has been released in the Nature peer review debate.
In this round, I was particularly interested in
Systems: Online frontiers of the peer-reviewed literature by Theodora Bloom of BioMed Central.
The Internet is allowing much more interactive science publishing
Online tools can be used to improve the accuracy, transparency and usefulness of the scientific literature by moving away from the traditional emphasis on closed peer review. Given the capability for post-publication amendment of articles, the scientific articles themselves and the peer-review process will soon be profoundly different from today’s standard.
My name did get spelled wrong though :( On the plus side, this happens all the time anyway.
One of the most important issues raised in the article is article versioning:
But if an article can be amended and commented on after publication, what should be seen as the definitive version? Some journals, like FASEB Journal and Genome Biology, already provide different versions of a single article for print and online audiences. Indeed, any article that includes a database or movie available only online can be considered in this light. Many traditionalists object to the idea of multiple versions of scientific articles, condemning them as duplicate publication of the same data.
I think this is going to be a huge challenge as we see more and more articles circulating outside the confines of journals. I plan to write more on this topic later.
Posted by Richard Akerman on June 29, 2006 at 07:30 AM in Publishing, Science, Web/Tech | Permalink | Comments (1)
There is this web thing called the default page. The default is usually index.html, default.html, or default.htm.
So if I have a URL like
http://www.nature.com/nature/multimedia/googleearth/index.html
I actually only need
http://www.nature.com/nature/multimedia/googleearth/
and it will work fine; it's essentially the same URL.
So who would ever go around bothering to cut out the index.html bit?
Err, me.
If it doesn't need to be there, I'm deleting it.
But this causes a bookmarking problem. If I bookmark the above with the index.html, delicious happily locates 8 other bookmarkers.
If I bookmark it WITHOUT the index.html, there I am, all alone.
But surely this is fundamentally the exact same URL? Am I asking too much of delicious to consider these two the same?
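For what it's worth, normalizing this away on the service side wouldn't be hard. Here is a minimal Python sketch of the kind of URL cleanup a bookmarking service (or a bookmarklet) could do before comparing URLs - the helper name and the list of default pages are my own assumptions, not anything delicious actually implements:

from urllib.parse import urlsplit, urlunsplit

# Strip a trailing default-page filename so that
# ".../googleearth/index.html" and ".../googleearth/" compare equal.
DEFAULT_PAGES = ("index.html", "index.htm", "default.html", "default.htm")

def normalize_url(url):
    scheme, netloc, path, query, fragment = urlsplit(url)
    for page in DEFAULT_PAGES:
        if path.endswith("/" + page):
            path = path[:-len(page)]  # keep the trailing slash
            break
    return urlunsplit((scheme, netloc.lower(), path, query, fragment))

a = "http://www.nature.com/nature/multimedia/googleearth/index.html"
b = "http://www.nature.com/nature/multimedia/googleearth/"
print(normalize_url(a) == normalize_url(b))  # True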
Posted by Richard Akerman on June 28, 2006 at 09:42 PM in Bookmarking | Permalink | Comments (0)
It asks me to select one:
What are you?
err, I can't be both?
So anyway, I can't say much about WidgetBox yet, other than that the idea of a marketplace for widgets built on web services sounds neat. But I'm not sure that's what it is, exactly. And I'm not sure what they mean by "widget". Who got the brilliant idea to take a generic term, and try to productize it? A Yahoofabulator Widget? An Apple Widget? A widget-is-a-kind-of-thing-I-put-on-my-website widget?
via TechCrunch
PostApp will manage the process of turning web services into widgets that bloggers, social network users and others can insert into their pages. Outside developers will create web services, submit them to PostApp for transforming into widgets and content publishers like bloggers, auction sellers and social network users will select the widgets they want from the WidgetBox marketplace. The service will also manage the money for widgets that involve financial transactions like affiliate links or subscription, though developers will have first say in determining the business rules of their projects. PostApp will act as a master affiliate or subscription center, as appropriate.
Posted by Richard Akerman on June 28, 2006 at 09:18 PM in E-Commerce, Web/Tech | Permalink | Comments (0)
To be an Enterprise Architect is to live in a bit of a temporal disconnect from the rest of the world.
This disconnect is two-fold:
1) Architecture, to my mind, must be looking at the continuously receding 3-year time horizon.
That creates the critical bridge between the day-to-day and month-to-month activities of the organization, and the organization's stated 5-year strategic plan.
2) The people in your organization may not even be living in the present. They may have a technological viewpoint that lags several years behind.
So you're in the meeting room in 2006, and you're trying to have a conversation based on where you think things will be in 2009, meanwhile, the organization is still back in 2001.
That is a huge challenge.
In fact, I would say one of the biggest single challenges is trying to find ways to move people's thinking forward. And I don't have any solution. So far my recommendation is:
inform and be ready
Tell the organization that based on their goals and the current technology environment, they will need X, do the necessary models to support X and then... wait. Generally in about 9 months, after having completed their journey of discovery, the organization will come to you and say "we've just discovered, we need X".
Is this business-driven?
To me, as long as you are working from the strategic plan created by the business, you are business-driven. A key role of the architect is to be the Keeper of the Plan, and the Great Reminder.
It's not that you can't change the plan; it's just that changing it should be a conscious decision, not a function of drift. If you said your strategic goal for 2010 was to build a soaring stone cathedral, and you land in 2010 having built only a bunch of wooden houses and two sheds... to me, that gap between plan and outcome is the biggest single metric indicating success or failure of the EA.
Anyone else have ideas on key metrics to qualitatively or quantitatively assess the success of the EA?
Any thoughts on bridging the time warp between 3 and 5 year strategic visions and the day-to-day of project and operations execution? Better ideas for moving the organization's technology thinking forward, rather than just "inform and be ready"?
Posted by Richard Akerman on June 28, 2006 at 09:05 PM in Enterprise Architecture | Permalink | Comments (0)
One of Canada's largest Internet service providers is warning its customers that Big Brother is lurking on-line, with the federal government expected to revive an Internet surveillance bill.
If the legislation is reintroduced, it could allow police unfettered access to personal information without a warrant, experts warn.
Bell Sympatico has informed its customers that it intends to "monitor or investigate content or your use of your service provider's networks and to disclose any information necessary to satisfy any laws, regulations or other governmental request."
Bell Sympatico's new customer service agreement, which took effect June 15 [2006], is a clear signal the telecommunications industry expects the Conservative government to revive the surveillance law, said Michael Geist, an Internet law professor at the University of Ottawa.
Globe and Mail (CP) - Big Brother watching you surf? - June 27, 2006
Posted by Richard Akerman on June 28, 2006 at 12:49 PM in Current Affairs | Permalink | Comments (0)
I couldn't find a "Connotea This" FeedFlare Unit, so I made one.
Here is connotea-this.xml
It's based on Furl This, by Emily Robbins, in the Flare Catalog.
Thanks go to Martin Flack for providing me with the necessary Connotea syntax.
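For the curious: the flare itself is a small XML file that FeedBurner reads, but the heart of it is just a per-item link to Connotea's add-bookmark page. Here's a rough Python sketch of building that kind of link - the endpoint and parameter names are my assumptions for illustration, not necessarily the exact syntax Martin provided or what connotea-this.xml uses:

from urllib.parse import urlencode

# Assumed Connotea "add bookmark" endpoint and parameter names --
# purely illustrative, not the exact syntax used in connotea-this.xml.
CONNOTEA_ADD = "http://www.connotea.org/addpopup"

def connotea_link(item_url, item_title):
    # Build the per-item link a "Connotea This" flare would point to.
    return CONNOTEA_ADD + "?" + urlencode(
        {"continue": "confirm", "uri": item_url, "usertitle": item_title}
    )

print(connotea_link("http://scilib.typepad.com/science_library_pad/",
                    "Science Library Pad"))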
Technorati tag: feedflareunit
Posted by Richard Akerman on June 27, 2006 at 10:06 PM in Bookmarking, RSS Feed Tools, Web/Tech | Permalink | Comments (0)
Workflow is an important new way of thinking about software design and integration.
To some extent, thinking about workflow helps one to design composable services.
EDUCAUSE has a workshop starting tomorrow.
Advanced CAMP: Workflow Models and Technologies
June 28-30, 2006
Wyndham Burlington, Burlington, VT
The Advanced CAMP: Workflow Models and Technologies provides a chance for attendees to explore the requirements, models, and needs for workflow in a highly interactive setting. The sessions will
- cover the security usage patterns in workflow, including the role of document digital signatures,
- examine emerging technologies, including web services, associated with workflow,
- discuss current and ongoing deployments in higher education and
- explore how middleware such as directory services, Signet, Grouper, and Shibboleth projects can be enhanced by workflow and could be used to facilitate the use of workflow deployments in your organization.
My main interest in this area is WS-BPEL workflows, particularly ones related to scholarly communications or e-Science.
Posted by Richard Akerman on June 27, 2006 at 11:17 AM in Conference, Workflow | Permalink | Comments (0)
I'm experimenting with adding FeedFlare elements in my blog postings (they are already turned on for my feed). You should see items for emailing, delicious, Furl and Digg.
I'd like to add one for Connotea too, but I couldn't find a "Flare Unit" for Connotea.
(UPDATE 2006-06-28: I made a Flare Unit for Connotea.)
For more info see Everything TypePad - Get some Flare for your blog.
Previously:
March 14, 2006 meta: about my RSS feed
December 14, 2005 FeedBurner launches FeedFlare enhancements
Posted by Richard Akerman on June 27, 2006 at 09:53 AM in RSS Feed Tools, Web/Tech, Weblogs | Permalink | Comments (0)
Surfing Digg led me to 9 Ways to Misunderstand Web Standards which led me to the awesome 2001 article Putting the torch to seven straw-men of the meta-utopia by Cory Doctorow.
He makes great points about the futility of the Semantic Web dream, including:
Previously:
January 17, 2006 in which I learn the HTML acronym tag
Posted by Richard Akerman on June 24, 2006 at 09:36 PM in Metadata, Web/Tech | Permalink | Comments (0)
It’s time for Canada’s history to be accessed and preserved in a systematic, enduring way – one that is accessible for Canadians – and other citizens of the world. AlouetteCanada hopes to fill that role.
“Our vision is that Canadians will be able to know themselves through their heritage and the world will have the opportunity to better know Canadians.” declared John Teskey, President of the Canadian Association of Research Libraries. “Our common aim is to provide easy online access to the extraordinary wealth of written and other records by and about Canadians.”
“Libraries, archives, museums and other interested communities are all invited to play their part,” stated Carole Moore, Chair of the AlouetteCanada Steering Committee. “The aim is to develop a coordinated plan for online access to Canada’s recorded heritage.”
This is an open invitation for everyone who is willing and able to come and play their own unique part in developing our collective Canadian online memory. We would like to hear from local history societies, archives organizations, genealogists and others across the country.
from CARL - AlouetteCanada Open Digitization Strategy Launched (PDF) - June 21, 2006
They have a website now at
CISTI's Mary Low is on the Communications and Marketing Committee, and CISTI DG Bernard Dumouchel is on the Charter Projects Committee.
Technical Committee members include
(Please let me know if there's anyone's blog I've missed.)
Is it just me, or do librarians love committees?
Previously:
December 29, 2005 digitization for Canadian books: Alouette
Posted by Richard Akerman on June 24, 2006 at 11:21 AM in Books, CISTI, Digital Library | Permalink | Comments (1)
I think the academic library has a major role to play in defining, managing, and possibly hosting a data repository for its community. ACRLog has a pointer to some discussion on this topic
Have you given much thought recently to the amount of data your researchers generate? Probably not. Where’s it going and who’s archiving it? Thanks to digital technology, scientists are generating vast amounts of valuable data that, months later, may be irretrievable or indecipherable. This could be a job for the academic library. This week’s Chronicle [of Higher Education] has a feature article about the “data deluge” and how some institutions are dealing with the challenges it brings.
Posted by Richard Akerman on June 23, 2006 at 07:11 AM in Academic Library Future, Data Management | Permalink | Comments (0)
My brief opinion piece "Technical solutions: Evolving peer review for the internet" has been posted to the Nature peer review debate. Nature (2006) | doi:10.1038/nature04997
You can leave your feedback about it on the Peer Review Debate Comments blog section.
I have supplementary bookmarks at http://www.connotea.org/user/scilib/tag/peerfocus
I want to explore some of the concepts in the article in a bit more depth.
Also at the end of this posting you will find some acknowledgements, as many people contributed to this article.
There were three main points I made:
1. we need to be cautious about the Wisdom of Crowds idea
2. peer review could be offered as a separate, stand-alone service
3. peer review alone doesn't address relevance and discovery
There is a fourth item which was part of my original charge for the article, but which I didn't really cover:
4. discussion spaces for science
Taking these one-by-one.
1. I think we have to be very careful about the Wisdom of Crowds idea.
In Technical solutions: Wisdom of the crowds, Chris Anderson explored a lot of these issues.
One of my main concerns is that Wisdom of Crowds is sometimes oversold, in the way that Service-Oriented Architecture (SOA) is. Just put together a system, sprinkle some magic Wisdom of Crowds dust on it, and hey presto, the system is continuously improved by everyone who uses it.
The stark reality of Wikipedia and of open-source projects is that, in general, a very small core group of people, operating under quite strict management and control, drives the creation and growth of the system. If you look in depth at the stats on how many people are making major, ongoing contributions to Wikipedia, I think you will find the core group is quite small. Similarly, if you look at the open-source projects on SourceForge, you will discover most of them are one-man shows or tiny teams. The poster child for open source, Linux, is really quite a small technical team of experts at its core.
2. Since one of my primary areas of activity is in Service-Oriented Architecture for library technology, I really like the idea of applying service thinking to the scientific workflow. Herbert Van de Sompel explores this in Technical solutions: Certification in a digital era.
The way I think about it is that currently when an article is submitted to a journal, you get peer review as a bundled service that is part of the publication workflow. I don't see any reason why this couldn't be a separate, stand-alone service: I write my article, I submit it to the Elsevier Peer Review Service, and it comes back certified as passing peer review.
The logical question then is, who pays? I think you have the same options as now: in the author (or "author's research funder") pays model, you pay up front, to cover the costs of peer review (which are mostly associated with the administration of the system - contacting reviewers, gathering their feedback, managing the communications). In the library (or repository) pays model, the organization that is going to be the primary host for the article pays to have it certified, in order to maintain the quality of the contents of its repository. I realize of course that I am glossing over a lot of complexity. One may certainly wonder about the possibility of then "buying" peer review - in theory the market might drive the business to services that have low certification standards. But I think that scientific reputation militates against this - certification from a well-established service is going to have the most weight.
In any case, I think this is an idea that should be explored, as we try to understand how scientific communication is evolving: does it become article-centric? repository-centric? publisher-centric? author-centric?
3. Peer review is, in my opinion, central to maintaining the quality of science. But science is also about communications, advancing our knowledge. An article may be wonderful and receive an excellent peer review, but if it doesn't lead to further discussions and additional work, is it contributing substantially to science as a whole? This is one area where the Internet may be able to shine, but it is also an area of huge challenges.
One can certainly imagine a dream research workflow environment, where as you work, relevant articles appear in your workspace. To make that happen is very difficult. What is most relevant, articles with a high number of journal citations? Linked to by a lot of web sites? Being discussed in science blogs? Assigned 3-or-more stars by your science peers?
That being said, discovering relevant information is one of the key challenges of our age, as we drown in the Internet flood. We need to create tools that enable communities to discover the information of relevance to them. An article that is tremendously helpful to an ordinary citizen suffering from cancer may be of less relevance to a leading-edge cancer researcher. The best scientific resources for teaching about cancer at a high-school level may not be the same as those for a graduate course. The underlying articles may all be excellent peer-reviewed science, but their relevance is highly contextual.
4. Discussion lists, discussion groups, discussion boards, repository discussions etc.
In part, this is an extension of the article discovery topic above, because presumably one reason you want to discover articles is in order to discuss them. I deliberately avoided any detailed coverage of the discussion topic in my opinion piece, because it would have led me into a multi-paragraph morass around moderation. Conducting discussions that are both open and relevant is a huge challenge in an Internet environment. Even back in the original USENET days, when it was primarily university students contributing, a group like sci.physics was a chaotic mess of useful peer communications, frivolity, and pointless flamewars. That's with a tiny, specialized, self-selected community. Try now to extend it to thousands and millions of people.
This is a really hard problem. The Internet is as prone to the Foolishness of Crowds as it is to the occasional bursts of Wisdom. As I write this, Science Blogs lists
I have to say, Stephen Hawking and Ann Coulter (a US right-wing pundit, for those who are fortunate enough not to know) are not two people one would often find in the same company, and Ms. Coulter may be many things, but I think even her most partisan admirers would not count scientist amongst them. This opens up a whole side discussion about ranking - popularity is about being the most-discussed, and in the US, most-discussed may be more about heated political debate. Another example: for a while, a top "most bookmarked" article in delicious and Furl was Scientists respond to Gore's warnings of climate catastrophe, even meriting a Slashdot article.
The article reads as if it is a serious scientific counterpoint to Gore's statements, and many of the comments on delicious reflect this. On Slashdot on the other hand, the top rated comments converged on the information that this was in fact a bit of partisan political hackery, cooked up by a consultant whose funding comes from oil companies.
Peer review would, I think, have immediately rejected this article. Peer discussion... well, it's all over the map. In Slashdot, high-ranked comments can bubble to the top, as part of a sophisticated moderation system. In my Slashdot view, it doesn't take much scrolling down to reach
And Who Happens to Fund the Article's Author?
(Score:5, Informative)
by goMac2500 (741295) on 17:46 Wednesday 14 June 2006 (#15535430)
Why Exxon Mobile of course!
del.icio.us and Furl have no such moderation systems for comments. So on delicious we have comments expressing enthusiasm for the article such as
"Actual Climatologists respond to Gore's warnings of climate catastrophe"
"the real climate experts weigh in on Al Gore's mis-representation and distortion of climate science"
as well as some negative comments
"right wing website trashing gore"
On Furl, the comments are pretty negative about the article
"blabbering about science without offering anything legitimate. What crap."
"how does this bullshit get written?"
Three different communities, three different comment systems, three different views of the article.
If you read delicious, you might get the impression that the article was credible. Reading Slashdot and Furl, you would get the (more accurate) impression that the article is highly biased and its science is dubious at best.
What's next?
I think it would be very interesting to explore the idea of controlled social discussion spaces for science - MySpace for scientists? Stay tuned...
I am deeply indebted to a number of people for in-depth background discussions that helped me shape this article.
They are:
Dr. Bruce Dancik, Editor-in-Chief of NRC Research Press, about peer review in general.
Cathy Norton, Library Director, MBLWHOI Library, and Director of Information Systems Marine Biological Laboratory, Woods Hole Oceanographic Institution
and
David Remsen, Information Systems Program Developer, Marine Biological Laboratory, Woods Hole Oceanographic Institution
about the uBioRSS tool (part of the uBio project)
Carl Lagoze, Senior Research Associate, Cornell Computing and Information Science
and
Sandy Payette, Researcher and Co-Director of the Fedora Project, Cornell Information Science
about the Pathways project
Euan Adie, bioinformatician, University of Edinburgh
about the PostGenomic tool
My organization, NRC CISTI, Canada's National Science Library, provided me with time to work on this article.
I also had lots of help from Nature:
Thanks to Sian Lewis (Ichabod is itchy) for her help shepherding my article through the Nature publication process, and to Maxine Clarke for her key role in providing me with this opportunity.
Previously:
June 15, 2006 new articles up in Nature Peer Review Debate
June 05, 2006 the Nature of peer review
Posted by Richard Akerman on June 21, 2006 at 08:19 AM in Publishing, Science | Permalink | Comments (1)
Another fun toy - take your Google Analytics map data and plot it on Google Earth.
Here's what mine look like
Scurvy Jake's Pirate Blog - Google Analytics and Google Earth
He has an online converter available.
via Google Earth Blog - View Google Analytics User Data in Google Earth
Here's a closeup of some European hits
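If you'd rather roll your own than use the converter, the core of the conversion is tiny: take rows of (place, latitude, longitude, visits) and emit a KML placemark for each. A minimal Python sketch, with made-up input rows since the Analytics export format isn't shown here:

from xml.sax.saxutils import escape

# Made-up sample rows: (city, latitude, longitude, visits)
rows = [
    ("Ottawa", 45.42, -75.70, 120),
    ("London", 51.51, -0.13, 45),
]

placemarks = "\n".join(
    f"  <Placemark>\n"
    f"    <name>{escape(city)} ({visits} visits)</name>\n"
    f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
    f"  </Placemark>"
    for city, lat, lon, visits in rows
)

# Note: KML wants coordinates as longitude,latitude,altitude.
kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://earth.google.com/kml/2.1">\n'
    f"<Document>\n{placemarks}\n</Document>\n</kml>"
)

with open("analytics_hits.kml", "w") as f:
    f.write(kml)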
Posted by Richard Akerman on June 18, 2006 at 11:48 AM in Mapping, Web/Tech | Permalink | Comments (0)
After complaining again that Picasa has no integration with Google Earth... turns out that it does.
In Picasa 2.5 select a photo and then Tools->Geotag... Geotag With Google Earth...
(I would call this geocoding, not geotagging, since the GPS coordinates are in the EXIF.)
Works with both Google Earth 3 and 4 (I am using version 4).
You get a crosshair (crosshairs?) and then you move the Earth around until it is in the right location.
That can include the orientation, which will be recorded in the Google Earth data.
The photo will then be geotagged (in the EXIF) and a photo image will be inserted into Google Earth.
On my computer it can take up to 10 seconds for the image to be inserted.
You get a new folder called My Picasa Pictures.
Very very cool.
I still want automated GPS track tagging, but this is a big step in the right direction.
Here's an example, with the photo selected, from Mackenzie King's formal garden at Moorside in Gatineau Park.
Tilt doesn't work: if you try to geotag with a tilted Earth, it jumps and puts the image in the wrong place. You should be able to make the placemark and then tilt it, though (more info below).
If you play around with geotagging the same image multiple times, you may get into problems, e.g. the nice popup window enlarged image will disappear. Also be aware that in the default config, it's connecting to Picasa to pull the logo, so every time you open one of those image windows, Google knows.
<a href="http://picasa.google.com/index.html"><img width="150" height="25" src="http://picasa.google.com/assets/logo.gif"></a>
By default it inserts it with a blank Author and no copyright info; you can manually insert these into the HTML description though. There doesn't seem to be any way to add this info in Picasa itself.
You can save from within Picasa (Tools->Geotag->Export to Google Earth File) but be aware that the info in Google Earth and Picasa is not dynamically linked. Basically when you click "Geotag" it sucks the photo image over to Earth, and jams the geotag (including orientation info) over to Picasa. So if you add e.g. copyright info to your Google Earth placemark, it won't show up in Picasa.
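If you want to double-check what Picasa actually wrote into the file, the coordinates end up in the standard EXIF GPS block, which any EXIF reader can show you. A rough Python sketch using the Pillow library (my tool choice, nothing Picasa-specific; the filename is made up):

from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def dms_to_degrees(dms, ref):
    # Convert EXIF degrees/minutes/seconds rationals to a signed decimal.
    d, m, s = (float(x) for x in dms)
    deg = d + m / 60.0 + s / 3600.0
    return -deg if ref in ("S", "W") else deg

exif = Image.open("moorside_garden.jpg")._getexif() or {}
gps_tag_id = next(k for k, v in TAGS.items() if v == "GPSInfo")
gps = {GPSTAGS.get(k, k): v for k, v in exif.get(gps_tag_id, {}).items()}

if gps:
    lat = dms_to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = dms_to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    print("Geotag: %.5f, %.5f" % (lat, lon))
else:
    print("No GPS data in the EXIF")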
Here's the flat .kmz from Google Earth
Here's a view with the earth tilted slightly (to do this, make the regular flat geotag and click done, then in Google Earth set your desired tilt, select the image placemark, right-click, Properties, View tab, click the button Snapshot current view. Or right-click and Snapshot view should do the same thing.)
Ogle Earth has some good step-by-step info in Picasa + Google Earth = bliss
via Google Earth Blog Picasa Web Album - Geotag Photos Using Google Earth!
I have updated my page on geocoding photos with this information.
Posted by Richard Akerman on June 18, 2006 at 10:39 AM in Mapping, Photo | Permalink | Comments (0)
If you want to take advantage of, and get noticed by, the wise crowds, the challenge is to be where the people are.
(That is a message that holds for libraries as well.)
Where the people are, as far as I can tell, is mostly looking at the first page of regular Google search results. But there are some communities. It's hard to tell what the real ones are though. Many sites are much bigger in "buzz size" than they are in actual membership. That being said, there are at least some Usual Suspects:
delicious for bookmarking
Flickr for photos
One of the reasons I think Picasa Web Albums is misguided is that that ship has sailed. Just give up and support Flickr already.
For bookmarking, however, delicious doesn't quite do it for me. I like Furl better.
But then, I'm not participating in the delicious community conversation.
So I experimented with copying my Furl bookmarks to delicious; here is the result:
http://del.icio.us/scilibfurl
I used the Python code furl2delicious.py from Anything Else: Furl to Delicious, with the necessary modifications from the comments.
Mac OS X comes with Python, so I just ran it at the command line.
Since I have 1561 bookmarks in Furl, with multiple tags, and only 599 made it to delicious, with only one tag each (UPDATE 2006-06-16: And it splits multi-word tags into multiple tags, instead of joining them), I think it is only a moderate success.
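If I were doing it again, a more direct route would be to talk to the del.icio.us API myself, since it accepts all the tags for a post in one space-separated field (which is also why multi-word Furl tags need to be joined, e.g. with underscores, before sending). A hedged sketch - the v1 endpoint and parameter names are as I recall them being documented, so treat the details as assumptions:

import requests

DELICIOUS_ADD = "https://api.del.icio.us/v1/posts/add"

def post_bookmark(user, password, url, title, tags):
    # Tags are space-separated, so join multi-word tags with underscores.
    joined = " ".join(t.replace(" ", "_") for t in tags)
    resp = requests.get(
        DELICIOUS_ADD,
        params={"url": url, "description": title, "tags": joined},
        auth=(user, password),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text  # the API replies with a small XML result document

# Example call (hypothetical credentials):
# post_bookmark("scilibfurl", "secret",
#               "http://www.nature.com/nature/peerreview/debate/",
#               "Nature peer review debate", ["peer review", "publishing"])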
A better way to maintain both sets is to get one of those bookmarklets that lets you send your bookmarks to multiple sites at once. Right now I am trying a bookmarklet built with Site Submission MultiTool - Alan's Marklet Maker. It brings up both interfaces, with the URL info filled in - about as good as you could do without a lot of custom work I guess.
Posted by Richard Akerman on June 15, 2006 at 07:51 AM in Bookmarking, Bookmarklets | Permalink | Comments (2)
http://www.nature.com/nature/peerreview/debate/
Nature is releasing new articles for the debate every Thursday; the next round will be Thursday June 22. (UPDATE 2006-06-20: I think new articles are being released every Wednesday, the next round will be Wednesday June 21.)
In this round, I was interested by
Technical solutions: Wisdom of the crowds by Chris Anderson (who blogs at The Long Tail, his posting about his article is Open Peer Review)
I think there needs to be a lot of deep thinking about the whole wisdom of crowds idea, about which I will have more to say next Thursday.
Posted by Richard Akerman on June 15, 2006 at 07:35 AM in Publishing, Science | Permalink | Comments (1)
Another mysterious unasked-for bit of functionality from Google, perhaps emerging from their now-a-public-company beancounter mentality.
For this, they hired a zillion PhDs?
Anyway, Picasa is a great photo organizer for Windows.
Some wonderful features would be:
- make it available on Mac and Linux (UPDATE 2006-10-02: Now available on Linux.)
- direct upload to Flickr
- integration with Google Earth, Google Maps and GPS data for geocoding (UPDATE 2006-06-18: I was wrong - there is a new Tools->Geotag menu that lets you manually geotag using Google Earth.)
- IPTC tags (UPDATE 2006-10-02: Picasa supports both IPTC caption and IPTC keywords. Use CTRL-K to attach keywords.)
We got... none of those.
Instead, they have come up with Picasa Web Albums.
Which are web photo albums... 1999 style.
No tags. No geocoding. No copyright notice or way to mark as Creative Commons. No custom CSS. UPDATE 2006-06-15: Also no web hit counts or web stats of any kind.
Just photos. On the web. With a slideshow. Woo.
You get 250 MB. For $25/year you can buy an extra 6 GB.
Why only 250 MB? Why limited photo storage when the GMail storage counter merrily spins extra megs beyond 2GB into your email account for free? Beats me. Beancounters I'm guessing.
There are no ads. There are no ads? Err, isn't Google's whole business model ads?
How do you find other users? I don't know.
How do you search your own images, or other people's images? I don't know.
That's right, Google, the company about search, has no search features in Picasa Web Albums.
My scepticism of their recent developments, as articulated in trying to understand Google Co-op, continues.
Anyway, if you want to see some pictures of Kingswood in Gatineau Park, Canadian Prime Minister Mackenzie King's cottage retreat, here are a few
http://picasaweb.google.com/scilib/Kingswood
Posted by Richard Akerman on June 14, 2006 at 07:17 PM in Photo, Web/Tech | Permalink | Comments (3)
Google Earth 4 beta is out.
It performs much more smoothly on my PowerBook G4 1.5GHz.
The previous version 3 was quite jittery and jumpy on my Mac, which detracted from the smooth earth navigation sensation.
Since it's beta you have to specifically go to the website and download it; it won't show up as an update.
There is also a new satellite image of Ottawa.
With clouds.
Over Parliament Hill.
Now I know this is all about image windows and automated selection and stuff, but it is kind of annoying.
via Slashdot Google Earth v4 Released: Linux Support at Last /.
UPDATE 2006-07-03: Yahoo Maps has a clear image of Parliament Hill.
Posted by Richard Akerman on June 13, 2006 at 08:02 AM in Mapping | Permalink | Comments (0)
I really enjoyed Is Strategic Planning for Technology an Oxymoron?, a 1998 EDUCAUSE article by Martin Ringle and Daniel Updegrove. It gives an overview of reasons why strategic planning can fail, and 10 guidelines for successful planning.
Strategic planning for technology is a topic that has received so much attention it hardly seems worthy of further discussion. The CAUSE Web site, for example, includes dozens of articles on strategic planning as well as copies of technology plans that have been contributed by more than eighty colleges and universities. What more needs to be said? Apparently, a great deal. In an effort to investigate strategic technology planning, we queried more than 150 technology officers in higher education around the country. The results were surprising. Roughly 10 percent of the respondents indicated that they simply don't do strategic technology planning at all, saying it is a frustrating, time-consuming endeavor that distracts from rather than contributes to the real work of building and maintaining an adequate technology infrastructure.
The vast majority of technology officers, however, devote a considerable amount of time and energy to strategic and financial planning. In most cases, their efforts follow the traditional model of institutional planning; that is, a committee or task force gathers information, conducts interminable discussions about what the institution needs, and ultimately drafts a huge document that meets with overwhelming approval by the three people who actually have time to read it. The relevance of the document to day-to-day operations, the quality of services, and the implementation of new initiatives is often questionable, although, oddly enough, few people seem to be concerned about this. There is something about the development of a strategic plan for technology that makes it worthwhile despite these shortcomings.
via Technology Planning for Health Sciences Librarians
via Library Stuff
Posted by Richard Akerman on June 12, 2006 at 07:04 PM in Technology Foresight | Permalink | Comments (0)
Trying to determine Canada's science policy directions right now is a bit like reading tea leaves, since the Federal Government's priorities are elsewhere. One does the best one can by following the presentations of some of the science policy leaders, which leads me to
The Changing Nature of Intellectual Authority (also available as PDF)
a presentation by Dr. Peter Nicholson, the President of the Council of Canadian Academies
given at the 148th ARL Members Meeting, which was held in Ottawa
A lot of interesting ideas about the challenges to the temple of scientific authority - a similar challenge is of course faced by libraries.
As a side note, based on his email address, I would expect a website to show up sometime at
Posted by Richard Akerman on June 12, 2006 at 04:57 PM in Academic Library Future, Technology Foresight | Permalink | Comments (0)
The new Journal of Web Librarianship has an editorial blog
As the editor of a new journal, the Journal of Web Librarianship, I've started looking at academic authorship and peer-reviewed journals from a different perspective. This blog is about that perspective.
Posted by Richard Akerman on June 12, 2006 at 10:45 AM in Publishing, Weblogs | Permalink | Comments (0)
Canadian Health Libraries Association
2007 Conference
May 28th - June 1st 2007
Ottawa, Ontario
Posted by Richard Akerman on June 12, 2006 at 10:12 AM in Conference | Permalink | Comments (0)
Thanks to a pointer from Paul R. Pival I activated a new feature that gives me unified FeedBurner stats for my TypePad blog. This means I get to see all my subscribers, rather than just the ones I was capturing in the FeedBurner feed. This is what the result looks like in my stats:
In other words, I'm now tracking an additional 180 or so subscribers. (UPDATE 2006-06-10: Just to make it clear, all this means is that 180 existing, untracked TypePad feed subscribers were added to my FeedBurner feed - the total number of subscribers didn't increase, it's just they weren't all counted inside of FeedBurner before.) Anyway, right now it also means there are duplicate postings in my feed. I am going to see if that sorts itself out.
Now that this feature is active, I am also probably going to switch back to TypePad Basic from Pro, since the only reason to go Pro was to direct more people to the FeedBurner feed. You shouldn't notice any difference to the site appearance or the feed.
There's more info at
UPDATE: Here's what I sent in response to Joe Kottke's comment
If I look at the raw feed itself it looks fine
http://feeds.feedburner.com/ScienceLibraryPad
but if I preview in Bloglines (where the vast majority of my readers are) there are duplicates - maybe because the timestamps don't match on postings?
http://www.bloglines.com/preview?siteid=899708
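One way to check this without eyeballing the feed is to parse it and group entries by link; any link that shows up more than once, with differing ids or timestamps, is a likely culprit. A quick Python sketch using the feedparser library (my tool choice, nothing FeedBurner-specific):

from collections import defaultdict
import feedparser

feed = feedparser.parse("http://feeds.feedburner.com/ScienceLibraryPad")

# Group entries by link and report any link that appears more than once,
# along with the ids and timestamps aggregators might be keying on.
by_link = defaultdict(list)
for entry in feed.entries:
    by_link[entry.get("link")].append(entry)

for link, entries in by_link.items():
    if len(entries) > 1:
        print("Duplicate:", link)
        for e in entries:
            print("  id:", e.get("id"), "updated:", e.get("updated"))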
UPDATE 2006-06-10: FeedBurner has discovered another 41 subscribers
Posted by Richard Akerman on June 09, 2006 at 10:00 AM in RSS Feed Tools, Weblogs | Permalink | Comments (1)
For all those users of libraries who have ever wished they could bring information from their library to life outside the virtual walls of its web site. For all those librarians who have contemplated enriching their OPAC with maps, reviews, jacket images, or folksonomies. For all of you, and for anyone else who has harboured a yearning to see information from or about libraries put to best use and displayed to best effect alongside information or services from other sources, we bring you the Mashing Up The Library competition.
This is your chance to wow the world with your ideas; your chance to build better systems on top of library data; your chance to demonstrate the value and the power of libraries; your chance to take library information and display it in exciting new ways; and your chance to walk away with £1,000.
Entries are due by Friday 18 August [2006], and we have a first prize of £1,000 and a second prize of £500, both provided by Talis to encourage innovative approaches to library information such as those made possible by APIs from the Talis Platform.
via panlibus
If I'm reading the rules correctly, it looks like any organization can participate.
So... Lorcan, what about the OCLC Contest? Last year's contest page says
Plans call for the OCLC Research Software Contest to become an annual event. Look for announcements of next year's [i.e. 2006] contest sometime around December 2005–January 2006
UPDATE 2006-07-09: Second OCLC Research Software Contest announced.
Posted by Richard Akerman on June 05, 2006 at 05:09 PM in Software Development, Web/Tech | Permalink | Comments (1)