If you’re looking for my Twitter account @scilib you won’t find it, because it is deactivated and on its way to deletion.
You can find me on Mastodon, with the rather awkward syntax:
Posted by Richard Akerman on December 09, 2023 at 09:20 AM in Social Networking | Permalink | Comments (0)
Posted by Richard Akerman on April 17, 2023 at 08:26 PM in Social Networking | Permalink | Comments (0)
While there have been public servants using social media including blogs and Twitter for a long time, there has been a recent upsurge in government employees deliberately choosing to use various online sites to do what I would call narrating their work. This kind of open sharing of work in progress can be a great way to demystify what happens in government as well as to make new connections and get broader feedback.
Some recent examples from within government:
The above examples are all on Medium, a platform developed by Twitter co-founder Evan Williams.
You can also find public servants sharing their activities in other channels, for example CIO of Canada Alex Benay on Twitter and LinkedIn.
This increased public visibility of individual public servants and their work builds on years of conversations and experiments both "inside the walls" of government and on social media. For example, there was a Canadian government event in 2010 called Collaborative Culture Camp that touched on many of the challenges of working collaboratively and openly.
I found that Twitter took a lot of my energy away from longer-form writing about my work. Here's what I wrote about it in 2011:
I think one loses a lot by not blogging. Twitter can to some extent maintain a presence online, but it can't expand it or make substantial impact. .... If you want to share your ideas in a way that will generate substantial discussion and spark interest in a major way, you have to write in the long form. ... to have an impact you must be writing your ideas, narrating your work. Not just for others, but as importantly, to better understand yourself, to have an online archive of your thoughts and work over time.
I am trying to return to doing more blogging and working in the open.
The above has a particular focus on the descriptive type of working in the open; there are other kinds of open work as well, for example open code on GitHub.
Kudos to Mary Beth Baker (Twitter @bethmaru) for her leadership role in getting the GC on Github.
As one example of open code, the website for the Canadian Digital Service digital.canada.ca is generated from Github, and is available for people to file issues or pull requests at https://github.com/gcdigital-gcnumerique/digital-canada-ca
For more on these topics, see my blog categories social networking and open source.
Note: Crossposted to Medium https://medium.com/@scilib/working-in-the-open-public-servants-in-canada-e103b4145dfd
Posted by Richard Akerman on July 20, 2017 at 10:24 AM in collaboration, Open Government, Open Source, Social Networking, Web/Tech, Weblogs | Permalink | Comments (0)
I did a keynote in 2008 where I said "Every web resource its machine reader", reinterpreting Ranganathan for the computer age.
I want to look back on how we almost built an ecosystem of information for human and machine readers, and then it fell apart.
Below, I will see if I can tell the tale of the decline of the blogosphere and end up with thoughts about the Antikythera Mechanism and scholarly communication.
In 2007, Darren Barefoot wrote in The Tyee about an era that, just nine years later, is totally gone.
It begins:
I subscribe to the RSS feeds for about 175 blogs.
Further down, it says:
Technology reporter and uber-geek Tod Maffin runs Inside the CBC. It's kind of an industry blog, in that it covers the world of Canadian public broadcasting...
Let's have a look at Inside the CBC now, in 2016.
It is definitely not by Tod Maffin, nor does it cover the world of Canadian public broadcasting. It's more or less what we would, this year, call fake news. More specifically, it's a kind of human content written for robots, but not at all in the way I intended. Given the jumble of topics (Neopets, skates, weddings in Gatlinburg), it's most likely a search-keyword-driven tasking of quick content creation: "Neopets searches are peaking, quick, write something about Neopets!"
Which is to say, the ecosystem of interlinking conversations that Barefoot describes in 2007 is quite gone. It's clear there's some complicated history behind its demise, but for simplicity's sake, here's what we can find from the Internet Archive: by February 1, 2011 the blog has a posting from January 10, 2011 at 8:58 pm. And that blog post remains, frozen in time at the top of the page until at least January 28, 2013. By June 21, 2014 the page reads simply "The domain insidethecbc.com is no longer parked by GoDaddy." By December 21, 2014 it has become, rather unexpectedly, a blog in German: "Wir haben ein großes Forschungszentrum mit über 100 Mitarbeitern und Niederlassungen in Übersee und Asien." / "We have a large research center with over 100 employees and branches in overseas and Asia." By September 9, 2016 it has transformed again, writing in (sort-of) English about "The advantage of getting nucific bio x4 coupon codes" and by October 22, 2016 it has settled into its current format.
The long and short of this is that, firstly, it is a disaster for a "many small pieces, loosely joined" ecosystem, and secondly, that without the Internet Archive (or if the current owner of Inside the CBC were to change their robots.txt), all of the original site would be gone.
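The robots.txt point deserves spelling out. For years the Wayback Machine honoured a site's current robots.txt retroactively, so whoever holds an expired domain today could suppress access to every archived snapshot of the old site with a two-line file. A sketch of what that file would look like (`ia_archiver` is the user-agent the Internet Archive's crawler has historically identified itself as; the retroactive policy is the historical behaviour described above, not a guarantee of current practice):

```
# robots.txt at the root of the domain.
# Disallows the Internet Archive's crawler; under the Archive's
# historical retroactive policy, this could hide all existing
# snapshots of the site as well as block future captures.
User-agent: ia_archiver
Disallow: /
```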
The story of Inside the CBC turned out to be more complicated than I thought. Is it illustrative of the decline of the blogosphere? Well just seven years after his article in The Tyee, Barefoot is blogging "In 2014, what is my blog for?"
What has happened? Well, basically, it turned out that this world of interlinking blogs and feed reading at best fell apart, at worst was deliberately dismantled.
I realise the latter is a more compelling story than the former, but it's a combination of factors. In the In Our Time episode The Library of Alexandria, we may expect to hear again the story of how the library burned, but the conclusion is actually that it really just faded away. With the rise of the Christian Roman Empire, the old knowledge and the old conversations, the dialogues between the books, just weren't of as much interest any more.
But one of the reasons we still have this discussion about the destruction of the library is because of the reality of so much loss of information.
In Reality Is Not What It Seems (BBC Radio 4 adaptation episode 1, Fiat Lux), Carlo Rovelli tells the history of physics in the conventional way that most western European scientists do, beginning with the Greeks. In particular, he speaks of Democritus, and of his dismay at the loss of Democritus' original writings. "We know of his thought only through the quotations and references made by other ancient authors, and by their summaries of his ideas. I often think that the loss of the works of Democritus in their entirety is the greatest intellectual tragedy to ensue from the collapse of Classical civilization."
SIDEBAR
This formulation of the history of physics is so common that The Big Bang Theory parodies it in episode 3x10, where in order to teach Penny about Leonard's current research, Sheldon will only present the topic by starting with "It is there in Ancient Greece that our story begins..."
The episode is entitled The Gorilla Experiment; it aired in 2009. It's not to be confused with 7x23 The Gorilla Dissolution, which aired in 2014.
ENDSIDEBAR
Consider that while Rovelli is lamenting the loss of writings from 2400 years ago, we are not even doing a good job of maintaining writings from 9 years ago. In fact, some Internet content is only now available through quotations and references made by other bloggers.
(And it's worth noting that when you read this, you probably won't be able to listen to Fiat Lux, because the BBC is only making it available online for another 25 days. And depending on your local copyright laws, you may not be able to legally view a clip from The Big Bang Theory without having purchased access to the episode.)
It didn't help that Google closed Google Reader. It's a minor miracle that Google FeedBurner still exists. The demise of (Facebook-owned) FriendFeed removed a conversation option from the web. Overall, the ecosystem has shifted to closed commercial services and to search results driving traffic to commercial sites.
If you want an idea of how fragile this ecosystem is, turn to the Elections Quebec page on Electronic Voting. It used to include four press releases. These press releases have now been "archived", which in this particular case means removed entirely from the web. This is what the page looked like on September 16, 2016, with the press releases (page from the Internet Archive):
and this is what it looks like now, with no indication the press releases ever were there.
There is nothing insidious about this; it is just standard web procedure. The press releases are from 2006 and were probably rarely accessed, so you run your ROT (Redundant, Outdated, Trivial) analysis and conclude that ten years online was enough: it's time for the press releases to go.
But once they go, they're gone. Are they in the Library and Archives Canada web archive? Are they in the BAnQ web archive? It's hard to know. Neither provides a public search interface that can uncover the Elections Quebec pages. Without coverage in the national archive, in the provincial archive, or in the Internet Archive, content that is removed from the web is simply gone forever.
So this is where we find ourselves. We have a single main service, the Internet Archive, that depends on private funding and that is attempting to make a backup of itself in Canada. Coverage from other web archives is unclear and may be nonexistent. Every day, as websites are reorganised, or content is deliberately removed for various reasons, or websites are simply neglected until the domain expires, our online Library of Alexandria fades. Not in some blaze of destruction, but more from lost interest, the same as the original.
You might think this is not a problem, because we can get the press releases from the Internet Archive, but it never indexed them. So for all intents and purposes they are gone now.
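Checking whether the Internet Archive ever captured a given page can at least be automated: the Wayback Machine exposes a simple availability API at archive.org/wayback/available that reports the closest snapshot to a requested date. A minimal Python sketch; the endpoint is the Archive's public one, but the example domain and date in the usage note are just illustrations:

```python
import json
import urllib.parse
import urllib.request

API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build the availability-API request for `url`, optionally asking
    for the snapshot closest to `timestamp` (YYYYMMDD)."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return API + "?" + urllib.parse.urlencode(params)

def closest_snapshot(response):
    """Pull the closest available snapshot URL out of the API's JSON
    reply, or None if the page was never captured."""
    closest = response.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"]
    return None

def check(url, timestamp=None):
    """Query the live API and return a snapshot URL, or None."""
    with urllib.request.urlopen(availability_query(url, timestamp),
                                timeout=30) as resp:
        return closest_snapshot(json.load(resp))
```

For example, `check("insidethecbc.com", "20070101")` would return the snapshot closest to the start of 2007, if one exists.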
SIDEBAR
Because I happened to be paying particular attention to Canadian electronic voting information over the past few months, I was able to recover three of the press releases using a combination of Pinboard, Google and Bing caches. The search engine caches would have been replaced very quickly, so it's only good fortune that I was able to grab the content before it was lost.
I have also become a bit obsessed with manually adding pages to the Archive, which you can do by going to https://web.archive.org/, pasting the URL into the box under Save Page Now, and clicking Save Page.
ENDSIDEBAR
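The manual Save Page Now step in the sidebar above can also be scripted: requesting https://web.archive.org/save/ followed by a URL asks the Wayback Machine to capture that page. A minimal Python sketch; the endpoint is the public one behind the Save Page Now box, but the header handling is an assumption about where the response reports the new snapshot's location:

```python
import urllib.request

SAVE_ENDPOINT = "https://web.archive.org/save/"

def save_request_url(url):
    """A Save Page Now capture request is just the endpoint plus the URL."""
    return SAVE_ENDPOINT + url

def save_page(url):
    """Ask the Wayback Machine to capture `url` now, returning a location
    for the resulting snapshot when the response reports one."""
    req = urllib.request.Request(save_request_url(url),
                                 headers={"User-Agent": "save-page-now-sketch"})
    with urllib.request.urlopen(req, timeout=120) as resp:
        return resp.headers.get("Content-Location") or resp.geturl()
```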
Blogging, with its interlinked web of discussions, was a kind of web immune system. It wasn't perfect, but it was a way to have a conversation, and a way to provide signals to Google about what was and wasn't important, and to some extent, about what was and wasn't true.
It used to be possible to blog about a topic and get discovered through the network of blogs, to get added to feed readers, to become a new information source.
It used to be possible to blog about a topic and get good Google search ranking, to be discovered through search and thus be an important contributor to the conversation.
This is all basically gone. Many of the blogs are gone, the blog discovery ecosystem is gone, the feed readers are gone, and Google search rank is very hard to get.
Here's a quote
For some, this past election year was about the slow death of the current political system.
Can you place it in time? It's from... 1997. Jon Katz wrote enthusiastically in Wired about how online conversations were going to transform political discourse.
On the Net last year, I saw the rebirth of love for liberty in media. I saw a culture crowded with intelligent, educated, politically passionate people who – in jarring contrast to the offline world – line up to express their civic opinions, participate in debates, even fight for their political beliefs.
...
I watched people learn new ways to communicate politically. I watched information travel great distances, then return home bearing imprints of engaged and committed people from all over the world. I saw positions soften and change when people were suddenly able to talk directly to one another, rather than through journalists, politicians, or ideological mercenaries.
I saw the primordial stirrings of a new kind of nation – the Digital Nation – and the formation of a new postpolitical philosophy.
Jon Katz, in 2016, now writes books about dogs.
So how did the hopes for online conversation and engagement go to the dogs? How did we get from a postpolitical philosophy to post-truth as the word of the year?
2007 may have been the tipping point. The iPhone was announced in January of 2007.
Sitting in front of a desktop, you have a keyboard and a mouse and a screen. This drives a certain kind of text-based, highlight-and-insert content creation.
Smartphones and tablets, on the other hand, are terrible at long text and inserting content. They are great for creating photos and videos. And so now that's the world we have, the photo and video world.
All of the signals that we needed to rank and sort and link and discover are gone. Now it's just bam! image, bam! video. No context, no links, just endless streams of content. The web now is, in short, television rather than a library. And the consequences of this are huge.
In MIT Technology Review, Hossein Derakhshan writes
Before I went to prison [in 2008], I blogged frequently on what I now call the open Web: it was decentralized, text-centered, and abundant with hyperlinks to source material and rich background. It nurtured varying opinions. It was related to the world of books.
Then for six years I got disconnected; when I left prison and came back online, I was confronted by a brave new world. Facebook and Twitter had replaced blogging and had made the Internet like TV: centralized and image-centered, with content embedded in pictures, without links.
...
"The problem is not that television presents us with entertaining subject matter but that all subject matter is presented as entertaining." (Emphasis added.) And, Postman argued, when news is constructed as a form of entertainment, it inevitably loses its function for a healthy democracy.
In other words, we used to be able to use the web to have a conversation, and now we are basically using the web to amuse ourselves to death.
The blog ecosystem helped to create a kind of web immune system, an immune system that Google could use to surface healthy information. With that gone, it's no wonder that false news can spread easily.
Science depends on a web of citations, a web of knowledge. Without a web of knowledge on the actual web, how can we hope to make discoveries and determine what is of interest? How can we challenge information when we're in our filter bubbles, Facebooking to one another, off of the public web?
In 2008 (remember my talk from 2008? it's way back at the start of this blog post), I thought we might be able to meld human and machine understandings in order to advance the conversation. I promoted the idea of better formatting information for machine processing, in order that we could benefit from machine-aided search and discovery.
I deliberately chose not to emphasize automatically-generated information. Depending on the era, I've heard that the Semantic Web would solve discovery, or that OAI-ORE was going to link everything together, or now that Artificial Intelligence and Big Data will discover all connections automatically.
Would that this were so.
In Searching for Lost Knowledge in the Age of Intelligent Machines, Adrienne LaFrance writes
What if other objects like the Antikythera Mechanism have already been discovered and forgotten? There may well be documented evidence of such finds somewhere in the world, in the vast archives of human research, scholarly and otherwise, but simply no way to search for them. Until now.
This is a compelling vision of knowledge that we don't even know that we have, that could be unearthed if we just digitized and translated everything, and then sent our AIs digging for connections. Maybe Democritus is out there, in some copy of a copy of an Arabic translation. Maybe this whole lost world of complex mechanical devices is out there on paper somewhere, just waiting to be found.
But there is a pretty harsh collision between that vision and the reality of the web we've created.
We were on a path that might have enabled this scholarly discovery, although probably with a lot more human intervention than techno-utopians would like. But now we can't even find things from a few years ago. We were assembling the book-to-book conversations of a new Library of Alexandria, and now we're scraping down the pages like in the Archimedes Palimpsest:
This medieval Byzantine manuscript then traveled to Jerusalem, likely sometime after the Crusader sack of Constantinople in 1204.[7] There, in 1229, the original Archimedes codex was unbound, scraped and washed, along with at least six other parchment manuscripts, including one with works of Hypereides. The parchment leaves were folded in half and reused for a Christian liturgical text of 177 pages; the older leaves folded so that each became two leaves of the liturgical book.
But we're doing far, far worse than writing over thousand-year-old knowledge. We're completely erasing information from last year, with no X-ray to recover it.
Which is to say, as much as I love the vision of recovering the lost history of the Antikythera Mechanism, I'm more worried we won't even have a history of last year.
This is the web now:
The Web of Desperate Popup Intrusion.
That used to be the web of users deciding to subscribe to feeds, instead of being badgered to sign up for email newsletters, or to view ads, or to pay up immediately.
So let's try to unwind some of our mistakes. Rather than some AI utopia that will unearth lost connections from millennia past, let's deliberately build a human reality of intentional reconnection. Some suggestions:
I will readily admit that I got pulled away into the quick rewards of Twitter from the slow work of blogging. And it's become a self-reinforcing system. As blogging and feedreading faded, blog hits and links and comments faded. Blog rank in Google faded. Wearing another hat, I wrote 595 blog posts about online voting from 2004 to 2016. All that work, and (for the particular Google results I see today on this computer), the blog shows up once on page 5 of results for "online voting" canada. It's hard to maintain one's enthusiasm for 5th page ranking. It's hard to maintain one's blogging for two web hits a day.
But then I remember I originally started blogging just for myself. I didn't anticipate my blog would grow and connect in the way that it did for a while. So I am going to try to get back to blogging, because everywhere I go now, even for the few seconds of an elevator ride, I see people lost in their smartphone screens. I don't see how we can continue on losing ourselves in those little personalized screens.
Posted by Richard Akerman on December 03, 2016 at 11:51 AM in Books, Links to Audio, Links to Presentations, Metadata, Searching, Semantic Web, Social Networking, Technology Foresight, Web/Tech, Weblogs | Permalink | Comments (0)
Above from March 30, 2013. I previously did a map in 2011.
Basically the Government of Canada network (blue, on the right) gets larger and even more densely connected. That network also includes people from the City of Ottawa and people working in Ottawa. (Ottawa is basically a small and tightly connected town, in and out of government.)
The other two big clusters are still NRC (the National Research Council of Canada, where I work) and library/scholarly publishing/science.
It's interesting that NRC is still very distinct from Government of Canada. There are however more connections bridging the gaps between the three groups. Here's what it looked like in 2011:
UPDATE 2013-04-03: InMaps is one of several offerings from LinkedIn Labs. You go to the labs site, pick the service you want, and then authorize LinkedIn to activate it. ENDUPDATE
Previously:
January 31, 2011 LinkedIn maps my connections - I analyse the meaning
Posted by Richard Akerman on March 30, 2013 at 06:55 AM in Social Networking, visualization, Web/Tech | Permalink | Comments (0)
France's open data site, launched on 5 December 2011, has released a second version just four months later. It adds an essential element, public community features (this is in addition to the private developer community DataConnexions that they already launched).
The new community features of data.gouv.fr include a discussion forum, an ideas market, and highlights of open data reports from elsewhere on the web. Top contributors to the discussions are highlighted.
Find the new features at
http://www.data.gouv.fr/Communaute
SIDEBAR: Canada's open data site data.gc.ca was launched in pilot mode on March 17, 2011 and envisions a second version in three years (2014).
During Year 1 of our Action Plan, we will continue to expand on the number of datasets made available through the existing portal, and we will complete our requirements for the next generation platform. In Years 2 and 3, we will design and initiate implementation of the new data.gc.ca portal, as well as further improve the level of standardization of data published by departments. The Government will make use of crowdsourcing, particularly among Canada's open data community, to make sure that this new open data portal meets the needs and expectations of those who will use it most, and provides the best possible opportunity to support entrepreneurs eager to make use of Government of Canada data.
from Canada's Action Plan on Open Government, Activity Stream 2 - Open Data.
END SIDEBAR
Previously:
February 20, 2012 France launches open data community Dataconnexions
July 6, 2011 update on open data in France
Posted by Richard Akerman on April 18, 2012 at 03:04 PM in Open Data, Social Networking, Web/Tech | Permalink | Comments (0)
Etalab, the developers of data.gouv.fr, launched their community initiative Dataconnexions on February 16, 2012.
Pour favoriser la réutilisation des données publiques par les acteurs de l'innovation, Etalab initie la Communauté Dataconnexions : L’objectif de "Dataconnexions" est clairement de favoriser la création d’une place de marché française de la donnée, en mettant directement en relation la demande des "créateurs de projets" de tous horizons (entrepreneur, salarié, free lance, étudiant ou chercheur, seul ou en équipe, indépendant ou issu d’une start-up, TPE/PME, institution, grande entreprise) et l'offre de services de partenaires, acteurs économiques et de l’innovation de premier plan, susceptibles de leur apporter une expertise nécessaire en fonction de leurs besoins.
(To encourage the reuse of public data by the actors of innovation, Etalab is launching the Dataconnexions Community: the objective of "Dataconnexions" is clearly to foster the creation of a French marketplace for data, by directly connecting the demand from "project creators" of all backgrounds (entrepreneur, employee, freelancer, student or researcher, alone or in a team, independent or from a start-up, small or medium business, institution, or large company) with the services offered by partners, leading economic and innovation actors, able to provide them with the expertise they need. - my translation)
The initiative includes:
They state in their announcement blog post, in the text of the document that they posted to Scribd, that part of their mission is "encourager la réutilisation la plus large possible des données publique" - to encourage the greatest possible reuse of open data (my translation) - and that this mission includes connecting all the different actors in the innovation system through Dataconnexions.
Etalab tweets @etalab
The hashtag is #dataconnexions
The main site is http://www.dataconnexions.fr/ (redirects to pages on the Etalab site)
One of their partners is Google; there is an event on March 14, 2012 - Atelier Open Data.
Etalab actually already had their first hacking event, (Paris) Android Dev Camp 2012. For more information:
These kinds of open data initiatives show an understanding that simply providing the data is not enough - it requires active engagement with the community to build the innovation ecosystem that can maximize the benefits of releasing the data.
Posted by Richard Akerman on February 20, 2012 at 04:00 PM in collaboration, Open Data, Social Networking | Permalink | Comments (0)
The Senate Official Languages Committee is conducting an "Examination on the use of the Internet, new media and social media and the respect for Canadians’ language rights".
There have been two meetings so far and the next is scheduled for October 31, 2011.
The first meeting was on October 24, 2011. Video and audio are available from ParlVU. This meeting was with the Office of the Commissioner of Official Languages.
Video format is Windows Media Video (WMV). In order to view this video on a Mac you will need to install Flip4Mac WMV (free).
Transcripts have not been posted yet.
The second meeting was on October 27, 2011. Video and audio are available from ParlVU. This meeting was with the Treasury Board Secretariat, notably President of the Treasury Board Tony Clement.
The discussions included social media, the Digital Economy Strategy, and open data.
iPolitics.ca did an article based on the meeting and post-meeting interviews: Twitter, Facebook and social media ‘critical’ to government, Clement says. Some quotes from the article:
"It would be quite bizarre if we’re trying to hire the best and the brightest young people with great vitality coming into the public service and they’re used to having tethered tablets and instant social media feedback and they can do half of their work at the coffee shop without any difficulty and then suddenly they are transformed into a public servant and none of that is available. That would be a bizarre situation." - Tony Clement
...
Corinne Charette, the government’s chief information officer, said 66 federal government institutions now have bilingual twitter accounts and official government communications are presented in both official languages.
It’s also popular with government employees, she said.
“Social media is an exciting development. It is widely requested by public servants..."
...
Clement told senators he is also pushing ahead with his open data initiative, to get more raw data out of institutions like Statistics Canada and make it public.
“The more data we can get out there, the more applications can be thought of by brilliant people, entrepreneurs.”
The next meeting will be on October 31, 2011. Live video and audio will be available from ParlVU. The meeting will again be with Treasury Board Secretariat.
The Senate tweets at @SenateCA / @senatCA
Hashtags (as declared in the Senate video stream) are #senCA and #OLLO
Twitter accounts:
Previously:
October 19, 2011 Treasury Board President Clement speaks about open government at GTEC
January 28, 2011 open government in Canada [ETHI committee]
Here's a Storify of some of the Twitter traffic from October 27, 2011
Posted by Richard Akerman on October 29, 2011 at 05:27 PM in Open Data, Open Government, Social Networking, Web/Tech | Permalink | Comments (0) | TrackBack (0)
I guess the "make everything social" decree went out to Google Scholar as well.
There is now essentially a scholarly social network built in - individual user profiles displaying citations, links to colleagues.
Today we’re introducing Google Scholar Citations: a simple way for you to compute your citation metrics and track them over time.
...
You can enable automatic addition of your newly published articles to your profile. This would instruct the Google Scholar indexing system to update your profile as it discovers new articles that are likely yours. And you can, of course, manually update your profile by adding missing articles, fixing bibliographic errors, and merging duplicate entries.
You can also create a public profile with your articles and citation metrics
Google Scholar Blog - Google Scholar Citations - July 20, 2011
There's more information at http://scholar.google.com/intl/en/scholar/citations.html
Citations is in limited release: you have to sign up on a waiting list, the usual procedure.
Here is the profile for Anurag Acharya: http://scholar.google.com/citations?user=nGEWZbkAAAAJ&hl=en
UPDATE 2013-11-21: Google Scholar has added a personal library feature. ENDUPDATE
Previously:
November 20, 2004 Google Scholar (my first mention of Scholar, shortly after it was launched)
Posted by Richard Akerman on July 21, 2011 at 06:53 AM in Academic Library Future, Metadata, Science, Searching, Social Networking, Web/Tech | Permalink | Comments (0) | TrackBack (0)
Google+ doesn't seem to be providing an RSS feed of G+ posts on your public profile; it looks like it's providing a feed of your Buzz public posts instead. In general there seems to be a decline in blogging combined with less use of RSS readers (and less provision of RSS feeds). I think this is an unfortunate degradation of a distributed content ecosystem that was working well. Obviously it is in the interests of pretty much all the corporations to centralise content, to have you live "inside" Facebook, Google, Quora, Twitter, LinkedIn, Flickr or whatever particular content garden the corporation provides.
If this continues along the current track, Facebook will know everything about your relationships and the history of their evolution, as well as all of your personal interests, Twitter will know everyone you share work links with and the topics that you discuss and follow, LinkedIn will know your entire work history and all your work connections, and Google will contain mostly everything else related to your professional life as well as (for some people) aspects of your personal life. This is the inverse of the original content model, in which you produce content on many many different sites that have no direct interconnections (thus necessitating sites like FriendFeed to aggregate it all back together). Your photo self is separate from your work blogging self is separate from... etc. Now instead it appears we are being pushed to, on our own time, wrap up our entire selves in a nice incredibly-detailed demographic package for corporations. How nice of us.
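For contrast with those walled gardens, the distributed model really was this simple: a feed is just an XML document anyone can poll and parse. A minimal sketch using only Python's standard library; the sample feed document is invented for illustration:

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Extract (title, link) pairs from the <item> elements
    of an RSS 2.0 feed document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title", default="(untitled)"),
             item.findtext("link", default=""))
            for item in root.iter("item")]

# An inline sample feed, standing in for a real blog's RSS document:
SAMPLE = """<rss version="2.0"><channel><title>Demo blog</title>
<item><title>Post one</title><link>http://example.com/1</link></item>
<item><title>Post two</title><link>http://example.com/2</link></item>
</channel></rss>"""

print(parse_rss(SAMPLE))
# [('Post one', 'http://example.com/1'), ('Post two', 'http://example.com/2')]
```

A feed reader was essentially this loop run on a schedule over a list of subscribed URLs, with no platform in between the reader and the writer.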
Posted by Richard Akerman on July 12, 2011 at 05:44 PM in E-Commerce, Social Networking, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
In thinking more about the different ways in which we share online, it occurred to me that there is another way to describe the difference between the personal and the professional.
The personal is about relationships, various types of interpersonal connections. Managing relationships often requires things not said, or things said to one group but not to another, for reasons of etiquette, diplomacy, human nature etc. Managing relationships thus has as a primary component the management of privacy, the management of who can hear what you say about what topics and what people. This is the Facebook Friends List model and the Google+ Circles model. It's as much about who doesn't (in theory) hear what you say as about who does. When Mark Zuckerberg says (in my understanding) that we basically shouldn't have privacy, that it is somehow dishonest to have different things we share with different people, I think he's talking about a model of human interaction quite different from the one we are used to. He's basically talking about The Invention of Lying world, a world where everyone tells the absolute truth to everyone in public and in private, all the time. This world is funny to us because it is so cruel, there are no social graces. I don't think it would be a better world in terms of interpersonal relationships. Does everyone really want to hear all their faults announced?
This type of interaction is quite different from the reputation management of a professional life. Reputation is about visibility; reputation requires a public presence. Professional interactions that are not confidential thus benefit from being as open as possible. If I want to build my reputation online, the best way is to share my knowledge and my interests as broadly as possible. In general then, reputation is not about limiting who can read your thoughts or see your links. It may have an aspect of managing the channels so that you reach the maximum number of interested people, but it doesn't benefit from private sharing in the way that relationship management does.
Therefore I think the Google+ Circles model is ill-suited for professional interactions; instead you benefit from just setting everything to Public... in which case you might as well use Twitter and a blog. There will of course be lots of people who will use G+ as a blog replacement, but this has a downside: you lose control of your own information, at least from the standpoint of Internet presence and statistics. In effect it's like Knol, where you were supposed to put all your expertise on Google's site, to boost Google's search rankings. Twitter is the standard for brief sharing, but brief sharing doesn't have a big reputation impact. For long-form writing (like this blog post), however, I would much rather have the "home" be a site under my control, where I can manage the design and analyse the stats, rather than giving my content to Google to live under an inscrutable URL - mine is http://plus.google.com/117260312446321547979 - in effect making Google itself not just the discovery channel, but the content channel, the reputation channel for my professional life. I can see how that has huge benefits for Google, but to me it seems like a lot of control to give up.
Right now I think we already have some well-defined channels:
* Facebook - mostly for personal (relationship) interactions and sharing of billions of photos (often with various privacy rights applied)
* Twitter - mostly for link sharing and short q&a type interactions - mostly professional reputation for those who use it for work sharing
* Flickr - photo hosting
* Tumblr - photo resharing
* blogs - long-form ideas, building reputation in a certain field or area
* discussion groups - places where people can gather together around a particular topic or specialisation
I can certainly see that Google+ can attack on multiple fronts: photo hosting & resharing (but Picasa Web has been around for a long time already with minimal impact), "blog" hosting (but with many fewer features than a full-blown blogging platform, and loss of the unique reputation you can build around a customised blog). However what I see mainly is multiple gaps: there's no group functionality (although an improved, less-email-centric Google Groups could fix this), there's no way to filter content inbound (so that you could read topics, not people) and there's no way to share content outbound (so that you could e.g. feed anything you hashtag with #opendata on Twitter to a certain circle or circle discussion group).
Right now, Google+ looks like a direct Facebook replacement: a relationship management and (pseudo-)privacy management tool. It will take quite a bit of evolution if they want it to be a professional tool.
Posted by Richard Akerman on July 12, 2011 at 05:43 PM in Privacy, Social Networking, Web/Tech | Permalink | Comments (0) | TrackBack (0)
G+ at least gives us lots of opportunities to talk about G+.
Summary
Google+ Circles don't come anywhere near modelling the complexity of our online interactions.
Details
One of the main topics is Circles, which people seem to have difficulty thinking through - either in creating them or understanding how they work.
This interest in Circles is a bit surprising because Facebook has had Friend Lists (which are basically the exact same thing as Circles) for a long time, but almost no one talks about using Friend Lists.
So I have to give Google kudos for drawing attention to this type of feature, although I'm a bit puzzled why it gets so much attention since Facebook is the same, even to the extent of a per-post pulldown at the bottom where you can select the lists that will see your Facebook post.
Facebook is actually placing more emphasis on Groups now, perhaps because G+ doesn't have groups (yet anyway).
It's important to understand that Circles & Friend Lists are filtering-at-source, outbound filtering, content "channel" preselection. There are two common scenarios for outbound pre-filtering:
1) The "NOT" scenario for personal content, e.g. I want my friends to see this picture but I do NOT want my family seeing it
2) The "I want to signal a topic, content type or urgency" scenario, e.g. I will put these photos in the Ottawa buildings group
First, it's important to understand the "NOT" scenario is a fantasy, it's pseudo-control over your content. Basically as long as the content exists in some recorded, viewable (or hearable) format somewhere, it can be made public. If you want private, I suggest spy-type park bench conversations with a noise screen and check your companion for a wire first. Other than that there is no private. There's only pseudo-private.
So for example you can push content into your "Super Top Secret" Circle with "sharing disabled" on the post, but anyone in that circle can just copy and paste or screen capture or take a photo of the screen or any of dozens of other methods of duplicating recorded content (some more work than others).
(As a side note, this is why DRM is a fantasy, which doesn't stop attempts such as Microsoft's Information Rights Management.)
So putting your content anywhere, whether it be in an SMS, an email, a Facebook posting... you should expect it could become public at any time. This is why I would suggest against say, creating two Google+ Circles called "Circle where I pretend I love (task)" and "Circle where I tell people I trust that I actually hate (task)". That's just asking for trouble.
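To make the filtering-at-source model concrete, here's a minimal Python sketch - all names and structures are hypothetical illustrations, not Google's actual data model - of circles as sets of people, with a post's audience fixed at publish time, and of why that control is only pseudo-control once anyone in the audience reshares:

```python
# Hypothetical sketch of filtering-at-source ("circles").
# Illustrative only - not how Google+ or Facebook actually store data.

class Account:
    def __init__(self, name):
        self.name = name
        self.circles = {}  # circle name -> set of member Accounts

    def add_to_circle(self, circle, person):
        self.circles.setdefault(circle, set()).add(person)

    def post(self, text, circles):
        # Audience is fixed at posting time: the union of the chosen circles.
        audience = set()
        for c in circles:
            audience |= self.circles.get(c, set())
        return {"author": self, "text": text, "audience": audience}

def can_see(person, post):
    return person is post["author"] or person in post["audience"]

alice, bob, carol = Account("alice"), Account("bob"), Account("carol")
alice.add_to_circle("friends", bob)

p = alice.post("private joke", circles=["friends"])
assert can_see(bob, p) and not can_see(carol, p)

# But this is only pseudo-control: Bob can copy the text and repost it
# to anyone, and the original post's audience restriction is irrelevant.
leak = {"author": bob, "text": p["text"], "audience": {carol}}
assert can_see(carol, leak)
```

The point of the last two lines is the whole argument: the access check only governs the original object, not copies of its content.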
There are lots of ways people currently use entire sites or communication types as different types of channels.
Sometimes you use a channel to reach a particular group, sometimes you use it because it provides pseudo-privacy, sometimes because it signals urgency or lack thereof. Sometimes something that seems like a very simple channel is actually very complex (the most notable example is Twitter).
For example, I use
* email: rapid ad-hoc bi-directional "private" communication - also if I want to be sure you will see it (the TCP of messaging)
* SMS (text messaging): realtime 1-1 bi-directional "private" communication; urgent; expectation that message will interrupt someone
* Facebook: either "private" sharing with a group (friends) or sharing-because-it's-where-everyone-is for e.g. work people who don't use other channels - but with no real expectation everyone will see it (the UDP of messaging)
* Yammer: in theory for sharing with closed work group of people just at my department (I say in theory because there is not enough use to see if it works well)
* Twitter: public stuff of interest with no expectation everyone will see it (UDP) plus signals of topics (hashtags) AND signals of per-individual interest (@ someone is public way of signalling, DM says this is both important and "private") AND accounts for different types of audiences (e.g. some people use different accounts for regular tweeting and livetweeting, or different accounts for different topics)
* blog: a mix of long-form ideas (with no expectation they will be read) and link sharing (for people who don't follow other channels)
* Topic or participant-specific groups: many tools (Facebook, LinkedIn etc. etc.) provide this feature where people with a common interest or some other factor in common can have a shared space.
There are channels that I never use, just due to the particular group of people I communicate with and history (e.g. use of instant messaging grew long after I was in the habit of emailing friends).
The short overview above is just to make the point that there is a lot more nuance to "the subtlety and substance of real-world interactions" as Google puts it than just controlling what channel you push into. For work content, public with at-destination filtering (e.g. publish & subscribe) may make more sense as I usually don't know a priori per-individual what content may be of interest, and I'm usually not engaged in pseudo-hiding content from one group or another.
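By contrast, at-destination filtering can be sketched like this - a hypothetical publish-and-subscribe model, not any particular service's API: the author publishes everything publicly with topic tags, and each reader filters on their own end.

```python
# Hypothetical at-destination (publish & subscribe) filtering.
# The publisher tags posts; all filtering happens at the reader.

posts = [
    {"text": "New open data portal launched", "topics": {"opendata"}},
    {"text": "Ottawa transit planning meeting", "topics": {"ottawa"}},
    {"text": "Ottawa open data hackathon", "topics": {"ottawa", "opendata"}},
]

def subscribe(stream, topics):
    # Keep any post whose topic set overlaps the reader's interests.
    return [p for p in stream if p["topics"] & topics]

for p in subscribe(posts, {"opendata"}):
    print(p["text"])
```

Note the asymmetry with the Circles model: the author doesn't need to know a priori which individuals care about which topics - readers self-select.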
In summary, at the moment Circles only make sense to me as a way to NOT share content with selected individuals, which is more of a Facebook use case - but be mindful that at best this is just the appearance of not sharing content; it can be copied and made public at any time. The single most notable absence in Google+ is the lack of groups. Since Circles are not just per-source filtering, but fully or partially hidden per-source filtering (you can't see who is in someone else's circles, or even their circle names), Circles don't help people form interest groups. This is kind of weird since one of the aspects of the Internet is that, far from being the anti-social medium sometimes portrayed in the media, the Internet leads almost naturally, organically, to people forming discussions around common topics of interest. Google does of course have an entire Groups product, called, well, Google Groups, which is an inheritor of the USENET tradition of granular per-topic discussions.
There are a lot of ways that Google could make this experience easier:
* first and foremost Google+ needs to provide Group functionality - just a next-generation version of Google Groups would probably do this
* use GMail data to suggest groups (this technology has already been built by Google, it will suggest possible "missing people" when you compose an email, based on people you usually email together)
- similarly use GReader and GBuzz data
* partner with LinkedIn to suggest groups - either based on connections (see my post LinkedIn maps my connections on LinkedIn clustering) and/or based on existing LinkedIn groups
* extract communication patterns from Twitter to suggest groups, also allow import/conversion of Twitter lists
* allow various location-based groupings, including: location groups based on profile location (or past location) information; location groups based on location detection in the browser; location groups based on topic declarations (for example, I would like to be able to share information just of local interest to Ottawa residents, but there's no way I'm going through every single person who follows me and figuring out who lives in the Ottawa area)
* allow recipient (destination) filtering by content type and content source, so I can e.g. choose to see all your posted external links but none of your photos or status messages (Facebook provides this to a limited extent and Friendfeed provides this for every separate content source it knows of)
* statistics so you can discover who is using & sharing and "+1ing" your content and can adjust your channel models accordingly
* pull channels (e.g. pull in content from Twitter, perhaps filtered by hashtag)
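The pull-channel suggestion above could look something like this minimal sketch - hashtag parsing with a simple regular expression; the function names are mine, not any Google+ or Twitter API:

```python
import re

# Hypothetical sketch of a "pull channel": scan incoming short posts
# (e.g. mirrored tweets) and route any tagged with a given hashtag
# into a dedicated stream. Illustrative only.

HASHTAG = re.compile(r"#(\w+)")

def hashtags(text):
    # Extract hashtags, case-insensitively.
    return {t.lower() for t in HASHTAG.findall(text)}

def pull_channel(stream, tag):
    return [post for post in stream if tag in hashtags(post)]

incoming = [
    "Open data catalogue updated #opendata",
    "Nice sunset over the canal",
    "Budget figures released as CSV #OpenData #ottawa",
]
assert pull_channel(incoming, "opendata") == [incoming[0], incoming[2]]
```

This is the inbound complement of Circles: instead of the author pre-selecting people, the content carries a topic signal and the service routes on it.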
Ultimately the point is that our interactions are much richer than a flat communication model - but Google has provided only one small aspect, pre-filtering at source, out of a much, much larger set of source, topic, and urgency filters. Pre-filtering at source is mainly used for (an illusion of) personal information control, not for work sharing. Google+ does not come anywhere near modelling the way people currently signal public/private, topics and urgency by their choice of channels.
There is a great post that explains this much better than I have above: Dave Gray - Sharing universe.
Posted by Richard Akerman on July 12, 2011 at 08:17 AM in Privacy, Searching, Social Networking, Web/Tech | Permalink | Comments (0) | TrackBack (0)
I think it's a mistake to think of Google+ as a Facebook replacement or even as an attempt to replace Facebook.
Google has a basic problem: they make their money from search ads. Mostly from search ads when people are looking to buy something. So a search like "best 2009 used cars" represents a ton of money for them. A Facebook posting "hey guys, can anyone recommend a good 2009 used car?" is a disaster for Google. Not only is it probably invisible to their indexing engine, it connects people to information without search intermediation that you can attach ads to.
So Google has two problems: 1) people are moving out of its search-driven search space into one-to-many questions to selected groups of people 2) Google doesn't have a lot of rich information about the social context of people who are searching. Things Google might want to know: how connected is the searcher, how influential, how wealthy, what interests does she have, what content does she share...
Some people stay logged in to Google (GMail) all day, but this trackable identity can be lost if e.g. the user is reading GMail using iPad mail app, or if like me they only login to GMail web for a few minutes, check email, and then log out.
Google would much rather have a single, perpetual, always-logged-in identity. A social network with a notification bar encourages people to stay logged in, lest they miss a notification.
Google is a data-driven organisation. So let us imagine that through Google+ "all" Google gets is a complete map of the social graph of all the top tech influencers in the world, along with the kinds of things they like to share, and their interests (their "sparks" in Google+ terminology). And along with that, Google gets social graphs for a few million other people. Even if G+ then only limps along, Google now has much better data to analyse for its core search business.
So I think Google+ is mainly about providing Google with enormous amounts of data that it can analyse to determine social signals for search, to understand Q&A social behaviour, to find out how people are grouped and interconnected, and to have data to drive social driven re-ranking and display. But most importantly by far, it gives Google the beginnings of data to optimise ads for the social graph. Would you rather pay 1 cent to display your ad to someone who has zero tech influence, or $1000 to get your ad in front of the eyeballs of a tech influencer whose posts are reposted and retweeted thousands of times? Would you rather pay 1 cent to display an ad to random people based on search keywords, or $100 to display the ad to a "circle" of people who have demonstrated a sustained interconnected interest in your particular topic?
This post inspired by GigaOm: Why Google+ won’t hurt Facebook, but Skype will hate it found via @nicholemcgill
Previously:
July 1, 2011 Google Plus
Posted by Richard Akerman on July 03, 2011 at 12:03 PM in Privacy, Searching, Social Networking, Web/Tech | Permalink | Comments (0) | TrackBack (0)
Google Plus launched in limited release Tuesday June 28, 2011. Google says
Today, the connections between people increasingly happen online. Yet the subtlety and substance of real-world interactions are lost in the rigidness of our online tools.
In this basic, human way, online sharing is awkward. Even broken. And we aim to fix it.
Google Blog - Introducing the Google+ project: Real-life sharing, rethought for the web - June 28, 2011
Plus, which lives at http://plus.google.com/ , is a social network centred around sharing a few types of content: text, photos, links and locations. There was a brief window opened with lots of invites going out, but it's still in "Field Trial" and they've disabled additional invites for the moment.
We've shut down invite mechanism for the night. Insane demand. We need to do this carefully, and in a controlled way. Thank you all for your interest!
Vic Gundotra (reported by businessinsider.com to be VP of Social at Google), in a public post on Google Plus - https://plus.google.com/107117483540235115863/posts/PhJFJqLyRnm - June 29, 2011 at 11:45PM
As a rough overview, Google+ is very very much like Facebook.
The home page of Google+ is called Stream and works much like the Facebook News Feed. From a user interface perspective they're basically identical. Here's a zoom in on the two share boxes at the top of the respective home screens:
Google Plus
Google manages sharing rights (who can see things you share) using "Circles", which are groups of people you select. Facebook manages sharing rights using Friend Lists (if you've never seen Facebook's feature, you can configure it by clicking on Friends in the left-hand main menu and then clicking on Manage Friend Lists).
Each service lets you specify per-post what groups of people should be able to see the post. Google's claim is that Circles are easier to manage and more visibly surfaced as a key feature of the service. It's certainly true that with Circles you can drag and drop people around, and there are some animations when you perform various actions on Circles (the most notable being when you delete a circle, it rolls away and disappears).
I'm not convinced however that this somewhat-better sharing list management is enough of a killer app to distinguish Google+ from Facebook. And Facebook could easily and I assume quickly slap a better user interface on top of its existing Friend List feature, which is already surfaced at the bottom of every post (by clicking on the lock icon, as shown in the screenshot above). And I'm not sure that "Your Circles" and "Extended Circles" is clearer language than "Friends only" and "Friends of friends" (the functionality described by those two sets of phrases is the same, "Extended Circles" = "Friends of friends").
Just as Facebook posts have "Like - Comment - Share" (with share only appearing depending on post rights), Google+ has +1 - Comment - Share. Google+ posts ALWAYS have share, by default. You have to manually disable resharing (which you can only do on an individual post after it is created, not globally) if you don't want people to have a Share link for your post.
SIDEBAR: Google+ and Google Plus One
In a move which will create endless brand confusion, Google Plus, the social network, and Google Plus One, the "Like" button, are two different things. But Google Plus also uses +1 buttons on posts to indicate the equivalent of a Facebook "Like".
You can read more about the global use of +1 across the web at http://www.google.com/+1/button/ and there's also a June 1, 2011 post about it in the Google Blog.
END SIDEBAR
You can also chat (the same as in Facebook). And you get notifications if e.g. someone adds a comment to an item you posted (the same as in Facebook). You can indicate people in a post by using a + and selecting their name (in Facebook you use an @).
If you're in the main interface, there are basically only two things you can see that don't translate directly into Facebook features: Sparks, which is a way of (on a separate screen) getting search results for keywords of interest, and Hangout, which is a multi-way video chat.
And indeed when you go to the main plus.google.com page without being logged in, the three things it shows as features are Circles, Hangout and Sparks. The other two features Google mentions if you drill down are Instant Upload (every photo and video you take on your mobile device immediately goes onto G+; better have a good data plan and a fast connection; only works on Android at the moment) and Huddle (basically a small group chat, like a private IRC chat room).
That's basically it. And yes of course it's initial field trial preliminary first features etc. etc.
Steven Levy wrote a long article for Wired about the project: Inside Google+ — How the Search Giant Plans to Go Social. I do like the code name, "Emerald Sea". And there's no doubt a lot more to come. But right now, it's basically Facebook, except without your social graph there, for people with a Gmail address.
UPDATE 2011-07-01: A couple other points worth mentioning
1) It appears that unlike Facebook which limits you to a (perhaps insider-joke) 420 character maximum per initial status update, Google+ lets you write status posts that are as long as you want - Andy Hertzfeld wrote a public post that is essentially a blog post - https://plus.google.com/117840649766034848455/posts/FddaP6jeCqp
2) I also note that the URLs that G+ produces (as with the giant jumble of plus.google.com/USERID/post/POSTID above) are terrible SEO - Google Search loves URLs with descriptive text in them, not URLs that are just alphanumeric jumbles.
ENDUPDATE
Surprising absences:
* you can't search the Stream. Yes, that's right, Google, the company built on search, provides no way for you to search the stuff that is posted (which was a bit of a pain as I tried to assemble this blog post from items that I and others had posted in the Stream)
* There is no real integration with Google Buzz - it's not even easy to find Google Buzz, which is hidden away in your own profile, under Buzz.
* There is no integration whatsoever with Google Docs or Google Reader, two key places where people have existing social networks where they share and comment. There's also no integration with Google Books, a rich source of content.
* It's actually very hard to get content into Google Plus other than by copying and pasting, or uploading. There's no bookmarklet to share into the service. There's no way to import RSS feeds. There's no way to gateway to or from Twitter or to/from bookmarking services (even Google Bookmarks).
* It appears that photos added to G+ also go into Picasa Web, and you can "import" photos into the G+ display from Picasa Web. It seems that videos added live inside G+; they don't go to YouTube.
* There is no way to share content by topic, e.g. there are no hashtags or tags you can apply to posts.
* There is no way to filter by content e.g. you can't set a setting so that you see all of someone's posted links, but none of their posted photos.
* There's no chain of attributions, unlike Tumblr - if Bob shares Sue's post and I reshare it from Bob, you never see that I got it from Bob, you only see that it's from Sue.
* Via Brian Alkerton I found out there doesn't appear to be any way to see the global public stream (the stream of all public updates), although later in the discussion it mentions you can see public posts "nearby" (that is, tagged with locations close to your current one) using the Android app
UPDATE 2011-07-01: You can also get the "nearby" option on the mobile site m.google.com/plus ( or www.google.com/mobile/+/ ). In Stream you will see a link on the upper-left-hand side that says "< Nearby" (it will of course request your location from your browser; if this is not available nothing will be displayed). ENDUPDATE
* There's no API although ReadWriteWeb reports you can sign up for a developers mailing list.
* I don't know if you can get Analytics or other stats on your Public posts.
And there are of course various bugs and eccentricities.
Additionally, for all the emphasis on Circles and privacy, it's still pretty easy to leak stuff out. Bob posts in his stream, Sue shares the post, now I can share the post too, even if I don't know Bob. Posts only say "Public" or "Limited" and when you click on "Limited" you only see (up to 21) individual user names, not the circles that you used in the original rights on the post.
Also, Circles themselves are private objects, you can't see what circles I've put you in, and I can't make a circle and share it with other people in the way I can make a Twitter list and share it. Other than Huddle there's no way for people in a Circle to all get in one place and do things together, there's no concept of group spaces.
However, surprisingly, anyone you add to a Circle shows up in a flat list on your public profile by default. You have to go to https://plus.google.com/me/about/edit/nv in order to change or disable this display (info from a public Plus post by Gary Boyer of Google).
If you set the rights to Public, the post appears on what used to be your Google Profile page, which is now your Plus page. This is mine: http://plus.google.com/117260312446321547979 There doesn't seem to be any way to claim a short userid URL (e.g. http://plus.google.com/scilib ) - so this is a pretty awkward way of sharing your public identity, unless you and the people you talk to can memorise 21-digit numbers.
There's no confirmation needed before someone can add you to a circle - you have to see the notification that you were added and then add them to the Blocked Circle if you don't want your public content to show up in their stream.
Overall, I am surprised that (whether they intended it or not) the field trial as currently released is basically Google Facebook, except with only the main Facebook News Feed features (no apps, no groups, no pages, etc.). This is very different from Wave, which was actually quite difficult to grasp, as it was a tool quite unique to the way Lars and Jens Rasmussen liked to work together. People who have used Facebook in particular, or similar interfaces like Yammer or even the main LinkedIn sharing interface, will have no difficulty understanding the concepts of Google Plus. But I'm not sure what would draw them to use Google Plus over existing services.
I'm not sure this mental model of many different Circles that all live within one service is the right one.
I thought a lot about circles and I basically share things in the following ways:
* with friends through email if I want to have some confidence that a website leak won't make a private joke suddenly public
* with friends through Facebook if it's a general thing that doesn't need any response, specifically photos. I use Facebook not because I particularly like it - I'd rather just put stuff in a private space on e.g. Flickr - but it's too hard to get people into pseudo-private web sharing spaces other than Facebook.
* everything else I do is either work public, or Ottawa public, or personal commentary, or interest-specific public.
- if it's work public and short or a link, I put it on Twitter (I sometimes copy to Facebook, Yammer, emails or other locations ONLY because not everyone follows Twitter)
- if it's work public and either I really think it's of particular interest or it's some long-form thoughts, I put it in my blog (which echoes out through RSS to various channels, including FriendFeed and even a dedicated Twitter broadcast account @scilibfeed)
- if it's about events in Ottawa, Ottawa urban planning, something I saw walking down the street, or a snarky comment etc. I put it into my personal Twitter or my personal blog
- if it's interest-specific, I go where the people are - I might post digital photography info to dpreview.com, or Ottawa building photos to the SkyscraperPage Ottawa forum, or Battlestar Galactica thoughts to an io9 comments thread.
Here's the thing: this doesn't map well to Google Plus Circles at all. Other than e.g. pictures of my friends' pets, everything is public; it's just public in different places and channels. It's not all in different private circles. And even "work public" is more nuanced than that, because it has sub-interest categories as well, all of which I can signal by using blog post tags or Twitter hashtags. It's very rare that I want to reach just a small number of specific people, and when I do I always send an email. I don't see Circles replacing those targeted emails for me. In any case, in general I want to reach anyone on the entire Internet with a particular interest, e.g. in open data / #opendata, not just people I know who have that interest.
So in the end, unless Google brings out deep integration with more Google products, and a lot more unique features that are useful to people, I don't see how Google Plus competes either with its obvious direct competitor, Facebook, nor with its secondary competitors (e.g. LinkedIn and Yammer) nor with the general public web tools (blogs and microblogs, photo-sharing sites, video-sharing sites, topic-specific sites).
Another way in which this is clearly NOT the public web is that there are Community Standards for Google Plus. (I am thankful to jilliancyork.com for comparing the Facebook and Google Plus community standards). This is a policed sharing network, with specific types of content excluded (basically, the same types of content that Facebook prohibits, including nudity). This is not a way for you to create your own private sharing space, this is a policed space that Google permits you to use - and remember of course the maxim "If you are not paying for it, you're not the customer; you're the product being sold." (UPDATE 2011-07-11: quote attributed to Andrew Lewis aka blue beetle - thanks to @drs1969 for pointer to blog post with info and to @RepoRat for confirmation. ENDUPDATE)
A function that is useful to me, content aggregation and discussions with groups, is currently provided by FriendFeed but it's pretty clear that it is doomed (Facebook bought FriendFeed and immediately put their staff on new projects). One direction that Google Plus could go would be to be a content aggregator in the way that FriendFeed is, as I said on Plus:
If G+ made it possible to 1) automatically stream content in and 2) create topic discussion group areas, then I think it would be a usable replacement for FriendFeed.
But I don't know that there are enough people with the use case of aggregating content (or the energy to set up and maintain all the imports) in order to make that use case generally useful. (In case you're wondering what it looks like in practice, my FriendFeed account is http://friendfeed.com/scilib but I have to admit that Twitter is by far the main place I share and discuss content now.)
I do anticipate that the +1 button will become more integrated across Google properties and will become the primary way to share into Google Plus (right now, although there is a "share" box in the new black Google toolbar, it doesn't pre-populate with a page URL when you click it while visiting another Google site). And I expect that there will be more integration with other Google services generally. More options are always better, and Google Takeout is a nice guarantee of content portability. It's certainly a much smoother launch than the privacy disaster of Buzz or the confusion of Wave. But I still see a leaning towards dubious defaults (e.g. the displaying of people you've added to your circles, requiring locating an obscure settings screen to disable) combined with a lack of compelling, unique new features.
Time will tell.
Posted by Richard Akerman on July 01, 2011 at 12:10 AM in Privacy, Social Networking, Web/Tech | Permalink | Comments (1) | TrackBack (0)
I wrote previously about Tumblr as a sharing-driven site, primarily used for photos.
Now that my Tumblr has been up for a while (it holds images I have licensed in the Creative Commons), I have more information about how Tumblr is used to share and tag photos.
You can think of Tumblr as a giant exercise in the crowdsourced assembly of photo collections. People extract features that interest them, grouping photos in their own Tumblr streams.
So Tumblr has both explicit metadata, implicit metadata, official & unofficial collections, and social mechanisms for description and discovery.
For explicit metadata, Tumblr allows tags, creating a folksonomy of tags about an image (when you reblog an image, you can add your own tags). You can also attach descriptive text to an image.
You can see both per-Tumblr tag views, e.g. http://rakerman.tumblr.com/tagged/books
as well as Tumblr-global tag views http://www.tumblr.com/tagged/books. (Be aware that Tumblr has basically no concept of separating adult content, so take care when browsing even the most innocuous of tags.) Search is entirely tag-based, e.g. when you enter "books" into the search box, it will return the tagged/books page. It doesn't search the text in descriptions. UPDATE 2011-06-23: Upon further examination it is not entirely clear how the search works. Sometimes it returns hits that appear to be on description text, sometimes it returns hits for photos that don't appear to have any metadata at all. ENDUPDATE
(Google does index the descriptive text, so you could search e.g. site:rakerman.tumblr.com reading.)
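The two tag views above follow a predictable URL pattern; a tiny sketch (the helper function name is mine, not Tumblr's):

```python
def tumblr_tag_urls(blog: str, tag: str) -> tuple:
    # Per-blog tag view and Tumblr-global tag view, as the URLs worked in 2011
    per_blog = f"http://{blog}.tumblr.com/tagged/{tag}"
    global_view = f"http://www.tumblr.com/tagged/{tag}"
    return per_blog, global_view

print(tumblr_tag_urls("rakerman", "books")[0])  # http://rakerman.tumblr.com/tagged/books
```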
Tumblr also has (I presume manually-assembled) official collections for topics, e.g. http://www.tumblr.com/spotlight/books; however, these collections are not surfaced automatically in tag-based search (that is, if you search on books, it won't highlight the fact that there is also a books spotlight topic).
An additional layer of implicit metadata comes through the collections that people assemble themselves (to some extent, using Tumblr to reblog photos this way is one of the main use cases for the site). For example, when I posted a picture of the Uppsala University Library it was discovered and reblogged by aveclivres, a Tumblr devoted to images with books in them. (It was discovered even though I had no followers at the time, presumably by someone watching/searching the books tag.)
This particular image also gives a chance to see the "transclusion" reblogging effect, as a photo spreads in the Tumblrsphere. It is last-in first-out, so it is essentially a most-recent-first activity stream specific to this photo (to this blog post, technically).
So in addition to the explicit metadata, we have the "collections" (aveclivres, bibliofila) that the post is in, which adds additional implicit metadata, "this is like these other objects in this collection".
"Reblogging" is like Twitter retweeting, and "liked"/favorites are similar to Twitter faves. The reblogging mechanism is powerful as it lets the content spread, with popular images going viral in the same way that a popular tweet can go viral.
To some extent, this reblogging of images is a continuously running categorization "game" - more unstructured but also requiring less effort to create than a special online museum game site (see http://museumgames.pbworks.com/ for more information about this approach to online engagement).
Images present a special problem for categorization: unlike full text, it's much harder to use their contents for self-description except in a crude way ("mostly red", "lots of circles"). Their "aboutness" requires human interpretation - interpretation which machines still struggle with, to the extent that image search mainly relies on surrounding context, user-supplied metadata, and image similarity.
The idea of posting images online for users to categorise is of course not new, it has happened through the Flickr Commons and other channels. But Tumblr is a new way to tap into this user interest.
Tumblr has the concept of "following", which is also similar to Twitter - you see the stream of posts from people you follow on your dashboard, and it is one click from there to reblog or favourite a post you like. In this way you can watch feeds of interest, giving a social channel for discovery of content. In other ways Tumblr by default is not social in the way we think of traditional blogs - it doesn't have comments by default, and it has very limited mechanisms for feedback (the "ask me a question" option which allows for a question with a single response).
While Tumblr does provide powerful "Twitter for photography" features, it's important to recognize that it is not a fully-featured photo management platform. Here are some feature comparisons between Flickr and Tumblr.
Flickr has a multi-level access rights scheme per-photo (public, private, friends, family). Tumblr is mainly for public blogging.
Flickr has a concept of adult ("unsafe") photos and "safe" public photos, with options to flag images. Tumblr does not. Tumblr has no advanced searching that will let you search only "safe" images.
Flickr records and exposes rich photo metadata, from date taken and location (if available) to the EXIF details about what camera was used, the exposure settings etc. Tumblr does not surface any of these details, not even the date the image was taken.
Flickr has detailed per-photo rights information, including standard copyright and Creative Commons notices. Tumblr does not. You could apply a license to your entire Tumblr feed, but there are no built-in mechanisms to do so easily.
So be aware that in posting images to Tumblr you're moving from a personal photo collection environment, where you have a lot of control over individual photos and there is rich metadata, to what is basically a pure image-sharing environment, where most if not all metadata is lost from the photo.
Tumblr is also about more than photos - it's a full microblogging environment where you can post text and video - but I've only looked at the photo aspect because it's a common use of Tumblr. There are lots of hybrid approaches as well; for example, the US State Department Tumblr always features a prominent photo at the top, but has explanatory text as well.
UPDATE: Another great example of how Tumblr can provide infrastructure and a collection idea can go viral is http://dearphotograph.com/ - the idea of overlaying old photos on top of current scenes. A June 20, 2011 Globe and Mail article discusses how the idea started and the sudden spike of popularity for the site. ENDUPDATE
UPDATE 2011-06-22: http://www.tumblr.com/explore is another way to discover top tags and content. ENDUPDATE
UPDATE 2011-06-28: Tumblr has added the option to display basic photo EXIF metadata (and presumably could support richer EXIF metadata). ENDUPDATE
Previously:
November 8, 2008 billions of photos
Posted by Richard Akerman on June 21, 2011 at 03:37 PM in Photo, Social Networking, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
Tumblr is a quick way to set up a blog, particularly an image-based blog (one that is mostly photo posts).
Ted Nelson's vision of hypertext (Xanadu) included the concept of transclusion - everything would be linked back to an original source, with a full citation trail, rather than copy & pasted. Transclusion turns out to be hard to do for complex things, so we basically didn't do it. Even a block quote is just copy & pasted text, with the onus on the author of the HTML page to embed the correct links back to the source.
Tumblr gets you to transclusion quickly without knowing you are doing it. In some ways this is an extension of the retweet and modified tweet (RT and MT) that we see in Twitter - trails of pointers back to the original source. As with the new style Twitter, the metadata is created automatically when you click (Retweet for Twitter, Reblog for Tumblr).
Tumblr does a much better job of explicitly surfacing this metadata however, leading to popular photos that have incredibly long chains back to the source, showing the originator (at the very bottom), and crediting the secondary sources that it was picked up from.
So you start out with e.g.
* scilib posted this
and then it builds
* Bob reblogged this from scilib
* Sue reblogged this from Bob
* Alice liked this
all in one long unbroken chain (assuming everyone is using the Reblog or Favorite button)
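The chain described above behaves like a simple last-in first-out log, with the original post pinned at the bottom; a toy sketch of the idea (the helper functions and names are mine, not Tumblr's):

```python
# Toy model of a Tumblr post's "notes" chain: newest entries go on top,
# the original post stays at the very bottom.
notes = ["scilib posted this"]

def reblog(who: str, source: str) -> None:
    notes.insert(0, f"{who} reblogged this from {source}")

def like(who: str) -> None:
    notes.insert(0, f"{who} liked this")

reblog("Bob", "scilib")
reblog("Sue", "Bob")
like("Alice")

print(notes[0])   # Alice liked this
print(notes[-1])  # scilib posted this
```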
Tumblr as a whole (all *.tumblr.com sites or custom-domain sites backended by Tumblr) surpassed 250 million page views per day (reported on May 17, 2011).
It's easy to use, with a particular emphasis on sharing, which better reaches the majority of people who like to share but don't necessarily create many images themselves. Its focus on easy sharing (and a good viewing experience, e.g. on the iPad) makes it a much more powerful sharing platform than Flickr.
Flickr is really focused on helping you maintain and tag your own gallery of photos, but pretty poor at helping you to share them or discover others (within the site). Flickr does now let you easily link images to a Tumblr blog though.
These various characteristics (along with many other features I haven't covered) have led to Tumblr being adopted by some government organisations. You can find some here
http://www.tumblr.com/spotlight/politics
mixed in with other political opinion tumblrs. A few key ones are:
I think experimenting in this space is entirely appropriate - the nature of the web continues to evolve. The current engagement levels with content appear to be low though (you can see the number of "notes", which combines reblogs and faves, on each posting).
To better understand some of the context around using Tumblr in the government, read Measured Voice's post Why We Recommended Tumblr for the New USA.gov Blog. Their core point was "an interface that encourages sharing and interaction".
In general, reblog & like tend to be the main methods of engagement, as they are built in. Beyond that, Tumblr has very minimal communication features, with a single ask-and-answer question system. There is no commenting by default - you can add a comment when you reblog something, but there is no discussion thread attached to a post, although one can be added, as blog.usa.gov does.
When everything is working well, you see both traditional network behavior, in which a posting may stumble along with a few hits until it gets reblogged by a supernode, followed by a flood of likes and reblogs, as well as global attention behavior, where the reblogs and likes sweep around the globe from east to west as parts of the world awake and fall asleep.
In case you're wondering how the discovery takes place, the Tumblr dashboard is sort of like a tweetstream - it shows postings from people you follow, updated continuously. Your Tumblr dashboard becomes a bit like an RSS reader for the Tumblogs you follow, except with much more emphasis on the streaming nature, rather than trying to make sure you don't miss a single post.
I'm doing a simple experiment just to see whether using Tumblr helps photos circulate more widely, by starting to post my Flickr Creative Commons licensed images to http://rakerman.tumblr.com/
As you can imagine, there are lots of potential issues with copyrighted images being endlessly reproduced, although at least as long as the original source is the creator (which is not always the case) there is an advantage in that the endless reproductions are tracked in detail.
Posted by Richard Akerman on June 14, 2011 at 10:39 PM in Photo, Social Networking, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
I thought I had listed these a while ago but I guess it was just in some emails I sent. It is not all that easy to find publications on the Library of Parliament site (and like many sites they do reorgs that change URLs).
http://www2.parl.gc.ca/sites/lop/virtuallibrary/ResearchPublications-e.asp#culture used to work, now use http://www2.parl.gc.ca/sites/lop/virtuallibrary/ResearchPublicationsCat-e.asp?cat=culture
Government 2.0 and Access to Information:
One of the reports is by Amanda E. Clarke (@amanda_e_clarke) who is now at Oxford.
Posted by Richard Akerman on February 28, 2011 at 10:12 AM in Government 2.0, Open Data, Social Networking | Permalink | Comments (0) | TrackBack (0)
A surprisingly stark clustering into three very thinly linked groups, as generated by LinkedIn Labs Maps:
See the source of this image at LinkedIn Labs.
While LinkedIn can tell you the interconnectedness of my connections, it doesn't (yet) tell you how strong the connections are - although presumably this could be derived, at least for online activity, by analysing Twitter traffic, blog links etc. to see how often you @-message people, retweet them, link to their blog posts, comment on their blog posts... Then thicker lines could indicate stronger online connections (an interesting research project). And you might want to use brightness to indicate the fading of connections over time, if they are not maintained.
(I should mention before I launch into the rest of this that I am fairly stingy with my LinkedIn connections - in general it's limited to people I have worked with or communicated with extensively. I know some people use LinkedIn as an online rolodex of everyone they encounter in a work context, but I don't use it that way.)
In the absence of LinkedIn explaining the connections for me, here's my analysis: the story this tells is that I have three groups of LinkedIn connections: people from my workplace where I hold my substantive position, NRC-CISTI (green, lower left); people from elsewhere in the Government of Canada (blue, upper right); and a library / science / scholarly communication group (various orangey-purple colours, upper left). Most of the last group are not in Ottawa, and many are not even in Canada, instead being in the US, the UK and elsewhere.
What I think is interesting is the online and offline story this tells. In terms of maintaining social connections, when I'm working at CISTI, the green connections are maintained by face-to-face contact daily. The government contacts I see face-to-face weekly, mostly at after-hours events related to social media, collaboration and public service renewal. And the scholarly comms / library connections I see usually at most once or twice a year, at conferences.
So in an interpersonal sense, the lines to workplace colleagues would be very thick. But if you were to be able to apply an "awareness of work activities and interests" filter, the picture actually changes dramatically. In that case, the connections are strongest, the amount of information transferred largest, for people with a sustained online presence. Communication within an organisation is a classic problem. But it's interesting that it now is possible in some cases for me to have a much better understanding of what is happening outside my organisation, what people are working on and investigating, than to understand what's happening internally.
Twitter in this case can be thought of as a vibration of the connection, a continuous thrumming of quick notes of activity - a thought shared, a link retweeted. In network terms we would call this a "keep alive", a ping. "I'm here, I'm working on things," Twitter says. But for real information density, you need to have blog posts - a blog post is a thick solid line, a rich informational link, particularly if it has a lot of out-links and comments. And of course, this starts to look like synapses - frequently used connections get stronger.
This can lead to an odd dichotomy, where your social connections at work, which traditionally would have been the richest sources of information about what's going on, may actually serve only a tribal purpose, whereas "virtual" connections link you much more strongly into the information you need to do your work, and provide a sense of ambient awareness about important developments in your fields of interest.
The change - the fact that when people narrate their work, regardless of where they are, you can understand better what is going on - is, I think, part of what drives the gap in understanding between the people who say they must be connected in order to do their work, and the people who see online activities as purely social - the "why do employees need Facebook" question. This is a result of confusing networking for social connections with networking for information connections.
I also think people apply an odd scepticism to these online connections, as if they're somehow not "real", as if only face-to-face is real interaction. To which I say, is calling someone to tell them you love them not "real" because you're not both in the same physical space? Is writing someone a heart-felt handwritten letter of thanks not "real"? It's a very odd concept of real if that's the case.
All this to say, we have to be careful about analogies from the physical world to the digital world. People hear the "social" in social media and think employees are going into purely entertaining spaces, to take a break. Whereas what has actually emerged for some is a professional knowledge network that gives them more information and more context more rapidly from external sources than is available within their own organisation.
So be aware: if your organisation is a tight cluster of interconnections, with few links reaching outside the organisation, and with very thin amounts of information exchanged across the connections, you're going to be outperformed both by employees within your organisation who are better at making professional connections online and contributing to the online ecosystem of information, as well as by organisations that as a whole are able to learn this communications lesson.
If you want a classic example of an organisation failure in this area, there's no better one than the organisation intranet, an inward facing mirror that reflects only your own images and ideas back at you, typically outdated ones frozen in the web of intranet approvals and process. There's a reason it's called the World Wide Web, not the organisation internal network. The power came with open worldwide connections, not with organisations talking to themselves. Open allows serendipitous connections, unanticipated discoveries. Choose open.
Posted by Richard Akerman on January 31, 2011 at 11:13 PM in collaboration, Knowledge Management, Social Networking, Web/Tech | Permalink | Comments (2) | TrackBack (0)
"created_at":"Wed Jun 06 15:35:14 +0000 2007" (extracted from http://api.twitter.com/1/users/show.json?screen_name=scilib)
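Twitter returns the account-creation date in a fixed timestamp format, which can be parsed directly; a minimal sketch (the unauthenticated v1 endpoint shown above has long since been retired, so this parses a saved copy of the payload rather than fetching it):

```python
import json
from datetime import datetime

# A saved copy of the relevant fields from the (now-retired) v1 users/show payload
payload = '{"screen_name": "scilib", "created_at": "Wed Jun 06 15:35:14 +0000 2007"}'

user = json.loads(payload)
# Twitter's created_at uses a fixed "%a %b %d %H:%M:%S %z %Y" layout
joined = datetime.strptime(user["created_at"], "%a %b %d %H:%M:%S %z %Y")
print(joined.isoformat())  # 2007-06-06T15:35:14+00:00
```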
Closing in on four years on Twitter. I can't remember exactly, but I think I noticed people were talking about it more and more, on blogs and in person, so I decided to check it out. It took me about six months to figure out how I could use it, to understand what Twitter was for me. It has aspects of "social bookmarking that works" (that is, that actually shares bookmarks rather than just keeping them to yourself) as well as a social network maintenance layer - a way to keep in contact with people in your network, during the sometimes-long gaps between in-person conversations. It's also a way to get some understanding of people you have yet to meet, to discover a bit about their interests and personality.
It is not, exactly, a blog killer. But it has dramatically reduced the amount of routine news or links that I post. A bunch of factors led to a dramatic reduction in my blogging. The main one had nothing to do with Twitter - in June 2009 when I posted the blog is quiet and said it was because of "Reason I can't tell you which will be announced soon" it was because I had planned to move to scienceblogs.com. I was excited about the move. But then there was some minor setup... and I started second-guessing what I wanted to post, I was uncertain about what to say, now that I was outside of my own space. I started a few posts in draft, but I was really hesitant to do a completely new launch, on a new site. Whereas previously I would have had an idea and immediately fired up my browser, I no longer felt that I should just share anything, anytime. (This was nothing to do with ScienceBlogs, they were perfectly welcoming.) And so my momentum drained away, and my energies were all channeled entirely into Twitter. To the extreme that my posts went June 2009... August 2009... July 2010.
Meanwhile, my tweets are up to 500 or more a month, according to the (probably imperfect) stats gathered by TweetStats for my account.
What have I learned from this long sojourn entirely in Twitterland? First and foremost, I think one loses a lot by not blogging. Twitter can to some extent maintain a presence online, but it can't expand it or make substantial impact. Pretty much all of the opportunities that have come to me from sharing online came from sustained blog posting, from long-form sharing of my own ideas, not from tweeting or retweeting. If you want to share your ideas in a way that will generate substantial discussion and spark interest in a major way, you have to write in the long form. It's the content creators who are the top of the Internet pyramid - to have an impact you must be writing your ideas, narrating your work. Not just for others, but as importantly, to better understand yourself, to have an online archive of your thoughts and work over time.
Nick Charney puts it nicely
As a knowledge worker myself, I feel that my blog is one of my strongest assets: it helps me contextualize my thinking, forms a narrative, is searchable, can hyperlink to other sources, and allows for comments and debate.
From Briefing Notes to Govblogging - January 28, 2011
When I started a work blog in 2004, it was as simple as wanting to be able to google my own conference notes. The easiest way for me to make that happen was to just stick them in a blog. I had no idea it would become more than that, notes to myself.
Our communication channels are evolving continuously. In 2004 the library blogosphere was evolving rapidly, and my RSS reader was my daily go-to place to find out what was going on. Later, I found Friendfeed was a valuable addition, allowing additional conversation and sharing around a broader spectrum of material than just blog posts. I used delicious, but I never found it worked very well even as a personal archive - I was much more likely to find what I was trying to remember by trying some keywords in a search of my blog, than guessing what I might have used for tags in delicious, and it was pretty rare that someone would be monitoring delicious closely enough to pick up a link that I had posted.
Twitter does a much better job of "ambient awareness" in a few senses - it lets me know generally what major events are happening (such as the ongoing events in Egypt), and it is also a good way for me to find links of interest in specific topic areas (for me: open data, government 2.0, library technology, scholarly publishing). But it's important to understand, this is my unique window on to Twitter. I have very carefully selected whom I want to follow (once I hoped to keep it to 200 people or fewer, now I am vowing to keep it under 500); I also go through my followers daily and block both blatant spammers and people that I think are coming from keyword searches that don't match my main content. Twitter for me is very much a curated experience - both from me in curating the information I want to see by selecting whom I follow, and in turn from that group, in the content they write and retweet.
There is a major problem though, which is a dramatic loss in findability. Twitter is designed as an ephemeral stream. And once you're following a substantial number of people, the river flows very quickly. I can retweet and favourite a lot faster than I can read. And once tweeted or favourited, Twitter doesn't make it easy to search, even to record for a long time what you have highlighted. I use Friendfeed to consume both my tweets and my favourites but it's not a great solution - its search is imperfect and it is in danger of disappearing entirely at any time (it was bought by Facebook and is no longer actively supported).
There are some dedicated tools, such as T-keeper and Archivist; I tend to use those specifically for recording event hashtag traffic, though, rather than to capture all of my tweets. There are also services that will bookmark any link you tweet, but often these want read/write access to your Twitter stream, and I am very reluctant to grant write access to any app.
I would be interested in hearing what workflows people are using to keep track of their tweets and favourites so that they can go back and read things later - I imagine a lot of people are using Instapaper, but I haven't managed to integrate this into my workflow yet.
Another factor driving Twitter use is the fact that it is easily read/write on mobile. There are lots of good Twitter clients for mobile devices. On the iPhone I use Echofon, and on Blackberry the official Twitter app, which has some nice notification integration. As I use my iPhone a lot on transit, Twitter has provided an easy way to monitor what's going on, and to provide feedback.
Specifically in the Ottawa context, Twitter has been a powerful tool for keeping up with local events, reporting from those events, and connecting to attendees before and after. For example, the recent Third Tuesday about the NCC's use of social media was announced on Twitter, microblogged on Twitter using hashtag #3tyow, and has also been a catalyst for further conversations with Daniel Feeny (@feeny_d) who was the presenter. I actually worry about a digital divide in Ottawa, as those who are still using only mailing lists and blogs are missing out on a lot of the events and discussion that now are solely on Twitter.
Also, it's important to understand there are three quite different Twitter experiences: the web site, the mobile apps, and the desktop tools.
Twitter through the website itself is a somewhat limited experience - although more features are now easier to use and better exposed in the new user interface. It still won't let you post a retweet with a comment, or shorten your URLs for you, or help you post images. A Twitter web toolkit thus includes not only an open Twitter window, but e.g. bit.ly for URL shortening and e.g. Twitpic for picture posting - a rather awkward, manual integration. It's actually easier for me to post a link or a picture using Echofon, as there is a Safari "post to Echofon" bookmarklet, and built-in picture posting support.
Echofon (and other iPhone apps) also give visual indication of new @-messages and direct messages (DMs), unlike the web app. I probably use Echofon more than any other single iPhone app.
That being said, you only get the full power of Twitter with a desktop application like TweetDeck. Twitter is not just you and the network of followers and followees you have, it is global conversations. You can track these using Twitter searches that provide RSS feeds, but it is much easier to monitor them at a glance in a tool like TweetDeck (or the web-based HootSuite). For example, while I get information in my feed about open data from the people I follow, I also monitor the #opendata hashtag in TweetDeck, along with many others. Selecting hashtags of interest and monitoring them can be a great way to learn about a new topic and keep abreast of new developments. Don't forget you can use booleans, so you can e.g. monitor information about the three GC 2.0 core tools by using the search "gcpedia OR gcconnex OR gcforums".
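Monitoring a boolean search like the one above amounts to OR-ing simple keyword matches over each incoming tweet; a rough sketch of the idea (the function name is mine, and real Twitter search has more operators than this):

```python
def matches_search(text: str, query: str) -> bool:
    """Emulate a simple Twitter OR search, e.g. "gcpedia OR gcconnex OR gcforums"."""
    terms = [t.strip().lower() for t in query.split(" OR ")]
    # A tweet matches if any one of the OR'd terms appears in it
    return any(term in text.lower() for term in terms)

query = "gcpedia OR gcconnex OR gcforums"
print(matches_search("New page up on GCpedia today", query))  # True
print(matches_search("Nothing relevant here", query))         # False
```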
Here's what I'm monitoring right now:
Like I said, a river of information. But you don't have to try to drink everything from the firehose. A good start is just to find the people and hashtags that are useful to you and start monitoring them, when you can - maybe even just check in once a day to get a sense of what's going on. How you use Twitter, and how often, can evolve from there. For myself, I was happy to have my deep immersion in Twitter for over a year, but I'm happy to be back blogging now as well.
Posted by Richard Akerman on January 30, 2011 at 12:57 PM in Social Networking, Twitter, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
We call on the Government of Egypt to ensure that freedom of expression is respected by, among other measures, unblocking Internet sites.
Statement by Minister Cannon on Situation in Egypt - Jan 27, 2011
Nous faisons appel au gouvernement égyptien pour assurer le respect de la liberté d’expression, notamment en cessant de bloquer l’accès à des sites Internet.
Déclaration du ministre Cannon sur la situation en Égypte - 27 jan 2011
Via Twitter, where the Foreign Affairs account is @DFAIT_MAECI / @MAECI_DFAIT (linked from the news release).
Déclaration du ministre #Cannon sur la situation en #Égypte http://ow.ly/3LUbZ — @MAECI_DFAIT, via HootSuite
Statement by Minister #Cannon on situation in #Egypt http://ow.ly/3LU91 — @DFAIT_MAECI, via HootSuite
In the US, Press Secretary Robert Gibbs (@PressSec)
Very concerned about violence in Egypt - government must respect the rights of the Egyptian people & turn on social networking and internet — Robert Gibbs (EOP), @PressSec, via web
And a statement by President Obama
I also call upon the Egyptian government to reverse the actions that they’ve taken to interfere with access to the Internet, to cell phone service and to social networks that do so much to connect people in the 21st century.
In a press briefing, Robert Gibbs frames these as universal human rights
it is our strong belief that inside of the framework of basic individual rights are the rights of those to have access to the Internet and to sites for open communication and social networking.
The US State Department has also been calling for restoration of Internet and social networking access. Philip J. Crowley, U.S. Assistant Secretary of State for Public Affairs, tweeted
We are concerned that communication services, including the Internet, social media and even this #tweet, are being blocked in #Egypt. — Philip J. Crowley, @PJCrowley, via web
Widely reported, e.g. Washington Post - U.S. warns against blocking social media, elevates Internet freedom policies.
In case you're wondering what happened, here's what Internet traffic to and from Egypt looked like, from Thursday to Friday from New York Times article Egypt Cuts Off Most Internet and Cell Service.
It's reported that with broadband Internet completely blocked, but telephones still partially working, people have turned to dial-up to ISPs outside of Egypt, using old modems: Pour contourner le blocage du Web, les modems 56K.
(Note that Blackbird Pie, the tool I used to embed these tweets, loses date information; every embed is stamped "less than a minute ago" regardless of when the tweet was actually posted.)
Posted by Richard Akerman on January 30, 2011 at 11:27 AM in Current Affairs, Social Networking, Twitter | Permalink | Comments (0) | TrackBack (0)
As long as you don't have a private Twitter feed, apps can consume your Twitter via RSS or the API, and either republish the entire thing, or extract selected tagged tweets.
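That tag-based extraction is essentially just a hashtag filter run over your public tweet stream; a minimal sketch of the idea (the trigger tags #yam, #in and #fb are the real ones the services below use; the function name and matching logic are mine):

```python
import re

def should_crosspost(tweet_text: str, trigger: str) -> bool:
    # Match the trigger hashtag as a whole token, so "#fb" does not
    # accidentally match a longer tag like "#fbgroup"
    pattern = rf"(?<!\w){re.escape(trigger)}(?!\w)"
    return re.search(pattern, tweet_text, re.IGNORECASE) is not None

print(should_crosspost("Team lunch was great! #yam", "#yam"))  # True
print(should_crosspost("Posting to my #fbgroup", "#fb"))       # False
```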
For Yammer, simply enter your Twitter username into your account preferences, and then any tweet hashtagged #yam will be copied to your Yammer status. Yammer Integrates with Twitter gives some information. In the latest version of Yammer, the setting is under Account->Profile...Twitter Username.
In LinkedIn, Edit Profile->Twitter... lets you pull in all tweets, or only tweets hashtagged #in.
You can also push status updates from LinkedIn to Twitter. See LinkedIn Learning Center - Twitter for more info.
To share updates from LinkedIn to Twitter, check the box next to the Twitter icon on the LinkedIn home page. The first time you do this, Twitter will verify your account name and password. Whenever the Twitter box is checked, that update will publish to your Twitter feed.
You can add an application (with all the associated security and privacy risks of Facebook apps) called Selective Tweets; once configured, you simply use hashtag #fb to get tweets into Facebook. Another option is http://twitter.com/about/resources/widgets/facebook - the Twitter app for Facebook.
If you want to send postings from Facebook TO Twitter, use http://www.facebook.com/twitter/ - you can choose which types of content you want to be posted.
If you want to display tweets in a window on a site or a blog, you can use http://twitter.com/about/resources/widgets
There are lots of different ways, one is Feedburner Socialize. See Feedburner Socialize and stats.
Be mindful that if you want to send status messages TO Twitter, you have to give the source application write access to your tweet stream. That means if the source application is compromised, it can be used to post malicious information (e.g. malware links) to Twitter.
You can see what applications you have added, and what permissions they have, at http://twitter.com/settings/connections - note that Twitter permissions are all-or-nothing, in the sense that you don't get to choose what permissions a connected application has - if it requests read and write, you can only permit it entirely or deny it completely, you can't choose to grant it just read permissions.
There are also various desktop tools, such as TweetDeck, that support viewing and posting to multiple different types of status feeds simultaneously.
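The common pattern across all these integrations is opt-in hashtag routing: a tweet is copied to a service only if it carries that service's hashtag. A minimal sketch of that convention in Python (the routing table and `route_tweet` function are my own illustration, not any real Twitter, Yammer, or LinkedIn API):

```python
# Sketch of the hashtag-routing convention used by Yammer (#yam),
# LinkedIn (#in) and Selective Tweets (#fb): a tweet is copied to a
# service only if it carries that service's opt-in hashtag.
# The routing table and route_tweet() are illustrative, not a real API.

ROUTES = {
    "#yam": "Yammer",
    "#in": "LinkedIn",
    "#fb": "Facebook",
}

def route_tweet(text):
    """Return the list of services this tweet would be copied to."""
    words = text.lower().split()
    return [service for tag, service in ROUTES.items() if tag in words]

print(route_tweet("Shipping the new release today #fb #yam"))
# prints ['Yammer', 'Facebook']
```

The appeal of the pattern is that the user stays in control per-message: the default is "don't cross-post", and each hashtag is an explicit grant for that one update.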
This topic is related to the issue of moving content around in an enterprise and consolidating notifications.
Jan 14, 2011 RSS and enterprise notification architecture
Posted by Richard Akerman on January 24, 2011 at 10:06 AM in RSS Feed Tools, Social Networking, Twitter, Web/Tech | Permalink | Comments (1) | TrackBack (0)
The Office of the Privacy Commissioner of Canada has released their final agreement with Facebook, in which Facebook notably agrees to change the API to third-party applications, enabling much more granular control of the personal information you share with them.
News Release - Facebook agrees to address Privacy Commissioner’s concerns - August 27, 2009
(announced via the @PrivacyPrivee Twitter feed, incidentally)
This is an issue that has concerned me for some time, so it's great to see it being resolved in a positive manner.
Here's a question I asked about Facebook in 2007, when I was blogging the OECD Participative Web forum
QUESTION: Hi, Richard Akerman from the National Science Library of Canada.

(and here's the panel discussion that ensued - the cast of characters is Mozelle Thompson from Facebook, John Lawford from the Public Interest Advocacy Centre, and Gary Davis from the Irish Internet Data Authority, with Hugh Stephenson from the U.S. Federal Trade Commission chairing)

One of the things that I've seen in the discussion is we are talking mostly about silos, but Web 2.0 is about mashing sites up, about linking sites together, about crossing between sites and combining them together.
Not to pick on Facebook, but Facebook has a fabulous feature, which is Facebook Applications. However, in order for me to give my informed consent, I have only one choice. To use this application, I share my information with a third party.
I think that is a valid option, but the question, the broader question, the policy question is: How do we deal with privacy when we expect that sites will want to interlink like this, that people will want to connect their information like this? How do we control the spread of the information?
Are there technological ways to do that? Are there policy ways to manage it? If I share with a third party, how do I stop the third party from sharing on?
So I'm interested obviously particularly in the Facebook experience but the broader panel as well.
Thank you.
MR. THOMPSON: I think that question is there for a reason. I mean, when I say that, when it warns you that in order to use this application, you have to share some information with that application, it's because if you don't want to share your information with that application, you should not download that application.

(from OECD transcript)

One of the things, you are absolutely correct we have over 5,000 applications. And aside from the applications that are created by Facebook itself, it is very difficult to police every single other one for what everybody else does.
For example, if Amazon has an application that you can download on Facebook, then you are going to have to be guided by Amazon's policy.
That being said, do we have certain standards about data mining and other things? Absolutely.
We tell sites that if they want to create an application and they want to ask you for information, that's great. We are not going to give you information about our users. We leave it then up to the user to determine whether they want to use this application or not. And that has to do with a trusted site relationship.
MR. STEVENSON: Thank you.
John, I think you wanted to get on this, and then Gary, and then one more question.
MR. LAWFORD: The way you dealt with that in legislation, you just ask for someone's consent, right, and that should be the end of it. If you don't want to use that program, you don't consent, except that what you are getting for that application is they are asking for more personal information probably in your sign‑up than they need to to provide that application to you.
They've already got the fact that you have been referred from Facebook and now they are asking for additional personal information.
That's where we are saying that for a Web 2.0 type statute, whether internationally or nationally, you should be able to ask for the plain vanilla transaction. So you have name, address, if you need it, and I get my application, not all this other stuff.
MR. THOMPSON: That's a little bit misleading in the following sense: that is, if you are Amazon and you have an application on Facebook, or some other company has an application on Facebook, if it's Expedia or Travelocity, they are going to need some information from you in order for them to do a transaction with you. That's your relationship with them.
We are not collecting that information. That third party is collecting that information. That's the purpose of the warning. Not because we need that information. We already know what we need to know because you are our user. You are absolutely right.
But we put the warning there so that if you are using a third party application, you know that they are collecting information about you. It's a benefit to consumers.
MR. STEVENSON: Thank you.
Let's give Gary a chance to intervene on this and then I think we have one more question.
MR. DAVIS: Just from a data protection perspective, I don't know the actual characteristics of Facebook applications and there could be anything else.
One of the principles is the purpose limitations. So if I give my information for one purpose, which is to sign up to that, the third party, then if they do anything else with it other than the reason for which you gave it, then you would have a valid complaint to us as the Data Protection Commissioner's Office and we would investigate it.
Also, and again understanding the nature of the relationship that exists, if Facebook applications could be deemed to be handling the information on behalf of Facebook, well then there's a contractual obligation there. And one might say that a privacy standard would be that the contract that is entered into would specify between Facebook and whoever manages Facebook applications, that they may not use the information for any other purpose.
I would expect to see that. If you weren't seeing that going forward, well then that's a privacy point that one would expect to be articulated.
MR. STEVENSON: Thank you.
Posted by Richard Akerman on August 27, 2009 at 10:46 AM in Current Affairs, oecdwebforum2007, Social Networking, Software Development, Web/Tech | Permalink | Comments (0) | TrackBack (0)
(I almost wrote Spark CBC, since that's their Twitter name.)
Spark Episode 76 (audio link available directly in the post, as well as various podcast options)
At 22:02 or so in, they take on the challenge of explaining web APIs, or more specifically, they ask Jer Thorp to help walk them through the concept. It's always interesting to hear the descriptions people use. For example, I would generally say "machine-to-machine", which is probably way too abstract. I also tend(ed) to describe APIs in the context of Service-Oriented Architecture, which probably confused the issue (and the audience). I don't generally talk about computer programs communicating with other computer programs.
I think in general what's presented on the show is a pretty good explanation: websites are opening up their information using APIs, so they can leverage open innovation - outside developers. We are a long long way from a completely interoperable web of standard APIs though.
Here's the Twitter-sized explanation I had proposed (taking quite a lot of my space to talk about how there wasn't enough space):
I would argue as well that web development has gotten sophisticated enough that, while APIs are ideal (at least if well constructed), you can actually get a lot by opening your data, which is the key first step. Open data enables mashups, APIs just make mashups easier. Open data means sharing the information your organisation has, out on the web - ideally your default becomes to share.
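To make that concrete: a mashup needs data, not necessarily an API. Here's a minimal sketch (Python standard library only, with a made-up CSV standing in for a government open data dump) of the kind of re-aggregation anyone can do the moment raw data is published:

```python
# A mashup needs data, not necessarily an API: given a raw CSV dump
# (inlined here; imagine it published by a government open data site),
# anyone can aggregate and re-present it. The records are hypothetical.
import csv
import io
from collections import defaultdict

raw_csv = """city,project,amount
Ottawa,Bridge repair,120000
Toronto,Transit study,250000
Ottawa,Park upgrade,80000
"""

# Total spending per city - the simplest possible "analysis" layer.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw_csv)):
    totals[row["city"]] += int(row["amount"])

for city, amount in sorted(totals.items()):
    print(f"{city}: ${amount:,}")
```

An API just saves the developer the download-and-parse step; the enabling act is publishing the data at all.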
We're still in early days of open data. The Guardian calls their approach "Data Store - Facts You Can Use". I've written previously about the US Data.gov initiative, which currently has the world's simplest website (a giant box reading "coming soon"), but I think is supposed to launch this month. It's similarly challenging to point to open data cities, because while the Twitter-enabled Toronto @MayorMiller announced toronto.ca/open at Mesh, it also reads simply "Under Construction".
What will be possible is mashups, visualizations, APIs, analysis and much more.
I believe the long term success of projects like StimulusWatch Canada and ChangeCamp Ottawa will depend on open data, and (eventually) on all levels of government having open APIs as well.
Which circles me around to the opening topic of the podcast, about whether online activism ("slacktivism") can actually translate into meaningful real-world activity. The answer, I think, is tied in with the segment about lurking... the web is mostly lurk, only maybe 10% participate. Some tiny fraction of those online participants might translate into offline actions. Maybe one in a thousand? But nevertheless, it does happen.
While I generally refuse to join these "click your support" Facebook groups (in part because I don't like FB much anyway), they can be low barrier entry points, in particular since so many Canadians (who may otherwise not be very social-web enabled) are in FB.
The kind of canonical Canadian example is the Fair Copyright for Canada group, with its (at time of posting) 90,071 members. It was brought up in the House of Commons. It did translate into some offline activism. And the sheer numbers did, I think, get both attention and generate concern for the party proposing the bill. There are still lots of issues with that number. Lots of people around the world care about copyright. For all I know, that's 81,000 copyright-concerned Americans, and 9000 Canadians. Such is the global web.
I do think "feel-good clicks" are a bit dangerous: they give you the perception of action without actually doing anything. I've long been concerned by this kind of almost mystical power ascribed to online organising. In my review of Al Gore's The Assault on Reason, I said
Don't get me wrong, I think the Internet has a role to play in reasoned discourse. A small role. A useful tool for pointing attention to falsehoods and referencing inconvenient truths. But electronic communications have a fatal allure of virtual action.
Concerned about the environment? No need to go outside and walk in the woods, or clean up a polluted lot in your neighbourhood, or knock on your representative's door and explain the urgency of your position.
No, instead you can just fire off an email, write a blog posting, and then turn up the air conditioning and the lights and stretch out on the couch and read a good book.
That being said, I have myself translated the online into real-world action on a number of occasions. As I wrote in the StimulusWatch blog, it was an online posting that led me to an event that started a chain leading to the creation of the project.
That same event, and online chatter about a local conference, also led me (as partially outlined in my posting Making government data visible - and is Change coming to Ottawa?) to ChangeCamp Ottawa, a very real event happening at City Hall on May 16, which I have been helping to organise, an event which of course has a substantial online presence including a social network for the specific event, as well as being part of the larger ChangeCamp group on Facebook.
Similarly, a local news article in a free neighbourhood paper (yes, in print, with ink and everything) about a small garden/park space led me to a Facebook group which led me to an offline meeting which led me to create http://www.savethegarden.ca/
And of course, on a much more spectacular scale, the Obama campaign used (and continues to use) online organising as a tool, but they were very clear that the purpose of online was to drive a very extensive (and successful) ground game, people talking and knocking on doors, calling on phones, out in the real world.
So I think when it works best, the online world leads you offline, and offline leads you back online. It's an ongoing discussion that flows across place and time.
Discussions enable meetings, data enables websites, websites enable more meetings, meetings come to consensus on APIs, APIs enable mashups... round and round it goes.
Posted by Richard Akerman on May 05, 2009 at 07:51 AM in Current Affairs, Data Management, Social Networking, Software Development | Permalink | Comments (0) | TrackBack (0)
FriendFeed has launched its Twitter-like "real-time" redesign, which has some of us unhappy and thinking about why we're unhappy.
What I think is that there are two different modes of operation you can address: real-time and asynchronous.
So first, what is the nature of realtime, what characteristics does it have?
It's kind of a strange question in a way - we live in realtime. It is now, I am typing.
But that nature is its strength and its limitation.
In realtime, a big part of your thoughts are concerned with yourself. This is Facebook and Twitter. Twitter's question is: What are you doing? but there's an implied "now" on the end. What are you doing now?
Not what are you thinking. Not what ideas have you developed.
What are you doing now?
This is a legitimate mode of interaction. But it has issues:
* realtime doesn't scale, because you only have a very narrow window of immediate attention
I can talk to one person. I can have a conversation with two people.
Apocryphal stories of Millennials or whatever gen we're up to now having dozens of chat windows open simultaneously aside, there's only so much time in the now. In the now you can broadcast to many people. But converse? It's not like you can type into multiple chat windows simultaneously. The "multiple conversations" people have are really: slice of attention, slice of attention, slice of attention, slice of attention, one after the other. We are not multitaskers. We are serial taskers. At some point, that attention gets sliced so thin that all you can say is "yes", "no" and "lol".
Realtime by the nature of the limited slice you have, has to be: I, me, doing, now.
* realtime creates a false sense of urgency. realtime is the pace of short-term business thinking. realtime is the tick-tock of self-centred false importance.
Look at the crackberry man, out in the world, but staring at his little screen. Out in the world, but living in his email. He sent a message almost 10 seconds ago! Why has no one replied? Don't they realise he's important?
Yes, it's a caricature, but it has some truth. Between your Twitter follower alerts and your FriendFeed follower alerts and your Facebook sheep throwing and your spam messages and the 100 other new emails and your TweetDeck and your calendar alarms... where are the cycles to, you know, actually think about anything?
Realtime is the buzz buzz buzz of busy-ness. Business busy-ness. But don't confuse activity with productivity.
* realtime is time-zone discrimination. realtime is local.
Your scope of immediate thought and action is local. In the now, a fire across the country is interesting, a fire in your building is ALARM ALARM. Realtime is a great tool for re-connecting with your local community, with the people who are awake when you are, where you are. But realtime by its very nature excludes the non-local, the timezone outsiders.
* realtime is loss of control
You can't keep up, the thought is gone, the tweet is gone, the friendfeed posting has scrolled... run twice as fast as you can. Realtime brings not only false urgency, but since no one can keep up with everything, simultaneously, it means a loss of control, over information and over your ability to act.
* realtime is cr*p to monetize, except for search
Realtime is very rapid conversation. Conversation is inherently hard to monetize. Conversation is between people, in order to monetize it, you as the advertiser, the stranger, have to step in between the two people talking, and shout. Imagine how popular that is.
Realtime conversation, scrolling off the screen second by second, is even worse. Oh, they're talking about cats, I'll put up a cat ad... oh wait, they're talking about dogs... oh wait, they're going to go see Wolverine... oh wait, they're gone. Oh ya, that's a genius space to try to stick your ads in. Good luck with that.
You can monetize the search of realtime conversations, but don't confuse search with conversation.
** In Summary **
Realtime has value. I'm in Ottawa, now. If GCPEDIA goes down, I tweet that it is down, I see tweets when it comes back up. I can ask a question of my local community, what's happening tonight? is Bank Street closed? where are the buses re-routed?
This is an absolutely useful connection and capability.
But it is only one possible way to interact.
The global asynchronous conversation, web pages linking to web pages, which turned into blogs linking to blogs, is another mode of operation.
There is no question hovering above the big empty box of your Blogger posting. There is just a word: Create.
Not what are you doing, right now. But what are you thinking, whenever. Like this blog posting that I write now, but you may read at the appropriate time for you, in hours, days or weeks.
Asynchronous can have a measured pace, can be reflective, can weave in many different threads from many different sources, because you have the luxury of time.
* asynchronous is about the flow of conversation, not about the immediate individual acting
You link to me, I think, then link back. I connect to your ideas, I don't speak directly in the moment to you.
* asynchronous is global, timezone agnostic
My friendfeed has people from all over the world. Some are ending their day as mine begins, others haven't yet awoken. Where's Berci? Where's Bora? Well it doesn't exactly matter, because if they post, I see their thoughts at the time of my choosing, and vice versa.
* asynchronous is actually a lot better to monetize
Because we're taking our time to scan and interact with long form ideas, you can figure out what we're talking about, and stick an ad next to that conversation, and we may actually have time to look at it.
** In Summary **
Asynchronous is about the global circulation of ideas. I don't need to know where you are, or when you are, because I'm interacting with your thoughts at the time and place of my own choosing.
Facebook and Twitter are already in the realtime, "what am I doing now" space.
Delicious and StumbleUpon and others are already in the "look what I found" space.
FriendFeed's strength was in the global, asynchronous conversation about the things that we'd found, the ideas that we have. This is a particular type of conversation support that lends itself very well to scientific discourse, bouncing ideas back and forth around the globe, day after day. The scientific discourse is a global asynchronous conversation, by its very nature.
Realtime mode is a nice feature for supporting conferences, when there really is immediate "this is happening now" to report on. But it's a mode mismatch for long-form conversation about ideas.
As I have suggested on FriendFeed, it may be that in the end, with the divergence of the founders' goals from those of some of the users, we may have to write user requirements for the global asynchronous conversation, and if FriendFeed can no longer support them, then move elsewhere, or have a replacement site built.
What FriendFeed will lose is people who, at the time and place of their choosing, spend a lot of time on the site. Time is attention. Attention is eyeballs you can put ads in front of. FriendFeed will lose the people who were paying the most attention.
I was inspired to write this by Cameron Neylon's thoughtful posting Science in the open » “Real Time”: The next big thing or a pointer to a much more interesting problem?
In a way, this posting is part of my conversation with Cameron, despite the fact I have only the vaguest sense of where (in the UK?) and when he is.
We have taken different approaches to the issue of realtime, and in particular I want to raise a very important, indeed critical point that Cameron makes: filtering takes time. One of the other incredibly powerful aspects of FriendFeed in asynchronous mode is that it bubbles up items of interest, through the collective action, the filtering that results from my friends liking and commenting on items. When I open async FF in the morning, what I see is not the latest ideas, what I see is the most important ideas, based on the filtering of a community I trust. Realtime, by its nature, has no time for filtering. No time for filtering turns a curated stream of useful information into a firehose of content, the little specks of gold that dance in the stream lost because they are mixed in with the flood of useless noise.
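That community filtering can be sketched very simply: rank items not by recency but by friend engagement. The weights and data below are my own illustration, not FriendFeed's actual ranking algorithm:

```python
# Rank feed items by community engagement rather than recency:
# each like and each comment from trusted friends bumps an item up.
# Weights and sample data are illustrative, not FriendFeed's real ranking.

def rank_feed(items, like_weight=1, comment_weight=2):
    """Return items sorted by engagement score, highest first."""
    def score(item):
        return (like_weight * item["likes"]
                + comment_weight * item["comments"])
    return sorted(items, key=score, reverse=True)

feed = [
    {"title": "Latest post", "likes": 0, "comments": 0},
    {"title": "Open data paper", "likes": 5, "comments": 3},
    {"title": "Conference notes", "likes": 2, "comments": 1},
]

for item in rank_feed(feed):
    print(item["title"])
# prints: Open data paper, Conference notes, Latest post
```

The point of the sketch is the sort key: a pure realtime stream is implicitly `key=timestamp`, and swapping in an engagement score is exactly the filtering that takes time to accumulate.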
Previously:
April 6, 2009 why I don't like FriendFeed beta
Posted by Richard Akerman on May 04, 2009 at 07:06 AM in E-Commerce, Social Networking, Web/Tech | Permalink | Comments (0) | TrackBack (0)
After, shall we say, considerable deliberation, ChangeCamp Ottawa went with Pathable as our provider for a conference social network. What I found compelling is that they have EventBrite integration, which means once we were ready to go, it was able to pull current registrations across (with names, emails, and affiliations) and send out Pathable invites, and as new users register it will automatically pick up the EventBrite registration and send out an invite.
Posted by Richard Akerman on April 23, 2009 at 05:58 PM in Conference, Social Networking, Web/Tech | Permalink | Comments (0) | TrackBack (0)