The Canadian Open Data Experience – CODE – is a 48-hour nationwide app development competition
The other thing that I announced today, by the way, is our first appathon, where we're going to invite entrepreneurs of all ages next February to go onto the data.gc.ca website/portal and, in a 48-hour period, come up with the next app that uses Canada's open government data and develop something that will be of use to citizens.
Possibly this will be timed to coincide with International Open Data Day 2014 on February 22, I don't know.
UPDATE 2014-01-09: The CODE event will be February 28th to March 2nd.
There is a basic signup site at http://www.canadianopendataexperience.com/ along with social media accounts.
The government is supporting it, but not running it. It will be run by XMG, who also run the similarly-named but separate Great Canadian Appathon (it's not clear whether the two events will be one and the same in 2014).
August 24th-25th, 2013
HUB Ottawa, 71 Bank Street
"The hackathon will bring together technologists, data analysts, and international development experts from across the sector to create useful products, insights and analysis of Canada’s international aid data."
You have to apply to attend; the deadline for applications is August 8, 2013.
They are also soliciting project ideas.
September 13th-14th, 2013
University of Ottawa
"The first day of the event will be a series of speaker panels about different data sources and applications and the second day will be a hackathon with various datasets and the CanLII API."
Site: CanLII hackathon (hackerleague.org)
You can see previous postings in the open data category.
On April 22, 2013 TBS released the Expenditure Database. It allows one to browse and search expenditures organised in ways that are easier to understand than the "votes" system under which they are actually allocated. It is great progress within the overall open government initiative.
However, from a technology, design and process perspective, I think there are some opportunities that are being missed.
Minister Clement is fond of saying that the government's data is like grandmother's silver, hidden away. In fact, our current model of delivering applications is more like hiding the entire kitchen from view. In Ottawa's Victorian homes, the kitchen is usually at the back of the house, closed off from the dining room. The servants were supposed to work there in obscurity, with only the final result appearing with a dramatic flourish in the formal dining room.
In most of those homes that wall has now been smashed down, because we found that in the post-servant age, the kitchen is the hub of activity in the house, where we socialise and cook together openly, a very human and social activity.
But in government software development, that wall still stands. Applications are developed behind closed doors by public servants, and then suddenly appear, fully formed, on release day. This model of closed software development has some very real consequences:
We need not just open data, we need the entire philosophy of open source web development: we need to develop in the open.
The UK Government has released a fantastic Government Service Design Manual. It has some explicit statements and some built-in assumptions. The underlying concepts are to develop government services using modern software engineering processes. This means being open about the code as it is in development (e.g. through blogs explaining the work and github repositories making the code available) and iterating through the design, from Discovery, through Alpha, Beta, Live and (all important in the government) Retirement.
The UK Government Digital Service does its work in the open, through blogs, twitter, and github.
Software development is a process: the application you release at a moment in time is not an end in itself. What's important is that the application tells a story about itself, so that it can be improved and so that it can be an inspiration for further work.
The Expenditure Database is a great step in providing easy citizen access to the underlying data. But it doesn't tell any of the story of how it works, how the data is processed, and who did the work. The meal has been delivered and it looks very nice, but the kitchen and the process of making it are still closed off from view, hidden in mystery.
Instead one has to dig through the code and make guesses. For example, upon examination it appears to be drawing the data from a local file: http://www.tbs-sct.gc.ca/ems-sgd/edb-bdd/data.js
If instead the code were in a shared repository and documented, it would be much easier to understand how it works, and the rights to reuse and modify it would be a lot clearer. One would also expect to find both examples of using the data file and an explanation of how it was generated.
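To illustrate the guesswork involved: JavaScript data files like this often wrap a JSON payload in a variable assignment, which means reusing the data from another language requires stripping the wrapper first. This is a hypothetical sketch in Python - the actual structure of data.js is undocumented, so both the wrapper format and the sample record below are assumptions:

```python
import json
import re

def extract_json_from_js(js_text):
    """Strip a hypothetical 'var data = ...;' JavaScript wrapper
    and parse the remaining JSON payload."""
    match = re.search(r"=\s*(\[.*\]|\{.*\})\s*;?\s*$", js_text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON payload found in JS wrapper")
    return json.loads(match.group(1))

# Hypothetical sample mimicking an assumed structure for data.js;
# the real file's format is undocumented.
sample = 'var expenditures = [{"org": "TBS", "amount": 1000}];'
records = extract_json_from_js(sample)
print(records[0]["org"])  # TBS
```

If the file were documented in an open repository, none of this reverse-engineering would be necessary.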
What Expenditure Database development would look like if it followed this model:
Just a few examples of what we're missing as a consequence of the current implementation:
I welcome your feedback on this post, with any clarifications, corrections, suggestions or pointers.
The Government of Canada has its Web Experience Toolkit on GitHub
By making the WxT open source, it has been possible for organisations outside of the government to use it for their websites and to improve it. Both the University of Ottawa and the City of Ottawa are now using WxT.
There was a writeup in Wired about the WxT: Canadian [Coders] Solve Mystery of Open Source Government.
the Treasury Board of Canada hosted a CodeFest to invite hackers — mostly government staffers — to hack its Web Experience Toolkit, or WET — a set of open-source tools that the Treasury Board uses for building websites. One hundred and fifty people came. Many of them were young developers, excitedly swapping code and sharing ideas across tables.
There are some minor issues: It's Treasury Board Secretariat, not Treasury Board, and TBS is the government's central policy agency not an "obscure Canadian tax-collecting agency". (Admittedly it is difficult to decode that from the TBS website, but if you read far down enough you get to "policies, directives, regulations, and program expenditure proposals with respect to the management of the government's resources".)
The author of the article is Robert McMillan (@bobmcmillan).
Featured or indirectly mentioned are:
UPDATE: There is a Drupal working group, the next meeting is January 25, 2013 at Ottawa City Hall.
Here's a thing I wrote on the government internal wiki (which is only available to government employees). Some of the content was based on an email I received.
If you have ideas about how to make it better, please feel free to leave a comment (or edit it directly on the wiki, if you're in the Government of Canada).
There are a number of ways to encourage the use of open data and the building of a community. This article uses the broad umbrella of "hackfest" (hackathon, codefest, etc.) to cover some approaches.
Common approaches include:
Sites that offer data, APIs, or other technical capabilities often need support from a broader community in order to sustain and grow their capabilities. This is a key element of community engagement.
Communities may be sector-specific, or they may be based on particular skills or expertise.
Key communities may include:
In cases where a desired outcome is the development of new applications or hardware in order to promote economic growth, software development experts are a community that needs specific targeted outreach.
In the United Kingdom, the national Open Data White Paper identifies the need for a Developer Outreach Strategy:
We need to work collaboratively to ensure that developers are aware of what datasets are being released, in what timeframes, and to maintain relationships with those at the cutting edge of technology who can help government do things differently and in more agile ways. This kind of conversation between government and users facilitates capacity building both ways to great benefit for the public good.
One approach to this for government websites is to ensure that every site has a /developer subpage that provides contact and community information. Such a page needs to be backed up with active community engagement.
In the US, the national Digital Government Strategy states:
To establish a “new default,” the policy will require that newly developed IT systems are architected for openness and expose high-value data and content as web APIs at a discrete and digestible level of granularity with metadata tags. Under a presumption of openness, agencies must evaluate the information contained within these systems for release to other agencies and the public, publish it in a timely manner, make it easily accessible for external use as applicable, and post it at agency.gov/developer in a machine-readable format.
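As a concrete illustration of what "expose high-value data and content as web APIs... in a machine-readable format" can mean in practice, here is a minimal Python sketch that packages records into a JSON payload with metadata tags. The envelope fields, dataset name, and licence string are invented for illustration; they are not any agency's actual schema:

```python
import json
from datetime import date

def make_open_data_response(dataset_id, records, tags):
    """Package records as a machine-readable JSON payload with
    metadata tags, in the spirit of the agency.gov/developer
    requirement. The envelope here is illustrative, not official."""
    return json.dumps({
        "dataset": dataset_id,
        "metadata": {
            "tags": sorted(tags),
            "released": date.today().isoformat(),
            "license": "open-government-licence",  # placeholder value
        },
        "records": records,
    }, indent=2)

payload = make_open_data_response(
    "transit-stops",  # hypothetical dataset id
    [{"stop_id": "3011", "name": "Mackenzie King Bridge"}],
    {"transit", "geospatial"},
)
print(payload)
```

The point of the "discrete and digestible level of granularity" language is exactly this: a developer should be able to fetch one well-described dataset, with its metadata, without scraping a web page.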
Many events have idea generation as a first stage of the event (typically online-only to get people started thinking). Others are ideas-only (no code created), or run idea generation in parallel with the coding.
There has been an upsurge in organisations offering prizes for creative use of their technology assets, most typically an "apps contest" using open data that they have released. This has ranged from the municipal to the international level.
It is an easy entry point into leveraging the value of released data, but it should be considered only one piece of an overall engagement strategy.
A caution on expectations from cash prizes - they will incentivize creation of individual apps, but they are not sufficient to help build community. Four Ways Summer of Smart has Reinvented Civic Hackathons states:
Don’t (just) offer cash prizes. (Hackathons should build a community.) We have seen dozens of apps contests and hackathons where it’s assumed that a few thousand dollars of incentive as a prize will help seed the projects and produce innovation. But as many have found, this model simply isn’t effective – and there is a clear reason why: civic hackers are driven not by money, but by the potential of their work to create civic and social change.
The City of Ottawa will have a second contest in fall 2012. The lead at the City of Ottawa is Rob Giggey (Twitter: @rob_giggey).
The US government offers a general platform for citizen competitions of many different types called Challenge.gov. The tagline is: "On Challenge.gov, the public and government can solve problems together."
Developers will sometimes create apps simply based around a data release or a hackfest event, without needing a specific contest. For example, a wide range of applications for bus arrival times were created following the City of Ottawa's release of real-time transit open data - there was no contest, but the data release was widely promoted and there was a related 2012 Ottawa Transit Data Day event.
There are many Ottawa-area groups with expertise related to community-building and software development.
Having liberated Ottawa's real-time bus location data, the city is now moving to the next step of bringing developers and OC Transpo together to make (even more) amazing things.
Ottawa Transit Data Day is June 2, 2012 at Ottawa City Hall
Register at http://transit-data-yow.eventbrite.com/
Find more information about the available data in the Open Data Ottawa blog
Suggest datasets at DataOtt
Ask questions in the GPS API mailing list
Provide feedback directly about OC Transpo open data: OCOpenData@ottawa.ca
Event hashtag will probably be #yowdata
So many communications channels. Whew.
To develop is to be in the physical world, with all its rich social connections (and all its frustrating constraints) and yet to be able to tap into the invisible virtual world, to craft new things out of thin air.
This touches on the whole "should X be coders" debate. I'm inclined to agree with Rushkoff's Program or Be Programmed - we have to not just use the tools, we have to understand at least some of the underpinnings and be empowered to hack away to change things.
In the UK there's quite a bit of discussion about moving the computer curriculum from teaching how to use Microsoft Office (I shudder to think this is actually considered "computer training") to covering more foundational concepts and teaching how to code.
A story arrived in my Twitter stream that illustrates just how fundamental this set of skills can be - with painful honesty Shawn Graham tells a tale of failure and lessons learned - a community that succeeded on the front end, but fell apart (no documentation, taken over by spammers, no backups) on the back end: How I Lost the Crowd: A Tale of Sorrow and Hope.
We need to get to a place where people can have great ideas and know how to implement them, either themselves or by knowing the right resources to draw upon. To some extent the new outsiders are the people who don't know how to code. In professions that once seemed perhaps distant from technology such as journalism and the humanities, coding is becoming a core competency. Fortunately there are great initiatives like Girl Develop It, Young Rewired State and Mozilla Webmaker (the source of the video I embedded above). No matter where you are it's almost certain there's a meetup or three for beginning coders. Let's get coding.
July 24, 2007 software development, staffing and new library technology
December 7, 2005 librarians 2.0 don't need to be coders 2.0
Random Hacks of Kindness (RHoK) will be December 3 & 4, 2011.
It's a global collaboration with the possibility of local events.
It happens that International Open Data Hackathon also falls on December 3, 2011. It is focused on getting cities to host hackathon events.
Ottawa is hosting one at City Hall, you can register for free
It's not just a technical event, it's about bringing everyone with an interest in open data, open government, and citizen engagement together so we can exchange ideas and make stuff.
In general, people want their organisations to do wonderful things. Yet even when organisations are staffed with individually excellent employees, often the result is less than the sum of its parts. Or in other words, "Why can't my organisation do x?"
You can think of this as the lifecycle of an idea.
idea -> implementation -> operations
The above of course only works if you have infinite resources, to take the vast number of ideas, implement them all, and maintain them forever. Since this is impossible, instead we try to talk about this using an analogy to living things, e.g. from seed to growing tree to mature tree to chopped down tree.
There are many different angles on this, in technology commonly we talk about the Software Development Life Cycle. Unfortunately in most organisations, this is actually the Software Development Lie Cycle, for many reasons.
First is that the above line from idea to operations is woefully oversimplified. It's actually a loop, with current state turning into future state (which becomes the new current state):
(entire current organisation and current technology and idea environment) -> new ideas and improvements on current systems -> gateway -> prototypes and upgrade projects -> gateway -> new and existing operations (future state) -> death gateway -> decommissioned systems
There are at least three problems that organisations encounter as they go around this loop:
1. The processes are bad at each step
2. The staff and management are unable to properly execute the processes, even if the processes themselves are good
3. The entire world changes while you're trying to move from your fixed current state to a wishfully-fixed future state
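The loop above, and the gateways in it, can be sketched as a toy state machine. All the state names and the pass/kill gate logic here are illustrative only - a real gating process is an organisational activity, not code:

```python
# Toy model of the idea lifecycle loop from the text:
# idea -> gateway -> prototype -> gateway -> operations
# -> death gateway -> decommissioned.
# States and gate logic are illustrative, not a real methodology.

STATES = ["idea", "prototype", "operations", "decommissioned"]

def advance(state, gate_passed):
    """Move an initiative one step around the loop.
    Failing a gate kills the initiative (returns None)."""
    if state not in STATES:
        raise ValueError("unknown state: " + state)
    if state == "decommissioned":
        return "decommissioned"  # terminal state
    if not gate_passed:
        return None  # killed at the gate
    return STATES[STATES.index(state) + 1]

print(advance("idea", True))        # prototype
print(advance("prototype", False))  # None: killed at the gate
```

The model makes the later point visible: every transition is supposed to pass through a gate, including the one into decommissioning, and sunk-cost psychology is precisely the refusal to let a gate return None.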
Specifically when it comes to information technology, big organisations are bad at IT. Also small organisations. Also software engineers. Also, well, basically everyone.
The good news is we have a handle on some of the main problems. In fact we've been trying to find solutions to these problems for decades. The bad news is that solving these problems is very hard, and so quick fixes rather than real fixes are very tempting.
One of the underlying issues is the overall factory model we have both for producing and managing. This is very deeply embedded both in every organisation as well as in the assumptions of every employee and manager. It is hard to break out of a century-long mindset; in fact it's hard to realise you even have that mindset.
Software engineers have tried to attack this problem from all fronts: better languages (e.g. Ada from the US Defence Department), better methodologies (e.g. agile instead of waterfall), better overall context (e.g. Service-Oriented Architecture), better component sharing (e.g. github).
All of these are challenging because of the underlying factory organisation model. For large organisations in particular, certain approaches pull the entire organisation as if by magnetism. In particular, organisations will tend to do
* big projects (in people resources, dollars, and time elapsed)
* complex projects (e.g. many different technologies to be integrated)
* ambitious projects (e.g. trying to fix many problems at once)
* business-incremental projects (i.e. to add new features to a current process or product offering)
* perceived low-risk projects (e.g. with elaborate cost analysis and justifications)
* poorly-communicated projects (management doesn't communicate their intentions "down", IT doesn't communicate complexities "up" and comms doesn't talk to EA which doesn't talk to the coders who don't talk to... etc.)
Also business tends to measure certain costs while ignoring others (typically capital expenditures and coding time are carefully tracked, but having endless meetings is often considered to have zero cost).
This force pulls on all initiatives, pulling them towards having more stakeholders, more committees, more senior-level buy-in, more analysis, more consultations. And once substantial effort is invested, the "sunk cost" psychology means that it is almost impossible to stop a running initiative, or to turn off a running system.
This force pulls towards bigness and complexity in general - towards big bang Enterprise Architecture, elaborate Service-Oriented Architecture, big-iron computers and networking hardware, "enterprise-class" heavyweight software backed by giant consulting firms... organisations somehow believe if they can just do big enough things with big enough technology, their problems will be solved.
There are many, many reports over decades that have documented this. A recent one is System Error: fixing the flaws in government IT from the Institute for Government in the UK.
Pretty much everyone wants to fix this. (Of course consultants may in some cases have perverse incentives to perpetuate dysfunctional processes that generate a lot of money for them, and managers may want to retain processes that give them power, but in general people want to do good work.)
One key problem is that usually people don't take a systems view. They identify one problem area in the cycle and want to attack just that, thinking that will solve the entire problem. For example:
* if we could just generate more / better ideas, everything else would work
* if we had a better planning process, everything else would work
* if we did more prototypes, everything else would work
* if we had more hackers, everything else would work
* if we changed development methodologies (e.g. to agile development), everything else would work
The reality of course is that unless you attack ALL steps in this cycle, the end results will always be, as they say, hashtag #fail.
And even if you do each individual step well, you may still be choosing to do the wrong things.
Let's take a typical example. Your organisation has an existing telephone system (PBX with voicemail) and knows to the penny most of the costs (in reality it probably knows most of the clear expenditures but has a more hazy idea of systems maintenance costs and opportunity costs). Someone, probably at a senior level, has read about Voice over IP (VoIP). Investigation initiated. RFP written. Consultants consulted. ROI laid out. Network upgrades needed. Hardware to purchase. Desktop software to integrate. Options to evaluate. Choices are made, purchasing, coding, implementing... a nice project that takes a year+ to replace a telephone on an employee's desk with... a telephone on an employee's desk. Everyone happy.
Except in all this process no one stepped back to say: do we need the telephone at all? Does anyone like using the telephone? Are there other cheaper solutions? What if we got rid of telephones altogether? What do telephones do to knowledge work?
In fact, you can find whole threads of productivity literature that will tell you that telephone interruptions disrupt knowledge work, and that other methods of communication may be more effective. In parallel the trends are clearly towards a global decline in voice traffic and a huge increase in data traffic. People are not talking on their cellphones, they're texting and Facebooking. Cartoons are appearing about the "phone gap", where texting users expect to receive a text asking whether a call can be made, rather than the phone just ringing out of the blue.
Over a decade ago, in The Time Trap, telephone interruptions got an entire chapter:
There is something wholly irresistible about a ringing telephone. ... Yet tune it out you must, for telephone interruptions can shatter your concentration and fracture your productivity as nothing else can.
(In fact the solution to this is quite simple, it just requires throwing out 20th Century convention and turning the ringer on your phone off, which is what I do. My work phone and Blackberry never ring. Although mostly these days they never ring because no one uses the phone any more.)
All this to say, even if somehow every stage of the giant VoIP project was perfectly executed, they still would have done a good job building the wrong thing. But in general of course, every single stage is badly executed.
So you need a plan of attack for every stage. At every stage you need BOTH better process and better-trained capable people. Like a bike lane system which is barely used at 98% complete but starts to hum when every pathway is connected, you only get the big improvements when every stage of your ideas cycle works well. In addition, we need the ability to continuously step back to the big picture and decide whether an entire initiative still makes sense in light of changing conditions.
So you must
* know your current state well (most organisations have no idea what their actual processes are and are often shocked to see ridiculous legacy processes once they're diagrammed on a whiteboard)
* have a good process for gathering and generating ideas (often perceived as a major barrier, this is actually quite straightforward - the issue is rarely a shortage of ideas)
* have a very good process for gatewaying ideas to the investigation stage (making sure to let through both some high-risk prototype ideas and some improve-the-business ideas)
* be very very fast and agile at cranking out high-risk prototypes
* have a very good and very strict process for gatewaying prototypes into full implementation (sunk cost and idea attachment tend to make it hard to kill a running system no matter how many times it was emphasized that it was temporary)
* have a short process to take very focused features to production (e.g. a few months for a few very specific features)
* be adaptable as everything changes along the way
* have a process to kill things from production once they're no longer relevant (this is like the prototype killing problem, except multiplied by 100 as there may be decades of sunk cost)
* have a process to continuously review running initiatives and kill them (or modify them dramatically) if the world has changed while they were running
* Start tracking real employee costs not just specific activities and expenditures. If you have to have a committee and burn $5000 of people time to decide to spend $500 on an Internet tool license, you have a costing misallocation problem.
At every stage just the process alone won't do it, you also need people who are creative, well-trained, adaptable, communicative, sharing their knowledge and documenting what they're doing as they go along.
So for example you might:
* immediately document your current systems (enterprise architecture)
* introduce an environment scanning process (and/or partner with existing horizon-scanning initiatives)
* devise a ruthless gating process that senior management can implement with input from all staff
* reward success and failure - reward successful products but also celebrate the lessons from prototypes that don't pass the gates
* create a prototyping group (innovation team, tiger team, skunkworks etc.) but ensure it has a close working relationship (or is simply a role within) the main implementation/operations team
* Ensure everyone is getting continuous training. Including senior management. Create time for reverse mentoring and innovation.
* Commit to FULLY FUNDING THE OPERATIONS of prototypes which pass the gate into full implementation
* Commit to annually review the relevance of running systems and to plan to kill the ones that need to retire entirely or be replaced with new approaches
* If you are doing technology projects, ensure you aren't a technology island. That means many different things including:
** the teams need full, unblocked Internet access, because that's where the information and the tools are
** the teams need to connect out to modern approaches and other developers - this means participating in local hackfests, sharing code on github, etc.
** the teams need to be able to share, publicly, their challenges and the work they're doing - through blogs, presentations, etc.
This is all a long, long way from our current factory model. In the factory model, interchangeable employees toil on the factory floor, silently, while high above the managers gaze down benevolently from their glass-encased offices, and every project is initiated from senior management, for senior management. It is a long way from a hierarchical 20th century factory organisation to a 21st century agile adhocracy that is both well-connected internally and well-connected out to the entire world of agile innovation.
Plus which, even if you have all of this top of mind, you can still fail. Microsoft and Google hire the top computer scientists (and PhDs in many other areas) every year, AND have deliberate initiatives to attack every stage of this process, to try to ensure they don't fall into the big organisation trap... and yet both Microsoft and Google continue to fall into this trap in various ways. The point being, even organisations that set out to build agility into their DNA like Google can fail, which is why you get endless "why Google can't build (small startup x)" or "why Yahoo Flickr didn't invent (new photo sharing approach y)". In fact to prove my point here's both Why Google Can't Build Instagram AND Flickr Should Have Built Instagram. But They Didn’t. Here’s Why.
The fact of the matter is, whether your organisation is 300, 3000, or 300,000 it is never going to be the same as 8 people in a room with a coffeepot. The best you can do is be mindful of that, and have a mix of internal improvements and standalone innovation, spinning in and out of your organisation continuously.
And that is my answer to @dbast's question
Our organisations are not incapable of coming together to create awesome prototypes. The data.gc.ca prototype was done with a mix of dedicated resources and corner-of-desk work, in 7 weeks. It would be another year before it was made publicly available. GCPEDIA was done in months and has run for over two years, but it has yet to be operationally integrated or sustainably funded. Hacking for hacking's sake will produce more orphaned prototypes that no one wants to sustain but no one is willing to shut down.
Hacking for hacking's sake also won't help unless, at a minimum, the organisation itself has recognized the problems of big projects and backed up that recognition with both public statements and effective internal initiatives.
I'm not saying hacking to demonstrate the art of the possible isn't great - it is. Hackathons and apps contests have in many cases opened the eyes of CIOs to what can be done with openness combined with outreach and enthusiasm. But unless you're willing to turn everything off the day after the demo, you still need to have an organisation-wide transformation in order to have a repeatable process to get from innovation through a gateway to full implementation and sustainable operations.
Basically the situation sucks. This is not a factory; Taylorism doesn't apply to solve the entire problem. We can't chop this up into smaller and smaller problems and make each piece more and more efficient. If you have The Innovation Group and The Gateway Group and The Operations Group you just end up with efficient silos and an inefficient organisation. This problem needs both bottom-up attacks to demonstrate local efficiencies and possibilities as well as systems thinking, a systematic organisation-wide attack on the problem as a whole. Taylorism is easy and high-achievers get to be individual Linchpin superheroes. Systems change is hard. But systems change is what we need.
The Office of the Privacy Commissioner of Canada has released their final agreement with Facebook, in which Facebook notably agrees to change the API to third-party applications, enabling much more granular control of the personal information you share with them.
News Release - Facebook agrees to address Privacy Commissioner’s concerns - August 27, 2009
(announced via the @PrivacyPrivee Twitter feed, incidentally)
This is an issue that has concerned me for some time, so it's great to see it being resolved in a positive manner.
Here's a question I asked about Facebook in 2007, when I was blogging the OECD Participative Web forum:
(The cast of characters in the panel discussion that ensued: Mozelle Thompson from Facebook, John Lawford from the Public Interest Advocacy Centre, and Gary Davis from the Irish Internet Data Authority, with Hugh Stephenson from the U.S. Federal Trade Commission chairing.)

QUESTION: Hi, Richard Akerman from the National Science Library of Canada.
One of the things that I've seen in the discussion is we are talking mostly about silos, but Web 2.0 is about mashing sites up, about linking sites together, about crossing between sites and combining them together.
Not to pick on Facebook, but Facebook has a fabulous feature, which is Facebook Applications. However, in order for me to give my informed consent, I have only one choice. To use this application, I share my information with a third party.
I think that is a valid option, but the question, the broader question, the policy question is: How do we deal with privacy when we expect that sites will want to interlink like this, that people will want to connect their information like this? How do we control the spread of the information?
Are there technological ways to do that? Are there policy ways to manage it? If I share with a third party, how do I stop the third party from sharing on?
So I'm interested obviously particularly in the Facebook experience but the broader panel as well.
MR. THOMPSON: I think that question is there for a reason. I mean, when I say that, when it warns you that in order to use this application, you have to share some information with that application, it's because if you don't want to share your information with that application, you should not download that application.

(from OECD transcript)
One of the things, you are absolutely correct we have over 5,000 applications. And aside from the applications that are created by Facebook itself, it is very difficult to police every single other one for what everybody else does.
For example, if Amazon has an application that you can download on Facebook, then you are going to have to be guided by Amazon's policy.
That being said, do we have certain standards about data mining and other things? Absolutely.
We tell sites that if they want to create an application and they want to ask you for information, that's great. We are not going to give you information about our users. We leave it then up to the user to determine whether they want to use this application or not. And that has to do with a trusted site relationship.
MR. STEVENSON: Thank you.
John, I think you wanted to get on this, and then Gary, and then one more question.
MR. LAWFORD: The way you dealt with that in legislation, you just ask for someone's consent, right, and that should be the end of it. If you don't want to use that program, you don't consent, except that what you are getting for that application is they are asking for more personal information probably in your sign‑up than they need to provide that application to you.
They've already got the fact that you have been referred from Facebook and now they are asking for additional personal information.
That's where we are saying that for a Web 2.0 type statute, whether internationally or nationally, you should be able to ask for the plain vanilla transaction. So you have name, address, if you need it, and I get my application, not all this other stuff.
MR. THOMPSON: That's a little bit misleading in the following sense: that is you are Amazon and you have an application on Facebook or some other company has an application on Facebook, if it's Expedia or Travelocity, they are going to need some information from you in order for them to do a transaction with you. That's your relationship with them.
We are not collecting that information. That third party is collecting that information. That's the purpose of the warning. Not because we need that information. We already know what we need to know because you are our user. You are absolutely right.
But we put the warning there so that if you are using a third party application, you know that they are collecting information about you. It's a benefit to consumers.
MR. STEVENSON: Thank you.
Let's give Gary a chance to intervene on this and then I think we have one more question.
MR. DAVIS: Just from a data protection perspective, I don't know the actual characteristics of Facebook applications and there could be anything else.
One of the principles is the purpose limitations. So if I give my information for one purpose, which is to sign up to that third party, then if they do anything else with it other than the reason for which you gave it, then you would have a valid complaint to us as the Data Protection Commissioner's Office and we would investigate it.
Also, and again understanding the nature of the relationship that exists, if Facebook applications could be deemed to be handling the information on behalf of Facebook, well then there's a contractual obligation there. And one might say that a privacy standard would be that the contract that is entered into would specify between Facebook and whoever manages Facebook applications, that they may not use the information for any other purpose.
I would expect to see that. If you weren't seeing that going forward, well then that's a privacy point that one would expect to be articulated.
MR. STEVENSON: Thank you.
(I almost wrote Spark CBC, since that's their Twitter name.)
Spark Episode 76 (audio link available directly in the post, as well as various podcast options)
At 22:02 or so in, they take on the challenge of explaining web APIs, or more specifically, they ask Jer Thorp to help walk them through the concept. It's always interesting to hear the descriptions people use. For example, I would generally say "machine-to-machine", which is probably way too abstract. I also tend(ed) to describe APIs in the context of Service-Oriented Architecture, which probably confused the issue (and the audience). I don't generally talk about computer programs communicating with other computer programs.
I think in general what's presented on the show is a pretty good explanation: websites are opening up their information using APIs, so they can leverage open innovation - outside developers. We are a long long way from a completely interoperable web of standard APIs though.
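To make the "programs talking to programs" idea concrete, here's a minimal sketch. The "weather API" response below is invented for illustration (real APIs each define their own endpoints and field names), but the shape is typical: a web API returns structured JSON rather than a human-oriented web page, and a small program consumes it directly.

```python
import json

# A hypothetical API response -- the kind of structured, machine-readable
# payload a web API returns instead of an HTML page. The fields here
# (city, temp_c, conditions) are invented for this example.
SAMPLE_RESPONSE = '{"city": "Ottawa", "temp_c": -12, "conditions": "snow"}'

def summarize(raw_json):
    """One program consuming another program's output: no browser,
    no screen-scraping, just structured data."""
    data = json.loads(raw_json)
    return f"{data['city']}: {data['temp_c']}C, {data['conditions']}"

print(summarize(SAMPLE_RESPONSE))
```

In practice the JSON would arrive over HTTP from the API's URL; the point is that the consumer is another program, not a person reading a page.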
Here's the Twitter-sized explanation I had proposed (taking quite a lot of my space to talk about how there wasn't enough space):
I would argue as well that web development has gotten sophisticated enough that, while APIs are ideal (at least if well constructed), you can actually get a lot by opening your data, which is the key first step. Open data enables mashups, APIs just make mashups easier. Open data means sharing the information your organisation has, out on the web - ideally your default becomes to share.
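A mashup in miniature, to illustrate the point that open data alone (no API) is enough to get started: two hypothetical datasets published as plain CSV, joined on a shared key. The datasets and field names here are invented.

```python
import csv
import io

# Two hypothetical open datasets, published as plain CSV files.
parks_csv = "park_id,name\n1,Major's Hill\n2,Vincent Massey\n"
events_csv = "park_id,event\n1,Winterlude\n2,Folk Festival\n"

parks = {r["park_id"]: r["name"] for r in csv.DictReader(io.StringIO(parks_csv))}

# The "mashup": combine the two datasets on their shared key.
for row in csv.DictReader(io.StringIO(events_csv)):
    print(f"{row['event']} @ {parks[row['park_id']]}")
```

An API would let you query this live instead of downloading files, but the combination itself needs nothing more than the data being out on the web.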
We're still in early days of open data. The Guardian calls their approach "Data Store - Facts You Can Use". I've written previously about the US Data.gov initiative, which currently has the world's simplest website (a giant box reading "coming soon"), but I think is supposed to launch this month. It's similarly challenging to point to open data cities, because while the Twitter-enabled Toronto @MayorMiller announced toronto.ca/open at Mesh, it also reads simply "Under Construction".
What will be possible is mashups, visualizations, APIs, analysis and much more.
I believe the long term success of projects like StimulusWatch Canada and ChangeCamp Ottawa will depend on open data, and (eventually) on all levels of government having open APIs as well.
Which circles me around to the opening topic of the podcast, about whether online activism ("slacktivism") can actually translate into meaningful real-world activity. The answer, I think, is tied in with the segment about lurking... the web is mostly lurk, only maybe 10% participate. Some tiny fraction of those online participants might translate into offline actions. Maybe one in a thousand? But nevertheless, it does happen.
While I generally refuse to join these "click your support" Facebook groups (in part because I don't like FB much anyway), they can be low barrier entry points, in particular since so many Canadians (who may otherwise not be very social-web enabled) are in FB.
The kind of canonical Canadian example is the Fair Copyright for Canada group, with its (at time of posting) 90,071 members. It was brought up in the House of Commons. It did translate into some offline activism. And the sheer numbers did, I think, get both attention and generate concern for the party proposing the bill. There are still lots of issues with that number. Lots of people around the world care about copyright. For all I know, that's 81,000 copyright-concerned Americans, and 9000 Canadians. Such is the global web.
I do think "feel-good clicks" are a bit dangerous; they give you the perception of action without actually doing anything. I've long been concerned by this kind of almost mystical power ascribed to online organising. In my review of Al Gore's The Assault on Reason, I said:
Don't get me wrong, I think the Internet has a role to play in reasoned discourse. A small role. A useful tool for pointing attention to falsehoods and referencing inconvenient truths. But electronic communications have a fatal allure of virtual action.
Concerned about the environment? No need to go outside and walk in the woods, or clean up a polluted lot in your neighbourhood, or knock on your representative's door and explain the urgency of your position.
No, instead you can just fire off an email, write a blog posting, and then turn up the air conditioning and the lights and stretch out on the couch and read a good book.
That being said, I have myself translated the online into real-world action on a number of occasions. As I wrote in the StimulusWatch blog, it was an online posting that led me to an event that started a chain leading to the creation of the project.
That same event, and online chatter about a local conference, also led me (as partially outlined in my posting Making government data visible - and is Change coming to Ottawa?) to ChangeCamp Ottawa, a very real event happening at City Hall on May 16, which I have been helping to organise, an event which of course has a substantial online presence including a social network for the specific event, as well as being part of the larger ChangeCamp group on Facebook.
Similarly, a local news article in a free neighbourhood paper (yes, in print, with ink and everything) about a small garden/park space led me to a Facebook group which led me to an offline meeting which led me to create http://www.savethegarden.ca/
And of course, on a much more spectacular scale, the Obama campaign used (and continues to use) online organising as a tool, but they were very clear that the purpose of online was to drive a very extensive (and successful) ground game, people talking and knocking on doors, calling on phones, out in the real world.
So I think when it works best, the online world leads you offline, and offline leads you back online. It's an ongoing discussion that flows across place and time.
Discussions enable meetings, data enables websites, websites enable more meetings, meetings come to consensus on APIs, APIs enable mashups... round and round it goes.
* The Elsevier Grand Challenge just wrapped, reflect.ws was the winning entry
* CISTI's internal Innovation Challenge wrapped up on March 19, 2009. The winner was "Research in the News" (you can read more on the site).
This is only a brief survey and I'm sure I have missed some, but I'm blogging on the tiny keyboard of my netbook, so I am only doing a short posting. Please let me know of any others.
Our first widget is going to be a List Building Widget that will include platform integration for iGoogle, Facebook, MySpace, blogs and web pages, and/or desktops. The widget will allow users to:
- Build lists of favorite books, movies, music, games, etc.
- Build lists of materials needed for homework projects
- Share lists with friends
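As a rough sketch of the list-building and sharing behaviour described above (the class and field names are invented; the actual design is exactly what the proposals are meant to supply):

```python
# A minimal data-model sketch of the proposed widget's behaviour:
# build a list, share it with friends, control who can see it.
class HomeworkList:
    def __init__(self, owner, title):
        self.owner = owner
        self.title = title
        self.items = []          # e.g. favourite books, project materials
        self.shared_with = set()

    def add(self, item):
        self.items.append(item)

    def share(self, friend):
        self.shared_with.add(friend)

    def visible_to(self, user):
        return user == self.owner or user in self.shared_with

fav = HomeworkList("maria", "Favourite books")
fav.add("The Golden Compass")
fav.share("lee")
print(fav.visible_to("lee"))   # a friend the list was shared with can see it
```

The platform integration (iGoogle, Facebook, MySpace, desktop) would wrap a model like this in each host's widget framework.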
We are looking for open source designs that can be made available to and repurposed by other organizations seeking to engage young people. The widgets will be developed as part of a 2008 National Leadership Grant from the Institute of Museum and Library Services titled Homework NYC Widgets: A Decentralized Approach to Homework Help By Public Libraries.
The project team will accept and answer questions about the proposal via comments on the NYPL Labs blog.
See the blog post for more information, including contact information. Deadline to submit proposals is May 1, 2009.
It's great to see libraries reaching out to their community, to build technology that will benefit lots of organisations.
the idea is to have a reasonably informal event at which we try to do interesting stuff with library technology and/or data
He's put together a starting list of possible APIs... there must be many more that people could add or offer...
Through an unexpected series of events I find myself going to Open Repositories 2008
The lineup looks great including a keynote from Peter Murray-Rust, and two (!) sessions on Scientific Repositories.
There is also a Repository Challenge for developers with a £2,500 prize, which is like a million US dollars now (finally, Canadians get to make US dollar jokes). Kudos to David Flanders for leading this "let's just build stuff and see what works" approach.
I will be blogging under tag/category or08, and twittering under hashtag #or08
I made an Upcoming event, mainly because then if you add the machine tag
to your Flickr photos, it will automatically put in a nice "Taken at Open Repositories 2008" logo.
Posted by Richard Akerman on March 20, 2008 at 08:39 AM in Academic Library Future, Conference, Data Management, Digital Library, Institutional Repository, or08, Science, Software Development, Web/Tech | Permalink | Comments (1) | TrackBack (0)
The [National Library of Australia] has recently opened this "Library Labs" wiki space:
The aim of this space is to let our colleagues know what we are doing, to invite comments, questions and feedback and to provide a space for discussion and collaboration.
We have started to redevelop our digital library services using a service-oriented architecture and open source software solutions where these are functional and robust. We are also aiming to take a common ("single business") approach to collection management, discovery and delivery.
We are interested in forming a community of Australian business analysts and developers who are working on similar problems and who are interested in interoperable, standards-based solutions. We are also interested in working with colleagues at an international level to provide prototypes and testbeds for new and emerging standards.
via Warwick Cathro
Assistant Director-General, Innovation
National Library of Australia
The Google Book Search Book Viewability API enables developers to:
- Link to Books in Google Book Search using ISBNs, LCCNs, and OCLC numbers
- Know whether Google Book Search has a specific title and what the viewability of that title is
- Generate links to a thumbnail of the cover of a book
- Generate links to an informational page about a book
- Generate links to a preview of a book
via LibraryThing blog - Google Books in LibraryThing - March 13, 2008
We need APIs everywhere for everything.
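A sketch of what a Viewability lookup involves: build a query keyed by ISBN, then read the viewability status out of the JSON the service returns. The endpoint and parameter names follow Google's Dynamic Links documentation as I understand it, so treat them as assumptions; the response below is a hand-made sample, not a live result.

```python
import json
from urllib.parse import urlencode

def viewability_url(isbn):
    """Build a Viewability API query for one ISBN (parameter names
    are my reading of the Dynamic Links docs, not verified here)."""
    params = {"jscmd": "viewapi", "bibkeys": f"ISBN:{isbn}"}
    return "https://books.google.com/books?" + urlencode(params)

# A hand-made sample of the kind of per-bibkey record the service
# returns: viewability status plus thumbnail and info-page links.
sample = json.loads("""{"ISBN:0596000278": {
    "preview": "partial",
    "thumbnail_url": "https://books.google.com/books?id=abc&printsec=frontcover",
    "info_url": "https://books.google.com/books?id=abc"}}""")

record = sample["ISBN:0596000278"]
print(viewability_url("0596000278"))
print(record["preview"])   # e.g. "partial" vs "noview"
```

This is what makes the LibraryThing integration possible: a catalogue page can ask, book by book, "does Google have this, and how much of it can my user see?"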
AIR is intended to help software developers create applications that exist in part on a user’s PC or smartphone and in part on servers reachable through the Internet.
To computer users, the applications will look like any others on their device, represented by an icon. The AIR applications can mimic the functions of a Web browser but do not require a Web browser to run.
“There is a big cloud movement that is building an infrastructure that speaks directly to this kind of software and experience,” said Sean M. Maloney, Intel’s executive vice president.
New York Times - Adobe Blurs Line Between PC and Web - February 25, 2008
AIR has graduated from Adobe Labs, and is now available for free download at
I have to say, I'm not really clear how this is any different from, or better than, Java.
I guess the argument is that it uses web standardsy stuff, so it's easier to program than Java.
Plus it's all well and good to say cloud this and cloud that, but I don't see any indication that AIR provides you with some storage cloud from Adobe that you can use.
I found a posting by a GWT developer that briefly introduces the major competing technologies in this space
As well I wonder about the implications for searching. Our search engines mostly eat text or things that can be converted easily to plain text (Word documents, PDFs, PowerPoints). You can do a whole fancy site in Flash and to a search engine it will barely exist. If we move from building text-based web sites to interaction based web apps, how will we ever be able to find anything again?
Access is Canada's premier library technology conference, featuring a single stream of sessions that deal with technology planning, development, challenges and solutions. ... now accepting proposals for prepared talks on the following topics (other ideas are more than welcome):
- customized web applications and search interfaces
- open source software
- national and provincial/state-wide consortia technology initiatives
- information policy
- digital and social media
- library catalogue innovations
- digitization projects
- institutional repositories
- end-user searching behaviours
- protocols and metadata
...or anything else suitably geeky, innovative and/or awe-inspiring
It was great to talk with various enterprise/IT/technology architects at DLF Fall Forum 2007 about our various ideas and approaches to modeling and implementing the new library technology infrastructure. But it's clear there are still more library technology architects out there (even if they may not have that in their job title). Case in point: I ran across an interesting blog posting by Jonathan Rochkind:
Notes on future directions of Library Systems from Bibliographic Wilderness blog, September 28, 2007
Here are some notes on near/medium future directions of the library systems environment/architecture, and a sketch of requirements on where we want to go. These notes may or may not end up as part of an internal white paper here, as we analyze where we want to be headed (something good to do anyway, but we got a kick in the pants when the vendor ended development on our current ILS).
The library systems environment has grown from being composed of a single “Integrated Library System” at the origin of library systems in the 1980s, to being composed of an ‘ecology’ of systems in most libraries today. These different systems could be from various sources (proprietary and open source), fulfill various–both overlapping and distinct–functions, are used in various ways by different subsets of users (both library staff and our end-users), and interact with each other to varying degrees (some well, some poorly).
I wonder if I should consider adding a "library technology architecture" category to SLP?
PS Jonathan was at DLF but we didn't formally meet, it's odd how sometimes virtual connection and discovery is easier than in the physical world.
The last three weeks I’ve been thinking a great deal about the role of my department in fulfilling the Libraries mission and where the department needs to go in the next 3 years. Part of getting where we want to go has been this whole site redesign process. But not in the way most of the library thinks.
Long term I’d like a site which has a series of web services that can be exploited by my developers but also by the university web developers and who knows who else.
For faculty and grad students who want to do known item searching in our catalog, maybe something like LibX is the way to go. Or maybe allowing users to create their own search interface to a set of particular resources that they can embed in their browsers search bar or on their desktop as a search widget.
Ultimately, I feel like it is these kinds of services that will make or break a library’s virtual presence, not the library website.
Library Web Chic - The future of Web Services isn’t the Library website - September 16, 2007
July 16, 2007 Library Journal netConnect - Web Services and the Social Catalogue
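On the "embed a search in the browser's search bar" idea above: one widely supported mechanism for that is an OpenSearch description document, which browsers of the era (Firefox, IE7) could consume directly. Here's a sketch that generates one; the catalogue URL is a placeholder, and a real deployment would point the template at the library's actual search endpoint.

```python
# Generate a minimal OpenSearch description document. Browsers use the
# <Url> template, substituting {searchTerms} with the user's query.
# The catalog.example.edu URL is a placeholder.
OPENSEARCH_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>{name}</ShortName>
  <Description>Search the {name} catalogue</Description>
  <Url type="text/html"
       template="https://catalog.example.edu/search?q={{searchTerms}}"/>
</OpenSearchDescription>"""

def opensearch_description(name):
    return OPENSEARCH_TEMPLATE.format(name=name)

print(opensearch_description("Example Library"))
```

Serving this file and advertising it with a `<link rel="search">` tag is enough to make the catalogue appear as an installable search engine in the browser.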
I must admit that even though there were a remarkable number of famous and extremely accomplished people at SciFoo, I had the benefit of not knowing who most of them were. You might think this is a rather odd perspective, not least because I'm usually the last person to laud the benefits of being ill-informed, but it made it much easier to sit across from Eva Vertes and have a conversation without knowing in advance that she's a super-genius cancer researcher.
Anyhoo, I found this profile of Charles Simonyi in Technology Review interesting; I didn't know about his foundational involvement with Microsoft Office, and his new project to create a new type of programming environment, intentional software. It might have been interesting to hear a question like "what was it like going back to Russia to train for your spaceflight after having left Hungary all those many years ago" rather than some of the questions that were asked (e.g. "how does it smell in space").