datasets


This CBC.ca video gives a brief look at how 2D and 3D street view data are collected. In this case it is the city of Toronto, and the data collector is Tele Atlas. The things cartographers do to make maps! Tele Atlas seems to be selling georeferenced landmarks, street networks, and a variety of other data it collects simply by driving the streets with cameras and GPS mounted on the roofs of cars. At 500 km a day and terabytes of data, these folks are collecting and selling tons of the geo-information that we like to play with in Google Earth, that helps us find places in MapQuest, and that allows city planners or police forces to prepare evacuation plans, understand the characteristics of a route planned for a protest, or know the point address in a 911 call.

The video also briefly discusses privacy issues. It seems the street is public space, and if you happen to be naughty going into some tawdry establishment and your act happens to be caught on film, well, so be it: either behave or accept the digital consequences of your private acts in public space, or so the video suggests!

Regarding access to these data, well, my guess is a big price tag. It is a private company after all!

I met with Wendy Watkins at the Carleton University Data Library yesterday. She is one of the founders and current co-chair of DLI and CAPDU (Canadian Association of Public Data Users), a member of the governing council of the International Association for Social Science Information Service and Technology (IASSIST), and a great advocate for data accessibility and whatever else you can think of in relation to data.

Wendy introduced me to a very interesting project that is happening between and among university libraries in Ontario called the Ontario Data Documentation, Extraction Service Infrastructure Initiative (ODESI). ODESI will make discovery, access and integration of social science data from a variety of databases much easier.

Administration of the Project:

Carleton University Data Library in cooperation with the University of Guelph. The portal will be hosted at the Scholars' Portal at the University of Toronto, which makes online journal discovery and access a dream. The project is partially funded by the Ontario Council of University Libraries (OCUL) and OntarioBuys, operated out of the Ontario Ministry of Finance. It is a 3-year project with $1 040 000 in funding.

How it works:

ODESI operates on a distributed data access model, where servers that host data from a variety of organizations will be accessed via Scholars' Portal. The metadata are written in the DDI standard, which produces XML. DDI is the

Data Documentation Initiative [which] is an international effort to establish a standard for technical documentation describing social science data. A membership-based Alliance is developing the DDI specification, which is written in XML.

The standard has been adopted by several international organizations such as IASSIST, Interuniversity Consortium for Political and Social Research (ICPSR), Council of European Social Science Data Archives (CESSDA) and several governmental departments including Statistics Canada, Health Canada and HRSDC.
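To make the DDI idea a little more concrete, here is a minimal sketch (in Python, standard library only) of what a Codebook-style DDI record might look like. The element names follow the general shape of the DDI 2.x Codebook specification; the exact schema, fields and study names used by ODESI are assumptions on my part.

```python
# A minimal, illustrative DDI-style metadata record. Element names follow the
# general shape of DDI 2.x Codebook; the exact schema ODESI uses may differ,
# and the study and variable here are hypothetical.
import xml.etree.ElementTree as ET

code_book = ET.Element("codeBook")

# Study-level description: the title and abstract a portal would index for discovery.
stdy_dscr = ET.SubElement(code_book, "stdyDscr")
citation = ET.SubElement(stdy_dscr, "citation")
title_stmt = ET.SubElement(citation, "titlStmt")
ET.SubElement(title_stmt, "titl").text = "Example Household Survey, 2006 (hypothetical)"
ET.SubElement(stdy_dscr, "abstract").text = "Illustrative study-level metadata."

# Variable-level description: documenting variables is what makes the data
# searchable and usable long after collection.
data_dscr = ET.SubElement(code_book, "dataDscr")
var = ET.SubElement(data_dscr, "var", attrib={"name": "hhincome"})
ET.SubElement(var, "labl").text = "Total household income"

print(ET.tostring(code_book, encoding="unicode"))
```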

Collaboration:

This project will integrate with and is based on the existing and fully operational Council of European Social Science Data Archives (CESSDA), which is a cross-boundary data initiative. CESSDA

promotes the acquisition, archiving and distribution of electronic data for social science teaching and research in Europe. It encourages the exchange of data and technology and fosters the development of new organisations in sympathy with its aims. It associates and cooperates with other international organisations sharing similar objectives.

The CESSDA Trans-Border Agreement and Constitution are very interesting models of collaboration. CESSDA is the governing body of a group of national European Social Science Data Archives. The CESSDA data portal is accompanied by a multilingual thesaurus; currently 13 nations and 20 organizations are involved, and data from thousands of studies are made available to students, faculty and researchers at participating institutions. The portal search mechanism is quite effective although not pretty!

In addition, CESSDA is associated with a series of National Data Archives. Wow! Canada does not have a data archive!

Users:

Users would come to the portal, search across the various servers on the metadata fields, and access the data. Additionally, users will be provided with tools to integrate myriad data sets and conduct analyses with the statistical tools that are part of the service. For some of the data, basic thematic maps can also be made.
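As a toy illustration of that workflow, the sketch below represents each data server as an in-memory list of study records and searches their metadata fields for a keyword. A real portal would of course query remote DDI repositories over the network; all of the study titles and keywords here are invented.

```python
# A toy sketch of federated metadata search: each "server" is just an in-memory
# list of study records here; a real portal would query remote repositories.
# All study titles and keywords below are invented for illustration.
from typing import Dict, List

server_a = [
    {"title": "Labour Force Survey, 2005", "keywords": ["employment", "income"]},
    {"title": "General Social Survey, 2004", "keywords": ["time use", "family"]},
]
server_b = [
    {"title": "Survey of Household Spending, 2005", "keywords": ["income", "expenditure"]},
]

def search(servers: List[List[Dict]], term: str) -> List[Dict]:
    """Return every record whose title or keywords mention the search term."""
    term = term.lower()
    hits = []
    for records in servers:
        for record in records:
            haystack = record["title"].lower() + " " + " ".join(record["keywords"])
            if term in haystack:
                hits.append(record)
    return hits

print(search([server_a, server_b], "income"))
```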

Eventually the discovery tools will be integrated with the journal search tools of the Scholars' Portal. You will be able to search for data and find the journals that have used that data, or vice versa: find the journal and then the data. This will hugely simplify the search and integration process of data analysis. At the moment, any data-intensive research endeavour or data-based project needs to dedicate 80-95% of the effort to finding the data in a bunch of different databases, navigating complex licensing and access regimes, perhaps paying a large sum of money, and organizing the data in such a way that comparisons are statistically sound. Eventually one gets to talk about results!

Data Access:

Both the CESSDA data portal project and ODESI are groundbreaking initiatives that are making data accessible to the research community. These data, however, will only be available to students, faculty and researchers at participating institutions. Citizens who do not fall into those categories can only search the metadata elements and see what is available, but will not get access to the data.

Comment:

It is promising that a social and physical infrastructure exists to make data discoverable and accessible between and among national and international institutions. What is needed is a massive cultural shift in our social science data creating and managing institutions that would make them amenable to creating policies to unlock these same public data assets, along with some of the private sector data assets (polls, etc.), and make them freely (as in no cost) available to all citizens.

More interesting stuff from Jon Udell, this time taking some climate data for his area, using the Many Eyes platform, and trying to see what has been happening in New Hampshire over the last century.

The experiment is inconclusive, but there is an excellent debate in the comment thread about the problems with amateurs getting their hands on the data – and the hash they can make of things because they are not experts.

Says one commenter (Brendan Lane Larson, Meteorologist, Weather Informaticist and Member of the American Meteorological Society):

Your vague “we” combined with the demonstration of the Many Eyes site trivializes the process of evidence exploration and collaborative interpretation (community of practice? peer review?) with an American 1960s hippy-like grandiose dream of democratization of visualized data that doesn’t need to be democratized in the first place. Did you read the web page at the URI that Bob Drake posted in comments herein? Do you really think that a collective vague “we” is going to take the time to read and understand (or have enough background to understand) the processes presented on that page such as “homogenization algorithms” and what these algorithms mean generally and specifically?

To which Udell replies:

I really do think that the gap between what science does and what the media says (and what most people understand) about what science does can be significantly narrowed by making the data behind the science, and the interpretation of that data, and the conversations about the interpretations, a lot more accessible.

To turn the question around, do you think we can, as a democratic society, make the kinds of policy decisions we need to make — on a range of issues — without narrowing that gap?

There is much to be said about this … but Larson's comment – "Do you really think that a collective vague "we" is going to take the time to read and understand (or have enough background to understand) the … XYZ…" – is the same question that has been asked countless times about all sorts of open approaches (from making software, to encyclopaedias, to news commentary). And the answer in general is "yes." That is, not every member of the vague "we" will take the time, but very often, with issues of enough importance, many members of the vague "we" can and do take the time to understand, and might just do a better job of demonstrating, interpreting or contextualizing data in ways that other members of the vague "we" can connect with and understand.

The other side of the coin, of course, is that along with the good amateur stuff there is always much dross – data folk are legitimately worried about an uneducated public getting their hands on data and making all sorts of errors with it – which of course is not a good thing. But, I would argue, the potential gains from an open approach to data outweigh the potential problems.

UPDATE: a good addition to the discussion from Mike Caulfield.

Quality Repositories is a website that comes out of a stats (?) course at the University of Maryland. It aims to evaluate the usefulness and availability of various sources of public data, covering US government, non-US government, academic, and sports-related (?) data sets. Evaluations are based on criteria such as online availability, browsability, searchability, retrievable formats, etc. The about text:

Data repositories provide a valuable resource for the public; however, the lack of standards in terminology, presentation, and access of this data across repositories reduces the accessibility and usability of these important data sets. This problem is complex and likely requires a community effort to identify what makes a “good” repository, both in technical and information terms. This site provides a starting point for this discussion….

This site suggests criteria for evaluating repositories and applies them to a list of statistical repositories. We’ve selected statistical data because it is one of the simplest data types to access and describe. Since our purpose is partly to encourage visualization tools, statistical data is also one of the easiest to visualize. The list is not comprehensive but should grow over time. By “repositories” we mean a site that provides access to multiple tables of data that they have collected. We did not include sites that linked to other site’s data sources.

The site was created by Rachael Bradley, Samah Ramadan and Ben Shneiderman.

(Tip to Jon Udell and http://del.icio.us/tag/publicdata)

One of the great data myths is that cost-recovery policies are synonymous with higher data quality. Often the myth-making stems from effective communications from nations with heavy cost-recovery policies, such as the UK, which often argue that their data are of better quality than those of the US, which has open-access policies. Canada, depending on the data and the agencies they come from, is at either end of this spectrum and often in between.

I just read an interesting study that examined open access versus cost recovery for two framework datasets. The researchers looked at the technical characteristics and use of datasets from jurisdictions of similar socio-economic profile, size, population density, and government type (the Netherlands, Denmark, the German state of North Rhine-Westphalia, the US state of Massachusetts and the US metropolitan region of Minneapolis-St. Paul). The study compared parcel and large-scale topographic datasets, typically found as framework datasets in geospatial data infrastructures (see SDI def. page 8). Some of these datasets were free, some were extremely expensive, and all were under different licensing regimes that defined use. The researchers looked at both technical characteristics (e.g. data quality, metadata, coverage) and non-technical characteristics (e.g. legal access, financial access, acquisition procedures).

For parcel datasets, the study found that datasets assembled by a centralized authority were judged technically more advanced; those assembled from multiple jurisdictions with standards, or with a central institution integrating them, were of higher quality; and those from multiple jurisdictions without standards were of poor quality, as the sets were not harmonized and/or coverage was inconsistent. Regarding non-technical characteristics, many datasets came at a high cost, most were not easy to access from one location, and there were a variety of access and use restrictions on the data.

For topographic information, the technical averages were less than ideal. On the non-technical criteria, access was impeded in some cases by the involvement of utilities (which tend toward cost recovery); in other cases multiple jurisdictions – over 50 for some – need to be contacted to acquire complete coverage, and in some cases coverage is simply not complete.

The study’s hypothesis was:

that technically excellent datasets have restrictive-access policies and technically poor datasets have open access policies.

General conclusion:

All five jurisdictions had significant levels of primary and secondary uses but few value-adding activities, possibly because of restrictive-access and cost-recovery policies.

Specific Results:

The case studies yielded conflicting findings. We identified several technically advanced datasets with less advanced non-technical characteristics…We also identified technically insufficient datasets with restrictive-access policies…Thus cost recovery does not necessarily signify excellent quality.

Although the links between access policy and use and between quality and use are apparent, we did not find convincing evidence for a direct relation between the access policy and the quality of a dataset.

Conclusion:

The institutional setting of a jurisdiction affects the way data collection is organized (e.g. centralized versus decentralized control), the extent to which data collection and processing are incorporated in legislation, and the extent to which legislation requires use within government.

…We found a direct link between institutional setting and the characteristics of the datasets.

In jurisdictions where information collection was centralized in a single public organization, datasets (and access policies) were more homogenous than datasets that were not controlled centrally (such as those of local governments). Ensuring that data are prepared to a single consistent specification is more easily done by one organization than by many.

…The institutional setting can affect access policy, accessibility, technical quality, and consequently, the type and number of users.

My Observations:
It is really difficult to find solid studies like this one that systematically look at both technical and access issues related to data. It is easy to find off-the-cuff statements without sufficient backing proof, though! While these studies are a bit of a dry read, they demonstrate the complexities of the issues, try to tease out the truth, and reveal that there is no one-stop shopping for data at any given scale in any country. In other words, there is merit in pushing for some sort of centralized, standardized and interoperable way – which could also mean distributed – to discover and access public data assets. In addition, there is an argument to be made for making those data freely (no cost) accessible in formats we can readily use and reuse. This of course includes standardizing licensing policies!

Reference: Institutions Matter: The Impact of Institutional Choices Relative to Access Policy and Data Quality on the Development of Geographic Information Infrastructures, by van Loenen and de Jong, in Research and Theory in Advancing Spatial Data Infrastructure Concepts, edited by Harlan Onsrud, ESRI Press, 2007.

If you have references to more studies send them along!

What is the cost to taxpayers of public institutions purchasing public data? As citizens, we do not like to pay for the same thing many times. So here is a real scenario and a best-guess estimate of the cost to taxpayers for public data, which they pay for many times via their public institutions – institutions whose job it is to work for the public interest, yet which re-purchase data citizens have already paid for once through taxation:

a) Each Canadian municipality, city or town purchases demographic data from Statistics Canada. Let's suggest there are approximately 2000 of these entities. Let's say they each purchase a subset of the Census at varying scales, with a specialized geography to match their boundaries, and let's say they each spend a conservative $ 10 000 (factoring in that some small towns will buy less and others more).

2000 Towns/municipalities/cities * $ 10 000 = $ 20 000 000

b) Since many cities/towns/municipalities do not have efficient data infrastructures to manage their data assets, sometimes different departments purchase the same data two or three times. So you may get planning, health and social welfare departments each purchasing the same data and not sharing, as they are unaware of each other's purchases and there is no central accessible repository they can mutually search. So let's pretend that the top 100 (a conservative number) cities in Canada purchase the same/similar data 3 times each. We already counted one purchase above, but we will keep it at 3, as potentially some have purchased 4 times while the other 1900 entities may have done so at least once.

100 Towns/municipalities/cities * 3 (duplicate copies of the same data) * $ 10 000 = $3 000 000

c) The best part: often each of these towns/municipalities/cities is purchasing data for its entire province, as it wishes to do some cross-comparisons. This means that each of these entities is paying for the exact same/similar data set each time! Damn! Talk about a non-rivalrous good – and how smart is StatCan? Damn, we thought the public service did not have a corporate mindset!

d) The provinces and territories also each purchase Census data. They do not necessarily have a centralized data infrastructure either; they have bigger bureaucracies, more departments, more specialized needs and bigger data requirements. So let's suggest that each province and territory spends $ 15 000 on each of 5 duplicate/similar sets, plus an additional $ 10 000 each on multiple special orders between censuses.

13 Provinces/Territories * $ 15 000 * 5 = $ 975 000

13 Provinces/Territories * $ 10 000 = $ 130 000

e) Again, many of the provinces and territories will purchase national-scale datasets for comparison purposes, which means that, like the towns/municipalities/cities, they are purchasing the exact same/similar copy of the exact same/similar data sets for the exact same geography numerous times. Recall the great part about information: its non-rivalrousness! We can each consume the same entity many times and none will suffer as a result. Unless of course you are a Canadian taxpayer.

f) Then we have the federal government, with approximately 350 departments and agencies. Let's say each purchases some city data, some provincial data and a whole bunch of national data for $ 17 000 each. Then many of these departments and agencies – let's say 175 – purchase special-ordered data sets to meet their particular needs, each at $ 7 500.

350 Federal Departments and Agencies * $ 17 000 = $ 5 950 000

175 Federal Departments and Agencies * $ 7 500 = $ 1 312 500

TOTAL:

  1. 2000 Towns/municipalities/cities * $ 10 000 = $ 20 000 000
  2. 100 Towns/municipalities/cities * 3 (duplicate copies of the same data) * $ 10 000 = $3 000 000
  3. 13 Provinces/Territories * $ 15 000 * 5 = $ 975 000
  4. 13 Provinces/Territories * $ 10 000 = $ 130 000
  5. 350 Federal Departments and Agencies * $ 17 000 = $ 5 950 000
  6. 175 Federal Departments and Agencies * $ 7 500 = $ 1 312 500

Grand Total of Census Data Expenditures by Taxpayers via Public Institutions in Canada: $ 31 367 500
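For what it is worth, here is a small Python sketch that simply reproduces the arithmetic of the estimate above; every figure is the guess stated in this post, not a real expenditure number.

```python
# Back-of-the-envelope reproduction of the estimate above. Every figure is the
# guess stated in the post, not a real expenditure number.
line_items = {
    "2000 towns/municipalities/cities x $10,000": 2000 * 10_000,
    "100 cities x 3 duplicate copies x $10,000": 100 * 3 * 10_000,
    "13 provinces/territories x 5 sets x $15,000": 13 * 5 * 15_000,
    "13 provinces/territories x $10,000 special orders": 13 * 10_000,
    "350 federal departments/agencies x $17,000": 350 * 17_000,
    "175 federal departments/agencies x $7,500 special orders": 175 * 7_500,
}

for label, amount in line_items.items():
    print(f"{label}: ${amount:,}")

print(f"Grand total: ${sum(line_items.values()):,}")  # $31,367,500
```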

This is a conservative number, as it does not include human resource expenditures like the following:

  1. Person hours for each public servant to negotiate and discuss their data needs
  2. Person hours for the StatCan officials to fill in the orders
  3. Person hours of the public servant lawyers to take care of licensing
  4. Person hours associated with all of the purchasing and accounting work to pay for, acquire and account for this money
  5. Person hours for each official who has to work the data in the same way to meet their needs
  6. Dunno if public agencies pay taxes on these! That would add insult to injury would it not?

It is also important to note that hospitals, school boards, universities, crown corporations and a host of other quasi-public institutions are doing the same thing, and that these numbers are only for census data; they do not include the cost of other datasets like road networks, water quality, maps, environmental data and so on.

It would seem to me that we could spend a fraction of that cost to deliver the data online to all of these institutions, the private sector, NGOs and citizens, and we would all be better off financially. We would save all the administration costs and the licence-management costs, and we would all be smarter too! Further, we could reinvest that money into more research, air quality infrastructure, healthcare, waiving recreation fees in municipalities, etc. We could reinvest wisely in quality of life and, at the same time, know more about how to do so.

PS – If anyone:

  • has come across any type of cost analysis report, etc.
  • has a better way to calculate this
  • knows of some real costs

Please pass them along! The more we have on this the better.

Looks like some of us are using fewer pesticides, purchasing a few more energy-efficient and water-conserving devices, and composting only very slightly more than before. It also seems we dunno what to do with our toxic waste, we still throw out medicines and electronics in the regular curbside pickup, and we still commute to work one person per car, which is too bad since

Passenger transportation accounts for about 12 per cent of Canada’s greenhouse gas emissions and efforts to improve efficiency are a high-profile part of the global warming debate.

Also, sadly we drink way more bottled water than is necessary in a country with an excellent drinking water infrastructure.

It would be great to get hold of the raw data and play with it. It could be mapped and studied against other variables like income, city versus rural, ethnicity, mother tongue, population density, etc. This type of analysis could help target campaigns in certain under-performing areas and study why others are doing better.
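If the raw data ever became available, even a few lines of code would get that exploration started. The sketch below is purely hypothetical: the regions, column names and values are invented, and it assumes the survey results could be obtained as a simple table alongside income data by region.

```python
# A purely hypothetical sketch of joining survey results to another variable
# (here, income) by region. All regions, columns and values are invented.
import pandas as pd

behaviour = pd.DataFrame({
    "region": ["Ottawa", "Toronto", "Rural NB"],
    "pct_composting": [45, 52, 61],
    "pct_bottled_water": [30, 35, 22],
})
income = pd.DataFrame({
    "region": ["Ottawa", "Toronto", "Rural NB"],
    "median_income": [62000, 58000, 41000],
})

merged = behaviour.merge(income, on="region")

# A quick correlation hints at which variables might be worth mapping or
# modelling against income in a fuller analysis.
numeric = merged[["pct_composting", "pct_bottled_water", "median_income"]]
print(numeric.corr()["median_income"])
```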

Sources:

Putting Canadian "Piracy" in Perspective, a video from Geist and Albahary, is a great way to present an argument. In Geist's words:

over the past year, Canadians have faced a barrage of claims painting Canada as a “piracy haven.” This video – the second in my collaboration with Daniel Albahary – moves beyond the headlines to demonstrate how the claims do not tell the whole story.

The video also uses quite a bit of public and private sector data to support its argument. This to me is what public data are for and this is what democracy looks like – when civil society has access to the data it requires to keep its government accountable, can keep citizens informed and can temper industry desires with public interest!

One of the cultural issues that has become pervasive of late is the proliferation of policies and decisions based on assumptions rather than facts – and, in the case of the very powerful lobby against Canada on IP in the cultural sector, really biased reports that are based not on facts but on an industry's desires and self-interest. Look for the sources of the data and the methodology in all reports, even in this great video! Geist and Albahary do a great job of showing the difference between what is being said and repeated (memes) about the cultural industry in Canada and the reality.

It is interesting that the video ends with a slide acknowledging the photos used, the music heard, the creators of the video and the license, but not all the data sources in the charts! Some of the data references are in some of the bar charts, while most statements are referenced with their source at the bottom of the slide. I always look for data references – otherwise, how can I go back and verify what was claimed?

The data in the charts were:

  • Hollywood Studio Revenue Growth – Data Source unknown
  • Top Hollywood International Markets – Data Source unknown
  • Canadian Music Releases – Statistics Canada
  • Canadian Artist Share of Sales – Canadian Heritage Music Industry Profile
  • Digital Music Download Sales Growth – Data Source unknown
  • Private Copying Revenues 2000-2005 – Data Source unknown
  • RCMP Crime Data – Data Source unknown but assume the RCMP

*************************************
NOTE: See the comments on this post; the references to the data, quotes and reports that were not listed in the credits or with the information in the film are now fully described on Michael Geist's blog here.

Datalibre.ca received an excellent comment on the DLI post about access to some of the Statistics Canada data in schools and public libraries. Today I am looking at E-STAT online and am quite impressed – but alas, I have not yet gone to a public library to check out what is actually there and what I can do, nor do I know the limitations of the CANSIM data. I did, however, speak on the phone with a fine librarian at the Main Ottawa Public Library this morning and look forward to digging for data later today or tomorrow.

E-STAT is:

Statistics Canada’s interactive learning tool designed with the needs and interests of the education community in mind. E-STAT offers an enormous warehouse of reliable and timely statistics about Canada and its ever-changing people.

Using approximately 2,600 tables from CANSIM*, track trends in virtually every aspect of the lives of Canadians. Updated once a year during the summer, CANSIM contains more than 36 million time series.

Hundreds of schools across the country and Depository Services Program libraries make these data accessible if you go in person. You can get access to these data online only if you are registered with one of these institutions.

The E-STAT license on the data is quite restrictive.

The Government of Canada (Statistics Canada) is the owner or authorized licensee of all intellectual property rights (including copyright) in the data product referred to as E-STAT. Statistics Canada grants the educational institution a non-exclusive, non-assignable and non-transferable licence to use the data product subject to the terms below.

The data product supplied under this agreement shall at all times remain under the control of the institution. It may not be sold, rented, leased, lent, sub-licensed or transferred to any other institution or organization, and may not be traded or exchanged for any other product or service. The data product may not be used for the personal or commercial gain of any authorized user, nor to develop or derive for sale any other data product that incorporates or uses any part of this data product.

The data made available are the yearly updated Canadian Socio-economic Information Management System (CANSIM) data; the daily updates are sold for commercial purposes. I am also not sure how fine the geography is for E-STAT data – for instance, whether the data are available by Dissemination Block, Dissemination Area, Census Tract or Urban Area (note the cost associated with these and other maps). These make a difference, since DB is the finest granularity, DA is a larger neighbourhood level, CT covers a larger area, and UAs are larger still. Each scale suits a different level of analysis, and if you aggregate any of these the boundaries do not necessarily line up. Additionally, DB and DA exist only for the 2006 Census, while CT and UA are available for others. I am guessing E-STAT is CT-scale data and larger.
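Since the granularity question matters so much, here is a tiny sketch that simply encodes the ordering of those geography levels as described above; the descriptions are paraphrased from this post, not from Statistics Canada documentation.

```python
# Census geography levels as described in this post, ordered finest to coarsest.
# Descriptions are paraphrased from the post, not from official documentation.
GEOGRAPHY_LEVELS = [
    ("DB", "Dissemination Block: finest granularity, 2006 Census only"),
    ("DA", "Dissemination Area: neighbourhood level, 2006 Census only"),
    ("CT", "Census Tract: larger area, available for other censuses"),
    ("UA", "Urban Area: larger still"),
]

def finer_than(a: str, b: str) -> bool:
    """True if geography level `a` is finer-grained than level `b`."""
    order = [code for code, _ in GEOGRAPHY_LEVELS]
    return order.index(a) < order.index(b)

print(finer_than("DA", "CT"))  # True: DA data is finer than CT data
# Caveat from the post: boundaries of one level do not necessarily aggregate
# cleanly into another.
```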

E-STAT also has some census data, agricultural data, Aboriginal survey data, some environmental data, and health behaviour data for school-aged children. Clearly not all the data are available, and certainly not the specialized surveys such as business, waste management, household spending, health, and surveys of particular sectors. The data come with explanations, and teachers' and users' guides.

Let's see what we can get once I make a visit!

United Nations Common Database (UNCDB) … “provides selected series from numerous specialized international data sources for all available countries and areas.”

Even better:

As of 1 May 2007, use of the Common Database will be FREE OF CHARGE. No subscription will be necessary after that date, and any user can enjoy the full range of data, metadata and various search tools without restriction.

Does anyone know of any exciting applications of these datasets?
