Last night I attended a sort of meet-up Tara Hunt had invited me to, to talk about microformats and media. She had wanted to start with photos, I think because of Riya, but it became clear after talking a bit that similar elements apply to rich media whether the piece being discussed is a photo, a video, or an audio piece. The group started out mostly on computers trying to do a group chat, but I didn't have a computer, so I tried typing notes on Josh Kinberg's computer; the software wasn't recording everyone's comments, though, and it wasn't all that constructive.
I pulled out my notebook (I hadn't brought my laptop) and started writing a short list of elements that are common across all media types: the elements users publish over and over, whether on services like Flickr (and other photo sites), Blip.tv (and other video sites), or audio sites like iTunes. At that point, everyone put away their laptops (funny how paper can trump the computer once in a while; I don't really do paper, except for my notebook, but it works for me at times like this). We gathered around the notebook and the common document we were discussing, which consisted of a growing list of my notes:
If you want to know who attended, there are photos on Flickr. But the interesting part for me was realizing what we could make with this microformat: something for users to publish with, and for publishing tools like Structured Blogging, which takes microformats and makes them into something bloggers can publish through plugins or other tools that will be built later.
Microformats, as Tantek explained, need a page on the MF wiki showing use cases that cover 80% of what users do now (as a rule of thumb), though arguments can be made for less if the element is really useful (like tags, which have much lower usage across all users). On the Microformats list, as Tantek and Ryan run it, it's been hard to tell what they meant by examples. When they would ask for examples, and I would then look at what people posted, it didn't make any sense to me. But after talking, I think I understand what they want.
It's like the difference between taxonomy and folksonomy. Microformats come out of bottom-up, user-generated use cases, whereas media metadata formats like SMIL and MPEG come out of top-down committees. Not that those are bad; we use those top-down formats too in my other work. But as with taxonomy and folksonomy, so with microformats and top-down metadata: both have value, and each comes from very different use cases and points of view.
We agreed that the Media metadata page had examples, and yet it was overgrown and needed pruning; it focused on metadata from the top down instead of examples of what users do now. Last night Tantek explained specifically what they meant by examples: we need to literally cut and paste a blog post from a user that can be used as an 80% use case, to show something as an example. Fair enough. So now we need to add these examples in a constructive way, in order to make the case for the media elements and for a microformat for media publishing. We can think about a short list of elements that users use most of the time when putting media online, whether it's a photo at a service or on their own blog, a video, or an audio piece.
Those elements (from my notes last night) are in the first list, because they reflect what I see online, though I will go find stats and use cases to back them up, or argue that 20% usage of something enriches the whole community -- tags are an example of that, and we'll see how far that argument goes.
* Html URL
* Media URL
* Description or quotes (subsets of the object: a video quote and tags/description associated with it, a region annotation note for a photo, or the quote of a podcast and tags/description -- the detail for these subsets exists in the 'more info' section below)
* License (defaults to full copyright if none is specified -- copyright is automatic under US law and in many other parts of the world)
And for audio and visual media, some additional elements apply. (These are not the same for all types of media, and in practice users publish them in very limited ways; often they are captured from the device or service in some way invisible to the user, and therefore depend on a service to pick them up.)
| Photo | Video | Audio |
|---|---|---|
| file size | file size | file size |
| n/a | bit / frame rate | bit rate |
| Portrait or Landscape | n/a | n/a |
| Region Annotation (subphotos: calculation of location) | Quotes of Video (subvideo: in and out points) | Quotes of Audio (subaudio: in and out points) |
| iPod compliant? | iPod compliant? | iPod compliant? |
| Inclusion in playlist? | Inclusion in playlist? | Inclusion in playlist? |
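To make the list concrete, here is a rough sketch of how these common elements might be marked up in HTML with classes, in the style of existing microformats. This is purely illustrative, not a proposed spec: the class names (`hmedia`, `html-url`, `media-url`, and so on) are hypothetical, except for `rel="license"`, which is an existing microformat convention.

```html
<!-- Hypothetical media microformat sketch; class names are illustrative only -->
<div class="hmedia">
  <!-- Html URL: the page where the media lives -->
  <a class="html-url" href="http://example.com/photos/sunset">Sunset at Ocean Beach</a>
  <!-- Media URL: the media file itself -->
  <a class="media-url" href="http://example.com/photos/sunset.jpg" type="image/jpeg">full-size photo</a>
  <!-- Description -->
  <p class="description">Golden light over the Pacific, taken from the dunes.</p>
  <!-- License, using the existing rel-license microformat -->
  <a rel="license" href="http://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a>
  <!-- Optional, often service-captured details -->
  <span class="file-size">1.2 MB</span>
  <span class="orientation">landscape</span>
</div>
```

A real format would come out of the wiki process described above, grounded in cut-and-paste examples of what users actually publish; this sketch just shows that the element list maps naturally onto class-based markup.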
The second piece is figuring out the elements and schema that lie around those 80% use cases.
I don't think this is so hard now, despite how chaotic and crazy media metadata can be; some of that chaos is reflected on the media metadata page, though that page is a very good attempt to organize it. I now have a simple picture in my mind of how to make this happen, one that gets us to a place where we reflect what users do in practice, bottom up. So, based on my notes last night, I'm going to try to fulfill Tantek's requirements and see how far I get with it. Will update here with pages as they happen.

Posted by Mary Hodder at January 20, 2006 07:36 AM