July 2010 Archives

Annette Mackenzie and Carrie Gracie at the BBC World Service

Back in May, I visited the BBC in London and did an interview with Carrie Gracie for the BBC World Service. It was for a show called "The Interview". It was a lot of fun and she let the conversation cover a broad range of things including my world view. ;-)

There is a web page for the show, which includes links to the books and other things that I talk about in the interview, as well as a link to the audio.

There is also a BBC News article which summarizes the interview. Unfortunately, the article calls Creative Commons a "copyright-free, sharing movement online," which it's not. Creative Commons provides technologies and tools so that people can use copyright to legally share their works the way that they would like. It's not "anti-copyright" or "copyright-free" - although it is about "freedom".

KMD Digital Journalism 2010

For the last three years, I've been teaching a course at The Keio Graduate School of Media Design (KMD) on Digital Journalism. Each year, I've tried to iterate on the format and see how I could manage my own interaction more effectively and make it impact more people.

This year I met Philipp from the Peer 2 Peer University (P2PU). P2PU's mission is:

The Peer 2 Peer University is a grassroots open education project that organizes learning outside of institutional walls and gives learners recognition for their achievements. P2PU creates a model for lifelong learning alongside traditional formal higher education. Leveraging the internet and educational materials openly available online, P2PU enables high-quality low-cost education opportunities. P2PU - learning for everyone, by everyone about almost anything.

The online courses are more like communities of self-learners supported by a facilitator. The content is all licensed under a Creative Commons Attribution-ShareAlike license, which allows anyone to reuse it as long as derivative works are shared under the same license. The courses build on the work of the past.

After some conversations with Philipp, I decided to try to do a mashup of the informal not-for-credit learning of P2PU and the formal for-credit course at KMD. I got a bit of resistance from the university at first about making the material available under a Creative Commons license and the idea of peer-to-peer learning, but we successfully navigated the committee meetings at KMD and were able to pull it off. (Thanks to everyone at KMD for this!)

We used P2PU's website and forums as the central hub of communications, augmented with a mailing list, UStream, Twitter (#kmdp2puDJ) and an IRC channel that was also accessible via a web interface on the P2PU website. Each week, we had assignments and a real-time seminar. The physical space was the Keio Hiyoshi campus, but I would videoconference in via H.323 when I was out of town, and we had guest speakers and remote students join via Skype. We streamed and recorded all of this on UStream, using the IRC channel as the discussion and question area. We would tweet the UStream sessions and gather tag-along participants in real time. The seminars were also recorded in Tokyo in high definition, and that video was uploaded later. (html/rss)
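For the curious, the IRC piece of a setup like this can be surprisingly simple. Here is a minimal sketch, not our actual class infrastructure: the server, nick and channel names below are made up (the real channel lived behind a web interface on the P2PU site). It just connects, answers keep-alive pings, and logs the channel so questions aren't lost when the stream ends.

```python
import socket

# Illustrative sketch only: the server, nick and channel below are made up;
# the real class channel was reachable through P2PU's web interface.
SERVER, PORT = "irc.example.net", 6667
NICK, CHANNEL = "kmd-logger", "#kmd-digital-journalism"

def log_channel():
    sock = socket.create_connection((SERVER, PORT))

    def send(line):
        sock.sendall((line + "\r\n").encode("utf-8"))

    send("NICK " + NICK)
    send("USER %s 0 * :KMD class logger" % NICK)

    joined = False
    buf = b""
    with open("class-log.txt", "a", encoding="utf-8") as log:
        while True:
            data = sock.recv(4096)
            if not data:
                break                                      # server hung up
            buf += data
            *lines, buf = buf.split(b"\r\n")
            for raw in lines:
                line = raw.decode("utf-8", errors="replace")
                if line.startswith("PING"):                # keep the connection alive
                    send("PONG " + line.split(" ", 1)[1])
                elif " 001 " in line and not joined:       # welcome message -> safe to join
                    send("JOIN " + CHANNEL)
                    joined = True
                elif " PRIVMSG " in line:                  # record questions and comments
                    log.write(line + "\n")

if __name__ == "__main__":
    log_channel()
```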

I think the complexity of the technology threw some of the participants off, and there is a lot that could be improved, but considering that complexity and the figuring-it-out-as-we-went-along aspect of it, it went amazingly well. We typically had dozens of people joining via UStream and a dozen or so people on the IRC channel.

The ad-libbing was really fun and worked well. For example, we were able to convince Hiroko Tabuchi of the New York Times, who at first was a viewer and retweeter of the UStream, to come and give a presentation in class the next week. I was then able to get the Executive Director of Greenpeace Japan, Jun Hoshikawa, to Skype in and talk to Hiroko and the class about the failure of the Japanese media in tracking the Greenpeace Japan trial.

In addition to the assignments, forum discussions and the real-time discussions, participants were asked to create or join projects. A number of interesting projects were launched. Hala started a blog about Muslims in Tokyo; Gueorgui, Alan and Richard started a project to work on non-GDP/market assessments; Gilmar and Gustavo started a blog about new abilities for modern journalists; Lena and Nadhir are working on a report about the course; and Richard and Rick started a blog about digital journalism in Tokyo.

The downside was that the participation from the Keio students was fairly limited. I think it was a combination of the English, the Monday morning scheduling and the amount of work that threw them off. However, the few students who survived made some great contributions.

I think that for the people participating from all over the world, having the sessions at a fixed time in the Japanese time zone made it nearly impossible for some of them to join the real-time conversations.

Finally, I think that having so many modes of communications made it difficult to keep track of the threads.

However, I was really excited by the effectiveness and the quality of the discourse. Also, I realized that in many ways, the less planned serendipitous stuff worked the best. Cruising down my IM buddy list to find someone to pull into the class via Skype seemed to work very well.

We're going to see if we can keep some sort of persistent community going via the mailing list so that we can iterate on both this mode of interaction and how best to learn about online journalism.

Update: Andria wrote a good post about the course.

When I was on the ICANN board, we were dealing with the issue of Internationalized Domain Names (IDNs), an initiative to allow non-Latin characters in domain names. It was technically difficult, and the consensus process to decide exactly how to do it was even more difficult. Many communities, such as the Chinese and Arabic regions, were anxious to get started and were getting very frustrated with the ICANN process around IDNs. At times, it seemed like the Arab Internet and the Chinese Internet were ready either to fork away and make their own Internet to solve the problem, or to introduce local technical "hacks" to deal with the issue, which would have broken many applications that depended on the standard behavior of the Domain Name System.
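To give a concrete sense of what the eventual standard does, here is a tiny Python sketch using the standard library's IDNA 2003 codec. It shows how a domain written in Japanese is mapped to the ASCII-compatible "punycode" form that the DNS actually carries; the domain itself is just an illustrative example, not a real registration.

```python
# Unicode labels are converted to an ASCII-compatible encoding (the "xn--"
# punycode form) so the existing DNS never has to carry non-ASCII bytes.
# Python's built-in "idna" codec (IDNA 2003) is enough to illustrate the idea.
domain = "例え.テスト"                      # an illustrative Japanese domain name
ascii_form = domain.encode("idna")          # what a resolver actually queries
print(ascii_form)                           # e.g. b'xn--r8jz45g.xn--zckzah'
print(ascii_form.decode("idna"))            # decodes back to the Unicode form
```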

Luckily, in the end, we were able to come up with some basic understandings around IDNs after a lot of work. The Internet held together in one piece, almost impossibly so.

When I joined the Open Source Initiative board of directors, we were also struggling with a similar, but slightly different, problem. We called it "license proliferation": companies and projects creating their own "vanity" Free and Open Source licenses rather than using existing, established licenses. Because these vanity licenses were tailored (at times differing only very slightly from an existing license) to address a particular steward's needs, they added to the complexity of the landscape, confusing users and creating legally incompatible bodies of code.

Copy-left licenses such as the Free Software Foundation's GNU General Public License require that derivative works be licensed under the same license. This feature - and to many coders it is a feature, not a bug - makes it challenging to combine code from projects with different licenses because of the requirement on how derivatives must be licensed. These islands of code look a lot like a forked Internet, or like the separate IM networks and email systems that existed before the Internet connected them together.

Two great features of the Internet are its low transaction costs and the standards and protocols that allow interoperability, fueling the massive network effect that drives innovation.

At Creative Commons we have the benefit of hindsight as the "new layer" of the stack and are working hard to keep transaction costs low and interoperability high by trying to prevent license proliferation and "forking".

For instance, Wikipedia was established before Creative Commons licenses were available. Until last year, Wikipedia was licensed under the Free Software Foundation's GNU Free Documentation License (GFDL). The GFDL is a copy-left license, very similar to the Creative Commons Share-Alike license, which allows people to use the content as long as derivatives are licensed under the same license. However, since the GFDL was primarily designed for free software documentation, it had a number of attributes that made it sub-optimal for massive online collaborations like Wikipedia.

Also, as more and more content was created under the Creative Commons Share-Alike license, two oceans of content emerged that were not remixable or compatible because of the two different licenses. It was like having two Internets.

After years of discussion with the Free Software Foundation, the Wikipedia and Wikimedia board and community and the Creative Commons community, last year we were finally able to convert Wikipedia to a Creative Commons Share-Alike license. This brought together two communities and two bodies of content so that they could share and collaborate freely.

The moment felt a lot like the early days of email when finally you could send email to anyone instead of only those people on your network.

As sharing and free culture become more and more accepted, and as governments, Internet services and even broadcasters begin to implement sharing, the specter of license proliferation has begun to present a real risk.

Companies and governments are beginning to create vanity licenses, either for purely branding and egotistical reasons or because there are certain features that they would like to "tweak". What many of these communities don't understand is that tweaking a free content license is a lot like tweaking character codes or the Internet protocol. While you may get some satisfaction from a minor feature or a feeling of ownership, you will introduce the friction of yet another license that we all have to understand and, in many cases, fundamental incompatibility and a lack of interoperability.

Creative Commons is not just a single license "option". We are a global conversation among lawyers, judges, academics, users and companies in over a hundred countries, with extremely rigorous, compatible license ports in more than 50 jurisdictions. We are focused on taking into consideration the needs of all of the stakeholders in this new ecosystem and on updating and modifying our licenses to provide as many options as possible while keeping things as simple as possible, to achieve maximum interoperability and ease of use.

Some would argue that our six core licenses provide too many choices. Some of our critics point -- perhaps rightly -- to the fact that our own licenses are not all compatible with one another. Others would argue that they do not provide enough choices. But we believe, 350,000,000 licensed works later, that we are successfully navigating the sweet spot between simplicity and choice.

As sharing and the adoption of new, free licenses begin to accelerate, I believe we are in danger of creating sloppy or incompatible licenses backed by torrents of content funded by well-meaning governments, non-profits, users and even commercial entities. Poorly drafted licenses, licenses that are not adequately stewarded or supported by a dedicated team of legal experts, content encumbered by onerous neighboring rights, and isolated and restrictive licenses can create mountains of content that we might call "free" but that, for all practical purposes, become puddles of unusable content - what we would call "failed sharing".

I would like to urge all of those people who have seen the benefit of sharing and free licensing to really consider the value of focusing on a single set of licenses and to resist the urge to create vanity or let's-just-add-this-one-feature-for-our-users licenses. We are trying to create an open global dialog, and we encourage people to join the conversation, present their cases for how our licenses might be improved, and listen to the reasons why each of the clauses in our licenses has been written the way it has.

For the future users of our content and participants in the architecture that we are creating, we really MUST try to hold this network together and try to proactively stamp out license proliferation and fragmentation. If the ICANN and OSI experiences provide any guidance and learnings -- and if we are to avoid the challenges and risks those organizations and communities confronted -- we all must be vigilant and uncompromising on this point.

Video of Zach Coelius, CEO of Triggit, talking about Demand-Side Platforms and Real-Time Bidding. An increasing number of ad networks and exchanges have begun making their inventory available for real-time bidding, most notably Google. This allows companies like Triggit to look at lots of inventory across a number of networks and do real-time bidding based on sophisticated analytics.
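To make the mechanism a bit more concrete, here is a small, purely illustrative Python sketch of the decision a demand-side platform makes for each impression: the exchange sends a bid request in real time, and the bidder prices the impression from its own analytics (here, a made-up click-through-rate table) and either bids or passes. None of the names or numbers come from Triggit or any real exchange; they are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class BidRequest:
    """A toy stand-in for the bid request an exchange sends out for each impression."""
    site: str
    user_segment: str
    floor_price: float   # minimum CPM (price per 1000 impressions) the exchange accepts, in dollars

# Hypothetical analytics: predicted click-through rates per audience segment,
# and what the advertiser is willing to pay for a single click.
PREDICTED_CTR = {"in_market_auto": 0.004, "sports_fan": 0.001, "unknown": 0.0005}
VALUE_PER_CLICK = 2.50   # dollars

def compute_bid(req: BidRequest) -> float | None:
    """Return a CPM bid for this impression, or None to pass on it."""
    ctr = PREDICTED_CTR.get(req.user_segment, PREDICTED_CTR["unknown"])
    expected_value_cpm = ctr * VALUE_PER_CLICK * 1000   # expected value of 1000 such impressions
    if expected_value_cpm < req.floor_price:
        return None                                     # not worth the exchange's floor price
    return round(expected_value_cpm, 2)

if __name__ == "__main__":
    req = BidRequest(site="news.example.com", user_segment="in_market_auto", floor_price=1.00)
    print(compute_bid(req))   # -> 10.0, i.e. bid a $10 CPM for this impression
```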

This is an interesting trend that I think will change the ad landscape pretty dramatically and could help content providers by significantly increasing the value of their ads. It also allows a level of control that might give ad agencies a new role in making more creative campaigns than just bulk targeting.

Disclaimer: I'm an investor in Triggit.

Here's a video walkthrough of our Chiba home, taken on the last day of my recent short trip there. I'm still getting used to the Flip Video: it tends to be a bit shaky, and I was pointing it a bit too far downward. Also, Mizuka was trying to silently guide me through the shoot and sometimes grabbed my shoulder, which made it even shakier.

Anyway, hopefully my videos will improve through iteration. In the meantime, you can see what my Chiba home looks like at the beginning of summer in Japan.
