Last evening the government of the United States made an announcement that sent shockwaves through the Internet governance world. The National Telecommunications & Information Administration (NTIA), a division of the Department of Commerce, publicly stated that it will not be renewing its contract with the Internet Assigned Numbers Authority (IANA) past its September 2015 expiry date.
The importance of this announcement cannot be overstated.
The Internet is, for the most part, a product of U.S. interests, including the Department of Defense and the Department of Commerce. As a result, key Internet technical infrastructure has been operating under contract administered by the NTIA. Core to these operations are the functions IANA performs – the coordination of the DNS Root and Internet Protocol addressing. As you can imagine, among the entities that comprise the Internet governance ecosystem and certain states around the world, there are many that oppose having U.S. government interests so close to the Internet’s operations.
Interestingly, this announcement, however big, should not be seen as entirely unexpected.
I’ve blogged before about the current governance model in place to manage the Internet. Commonly called the multi-stakeholder model, it is a bottom-up, consensus-based model that includes an organic mix of public and private entities at the regional, national and international levels – those entities that have a stake in the success of the Internet. This complex network of inter-related and inter-connected bodies that comprise the Internet governance world is analogous to a natural ecosystem. And like a natural ecosystem, the current governance structures and processes grew organically, beginning in the 1960s when the Internet was entirely under the control of the United States government.
Like a natural ecosystem, the organisms that comprise the greater governance entity exist in a delicate balance. As it is continuously evolving, the entities involved in the governance of the Internet also need to evolve. The fact is many organizations have ceased to exist or were reorganized as a result of the changing needs of the Internet ecosystem. Who remembers the International Network Working Group or the Federal Networking Council?
I should also note that it has always been the intent of the government to transfer management of these functions to ICANN. Central to this commitment was the transitioning of the so-called ‘IANA functions’.
I believe we are witnessing another evolutionary step in the development of the Internet with today’s announcement. Momentum to reform the current Internet governance structures and systems has been gaining steam for a number of years. However, much of the current discourse on Internet governance focuses on the linkage between ICANN, IANA and the U.S. government. The U.S. government backing away from that accountability role removes a considerable barrier in those discussions.
We are, however, left with an accountability vacuum. Whether or not you agreed with the role of the U.S. government, the fact is they did play an important – if only very limited in recent years – role in ensuring IANA was doing the work it was tasked with. With the removal of the U.S. government as that accountability body, mechanisms or structures will likely need to be put in place in order to assume that role. That said, I’m confident any number of solutions will be proposed over the coming months, and that we are on the cusp of settling a number of the outstanding issues that have dogged the Internet governance world for years.
In 2014, about 1,000 new generic top-level domains (gTLDs) will be added to the Internet’s root. Given that there are currently fewer than 300 TLDs total (a mix of country code TLDs like .CA and generics like .COM), it’s almost cliché to say that the domain name industry is in the midst of one of the most significant changes in its history.
A change of this magnitude isn’t just rare – the term disruptive comes to mind. As the head of the registry for .CA, my competition will increase by more than 400 per cent.
This is a sign our industry has matured – domain names aren’t the ‘domain’ of Web geeks anymore, no pun intended. We’ve gone mainstream. Our once quiet and relatively invisible industry has become a mainstay in the media. Our products are even advertised in prime time and during the Super Bowl.
That’s a world apart from what we’re used to. For the past two decades, the domain name market has been characterized by high demand and relatively few competitors. Frankly, this combination has meant that selling domain names has not been overly difficult. Registries offered a product – domain names – that Registrars sold on their behalf. There was very little customer-focused activity on the part of the registry. Even with minimal investments in marketing and sales, a registry could enjoy a reasonable market share. Double-digit growth rates have been commonplace.
In this new market, we can’t assume we’re going to sell domains based on high demand alone. There’s going to be more than enough choice to meet that demand. To that end, some of the registries for the new gTLDs are approaching the industry with radically different business models. As the Internet has become more important in the day-to-day lives of many of the world’s citizens, we’re selling more than just domain names. We’re selling an identity, an experience, and not just a virtual address.
Traditionally, new domain names became available following a standard pattern: a sunrise period (for trademark holders to claim their names), a land rush (for keyword names) and general availability (first come, first served). In the current delegation of new gTLDs, this pattern has been revamped. The sunrise period still exists, albeit in a minimal timeframe. Pre-registration is being offered for some domain names that do not even exist yet, often for a very high price. Some registries are using declining scales for this pre-registration period: if you’re interested in a domain name, it can be yours in the first phase of the scale for a high price (say, $10,000). If it’s not picked up in the first phase, the fee for pre-registration drops, and so on until it is registered.
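To make the declining-scale mechanic concrete, here is a minimal sketch of how such a phased price schedule might work. The phase count and dollar amounts below are hypothetical examples for illustration – they are not any actual registry’s schedule.

```python
# Illustrative declining-scale pre-registration pricing.
# The phases and dollar amounts are hypothetical, not any
# registry's actual schedule.

PHASES = [10_000, 5_000, 2_500, 1_000, 250]  # fee drops each phase


def preregistration_price(phase: int) -> int:
    """Return the pre-registration fee for a given phase (0-indexed).

    A name starts at a high price; if nobody claims it, the fee drops
    phase by phase. After the last scheduled phase, the price stays
    at the floor until the name is registered.
    """
    if phase < 0:
        raise ValueError("phase must be non-negative")
    return PHASES[min(phase, len(PHASES) - 1)]


if __name__ == "__main__":
    for p in range(6):
        print(f"phase {p}: ${preregistration_price(p):,}")
```

A registrant willing to wait risks losing the name to someone who buys in an earlier, more expensive phase – which is exactly the tension the model is designed to exploit.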
Some registries are taking this a step further by offering pre-registration for TLDs that have not successfully been awarded by ICANN yet. Each new gTLD application came with a $185,000 USD fee (plus additional fees for reviews, infrastructure costs, etc.). To date, registries have not realized any revenue from these new domains. Offering a new gTLD is a pricey endeavour, and the applicants are looking to make that money back as soon as they can through pre-registration.
There have been TLDs with defined markets since the earliest days of the Internet. For example, many ccTLDs have residency requirements (.CA included) and there are industry-specific gTLDs such as .MUSEUM. With the exception of ccTLDs, history shows limiting the potential market for a TLD has not been a recipe for success.
There are a few registries that seem to believe that will no longer be the case. In fact, some of the new gTLDs are hyper-focused. Never mind ccTLDs that limit registrations to a nation’s citizens, there are city-specific domain names like .NYC that are only available to New York City businesses, organizations and residents. Apparently, it’s no longer enough to identify as an American using a .US domain name when you’re a citizen of the city that never sleeps:
“Increasingly, the Internet is not only about what you are, but where you are. A .nyc address tells the world you are located in NYC.”
Some new gTLDs can set a registrant apart from their competition in other ways. TLDs like .PLUMBING and .CONSTRUCTION are profession-specific. .CEO appears to be counting on exclusivity to stand out in a market with 1,300+ competitors.
The potential markets for these TLDs are limited, but the registries behind them are counting on the fact that the sheer number of domain names out there (265 million and counting) makes finding a plumber, NYC business or whatever else difficult. By moving the thinking from the ‘left of the dot’ to the right of it, these registries have found an innovative way to stand out from the crowd.
Bulk registrations are now offered by Donuts, a registry that is behind hundreds of gTLD applications (307 to be exact). They have a list of about 200 new gTLDs – if your organization wants to register all of them for a particular domain name (i.e. domain.bike, domain.camera, domain.wine, etc.), it will cost you a fixed price, not domain by domain as it has traditionally worked. For the domain name Registrant, domain names are about to be a larger expense for many organizations who wish to have an online presence – or prevent others from creating that presence in their name.
A number of the new gTLD registries are offering their products with a variable pricing model. ‘Premium’ domain names – those domains that are viewed to be more in demand – will have a higher initial registration fee. While this isn’t exactly new – similar pricing models have been in place for TLDs in the past – what is new is the fact that the renewal fee will also be higher for premium names. For those on the higher end of the spectrum, this could mean many thousands of dollars per year for a single domain name.
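To see what premium renewal pricing means for a registrant’s budget, here is a hypothetical cost-of-ownership sketch. The tier names and fees are invented for illustration – the key point, reflected in the code, is that premium names carry the higher fee at renewal time too, not just at registration.

```python
# Hypothetical premium-pricing tiers: both the initial registration
# fee and the annual renewal fee are higher for 'premium' names.
# Tier names and prices are invented for illustration only.

TIERS = {
    "standard": {"register": 15, "renew": 15},
    "premium": {"register": 2_000, "renew": 2_000},
    "super-premium": {"register": 10_000, "renew": 10_000},
}


def cost_of_ownership(tier: str, years: int) -> int:
    """Total cost of holding a name: one registration plus
    (years - 1) renewals at the tier's renewal rate."""
    fees = TIERS[tier]
    return fees["register"] + fees["renew"] * (years - 1)


if __name__ == "__main__":
    for tier in TIERS:
        print(f"{tier}: 5-year cost ${cost_of_ownership(tier, 5):,}")
```

Under this model, a name at the top of the scale costs its owner tens of thousands of dollars over a five-year hold – a recurring expense, not the one-time outlay registrants are used to.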
There are some well-known newcomers to the domain name business. Google has applied for 101 new gTLDs, and Amazon has applied for 76. I find it difficult to believe they are looking to expand their already very successful businesses into domain names.
Is it possible a company like Google is entering the market with a completely different business strategy? I’d say it’s not just possible, it’s likely. They have a history of entering a market and offering a product for free that was previously charged for – think about Google Public DNS, Google Analytics (how many people pay for website analytics anymore?) or Google Drive. Google’s profits come from ad sales – they’re going to be one of the domain name newcomers to watch in the near future. Whatever their plans are, I can guarantee they’re serious, and will be a departure from traditional domain name sales.
The changes I’ve discussed above are neither good nor bad, they just are. While the incumbent domain names, country codes and generics, have enjoyed limited competition for many years, new gTLDs are changing the world we operate in. In a sense, it’s a daunting prospect for the incumbent TLD operators, CIRA included. But it’s also an exciting time for the industry. Interest in TLDs is at an all-time high. Competition is healthy in any industry, and it will result in new and innovative ways of running a registry and marketing TLDs. We’re on the cusp of a new era in the domain name industry and I’m pleased to be a part of it.
Our primary focus at CIRA is to run the .CA registry and underlying domain name system (DNS) infrastructure. We do that work very well, and as a result we have come to play a critical role in the Canadian Internet ecosystem.
As the stewards of Canada’s online identifier, we also want to be good digital citizens. I believe that includes giving back to the community when we can and where it will make a difference. It’s also part of our corporate mandate. Our Community Investment Program, or CIP, is based on Article 3 of our ‘articles of continuance’ which allows us “to develop, carry out and support any other Internet-related activities in Canada.”
For the past couple of years, we have been working with community-based groups to establish Internet Exchange Points in Canada. To date, we have helped two get off the ground, and are working with groups in other cities.
For the past four years, we have hosted the Canadian Internet Forum, an arena for dialogue with Canadians about the Internet-related issues that are important to them. And, we have supported organizations like MediaSmarts, TechU.me and Code for Kids in their work to empower youth with the skills they need to be safe and successful in an increasingly digitally-dependent world.
I’m pleased to announce that beginning today, and until February 28, we are accepting applications for funding through our CIP. Eligible agencies include community groups, not-for-profits and academic institutions with projects in mind that enhance the Internet for the benefit of all Canadians. More information about applicant eligibility and the application process is available here.
An arm’s length committee comprised of CIRA Board members and community experts will evaluate the proposals and award funds based on merit.
In the coming year, CIRA will be investing up to $1,000,000 in the Canadian Internet community. Qualifying organizations may receive funding ranging from $25,000 to as much as $100,000 for their project. While the CIP is open to a wide range of ideas, the intent is to make a lasting positive impact on how Canadians access and use the Internet for the foreseeable future.
If you know an organization that could benefit from our support, please let them know about the Community Investment Program. Together we can enhance the Internet experience for all Canadians.
CIRA is a proud supporter of MediaSmarts, Canada’s leading media literacy organization. We believe that it is important to support the development of digital literacy skills among the next generation of Canada’s digital citizens. This week, MediaSmarts is releasing the third phase of their research project called Young Canadians in a Wired World (YCWW). Matthew Johnson, the Director of Education at MediaSmarts, wrote the following guest post on Public Domain about YCWW:
It goes without saying that eight years is a long time on the Internet. Between 2005, when MediaSmarts published Phase II of our Young Canadians in a Wired World research, and 2013, when we conducted the national student survey for Phase III, the Internet changed almost beyond recognition: online video, once slow and buggy, became one of the most popular activities on the Web, while social networking became nearly universal among both youth and adults. Young people’s online experiences have changed as well, so we surveyed 5,436 Canadian students in grades 4 through 11, in classrooms in every province and territory, to find out how. Our first report drawn from this survey, Life Online, focuses on what youth are doing online, what sites they’re going to and their attitudes towards online safety, household rules on Internet use and unplugging from digital technology. (Future reports based on this data will look at students’ habits, activities and attitudes towards privacy, digital permanence, bullying, commercialization, offensive content, online relationships and digital literacy in the classroom and in the home.)
One finding that’s unlikely to be a surprise is that nearly all youth are going online. In fact, 99 percent of students surveyed have access to the Internet outside of school using a variety of devices. The biggest change since our last survey is the proliferation of mobile devices, such as tablets, smartphones and MP3 players, which give youth constant – and often unsupervised – online access.
Not only are students getting connected, they’re staying connected: more than a third of students who own cell phones say they sleep with their phones in case they get calls or messages during the night. Students do try to balance their online and offline activities: nearly all say that they sometimes choose to go offline in order to spend more time with friends and family, go outside or play a game or sport, read a book or just enjoy some solitary quiet time. More worryingly, one in six students has gone offline in order to avoid someone who is harassing them.
What are Canadian youth doing when they’re online? For one thing, they’re looking for information – primarily about things like news, sports and entertainment, but also physical and mental health issues, relationship advice and sexuality. Two-thirds of students play online games, though the games they play differ significantly: boys in grades 4-6 choose Minecraft, a game in which players build virtual environments, while girls prefer virtual worlds such as Webkinz, Moshi Monsters and Poptropica, which contain chat and social networking features. Social networking is also a popular activity: while rates are highest for older students, a significant number of younger students – one-third in Grade 5 and almost half in Grade 6 – have a Facebook account, despite the site being closed to users under 13.
Another major change involves household rules regarding online activities. Although our 2012 focus groups with parents and youth showed parents were more concerned than ever about what youth were doing online, the average number of household rules has actually declined since 2005. Despite a high awareness of perceived “stranger danger”, the number of students who have a rule at home about meeting people whom they first met online actually dropped from 2005, from two-thirds to one-half.
Consistent with our previous research, household rules have a significant positive impact on what students do online, reducing risky behaviours such as posting their contact information, visiting gambling sites, seeking out online pornography and talking to strangers online. In general, though, the number of household rules takes a sharp dive after Grade 7 and at all ages girls are more likely to report having rules about their online activities than boys. The greater number of rules placed on girls may be based on a sense that girls are more vulnerable in general, but this may also relate to the fact that the Internet is a very different place for girls than for boys: girls are less likely to agree with the statement that “the Internet is a safe place for me” and more likely to agree that “I could be hurt if I talk to someone I don’t know online”. Despite these differences, both boys and girls feel confident in their ability to look after themselves, with nine out of ten agreeing with the statement “I know how to protect myself online”.
How often students say they have an adult or parent in the room with them while online has also changed since the 2005 survey, but surprisingly – especially considering the decline in household rules and the proliferation of mobile devices – this figure has risen. As with household rules, however, the rate is higher for girls. One in five Grade 4 students never has a parent or adult with them when they are online at home, and by Grade 8 – a time when students are most at risk of encountering and getting involved in trouble online – four out of ten students never go online with a parent or other adult in the room.
Students do see their parents as a valuable resource for learning about the Internet: nearly half of students say they have learned about issues such as cyberbullying, online safety and privacy management at home. As students get older, they’re less likely to report having learned about these issues from parents and more likely to learn from teachers. A worrying number of students haven’t learned about these topics from any source: more than half of students in grades 4-6 have not learned any strategies for authenticating online information either at home or at school.
Life Online has raised many issues that call for more in-depth study. However, the evidence is clear at this early stage that despite their confidence with digital tools – or perhaps because of it – Canadian youth, and particularly elementary-aged children, need instruction in digital literacy skills, and parents and teachers need to be given tools and resources to help them provide that instruction.
Click here to read the full Life Online report: mediasmarts.ca/ycww.
Young Canadians in a Wired World – Phase III: Life Online was made possible by financial contributions from the Canadian Internet Registration Authority and Office of the Privacy Commissioner of Canada.
The Internet is at a crossroads. And while high-profile events like the introduction of new gTLDs and revelations about governments and online surveillance may be a catalyst for recent Internet governance reform initiatives, their necessity isn’t exactly new. After all, the current structures and processes in place were set up a decade and a half ago, an eternity in Internet years.
A key step in reviewing and renewing these structures is the Panel on Global Internet Cooperation and Governance Mechanisms, announced at the recent ICANN meeting in Buenos Aires. Last week I was in London for the first meeting of this panel to chart a path forward for the ongoing successful development of the Internet.
While the current Internet governance institutions were in large part responsible for getting the first two+ billion – mostly citizens of the developed world – online, they’re likely not the right ones to get the next two billion – the citizens of the developing world – online. Those new users reside in parts of the world that have political structures that are based in philosophical underpinnings that differ greatly from ours in the west.
That’s why I think the diversity of the panel members is noteworthy – not only are many sectors and industries represented, but geographic regions as well – panelists hail from all corners of the earth (you can view the panel’s membership here). With that said, it must be made clear that this panel is not meant to be representative. Rather, we are a group of knowledgeable individuals who are committed to the success of the Internet, and as such have come together to identify significant, potential solutions for administering the Internet.
The variety of industries represented is also noteworthy. From current and former world leaders to senior bureaucrats, the tech community and the private sector, the diversity of voices at the table is striking. As an ‘on-the-ground’ operator present at the table, I believe that I bring a unique and important perspective. Given the diversity of viewpoints on the panel, I am confident our collective insights will ensure whatever recommendations we make as a panel will work in the real world.
The first meeting was spent identifying the desirable traits for the future administration of the Internet. Within the current governance mechanisms, what do we need to keep? What’s missing? We all agreed that some enhanced role for governments is important to ensure the future success of the Internet. How do we accomplish this? And, since Internet governance discussions have shifted from ‘how it works’ to ‘how it’s used,’ how do we address the inherent jurisdictional issues?
When we do engage voices like governments in more discussions, a funny thing happens; you start to realize that some of the issues dividing you are sometimes just a question of semantics. As a panel, we had a discussion about the term ‘governance’ that was demonstrative of the different worlds we are trying to bridge. In the political world, the term ‘governance’ is loaded, and carries with it ideas of power and authority, certainly not the same meaning that we in the Internet world have given it. Are the terms coordination and administration more helpful moving forward?
The task is lofty – to come up with possible mechanisms and arrangements to ensure the Internet delivers on its promise of prosperity to the whole world, not just the developed one. As someone with a business background, I’m used to having an end goal in mind and working backwards from there. These discussions work in the opposite way – we know our starting place, but we need to work our way toward the end goal. I find this approach somewhat freeing – it is a much more open process – and I believe it is more likely to succeed precisely because it doesn’t begin with a fixed end goal in mind.
We will meet as a panel three times – the recent December meeting, once in February and once in May. At the end of February 2014 we’ll be releasing a high level report, and our ideas will be posted for comment and input. The much anticipated Brazil Internet summit will take place after our high-level report is released but before our work as a panel wraps up, so I’m sure our recommendations will form at least a part of the discussions there.
We are at a critical point in the ongoing development of the Internet. I believe it’s healthy to review an entity’s governance processes from time to time, and most certainly when we’re at an inflection point of sorts, as we currently are.
Much in the same way the National Hockey League has had to adapt the rules of hockey to accommodate faster, stronger, more technical players, we will have to adapt the mechanisms of the Internet governance world in the face of upcoming new players – new gTLDs, the addition of two billion new users and the effect of the NSA revelations. However, there are certain characteristics that can never change – the game of hockey is still played with 12 players on the ice and three periods. Any and all changes have to be for the benefit of the game, or in this case, the good of the Internet.
I was part of a group of about 200 people who attended an update yesterday on the Montevideo statement at the Internet Governance Forum in Bali. I’d like to share a few of my observations, and offer some unsolicited advice.
First, the de facto leader and champion of the multi-stakeholder model, the United States, has been sent to the penalty box in light of the NSA surveillance revelations. I made this point when the Snowden affair first came to light, and it’s tremendously apparent at the IGF that much of the credibility the Americans had in defending the multi-stakeholder model has dissipated (at least for the time being).
That’s left us – the advocates of the multi-stakeholder model – in a bit of a leadership vacuum at a critical time in the Internet’s history. We are, after all, staring down the ITU’s Plenipotentiary meeting in November 2014. We well remember the ITU’s World Conference on International Telecommunications in Dubai last December. The last thing the Internet community wants or needs is a repeat of the discussions at the WCIT – the division between the supporters of the multi-stakeholder model and its detractors became both deeper and wider. There were times when many of us were genuinely concerned about the fate of the free and open Internet during the WCIT – the multi-stakeholder model stood poised to sustain some serious damage; fortunately, we came back from the brink.
The telecommunications world is about to head into similar discussions at the Plenipot – this is not a time when you want to be without your strongest champion and de facto (like it or not) leader.
But here we are.
Fortunately, the I* group (ICANN, the RIRs, IETF, IAB, W3C and ISOC) was able to pick up the ball that had been fumbled by the U.S. government and is attempting to fill that vacuum. While I applaud their resourcefulness, I do see a number of challenges they will have to overcome to be successful.
WARNING: as a Canadian, non-hockey sports metaphors are tough. If I mangle this one, I can’t be held responsible.
The I* (‘I-star’) group, and Fadi Chehadé in particular, have picked up and carried that metaphorical ball down the field. A tight group of CEOs has been leading the charge so far, without much in the way of broader consultation. I get it. That’s what we as CEOs are paid to do, and they were right to do what they did. When presented with an opportunity like this, we have to analyze the situation, make an informed decision, and keep moving.
However, their challenge will be engaging the broader community before it’s too late. If the I* group misses that opportunity, they risk jeopardizing their credibility to lead the community going forward. It would be akin to advocating for the multi-stakeholder model in a non-multi-stakeholder manner. I can already hear the Internet governance conspiracy theorists and their refrain of ICANN overreach.
Simply put, the I* folks need to pass the ball off to the broader community before it’s too late. It’s not going to be easy, but multi-stakeholderism is a messy game. The sheer number and diversity of voices that come to the table – from the technical community to civil society to end users, governments, and many more – means that we will always be dealing with competing interests and challenges with meaningful engagement.
But it is exactly that messy, at times turbulent chaos that makes the multi-stakeholder model work. At one time or another, all of those voices are critical to the success of the Internet. Don’t get me wrong. I’m not saying that everybody needs to be a full participant in these discussions. That’s not what multi-stakeholderism is about. Yes, all of those voices have a role to play, but not necessarily on every issue, all the time. Where and when appropriate, different voices come up to the surface and are included, but it is a very rare thing that all voices would be relevant on a particular issue. Those who aren’t involved need to have trust in the processes and people if we are going to be successful.
The right voices for this discussion need to be identified and included in pretty short order if we are to be successful. Finally, while it’s great we are discussing these issues, we also need to be very pragmatic in our approach. The fact is there isn’t a lot of time between now and the Plenipot in 2014, and we’ve got a lot of work to do. If we are going to re-examine the arrangements, processes, or frameworks that have been governing the Internet for more than a decade, we need to be nothing short of strategic. And maybe it’s my business background speaking, but I’d like to know what our end game is. That’s the I* group’s other challenge – to identify and articulate what we, as the Internet community, need to accomplish.
We need answers to some fundamental questions before we get too far down this path: What problem is the ‘coalition of the willing’ solving? How do we know when we’ve been successful? How are we going to get there?
My advice to the group is to move reasonably quickly from ‘thinking’ to ‘doing’, from strategy to execution. Yes, the thinking is a critical part of the process, but let’s not forget how little time we have for the doing. The first step, in my opinion, will be to develop a crisp, realistic goal. Start with the end in mind.
At yesterday’s briefing, Chris Disspain articulated what I believe is a good first step towards articulating an objective for this emerging group:
“Working together in a coalition to offer the world a multi-stakeholder-based mechanism for dealing with Internet governance issues as a viable alternative to governments or government-centric mechanisms.”
With some massaging, I think his statement could guide the group’s work. We’re off to a good start – I’m hearing a lot of positive things about this process from the people at the IGF, as well as some grumbling. I believe that the people who are leading this process have the best interests of the broader community at heart, but they need to get that broader community involved asap, and we, the community, need to pick up the ball.
We are about to bid farewell to a special board member at CIRA. Paul Andersen is both our longest-serving director and the longest-serving chair of our Board of Directors in the organization’s history.
Paul was elected to the Board during the very first Board of Directors election in 2000, and has served almost continuously since then. For the past five years, Paul has been the Chair of the Board.
It’s a rare thing for an organization – especially one in high tech – to have a consistent voice and vision on its board, but that’s exactly what Paul has provided. His fingerprints are all over .CA. During his tenure as Chair, we undertook some of the most complex and forward-looking projects in the history of .CA:
- In 2008, CIRA undertook a wholesale redesign of the entire .CA registry system. Not a single aspect of the registration process was left untouched, from technological processes to policies and business practices. The result? A streamlined registration process from start to finish. As Chair (and as an experienced .CA Registrar), Paul helped guide this project – the largest one in the history of .CA – from inception right through to its implementation.
- Paul was instrumental in championing the implementation of DNSSEC, a set of extensions that provide an extra layer of security to the domain name system (DNS). This past January, Paul was on-hand as we took a major step forward in making .CA more safe and secure when we published a signed .CA zone file.
- While I’m on the technological front, anyone who knows Paul knows he is an outspoken advocate for IPv6, the next generation Internet Protocol. CIRA’s website was made permanently IPv6-ready in 2011 as part of World IPv6 Day, in part at Paul’s behest.
- At the recently held Canadian Internet Forum and CIRA AGM, Paul announced a major new funding initiative by CIRA aimed at strengthening the Internet in Canada. Once it is officially launched, CIRA will distribute up to one million dollars in total through its Community Investment Program (CIP) over the next year to community groups, academics and not-for-profits for projects that make the Internet even better for all Canadians.
Of all of his accomplishments over his time with CIRA, I think this is the one he is most proud of. He has always championed CIRA’s social mandate, encouraging us to ensure .CA performed well as a top-level domain in order that we may give back to the Internet community, either through strategic investments or expertise. This is definitely evident in our work in facilitating Internet Exchange Points in Canada.
Paul played a role in each of these projects, from advocating for CIRA to implement DNSSEC and to adopt IPv6 to championing CIRA’s social role with the CIP. Under his leadership, CIRA became a forward-looking, world-class registry.
While Paul’s tenure with CIRA may have come to an end, he remains a common sight at the myriad tables where the future of the Internet is discussed. A true champion for the Internet in Canada, he will, I’m sure, continue to shape its development for years to come.
Paul: from all of us at CIRA, thank you.
I’m at the Internet Governance Forum in Bali this week. With the recent revelations that the NSA is monitoring global Internet traffic, and the subsequent fallout in both the Internet governance and diplomatic worlds, this is the place to be.
Last week I posted the Montevideo Statement, an output of a meeting of a number of high-level organizations responsible for the coordination of the Internet’s technical infrastructure (a group known as the “I* organizations”: ICANN, the RIRs, the IETF, the IAB, the W3C, and ISOC). In essence, this Statement is recognition of the need to address emerging issues facing the Internet governance world.
It must be noted that while the NSA revelations were likely a catalyst for these organizations to come together and state the need to re-examine and update the multi-stakeholder model, these issues have been brewing for years. I’ve blogged in the past about the challenges the Governmental Advisory Committee (GAC) has had in getting its voice heard and understood. And, if you’re a regular reader of my blog, you know that the bulk of the real estate here has been dedicated to my support of the multi-stakeholder model as the governance model that can ensure the future growth of the free and open Internet.
While my support for the multi-stakeholder model is unwavering, that does not mean I believe it is perfect. I also don’t believe it is unchangeable. It has its flaws, and I believe it’s high time we discuss them. I believe the Montevideo Statement is the proverbial icebreaker in that discussion.
Last night in Bali I attended a meeting the I* organizations called with a number of other stakeholders. What I witnessed was promising: a relatively diverse mix of people were at the table discussing the key issues.
The governance structures that are in place today were set up for a different era. To say that the Internet is a ubiquitous part of all aspects of the social and economic fabric of the globe is not an exaggeration, but this wasn’t the case even a decade ago.
The Internet was primarily set up by the technical community, so the governance structures followed suit – they were tech-heavy. As the Internet grew, so did the range of stakeholders involved; thus we’ve seen the inclusion of the private sector, academics, governments, and so on. And, while this organic growth has shown the flexible and fast-moving nature of the multi-stakeholder model, it has exposed some of the challenges in integrating the myriad voices at the governance table.
It reminds me of Geoffrey Moore’s Crossing the Chasm, the book about marketing high-tech products during the early start-up period. We – the early adopters: registries, registrars, academics, engineers, and so on – have brought the Internet to this point. There are two billion people online. It operates in a 100 per cent uptime environment. Internet technology has come a very long way – a technology like Skype was unthinkable 10 years ago.
The Internet’s been built and it works, but now it’s time to take it to the next level. To do that, some things have to change. Fundamentally, Internet governance discussions have moved from ‘how it works’ to ‘how it’s used.’ The issues that are most important now – online surveillance, cyber-safety and a number of others – may only tangentially involve technology.
I think we’ve reached the tipping point where the old guard that has led the agenda – the tech people, the academics – is no longer the only voice needed at the table. That old guard, so to speak, typically doesn’t have the organizational mandate to talk about ‘how’ the Internet is used. We’re from organizations like CIRA that have to be content-agnostic. Our mandate covers how the Internet works, not what it’s used for.
The Internet’s success has mandated that end user concerns – and the concerns of governments who represent the end users – now need to be addressed at governance tables with more urgency than ever before. Here’s the challenge: these new voices are going to be some of the strongest ones at the table. They’ve been under-represented for a long time and they’ve got a lot to say and lots to add.
I’m happy to say that I heard some of this last night at the Montevideo Statement follow-up meeting.
Let’s be frank: it’s still VERY early days, and there is plenty of work to do before we even get started. What’s next? We need some real objectives. We need an unbiased analysis of the current system – and by that I mean one free of ideological and political baggage – to identify what is working and what needs to evolve.
A number of leading organizations responsible for the coordination of the Internet’s technical infrastructure recently met in Montevideo, Uruguay. One of the outputs of this meeting is the following Statement on the Future of Internet Cooperation (pasted unedited and in its entirety):
Uruguay, 7 October 2013
The leaders of organizations responsible for coordination of the Internet technical infrastructure globally have met in Montevideo, Uruguay, to consider current issues affecting the future of the Internet.
The Internet and World Wide Web have brought major benefits in social and economic development worldwide. Both have been built and governed in the public interest through unique mechanisms for global multistakeholder Internet cooperation, which have been intrinsic to their success. The leaders discussed the clear need to continually strengthen and evolve these mechanisms, in truly substantial ways, to be able to address emerging issues faced by stakeholders in the Internet.
In this sense:
They reinforced the importance of globally coherent Internet operations, and warned against Internet fragmentation at a national level. They expressed strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations of pervasive monitoring and surveillance.
They identified the need for ongoing effort to address Internet Governance challenges, and agreed to catalyze community-wide efforts towards the evolution of global multistakeholder Internet cooperation.
They called for accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing.
They also called for the transition to IPv6 to remain a top priority globally. In particular Internet content providers must serve content with both IPv4 and IPv6 services, in order to be fully reachable on the global Internet.
Adiel A. Akplogan, CEO
African Network Information Center (AFRINIC)
John Curran, CEO
American Registry for Internet Numbers (ARIN)
Paul Wilson, Director General
Asia-Pacific Network Information Centre (APNIC)
Russ Housley, Chair
Internet Architecture Board (IAB)
Fadi Chehadé, President and CEO
Internet Corporation for Assigned Names and Numbers (ICANN)
Jari Arkko, Chair
Internet Engineering Task Force (IETF)
Lynn St. Amour, President and CEO
Internet Society (ISOC)
Raúl Echeberría, CEO
Latin America and Caribbean Internet Addresses Registry (LACNIC)
Axel Pawlik, Managing Director
Réseaux IP Européens Network Coordination Centre (RIPE NCC)
Jeff Jaffe, CEO
World Wide Web Consortium (W3C)
It’s been a little more than a year since we launched our Internet Exchange Point (IXP) initiative at CIRA, and we’ve made significant progress in that time.
A couple of weeks ago, the community celebrated the launch of Canada’s newest Internet Exchange Point, the Manitoba Internet Exchange (MBIX), in Winnipeg. In April, the Montreal Internet Exchange (QIX) launched. These IXPs, along with TorIX in Toronto, OttIX in Ottawa, and BCNet in Vancouver, are part of an evolving Canadian Internet infrastructure that is higher-performing, more secure, resilient and affordable. And there are productive discussions among the Internet community in Calgary about establishing an IXP in that city.
Through our research (.PDF) and our work with communities both global and domestic, we’ve learned a few things about what makes an IXP successful, and what doesn’t. Most important, I think, is the fact that there is no ‘one size fits all’ model. Rather, while there are key ingredients common to successful IXPs, each one takes on a local flavour.
What I have found to be critical is what I call ‘good governance’ – being open to understanding the local Internet community’s needs and being able to evolve as that community changes. It’s about operating the IXP in a transparent and responsive manner. We have also found that, most often, the IXPs that work are not-for-profits that operate for the benefit of the local Internet community. They are located in facilities that are open for any organization to peer at, and they have the capacity to grow to meet local needs. They are also open to input and support from the broader community.
Let me give you a couple of examples.
QIX, the ‘new’ IXP in Montreal, came out of an already existing one managed by the Réseau d’informations scientifiques du Québec (RISQ), an arm of the Quebec provincial government. It was not open for any organizations to peer with, and was managed by RISQ. Once the need for an open IXP became apparent in Montreal, RISQ worked with the local community to enhance and open QIX, and establish a new governance structure. RISQ still manages the day-to-day operations of QIX, but this not-for-profit is now governed by an independent board of directors.
In contrast, MBIX was started from nothing more than an idea and a committed group of volunteers. Everything – from the technical infrastructure to the governance structure – had to be built from scratch. The result is an open, not-for-profit IXP conceived of and built by the local community and run by a group of volunteers.
These two IXPs had very different beginnings and their current governance and operations have differences as well. However, they do have those ‘key ingredients’ I mentioned above in common: they are open and responsive to their local community, they are not-for-profit and are both located in a facility that is accessible and that can allow for growth.
When we started this initiative, our interest in establishing IXPs in Canada was driven by their key benefits: improving the performance of the Internet in Canada through better security, faster data transfer and greater network resilience. We were also aware that IXPs can reduce the chance that domestic Internet traffic will travel through the U.S. Since then, the topic of IXPs has gained a lot of traction in Canada. In light of the revelations that the National Security Agency (NSA) was monitoring Internet traffic crossing the U.S. border, it has garnered significant media attention.
That discussion continues on the Canadian Internet Forum, a national discussion on Internet-related issues hosted by CIRA.
I want to make two points clear. First, Internet traffic from all nations, destined for or transiting the U.S., can be subject to surveillance activity there. However, due to our geography and the configuration of North American networks, a large proportion of Canadian domestic traffic – traffic originating and terminating in Canada; some say up to 40 per cent – transits the U.S. and is therefore affected by the NSA’s activities. This makes our IXP initiative more important than ever.
Second, while IXPs can reduce the chance that Canadian Internet traffic will flow through the U.S., that risk can NOT be eliminated. That’s not the way the Internet works. The Internet operates on the premise that bits of data travel through the fastest and most available route, regardless of national borders. Building more IXPs in Canada will build capacity, speed and resiliency in this country, creating opportunities for Canadian data to remain here. There is no way, however, to ensure that all Canadian data stays entirely within our borders.
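To make that ‘best available route’ premise concrete, here is a toy sketch – greatly simplified, since real BGP route selection weighs many attributes beyond path length, and all AS numbers and paths below are hypothetical. The point it illustrates is the one above: routers simply prefer the best path on offer, with no notion of borders, so a domestic IXP keeps traffic in Canada only by making the all-Canadian path the shortest one.

```python
# Toy illustration (NOT a real BGP implementation): among the routes a
# router knows about, prefer the one with the shortest AS path.
# AS numbers are from the private-use range and purely hypothetical.

def best_route(routes):
    """Pick the route with the fewest AS hops; ties go to the first seen."""
    return min(routes, key=lambda r: len(r["as_path"]))

# Hypothetical routes from a Toronto network (AS 64500) to a
# Vancouver network (AS 64503).
routes = [
    # A path that stays in Canada but traverses more networks.
    {"via": "domestic_transit", "as_path": [64500, 64501, 64502, 64503]},
    # A shorter path that happens to dip through a U.S. transit provider.
    {"via": "us_transit", "as_path": [64500, 64510, 64503]},
]

print(best_route(routes)["via"])  # the shorter path wins: us_transit

# Peering at a local IXP adds an even shorter all-Canadian path,
# so the traffic now stays in the country -- by economics, not by fiat.
routes.append({"via": "ixp_peering", "as_path": [64500, 64503]})
print(best_route(routes)["via"])  # now: ixp_peering
```

Nothing in this sketch forbids the U.S. path; the IXP route wins only because it is shorter, which is exactly why more IXPs create opportunities for data to stay in Canada without ever guaranteeing it.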
We have made significant progress and have continued to advance our understanding of what it takes to establish successful IXPs. I believe it is a good time to pause, reflect and celebrate our achievements. As a nation, we still have a long way to go before we can put Canada on the map as a digital leader with a robust network of Internet exchange points across the country. In the meantime, please find out more about your local IXP – or, if your community doesn’t have one yet, get involved and contact us to help establish one!