The 2013 Internet Governance Forum

I’m at the Internet Governance Forum in Bali this week. With the recent revelations that the NSA is monitoring global Internet traffic and the subsequent fallout in both the Internet governance and diplomatic worlds, this is the place to be.

Last week I posted the Montevideo Statement, an output of a meeting of a number of high-level organizations responsible for the coordination of the Internet’s technical infrastructure (a group known as the “I* organizations”: ICANN, the RIRs, the IETF, the IAB, the W3C, and ISOC). In essence, the Statement is a recognition of the need to address emerging issues facing the Internet governance world.

It must be noted that, while the NSA revelations were likely a catalyst for these organizations to come together and state the need to re-examine and update the multi-stakeholder model, these issues have been brewing for years. I’ve blogged in the past about the challenges the Governmental Advisory Committee (GAC) has had in getting its voice heard and understood. And, if you’re a regular reader of my blog, you know that the bulk of the real estate here has been dedicated to my support of the multi-stakeholder model as the governance model that can ensure the future growth of the free and open Internet.

While my support for the multi-stakeholder model is unwavering, that does not mean I believe it is perfect. I also don’t believe it is unchangeable. It has its flaws, and I believe it’s high time we discuss them. I believe the Montevideo Statement is the proverbial icebreaker in that discussion.

Last night I attended a meeting in Bali called by the I* organizations with a number of other stakeholders. What I witnessed was promising. A relatively diverse mix of people were at the table discussing the key issues.

The governance structures that are in place today were set up for a different era. To say that the Internet is a ubiquitous part of all aspects of the social and economic fabric of the globe is not an exaggeration, but this wasn’t the case even a decade ago.

The Internet was primarily set up by the technical community, so the governance structures followed suit – they were tech-heavy. As the Internet grew, so did the range of stakeholders involved, and thus we’ve seen the inclusion of the private sector, academics, governments, and so on. And, while this organic growth has shown the flexible and fast-moving nature of the multi-stakeholder model, it has exposed some of the challenges in integrating the myriad of voices at the governance table.

It reminds me of Geoffrey Moore’s Crossing the Chasm, the book about marketing high-tech products during the early start-up period. We – the early adopters: registries, registrars, academics, engineers, and so on – have brought the Internet to this point. There are two billion people online. It operates in a 100 per cent uptime environment. Internet technology has come a very long way – a technology like Skype was unthinkable 10 years ago.

The Internet’s been built and it works, but now it’s time to take it to the next level. To do that, some things have to change. Fundamentally, Internet governance discussions have moved from ‘how it works’ to ‘how it’s used.’ The issues that are most important now are online surveillance, cyber-safety and a number of other issues that may only tangentially involve technology.

I think we’ve reached the tipping point where the old guard that has led the agenda – the tech people, the academics – are no longer the only voices needed at the table. The old guard of the Internet governance world, so to speak, typically doesn’t have the organizational mandate to talk about ‘how’ the Internet is used. We’re from organizations like CIRA that have to be content-agnostic. Our mandate covers how the Internet works, not what it’s used for.

The Internet’s success has mandated that end user concerns – and the concerns of governments who represent the end users – now need to be addressed at governance tables with more urgency than ever before. Here’s the challenge: these new voices are going to be some of the strongest ones at the table. They’ve been under-represented for a long time and they’ve got a lot to say and lots to add.

I’m happy to say that I heard some of this last night at the Montevideo Statement follow-up meeting.

Let’s be frank: it’s still VERY early days, and there is plenty of work to do before we even get started. What’s next? We need some real objectives. We need an unbiased analysis of the current system, free of any ideological or political baggage, to identify what is working and what needs to evolve.


The Montevideo Statement

A number of leading organizations responsible for the coordination of the Internet’s technical infrastructure recently met in Montevideo, Uruguay. One of the outputs of this meeting is the following Statement on the Future of Internet Cooperation (pasted unedited and in its entirety):

Uruguay, 7 October 2013
The leaders of organizations responsible for coordination of the Internet technical infrastructure globally have met in Montevideo, Uruguay, to consider current issues affecting the future of the Internet.

The Internet and World Wide Web have brought major benefits in social and economic development worldwide. Both have been built and governed in the public interest through unique mechanisms for global multistakeholder Internet cooperation, which have been intrinsic to their success. The leaders discussed the clear need to continually strengthen and evolve these mechanisms, in truly substantial ways, to be able to address emerging issues faced by stakeholders in the Internet.

In this sense:
They reinforced the importance of globally coherent Internet operations, and warned against Internet fragmentation at a national level. They expressed strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations of pervasive monitoring and surveillance.

They identified the need for ongoing effort to address Internet Governance challenges, and agreed to catalyze community-wide efforts towards the evolution of global multistakeholder Internet cooperation.

They called for accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing.

They also called for the transition to IPv6 to remain a top priority globally. In particular Internet content providers must serve content with both IPv4 and IPv6 services, in order to be fully reachable on the global Internet.

Adiel A. Akplogan, CEO
African Network Information Center (AFRINIC)

John Curran, CEO
American Registry for Internet Numbers (ARIN)

Paul Wilson, Director General
Asia-Pacific Network Information Centre (APNIC)

Russ Housley, Chair
Internet Architecture Board (IAB)

Fadi Chehadé, President and CEO
Internet Corporation for Assigned Names and Numbers (ICANN)

Jari Arkko, Chair
Internet Engineering Task Force (IETF)

Lynn St. Amour, President and CEO
Internet Society (ISOC)

Raúl Echeberría, CEO
Latin America and Caribbean Internet Addresses Registry (LACNIC)

Axel Pawlik, Managing Director
Réseaux IP Européens Network Coordination Centre (RIPE NCC)

Jeff Jaffe, CEO
World Wide Web Consortium (W3C)


Update on our work on Internet Exchange Points

It’s been a little more than a year since we launched our Internet Exchange Point (IXP) initiative at CIRA, and we’ve made significant progress in that time.

A couple of weeks ago, the community celebrated the launch of Canada’s newest Internet Exchange Point, the Manitoba Internet Exchange (MBIX), in Winnipeg. In April, the Montreal Internet Exchange (QIX) was launched. These IXPs, along with TorIX in Toronto, OttIX in Ottawa, and BCNet in Vancouver, are part of an evolving Canadian Internet infrastructure that is higher-performing, more secure, more resilient, and more affordable. And there are productive discussions among the Internet community in Calgary about establishing an IXP in that city.

Through our research (.PDF) and our work with communities both global and domestic, we’ve learned a few things about what makes an IXP successful, and what doesn’t. Most important, I think, is the fact that there is no ‘one size fits all’ model. Rather, while there are key ingredients common to successful IXPs, each one takes on a local flavour.

What I have found to be critical is what I call ‘good governance’ – being open to understanding the local Internet community’s needs and being able to evolve as that community changes. It’s about operating the IXP in a transparent and responsive manner. We have also found that, most often, the IXPs that work are not-for-profits and operate for the benefit of the local Internet community. They are located in facilities that are open for any organization to peer at, and they have the capacity to grow to meet local needs. They are also open to input and support from the broader community.

Let me give you a couple of examples.

QIX, the ‘new’ IXP in Montreal, came out of an already existing exchange managed by the Réseau d’informations scientifiques du Québec (RISQ), an arm of the Quebec provincial government. That exchange was not open for just any organization to peer at. Once the need for an open IXP became apparent in Montreal, RISQ worked with the local community to enhance and open QIX, and to establish a new governance structure. RISQ still manages the day-to-day operations of QIX, but the not-for-profit is now governed by an independent board of directors.

In contrast, MBIX was started from nothing more than an idea and a committed group of volunteers. Everything – from the technical infrastructure to the governance structure – had to be built from scratch.  The result is an open, not-for-profit IXP conceived of and built by the local community and run by a group of volunteers.

These two IXPs had very different beginnings, and their current governance and operations differ as well. However, they do share those ‘key ingredients’ I mentioned above: they are open and responsive to their local communities, they are not-for-profit, and both are located in facilities that are accessible and that allow for growth.

When we started this initiative, our interest in establishing IXPs in Canada was driven by their key benefits: improving the performance of the Internet in Canada through better security, faster data transfer and greater network resilience. We were also aware that IXPs can reduce the chance that domestic Internet traffic will travel through the U.S. Since then, the topic of IXPs has gained a lot of traction in Canada. In light of the revelations that the National Security Agency (NSA) was monitoring Internet traffic crossing the U.S. border, it has garnered significant media attention.

That discussion continues at the Canadian Internet Forum, a national discussion on Internet-related issues hosted by CIRA.

I want to make two points clear. First, Internet traffic from all nations around the world destined for, or transiting, the U.S. can be subject to surveillance activity there. However, due to our geography and the configuration of North American networks, a large proportion of Canadian domestic traffic – traffic originating and terminating in Canada, by some estimates up to 40 per cent – transits the U.S., and is therefore affected by the NSA’s activities. This makes our IXP initiative more important than ever.

Second, while IXPs can reduce the chance that Canadian Internet traffic will flow to the U.S., that risk can NOT be eliminated. That’s not the way the Internet works. The Internet operates on the premise that bits of data travel through the fastest and most available route, regardless of national borders. Building more IXPs in Canada will add capacity, speed and resiliency in this country, creating opportunities for Canadian data to remain here. There is no way, however, to ensure that all data remains entirely within our borders.

We have made significant progress and have continued to advance our understanding about establishing successful IXPs. I believe it is a good time to pause, reflect and celebrate our achievements. As a nation, we do have a long way to go before we can put Canada on the map as a digital leader with a robust network of Internet exchange points across the country. In the meantime, please find out more about your local IXP, or if your community doesn’t have one yet, get involved – contact us to help establish one!


Why are Canadians complacent about government surveillance online?


As someone who works at the very heart of the Internet, I am concerned by the lack of outrage among Canadians about the National Security Agency’s (NSA) PRISM program, which is monitoring the activities of Internet users around the globe and in our own backyard. I made my feelings about it clear in this opinion piece I wrote for the National Post, but since it was published last week, even more has been revealed about what information is being collected.

On Wednesday, the Guardian reported that the surveillance activities of the NSA in the U.S. and the Government Communications Headquarters (GCHQ) in the United Kingdom go far beyond what we previously thought. Both agencies are apparently able to crack encrypted communications. This means that online encryption – technologies like HTTPS and Secure Sockets Layer (SSL) that most of us rely on to keep personal information such as banking records private – is not secure. What I find most disturbing in the Guardian’s report is that the NSA has been working with technology companies and Internet service providers to insert “secret vulnerabilities – known as backdoors or trapdoors – into commercial encryption software.”
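
To make concrete what is at stake, here is a minimal sketch (in Python, purely for illustration and not drawn from the Guardian’s reporting) of the TLS session that protects an ordinary HTTPS connection. The hostname is a placeholder; any HTTPS site would do.

    import socket
    import ssl

    # Illustrative only: inspect the TLS session protecting an ordinary
    # HTTPS connection. The hostname is a placeholder.
    hostname = "example.com"
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("Negotiated protocol:", tls.version())  # e.g. TLSv1.3
            print("Cipher suite:", tls.cipher())
            print("Certificate subject:", tls.getpeercert().get("subject"))

A deliberately weakened version of this handshake would look identical to the user, which is exactly why the reported backdoors are so corrosive to trust.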

It appears nothing online is safe from prying eyes, even encrypted information. And with these revelations that the NSA has been inserting weaknesses into Internet security standards, any trust end users had in Internet transactions is certainly eroding.

National security is important – protecting its citizens is both a government’s right and responsibility, and surveillance is a necessary tool in that regard. But, as I discussed in my last post, we’re talking about a government monitoring its citizens without judicial oversight. This isn’t surveillance like we’ve had in the past with phone taps and opening mail (both requiring a warrant). This is surveillance and storage of emails and Internet histories without a warrant – the government does not need to show any cause to monitor any citizen online.

Most of us in the western world have a long history of rejecting that level of state control over an individual’s freedom. As I stated in my last post, in Canada we only have to look at the Royal Commission into Certain Activities of the Royal Canadian Mounted Police (the McDonald Commission), or to the Cold War, for examples.

The truth is Canadians are concerned about privacy. A recent study by the Office of the Privacy Commissioner of Canada (OPC) found that two thirds of Canadians are concerned with protecting their privacy (PDF).

So then why do Canadians seem apathetic about their online privacy?  What is it about the Internet that changes our perceptions and values concerning this important right?

We worked with Ipsos Reid, the international opinion research company, to understand what Canadians think about online surveillance. On our behalf, Ipsos Reid conducted a survey of Canadians between July 24 and 28, 2013.

Let’s be clear about one thing upfront – while Canada does have an online surveillance program, it does not have the depth and breadth of activity of a program like PRISM. What we do have, however, is an opportunity to have the important discussion about what is and isn’t acceptable in terms of surveillance in the digital age, before it has a chance to become what was revealed south of the border.

I was surprised to find out that half of Canadians (49 per cent) believe it is acceptable for the government to monitor email and other online activities of Canadians in some circumstances. When those circumstances include preventing “future terrorist attacks,” that number rises to 77 per cent.

That means that more than three quarters of Canadians are fine with the government monitoring our online activity, as long as it is in the interest of national security.

Why?

Our apathy may be in part due to the fact that only 18 per cent of us believe our Internet activity is confidential. In fact, our research shows that four in 10 of us believe the federal (Canadian) government is tracking our Internet activity. Sixty-three per cent believe that the government is monitoring who visits certain websites.

If most of us aren’t expecting privacy on the Internet, it would follow that we would be ambivalent about the government watching us there. And nearly 60 per cent of Canadians agree that they would be willing to give up their privacy on the Internet if it ‘would help the government foil terrorist plots’ – when it comes to national security, we’re much more forgiving.

Again, yes, I believe surveillance is a necessary tool for a government to use in the protection of its citizens. However, it must have appropriate and transparent judicial oversight.

And if you were going to use the ‘I have nothing to hide’ argument, I encourage you to read this. The author points out that, given the absurd number of laws on the books (in the U.S., for example, there are 27,000 pages of federal statutes), it is highly likely you’ve violated at least one without even knowing it. Besides, even if you have nothing to hide, I bet you still close your curtains at night. We all need parts of our lives to be kept private.

He also makes a strong argument that we should have something to hide. The fact is social progress is often preceded by illegal activity. Last week we celebrated the 50th anniversary of Martin Luther King’s “I have a dream” speech. We now know that the FBI was spying on King and his counterparts in an effort to discredit him and the movement as a whole. It makes me wonder how far the civil rights movement would have come if the FBI had had the technology to monitor communications that exists today.

If that doesn’t convince you, then I’d point out that for decades we in the west have vilified authoritarian regimes that would use these tactics, like the Stasi in East Germany, the KGB in the USSR or the Communist Party of China. The fact is that less than a generation ago we were willing to go to war – at least in part – over the degree of control a state should have over its citizens. Has our moral compass shifted that much in the years since the Berlin Wall fell?

I’m not a lawyer, but am I wrong in thinking that, in Canada, NSA-like surveillance would be a violation of our Charter rights? After all, under the Charter of Rights and Freedoms, Canadians have the right to be free from unreasonable searches and seizures of their property and their personal information. The determination of what is reasonable is up to a judge to make – it is not something to be carried out without judicial oversight.

Two quick side notes.

Interestingly, Canadians are a lot less forgiving when it comes to the private sector tracking us online. Only 20 per cent of Canadians are willing to give up their Internet privacy if it would help businesses they deal with provide them with information about new products or sales they might be interested in.

When it comes to online surveillance, we’re not that different from our neighbours to the south. A similar survey conducted in the United States found that 45 per cent of Americans think it’s acceptable for their government to monitor emails to prevent terrorist attacks.

To read the entire results of the survey and the survey methodology, visit our website (PDF).

I believe we need a national dialogue about online surveillance in Canada. While I believe offline surveillance – like wiretaps – provides a precedent for online surveillance, the sheer size and ubiquity of Internet communications is a bit of a game changer, so to speak. Never before has a government had the ability to monitor so much activity of so many people. Technology may have given governments the means to monitor us online, but that doesn’t mean we have given governments the right to do so.

I’m repeating the questions I asked in my last post:

Is it that we don’t care, or that we don’t understand, or has our moral compass shifted enough in the past two decades that we’re now okay with governments tracking our every move?

I’d like to hear your thoughts – why do Canadians seem so complacent with government surveillance?


NSA Internet surveillance: where is the outrage?

In my last post I discussed how, with the NSA’s PRISM surveillance program, the United States has likely unilaterally killed the Internet as we know it.

It didn’t have to happen. A series of events led us to a point where a democratic government – the self-professed leader of the free world – feels it can carry out activities like this with impunity.

The Internet is a new entity. From a public policy and legislative perspective, we’re just figuring it out. In Canada, we’re struggling with how to deal with cyber-bullying, and globally we’re redefining copyright in light of the Internet. In terms of a disruptive technology, the Internet is about as big as it gets. That said, there are some activities for which there is offline precedent, and I think most of us would argue that surveillance is one of them.

Governments – even transparent, democratic ones – have always engaged in surveillance activities. They are sometimes an unfortunate necessity to maintain law and order. However, wiretaps have had a high degree of judicial oversight. In Canada, police need to meet a higher standard to obtain a wiretap warrant than a regular warrant. It’s the same for opening a private citizen’s mail, and for a host of other surveillance techniques.

And as a society, we have long recognized the seriousness of these activities.

In fact, outrage at RCMP activities in the 1970s, including unauthorized mail openings and electronic surveillance without warrants, resulted in the Royal Commission into Certain Activities of the Royal Canadian Mounted Police (the McDonald Commission), and ultimately the creation of the Canadian Security Intelligence Service (CSIS).

It’s not long ago that, in the west at least, we found this type of activity so repugnant that we were willing to go to war over it. Stasi-like surveillance, and what it meant for personal freedom, was at the core of the Cold War. It was, in essence, a conflict fought over the level of state control over an individual’s freedom.

Now, we seem complacent about government monitoring of our activities, even if it is Stasi-like.

The fact is we know that the NSA is copying virtually every message sent from the U.S. to anywhere overseas. Cell phone data, Facebook updates, Google searches, emails – pretty much all communications – are tracked and stored by the U.S. government. And in case you thought you were safe because you’re Canadian, if you use any of these services your data is tracked and stored even if you reside in Canada. Social networking sites like Facebook store users’ data on servers in the U.S., and much of Canada’s Internet traffic transits through the U.S. even if the final destination is elsewhere (this is something CIRA has been actively working to change – see this).

Let me be clear about one thing. It’s not that governments should not have the power to monitor citizens under certain circumstances and with the appropriate oversight – it’s an unfortunate necessity to maintain law and order. But we’re not talking about surveillance with appropriate oversight. We’re talking about an opaque and deliberate system to gather and monitor the activities and communications of potentially everyone who is online.

Why should a government feel it is above judicial oversight to monitor its citizens’ activities, just because they’re online?

Because apparently, we’re fine with it. At the very least, we’re complacent with it.

I could write an entire post about why we should care, but others have already done so, and the reasons are both many and compelling.

Not only should we care, in my opinion we should be outraged.

Is it that we don’t care, or that we don’t understand, or has our moral compass shifted enough in the past two decades that we’re now okay with governments tracking our every move?

I’d like to hear your thoughts – why do we seem so complacent with government surveillance?

In my next post, I’ll discuss the research we carried out with Ipsos Reid to better understand what Canadians think about the PRISM program, and governments monitoring their online activities.


The Internet as we know it is dead.

The Internet as we know it is dead.

Not long ago, I would have argued the opposite to be true.

The free and open Internet was in what I felt to be a strong position just last month. The open democratic nations of the world had just come off the success of defending the multi-stakeholder model at the World Conference on International Telecommunications (WCIT-12), coordinated by the International Telecommunication Union (ITU). And, as I discussed in a previous post, I was cautiously optimistic about the future of ICANN, the organization at the centre of the Internet governance ecosystem.

These were both signs that the Internet – and in particular the Internet governance ecosystem – was reaching a strong and healthy point. Now we are faced with the fact that the world’s most powerful democracy, the United States, has been systematically monitoring the Internet activity of both its own citizens and those of other nations.

The implications of the NSA’s PRISM surveillance program for the Internet governance world can best be explained by revisiting the events of WCIT-12.

WCIT-12 was a landmark meeting in the history of the Internet. A new version of the regulations that govern telecommunications activities globally was proposed, and it was soundly rejected by many of the world’s democratic nations because of provisions that would have extended the reach of the ITU over the Internet. This rejection was widely heralded as an endorsement of the current governance model applied to the Internet – the multi-stakeholder model – over a multi-lateral, United Nations-style model of governance.

I have articulated my reasons for supporting the multi-stakeholder model over a multi-lateral one many times, but my argument boils down to this: no other governance model puts the people and organizations that directly benefit from the Internet’s success in charge of it. The multi-stakeholder model is the only governance model that can support the development of a free and open Internet with the potential to provide the world with all of the benefits it has to offer. Other models, including the multi-lateral model, are too open to influence by issues and actors that exist outside of the Internet ecosystem. Full stop.

WCIT-12 is just one example in a decade-long struggle for control of the Internet between – and, yes, this is an over-simplification but it works – open and transparent democratic nations and more authoritarian nations.

One of the main concerns at WCIT-12 – and voiced by the U.S. – was that new regulations could enable a system where (as I blogged at the time) “countries which do not have a strong commitment to human rights and democracy” would be able to put much of the global Internet traffic under significant surveillance.

Fast forward eight months, and we’re dealing with the news about the PRISM surveillance program. The irony that the country that led the charge against the new regulations, in part for fear they would give nations the authority to monitor Internet activity, has been doing exactly that is, of course, palpable.

Beyond the irony, the implications of the PRISM program run deep. The fact is the United States government has unilaterally invalidated the argument that the Internet must remain free and open for the good of the global community. While the U.S. has been doing its best to ensure nations are unable to monitor Internet activity, it has been working with the private sector in an effort to gather and monitor targeted Internet activity.

At the very least, this is a nail in the coffin of the multi-stakeholder model. The U.S. has effectively paved the way for the next attack on it. The result?

Eventually, a Balkanized Internet. An Internet that no longer provides access to global markets for businesses in the developing world. A global Internet that excludes people fighting authoritarian regimes for basic human rights. An Internet that is no longer the incredible driver of positive economic and social change.

This is the first blog in a series of three I will be posting on this subject. In my next post, I will be discussing the apparent apathy among the citizens of open and democratic nations (Canada included) with regard to online surveillance.


The Canadian Internet Forum and CIRA’s AGM

On September 16, CIRA will host an important event in Montréal and I hope you can join us. For the first time, we are combining the Canadian Internet Forum (CIF) and CIRA’s Annual General Meeting (AGM). This is also the first time we will hold the CIF outside of Ottawa. By combining these two events, we are able to bring the important discussions the CIF has become known for to a new audience.

We have an exciting line-up, including panels on cyber-security policy in Canada and domestic Internet governance. We have also added a second, business-related stream that includes sessions on getting your business online and web analytics for your online business. Confirmed panellists include leading Canadian legal, security, policy and business experts. You can view the full list here.

In addition, Paul Brigner, Regional Bureau Director, North America at the Internet Society, will deliver an overview of the international Internet governance ecosystem.

I’m pleased to say that we have confirmed a couple of very good keynote speakers, including:

-  Avinash Kaushik, a “Digital Marketing Evangelist, Google, Co-founder, Market Motive, and Bestselling Author.”

-  Virginia Heffernan, former New York Times columnist and author.

More speakers may be announced as we get closer to the event. And, of course, we will also be taking care of CIRA-related business at our AGM.

As always, the event is free and open for all to attend. Registration is now open. More details are available here.

I’d also like to remind you that our Board of Directors election process is underway. The Member nomination period, during which you can nominate either yourself or someone else to serve on our Board, wraps up today (August 12) at 6 p.m. ET. For more information, please visit our elections website. Some changes to our election rules are highlighted there – I encourage you to review them.

I hope to see you in Montréal.


ICANN 47: Policy debates and cautious optimism

A strange feeling came over me after I got home from ICANN 47 in Durban, something I haven’t felt after an ICANN meeting before.

The feeling? Optimism.

I’m optimistic, albeit cautiously so, about the future of ICANN and by extension, hopefully that of Internet governance in general.

If the meeting in Durban is any indication, we’ve come a long way in the past year. There were no outbursts of the type that characterized previous meetings. There were no indicted war criminals invited to dinner. And though it may be too soon to say for sure, I don’t think a letter will be sent to the government of South Africa about the quality of the hotel.

For a number of reasons, I think we have reached a turning point in the effectiveness of ICANN in the Internet governance ecosystem.

The meeting in Durban was very well organized. Although much of the heavy lifting in organizing an ICANN meeting is done by the local host, I can tell you from my experience as the host of ICANN 45 that the staff at ICANN plays a large – and important – role in making sure the event is a success. Durban certainly was.

Difficult work was even accomplished – both the Registrar Accreditation Agreement (RAA) and the Registry Agreement (RA) were approved – a big step forward and a sign that ICANN has moved beyond some of the past tensions and sticking points with regard to the launch of the new gTLDs. On the importance of the RAA and the RA, Fadi Chehadé said it best: “We can see the last mile before the first new TLD is activated in the Internet’s root.”

I also have to give a nod to Fadi for putting together what appears to be a high-performing team – something I identified as a key priority for Beckstrom’s replacement last summer. In a relatively short period of time, key positions have been filled, and as far as I can tell from both the quantity and the quality of the work coming out of ICANN, the team has gelled under Fadi’s leadership.

I found the tone of the dialogue to be more respectful.

The Governmental Advisory Committee (GAC), a body that has a history of not being heard or understood, received credible, timely feedback from the ICANN Board about its advice on new gTLDs and the New gTLD Program. This is the sign of (dare I say it?) a respectful dialogue between the two bodies – not something I expected based on past experiences.

While I’m on the subject of new gTLDs, one of the striking events for me at ICANN 47 was the handling of the .AMAZON application. The GAC’s advice was clear – the .AMAZON TLD should not be awarded to Amazon.com, Inc. as it can be confused with the geographic region – and in all likelihood, ICANN will follow this advice.

While this is the right decision in my opinion, it didn’t stop the Amazon.com, Inc. representatives from publicly criticizing ICANN and the entire gTLD process.

Not surprising. However, what was surprising was how we were all able to listen, and then move on with the business of the day. Keep in mind, this is a large and powerful multinational corporation (which happens to hold the trademark for the name) calling out ICANN. It’s a sign that the organization has matured.

Having said all of this, I must stress that ICANN is not above criticism. Even if we’ve reached a turning point, the fact remains it will be a while before the old tensions dissipate and all stakeholders show up to meetings in the spirit of openness and trust. But, we have to give credit where credit is due. ICANN has been making a significant effort to reach out to the broader community, and is generally listening to feedback and adapting to it where applicable. It showed in Durban.

The fact is, it was a relatively boring meeting, but boring in a good way. The fireworks that characterized past meetings have been replaced with substantive and respectful dialogue on policy issues. And you know what? That’s exactly the way it should be – it shows the multi-stakeholder model can work.

I’m looking forward to ICANN 48 in Buenos Aires.


PRISM, Internet Exchange Points and Canada

As the operator of the registry for the .CA top-level domain and the domain name system (DNS) infrastructure that supports it, I am uncomfortable, though not surprised, with the knowledge that a government is monitoring the activities of Internet users.

And while recent reports about the National Security Agency’s top-secret PRISM program actively monitoring Internet users in the United States and (by default) citizens of other countries – Canada included – are on the front page of newspapers around the world, Internet surveillance is not exactly new. It has been happening in one form or another since the early days of the commercial Internet in the mid-1990s.

However, the fact that online surveillance isn’t new does not: a) make it right, or b) mean that we shouldn’t do our best to make sure it doesn’t happen.

The Internet is far too important for us to become complacent. No other technological invention of the past millennium has had the social and economic effect that the Internet has had.

That said, for all of its complexity, the Internet is really driven by a series of transactions – either the exchange of information in personal communications or the exchange of technical/informational communications at the DNS level. Those transactions work because there is a high degree of trust among the parties that operate the Internet.
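
As a simple illustration of the second kind of transaction, here is a minimal sketch (in Python; the domain name is purely a placeholder) of the routine DNS lookup that precedes almost every connection a user makes.

    import socket

    # Illustrative only: the DNS-level "transaction" behind an ordinary
    # connection -- resolving a name to the addresses a client would use.
    # The domain is a placeholder.
    domain = "example.ca"
    for family, _, _, _, sockaddr in socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        print(domain, "resolves to", label, sockaddr[0])

Every one of these lookups proceeds on the assumption that no unauthorized party is listening in or tampering with the answer.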

Trust is the very foundation of the Internet.

Having an unknown, unauthorized party access what are essentially private communications erodes that trust, and with it, the very foundation of what makes the Internet work. I believe eroding that trust – and with it the tremendous social and economic benefit the Internet brings – is too high a price to pay for national security.

It reminds me of this quote from Benjamin Franklin: “Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety.”

There is one way to protect ourselves, to some degree, from having our data fall under the jurisdiction of a foreign country. We must ensure more of it travels to its destination via Canadian routes.

Many Canadians may not realize that much of Canada’s domestic Internet traffic flows outside of the country. This is simply the way the Internet works. For example, a single email can be broken down into thousands of data packets, and each packet will take the fastest and most efficient route to its destination, where the email is reassembled. The majority of the time, that route involves travel through another country.
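
A rough way to see this for yourself is sketched below (in Python, purely for illustration; the destination hostname is a placeholder and the traceroute utility must be installed). It prints the hops a packet takes from your machine to a destination; on many Canadian connections, a number of those hops belong to U.S. networks.

    import subprocess

    # Illustrative only: list the hops between this machine and a destination.
    # The hostname is a placeholder. Whois or reverse-DNS lookups on the hop
    # addresses (not shown here) reveal which hops sit on U.S. networks.
    destination = "example.ca"
    result = subprocess.run(
        ["traceroute", "-n", destination],
        capture_output=True,
        text=True,
        check=False,
    )
    print(result.stdout)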

In our case, this often means our confidential data travels through the U.S., and is subject to any surveillance and laws in that jurisdiction.

Historically, it was often more economical for Canadian Internet Service Providers to move domestic traffic over established international links. Canada’s Internet is therefore heavily reliant on foreign infrastructure, and as a result, much of our Internet traffic flows through other countries.

In light of programs like the NSA’s PRISM, I do not believe this is acceptable any longer. It is time for Canada to repatriate its Internet traffic to the greatest extent possible, given the distributed nature of the DNS.

In my informed opinion, to do this will require more Internet Exchange Points, or IXPs, in Canada. IXPs are large data switches that allow Internet users in the same geographic area to connect directly with each other. An IXP allows local network traffic to take shorter, faster paths between member networks, ensuring more of that traffic remains local. Canada currently has fewer than five IXPs, well below the numbers our international counterparts have (the U.S., for example, has more than 80).

By building a robust Canadian Internet infrastructure, including a nation-wide fabric of IXPs, we can ensure more Canadian traffic stays in Canada, and is therefore only subject to Canadian law. We will also ensure that the trust that underlies the Internet in Canada remains strong, and we can continue to reap the benefits the Internet offers.


Worlds colliding


A couple of weeks ago, the International Telecommunication Union (ITU) hosted the World Telecommunication and Policy Forum (WTPF), a high-level exchange of views on information and communication technology (ICT) related policy issues (read ‘Internet’).

You may recall that I tend to get a tad suspicious whenever the ITU talks about anything Internet related. To date I haven’t been proven wrong – the fact is the ITU is looking to extend its reach over the Internet.

Unfortunately, once again a proposal – this time from Brazil – was put forward at the WTPF that would result in the ITU exerting some control over the Internet. Titled “Opinion on the Role of Government in the Multistakeholder Framework for Internet Governance,” this proposal received support from more than a handful of member states (including Russia, India, Iran, and Argentina, among many others). It’s worth noting that most developed nations, Canada included, did not support Brazil’s proposal.

On the surface, it looks like the typical scenario of ITU members doing their best to wrest control of the Internet from the U.S.-based ICANN, and in part it is. However, I believe there’s more to the picture than meets the eye.

I believe the driver behind Brazil’s proposal is actually rooted in the Governmental Advisory Committee’s (GAC) communiqué (PDF) coming out of the ICANN meeting in Beijing.

In Beijing, the GAC issued consensus advice on two proposed generic top-level domains, .africa (from DotConnectAfrica) and .gcc (for Middle Eastern Internet users). It did not do so on two other potential ‘geographic’ domain names, .patagonia and .amazon, for which there are also multiple proposals.

Rumour has it that it was the U.S. members of the GAC that did not go along with the rest of the GAC members, who believed that the geographic proposal for these domain names should be approved. As I understand it, the U.S. instead sided with the trademark holders of the domains in question, resulting in non-consensus advice.

Keep in mind, the ICANN Board has to treat consensus advice from the GAC differently from other advice. It either has to accept the advice or explain why the advice was not accepted. This gives consensus advice more weight than non-consensus advice, which the ICANN Board can accept or reject without giving any explanation.

Will the trademark holders win these gTLDs? Only time will tell. But is it possible that the Brazilian proposal at the WTPF was retaliation against the U.S. for not supporting Brazil’s position on these gTLDs at the GAC?

I believe it is.

What we are witnessing, in my opinion, is the gTLD debate boiling over into the ITU. And I believe this to be a dangerous precedent. The ICANN and ITU worlds are now interrelated.

This entire situation, however, foreshadows what the world of Internet governance would look like if the Internet were governed with a multi-lateral model instead of a multi-stakeholder one: member states acting in their own best interest (as it appears both Brazil and the U.S. are) instead of in the best interest of the Internet.

As I’ve said before, the multi-stakeholder model is a big part of the reason the Internet has been so successful. That’s because the people and organizations that stand to benefit from its success are at the table when decisions about how it develops are made. Therefore, acting in the best interest of the Internet IS acting in your own best interest under the multi-stakeholder model.

We are all aware of how the multi-lateral governance model can get bogged down in this ‘eye for an eye’ diplomacy. It doesn’t help anybody, least of all the free and open Internet.