DNSSEC History Project

Welcome to the Internet Society DNSSEC History Project! Please refer to the About page for specific information on the project, its background, and the steps we're taking to gather this history. Once you've read that, start editing the content below!  

IF YOU WOULD LIKE TO CONTRIBUTE TO THIS PROJECT AND UPDATE INFORMATION, please email us at dnssechistory@isoc.org so that we can set you up with an account for editing.

The Domain Name System

The Domain Name System (DNS) is the distributed database that translates human-readable names into IP addresses. Before DNS, administrators exchanged a single file, called the hosts file, which manually mapped human-readable names to IP addresses. As more hosts joined the Internet and the hosts file grew in size, this became untenable. DNS was conceived to solve the problem of mapping names to IP addresses, and vice versa, at scale. 

The first RFCs addressing the problem were RFC 882 and RFC 883, written in 1983. RFC 882 defines the problem, and RFC 883 provides the specification for DNS. Since then, DNS has continually evolved as new needs have arisen. The DNS Security Extensions (DNSSEC) are the latest in a long line of additions to the protocol.

DNS Security Prehistory

Few technologies are more critical to the operation of the Internet than the Domain Name System (DNS). The initial design of DNS did not take security into consideration, which was not unusual for protocols designed in the early 1980s. At the time of its development, and for many years thereafter, DNS functioned without formal security mechanisms, leaving it vulnerable to DNS spoofing and other malicious attacks. 

Determining the Need for DNSSEC

[What drove the work? Big picture issues. Surely this includes the demonstrations of cache poisoning by Steve Bellovin and Tsutomu Shimomura in the early 1990s and the similar work by Dan Kaminsky in 2008, but it may include much other activity.] 


Any protocol is likely to have security vulnerabilities. This is especially true for a protocol that was designed before security was a concern, and one that has been in use during most of the evolution of the Internet. 

DNSSEC was designed to enable authentication of responses from DNS servers: it defines methods for validating those responses to ensure they have not been altered by bad actors. It does not address all threats to DNS, but it provides building blocks for adding data security to DNS and to the applications and services that use it. 


Documented Problems

DNS security issues generally fall under the following categories:

  • Using reverse DNS to impersonate hosts 
  • Software bugs (buffer overflows, bad pointer handling, and so on) 
  • Bad crypto (predictable sequences, forgeable signatures) 
  • Information leaks (exposing cache contents or authoritative data) 
  • Cache poisoning (putting inappropriate data into the cache) 


Cache Poisoning

The earliest known security problem with DNS was DNS cache poisoning, also sometimes called DNS spoofing. DNS cache poisoning happens when a DNS server downstream from the authoritative one returns incorrect data in response to queries for names or IP addresses, because an attacker has 'poisoned' the cache of the downstream server with the malicious response. DNS cache poisoning belongs to a broader group of problems computer scientists classify as cache invalidation problems.

This problem, known to the Computer Science Research Group (CSRG) at U.C. Berkeley since 1989, was finally described in a paper by Steve Bellovin in 1993. Bellovin initially put off publishing the paper out of fear the information would be exploited. DNS cache poisoning was especially serious because rlogin and rsh, popular UNIX programs used at the time for remote administration, used name-based authentication. These programs depended on the information provided by DNS being accurate when authenticating remote users. Specifically, rshd and rlogind looked up the hostname from the IP address and believed it without question. Since the reverse tree is under the control of the remote network administrator, a corrupt site could point to a trusted name.

The CSRG's proposed fix was to do the reverse check: query the returned name and see whether it pointed to the actual address. In technical terms, query for both the PTR and A records, rather than the PTR records alone. However, replies to both types of queries could be spoofed by DNS cache poisoning. Cache poisoning attacks were found independently by Steve Bellovin, then of AT&T Bell Labs, and Tsutomu Shimomura. Bellovin's paper, originally written in 1993, described multiple cache poisoning attacks. He sent a slightly later version of his paper to CERT and had a number of meetings in Washington with Shimomura and assorted CERT and government personnel. No obvious fix was seen. It was fairly obvious that a digital signature-based fix to the DNS would be a good idea, but for multiple reasons this was not pursued at the time. Those reasons included processing requirements, large packet sizes, encryption patents, US export controls on cryptography, and the difficulty of changing the protocol. Bellovin's paper advocated using cryptographic authentication for rlogin/rsh connections, which eventually led to the development of ssh, sftp, scp, and rsync as replacements for rlogin/rsh and telnet.
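The cross-check the CSRG proposed can be sketched in a few lines. This is a minimal illustration of the logic only: resolution is stubbed out with a hypothetical lookup table (the hostname and addresses are invented), where real code would issue a PTR query and then an A query.

```python
# Sketch of the CSRG "reverse check": believe a PTR-derived hostname only
# if that name's forward (A) records include the connecting address.
# A toy table stands in for the resolver; all names/addresses are made up.

FORWARD = {
    "trusted.example.com": {"192.0.2.10"},  # hypothetical A records
}

def forward_confirmed(claimed_name: str, peer_addr: str) -> bool:
    """Accept the reverse-lookup name only if forward data agrees."""
    return peer_addr in FORWARD.get(claimed_name, set())

# A corrupt reverse zone can claim any name for its own address range...
assert not forward_confirmed("trusted.example.com", "203.0.113.66")
# ...but the check passes only when forward and reverse data agree.
assert forward_confirmed("trusted.example.com", "192.0.2.10")
```

As the text observes, both the PTR and A replies could themselves be spoofed via cache poisoning, so this check raises the bar without closing the hole.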

Concern that the DNS cache poisoning vulnerability would become publicly known persisted from 1989 to 1995. Occasionally, there were public postings that seemed to describe the problem. In early 1995, Bellovin found a copy of his paper in a convicted computer criminal's public FTP repository. After that, there was obviously no longer any point in keeping it secret. Bellovin then submitted his paper to the 1995 Usenix Security Symposium as "Using the Domain Name System for System Break-ins". A companion paper by Paul Vixie, "DNS and BIND Security Issues", described a number of ways in which BIND was hardened, including randomized query sequence numbers. 

The need for a better fix remained clear. A National Academies study with Bellovin and Steve Crocker on the committee, Trust in Cyberspace (National Academies Press, 1999, Fred Schneider, ed.) described the DNS as one of two crucial, central vulnerable areas in the Internet. The 2003 National Strategy to Secure Cyberspace repeated the warning, and noted that IETF working groups were working on the issue. Both documents appeared after publication of the first RFC on DNSSEC, RFC 2065, "Domain Name System Security Extensions", (January 1997, Donald Eastlake 3rd and Charles Kaufman). 

The Kaminsky Bug

In 2008, researcher Dan Kaminsky discovered a flaw in DNS that allowed attackers to reliably guess DNS response sequence numbers. The exploit relied on determining the next valid DNS sequence number by tricking a target recursive DNS server into sending numerous bogus DNS queries. The attacker could then poison the cache of the target server by continually injecting malicious responses until one of them was accepted because its sequence number matched. Since all the users of a given ISP usually share the same recursive DNS server, this cache poisoning attack allowed an attacker to trick every user of an ISP into visiting a malicious server. For example, an attacker could target an ISP and replace a popular website with a malicious copy without the users noticing, thereby capturing usernames, passwords, and other sensitive information. Against corporate environments, an attacker could disrupt or monitor operations by rerouting network traffic and capturing emails and other sensitive business data.[i] 
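The arithmetic behind the attack's reliability is simple: a DNS transaction identifier is 16 bits, and Kaminsky's technique let the attacker force the target to issue fresh queries at will, giving a new guess each time. A rough back-of-envelope model (ignoring source port randomization and timing):

```python
# Probability that at least one of n forged replies matches a random
# 16-bit DNS transaction ID, assuming one independent guess per forced query.
ID_SPACE = 2 ** 16

def spoof_success(n_attempts: int) -> float:
    return 1.0 - (1.0 - 1.0 / ID_SPACE) ** n_attempts

assert spoof_success(65536) > 0.63    # ~1 - 1/e after one ID-space of tries
assert spoof_success(300000) > 0.98   # near-certain with sustained flooding
```

The emergency fix deployed in 2008, randomizing the source port as well as the transaction ID, multiplies the guess space by roughly another factor of 2^16, pushing the same calculation out of practical reach.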

Prior to the public release of the bug on 7 August 2008, there were IETF participants who had inside knowledge of the Kaminsky attack. There also were participants whose information came only from press reports or other public sources, such as by looking at patches to open-source resolvers. Initial discussions had slightly surreal qualities, as those who had in-depth knowledge of the exploit discussed the ramifications with those who could only guess. 

The exploit was announced on the IETF's DNS Operations (dnsop) list. Many of the follow-up mails included a mathematical investigation estimating how long a typical exploit would take, discussions of vulnerability checkers, posts of relevant news articles, and links to studies of how quickly resolvers were patched in various environments. 

Another list where the exploit was discussed extensively was the IETF's DNS Extensions (dnsext) list. This is the mailing list for the IETF's DNS Extensions working group, where any proposed changes to the protocols would initially be discussed. A number of proposals were discussed on the list, as were a number of non-proposals. For example, Daniel J. Bernstein discussed an alternative to DNSSEC but did not put it forward as an Internet draft. Most of the proposals centered on ways to make it even more difficult for an attacker to spoof packets. Another common theme was for recursive servers to detect when someone was trying to spoof traffic and then put the resolver into a mode that makes spoofing more difficult. In the end, the dnsext working group chairpersons decided to set a deadline for semi-formal proposals at the end of September 2008.  

Developing DNSSEC

[Original RFCs (1033, 1034, 1035). What were the key design goals and how were they addressed? What were the alternatives?]

Technical Design

  • While some participants in the meeting(??What Meeting??) were interested in protecting against disclosure of DNS data to unauthorized parties, the design team(??Which design team??) made an explicit decision that "DNS data is `public'", and ruled all threats of data disclosure explicitly out of scope for DNSSEC.

  • While some participants were interested in authentication of DNS clients and servers as a basis for access control, this work was also ruled out of scope for DNSSEC per se.  
  • Backwards compatibility and co-existence with "insecure DNS" was listed as an explicit requirement.      
  • The resulting list of desired security services was       
    • data integrity, and 
    • data origin authentication.      
  • The design team noted that a digital signature mechanism would support the desired services.[ii] 
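The two chosen services can be illustrated with a toy digital-signature sketch (textbook RSA with tiny demo primes). Real DNSSEC signs whole RRsets with full-size keys and standardized algorithms; every number and record string below is invented purely for illustration.

```python
import hashlib

# Toy signature scheme showing the two desired services:
# data integrity and data origin authentication.

P, Q = 61, 53            # demo primes (hopelessly small in practice)
N = P * Q                # public modulus (3233)
E = 17                   # public exponent
D = 2753                 # private exponent: E*D = 1 (mod lcm(P-1, Q-1))

def digest(record: bytes) -> int:
    """Hash the record down to an integer in the signing range."""
    return int.from_bytes(hashlib.sha256(record).digest(), "big") % N

def sign(record: bytes) -> int:
    """The zone owner signs with the private exponent."""
    return pow(digest(record), D, N)

def verify(record: bytes, sig: int) -> bool:
    """Any resolver can verify with the public exponent alone."""
    return pow(sig, E, N) == digest(record)

rr = b"www.example.com. 3600 IN A 192.0.2.1"
sig = sign(rr)
assert verify(rr, sig)                 # integrity and origin check out
assert not verify(rr, (sig + 1) % N)   # a forged signature is rejected
```

Because verification needs only the public exponent, any resolver can check the data without being able to forge it, which is exactly the property that makes signatures (rather than encryption) the right tool for a database whose contents are public.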

Implementation and Testing Cycles

[Includes bake-offs and other field tests. (Also see 'Meetings' section below. We leave it to you to decide whether a meeting belongs here or below.)] 

Major Redesigns

[Redesigns in RFC 2535 and related RFCs; current DNSSEC specs - 4033, 4034, 4035] 


Early Adopters

[Early adopters (.SE, .BR, .CZ), their motivation and experience.] 

Bridging the Islands

[Attempts to address the challenge of piecemeal deployment. DLV. ITAR.] 

Role of Governments 

[Key steps inside governments (US and others), including both funding for R&D and organizational initiatives such as FISMA and the OMB memo. http://www.secure64.com/government-o...nssec-solution] 

Public and Policy Awareness Activities

[Outreach campaigns, conferences, presentations, etc. that aimed to encourage deployment.] 

Vendor View

[The view from various vendors, both hardware and software.] 

An industry-wide effort such as this required immense collaboration and coordination across a chain of suppliers, vendors, and service providers. Often, this required that the traditional competitive urge to keep information strictly within one's own boundary be loosened for the greater good and faster adoption. A trusted and neutral third party to coordinate and encourage the collaboration was one way to achieve that; the DNSSEC Coalition is one example among many. 


Specific, focused pieces 

The design of the key ceremony for the signing of the root surely deserves a chapter, for example. The implementation in each of the early TLDs is also worth one or more separate stories. The Swedish effort spanned several years and involved everything from technical development to policy development. The implementation in Namibia was a much lighter-weight project with its own interesting flavor. 

Signing the Root & Controversies

[There were controversies along the way -- what was the right way to do things, whether it was appropriate, or even still needed. Political challenges, roles of various involved parties.] 

Role of the US Government

The Root Zone is currently maintained by Verisign under supervision by the National Telecommunications and Information Administration (NTIA) of the US Department of Commerce (DoC). In its oversight role, the DoC issued a Notice of Inquiry[6] seeking comments on the implementation of DNSSEC in the DNS hierarchy, in particular the Root Zone.

The main question the NoI seeks to answer is who should have the ultimate responsibility for generating and managing the keys that sign the Root Zone. To date, six different models have been drafted.[7][iv] 

Key Management

Probably the biggest procedural challenge is the management of the cryptographic keys[5]. 

As shown above, DNSSEC relies on a hierarchical signing system, which means that every step in the chain needs to be signed. Every domain name should therefore deposit its key at the next step up in the hierarchy. Key management consists of the secure generation, distribution, and storage of keys, and it is a careful balance between usability and security. Keys need to be replaced on a regular basis, but the more often they are changed, the more complex the distribution and storage process becomes. The key management system is the most vulnerable part of the process, as it is deemed easier to intercept a key or impersonate its owner than to break the cryptography itself. 

To complicate matters, most domain name registries (the entities managing top-level domains such as .org) do not have a direct relationship with the owners of domains. The contractual relationship goes through a third party called the domain registrar. DNSSEC deployment will therefore need buy-in from the registrars, as they rely on automated interfaces to deal with their millions of customers. Including key distribution in this process may complicate matters significantly.  

 Political Challenges

The final debate tries to answer the question: who has the ultimate responsibility, who keeps the key that secures the top of the hierarchical verification system? 

This is a larger responsibility than might appear at first sight. Keeping the key that signs the Root Zone file gives control over its content. DNSSEC relies on that control to assure that the contents of the file are secure and that nobody can insert false information. There are, however, disadvantages that come with this benefit: it creates another single point of failure (besides the current root file generation process), and it gives the holder of the key complete power to decide on any changes that may be requested. 

It should be pointed out that the current system has similar weaknesses: a top-level domain can make changes to its entry in the Root Zone only after a redelegation process that can take a couple of weeks and involves the approval of the United States Department of Commerce. While this system guarantees the stability and security of the Root Zone, it also creates a single point of failure in an otherwise highly redundant system and gives control to an organization that is under the political control of one country. 

Friction in Deployment

[Even once agreed, DNSSEC (like many general infrastructure technologies) has had some friction in its path.] 

Olaf Kolkman said he believes the delays can be traced to three “key actors” in the DNS deployment world that created a chicken-and-egg problem. The first is the DNS hierarchy, which includes the root and moving into the enterprises and companies that make the high-level decisions. They are the people who need to sign and deploy DNSSEC so that those in the second group, who maintain ISP infrastructure, can validate the components that are used for validating the name servers. Finally, there are the people who can provide operating system and applications support once everything is in place. 

The chicken-and-egg problem comes in when the DNS hierarchy decision makers ask why they should invest in signing when signatures are not being validated; when the ISPs ask why they should invest in validation when there is nothing to validate; and when the operating system and application support folks ask why they should invest in development when there is so little infrastructure. 

When RFC 2535 was published, DNSSEC seemed ready for deployment. There was a period of code development and standardization, including regular interaction between the two, from 1995 to 2000, but there was very little deployment, except in a few labs. In 2000, the first real deployment trials began. Sweden's .se registry was interested in signing but discovered privacy and scalability issues. As Olaf explained, it wasn't until the registry world was making the jump to early deployment that both the standardization team and the development team noticed that something had been missed. The standardization efforts began anew. By 2008, the privacy and scalability issues had been addressed in the form of a technology known as NSEC3. 

Soon afterward, training became available, and software was being developed to facilitate deployment outside the laboratory setting. Top-level domains started signing, and things started bubbling up. 

“Right now, I think we are at the sweet spot when it comes to deployment,” Olaf said. “We are at the sweet spot of making the Internet as a whole a more secure place.”  

Technical Obstacles

A zone file is the database used to answer the queries sent to the DNS when a user types an address into a browser.


The zone file matches the domain name to an IP address. E.g. www.centr.org translates to 


A zone file for a TLD with a million domain names typically has a size around 100 MB. 


The size of that file is important as it is redistributed several times a day to different locations around the world to keep the zone up to date with the transactions in the registry. In some cases there are protocols in place where only differential updates are sent out, thus preventing the need to transfer the full zone. 


DNSSEC adds a significant amount of information to the zone, roughly quadrupling the original file size (or increasing it tenfold if every domain in the zone implements DNSSEC as well).


In some setups the file is compressed by a factor of 80 before transfer. With DNSSEC signatures included, this factor would be reduced significantly, as compression works well on repetitive text but poorly on random-looking signature data.
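Putting the figures above together shows the operational worry. The fourfold growth and the 80x compression ratio are the numbers quoted in the text; the post-DNSSEC compression factor of 5 is a made-up placeholder, since no exact figure is given:

```python
# Back-of-envelope transfer sizes using the figures quoted in the text.
unsigned_mb = 100                # ~1M-name TLD zone file
signed_mb = unsigned_mb * 4      # DNSSEC roughly quadruples the zone

before = unsigned_mb / 80        # unsigned zone compresses ~80x (per the text)
after = signed_mb / 5            # ASSUMED factor: signatures compress poorly

assert before == 1.25            # MB on the wire before DNSSEC
assert after == 80.0             # MB on the wire after, a ~64x increase
```

Even with an optimistic compression assumption, each of the several daily redistributions of the zone moves tens of times more data than before.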


As a result, uploading and managing the file is more difficult and takes much longer. In an emergency, this could lead to significant delays in solving a problem or restoring the system. 


To keep the signatures secure, keys should be changed periodically, depending on key length and internal policy. As a result, the number of interactions between a domain name owner and a registry would increase substantially. For larger registries and registrars in particular, this could lead to scalability issues.  

Procedural and Implementation Issues

Unless every player in the Internet industry understands the value of DNSSEC and implements it, those that do could see a poor return on investment. It is therefore not easy to argue that the resource-intensive implementation of DNSSEC is necessary when there is no reassurance that the rest of the industry will follow. 

Implementing DNSSEC comes at a cost. For a medium-sized ccTLD such as the Polish operator NASK (a zone with 1,300,000 domains), the total cost of DNSSEC implementation at the registry AND registrar level is estimated at between EUR 1,200,000 and EUR 1,500,000. This includes hardware upgrades, extra connectivity capacity, staff training, and changes to existing registry systems, but does not include upgrades to the IP networks and ISPs' caching DNS servers. 


One of the key barriers to overcome in DNSSEC deployment was industry inertia (resistance to change). This manifested as various pieces of FUD (fear, uncertainty, and doubt) questioning the need for such an upgrade when the current technologies would do just fine. It wasn't until the Kaminsky bug was made public (coincidentally right after the .ORG RSTEP was approved) that the overall inertia was shaken a bit. We cannot always count on the presumptions behind an effort being so readily proven right by a discovery, or worse yet, a catastrophe.  

Business Case

Hand in hand with the inertia above went the need for a business case for deployment. The early adopters undertook the effort because "it is the right thing to do," but the early and late majority waited for a business case to prove that the implementation would be worthwhile (i.e., that the costs would be outweighed by new opportunities, whether differentiation from competitors or actual new services bringing in direct revenue). One part of the business case that was very hard to quantify was the current cost of DNS cache poisoning, traffic hijacking, and the like, because many companies that had been victims of such crimes were unwilling to divulge the fact or the details for fear of damaging their brand. I am not sure this will change dramatically, but some way to quantify the cost of staying "as is" is needed in order to answer critics who claim that the current system is just fine (perhaps with a few minor tweaks). It is the mentality of "if it ain't broke, don't fix it." The challenge is to show, and quantify as much as possible, that the current system is not sustainable as it stands.


(The timeline has now been moved to a separate "Timeline" page.)

What Else?


[Bake-offs, design meetings, interactions with other organizations, etc. Who was there, where was it and when did it take place? What happened, pro or con?] 

In July 2009, the Internet Society organized a panel discussion in Stockholm as part of IETF 75 for the purpose of making the issues associated with the adoption of DNSSEC accessible to a broader audience. See details, including presentation materials and a transcript. Moderated by Leslie Daigle, the Internet Society's chief Internet technology officer, the panel featured a distinguished group of developers, administrators, and Internet infrastructure operators who talked about their experiences with DNSSEC, the problems they've had to overcome, and what they see as next steps toward a more robustly secure Internet. 

On 27 July 2010, the Internet Society convened a panel of experts to talk about the DNS and to give insight into the state of the DNS’s overall security. In addition to the work they do in their day jobs—involving developing, deploying, and operating the DNS and related technologies—the panelists have each been involved in IETF activities as contributors, working group (WG) chairs, and Internet Engineering Steering Group and Internet Architecture Board members. Patrik Fältström, Barry Leiba, Lars-Johan Liman, and Danny McPherson have seen DNS technology issues from all angles. While their comments on the security of DNS are reported elsewhere (see “DNSSEC Doesn’t Mitigate All DNS Threats,” page 8), the panel discussion itself first highlighted a number of ways the DNS has become more than a host name/number lookup system and then emphasized that it will continue to evolve. 

By many of the metrics for protocol success that the IAB has cited in RFC 5218: What Makes for a Successful Protocol? the DNS is a successful protocol. It met a real need, it has allowed incremental deployment, it has had freely available code sources, and it has been openly maintained through IETF processes (such as the DNSOPS WG) for years. Furthermore, it has demonstrated its extensibility (through new uses) and its scalability (with tens of millions of domain names registered across all top-level domains); and with DNSSEC in place, threats are being mitigated. 

The panelists said the DNS is a little hard to position in the layer model of protocol design. Lars-Johan said the DNS is the glue between the transportation and application layers and that much of our use of the Internet (through applications and services) would simply stop without it. With its global footprint, it has become the go-to infrastructure for services that share some need for resource lookup. 

Patrik said that by storing materials in the DNS, which is now even more trustworthy, the DNS can be used for bootstrapping other infrastructure when DNSSEC is deployed. 

Barry outlined a case in point: the DKIM (DomainKeys Identified Mail) work uses the DNS to let domains publish information about their practices in applying signatures to email, and to take responsibility, via digital signatures, for having taken part in the transmission of an email. By storing this information in the DNS, the DNS becomes a critical component in the process of receiving (not just sending) email. 

To be successful in such approaches, Patrik said, it’s sometimes important to store a pointer to data (not the data itself): the DNS infrastructure for any given zone is likely administered separately from the dependent application using it to make data available, and sometimes the referenced data is larger than would reasonably be stored in a DNS record. It’s important to align administrative responsibility and data characteristics to be consistent with the DNS’s own architecture and expectations. 

Lars-Johan emphasized the same point, saying that even though the DNS can hold a lot of data across its namespace, hierarchy is important. For example, caching is important for keeping stress off higher levels. If you’re going to use the DNS to store some application data, it is imperative to ensure that your applications’ data needs—and data reference needs—fit into the DNS model. 

Panelists also discussed the importance of considering operational realities in order to ensure successful protocol extensions. Sometimes, designs that make perfect sense mathematically turn out to be operationally unsupportable. A case in point involved DNS bit string labels (RFC 2673), which worked well in theory but were too complex to consider deploying extensively in operational practice. That case underscores the need to design, conduct test deployments, and consider operational realities before committing to full-scale standardization and deployment. The ability to step back and reevaluate is an important part of overall successful protocol development and evolution.  

In July 2010, the DNSSEC History Project wiki was established (https://wiki.tools.isoc.org/DNSSEC_History_Project). The aim of the project is to collect information—in the forms of anecdotes, design documents, observations, and other contributions—from everyone who has material to share. Please do have a look at the wiki and contribute where you can. The intention of the project is to collect as much raw material as possible, with a view to being able to abstract some coherent lessons learned. These will be important lessons for all protocol development, not just for the DNS or DNSSEC: many of the same hurdles are faced by other broad-scale technologies. 


[Specific moments or episodes that capture something memorable. These need not be critical to the technical design or deployment. They might just be humorous, fun or otherwise part of the human experience.] 


[Quite a few decisions have been made along the way. Can we identify and document these? What alternatives existed? Why was the decision made the way that it was? (Some of these belong better under the larger "themes" list above.)] 


[What would you like to know that you don't know? What would you like to see documented that isn't already?]  

We may be able to find someone to curate a list of questions so we can discharge them when we've gotten the answers. (SDC: "The answers"?? No such thing! One person will write what he thinks is the answer and the next person will augment, argue or contradict. That's a good thing. In this context, I have in mind discharging the question as soon as there's an answered written down that seems to be a sensible response to the question. Since the whole wiki is open for editing at any time, there's always room to accommodate further contributions.)  


[Who has contributed to DNSSEC over the past twenty years? Let's use this section as a way of asking each person to contribute to this history. (In a separate section, perhaps to be added as a "theme," we can record the contributions of each person. The purpose here is to reach out to each individual and ask for his or her contribution. We can start with the list of names ISOC has been collecting for the recognition at the IESG Plenary, Wednesday, July 28, 2010 in Maastricht, NL.] 

Recent DNS Security Work

The IETF currently has two working groups (WGs) dedicated to DNS issues:  

  • The DNS Operations WG (DNSOP), which is an ongoing group dedicated to nonprotocol aspects of the DNS, such as DNSSEC best practices and root server recommendations
  • The DNS-based Authentication of Named Entities WG (DANE), a group focused on using DNS and DNSSEC to distribute information about public keys associated with Internet services (the DANE protocol, RFC 6698).

Previously, there was another working group focused on extensions to DNS.  This working group concluded its work in 2013.

  • The DNS Extensions WG (DNSEXT) documented changes to the DNS protocol, such as the DNSSEC RFCs and explanations of how wildcards work.  

All three groups have done security-related work. For example, draft-ietf-dnsop-reflectors-are-evil-06.txt was approved for publication as a best current practice, and work on DNSSEC trust anchor configuration and maintenance continues in the dnsop working group. Refinements of the DNSSEC protocol (new hash algorithms in the wake of discovered weaknesses in existing hash algorithms, clarifications of DNSSEC) have been evaluated, and a draft describing how to make the DNS more resilient against forged answers (draft-ietf-dnsext-forgery-resilience-*.txt) was already under discussion before Kaminsky revealed the problems he discovered. 


IETF Journal Volume 4 Issue 2 (October 2008) 

IETF Journal Volume 5, Issue 2 (September 2009) 

IETF Journal Volume 6, Issue 2 (November 2010) 

NLNet Labs – A Short History of DNSSEC - http://nlnetlabs.nl/projects/dnssec/history.html 

1. nominum.com/history.php
2. Bellovin, "Using the Domain Name System for System Break-Ins," 1995
3. NLnet Labs .nl.nl experiment: C'T article. "DNSSEC in NL" is the final report about this experiment.
4. DNSSEC in NL: secreg-report.pdf