Securing web browsing: protecting the Tor network
Philipp Winter, Princeton University
There are more than 865 encryption tools in use worldwide, all addressing different aspects of a common problem. People want to protect information: hard drives from oppressive governments, physical location from stalkers, browsing history from overly curious corporations or phone conversations from nosy neighbors. They all rely on cryptography, a delicate craft that when done properly enables secure communication despite snoopers’ efforts.
However, bad cryptography can open gaping security holes, a fate that has befallen many popular systems. But without technical knowledge and experience, users can’t know the difference between good and bad tools until it’s too late.
One of the most popular cryptographic tools – with two million daily users – is Tor, a network for browsing the Internet anonymously. It relies on a large group of volunteers, some of whom are anonymous, which can raise questions about trusting the system. If expert users and developers had tools to detect suspicious behavior, they could root out problems, improving reliability – and trustworthiness – for everyone.
People use Tor for a wide variety of reasons: to research diseases, protect themselves from domestic abuse, prevent companies from profiling them or circumvent countrywide censorship, just to name a few. Tor does this by decoupling a user’s identity from his or her online activity. For example, when Tor is used, websites such as Facebook cannot learn where a user is physically located, and Internet service providers cannot learn what sites a customer is visiting.
The system works by connecting a user to the intended website over a sequence of encrypted connections through computers that sign up to participate in the network. The first computer in the relay sequence, called an “entry guard,” knows the user’s network address, because it accepts the incoming traffic. But because the content is encrypted, that computer doesn’t know what the user is doing online.
The second computer in the chain doesn’t know where the user is, and merely passes along the traffic to what is called the “exit relay.” That computer decrypts the user’s Internet activity and exchanges data with the unencrypted Internet. The exit relay knows what the user is doing online, but cannot easily identify who is doing it.
Once the exit relay gets the information from the Internet, it encrypts it and sends it back to the previous link in the chain. Each link does the same, until the original computer receives and decrypts the data, displaying it for the user.
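The layered encryption described above can be illustrated with a short sketch. This is a toy model only: the cipher below is a throwaway XOR construction keyed by SHA-256, not the real cryptography Tor uses, and the relay names and keys are invented for illustration. The point is the structure: the client wraps the message in one encryption layer per relay, and each relay peels off exactly one layer.

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed via SHA-256 -- for illustration
    only, NOT the cipher the real Tor protocol uses."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):          # extend keystream as needed
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Hypothetical keys shared between the client and each relay in the chain.
relay_keys = [b"entry-guard-key", b"middle-relay-key", b"exit-relay-key"]
message = b"GET https://example.com/"

# The client adds layers innermost-first, so the exit relay's layer
# is applied first and the entry guard's layer is outermost.
cell = message
for key in reversed(relay_keys):
    cell = toy_cipher(key, cell)

# As the cell travels through the circuit, each relay removes one layer.
# Only after the exit relay's layer is removed is the request readable.
for key in relay_keys:
    cell = toy_cipher(key, cell)

assert cell == message
```

Because each relay holds only its own key, the entry guard can remove its layer but still sees ciphertext, while the exit relay sees the request but not who sent it.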
Trusting the code
The Tor software is developed and distributed by a nonprofit called the Tor Project. People use Tor for free; funding comes from supporters such as individuals, companies, nonprofits and governments. Sensitive to concerns that reliance on big funders might make the public wonder who is really at the controls, the organization is working to improve its financial independence: recently its first crowdfunding campaign raised more than US$200,000.
In addition, the Tor Project has been outspoken about its dedication to privacy, including supporting Apple’s decision not to help the FBI access an encrypted iPhone by building an intentional weakness into the encryption software – which is often called a “backdoor.” The Tor Project declared, “We will never backdoor our software.”
Technically speaking, users can decide whether to trust the Tor system by verifying it independently. The source code is freely available, and the Tor Project encourages people to inspect all ~200,000 lines. A recently created bug bounty program should encourage developers and researchers to identify security problems and tell project programmers about them.
However, most people don’t build their own executable programs from source code. Rather, they use programs provided by developers. How can we evaluate their trustworthiness? Tor’s software releases are signed with official cryptographic signatures, and can be downloaded via encrypted and authenticated connections to assure users they have downloaded genuine Tor software that wasn’t modified by attackers.
In addition, Tor recently made “reproducible builds” possible, which allows volunteers to verify that the executable programs distributed by Tor have not been tampered with. This can assure users that, for example, the Tor Project’s computers that build executable programs are not compromised.
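The idea behind reproducible builds can be sketched in a few lines: if two parties build the same source with the same build recipe, the resulting binaries should be bit-for-bit identical, which can be checked by comparing cryptographic digests. The artifact bytes below are placeholders invented for illustration.

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 digest of a build artifact's raw bytes."""
    return hashlib.sha256(artifact).hexdigest()

# Hypothetical artifacts: the binary the Tor Project distributes, and
# one a volunteer built independently from the same source tree.
official_build = b"\x7fELF...tor-browser..."
volunteer_build = b"\x7fELF...tor-browser..."

# A reproducible build means the two digests match exactly; any
# mismatch is evidence that one of the build machines is compromised
# or that the build is not yet fully deterministic.
print(digest(official_build) == digest(volunteer_build))  # -> True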
Trusting the network
While the software is developed by the Tor Project, the network is run by volunteers around the world, together operating 7,000 relay computers as of May 2016.
Some organizations publicize the fact that they operate one or more relays, but many are run by individual operators who don’t announce their participation. As of May 2016, more than one-third of Tor relays offer no way to get in touch with the operator.
It’s hard to trust a network with so many unknown participants. Just as attackers at coffee shops with open Wi-Fi can intercept network traffic over the air, malicious operators can run exit relays and snoop on Tor users’ traffic.
Finding and removing bad actors
To protect Tor users from these problems, my team and I are developing two free software tools – called exitmap and sybilhunter – that allow the Tor Project to identify and block “bad” relays. Such bad relays could, for example, use outdated Tor relay software, forward network traffic incorrectly or maliciously try to steal Tor users’ passwords.
Exitmap tests exit relays, the thousand or so computers that bridge the gap between the Tor network and the rest of the Internet. It does this by comparing the operations of all the relays. For example, a tester could access Facebook directly – without Tor – and record the digital signature the site uses to assure users they are actually talking to Facebook. Then, running exitmap, the tester would contact Facebook through each of the thousand Tor exit relays, again recording the digital signature. For any Tor relays that deliver a signature different from the one sent directly from Facebook, exitmap raises an alert.
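The core comparison exitmap performs can be sketched as follows. This is a simplified model, not exitmap's actual code: the certificate bytes and relay names are invented, and the step of actually fetching a certificate through a specific Tor exit (which requires a SOCKS connection into the Tor network) is abstracted into the input data.

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded TLS certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def scan_exits(trusted_der: bytes, observed: dict) -> list:
    """Return the exit relays whose observed certificate differs from
    the one recorded over a direct, non-Tor connection."""
    trusted = fingerprint(trusted_der)
    return [exit_id for exit_id, der in observed.items()
            if fingerprint(der) != trusted]

# Hypothetical data: the certificate fetched directly, plus the
# certificates seen through three exit relays (one tampered with).
direct = b"-----FAKE-FACEBOOK-CERT-----"
observed = {
    "exitA": direct,
    "exitB": direct,
    "exitC": b"-----FORGED-CERT-----",  # possible man-in-the-middle
}
print(scan_exits(direct, observed))  # -> ['exitC']
```

A relay that serves a forged certificate is positioned to decrypt the user's traffic, which is exactly the behavior this comparison is designed to catch.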
Our other tool, sybilhunter, seeks out sets of relays that could be under the control of a single person, such as someone positioning her relays to launch an attack. Among other things, sybilhunter can create images that illustrate when Tor relays join and leave the network. Relays that join and leave at the same times might be controlled by a single person.
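One way to formalize the "joined and left at the same time" heuristic is to group relays by their first-seen and last-seen timestamps and flag unusually large groups. The sketch below uses invented relay names and hour-granularity timestamps; the real sybilhunter tolerates small timing offsets and considers other signals as well, which are simplified away here.

```python
from collections import defaultdict

# Hypothetical observations: (relay_id, first_seen_hour, last_seen_hour)
sightings = [
    ("relay1", 100, 250),
    ("relay2", 100, 250),
    ("relay3", 100, 250),  # three relays with identical lifetimes
    ("relay4", 57, 900),
    ("relay5", 120, 400),
]

# Group relays whose join and leave times coincide exactly.
groups = defaultdict(list)
for relay, first, last in sightings:
    groups[(first, last)].append(relay)

# Several relays appearing and disappearing in lockstep suggests
# a single operator -- a candidate Sybil group worth inspecting.
suspicious = [relays for relays in groups.values() if len(relays) >= 3]
print(suspicious)  # -> [['relay1', 'relay2', 'relay3']]
```

Correlated uptime alone doesn't prove malice, so flagged groups are leads for human review rather than automatic blocking.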
Our research has identified a wide variety of misbehaving relays. Some tried to steal users’ login information for popular sites such as Facebook. Equally common were relays subject to countrywide censorship systems that blocked access to certain types of websites, such as pornography. Though those relay operators were not altering traffic themselves, such filtering goes against the Tor network’s philosophy that its use should not involve content filtering. We also discovered a few exit relays that tried to steal Tor users’ money by interfering with Bitcoin virtual currency transactions.
It is important to view these results in proper perspective. While some attacks did appear concerning, misbehaving relays are in the clear minority, and not frequently encountered by Tor users. Even if a user’s randomly selected exit relay turns out to be malicious, other security features in the Tor Browser, such as HTTPS Everywhere, which upgrades connections to encrypted versions of websites where possible, act as safeguards to minimize harm.
Philipp Winter is a member of the Tor Project.