A comprehensive, technical guide to Tor, onion routing, anonymity layers, and the real architecture behind the internet's hidden network.
Everything that traditional search engines can find and index. This includes news sites, social media platforms, e-commerce stores, and public-facing websites. It represents a surprisingly small fraction of total web content.
Content that isn't indexed by search engines but isn't inherently sinister — your email inbox, banking portals, academic databases, private corporate intranets, medical records, and content behind paywalls. You access the deep web every day.
A small subset of the deep web that requires special software — specifically the Tor Browser — to access. Sites here use .onion addresses instead of traditional domains. The dark web exists because of legitimate privacy needs: journalists communicating with sources, activists in authoritarian regimes, and privacy-conscious individuals.
While illegal activity does exist on the dark web, that's also true of the regular internet. The technology itself is neutral — it was originally developed by the U.S. Naval Research Laboratory for secure intelligence communications.
Researchers at the U.S. Naval Research Laboratory — David Goldschlag, Michael Reed, and Paul Syverson — develop the concept of onion routing for protecting intelligence communications online.
Roger Dingledine and Nick Mathewson, along with Syverson, deploy the Tor network alpha. The name "Tor" originally stood for "The Onion Router."
The Naval Research Laboratory releases the Tor source code under a free license. The Electronic Frontier Foundation (EFF) begins funding Tor development.
The Tor Project, Inc. is established as a 501(c)(3) nonprofit organization dedicated to maintaining and advancing the Tor software.
A user-friendly browser based on Firefox is packaged with Tor, making anonymous browsing accessible to non-technical users. The .onion ecosystem begins growing rapidly.
Each node in the Tor circuit has a specific role — each knows certain things and is blind to others. The encryption state between nodes also matters: traffic leaves your client wrapped in three layers of encryption, and each relay peels away exactly one, which determines what each hop can read.
The first relay your traffic touches. It knows your real IP address but cannot see what you're requesting or where traffic ultimately goes. You keep the same guard node for 2–3 months to resist certain profiling attacks.
The anonymous intermediary. It knows only that the guard node sent it traffic and which exit node to forward it to; it can see neither your IP nor your destination. Its entire purpose is to break the link between origin and destination.
The final relay that sends your request to the actual internet. It can see the destination and any unencrypted traffic content, but it has no idea who you are. If you visit an HTTPS site, the exit node only sees encrypted data. For .onion sites, there is no exit node at all.
Key principle: No single node in the Tor circuit knows both the sender and the destination. The guard knows who but not where. The exit knows where but not who. The middle knows neither. This is the fundamental guarantee of Tor's design.
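This split-knowledge property can be sketched in a few lines of Python. This is a toy model for illustration only (the relay names and the client IP are hypothetical), not Tor code:

```python
# Toy model of the split-knowledge property: each relay sees only its
# immediate neighbors in the path, never both endpoints at once.
def circuit_views(client_ip: str, destination: str) -> dict:
    path = [client_ip, "guard", "middle", "exit", destination]
    return {
        node: {"previous_hop": path[i - 1], "next_hop": path[i + 1]}
        for i, node in enumerate(path)
        if node in ("guard", "middle", "exit")
    }

views = circuit_views("203.0.113.7", "example.com")
# The guard sees who (the client IP) but not where; the exit sees where
# but not who; the middle relay sees neither endpoint.
print(views["middle"])  # {'previous_hop': 'guard', 'next_hop': 'exit'}
```

Note that deanonymization requires combining the guard's view with the exit's view — which is exactly the end-to-end correlation attack discussed later.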
An onion service generates a public-private key pair. The .onion address is derived from the public key — for v3 onion services, it's a 56-character base32-encoded string ending in .onion.
The server publishes "introduction points" to the Tor network. When a client wants to connect, both sides build circuits to a mutually agreed "rendezvous point," so neither side reveals their IP to the other.
This means .onion sites never touch the regular internet. There's no DNS, no exit node, no IP exposure. The connection is end-to-end encrypted within Tor.
Onion services provide the strongest anonymity model Tor offers: both parties are hidden. This is used for sensitive applications like SecureDrop (media whistleblowing platform), the Facebook .onion mirror, and ProtonMail's .onion access.
Because both sides build their own circuits that meet at a rendezvous point, an attacker would need to compromise six relays simultaneously (three in each circuit) to deanonymize the connection — a dramatically harder attack than compromising a standard three-node circuit.
v3 addresses: Modern .onion addresses are 56 characters long and use Ed25519 public key cryptography — a significant security upgrade from the older 16-character v2 addresses.
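The v3 address format is fully specified: the address is the base32 encoding of the 32-byte Ed25519 public key, a 2-byte checksum, and a version byte (0x03), where the checksum is the first two bytes of SHA3-256(".onion checksum" || pubkey || version). A standard-library sketch, using an all-zero dummy key since real services derive the key from an Ed25519 keypair:

```python
import base64
import hashlib

def onion_v3_address(pubkey: bytes) -> str:
    """Derive a v3 .onion address from a 32-byte Ed25519 public key."""
    assert len(pubkey) == 32
    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    # 32 + 2 + 1 = 35 bytes -> 56 base32 characters, no padding needed.
    body = base64.b32encode(pubkey + checksum + version).decode().lower()
    return body + ".onion"

# Dummy key for illustration only; never use a predictable key.
addr = onion_v3_address(bytes(32))
print(len(addr.removesuffix(".onion")))  # 56
```

Because the address is derived from the key itself, it is self-authenticating: possessing the matching private key is the only way to operate a service at that address.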
When your Tor client builds a circuit, it negotiates a unique symmetric encryption key with each of the three relays using Diffie-Hellman key exchange. None of these keys are shared between relays.
Before sending your data, the client encrypts the payload three times — first with the exit node's key, then the middle node's key, then the guard node's key. Each relay decrypts one layer with its key and forwards the result.
If the client used a single encryption key, any relay that obtained it could read the traffic. With layered encryption, compromising one relay reveals nothing — you'd need all three keys to reconstruct the original data.
This is also why Tor circuits are rotated every 10 minutes for new connections. Even if an adversary is statistically analyzing traffic patterns, frequent circuit rotation limits how much correlated data they can gather.
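The wrap-and-peel order can be demonstrated with a toy cipher. Real Tor uses AES-CTR with keys negotiated per relay; the XOR keystream below is an insecure stand-in purely to show the layering, since XOR applied twice with the same keystream cancels out:

```python
import hashlib
import itertools

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (XOR with a SHA-256-derived keystream).
    Stands in for AES-CTR to illustrate layering only; not secure."""
    stream = itertools.chain.from_iterable(
        hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        for i in itertools.count()
    )
    return bytes(b ^ k for b, k in zip(data, stream))

keys = {"guard": b"k1", "middle": b"k2", "exit": b"k3"}
payload = b"GET /index.html"

# Client wraps: innermost layer first (exit), outermost last (guard).
cell = payload
for relay in ("exit", "middle", "guard"):
    cell = xor_stream(keys[relay], cell)

# Each relay peels exactly one layer, in path order.
for relay in ("guard", "middle", "exit"):
    cell = xor_stream(keys[relay], cell)

assert cell == payload  # recovered only after all three layers come off
```

After the guard or middle relay peels its layer, the cell is still ciphertext under the remaining keys — which is precisely why compromising a single relay reveals nothing.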
| Configuration | What ISP Sees | What Guard Sees | Risk Profile | Verdict |
|---|---|---|---|---|
| Tor Only | You're connecting to Tor (visible entry node IPs are public) | Your real IP address | ISP knows you use Tor but can't see content | Sufficient for most |
| VPN → Tor | You're using a VPN (doesn't know it's Tor) | VPN's IP address, not yours | VPN provider could log that you use Tor; adds latency | Situational |
| Tor → VPN | You're connecting to Tor | Your real IP address | VPN sees traffic destination; becomes a fixed exit point; defeats Tor's rotation | Not recommended |
| Tor with Bridges | Encrypted traffic to unknown IP (doesn't look like Tor) | Your real IP (bridge is the guard) | Best for hiding Tor usage from ISP without trusting a VPN | Best alternative |
This is a common misconception. Your ISP can see that you're connecting to known Tor entry nodes (the list of guard node IPs is public). Adding a VPN hides the Tor usage from your ISP — they only see VPN traffic.
However, using a VPN is also visible to your ISP, and in some contexts VPN usage itself may draw scrutiny. The more important question is: does your threat model require hiding Tor usage from your ISP?
For most users in democratic countries, ISP visibility of Tor is not a practical concern. For users in restrictive environments where Tor is blocked or surveilled, Tor bridges (especially with pluggable transports like obfs4 or Snowflake) are the recommended solution — not VPNs.
When you use VPN → Tor, you're moving trust from your ISP to your VPN provider. Your VPN now knows your real IP and that you're using Tor. You're betting that your VPN provider is more trustworthy than your ISP — and that they truly keep no logs.
The Tor Project's official stance is nuanced: they don't universally recommend or discourage VPN + Tor, but emphasize that most users don't need it, and misconfiguring it can actually weaken anonymity.
The Tor Project recommends: If you need to hide Tor usage, use bridges with pluggable transports rather than VPNs. Bridges are unlisted entry points, and pluggable transports disguise the traffic so it can't be recognized as Tor — obfs4 makes it look like random data, while Snowflake tunnels it through WebRTC — no VPN trust required.
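As a concrete illustration, an obfs4 bridge is enabled with a few lines in the torrc configuration file. The bridge line itself comes from bridges.torproject.org; the angle-bracket values below are placeholders, not a real bridge, and the obfs4proxy path varies by system:

```
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
Bridge obfs4 <IP>:<PORT> <FINGERPRINT> cert=<CERT> iat-mode=0
```

Tor Browser exposes the same settings through its connection preferences, so most users never edit torrc directly.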
Your ISP can see Tor usage but cannot see what sites you visit, what data you transmit, or your browsing behavior. All traffic between you and the guard node is encrypted.
Websites see the exit node's IP, not yours. Combined with Tor Browser's anti-fingerprinting features (uniform window size, disabled WebGL, a standardized user-agent shared across installs), you blend in with all other Tor users.
Anyone monitoring your Wi-Fi (coffee shop, hotel, university) only sees encrypted Tor connections — no browsing content leaks through.
A powerful adversary monitoring both your entry and destination traffic simultaneously could potentially correlate timing patterns. This is Tor's hardest unsolved problem, but requires nation-state resources.
If the Tor Browser has an unpatched vulnerability, an attacker could execute code that bypasses Tor entirely. This is why keeping Tor Browser updated is essential, and why the security slider exists.
Tor cannot protect you from yourself. Logging into personal accounts, providing identifying information, enabling JavaScript on untrusted sites, or downloading files that execute outside Tor all break anonymity.
If your device is already compromised with malware or a keylogger, Tor is irrelevant — the attacker can see everything before it enters the Tor circuit. Device security is a prerequisite.
An entity that can observe all internet traffic simultaneously — a "global passive adversary," in practice perhaps a coalition of intelligence agencies — could defeat Tor through traffic analysis. Tor explicitly does not claim to defend against this threat model.
Tor Browser includes a security level setting (Standard, Safer, Safest). Higher levels disable JavaScript, remote fonts, and media — dramatically reducing your attack surface at the cost of usability.
Never mix your anonymous and real identities. A single login to a personal account, one revealing search query, or a reused username can permanently link your Tor session to your real identity.
Downloaded files (especially PDFs and Office documents) can contain resources that load outside Tor when opened. Always open downloads in a disconnected environment or use Tails OS, which forces all traffic through Tor.
Consistent connection patterns (same time every day, same session length) can be used to profile users. Varying your usage patterns adds noise to any potential correlation analysis.
For serious anonymity needs, use Tails (amnesic live OS that routes everything through Tor) or Whonix (isolated VM-based Tor gateway). These prevent accidental traffic leaks that a normal OS can cause.
Even encrypted content has metadata — file creation dates, GPS data in images, document author fields. Strip metadata from any files before sharing them anonymously.