We believe three things, in order.
A vote that can’t be honest is not a vote.
A poll that can be tilted is not a poll.
A result that can’t be verified is not a result.
The platform is built backwards from those three. Everything else followed.
When the poll-owner asks the question, the poll-owner asks for the answer
A poll is supposed to surface what people think. Most online polls surface what the operator wants found — and not because the operator is lying. Online polling has a quiet failure mode that doesn’t require any malice to produce.
They write the question, and a question phrased "Should the city ban X?" gets a different answer than "Should the city allow X?". They list the options, and the option at the top of the list collects more votes than the option at the bottom regardless of content. They invite the audience, and the audience they invite tilts the result before anyone has cast a single vote. They watch the live count, and the live count itself shifts later voters toward whatever side is winning. They decide when to close the poll, and which numbers to highlight when it ends.
None of this requires a poll-owner with bad intent. Honest people, asking honest questions, still produce biased results when their tools don’t push back.
And there’s a second failure mode that doesn’t depend on the poll-owner at all. When voters know — or suspect — that their answer is being tracked, they stop answering honestly. They give the answer that’s socially safe, the one their boss would approve of, the one that won’t end up in a profile attached to their name a decade from now. The poll asks "what do you think?" and gets back "what is safe to say?". The result looks fine; the numbers add up; they just aren’t the numbers the public actually believes.
In a system where the operator counts the votes, controls the question, and quietly knows who voted what, you are trusting the operator on every axis at once. Most people do, by default, because there’s nothing else to trust.
"Anonymous" is a description of the database, not a marketing word — and it goes both directions
Anonymity is a property of architecture, not of policy. A database that has your identity but chooses not to display it doesn’t make you anonymous — it makes you unnamed. We hold to the stricter definition: we don’t have your identity on the voter side, and we don’t have it on the asker’s side either.
Voter side. At our lowest verification tier, you give us nothing voluntarily — no email, no phone, no wallet, no real name. We still hash your IP and device fingerprint so the same person can’t vote a hundred times from a hundred tabs, but those hashes don’t identify you.
At higher verification tiers — where we do need to know something about you to stop fake votes — the boundary is different but the result is the same. Take SMS verification (IMDifferent ID): you prove you have access to a phone number, we hash it, you cast a vote. After that, the platform has the hash — not your phone number. Looking up "who voted for option X?" from inside our database requires an outside input (your actual phone) tested against a one-way hash that doesn’t decode. Nothing in the public surface, the API, or our operational workflow runs that query. The verification step records "this hashed identifier voted"; it never records "this person voted".
The fingerprint of your vote in the daily on-chain record is a one-way hash: a thumbprint, not a copy. Given that thumbprint, no one — not us, not a court order, not a future acquirer — can reconstruct the vote, the voter, or the moment.
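The one-way property described above can be sketched in a few lines. This is an illustration of the general technique, not the platform’s actual scheme: the salt value, field names, and choice of HMAC-SHA256 here are all hypothetical.

```python
import hashlib
import hmac

# Hypothetical server-side salt; the real platform's parameters are not public.
SERVER_SALT = b"example-server-side-salt"

def thumbprint(value: str) -> str:
    """One-way fingerprint: the same input always yields the same hash,
    but the hash cannot be decoded back into the input."""
    return hmac.new(SERVER_SALT, value.encode(), hashlib.sha256).hexdigest()

# What gets stored: the hash, never the phone number itself.
stored = thumbprint("+15551234567")

def same_phone(candidate: str, stored_hash: str) -> bool:
    # Matching requires the outside input (the actual phone number);
    # the stored hash alone reveals nothing.
    return hmac.compare_digest(thumbprint(candidate), stored_hash)

print(same_phone("+15551234567", stored))  # True
print(same_phone("+15550000000", stored))  # False
```

The asymmetry is the point: checking "is this the same phone?" is trivial when you hold the phone number, and infeasible when you hold only the hash.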
Poll-creator side. By default, the poll page does not display who created the poll. You see the question, the options, the category, and the verification tier required — you don’t see "asked by Bob from HR" or "asked by [favorite/least-favorite politician]". That’s deliberate. Once you know who’s asking, the question stops being neutral: voters tilt toward what they think the asker wants to hear, or against it on principle. Hiding the asker is how we stop that pressure from entering the result. The platform’s database does record which account created the poll (so the creator can manage their own polls), but that link does not cross into the public-facing surface.
What we actually do about fake votes
The cost of casting a fake vote should always be greater than the value of casting one. That’s the rule. Everything else is implementation.
A platform that claims "anonymous" without anti-fraud is trivially gamed: one person opens a hundred browser tabs, casts a hundred votes, and calls it democracy. We don’t do that. Every vote enters the system through one of five doors, and each door has a different cost to walk through:
- Anonymous (no door check): Free, fast, easy to game. Used when the poll’s stakes are low and the poll-owner explicitly accepts the trade-off.
- IMDifferent ID (a phone number): One SIM card, one vote. Costs a few dollars per fake vote — enough to stop bot swarms, not enough to stop a determined attacker with a budget.
- Gitcoin Passport (a sybil score): Aggregates signals from existing identity sources to produce a "this is probably one human" badge. Harder to fake at scale than SMS.
- Coinbase Verified (a KYC’d financial account): Faking this means faking a bank-grade identity check. We’re talking thousands of dollars per fake vote, not pennies.
- World ID (proof you have one human iris): Costs a physical visit to a Worldcoin Orb. Effectively impossible to fake at scale; one of the strongest anti-sybil mechanisms publicly available today.
The exact cost of each door is documented; nothing is hidden behind marketing language. The Anonymous tier is anonymous on purpose — it’s not a security failure, it’s a setting the poll-owner chose, and every result lists the tier it was collected at.
Different polls deserve different doors
Friction is a tool, not a virtue. Too little, and you get fake votes. Too much, and you get no votes at all. The right amount is the amount calibrated to what’s actually at stake.
We don’t make every poll go through World ID. Most polls don’t need that level of friction, and the friction itself excludes voters. Asking your neighbors to scan their irises before voting on the dog-park renovation is overkill; asking a foundation’s board to vote on a $5 million budget allocation through an anonymous tier is reckless. Both are wrong, in opposite directions.
So the poll-owner picks the door. A casual public poll opens at IMDifferent ID. A union election runs through Coinbase Verified. An AI-safety referendum that absolutely cannot tolerate sybil attacks runs through World ID only. The verification tier is recorded with the vote, and anyone reviewing the result later can see exactly what kind of audience produced it — and judge the result accordingly.
Why we don’t use Web2 social sign-in, on purpose
Convenience that comes from a third party watching every login is not convenience for you. It is convenience for them, and a slow leak of who you are to companies you didn’t sign up to inform.
Most websites let you sign in with a social-account provider — what the industry calls "Web2 social login". It’s convenient — and that is precisely the problem. Every time you sign in with one of those providers, the provider learns: that user account just logged into this site at this time. Over years, those records become a profile of your civic life: which polls you voted in, when, and how often.
We don’t integrate with any of them. Not because the cryptography is weaker — it isn’t — but because the privacy boundary is in the wrong place by default. Once the social-account provider knows you logged into a polling platform, that information is stored in their systems forever, governed by their privacy policy, not ours.
Instead, when you create an account, we send you a secured link by email — a one-time, time-limited login URL that works once and then expires. No password to leak. No third-party login provider tracking you across sites. Your email provider sees the message; nobody else does. And — important — the secured link doesn’t expose your identity to us either. After the link is used, we hash your email and discard the plain-text version. The hash is enough to log you back in (you type your email again, we hash it the same way, we match the hash) but it isn’t enough for us to know who you are. The email passed through our system; it didn’t stay there.
We can’t email you, even if we wanted to
Privacy that depends on us not changing our minds isn’t privacy — it’s a hope. Real privacy takes the choice away from us, which is the only kind we trust ourselves to keep.
The strongest test of "privacy-by-design" is the boring one: can the company actually contact its users for marketing, retention, "we miss you" emails, or anything else? On most platforms, the answer is yes — they have your plain-text email, ready to be exported into a marketing tool the day they decide to grow faster.
On ours, the answer is no. After secured-link login completes, we hash your email and discard the original. The hash is enough to log you back in — the same re-hash-and-match described above — but it isn’t enough to send you anything. There is no list of plain-text user emails in our database. There is no marketing tool integration. There is no "newsletter we’ll launch when growth slows". This isn’t restraint; it’s architecture. The information simply isn’t there.
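The hash-and-discard flow reduces to a few lines. This is a sketch under stated assumptions — the salt, the in-memory set standing in for a database, and plain salted SHA-256 are all illustrative, not the platform’s real implementation:

```python
import hashlib

# Hypothetical salt; the real platform's value is not public.
SALT = b"illustrative-salt"

def email_hash(email: str) -> str:
    # Normalize, then hash. The plain-text address is never written anywhere.
    return hashlib.sha256(SALT + email.strip().lower().encode()).hexdigest()

accounts: set[str] = set()  # stands in for the persistent store

def signup(email: str) -> None:
    accounts.add(email_hash(email))  # keep the hash, discard the email

def login(email: str) -> bool:
    # Re-typing the email reproduces the same hash, so it matches.
    # Nothing in `accounts` can be reversed into an address to mail.
    return email_hash(email) in accounts

signup("alice@example.com")
print(login("alice@example.com"))  # True
print(login("bob@example.com"))    # False
```

Notice what the store can and cannot do: it can answer "is this a returning user?" but it cannot enumerate addresses to email, because no addresses exist in it.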
The same logic applies to phone numbers, IP addresses, and device fingerprints. They get hashed before they touch persistent storage. They get used to detect repeated abuse from the same source. They cannot be exported, sold, or subpoenaed in plain-text form because they aren’t stored in plain-text form.
And we don’t run ads — so we don’t need any of this. Targeted ads run on knowing who you are: your age, your gender, your rough location, the kinds of polls you click on. A platform that monetizes through ad revenue has to collect demographic signals to function. Removing ads from the picture removed the reason to collect that data in the first place.
Don’t trust this page — verify it
Trust is something earned over years. We did not have years. So we built the platform so that trust isn’t required — only verification is.
The on-chain anchor record proves processing integrity, not voter identity: it shows that no votes were added, removed, or rewritten between the moment they were sealed and the moment you ran the verifier — but it never shows who any individual voter was. Every other claim above is auditable from outside this site:
- The contract. Base mainnet, address fixed and verified on-chain. Read the source, the bytecode, and the anchorer address yourself in the anchor browser.
- The daily anchors. Pick any day in the anchor browser. Take the published Merkle root, run the standalone verifier with any vote ID, and confirm the proof against the on-chain root. No part of this requires our cooperation.
- The wallet identity. The public anchor card at /.well-known/imdifferent-anchor.json declares the chain, contract, and anchor wallet address. The signed audit manifest at /api/audit/manifest and the monthly proof-of-life chain at /.well-known/imdifferent-pol/ are both signed by that same wallet — verify the signatures against the on-chain anchorer address yourself.
- The threat model. The independent verification guide documents what each of these mechanisms proves and, just as importantly, what they don’t.
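The daily-anchor check follows the standard Merkle-proof pattern, which can be sketched independently of any platform tooling. The leaf encoding, hash function, and proof layout below are generic illustrations, not the platform’s documented format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_proof(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Walk a Merkle proof up to the root.
    `proof` is a list of (sibling_hash, sibling_is_left) pairs."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Tiny two-leaf tree for demonstration.
leaf_a, leaf_b = b"vote-id-1", b"vote-id-2"
root = h(h(leaf_a) + h(leaf_b))          # the published daily root, in miniature
proof_for_a = [(h(leaf_b), False)]       # leaf_a's sibling sits on the right

print(verify_proof(leaf_a, proof_for_a, root))       # True
print(verify_proof(b"vote-id-3", proof_for_a, root)) # False: tampered leaf fails
```

The verification needs only the vote ID, the proof, and the on-chain root: no cooperation from the platform, which is exactly the property the list above claims.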
If any step fails to verify, the page is wrong. Tell us at admin@imdifferent.id — we’d rather hear it from you than ship it forward.
