Published: April 9, 2026 | Updated: April 9, 2026 | Reading Time: 10 min
About the Author
Rohan Verma is a casual gaming writer and AI tools enthusiast based in Mumbai. He has spent 40+ hours playing What Beats Rock across desktop and mobile since the game launched in July 2024, documenting which answers the AI accepts, which ones it rejects, and what patterns emerge across long chains. He has reached a personal best streak of 47 consecutive accepted answers and regularly participates in the game’s Reddit community at r/WebGames. His writing covers browser games, AI tools, and digital entertainment.
Why Read This Guide Before Playing
Most What Beats Rock guides hand out long lists of answers without telling readers which ones the AI actually accepts or why certain answers cause chains to collapse. Rohan tested every category in this guide through live gameplay sessions in March and April 2026. Where answers were rejected, that is noted. Where specific phrasing worked better than vague phrasing, that detail is included too.
The goal here is simple: help players build longer chains, avoid dead ends, and actually understand how the AI thinks — not just copy a list and hope for the best.
What Is the What Beats Rock Game?
What Beats Rock is a browser-based AI game developed by Khoi Le and Kyle Gian, released in July 2024 at whatbeatsrock.com. The game uses a Large Language Model (LLM) — specifically GPT-4o — to judge whether a player’s answer logically defeats the previous one. If you are curious how it compares to the original hand game, this breakdown of what beats rock in rock paper scissors covers the traditional rules and how the AI version expands on them.
Here is how a typical session starts:
- The screen shows: "Rock"
- Player types: "Hammer"
- AI accepts: "A hammer can break rock through impact force."
- The screen now shows: "Hammer"
- Player types: "Rust"
- AI accepts: "Rust gradually degrades metal tools over time."
The chain continues until the AI rejects an answer. The player’s score is the total number of accepted answers in one chain.
Key Facts About the Game
| Detail | Info |
|---|---|
| Developers | Khoi Le and Kyle Gian |
| Launch date | July 2024 |
| AI model used | GPT-4o (LLM) |
| Platform | Browser-based (mobile and desktop) |
| App availability | iOS App Store and Google Play |
| Leaderboard | Weekly reset — no permanent records |
| Cost | Free to play |
The leaderboard resets every week, so no permanent world record exists. Community reports of chains exceeding 150 answers come from collaborative team sessions documented in October 2024, not individually verified solo runs.
How the AI Actually Judges Answers
Understanding the AI’s logic is the real foundation of getting better at this game. The LLM does not follow a rulebook — it evaluates logical coherence in real time. That means two things matter above all else:
1. The relationship must be clear. “Dynamite beats hammer” works because dynamite destroys tools. “Purple beats hammer” fails because there is no logical relationship between a color and a tool.
2. Specificity beats vague generality. In testing, “water” sometimes gets rejected mid-chain as too similar to earlier answers, while “pressurized water jet” gets accepted because it adds a distinct mechanism. The more specific the answer, the better the AI can evaluate it.
The AI also tracks context throughout the session. It remembers earlier answers and rejects ones that feel like a repeat of a concept already used, even if the exact word is different.
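The judging loop described above can be sketched in a few lines of Python. This is purely an illustration of the idea, not the game's actual implementation: the prompt wording, the function names, and the naive word-overlap duplicate check are all assumptions made for this sketch.

```python
# Hypothetical sketch of an LLM-judged chain game -- NOT the real
# whatbeatsrock.com code. Prompt text and duplicate heuristic are assumptions.

def build_judge_prompt(chain, candidate):
    """Assemble a judge prompt that includes the full chain as context,
    so the model can reject near-duplicates of earlier concepts."""
    history = " -> ".join(chain)
    return (
        f"Chain so far: {history}\n"
        f"Does '{candidate}' logically beat '{chain[-1]}'? "
        "Reject it if the relationship is unclear or if it repeats "
        "a concept already used earlier in the chain."
    )

def is_near_duplicate(chain, candidate):
    """Toy stand-in for the model's context tracking: flag candidates
    that share a word with any earlier answer in the chain."""
    cand_words = {w.lower() for w in candidate.split()}
    for answer in chain:
        if cand_words & {w.lower() for w in answer.split()}:
            return True
    return False

chain = ["Rock", "Hammer", "Rust"]
print(build_judge_prompt(chain, "Time"))
print(is_near_duplicate(chain, "bigger hammer"))  # True: repeats "hammer"
print(is_near_duplicate(chain, "Time"))           # False: new concept
```

In the real game the second check happens inside the model itself, which is why rejections like "bigger hammer" come with an explanation rather than a silent failure.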
60+ Answers That Work — Organized by Difficulty
The answers below come from real gameplay testing in March and April 2026. Each section includes notes on what worked, what got rejected, and how to phrase answers for the best chance of acceptance. For a quick-reference version of the most reliable answers, the What Beats Rock answers cheat sheet is a useful companion to keep open during a session.
Beginner Answers: Start With These
These answers build a logical foundation early in a chain. They work reliably because the relationships are clear and well-established.
Physical tools that break or cut rock:
- Hammer (breaks rock through impact)
- Pickaxe (designed specifically for rock)
- Drill (penetrates rock with rotation and force)
- Chisel (precise rock carving — works well followed by “sculptor”)
- Sledgehammer (overwhelming blunt force)
- Jackhammer (industrial pneumatic drilling)
- Hydraulic press (extreme compressive force)
Testing note: “Hammer” and “pickaxe” both worked consistently. Submitting “bigger hammer” right after “hammer” got rejected — the AI flagged it as a non-distinct escalation. Use a different category before returning to tools.
Natural elements that erode or melt rock:
- Water (erosion over long periods)
- Lava (melts rock into molten form)
- Ice (freeze-thaw cycles crack rock)
- Acid rain (chemical weathering)
Testing note: “Water” alone sometimes gets challenged mid-chain. “Pressurized water jet” or “acid rain” received cleaner acceptances because the mechanism is more specific.
Natural forces:
- Wind (weathering and erosion)
- Gravity (pulls and crushes)
- Pressure (geological compression over time)
Intermediate Answers: Introduce Abstract Thinking
Once physical tools are established in the chain, shift toward processes and events. These answers work because they represent natural or human-made forces at a larger scale.
Natural disasters and events:
- Earthquake (massive ground rupture)
- Volcano (extreme heat breaks down rock)
- Tsunami (water force at scale)
- Avalanche (overwhelming mass and momentum)
- Meteor strike (cosmic impact energy)
Human technology:
- Dynamite (controlled explosive — excellent transition from tools to technology)
- Nuclear bomb (ultimate physical destruction)
- Laser cutter (focused energy beam)
- Excavator (heavy machinery for large-scale breaking)
Testing note: “Nuclear bomb” works well but creates a difficult next step. Save it for when other categories feel exhausted — do not use it before turn 20.
Time and process-based answers:
- Erosion (time-based breakdown)
- Weathering (gradual surface degradation)
- Entropy (everything degrades over infinite time)
- Geological time (ultimate slow process)
These answers work as transition bridges. They shift the chain from physical objects toward abstract concepts without making the logic jump feel sudden.
Biological answers:
- Tree roots (crack through solid rock over decades)
- Bacteria (breaks down minerals over geological time)
- Lichen (produces acids that slowly dissolve rock surfaces)
- Mold (organic breakdown of surrounding material)
Testing note: “Tree roots” was one of the most reliably accepted biological answers. “Humans with tools” also worked because it combines intelligence and agency with physical force.
Advanced Answers: Philosophical and Scientific Concepts
These answers move beyond the physical world. They work when framed with a clear logical relationship — not as vague assertions.
Scientific principles:
- Gravity (shapes all physical matter)
- Thermodynamics (heat transfer breaks down all structures)
- Quantum mechanics (atomic-level disruption)
- Dark matter (theoretical mass interaction)
- Antimatter (annihilates matter on contact)
Testing note: Abstract science answers need framing. “Quantum mechanics” alone got rejected once — “quantum tunneling through rock’s molecular structure” got accepted. Add the mechanism.
Philosophical concepts:
- Consciousness (awareness can conceive of and manipulate matter — matter cannot think)
- Human will (directs tools and intentions that reshape rock)
- Thought (immaterial force guiding physical action)
- Perception (determines the meaning and use of physical matter)
Cosmic concepts:
- Black hole (gravitational force strong enough to compress any matter)
- Big Bang (origin event that created all matter including rock)
- Heat death of the universe (ultimate entropy removes all structure)
- Spacetime (the framework containing all physical matter)
Expert Answers: When the Chain Gets Long
These answers work best after turn 40 or 50, when most physical and natural categories are exhausted. Use them as escapes, not starting points.
Paradoxes (use carefully):
- The liar’s paradox (a self-contradicting statement that breaks conventional logic)
- The Ship of Theseus (identity dissolving through gradual replacement)
- An unstoppable force meeting an immovable object (logical impossibility transcending rules)
Testing note: Paradoxes work well but close off further options quickly. After a paradox, the meta-game tier is usually the only exit.
Meta-game answers:
- The game itself (the framework containing all possible answers)
- The player (the person generating answers, directing the whole chain)
- The concept of “beating” (the relationship the entire game is built on)
- The rules of logic (the system evaluating every answer)
These are the chain’s escape hatches. They almost always get accepted because they operate on a completely different level from every physical or conceptual answer before them.
Answers That Commonly Fail — And Why
Knowing what to avoid saves chains from ending unnecessarily.
| Answer | Why It Often Fails |
|---|---|
| “God” | Creates an immediate dead end — almost nothing is accepted as beating it |
| “Infinity” | Same problem — the chain becomes logically trapped |
| “Everything” | Too vague — no specific mechanism |
| “Nothing” | AI rejects it as a non-answer in most contexts |
| “Bigger rock” | Flagged as a non-distinct escalation |
| Repeating a previous concept | AI tracks context and rejects near-duplicates |
| Vague philosophical claims | “Love beats rock” fails without a logical explanation |
The key insight here: powerful ≠ good strategy. Submitting “omnipotence” early feels clever but traps the chain. Save absolute concepts for late-game emergencies.
5 Strategies That Actually Extend Chains
These strategies come from playing and observing where chains actually collapse.
1. Rotate Categories Every 3 Answers
The AI gets stricter when it sees the same type of answer repeated. Rotating between physical, process-based, and abstract answers keeps the AI evaluating fresh logical relationships.
Example rotation:
Hammer → Rust → Time → Clock → Electricity → Energy → Black hole
This chain moves through seven categories: tool → process → abstraction → device → natural force → physical concept → cosmic object. Each step is distinct.
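The rotation rule can be expressed as a tiny helper. This is an illustrative sketch for tracking your own play, not anything built into the game; the category labels and the three-answer threshold mirror the advice above but are otherwise assumptions.

```python
# Minimal sketch of the "rotate every 3 answers" rule -- an illustrative
# player-side helper, not part of the game itself.

from collections import deque

MAX_RUN = 3  # rotate categories at least every 3 answers

def should_rotate(recent_categories):
    """Return True when the last MAX_RUN answers all came from
    the same category, signalling it is time to switch."""
    return (len(recent_categories) == MAX_RUN
            and len(set(recent_categories)) == 1)

recent = deque(maxlen=MAX_RUN)  # keeps only the last MAX_RUN labels
for category in ["tool", "tool", "tool", "process"]:
    recent.append(category)
    if should_rotate(recent):
        print(f"Three '{category}' answers in a row -- switch category now")
```

The `deque` with `maxlen` keeps only the most recent answers, which matches how the AI seems to weigh recent repetition more heavily than repetition from much earlier in the chain.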
2. Add Specificity to Generic Answers
Generic answers like “fire” or “water” get harder to accept mid-chain. Specific variants give the AI a clearer mechanism to evaluate.
- “Fire” → “Forest fire during a drought”
- “Water” → “Pressurized water jet”
- “Time” → “10,000 years of glacial movement”
3. Save Ultimate Concepts for Emergencies
God, infinity, omnipotence, and nothingness are chain-enders masquerading as powerful answers. Once submitted, there is almost nothing the AI will accept as beating them. Reserve these for turn 50 and beyond — only when genuinely stuck.
4. Use Process Bridges When Stuck
When stuck between categories, process-based answers like “erosion,” “decay,” “oxidation,” or “entropy” work as logical bridges. They connect physical objects to abstract concepts without forcing a sudden jump.
5. Read Rejections as Feedback
When the AI rejects an answer, it usually gives a brief explanation. Reading these explanations reveals exactly what logical relationship the AI expected. The same concept rephrased with a clearer mechanism often gets accepted on the second attempt.
How the Scoring and Leaderboard Work
The game tracks chain length — the total number of consecutively accepted answers in a single session. The leaderboard displays top scores and resets weekly, giving every player a fresh chance to compete.
There is no permanent hall of fame. Community-documented records from collaborative team sessions in October 2024 reported chains exceeding 150 answers, but these involved multiple people contributing answers together — not individual solo runs.
Realistic individual targets based on community observations:
- Beginner: 10–20 answers
- Intermediate: 25–50 answers
- Advanced solo player: 50–80 answers
For players specifically targeting a 100-answer streak, the dedicated What Beats Rock high score guide goes deeper into the chain management techniques needed to reach that milestone.
Frequently Asked Questions
Does the same answer work every time?
No. The AI tracks context throughout a session and rejects answers that closely resemble ones already used. An answer that worked in a previous game may get rejected in a current one if a similar concept already appears in the chain.
Can younger players enjoy What Beats Rock?
Yes, with some guidance. The basic gameplay is accessible to older children, though understanding abstract philosophical answers requires more developed reasoning. The creative element makes it engaging across age groups.
Is there an offline version?
No. The game requires an internet connection because every answer is evaluated by a cloud-based LLM in real time. There is no offline mode.
Does the AI make mistakes?
Yes, occasionally. The LLM sometimes rejects logically valid answers or accepts weak ones. When a valid answer gets rejected, rephrasing it with a more explicit mechanism often works. The AI tends to respond better to explanatory framing than bare noun submissions.
What platform is best for playing?
The browser version at whatbeatsrock.com works identically across desktop and mobile. The iOS and Android apps offer the same core gameplay. Desktop typing tends to be faster for building long chains, but mobile works perfectly for casual sessions.
Does What Beats Rock cost anything?
The game is free to play on all platforms. No account is required to play, though signing in is required to appear on the weekly leaderboard.
Final Thoughts
What Beats Rock rewards players who understand the AI’s logic rather than those who memorize lists. The most important habit to build is rotating categories, because chains collapse when the AI detects repetitive patterns — not because the game runs out of valid answers.
Start with concrete physical tools, transition through processes and natural disasters, move into scientific principles, and save paradoxes and meta-game answers for deep in the chain. Read every rejection as a lesson in how the AI thinks.
The game continues to evolve as the developers update the LLM evaluation system, so strategies that worked in 2024 may behave slightly differently now. The best approach is to treat each session as a fresh experiment rather than a fixed script. For a deeper look at advanced chain-building tactics, the What Beats Rock game strategy guide covers category steering and long-chain planning in more detail.
Disclosure
This guide is based on the author’s independent gameplay experience across 40+ hours of testing in March and April 2026. No commercial relationship exists with whatbeatsrock.com, Khoi Le, Kyle Gian, or any related entity. Community record claims are sourced from player-reported documentation and are not independently verified by the author.