Setting realistic expectations is the single fastest way to get more value from Claude. Most disappointment with AI tools comes from assuming they can do things they cannot, then concluding the tool is broken when it fails. This article lists the limits in plain terms so you can plan around them.
These limits are inherent to how the tools work today, not bugs awaiting a fix. Some may ease in future releases, but plan around what exists now.
Claude cannot browse the live web by default
Claude does not have a permanent live internet connection. It cannot, on its own, open a website, log into an account, scroll a feed, or check whether a page has changed.
Important nuance: Claude Code has a WebFetch tool that can pull individual pages when you specifically ask. Claude Desktop can use Web Search if it has been enabled via an MCP server or a built-in connector. So Claude is not blind to the internet — but it does not browse continuously, and live access happens per-request, not as a persistent connection.
What this means in practice: do not expect Claude to know what was published this morning, what is currently trending, what the weather is, or what the price of a stock is — unless you give it the URL or paste the data in.
Claude does not remember between Desktop and Code
Memory in Claude Desktop and auto memory in Claude Code are separate systems with separate storage. Telling Desktop something does not teach Code, and vice versa. They are different products that share an account, not a brain.
If you want context to apply in both, you have to set it up in both. This is annoying and may change in future, but it is how it works today.
Claude cannot run while your machine is asleep
Claude Desktop's Cowork (scheduled tasks) and Claude Code sessions both run on your local machine. If your machine is asleep, in standby, or off, neither runs. They are not server-side automation; they are local applications.
Cowork retries skipped runs once when your machine wakes — you get a single catch-up, not a queue of every missed run. For automation that genuinely needs to be always-on (server-side, cloud-hosted), use the Anthropic API with a proper hosting environment, not the Desktop app.
Claude cannot guarantee deterministic output
Ask Claude the same question twice and you may get two slightly different answers. The model is probabilistic. For most use cases this does not matter — both answers are useful. For some use cases it matters a lot: legal language, regulatory text, anything that needs an exact, stable phrasing.
If determinism matters, the right pattern is: have Claude generate the first version, then save that version as a template and stop regenerating. Use Claude to maintain and adjust the saved version, not to recreate it from scratch each time.
Claude cannot reliably do hard maths
Large language models are not calculators. Claude can describe how to solve a maths problem, can write code that solves it, can interpret the result — but for the actual numerical computation, especially anything with many steps or large numbers, expect errors.
For real calculations, ask Claude to write the calculation as code and run it (Claude Code can execute Python or shell scripts directly). In Claude Desktop, ask Claude for the formula and put it into a spreadsheet instead. Do not trust mental-arithmetic-style answers from an LLM.
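This is what the code-instead-of-arithmetic pattern looks like in practice. The scenario and figures below are illustrative, not from the article; the point is that the computation happens in executed code, where multi-step arithmetic cannot drift:

```python
# A Claude-written calculation as runnable code, rather than trusting
# the model's in-head arithmetic. Decimal avoids binary floating-point
# surprises in money calculations.
from decimal import Decimal, getcontext

getcontext().prec = 28  # more than enough precision for money

def future_value(principal, annual_rate, years, periods_per_year=12):
    """Future value with periodic compounding, computed exactly with Decimal."""
    rate = Decimal(str(annual_rate)) / periods_per_year
    periods = years * periods_per_year
    return Decimal(str(principal)) * (1 + rate) ** periods

# 10,000 invested at 4% nominal, compounded monthly, for 25 years
print(round(future_value(10_000, 0.04, 25), 2))
```

Thirty-odd compounding steps is exactly the kind of arithmetic an LLM will get subtly wrong if asked to do it in prose; run as code, the answer is exact every time.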
Claude cannot read PDFs perfectly
Claude is good at PDFs but not flawless. PDFs with complex multi-column layouts, scanned images of text rather than real text, embedded forms, or non-standard fonts can be misread or partially missed.
For high-stakes documents (legal contracts, financial statements), spot-check anything Claude tells you against the original. Do not assume the summary captures everything; for very long PDFs, ask Claude to identify what sections it found difficult to read.
Claude cannot guarantee factual accuracy
Claude can fabricate plausible but wrong facts — names, dates, statistics, citations, page numbers, quotations. This is not malice; it is how the underlying model works. The output looks identical whether the fact is true or invented.
For any factual claim that matters, ask Claude to cite the source, then verify the source exists and says what Claude claims. The cost of one fact-check is small; the cost of publishing a fabricated citation is much larger.
Claude cannot retain context indefinitely in one conversation
Each conversation has a context window — the maximum amount of text Claude can keep in working memory at one time. When the conversation grows past that limit, older content is summarised or dropped to make room. This is called compaction. After compaction, Claude may forget specific details from earlier in the same conversation.
For very long working sessions, the right pattern is to end the conversation periodically with an explicit summary ("Summarise what we have decided so far in five bullet points") and start a new conversation with that summary as the opening message. This resets the context budget while preserving the substance.
Claude cannot access tools you have not connected
Out of the box, Claude does not have access to your calendar, email, accounting software, project management tool, or other business systems. Adding access requires setting up an MCP (Model Context Protocol) server for each service, or, in some cases, enabling a built-in connector.
Until you set those up, Claude can only work with what you tell it in the conversation. If you ask it to "check my schedule" without a calendar MCP server connected, it will tell you it cannot.
Claude cannot do anything that requires being you
Claude cannot make legal commitments on your behalf, sign contracts, attend meetings as you, build relationships, exercise judgement that requires your personal accountability, or take responsibility for outcomes. It can draft, summarise, prepare, and assist — but the ownership of every output remains yours.
This sounds obvious until you see what people try to delegate. Treat Claude as a capable assistant whose work you stand behind, not a delegate who acts in your stead.
Claude cannot replace domain expertise
Claude is good at the structure and language of expert work. It is not a substitute for actual expertise. A medical query gets a plausible answer; the answer may also be wrong in ways only a clinician would catch. A legal query gets a competent-looking memo; the memo may misstate jurisdiction-specific rules a lawyer would notice immediately.
Use Claude to accelerate work in domains where you are already competent enough to spot errors. Be careful using it as a substitute for expertise you do not have.
What this means in practice
Claude is most useful when you treat it as a capable assistant who:
- Knows a lot but cannot verify what it remembers
- Works fast but not always accurately
- Drafts well but does not own the outcome
- Operates inside the context you give it, not outside it
That framing is realistic. Working with it that way produces consistently useful results. Treating Claude as anything more produces frustration; treating it as anything less wastes the capability.
