
Teaching AI to Understand OpenStreetMap Tags

MCP, or Model Context Protocol, is a protocol designed for AI systems and for developers building AI applications. A server implementing it acts as a backend that provides structured data access—like querying OpenStreetMap tagging schemas—without any user interface (UI).

No UI Involved

This MCP server has zero graphical interface, web forms, or chat-like frontend; it’s pure API endpoints for programmatic use. AI agents “know” how to connect to it via standardized protocols to fetch, validate, or suggest OSM tags automatically.
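To make "standardized protocols" concrete: MCP requests are JSON-RPC 2.0 messages, and tool invocations use the `tools/call` method. A minimal sketch of the message an agent would send is below — the tool name `get_tag_schema` and its arguments are invented for illustration; the server's actual tool names may differ.

```python
import json

def make_tool_call(call_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 tools/call request, as an MCP client would send."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical call: ask the server for the schema of the "amenity" key.
msg = make_tool_call(1, "get_tag_schema", {"key": "amenity"})
print(msg)
```

There is nothing here for a human to click or type into: the entire "interface" is messages of this shape going back and forth between the agent and the server.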

Everyday Users Can’t Use It

If you only interact with AI through simple web forms (like asking questions on ChatGPT’s site), you won’t be able to use this at all—no buttons to click or prompts to type will work here. It’s not for casual chatting; only AIs or coders who integrate it into apps can leverage it.

Teaching AI to Understand OpenStreetMap Tags

An MCP server based on a formal schema produces much better results than the classic approach of prompting an agent with "You are an OSM expert".
Instead of guessing from a loose prompt, the model is constrained by a strict structure, context, and rules it must align with.
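A toy illustration of what "constrained by a strict structure" means in practice: rather than emitting free text, the model's suggested tags are checked against a schema. The schema content below is an invented fragment for illustration, not the project's real schema.

```python
# Invented schema fragment: allowed values per key, plus one dependency rule.
SCHEMA = {
    "amenity": {"values": {"cafe", "restaurant", "school"}},
    "cuisine": {"values": {"coffee_shop", "pizza"}, "requires": "amenity"},
}

def validate(tags: dict[str, str]) -> list[str]:
    """Return a list of schema violations; an empty list means the tags conform."""
    errors = []
    for key, value in tags.items():
        spec = SCHEMA.get(key)
        if spec is None:
            errors.append(f"unknown key: {key}")
            continue
        if value not in spec["values"]:
            errors.append(f"invalid value for {key}: {value}")
        required = spec.get("requires")
        if required and required not in tags:
            errors.append(f"{key} requires {required}")
    return errors

print(validate({"amenity": "cafe", "cuisine": "coffee_shop"}))  # []
```

Anything the model proposes that fails `validate` can be rejected or retried, which is exactly the feedback loop a loose "you are an expert" prompt cannot provide.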

Handling edge cases will be one of the main themes for v4 and v5.
I keep thinking about how to properly express all those situations where a cat is not a cat, a dog is not a dog, and a mouse is not a mouse – cases where the tag looks “obvious”, but the semantics are not obvious at all.

This effectively requires designing a dedicated exception schema, because there is currently no schema that explains these nuances clearly and comprehensively.
As a side effect, this might turn into an engine that can power a range of other use cases – essentially an attempt to translate the OSM wiki into a structured API.
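One way the exception schema described above could be sketched: a rule record saying that a tag's "obvious" meaning is overridden when certain other tags are present. The field names and the example rule are illustrative assumptions, not an existing OSM schema; the underlying fact (OSM's `building=*` describes the physical structure, not the current use) is real wiki-documented practice.

```python
from dataclasses import dataclass, field

@dataclass
class TagException:
    """One "a cat is not a cat" rule: a tag whose usual meaning changes
    when certain co-occurring tags are present."""
    tag: tuple[str, str]        # the tag that looks obvious
    when: dict[str, str]        # co-tags that trigger the exception
    actual_meaning: str         # what the combination really encodes

def applies(rule: TagException, tags: dict[str, str]) -> bool:
    """True when the element carries the tag and every triggering co-tag."""
    key, value = rule.tag
    return tags.get(key) == value and all(
        tags.get(k) == v for k, v in rule.when.items()
    )

# Example: building=church + amenity=restaurant is a restaurant housed in
# a church-shaped building, because building=* describes the structure.
rule = TagException(
    tag=("building", "church"),
    when={"amenity": "restaurant"},
    actual_meaning="a restaurant in a church building, not a place of worship",
)

print(applies(rule, {"building": "church", "amenity": "restaurant"}))  # True
```

A collection of such rules, queryable through the same server, is essentially the "OSM wiki as a structured API" idea in miniature.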

I treat this as an endless playground for self‑improvement: even if I never reach the “ideal” goal, I will learn a lot along the way – from evaluating whether the project makes sense at all, through gaining humility toward complex problems, to better understanding OSM edge cases and how to work with AI.
All of that is a net gain, regardless of the final outcome.

The whole thing was originally created with another tool in mind: something that ingests spaghetti‑like text notes, voice notes, and photos, and still manages to extract something reasonably coherent from them, without blindly combining completely unrelated tags.

Building drawing tips, a perfect guide

I have some edits that require a lot of fiddling, but I believe there are easier ways. I'll send you questions about that; first I just need to record how I do it now, so you know what I'm trying to achieve. Maybe you can give me a better recipe.

Better-osm-org for watchmans

What you did there is interesting. Good job. Inspiring.