What Is NLWeb and Should You Implement It Today?
NLWeb is Microsoft's open protocol that lets AI agents ask your website questions and get structured answers. It's at 3% adoption and growing. Here's what it does, how it works, and whether you should implement it today.
Founder & CEO at AgentReady
NLWeb in Plain English
Imagine a customer visits your website and asks: "Do you ship to Germany, and what's the return policy for electronics?" Right now, they'd need to find your shipping page, check a country list, then navigate to your returns page, find the electronics section, and piece together an answer.
Now imagine an AI agent asks the same question. With traditional web crawling, the AI does essentially the same thing — scraping pages, parsing HTML, hoping it finds the right information. It might get it right. It might hallucinate. It might miss the nuance.
AgentReady™ considers NLWeb the most significant shift in how AI interacts with websites since structured data. Developed by Microsoft and released as an open protocol, NLWeb gives your website the ability to answer natural language questions directly — no scraping, no guessing, no hallucination.
The AI agent asks. Your site answers. Accurately, authoritatively, and in a structured format that AI platforms can parse, cite, and act on.
How NLWeb Actually Works
At a technical level, NLWeb is straightforward. You create an endpoint at /.well-known/nlweb on your domain. When an AI agent wants to query your site, it sends a natural language question to that endpoint. Your NLWeb implementation processes the query and returns a structured JSON response.
The response includes the answer itself, a confidence score, source references pointing to the specific content on your site that informed the answer, and metadata about the answer's freshness and scope. This gives AI platforms exactly what they need: a verified, source-attributed answer they can confidently present to users.
Behind the scenes, your NLWeb implementation can work in several ways. The simplest approach maps queries to existing content — your FAQ, product specs, policy pages — and returns the relevant sections. More sophisticated implementations use your own content database with semantic search to match queries to answers. The most advanced setups use fine-tuned models trained on your specific content.
// Query: POST /.well-known/nlweb
{
  "query": "What sizes does the blue running shoe come in?",
  "context": {
    "user_intent": "product_inquiry",
    "response_format": "structured"
  }
}

// Response:
{
  "answer": "The Blue Velocity Runner is available in US sizes 7-14, including half sizes. Wide width (2E) is available in sizes 8-13.",
  "confidence": 0.97,
  "sources": [
    {
      "url": "/products/blue-velocity-runner",
      "title": "Blue Velocity Runner - Product Page",
      "last_updated": "2026-02-28"
    }
  ],
  "metadata": {
    "answer_type": "factual",
    "content_freshness": "current"
  }
}

NLWeb query and response example — structured, authoritative, and citable
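The response shape above can be sketched as a small server-side function. This is a minimal illustration, not a reference implementation: the content map, confidence values, and the `answer_query` helper are all hypothetical, standing in for whatever FAQ or CMS lookup a real endpoint would use.

```python
# Hypothetical content map: normalized question -> (answer, source page).
# A real implementation would pull this from a CMS or FAQ database.
CONTENT_MAP = {
    "what sizes does the blue running shoe come in": (
        "The Blue Velocity Runner is available in US sizes 7-14, "
        "including half sizes. Wide width (2E) is available in sizes 8-13.",
        {"url": "/products/blue-velocity-runner",
         "title": "Blue Velocity Runner - Product Page",
         "last_updated": "2026-02-28"},
    ),
}

def answer_query(query: str) -> dict:
    """Return an NLWeb-style structured response for a known question."""
    key = query.lower().strip(" ?")
    if key in CONTENT_MAP:
        answer, source = CONTENT_MAP[key]
        return {
            "answer": answer,
            "confidence": 0.97,  # exact match on a curated question
            "sources": [source],
            "metadata": {"answer_type": "factual",
                         "content_freshness": "current"},
        }
    # Unknown question: return low confidence rather than fabricating.
    return {"answer": None, "confidence": 0.0, "sources": [], "metadata": {}}
```

The key design point is the last branch: when the endpoint has no mapped answer, it says so with a low confidence score instead of guessing, which is what keeps NLWeb responses authoritative.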
Traditional Crawling vs. NLWeb: A Comparison
The difference between how AI agents interact with your site today versus how NLWeb changes the dynamic is stark. Understanding this gap is key to grasping why NLWeb matters.
Traditional crawling is indirect and error-prone. An AI agent crawls your HTML, tries to extract meaning from markup that was designed for human browsers, and constructs an answer based on what it thinks it found. This process introduces hallucination risk at every step — the AI might miss content hidden in tabs, misinterpret layout as hierarchy, or combine information from different contexts incorrectly.
NLWeb eliminates this indirection entirely. The AI asks a question. Your site returns the definitive answer. There is no interpretation gap, no parsing ambiguity, and no hallucination risk introduced between your content and the answer.
Traditional Crawling vs. NLWeb Query
Who's Adopting NLWeb (and Why It Matters)
As of early 2026, NLWeb adoption sits at approximately 3% of commercial websites. That number is concentrated in specific sectors: tech companies, SaaS platforms, developer documentation sites, and some forward-thinking e-commerce brands.
Microsoft's backing gives the protocol significant credibility and staying power. They've integrated NLWeb support into Azure AI services, making implementation substantially easier for sites already in the Microsoft ecosystem. Bing's AI features natively support NLWeb endpoints, giving participating sites a direct advantage in AI-powered search results.
The adoption pattern mirrors early Schema.org implementation. In 2012, structured data was used by fewer than 5% of sites. Google started rewarding rich results, and adoption surged. NLWeb is in that same pre-acceleration phase. The sites implementing now will have a significant advantage when AI platforms begin preferentially using NLWeb endpoints over traditional crawling.
Should You Implement NLWeb Today?
The honest answer depends on your site type, your technical resources, and your competitive landscape.
Implement NLWeb now if you operate an e-commerce site with complex product catalogs, a SaaS platform with documentation, a content publisher with reference material, or any business where AI agents need accurate, current information about your offerings. The competitive advantage is real and growing.
Wait on NLWeb if you have a simple brochure site with minimal content, limited development resources, or no clear use case for AI agents querying your content. In this case, implement llms.txt first — it takes minutes, requires no backend work, and provides immediate value.
The middle path: start with llms.txt today, plan for NLWeb in Q2-Q3 2026. Use the interim to map your content, identify the questions AI agents would ask about your business, and prepare your content architecture for NLWeb integration.
- E-commerce with 100+ products: implement NLWeb now for product query handling
- SaaS with documentation: implement now to reduce support load and improve AI citations
- Content publishers: implement now to protect attribution and control AI access
- Local business or brochure sites: start with llms.txt, plan NLWeb for later
- Budget-constrained: llms.txt first (free), then NLWeb when resources allow
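For the llms.txt starting point recommended above, the file itself is just a short markdown document at your site root. A minimal sketch (the site name, paths, and descriptions below are placeholders):

```
# ExampleStore
> Online retailer of running shoes and athletic gear.

## Key pages
- [Shipping policy](/shipping): countries served and delivery times
- [Returns](/returns): return windows by product category
- [Product catalog](/products): full catalog with sizes and specs
```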
Getting Started with NLWeb Implementation
If you've decided NLWeb is right for your site, the implementation path is more accessible than you might think. The protocol is designed to be incrementally adoptable — you can start simple and expand.
Phase 1: Content Mapping. Identify the 50-100 most common questions AI agents (and users) ask about your business. Map each question to existing content on your site. This exercise alone improves your understanding of content gaps.
Phase 2: Basic Endpoint. Set up the /.well-known/nlweb endpoint with a content-matching system that routes incoming queries to your mapped answers. Open source reference implementations exist for Node.js, Python, and .NET.
Phase 3: Expand and Refine. Add semantic search to handle query variations. Integrate with your CMS so answers stay current automatically. Monitor query logs to identify new questions and expand your coverage.
The investment is real but manageable. Most mid-size sites can have a functional NLWeb endpoint running within one to two development sprints. The payoff — authoritative AI representation of your business — scales indefinitely. Check your current AI readiness score to see where NLWeb fits in your optimization roadmap.
Frequently Asked Questions
Is NLWeb only for Microsoft/Bing or does it work with other AI platforms?
NLWeb is an open protocol. While Microsoft developed it and Bing natively supports it, the specification is publicly available and any AI platform can query NLWeb endpoints. As AI agents become more prevalent, cross-platform support is expected to grow significantly.
How much does NLWeb implementation cost?
Implementation costs vary by complexity. A basic content-matching endpoint for a mid-size site typically requires 1-2 development sprints (2-4 weeks). Open source reference implementations reduce the starting investment. Ongoing costs are primarily server resources for handling queries, which are minimal for most sites.
Can NLWeb replace structured data (Schema.org) markup?
No — NLWeb and Schema.org serve different purposes. Schema.org provides structured metadata about your content for search engines. NLWeb provides a conversational query interface for AI agents. They're complementary. Sites should implement both for maximum AI visibility.
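To make the complementarity concrete: the same product page can carry Schema.org JSON-LD for search engines while its NLWeb endpoint answers free-form questions about the product. An illustrative snippet (all values are placeholders):

```
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Blue Velocity Runner",
  "url": "https://example.com/products/blue-velocity-runner",
  "offers": {
    "@type": "Offer",
    "price": "129.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
```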
Check Your AI Readiness Score
Free scan. No signup required. See how AI engines like ChatGPT, Perplexity, and Google AI view your website.
Scan Your Site Free
SEO veteran with 15+ years leading digital performance at 888 Holdings, Catena Media, Betsson Group, and Evolution. Now building the AI readiness standard for the web.
Related Articles
NLWeb, MCP, and llms.txt: The Three Protocols That Will Define the Agentic Web
The agentic web runs on three protocol layers. llms.txt tells AI what to read. NLWeb lets AI ask questions. MCP lets AI take action. Here's how they fit together and which one your site needs first.
Guides
How to Create the Perfect llms.txt File (With Templates)
The llms.txt file tells AI models what your site is about and where to find key content. Here is exactly how to create one, with copy-paste templates for every site type.
Guides
The Complete Guide to Making Your Website AI-Ready in 2026
Everything you need to know about making your website visible to AI systems in 2026 — the 8 factors that determine whether AI agents cite your content or skip it entirely.