Sanity
Master headless CMS development with Sanity, including content modeling best practices, GROQ query optimization, Studio customization, real-world implementation patterns, and building scalable content platforms.

Content Agent: AI That Runs Content Operations at Scale
Content Agent is an AI built specifically for large-scale content operations, going far beyond traditional AI writing assistants that only handle one document at a time. It understands your content schema—including document types, fields, validation rules, and relationships—so it can transform raw source material into structured content, audit entire libraries, and execute coordinated updates across thousands of documents. Key capabilities include pipeline transformations from sources like press releases or specs into blog posts or product pages, large-scale analysis to detect missing metadata or outdated content, intelligent bulk editing for rebrands and URL changes, visual editing of images via natural language, and integrated web research to keep content accurate and current. Technically, it leverages document sets, staged changes with reviewable releases, schema-aware validation, a specialized multi-agent architecture, and direct integration with Sanity’s Content Lake. Content Agent is included in all Sanity plans and uses an AI credit system based on queries and actions, with clear examples of typical usage and tools for monitoring and controlling costs.

Building a GDPR-Compliant Cookie Consent System with Sanity CMS
The author decided to build a custom cookie consent solution for their blog to avoid the complexities and privacy issues of third-party libraries. The solution is privacy-first, fully GDPR-compliant, and integrates directly with Sanity CMS. The implementation has three main components: a Sanity CMS schema for managing consent content, a React Context for state management, and conditional loading of analytics based on user consent. The system emphasizes accessibility, server-side data fetching, and smart reload logic to ensure a seamless user experience. The cookie banner is editor-friendly, allowing non-technical users to update its content without code changes. The result is a lightweight (under 5 KB), GDPR-compliant system that respects user privacy by default, loads no third-party tracking libraries, and provides full transparency with a detailed policy page. Users can withdraw consent at any time, and the system is fully accessible, making it well suited to personal blogs and small sites.
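As an illustration of the consent flow described above, here is a minimal React/TypeScript sketch. Names such as `CookieConsentProvider`, the `cookie-consent` storage key, and the analytics URL are illustrative assumptions, not the post's actual identifiers:

```tsx
// consent-context.tsx — a sketch of consent state plus conditional analytics loading.
import {createContext, useContext, useEffect, useState, type ReactNode} from 'react'

type ConsentStatus = 'unknown' | 'accepted' | 'declined'

const ConsentContext = createContext<{
  status: ConsentStatus
  setStatus: (s: ConsentStatus) => void
}>({status: 'unknown', setStatus: () => {}})

export function CookieConsentProvider({children}: {children: ReactNode}) {
  const [status, setStatus] = useState<ConsentStatus>('unknown')

  // Read any previously stored decision after mount, avoiding an SSR/localStorage mismatch.
  useEffect(() => {
    const stored = window.localStorage.getItem('cookie-consent')
    if (stored === 'accepted' || stored === 'declined') setStatus(stored)
  }, [])

  const update = (s: ConsentStatus) => {
    window.localStorage.setItem('cookie-consent', s)
    setStatus(s)
  }

  return (
    <ConsentContext.Provider value={{status, setStatus: update}}>
      {children}
    </ConsentContext.Provider>
  )
}

export const useConsent = () => useContext(ConsentContext)

// Analytics load only after explicit opt-in — nothing is injected by default.
export function Analytics() {
  const {status} = useConsent()
  useEffect(() => {
    if (status !== 'accepted') return
    const script = document.createElement('script')
    script.src = 'https://example.com/analytics.js' // placeholder URL
    script.async = true
    document.head.appendChild(script)
  }, [status])
  return null
}
```

The same status flag can gate the reload logic: withdrawing consent flips the flag, and a reload guarantees no previously loaded tracker keeps running.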

Automating Audio Narration with Sanity Blueprints and Google Text-to-Speech
The author developed an automated audio narration pipeline for their blog using Sanity, Next.js, and Google Cloud Text-to-Speech. The system regenerates a high-quality MP3 narration automatically whenever the blog post content changes, avoiding manual steps, wasted API calls, and infinite loops. The process relies on Sanity's native automation tools, including Blueprints, delta detection, and GROQ projections. The system reacts to content changes at the CMS level, triggering narration generation only when the blog post's body field changes. This is achieved by using Sanity's delta function to detect changes and a secure webhook to initiate the narration generation via a Next.js API route. The generated MP3 is uploaded back to Sanity and linked to the post, ensuring narration is generated only once per meaningful content change. Storing the audio in Sanity works well for a personal blog: it uses Sanity's CDN, keeps editorial state and content in one place, and eliminates the need for extra storage services. The result is a fully automated, content-driven audio system with no manual triggers, unnecessary TTS calls, or client-side secrets, providing a clean separation of concerns and room to scale.
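The heart of the pipeline is the change detection. Here is a minimal sketch of the trigger, assuming the `defineBlueprint`/`defineDocumentFunction` helpers from `@sanity/blueprints` and a GROQ delta filter; the function name and projection fields are illustrative:

```typescript
// blueprint.ts — fires only when a published post's body actually changed.
import {defineBlueprint, defineDocumentFunction} from '@sanity/blueprints'

export default defineBlueprint({
  resources: [
    defineDocumentFunction({
      name: 'generate-narration', // hypothetical function name
      event: {
        on: ['publish'],
        // delta::changedAny() compares before/after versions, so cosmetic
        // edits (title tweaks, tags) never trigger a narration rebuild.
        filter: "_type == 'post' && delta::changedAny(body)",
        // Ship only what the TTS step needs: the id and the plain text.
        projection: '{_id, "text": pt::text(body)}',
      },
    }),
  ],
})
```

The function handler would then call the secured Next.js route, which runs Google Cloud Text-to-Speech and uploads the resulting MP3 back to Sanity as a file asset linked to the post.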

Building Event-Driven Content Automation: Auto-Summaries with Sanity Agent
This post discusses the implementation of an automatic summary generation system using Sanity's event-driven architecture. The system leverages Sanity Agent, Sanity Functions, and Blueprints to autonomously generate summaries when content is published. It consists of three main components: a Sanity Blueprint that triggers on post publication, a Sanity Function that orchestrates summary generation, and a Sanity Agent that processes content through a language model to create summaries. The event-driven architecture allows the system to react immediately to content changes, eliminating manual intervention and scaling efficiently with content volume. This approach ensures consistent and timely summary generation, enhancing content workflows. The architecture can be extended to other automated tasks like image optimization and SEO metadata generation, treating content as an active event source rather than passive data.
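The orchestration step might look like the following sketch, assuming the `documentEventHandler` helper from `@sanity/functions` and the Agent Actions `generate` API exposed by recent `@sanity/client` versions; exact option names may differ, and the `summary` field is a stand-in for the post's actual schema:

```typescript
// summary-function.ts — a sketch of the Sanity Function that asks the Agent
// to write a summary whenever the Blueprint fires on publication.
import {documentEventHandler} from '@sanity/functions'
import {createClient} from '@sanity/client'

export const handler = documentEventHandler(async ({context, event}) => {
  const client = createClient({
    ...context.clientOptions,
    apiVersion: '2025-05-01',
    useCdn: false,
  })

  // The instruction is plain language; the generated value is written into
  // the target field and validated against the deployed workspace schema.
  await client.agent.action.generate({
    schemaId: '_.schemas.default', // id of the deployed workspace schema
    targetDocument: {operation: 'edit', _id: event.data._id},
    instruction: 'Write a two-sentence summary of this post.',
    target: {path: 'summary'}, // hypothetical field name
  })
})
```

Because the function only runs in response to publish events, summary generation scales with publishing volume and requires no polling or manual triggers.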

The AI-Driven Journey: Crafting Content with Sanity MCP
This blog post, created entirely by AI using the Sanity MCP server, showcases the integration of AI with content management systems. Sanity MCP allows AI assistants to interact with CMSs using natural language commands, eliminating the need for manual coding. The post details the process of creating AI-generated content, from schema deployment to publishing, and highlights real-world applications such as content scaling, migration, and multilingual support. It emphasizes the collaboration between humans and AI, where AI handles execution and humans provide strategy and review. The post also discusses the ethical aspects of AI-generated content, advocating for transparency and human oversight. Finally, it offers a technical deep dive into MCP's workings and encourages readers to explore AI-driven content creation.

Calculating Reading Time for a Sanity Blog with GROQ
Reading time estimates help readers gauge how long a post will take before they commit to it. On Sanity-powered blogs the calculation is less straightforward than on plain-text platforms, because block content stores rich text as structured Portable Text rather than a single string. The recommended solution is to calculate reading time directly in the GROQ query: use the pt::text() function to flatten Portable Text blocks into plain text, split the result to count words, and divide by 200, an assumed average reading speed of 200 words per minute. Computing this in the query keeps the work on Sanity's side, reducing client-side processing and improving load times. Adjust the words-per-minute figure to the complexity of your content for more accurate estimates.
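A sketch of the query in use via `@sanity/client` in TypeScript; the `body` field and the 200 wpm figure follow the approach above, while the project configuration is placeholder:

```typescript
// reading-time.ts — computing reading time inside the GROQ query itself,
// so the client receives a ready-made number per post.
import {createClient} from '@sanity/client'

const client = createClient({
  projectId: 'your-project-id', // placeholder
  dataset: 'production',
  apiVersion: '2025-01-01',
  useCdn: true,
})

const query = `*[_type == "post" && defined(body)]{
  title,
  "slug": slug.current,
  // pt::text() flattens Portable Text to a plain string; splitting on
  // spaces approximates the word count, and dividing by 200 wpm gives
  // the estimate in minutes.
  "readingTime": round(length(string::split(pt::text(body), " ")) / 200)
}`

export async function getPosts() {
  return client.fetch(query)
}
```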