February 4, 2026
Highlights
Observational Memory for long-running agents
Observational Memory is a new Mastra Memory feature that makes small context windows behave like large ones while retaining long-term memory. It compresses conversations into dense observation logs (5–40x smaller than raw messages). When observations grow too long, they're condensed into reflections. It supports thread and resource scopes, and requires the latest versions of @mastra/core, @mastra/memory, mastra, and @mastra/pg, @mastra/libsql, or @mastra/mongodb.
Skills.sh ecosystem integration (server + UI + CLI)
@mastra/server adds skills.sh proxy endpoints (search/browse/preview/install/update/remove), Studio adds an “Add Skill” dialog for browsing/installing skills, and the CLI wizard can optionally install Mastra skills during create-mastra (with non-interactive --skills support).
Dynamic tool discovery with ToolSearchProcessor
Adds ToolSearchProcessor to let agents search and load tools on demand via built-in search_tools and load_tool meta-tools, dramatically reducing context usage for large tool libraries (e.g., MCP/integration-heavy setups).
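To make this concrete, a configuration sketch might look like the following. Note this is illustrative only: the import paths, the `inputProcessors` option, the `ToolSearchProcessor` constructor shape, and the `./tools` module are all assumptions, not confirmed by these release notes.

```typescript
// Sketch only: import paths and option names below are assumptions, not confirmed API.
import { Agent } from "@mastra/core/agent";
import { ToolSearchProcessor } from "@mastra/core/processors";
import { openai } from "@ai-sdk/openai";
import { allIntegrationTools } from "./tools"; // your large tool library (hypothetical module)

const agent = new Agent({
  name: "integration-agent",
  instructions: "Use search_tools to find a tool, then load_tool to activate it.",
  model: openai("gpt-4o"),
  // Rather than registering every tool up front, the processor exposes
  // search_tools/load_tool meta-tools that pull matching tools in on demand.
  inputProcessors: [new ToolSearchProcessor({ tools: allIntegrationTools })]
});
```

The design point is that only tool descriptions matched by a search enter the context window, instead of every tool schema in the library.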
New @mastra/editor: store, version, and resolve agents from a database
Introduces @mastra/editor for persisting complete agent configurations (instructions, models, tools, workflows, nested agents, processors, memory), managing versions/activation, and instantiating dependencies from the Mastra registry with caching and type-safe serialization.
Breaking Changes
@mastra/elasticsearch: vector document IDs now come from the Elasticsearch `_id`; stored `id` fields are no longer written (breaking if you relied on source.id).
Changelog
@mastra/core@1.2.0
- Update provider registry and model documentation with latest models and providers
  Fixes: e6fc281
- Fixed processors returning `{ tools: {}, toolChoice: 'none' }` being ignored. Previously, when a processor returned empty tools with an explicit `toolChoice: 'none'` to prevent tool calls, the toolChoice was discarded and defaulted to 'auto'. This fix preserves the explicit 'none' value, enabling patterns like ensuring a final text response when `maxSteps` is reached.
  Fixes: #12601
- Internal changes to enable observational memory
- Internal changes to enable @mastra/editor
- Fix moonshotai/kimi-k2.5 multi-step tool calling failing with "reasoning_content is missing in assistant tool call message"
- Changed moonshotai and moonshotai-cn (China version) providers to use Anthropic-compatible API endpoints instead of OpenAI-compatible:
  - moonshotai: https://api.moonshot.ai/anthropic/v1
  - moonshotai-cn: https://api.moonshot.cn/anthropic/v1

  This properly handles reasoning_content for the kimi-k2.5 model.
  Fixes: #12530
- Fixed custom input processors disabling workspace skill tools in generate() and stream(). Custom processors now replace only the processors you configured, while memory and skills remain available.
  Fixes: #12612, #12676
- Workspace search index names now use underscores so they work with SQL-based vector stores (PgVector, LibSQL).
@mastra/client-js@1.2.0
- Internal changes to enable observational memory
- Internal changes to enable @mastra/editor
- Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in the UI, and add a MastraClientError class with status/body properties for better error handling
  Fixes: #12533
@mastra/convex@1.0.2
- Fixed import path for storage constants in Convex server storage to use the correct @mastra/core/storage/constants subpath export
  Fixes: #12560
@mastra/editor@0.2.0
Created @mastra/editor package for managing and resolving stored agent configurations
This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
Key Features:
- Agent Storage & Retrieval: Store complete agent configurations including instructions, model settings, tools, workflows, nested agents, scorers, processors, and memory configuration
- Version Management: Create and manage multiple versions of agents, with support for activating specific versions
- Dependency Resolution: Automatically resolves and instantiates all agent dependencies (tools, workflows, sub-agents, etc.) from the Mastra registry
- Caching: Built-in caching for improved performance when repeatedly accessing stored agents
- Type Safety: Full TypeScript support with proper typing for stored configurations
Usage Example:
```typescript
import { MastraEditor } from "@mastra/editor";
import { Mastra } from "@mastra/core";

// Initialize editor with Mastra
const mastra = new Mastra({
  /* config */
  editor: new MastraEditor()
});

// Store an agent configuration
const agentId = await mastra.storage.stores?.agents?.createAgent({
  name: "customer-support",
  instructions: "Help customers with inquiries",
  model: { provider: "openai", name: "gpt-4" },
  tools: ["search-kb", "create-ticket"],
  workflows: ["escalation-flow"],
  memory: { vector: "pinecone-db" }
});

// Retrieve and use the stored agent
const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
const response = await agent?.generate("How do I reset my password?");

// List all stored agents
const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
```
Storage Improvements:
- Fixed JSONB handling in LibSQL, PostgreSQL, and MongoDB adapters
- Improved agent resolution queries to properly merge version data
- Enhanced type safety for serialized configurations
Fixes: #12631
@mastra/elasticsearch@1.1.0
Added API key, basic, and bearer authentication options for Elasticsearch connections.
Changed: vector IDs now come from the Elasticsearch `_id`; stored `id` fields are no longer written (breaking if you relied on source.id).
Why: This aligns with Elasticsearch auth best practices and avoids duplicate IDs in stored documents.
Before
```typescript
const store = new ElasticSearchVector({ url, id: "my-index" });
```
After
```typescript
const store = new ElasticSearchVector({
  url,
  id: "my-index",
  auth: { apiKey: process.env.ELASTICSEARCH_API_KEY! }
});
```
Fixes: #11298
@mastra/evals@1.1.0
Added getContext hook to hallucination scorer for dynamic context resolution at runtime. This enables live scoring scenarios where context (like tool results) is only available when the scorer runs. Also added extractToolResults utility function to help extract tool results from scorer output.
Before (static context):
```typescript
const scorer = createHallucinationScorer({
  model: openai("gpt-4o"),
  options: {
    context: ["The capital of France is Paris.", "France is in Europe."]
  }
});
```
After (dynamic context from tool results):
```typescript
import { extractToolResults } from "@mastra/evals/scorers";

const scorer = createHallucinationScorer({
  model: openai("gpt-4o"),
  options: {
    getContext: ({ run }) => {
      const toolResults = extractToolResults(run.output);
      return toolResults.map((t) => JSON.stringify({ tool: t.toolName, result: t.result }));
    }
  }
});
```
Fixes: #12639
@mastra/fastify@1.1.1
Fixed missing cross-origin headers on streaming responses when using the Fastify adapter. Headers set by plugins (like @fastify/cors) are now preserved when streaming. See https://github.com/mastra-ai/mastra/issues/12622
Fixes: #12633
@mastra/inngest@1.0.2
Fixed long-running steps causing Inngest workflows to fail
Fixes: #12522
@mastra/libsql@1.2.0
- Internal changes to enable observational memory
- Internal changes to enable @mastra/editor
@mastra/mcp-docs-server@1.1.0
Restructure and tidy up the MCP Docs Server. It now focuses more on documentation and uses fewer tools.
Removed tools that sourced content from:
- Blog
- Package changelog
- Examples
The local docs source now uses the generated llms.txt files from the official documentation, making it more accurate and easier to maintain.
Fixes: #12623
@mastra/memory@1.1.0
Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations.
Why: Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
Usage:
```typescript
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";

const memory = new Memory({
  storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
  options: {
    observationalMemory: true
  }
});

const agent = new Agent({
  name: "my-agent",
  model: openai("gpt-4o"),
  memory
});
```
What's new:
- `observationalMemory: true` enables the three-tier memory system (recent messages → observations → reflections)
- Thread-scoped (per-conversation) and resource-scoped (shared across all threads for a user) modes
- Manual `observe()` API for triggering observation outside the normal agent loop
- New OM storage methods for pg, libsql, and mongodb adapters (conditionally enabled)
- `Agent.findProcessor()` method for looking up processors by ID
- `processorStates` for persisting processor state across loop iterations
- Abort signal propagation to processors
- `ProcessorStreamWriter` for custom stream events from processors
Fixes: #12599
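Observational Memory supports both thread and resource scopes; a configuration sketch of the resource-scoped variant might look like the following. The nested option shape and the `scope` key are assumptions for illustration only, not confirmed by these release notes.

```typescript
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

// Resource scope shares observations across all of a user's threads.
// The exact option shape here is assumed for illustration.
const memory = new Memory({
  storage: new LibSQLStore({ url: "file:./memory.db" }),
  options: {
    observationalMemory: {
      scope: "resource"
    }
  }
});
```

Check the Observational Memory documentation for the actual option names before relying on this shape.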
- Expose token usage from embedding operations:
  - `saveMessages` now returns `usage: { tokens: number }` with aggregated token count from all embeddings
  - `recall` now returns `usage: { tokens: number }` from the vector search query embedding
  - Updated abstract method signatures in `MastraMemory` to include optional `usage` in return types

  This allows users to track embedding token usage when using the Memory class.
Fixes: #12556
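The returned usage shape is easy to illustrate with a small self-contained sketch of how per-embedding token counts roll up into the aggregated `usage: { tokens: number }` value. This mimics the documented return shape only; it is not the Memory implementation itself.

```typescript
// Shape of the usage object documented above.
type EmbeddingUsage = { tokens: number };

// Aggregate token counts from several embedding calls, the way saveMessages
// reports a single total across all embeddings it creates (illustrative only).
function aggregateUsage(perEmbedding: EmbeddingUsage[]): EmbeddingUsage {
  return { tokens: perEmbedding.reduce((sum, u) => sum + u.tokens, 0) };
}

const usage = aggregateUsage([{ tokens: 120 }, { tokens: 80 }]);
console.log(usage.tokens); // 200
```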
@mastra/mongodb@1.2.0
- Internal changes to enable observational memory
- Internal changes to enable @mastra/editor
@mastra/observability@1.2.0
- Increased default serialization limits for AI tracing. The maxStringLength is now 128KB (previously 1KB) and maxDepth is 8 (previously 6). These changes prevent truncation of large LLM prompts and responses during tracing.
  To restore the previous behavior, set `serializationOptions` in your observability config:
  ```typescript
  serializationOptions: { maxStringLength: 1024, maxDepth: 6 }
  ```
  Fixes: #12579
- Fixed a Cloudflare Workers deployment failure caused by `fileURLToPath` being called at module initialization time. Moved the `SNAPSHOTS_DIR` calculation from top-level module code into a lazy getter function. In Cloudflare Workers (V8 runtime), `import.meta.url` is `undefined` during worker startup, causing the previous code to throw. The snapshot functionality is only used for testing, so deferring initialization has no impact on normal operation.
  Fixes: #12540
@mastra/pg@1.2.0
- Internal changes to enable observational memory
- Internal changes to enable @mastra/editor
@mastra/playground-ui@9.0.0
- Use EntryCell icon prop for source indicator in agent table
  Fixes: #12515
- Add Observational Memory UI to the playground. Shows observation/reflection markers inline in the chat thread, and adds an Observational Memory panel to the agent info section with observations, reflection history, token usage, and config. All OM UI is gated behind a context provider that no-ops when OM isn't configured.
  Fixes: #12599
- Added MultiCombobox component for multi-select scenarios, and JSONSchemaForm compound component for building JSON schema definitions visually. The Combobox component now supports description text on options and error states.
  Fixes: #12616
- Added ContentBlocks, a reusable drag-and-drop component for building ordered lists of editable content. Also includes AgentCMSBlocks, a ready-to-use implementation for agent system prompts with add, delete, and reorder functionality.
  Fixes: #12629
- Redesigned toast component with outline circle icons, left-aligned layout, and consistent design system styling
  Fixes: #12618
- Updated Badge component styling: increased height to 28px, changed to pill shape with rounded-full, added border, and increased padding for better visual appearance.
  Fixes: #12511
@mastra/schema-compat@1.1.0
- Added Standard Schema support to @mastra/schema-compat. This enables interoperability with any schema library that implements the Standard Schema specification.
  New exports:
  - `toStandardSchema()` - Convert Zod, JSON Schema, or AI SDK schemas to Standard Schema format
  - `StandardSchemaWithJSON` - Type for schemas implementing both validation and JSON Schema conversion
  - `InferInput`, `InferOutput` - Utility types for type inference
Example usage:
```typescript
import { toStandardSchema } from "@mastra/schema-compat/schema";
import { z } from "zod";

// Convert a Zod schema to Standard Schema
const zodSchema = z.object({ name: z.string(), age: z.number() });
const standardSchema = toStandardSchema(zodSchema);

// Use validation
const result = standardSchema["~standard"].validate({ name: "John", age: 30 });

// Get JSON Schema
const jsonSchema = standardSchema["~standard"].jsonSchema.output({ target: "draft-07" });
```
Fixes: #12527
@mastra/server@1.2.0
- Internal changes to enable observational memory
- Internal changes to enable @mastra/editor
- Internal changes for better gateway selection in Studio
- Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in the UI, and add a MastraClientError class with status/body properties for better error handling
  Fixes: #12533
mastra@1.2.0
- Fixed peer dependency checker fix command to suggest the correct package to upgrade:
  - If the peer dep is too old (below range) → suggests upgrading the peer dep (e.g., @mastra/core)
  - If the peer dep is too new (above range) → suggests upgrading the package requiring it (e.g., @mastra/libsql)

  Fixes: #12529
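The suggestion rule above can be sketched as a tiny self-contained function. Versions are reduced to plain numbers for illustration; this is not the actual checker code.

```typescript
// Decide which package the fix command should suggest upgrading.
// peerDep: the peer dependency (e.g. @mastra/core)
// dependent: the package that declares the peer range (e.g. @mastra/libsql)
function suggestUpgrade(
  peerDep: string,
  dependent: string,
  installed: number,
  min: number,
  max: number
): string | null {
  if (installed < min) return peerDep;   // too old → upgrade the peer dep
  if (installed > max) return dependent; // too new → upgrade the dependent package
  return null;                           // within range → nothing to suggest
}

console.log(suggestUpgrade("@mastra/core", "@mastra/libsql", 0, 1, 2)); // @mastra/core
```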
- New feature: You can install the Mastra skill during the `create-mastra` wizard. The wizard now asks whether you want to install the official Mastra skill; choose your favorite agent and your newly created project is set up. For non-interactive setup, use the `--skills` flag, which accepts comma-separated agent names (e.g. `--skills claude-code`).
  Fixes: #12582
- Pre-select Claude Code, Codex, OpenCode, and Cursor as default agents when users choose to install Mastra skills during project creation. Codex has been promoted to the popular agents list for better visibility.
  Fixes: #12626
- Add an AGENTS.md file (and optionally CLAUDE.md) during `create-mastra` project creation
  Fixes: #12658