Microsoft's lightweight SDK for LLM orchestration.
Semantic Kernel is an open-source SDK from Microsoft for building AI agents and integrating LLMs into enterprise applications in C#, Python, and Java. It acts as middleware between your existing code and AI models, letting you define functions and plugins that an LLM can call automatically to complete tasks. Microsoft uses Semantic Kernel internally across its own enterprise products, and it has version 1.0 stability commitments across all three supported languages, with built-in telemetry, filters, and security hooks designed for production deployment. It is now the foundation of Microsoft Agent Framework, Microsoft’s enterprise-grade successor for multi-agent orchestration.
The key idea is the Kernel: a dependency injection container that holds all your AI services, plugins, and memory. When a user sends a request, the Kernel selects the right service, assembles the prompt, sends it to the LLM, and routes any function calls back to your code. Here is how the components fit together:
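The request flow above can be sketched in plain Python. This is a toy illustration of the pattern only, not the Semantic Kernel API: the kernel holds registered services and plugins, picks a service, sends the prompt, and routes any function call the model returns back to your code. All names here (`ToyKernel`, `FunctionCall`, `fake_chat_service`) are invented for the sketch.

```python
# Toy sketch of the Kernel's request flow: NOT the Semantic Kernel API,
# just a plain-Python illustration of the routing pattern described above.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class FunctionCall:
    """A model's request to call one of your registered functions."""
    plugin: str
    function: str
    arguments: dict

@dataclass
class ToyKernel:
    services: dict = field(default_factory=dict)   # name -> chat service
    plugins: dict = field(default_factory=dict)    # name -> {fn name: callable}

    def add_service(self, name: str, service: Callable) -> None:
        self.services[name] = service

    def add_plugin(self, name: str, functions: dict) -> None:
        self.plugins[name] = functions

    def invoke(self, service_name: str, prompt: str):
        service = self.services[service_name]       # 1. select the AI service
        reply = service(prompt)                     # 2. send the assembled prompt
        if isinstance(reply, FunctionCall):         # 3. model asked for a tool
            fn = self.plugins[reply.plugin][reply.function]
            return fn(**reply.arguments)            # 4. route back to your code
        return reply                                # plain text answer

# Usage: a fake "LLM" that decides to call a weather plugin.
def fake_chat_service(prompt: str):
    if "weather" in prompt:
        return FunctionCall("weather", "get_forecast", {"city": "Seattle"})
    return "Hello!"

kernel = ToyKernel()
kernel.add_service("chat", fake_chat_service)
kernel.add_plugin("weather", {"get_forecast": lambda city: f"Rain in {city}"})
print(kernel.invoke("chat", "What's the weather?"))  # Rain in Seattle
```

In the real SDK the equivalents are the `Kernel` object, AI service connectors, and `@kernel_function`-decorated plugin methods, with the function-calling loop handled for you.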
Both frameworks let you connect LLMs to tools and data. Semantic Kernel is built with enterprise environments in mind, emphasising strong typing, dependency injection, telemetry, and non-breaking API stability across C#, Python, and Java. LangChain is Python-first with a larger community ecosystem and more rapid experimentation. Semantic Kernel integrates more deeply with the Azure and .NET ecosystem; LangChain gives broader access to third-party integrations.
Not exactly. Semantic Kernel is the SDK that powers agents, plugins, memory, and process orchestration. Microsoft Agent Framework is the higher-level enterprise successor that combines Semantic Kernel with AutoGen's multi-agent patterns, adding graph-based workflows, A2A messaging, and long-term support. If you are starting a new project, Microsoft Agent Framework is the recommended path. If you are an existing Semantic Kernel user, your code migrates with minimal changes.
Yes. Semantic Kernel supports OpenAI, Hugging Face, NVIDIA NIM, Ollama, LM Studio, ONNX, and other local or cloud-hosted models through its AI service connector interface. The Kernel is model-agnostic by design, so you can swap models without rewriting plugin or agent logic.
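The "model-agnostic by design" idea can be sketched in plain Python. This is not the actual connector API; it is a hedged illustration of the principle: your plugin and agent code targets a small chat-service interface, so swapping the backing model is a one-line change. The `ChatService`, `CloudModel`, and `LocalModel` names are invented for the sketch.

```python
# Illustration of a model-agnostic service interface (not the real SK API):
# logic is written against a protocol, so backends are interchangeable.
from typing import Protocol

class ChatService(Protocol):
    def complete(self, prompt: str) -> str: ...

class CloudModel:
    """Stands in for a hosted model (e.g. an OpenAI connector in real SK)."""
    def complete(self, prompt: str) -> str:
        return f"[cloud] {prompt.upper()}"

class LocalModel:
    """Stands in for a local model (e.g. Ollama or ONNX in real SK)."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt.lower()}"

def summarize(service: ChatService, text: str) -> str:
    # Plugin/agent logic stays identical no matter which model backs it.
    return service.complete(f"Summarize: {text}")

print(summarize(CloudModel(), "Hello"))  # [cloud] SUMMARIZE: HELLO
print(summarize(LocalModel(), "Hello"))  # [local] summarize: hello
```

Swapping the connector changes only the object you pass in; nothing downstream needs to be rewritten, which is the guarantee the connector interface provides.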
Type-safe agent framework from the Pydantic team.
Deeper agent loops with planning, sub-agents, and persistent memory for longer-running tasks.
FastMCP is the standard Python framework for building MCP servers and clients: write a function, add a decorator, and your tool is ready for any AI agent to use.
TS toolkit for streaming LLM UIs across providers.