Semantic Kernel for .NET Developers: Beginner’s Guide (2026)


You’ve learned how to call an AI API. You can send a message and get a response. Now your requirements get real: the user wants the AI to look up data from your database, call an external API, remember previous sessions, and plan a multi-step workflow without you hardcoding every step.

That’s where Semantic Kernel comes in. It’s Microsoft’s AI orchestration framework for .NET, the layer above raw API calls that handles complex AI application patterns. This guide explains what it is, when to use it, and how to build with it.


What Is Semantic Kernel?

Semantic Kernel (SK) is an open-source SDK from Microsoft that orchestrates AI models, plugins, and memory to build complex AI applications. Where a basic AI API call sends a prompt and receives a response, Semantic Kernel adds:

  • Plugins — functions (your own code) that the AI model can call to get data or perform actions
  • Planners — let the model decide which plugins to call and in what order to achieve a goal
  • Memory — semantic search over past conversations, documents, or data
  • Agents — AI actors that autonomously use tools to complete multi-step tasks
  • Prompt templates — parameterized prompts with variable injection and rendering logic

The Mental Model

Think of SK as the difference between a calculator and a spreadsheet. Raw AI APIs are the calculator — you give them input, they return output. Semantic Kernel is the spreadsheet — cells (functions/plugins) can reference each other, formulas can chain, and the engine coordinates execution.

SK vs Microsoft.Extensions.AI: Which to Use?

This confuses many developers. Here’s the clear answer:

  • Microsoft.Extensions.AI (MEAI) — thin abstraction over AI API calls (IChatClient, IEmbeddingGenerator). Use it when you’re making direct AI calls and want provider portability. It’s analogous to ILogger.
  • Semantic Kernel — AI orchestration framework. Use it when you need plugins, planners, agents, or complex multi-step AI workflows. SK uses MEAI internally for its AI calls.

They’re complementary, not competing. Simple apps: MEAI is enough. Complex AI workflows: SK on top of MEAI. SK even exposes its IChatCompletionService as an IChatClient so you can mix both.
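A hedged sketch of that bridge, assuming a recent SK release where the `AsChatClient()` adapter is available (the method has moved between versions, so check yours; note that older MEAI previews named the call method `CompleteAsync` rather than `GetResponseAsync`):

```csharp
using Microsoft.Extensions.AI;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "your-api-key")
    .Build();

// Get SK's chat service, then adapt it to the MEAI abstraction
IChatCompletionService skChat = kernel.GetRequiredService<IChatCompletionService>();
IChatClient meaiClient = skChat.AsChatClient();

// Code written against IChatClient now works with the SK-backed service
var reply = await meaiClient.GetResponseAsync("Say hello in one word.");
Console.WriteLine(reply);
```

This lets libraries that only know `IChatClient` share the same configured model as your SK orchestration code.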

Setup and First Kernel

# Install core SK package
dotnet add package Microsoft.SemanticKernel

# Optional: specific connectors
dotnet add package Microsoft.SemanticKernel.Connectors.AzureOpenAI
dotnet add package Microsoft.SemanticKernel.Connectors.OpenAI

Build Your First Kernel

using Microsoft.SemanticKernel;

// Minimal kernel with OpenAI
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "gpt-4o-mini",
        apiKey: "your-api-key")
    .Build();

// Or with Azure OpenAI (a separate variable so both snippets can coexist)
var azureKernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o-mini",
        endpoint: "https://your-resource.openai.azure.com/",
        apiKey: "your-azure-key")
    .Build();

Kernel in ASP.NET Core DI

// Program.cs
builder.Services.AddKernel()
    .AddAzureOpenAIChatCompletion(
        deploymentName: builder.Configuration["AzureOpenAI:Deployment"]!,
        endpoint: builder.Configuration["AzureOpenAI:Endpoint"]!,
        apiKey: builder.Configuration["AzureOpenAI:ApiKey"]!);

// Inject Kernel into your services
public class AiService
{
    private readonly Kernel _kernel;
    public AiService(Kernel kernel) => _kernel = kernel;
}

Plugins: Teaching the AI What It Can Do

Plugins are collections of functions that the AI model can discover and call. They’re the bridge between the AI’s knowledge and your application’s data and capabilities.

Creating a Plugin

using System.ComponentModel; // for [Description]
using Microsoft.SemanticKernel; // for [KernelFunction]

public class OrderPlugin
{
    private readonly IOrderRepository _orderRepo;

    public OrderPlugin(IOrderRepository orderRepo)
    {
        _orderRepo = orderRepo;
    }

    [KernelFunction("get_order_status")]
    [Description("Gets the current status of a customer order")]
    public async Task<string> GetOrderStatusAsync(
        [Description("The order ID, format: ORD-XXXXX")] string orderId)
    {
        var order = await _orderRepo.GetByIdAsync(orderId);
        if (order is null)
            return $"Order {orderId} not found";

        return $"Order {orderId}: Status={order.Status}, " +
               $"EstimatedDelivery={order.EstimatedDelivery:d}, " +
               $"TrackingNumber={order.TrackingNumber ?? "Not yet assigned"}";
    }

    [KernelFunction("cancel_order")]
    [Description("Cancels an order if it hasn't shipped yet")]
    public async Task<string> CancelOrderAsync(
        [Description("The order ID to cancel")] string orderId,
        [Description("Reason for cancellation")] string reason)
    {
        var result = await _orderRepo.CancelAsync(orderId, reason);
        return result.Success
            ? $"Order {orderId} has been cancelled. Refund will be processed in 3-5 days."
            : $"Cannot cancel order {orderId}: {result.ErrorMessage}";
    }
}

Registering and Using Plugins

// Register plugin with DI-resolved dependencies
kernel.ImportPluginFromObject(
    new OrderPlugin(orderRepository),
    "Orders");

// Or use the DI-integrated approach
builder.Services.AddSingleton<OrderPlugin>();
// Then in the kernel builder:
kernelBuilder.Plugins.AddFromType<OrderPlugin>("Orders");

Native Functions

Native functions are C# methods decorated with [KernelFunction]. They can call databases, external APIs, file systems — anything your .NET code can do.

public class WeatherPlugin
{
    private readonly IWeatherService _weather;
    public WeatherPlugin(IWeatherService weather) => _weather = weather;

    [KernelFunction]
    [Description("Get current weather for a city")]
    public async Task<WeatherResult> GetWeatherAsync(
        [Description("City name")] string city,
        [Description("Temperature unit: celsius or fahrenheit")] string unit = "celsius")
    {
        return await _weather.GetCurrentAsync(city, unit);
    }

    [KernelFunction]
    [Description("Get 5-day forecast for a city")]
    public async Task<List<ForecastDay>> GetForecastAsync(
        [Description("City name")] string city)
    {
        return await _weather.GetForecastAsync(city, days: 5);
    }
}

// SK serializes the [Description] attributes into the function schema sent to
// the model, so it knows which parameters to pass and what they mean

Prompt Functions

Prompt functions are parameterized prompt templates — reusable prompts with variables injected at runtime.

// Inline prompt function
var summarizeFunc = kernel.CreateFunctionFromPrompt(
    "Summarize the following {{$input}} in {{$style}} style, max {{$maxWords}} words:\n\n{{$text}}",
    new PromptExecutionSettings { MaxTokens = 500 });

// Invoke with arguments
var result = await kernel.InvokeAsync(summarizeFunc, new KernelArguments
{
    ["input"] = "customer review",
    ["style"] = "professional",
    ["maxWords"] = "100",
    ["text"] = reviewText
});

Console.WriteLine(result.GetValue<string>());

Prompt Templates from Files

// Plugins/ReviewAnalysis/Summarize/skprompt.txt
You are a review analyst. Analyze this {{$productType}} review:

{{$reviewText}}

Provide:
1. Sentiment: [Positive/Neutral/Negative]
2. Key themes (max 3)
3. Actionable insight for the product team

// Plugins/ReviewAnalysis/Summarize/config.json
{
  "schema": 1,
  "description": "Analyzes a product review for sentiment and themes",
  "execution_settings": {
    "default": {
      "max_tokens": 300,
      "temperature": 0.3
    }
  }
}

// Load from directory
var pluginsDir = Path.Combine(AppContext.BaseDirectory, "Plugins");
var reviewPlugin = kernel.ImportPluginFromPromptDirectory(
    Path.Combine(pluginsDir, "ReviewAnalysis"));

Chat History and Conversations

using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI; // for OpenAIPromptExecutionSettings

public class CustomerSupportSession
{
    private readonly Kernel _kernel;
    private readonly IChatCompletionService _chat;
    private readonly ChatHistory _history;

    public CustomerSupportSession(Kernel kernel)
    {
        _kernel = kernel;
        _chat = kernel.GetRequiredService<IChatCompletionService>();
        _history = new ChatHistory();
        _history.AddSystemMessage(
            "You are a friendly support agent for Contoso. " +
            "Use the Orders plugin to look up order information.");
    }

    public async Task<string> ReplyAsync(string userMessage, CancellationToken ct = default)
    {
        _history.AddUserMessage(userMessage);

        var settings = new OpenAIPromptExecutionSettings
        {
            ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
        };

        var response = await _chat.GetChatMessageContentAsync(
            _history,
            settings,
            _kernel,
            ct);

        _history.AddAssistantMessage(response.Content!);
        return response.Content!;
    }
}

ToolCallBehavior.AutoInvokeKernelFunctions tells SK to handle the tool-call loop automatically: if the model decides to call a plugin function, SK executes it and feeds the result back to the model, repeating until the model returns a final text response. Recent SK versions also offer the provider-agnostic FunctionChoiceBehavior.Auto() setting, which behaves the same way; ToolCallBehavior remains supported as the older API.
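As a hedged sketch of the newer, provider-agnostic form, assuming a recent SK release (roughly 1.20+) where FunctionChoiceBehavior is available on the base settings type:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Hedged sketch: FunctionChoiceBehavior.Auto() is the newer equivalent of
// ToolCallBehavior.AutoInvokeKernelFunctions, and lives on the base
// PromptExecutionSettings, so it works across providers.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "your-api-key")
    .Build();

var settings = new PromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var history = new ChatHistory();
history.AddUserMessage("What's the status of order ORD-12345?");

var chat = kernel.GetRequiredService<IChatCompletionService>();
var response = await chat.GetChatMessageContentAsync(history, settings, kernel);
```

Because the setting sits on the base class, the same orchestration code can later switch chat providers without touching the tool-calling configuration.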

Planners and Auto Function Calling

The planner lets the model decompose a complex goal into steps, selecting which plugins to call and in what order — without you hardcoding the workflow.

// Function-call auto-invocation handles simple multi-step scenarios
var settings = new OpenAIPromptExecutionSettings
{
    // SK caps the auto-invoke loop internally, so a looping model
    // cannot trigger unbounded tool calls
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var history = new ChatHistory();
history.AddSystemMessage("Help users manage their orders using available tools.");
history.AddUserMessage("Check if order ORD-12345 is delivered. If not, get the tracking info.");

var chat = kernel.GetRequiredService<IChatCompletionService>();
var response = await chat.GetChatMessageContentAsync(history, settings, kernel);
// The model automatically calls get_order_status; the returned status
// string already includes the tracking number

Handlebars Planner for Complex Workflows

The Handlebars planner ships in a separate preview package (Microsoft.SemanticKernel.Planners.Handlebars); for most new code, Microsoft recommends auto function calling instead.

using Microsoft.SemanticKernel.Planning.Handlebars;

var planner = new HandlebarsPlanner(new HandlebarsPlannerOptions
{
    AllowLoops = true
});

var plan = await planner.CreatePlanAsync(kernel,
    "Analyze all orders placed this week and send a daily summary report to the manager");

Console.WriteLine(plan.ToString()); // See the generated plan
var result = await plan.InvokeAsync(kernel);

Memory and Vector Search

The MemoryBuilder/SemanticTextMemory API shown here is marked experimental in current SK releases (newer versions steer toward the Microsoft.Extensions.VectorData abstractions), but it illustrates the store-then-search workflow clearly.

using Microsoft.SemanticKernel.Memory;

// Register memory with an embedding model
var memory = new MemoryBuilder()
    .WithAzureOpenAITextEmbeddingGeneration("text-embedding-3-small", endpoint, apiKey)
    .WithMemoryStore(new VolatileMemoryStore()) // Or use Qdrant, Azure AI Search, etc.
    .Build();

// Save information to memory
await memory.SaveInformationAsync(
    collection: "product-faq",
    text: "Our return policy allows returns within 30 days with original receipt.",
    id: "return-policy",
    description: "Return and refund policy");

await memory.SaveInformationAsync(
    collection: "product-faq",
    text: "Shipping within the US takes 3-5 business days via standard shipping.",
    id: "shipping-info");

// Search memory semantically
var results = memory.SearchAsync(
    collection: "product-faq",
    query: "How long do I have to return something?",
    limit: 3,
    minRelevanceScore: 0.7);

await foreach (var result in results)
{
    Console.WriteLine($"[{result.Relevance:P0}] {result.Metadata.Text}");
}

// RAG pattern: search memory → inject into prompt → get grounded response
var faqContext = new StringBuilder();
await foreach (var item in memory.SearchAsync("product-faq", userQuestion, limit: 3))
    faqContext.AppendLine(item.Metadata.Text);

var ragPrompt = $"""
    Answer the user's question using only the provided context.
    If the answer isn't in the context, say you don't have that information.

    Context:
    {faqContext}

    Question: {userQuestion}
    """;

var answer = await kernel.InvokePromptAsync(ragPrompt);

Agents (Preview)

SK Agents are AI actors that autonomously plan and execute multi-step tasks using plugins and memory.

using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion; // for AuthorRole

// Create a specialized agent
var researchAgent = new ChatCompletionAgent
{
    Name = "ResearchAgent",
    Instructions = """
        You are a research assistant. When given a topic:
        1. Search for relevant information using available tools
        2. Synthesize findings into a structured report
        3. Cite sources and flag any information gaps
        """,
    Kernel = kernel
};

// Run the agent
var thread = new AgentGroupChat(researchAgent);
thread.AddChatMessage(new ChatMessageContent(AuthorRole.User,
    "Research the latest developments in .NET MAUI performance optimization"));

await foreach (var response in thread.InvokeAsync())
{
    Console.WriteLine($"[{response.AuthorName}]: {response.Content}");
}

// Multi-agent collaboration
var writerAgent = new ChatCompletionAgent
{
    Name = "WriterAgent",
    Instructions = "You write clear, engaging technical blog posts based on research provided.",
    Kernel = kernel
};

var groupChat = new AgentGroupChat(researchAgent, writerAgent)
{
    ExecutionSettings = new AgentGroupChatSettings
    {
        // ApprovalTerminationStrategy is a custom TerminationStrategy you define
        // (e.g., stop when a message contains "approved") -- it is not built in
        TerminationStrategy = new ApprovalTerminationStrategy()
    }
};

FAQ

Is Semantic Kernel stable enough for production in 2026?

The core features (kernel, plugins, chat history, basic function calling) are stable. Agents and some advanced planner features are still in preview/beta. The stable APIs have been production-proven at Microsoft and by the community. Check the package release notes for stable vs preview markers before adopting specific features.

How does Semantic Kernel compare to LangChain?

LangChain is Python-first with a separate JavaScript/TypeScript implementation. Semantic Kernel is .NET-first, with Python and Java versions maintained by Microsoft. SK integrates more naturally with .NET DI and Microsoft/Azure services and is officially maintained by Microsoft; LangChain currently has a larger community and more third-party integrations. For .NET developers, SK is usually the better fit.

Can I use Semantic Kernel with local models (Ollama)?

Yes — SK supports any OpenAI-compatible endpoint, including Ollama’s. Point the OpenAI connector at Ollama’s local URL (there is also a preview Ollama connector package). Function-calling quality depends on the local model’s instruction-following ability; larger instruction-tuned models such as Llama 3.1 70B tend to handle tool calls far more reliably than small ones.
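A hedged sketch of the Ollama setup. The `Uri endpoint` overload of AddOpenAIChatCompletion appears in recent SK releases, so check your version; the model name and URL below are assumptions for a default local Ollama install:

```csharp
using Microsoft.SemanticKernel;

// Hedged sketch: point the OpenAI connector at a local Ollama server.
// Assumes a recent SK version with the Uri endpoint overload, and that
// "llama3.1" has been pulled locally (ollama pull llama3.1).
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "llama3.1",
        endpoint: new Uri("http://localhost:11434/v1"), // Ollama's OpenAI-compatible API
        apiKey: "ollama") // Ollama ignores the key, but the connector wants a value
    .Build();

var result = await kernel.InvokePromptAsync("Explain dependency injection in one sentence.");
Console.WriteLine(result);
```

Everything else in this guide (plugins, chat history, auto function calling) then works unchanged against the local model, subject to its tool-calling ability.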

What’s the difference between a Plugin and an Agent in SK?

A Plugin is a set of functions the AI model can call when needed during a conversation. An Agent is an autonomous AI actor that uses plugins and its own reasoning to complete a goal over multiple steps. Think of plugins as tools and agents as workers who use those tools.

How do I test SK-based code?

Mock IChatCompletionService for unit tests of your orchestration logic. For integration tests, use a real (but cheap) model like GPT-4o-mini with deterministic prompts. SK’s DI integration makes swapping mock implementations straightforward.
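As a sketch of the unit-test side, you can hand-roll a fake IChatCompletionService and register it so the kernel resolves it. The class name and canned-reply design below are illustrative assumptions, not an SK-provided test double:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Illustrative fake: returns a canned reply so orchestration logic can be
// asserted without a network call. Not part of SK itself.
public sealed class FakeChatService : IChatCompletionService
{
    private readonly string _cannedReply;
    public FakeChatService(string cannedReply) => _cannedReply = cannedReply;

    public IReadOnlyDictionary<string, object?> Attributes { get; } =
        new Dictionary<string, object?>();

    public Task<IReadOnlyList<ChatMessageContent>> GetChatMessageContentsAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default)
    {
        IReadOnlyList<ChatMessageContent> reply =
            new[] { new ChatMessageContent(AuthorRole.Assistant, _cannedReply) };
        return Task.FromResult(reply);
    }

    public IAsyncEnumerable<StreamingChatMessageContent> GetStreamingChatMessageContentsAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default)
        => throw new NotSupportedException("Streaming not needed for these tests");
}

// Usage: register the fake so classes that take Kernel (like AiService above)
// get deterministic responses in tests
var builder = Kernel.CreateBuilder();
builder.Services.AddSingleton<IChatCompletionService>(new FakeChatService("canned answer"));
var kernel = builder.Build();
```

Because your services depend on the kernel (or on IChatCompletionService directly) rather than on a concrete connector, swapping in this fake requires no changes to production code.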

Conclusion

Semantic Kernel is the right framework when your AI feature needs to do more than answer questions — when it needs to query your database, call external APIs, remember context across sessions, or autonomously plan multi-step workflows. The plugin system is intuitive for .NET developers (decorated C# methods), and the DI integration makes it feel like a natural part of an ASP.NET Core or MAUI application.

Start with plugins and auto function calling — those two features cover 80% of practical AI application scenarios. Graduate to memory and agents when your requirements genuinely demand them.

[INTERNAL_LINK: Microsoft.Extensions.AI explained] [INTERNAL_LINK: Azure OpenAI SDK for .NET guide]

