EF Core is a genuinely good ORM. It’s also the source of some of the most painful performance issues in .NET applications — not because it’s broken, but because the abstractions hide costs that bite you at scale.
This isn’t a list of theoretical optimizations. These are the techniques that show up in actual performance investigations on real production applications. Each section includes before/after query examples and explains why the change matters.
Table of Contents
- Use AsNoTracking for Read-Only Queries
- Fix N+1 Query Problems
- Project to DTOs, Not Full Entities
- Compiled Queries
- Split Queries for Complex Includes
- Bulk Operations Without Loading Data
- Efficient Pagination
- Diagnosing Slow Queries
- FAQ
Use AsNoTracking for Read-Only Queries
By default, EF Core tracks every entity it loads — recording its original state in the ChangeTracker for later dirty checking. For read-only operations (API responses, reports, read-models), this is pure overhead.
// Slow — tracks entities you never need to update
var products = await context.Products
.Include(p => p.Category)
.Where(p => p.IsActive)
.ToListAsync();
// Fast — no tracking overhead
var products = await context.Products
.AsNoTracking()
.Include(p => p.Category)
.Where(p => p.IsActive)
.ToListAsync();
The performance gain scales with query size. On queries returning 100+ entities, AsNoTracking typically reduces execution time by 20–30% and cuts memory allocation significantly — the ChangeTracker isn’t storing snapshots of every property on every entity.
AsNoTrackingWithIdentityResolution
If your query returns the same entity multiple times (through multiple includes), AsNoTracking creates duplicate objects. Use AsNoTrackingWithIdentityResolution to deduplicate without the full tracking overhead:
var orders = await context.Orders
.AsNoTrackingWithIdentityResolution()
.Include(o => o.Items)
.ThenInclude(i => i.Product)
.ToListAsync();
// Products referenced by multiple items are the same object — no duplicates
Fix N+1 Query Problems
The N+1 problem is the most common EF Core performance killer. It happens when you load a collection, then access a related entity for each item — causing one query per item instead of a single join.
// N+1 — This looks innocent but executes 1 + N queries
var orders = await context.Orders.ToListAsync(); // 1 query
foreach (var order in orders) // For 100 orders: 100 more queries
{
Console.WriteLine(order.Customer.Name); // Lazy load fires here
}
// Fixed — 1 query with a join
var orders = await context.Orders
.Include(o => o.Customer) // eager load
.ToListAsync();
foreach (var order in orders)
{
Console.WriteLine(order.Customer.Name); // already loaded, no query
}
Detecting N+1 in Development
// Add this to your DbContext configuration to log all queries
optionsBuilder.LogTo(
Console.WriteLine,
new[] { DbLoggerCategory.Database.Command.Name },
LogLevel.Information);
// Or use MiniProfiler / SQL profiler to see query counts per request
Lazy Loading Is a Trap
Lazy loading (via Microsoft.EntityFrameworkCore.Proxies) feels convenient but almost always causes N+1 problems in production. Disable it by default and use explicit loading or eager loading instead:
// In DbContext configuration — disable lazy loading
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder.UseLazyLoadingProxies(false); // disabled by default anyway
// Never enable unless you fully understand the query implications
}
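The text above mentions explicit loading as the alternative; a minimal sketch, assuming the `Order`, `Customer`, and `Items` entities from the earlier examples:

```csharp
// Explicit loading — fetch related data on demand, one deliberate query at a time
var order = await context.Orders.FirstAsync(o => o.Id == orderId);

// Nothing related is loaded yet; load Customer with one explicit query
await context.Entry(order)
    .Reference(o => o.Customer)
    .LoadAsync();

// Collections work the same way, and can be filtered before loading
await context.Entry(order)
    .Collection(o => o.Items)
    .Query()
    .Where(i => i.Quantity > 0)
    .LoadAsync();
```

Unlike lazy loading, every database round trip is visible in the code, so an accidental N+1 can’t hide behind a property access.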
Project to DTOs, Not Full Entities
Loading full entities when you only need 3 of 20 columns wastes both database bandwidth and memory. Projecting to a DTO tells EF Core to SELECT only what you need:
// Inefficient — loads all columns, all navigation properties
var products = await context.Products
.Include(p => p.Category)
.Include(p => p.Images)
.ToListAsync();
var dtos = products.Select(p => new ProductListDto(p.Id, p.Name, p.Price));
// Efficient — SELECT Id, Name, Price, Category.Name only
var dtos = await context.Products
.Select(p => new ProductListDto(
p.Id,
p.Name,
p.Price,
p.Category.Name)) // EF Core generates a JOIN, selects only Category.Name
.ToListAsync();
The SQL generated by the projection version is dramatically leaner. No unnecessary columns, no extra joins for data you don’t use. On large tables with many columns (especially text/blob columns), this can reduce query time by 50%+.
Compiled Queries
Every time EF Core executes a LINQ query, it translates the expression tree to SQL. EF Core caches these translations, but each execution still pays to build the expression tree and look it up in the query cache. For hot-path queries called thousands of times per second, that overhead adds up. Compiled queries skip it:
// Define once at class level — compiled once, reused forever
private static readonly Func<AppDbContext, int, Task<Product?>> GetProductByIdQuery =
EF.CompileAsyncQuery((AppDbContext context, int id) =>
context.Products
.AsNoTracking()
.Include(p => p.Category)
.FirstOrDefault(p => p.Id == id));
// Use in your service — no compilation overhead
public async Task<Product?> GetProductAsync(int id)
{
return await GetProductByIdQuery(_context, id);
}
Benchmark results: on a query executed 10,000 times, compiled queries show 15–25% lower total execution time. For APIs handling high request volume on fixed query patterns, this is worth the verbosity.
Split Queries for Complex Includes
When you include multiple collections, EF Core generates a Cartesian product JOIN — the result set grows multiplicatively with each collection. For Order → Items → Products → Images, a single order could return hundreds of rows in the result set.
// Bad for multiple collection includes — Cartesian product
var orders = await context.Orders
.Include(o => o.Items)
.ThenInclude(i => i.Product)
.ThenInclude(p => p.Images) // Cartesian product explosion
.ToListAsync();
// Better — split into separate queries, EF Core assembles in memory
var orders = await context.Orders
.Include(o => o.Items)
.ThenInclude(i => i.Product)
.ThenInclude(p => p.Images)
.AsSplitQuery() // executes 3 separate queries, no Cartesian product
.ToListAsync();
Split queries trade fewer rows returned for more round trips. The break-even point depends on network latency and data volume, but for queries with 3+ collection includes, AsSplitQuery almost always wins.
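If most of your queries have multiple collection includes, you can make split queries the default for the whole context rather than annotating each query. A sketch, assuming the SQL Server provider and a `connectionString` variable:

```csharp
// Make split queries the default for every query in this context
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer(
        connectionString,
        o => o.UseQuerySplittingBehavior(QuerySplittingBehavior.SplitQuery));
}
```

Individual queries can still opt back in to a single round trip with `AsSingleQuery()` where the Cartesian product is small enough not to matter.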
Bulk Operations Without Loading Data
EF Core’s default update/delete pattern requires loading data first:
// Slow — loads every entity before updating
var products = await context.Products
.Where(p => p.CategoryId == categoryId)
.ToListAsync();
foreach (var product in products)
product.IsActive = false;
await context.SaveChangesAsync(); // N UPDATE statements
EF Core 7+ ExecuteUpdate / ExecuteDelete
EF Core 7 added ExecuteUpdate and ExecuteDelete — single SQL statements without loading data:
// Fast — single UPDATE statement, no data loaded
await context.Products
.Where(p => p.CategoryId == categoryId)
.ExecuteUpdateAsync(setters =>
setters.SetProperty(p => p.IsActive, false));
// Fast — single DELETE statement
await context.Orders
.Where(o => o.Status == "Cancelled" && o.CreatedAt < cutoffDate)
.ExecuteDeleteAsync();
The SQL generated: UPDATE Products SET IsActive = 0 WHERE CategoryId = @p0. One query, regardless of row count. For batch operations on thousands of rows, this is not a marginal improvement — it’s the difference between an operation that takes 30 seconds and one that takes 200 milliseconds.
Bulk Inserts with EFCore.BulkExtensions
dotnet add package EFCore.BulkExtensions
var products = GenerateProducts(10000); // 10,000 entities
// EF Core default — 10,000 INSERT statements
await context.Products.AddRangeAsync(products);
await context.SaveChangesAsync(); // ~5 seconds
// BulkExtensions — single bulk INSERT
await context.BulkInsertAsync(products); // ~200ms
Efficient Pagination
Skip/Take pagination forces the database to read and discard every row before the offset on each request. For page 1,000 with a page size of 10, the database scans past 10,000 rows just to return the last 10:
// Offset pagination — gets slower as page number increases
var page = await context.Products
.OrderBy(p => p.Id)
.Skip(pageNumber * pageSize) // database must scan all prior rows
.Take(pageSize)
.ToListAsync();
Keyset Pagination (Cursor-Based)
// Keyset pagination — always fast, regardless of page depth
var page = await context.Products
.Where(p => p.Id > lastSeenId) // index seek instead of scan
.OrderBy(p => p.Id)
.Take(pageSize)
.ToListAsync();
Keyset pagination requires a stable sort key (usually a primary key or timestamp) and doesn’t support random-access page jumping. For infinite scroll, “load more” patterns, or API cursor-based pagination, it’s strictly superior to offset pagination.
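When the sort key isn’t unique (a timestamp, for example), keyset pagination needs a tiebreaker column so no rows are skipped or repeated at page boundaries. A sketch, assuming a `CreatedAt` column and `lastSeenCreatedAt`/`lastSeenId` captured from the previous page:

```csharp
// Keyset pagination on a non-unique sort column, with Id as the tiebreaker
var page = await context.Products
    .Where(p => p.CreatedAt > lastSeenCreatedAt
        || (p.CreatedAt == lastSeenCreatedAt && p.Id > lastSeenId))
    .OrderBy(p => p.CreatedAt)
    .ThenBy(p => p.Id)
    .Take(pageSize)
    .ToListAsync();
```

A composite index on `(CreatedAt, Id)` keeps this an index seek.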
Diagnosing Slow Queries
Sensitive Data Logging + Query Logging
// Development only — logs full SQL with parameters
optionsBuilder
.EnableSensitiveDataLogging()
.LogTo(Console.WriteLine, LogLevel.Information);
EF Core Interceptors
public class SlowQueryInterceptor : DbCommandInterceptor
{
private static readonly TimeSpan Threshold = TimeSpan.FromMilliseconds(500);
    public override ValueTask<DbDataReader> ReaderExecutedAsync(
        DbCommand command,
        CommandExecutedEventData eventData,
        DbDataReader result,
        CancellationToken cancellationToken = default)
    {
        if (eventData.Duration > Threshold)
        {
            Log.Warning("Slow query ({Duration}ms): {CommandText}",
                eventData.Duration.TotalMilliseconds,
                command.CommandText);
        }
        return ValueTask.FromResult(result); // no await needed — return synchronously
    }
}
}
optionsBuilder.AddInterceptors(new SlowQueryInterceptor());
FAQ
Should I use AsNoTracking by default and only track when needed?
Many teams configure their DbContext with QueryTrackingBehavior.NoTracking as the default and explicitly call AsTracking() when they need to modify entities. This is a good practice for read-heavy APIs — just be deliberate when you switch to tracking for writes.
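The NoTracking-by-default setup described above is a one-line configuration change; a minimal sketch:

```csharp
// Default the whole context to no-tracking reads
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking);
}

// Opt back in to tracking only where you intend to write
var product = await context.Products
    .AsTracking()
    .FirstAsync(p => p.Id == id);
product.Price = newPrice;
await context.SaveChangesAsync();
```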
When does EF Core generate bad SQL that a hand-written query would fix?
Complex LINQ expressions involving multiple subqueries, conditional logic, or operations that don’t map cleanly to SQL (string methods, complex math) can generate inefficient SQL. Always check the generated SQL with ToQueryString() for complex queries. For critical performance paths, raw SQL via FromSqlRaw or Dapper is sometimes the right answer.
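Checking the generated SQL with `ToQueryString()` is cheap — it translates the query without executing it:

```csharp
// Inspect the SQL EF Core will generate — nothing is executed
var query = context.Products
    .Where(p => p.IsActive)
    .Select(p => new { p.Id, p.Name });

Console.WriteLine(query.ToQueryString());
// Prints the provider-specific SELECT, including parameter declarations
```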
How do I find N+1 problems in my existing application?
Enable query logging in development and look for patterns where the same table is queried multiple times with different ID values in a single request. MiniProfiler or dotnet-trace with EF Core events can surface this systematically. For production, Application Insights or Datadog can trace your database calls per request.
Does EF Core support database-generated columns and computed columns?
Yes. Use .HasComputedColumnSql() in your model configuration for database-computed columns, and ValueGeneratedOnAddOrUpdate() for server-generated values. EF Core won’t try to INSERT or UPDATE these columns.
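A minimal sketch of a computed-column mapping, assuming a hypothetical `Order` entity with `Subtotal` and `Tax` columns on SQL Server:

```csharp
// In OnModelCreating — a column the database computes, not your code
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Order>()
        .Property(o => o.Total)
        .HasComputedColumnSql("[Subtotal] + [Tax]", stored: true);
}
```

With `stored: true` the value is persisted and indexable rather than recomputed on every read.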
Is Dapper better than EF Core for performance-critical applications?
Dapper is faster than EF Core for raw query execution because it does less work — no change tracking, no expression translation. For read-heavy hot paths where you control the SQL, Dapper is a legitimate choice. Most applications benefit from using both: EF Core for CRUD operations and complex relationships, Dapper for performance-critical read queries and reporting.
EF Core performance problems are almost always predictable. N+1 queries, missing AsNoTracking, loading full entities when you need DTOs, offset pagination on deep pages — these are the 80% case. Fix these before reaching for Dapper or micro-optimizations, and your EF Core application will be fast enough for most workloads.
