Xtreme .Net Talk

  • Posts

    • Continuing our tradition, we are excited to share a blog post highlighting the latest and most interesting changes in the networking space with the new .NET release. This year, we are introducing updates in the HTTP space, new HttpClientFactory APIs, .NET Framework compatibility improvements, and more.

HTTP

In the following section, we introduce the most impactful changes in the HTTP space: performance improvements in connection pooling, support for multiple HTTP/3 connections, an auto-updating Windows proxy, and, last but not least, community contributions.

Connection Pooling

In this release, we made two impactful performance improvements in HTTP connection pooling.

We added opt-in support for multiple HTTP/3 connections. Using more than one HTTP/3 connection to the peer is discouraged by RFC 9114, since a single connection can multiplex parallel requests. However, in certain scenarios, like server-to-server, one connection might become a bottleneck even with request multiplexing. We saw such limitations with HTTP/2 (dotnet/runtime#35088), which has the same concept of multiplexing over one connection. For the same reasons (dotnet/runtime#51775), we decided to implement multiple connection support for HTTP/3 (dotnet/runtime#101535). The implementation tries to closely match the behavior of HTTP/2 multiple connections, which, at the moment, always prefers to saturate existing connections with as many requests as allowed by the peer before opening a new one. Note that this is an implementation detail and the behavior might change in the future.

As a result, our benchmarks showed a nontrivial increase in requests per second (RPS). A comparison for 10,000 parallel requests:

| client                      | single HTTP/3 connection | multiple HTTP/3 connections |
|-----------------------------|--------------------------|-----------------------------|
| Max CPU Usage (%)           | 35                       | 92                          |
| Max Cores Usage (%)         | 971                      | 2,572                       |
| Max Working Set (MB)        | 3,810                    | 6,491                       |
| Max Private Memory (MB)     | 4,415                    | 7,228                       |
| Processor Count             | 28                       | 28                          |
| First request duration (ms) | 519                      | 594                         |
| Requests                    | 345,446                  | 4,325,325                   |
| Mean RPS                    | 23,069                   | 288,664                     |

Note that the increase in Max CPU Usage implies better CPU utilization, meaning the CPU is busy processing requests instead of sitting idle. This feature can be turned on via the EnableMultipleHttp3Connections property on SocketsHttpHandler:

```csharp
var client = new HttpClient(new SocketsHttpHandler
{
    EnableMultipleHttp3Connections = true
});
```

We also addressed lock contention in HTTP/1.1 connection pooling (dotnet/runtime#70098). The HTTP/1.1 connection pool previously used a single lock to manage the list of connections and the queue of pending requests. This lock was observed to be a bottleneck in high-throughput scenarios on machines with a high number of CPU cores. We resolved this problem (dotnet/runtime#99364) by replacing an ordinary list guarded by a lock with a concurrent collection. We chose ConcurrentStack because it preserves the observable behavior that requests are handled by the newest available connection, which allows collecting older connections when their configured lifetime expires. The throughput of HTTP/1.1 requests in our benchmarks increased by more than 30%:

| Client   | .NET 8.0   | .NET 9.0    | Increase |
|----------|------------|-------------|----------|
| Requests | 80,028,791 | 107,128,778 | +33.86%  |
| Mean RPS | 666,886    | 892,749     | +33.87%  |

Proxy Auto Update on Windows

One of the main pain points when debugging HTTP traffic of applications using earlier versions of .NET is that the application doesn’t react to changes in Windows proxy settings (dotnet/runtime#70098).
The proxy settings were previously initialized once per process with no reasonable ability to refresh them. For example (with .NET 8), HttpClient.DefaultProxy returns the same instance upon repeated access and never refetches the settings. As a result, tools like Fiddler, which register themselves as the system proxy to listen for traffic, weren’t able to capture traffic from already running processes. This issue was mitigated in dotnet/runtime#103364, where HttpClient.DefaultProxy is set to an instance of the Windows proxy that listens for registry changes and reloads the proxy settings when notified. The following code:

```csharp
while (true)
{
    using var resp = await client.GetAsync("https://httpbin.org/");
    Console.WriteLine(HttpClient.DefaultProxy.GetProxy(new Uri("https://httpbin.org/"))?.ToString() ?? "null");
    await Task.Delay(1_000);
}
```

produces output like this:

```
null
// After Fiddler's "System Proxy" is turned on.
http://127.0.0.1:8866/
```

Note that this change applies only to Windows, as it has a unique concept of machine-wide proxy settings. Linux and other UNIX-based systems only allow setting up a proxy via environment variables, which can’t be changed during the process lifetime.

Community contributions

We’d like to call out community contributions. CancellationToken overloads were missing from HttpContent.LoadIntoBufferAsync. This gap was resolved by an API proposal (dotnet/runtime#102659) from @andrewhickman-aveva and an implementation (dotnet/runtime#103991) from @manandre.

Another change addresses a units discrepancy for the MaxResponseHeadersLength property on SocketsHttpHandler and HttpClientHandler (dotnet/runtime#75137). All the other size and length properties are interpreted as being in bytes; however, this one is interpreted as being in kilobytes. Since the actual behavior can’t be changed due to backward compatibility, the problem was solved by implementing an analyzer (dotnet/roslyn-analyzers#6796). The analyzer tries to make sure the user is aware that the value provided is interpreted as kilobytes, and warns if the usage suggests otherwise, for example when the value is higher than a certain threshold. The analyzer was implemented by @amiru3f.

QUIC

The prominent changes in the QUIC space in .NET 9 include making the library public, more configuration options for connections, and several performance improvements.

Public APIs

From this release on, System.Net.Quic isn’t hidden behind PreviewFeature anymore and all the APIs are generally available without any opt-in switches (dotnet/runtime#104227).

QUIC Connection Options

We expanded the configuration options for QuicConnection (dotnet/runtime#72984). The implementation (dotnet/runtime#94211) added three new properties to QuicConnectionOptions:

- HandshakeTimeout – we were already imposing a limit on how long a connection establishment can take; this property just enables the user to adjust it.
- KeepAliveInterval – if this property is set to a positive value, PING frames are sent out regularly at this interval (in case no other activity is happening on the connection), which prevents the connection from being closed on idle timeout.
- InitialReceiveWindowSizes – a set of parameters to adjust the initial receive limits for data flow control sent in transport parameters. These data limits apply only until the dynamic flow control algorithm starts adjusting the limits based on the data reading speed. Due to MsQuic limitations, these parameters can only be set to values that are powers of 2.

All of these parameters are optional.
Their default values are derived from MsQuic defaults. The following code reports the defaults programmatically:

```csharp
var options = new QuicClientConnectionOptions();
Console.WriteLine($"KeepAliveInterval = {PrettyPrintTimeStamp(options.KeepAliveInterval)}");
Console.WriteLine($"HandshakeTimeout = {PrettyPrintTimeStamp(options.HandshakeTimeout)}");
Console.WriteLine(@$"InitialReceiveWindowSizes = {{
    Connection = {PrettyPrintInt(options.InitialReceiveWindowSizes.Connection)},
    LocallyInitiatedBidirectionalStream = {PrettyPrintInt(options.InitialReceiveWindowSizes.LocallyInitiatedBidirectionalStream)},
    RemotelyInitiatedBidirectionalStream = {PrettyPrintInt(options.InitialReceiveWindowSizes.RemotelyInitiatedBidirectionalStream)},
    UnidirectionalStream = {PrettyPrintInt(options.InitialReceiveWindowSizes.UnidirectionalStream)}
}}");

static string PrettyPrintTimeStamp(TimeSpan timeSpan)
    => timeSpan == Timeout.InfiniteTimeSpan ? "infinite" : timeSpan.ToString();

static string PrettyPrintInt(int sizeB)
    => sizeB % 1024 == 0 ? $"{sizeB / 1024} * 1024" : sizeB.ToString();

// Prints:
// KeepAliveInterval = infinite
// HandshakeTimeout = 00:00:10
// InitialReceiveWindowSizes = {
//     Connection = 16384 * 1024,
//     LocallyInitiatedBidirectionalStream = 64 * 1024,
//     RemotelyInitiatedBidirectionalStream = 64 * 1024,
//     UnidirectionalStream = 64 * 1024
// }
```

Stream Capacity API

.NET 9 also introduced new APIs to support multiple HTTP/3 connections in SocketsHttpHandler (dotnet/runtime#101534). The APIs were designed with this specific usage in mind, and we don’t expect them to be used apart from very niche scenarios.

QUIC has built-in logic for managing stream limits within the protocol. As a result, calling OpenOutboundStreamAsync on a connection gets suspended if there isn’t any available stream capacity. Moreover, there isn’t an efficient way to learn whether the stream limit was reached or not. All these limitations together didn’t allow the HTTP/3 layer to know when to open a new connection. So we introduced a new StreamCapacityCallback that gets called whenever stream capacity is increased. The callback itself is registered via QuicConnectionOptions. More details about the callback can be found in the documentation.

Performance Improvements

Both performance improvements in System.Net.Quic are TLS related and both only affect connection establishment times.

The first performance-related change was to run the peer certificate validation asynchronously in the .NET thread pool (dotnet/runtime#98361). The certificate validation can be time consuming on its own and might even include execution of a user callback. Moving this logic to the .NET thread pool stops us from blocking the MsQuic threads, of which MsQuic has a limited number, and thus enables MsQuic to process a higher number of new connections at the same time.

On top of that, we have introduced caching of MsQuic configuration (dotnet/runtime#99371). MsQuic configuration is a set of native structures containing connection settings from QuicConnectionOptions, potentially including the certificate and its intermediaries. Constructing and initializing the native structure might be very expensive, since it might require serializing and deserializing all the certificate data to and from PKCS #12 format. Moreover, the cache allows reusing the same MsQuic configuration for different connections if their settings are identical.
Specifically, server scenarios with static configuration can notably profit from the caching, like the following code:

```csharp
var alpn = new SslApplicationProtocol("test");
var serverCertificate = X509CertificateLoader.LoadCertificateFromFile("../path/to/cert");

// Prepare the connection options upfront and reuse them.
var serverConnectionOptions = new QuicServerConnectionOptions()
{
    DefaultStreamErrorCode = 123,
    DefaultCloseErrorCode = 456,
    ServerAuthenticationOptions = new SslServerAuthenticationOptions
    {
        ApplicationProtocols = new List<SslApplicationProtocol>() { alpn },
        // Re-using the same certificate.
        ServerCertificate = serverCertificate
    }
};

// Configure the listener to return the pre-prepared options.
await using var listener = await QuicListener.ListenAsync(new QuicListenerOptions()
{
    ListenEndPoint = new IPEndPoint(IPAddress.Loopback, 0),
    ApplicationProtocols = [ alpn ],
    // Callback returns the same object.
    // Internal cache will re-use the same native structure for every incoming connection.
    ConnectionOptionsCallback = (_, _, _) => ValueTask.FromResult(serverConnectionOptions)
});
```

We also built an escape hatch for this feature. It can be turned off either with an environment variable:

```
export DOTNET_SYSTEM_NET_QUIC_DISABLE_CONFIGURATION_CACHE=1
# run the app
```

or with an AppContext switch:

```csharp
AppContext.SetSwitch("System.Net.Quic.DisableConfigurationCache", true);
```

WebSockets

.NET 9 introduces the long-desired PING/PONG Keep-Alive strategy to WebSockets (dotnet/runtime#48729). Prior to .NET 9, the only available Keep-Alive strategy was Unsolicited PONG. It was enough to keep the underlying TCP connection from idling out, but when a remote host becomes unresponsive (for example, a remote server crashes), the only way to detect such situations was to depend on the TCP timeout.

In this release, we complement the existing KeepAliveInterval setting with the new KeepAliveTimeout setting, so that the Keep-Alive strategy is selected as follows:

- Keep-Alive is OFF, if KeepAliveInterval is TimeSpan.Zero or Timeout.InfiniteTimeSpan
- Unsolicited PONG, if KeepAliveInterval is a positive finite TimeSpan, AND KeepAliveTimeout is TimeSpan.Zero or Timeout.InfiniteTimeSpan
- PING/PONG, if KeepAliveInterval is a positive finite TimeSpan, AND KeepAliveTimeout is a positive finite TimeSpan

By default, the preexisting Keep-Alive behavior is maintained: the KeepAliveTimeout default value is Timeout.InfiniteTimeSpan, so Unsolicited PONG remains the default strategy. The following example illustrates how to enable the PING/PONG strategy for a ClientWebSocket:

```csharp
var cws = new ClientWebSocket();
cws.Options.KeepAliveInterval = TimeSpan.FromSeconds(10);
cws.Options.KeepAliveTimeout = TimeSpan.FromSeconds(10);
await cws.ConnectAsync(uri, cts.Token);

// NOTE: There should be an outstanding read at all times to
// ensure incoming PONGs are promptly processed
var result = await cws.ReceiveAsync(buffer, cts.Token);
```

If no PONG response is received after KeepAliveTimeout elapses, the remote endpoint is deemed unresponsive, and the WebSocket connection is automatically aborted. It also unblocks the outstanding ReceiveAsync with an OperationCanceledException. To learn more about the feature, you can check out the dedicated conceptual docs.

.NET Framework Compatibility

One of the biggest hurdles in the networking space when migrating projects from .NET Framework to .NET Core is the difference between the HTTP stacks.
In .NET Framework, the main class to handle HTTP requests is HttpWebRequest, which uses the global ServicePointManager and individual ServicePoints to handle connection pooling. In .NET Core, on the other hand, HttpClient is the recommended way to access HTTP resources. On top of that, all these classes from .NET Framework are present in .NET, but they’re either obsolete, missing implementation, or simply not maintained. As a result, we often see mistakes like using ServicePointManager to configure the connections while using HttpClient to access the resources.

The recommendation has always been to fully migrate to HttpClient, but sometimes that’s not possible. Migrating projects from .NET Framework to .NET Core can be difficult on its own, let alone rewriting all the networking code. Expecting customers to do all this work in one step proved to be unrealistic and is one of the reasons why customers might be reluctant to migrate. To mitigate these pain points, we filled in some missing implementations of the legacy classes and created a comprehensive guide to help with the migration.

The first part is an expansion of the supported ServicePointManager and ServicePoint properties that were missing implementation in .NET Core up until this release (dotnet/runtime#94664 and dotnet/runtime#97537). With these changes, they’re now taken into account when using HttpWebRequest.

For HttpWebRequest, we implemented full support of AllowWriteStreamBuffering in dotnet/runtime#95001, and also added missing support for ImpersonationLevel in dotnet/runtime#102038.

On top of these changes, we also obsoleted a few legacy classes to prevent further confusion:

- ServicePointManager in dotnet/runtime#103456. Its settings have no effect on HttpClient and SslStream, while it might be misused in good faith for exactly that purpose.
- AuthenticationManager in dotnet/runtime#93171, done by community contributor @deeprobin. It’s either missing implementation or the methods throw PlatformNotSupportedException.

Lastly, we put together a guide for migration from HttpWebRequest to HttpClient in the HttpWebRequest to HttpClient migration guide. It includes comprehensive lists of mappings between individual properties and methods, e.g., Migrate ServicePoint(Manager) usage, and many examples for trivial and not-so-trivial scenarios, e.g., Example: Enable DNS round robin.

Diagnostics

In this release, diagnostics improvements focus on enhancing privacy protection and advancing distributed tracing capabilities.

Uri Query Redaction in HttpClientFactory Logs

Starting with version 9.0.0 of Microsoft.Extensions.Http, the default logging logic of HttpClientFactory prioritizes protecting privacy. In older versions, it emitted the full request URI in the RequestStart and RequestPipelineStart events. In cases where some components of the URI contain sensitive information, this can lead to privacy incidents by leaking such data into logs. Version 8.0.0 introduced the ability to secure HttpClientFactory usage by customizing logging. However, this doesn’t change the fact that the default behavior might be risky for unaware users.

In the majority of the problematic cases, sensitive information resides in the query component. Therefore, a breaking change was introduced in 9.0.0, removing the entire query string from HttpClientFactory logs by default. A global opt-out switch is available for services/apps where it’s safe to log the full URI. For consistency and maximum safety, a similar change was implemented for EventSource events in System.Net.Http.
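For illustration, here is how the global opt-out mentioned above can be applied. The switch name below is taken from the .NET 9 breaking-change notes and should be treated as an assumption; verify it against the documentation for your exact runtime version before relying on it:

```csharp
// Assumed opt-out switch (from the .NET 9 breaking-change notes; verify for your version).
// Restores full request URIs, including query strings, in HTTP telemetry and
// HttpClientFactory logs. Only set this where query strings are known to be safe to log.
AppContext.SetSwitch("System.Net.Http.DisableUriRedaction", true);
```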
We recognize that this solution might not suit everyone. Ideally, there would be a fine-grained URI filtering mechanism, allowing users to retain non-sensitive query entries or filter other URI components (e.g., parts of the path). We plan to explore such a feature for future versions (dotnet/runtime#110018).

Distributed Tracing Improvements

Distributed tracing is a diagnostic technique for tracking the path of a specific transaction across multiple processes and machines, helping identify bottlenecks and failures. This technique models the transaction as a hierarchical tree of Activities, also referred to as spans in OpenTelemetry terminology. HttpClientHandler and SocketsHttpHandler are instrumented to start an Activity for each request and propagate the trace context via standard W3C headers when tracing is enabled.

Before .NET 9, users needed the OpenTelemetry .NET SDK to produce useful OpenTelemetry-compliant traces. This SDK was required not just for collection and export but also to extend the instrumentation, as the built-in logic didn’t populate the Activity with request data. Starting with .NET 9, the instrumentation dependency (OpenTelemetry.Instrumentation.Http) can be omitted unless advanced features like enrichment are required. In dotnet/runtime#104251, we extended the built-in tracing to ensure that the shape of the Activity is OTel-compliant, with the name, status, and most required tags populated according to the standard.

Experimental Connection Tracing

When investigating bottlenecks, you might want to zoom into specific HTTP requests to identify where most of the time is spent. Is it during connection establishment or the content download? If there are connection issues, it’s helpful to determine whether the problem lies with DNS lookups, TCP connection establishment, or the TLS handshake. .NET 9 has introduced several new spans to represent activities around connection establishment in SocketsHttpHandler. The most significant one is the HTTP connection setup span, which breaks down into three child spans for DNS, TCP, and TLS activities. Because connection setup isn’t tied to a particular request in the SocketsHttpHandler connection pool, the connection setup span can’t be modeled as a child span of the HTTP client request span. Instead, the relationship between requests and connections is represented using Span Links, also known as Activity Links.

Note: The new spans are produced by various ActivitySources matching the wildcard Experimental.System.Net.*. These spans are experimental because monitoring tools like Azure Monitor Application Insights have difficulty visualizing the resulting traces effectively due to the numerous connection_setup → request backlinks. To improve the user experience in monitoring tools, further work is needed. It involves collaboration between the .NET team, OTel, and tool authors, and may result in breaking changes in the design of the new spans.

The simplest way to set up and try connection trace collection is by using .NET Aspire. Using the Aspire Dashboard, it’s possible to expand the connection_setup activity and see a breakdown of the connection initialization. If you think the .NET 9 tracing additions might bring you valuable diagnostic insights and you want to get some hands-on experience, don’t hesitate to read our full article about Distributed tracing in System.Net libraries.
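If you want to peek at these spans without wiring up a full OpenTelemetry pipeline, a plain ActivityListener from System.Diagnostics is sufficient. The following is a minimal sketch, not part of the original post; the prefix match mirrors the Experimental.System.Net.* wildcard mentioned above:

```csharp
using System.Diagnostics;

// Minimal sketch: subscribe to the experimental networking ActivitySources
// and print each completed span's name and duration to the console.
var listener = new ActivityListener
{
    ShouldListenTo = source => source.Name.StartsWith("Experimental.System.Net."),
    Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllDataAndRecorded,
    ActivityStopped = activity =>
        Console.WriteLine($"{activity.Source.Name}/{activity.OperationName}: {activity.Duration.TotalMilliseconds:F1} ms")
};
ActivitySource.AddActivityListener(listener);
```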
HttpClientFactory

For HttpClientFactory, we’re introducing Keyed DI support, offering a new convenient consumption pattern, and changing the default Primary Handler to mitigate a common erroneous use case.

Keyed DI Support

In the previous release, Keyed Services were introduced to the Microsoft.Extensions.DependencyInjection packages. Keyed DI allows you to specify keys while registering multiple implementations of a single service type, and to later retrieve a specific implementation using the respective key. HttpClientFactory and named HttpClient instances, unsurprisingly, align well with the Keyed Services idea. Among other things, HttpClientFactory was a way to overcome this long-missing DI feature. But it required you to obtain, store, and query the IHttpClientFactory instance, instead of simply injecting a configured HttpClient, which might be inconvenient. While Typed clients attempted to simplify that part, they came with a catch: Typed clients are easy to misconfigure and misuse (and the supporting infra can also be a tangible overhead in certain scenarios). As a result, the user experience in both cases was far from ideal.

This changes as the Microsoft.Extensions.DependencyInjection 9.0.0 and Microsoft.Extensions.Http 9.0.0 packages bring Keyed DI support into HttpClientFactory (dotnet/runtime#89755). Now you can have the best of both worlds: you can pair the convenient, highly configurable HttpClient registrations with the straightforward injection of specific configured HttpClient instances.

As of 9.0.0, you need to opt in to the feature by calling the AddAsKeyed() extension method. It registers a Named HttpClient as a Keyed service for the key equal to the client’s name, and enables you to use the Keyed Services APIs (e.g., [FromKeyedServices(...)]) to obtain the required HttpClients. The following code demonstrates the integration between HttpClientFactory, Keyed DI, and ASP.NET Core 9.0 Minimal APIs:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHttpClient("github", c =>
{
    c.BaseAddress = new Uri("https://api.github.com/");
    c.DefaultRequestHeaders.Add("Accept", "application/vnd.github.v3+json");
    c.DefaultRequestHeaders.Add("User-Agent", "dotnet");
})
.AddAsKeyed(); // Add HttpClient as a Keyed Scoped service for key="github"

var app = builder.Build();

// Directly inject the Keyed HttpClient by its name
app.MapGet("/", ([FromKeyedServices("github")] HttpClient httpClient) =>
    httpClient.GetFromJsonAsync<Repo>("/repos/dotnet/runtime"));

app.Run();

record Repo(string Name, string Url);
```

Endpoint response:

```
> ~ curl http://localhost:5000/
{"name":"runtime","url":"https://api.github.com/repos/dotnet/runtime"}
```

By default, AddAsKeyed() registers HttpClient as a Keyed Scoped service. The Scoped lifetime can help catch cases of captive dependencies:

```csharp
services.AddHttpClient("scoped").AddAsKeyed();
services.AddSingleton<CapturingSingleton>();

// Throws: Cannot resolve scoped service 'System.Net.Http.HttpClient' from root provider.
rootProvider.GetRequiredKeyedService<HttpClient>("scoped");

using var scope = provider.CreateScope();
scope.ServiceProvider.GetRequiredKeyedService<HttpClient>("scoped"); // OK

// Throws: Cannot consume scoped service 'System.Net.Http.HttpClient' from singleton 'CapturingSingleton'.
public class CapturingSingleton([FromKeyedServices("scoped")] HttpClient httpClient)
//{ ...
```
You can also explicitly specify the lifetime by passing a ServiceLifetime parameter to the AddAsKeyed() method:

```csharp
services.AddHttpClient("explicit-scoped")
    .AddAsKeyed(ServiceLifetime.Scoped);

services.AddHttpClient("singleton")
    .AddAsKeyed(ServiceLifetime.Singleton);
```

You don’t have to call AddAsKeyed for every single client; you can easily opt in “globally” (for any client name) via ConfigureHttpClientDefaults. From the Keyed Services perspective, it results in the KeyedService.AnyKey registration.

```csharp
services.ConfigureHttpClientDefaults(b => b.AddAsKeyed());
services.AddHttpClient("foo", /* ... */);
services.AddHttpClient("bar", /* ... */);

public class MyController(
    [FromKeyedServices("foo")] HttpClient foo,
    [FromKeyedServices("bar")] HttpClient bar)
//{ ...
```

Even though the “global” opt-in is a one-liner, it’s unfortunate that the feature still requires it, instead of just working “out of the box”. For full context and reasoning on that decision, see dotnet/runtime#89755 and dotnet/runtime#104943.

You can explicitly opt out from Keyed DI for HttpClients by calling RemoveAsKeyed() (for example, per specific client, in case of the “global” opt-in):

```csharp
services.ConfigureHttpClientDefaults(b => b.AddAsKeyed()); // opt IN by default
services.AddHttpClient("keyed", /* ... */);
services.AddHttpClient("not-keyed", /* ... */).RemoveAsKeyed(); // opt OUT per name

provider.GetRequiredKeyedService<HttpClient>("keyed"); // OK
provider.GetRequiredKeyedService<HttpClient>("not-keyed"); // Throws: No service for type 'System.Net.Http.HttpClient' has been registered.
provider.GetRequiredKeyedService<HttpClient>("unknown"); // OK (unconfigured instance)
```

If called together, or if either is called more than once, AddAsKeyed() and RemoveAsKeyed() generally follow the rules of HttpClientFactory configs and DI registrations:

- If used within the same name, the last setting wins: the lifetime from the last AddAsKeyed() is used to create the Keyed registration (unless RemoveAsKeyed() was called last, in which case the name is excluded).
- If used only within ConfigureHttpClientDefaults, the last setting wins.
- If both ConfigureHttpClientDefaults and a specific client name were used, all defaults are considered to “happen” before all per-name settings for this client. Thus, the defaults can be disregarded, and the last of the per-name ones wins.

You can learn more about the feature in the dedicated conceptual docs.

Default Primary Handler Change

One of the most common problems HttpClientFactory users run into is when a Named or a Typed client erroneously gets captured in a Singleton service, or, in general, stored somewhere for a period of time that’s longer than the specified HandlerLifetime. Because HttpClientFactory can’t rotate such handlers, they might end up not respecting DNS changes. It is, unfortunately, easy and seemingly “intuitive” to inject a Typed client into a singleton, but hard to have any kind of check or analyzer to make sure HttpClient isn’t captured where it wasn’t supposed to be. It might be even harder to troubleshoot the resulting issues.

On the other hand, the problem can be mitigated by using SocketsHttpHandler, which can control PooledConnectionLifetime. Similarly to HandlerLifetime, it allows regularly recreating connections to pick up DNS changes, but on a lower level. A client with PooledConnectionLifetime set up can be safely used as a Singleton.
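As a minimal sketch of that mitigation (the two-minute interval is illustrative, not a value prescribed by the post):

```csharp
// A process-wide HttpClient that still reacts to DNS changes: pooled connections
// are recycled every two minutes, so no handler rotation is needed.
private static readonly HttpClient s_sharedClient = new(new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(2)
});
```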
Therefore, to minimize the potential impact of the erroneous usage patterns, .NET 9 makes the default Primary handler a SocketsHttpHandler (on platforms that support it; other platforms, e.g. .NET Framework, continue to use HttpClientHandler). And most importantly, SocketsHttpHandler also has the PooledConnectionLifetime property preset to match the HandlerLifetime value (it reflects the latest value, if you configured HandlerLifetime one or more times).

The change only affects cases where the client was not configured to have a custom Primary handler (via e.g. ConfigurePrimaryHttpMessageHandler<T>()). While the default Primary handler is an implementation detail, as it was never specified in the docs, it’s still considered a breaking change. There could be cases in which you wanted to use the specific type, for example, casting the Primary handler to HttpClientHandler to set properties like ClientCertificates, UseCookies, UseProxy, etc. If you need to use such properties, it’s suggested to check for both HttpClientHandler and SocketsHttpHandler in the configuration action:

```csharp
services.AddHttpClient("test")
    .ConfigurePrimaryHttpMessageHandler((h, _) =>
    {
        if (h is HttpClientHandler hch)
        {
            hch.UseCookies = false;
        }

        if (h is SocketsHttpHandler shh)
        {
            shh.UseCookies = false;
        }
    });
```

Alternatively, you can explicitly specify a Primary handler for each of your clients:

```csharp
services.AddHttpClient("test")
    .ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler() { UseCookies = false });
```

Or, configure the default Primary handler for all clients using ConfigureHttpClientDefaults:

```csharp
services.ConfigureHttpClientDefaults(b =>
    b.ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler() { UseCookies = false }));
```

Security

In System.Net.Security, we’re introducing the highly sought support for SSLKEYLOGFILE, more scenarios supporting TLS resume, and new additions in the Negotiate APIs.

SSLKEYLOGFILE Support

The most upvoted issue in the security space was to support logging of the TLS pre-master secret (dotnet/runtime#37915). The logged secret can be used by the packet-capturing tool Wireshark to decrypt the traffic. It’s a useful diagnostic tool when investigating networking issues. Moreover, the same functionality is provided by browsers like Firefox (via NSS) and Chrome, and by command-line HTTP tools like cURL.

We have implemented this feature for both SslStream and QuicConnection. For the former, the functionality is limited to the platforms on which we use OpenSSL as a cryptographic library; in terms of the officially released .NET runtime, that means only Linux operating systems. For the latter, it’s supported everywhere, regardless of the cryptographic library. That’s because TLS is part of the QUIC protocol (RFC 9001), so the user-space MsQuic has access to all the secrets and so does .NET. The limitation of SslStream on Windows comes from SChannel using a separate, privileged process for TLS, which won’t allow exporting secrets due to security concerns (dotnet/runtime#94843).

This feature exposes security secrets, and relying solely on an environment variable could unintentionally leak them. For that reason, we’ve decided to introduce an additional AppContext switch necessary to enable the feature (dotnet/runtime#100665).
It requires the user to prove ownership of the application by either setting it programmatically in code:

```csharp
AppContext.SetSwitch("System.Net.EnableSslKeyLogging", true);
```

or by changing the {appname}.runtimeconfig.json next to the application:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.Net.EnableSslKeyLogging": true
    }
  }
}
```

The last thing is to set the SSLKEYLOGFILE environment variable and run the application:

```
export SSLKEYLOGFILE=~/keylogfile
./<appname>
```

At this point, ~/keylogfile will contain pre-master secrets that can be used by Wireshark to decrypt the traffic. For more information, see the TLS Using the (Pre)-Master-Secret documentation.

TLS Resume with Client Certificate

TLS resume enables reusing previously stored TLS data to re-establish a connection to a previously connected server. It can save round trips during the handshake as well as CPU processing. This feature is a native part of Windows SChannel, therefore it’s implicitly used by .NET on Windows platforms. However, on Linux platforms, where we use OpenSSL as a cryptographic library, enabling caching and reusing TLS data is more involved. We first introduced the support in .NET 7 (see TLS Resume). It has its own limitations that, in general, are not present on Windows. One such limitation was that TLS resume was not supported for sessions using mutual authentication by providing a client certificate (dotnet/runtime#94561). This has been fixed in .NET 9 (dotnet/runtime#102656), and it works if one of these properties is set as described:

- ClientCertificateContext
- LocalCertificateSelectionCallback returns a non-null certificate on the first call
- ClientCertificates collection has at least one certificate with a private key

Negotiate API Integrity Checks

In .NET 7, we added the NegotiateAuthentication APIs, see Negotiate API. The original implementation’s goal was to remove access via reflection to the internals of NTAuthentication. However, that proposal was missing functions to generate and verify message integrity codes from RFC 2743. They’re usually implemented as a cryptographic signing operation with a negotiated key. The API was proposed in dotnet/runtime#86950 and implemented in dotnet/runtime#96712, and as with the original change, all the work from the API proposal to the implementation was done by community contributor filipnavara.

Networking Primitives

This section encompasses changes in the System.Net namespace. We’re introducing new support for server-sent events and some small additions in APIs, for example new MIME types.

Server-Sent Events Parser

Server-sent events is a technology that allows servers to push data updates to clients via an HTTP connection. It is defined in the living HTML standard. It uses the text/event-stream MIME type and is always decoded as UTF-8. The advantage of the server-push approach over client-pull is that it can make better use of network resources and also save battery life of mobile devices.

In this release, we’re introducing an OOB package, System.Net.ServerSentEvents. It’s available as a .NET Standard 2.0 NuGet package. The package offers a parser for the server-sent event stream, following the specification. The protocol is stream based, with individual items separated by an empty line.
Each item has two fields:

- type – the default type is message
- data – the data itself

On top of that, there are two other optional fields that progressively update properties of the stream:

- id – determines the last event id, which is sent in the Last-Event-Id header in case the connection needs to be re-established
- retry – the number of milliseconds to wait between reconnection attempts

The library APIs were proposed in dotnet/runtime#98105 and contain type definitions for the parser and the items:

- SseParser – static class to create the actual parser from the stream, allowing the user to optionally provide a parsing delegate for the item data
- SseParser<T> – the parser itself, offering methods to enumerate (synchronously or asynchronously) the stream and return the parsed items
- SseItem<T> – struct holding the parsed item data

Then the parser can be used like this, for example:

```csharp
using HttpClient client = new HttpClient();
using Stream stream = await client.GetStreamAsync("https://server/sse");

var parser = SseParser.Create(stream, (type, data) =>
{
    var str = Encoding.UTF8.GetString(data);
    return Int32.Parse(str);
});

await foreach (var item in parser.EnumerateAsync())
{
    Console.WriteLine($"{item.EventType}: {item.Data} [{parser.LastEventId};{parser.ReconnectionInterval}]");
}
```

And for the following input:

```
: stream of integers
data: 123
id: 1
retry: 1000

data: 456
id: 2

data: 789
id: 3
```

It outputs:

```
message: 123 [1;00:00:01]
message: 456 [2;00:00:01]
message: 789 [3;00:00:01]
```

Primitives Additions

Apart from server-sent events, the System.Net namespace got a few other small additions:

- IEquatable<Uri> interface implementation for Uri in dotnet/runtime#97940, which allows using Uri in APIs that require IEquatable<T>, like the span-based Contains or SequenceEqual
- span-based (Try)EscapeDataString and (Try)UnescapeDataString for Uri in dotnet/runtime#40603. The goal is to support low-allocation scenarios, and we now take advantage of these methods in FormUrlEncodedContent
- new MIME types for MediaTypeNames in dotnet/runtime#95446. These types were collected over the course of the release and implemented in dotnet/runtime#103575 by community contributor @CollinAlpert

Final Notes

As every year, we try to write about the most interesting and impactful changes in the networking space. This article can’t possibly cover all the changes that were made. If you are interested, you can find the complete list in our dotnet/runtime repository, where you can also reach out to us with questions and bugs. On top of that, many of the performance changes that are not mentioned here are covered in Stephen’s great article Performance Improvements in .NET 9. We’d also like to hear from you, so if you encounter an issue or have any feedback, you can file it in our GitHub repo.

Lastly, I’d like to thank my co-authors: @antonfirsov, who wrote Diagnostics, and @CarnaViire, who wrote HttpClientFactory and WebSockets.

The post .NET 9 Networking Improvements appeared first on .NET Blog. View the full article
    • Learn what is new in the Visual Studio Code January 2025 Release (1.97). View the full article
    • We recently reshipped ASP.NET Core 2.1 as ASP.NET Core 2.3 for ASP.NET Core users who are still on .NET Framework. To stay in support, all ASP.NET Core users on .NET Framework should update to this new version.

Note: This post only applies if you’re using ASP.NET Core on .NET Framework. If you’re using ASP.NET Core 2.x on .NET Core 2.x, it is already out of support, and you should upgrade to a supported version such as .NET 8.

How to upgrade

To upgrade ASP.NET Core apps running on .NET Framework to ASP.NET Core 2.3:

1. Upgrade your NuGet packages: Update your project to use ASP.NET Core 2.3 packages. These packages are the same as ASP.NET Core 2.1 but re-versioned (see the example project-file sketch below).
2. Remove any dependency on changes introduced in ASP.NET Core 2.2: Apps that depend on changes in ASP.NET Core 2.2 will need to remove any dependency on these changes.
3. Test your application: Thoroughly test your application to verify that everything works as expected after the upgrade.

Background

Early versions of ASP.NET Core were provided for .NET Framework and .NET Core. ASP.NET Core 2.1 has been supported on .NET Framework to facilitate migrations to later .NET versions. However, ASP.NET Core 2.2 went out of support with the rest of .NET Core 2.2 on all platforms in 2019. ASP.NET Core 2.2 shipped before we had a predictable schedule and alternating releases of Standard Term Support (STS) and Long Term Support (LTS). Many users upgraded to ASP.NET Core 2.2, not realizing that this reduced their support duration. As a result, some users are inadvertently running on the unsupported version of ASP.NET Core 2.2 on .NET Framework.

Since ASP.NET Core 2.x for .NET Framework is shipped as a set of packages, downgrading isn’t easy; there are well over one hundred packages to downgrade, with inconsistent version numbers. Some NuGet packages also now require ASP.NET Core 2.2, so downgrading to ASP.NET Core 2.1 could result in NuGet dependency errors. To make staying in support easier, we’ve reshipped ASP.NET Core 2.1 as ASP.NET Core 2.3, so you can simply upgrade to a supported version.

By reshipping ASP.NET Core 2.1 as ASP.NET Core 2.3, we provide users on ASP.NET Core 2.2 an off-ramp to the supported version via a regular NuGet upgrade. Users updating from ASP.NET Core 2.2 to 2.3 will need to remove any dependencies on changes introduced in ASP.NET Core 2.2. Users on ASP.NET Core 2.1 should also update to 2.3 with the assurance that it’s the same code as 2.1. Moving forward, any servicing updates to ASP.NET Core for .NET Framework will be published based on 2.3.

The following table summarizes the support state of the various ASP.NET Core 2.x versions on .NET Framework:

| Product          | .NET Framework Support                     |
|------------------|--------------------------------------------|
| ASP.NET Core 2.1 | Unsupported, replaced by ASP.NET Core 2.3  |
| ASP.NET Core 2.2 | Ended December 23, 2019                    |
| ASP.NET Core 2.3 | Supported, same code as 2.1                |

Caution: ASP.NET Core 2.2 is not supported and went out of support over five years ago. If you’re using ASP.NET Core 2.2 on .NET Framework, we strongly recommend updating to ASP.NET Core 2.3 as soon as possible in order to stay supported and to receive relevant security fixes.

Why we’re reshipping ASP.NET Core 2.1 as ASP.NET Core 2.3

You might wonder why we don’t reship ASP.NET Core 2.2 as 2.3 instead. The reason is that ASP.NET Core 2.2 includes breaking changes. ASP.NET Core 2.2 went out of support five years ago, while ASP.NET Core 2.1 remained supported. We don’t want existing supported ASP.NET Core 2.1 apps to break when updating to ASP.NET Core 2.3.
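To make step 1 of the upgrade concrete, here is a minimal sketch of the package bump in a project file. The package name and old version are illustrative only; your project will reference its own set of ASP.NET Core 2.x packages, each of which should move to its 2.3 version:

```xml
<ItemGroup>
  <!-- Before (illustrative 2.1-era reference): -->
  <!-- <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.1.3" /> -->
  <!-- After: the same package, re-versioned as 2.3: -->
  <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.3.0" />
</ItemGroup>
```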
Summary

ASP.NET Core users on .NET Framework should update to the latest ASP.NET Core 2.3 release to stay in support. This update enables ASP.NET Core 2.2 users to update to a supported version by doing a NuGet package upgrade instead of a downgrade. ASP.NET Core 2.1 users updating to ASP.NET Core 2.3 should experience no change in behavior, as the packages contain the exact same code. ASP.NET Core 2.2 users may need to remove any dependencies on ASP.NET Core 2.2-specific changes. Any future servicing fixes for ASP.NET Core on .NET Framework will be based on ASP.NET Core 2.3.

Questions? Please ask in this issue: ASP.NET Core 2.1 becomes ASP.NET Core 2.3.

The post ASP.NET Core on .NET Framework servicing release advisory: ASP.NET Core 2.3 appeared first on .NET Blog. View the full article
    • The DeepSeek R1 model has been gaining a ton of attention lately. And one of the questions we’ve been getting asked is: “Can I use DeepSeek in my .NET applications?” The answer is absolutely! I’m going to walk you through how to use the Microsoft.Extensions.AI (MEAI) library with DeepSeek R1 on GitHub Models so you can start experimenting with the R1 model today.

MEAI makes using AI services easy

The MEAI library provides a set of unified abstractions and middleware to simplify the integration of AI services into .NET applications. In other words, if you develop your application with MEAI, your code will use the same APIs no matter which model you decide to use “under the covers”. This lowers the friction it takes to build a .NET AI application, as you’ll only have to remember a single library’s (MEAI’s) way of doing things regardless of which AI service you use. And for MEAI, the main interface you’ll use is IChatClient.

Let’s chat with DeepSeek R1

GitHub Models allows you to experiment with a ton of different AI models without having to worry about hosting. It’s a great way to get started in your AI development journey for free. And GitHub Models gets updated with new models all the time, like DeepSeek’s R1.

The demo app we’re going to build is a simple console application, and it’s available on GitHub at codemillmatt/deepseek-dotnet. You can clone or fork it to follow along, but we’ll talk through the important pieces below too.

First let’s take care of some prerequisites:

1. Head on over to GitHub and generate a personal access token (PAT). This will be your key for GitHub Models access. Follow these instructions to create the PAT. You will want a classic token.
2. Open the DeepSeek.Console.GHModels project. You can either open the full solution in Visual Studio or just the project folder if using VS Code.
3. Create a new user secrets entry for the GitHub PAT. Name it GH_TOKEN and paste in the PAT you generated as the value.

Now let’s explore the code a bit. Open the Program.cs file in the DeepSeek.Console.GHModels project. The first 2 things to notice are where we initialize the modelEndpoint and modelName variables. These are standard for the GitHub Models service; they will always be the same.

Now for the fun part! We’re going to initialize our chat client. This is where we’ll connect to the DeepSeek R1 model.

```csharp
IChatClient client =
    new ChatCompletionsClient(modelEndpoint, new AzureKeyCredential(Configuration["GH_TOKEN"]))
        .AsChatClient(modelName);
```

This uses the Microsoft.Extensions.AI.AzureAIInference package to connect to the GitHub Models service. But the AsChatClient function returns an IChatClient implementation. And that’s super cool. Because regardless of which model we chose from GitHub Models, we’d still write our application against the IChatClient interface!

Next up we pass in our question, or prompt, to the model. And we’ll make sure we get a streaming response back; this way we can display it as it comes in.

```csharp
var response = client.CompleteStreamingAsync(question);

await foreach (var item in response)
{
    Console.Write(item);
}
```

That’s it! Go ahead and run the project. It might take a few seconds to get the response back (lots of people are trying the model out!). You’ll notice the response isn’t like you’d see in a “normal” chat bot. DeepSeek R1 is a reasoning model, so it wants to figure out and reason through problems. The first part of the response will be its reasoning, delimited by <think>, and is quite interesting.
The second part of the response will be the answer to the question you asked. Here’s a partial example of a response:

```
<think>
Okay, let's try to figure this out. The problem says: If I have 3 apples and eat 2, how many bananas do I have? Hmm, at first glance, that seems a bit confusing. Let me break it down step by step.

So, the person starts with 3 apples. Then they eat 2 of them. That part is straightforward. If you eat 2 apples out of 3, you'd have 1 apple left, right? But then the question shifts to bananas. Wait, where did bananas come from? The original problem only mentions apples. There's no mention of bananas at all.
...
```

Do I have to use GitHub Models?

You’re not limited to running DeepSeek R1 on GitHub Models. You can run it on Azure or even locally (or on GitHub Codespaces) through Ollama. I provided 2 additional console applications in the GitHub repository that show you how to do that.

The biggest differences from the GitHub Models version are where the DeepSeek R1 model is deployed, the credentials you use to connect to it, and the specific model name. If you deploy on Azure AI Foundry, the code is exactly the same. Here are some instructions on how to deploy the DeepSeek R1 model into Azure AI Foundry.

If you want to run locally on Ollama, we’ve provided a devcontainer definition that you can use to run Ollama in Docker. It will automatically pull down a small parameter version of DeepSeek R1 and start it up for you. The only difference is you’ll use the Microsoft.Extensions.AI.Ollama NuGet package and initialize the IChatClient with OllamaChatClient. Interacting with DeepSeek R1 is the same.

Of course these are simple console applications. If you’re using .NET Aspire, it’s easy to use Ollama and DeepSeek R1. Thanks to the .NET Aspire Community Toolkit’s Ollama integration, all you need to do is add one line and you’re all set!

```csharp
var chat = ollama.AddModel("chat", "deepseek-r1");
```

Check out this blog post with all the details on how to get going.

Summary

DeepSeek R1 is an exciting new reasoning model that’s drawing a lot of attention, and you can build .NET applications that make use of it today using the Microsoft.Extensions.AI library. GitHub Models lowers the friction of getting started and experimenting with it. Go ahead and try out the samples and check out our other MEAI samples!

The post Build Intelligent Apps with .NET and DeepSeek R1 Today! appeared first on .NET Blog. View the full article
    • If you’ve never seen the movie Analyze This, here’s the quick pitch: a member of, let’s say, a New York family clan with questionable habits decides to seriously consider therapy to improve his mental state. With Billy Crystal and Robert De Niro driving the plot, hilarity is guaranteed. And while Analyze This satirically tackles issues of a caricatured MOB world, getting to the root of problems with the right analytical tools is crucial everywhere. All the more in a mission-critical LOB-App world. Enter the new WinForms Roslyn Analyzers, your domain-specific “counselor” for WinForms applications. With .NET 9, we’re rolling out these analyzers to help your code tackle its potential issues, whether it’s buggy behavior, questionable patterns, or opportunities for improvement.

What Exactly is a Roslyn Analyzer?

Roslyn analyzers are a core part of the Roslyn compiler platform, seamlessly working in the background to analyze your code as you write it. Chances are, you’ve been using them for years without even realizing it. Many features in Visual Studio, like code fixes, refactoring suggestions, and error diagnostics, rely on (or simply are) analyzers or code fixes that enhance your development process. They’ve become such an integral part of modern development that we often take them for granted as just “how things work”. The coolest thing: the Roslyn-based compiler platform is not a black box. It provides an extremely rich API, and not only Microsoft’s Visual Studio IDE or compiler teams can create analyzers. Everyone can. And that’s why WinForms picked up on this technology to improve the WinForms coding experience.

It’s Just the Beginning — More to Come

With .NET 9 we’ve laid the foundational infrastructure for WinForms-specific analyzers and introduced the first set of rules. These analyzers are designed to address key areas like security, stability, and productivity. And while this is just the start, we’re committed to expanding their scope in future releases, with more rules and features on the horizon. So, let’s take a real look at what we got with the first set of analyzers we’re introducing for .NET 9.

Guidance for picking correct InvokeAsync Overloads

With .NET 9 we have introduced a series of new Async APIs for WinForms. This blog post describes the new WinForms Async feature in detail. This is one of the first areas where we felt that WinForms analyzers can help a lot in preventing issues with your async code. One challenge when working with the new Control.InvokeAsync API is selecting the correct overload from the following options:

```csharp
public async Task InvokeAsync(Action callback, CancellationToken cancellationToken = default)
public async Task<T> InvokeAsync<T>(Func<T> callback, CancellationToken cancellationToken = default)
public async Task InvokeAsync(Func<CancellationToken, ValueTask> callback, CancellationToken cancellationToken = default)
public async Task<T> InvokeAsync<T>(Func<CancellationToken, ValueTask<T>> callback, CancellationToken cancellationToken = default)
```

Each overload supports different combinations of synchronous and asynchronous methods, with or without return values. The linked blog post provides comprehensive background information on these APIs. Selecting the wrong overload, however, can lead to unstable code paths in your application. To mitigate this, we’ve implemented an analyzer to help developers choose the most appropriate InvokeAsync overload for their specific use cases.
Here’s the potential issue: InvokeAsync can asynchronously invoke both synchronous and asynchronous methods. For asynchronous methods, you might pass a Func<Task> and expect it to be awaited, but it will not be. Func<T> is exclusively for asynchronously invoking a synchronous method, of which Func<Task> is just an unfortunate special case. In other words, the problem arises because InvokeAsync can accept any Func<T>, but only Func<CancellationToken, ValueTask> is properly awaited by the API. If you pass a Func<Task> without the correct signature (one that doesn’t take a CancellationToken and return a ValueTask), it won’t be awaited. This leads to a “fire-and-forget” scenario, where exceptions within the function are not handled correctly. If such a function later throws an exception, it may corrupt data or even crash your entire application.

Take a look at the following scenario:

```csharp
private async void StartButtonClick(object sender, EventArgs e)
{
    _btnStartStopWatch.Text = _btnStartStopWatch.Text != "Stop" ? "Stop" : "Start";

    await Task.Run(async () =>
    {
        while (true)
        {
            await this.InvokeAsync(UpdateUiAsync);
        }
    });

    // ****
    // The actual UI update method
    // ****
    async Task UpdateUiAsync()
    {
        _lblStopWatch.Text = $"{DateTime.Now:HH:mm:ss - fff}";
        await Task.Delay(20);
    }
}
```

This is a typical scenario where the overload of InvokeAsync that is supposed to return something other than a task is accidentally used. But the analyzer points that out.

Being notified of this, it also becomes clear that we actually need to introduce a cancellation token so we can gracefully end the running task, either when the user clicks the button again or, more importantly, when the form actually gets closed. Otherwise, the task would continue to run while the form gets disposed, and that would lead to a crash:

```csharp
private async void ButtonClick(object sender, EventArgs e)
{
    if (_stopWatchToken.CanBeCanceled)
    {
        _btnStartStopWatch.Text = "Start";
        _stopWatchTokenSource.Cancel();
        _stopWatchTokenSource.Dispose();
        _stopWatchTokenSource = new CancellationTokenSource();
        _stopWatchToken = CancellationToken.None;

        return;
    }

    _stopWatchToken = _stopWatchTokenSource.Token;
    _btnStartStopWatch.Text = "Stop";

    await Task.Run(async () =>
    {
        while (true)
        {
            try
            {
                await this.InvokeAsync(UpdateUiAsync, _stopWatchToken);
            }
            catch (TaskCanceledException)
            {
                break;
            }
        }
    });

    // ****
    // The actual UI update method
    // ****
    async ValueTask UpdateUiAsync(CancellationToken cancellation)
    {
        _lblStopWatch.Text = $"{DateTime.Now:HH:mm:ss - fff}";
        await Task.Delay(20, cancellation);
    }
}

protected override void OnFormClosing(FormClosingEventArgs e)
{
    base.OnFormClosing(e);
    _stopWatchTokenSource.Cancel();
}
```

The analyzer addresses this by detecting incompatible usages of InvokeAsync and guiding you to select the correct overload. This ensures stable, predictable behavior and proper exception handling in your asynchronous code.

Preventing Leaks of Design-Time Business Data

When developing custom controls or business control logic classes derived from UserControl, it’s common to manage their behavior and appearance using properties. However, a common issue arises when these properties are inadvertently set at design time. This typically happens because there is no mechanism in place to guard against such conditions during the design phase.
If these properties are not properly configured to control their code serialization behavior, sensitive data set during design time may unintentionally leak into the generated code. Such leaks can result in:

- Sensitive data being exposed in source code, potentially published on platforms like GitHub.
- Design-time data being embedded into resource files, either because necessary TypeConverters for the property type in question are missing, or when the form is localized.

Both scenarios pose significant risks to the integrity and security of your application. Additionally, we aim to avoid resource serialization whenever possible. With .NET 9, the Binary Formatter and related APIs have been phased out due to security and maintainability concerns. This makes it even more critical to carefully control what data gets serialized and how. The Binary Formatter was historically used to serialize objects, but it had numerous security vulnerabilities that made it unsuitable for modern applications. In .NET 9, we eliminated this serializer entirely to reduce attack surfaces and improve the reliability of applications. Any reliance on resource serialization has the potential to reintroduce these risks, so it is essential to adopt safer practices.

To help you, the developer, address this issue, we’ve introduced a WinForms-specific analyzer. This analyzer activates when all of the following mechanisms to control the CodeDOM serialization process for properties are missing:

- SerializationVisibilityAttribute: This attribute controls how (or if) the CodeDOM serializers should serialize the content of a property.
- The property is not read-only, as the CodeDOM serializer ignores read-only properties by default.
- DefaultValueAttribute: This attribute defines the default value of a property. If applied, the CodeDOM serializer only serializes the property when the current value at design time differs from the default value.
- A corresponding private bool ShouldSerialize<PropertyName>() method is not implemented. This method is called at design (serialization) time to determine whether the property’s content should be serialized.

By ensuring at least one of these mechanisms is in place, you can avoid unexpected serialization behavior and ensure that your properties are handled correctly during design-time CodeDOM serialization.

But…this Analyzer broke my whole Solution!

So let’s say you’ve developed a domain-specific UserControl in .NET 8. And now you’re retargeting your project to .NET 9. At that moment, the analyzer kicks in and flags every affected property.

In contrast to the previously discussed async analyzer, this one has a Roslyn CodeFix attached to it. If you want to address the issue by instructing the CodeDOM serializer to unconditionally never serialize the property content, you can use the CodeFix to make the necessary changes, and you can even have them applied in one go throughout the whole document. In most cases, this is already the right thing to do: the analyzer adds the SerializationVisibilityAttribute on top of each flagged property, ensuring it won’t be serialized unintentionally, which is exactly what we want:
But…this Analyzer broke my whole Solution!

So, let’s say you’ve developed a domain-specific UserControl, like the one in the screenshot above, in .NET 8, and you’re now retargeting your project to .NET 9. At that moment the analyzer kicks in, and you might see something like this:

In contrast to the previously discussed async analyzer, this one has a Roslyn CodeFix attached to it. If you want to address the issue by instructing the CodeDOM serializer to never serialize the property content, you can use the CodeFix to make the necessary changes – and you can even apply it in one go throughout the whole document. In most cases, this is already the right thing to do: the analyzer adds the SerializationVisibilityAttribute on top of each flagged property, ensuring it won’t be serialized unintentionally, which is exactly what we want:

. . .

[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public string NameText
{
    get => textBoxName.Text;
    set => textBoxName.Text = value;
}

[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public string EmailText
{
    get => textBoxEmail.Text;
    set => textBoxEmail.Text = value;
}

[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public string PhoneText
{
    get => textBoxPhone.Text;
    set => textBoxPhone.Text = value;
}

. . .
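Hiding a property from serialization isn’t the only fix, though. If a property should be serialized conditionally, either of the other two mechanisms the analyzer recognizes will satisfy it as well. A sketch, reusing the hypothetical NameText property from above as a member of the same control class:

using System.ComponentModel;

// Alternative 1: DefaultValueAttribute – the CodeDOM serializer emits the
// property only when the design-time value differs from this default.
[DefaultValue("")]
public string NameText
{
    get => textBoxName.Text;
    set => textBoxName.Text = value;
}

// Alternative 2: a private ShouldSerialize<PropertyName>() method – called
// at serialization time, so the decision can depend on the current state.
private bool ShouldSerializeNameText()
    => !string.IsNullOrEmpty(textBoxName.Text);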
Copilot to the rescue!

There is an even more efficient way to handle the necessary attribute amendments. If no attributes are applied to a property at all, it makes sense not only to add the serialization guidance but also the other attributes that are genuinely useful at design time, such as DescriptionAttribute or CategoryAttribute. Would that require considerably more effort? Not if we let Copilot amend all the relevant property attributes. Let’s give it a try, like this:

Depending on the language model you picked for Copilot, you should see a result where not only are the issues the analyzer pointed out resolved, but Copilot also adds the remaining attributes that make sense in context. Copilot shows you the code it wants to add, and you can merge the suggested changes with a single mouse click. And these kinds of issues are certainly not the only area where Copilot can assist you in modernizing your existing WinForms applications.

But if the analyzer flagged hundreds of issues throughout your entire solution, don’t panic! There are more options to configure the severity of an analyzer at the file, project, or even solution level.

Suppressing Analyzers Based on Scope

First, you have the option to suppress the analyzer(s) at different scopes:

In Source: This option inserts a #pragma warning disable directive directly in the source file around the flagged code. This approach is useful for localized, one-off suppressions where the analyzer warning is unnecessary or irrelevant. For example:

#pragma warning disable WFO1000
public string SomeProperty { get; set; }
#pragma warning restore WFO1000

In Suppression File: This adds the suppression to a file named GlobalSuppressions.cs in your project. Suppressions in this file are scoped globally to the assembly or namespace, making it a good choice for larger-scale suppressions. For example:

[assembly: System.Diagnostics.CodeAnalysis.SuppressMessage(
    "WinForms.Analyzers",
    "WFO1000",
    Justification = "This property is intentionally serialized.")]

In Source via Attribute: This applies a suppression attribute directly to a specific code element, such as a class or property. It’s a good option when you want the suppression to remain part of the source code documentation. For example:

[System.Diagnostics.CodeAnalysis.SuppressMessage(
    "WinForms.Analyzers",
    "WFO1000",
    Justification = "This property is handled manually.")]
public string SomeProperty { get; set; }

Configuring Analyzer Severity in .editorconfig

To configure analyzer severity centrally for your project or solution, you can use an .editorconfig file. This file allows you to define rules for specific analyzers, including their severity levels: none, suggestion, warning, or error. For example, to change the severity of the WFO1000 analyzer:

# Configure the severity for the WFO1000 analyzer
dotnet_diagnostic.WFO1000.severity = warning

Using .editorconfig Files for Directory-Specific Settings

One of the powerful features of .editorconfig files is their ability to control settings for different parts of a solution. By placing .editorconfig files in different directories within the solution, you can apply settings only to specific projects, folders, or files. The configuration applies hierarchically: settings in a child directory’s .editorconfig file override those in parent directories. For example:

- Root-level .editorconfig: Place a general .editorconfig file at the solution root to define default settings that apply to the entire solution.
- Project-specific .editorconfig: Place another .editorconfig file in the directory of a specific project to apply different rules for that project while inheriting settings from the root.
- Folder-specific .editorconfig: If certain folders (e.g., test projects, legacy code) require unique settings, add an .editorconfig file to those folders to override the inherited configuration.

/solution-root
├── .editorconfig (applies to all projects)
├── ProjectA/
│   ├── .editorconfig (overrides root settings for ProjectA)
│   └── CodeFile.cs
├── ProjectB/
│   ├── .editorconfig (specific to ProjectB)
│   └── CodeFile.cs
├── Shared/
│   ├── .editorconfig (applies to shared utilities)
│   └── Utility.cs

In this layout, the .editorconfig at the root applies general settings to all files in the solution. The .editorconfig inside ProjectA applies additional or overriding rules specific to ProjectA. Similarly, the ProjectB and Shared directories can define their own settings.

Use Cases for Directory-Specific .editorconfig Files

Test Projects: Disable or lower the severity of certain analyzers for test projects, where some rules may not be applicable.

# In TestProject/.editorconfig
dotnet_diagnostic.WFO1000.severity = none

Legacy Code: Suppress analyzers entirely or reduce their impact for legacy codebases to avoid unnecessary noise.

# In LegacyCode/.editorconfig
dotnet_diagnostic.WFO1000.severity = suggestion

Experimental Features: Use more lenient settings for projects under active development while enforcing stricter rules for production-ready code – see the snippet below.
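For instance, mirroring the examples above (the directory names are assumptions for illustration):

# In ExperimentalFeature/.editorconfig
dotnet_diagnostic.WFO1000.severity = suggestion

# In ProductionCode/.editorconfig
dotnet_diagnostic.WFO1000.severity = error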
By strategically placing .editorconfig files, you gain fine-grained control over the behavior of analyzers and coding conventions, making it easier to manage large solutions with diverse requirements. Remember, the goal of this analyzer is to guide you toward more secure and maintainable code, but it’s up to you to decide the best pace and priority for addressing these issues in your project. As you can see, an .editorconfig file – or a thoughtfully composed set of such files – provides a centralized and consistent way to manage analyzer behavior across your project or team. For more details, refer to the .editorconfig documentation.

So, I have good ideas for WinForms Analyzers – can I contribute?

Absolutely! The WinForms team and the community are always looking for ideas to improve the developer experience. If you have suggestions for new analyzers or enhancements to existing ones, here’s how you can contribute:

- Open an issue: Head over to the WinForms GitHub repository and open an issue describing your idea. Be as detailed as possible, explaining the problem your analyzer would solve and how it could work.
- Join discussions: Engage with the WinForms community on GitHub or other forums. Feedback from other developers can help refine your idea.
- Contribute code: If you’re familiar with the .NET Roslyn analyzer framework, consider implementing your idea and submitting a pull request to the repository. The team actively reviews and merges community contributions.
- Test and iterate: Before submitting your pull request, thoroughly test your analyzer with real-world scenarios to ensure it works as intended and doesn’t introduce false positives.

Contributing to the ecosystem not only helps others but also deepens your understanding of WinForms development and the .NET platform.

Final Words

Analyzers are powerful tools that help developers write better, more reliable, and secure code. While they can initially seem intrusive, especially when they flag many issues, they serve as a safety net, guiding you to avoid common pitfalls and adopt best practices. The new WinForms-specific analyzers are part of our ongoing effort to modernize and secure the platform while maintaining its simplicity and ease of use. Whether you’re working on legacy applications or building new ones, these tools aim to make your development experience smoother. If you encounter issues or have ideas for improvement, we’d love to hear from you! WinForms has thrived for decades because of its passionate and dedicated community, and your contributions ensure it continues to evolve and remain relevant in today’s development landscape.

Happy coding!

The post WinForms: Analyze This (Me in Visual Basic) appeared first on .NET Blog.