All Activity
- Last week
-
If you’re attending NDC London 2025, we can’t wait to meet you! From January 29-31, Microsoft will be on-site to showcase the latest in .NET, Azure integration, and AI-powered development. This is your chance to engage with our experts, attend technical sessions, and explore how .NET can help you take your applications to the next level. What to Expect from Microsoft at NDC London 2025 Keynote from Scott Hanselman: Start the conference with inspiration as Scott Hanselman delivers a keynote exploring the latest trends and innovations in the developer world, highlighting how .NET empowers developers to build the future. 27+ Technical Sessions by Microsoft Leaders and MVPs: Dive into expert-led sessions covering everything from cloud-native development with .NET Aspire to building modern applications with AI and .NET 9. These talks are designed to equip you with the tools and knowledge to level up your development projects. Visit the Microsoft Booth: Our booth is your gateway to the latest innovations: Live Demos: See .NET 9 and Azure migration tooling in action. Interactive Activities: Network with the community and engage with our experts. Swag Giveaways: Walk away with exclusive Microsoft goodies. Customer Meetups: Schedule a 1:1 session with Microsoft speakers like Scott Hunter, Scott Hanselman, and others. Whether you’re looking for advice on technical challenges or insights into modernizing your applications with Azure, these meetups are the perfect opportunity to engage directly with our thought leaders. Join Us at NDC London 2025 Don’t miss your chance to learn, connect, and grow with the .NET team at NDC London. Whether you’re attending to sharpen your skills, discover new tools, or meet fellow developers, the event promises to deliver value for everyone in the community. We’re excited to meet you! Visit our booth, attend our sessions, and book a 1:1 meeting with our experts to make the most of your NDC London experience. Stay Connected Follow @dotnet for updates throughout the event, and keep an eye on our blog for post-event highlights. Let’s build the future together at NDC London 2025! The post Meet the .NET Team at NDC London 2025 appeared first on .NET Blog. View the full article
-
Welcome to our combined .NET servicing updates for January 2025. Let's get into the latest releases of .NET and .NET Framework. Here is a quick overview of what's new in these releases: security improvements, .NET updates, and .NET Framework updates.

Security improvements
Several CVEs have been fixed this month:
CVE-2025-21171: .NET Remote Code Execution Vulnerability (.NET 9.0)
CVE-2025-21172: .NET Remote Code Execution Vulnerability (.NET 8.0, .NET 9.0)
CVE-2025-21176: .NET and .NET Framework Denial of Service Vulnerability (.NET 8.0, .NET 9.0, .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1)
CVE-2025-21173: .NET Elevation of Privilege Vulnerability (.NET 8.0, .NET 9.0)

.NET January 2025 Updates
Below you will find a detailed list of everything from the .NET release for January 2025, including .NET 9.0.1 and .NET 8.0.12:
Release notes: 8.0.12 | 9.0.1
Installers and binaries: 8.0.12 | 9.0.1
Container images: images for both releases
Linux packages: 8.0.12 | 9.0.1
Known issues: 8.0 | 9.0
.NET improvements: ASP.NET Core 9.0.1, EF Core 8.0.12, Runtime 8.0.12 | 9.0.1, SDK 8.0.12 | 9.0.1
Share feedback about this release in the Release feedback issue.

.NET Framework January 2025 Updates
This month there are security and non-security updates; be sure to browse our release notes for .NET Framework for more details.

See you next month
Let us know what you think of these new combined service release blogs as we continue to iterate to bring you the latest news and updates for .NET. The post .NET and .NET Framework January 2025 servicing releases updates appeared first on .NET Blog. View the full article
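If you want to double-check which runtime build an application actually picked up after patching, a minimal console check like the one below works. This snippet is an illustration and is not part of the original release notes.

```csharp
using System;
using System.Runtime.InteropServices;

// Prints something like ".NET 9.0.1" or ".NET 8.0.12", confirming which
// servicing build the current process is running on after the January update.
Console.WriteLine(RuntimeInformation.FrameworkDescription);
Console.WriteLine($"Runtime version: {Environment.Version}");
```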
- Earlier
-
Arena Properties joined the community
-
.NET Aspire enhances the local development process with its powerful orchestration feature for app composition. In the .NET Aspire App Host, you specify all the projects, executables, cloud resources, and containers for your application in one centralized location. When you run the App Host project, .NET Aspire will automatically run your projects and executables, provision cloud resources if necessary, and download and run containers that are dependencies for your app. .NET Aspire 9 added new features to give you more control over how container lifetimes are managed on your local machine to speed up development when working with containers.

Containers with .NET Aspire
Let's look at a simple example of a .NET Aspire App Host that creates a local Redis container resource, waits for it to become available, and then configures the connection string for the web project:

// Create a distributed application builder given the command line arguments.
var builder = DistributedApplication.CreateBuilder(args);

// Add a Redis server to the application.
var cache = builder.AddRedis("cache");

// Add the frontend project to the application and configure it to use the
// Redis server, defined as a referenced dependency.
builder.AddProject<Projects.MyFrontend>("frontend")
    .WithReference(cache)
    .WaitFor(cache);

When the App Host is started, the call to AddRedis will download the appropriate Redis image. It will also create a new Redis container and run it automatically. When we stop debugging our App Host, .NET Aspire will automatically stop all of our projects, and it will also stop our Redis container and delete the associated volume that typically stores its data.

Container lifetimes
While this fits many scenarios, if there aren't going to be any changes in the container you may just want the container to stay running regardless of the state of the App Host. This is where the new WithLifetime API comes in, allowing you to customize the lifetime of containers. This means that you can configure a container to start and stay running, making projects start faster because the container will be ready right away and will re-use the volume.

var builder = DistributedApplication.CreateBuilder(args);

// Add a Redis server to the application and set its lifetime to persistent.
var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent);

builder.AddProject<Projects.MyFrontend>("frontend")
    .WithReference(cache)
    .WaitFor(cache);

Now, when we run our App Host, if the container isn't found .NET Aspire will still create a new container resource and start it; however, if a container with the specified name is found, .NET Aspire will use that resource instead of creating a new one. When the App Host shuts down, the container resource will not be terminated, which allows you to re-use it across multiple runs! You will be able to see that the container is set to Persistent with a little pin icon on the .NET Aspire dashboard.

How does it work?
By default, several factors are taken into consideration when .NET Aspire determines whether to use an existing container or to create a new one when ContainerLifetime.Persistent is set. .NET Aspire will first generate a unique name for the container based on a hash of the App Host project path. This means that a container will be persistent for a specific App Host, but not globally if you have multiple App Host projects. You can specify a container name with the WithContainerName method, which would allow for a globally unique persistent container.
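As a sketch of what that looks like in an App Host, assuming the WithContainerName extension described above; the container name here is only an illustrative choice:

```csharp
var builder = DistributedApplication.CreateBuilder(args);

// Keep the Redis container running between App Host runs and give it an
// explicit, stable name so multiple App Host projects can reuse it.
// "shared-dev-redis" is just an example name.
var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent)
    .WithContainerName("shared-dev-redis");

builder.AddProject<Projects.MyFrontend>("frontend")
    .WithReference(cache)
    .WaitFor(cache);

builder.Build().Run();
```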
In addition to the container name, .NET Aspire will consider the following:
Container image
Commands that start the container and their parameters
Volume mounts
Exposed container ports
Environment variables
Container restart policy

.NET Aspire takes all of this information and creates a unique hash from it to compare with any existing container data. If any of these settings are different, the container will NOT be reused and a new one will be created. So, if you are curious why a new container may have been created, it's probably because something has changed. This is a pretty strict policy that .NET Aspire started with for this new option, and the team is looking for feedback on future iterations.

What about persisting data?
Now that we are persisting our containers between launches of the App Host, it also means that we are re-using the volume that was associated with it. Volumes are the recommended way to persist data generated by containers and have the benefit that they can store data from multiple containers at a time, offer high performance, and are easy to back up or migrate. So, while yes we are re-using the volume, a new container may be created if settings are changed. Having more control of the exact volume that is used and reused allows us to do things such as:
Maintain cached data or messages in a Redis instance across app launches.
Work with a continuous set of data in a database during an extended development session.
Test or debug a changing set of files in an Azure Blob Storage emulator.

So, let's tell our container resource what volume to use with the WithDataVolume method. By default it will assign a name based on our project and resource names ({appHostProjectName}-{resourceName}-data), but we can also define the name that will be created and reused, which is helpful if we have multiple App Hosts.

var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent)
    .WithDataVolume("myredisdata");

Now, a new volume will be created and reused for this container resource, and if for some reason a new container is created it will still use the myredisdata volume. Volumes are nice because they offer ideal performance, portability, and security. However, you may want direct access to and modification of files on your machine. This is where data bind mounts come in when you need real-time changes.

var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent)
    .WithDataBindMount(@"C:\Redis\Data");

Data bind mounts rely on the filesystem to persist the Redis data across container restarts. Here, the C:\Redis\Data directory on Windows is mounted into the Redis container. Now, in the case of Redis, we can also control persistence so that the Redis resource takes snapshots of the data at a specific interval and threshold.

var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent)
    .WithDataVolume("myredisdata")
    .WithPersistence(interval: TimeSpan.FromMinutes(5), keysChangedThreshold: 100);

Here, the interval is the time between snapshot exports and the keysChangedThreshold is the number of key change operations required to trigger a snapshot. Integrations have their own specifications for WithDataVolume and WithBindMount, so be sure to check the integration documentation for the ones you use.

More control over resources
We now have everything set up, persisted, and ready to go in our App Host.
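Putting the snippets from this post together, a complete App Host might look roughly like the following sketch; the volume name, snapshot interval, and threshold are the same illustrative values used above.

```csharp
var builder = DistributedApplication.CreateBuilder(args);

// A persistent Redis container with an explicitly named volume and periodic
// snapshots, so cached data survives App Host restarts and container re-creation.
var cache = builder.AddRedis("cache")
    .WithLifetime(ContainerLifetime.Persistent)
    .WithDataVolume("myredisdata")
    .WithPersistence(interval: TimeSpan.FromMinutes(5), keysChangedThreshold: 100);

builder.AddProject<Projects.MyFrontend>("frontend")
    .WithReference(cache)
    .WaitFor(cache);

builder.Build().Run();
```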
As a bonus, .NET Aspire 9 also added the ability to start, stop, and restart resources, including containers, directly from the dashboard! This makes it really easy to test the resiliency of your applications without having to leave the dashboard.

Upgrade to .NET Aspire 9
There is so much more in .NET Aspire 9, so be sure to read through the What's new in .NET Aspire 9.0 documentation and easily upgrade in just a few minutes with the full upgrade guide. There is also newly updated documentation on container resource lifetimes, persisting data with volumes, and the new dashboard features. Let us know what you think of this new feature in .NET Aspire 9 and all of the other great features in the comments below. The post .NET Aspire Quick Tip – Managing Container & Data Lifetime appeared first on .NET Blog. View the full article
-
.Net joined the community
-
VS Code joined the community
-
AWS started following Available Styles and Broken Links
-
DecemberChild started following Create XML file with C#
-
It has been an absolutely outstanding year of content for .NET, with creators around the globe sharing their passion for .NET and the .NET team giving insight into the latest and greatest in the world of .NET. With events, live streams, and plenty of on-demand content dropping on the .NET YouTube nearly every single day, it is a great way to stay up to date, and also to get involved and give feedback to the team in real-time. This year, developers tuned into the .NET YouTube more than ever before with over 8 million views of content, left over 6,000 comments, smashed the like button over 120,000 times, and over 50,000 new subscribers joined the channel. There is now more variety of content than ever, and that has led to over 700K hours of watch time this year alone! This is over 29,000 days watched, or to go even a step further… nearly 80 years!

Top .NET videos of 2024
It was fun looking back at this year's top videos as it really was a wide range of content. The most watched video on the channel was Scott Hanselman and David Fowler's What is C#? video in the C# for Beginners series. However, if we take a look at just new videos released in 2024 then Scott shows up yet again, but this time with Stephen Toub in the first entry in Deep .NET on async/await. That was closely followed up with Dan Roth and Safia Abdalla's What is ASP.NET Core?, which went directly into the new front-end and back-end beginner series that launched this year. There is so much more to recap though, as there were over 260 new videos released on the .NET YouTube this year! Let's take a look at what else the community has been tuning into.

Deep .NET
If you are looking for deep technical content, then look no further than Scott Hanselman and Stephen Toub's series, Deep .NET. In each episode, Scott and Stephen go in-depth on a topic, which has ranged from async/await, Span, RegEx, LINQ, ArrayPool, Parallel Programming, and more. Recently they have been hosting more .NET team members, including Eric Erhardt who went deep on Native AOT. Scott and Stephen will be back in 2025 with even more Deep .NET episodes, and you will hear from even more voices from the .NET team. So, if you love this type of content, be sure to reach out to Scott & Stephen or leave a comment on YouTube and tell them who and what you want to hear about on Deep .NET.

.NET Conf 2024
At this year's .NET Conf, the 14th entry in the series, we celebrated the launch of .NET 9 and the amazing .NET community. Completely free & virtual, this year's 3-day event featured a BONUS 4th day of exclusive YouTube premiere sessions and the 3rd iteration of the .NET Conf – Student Zone! With over 90 sessions to explore, there is something for everyone. Not to mention that there is still time to link up with your local community with .NET Conf local events happening through January 2025! .NET Conf wasn't the only major .NET streaming event this year. In August, .NET Conf: Focus on AI highlighted the latest in AI development with .NET. We also celebrated the launch of .NET Aspire 8.1 with a full day of content at .NET Aspire Developers Day. If you are looking for more cloud content for .NET applications, the Azure Developers YouTube ran events on all things .NET on Azure and another event all about using Azure with .NET Aspire!

ASP.NET Core Beginner Series
Dan Roth and Safia Abdalla re-introduced the world to ASP.NET Core and then went deeper with full beginner series on both front-end web development and back-end API development with .NET.
For front-end web development, Dan dives deep into Blazor, Razor, components, render modes, and so much more to build a complete application from scratch. If you are more into API development, then Safia has you covered with all things ASP.NET Core for APIs, including building, testing, adding middleware, dependency injection, and so much more. These are just a few of the new beginner series that launched this year to help developers jumpstart their development journey with .NET.

Introduction to .NET Aspire
Can you believe that it was just 7 months ago that .NET Aspire officially launched, helping developers streamline their development process and build better distributed applications? So much has happened in the world of .NET Aspire, including several new releases, the launch of the .NET Aspire Community Toolkit, and plenty of .NET Aspire content. One of the most watched series on the .NET YouTube was Welcome to .NET Aspire, where the team took developers through all of the different parts of .NET Aspire. Looking to get started and want to see how to integrate .NET Aspire into your existing apps? Then Jeff Fritz has you covered with the brand new .NET Aspire beginner series, a 90-minute deep dive into all things .NET Aspire.

Top .NET Live Streams of 2024
Events and on-demand content weren't the only thing happening on the .NET YouTube. There was a live stream taking place nearly every other day, with over 150 taking place in 2024 alone! Let's take a look at the top streams.

Let's Learn .NET – Blazor
The Let's Learn .NET series is a worldwide, interactive live stream event where you can follow along at home to learn a new .NET technology and ask questions live. Besides events, this year's #1 most watched live stream was the Let's Learn Blazor event walking through the latest and greatest in building full-stack web apps with .NET. That was only the start for Let's Learn .NET, as the series continued throughout the year and included .NET Aspire, Containers, and AI with Semantic Kernel. It has been really exciting to see the series grow and now be live streamed in 7 different languages for developers everywhere!

On .NET Live: Modular Monoliths with ASP.NET Core
Steve Smith is iconic when it comes to ASP.NET Core architecture videos and guidance. His session at .NET Conf is consistently one of the most watched and commented on every year. This year, the On .NET Live team had him on to talk all about making decisions between monolith and microservice-based architectures. Every week the On .NET Live team brings on amazing community members to talk about a wide range of topics, so be sure to browse the entire catalog of live streams.

.NET Community Standups
Hear and interact directly with team members building .NET here at Microsoft. That is what the .NET Community Standups are all about: a behind-the-scenes look at .NET's development and a great way to have your voice heard and get your questions answered. In 2024, over 100K developers tuned in live and another 300K developers caught up on past community standup streams. Here are the top community standups of 2024 for each team:
ASP.NET Core – .NET 9 Roadmap & Blazor Hybrid in .NET 9
Languages & Runtime – C# 13 and beyond
.NET Data – Database Concurrency
.NET AI – Get Started with AI in .NET
.NET MAUI – .NET MAUI and .NET Aspire

That's a wrap!
Thanks to everyone who created, enjoyed, commented, and smashed that like button in 2024!
We have tons of great new content coming your way in 2025, so make sure you go and subscribe to the .NET YouTube if you haven't yet to stay up to date. If you don't have access to YouTube, don't worry, as all .NET videos are also available on Microsoft Learn! What were your favorite videos and live streams of 2024? What are you looking forward to in 2025? Let us know in the comments below. The post Top .NET Videos & Live Streams of 2024 appeared first on .NET Blog. View the full article
-
ChrisHite joined the community
-
Pause joined the community
-
Pasquale47 joined the community
-
Welcome to Pages! Pages extends your site with custom content management designed especially for communities. Create brand new sections of your community using features like blocks, databases and articles, pulling in data from other areas of your community. Create custom pages in your community using our drag'n'drop, WYSIWYG editor. Build blocks that pull in all kinds of data from throughout your community to create dynamic pages, or use one of the ready-made widgets we include with the Invision Community. View our Pages documentation
-
GeorgeDop joined the community
-
TOZKatja1 joined the community
-
MaureenW83 joined the community
-
We are currently making an unexpected change to the way that .NET installers and archives are distributed. This change may affect you and may require changes in your development, CI, and/or production infrastructure. We expect that most users will not be directly affected; however, it is critical that you validate whether you are affected and watch for downtime or other kinds of breakage. The most up-to-date status is being maintained at dotnet/core #9671. Please look to that issue to stay current. If you are having an outage that you believe is caused by these changes, please comment on the referenced GitHub issue and/or email us at dotnet@microsoft.com.

Affected domains
We maintain multiple Content Delivery Network (CDN) instances for delivering .NET builds. Some end in azureedge.net. These domains are hosted by edg.io, which will soon cease operations due to bankruptcy. We are required to migrate to a new CDN and will be using new domains going forward. It is possible that azureedge.net domains will have downtime in the near-term. We expect that these domains will be permanently retired in the first few months of 2025. Note: No other party will ever have access to use these domains.
Affected domains: dotnetcli.azureedge.net, dotnetbuilds.azureedge.net
Unaffected domains: dotnet.microsoft.com, download.visualstudio.microsoft.com

Our response
We made several changes in response. We have tried to reduce what you need to do to react. In many cases, you won't need to do anything special.
New CDNs: official builds at builds.dotnet.microsoft.com, CI builds at ci.dot.net
Updated .NET install script: the install script now uses the new domains, per dotnet/install-scripts #555. This script has been deployed to the official locations, as described in the dotnet-install scripts reference.
Addressing CI installers: GitHub Actions has been updated to use the new domains, per actions/setup-dotnet #570. We expect that GitHub Enterprise Server will be addressed in January.
The Azure DevOps UseDotnetTask will be updated in January. We do not yet have a date for updating Azure DevOps Server.

Domain configuration
We are in the process of changing the configuration of our domains. At present, they may be using a combination of Akamai, Azure Front Door, and edgio. Our highest priority has been maintaining domain operation while we initiate new service with other CDN providers and validate their capability in our environment. We are using Azure Traffic Manager to split traffic between them, primarily for reliability.

Call to action
There are several actions you can take to determine if you have any exposure to the azureedge.net retirement. Search your source code, install scripts, Dockerfiles, and other files for instances of azureedge.net. We also noticed that there is a lot of use of our storage account, dotnetcli.blob.core.windows.net; please also search for it. The storage account is unaffected; however, it would be much better for everyone if you used our new CDN. It will deliver better performance.
Update dotnetcli.azureedge.net to builds.dotnet.microsoft.com
Update dotnetcli.blob.core.windows.net to builds.dotnet.microsoft.com
Note: The new CDN is path-compatible with those servers. It's only the domain that needs to change. Please check for copies of the install script that you may have within your infrastructure; you will need to update them. You will need to move to the latest version of the GitHub Action and Azure DevOps Task installers to ensure that you are protected from downtime. Please check firewall rules that might prevent you from accessing our new CDNs, similar to this conversation.

Closing
We are sorry that we are making changes that affect running infrastructure and asking you to react to them during a holiday period. As you can see, the need for these changes was unexpected and we are trying to make the best choices under a very compressed schedule. We are hoping that the mitigations that we put into place will result in most users being unaffected by this situation. With every crisis, there are opportunities for learning. We realized that we are missing public documentation on how to best use all of the installation-related resources we provide, to balance reliability, security, performance, and productivity. We will be working on producing this documentation in the new year. The post Critical: .NET Install links are changing appeared first on .NET Blog. View the full article
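As a quick sanity check for the firewall point in the call to action above, a small console snippet like the following can confirm whether the new CDN hosts are reachable from a given build machine. The host names come from the post itself; the rest of the code is an illustration, not part of the announcement.

```csharp
using System;
using System.Net.Http;

// Hosts named in the announcement. Any HTTP response (even a 404) proves the
// connection is allowed, while a timeout or connection error suggests a
// firewall rule that still needs updating.
var hosts = new[] { "https://builds.dotnet.microsoft.com", "https://ci.dot.net" };
using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };

foreach (var host in hosts)
{
    try
    {
        using var response = await client.GetAsync(host);
        Console.WriteLine($"{host} -> {(int)response.StatusCode} {response.StatusCode}");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"{host} -> unreachable: {ex.Message}");
    }
}
```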
-
In 2024, the .NET blog continued to be a central hub of knowledge, delivering valuable insights and updates straight from the source. With over 130 posts and more than 260,000 words published, these blogs remain a critical resource for developers looking to stay up-to-date with the latest advancements in .NET. Alright, let's explore the top blogs from the .NET team that made the biggest impact this year.

Announcing .NET 9
.NET 9 is here! It is the most productive, modern, secure, intelligent, and performant release of .NET yet! We started the year by sharing our vision for .NET 9 and our strategy for engaging deeper with the developer community around the release. This meant that we pivoted our content on the blog to focus on .NET 8, the current shipping version of .NET at the time.
This led to a new form of extremely detailed release notes on GitHub for every preview release. In addition, we focused on ensuring that as .NET 9 progressed every feature was documented and maintained on Microsoft Learn. This meant that on launch day developers could not only read the announcement on .NET 9, but they could also dive deep into documentation around all parts of what's new in .NET 9, including the Runtime, Libraries, SDK, C# 13, F# 9, ASP.NET Core, .NET Aspire, .NET MAUI, EF Core, WPF, and Windows Forms. Want to go deeper on all things .NET 9? Be sure to browse all of the blog entries this year covering .NET 9 updates, videos from .NET Conf, and of course the .NET Conf 2024 Keynote where you can watch me walk across the beautiful new bridge on the Microsoft campus for 5 minutes straight!

Performance Improvements in .NET 9
It wouldn't be a new release without Stephen Toub's complete deep dive into the vast performance improvements in .NET. When printed to PDF the blog spans over 320 pages, covering the over 1,000 performance-related pull requests in .NET 9. From enhancements to garbage collection, Native AOT, threading, reflection, LINQ, loops, JIT, and so much more, it is an absolute must read. If you want to spend your holiday break enjoying the entire history of performance improvements in Toub's ongoing series, then check out previous posts on .NET 8, .NET 7, .NET 6, .NET 5, .NET Core 3.0, .NET Core 2.1, and .NET Core 2.0. If you are like me and would rather watch a video on all these improvements, then Toub has you covered again with his session from .NET Conf 2024!

Introducing ASP.NET Core metrics and Grafana dashboards in .NET
.NET Aspire includes a fantastic developer dashboard for OpenTelemetry, but did you know you can easily set up your own custom Grafana dashboards? This blog post introduces new metrics in .NET for ASP.NET Core, including HTTP request counts, duration, and error handling diagnostics. It highlights the pre-built Grafana dashboards for monitoring apps in production, and how you can create custom metrics and use tools like dotnet-counters for live metrics viewing.

General Availability of .NET Aspire: Simplifying .NET Cloud-Native Development
.NET Aspire is officially here! A new stack designed to simplify the development of .NET projects with tools, templates, and integrations to streamline building distributed applications. Key features include the .NET Aspire Dashboard for viewing OpenTelemetry data, support for various databases and cloud services, and the ability to orchestrate local development with the App Host project. Take an in-depth look at how to get started with .NET Aspire using Visual Studio, the .NET CLI, or Visual Studio Code. Also, be sure to browse through all .NET Aspire blog posts, the What's new in .NET Aspire 9 session from .NET Conf, the brand new .NET Aspire beginner series, and the free Microsoft Learn training and .NET Aspire credential.

Introducing .NET Smart Components – AI-powered UI Controls
.NET Smart Components are a set of AI-powered UI controls for .NET apps, initially available for Blazor, MVC, and Razor Pages. These components include Smart Paste, Smart TextArea, and Smart ComboBox, which enhance user productivity by automating form filling, autocompleting text, and providing intelligent suggestions. You can try these components today, check out full sample apps, and provide feedback to help improve them on GitHub. Since the first announcements of Smart Components, an entire ecosystem has grown around the initiative.
Read about the thriving smart components ecosystem from popular component vendors to easily add AI to your .NET apps.

C# 12 Blog Series
The team also experimented with some new series on the blog, including "Refactor your C# code" from David Pine, who explored various C# 12 features and how to integrate them into your everyday coding, including:
Primary constructors
Collection expressions
Aliasing any type
Default lambda parameters

AI + .NET Blogs
It is now easier than ever to find blogs on the latest in AI development with .NET with the AI category on the .NET blog. You can dive into great posts on big announcements, getting started, and in-depth tutorials on using the latest models. Here are some of my favorites:
Introducing Microsoft.Extensions.AI
Announcing the stable release of the official OpenAI library for .NET
How we build GitHub Copilot into Visual Studio
eShop infused with AI
Using local AI models with .NET Aspire

Go Deep on Developer Workloads
There is so much more on the .NET blog to revisit, with great content across our workloads for building mobile, desktop, and web applications with .NET. Here are some of my top picks across .NET MAUI, ASP.NET Core, Blazor, Entity Framework, and more:
.NET MAUI welcomes Syncfusion open-source contributions
Learn to build your first Blazor Hybrid app!
Creating bindings for .NET MAUI with Native Library Interop
MongoDB EF Core Provider: What's New?
How to use a Blazor QuickGrid with GraphQL
The FAST and the Fluent: A Blazor Story
OpenAPI document generation in .NET 9
Adding .NET Aspire to your existing .NET Apps
Build & test resilient apps in .NET with Dev Proxy
Note: You can easily view all recent posts for our top focus areas like .NET Aspire, AI, etc. by using the dropdown menu in the blog navigation.

A fresh new look!
You may have noticed a fresh new look for all of the developer blogs here at Microsoft. This brand new look and feel comes with some great new features, including a full table of contents, a Read Next section, easier sharing, and improved navigation. There you have it, the top .NET blog posts of 2024! What were your favorites? What do you want to see more of in 2025? Let us know and share your favorite .NET blogs in the comments below. Don't forget to subscribe to the blog in your favorite RSS reader or through e-mail notifications so you never miss a .NET blog again. Don't forget to go download .NET 9 today! The post Top .NET Blogs Posts of 2024 appeared first on .NET Blog. View the full article
-
We're excited to announce an all new free plan for GitHub Copilot, available for everyone today in VS Code. All you need is a GitHub account. No trial. No subscription. No credit card required. Enable GitHub Copilot Free. You can click on the link above or just enable GitHub Copilot right from within VS Code like so... With GitHub Copilot Free you get 2000 code completions/month. That's about 80 per working day, which is a lot. You also get 50 chat requests/month, as well as access to both GPT-4o and Claude 3.5 Sonnet models. If you hit these limits, ideally it's because Copilot is doing its job well, which is to help you do yours! If you find you need more Copilot, the paid Pro plan is unlimited and provides access to additional models like o1 and Gemini (coming in the new year). With this announcement, GitHub Copilot becomes a core part of the VS Code experience. The team has been hard at work, as always, improving that experience with brand new AI features and capabilities.
Let's take a look at some of the newer additions to GitHub Copilot that dropped in just the past few months. This is your editor, redefined with AI.

Work with multiple files using Copilot Edits
Copilot Edits is a multi-file editing experience that you can open from the top of the chat side bar. Given a prompt, Edits will propose changes across files, including creating new files when needed. This gives you the conversational flow of chat combined with the power of Copilot's code generation capabilities. The result is something you have to try to believe. Try this: Build a native mobile app using Flutter. I built a game last weekend and I've never used Flutter in my life.

Multiple models, your choice
Whether you're using Chat, Inline Chat, or Copilot Edits, you get to decide who your pair programmer is. Try this: Use 4o to generate an implementation plan for a new feature and then feed that prompt to Claude in GitHub Copilot Edits to build it.

Custom instructions
Tell GitHub Copilot exactly how you want things done with custom instructions. These instructions are passed to the model with every request, allowing you to specify your preferences and the details that the model needs to know to write code the way you want it. You can specify these at the editor or project level. We'll even pick them up automatically if you include a .github/copilot-instructions.md file in your project. These instructions can easily be shared with your team, so everyone can be on the same page - including GitHub Copilot. For example...

## React 18
* Use functional components
* Use hooks for state management
* Use TypeScript for type safety

## SvelteKit 4
* Use SSR for dynamic content rendering
* Use static site generation (SSG) for pre-rendered static pages.

## TypeScript
* Use consistent object property shorthand: const obj = { name, age }
* Avoid implicit any

Try this: Ask Copilot to generate the command to dump your database schema to a file and then set that file as one of your custom instructions.

Full project awareness
GitHub Copilot has AI-powered domain experts that you can mention with the @ syntax. We call these "participants". The @workspace participant is a domain expert in the area of your entire codebase. GitHub Copilot will also do intent detection (as seen in the video) and include @workspace automatically if it sees you are asking a question that requires full project context. Try this: Type /help into the chat prompt to see a list of all the participants in GitHub Copilot and their various areas of expertise, as well as slash commands that can greatly reduce prompting.

Naming things and other hard problems
They say naming things is one of the hardest problems in computer science. Press F2 to rename something, and GitHub Copilot will give you some suggestions based on how that symbol is implemented and used in your code. Try this: If you don't know what to call something, don't overthink it. Just call it foo and implement it. Then hit F2 and let GitHub Copilot suggest a name for you.

Speak your mind
Select the microphone icon to start a voice chat. This is powered by the free, cross-platform VS Code Speech extension that runs on local models. No 3rd party app required. Try this: Use Speech with GitHub Copilot Edits to prototype your next app. You can literally talk your way to a working demo.

Be a terminal expert
With terminal chat, you can do just about anything in your terminal. Press Cmd/Ctrl + i while in the VS Code terminal and tell GitHub Copilot what you want to do.
Copilot can also explain how to fix failed shell commands by analyzing the error output. For instance, I know that I can use the ffmpeg library to extract frames from videos, but I don't know the syntax and flags. No problem! Try this: The next time you get an error in your terminal, look for the sparkle icon next to your prompt. Select it to have GitHub Copilot fix, explain, or even auto-correct the shell command for you.

No fear of commitment
No more commits that say "changes". GitHub Copilot will suggest a commit message for you based on the changes you've made and your last several commit messages. You can use custom instructions for commit generation to format the messages exactly the way you want. Try this: Go beyond commits. Install the GitHub Pull Requests and Issues extension and you can generate pull request descriptions, get summaries of pull requests, and even get suggested fixes for issues. All without leaving VS Code.

Extensions are all you need
Every VS Code extension can tie directly into the GitHub Copilot APIs and offer a customized AI experience. Check out MongoDB with their extension that can write impressively complex queries, use fuzzy search, and a lot more... Try this: Build your own extension for GitHub Copilot using GitHub Copilot! We've created some new tutorials that show you how to build a code tutor chat participant or generate AI-powered code annotations.

A vision for the future
This last one is a preview of something we're adding to GitHub Copilot soon, but it's way too cool not to show you right now. Install the Vision Copilot Preview extension and ask GitHub Copilot to generate an interface based on a screenshot or markup. Or use it to generate alt text for an image. Try this: Mock up a UI using Figma or Sketch (or PowerPoint - it's ok if you do that. I do it too). Then use @vision to generate the UI. You can even tell it which CSS framework to use. Note: Vision is in preview today and requires you to have your own OpenAI, Anthropic, or Gemini API key. The key will not be required when we release it as part of GitHub Copilot. Coming Soon!

Keeping up with GitHub Copilot
There's so much more of GitHub Copilot we want to show you, but nothing can replace the experience of trying it for yourself. If you're just getting started, we recommend you check out these 3 short videos to bring you up to speed quickly on the Copilot UI, as well as learning some prompt engineering best practices. We ship updates and new features for GitHub Copilot every month. The best way to keep up with the latest and greatest in AI coding is to follow us on X, Bluesky, LinkedIn, and even TikTok. We'll give you the updates as they drop - short and sweet - right in your feed. And if you've got feedback, we'd love to hear it. Feel free to @ us on social or drop an issue or feature request on the GitHub Copilot extension issues repo.

GitHub Copilot in other places
As part of the free tier, you will also be able to use GitHub Copilot on GitHub.com. While we work with GitHub to build the Visual Studio Code experience, Copilot itself is not exclusive to VS Code. You may be wondering about editors like Visual Studio. Will those users get a free Copilot offering as well? Yes. Absolutely. Check out this blog post from the VS team on what works today and what's coming shortly.

The AI code editor for everyone
2025 is going to be a huge year for GitHub Copilot, now a core part of the overall VS Code experience. We hope that you'll join us on the journey to redefine the code editor. Again.
Enable GitHub Copilot Free. View the full article
-
At .NET Conf 2024 we celebrated the official launch of .NET 9 alongside groundbreaking announcements across the entire .NET ecosystem and a deeper dive into the world of .NET for developers worldwide. Organized by Microsoft and the .NET community, the event was a huge success, providing .NET developers with 3 days of incredible, free .NET content.
For the first time ever, this year also included a bonus “day 4” of YouTube premieres following the initial 3 days, which brought even more great content to .NET developers. https://devblogs.microsoft.com/dotnet/wp-content/uploads/sites/10/2024/12/dotnet9dontnetconfrecap.mp4 If you have been wondering if James really did walk all the way across the new bridge on the Microsoft campus for the keynote, he did! With help from Cameron and Maddy they recorded one continuous cut for the keynote, and while we did walk slow and steady for the recording it actually takes around 5 minutes to walk across the bridge at a standard pace.

On-Demand Recordings

If you missed the event, feel free to catch up on the sessions via our on-demand playlists on YouTube or Microsoft Learn. This year, we streamed 92 sessions over 4 days, with most of those sessions delivered live. Day 1 featured the official release of .NET 9, including a 1-hour keynote and sessions led by the .NET team to introduce new features and enhancements related to .NET 9, including topics like .NET Aspire, AI, .NET MAUI, web development, Visual Studio, and more. Day 2 provided a deeper dive into .NET capabilities, continuously broadcast for 24 hours to reach all time zones. Day 3 was a continuation of the 24-hour broadcast, offering a wide range of sessions from speakers from around the world. Day 4 was a new “bonus” addition this year, made up of pre-recorded YouTube premieres covering a range of topics from the .NET community.

.NET 9 Announcements

The kickoff of .NET Conf 2024 included the launch of .NET 9, the most productive, modern, secure, intelligent, and performant release of .NET yet. Full details on the .NET 9 release can be found in the Announcing .NET 9 blog post. Other major announcements that were made during the event included: Visual Studio 2022 v17.12 GA, .NET Aspire Community Toolkit, Azure Functions Support for .NET Aspire (Preview), Microsoft.Extensions.AI – .NET AI Library (Preview), Syncfusion Toolkit for .NET MAUI.

Explore Slides & Demo Code

Access the PowerPoint slide decks, source code, and more from our amazing speakers on the official .NET Conf 2024 GitHub page. Plus, grab your digital swag from the DigitalSwag folder in the dotnetConf/2024 repository!

Upskill on .NET Aspire

.NET Aspire training and credential on Microsoft Learn: To earn the Build distributed apps with .NET Aspire credential, learners demonstrate the ability to build distributed apps with .NET Aspire. Through the training and credential, learners will cover the following: Add .NET Aspire to a solution. Configure service discovery. Configure components. Monitor resources with the .NET Aspire dashboard. Create tests with .NET Aspire. Prepare for deployment. .NET Aspire for Beginners video series: Are you completely new to .NET Aspire? This beginner video series teaches you how to get started with .NET Aspire and implement it into your applications.

Customer Stories

There was an astonishing amount of customer evidence presented this year at .NET Conf, including some exciting videos and mentions during the keynote presentation. Please see below for some of the customer evidence highlights. Microsoft Copilot Discover how a small team of five developers at Microsoft transformed the Copilot backend in just four months using .NET & .NET Aspire. Join Pedram Rezaei, a developer on the Copilot backend team, as he shares their journey to improve performance, scalability, and reliability for millions of users worldwide.
Whether you’re a .NET developer or interested in building scalable, reliable services efficiently, this inspiring story demonstrates what’s possible with the right tools and a dedicated team. Fidelity Investments Discover Fidelity’s latest innovation in trading technology with Active Trader Pro, built on Microsoft’s .NET MAUI platform. This powerful, cross-platform trading solution brings seamless performance to both Windows and Mac users, backed by real-time data streaming, advanced tools, and Microsoft’s support. Join Fidelity’s SVP Mark Burns as he shares how .NET MAUI enables Fidelity to deliver a fast, reliable, and scalable experience for active traders everywhere. Chevron Phillips Chemical We partnered with Chevron Phillips Chemical Company to showcase their migration story with .NET and Azure. We presented the slide below during the keynote at .NET Conf 2024 and also had their Cloud Architect Manager present live at our .NET session at Ignite. KPMG KPMG is another company we have been partnering with to promote the positive outcomes of utilizing .NET and Azure, specifically for KPMG Clara. We showcased the below slide in the .NET Conf keynote. Xbox The Xbox team recently started using .NET Aspire in Xbox services as they are going through a large migration to the latest .NET. They shared how .NET Aspire has helped them speed up and tighten their inner development loop. Local .NET Conf Events The learning journey continues with community-run events. Join us in celebrating .NET around the globe! Find an event near you. Join the Conversation Share your thoughts and favorite moments from .NET Conf 2024 in the comments below or on social media using #dotNETConf2024. Let’s keep the conversation going! Catch Up on Sessions: Watch all the sessions you missed or rewatch your favorites on on-demand playlists or Microsoft Learn. Get Started with .NET 9: Download the latest release of .NET 9 and explore the groundbreaking features it has to offer. Upskill on .NET Aspire: Begin your journey with .NET Aspire by watching the beginner video series and earning the Microsoft Learn credential. Let’s continue building, innovating, and empowering developers with .NET! The post .NET Conf 2024 Recap – Celebrating .NET 9, AI, Community, & More appeared first on .NET Blog. View the full article
Exploring Microsoft.Extensions.VectorData with Qdrant and Azure AI Search
Guest posted a topic in General
Discover how to use Microsoft.Extensions.VectorData to implement semantic search using Qdrant and Azure AI Search.

Dive into Semantic Search with Microsoft.Extensions.VectorData: Qdrant and Azure AI Search

Semantic search is transforming how applications find and interpret data by focusing on meaning rather than mere keyword matching. With the release of Microsoft.Extensions.VectorData, .NET developers have a new set of building blocks to integrate vector-based search capabilities into their applications. In this post, we’ll explore two practical implementations of semantic search using Qdrant locally and Azure AI Search.

Quick Introduction to Microsoft.Extensions.VectorData

Microsoft.Extensions.VectorData is a set of .NET libraries designed for managing vector-based data in .NET applications. These libraries provide a unified layer of C# abstractions for interacting with vector stores, enabling developers to handle embeddings and perform vector similarity queries efficiently. To get a detailed overview of the library’s architecture and capabilities, I recommend reading Luis’s excellent blog post. In this blog post, we’ll showcase two real-world use cases: Using Qdrant locally for semantic search. Leveraging Azure AI Search for enterprise-scale vector search.
To run the demos, you need one of the models provided by Ollama for embedding generation; in this sample, the model used is all-minilm. Prerequisites:
Install Ollama.
Download the all-minilm model.
An OCI compliant container runtime, such as Docker Desktop or Podman.

Semantic Search with Qdrant

What is Qdrant?

Qdrant is a vector similarity search engine that provides a production-ready service with a convenient API to store, search, and manage points (i.e. vectors) with an additional payload. It’s perfect for applications that require efficient similarity searches. You can easily run Qdrant locally in a Docker container, making it a developer-friendly choice. For setup instructions, refer to the Qdrant Quickstart Guide. For reference, this is a sample command to run a local container instance:

docker run -p 6333:6333 -p 6334:6334 -v $(pwd)/qdrant_storage:/qdrant/storage:z qdrant/qdrant

Once the container is created, you can check it in Docker.

Qdrant and Semantic Kernel

Semantic Kernel provides a built-in connector for Qdrant, enabling .NET developers to store embeddings and execute vector-based queries seamlessly. This connector is built on top of Microsoft.Extensions.VectorData and the official .NET Qdrant Client. This integration combines Qdrant’s high performance with Semantic Kernel’s ease of use. To learn more about the connector, visit the official documentation for the Semantic Kernel Vector Store Qdrant connector.

Scenario Overview – Qdrant

Setup: A Qdrant instance runs locally in a Docker container. Functionality: A .NET console application uses the Semantic Kernel’s Qdrant connector to store movie embeddings and perform semantic search queries. Let’s see a sample class that implements and runs this demo.

using Microsoft.Extensions.AI;
using Microsoft.Extensions.VectorData;
using Microsoft.SemanticKernel.Connectors.Qdrant;
using Qdrant.Client;

var vectorStore = new QdrantVectorStore(new QdrantClient("localhost"));

// get movie list
var movies = vectorStore.GetCollection<ulong, MovieVector<ulong>>("movies");
await movies.CreateCollectionIfNotExistsAsync();
var movieData = MovieFactory<ulong>.GetMovieVectorList();

// get embeddings generator and generate embeddings for movies
IEmbeddingGenerator<string, Embedding<float>> generator =
    new OllamaEmbeddingGenerator(new Uri("http://localhost:11434/"), "all-minilm");
foreach (var movie in movieData)
{
    movie.Vector = await generator.GenerateEmbeddingVectorAsync(movie.Description);
    await movies.UpsertAsync(movie);
}

// perform the search
var query = "A family friendly movie that includes ogres and dragons";
var queryEmbedding = await generator.GenerateEmbeddingVectorAsync(query);

var searchOptions = new VectorSearchOptions()
{
    Top = 2,
    VectorPropertyName = "Vector"
};

var results = await movies.VectorizedSearchAsync(queryEmbedding, searchOptions);
await foreach (var result in results.Results)
{
    Console.WriteLine($"Title: {result.Record.Title}");
    Console.WriteLine($"Description: {result.Record.Description}");
    Console.WriteLine($"Score: {result.Score}");
    Console.WriteLine();
}

Once the demo is run, this is the sample output:

Title: Shrek
Description: Shrek is an animated film that tells the story of an ogre named Shrek who embarks on a quest to rescue Princess Fiona from a dragon and bring her back to the kingdom of Duloc.
Score: 0.5013245344161987

Title: Lion King
Description: The Lion King is a classic Disney animated film that tells the story of a young lion named Simba who embarks on a journey to reclaim his throne as the king of the Pride Lands after the tragic death of his father.
Score: 0.3225690722465515

Why Qdrant?

Using Qdrant for semantic search offers the advantage of scalable, high-speed similarity search, making it an excellent choice for applications requiring large-scale vector data management. Additionally, you have the option to run Qdrant Cloud on Microsoft Azure.

Semantic Search with Azure AI Search

What is Azure AI Search?

Azure AI Search is Microsoft’s search-as-a-service offering. It integrates traditional search capabilities with AI-powered features like semantic and vector search. Built for scalability and reliability, it is an ideal solution for enterprise applications requiring advanced search functionality. You can learn more about Azure AI Search. For this sample, we will use the integrated vectorization in Azure AI Search, which improves indexing and querying by converting documents and queries into vectors.

Azure AI Search and Semantic Kernel

This connector is built on top of Microsoft.Extensions.VectorData and the official Azure AI Search libraries for .NET. For more information, refer to the Azure AI Search connector documentation.

Scenario Overview – Azure AI Search

Setup: An Azure AI Search service is created in your Azure subscription. Functionality: The console application stores vector embeddings of movies and executes vector-based semantic search queries. Requirements: The Azure AI Search endpoint must be added as a User Secret in the application. With the endpoint only, the app will create an Azure Default Credential to connect to the service. If you want to use a secret (API key) to access Azure AI Search, you need to add that value as a User Secret as well. Here is a console command sample on how to add the User Secrets:

dotnet user-secrets init
dotnet user-secrets set "AZURE_AISEARCH_URI" "https://<AI Search Name>.search.windows.net"
dotnet user-secrets set "AZURE_AISEARCH_SECRET" "AI Search Secret"

Let’s see a sample class that implements and runs this demo.

using Microsoft.Extensions.AI;
using Microsoft.Extensions.VectorData;
using Azure;
using Azure.Search.Documents.Indexes;
using Microsoft.SemanticKernel.Connectors.AzureAISearch;
using Microsoft.Extensions.Configuration;
using Azure.Identity;
using Azure.Core;

// get the search index client using Azure Default Credentials or Azure Key Credential with the service secret
var client = GetSearchIndexClient();
var vectorStore = new AzureAISearchVectorStore(searchIndexClient: client);

// get movie list
var movies = vectorStore.GetCollection<string, MovieVector<string>>("movies");
await movies.CreateCollectionIfNotExistsAsync();
var movieData = MovieFactory<string>.GetMovieVectorList();

// get embeddings generator and generate embeddings for movies
IEmbeddingGenerator<string, Embedding<float>> generator =
    new OllamaEmbeddingGenerator(new Uri("http://localhost:11434/"), "all-minilm");
foreach (var movie in movieData)
{
    movie.Vector = await generator.GenerateEmbeddingVectorAsync(movie.Description);
    await movies.UpsertAsync(movie);
}

// perform the search
var query = "A family friendly movie that includes ogres and dragons";
var queryEmbedding = await generator.GenerateEmbeddingVectorAsync(query);

// show the results...
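The demo above relies on a GetSearchIndexClient() helper to create the client; the complete version lives in the linked GitHub repository. The following is only a minimal sketch of what such a helper could look like, assuming the AZURE_AISEARCH_URI and AZURE_AISEARCH_SECRET user-secret keys shown earlier (this reconstruction is an assumption, not the post’s exact code), and it reuses the using directives already listed above:

static SearchIndexClient GetSearchIndexClient()
{
    // read the endpoint (and optional key) from user secrets
    var config = new ConfigurationBuilder()
        .AddUserSecrets<Program>()
        .Build();

    var endpoint = new Uri(config["AZURE_AISEARCH_URI"]!);
    var secret = config["AZURE_AISEARCH_SECRET"];

    // with only the endpoint configured, fall back to Azure Default Credentials;
    // otherwise authenticate with the service secret
    return string.IsNullOrEmpty(secret)
        ? new SearchIndexClient(endpoint, new DefaultAzureCredential())
        : new SearchIndexClient(endpoint, new AzureKeyCredential(secret));
}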
Once the demo is run, this is the sample output:

Title: Shrek
Description: Shrek is an animated film that tells the story of an ogre named Shrek who embarks on a quest to rescue Princess Fiona from a dragon and bring her back to the kingdom of Duloc.
Score: 0.6672559

And we can see the new index with the Movie fields in the Azure Portal in the Azure AI Search service.

Why Azure AI Search?

Azure AI Search provides enterprise-grade scalability and integration, making it a robust solution for production-ready applications requiring advanced semantic search. Additionally, AI Search includes built-in security features, such as encryption and secure authentication, to protect your data. It also adheres to compliance standards, ensuring that your search solutions meet regulatory requirements.

Explaining the Code

Console Applications for Demonstrations

Each semantic search demo is implemented as a .NET 9 Console Application. The codebase for the samples can be traced back to the original demo provided by Luis, with extensions for both Azure AI Search and Qdrant scenarios.

Shared Class for Data Representation

A shared class represents a Movie entity, which includes: Fields for Vector Embeddings: These embeddings are used to perform semantic search. List of Movies: A static list of movies is generated to serve as sample data. Type Factory for Keys: The class implements a factory pattern to handle differences in key data types.

Handling Different Data Types for Keys

Qdrant: Uses ulong as the data type for its key field. Azure AI Search: Uses string as the key field’s data type. MovieFactory: Ensures that the application generates the correct data type for each scenario, maintaining flexibility across implementations. A minimal sketch of what such a record type can look like is shown below.
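The repository defines its own Movie and MovieVector types, so treat this only as a hedged sketch of the shape described above, using the preview Microsoft.Extensions.VectorData attributes and assuming a 384-dimension vector for the all-minilm embeddings:

using Microsoft.Extensions.VectorData;

public class MovieVector<T>
{
    [VectorStoreRecordKey]
    public T Key { get; set; } = default!;           // ulong for Qdrant, string for Azure AI Search

    [VectorStoreRecordData]
    public string Title { get; set; } = string.Empty;

    [VectorStoreRecordData]
    public string Description { get; set; } = string.Empty;

    [VectorStoreRecordVector(384)]                   // assumption: all-minilm produces 384-dimension embeddings
    public ReadOnlyMemory<float> Vector { get; set; }
}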
Movie Factory Implementation

public class MovieFactory<T>
{
    public static List<Movie<T>> GetMovieList()
    {
        var movieData = new List<Movie<T>>()
        {
            // all movie sample collection is defined here
        };
        return movieData;
    }

    public static List<MovieVector<T>> GetMovieVectorList()
    {
        var movieData = GetMovieList();
        var movieVectorData = new List<MovieVector<T>>();
        foreach (var movie in movieData)
        {
            movieVectorData.Add(new MovieVector<T>
            {
                Key = movie.Key,
                Title = movie.Title,
                Description = movie.Description
            });
        }
        return movieVectorData;
    }
}

You can browse the GitHub repository with the complete code samples.

What’s Coming Next?

The journey with Microsoft.Extensions.VectorData doesn’t stop here. You can choose other connectors such as SQLite in-memory, Pinecone, or Redis, enabling you to run lightweight semantic search solutions locally; this is perfect for scenarios where performance and simplicity are essential. We are also working with partners: Elasticsearch, for example, is already building on top of Microsoft.Extensions.VectorData. You can learn more about this use case in Customer Case Study: Announcing the Microsoft Semantic Kernel Elasticsearch Connector.

Conclusion and Learn More

The combination of Microsoft.Extensions.VectorData and Semantic Kernel allows .NET developers to build intelligent, scalable, and context-aware applications. Whether you’re working on a small-scale project or a large enterprise system, these tools provide the foundation for delivering cutting-edge semantic search experiences.

Learn More

Semantic Kernel Vector Store code samples (Preview)
Semantic Kernel Overview
Qdrant Documentation
Azure AI Search Documentation

Summary

Stay tuned for more tutorials and resources, and feel free to connect with us on social media for questions or feedback. Happy Coding!

The post Exploring Microsoft.Extensions.VectorData with Qdrant and Azure AI Search appeared first on .NET Blog. View the full article
We’re happy to announce the official launch of the 8.4 release of the .NET Community Toolkit! This new version includes support for partial properties for the MVVM Toolkit generators, new analyzers, bug fixes and enhancements, and more! As always, we deeply appreciate all the feedback received both by teams here at Microsoft using the Toolkit, as well as other developers in the community. All the issues, bug reports, comments and feedback continue to be extremely useful for us to plan and prioritize feature work across the entire .NET Community Toolkit. Thank you to everyone contributing to this project and helping make the .NET Community Toolkit better!

Information

For more details on the history of the .NET Community Toolkit, here is a link to our previous 8.0.0 announcement post. Here is a breakdown of the main changes that are included in this new 8.4 release of the .NET Community Toolkit.

Partial properties support for the MVVM Toolkit

One of the most popular feature requests for the MVVM Toolkit source generators was support for partial properties, which is now available thanks to the new C# language features available in the .NET 9 SDK! Specifically, the MVVM Toolkit source generators will now leverage partial properties and semi-auto properties (aka. the field keyword) to make it possible to define observable properties as follows:
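For illustration, a minimal sketch of the pattern looks like this (the view model and property name are hypothetical):

using CommunityToolkit.Mvvm.ComponentModel;

public partial class SampleViewModel : ObservableObject
{
    // the MVVM Toolkit generator supplies the accessor bodies and change notifications
    [ObservableProperty]
    public partial string Name { get; set; }
}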
This comes with significant improvements: Declaring properties is now properly integrated with the C# language, instead of the MVVM Toolkit creating a new property inferring its characteristics based on the annotated field. This means all C# features you’d expect to work on properties will now “just work”, such as declaring custom accessibility modifiers for each accessor, or annotating the property, field, or accessors with custom attributes. Similarly, more modifiers are now supported as well, such as new, sealed, override, and required. Nullability annotations are improved, and also correctly handle property initializers and constructors. Using partial properties also makes [ObservableProperty] fully AOT safe for UWP and WinUI 3! In an upcoming Visual Studio update, CTRL + click (and F12) navigation will allow easily jumping between the partial property declaration and the implementation, meaning that jumping back to property declarations from arbitrary property references elsewhere in the code will be much easier than before. Of course, this new release also includes a brand new code fixer, which can automate migrating code from [ObservableProperty] on fields to partial properties, with a single click to fix all occurrences in your entire solution! Just hover on the squiggly line in Visual Studio and select the suggested code fix: We recommend converting all [ObservableProperty] uses to partial properties, especially if you’re using CsWinRT (i.e. if you have a UWP .NET 9 app or a WinUI 3 app) and need Native AOT support. You will get clearer code, better language support and lots of new features. Try it out and please share feedback!

Note

In order to use support for partial properties, C# preview is required. You can enable this by adding <LangVersion>preview</LangVersion> to your .csproj file, or to any imported .props file (eg. Directory.Build.props). This is necessary because the generated code makes use of the field keyword.

Lots of new MVVM Toolkit analyzers

One of the main areas of investment in the MVVM Toolkit is in our rich set of diagnostic analyzers, which help make sure code is written correctly, and without common mistakes. These analyzers cover all sorts of things (eg. unsupported types for [ObservableProperty] members, incorrect declarations for annotated types, etc.), and new analyzers are added in each release of the MVVM Toolkit. The 8.4 release follows this trend, and introduces the following new diagnostics: MVVMTK0041, error: “The C# language version must be set to ‘preview’ when using [ObservableProperty] on partial properties for the source generators to emit valid code (the <LangVersion>preview</LangVersion> option must be set in the .csproj/.props file).”. MVVMTK0042, error: “Fields using [ObservableProperty] can be converted to partial properties instead, which is recommended (doing so improves the developer experience and allows other generators and analyzers to correctly see the generated property as well).”. MVVMTK0043, error: “Properties annotated with [ObservableProperty] must be instance (non static) partial properties with a getter and a setter that is not init-only.”. MVVMTK0044, error: “Using [ObservableProperty] with (partial) properties requires a higher version of Roslyn (remove [ObservableProperty] or target a field instead, or upgrade to at least Visual Studio 2022 version 17.12 and the .NET 9 SDK).”. MVVMTK0045, warning: “Fields using [ObservableProperty] will generate code that is not AOT compatible in WinRT scenarios (such as UWP XAML and WinUI 3 apps), and partial properties should be used instead (as they allow the CsWinRT generators to correctly produce the necessary WinRT marshalling code).”.
MVVMTK0046, warning: “Using [RelayCommand] on methods within a type also using [GeneratedBindableCustomProperty] is not supported, and a manually declared command property should be used instead (the [GeneratedBindableCustomProperty] generator cannot see the generated command property that is produced by the MVVM Toolkit generator).”. MVVMTK0047, warning: “Using [GeneratedBindableCustomProperty] on types that also use [ObservableProperty] on any declared (or inherited) fields is not supported, and partial properties should be used instead (the [GeneratedBindableCustomProperty] generator cannot see the generated property that is produced by the MVVM Toolkit generator).”. MVVMTK0048, warning: “Using [GeneratedBindableCustomProperty] on types that also use [RelayCommand] on any inherited methods is not supported, and a manually declared command property should be used instead (the [GeneratedBindableCustomProperty] generator cannot see the generated property that is produced by the MVVM Toolkit generator).”. MVVMTK0049, warning: “Using the [INotifyPropertyChanged] attribute on types is not AOT compatible in WinRT scenarios (such as UWP XAML and WinUI 3 apps), and they should derive from ObservableObject or manually implement INotifyPropertyChanged instead (as it allows the CsWinRT generators to correctly produce the necessary WinRT marshalling code).”. MVVMTK0050, warning: “Using the [ObservableObject] attribute on types is not AOT compatible in WinRT scenarios (such as UWP XAML and WinUI 3 apps), and they should derive from ObservableObject instead (as it allows the CsWinRT generators to correctly produce the necessary WinRT marshalling code).”. MVVMTK0051, info: “This project producing one or more ‘MVVMTK0045’ warnings due to [ObservableProperty] being used on fields, which is not AOT compatible in WinRT scenarios, should set ‘LangVersion’ to ‘preview’ to enable partial properties and the associated code fixer (setting ‘LangVersion=preview’ is required to use [ObservableProperty] on partial properties and address these warnings).”. MVVMTK0052, error: “A property using [ObservableProperty] is not an incomplete partial definition part ([ObservableProperty] must be used on partial property definitions with no implementation part).”. MVVMTK0053, error: “A property using [ObservableProperty] returns a value by reference ([ObservableProperty] must be used on properties returning a type by value).”. MVVMTK0054, error: “A property using [ObservableProperty] returns a byref-like value ([ObservableProperty] must be used on properties of a non byref-like type).”. MVVMTK0055, error: “A property using [ObservableProperty] returns a pointer-like value ([ObservableProperty] must be used on properties of a non pointer-like type).”. MVVMTK0056, info: “Semi-auto properties should be converted to partial properties using [ObservableProperty] when possible, which is recommended (doing so makes the code less verbose and results in more optimized code).”. As you can see, we have so many new analyzers in this release! They fall into two categories: General code analysis for MVVM scenarios CsWinRT trim/AOT supporting code analysis (for UWP and WinUI 3) All of these will just show up nicely in quick info, whenever needed: We want you to be able to rely on the MVVM Toolkit to make sure code leveraging its generators is written correctly and bug-free! 
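For instance, as a minimal sketch (with a hypothetical property name), the field-based form below is what several of the new diagnostics flag, and the partial-property form is what the code fixer produces:

// before: [ObservableProperty] on a field (flagged, e.g. by MVVMTK0042)
[ObservableProperty]
private string name;

// after: [ObservableProperty] on a partial property
[ObservableProperty]
public partial string Name { get; set; }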
Please let us know if you hit any issues with the analyzers, or if you found a new scenario that is not covered, and that you think should have a new dedicated analyzer, by opening an issue in our GitHub repo.

Other changes and improvements

Add .targets to validate the Windows SDK version (#942): the MVVM Toolkit now includes MSBuild logic to provide friendly error messages when using an incorrect version of the Windows SDK package, and suggests how to fix it and which exact version to use. Allow forwarding attributes to property accessors (#952): when using [ObservableProperty] on fields, adding attributes on the generated accessors is now also supported. Fix suppressions for custom attribute targets (#964): the custom diagnostic suppressions now work correctly when using custom attribute targets on [ObservableProperty] fields. Move some diagnostics to analyzers (#968): moved more diagnostics to separate analyzers, to improve performance of the MVVM Toolkit source generators. Handle ‘required’ fields in partial property code fixer (#972): using the required modifier is now also supported for fields using [ObservableProperty]. Embed .pdb files for all analyzer projects (#980): all source generators and analyzers now have embedded .pdb files, making it simpler to debug them in Visual Studio, if needed. Use ref readonly in IndexOf<T> (#997): the IndexOf<T> extension now takes a ref readonly, making it clear that it’s not meant to be used with rvalue-s. Added Stream over ReadOnlySequence<byte> (#808): there’s a new AsStream() extension for ReadOnlySequence<byte> to easily get a readonly, seekable stream wrapping it! Thank you @paulomorgado! You can see the full changelog for this release from the GitHub release page.

Get started today!

You can find all source code in our GitHub repo, some handwritten docs on MS Learn, and complete API references in the .NET API browser website. If you would like to contribute, feel free to open issues or to reach out to let us know about your experience! To follow the conversation on Twitter, use the #CommunityToolkit hashtag. All your feedback greatly helped shape the direction of these libraries, so make sure to share it! Happy coding!

The post Announcing .NET Community Toolkit 8.4! Partial properties support for MVVM, new analyzers, and more! appeared first on .NET Blog. View the full article

Learn what is new in the Visual Studio Code November 2024 Release (1.96). Read the full article. View the full article

As .NET continues to evolve, so do the tools available to WinForms developers, enabling more efficient and responsive applications. With .NET 9, we’re excited to introduce a collection of new asynchronous APIs that significantly streamline UI management tasks. From updating controls to showing forms and dialogs, these additions bring the power of async programming to WinForms in new ways. In this post, we’ll dive into four key APIs, explaining how they work, where they shine, and how to start using them.

Meet the New Async APIs

.NET 9 introduces several async APIs designed specifically for WinForms, making UI operations more intuitive and performant in asynchronous scenarios. The new additions include: Control.InvokeAsync – Fully released in .NET 9, this API helps marshal calls to the UI thread asynchronously. Form.ShowAsync and Form.ShowDialogAsync (Experimental) – These APIs let developers show forms asynchronously, making life easier in complex UI scenarios.
TaskDialog.ShowDialogAsync (Experimental) – This API provides a way to show Task-Dialog-based message box dialogs asynchronously, which is especially helpful for long-running, UI-bound operations. Let’s break down each set of APIs, starting with InvokeAsync.

Control.InvokeAsync: Seamless Asynchronous UI Thread Invocation

InvokeAsync offers a powerful way to marshal calls to the UI thread without blocking the calling thread. The method lets you execute both synchronous and asynchronous callbacks on the UI thread, offering flexibility while preventing accidental “fire-and-forget” behavior. It does that by queueing operations in the WinForms main message queue, ensuring they’re executed on the UI thread. This behavior is similar to Control.Invoke, which also marshals calls to the UI thread, but there’s an important difference: InvokeAsync doesn’t block the calling thread because it posts the delegate to the message queue, rather than sending it.

Wait – Sending vs. Posting? Message Queue?

Let’s break down these concepts to clarify what they mean and why InvokeAsync’s approach can help improve app responsiveness. In WinForms, all UI operations happen on the main UI thread. To manage these operations, the UI thread runs a loop, known as the message loop, which continually processes messages (like button clicks, screen repaints, and other actions). This loop is the heart of how WinForms stays responsive to user actions while processing instructions. When you are working with modern APIs, the majority of your app’s code does not run on this UI thread. Ideally, the UI thread should only be used to do those things which are necessary to update your UI. There are situations when your code doesn’t end up on the UI thread automatically. One example is when you spin up a dedicated task to perform a compute-intense operation in parallel. In these cases, you need to “marshal” the code execution to the UI thread, so that the UI thread can then update the UI. Because otherwise you get this:

(Screenshot: a typical cross-thread exception in the debugger.)

Let’s say I am not allowed to go into a certain room to get a glass of milk, but you are. In that case, there is only one option: since I cannot become you, I can only ask you to get me that glass of milk. And that’s the same with thread marshalling. A worker thread cannot become the UI thread. But the execution of code (the getting of the glass of milk) can be marshalled. In other words: the worker thread can ask the UI thread to execute some code on its behalf. And, simply put, that works by queuing the delegate of a method into the message queue, as the sketch below illustrates.
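To make that idea concrete, here is a small hedged sketch (the form, label name, and computation are hypothetical) of a worker thread asking the UI thread to apply its result via InvokeAsync:

// inside a Form that owns a Label named resultLabel (hypothetical names)
private void StartBackgroundWork()
{
    _ = Task.Run(async () =>
    {
        // compute-intense work happens on a thread-pool thread
        int result = ComputeExpensiveValue();

        // the worker cannot touch the UI directly, so it asks the UI thread
        // to run the update on its behalf; the delegate is posted, not sent
        await resultLabel.InvokeAsync(() => resultLabel.Text = $"Result: {result}");
    });
}

private static int ComputeExpensiveValue() => 42; // placeholder for the real work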
This approach tells the UI thread to queue up the action and handle it as soon as it can, but the calling thread doesn’t wait around for it to finish. The method returns immediately, allowing the calling thread to continue its work. This distinction is particularly valuable in async scenarios, as it allows the app to handle other tasks without delay, minimizing UI thread bottlenecks. Here’s a quick comparison: Operation Method Blocking Description Send [iCODE]Control.Invoke[/iCODE] Yes Calls the delegate on the UI thread and waits for it to complete. Post [iCODE]Control.InvokeAsync[/iCODE] No Queues the delegate on the UI thread and returns immediately. [HEADING=2]Why This Matters[/HEADING] By posting delegates with [iCODE]InvokeAsync[/iCODE], your code can now queue multiple updates to controls, perform background operations, or await other async tasks without halting the main UI thread. This approach not only helps prevent the dreaded “frozen UI” experience but also keeps the app responsive even when handling numerous UI-bound tasks. In summary: while [iCODE]Control.Invoke[/iCODE] waits for the UI thread to complete the delegate (blocking), [iCODE]InvokeAsync[/iCODE] hands off the task to the UI thread and returns instantly (non-blocking). This difference makes [iCODE]InvokeAsync[/iCODE] ideal for async scenarios, allowing developers to build smoother, more responsive WinForms applications. Here’s how each [iCODE]InvokeAsync[/iCODE] overload works: public async Task InvokeAsync(Action callback, CancellationToken cancellationToken = default) public async Task<T> InvokeAsync<T>(Func<T> callback, CancellationToken cancellationToken = default) public async Task InvokeAsync(Func<CancellationToken, ValueTask> callback, CancellationToken cancellationToken = default) public async Task<T> InvokeAsync<T>(Func<CancellationToken, ValueTask<T>> callback, CancellationToken cancellationToken = default) Each overload allows for different combinations of synchronous and asynchronous methods with or without return values: [iCODE]InvokeAsync(Action callback, CancellationToken cancellationToken = default)[/iCODE] is for synchronous operations with no return value. If you want to update a control’s property on the UI thread—such as setting the [iCODE]Text[/iCODE] property on a [iCODE]Label[/iCODE]—this overload allows you to do so without waiting for a return value. The callback will be posted to the message queue and executed asynchronously, returning a [iCODE]Task[/iCODE] that you can await if needed. [iCODE]await control.InvokeAsync(() => control.Text = "Updated Text");[/iCODE] [iCODE]InvokeAsync<T>(Func<T> callback, CancellationToken cancellationToken = default)[/iCODE] is for synchronous operations that do return a result of type [iCODE]T[/iCODE]. Use it when you want to retrieve a value computed on the UI thread, like getting the [iCODE]SelectedItem[/iCODE] from a [iCODE]ComboBox[/iCODE]. [iCODE]InvokeAsync[/iCODE] posts the callback to the UI thread and returns a [iCODE]Task<T>[/iCODE], allowing you to await the result. [iCODE]int itemCount = await control.InvokeAsync(() => comboBox.Items.Count);[/iCODE] [iCODE]InvokeAsync(Func<CancellationToken, ValueTask> callback, CancellationToken cancellationToken = default):[/iCODE] This overload is for asynchronous operations that don’t return a result. It’s ideal for a longer-running async operation that updates the UI, such as waiting for data to load before updating a control. 
The callback receives a [iCODE]CancellationToken[/iCODE] to support cancellation and needs to return a [iCODE]ValueTask[/iCODE], which [iCODE]InvokeAsync[/iCODE] will await (internally) for completion, keeping the UI responsive while the operation runs asynchronously. So, there are two awaits happening: [iCODE]InvokeAsync[/iCODE] itself is awaited (or rather can be awaited), and internally the ValueTask that you passed is also awaited. await control.InvokeAsync(async (ct) => { await Task.Delay(1000, ct); // Simulating a delay control.Text = "Data Loaded"; }); [iCODE]InvokeAsync<T>(Func<CancellationToken, ValueTask<T>> callback, CancellationToken cancellationToken = default)[/iCODE] is, finally, the overload for asynchronous operations that do return a result of type [iCODE]T[/iCODE]. Use it when an async operation must complete on the UI thread and return a value, such as querying a control’s state after a delay or fetching data to update the UI. The callback receives a [iCODE]CancellationToken[/iCODE] and returns a [iCODE]ValueTask<T>[/iCODE], which [iCODE]InvokeAsync[/iCODE] will await to provide the result. var itemCount = await control.InvokeAsync(async (ct) => { await Task.Delay(500, ct); // Simulating data fetching delay return comboBox.Items.Count; }); [HEADING=2]Quick decision lookup: Choosing the Right Overload[/HEADING] For no return value with synchronous operations, use [iCODE]Action[/iCODE]. For return values with synchronous operations, use [iCODE]Func<T>[/iCODE]. For async operations without a result, use [iCODE]Func<CancellationToken, ValueTask>[/iCODE]. For async operations with a result, use [iCODE]Func<CancellationToken, ValueTask<T>>[/iCODE]. Using the correct overload helps you handle UI tasks smoothly in async WinForms applications, avoiding main-thread bottlenecks and enhancing app responsiveness. Here’s a quick example: var control = new Control(); // Sync action await control.InvokeAsync(() => control.Text = "Hello, async world!"); // Async function with return value var result = await control.InvokeAsync(async (ct) => { control.Text = "Loading..."; await Task.Delay(1000, ct); control.Text = "Done!"; return 42; }); [HEADING=2]Mixing up asynchronous and synchronous overloads happens – or does it?[/HEADING] With so many overload options, it’s possible to mistakenly pass an async method to a synchronous overload, which can lead to unintended “fire-and-forget” behavior. To prevent this, .NET 9 introduces a WinForms-specific analyzer that detects when an asynchronous method (e.g., one returning [iCODE]Task[/iCODE]) is passed to a synchronous overload of [iCODE]InvokeAsync[/iCODE] without a [iCODE]CancellationToken[/iCODE]. The analyzer will trigger a warning, helping you identify and correct potential issues before they cause runtime problems. For example, passing an async method without [iCODE]CancellationToken[/iCODE] support might generate a warning like: [iCODE]warning WFO2001: Task is being passed to InvokeAsync without a cancellation token.[/iCODE] This analyzer ensures that async operations are handled correctly, maintaining reliable, responsive behavior across your WinForms applications. [HEADING=1]Experimental APIs[/HEADING] In addition to [iCODE]InvokeAsync[/iCODE], WinForms introduces experimental async options in .NET 9 for showing forms and dialogs. While still in experimental stages, these APIs provide developers with greater flexibility for asynchronous UI interactions, such as document management and form lifecycle control.
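One practical note before looking at them: because these members ship as experimental APIs, the compiler flags any call to them with an experimental-API diagnostic until you explicitly opt in. The sketch below shows one way to do that; the diagnostic ID used here is only a placeholder (substitute whatever ID your build output actually reports, either via a [iCODE]#pragma[/iCODE] as shown or project-wide with a [iCODE]<NoWarn>[/iCODE] entry), and [iCODE]MyForm[/iCODE] simply stands in for one of your own forms.

// Placeholder diagnostic ID – replace WFO5002 with the experimental-API ID your compiler reports.
#pragma warning disable WFO5002
var myForm = new MyForm();
await myForm.ShowAsync();
#pragma warning restore WFO5002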
[iCODE]Form.ShowAsync[/iCODE] and [iCODE]Form.ShowDialogAsync[/iCODE] are new methods that allow forms to be shown asynchronously. They simplify the handling of multiple form instances and are especially useful in cases where you might need several instances of the same form type, such as when displaying different documents in separate windows. Here’s a basic example of how to use [iCODE]ShowAsync[/iCODE]: var myForm = new MyForm(); await myForm.ShowAsync(); And for modal dialogs, you can use [iCODE]ShowDialogAsync[/iCODE]: var result = await myForm.ShowDialogAsync(); if (result == DialogResult.OK) { // Perform actions based on dialog result } These methods streamline the management of asynchronous form displays and help you avoid blocking the UI thread while waiting for user interactions. [HEADING=2]TaskDialog.ShowDialogAsync[/HEADING] [iCODE]TaskDialog.ShowDialogAsync[/iCODE] is another experimental API in .NET 9, aimed at improving the flexibility of dialog interactions. It provides a way to show task dialogs asynchronously, perfect for use cases where lengthy operations or multiple steps are involved. Here’s how to display a [iCODE]TaskDialog[/iCODE] asynchronously: var taskDialogPage = new TaskDialogPage { Heading = "Processing...", Text = "Please wait while we complete the task." }; var buttonClicked = await TaskDialog.ShowDialogAsync(taskDialogPage); This API allows developers to asynchronously display dialogs, freeing the UI thread and providing a smoother user experience. [HEADING=1]Practical Applications of Async APIs[/HEADING] These async APIs unlock new capabilities for WinForms, particularly in multi-form applications, MVVM design patterns, and dependency injection scenarios. By leveraging async operations for forms and dialogs, you can: Simplify form lifecycle management in async scenarios, especially when handling multiple instances of the same form. Support MVVM and DI workflows, where async form handling is beneficial in ViewModel-driven architectures. Avoid UI-thread blocking, enabling a more responsive interface even during intensive operations. If you’re curious about how [iCODE]InvokeAsync[/iCODE] can revolutionize AI-driven modernization of WinForms apps, then check out this .NET Conf 2024 talk to see these features come alive in real-world scenarios! And that’s not all—don’t miss our deep dive into everything new in .NET 9 for WinForms in another exciting talk. Dive in and get inspired! [HEADING=2]How to Kick Off Something Async from Something Sync[/HEADING] In UI scenarios, it’s common to trigger async operations from synchronous contexts. Of course, we all know it’s best practice to avoid [iCODE]async void[/iCODE] methods. Why is this the case? When you use [iCODE]async void[/iCODE], the caller has no way to await or observe the completion of the method. This can lead to unhandled exceptions or unexpected behavior. [iCODE]async void[/iCODE] methods are essentially fire-and-forget, and they operate outside the standard error-handling mechanisms provided by [iCODE]Task[/iCODE]. This makes debugging and maintenance more challenging in most scenarios. But! There is an exception, and that is event handlers or methods with “event handler characteristics.” Event handlers cannot return [iCODE]Task[/iCODE] or [iCODE]Task<T>[/iCODE], so [iCODE]async void[/iCODE] allows them to trigger async actions without blocking the UI thread. However, because [iCODE]async void[/iCODE] methods aren’t awaitable, exceptions are difficult to catch.
To address this, you can use error-handling constructs like [iCODE]try-catch[/iCODE] around the awaited operations inside the event handler. This ensures that exceptions are properly handled even in these unique cases. For example: private async void Button_Click(object sender, EventArgs e) { try { await PerformLongRunningOperationAsync(); } catch (Exception ex) { MessageBox.Show($"An error occurred: {ex.Message}", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } } Here, the [iCODE]async void[/iCODE] is unavoidable due to the event handler signature, but by wrapping the awaited code in a try-catch, we can safely handle any exceptions that might occur during the async operation. The following example uses a 7-Segment control named [iCODE]SevenSegmentTimer[/iCODE] to display a timer in the typical 7-segment style with a resolution of a 10th of a second. It has a few methods for updating and animating the content: public partial class TimerForm : Form { private SevenSegmentTimer _sevenSegmentTimer; private readonly CancellationTokenSource _formCloseCancellation = new(); public TimerForm() { InitializeComponent(); SetupTimerDisplay(); } [MemberNotNull(nameof(_sevenSegmentTimer))] private void SetupTimerDisplay() { _sevenSegmentTimer = new SevenSegmentTimer { Dock = DockStyle.Fill }; Controls.Add(_sevenSegmentTimer); } override async protected void OnLoad(EventArgs e) { base.OnLoad(e); await RunDisplayLoopAsyncV1(); } private async Task RunDisplayLoopAsyncV1() { // When we update the time, the method will also wait 75 ms asynchronously. _sevenSegmentTimer.UpdateDelay = 75; while (true) { // We update and then wait for the delay. // In the meantime, the Windows message loop can process other messages, // so the app remains responsive. await _sevenSegmentTimer.UpdateTimeAndDelayAsync( time: TimeOnly.FromDateTime(DateTime.Now)); } } } When we run this, we see this timer in the Form on the screen: [ATTACH type=full" alt="WinForms App running a 7-segment stop watch in dark mode with green digits]6065[/ATTACH] The async method [iCODE]UpdateTimeAndDelayAsync[/iCODE] does exactly what it says: It updates the time displayed in the control, and then waits for the amount of time that we set with the [iCODE]UpdateDelay[/iCODE] property on the line before. As you can see, this async method [iCODE]RunDisplayLoopAsyncV1[/iCODE] is kicked off in the Form’s [iCODE]OnLoad[/iCODE]. And that’s the typical approach for initiating something async from a synchronous void method. For the typical WinForms dev this may look a bit weird at first glance. After all, we’re calling another method from [iCODE]OnLoad[/iCODE], and that method never returns because it ends up in an endless loop. So, does [iCODE]OnLoad[/iCODE] in this case ever finish? Aren’t we blocking the app here? This is where async programming shines. Even though RunDisplayLoopAsyncV1 contains an infinite loop, it’s structured asynchronously. When the await keyword is encountered inside the loop (e.g., [iCODE]await _sevenSegmentTimer.UpdateTimeAndDelayAsync()[/iCODE]), the method yields control back to the caller until the awaited task completes. In the context of a WinForms app, this means the Windows message loop remains free to process events like repainting the UI, handling button clicks, or responding to keyboard input. The app stays responsive because [iCODE]await[/iCODE] pauses the execution of [iCODE]RunDisplayLoopAsyncV1[/iCODE] without blocking the UI thread.
When [iCODE]OnLoad[/iCODE] is marked [iCODE]async[/iCODE], it completes as soon as it encounters its first [iCODE]await[/iCODE] within [iCODE]RunDisplayLoopAsyncV1[/iCODE]. After the awaited task completes, the runtime resumes execution of [iCODE]RunDisplayLoopAsyncV1[/iCODE] from where it left off. This happens without blocking the UI thread, effectively allowing [iCODE]OnLoad[/iCODE] to [iCODE]return[/iCODE] immediately, even though the asynchronous operation continues in the background. In the background? You can think of this as splitting the method into parts, like an imaginary [iCODE]WaitAsync-Initiator[/iCODE], which gets called after the first [iCODE]await[/iCODE] is resolved. That in turn kicks off the [iCODE]WaitAsync-Waiter[/iCODE], which runs in the background until the wait period is over, and which then triggers the [iCODE]WaitAsync-Callback[/iCODE], which effectively asks the message loop to re-enter the call and complete everything that follows the async call. So, the actual code path then looks something like this: And the best way to internalize this is to compare it to two mouse-click events processed in succession, where the first mouse-click kicks off [iCODE]RunDisplayLoopAsyncV1[/iCODE], and the second mouse-click corresponds to the [iCODE]WaitAsync[/iCODE] call-back into “Part 3” of that method, when the delay is just ready waiting. This process then repeats for each subsequent [iCODE]await[/iCODE] in an async method. And this is why the app doesn’t freeze despite the infinite loop. In fact, technically, OnLoad actually finishes normally, but the part(s) after each await are called back by the message loop later in time. Now, we’re still pretty much using the UI Thread exclusively here. (Technically speaking, the call-backs for a short moment run on a thread-pool thread, but let’s ignore that for now.) Yes, we’re async, but nothing so far is really happening in parallel. Up to now, this is more like a cleverly orchestrated relay race, where the baton is so seamlessly passed to the next runner that there simply are no hangs or blocks. But an async method can be called from a different thread at any time. And if we did this in our current sample, like this… private async Task RunDisplayLoopAsyncV2() { // When we update the time, the method will also wait 75 ms asynchronously. _sevenSegmentTimer.UpdateDelay = 75; // Let's kick-off a dedicated task for the loop. await Task.Run(ActualDisplayLoopAsync); // Local function, which represents the actual loop. async Task ActualDisplayLoopAsync() { while (true) { // We update and then wait for the delay. // In the meantime, the Windows message loop can process other messages, // so the app remains responsive. await _sevenSegmentTimer.UpdateTimeAndDelayAsync( time: TimeOnly.FromDateTime(DateTime.Now)); } } } then… [ATTACH type=full" alt="Screenshot of a Cross-Thread-Exception in the demo-app's context]6066[/ATTACH] [HEADING=2]The trickiness of InvokeAsync’s overload resolution[/HEADING] So, as we learned earlier, this is an easy one to resolve, right? We’re just using [iCODE]InvokeAsync[/iCODE] to call our local function [iCODE]ActualDisplayLoopAsync[/iCODE], and then we’re good. So, let’s do that. Let’s get the [iCODE]Task[/iCODE] that is returned by InvokeAsync and pass that to [iCODE]Task.Run[/iCODE]. Easy-peasy. [ATTACH type=full" alt="Screenshot Errors and Warnings pointing to overload resolution issues]6067[/ATTACH] Well – that doesn’t look so good. We’ve got two issues.
First, as mentioned before, we’re trying to invoke a method returning a [iCODE]Task[/iCODE] without a cancellation token. [iCODE]InvokeAsync[/iCODE] is warning us that we are setting up a fire-and-forget in this case, which cannot be internally awaited. And the second issue is not only a warning, it is an error. [iCODE]InvokeAsync[/iCODE] is returning a [iCODE]Task[/iCODE], and we of course cannot pass that to [iCODE]Task.Run[/iCODE]. We can only pass an [iCODE]Action[/iCODE] or a [iCODE]Func[/iCODE] returning a [iCODE]Task[/iCODE], but surely not a [iCODE]Task[/iCODE] itself. But, what we can do, is just converting this line into another local function, so from this… // Doesn't work. InvokeAsync wants a cancellation token, and we can't pass Task.Run a task. var invokeTask = this.InvokeAsync(ActualDisplayLoopAsync); // Let's kick-off a dedicated task for the loop. await Task.Run(invokeTask); // Local function, which represents the actual loop. async Task ActualDisplayLoopAsync(CancellationToken cancellation) to this: // This is a local function now, calling the actual loop on the UI Thread. Task InvokeTask() => this.InvokeAsync(ActualDisplayLoopAsync, CancellationToken.None); await Task.Run(InvokeTask); async ValueTask ActualDisplayLoopAsync(CancellationToken cancellation=default) ... And that works like a charm now! [HEADING=1]Parallelizing for Performance or targeted code flow[/HEADING] Our 7-segment control has another neat trick up its sleeve: a fading animation for the separator columns. We can use this feature as follows: private async Task RunDisplayLoopAsyncV4() { while (true) { // We also have methods to fade the separators in and out! // Note: There is no need to invoke these methods on the UI thread, // because we can safely set the color for a label from any thread. await _sevenSegmentTimer.FadeSeparatorsInAsync().ConfigureAwait(false); await _sevenSegmentTimer.FadeSeparatorsOutAsync().ConfigureAwait(false); } } When we run this, it looks like this: [ATTACH type=full" alt="WinForms App running showing the 7-segment control with separator animation]6068[/ATTACH] However, there’s a challenge: How can we set up our code flow so that the running clock and the fading separators are invoked in parallel, all within a continuous loop? To achieve this, we can leverage Task-based parallelism. The idea is to: Run both the clock update and the separator fading simultaneously: We execute both tasks asynchronously and wait for them to complete. Handle differing task durations gracefully: Since the clock update and fading animations might take different amounts of time, we use [iCODE]Task.WhenAny[/iCODE] to ensure the faster task doesn’t delay the slower one. Reset completed tasks: Once a task completes, we reset it to null so the next iteration can start it anew. And the result is this: private async Task RunDisplayLoopAsyncV6() { Task? uiUpdateTask = null; Task? 
separatorFadingTask = null; while (true) { async Task FadeInFadeOutAsync(CancellationToken cancellation) { await _sevenSegmentTimer.FadeSeparatorsInAsync(cancellation).ConfigureAwait(false); await _sevenSegmentTimer.FadeSeparatorsOutAsync(cancellation).ConfigureAwait(false); } uiUpdateTask ??= _sevenSegmentTimer.UpdateTimeAndDelayAsync( time: TimeOnly.FromDateTime(DateTime.Now), cancellation: _formCloseCancellation.Token); separatorFadingTask ??= FadeInFadeOutAsync(_formCloseCancellation.Token); Task completedOrCancelledTask = await Task.WhenAny(separatorFadingTask, uiUpdateTask); if (completedOrCancelledTask.IsCanceled) { break; } if (completedOrCancelledTask == uiUpdateTask) { uiUpdateTask = null; } else { separatorFadingTask = null; } } } protected override void OnFormClosing(FormClosingEventArgs e) { base.OnFormClosing(e); _formCloseCancellation.Cancel(); } And this. And you can see in this animated GIF, that the UI really stays responsive all the time, because the window can be smoothly dragged around with the mouse. [ATTACH type=full" alt="Final animated version of the 7-segment timer app]6069[/ATTACH] [HEADING=1]Summary[/HEADING] With these new async APIs, .NET 9 brings advanced capabilities to WinForms, making it easier to work with asynchronous UI operations. While some APIs, like [iCODE]Control.InvokeAsync[/iCODE], are ready for production, experimental APIs for Form and Dialog management open up exciting possibilities for responsive UI development. You can find the sample code of this blog post in our Extensibility-Repo in the respective Samples subfolder. Explore the potential of async programming in WinForms with .NET 9, and be sure to test out the experimental features in non-critical projects. As always, your feedback is invaluable, and we look forward to hearing how these new async capabilities enhance your development process! And, as always: Happy Coding! The post Invoking Async Power: What Awaits WinForms in .NET 9 appeared first on .NET Blog. Continue reading...As .NET continues to evolve, so do the tools available to WinForms developers, enabling more efficient and responsive applications. With .NET 9, we’re excited to introduce a collection of new asynchronous APIs that significantly streamline UI management tasks. From updating controls to showing forms and dialogs, these additions bring the power of async programming to WinForms in new ways. In this post, we’ll dive into four key APIs, explaining how they work, where they shine, and how to start using them. Meet the New Async APIs .NET 9 introduces several async APIs designed specifically for WinForms, making UI operations more intuitive and performant in asynchronous scenarios. The new additions include: Control.InvokeAsync – Fully released in .NET 9, this API helps marshal calls to the UI thread asynchronously. Form.ShowAsync and Form.ShowDialogAsync (Experimental) – These APIs let developers show forms asynchronously, making life easier in complex UI scenarios. TaskDialog.ShowDialogAsync (Experimental) – This API provides a way to show Task-Dialog-based message box dialogs asynchronously, which is especially helpful for long-running, UI-bound operations. Let’s break down each set of APIs, starting with InvokeAsync. Control.InvokeAsync: Seamless Asynchronous UI Thread Invocation InvokeAsync offers a powerful way to marshal calls to the UI thread without blocking the calling thread. 
The method lets you execute both synchronous and asynchronous callbacks on the UI thread, offering flexibility while preventing accidental “fire-and-forget” behavior. It does that by queueing operations in the WinForms main message queue, ensuring they’re executed on the UI thread. This behavior is similar to Control.Invoke, which also marshals calls to the UI thread, but there’s an important difference: InvokeAsync doesn’t block the calling thread because it posts the delegate to the message queue, rather than sending it. Wait – Sending vs. Posting? Message Queue? Let’s break down these concepts to clarify what they mean and why InvokeAsync‘s approach can help improve app responsiveness. In WinForms, all UI operations happen on the main UI thread. To manage these operations, the UI thread runs a loop, known as the message loop, which continually processes messages—like button clicks, screen repaints, and other actions. This loop is the heart of how WinForms stays responsive to user actions while processing instructions. When you are working with modern APIs, the majority of your App’s code does not run on this UI thread. Ideally, the UI thread should only be used to do those things which are necessary to update your UI. There are situations when your code doesn’t end up on the UI Thread automatically. One example is when you spin-up a dedicated task to perform a compute-intense operation in parallel. In these cases, you need to “marshal” the code execution to the UI thread, so that the UI thread then can update the UI. Because otherwise it’s this: Let’s say I am not allowed to go into a certain room to get a glass of milk, but you are. In that case, there is only one option: Since I cannot become you, I can only ask you to get me that glass of milk. And that’s the same with thread marshalling. A worker thread cannot become the UI thread. But the execution of code (the getting of the glass of milk) can be marshalled. In other words: the worker thread can ask the UI Thread to execute some code on its behalf. And, simply put, that works by queuing the delegate of a method into the message queue. And with that, lets address this Sending and Posting confusion: You have two main ways to queue up actions in this loop: Sending a Message (Blocking): Control.Invoke uses this approach. When you call Control.Invoke, it synchronously sends the specified delegate to the UI thread’s message queue. This action is blocking, meaning the calling thread waits until the UI thread processes the delegate before continuing. This is useful when the calling code depends on an immediate result from the UI thread but can lead to UI freezes if overused, especially during long-running operations. Posting a Message (Non-Blocking): InvokeAsync posts the delegate to the message queue, which is a non-blocking operation. This approach tells the UI thread to queue up the action and handle it as soon as it can, but the calling thread doesn’t wait around for it to finish. The method returns immediately, allowing the calling thread to continue its work. This distinction is particularly valuable in async scenarios, as it allows the app to handle other tasks without delay, minimizing UI thread bottlenecks. Here’s a quick comparison: Operation Method Blocking Description Send Control.Invoke Yes Calls the delegate on the UI thread and waits for it to complete. Post Control.InvokeAsync No Queues the delegate on the UI thread and returns immediately. 
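To make the difference concrete, here is a minimal sketch of the two calls side by side; statusLabel just stands in for any control you own, and the assignment itself is identical in both cases – only the waiting behavior differs.

// Send (blocking): the calling thread is suspended until the UI thread has run the delegate.
control.Invoke(() => statusLabel.Text = "Done");

// Post (non-blocking): the delegate is queued, the calling thread continues immediately,
// and you await the returned Task only if you need to know when it finished.
await control.InvokeAsync(() => statusLabel.Text = "Done");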
Why This Matters By posting delegates with InvokeAsync, your code can now queue multiple updates to controls, perform background operations, or await other async tasks without halting the main UI thread. This approach not only helps prevent the dreaded “frozen UI” experience but also keeps the app responsive even when handling numerous UI-bound tasks. In summary: while Control.Invoke waits for the UI thread to complete the delegate (blocking), InvokeAsync hands off the task to the UI thread and returns instantly (non-blocking). This difference makes InvokeAsync ideal for async scenarios, allowing developers to build smoother, more responsive WinForms applications. Here’s how each InvokeAsync overload works: public async Task InvokeAsync(Action callback, CancellationToken cancellationToken = default) public async Task<T> InvokeAsync<T>(Func<T> callback, CancellationToken cancellationToken = default) public async Task InvokeAsync(Func<CancellationToken, ValueTask> callback, CancellationToken cancellationToken = default) public async Task<T> InvokeAsync<T>(Func<CancellationToken, ValueTask<T>> callback, CancellationToken cancellationToken = default) Each overload allows for different combinations of synchronous and asynchronous methods with or without return values: InvokeAsync(Action callback, CancellationToken cancellationToken = default) is for synchronous operations with no return value. If you want to update a control’s property on the UI thread—such as setting the Text property on a Label—this overload allows you to do so without waiting for a return value. The callback will be posted to the message queue and executed asynchronously, returning a Task that you can await if needed. await control.InvokeAsync(() => control.Text = "Updated Text"); InvokeAsync<T>(Func<T> callback, CancellationToken cancellationToken = default) is for synchronous operations that do return a result of type T. Use it when you want to retrieve a value computed on the UI thread, like getting the SelectedItem from a ComboBox. InvokeAsync posts the callback to the UI thread and returns a Task<T>, allowing you to await the result. int itemCount = await control.InvokeAsync(() => comboBox.Items.Count); InvokeAsync(Func<CancellationToken, ValueTask> callback, CancellationToken cancellationToken = default): This overload is for asynchronous operations that don’t return a result. It’s ideal for a longer-running async operation that updates the UI, such as waiting for data to load before updating a control. The callback receives a CancellationToken to support cancellation and need to return a ValueTask, which InvokeAsync will await (internally) for completion, keeping the UI responsive while the operation runs asynchronously. So, there are two “awaits happening”: InvokeAsync is awaited (or rather can be awaited), and internally the ValueTask that you passed is also awaited. await control.InvokeAsync(async (ct) => { await Task.Delay(1000, ct); // Simulating a delay control.Text = "Data Loaded"; }); InvokeAsync<T>(Func<CancellationToken, ValueTask<T>> callback, CancellationToken cancellationToken = default) is then finally the overload version for asynchronous operations that do return a result of type T. Use it when an async operation must complete on the UI thread and return a value, such as querying a control’s state after a delay or fetching data to update the UI. The callback receives a CancellationToken and returns a ValueTask<T>, which InvokeAsync will await to provide the result. 
var itemCount = await control.InvokeAsync(async (ct) => { await Task.Delay(500, ct); // Simulating data fetching delay return comboBox.Items.Count; }); Quick decision lookup: Choosing the Right Overload For no return value with synchronous operations, use Action. For return values with synchronous operations, use Func<T>. For async operations without a result, use Func<CancellationToken, ValueTask>. For async operations with a result, use Func<CancellationToken, ValueTask<T>>. Using the correct overload helps you handle UI tasks smoothly in async WinForms applications, avoiding main-thread bottlenecks and enhancing app responsiveness. Here’s a quick example: var control = new Control(); // Sync action await control.InvokeAsync(() => control.Text = "Hello, async world!"); // Async function with return value var result = await control.InvokeAsync(async (ct) => { control.Text = "Loading..."; await Task.Delay(1000, ct); control.Text = "Done!"; return 42; }); Mixing-up asynchronous and synchronous overloads happen – or do they? With so many overload options, it’s possible to mistakenly pass an async method to a synchronous overload, which can lead to unintended “fire-and-forget” behavior. To prevent this, WinForms introduces for .NET 9 a WinForms-specific analyzer that detects when an asynchronous method (e.g., one returning Task) is passed to a synchronous overload of InvokeAsync without a CancellationToken. The analyzer will trigger a warning, helping you identify and correct potential issues before they cause runtime problems. For example, passing an async method without CancellationToken support might generate a warning like: warning WFO2001: Task is being passed to InvokeAsync without a cancellation token. This Analyzer ensures that async operations are handled correctly, maintaining reliable, responsive behavior across your WinForms applications. Experimental APIs In addition to InvokeAsync, WinForms introduces experimental async options for .NET 9 for showing forms and dialogs. While still in experimental stages, these APIs provide developers with greater flexibility for asynchronous UI interactions, such as document management and form lifecycle control. Form.ShowAsync and Form.ShowDialogAsync are new methods that allow forms to be shown asynchronously. They simplify the handling of multiple form instances and are especially useful in cases where you might need several instances of the same form type, such as when displaying different documents in separate windows. Here’s a basic example of how to use ShowAsync: var myForm = new MyForm(); await myForm.ShowAsync(); And for modal dialogs, you can use ShowDialogAsync: var result = await myForm.ShowDialogAsync(); if (result == DialogResult.OK) { // Perform actions based on dialog result } These methods streamline the management of asynchronous form displays and help you avoid blocking the UI thread while waiting for user interactions. TaskDialog.ShowDialogAsync TaskDialog.ShowDialogAsync is another experimental API in .NET 9, aimed at improving the flexibility of dialog interactions. It provides a way to show task dialogs asynchronously, perfect for use cases where lengthy operations or multiple steps are involved. Here’s how to display a TaskDialog asynchronously: var taskDialogPage = new TaskDialogPage { Heading = "Processing...", Text = "Please wait while we complete the task." 
}; var buttonClicked = await TaskDialog.ShowDialogAsync(taskDialogPage); This API allows developers to asynchronously display dialogs, freeing the UI thread and providing a smoother user experience. Practical Applications of Async APIs These async APIs unlock new capabilities for WinForms, particularly in multi-form applications, MVVM design patterns, and dependency injection scenarios. By leveraging async operations for forms and dialogs, you can: Simplify form lifecycle management in async scenarios, especially when handling multiple instances of the same form. Support MVVM and DI workflows, where async form handling is beneficial in ViewModel-driven architectures. Avoid UI-thread blocking, enabling a more responsive interface even during intensive operations. If you curious about how Invoke.Async can revolutionize AI-driven modernization of WinForms apps then check out this .NET Conf 2024 talk to see these features come alive in real-world scenarios! And that’s not all—don’t miss our deep dive into everything new in .NET 9 for WinForms in another exciting talk. Dive in and get inspired! How to Kick Off Something Async from Something Sync In UI scenarios, it’s common to trigger async operations from synchronous contexts. Of course, we all know it’s best practice to avoid async void methods. Why is this the case? When you use async void, the caller has no way to await or observe the completion of the method. This can lead to unhandled exceptions or unexpected behavior. async void methods are essentially fire-and-forget, and they operate outside the standard error-handling mechanisms provided by Task. This makes debugging and maintenance more challenging in most scenarios. But! There is an exception, and that is event handlers or methods with “event handler characteristics.” Event handlers cannot return Task or Task<T>, so async void allows them to trigger async actions without blocking the UI thread. However, because async void methods aren’t awaitable, exceptions are difficult to catch. To address this, you can use error-handling constructs like try-catch around the awaited operations inside the event handler. This ensures that exceptions are properly handled even in these unique cases. For example: private async void Button_Click(object sender, EventArgs e) { try { await PerformLongRunningOperationAsync(); } catch (Exception ex) { MessageBox.Show($"An error occurred: {ex.Message}", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } } Here, the async void is unavoidable due to the event handler signature, but by wrapping the awaited code in a try-catch, we can safely handle any exceptions that might occur during the async operation. The following example uses a 7-Segment control named SevenSegmentTimer to display a timer in the typical 7-segment-style with the resolution of a 10th of a second. It has a few methods for updating and animating the content: public partial class TimerForm : Form { private SevenSegmentTimer _sevenSegmentTimer; private readonly CancellationTokenSource _formCloseCancellation = new(); public FrmMain() { InitializeComponent(); SetupTimerDisplay(); } [MemberNotNull(nameof(_sevenSegmentTimer))] private void SetupTimerDisplay() { _sevenSegmentTimer = new SevenSegmentTimer { Dock = DockStyle.Fill }; Controls.Add(_sevenSegmentTimer); } override async protected void OnLoad(EventArgs e) { base.OnLoad(e); await RunDisplayLoopAsyncV1(); } private async Task RunDisplayLoopAsyncV1() { // When we update the time, the method will also wait 75 ms asynchronously. 
_sevenSegmentTimer.UpdateDelay = 75; while (true) { // We update and then wait for the delay. // In the meantime, the Windows message loop can process other messages, // so the app remains responsive. await _sevenSegmentTimer.UpdateTimeAndDelayAsync( time: TimeOnly.FromDateTime(DateTime.Now)); } } } When we run this, we see this timer in the Form on the screen: The async method UpdateTimeAndDelayAsync does exactly what it says: It updates the time displayed in the control, and then waits the amount of time, which we’ve set with the UpdateDelay property the line before. As you can see, this async method RunDisplayLoopAsyncV1 is kicked-off in the Form’s OnLoad. And that’s the typical approach, how we initiate something async from a synchronous void method. For the typical WinForms dev this may look a bit weird on first glance. After all, we’re calling another method from OnLoad, and that method never returns because it’s ending up in an endless loop. So, does OnLoad in this case ever finish? Aren’t we blocking the app here? This is where async programming shines. Even though RunDisplayLoopAsyncV1 contains an infinite loop, it’s structured asynchronously. When the await keyword is encountered inside the loop (e.g., await _sevenSegmentTimer.UpdateTimeAndDelayAsync()), the method yields control back to the caller until the awaited task completes. In the context of a WinForms app, this means the Windows message loop remains free to process events like repainting the UI, handling button clicks, or responding to keyboard input. The app stays responsive because await pauses the execution of RunDisplayLoopAsyncV1 without blocking the UI thread. When OnLoad is marked async, it completes as soon as it encounters its first await within RunDisplayLoopAsyncV1. After the awaited task completes, the runtime resumes execution of RunDisplayLoopAsyncV1 from where it left off. This happens without blocking the UI thread, effectively allowing OnLoad to return immediately, even though the asynchronous operation continues in the background. In the background? You can think of this as splitting the method into parts, like an imaginary WaitAsync-Initiator, which gets called after the first await is resolved. Which then kicks-off the WaitAsync-Waiter which runs in the background, until the wait period is over. Which then again triggers the WaitAsync-Callback which effectively asks the message loop to reentry the call and then complete everything which follows that async call. So, the actual code path looks then something like this: And the best way to internalize this is to compare it to 2 mouse-click events, which have been processed in succession, where the first mouse-click kicks off RunDisplayLoopAsyncV1, and the second mouse-click corresponds to the WaitAsync call-back into “Part 3” of that method, when the delay is just ready waiting. This process then repeats for each subsequent await in an async method. And this is why the app doesn’t freeze despite the infinite loop. In fact, technically, OnLoad actually finishes normally, but the part(s) after each await are called back by the message loop later in time. Now, we’re still pretty much using the UI Thread exclusively here. (Technically speaking, the call-backs for a short moment run on a thread-pool thread, but let’s ignore that for now.) Yes, we’re async, but nothing so far is really happening in parallel. 
Up to now, this is more like a clever ochestrated relay race, where the baton is so seemlessly passed to the next respective runner, that there simply are no hangs or blocks. But an async method can be called from a different thread at any time. And if we did this currently in our sample like this… private async Task RunDisplayLoopAsyncV2() { // When we update the time, the method will also wait 75 ms asynchronously. _sevenSegmentTimer.UpdateDelay = 75; // Let's kick-off a dedicated task for the loop. await Task.Run(ActualDisplayLoopAsync); // Local function, which represents the actual loop. async Task ActualDisplayLoopAsync() { while (true) { // We update and then wait for the delay. // In the meantime, the Windows message loop can process other messages, // so the app remains responsive. await _sevenSegmentTimer.UpdateTimeAndDelayAsync( time: TimeOnly.FromDateTime(DateTime.Now)); } } } then… The trickiness of InvokeAsync’s overload resolution So, as we learned earlier, this is an easy one to resolve, right? We’re just using InvokeAsync to call our local function ActualDisplayLoopAsync, and then we’re good. So, let’s do that. Let’s get the Task that is returned by InvokeAsync and pass that to Task.Run. Easy-peasy. Well – that doesn’t look so good. We got 2 issues. First, as mentioned before, we’re trying to invoke a method returning a Task without a cancellation token. InvokeAsync is warning us that we are setting up a fire-and-forget in this case, which cannot be internally awaited. And the second issue is not only a warning, it is an error. InvokeAsync is returning a Task, and we of course cannot pass that to Task.Run. We can only pass an Action or a Func returning a Task, but surely not a Task itself. But, what we can do, is just converting this line into another local function, so from this… // Doesn't work. InvokeAsync wants a cancellation token, and we can't pass Task.Run a task. var invokeTask = this.InvokeAsync(ActualDisplayLoopAsync); // Let's kick-off a dedicated task for the loop. await Task.Run(invokeTask); // Local function, which represents the actual loop. async Task ActualDisplayLoopAsync(CancellationToken cancellation) to this: // This is a local function now, calling the actual loop on the UI Thread. Task InvokeTask() => this.InvokeAsync(ActualDisplayLoopAsync, CancellationToken.None); await Task.Run(InvokeTask); async ValueTask ActualDisplayLoopAsync(CancellationToken cancellation=default) ... And that works like a charm now! Parallelizing for Performance or targeted code flow Our 7-segment control has another neat trick up its sleeve: a fading animation for the separator columns. We can use this feature as follows: private async Task RunDisplayLoopAsyncV4() { while (true) { // We also have methods to fade the separators in and out! // Note: There is no need to invoke these methods on the UI thread, // because we can safely set the color for a label from any thread. await _sevenSegmentTimer.FadeSeparatorsInAsync().ConfigureAwait(false); await _sevenSegmentTimer.FadeSeparatorsOutAsync().ConfigureAwait(false); } } When we run this, it looks like this: However, there’s a challenge: How can we set up our code flow so that the running clock and the fading separators are invoked in parallel, all within a continuous loop? To achieve this, we can leverage Task-based parallelism. The idea is to: Run both the clock update and the separator fading simultaneously: We execute both tasks asynchronously and wait for them to complete. 
Handle differing task durations gracefully: Since the clock update and fading animations might take different amounts of time, we use Task.WhenAny to ensure the faster task doesn’t delay the slower one. Reset completed tasks: Once a task completes, we reset it to null so the next iteration can start it anew. And the result is this: private async Task RunDisplayLoopAsyncV6() { Task? uiUpdateTask = null; Task? separatorFadingTask = null; while (true) { async Task FadeInFadeOutAsync(CancellationToken cancellation) { await _sevenSegmentTimer.FadeSeparatorsInAsync(cancellation).ConfigureAwait(false); await _sevenSegmentTimer.FadeSeparatorsOutAsync(cancellation).ConfigureAwait(false); } uiUpdateTask ??= _sevenSegmentTimer.UpdateTimeAndDelayAsync( time: TimeOnly.FromDateTime(DateTime.Now), cancellation: _formCloseCancellation.Token); separatorFadingTask ??= FadeInFadeOutAsync(_formCloseCancellation.Token); Task completedOrCancelledTask = await Task.WhenAny(separatorFadingTask, uiUpdateTask); if (completedOrCancelledTask.IsCanceled) { break; } if (completedOrCancelledTask == uiUpdateTask) { uiUpdateTask = null; } else { separatorFadingTask = null; } } } protected override void OnFormClosing(FormClosingEventArgs e) { base.OnFormClosing(e); _formCloseCancellation.Cancel(); } And this. And you can see in this animated GIF, that the UI really stays responsive all the time, because the window can be smoothly dragged around with the mouse. Summary With these new async APIs, .NET 9 brings advanced capabilities to WinForms, making it easier to work with asynchronous UI operations. While some APIs, like Control.InvokeAsync, are ready for production, experimental APIs for Form and Dialog management open up exciting possibilities for responsive UI development. You can find the sample code of this blog post in our Extensibility-Repo in the respective Samples subfolder. Explore the potential of async programming in WinForms with .NET 9, and be sure to test out the experimental features in non-critical projects. As always, your feedback is invaluable, and we look forward to hearing how these new async capabilities enhance your development process! And, as always: Happy Coding! The post Invoking Async Power: What Awaits WinForms in .NET 9 appeared first on .NET Blog. View the full articleUsing local AI models can be a great way to experiment on your own machine without needing to deploy resources to the cloud. In this post, we’ll look at how use .NET Aspire with Ollama to run AI models locally, while using the Microsoft.Extensions.AI abstractions to make it transition to cloud-hosted models on deployment. [HEADING=1]Setting up Ollama in .NET Aspire[/HEADING] We’re going to need a way to use Ollama from our .NET Aspire application, and the easiest way to do that is using the Ollama hosting integration from the .NET Aspire Community Toolkit. You can install the Ollama hosting integration from NuGet via the Visual Studio tooling, VS Code tooling, or the .NET CLI. Let’s take a look at how to install the Ollama hosting integration via the command line into our app host project: [iCODE]dotnet add package CommunityToolkit.Aspire.Hosting.Ollama[/iCODE] Once you’ve installed the Ollama hosting integration, you can configure it in your [iCODE]Program.cs[/iCODE] file. Here’s an example of how you might configure the Ollama hosting integration: var ollama = builder.AddOllama("ollama") .WithDataVolume() .WithOpenWebUI(); Here, we’ve used the [iCODE]AddOllama[/iCODE] extension method to add the container to the app host. 
Since we’re going to download some models, we’re going to want to persist that data volume across container restarts (it means we don’t have to pull several gigabytes of data every time we start the container!). Also, so we’ve got a playground, we’ll add the [iCODE]OpenWebUI[/iCODE] container, which will give us a web interface to interact with the model outside of our app. [HEADING=1]Running a local AI model[/HEADING] The [iCODE]ollama[/iCODE] resource that we created in the previous step is only going to be running the Ollama server, we still need to add some models to it, and we can do that with the [iCODE]AddModel[/iCODE] method. Let’s use the Llama 3.2 model: [iCODE]var chat = ollama.AddModel("chat", "llama3.2");[/iCODE] If we wanted to use a variation of the model, or a specific tag, we could specify that in the [iCODE]AddModel[/iCODE] method, such as [iCODE]ollama.AddModel("chat", "llama3.2:1b")[/iCODE] for the 1b tag of the Llama 3.2 model. Alternatively, if the model you’re after isn’t in the Ollama library, you can use the [iCODE]AddHuggingFaceModel[/iCODE] method to add a model from the Hugging Face model hub. Now that we have our model, we can add it as a resource to any of the other services in the app host: builder.AddProject<Projects.MyApi>("api") .WithReference(chat); When we run the app host project, the Ollama server will start up and download the model we specified (make sure you don’t stop the app host before the download completes), and then we can use the model in our application. If you want the resources that depend on the model to wait until the model is downloaded, you can use the [iCODE]WaitFor[/iCODE] method with the model reference: builder.AddProject<Projects.MyApi>("api") .WithReference(chat) .WaitFor(chat); [ATTACH type=full" alt=".NET Aspire dashboard showing health checks and model download status]6063[/ATTACH] In the above screenshot of the dashboard, we’ll see that the model is being downloaded. The Ollama server is running but unhealthy because the model hasn’t been downloaded yet, and the [iCODE]api[/iCODE] resource hasn’t started as it’s waiting for the model to download and become healthy. 
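Putting those pieces together, a minimal app host [iCODE]Program.cs[/iCODE] for this setup could look roughly like the following sketch (the resource and project names are the ones used above and are otherwise arbitrary):

var builder = DistributedApplication.CreateBuilder(args);

// Ollama server with a persistent data volume and the Open WebUI playground.
var ollama = builder.AddOllama("ollama")
    .WithDataVolume()
    .WithOpenWebUI();

// The chat model resource the API project will reference.
var chat = ollama.AddModel("chat", "llama3.2");

// The API project starts only once the model is downloaded and healthy.
builder.AddProject<Projects.MyApi>("api")
    .WithReference(chat)
    .WaitFor(chat);

builder.Build().Run();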
[HEADING=1]Using the model in your application[/HEADING] With our API project set up to use the [iCODE]chat[/iCODE] model, we can now use the [iCODE]OllamaSharp[/iCODE] library to connect to the Ollama server and interact with the model, and to do this, we’ll use the [iCODE]OllamaSharp[/iCODE] integration from the .NET Aspire Community Toolkit: [iCODE]dotnet add package CommunityToolkit.Aspire.OllamaSharp[/iCODE] This integration allows us to register the OllamaSharp client as the [iCODE]IChatClient[/iCODE] or [iCODE]IEmbeddingsGenerator[/iCODE] service from the Microsoft.Extensions.AI package, which is an abstraction that means we could switch out the local Ollama server for a cloud-hosted option such as Azure OpenAI Service without changing the code using the client: [iCODE]builder.AddOllamaSharpChatClient("chat");[/iCODE] To make full use of the Microsoft.Extensions.AI pipeline, we can provide that service to the [iCODE]ChatClientBuilder[/iCODE]: builder.AddKeyedOllamaSharpChatClient("chat"); builder.Services.AddChatClient(b => b .UseFunctionInvocation() .UseOpenTelemetry(configure: t => t.EnableSensitiveData = true) .UseLogging() // Use the OllamaSharp client .Use(b.Services.GetRequiredKeyedService<IChatClient>("chat"))); Lastly, we can inject the [iCODE]IChatClient[/iCODE] into our route handler: app.MapPost("/chat", async (IChatClient chatClient, string question) => { var response = await chatClient.CompleteAsync(question); return response.Message; }); [HEADING=1]Supporting cloud-hosted models[/HEADING] While Ollama is great as a local development tool, when it comes to deploying your application, you’ll likely want to use a cloud-based AI service like Azure OpenAI Service. To handle this, we’ll need to update the API project to register a different implementation of the [iCODE]IChatClient[/iCODE] service when running in the cloud: if (builder.Environment.IsDevelopment()) { builder.AddKeyedOllamaSharpChatClient("chat"); } else { builder.AddKeyedAzureOpenAIClient("chat"); } builder.Services.AddChatClient(b => b .UseFunctionInvocation() .UseOpenTelemetry(configure: t => t.EnableSensitiveData = true) .UseLogging() // Use the previously registered IChatClient, which is either Ollama or Azure OpenAI .Use(b.Services.GetRequiredKeyedService<IChatClient>("chat"))); [HEADING=1]Conclusion[/HEADING] In this post, we’ve seen how, with only a few lines of code, we can set up an Ollama server with .NET Aspire, specify a model that we want to use, have it downloaded for us, and then integrated into a client application. We’ve also seen how we can use the Microsoft.Extensions.AI abstractions to make it easy to switch between local and cloud-hosted models. This is a powerful way to experiment with AI models on your local machine before deploying them to the cloud. Check out the eShop sample application for a full example of how to use Ollama with .NET Aspire. The post Using Local AI models with .NET Aspire appeared first on .NET Blog. Continue reading...Using local AI models can be a great way to experiment on your own machine without needing to deploy resources to the cloud. In this post, we’ll look at how use .NET Aspire with Ollama to run AI models locally, while using the Microsoft.Extensions.AI abstractions to make it transition to cloud-hosted models on deployment. Setting up Ollama in .NET Aspire We’re going to need a way to use Ollama from our .NET Aspire application, and the easiest way to do that is using the Ollama hosting integration from the .NET Aspire Community Toolkit. 
You can install the Ollama hosting integration from NuGet via the Visual Studio tooling, VS Code tooling, or the .NET CLI. Let’s take a look at how to install the Ollama hosting integration via the command line into our app host project: dotnet add package CommunityToolkit.Aspire.Hosting.Ollama Once you’ve installed the Ollama hosting integration, you can configure it in your Program.cs file. Here’s an example of how you might configure the Ollama hosting integration: var ollama = builder.AddOllama("ollama") .WithDataVolume() .WithOpenWebUI(); Here, we’ve used the AddOllama extension method to add the container to the app host. Since we’re going to download some models, we’re going to want to persist that data volume across container restarts (it means we don’t have to pull several gigabytes of data every time we start the container!). Also, so we’ve got a playground, we’ll add the OpenWebUI container, which will give us a web interface to interact with the model outside of our app. Running a local AI model The ollama resource that we created in the previous step is only going to be running the Ollama server, we still need to add some models to it, and we can do that with the AddModel method. Let’s use the Llama 3.2 model: var chat = ollama.AddModel("chat", "llama3.2"); If we wanted to use a variation of the model, or a specific tag, we could specify that in the AddModel method, such as ollama.AddModel("chat", "llama3.2:1b") for the 1b tag of the Llama 3.2 model. Alternatively, if the model you’re after isn’t in the Ollama library, you can use the AddHuggingFaceModel method to add a model from the Hugging Face model hub. Now that we have our model, we can add it as a resource to any of the other services in the app host: builder.AddProject<Projects.MyApi>("api") .WithReference(chat); When we run the app host project, the Ollama server will start up and download the model we specified (make sure you don’t stop the app host before the download completes), and then we can use the model in our application. If you want the resources that depend on the model to wait until the model is downloaded, you can use the WaitFor method with the model reference: builder.AddProject<Projects.MyApi>("api") .WithReference(chat) .WaitFor(chat); In the above screenshot of the dashboard, we’ll see that the model is being downloaded. The Ollama server is running but unhealthy because the model hasn’t been downloaded yet, and the api resource hasn’t started as it’s waiting for the model to download and become healthy. Using the model in your application With our API project set up to use the chat model, we can now use the OllamaSharp library to connect to the Ollama server and interact with the model, and to do this, we’ll use the OllamaSharp integration from the .NET Aspire Community Toolkit: dotnet add package CommunityToolkit.Aspire.OllamaSharp This integration allows us to register the OllamaSharp client as the IChatClient or IEmbeddingsGenerator service from the Microsoft.Extensions.AI package, which is an abstraction that means we could switch out the local Ollama server for a cloud-hosted option such as Azure OpenAI Service without changing the code using the client: builder.AddOllamaSharpChatClient("chat"); Note: If you are using an embedding model and want to register the IEmbeddingsGenerator service, you can use the AddOllamaSharpEmbeddingsGenerator method instead. 
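For instance, a sketch of that embedding setup could look like the following, mirroring the chat client registration shown above; the "embeddings" resource name and the "all-minilm" model are only examples, and any Ollama embedding model should work the same way.

// App host: add an embedding model resource next to the chat model.
var embeddings = ollama.AddModel("embeddings", "all-minilm");

builder.AddProject<Projects.MyApi>("api")
    .WithReference(embeddings);

// API project: register the OllamaSharp-backed embedding generator by resource name.
builder.AddOllamaSharpEmbeddingsGenerator("embeddings");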
[HEADING=1]Conclusion[/HEADING]

In this post, we’ve seen how, with only a few lines of code, we can set up an Ollama server with .NET Aspire, specify a model that we want to use, have it downloaded for us, and then integrate it into a client application. We’ve also seen how we can use the Microsoft.Extensions.AI abstractions to make it easy to switch between local and cloud-hosted models. This is a powerful way to experiment with AI models on your local machine before deploying them to the cloud. Check out the eShop sample application for a full example of how to use Ollama with .NET Aspire.

The post Using Local AI models with .NET Aspire appeared first on .NET Blog. View the full article

Keeping your .NET SDK version up to date is crucial for maintaining secure and efficient applications. And now that Dependabot can update .NET SDK versions in [iCODE]global.json[/iCODE], it is easier than ever to make sure you’re always running the latest security patches and improvements. Regular SDK updates are essential because they include:

- Security patches for known vulnerabilities (CVEs)
- Bug fixes and performance improvements
- The latest development tools and features

[HEADING=1]Using [iCODE]global.json[/iCODE] to Manage SDK Versions[/HEADING]

To manage your .NET SDK version, you typically use a [iCODE]global.json[/iCODE] file in your project. This file specifies which version of the SDK your project should use. Here’s an example of a simple [iCODE]global.json[/iCODE] file:

{
  "sdk": {
    "version": "9.0.100"
  }
}

If you’re using GitHub Actions and the [iCODE]dotnet/setup-dotnet[/iCODE] action, this file ensures that the correct SDK version is used in your CI/CD pipeline.
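For reference, a workflow step that picks up [iCODE]global.json[/iCODE] might look like the sketch below. The action is published as [iCODE]actions/setup-dotnet[/iCODE]; the job layout and the [iCODE]global-json-file[/iCODE] input shown here are our assumptions about a typical setup rather than part of the original post, so check the action’s documentation for the options your version supports.

# Minimal sketch of a job that installs the SDK version pinned in global.json.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup .NET
        uses: actions/setup-dotnet@v4
        with:
          global-json-file: global.json
      - run: dotnet build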
[HEADING=1]Configuring Dependabot for .NET SDK Updates[/HEADING]

Add a [iCODE]dependabot.yml[/iCODE] file to your repository at [iCODE].github/dependabot.yml[/iCODE] in the default branch. If you always want to receive the latest updates, a minimal configuration will look like this:

version: 2
updates:
  - package-ecosystem: "dotnet-sdk"
    directory: "/"

But .NET SDK updates are mostly released on “patch Tuesday” (the second Tuesday of each month), so you might want to adjust the update schedule to check for updates only once a week. You can do that by adding a [iCODE]schedule[/iCODE] section:

version: 2
updates:
  - package-ecosystem: "dotnet-sdk"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "wednesday"

Additionally, you can ignore major and minor version updates if you want to focus only on security patches. This can be done by adding an [iCODE]ignore[/iCODE] section:

version: 2
updates:
  - package-ecosystem: "dotnet-sdk"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "wednesday"
    ignore:
      - dependency-name: "*"
        update-types:
          - "version-update:semver-major"
          - "version-update:semver-minor"

Dependabot will also respect the [iCODE]allowPrerelease[/iCODE] setting in your [iCODE]global.json[/iCODE] file, so if you want to include pre-release versions in your updates, make sure to set that option accordingly.
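For example, a [iCODE]global.json[/iCODE] that opts into pre-release SDK versions might look like this sketch (the version number is illustrative):

{
  "sdk": {
    "version": "9.0.100",
    "allowPrerelease": true
  }
}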
Check out the Dependabot documentation for more details on all the configuration options available.

[HEADING=1]Dependabot NuGet Package Updates[/HEADING]

In addition to .NET SDK updates, you can also configure Dependabot to manage your NuGet package dependencies. We significantly improved the NuGet support in Dependabot last year to handle more complex scenarios, so you can easily keep your packages up to date as well.
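As a starting point, here’s a minimal sketch of a [iCODE]dependabot.yml[/iCODE] that covers both the SDK and NuGet packages; the weekly schedule is illustrative, so adjust it to suit your repository.

version: 2
updates:
  # Keep the .NET SDK pinned in global.json up to date.
  - package-ecosystem: "dotnet-sdk"
    directory: "/"
    schedule:
      interval: "weekly"
  # Keep NuGet package references up to date.
  - package-ecosystem: "nuget"
    directory: "/"
    schedule:
      interval: "weekly"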
[HEADING=1]Feedback[/HEADING]

You can share feedback with us by opening an issue in the Dependabot repository. You can also leave comments on this post if you have any questions or suggestions.

The post Using Dependabot to Manage .NET SDK Updates appeared first on .NET Blog. View the full article

Dramatically faster package restores with .NET 9’s new NuGet resolver
Guest posted a topic in General
With the release of .NET 9 came a major leap for large .NET repositories: a new NuGet dependency graph resolver built to dramatically improve performance. If you’ve struggled with slow package restores in complex builds, this is the solution you’ve been waiting for. Faster restores mean less waiting, more productivity, and a smoother experience for developers working on large projects.

[HEADING=1]The Challenge[/HEADING]

Internally at Microsoft, a large repository is migrating thousands of projects to .NET Core in order to reap the benefits of runtime performance improvements. This is no small task, with currently over 2,500 individual projects migrated and optimized for modern standards. The team contacted NuGet because their restores were taking over 30 minutes, which caused significant delays across hundreds of builds each day. These delays quickly added up, resulting in wasted time and frustration for developers. To tackle this issue, a team worked tirelessly for months on optimizations, testing various solutions to speed up the restore process. These efforts managed to cut restore times down to 16 minutes, which was an improvement, but it still wasn’t enough. The time taken was still hindering productivity, and we knew there had to be a better way.

[HEADING=1]Reimagining Package Resolution[/HEADING]

The old NuGet dependency resolution algorithm began as a temporary solution, proving that, as Milton Friedman famously said, “Nothing is as permanent as a temporary solution that works”. While it has served its purpose for a long time, it was not designed to handle the scale and complexity of large repositories. The original dependency resolver created a massive dependency graph with millions of nodes, representing every possible relationship between dependencies. This approach simply wasn’t scalable; it required vast amounts of memory and processing power, and as projects grew, so did the time and effort needed to resolve the graphs. It was clear that a new approach was needed, so a dedicated team of engineers decided to start from scratch. Their goal was ambitious: create a simpler, more efficient resolver that would still produce the same results, but in a fraction of the time. The new algorithm they developed uses a more streamlined approach, representing the graph as a flattened set where each node is created only once. This makes the in-memory dependency graph much smaller and easier to work with. Conflicts are resolved as the graph is being built, which avoids the repetitive passes that the old dependency graph resolution algorithm required.

[HEADING=1]The Results[/HEADING]

This new approach had dramatic results. The original dependency graph, which in our testing would create 1.6 million nodes for a complex project, was reduced to just 1,200 nodes. With fewer nodes to process, restore times dropped significantly, from 16 minutes down to just 2 minutes. This is a game-changer for developers working with large repositories, as it means they can spend less time waiting and more time coding and building great products. Changing a fundamental part of the build process like NuGet was not without its challenges. We understand that making big changes can be daunting, especially when it involves essential tools that developers rely on every day. However, we took this leap of faith, and it paid off. This success has paved the way for further innovation and improvements across the .NET ecosystem.
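To make the idea concrete, here is a toy sketch, definitely not NuGet’s actual implementation, of the difference in approach: each package identity is materialized once in a flat dictionary and conflicts are settled while the graph is walked, rather than expanding a node for every possible path. All of the types, names, and the simple “highest version wins” rule below are simplifications we made up for illustration.

// Toy illustration only (hypothetical types; not NuGet's real resolution rules).
// The point: one entry per package id, with conflicts resolved during the walk.
var app = new PackageRef("App", new Version(1, 0),
[
    new PackageRef("Logging", new Version(2, 0),
    [
        new PackageRef("Json", new Version(8, 0), []),
    ]),
    new PackageRef("Json", new Version(9, 0), []),
]);

foreach (var (id, version) in Resolve(app))
{
    Console.WriteLine($"{id} {version}");
}

static Dictionary<string, Version> Resolve(PackageRef root)
{
    // One entry per package id rather than one node per path through the graph.
    var resolved = new Dictionary<string, Version>(StringComparer.OrdinalIgnoreCase);
    var queue = new Queue<PackageRef>();
    queue.Enqueue(root);

    while (queue.Count > 0)
    {
        var package = queue.Dequeue();

        // Conflicts are handled as the graph is built: if this id was already seen
        // with an equal or higher version, nothing downstream needs to be revisited.
        if (resolved.TryGetValue(package.Id, out var existing) && existing >= package.Version)
        {
            continue;
        }

        resolved[package.Id] = package.Version;

        foreach (var dependency in package.Dependencies)
        {
            queue.Enqueue(dependency);
        }
    }

    return resolved;
}

record PackageRef(string Id, Version Version, PackageRef[] Dependencies);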
[HEADING=1]What’s Next?[/HEADING]

NuGet’s new dependency graph resolution algorithm is just the beginning. It’s the first step toward a performance-first approach that we want to apply across all of .NET. By focusing on performance and taking a fresh look at existing processes, we believe we can find new ways to make developers’ lives easier and more productive. We invite developers everywhere to join us in this journey; help us identify bottlenecks, suggest improvements, and work together to create the best possible development environment.

The new dependency graph resolution algorithm is included in .NET 9, and it’s on by default. That means if you upgrade to .NET 9, you’ll automatically get the benefits of faster restore times: no extra setup needed, no configuration changes required. Just upgrade, and you’ll see the difference immediately. If you experience any issues with it, please see our documentation on how to get support.

The post Dramatically faster package restores with .NET 9’s new NuGet resolver appeared first on .NET Blog. Continue reading...