Microsoft Technologies Archives - TatvaSoft Blog
https://www.tatvasoft.com/blog/category/microsoft-technologies/feed/

.NET Microservices Implementation with Docker Containers
https://www.tatvasoft.com/blog/net-microservices/
Thu, 23 Nov 2023

Key Takeaways on .Net Microservices

  1. The microservices architecture is increasingly being favoured for large and complex applications built from independent, individually deployable subsystems.
  2. Container-based solutions offer significant cost reductions by mitigating deployment issues arising from failed dependencies in the production environment.
  3. With Microsoft tools, one can create containerized .NET microservices using a custom and preferred approach.
  4. Azure supports running Docker containers in a variety of environments, including the customer’s own datacenter, an external service provider’s infrastructure, and the cloud itself.
  5. An essential aspect of constructing more secure applications is establishing a robust method for exchanging information with other applications and systems.

1. Microservices – An Overview

Applications and IT infrastructure are now being built and managed in the cloud. Today's cloud apps need to be responsive, modular, highly scalable, and reliable.

Containers help applications meet these needs. That said, putting an application in a container without first deciding on a design pattern is like setting off for a new place without directions: you may get where you're going, but it probably won't be the fastest way.

.NET microservices fill this gap. With the help of a reliable .NET development company offering microservices, software can be built and deployed in a way that meets the speed, scalability, and dependability needs of today's cloud-based applications.

2. Key Considerations for Developing .Net Microservices

When using .NET to create microservices, it’s important to remember the following points:

API Design

Since microservices depend on APIs for inter-service communication, it's crucial to design those APIs with care. RESTful APIs are the accepted norm and should be the default choice. To avoid breaking old clients, plan for versioning and make sure your APIs stay backward compatible.
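
As an illustration (not from the original article), here is a minimal sketch of a URL-versioned ASP.NET Core controller; the OrderDto type and route names are hypothetical, and dedicated packages such as Asp.Versioning.Mvc offer richer versioning support.

using Microsoft.AspNetCore.Mvc;

// Putting the version in the route keeps existing clients working
// while a future api/v2/orders endpoint can evolve independently.
[ApiController]
[Route("api/v1/orders")]
public class OrdersController : ControllerBase
{
    [HttpGet("{id}")]
    public ActionResult<OrderDto> GetById(int id)
    {
        // In a real service this would come from a repository or database.
        return Ok(new OrderDto { Id = id, Status = "Shipped" });
    }
}

public class OrderDto
{
    public int Id { get; set; }
    public string Status { get; set; } = string.Empty;
}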

Data Management

Because most microservices use their own databases, ensuring data consistency and maintenance can be difficult. If you’re having trouble keeping track of data across your microservices, you might want to look into utilising Entity Framework Core, a popular object-relational mapper (ORM) for .NET.

Testing

Microservices need to be tested extensively to ensure their dependability and robustness. For unit testing, you can use xUnit together with Moq, and for API testing, you can use Postman.
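
For illustration only, here is a minimal sketch of an xUnit test that uses Moq to isolate a service from its repository; IOrderRepository and OrderService are hypothetical types invented for the example.

using System.Collections.Generic;
using System.Linq;
using Moq;
using Xunit;

public interface IOrderRepository
{
    IEnumerable<decimal> GetLineAmounts(int orderId);
}

public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) => _repository = repository;

    public decimal GetTotal(int orderId) => _repository.GetLineAmounts(orderId).Sum();
}

public class OrderServiceTests
{
    [Fact]
    public void GetTotal_ReturnsSumOfLineAmounts()
    {
        // Arrange: mock the repository so the test needs no database.
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.GetLineAmounts(42)).Returns(new[] { 10m, 15.5m });

        var service = new OrderService(repository.Object);

        // Act
        var total = service.GetTotal(42);

        // Assert
        Assert.Equal(25.5m, total);
    }
}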

Monitoring and Logging

Monitoring and analysis are crucial for understanding the health of your microservices and fixing any problems that may develop. You can use monitoring and logging tools such as Azure Application Insights.

CI/CD

To automate the deployment of your microservices, use a continuous integration and continuous delivery (CI/CD) pipeline. This helps guarantee the steady delivery and deployment of your microservices.

3. Implementation of .Net Microservices Using Docker Containers

3.1 Install .NET SDK

Let’s begin from scratch. First, install .NET 7 SDK. You can download it from this URL: https://dotnet.microsoft.com/en-us/download/dotnet/7.0  

Once you complete the download, install the package and then open a new command prompt and run the following command to check .NET (SDK) information: 

> dotnet

If the installation succeeded, you should see an output like the following in command prompt: 

.NET SDK Installation

3.2 Build Your Microservice

Open a command prompt in the location where you want to create the new application. 

Type the following command to create a new app named “DemoMicroservice”:

> dotnet new webapi -o DemoMicroservice --no-https -f net7.0 

Then, navigate to this new directory. 

> cd DemoMicroservice

What do these commands mean? 

  • dotnet new webapi: creates a new project of type webapi (a REST API endpoint).
  • -o DemoMicroservice: puts the generated app in a directory named “DemoMicroservice”.
  • --no-https: creates an app that runs without an HTTPS certificate.
  • -f net7.0: indicates that you are creating a .NET 7 application.

3.3 Run Microservice

Type this into your command prompt:

> dotnet run

The output will look like this: 

run microservices

The Demo Code: 

Several files were generated in the DemoMicroservices directory. It gives you a simple service which is ready to run.  

The following screenshot shows the content of the WeatherForecastController.cs file. It is located in the Controllers directory. 

Demo Microservices
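
For reference, the controller generated by the .NET 7 webapi template looks roughly like this (the exact content can vary between SDK versions, and the WeatherForecast model class lives in a separate file):

using Microsoft.AspNetCore.Mvc;

namespace DemoMicroservice.Controllers;

[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private static readonly string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    private readonly ILogger<WeatherForecastController> _logger;

    public WeatherForecastController(ILogger<WeatherForecastController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetWeatherForecast")]
    public IEnumerable<WeatherForecast> Get()
    {
        // Returns five random forecasts for the next five days.
        return Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        })
        .ToArray();
    }
}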

Launch a browser and enter http://localhost:<port number>/WeatherForecast once the program shows that it is listening on that address.

In this example, the app is listening on port 5056. The following image shows the output at http://localhost:5056/WeatherForecast.

WeatherForecast Localhost

You’ve successfully launched a basic service.

To stop the service from running locally using the dotnet run command, type CTRL+C at the command prompt.

3.4 Role of Containers

In software development, containerization is an approach in which a service or application, its dependencies, and configurations (in deployment manifest files) are packaged together as a container image.    

The containerized application may be tested as a whole and then deployed to the host OS in the form of a container image instance.

Software containers are like cardboard boxes: a standardised unit of software deployment that can hold a wide variety of programs and dependencies and can be moved from place to place. 

This method of software containerization allows developers and IT professionals to easily deploy applications to many environments with few code changes.

If this seems like a scenario where containerizing an application may be useful, it’s because it is. The advantages of containers are nearly identical to the advantages of microservices.

The deployment of microservices is not limited to the containerization of applications. Microservices may be deployed via a variety of mechanisms, such as Azure App Service, virtual machines, or anything else. 

Containerization’s flexibility is an additional perk. Creating additional containers for temporary jobs allows you to swiftly scale up. The act of instantiating an image (by making a container) is, from the perspective of the application, quite similar to the method of implementing a service or a web application.

In a nutshell, containers improve the whole application lifecycle by providing separation, mobility, responsiveness, versatility, and control.

All of the microservices you create in this walkthrough will be deployed to a container for execution; more specifically, a Docker container.

3.5 Docker Installation

3.5.1. What is Docker?

Docker is a set of platform-as-a-service products that use OS-level virtualization to automate the deployment of applications as portable, self-sufficient containers that can run in the cloud or on-premises. The core platform is free, and Docker also offers premium tiers with additional features. 

Azure supports running Docker containers in a variety of environments, including the customer’s own datacenter, an external service provider’s infrastructure, and the cloud itself. Docker images may be executed in a container format on both Linux and Windows.

3.5.2. Installation Steps

Docker is a platform for building containers, which package an app together with its dependencies and configuration. Follow the steps below to install Docker: 

  • First, download the .exe file from the Docker website.
  • Docker's default configuration on Windows uses Linux containers. When prompted by the installer, just accept the default settings.
  • You may be prompted to sign out of the system after installing Docker.
  • Make sure Docker is up and running.
  • If you already have Docker installed, verify that it is at least version 20.10.

Once the setup is complete, launch a new command prompt and enter:

> docker --version

If the command executes and some version data is displayed, then Docker has been set up properly.

3.6 Add Docker Metadata

A Dockerfile is a text file containing the instructions used to build a Docker image. You will need a Docker image if you want to deploy your program as a Docker container.

Get back to the app directory

Since the preceding step included opening a new command prompt, you will now need to navigate back to the directory in which you first established your service.

> cd DemoMicroservice

Add a DockerFile

Create a file named “Dockerfile” with this command:

> fsutil file createnew Dockerfile 0

To open the Dockerfile, execute the following command: 

> start Dockerfile

In the text editor, replace the Dockerfile's current content with the following:

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY DemoMicroservice.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "DemoMicroservice.dll"]

Note: Keep in mind that the file needs to be named as Dockerfile and not Dockerfile.txt or anything else.

Optional: Add a .dockerignore file

A .dockerignore file limits the files that are read during the ‘docker build’ process. Fewer files mean faster builds.

If you’re acquainted with .gitignore files, the following command will create a .dockerignore file for you:

> fsutil file createnew .dockerignore 0

You can then open it in your favorite text editor manually or with this command:

> start .dockerignore

Then add the following content to it:

Dockerfile
[b|B]in
[O|o]bj

3.7 Create Docker Image

Start the process with this command:

> docker build -t demomicroservice .

The docker build command creates a Docker image from the Dockerfile; the trailing dot tells it to use the current directory as the build context.

The following command displays a catalogue of all images on your system, including the one you just made.

> docker images

3.8 Run Docker image

Here’s the command you use to launch your program within a container:

> docker run -it --rm -p 3000:80 --name demomicroservicecontainer demomicroservice

To connect to a containerized application, go to the following address: http://localhost:3000/WeatherForecast 

demo microservices with docker weatherforecast

Optionally, the following command lets you observe your container from a different command prompt: 

> docker ps

To cancel the docker run command that is managing the containerized service, enter CTRL+C at the prompt.

Well done! You have built a tiny, self-contained service that can be easily deployed and scaled with Docker containers.

These elements provide the foundation of a microservice.

4. Conclusion

Modern .NET, from the inception of .NET Core to the present day, was designed from the ground up to run natively in the cloud. Its cross-platform compatibility means your .NET code will execute regardless of the operating system your Docker image is built on. .NET is also very fast, with the ASP.NET Kestrel web server consistently outperforming its competitors. It is well worth incorporating into your projects.

5. FAQs

Why is .NET core good for microservices?

.NET enables developers to break a monolithic application into smaller parts and deploy each service separately, which helps businesses reach the market faster and adapt to changes quickly and flexibly. For this reason, .NET Core is considered a powerful platform for creating and deploying microservices. Some other major reasons it is a good option for microservices are: 

  • Easier maintenance as with .NET core, microservices can be tested, updated, and deployed independently.
  • Better scalability is offered by the .NET core. It scales each service independently to meet the traffic demands.

What is the main role of Docker in microservices?

With a microservices architecture, .NET app developers can create applications that are independent of the host environment by encapsulating each microservice in a Docker container. Docker lets developers package the applications they create into containers, where each container bundles the executable component with the operating system libraries it needs, so the microservice can run on any platform. 

Introduction to .NET MAUI
https://www.tatvasoft.com/blog/net-maui/
Fri, 01 Sep 2023

Key Takeaways

  1. .NET Multi-platform App UI (MAUI) is an open source, cross platform framework to develop native applications for Windows, iOS, macOS, and Android platforms using C# and XAML.
  2. .NET MAUI is an evolution of Xamarin.Forms, with UI controls redesigned for extensibility and better performance, and it is the new flagship for cross-platform UI.
  3. It also supports .NET hot reload by which you can update and modify the source code while the application is running.
  4. .NET MAUI project uses a single codebase and provides consistent and simplified platform specific development experience for the users.
  5. One can also develop apps in modern patterns like MVU, MVVM, RxUI, etc. using .NET MAUI.

1. What is .NET MAUI?

.NET MAUI (Multi-platform App UI) is an open source and cross platform framework to create native mobile and desktop apps using C# and XAML. Using this multi-platform UI, one can develop apps that run on

  1. Android 
  2. iOS
  3. MacOS
  4. Windows
What is .NET MAUI

.NET MAUI is an evolution of Xamarin.Forms, extended from mobile to desktop scenarios, with UI controls rebuilt from the ground up to improve performance. It is quite similar to Xamarin.Forms, another framework for creating cross-platform apps from a single codebase. 

The primary goal of .NET MAUI is to help you to develop as much of your app’s functionality and UI layout as possible in a single project.

The .NET MAUI is suitable for developers who want to:

  • Use a single codebase with C# and XAML to develop apps for Android, iOS, macOS, and Windows
  • Share code, tests, and business logic across all platforms
  • Share a common UI layout and design across all platforms

Now, let’s look at how .NET MAUI works. 

2. How does .NET MAUI Work?

.NET MAUI is a unified solution for developing mobile and desktop app user interfaces. With it, developers can deploy an app to all supported platforms from a single code base, while still having access to every aspect of each platform.

The Windows UI 3 (WinUI 3) library, along with its counterparts for Android, iOS, and macOS, is part of the .NET 6 and later family of frameworks for app development. All of these frameworks share the same .NET Base Class Library (BCL), which hides platform specifics from your program code. The BCL relies on the .NET runtime to provide the environment in which your code executes. Mono, an implementation of the .NET runtime, provides that environment on Android, iOS, and macOS; on Windows, CoreCLR serves as the execution runtime.

Each platform has its own way of building an app's visual interface and its own model for how the elements of a user interface interact. The BCL lets apps share their business logic across platforms, but without a framework like .NET MAUI the user interface has to be designed separately, with a distinct code base for each group of devices.

How does .NET MAUI work

Native app packages may be compiled from .NET MAUI code written on either a PC or a Mac:

  • When an Android app is developed with .NET MAUI, C# is compiled into intermediate language (IL), and at runtime the IL is JIT-compiled to native assembly.
  • Apps for iOS developed with .NET MAUI are ahead-of-time compiled from C# into native ARM assembly code.
  • For macOS, .NET MAUI apps use Mac Catalyst, an Apple technology that brings your UIKit-based iOS app to the desktop and enhances it with additional AppKit and platform APIs.
  • Native Windows desktop apps developed with .NET MAUI are built with the Windows UI 3 (WinUI 3) library.

3. What’s Similar between .NET MAUI & Xamarin Forms?

The community still develops apps with XAML and C#. To separate the logic from the view definition, we can use Model-View-ViewModel (MVVM), ReactiveUI (RxUI), or Model-View-Update (MVU).

We can create apps for:

  • Windows Desktop
  • iOS & macOS
  • Android

It is easy to relate to .NET MAUI if you have prior experience with Xamarin. While the project configuration has shifted, the code you write day to day should feel like old hat.

4. What is Unique about .NET MAUI?

If Xamarin is already available, then what makes .NET MAUI so different? Microsoft revamped the foundation of Xamarin.Forms, which boosted speed, unified the architecture, and brought it beyond mobile to the desktop.

Major Advances in MAUI:

4.1 Single Project Experience

You can build apps for Android, iOS, macOS, and Windows all from a single  .NET MAUI project, which abstracts away the platform-specific development experiences you’d normally face.

When developing for several platforms, using a .NET MAUI single project simplifies and standardizes the process. The following benefits are there for using  .NET MAUI single project:

  • A unified project that can develop for iOS, macOS, Android, and Windows.
  • Your .NET MAUI applications can run with a streamlined debug target option.
  • Within a single project, shared resource files can be used.
  • A single manifest file that describes the name, identifier, and release number of an app.
  • When necessary, you can use the platform’s native APIs and toolkit.
  • Simply one code-base for all platforms.

4.2 .NET Hot Reload

The ability to instantly update running apps with fresh code changes is a huge time saver for .NET developers thanks to a feature called “hot reload.” 

It helps to save time and keeps the development flow going by doing away with the need to pause for builds and deployments. Hot Reload is being improved in .NET, with full support coming to .NET MAUI and other workloads.

4.3 Cross-Platform APIs for Device Features

APIs for native device features can be accessed across platforms thanks to .NET MAUI. The .NET Multi-platform App UI  (MAUI) provides access to functionalities like:

  • Access information about the device on which your app is running.
  • Control the device's sensors, including the accelerometer, compass, and gyroscope.
  • Pick a single file or a batch of files from the device's storage.
  • Monitor and detect changes in the device's network connectivity status.
  • Read text aloud using the device's built-in text-to-speech engines.
  • Transfer text between applications by copying it to the system clipboard.
  • Safely store information as key-value pairs.
  • Start a browser-based authentication flow that waits for a callback to the app's registered URL.
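
As a minimal sketch (not from the original post), the cross-platform APIs listed above can be called from shared code like this; the class name and cache key are invented for the example:

using System.Threading.Tasks;
using Microsoft.Maui.ApplicationModel.DataTransfer; // Clipboard
using Microsoft.Maui.Devices;                        // DeviceInfo
using Microsoft.Maui.Networking;                     // Connectivity
using Microsoft.Maui.Storage;                        // SecureStorage

public static class DeviceFeaturesDemo
{
    public static async Task ShowAsync()
    {
        // Information about the device the app is running on.
        string description =
            $"{DeviceInfo.Current.Manufacturer} {DeviceInfo.Current.Model} ({DeviceInfo.Current.Platform})";

        // Current network connectivity status.
        bool online = Connectivity.Current.NetworkAccess == NetworkAccess.Internet;

        // Copy text to the system clipboard.
        await Clipboard.Default.SetTextAsync(description);

        // Store a value securely as a key-value pair.
        await SecureStorage.Default.SetAsync("last_device", $"{description}, online: {online}");
    }
}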

5. How to Build Your First App Using .NET MAUI? 

1. Prerequisites

Installation of the .NET Multi-platform App UI workload in Visual Studio 2022 version 17.3 or later is required.

2. Build an Application

1. Start up Visual Studio 2022. To initiate a fresh project, select the “Create a new project” option.

Start up Visual Studio 2022

2. Select MAUI from the “All project types” – menu, then choose the “.NET MAUI App” template and press the “Next” icon in the “Create a new project” window.

Create .NET MAUI App

3. Give your project a name, select a location, and then press the “Next” button in the window labeled “Configure your new project”

Configure your new project

4. Press the “Create” button after selecting the desired version of .NET in the “Additional information” window.

Additional information

5. Hold off until the project is built and its dependencies are restored.

Dependencies are restored

6. Choose Framework, and then the “.net 7.0-windows” option, from the “Debug” menu in Visual Studio’s toolbar:

Choose Framework

7. To compile and launch the application, click the “Windows Machine” icon in Visual Studio’s toolbar.

Compile and launch the application

Visual Studio will ask you to switch on Developer Mode if you haven't already. This can be done via your device's Settings: open "Settings for developers", turn on "Developer Mode", and accept the disclaimer. 

Developer Mode

8. To test this, run the app and click the "Click me" button several times to see the click counter rise:

Test App

6. Why .NET MAUI?

6.1 Accessibility 

.NET MAUI supports multiple approaches to providing an accessible experience: 

  1. Semantic Properties: the recommended approach for providing accessibility values in apps.

  2. Automation Properties: the Xamarin.Forms approach for providing accessibility values in apps.

One can also follow the recommended accessibility checklist from the official page for more details. 

6.2 APIs to Access Services

Since .NET MAUI was built with extensibility in mind, you can keep adding features as needed. Consider the Entry control, a classic example of a control that renders differently on one platform than on another. Developers frequently want to get rid of the underline that Android draws beneath the text field. Using .NET MAUI, you can modify every Entry throughout your whole project with minimal additional code, as in the sketch below.
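
Here is a minimal sketch, assuming a standard .NET MAUI single project, of how a handler mapping can remove the Android underline for every Entry; call it once at startup (for example from MauiProgram.CreateMauiApp).

using Microsoft.Maui.Handlers;

public static class EntryCustomization
{
    public static void RemoveAndroidUnderline()
    {
        // Appends an extra step to the Entry handler's property mapper,
        // so it runs for every Entry in the app.
        EntryHandler.Mapper.AppendToMapping("NoUnderline", (handler, view) =>
        {
#if ANDROID
            // On Android the platform view is an EditText; clearing its
            // background drawable removes the default underline.
            handler.PlatformView.Background = null;
#endif
        });
    }
}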

6.3 Global Using Statements and File-Scoped Namespaces

.NET MAUI uses the new C# 10 features introduced with .NET 6, including global using statements and file-scoped namespaces. This is great for reducing clutter in your files. For example: 

Statements and Namespace
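
For illustration, a hypothetical project might combine the two features like this (file paths and namespace names are invented):

// GlobalUsings.cs: a global using applies to every file in the project.
global using System.Text;

// Services/GreetingService.cs: the file-scoped namespace saves a level of indentation.
namespace MyMauiApp.Services;

public class GreetingService
{
    public string Greet(string name)
    {
        // StringBuilder is available here thanks to the global using above.
        var builder = new StringBuilder();
        builder.Append("Hello, ").Append(name).Append('!');
        return builder.ToString();
    }
}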

6.4 Use Blazor for Desktop and Mobile

Web developers who want to create native client apps will find .NET MAUI to be an excellent choice. You may utilize your current Blazor web UI components in your native mobile and desktop apps thanks to the integration between .NET MAUI and Blazor. .NET MAUI and Blazor allow you to create a unified user interface (UI) for mobile, desktop, and web apps.

Without the requirement for WebAssembly, .NET MAUI runs your Blazor components natively on the device and renders them to an in-app web view. Since Blazor components are compiled and executed in the .NET process, they are not restricted to the web platform and can use features specific to the target platform, such as the filesystem, sensors, and location services. You can even add native UI controls alongside your Blazor web UI. Blazor Hybrid is a completely new kind of hybrid app.

Using the provided .NET MAUI Blazor App project template, you can quickly begin working with Blazor and .NET MAUI.

.NET MAUI Blazor App project template

With this starting point, you can quickly begin developing an HTML5, CSS3, and C#-based .NET MAUI Blazor app. The .NET MAUI Blazor Hybrid guide will show you how to create and deploy your very own Blazor app.

If you already have a .NET MAUI project and wish to start using Blazor components, you can do so by adding a BlazorWebView control to one of its pages.
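
As a minimal sketch (the page and component names here, BlazorPage and Main, are placeholders for your own types), the control can be created in C#; MauiProgram must also register the web view with builder.Services.AddMauiBlazorWebView().

using Microsoft.AspNetCore.Components.WebView.Maui;
using Microsoft.Maui.Controls;

public class BlazorPage : ContentPage
{
    public BlazorPage()
    {
        // Hosts Blazor components inside a native page using an in-app web view.
        var blazorWebView = new BlazorWebView { HostPage = "wwwroot/index.html" };

        // 'Main' is assumed to be the root Razor component of the app.
        blazorWebView.RootComponents.Add(new RootComponent
        {
            Selector = "#app",
            ComponentType = typeof(Main)
        });

        Content = blazorWebView;
    }
}
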
Existing desktop programs can also be updated to run on the web or cross-platform with .NET MAUI thanks to Blazor Hybrid support for WPF and Windows Forms. BlazorWebView controls for Windows Presentation Foundation and Windows Forms can be downloaded via NuGet. 

6.5 Optimized for Speed

.NET MAUI is developed for performance. .NET MAUI’s user interface controls are built on top of the native platform controls with a thin, decoupled handler-mapper design. This streamlines the display of user interfaces and makes it easier to modify controls.

To speed up the rendering and updating of your user interface, .NET MAUI's layouts follow a uniform management approach that streamlines the measure and arrange loops. Layouts that are already optimized for specific use cases, such as HorizontalStackLayout and VerticalStackLayout, are exposed alongside StackLayout.

The move to .NET 6 also focused on reducing app size and speeding up startup time. The .NET Podcast sample application, which used to take 1299 milliseconds to start up, now takes just 814.2 milliseconds, a 37.3% improvement.

These options are enabled by default so the improvements are available in a release build.

Optimized for Speed

Quicker code launches for your Android apps are possible using ahead-of-time (AOT) compilation. However, if you're trying to keep your application's size within the wifi installation threshold, full AOT can make your outputs too big. Startup tracing is the solution to this problem: as the name suggests, it achieves an acceptable balance between performance and size by performing partial AOT on only the portions of your program that run at startup.

Benchmark numbers from Pixel 5 device tests (from the .NET MAUI project on GitHub):

Metric                                 | Android App        | .NET MAUI App
JIT startup time (s)                   | 00:00.4387         | 00:01.4205
AOT startup time (vs. JIT)             | 00:00.3317 (76%)   | 00:00.7285 (51%)
Profiled AOT startup time (vs. JIT)    | 00:00.3093 (71%)   | 00:00.7098 (50%)
JIT .apk size (B)                      | 9,155,954          | 17,435,225
AOT .apk size (vs. JIT)                | 12,755,672 (139%)  | 44,751,651 (257%)
Profiled AOT .apk size (vs. JIT)       | 9,777,880 (107%)   | 23,210,787 (133%)

6.6 Native UI

With .NET MAUI, you can create uniform brand experiences across many platforms (Android, iOS, macOS, and Windows) while also making use of each system’s unique design for the greatest app experience possible. Each system works and appears as intended right out of the box, without the need for any further widgets or ad hoc styling. For instance, WinUI 3, the best native UI component included with the Windows App SDK, supports .NET MAUI on Windows.

With .NET MAUI native UI, you can:

  • Create your apps using a library of more than 40 controls, layouts, and pages using C# and XAML. 
  • Built upon the solid foundation of Xamarin’s mobile controls, it extends them to include things like navigation bars, multiple windows, improved animation, and enhanced support for gradients, shadows, and other visual effects.

7. Conclusion

Microsoft's newest addition to the .NET family is .NET Multi-platform App UI, created to develop apps in C#, .NET, and XAML for Windows, Android, iOS, and macOS. Instead of creating numerous versions of your project for different devices, you can now create a single version and distribute it across all of them. 

We hope this article helped you gain a basic introduction to .NET MAUI. There are many wonderful improvements in MAUI that are expected in the future, but we will need to remain patient a bit longer for a release candidate edition of MAUI to include them all. So, stay tuned.

Clean Architecture .NET Core: All You Need to Know
https://www.tatvasoft.com/blog/clean-architecture-net-core/
Fri, 23 Sep 2022
Most traditional .NET applications are deployed as a single unit. They run in a single IIS application domain and execute as web applications. Even so, it is recommended to logically separate such business applications into several layers and deploy them together in that single unit. A monolithic application is self-contained in terms of behavior: it may interact with other services and data stores to perform operations, but the whole application is deployed as a single package. In general, applications are divided into layers, with one layer depending on another, and when there are dependencies among parts of an application it becomes difficult to test any one part in isolation.

Clean Architecture is the solution to this dependency issue. In contrast to general layered architecture, Clean Architecture does not depend on frameworks, user interfaces, or databases. Generally, an application has three kinds of layers, a UI layer, a Business Logic Layer, and a Data Access Layer, whereas Clean Architecture is organized into Application Core, Infrastructure, and UI. It is possible to test or change the user interface or database without having to change the rest of the system.

Our team of .NET developers helped draft this blog to explain the importance of clean architecture and how maintaining one eliminates the problems that come from a poorly structured application. To start with, let us check the different types of architecture in ASP.NET Core and then understand clean architecture.

1. Common Architectures in ASP.NET Core

Having a good architecture is key to building an application. Different kinds of architecture are available, and they share the same objective: separation of code, achieved by dividing an application into layers.

Two types are categorized in ASP.NET Core Common Architecture:

1.1 Traditional “N-Layer” Architecture

  1. It has layers such as UI, BLL (Business Logic Layer), and DAL (Data Access Layer).
  2. Presentation of any page will be a part of the UI Layer. In the UI layer, users can make requests. BLL interacts with these requests.
  3. It depends on DAL. BLL holds the logic in the application. BLL calls DAL for data access requests.
  4. DAL holds the data access to all the implementation details. DAL depends on the existence of the Database.
  5. One key disadvantage of this architecture is that compile-time dependencies run from top to bottom: the User Interface layer depends on the BLL, and the BLL depends on the DAL. This means that the BLL, which holds the most important logic in the application, depends on the data access layer.
  6. As a result, testing the business logic is much more difficult, because it always requires a test database.
  7. As a solution, the dependency inversion principle is used.

1.2 Clean Architecture

  1. In this type, the Domain and Application layers sit at the core, the center of the design.
  2. Unlike Traditional “N-Layer” architecture, the clean architecture is independent of the database, UI layer, Framework, and several other external agencies.
  3. The clean architecture puts business logic & application model at the center of the application.
  4. Its dependencies are inverted: the infrastructure and implementation details depend on the application core.

Now, let’s understand Clean Architecture in detail.

2. Clean Architecture

Robert C. Martin created clean architecture and promoted it on his blog. Clean architecture refers to a software architecture in which business logic does not rely on data access or other infrastructure concerns; instead, the infrastructure and implementation details depend on the application core. To achieve this, we define abstractions, or interfaces, in the application core and implement them with types defined in the infrastructure layer.

As we already know, clean architecture has no dependency on frameworks; you are not forced to adopt a feature-laden library to build it. It is highly testable: we can test applications without the UI or any framework dependency. With a clean architecture, business logic and application models are placed at the center of the design, and high-level modules do not depend on low-level modules; both depend on abstractions.

2.1 Benefits of Clean Architecture

  • Clean architecture provides you with a cost-effective methodology that helps you to develop quality code with better performance and is easy to understand. 
  • Independent of database, frameworks & presentation layer. One can change the user interface at any time without changing the rest of the system or business logic.
  • Business rules are not bound to a specific database such as Oracle or SQL Server, so one can use BigTable, MongoDB, CouchDB, or something else to implement them. Clean architecture also doesn't depend on the existence of any particular feature-laden library.
  • Clean architecture is highly testable, one can check business rules without any other external elements or even without the UI, Web server, or Database.
  • Clean architecture is independent of any external agency, one can say that business rules are unknown to the outside world.

2.2 Clean Architecture Layers

There are three layers in clean architecture: the Application Core (domain) layer is at the center, surrounded by the Infrastructure layer, and the outer layer consists of the user interface.

In the following diagram, compile-time dependencies are represented by solid arrows and runtime-only dependencies by dashed arrows. Writing automated unit tests becomes very easy because the Application Core doesn't depend on Infrastructure. Applications where the UI project, Infrastructure, and Application Core run as a single unit are called monolithic applications.

Interfaces are defined in the Application Core, and the UI layer works against them at compile time. The Infrastructure layer defines the implementation types, and the UI layer shouldn't know about those types; however, when the app executes, the implementation types must be present and wired up to the Application Core's interfaces, which is exactly what dependency injection makes possible.

The following image shows how these layers and their interfaces fit together.

Clean Architecture Layers
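
Here is a minimal sketch of that wiring, using a hypothetical IEmailSender abstraction: the interface lives in the Application Core, the implementation in Infrastructure, and dependency injection connects them in the UI project.

using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

// Application Core project: only the abstraction lives here.
public interface IEmailSender
{
    Task SendAsync(string to, string subject, string body);
}

// Infrastructure project: the implementation depends on the core, never the reverse.
public class SmtpEmailSender : IEmailSender
{
    public Task SendAsync(string to, string subject, string body)
    {
        // A real SMTP or API call would go here.
        return Task.CompletedTask;
    }
}

// UI project: dependency injection wires the interface to its implementation at runtime.
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddScoped<IEmailSender, SmtpEmailSender>();
    }
}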

3. Clean Architecture in .NET Core

ASP.NET Core's built-in dependency injection support makes this structure an ideal way to design non-monolithic applications. Because the Application layer does not depend on Infrastructure, automated unit tests become very easy.

There are three parts in this section: the core part, Infrastructure & Web.

ASP.NET Core architecture
  • What belongs to ASP.NET Core Web (UI Layer)?
    • One can have all of ASP.NET Core & ASP.NET Core MVC Types or some of these in addition to any DTO types such as ViewModel or API Model etc. 
    • ASP.NET Core Web App may include controllers, Models, ViewModels, ASP.NET Core Identity, Response Caching Filters, Model Validation Filters or several other filters, etc.
  • What about Application Core?
    • All of the Domain ModelTypes belong to Application Core.
    • Interfaces, Business Services, Domain Events, Value Objects, POCO Entities, Application Exceptions, Aggregates, Specifications etc will be a part of Application Core Project.
  • What resides in the Infrastructure?
    • The communications out of the app’s process belong to Infrastructure.
    • For example, if the app needs to communicate outward by sending an SMS, that communication is implemented here.
    • There will be Redis Cache Service, Azure Service bus Accessor, InMemory Data Cache, EF DbContext or SMS Service, Email Service, Other API Clients in infrastructure project. 
  • The database, for example a SQL Server instance, acts as the data source.
  • Third-Party Services can be GitHub API, SendGrid API, Twilio API. 

Organizing code in .NET Core clean architecture

Clean architecture is a layered architecture consisting of Application, Infrastructure, and User Interface layers. Every layer has its own responsibilities, and each layer contains its own types, which are described in detail below.

3.1 Application Layer

The Application Core layer contains all of the business logic along with entities, domain services, and interfaces. It should have minimal dependencies; instead of depending on a database, it holds interfaces and domain model types, including entities, value objects, and aggregates. Domain services contain logic that affects multiple entities or aggregates.

Custom exceptions should also live in the application layer; they make it clear when and why something went wrong. Apart from that, the application layer contains domain events, event handlers, and specifications, which are a way to encapsulate a query in a class.

For services to pass data to higher layers, there should be Data Transfer Objects (DTOs). The core project can also contain validators, for example using FluentValidation, to validate the objects passed into controllers, as well as enums and custom guard clauses; a small validator sketch follows.
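
For illustration, assuming the FluentValidation package, a validator for a hypothetical DTO could look like this:

using FluentValidation;

// Hypothetical DTO passed into a controller.
public class CreateOrderDto
{
    public string CustomerName { get; set; } = string.Empty;
    public decimal Total { get; set; }
}

// Validator living in the Application Core project.
public class CreateOrderDtoValidator : AbstractValidator<CreateOrderDto>
{
    public CreateOrderDtoValidator()
    {
        RuleFor(x => x.CustomerName).NotEmpty().MaximumLength(100);
        RuleFor(x => x.Total).GreaterThan(0);
    }
}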

Types of Application Layer

The Application Core includes interfaces, domain services, specifications, custom exceptions, guard clauses, domain events, handlers, and more.

3.2 Infrastructure Layer

The Infrastructure layer depends on the Application layer for business logic. It implements interfaces from the application layer and provides functionality for accessing external systems. It includes repositories that talk to the database, the Entity Framework DbContext, and the migration files needed for database communication. The infrastructure layer also contains API clients, file system access, email/SMS, and the system clock. Services that implement interfaces defined in the Application layer live here as well; interfaces primarily go into the Application Core, but some implementation-specific interfaces may go in Infrastructure. In short, this layer handles the database and external API calls.

Types of Infrastructure Layer

Types of Infrastructure layers are 

  1. EF Core DbContext
  2. Data access implementation types like Repositories, Web services, and file logger.

3.3 User Interface Layer

This layer is the entry point to the application from the user's perspective. It contains the MVC pieces such as controllers, views or Razor Pages, ViewModels, and so on. It can have custom model binders, custom filters, and custom middleware. Tag helpers are also part of this layer; there are built-in tag helpers such as HTML tag helpers, image tag helpers, and label tag helpers. Model binding, including custom model binding, is part of the UI layer as well.

Types of UI Layer

The UI Layer has controllers, views/Razor Pages, ViewModels, custom middleware, custom filters, and the Startup class.

4. Conclusion

Clean Architecture helps to organize applications of moderate to high complexity. It separates the dependencies in such a way that business logic and the application's domain stay isolated. ASP.NET Core works perfectly with the clean architecture method, as long as the original solution structure is set up correctly. One can break software down into layers according to the inverted dependencies and keep the internal system testable. Clean architecture produces an application that is independent of the UI layer, database, framework, or any other external resources such as third-party libraries, which makes it easy to test. You can also use the clean architecture solution template, a .NET Core project template available on GitHub, to make sure your application development stays on the right track. 

.Net Core Best Practices
https://www.tatvasoft.com/blog/net-core-best-practices/
Mon, 25 Oct 2021
Whenever it comes to the performance of a website or an application, load time is one of the key characteristics that unveil the success of a site. So, if it takes more than 3 seconds to load, there are chances that the customers might leave the site and never come back. And this is why businesses prefer to have a perfect and robust web app. For this, one of the best technologies is .NET Core.

.NET Core is a free, open-source, fast, lightweight, and cross-platform web development framework created and maintained by Microsoft. It runs on Windows, Linux, and macOS. It is not simply an updated version of the .NET Framework: it was redesigned from scratch and comes with a single programming model for .NET Web API and .NET MVC. Speed is a key feature of .NET Core, and it is optimized for performance. To learn more about this technology and the ways it helps boost application performance, let's go through the best practices for .NET Core handpicked by our .NET development team.

Top 18 .NET Core Best Practices

Here are some of the best .NET Core practices that can help developers to bring down the business logic of their clients into reality.

1. Inline Methods

Inlining improves app performance by removing the overhead of a method call: passing arguments, jumping to the callee, and saving and restoring registers. Remember that a method containing a throw statement will not be inlined by the JIT (just-in-time) compiler. To work around this, move the throw into a static helper method, as sketched below.
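
A minimal sketch of the pattern, using a hypothetical guard method:

using System;

public static class Guard
{
    // Small and throw-free, so the JIT compiler is able to inline it at call sites.
    public static void AgainstNegative(int value)
    {
        if (value < 0)
            ThrowValueOutOfRange();
    }

    // The throw statement lives in a separate, non-inlined helper.
    private static void ThrowValueOutOfRange() =>
        throw new ArgumentOutOfRangeException("value", "Value must be non-negative.");
}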

2. Use Asynchronous Programming : (ASYNC – AWAIT)

ASP.NET Core itself is built on asynchronous programming, and using the same approach makes an application more dependable, faster, and more responsive. We should employ end-to-end asynchronous programming in our code.

For example:

Don’t:

public class WrongStreamReaderController : Controller
{
    [HttpGet("/home")]
    public ActionResult<HomeModel> Get()
    {
        // Synchronous I/O blocks the request thread.
        // (HomeModel stands in for whatever DTO you deserialize into.)
        var json = new StreamReader(Request.Body).ReadToEnd();

        return JsonSerializer.Deserialize<HomeModel>(json);
    }
}

Do:

public class CorrectStreamReaderController : Controller
{
    [HttpGet("/home")]
    public async Task<ActionResult<HomeModel>> Get()
    {
        // Asynchronous I/O frees the request thread while waiting.
        var json = await new StreamReader(Request.Body).ReadToEndAsync();

        return JsonSerializer.Deserialize<HomeModel>(json);
    }
}

3. Optimize Data Access

Optimizing the data access logic is one of the best ways to improve application performance. Most applications depend entirely on a database: they have to fetch data from it, process it, and display it.

Suggestions:

  • Call all data access APIs asynchronously.
  • Do not fetch data that is not required in advance.
  • When retrieving data for read-only purposes in Entity Framework Core, use non-tracking queries.
  • Use LINQ to filter and aggregate, with Where, Select, or Sum, so that the filtering is performed by the database (see the sketch after this list).
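
A minimal sketch of a read-only Entity Framework Core query combining the suggestions above; ShopContext and Product are hypothetical types.

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public decimal Price { get; set; }
}

public class ShopContext : DbContext
{
    public ShopContext(DbContextOptions<ShopContext> options) : base(options) { }
    public DbSet<Product> Products => Set<Product>();
}

public class ProductQueries
{
    private readonly ShopContext _db;
    public ProductQueries(ShopContext db) => _db = db;

    // No change tracking for read-only data; Where and Select are translated to SQL,
    // so filtering and projection happen in the database, asynchronously.
    public Task<List<string>> GetCheapProductNamesAsync(decimal maxPrice) =>
        _db.Products
           .AsNoTracking()
           .Where(p => p.Price <= maxPrice)
           .Select(p => p.Name)
           .ToListAsync();
}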

4. Always Use Cache

Caching is one of the most popular and proven ways of improving performance. We should cache any data that is relatively stable. ASP.NET Core offers response caching middleware, which we can use to cache whole responses: it stores web server responses using cache-related headers on the HTTP response objects. Caching large objects also avoids costly allocations.

Caching technique:

  • In-memory caching
  • Distributed cache
  • Cache tag helper
  • Distributed cache tag helper

A memory cache can be used, or a distributed cache such as NCache or Redis Cache; a small in-memory caching sketch follows.
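
As a minimal sketch of in-memory caching (register it with services.AddMemoryCache(); the CountryService and its data source are hypothetical):

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class CountryService
{
    private readonly IMemoryCache _cache;
    public CountryService(IMemoryCache cache) => _cache = cache;

    // Relatively stable data: cache it for an hour instead of hitting the database on every call.
    public async Task<string[]> GetCountriesAsync() =>
        await _cache.GetOrCreateAsync("countries", async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1);
            return await LoadCountriesFromDatabaseAsync();
        }) ?? Array.Empty<string>();

    // Stand-in for an expensive database or API call.
    private Task<string[]> LoadCountriesFromDatabaseAsync() =>
        Task.FromResult(new[] { "India", "Germany", "Japan" });
}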

5. Response Caching Middleware Components

If response data is cacheable, the response caching middleware monitors and stores responses and serves them from the response cache. This middleware is available in the Microsoft.AspNetCore.ResponseCaching package.

public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCaching();   // registers the response caching services
    services.AddRazorPages();
}

// Remember to also call app.UseResponseCaching() in the request pipeline.

6. Enable Compression

By reducing response size we can improve the performance of the application, because less data is transferred between the server and the client. You can take advantage of response compression in ASP.NET Core to lower bandwidth requirements and speed up responses. In ASP.NET Core it is provided as middleware.

public void ConfigureServices(IServiceCollection services_collection)
{
    services_collection.AddResponseCompression();

    // Gzip shown here; BrotliCompressionProviderOptions is configured the same way.
    services_collection.Configure<GzipCompressionProviderOptions>(opt =>
    {
        opt.Level = CompressionLevel.Fastest;
    });
}

7. Bundling and Minification

Bundling and minification reduce the number of server round trips. Try to load all client-side assets, such as styles and JS/CSS, at once: minify your files first and then bundle them into one file that loads faster and requires fewer HTTP requests.

8. Use Content Delivery Network (CDN)

Even though the speed of light is more than 299,000 km/s, which is extremely fast, distance still adds latency, so it helps to keep our data close to our consumers. If there are only a few CSS and JS files, it is easy enough to serve them from your own server, but for bigger static files you should consider using a CDN. Most CDNs have many locations and serve files from a server local to the user, which improves website performance.

9. Load JavaScript from the Bottom

Unless they are required earlier, we should always strive to load our JS files at the end. Your website will load faster as a result, and users will not have to wait long to see the information.

10. Cache Pages or Cache Parts of Pages

Rather than querying the database and re-rendering a complex page on every request, we can save the rendered output to a cache and use that data to serve later requests.

// In ASP.NET Core, the ResponseCache attribute replaces the classic OutputCache attribute.
[ResponseCache(Duration = 20)]
public ActionResult HomeIndex()
{
    return View();
}

11. Use Exceptions only When Necessary

Exceptions should be rare. Throwing and catching exceptions is slow compared to other code flow patterns, so exceptions should not be used to regulate the normal flow of the program. Instead, design the program logic to detect and handle exception-prone scenarios.

Throw or catch exceptions only for unusual or unexpected conditions. You can use app diagnostic tools such as Application Insights to identify common exceptions in an app and see how they perform.

12. Setting at Environment Level

While developing the application we should use the development environment, and when we publish it we should use the production environment, because the configuration for each environment is different; keeping them separate is always the best practice.

This is extremely easy to do with .NET Core. The appsettings.json file can be found in the project folder; if we expand it, we can see the appsettings.Development.json file for the development environment and the appsettings.Production.json file for the production environment.
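
A minimal sketch of reacting to the current environment in code (the error route is an assumption):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Startup
{
    private readonly IWebHostEnvironment _environment;

    public Startup(IWebHostEnvironment environment) => _environment = environment;

    public void Configure(IApplicationBuilder app)
    {
        // appsettings.{Environment}.json is layered on top of appsettings.json automatically;
        // the same environment name drives code-level switches like this one.
        if (_environment.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
        }
    }
}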

13. Routing

We should provide descriptive names and use nouns instead of verbs for routes/endpoints.

Don’t:

[Route("api/route-employee")]
public class EmployeeController : Controller
{
    [HttpGet("get-all-employee")]
    public IActionResult GetAllEmployee() => Ok();

    [HttpGet("get-employee-by-id/{id}")]
    public IActionResult GetEmployeeById(int id) => Ok();
}

Do:

[Route("api/employee")]
public class EmployeeController : Controller
{
    [HttpGet]
    public IActionResult GetAllEmployee() => Ok();

    [HttpGet("{id}")]
    public IActionResult GetEmployeeById(int id) => Ok();
}

14. Use AutoMapper to Avoid Writing Boilerplate Code

AutoMapper is a convention-based object-to-object mapper that requires little configuration. It is most useful when we want a clean separation between domain models and view models.

After configuring AutoMapper, we can map domain models to view models like this:

public class EmployeeService
{
    private EmployeeRepository employeeRepository = new EmployeeRepository();

    public EmployeeDTO GetEmployee(int employeeId)
    {
        var emp = employeeRepository.GetEmployee(employeeId);
        return Mapper.Map<EmployeeDTO>(emp);
    }
}

15. Use Swagger

Swagger (OpenAPI) is a machine-readable representation of a RESTful API that enables interactive documentation, discoverability, and client SDK generation.

Setting up Swagger usually takes only a couple of minutes, and we get a great tool for documenting our API; a minimal setup sketch follows.
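
A minimal sketch, assuming the Swashbuckle.AspNetCore NuGet package:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        services.AddSwaggerGen();      // registers the Swagger/OpenAPI generator
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseSwagger();              // serves the generated OpenAPI document
        app.UseSwaggerUI();            // serves the interactive documentation page
        // ...routing and endpoint mapping follow here
    }
}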

16. Logging

Structured logging means keeping a consistent, fixed logging format. With structured logs, it's easy to filter, navigate, and analyze log data.

ASP.NET Core supports structured logging by default through its built-in logging abstractions, and the messages the framework and web server emit follow it consistently. Serilog is an excellent logging framework that can be used on top of this; a small sketch follows.
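
A minimal sketch, assuming the Serilog and Serilog.Sinks.Console packages:

using Serilog;

public static class Program
{
    public static void Main(string[] args)
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console()
            .CreateLogger();

        // Structured logging: OrderId and Customer are captured as named properties,
        // not just flattened into the message text.
        Log.Information("Processing order {OrderId} for {Customer}", 42, "Acme");

        Log.CloseAndFlush();
    }
}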

17. Do Refactoring for Auto-generated Code

In .NET Core there is a lot of auto-generated code, so set aside some time to examine the logic flow; because we know our application best, we can usually improve it a little.

18. Delete Unused Profiles

  • Delete unused custom middleware components from startup.cs
  • Remove any default controllers you aren’t using.

  • Trace and remove all redundant comments used for testing from the views.
  • Remove unwanted white space as well.

Why Use ASP.NET Core?

  • You can build web applications and services, IoT apps, and mobile backends.
  • You can use your favourite development tools on Windows, macOS, and Linux.
  • You can deploy applications to the cloud or on-premises.
  • Applications can be executed easily on .NET Core.

To improve application performance, we should build applications that use as few resources as possible to generate the desired output. This post presents some of the best practices for improving the performance of .NET Core applications.

Advantages of ASP.NET Core

Some of the major advantages of .NET Core are –

  • ASP.NET is a fast, lightweight, and high-performance web framework that can boost the core performance of the application.
  • .NET core has the hosting capability which enables the developers to host the app on Apache, IIS, Docker, or Self Hosting.
  • It supports built-in dependency injection.
  • .NET Core is an open-source framework, and being community-focused it offers excellent performance profiling tools.
  • ASP.NET Core supports modern client-side frameworks like React, Angular, and React with Redux.
  • .NET Core performance gets a boost from side-by-side app versioning, as it supports running various versions of an application simultaneously.
  • The framework is cross-platform: applications, frequently called code paths, and the development and diagnostic tools run on Linux, Windows, and macOS.
  • It supports modular HTTP requests.

Closing Thoughts on .NET Core Application Best Practices

Undoubtedly, .NET 6 has now been released, and in our experience it delivers on the performance claims of a faster and smoother experience. You will observe performance improvements in the data you process: it helps applications process information faster and provide a better experience. Our main goal in writing this blog was to reacquaint you with the best practices and strategies for .NET Core.

How to Setup Your Own MQTT Broker on Azure
https://www.tatvasoft.com/blog/how-to-setup-your-own-mqtt-broker-on-azure/
Thu, 07 Oct 2021
1. Introduction to MQTT and Broker

MQTT stands for Message Queuing Telemetry Transport. MQTT is a lightweight framework for posting and subscribing, where you can post and accept messages as a customer.

MQTT is a standardized messaging protocol maintained by OASIS. It is an extremely lightweight publish-subscribe network protocol designed for constrained devices and low-bandwidth networks, making it a perfect fit for Internet of Things (IoT) applications. MQTT enables you to send commands to control outputs, read and manage sensor nodes, build messaging systems, and more. 

Nowadays, we can see the exponential growth of MQTT in a variety of industries such as automotive, manufacturing, telecommunication, oil and gas, retail, etc.

Introduction to MQTT and Broker

The role of an MQTT broker is fundamentally to receive all messages, filter them, work out which clients are subscribed to each topic, and then publish the messages to those subscribed clients.

2. Why Eclipse Mosquitto?

First of all, what is Eclipse Mosquitto? Eclipse Mosquitto is an open-source message broker, licensed under EPL/EDL, that implements MQTT protocol versions 5.0, 3.1.1, and 3.1. It is a lightweight, scalable platform suitable for everything from low-power single-board computers to full servers.

The Mosquitto project also provides a set of C library functions for implementing MQTT clients, as well as the popular and effective mosquitto_pub and mosquitto_sub command-line clients.

3. Prerequisites

Before setting up an MQTT broker, we’ll need the following things.

  • Docker hub account
  • Docker Desktop in a temporary system (to create Docker image – one-time operation)
  • Azure Container instance
  • Azure storage account
  • Azure File Share (Inside of Azure storage account)
  • Azure CLI (Cloud shell)
  • Azure Resource Group

4. Step by Step Process of Setup MQTT Broker on Azure

To set up an MQTT broker on Azure, we’ll need a few base elements to be ready.

  • Mosquitto Broker Image
  • Creating volumes for the container to mount
  • Creating Azure container instance

4.1 Mosquitto Broker Image

The first thing needed here is the Mosquitto broker's docker image "eclipse-mosquitto", which is available on Docker Hub (https://hub.docker.com/_/eclipse-mosquitto).

Mosquitto Broker Image

Docker must be installed on the machine from which we will create the docker image. If Docker is not installed, download and install it using the link below:

https://www.docker.com/products/docker-desktop

After a successful installation, pull the eclipse-mosquitto image from Docker Hub with the "docker pull eclipse-mosquitto" command in your command prompt.

command prompt

Log in to your Docker Hub account in the command prompt with the "docker login" command, or log in with Docker Desktop. If you do not have an account, you can create one at https://hub.docker.com/

docker login
welcome docker hub

Tag the “eclipse-mosquito” image as “<your-docker-acoount-id>/<docker-image-name>”, here we named it as “pca31/testsystem1” with the command “docker tag eclipse-mosquito pca31/testsystem1”.

You can see newly created images by the “docker image ls” command.


Now push this image to docker hub with command “docker push <your-image-name>”, here we performed “docker push pca31/testsystem1”.


You can now see this image on your docker hub account.


Now we can pull this image and create a container.

4.2 Creating Volumes for the Container to Mount

As mentioned in eclipse-mosquitto's description (https://hub.docker.com/_/eclipse-mosquitto), we can mount three directories in a Mosquitto container; we will be using only /mosquitto/config for this setup.


We will put the necessary files (the Mosquitto configuration file mosquitto.conf, the authentication and authorization files password.txt and roles.txt, and the SSL/TLS files RootCA.crt, server.crt and server.key) inside the Azure file share and mount that share on our Azure container instance.

You can initiate your Azure storage account if there are no accounts. You can also use the existing storage accounts. Go to the Storage accounts section on the Azure portal and click on “Add”.


Select the resource group and give the appropriate storage account name then click the “Review+Create” button, here we give “testsystem1” as the storage account name.


Verify all the details and click the “Create” button.


Once the deployment process is completed, you can click on the “Go to the resource” section to tap and see all the resources.


Click on the “File shares” option to create a new file share inside our storage account.


Give the name for file share and quota for that file share then click the “Create” button.


Click on the MQTT file share.


Click on the upload button to upload the MQTT configuration files.


You can tap on the file icon, choose the files from your internal system and then click “Upload”.


Now, you can go to select account storage screen and click on “Access keys”


Copy any one of the keys, we will need this while creating the azure container instance.


4.3 Creating Azure Container Instance

Now we will create an Azure container instance using our docker image “pca31/testsystem1” with azure CLI.

Open Cloud Shell by clicking the Cloud Shell icon, as shown below.


Now execute the following command, substituting your own values for the variables:

az container create 
--resource-group $YOUR_RESOURCE_GROUP_NAME
--name $CONTAINER_NAME
--dns-name-label $DNS_NAME
--image $DOCKER_IMAGE_PATH 
--ports 8883 
--azure-file-volume-account-name $AZURE_STORAGE_ACCOUNT_NAME
--azure-file-volume-account-key $AZURE_STORAGE_ACCOUNT_KEY
--azure-file-volume-share-name $FILE_SHARE_NAME
--azure-file-volume-mount-path /mosquitto/config/

Below is the sample command we ran for our Dev MQTT broker setup (Take it as a reference)

az container create
--resource-group Test_Test_Lab-BOServer-251818 
--name testsystem1 
--dns-name-label testsystem1 
--image pca31/testsystem1:latest --ports 8883 
--azure-file-volume-account-name testsystem1 
--azure-file-volume-account-key 
I2S6Z3AluZSyQjqePqA+UgpVlG10qDqfGOql0cuF0p130TdR7KhvvPwspfFlnwusFNg0N5
bGMdas3NNrf9xLOw==
--azure-file-volume-share-name mqtt 
--azure-file-volume-mount-path /mosquitto/config/

The above script will create an Azure container instance. Now go to the Azure Container Instances section to see this resource.


Use this DNS name as an MQTT server in your MQTT client application.


Go to the Container tab to see the events, properties and logs of our container instance.


5. Connect to Sample MQTT Broker

To be able to connect to the MQTT broker, the following things are required and that can be obtained from Sample Client.

  • MQTT broker URL
  • Certificate file
  • Key file
  • Credentials

Steps for Connecting to Sample MQTT broker

Get the following files from Sample Project

  • Client Certificate file (.crt)
  • Client Key file (.key)

Following is the code example to create an MQTT client with C# .Net language.

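As an illustrative sketch only (not the original sample code), such a client could be written with the MQTTnet NuGet package; the broker host name, topic, and credentials below are placeholders, and exact namespaces and TLS options vary between MQTTnet versions:

using System.Threading;
using System.Threading.Tasks;
using MQTTnet;                    // MQTTnet 3.x namespaces; newer versions differ slightly
using MQTTnet.Client;
using MQTTnet.Client.Options;

class MqttClientSample
{
    static async Task Main()
    {
        var factory = new MqttFactory();
        var mqttClient = factory.CreateMqttClient();

        // Broker URL = DNS name of the Azure container instance, TLS port 8883,
        // credentials as configured in the broker's password file.
        var options = new MqttClientOptionsBuilder()
            .WithTcpServer("testsystem1.<region>.azurecontainer.io", 8883)
            .WithCredentials("mqttuser", "mqttpassword")
            .WithTls()               // client certificate setup depends on the MQTTnet version
            .WithCleanSession()
            .Build();

        await mqttClient.ConnectAsync(options, CancellationToken.None);

        // Publish a test message to verify the connection.
        var message = new MqttApplicationMessageBuilder()
            .WithTopic("devices/test")
            .WithPayload("hello from the Azure MQTT broker")
            .Build();
        await mqttClient.PublishAsync(message, CancellationToken.None);
    }
}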

6. Conclusion

An MQTT broker makes it simple to establish a publish-subscribe based system. It is quite simple to use and works well for Internet of Things and home automation projects. This document should help you set up your own broker and give clarity on how it works.

Here are some of the exciting MQTT 5 features that can be explored:

  • Custom Headers and User Properties
  • Payload Format and Content Types
  • Connect Options
  • Message Expiry
  • Subscription Identifier

The post How to Setup Your Own MQTT Broker on Azure appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/how-to-setup-your-own-mqtt-broker-on-azure/feed/ 0
Table-Valued Parameters in SQL Server https://www.tatvasoft.com/blog/table-valued-parameters-in-sql-server/ https://www.tatvasoft.com/blog/table-valued-parameters-in-sql-server/#respond Thu, 23 Sep 2021 09:56:01 +0000 https://www.tatvasoft.com/blog/?p=6000 In the latest update of the SQL Server, the table-values parameters are used to transmit data from multiple rows into a T-SQL statement that is stored in the form or procedure or function.

The post Table-Valued Parameters in SQL Server appeared first on TatvaSoft Blog.

]]>
In SQL Server, table-valued parameters are used to transmit multiple rows of data to a T-SQL statement, stored procedure, or function.

By sending multiple records in a single statement or routine, round trips to the server can be reduced. A table-valued parameter is declared using a table type that is defined by the user. User-defined table types (UDTTs) and table-valued parameters (TVPs) were introduced in SQL Server 2008; before that, passing a table of values to a stored procedure was challenging and typically required workarounds such as creating temporary tables or passing long lists of individual parameters. With TVPs, available in SQL Server 2008 and above, you can send multiple rows of data to a stored procedure or function as a single table-typed parameter. In this blog, we give you a demonstration with the help of our .NET development team. Take a look!

1. Create a User-defined table type in SQL (UDTT)

A user-defined table type (UDTT) is a predefined table schema that you create once and can then reuse, typically to hold temporary sets of rows.

These user-defined table types support almost the same features as normal tables, such as primary keys, default values and unique constraints, and they can be referenced as parameters of stored procedures and functions.

For a table-valued parameter, we need a user-defined table type (UDTT).  UDTT can be created with the following T-SQL statement.

CREATE TYPE UDTT_Country AS TABLE(

    CountryName nvarchar(100),

    CurrencyName nvarchar(50)

)
GO

Note: The syntax for creating a user-defined table type is similar to creating a normal table, but there is no user interface for it in SQL Server Management Studio. Primary keys, indexes, constraints, computed columns and identity columns can be declared in a UDTT definition; foreign keys, however, are not supported.

There is no ALTER statement available for user defined table type (UDTT). For modification, we will need to use DROP and CREATE.

2. Table-Valued Parameter in Stored Procedure

Using a table-valued parameter is almost the same as using any other parameter, except that instead of a built-in data type we specify the name of a UDTT. A table-valued parameter allows us to pass multiple columns and rows as input to a stored procedure.

Table-valued parameters must be declared as READONLY. DML operations such as INSERT, DELETE and UPDATE cannot be performed on the parameter inside the procedure; it can only be read, for example with SELECT statements.

Given below is an example of a table-valued parameter in the stored procedure. Table Valued Parameter can’t be used as OUTPUT parameter in stored procedures.

CREATE PROCEDURE USP_AddCountries

    @Countries UDTT_Country READONLY

AS

BEGIN

    INSERT INTO CountryList (CountryName, CurrencyName)

    SELECT CountryName,CurrencyName FROM @Countries

END

GO

The use of table-valued parameters in user-defined functions is similar.

3. Execution from T-SQL statement

To execute a stored procedure that has a table-valued parameter, we need to declare a table variable that references the UDTT, fill it with rows, and pass it to the procedure with EXEC; section 6 below shows this DECLARE/INSERT/EXEC pattern with a memory-optimized table type.


4. Execution from C# code

To execute stored procedures from .NET code, we have to define the parameter as a Structured parameter.

The Structured data type accepts a DataTable, a DbDataReader or an IEnumerable<SqlDataRecord>. In the following examples, the first uses a DataTable, the second uses an IEnumerable<SqlDataRecord> built from a list of records, and the third shows how to use table-valued parameters with Dapper.

Using Data Table

static void TableParameterUsingDataTable()
{
    // Build a DataTable whose columns match the UDTT_Country definition.
    DataTable dtCurrency = new DataTable();
    dtCurrency.Columns.Add("CountryName", typeof(string));
    dtCurrency.Columns.Add("CurrencyName", typeof(string));
    dtCurrency.Rows.Add("India", "Indian Rupee");
    dtCurrency.Rows.Add("USA", "US Dollar");

    SqlConnection connection = new SqlConnection(connectionString);
    connection.Open();
    SqlCommand cmd = new SqlCommand("USP_AddCountries", connection);
    cmd.CommandType = CommandType.StoredProcedure;

    // Pass the table-valued parameter to the stored procedure.
    SqlParameter sqlParam = cmd.Parameters.AddWithValue("@Countries", dtCurrency);
    sqlParam.SqlDbType = SqlDbType.Structured;
    cmd.ExecuteNonQuery();
    connection.Close();
}

Using List

static void TableParameterUsingList()
{
    // Country is assumed to be a simple class with CountryName and CurrencyName properties.
    // Local function: converts Country objects into SqlDataRecords whose
    // metadata matches the UDTT_Country definition.
    IEnumerable<SqlDataRecord> CreateSqlDataRecords(IEnumerable<Country> countries)
    {
        SqlMetaData[] metaData = new SqlMetaData[2];
        metaData[0] = new SqlMetaData("CountryName", SqlDbType.NVarChar, 100);
        metaData[1] = new SqlMetaData("CurrencyName", SqlDbType.NVarChar, 50);

        foreach (var c in countries)
        {
            SqlDataRecord record = new SqlDataRecord(metaData);
            record.SetSqlString(0, c.CountryName);
            record.SetSqlString(1, c.CurrencyName);
            yield return record;
        }
    }

    List<Country> currencyList = new List<Country>
    {
        new Country("India", "Indian Rupee"),
        new Country("USA", "US Dollar"),
    };
    IEnumerable<SqlDataRecord> sqlDataRecords = CreateSqlDataRecords(currencyList);

    SqlConnection connection = new SqlConnection(connectionString);
    connection.Open();
    SqlCommand cmd = new SqlCommand("USP_AddCountries", connection);
    cmd.CommandType = CommandType.StoredProcedure;

    // Pass the table-valued parameter to the stored procedure.
    SqlParameter sqlParam = cmd.Parameters.AddWithValue("@Countries", sqlDataRecords);
    sqlParam.SqlDbType = SqlDbType.Structured;
    cmd.ExecuteNonQuery();
    connection.Close();
}

Using Dapper

static void TableParameterUsingDapper()
{
    List<Country> currencyList = new List<Country>
    {
        new Country("India", "Indian Rupee"),
        new Country("USA", "US Dollar")
    };

    // FastMember's ObjectReader converts the list into a DataTable.
    DataTable dtCurrency = new DataTable();
    using (var reader = ObjectReader.Create(currencyList))
    {
        dtCurrency.Load(reader);
    }

    // If you already have a DataTable, start from here.
    SqlConnection connection = new SqlConnection(connectionString);
    DynamicParameters parameters = new DynamicParameters();
    parameters.Add("@Countries", dtCurrency.AsTableValuedParameter("UDTT_Country"));
    connection.Query("USP_AddCountries", parameters, commandType: CommandType.StoredProcedure);
}

5. Modifying data with Table-valued Parameters (Transact-SQL)

Table-valued parameters are well suited to set-based data modifications that affect many rows with a single statement. For instance, you can select the relevant rows from the parameter and insert them into a database table, or perform UPDATE and DELETE operations by joining the table-valued parameter to the table that needs to be modified.

The UPDATE statement below shows how Transact-SQL can use a table-valued parameter by joining it with the Countries table.

When using a table-valued parameter in an UPDATE, you join it to the target table in the FROM clause. Here the table-valued parameter @tvpEditedCountries is aliased as editedCountries, as shown below.

UPDATE dbo.Countries  

    SET Countries.CountryName = editedCountries.CountryName  

    FROM dbo.Countries INNER JOIN @tvpEditedCountries AS editedCountries  

    ON dbo.Countries.CountryID = editedCountries.CountryID;

The Transact-SQL statement below shows how to select a set of rows from a table-valued parameter and insert them into a table.

INSERT INTO dbo.Countries (CountryID, CountryName)  

SELECT newCountries.CountryID, newCountries.CountryName FROM @tvpNewCountries AS newCountries;

The INSERT above performs the whole load as a single set-based operation.

6. Using Memory-Optimized Table-valued Parameters

Memory-optimized table-valued parameters use the same memory-optimized data structures and algorithms as memory-optimized tables. Efficiency is greatest when the table variables are accessed from a natively compiled module.

Using the same concept, memory-optimized table-valued parameters can be created, primarily to reduce tempdb activity.

The following example is a clear demonstration of memory-optimized table-valued parameters.

CREATE TYPE Countries_MemOptimized AS TABLE
(
    CountryId INT PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000),
    CountryName VARCHAR(100)
) WITH (MEMORY_OPTIMIZED = ON)

The MEMORY_OPTIMIZED = ON clause marks the table type as memory-optimized. In addition, a nonclustered hash index is declared on CountryId, which is how data in memory-optimized structures is indexed.

CREATE PROCEDURE Usp_InsertCountryMemOpt

@ParCountry Countries_MemOptimized READONLY AS

INSERT INTO Countries

SELECT * FROM @ParCountry

The stored procedure above takes the memory-optimized table type as its input. Using a variable of the same memory-optimized type, we can now execute the Usp_InsertCountryMemOpt procedure.

DECLARE @VarCountry_MemOptimized AS Countries_MemOptimized 
INSERT INTO @VarCountry_MemOptimized

VALUES ( 4, 'India_MemOptimized')

INSERT INTO @VarCountry_MemOptimized

VALUES ( 5, 'USA_MemOptimized')

INSERT INTO @VarCountry_MemOptimized

VALUES ( 6, 'UK_MemOptimized')

EXEC Usp_InsertCountryMemOpt @VarCountry_MemOptimized

SELECT * FROM Countries

Output

CountryID CountryName
1 India
2 USA
3 UK
4 India_MemOptimized
5 USA_MemOptimized
6 UK_MemOptimized

Using memory-optimized table-valued parameters reduces tempdb activity, although it may increase memory consumption. Regular table-valued parameters, by contrast, are backed by tempdb and generate activity in the tempdb files.

7. Table-Valued Parameters vs BULK INSERT Options

Table-valued parameters compare well with other set-based options for updating large data sets. Bulk operations such as BULK INSERT have a higher startup cost, so table-valued parameters tend to perform better for smaller batches, roughly up to a thousand rows.

Table-valued parameters can also benefit from temporary table caching when reused. Table caching enables greater scalability compared to BULK INSERT options.

The table-valued parameters are efficient and perform way better than other equivalent parameters and array implementations.

8. Benefits of Table-valued Parameters

  • Simple programming model: even complex business logic can be implemented in a single routine.
  • Reduce round trips to server
  • Using Merge Statement, Multiple Insert/Update/Delete operation is possible in a single routine.
  • Provides more flexibility over temporary tables.

9. Limitations of Table-valued Parameters

  • Table-valued parameters cannot be passed to CLR user-defined functions.
  • SQL Server does not maintain statistics on table-valued parameter columns, and the only indexes allowed on them are those created to support UNIQUE or PRIMARY KEY constraints.
  • In the Transact-SQL language, table-valued parameters are read-only. You cannot update the column values in the rows of a table-valued parameter, and you cannot insert or delete rows. To modify the data that is passed to a stored procedure or parameterized statement in a table-valued parameter, you must first insert it into a temporary table or a table variable.
  • You cannot use ALTER TABLE statements to modify the design of table-valued parameters.

10. Conclusion

As explained above, table-valued parameters let us send multiple records to the server in a single round trip, and complex business logic can be implemented around them.

The post Table-Valued Parameters in SQL Server appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/table-valued-parameters-in-sql-server/feed/ 0
ASP.NET Core and Docker https://www.tatvasoft.com/blog/asp-net-core-and-docker/ https://www.tatvasoft.com/blog/asp-net-core-and-docker/#respond Fri, 02 Jul 2021 09:30:38 +0000 https://www.tatvasoft.com/blog/?p=5568 “Docker" is a buzzword nowadays, and almost everyone has heard about it a lot. Most of you might know Docker superficially. Like its Open-source containerization technology based on Linux which enables developers to create and deploy programs using containers.

The post ASP.NET Core and Docker appeared first on TatvaSoft Blog.

]]>
“Docker” is a buzzword nowadays, and almost everyone has heard about it a lot. Most of you might know Docker superficially: it is an open-source, Linux-based containerization technology that enables developers to create and deploy programs using containers. But it is vital to have a deeper idea of Docker. What is it? What are Docker images, and how do you use them in a .NET Core application? How do you use the docker run command in an ASP.NET Core application?

Can I Use it with ASP.NET Core?

In this blog, we are going to explore how we can use docker with dot net core applications. To start with, let us delve deeper.

Docker containers are used for developing and publishing applications. Docker is a Linux-based open-source platform that packages your application together with everything it needs to run, and ASP.NET Core apps are no exception.

Regardless of the operating system you use, you can build the image once with the docker build command and get everything neatly put into one package. That package can then be copied anywhere, in development or in production, and reused as many times as you want without repeating the build process.

Deploying applications in containers is called containerization, and it makes applications easy to deploy.

Containerization is gaining a lot of popularity because containers are lightweight, portable, flexible, scalable, loosely coupled, and still secure.

Containers and Virtual Machines

You might think containers closely resemble virtual machines (VMs), but how are they different from a VM?

As you can see in the image below, Docker containers execute natively on Windows or Linux. A container shares the kernel of the host machine with other containers and runs as a separate process, which makes it fast and keeps its memory consumption low.

A virtual machine, by contrast, runs a complete guest operating system and needs virtual access to the host through software such as a hypervisor that creates and runs the VM. In short, a virtual machine carries a lot of overhead beyond your application logic.


Docker Tooling in Visual Studio

Docker tooling in the Visual Studio IDE helps us add Docker support to a project and generates the correct Dockerfile. You can modify this file and the container, and run it from the Visual Studio run section.

At the time of project creation, make sure the “Enable Docker Support” option is checked; this enables Docker support for your project.


You can also add the Dockerfile in your project folder later by using “Adding Docker Support” as shown in the below screenshot.


It will ask which OS you want to target for running Docker. Here you can select Linux from the available options.

Docker FIle Options

It will then generate the Dockerfile for your application. This file is a set of commands, a step-by-step instruction on how to build up your Docker image.


This is a multi-stage Dockerfile, which means the image produced by one stage can be used as the starting point for building another. For example, the build stage uses an image that contains the .NET Core SDK, which works especially well on Linux and lets you build the application inside a container. You usually do not want that extra SDK in your final deployment image, because you want a small, lean, and fast runtime image for your Core apps. That is why the last block starts again from the lightweight base image to produce the final image.

The SDK image has all the packages needed to restore and build the project. Because the stages are labeled (base in the first section, then build, publish, and final further down), the later stages can copy the output of the earlier ones directly instead of going out to a registry to pull another image.

The build stage acts as an intermediate; the publish stage reuses it to produce a release build of the web application. The final stage then copies the published output on top of the base image and sets the entry point that assembles everything at the very end. So, here we are running the application from Visual Studio.

As you can see in the image below, once Docker support is added a new “Docker” run option appears automatically; select it to run the application using this Dockerfile.


Voila !! 
We get our hello world


As mentioned above, Visual Studio also supports debugging the containerized app by setting breakpoints.


Deploy a .NET Core App to Docker Hub Using Visual Studio and Run it in Azure App Service

1. Prerequisite

First, we’ll have to install Docker Desktop for Windows before we create a Docker container (it is also available for Mac and Linux). Download and run the installation file as usual. You just need to log in with a Docker Hub account, so create one if you do not have it already.

Docket Desktop Windows

During the installation, it will ask you different options to choose Windows/Linux Containers. We’ll select the option for Windows Containers. By the way, this can also be changed later in the Docker settings.

2. Containerize an ASP.NET Core Application and Host the Image in Docker Hub

Let’s create a new project. For that we will go to File->New Project->Web-> ASP.NET Core web application


We can select the checkbox “Enable Docker Support” or you can add the docker file later in the project


As shown in the below image, we can now have a Docker file in the project. This file contains the container configuration. It uses operating system images to build the container and run the application on it.


 Click on the project and select Publish. Now, we will choose Container Registry and  Docker Hub, as we want to put the Container on Docker Hub.

Pick a Publish Target

Here, fill in your DockerHub username and password. Your username is not an e-mail address. You’ll see your username when you sign in to the Docker Hub website.

Container Resgitry

That’s it. Let’s publish it.


We are now logged in to the Docker Hub website. This is a container image that we’ve just published from Visual Studio.

Repositories

Here, we can see that Visual Studio has added a tag that says latest to the image. You can use tags for identification and versioning.

Username Container name

3. Use Azure App Service Web Apps for Containers to Run the Container Image

So, let’s run this Container. I’ll switch to the Azure portal. From here, I’ll create a new web app for Containers. This is a web app like the Standard App Services web app that runs Containers instead of running an application directly.


Adding container support to an ASP.NET Core application is really simple from Visual Studio. From there, you can easily publish it to Docker Hub or any other Container Registry and run it in Azure.

Let’s create one. Let us fill in all the information here. First, we need to fill in a name and resource group.

Web App for Containers

Now, for the container. I can choose where the Container comes from. I will choose Docker Hub as that is where my Container is right now. so then I will insert all the details in the container name and also need to add the tag which is the latest.

Single Container Preview

This is the web app for containers, and it is running on a container with the ASP.NET Core application in it. Now let’s go and check the URL and see if it works.


Yes. There it is, running within the container .

ASP.Net Core

Conclusion

In this long discussion of how to use Docker with .NET, we have tried to cover all the important aspects of developing an ASP.NET Core application using Docker. The advantage of Docker is that it is more flexible and lightweight than virtual machines, with minimal resource and overhead usage. This also makes it inexpensive, because a single virtual machine is enough to run several Docker containers. And Docker Hub provides many pre-built images and tools that you can use for your own customized solutions.

The post ASP.NET Core and Docker appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/asp-net-core-and-docker/feed/ 0
AWS Lambda vs Azure Functions: Serverless Computing https://www.tatvasoft.com/blog/aws-lambda-vs-azure-functions/ https://www.tatvasoft.com/blog/aws-lambda-vs-azure-functions/#comments Tue, 16 Mar 2021 13:25:00 +0000 https://www.tatvasoft.com/blog/?p=5007 Every organization isn't sure of which type of serverless application is the best fit for their business. For any application, it is very essential to be reliable and scalable as per the changing business needs.

The post AWS Lambda vs Azure Functions: Serverless Computing appeared first on TatvaSoft Blog.

]]>
Not every organization is sure which type of serverless platform is the best fit for its business. Any application needs to be reliable and scalable as business needs change. Apart from Google Cloud Functions, the two popular names are Microsoft Azure Functions and Amazon AWS Lambda. These are the leading choices for businesses, each with its own set of advantages. This blog is intended to compare AWS Lambda and Azure Functions across a range of parameters. So, without much ado, let’s get started.

AWS Lambda vs Azure Functions: Comparison of Various Parameters

1. Code

AWS Lambda

AWS Lambda provides the support to upload the code in a zip file as well as writing the code in the console to create the function. Let us understand both ways thoroughly:

Via ZIP File

A piece of code is deployed in the zip file as a function that is linked to a specific event like queue or HTTP endpoint which runs this function every time a matching event occurs. Users can also change the runtime associated with the function by updating the configurations. Also, developers can directly get the code from vendors which helps them consume less time.

For code written in a language that AWS does not support natively, a Lambda custom runtime can be used: the code is compiled into binaries that run on Amazon Linux. This keeps things simple, since there is only one way to run the code, but if any change is required the whole package has to be re-uploaded.

Via Console:

AWS Lambda gives good support for writing the code to create Lambda functions using the console. The benefit of using the Lambda console is that it provides a code editor for the languages which are not compiled which enables modifying and testing of the code smoothly.

Let us take a quick view of the steps of writing the lambda code through the console:

  1. First, open the Functions page in the Lambda console.
  2. Next, create the actual function by choosing the create-function option and authoring the code.
  3. Then test the function; a user is allowed to create up to 10 test events for each function.
  4. The code loaded into the function is executed and the test results are displayed in the console.

When invoking a function, AWS Lambda re-uses an execution environment from a previous invocation if one is available. This saves the time needed to prepare a new execution environment and lets users reuse resources, such as database connections or temporary files created in the execution environment, across invocations.

Runtimes are tied to specific language versions, and a runtime for a particular language or framework version is deprecated when that version reaches its end of life.

Azure Functions

Azure Functions code support and writing approach is a little bit complex to implement but also at the same time it is more flexible. Custom handler functions rely on HTTP primitives to communicate with code written in languages that are not supported.

Azure Functions also help in building a stable data-driven application that will allow the user to keep track of every event through a unified platform. It offers a lucid app development facility where you can integrate multiple Microsoft services like Azure Event Grid and Azure Cosmos DB etc.

This doesn’t just save time but improves efficiency too. You can simply set up the code and the deployment process using Azure functions that provide continuous support for any code-related services such as GitHub, Bitbucket, and VS Team Services. Some Logic Apps are also offered by the Azure Functions which helps in the integration without writing code.

2. Language

AWS Lambda

AWS Lambda natively supports PowerShell, Node.js, Go, Python, C#, Java, and Ruby code. It also provides a Runtime API that enables users to add more programming languages. AWS Lambda runs code automatically in response to multiple events (for example, HTTP requests via Amazon API Gateway, modifications to objects in Amazon S3 buckets, or table updates in Amazon DynamoDB).

For choosing languages for AWS lambda mainly two factors are taken into considerations which are:

Cold Start

The overhead of starting an invocation of a Lambda function is referred to as a cold start, and it consists of two parts. The first is the time AWS takes to set up an execution environment for the function’s code, which is fully managed by AWS. The second is the code initialization interval, which is typically in the hands of the developers.

Warm start

The Lambda will remain instantiated for a while soon after a cold start is made. This enables every other call to be made without initializing it every time. These calls are called the “warm call”, which means that the code is packed into memory and is ready to be executed one or more times when the Lambda function is called.

Azure Functions

The current Azure Functions runtime supports several languages: C#, JavaScript, F#, Java, PowerShell, Python, and TypeScript (supported by transpiling it to JavaScript). Each supported language follows its own set of events and methods for creating a function. Azure does not offer Go or Ruby; otherwise the language options are quite similar. Language support in Azure Functions comes at two levels, described below.

Generally available (GA): Completely supportive of all types of production use.

Preview: Not yet supported for production use, but expected to reach GA status in the future.

Different versions of Azure functions runtime are available as below which support differently in different versions.

Language 1x 2x 3x
C# GA (.NET Framework 4.7) GA (.NET Core 2.2) GA (.NET Core 3.1)
JavaScript GA (Node 6) GA (Node 10 & 8) GA (Node 12 & 10) Preview (Node 14)
F# GA (.NET Framework 4.7) GA (.NET Core 2.2) GA (.NET Core 3.1)
Java N/A GA (Java 8) GA (Java 11 & 8)
PowerShell N/A GA (PowerShell Core 6) GA (PowerShell 7 & Core 6)
Python N/A GA (Python 3.7 & 3.6) GA (Python 3.8, 3.7. & 3.6)
TypeScript N/A GA GA

3. Function

AWS Lambda

AWS Lambda functions can be described with a simple JSON or YAML resource model. To develop a new function, all you need is a deployment package to execute. Once the function runs, you can use Amazon CloudWatch Logs to keep track of logs and AWS X-Ray to trace requests.

{
 "Type" : "AWS::Lambda::Function",
 "Properties" : {
  "Code" : Code,
  "CodeSigningConfigArn" : String,
  "DeadLetterConfig" : DeadLetterConfig,
  "Description" : String,
  "Environment" : Environment,
  "FileSystemConfigs" : [ FileSystemConfig, ... ],
  "FunctionName" : String,
  "Handler" : String,
  "KmsKeyArn" : String,
  "Layers" : [ String, ... ],
  "MemorySize" : Integer,
  "ReservedConcurrentExecutions" : Integer,
  "Role" : String,
  "Runtime" : String,
  "Tags" : [ Tag, ... ],
  "Timeout" : Integer,
  "TracingConfig" : TracingConfig,
  "VpcConfig" : VpcConfig
 }
}
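
For reference, a minimal C# handler for such a function might look like the sketch below. This is an assumption-laden illustration, not code from the original post: it presumes the Amazon.Lambda.Core and Amazon.Lambda.Serialization.SystemTextJson NuGet packages, and the names must match the Handler property declared above (here SampleLambda::SampleLambda.Function::FunctionHandler).

using Amazon.Lambda.Core;

// Registers the JSON serializer that converts the incoming event into .NET types.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace SampleLambda
{
    public class Function
    {
        // Referenced by the "Handler" property as
        // "SampleLambda::SampleLambda.Function::FunctionHandler".
        public string FunctionHandler(string input, ILambdaContext context)
        {
            context.Logger.LogLine($"Received: {input}");
            return input?.ToUpperInvariant() ?? string.Empty;
        }
    }
}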

Azure Functions

In Azure Functions, you have triggers and bindings. Input and output bindings let you pull in or push out extra data while processing a request, without writing the plumbing yourself. These bindings offer better scalability at the cost of some extra complexity in both the APIs and the configuration.

Listed below are some common scenarios from actual day-to-day use of Azure Functions; the list is by no means exhaustive. A short sketch of the web API scenario follows the table.

If you want to… then…
Build a web API Implement an endpoint for your web applications using the HTTP trigger
Process file uploads Run code when a file is uploaded or changed in blob storage
Build a serverless workflow Chain a series of functions together using durable functions
Respond to database changes Run custom logic when a document is created or updated in Cosmos DB
Run scheduled tasks Execute code at set times
Create reliable message queue systems Process message queues using Queue Storage, Service Bus, or Event Hubs
Analyze IoT data streams Collect and process data from IoT devices
Process data in real-time Use Functions and SignalR to respond to data at the moment
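
As a rough, illustrative sketch of the first scenario (a web API endpoint implemented with an HTTP trigger), extended with a queue output binding, and assuming the in-process C# class library model with the Microsoft.NET.Sdk.Functions and Microsoft.Azure.WebJobs.Extensions.Storage packages; the SubmitOrder name and the "orders" queue are made up for illustration:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class OrderApi
{
    // The HTTP trigger exposes the web API endpoint; the Queue attribute is an
    // output binding that writes a message to the "orders" storage queue.
    [FunctionName("SubmitOrder")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "orders")] HttpRequest req,
        [Queue("orders")] out string queueMessage,
        ILogger log)
    {
        queueMessage = req.Query["id"];   // enqueue the order id taken from the query string
        log.LogInformation($"Order {queueMessage} queued.");
        return new OkObjectResult($"Queued order {queueMessage}");
    }
}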

4. Configurability

AWS Lambda

For an AWS Lambda deployment, you need to define the maximum memory allocation for the function. CPU capacity and the cost of running the function are proportional to the memory allocated, so it takes a bit of experimenting with the workload profile to determine the optimum size. All instances run on Amazon Linux irrespective of the size.

AWS Lambda provides multiple configuration options for function settings, triggers, and destinations. The memory for a function can be set from 128 MB to 10,240 MB in 1-MB increments. The time that Lambda grants a function to run defaults to 3 seconds and can be raised to a maximum of 900 seconds. Lambda also allows the creation of a database proxy for functions using an Amazon RDS DB instance or cluster.

Azure Functions

In the Consumption plan, one size fits all Azure Functions: around 1.5 GB of memory with a low-profile virtual core. You can choose between Windows and Linux as the hosting platform. The Premium plan offers larger instance sizes of up to 14 GB and four vCPUs, and reserved capacity is billed at a fixed hourly rate.

Azure functions provide with consumption plan and premium plan wherein:

  • The Premium plan comes with multiple instance sizes of up to 14 GB of memory and four vCPUs. It also allows much longer execution durations (60 minutes guaranteed) and supports high-density allocation of multiple function apps on the same plan.
  • The Consumption plan comes with a fixed memory size of 1.5 GB, the one size that fits all. It scales out immediately, even during high-load cycles, with little effect on each app’s durability, scalability, or reliability. As the host operating system, you can select between Windows and Linux.

5. Extensibility

AWS Lambda

Lambda extensions, available in preview, are a simple way to integrate Lambda with your preferred tools for monitoring, security, and governance.

This section explains how Lambda extensions operate, how to start using them, and which AWS Lambda Ready Partner extensions are available today.

Extensions help integrate the tools you already use with the Lambda environment. There is no intricate installation or setup, so it is now easier to use your favorite tools across your whole portfolio of applications. With this streamlined experience, you can cover use cases such as:

  • Gathering diagnostic information before, during, and after a function invocation.
  • Automatically instrumenting code without any code modifications.
  • Fetching configuration settings before the function is invoked.
  • Detecting and alerting on function activity through security agents that run as processes separate from the function.

Extensions are available from AWS Lambda Ready Partners, AWS itself, and open-source projects. AppDynamics, Dynatrace, HashiCorp, Thundra, Lumigo, Splunk SignalFx, Datadog, Epsagon, AWS AppConfig, Check Point, New Relic, and Amazon CloudWatch Lambda Insights are available today as extensions.

If you want to create your extensions then take a deeper view into the changes made in the Lambda lifecycle post that says “Building Extensions for AWS Lambda“.

Lambda extensions are easy to install and plug in, and can be used without complex configuration, setup, or management. You deploy extensions as Lambda layers using the AWS Management Console or the AWS Command Line Interface (AWS CLI).

Infrastructure-as-code tools can also be used, such as the AWS Serverless Application Model (AWS SAM), AWS CloudFormation, the Serverless Framework, and Terraform. You can use Stackery to simplify the incorporation of extensions from Epsagon, New Relic, Lumigo, and Thundra.

Lambda functions run in a sandbox called the execution environment. This is what separates them from other functions and provides them with resources, such as memory, according to the function’s configuration. Lambda freezes the execution environment between invocations. Using the Extensions API, users can now hook into and control what happens while the Lambda service freezes and thaws the execution environment.


Extensions are initialized before the runtime and the function. They then run in parallel with the function, get greater control during the function’s invocation, and can run logic during shutdown.

Various AWS Lambda Ready Partner extensions were already available at launch, including AppDynamics, Datadog, Dynatrace, Epsagon, HashiCorp Vault, Lumigo, Check Point CloudGuard, New Relic, Thundra, Splunk, AWS AppConfig, and Amazon CloudWatch Lambda Insights. You can also create your own extensions to integrate with your organization’s in-house tools; for instance, the Cloud Foundations team at Square has built its own extensions.

Azure Functions

Starting with Azure Functions version 2.x, the runtime includes only the HTTP and timer triggers by default; other bindings are packaged and versioned separately as extensions. For .NET class library apps, the binding extensions you use are installed into the project as NuGet packages.

For apps in other languages (Java, JavaScript, PowerShell, Python, and custom handlers), extension bundles allow the same bindings to be used without explicitly installing binding extensions or dealing with the .NET infrastructure.

What do you mean by Extension bundle?

An extension bundle is a way to add a predefined, compatible set of binding extensions to your function app. You enable extension bundles in the app’s host.json file. Different bundle versions are available, and each version contains a specific set of extensions that are validated to work together; you can select a bundle version based on the needs of your app.

The table below shows when and how to register bindings in each development environment.

Development environment Registration in Functions 1.x Registration in Functions 3.x/2.x
Azure portal Automatic Automatic*
Non-.NET languages Automatic Use extension bundles (recommended) or explicitly install extensions
C# class library using Visual Studio Use NuGet tools Use NuGet tools
C# class library using Visual Studio Code N/A Use .NET Core CLI

6. Function Triggers & Types

AWS Lambda

Triggers in Lambda are used to make functions process data automatically. It can be considered as a Lambda resource that is configured to invoke functions for every lifecycle event or external request. One function can have multiple triggers where each trigger will work as a separate and independent invocation for our function. So, each event passed to the function will have data only from the trigger or one client.

One Lambda function cannot trigger another Lambda function directly; to connect two functions, the first has to generate an event that triggers the second. The events that trigger a function are defined in its configuration. Common event sources that trigger a Lambda function include:

  1. API Gateway event: The Lambda function is triggered simply by calling the API Gateway. These are the most classic events.
  2. S3 event: Here you specify which actions, such as creating, updating, or modifying a file, will trigger the Lambda function. The S3 bucket contents are watched, and any change triggers the function (see the sketch after this list).
  3. DynamoDB event: Here Lambda is connected to DynamoDB, which makes things a little more involved. DynamoDB exposes its changes as a stream, like a queue: any change to the database is published to the stream, which then triggers the Lambda function. The trigger can fire in two ways: either whenever data is present in the stream at a certain time, or when a batch of events from the stream is processed together, which saves time.
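
To make the S3 case concrete, a hedged sketch of a C# handler for an S3 trigger is shown below; it assumes the Amazon.Lambda.Core and Amazon.Lambda.S3Events NuGet packages and a Lambda JSON serializer registered at the assembly level, as in the earlier handler sketch.

using Amazon.Lambda.Core;
using Amazon.Lambda.S3Events;

namespace SampleLambda
{
    public class S3Handler
    {
        // Invoked by the S3 trigger; each record carries the bucket name and the
        // object key of the file that was created or modified.
        public void FunctionHandler(S3Event s3Event, ILambdaContext context)
        {
            foreach (var record in s3Event.Records)
            {
                context.Logger.LogLine(
                    $"Object {record.S3.Object.Key} changed in bucket {record.S3.Bucket.Name}");
            }
        }
    }
}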

Azure Functions

Azure Functions are likewise invoked by triggers, and each function has exactly one trigger associated with it. The trigger defines how and when the function is invoked and executed. There is a multitude of triggers available for Azure Functions, such as:

  1. Queue trigger: Fires when new message arrived in Azure Storage Queue
  2. HTTP trigger: Fires for every HTTP request.
  3. Event Hub trigger: Fires for events delivered to an Azure Event Hub.
  4. Timer trigger: Execution time can be set using this trigger and it also can be called on a predefined schedule.
  5. Blob trigger: Fires when a new or changed blob is detected; the blob contents are passed as input to the function.
  6. Generic Webhook: For every HTTP request coming from any services supporting webhook this trigger will be fired.
  7. GitHub Webhook: For any event occurring in GitHub, a trigger will be fired.
  8. Service Bus Trigger: Fires when a new message comes from the service bus queue.

The table below shows example scenarios with the trigger and bindings involved; a minimal queue-triggered function is sketched after it.

Example Scenario Trigger Input Binding Output Binding
A new queue message arrives which runs a function to write to another queue. Queue* None Queue*
A scheduled job reads Blob Storage contents and creates a new Cosmos DB document Timer Blob Storage Cosmos DB
The Event Grid is used to read an image from Blob Storage and a document from Cosmos DB to send an email. Event Grid Blob Storage and Cosmos DB SendGrid
A webhook that uses Microsoft Graph to update an Excel sheet. HTTP None Microsoft Graph
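
As a minimal, illustrative sketch of the queue-trigger scenario (again assuming the in-process C# model and the storage bindings extension; the "orders" queue name and connection setting are placeholders):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderQueueProcessor
{
    // Fires when a new message arrives in the "orders" storage queue; the message
    // content is passed to the function as a string.
    [FunctionName("ProcessOrder")]
    public static void Run(
        [QueueTrigger("orders", Connection = "AzureWebJobsStorage")] string queueMessage,
        ILogger log)
    {
        log.LogInformation($"Processing queued order: {queueMessage}");
    }
}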

7. Scalability

Scalability plays an important role when a sudden massive workload appears. Many organizations make sure the AWS Lambda is used by both web APIs and web apps based on queues. It is also better for accelerated scale-out and large workload management. The bootstrapping delay effect – cold starts – with AWS Lambda is also less important.

AWS Lambda

AWS Lambda provides dynamic scaling applications allowing users to chain new features and also helps in incorporating custom serverless functions as a service. It also allows the user to perform external operations and a configurable timeout along with setting default action in case if it returns an error or takes more time than defined. For all incoming requests, AWS invokes and scales code automatically to support its rate without any additional configurations from your side. Due to its auto-scaling feature, the performance is as high as the rate of frequency of events and also it starts running code within a very less amount of time (milliseconds of an event).

By attaching a Lambda function to a lifecycle hook through SNS, you can add a nearly unlimited number of custom actions to your Auto Scaling group. Lambda can start multiple instances as required without configuration delays or lengthy deployments, because the code is stateless. Lambda is therefore a viable option for fast scale-out and for managing heavy workloads, for both web APIs and queue-based applications.

Azure Functions

In Azure Functions scaling is provided by Azure Function Scale Controller. This controller checks for all the queues in a timely manner and issues peek commands. Based on the messages received with these commands and also latencies of the same it will be deciding whether to use any additional virtual machines or not. In the case where the latencies are observed too high then a virtual machine will be added till the latency of the messages reaches the desired level.

This process will be continued regardless of the partition count of the queues. Azure Functions make sure that the services provided are always available in the case when the application requires them. And also configures applications for geo-redundancy. It makes sure that in the case when the primary program is unavailable it creates a secondary replica to avoid any major failure.

8. HTTP Triggers and Integration

AWS Lambda

Previously, AWS Lambda required Amazon API Gateway to manage HTTP traffic, which can be costly and comes with some hidden charges. Amazon has since introduced integration with Application Load Balancing, which is priced per hour and is efficient for high-load scenarios, making the cost easier to predict. For .NET, AWS offers C# support via .NET Core.

Azure Functions

Azure Functions come with HTTP endpoint integration out of the box, and there is no additional charge for using it, so you can build a serverless API that reacts to webhooks using an HTTP trigger at no extra cost.

With the .NET or .NET Core runtime, Azure gives you the ability to build such a function in C#. In short, Azure Functions’ HTTP endpoint integration is excellent and comes with no extra charges.

9. Identity and Access Management

AWS Lambda

AWS Identity and Access Management (IAM) helps administrators control access to Lambda and other AWS services securely. IAM administrators control who can be authenticated (signed in) and authorized (given permission) to use Lambda resources. IAM is an AWS service that you can use at no additional cost.

How you use IAM differs depending on the work you do in Lambda:

  1. Service user: Administrators provide you with the credentials and permissions needed to run Lambda functions for your tasks. You need additional permissions to perform more advanced tasks.
  2. Service administrator: Determines which Lambda features and resources other users should access, and submits requests to the IAM administrator to change the permissions of service users.
  3. IAM administrator: Writes the policies that manage access to Lambda.

Authenticating with identities

Authentication is how you sign in to AWS using your identity credentials. You can authenticate (sign in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role. You can also sign in through your company’s single sign-on or with credentials from providers such as Google or Facebook.

In those cases, your administrator has already set up identity federation using IAM roles: you access AWS indirectly, using credentials from another identity provider, by assuming a role.

To sign in as the root user, you use the account’s email address and password; as an IAM user, you use your IAM credentials. For programmatic access, you use root user or IAM access keys. AWS provides SDKs and command-line tools that cryptographically sign requests using your keys. If you do not use AWS tools, you must sign requests yourself using Signature Version 4, the protocol for authenticating API requests.

Irrespective of the main method for authentication, you are also allowed to offer additional security data for enhanced protection. Say, for instance, AWS can recommend you to use multi-factor authentication (MFA) to improve the safeguarding of the account.

Azure Functions

You can protect apps and data at the front end with Azure identity and access management solutions. Azure adds protection against malicious and risky sign-ins with conditional access rights, identity protection tools, and robust authentication options, without disrupting productivity, and it offers multiple options that fit your needs.

  1. If you want to provide customer identity and access management in the cloud, you can use Azure Active Directory External Identities to manage customer and partner identities, giving partners, clients, patients, or any other users outside your company secure digital experiences with the level of customization and control your business needs.
  2. You can use Azure Active Directory Domain Services if you want to join virtual machines running in Azure to a domain without deploying your own domain controllers, while continuing to use familiar authentication mechanisms such as LDAP, NTLM, and Kerberos.

10. Dependency Management

AWS Lambda

Regarding the runtime environment, AWS Lambda includes a range of libraries, such as the AWS SDK in the Node.js and Python runtimes, to help manage the dependencies in your function’s deployment bundle. Lambda regularly upgrades these libraries to deliver the latest features and security improvements, and these updates can introduce subtle changes to your Lambda function’s behavior. If you want full control over your function’s dependencies, you have to package all of them with your deployment package.

Azure Functions

Azure Functions supports the dependency injection software design pattern, which achieves inversion of control between classes and their dependencies. Dependency injection in Azure Functions is built on the standard .NET Core dependency injection features, so much of it will feel familiar; the differences lie in how you override dependencies and how configuration values behave with the Consumption plan.

To use dependency injection, install the Microsoft.Azure.Functions.Extensions package and version 1.0.28 or later of the Microsoft.NET.Sdk.Functions package from NuGet.
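
A minimal sketch of what that looks like, assuming the in-process model and the Microsoft.Azure.Functions.Extensions package (the service names here are invented for illustration):

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    // Runs once at host startup and registers services for constructor injection
    // into function classes.
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddSingleton<IGreetingService, GreetingService>();
        }
    }

    public interface IGreetingService { string Greet(string name); }

    public class GreetingService : IGreetingService
    {
        public string Greet(string name) => $"Hello, {name}";
    }
}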

11. Orchestrations

The method of configuring, administering and coordinating applications and resources automatically is known as orchestration. In the case of wider systems where manual handling and monitoring are complicated, this is primarily useful. For AWS Lambda and Azure features, let us go over how orchestration operates.

AWS Lambda

When orchestrating a series of individual Lambda functions, debugging failures can be difficult. To keep the benefits of serverless, AWS provides Step Functions, which let users orchestrate function invocations. Step Functions also provide error handling and retry logic, which helps manage the complexity of a distributed system as it grows.

This concept is useful mainly for executing long-running tasks. Orchestration service provided by Lambda connects the function together into serverless workflows which are generally known as state machines. When you use a function orchestrator, it will become simpler to run the lambda functions and multiple AWS services in sequence. In this flow, one can create and run the series of events where the output of one step acts as the input of the next. Each step will be executed as per the defined business logic and in order.

These AWS services also give you a safe place to store workflow state and make it easy and efficient to build container-based applications that know when and where they should execute, with flexible compute to spin up containers.

Azure Functions

In Azure Functions an extension called Durable Functions is available which can be used in writing the stateful functions in a serverless environment. By using this extension workflow of the application can be defined and also orchestrator functions can be written. An orchestrator function creates the workflow without any pre-declared schemas or designs.

These functions can call other durable functions synchronously or asynchronously where the output from the functions can be stored in any local variable. The process of execution can be automatically checkpointed where the function can await or yield. These functions can run really long depending upon the requirement. Local State in between is never lost while using these functions.

The tasks involved in orchestration with Azure Functions include scheduling, health monitoring, failover, scaling, networking, service discovery, and coordinated application upgrades.
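
As an illustrative sketch of an orchestrator, assuming the Durable Functions 2.x extension (Microsoft.Azure.WebJobs.Extensions.DurableTask) and the in-process C# model; the workflow and activity names are invented, and only one activity is shown:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class OrderWorkflow
{
    // Orchestrator: defines the workflow by calling activity functions in sequence,
    // passing the output of one step as the input of the next. Progress is
    // checkpointed automatically at every await.
    [FunctionName("RunOrderWorkflow")]
    public static async Task<string> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        string reserved = await context.CallActivityAsync<string>("ReserveStock", "order-1");
        string charged = await context.CallActivityAsync<string>("ChargePayment", reserved);
        return await context.CallActivityAsync<string>("SendConfirmation", charged);
    }

    // Activity: performs the actual work for one step of the workflow.
    [FunctionName("ReserveStock")]
    public static string ReserveStock([ActivityTrigger] string orderId) => $"reserved:{orderId}";
}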

12. AWS Lambda vs Azure Functions Pricing

Cost information tends to be retrospective: the concrete figures arrive only after the investment has been made, and the lack of up-front budgeting leaves decision-makers uncertain when they try to plan infrastructure costs well in advance. Our aim here is therefore to understand the cost structure and to be able to anticipate how the invoice shifts as the application and the business grow.

The pricing of Serverless is based on a pay-per-use model where both the services are divided based on two cost components:

  1. Pay-per-call
  2. Pay-per-GB*seconds

The second one is a metric that combines the consumed memory with execution time.

Overall, the two services are almost identical, including the price tag, though there are some minor differences:

AWS Lambda Pricing

Duration is metered from the time your code starts executing until it returns or otherwise terminates, rounded up to the nearest 1 ms, and this includes test calls made from the console.

The charge also depends on the total memory you allocate to your function. You select the amount of memory in the AWS Lambda resource model, and CPU power is allocated proportionally: increasing memory also increases the available CPU. Because CPU profiles differ between Lambda functions, comparable workloads can end up with different durations.

Price structure: it starts at $0.20 per 1M requests and $16.67 per million GB-seconds, with a monthly free tier of 1 million executions and 400,000 GB-s.

The duration cost depends on the memory allocated to the function. Memory can be set anywhere between 128 MB and 10,240 MB, in 1 MB increments. The table below shows the price per 1 ms of execution for a selection of memory sizes.

Memory (MB)    Price per 1 ms
128 $0.0000000021
512 $0.0000000083
1024 $0.0000000167
1536 $0.0000000250
2048 $0.0000000333
3072 $0.0000000500
4096 $0.0000000667
5120 $0.0000000833
6144 $0.0000001000
7168 $0.0000001167
8192 $0.0000001333
9216 $0.0000001500
10240 $0.0000001667

Azure Functions Pricing

Azure Functions bills on the average memory consumption of an execution. Executions can share instances, in which case memory is not charged multiple times for the shared portion, which can lead to a significant reduction. On the Consumption plan, the bill depends on resource consumption (GB-s) and the number of executions, while on the Premium plan you are billed for vCPU and GB-s resource consumption.

Pricing: $0.20 per 1M requests and $16.00 per million GB*seconds with the free grant of 1 million executions and 400,000 GB-s [*Free grants apply to paid, consumption subscriptions only].

Note: when you create a function app, a storage account is created by default, and it is not part of the free grant. Networking and storage are charged separately on a pay-per-use basis, as applicable.

With the Azure Functions Premium plan, users can expect enhanced performance and are charged per vCPU-hour ($0.173) and per GB-hour ($0.0123) of Premium plan consumption. Azure Functions can also run inside a regular App Service plan at the customer's normal App Service plan rates.

Below is an example comparing pricing for AWS Lambda and Azure Functions.

Let's assume a function that consumes 512 MB of memory, runs for one second per execution, and executes 3 million times during a month.

Below is the billing calculation for that month:

  AWS Lambda Azure Function
Resource consumption – In seconds
Executions 3 million 3 million
Execution duration 1 second 1 second
Resource consumption Total 3 million seconds (Total: Executions * Execution duration) 3 million seconds (Total: Executions * Execution duration)
Resource consumption in GB-s
Resource consumption converted to GBs 512 MB / 1,024 MB 512 MB / 1,024 MB
Execution time 3 million seconds 3 million seconds
Total GB-s 1.5 million GB-s (Total: Resource consumption * Execution time) 1.5 million GB-s (Total: Resource consumption * Execution time)
Billable resource consumption
Resource consumption 1.5 million GB-s 1.5 million GB-s
Monthly free grant 400,000 GB-s 400,000 GB-s
Total billable consumption 1.1 million GB-s (Total: Resource consumption – Monthly free grant) 1.1 million GB-s (Total: Resource consumption – Monthly free grant)
Monthly resource consumption cost
Billable resource consumption 1.1 million GB-s 1.1 million GB-s
Resource consumption price $0.00001667/GB-s $0.000016/GB-s
Total cost $18.34 (Billable resource consumption * Resource consumption price) $17.60 (Billable resource consumption * Resource consumption price)

Below is an example of the executions billing calculation (both services cost the same here):

  AWS Lambda Azure Function
Monthly Billable executions
Total executions 3 million executions 3 million executions
Free executions 1 million executions 1 million executions
Billable executions 2 million executions (Total: Monthly executions – Free executions) 2 million executions (Total: Monthly executions – Free executions)
Monthly executions cost
Billable executions 2 million executions 2 million executions
Price per million executions $0.20 $0.20
Execution cost per month $0.40 (Billable executions * Price per million executions) $0.40 (Billable executions * Price per million executions)

Consumption Summary:

  AWS Lambda Azure Function
Total monthly cost
Resource consumption $18.34 $17.60
Executions $0.40 $0.40
Total $18.74 $18.00
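For readers who want to reproduce the arithmetic, here is a minimal C# sketch that recomputes the worked example above using the per-request and per-GB-s rates quoted earlier in this section. The rates, free grants, and helper names are assumptions taken from this article for illustration; always verify against the current pricing pages.

using System;

class ServerlessCostEstimate
{
    // Computes the monthly bill for the worked example in the tables above.
    static double MonthlyCost(
        double executions, double durationSeconds, double memoryGb,
        double pricePerGbSecond, double pricePerMillionRequests,
        double freeGbSeconds = 400_000, double freeExecutions = 1_000_000)
    {
        double gbSeconds = executions * durationSeconds * memoryGb;
        double billableGbSeconds = Math.Max(0, gbSeconds - freeGbSeconds);
        double billableExecutions = Math.Max(0, executions - freeExecutions);

        double resourceCost = billableGbSeconds * pricePerGbSecond;
        double requestCost = billableExecutions / 1_000_000 * pricePerMillionRequests;
        return resourceCost + requestCost;
    }

    static void Main()
    {
        // 3 million executions, 1 second each, 512 MB = 0.5 GB
        double aws = MonthlyCost(3_000_000, 1, 0.5, 0.00001667, 0.20);
        double azure = MonthlyCost(3_000_000, 1, 0.5, 0.000016, 0.20);
        Console.WriteLine($"AWS Lambda:      ${aws:F2}");   // roughly $18.74
        Console.WriteLine($"Azure Functions: ${azure:F2}"); // roughly $18.00
    }
}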

Important pricing difference:

  • Data transfer between Lambda and its storage is charged as an additional fee when it crosses regions, whereas in Azure Functions inbound data transfers are free but outbound transfers from one datacenter to another cloud environment are not.
  • Provisioned Concurrency keeps functions initialized so they can handle requests more quickly, which is why it carries higher rates in Lambda; the cost depends on the memory configured for the function and the consumption during execution time. Azure Functions offers a similar capability in its Premium plan, with enhanced function performance and additional virtual networking.

13. Storage

AWS Lambda

Lambda provides multiple storage options to meet developers' needs. These include Amazon S3 and Amazon EFS, as well as temporary storage (/tmp) and Lambda layers. Let us look at these options briefly:

  Amazon S3 /tmp Lambda Layers Amazon EFS
Maximum size Elastic 512 MB 50 MB (direct upload; larger if loaded from S3) Elastic
Persistence Durable Ephemeral Durable Durable
Content Dynamic Dynamic Static Dynamic
Storage type Object File system Archive File system
Lambda event source integration Native N/A N/A N/A
Operations supported Atomic with versioning Any file system Operations Immutable Any file system operation
Object tagging Y N N N
Object metadata Y N N N
Pricing model Storage + requests + data transfer Included in Lambda Included in Lambda Storage + data transfer + throughput
Sharing/permissions Model IAM Function-only IAM IAM + NFS
Source for AWS Glue Y N N N
Source for Amazon QuickSight Y N N N
Relative data access speed from Lambda Fast Faster Faster Very Fast

Azure Functions

Azure provides a storage service known as an Azure Storage account. By default, Azure Storage encrypts data using Microsoft-managed keys. When you create a new function app, a storage account must be created for it. Azure Storage supports Blob, File, Queue, and Table storage, although some storage account types do not support queues and tables. Azure Functions relies on Azure Storage for operations such as managing triggers and logging function executions.

There are several types of storage accounts, namely premium, general-purpose, and blob-only storage accounts. Which one to choose? You select it while creating the function app, and you can also point the app at an existing storage account. The Azure portal is used to create the storage for Azure Functions. For good performance, the function app and its storage account should be in the same region. You can share one storage account across multiple function apps, but keeping an individual storage account per function app gives better performance, and you should always create a separate storage account when a function app uses Durable Functions or Event Hub triggers.

14. AWS Lambda vs Azure Functions Performance

1. Cold Start

Instances of cloud functions are created on demand, so when the first request is handled by a new instance, the response time increases; this is known as a cold start. The first cold start happens when the first request arrives after deployment. Once that request has been processed, the instance stays warm and can be reused for subsequent requests. Both AWS Lambda and Azure Functions offer pre-warmed instances to avoid cold starts in their premium and dedicated plans.

AWS Lambda

AWS recycles an idle instance after a fixed period of about 10 minutes. Comparing cold starts between new and existing instances, the difference was roughly the same.

Azure Function

Azure recycles an idle instance after roughly 20 minutes. New instances are allocated about 1.5 GB of memory, so the median cold start latency tends to be higher than AWS Lambda's.

2. Concurrency and Isolation

Both AWS Lambda and Azure Functions can run multiple executions of the same function concurrently, each handling its own incoming event.

AWS Lambda

Performance is consistent and predictable because each execution is isolated with its own pool of memory and CPU cycles: Lambda always uses a separate instance for each concurrent execution.

Azure Function

Azure Functions can allocate multiple simultaneous executions to the same virtual node, so performance is less predictable and less stable. For example, if one execution is idle waiting for a response from a queue, another execution may use resources that would otherwise be wasted. In some cases, executions starved of resources can hurt the overall processing time and performance by competing for the shared pool.

15. Deployment

AWS Lambda

AWS Lambda deploys functions onto servers in the Lambda environment running Amazon Linux. A Lambda function can interact with all the other services in the AWS cloud, but in terms of deployment targets it is limited to the Lambda service itself.

Azure Functions

Deployment of Azure Functions is more flexible: users can run the deployed code on the Azure Functions service or package it in Docker containers, which gives developers more control over the execution environment. By integrating with Kubernetes, event-driven autoscaling can be performed and the packaged functions can be deployed to a Kubernetes cluster.

Azure Functions can be deployed to either Linux or Windows servers. In general, the host operating system should not make a difference, but it becomes an important factor if your functions have dependencies or code specific to one OS (for example, libraries that only run on a particular operating system).

16. Examples

AWS Lambda

  1. Netflix – Among the many AWS Lambda use cases, Netflix, the world’s leading Internet television network, also uses AWS Lambda. Its storage, huge customer base, fast processing, and high quality rely on the speed of the AWS platform. “From years of managing a sophisticated and dynamic infrastructure, we’re excited by AWS Lambda and the prospect of an evolution in the way we build and manage our applications,” said Neil Hunt, Chief Product Officer, Netflix. (https://www.youtube.com/watch?v=SorHbAiZ918) “From easier media transcoding and faster monitoring, from disaster recovery to improved security and compliance, AWS Lambda promises to help us develop dynamic event-driven computing patterns.”
  2. The Seattle Times – A family-owned news media business serving the Pacific Northwest, built on AWS Lambda. The Seattle Times has won numerous awards, and seattletimes.com attracts millions of visitors per month, standing as the region’s biggest local digital network. For more details visit https://aws.amazon.com/solutions/case-studies/the-seattle-times/
  3. Financial Engines – Offers investment and financial advice. AWS Lambda helps it increase processing speed and handle request rates of up to 60,000 per minute.

For more case-studies visit https://aws.amazon.com/solutions/case-studies/

Azure Functions

  1. Fujifilm – FUJIFILM Software achieved great results by moving its popular image file management and sharing service to the Azure platform, delivering customer satisfaction through high reliability and lower latency.
  2. Relativity – The company improved its performance by building a monitoring solution with Azure Functions, which helps identify and resolve performance issues.

For more case-studies visit https://azure.microsoft.com/en-in/resources/customer-stories/

What to Choose: AWS Lambda or Azure Functions?

Common Advantages:

  • No infrastructure
  • Pay only when invoked
  • No deploy, no server, great sleep
  • Easy to deploy

Advantages of AWS Lambda:

  • Cheap
  • Quick
  • Stateless
  • Extensive API
  • Event-Driven Governance
  • Autoscale and cost-effective
  • VPC Support
  • Integrated with various AWS services
  • Better graphical view

Advantages of Azure Function:

  • Great developer experience for C#
  • Multiple languages supported
  • Great debugging support
  • Easy Scalability
  • Can be used as a lightweight HTTPS service
  • Azure component events for Storage, services, etc
  • Event-driven
  • Webhooks

It is quite clear from the comparison of Azure Functions vs AWS Lambda that serverless computing empowers customers to build applications faster and at lower cost, whichever approach you take. It also makes it possible to ship designs sooner and with fewer glitches. Compared with conventional client-server strategies, it saves effort for the business, though it is not without its own individual characteristics.

Whether you pick a Linux-based platform like AWS Lambda or a Windows-oriented suite like Microsoft Azure, it is vital for businesses to have an in-depth understanding of their server-side software and business logic. A considerable benefit of serverless development is that small revisions require little effort in the code you write, and one service can be scaled, or swapped for the other, with only minor modifications to inputs and outputs.

AWS Lambda and Azure Functions are very similar services, but the devil is in the details: nearly every angle reveals some important differences between the two. There is no universally 'good' or 'bad' way to choose between these FaaS providers. The differences covered in this article are not exhaustive; covering every aspect in depth would require a separate article for each. It is unlikely that this choice alone will determine your results.

In other words, it does not matter much which of the two you choose, as long as you choose one of them: either way you will be reaping the benefits of a hyper-scalable cloud architecture that can meet the growing needs of your business.

Selecting a serverless solution can feel challenging and may not pay off immediately. Factors to weigh include your budget, the project itself, and the time frame over which the provider must support it.

In conclusion, choose the option that suits your needs best!

Scaling Face-Off

To compare the scaling of both cloud technologies, we ran the same script against each individually and recorded the I/O to demonstrate the results.

Plan Details:
Azure: Consumption plan
AWS: Free Tier plan

Integration:
Azure: Created one azure function app with request type HTTP Trigger
AWS: Created one Lambda function and called via API Gateway

Function Details:
Language: .NET Core 3.1
Response: a long dynamic string (700+ characters)
The function consists of a small but compute-intensive piece of code that returns this string. We ran the test from JMeter at random time intervals with 800 users; the results and configuration details are below:

Environment Details:
Internet Bandwidth:
Download: 75.75 Mbps
Upload: 76.32 Mbps

JMeter Configurations:
Number of PCs: 3 (1 Master (Server) and 2 Slave Machines)
PCs Configurations: Core I5, 8 GB RAM (All of 3)
Function Hit Type: Uniform Random Timer
Random delay maximum: 3000 ms
Constant delay offset: 300 ms
Number of Threads (Users): 800

  • 400 Threads each
  • Single Slave machine can hit 400 Users in 60 seconds
  • So, 2 Slave machines can hit 800 users in 60 seconds

Ramp-Up Period: 60 second
Number of Hits: 800 (1 hit per user)
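The article does not show the exact compute code used in the test, but purely for illustration, a hypothetical function body along these lines would produce the kind of 700+ character dynamic response described above; every name here is a made-up placeholder.

using System;
using System.Text;

public static class LoadTestPayload
{
    // Hypothetical stand-in for the test function's body: does a little
    // CPU work and returns a dynamic string longer than 700 characters.
    public static string Build()
    {
        var sb = new StringBuilder();
        var rnd = new Random();

        for (int i = 0; i < 40; i++)
        {
            // Some arbitrary computation so the call is not a pure no-op.
            double value = Math.Sqrt(rnd.Next(1, 100000)) * Math.PI;
            sb.Append($"chunk-{i:D2}:{value:F6};");
        }

        // Roughly 800 characters in total, varying per call.
        return $"{DateTime.UtcNow:O}|{sb}";
    }
}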

Comparison Details From JMeter

Aggregate Report

  • Azure (screenshot: Azure aggregate report)
  • AWS (screenshot: AWS aggregate report)
As you can see, the AWS error % is 1.1, which corresponds to 9 failed hits out of 800.

Aggregate Graph

  • With Add Details (Azure): (screenshot: Azure aggregate)
  • With Add Details (AWS): (screenshot: AWS aggregate)
  • Average (Azure): (screenshot: Azure average)
  • Average (AWS): (screenshot: AWS average)

Response Time Graph

  • Azure: (screenshot: Azure response time)
  • AWS: (screenshot: AWS response time)

Graph Results

  • Azure: (screenshot: Azure results)
  • AWS: (screenshot: AWS results)

Portal Comparisons

  • Azure
    • Total 800 requests
    • Average duration: 68.8 ms (screenshot: Azure overall)
  • AWS
    • Total 800 requests, among which 9 failed, matching the JMeter results. (screenshot: AWS overall)
    • Failed requests (screenshot: failed requests)

Duration

  • Azure
    • Total average duration (aggregated per entry, calculated by average or 50th/95th/99th percentile). The average is 68.8 ms. (screenshot: Azure duration)
    • Graph including 50th/95th/99th percentile (screenshot: Azure duration graph)
  • AWS
    • Total average duration (aggregated per entry, calculated by average or 50th/95th/99th percentile). The average is 30.7 ms. (screenshot: AWS duration)
    • Maximum duration: 415.46 ms, average duration: 30.68 ms, minimum duration: 2.43 ms (screenshot: AWS average duration)

Matrix Comparison

  • Azure (screenshot: Azure matrix)
  • AWS (screenshot: AWS matrix)
Final Stats Azure AWS
User Load 800 users 800 users
Random delay maximum(ms) 3000 3000
Constant delay offset (ms) 300 300
Passed Requests 100% (800) 98.9% (791)
Failed Requests 0% 1.1% (9)
Avg Response Time (ms) 68.8 30.7

Conclusion

In this extensive blog post, we have tried to cover all the essential aspects of comparing AWS Lambda vs Azure Functions. It should help developers from any software development company find an ideal serverless computing platform, and if your business or team is unsure which serverless platform or service to pick, it can serve as a starting point. Considering all the scenarios and functions we tested, it is quite visible that AWS Lambda seems faster than Azure Functions.

FAQs:

Is Azure Functions Equivalent to AWS Lambda?

When it comes to comparing Azure Functions and AWS Lambda, the two platforms generally serve the same purpose, so it is fair to say that AWS Lambda is similar to Azure Functions. That said, they are not identical in every respect; some things differ depending on usage and the project at hand. Having said that, both platforms support both remote and local testing.

What is the Difference between Azure Functions and AWS Lambda?

One major difference is that AWS Lambda supports an unlimited number of functions per project and allows a developer 1,000 concurrent executions per account by default, whereas Azure Functions offers 1,000 functions per project but with only 400 executions.

How do I Migrate AWS Lambda to Azure Functions?

First, use the cross-platform Azure Functions Core Tools to create a local functions project, then run the project. After that, use the cross-platform Visual Studio Code to build and debug the project. Finally, deploy the fully migrated function directly from Visual Studio Code.

The post AWS Lambda vs Azure Functions: Serverless Computing appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/aws-lambda-vs-azure-functions/feed/ 32
What is an Execution Plan in SQL Server and How to Use It? https://www.tatvasoft.com/blog/optimize-sql-query/ https://www.tatvasoft.com/blog/optimize-sql-query/#respond Fri, 22 Jan 2021 13:31:07 +0000 https://www.tatvasoft.com/blog/?p=4420 In a competitive IT industry, the primary challenge is to make the product available in the market for users so that they don’t opt for any other alternatives. The development of products is aimed to benefit the customers with great performance and latest technology.

The post What is an Execution Plan in SQL Server and How to Use It? appeared first on TatvaSoft Blog.

]]>
In a competitive IT industry, the primary challenge is to make the product available in the market for users so that they don’t opt for any other alternatives. The development of products is aimed to benefit the customers with great performance and latest technology. The developed product and its performance are good enough for users to make the best use of technology. Of course, most of the products/applications might be dealing with heavy data flow between back-end services and database servers.

An ideal and responsible team of dedicated software developers considers it a duty to write efficient queries that return quick, optimized results. Beyond that, we also have to optimize SQL queries that perform slowly and cannot deliver results fast enough.

SQL Server query performance tuning is seen as a primary concern because of the constant battle of database managers to achieve the highest performance and the lowest use of resources for their managed systems.

SQL execution plans are the first and foremost tool a database administrator reaches for when tuning query output. The plan tells us what to tune by showing, like a road map, how the tasks are performed internally during execution.

1. Best Practices to Get High Performance in SQL Queries

Before we jump into the execution plan, let’s go through some best practices to write high performing SQL queries.

  • Query only the required columns. This ensures the database fetches just the necessary columns and nothing extra, so efficiency and requirements are better fulfilled. Unless you genuinely need every column, do NOT use SELECT * in your queries.
  • Subqueries should be avoided. Perform functions to join or write as needed.
  • Utilize proper indexes (for faster search results).
  • Often be aware of NULL events in your results.
  • Often use table aliases where there is more than one source involved in your SQL statement. It increases readability, maintenance and assures that the proper columns are collected.
  • In the ORDER BY clause, do not use column numbers. The reason is usability and maintainability as much as performance: it may not seem a concern when you first create the database, but as time passes and new columns are added to the table or the SELECT statement changes, ordering by column number becomes unpredictable and can sort on the wrong column.
  • Always specify the column list in INSERT statements. We advise this so that developers are not caught out by table modifications: newly added columns simply receive NULL (or default) values, and the impact of the change can be easily identified.
  • For T-SQL code, never use double quotes.
  • Do not prefix stored procedure names with “sp_”. That prefix is reserved by SQL Server for system procedures, so it is better to follow your own unique naming pattern that lets written procedures be differentiated easily.

2. What is SQL Execution Plan?

As discussed, the execution plan in SQL Server Management Studio is a graphical representation of the different operations performed by the SQL query processor. When you execute a query, the query processor generates an execution plan along with it, and identifying the query's workflow, operators, and components begins with that plan. SQL Server provides three types of execution plans, i.e. the estimated plan, the actual plan, and the cached plan, but today we will discuss the two basic ones.

  • Estimated Execution Plan

The moment you submit the query to SQL Server, the query optimizer can show a graphical plan displaying the estimated execution steps and their relative cost. These are the steps the database would take if it ran the query, so the estimated plan gives you an advance picture of the work involved before you actually execute it.

  1. This type of plan is generated before executing the query or we can say during the compilation time.
  2. It is just an estimation by the query processor.
  3. No runtime information is provided with this.
  • Actual Execution Plan

This gets activated when you enable the actual execution plan option before running the query. It is used to troubleshoot performance concerns observed during query execution and to improve overall performance.

  1. This type of plan is generated once the query gets executed or we can say after the run time. The actual execution plan shows the steps SQL Server takes to execute the query.
  2. It is giving actual information by the query processor.
  3. It provides all information like which are the steps involved when we execute that query.

SQL Server Execution Plan Formats

The SQL Server Management Studio will generate a SQL Server execution plan in graphical format by default. However you have the option of viewing execution plans in three different formats:

  1. Graphical (generated by default)
  2. XML
  3. Text

3. How Do You Create a SQL Execution Plan?

An execution plan is generated whenever you execute a query; the plan accompanies the query itself. Looking at the plan for a particular query gives insight into how the SQL Server query optimizer and query engine will handle it, and an estimated plan can be produced in advance based on SQL Server's statistics. A rough analogy is a mechanic: a mechanic inspects your vehicle and gives you an estimate based on observation, while factors such as the vehicle's condition, the time and materials required, and the labour rate influence the final bill. Similarly, SQL Server estimates the execution plan up front, and the actual cost and time may vary a little from the estimate once the query really runs.

In SSMS, you can use the menu, toolbar buttons, or shortcut keys to display plans. A SQL Server execution plan is important for checking performance issues during query execution. The process below explains how to obtain the estimated and actual execution plans.

  • Shortcut key: There is a shortcut key available to check for the Estimated Execution plan. You can press Ctrl+L after writing the query in the Query window.
  • In the Context Menu of the Query Window, you will find a menu on the toolbar in SQL Server Management Studio with the name “Display Estimated Execution Plan”. It will work the same way as the above step will do. It will display the Estimated Execution Plan. Also, in the Query menu, there is an option available named “Display Estimated Execution plan”.
  • In the toolbar, there is a button (shown in the image) that enables the Actual Execution Plan. A shortcut key is also available for this: Ctrl + M.

(Note: if the button is not visible, check Add or Remove Buttons. The Query menu also has an option named “Include Actual Execution Plan”.)
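Besides SSMS, the estimated plan can also be fetched programmatically. The C# sketch below (the connection string and query are placeholders, not from this article) turns on SHOWPLAN_XML for the session, so SQL Server returns the plan XML instead of executing the statement.

using System;
using System.Data.SqlClient;

class ShowPlanDemo
{
    static void Main()
    {
        // Placeholder connection string and query — adjust to your environment.
        const string connStr = "Server=.;Database=MyDatabase;Integrated Security=true";
        const string query = "SELECT EmployeeID, EmployeeName FROM Employee WHERE DepartmentID = 3";

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // With SHOWPLAN_XML ON, SQL Server returns the estimated plan as XML
            // instead of executing the statement that follows.
            using (var setCmd = new SqlCommand("SET SHOWPLAN_XML ON;", conn))
                setCmd.ExecuteNonQuery();

            using (var cmd = new SqlCommand(query, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader[0]); // the estimated plan XML
            }
        }
    }
}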

Estimated Execution Plans in SQL Server Management Studio Example

Estimated Execution Plan Example

Actual Execution Plans in SQL Server Management Studio Example

Actual Execution Plan Example

As you can see in the example it is showing the Actual Execution Plan in SQL Server Management Studio. We have executed the query and there are 3 tabs available. If you check the difference it will show you the time taken for the scan. There are 3 parts in which we can divide our execution plan.

  1. Clustered Index Scan (Clustered)
  2. Sort Operation
  3. Select Operation

4. What are the Components of the SQL Execution Plan?

Since there is no data in the table and the query is simple, the estimated and actual execution plans are the same; with bigger queries you will see a difference, and you can use that difference to optimize your query.

When you hover on the Clustered Index Scan there will be detailed results available. Have a look at the below screenshot.

Clustered Index Scan

If you see in the image above, SQL has provided different details. We will discuss everything in detail.

  1. Physical Operation: Physical Operators are the objects that perform such operations. Some of the Examples are Index Seek, Clustered Index Scan etc. Logical Operators are giving direction to this kind of operator to perform the defined operations.
  2. Logical Operation: In Physical Operation, Our Software Developers use the work of Logical Operators. It also gives a clear picture of what query is necessary to process and how it will perform.
  3. Actual Execution Mode: This shows the execution mode actually used by the processing engine to run the query.
  4. Estimated Execution Mode: It is similar to the above plan but the only difference is it is showing estimated value.
  5. Storage: The storage structure the optimizer uses for the output produced by this operation.
  6. Actual figures for all executions: The actual plan shows the real row counts and figures from the execution; if the condition matches no records, nothing is returned.
  7. Actual Number of Batches: This will exist only in the Actual execution plan. If it’s a Batch query then it will return No of Batches.
  8. Estimated Operational Cost: If there are any other operational costs involved in our query then it will do the calculation for that and will be displayed here.
  9. Estimated I/O Cost: It shows the accurate number of input and output costs of the result set.
  10. Estimated CPU Cost: It estimates the cost to execute the operations with the CPU.
  11. Estimated Subtree Cost: The execution plan forms a tree, and this value is the estimated cumulative cost of the current node plus everything below it in that tree.
  12. Number of Executions: This will exist only in the Actual execution plan. In the single batch, the number of executions that can be handled by the Optimizer.
  13. Estimated Number of Execution: Similar to the above one but the only difference is that it will give you the Estimated value.
  14. Estimated Number of Rows per Execution: This is just an estimation from Optimizer that how many rows are going to be returned.
  15. Estimated Number of Rows to be Read: This is just an estimation from Optimizer that how many rows are going to be read.
  16. Estimated Row Size: As the name suggests, it is showing you the estimated row size of the storage.
  17. Actual Rebinds: Present only in the actual execution plan. It shows how many times an object had to be re-evaluated during processing.
  18. Actual Rewinds: Also present only in the actual execution plan. It counts how many times an existing inner result set was reused without being re-evaluated.
  19. In a correlated operation, the inner result dataset is executed repeatedly, which is where these rebind and rewind counts come from.
  20. Ordered: It determines if the dataset on which operation is performed has implemented sorting or not. If you check in the above example it is giving you False because till now sorting is not done. Once sorting will be done then it will be true.
  21. Node ID: This follows a unique type of numbering from Right to left and then the usual Top to bottom. So, we can say that the Bottom Right will have NodeID=1 and the Top Left has Maximum Node based on the Execution Plan Tree.

In addition, there are a few more components. We can check their information below.

  1. Predicate: The filter condition, taken from the WHERE clause of the SQL statement, that the operator applies.
  2. Object: Defines the Table on which we have performed that query or operation.
  3. Output List: Defines the selected columns which will be displayed in the Dataset or result set.

Search Data in the Table

We are now going to delve deeper into this with an example so before we begin let’s brush up our knowledge about indexes and comparison between those.

Table Scan: In this type of scanning, the scan is comprehensively executed in a way that it touches every row of the table, irrespective of whether it qualifies the given search result or not. This type of scan is an efficient way to check a small table in which the majority of the rows would qualify for the predicate. The estimated cost would be proportional to the total number of rows in the table.

Index Scan: If the table has a clustered index, an index scan reads all of its rows and columns in index order. It suits queries that touch most or all of the rows of the table, i.e. a query without a WHERE or HAVING clause. During optimization, the query optimizer chooses the best of the available indexes, and based on that choice the whole index is scanned.

The optimizer makes this choice using the statistical information kept about the database.

Once the right index is chosen, the next step is to navigate its tree structure down to the matching data points and let the SQL query engine extract the exact records.

One major difference between a full table scan and an index scan is that, because the data in the index tree is sorted, the database engine knows when it has reached the end of the range it is looking for. It can then finish the query or, if appropriate, move on to the next range of data.

Index Seek: The cost is proportional to the number of qualifying rows and the pages that contain them, rather than the total number of rows in the table, since only those rows and pages are touched. Of these three access methods, this is the quickest.

5. How can Execution Plans Improve Query Performance in SQL Server?

Execution Plan

From the above image displayed in the execution plan, there are 4 different queries with some minor changes. Let’s take a look at each of these one by one and try to understand what improvements we can make by observing execution plans.

Query 1: SELECT DepartmentID, DepartmentName FROM Department WHERE DepartmentName = ‘HR’

This is a table that does not have any primary key defined and hence it does not have any clustered index created. This performs a complete table scan which is visible in the first execution plan. This query takes the maximum time if the number of records in the table are in millions.

Query 2: SELECT EmployeeID, EmployeeName, DepartmentID, BirthDate FROM Employee WHERE DepartmentID = 3

This query performs an index scan, which is a little faster than a table scan because it reads the data in the sorted order in which the clustered index stores it. It will still be slow if the table holds a huge amount of data.

Query 3: SELECT * FROM Employee WHERE BirthDate = ‘1982-08-07’ I have created a non-clustered index on the BirthDate column, as visible in the first image above. Please note the columns shown on the Included tab in the image below: an index seek can only be performed if the query selects just the key and included columns and the WHERE clause is on the BirthDate column.

Still, the third execution plan shows an index scan. Confused? Refer to the first bullet point in the best practices section: in the SELECT clause we wrote * instead of the specific columns. This prevented an index seek, so we did not get the expected performance despite having created a non-clustered index in the database. Now, let's check the final query.

Final query

Query 4: SELECT EmployeeID, EmployeeName, DepartmentID, BirthDate FROM Employee WHERE BirthDate = ‘1982-08-07’

You will note that the Index seek has been used for this query as we followed the best practices and we have non-clustered indexes created accurately.

To identify performance improvements, one should be aware of best practices and in-depth knowledge about the SQL server. The execution plan helps to find out missing things by its graphical representation so one could easily find out the action items to improve performance in a particular query. I hope this example would have given the idea about how to find out improvements in the query.

6. Conclusion

In this article, we did extensive research and gained insights into the execution plan: how to produce one, the difference between the estimated and actual execution plans, and the various components of the plan's nodes. We also learnt how execution plans help find improvements in a query. The tool is very useful for DBAs dealing with day-to-day challenges, and whenever a large data set becomes a concern you can check the execution plan again and optimize the query as needed.

More Useful Resources:
How to Configure Database Mirroring for SQL Server
How to compare two SQL Server Databases using SQL Server Data Tools

The post What is an Execution Plan in SQL Server and How to Use It? appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/optimize-sql-query/feed/ 0
How to Build a Serverless Web App in Azure? https://www.tatvasoft.com/blog/serverless-web-application-in-azure/ https://www.tatvasoft.com/blog/serverless-web-application-in-azure/#respond Tue, 19 Jan 2021 07:29:08 +0000 https://www.tatvasoft.com/blog/?p=4267 In this article, we will learn about how to build a web application without managing web servers in Azure. We will also understand how to create serverless web applications using .NET core. Core Website (front end) and with Azure Functions (as an API).

The post How to Build a Serverless Web App in Azure? appeared first on TatvaSoft Blog.

]]>
In this article, we will learn how to build a web application in Azure without managing web servers. We will create a serverless web application using a .NET Core website (front end) together with Azure Functions (as an API). This blog focuses on the steps for building your first serverless web application architecture with Azure.

Now let’s start to understand the term “serverless”…

1. What is a Serverless Web Application?

In the software and technology sector, serverless means that your application does not require you to provision or manage server capacity yourself. Serverless as a whole is an umbrella term, and its two main flavours have distinct names but similar intent.

Read more about Serverless Architecture

Backend as a service (BaaS)

Backend as a Service provides ready-made cloud services, typically databases and storage, that client applications connect to directly through APIs.

Functions as a service (FaaS)

With Function as a Service, a piece of code is deployed to the cloud and executed there on demand. It runs in a hosting environment that completely abstracts the underlying servers away from the code.

How to Build a Serverless Web App in Azure

2. Why Should We Create a Serverless Application?

Developing a serverless application enables you to concentrate on your application code rather than on maintaining and running infrastructure. There is also no need to think about provisioning or configuring servers, as the cloud provider manages that for you. Serverless apps are preferred over typical server-hosted apps for several reasons; let us look at a few of them:

  1. Low maintenance
  2. Low cost
  3. Easy to scale
  4. No infrastructure management

3. What are Azure Functions?

Azure Functions is an event-driven, compute-on-demand experience that extends the existing Azure application platform. It can run code triggered by events occurring in Azure services, third-party services, or on-premises systems.

Using Azure Functions, developers can easily connect data from different sources and build messaging solutions, which makes it simpler to process and react to events. Developers can also build HTTP-based API endpoints that are easily accessible from a wide range of mobile and IoT applications. Azure Functions scales on demand and uses a pay-as-you-go model, so you pay only for what you have consumed and nothing extra.

4. How Do I Create an SQL Database in Azure?

The Microsoft Azure SQL Database is a cloud-based service, also known as Database as a Service. This platform allows clients to host relational databases in the cloud and use them without installing any special hardware or software, and it offers several modern features that benefit the business in the long run. Listed below are some of the advantages of Azure SQL Database that will change the functioning of your business.

  • Consistent backup & Retention– It allows users to track their database records for up to 10 years.
  • Replication of data– This function enables the readability of secondary databases at any global location.
  • Auto-Tuning- It is enabled with AI to rightly perform performance tuning which fixes the concerns automatically.
  • Consistency in Business 
  • Improved Data Availability
  • Scalable database- This is the most powerful feature that upscales the database as per the need
  • Automatic Smart backups

To create the Azure SQL database, go to the Azure Management Portal. Type “SQL databases” in the search bar and select the SQL databases option; existing databases are displayed in a dashboard from which you can manage them.

Azure Services
Add SQL DataBase

Now click the Add link on the SQL databases page. It opens the Create SQL Database page.

Create SQL DataBase
  1. To create a new resource group, click on the Create New button.
  2. You can now give a name to the created resource group.
  3. Click OK to finalize the creation of this group.
Create SQL Database 2

The database name cannot contain special characters. For example, a name like XYZ??DB is not allowed, so make sure you enter a valid and unique name for the database.

After this, when you click on the Create New link, the New Server page will be initiated on the right side of the page. From the screen, you can define the primary details such as server name, admin login password, and location of the server.

Creat New Server

We can now pick a purchase model to configure the database. This is the most important choice when building a database, because it determines the database's performance. For the vCore-based purchase model we can alter the number of vCores to increase or decrease performance. Here we just make one change: select Serverless and click the Apply button.

Configure SQL

After we configure the database we click on the Review + Create button.

Creat SQL Database Review + Create

Review the configuration selected for the database and click the Create button. Deployment takes around 3-4 minutes, after which the database is ready to use.

You can follow the images below to learn more about the Azure database:

SQL DataBases
Serverless AppDB
FireWall Settings
ServerAppDB Connection Strings

5. How do I Create an Azure Function in Visual Studio?

Prerequisites:
To complete this tutorial, we are using Visual Studio 2017. Ensure you select the Azure development workload during installation and you must install the latest Azure Functions tools.

If you don’t have an Azure subscription, create a free account before you begin.

Installing Visual Studio

To create an Azure Functions app, open Visual Studio and select File > New > Project from the menu. Make sure all of the values are correct before creating the function app. After creating the function app, update the Cross-Origin Resource Sharing (CORS) configuration; CORS is an HTTP feature that allows a web application hosted on one domain to access resources hosted on another domain.

Azure New Project
Azure ServerlessApps New Project

Select Create in order to create the function project of the HTTP trigger function.

Initially, Visual Studio creates the project and a class for the HTTP trigger function containing boilerplate code. The code sends back an HTTP response that echoes a value taken from the query string or the body of the request. The HttpTrigger attribute on the method is what specifies that the function is invoked by an HTTP request.

By default, Visual Studio names the function Function1. Let's change it to a better name such as HttpExample (a sketch of the renamed function appears after these steps). To do that:

  1. In File Explorer, right-click the Function1.cs file and rename it to HttpExample.cs.
  2. In the code, rename the Function1 class to HttpExample.
  3. In the HttpTrigger method named Run, rename the FunctionName method attribute to HttpExample.
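After the rename, the boilerplate typically looks something like the following sketch. Depending on your Functions runtime and template version, the generated code may use TraceWriter instead of ILogger and slightly different namespaces, so treat this as an illustration rather than the exact template output.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HttpExample
{
    // Minimal HTTP trigger: echoes the "name" value passed on the query string.
    [FunctionName("HttpExample")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        string name = req.Query["name"];

        return name != null
            ? (IActionResult)new OkObjectResult($"Hello, {name}")
            : new BadRequestObjectResult("Please pass a name on the query string, e.g. ?name=Azure");
    }
}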

To verify the rename, run the function locally by pressing F5; to stop debugging, press Shift+F5.

functioning of the renaming attribute

Copy the URL shown in the output window and paste it into your browser's address bar, then append the query string ?name=<YOUR_NAME> and run the request.

As you can see in the browser it’s working fine and it writes your name as a result.

Now, let us create a function that calls the database. Follow the images to create the new function GetUsers.cs:

New Function
Add New Items ServerlessApps
New Azure Function users

After that, add the connection string (named SqlConnection, as read by the code below) to local.settings.json.

Set the connection Strings

Update the new function file we created, GetUsers.cs, with the following code:

  
using System;
using System.Collections.Generic;
using System.Data;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json;

public static class GetUsers
{
    [FunctionName("GetUsers")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        TraceWriter log)
    {
        try
        {
            // Read the request body (not used further here, kept for completeness).
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            // int userId = data?.user;

            List<UserModel> oLst = new List<UserModel>();

            // Fetch all users from the database through the DbConnect helper class.
            DbConnect dbConnect = new DbConnect();
            DataSet dsTables = dbConnect.GetDataSet("Users", "GetAllUsers");

            using (DataTable dtUser = dsTables.Tables["Users"])
            {
                if (dtUser != null && dtUser.Rows.Count > 0)
                {
                    for (int i = 0; i <= dtUser.Rows.Count - 1; i++)
                    {
                        UserModel um = new UserModel()
                        {
                            UserId = Convert.ToInt32(dtUser.Rows[i]["UserId"].ToString()),
                            UserName = dtUser.Rows[i]["UserName"].ToString(),
                            Email = dtUser.Rows[i]["Email"].ToString()
                        };

                        oLst.Add(um);
                    }
                }
            }

            return (ActionResult)new OkObjectResult(oLst);
        }
        catch (Exception ex)
        {
            return new BadRequestObjectResult(ex.Message);
        }
    }
}

Create another file, DbConnect.cs, that handles the database calls:

  
using System;
using System.Data;
using System.Data.SqlClient;

public class DbConnect
{
    // Fills a DataSet by running the given SQL (or stored procedure name) and
    // names the resulting tables using the comma-separated list in dstTable.
    public DataSet GetDataSet(string dstTable, string dstSQL)
    {
        DataSet dst = new DataSet();
        DataSet dstReturn;
        SqlConnection SQLConn = new SqlConnection();
        SqlDataAdapter SQLdad;
        var connectionString = Environment.GetEnvironmentVariable("SqlConnection");

        try
        {
            SQLConn.ConnectionString = connectionString;
            SQLdad = new SqlDataAdapter(dstSQL, SQLConn);
            SQLdad.Fill(dst);

            // Rename the returned tables so callers can access them by name.
            string[] arrTable = dstTable.Split(',');
            int iPos;
            if (dst.Tables.Count > 0)
            {
                for (iPos = 0; iPos <= arrTable.Length - 1; iPos++)
                {
                    dst.Tables[iPos].TableName = arrTable[iPos];
                }
            }
        }
        catch (Exception)
        {
            throw; // preserve the original stack trace
        }
        finally
        {
            SQLdad = null;
            if (SQLConn != null)
                SQLConn.Close();
            dstReturn = dst;
        }
        return dstReturn;
    }
}

After creating the new files, you may get an error for System.Data.SqlClient; to resolve it, open Manage NuGet Packages and install version 4.4.0 of the System.Data.SqlClient package.

If you still face an issue or error, check that the .csproj file contains entries like the following. The exact markup may differ in your project, but you need local.settings.json copied to the output directory and a reference to the System.Data.SqlClient package; adjust the version and paths to your environment.

<ItemGroup>
  <None Update="local.settings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
<ItemGroup>
  <PackageReference Include="System.Data.SqlClient" Version="4.4.0" />
</ItemGroup>

Once the setup is complete and the errors are fixed, you can run this function the same way as the first one and see data like the following.

setup and errors
GetUsers

Now with this, you will observe that this is working fine and we can get some results from this database.

6. How Do I Deploy the Function on the Azure Portal?

To deploy the function to the Azure portal, follow these steps:

Right-click the project and then click Publish:

Azure Pick a Publish Target
Creating App Services to Deploy the function

Create a new profile with your azure account and set the proper App Name for the function.

All you need to do is select the relevant details required about the Subscription, Resource Group, Hosting Plan and Storage Account. Do not forget to enable static website hosting after you create a storage account. Before that ensure you are using a globally unique DNS-compliant name for the storage account. It’s also worth noting that only GPv2 storage accounts enable you to provide static information such as (front end) HTML, CSS, JavaScript, and images. Then you can press the Create button to add the details.

Publish Connected Services

After creating the profile, publish the project by pressing the Publish button. Once it is deployed correctly, it will show up in your Azure portal like this, and we can access the Azure Function using its URL.

TatvaserverlessApp Function
tatvaserverlessApp Function URL
Coonection String in Configuration
Add the connection string under Configuration >> Application settings if it is not already there.
API GETUsers

Now, as you can see from the screenshots, this has started working fine and we are getting records from the Azure SQL database through the Azure Function.

If you are unable to get records, add the function key to the query string as shown below:

query string
Azure SQL database using the Azure function

Now let’s create a new website application that uses the azure function and displays the records on the website.

7. How Do I Create a New .Net Core Website Using Azure Function Inside?

To create a new website application follow the following images:

new website application
new .Net core website using Azure Function

Now, to call the Azure Function (which in turn connects to the Azure SQL database), let's add its settings inside appSetting.json:

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "AzureFunction": "https://.azurewebsites.net/api/",
  "AzureFunctionKey": "",
  "AzureFunction_GetUsers": "GetUsers?",
  "AzureFunction_GetUserById": "GetUserById?"
}

To read the appSetting.json file, let's create another file called AppSettings.cs:
    public class AppSettings
    {
        public static IConfigurationRoot Configuration { get; set; }

        public static string AzureFunction
        {
            get
            {
                return Configuration["AzureFunction"].ToString();
            }
        }

        public static string AzureFunctionKey
        {
            get
            {
                return Configuration["AzureFunctionKey"].ToString();
            }
        }

        public static string AzureFunction_GetUsers
        {
            get
            {
                return AppSettings.AzureFunction + Configuration["AzureFunction_GetUsers"].ToString() + AppSettings.AzureFunctionKey;
            }
        }

        public static string AzureFunction_GetUserById
        {
            get
            {
                return AppSettings.AzureFunction + Configuration["AzureFunction_GetUserById"].ToString() + AppSettings.AzureFunctionKey;
            }
        }
    }

Also, we need to configure the Startup.cs file as follows; here we change it to read data from the appsettings.json file and set up the hosting environment:

 
public class Startup
    {
        public Startup(IHostingEnvironment env, IServiceProvider serviceProvider)
        {
            try
            {
                var builder = new ConfigurationBuilder()
                    .SetBasePath(env.ContentRootPath)
                    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                    .AddEnvironmentVariables();
                Configuration = builder.Build();

                AppSettings.Configuration = Configuration;
            }
            catch (Exception)
            {
            }
        }

        public IConfigurationRoot Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.Configure<CookiePolicyOptions>(options =>
            {
                // This lambda determines whether user consent for non-essential cookies is needed for a given request.
                options.CheckConsentNeeded = context => true;
                options.MinimumSameSitePolicy = SameSiteMode.None;
            });

            services.AddSingleton<IConfiguration>(Configuration);
            services.AddSingleton<IConfigurationRoot>(Configuration);

            services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
                app.UseHsts();
            }

            app.UseHttpsRedirection();
            app.UseStaticFiles();
            app.UseCookiePolicy();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }

Then let’s create a new model file: UserModel.cs

   
public class UserModel
    {
        public int UserId { get; set; }
        public string UserName { get; set; }
        public string Email { get; set; }
    }

Then, inside the Controllers folder, let's create a new controller called BaseController.cs:
    public class BaseController : Controller
    {
        public static void SerializeJsonIntoStream(object value, Stream stream)
        {
            using (var sw = new StreamWriter(stream, new UTF8Encoding(false), 1024, true))
            using (var jtw = new JsonTextWriter(sw) { Formatting = Formatting.None })
            {
                var js = new JsonSerializer();
                js.Serialize(jtw, value);
                jtw.Flush();
            }
        }

        public static HttpContent CreateHttpContent(object content)
        {
            HttpContent httpContent = null;

            if (content != null)
            {
                var ms = new MemoryStream();
                SerializeJsonIntoStream(content, ms);
                ms.Seek(0, SeekOrigin.Begin);
                httpContent = new StreamContent(ms);
                httpContent.Headers.ContentType = new MediaTypeHeaderValue("application/json");
            }

            return httpContent;
        }

        // Calls an Azure Function at the given URL and deserializes the JSON response into T.
        // Note: creating a new HttpClient per call is fine for a demo, but a shared HttpClient
        // (or IHttpClientFactory) avoids socket exhaustion under real load.
        public static async Task<T> CallFunc<T>(string afUrl)
        {
            try
            {
                object content = null;

                var cancellationToken = CancellationToken.None;

                using (var client = new HttpClient())
                using (var request = new HttpRequestMessage(HttpMethod.Post, afUrl))
                using (var httpContent = CreateHttpContent(content))
                {
                    request.Content = httpContent;

                    using (var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, cancellationToken).ConfigureAwait(false))
                    {
                        // response.EnsureSuccessStatusCode();

                        var result = await response.Content.ReadAsAsync<T>().ConfigureAwait(false);
                        return result;
                    }
                }
            }
            catch (Exception)
            {
                // Rethrow without resetting the stack trace.
                throw;
            }
        }
    }

After that let’s modified the HomeController to get the records from the database

HomeController.cs
    public class HomeController : BaseController
    {
        public async Task<IActionResult> Index()
        {
            List<UserModel> model = new List<UserModel>();

            try
            {
                var AfUrl = AppSettings.AzureFunction_GetUsers;

                var response = await CallFunc<List<UserModel>>(AfUrl);
                if (response != null)
                {
                    model = response;
                }

                return View(model);
            }
            catch (Exception)
            {
                return View(model);
            }
        }

        public async Task<IActionResult> ViewUser(int user)
        {
            UserModel model = new UserModel();

            try
            {
                var AfUrl = AppSettings.AzureFunction_GetUserById + "&user=" + user;

                var response = await CallFunc<UserModel>(AfUrl);
                if (response != null)
                {
                    model = response;
                }

                return View(model);
            }
            catch (Exception)
            {
                return View(model);
            }
        }

        [ResponseCache(Duration = 0, Location = ResponseCacheLocation.None, NoStore = true)]
        public IActionResult Error()
        {
            return View(new ErrorViewModel { RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier });
        }
    }

Add the respective view files inside the Views > Home folder:

Index.cshtml
@model List<ServerlessWebsite.Models.UserModel>
@{
    ViewData["Title"] = "User List Page";
}

<table class="table">
    <tr>
        <th>User Name</th>
        <th>Email</th>
        <th></th>
    </tr>
    @if (Model != null && Model.Count() > 0)
    {
        foreach (var item in Model)
        {
            <tr>
                <td>@item.UserName</td>
                <td>@item.Email</td>
                <td><a asp-action="ViewUser" asp-route-user="@item.UserId">View</a></td>
            </tr>
        }
    }
    else
    {
        <tr>
            <td colspan="3">No record found.</td>
        </tr>
    }
</table>
ViewUser.cshtml
@model ServerlessWebsite.Models.UserModel
@{
    ViewData["Title"] = "User List Page";
}

<table class="table">
    <tr>
        <th>User Name</th>
        <th>Email</th>
    </tr>
    @if (Model != null)
    {
        <tr>
            <td>@Model.UserName</td>
            <td>@Model.Email</td>
        </tr>
    }
    else
    {
        <tr>
            <td colspan="2">No record found.</td>
        </tr>
    }
</table>

Once all the above changes are made, we need to deploy the application to Azure. The publish flow is shown in the following screens:

[Screenshot: Pick a Publish Target]
[Screenshot: Create App Service]
[Screenshot: Azurewebsites]
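
If you prefer the command line to the publish wizard, roughly the same deployment can be done with the Azure CLI. This is only a sketch; the app name, resource group, and SKU below are placeholders, not values from this walkthrough:

# Run from the web project folder. az webapp up builds the project and deploys it,
# creating the resource group, App Service plan, and web app if they do not already exist.
az webapp up --name your-app-name --resource-group your-resource-group --sku F1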

8. Final Words

This extensively researched blog aims to give developers and business analysts a clear picture of how to set up a serverless web application. Following this method, you supply the necessary details such as the app name, subscription, resource group, and hosting plan, then click the Create button followed by the Publish button to deploy the application. Once the deployment completes, the website opens automatically in the browser.

Useful Resources:
Safely consume an Azure Function through Microsoft Flow
CLR Function in SQL Server & Time Zone in SQL Server using CLR function
How to compare two SQL Server Databases using SQL Server Data Tools

The post How to Build a Serverless Web App in Azure? appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/serverless-web-application-in-azure/feed/ 0