React Design Patterns- A Comprehensive Guide

Traditional web development was once complicated, but with the arrival of React, the process has been simplified significantly. React also offers great ease of use thanks to its reusable components and extensive ecosystem. 

The React ecosystem provides a wide range of tools; some fulfill various development requirements, whereas others help resolve different types of issues. React design patterns are quick and reliable solutions to typical development problems. 

In ReactJS, you can find a large number of design patterns leveraged by Reactjs development companies and each serves a unique purpose in a development project. This article talks about some “need-to-know” design patterns for React developers. 

But before diving into the topic, it is important to know what are the design patterns in React and how they are useful for app development. 

1. What is a Reactjs Design Pattern?

ReactJS design patterns are solutions to common problems that developers face during a typical software development process. They are reusable and help reduce the size of the app code. There is no need to use any duplication process to share the component logic when using React design patterns.

While working on a software development project, complications are bound to arise. But with reliable solutions from Design patterns, you can easily eliminate those complications and simplify the development process. This also enables you to write easy-to-read code.

2. Benefits of Using React Design Patterns in App Development 

If you have any doubt about the effectiveness of React development, it is because you might have yet to take a look at the benefits of React Design Patterns. Their advantages are one of those factors that make React development more effective. 

2.1 Reusability

React Design Patterns offer reusable templates that allow you to build reusable components. Therefore, developing applications with these reusable components saves a lot of your time and effort. More importantly, you don’t have to build a React application from the ground up every time you start a new project.

2.2 Collaborative Development

React is popular for providing a collaborative environment for software development. It allows different developers to work together on the same project. If not managed properly, this can cause some serious issues. However, the design patterns provide an efficient structure for developers to manage their projects effectively. 

2.3 Scalability

Using Design patterns, you can write React programs in an organized manner. This makes app components simpler. So, even if you are working on a large application, maintaining and scaling it becomes easy. And because every component here is independent, you can make changes to one component without affecting another. 

2.4 Maintainability 

Design patterns are referred to as the solutions for typical development issues because they provide a systematic approach to programming. This keeps the code simple, which helps not only in developing the codebase but also in maintaining it. This is true even if you are working on large React projects.

React Design Patterns make your code more decoupled and modular which also divides the issues. Modifying and maintaining a code becomes easy when dealing with small chunks of code. It is bound to give satisfactory results because making changes to one section of the code will not affect other parts in a modular architecture. In short, modularity promotes maintainability. 

2.5 Efficiency 

Along with its component-based design, React provides faster loading times and quick updates thanks to the Virtual DOM. As an integral aspect of the architecture, the Virtual DOM aims to improve the overall efficiency of the application. This also helps offer an enhanced user experience. 

Moreover, design patterns such as memoization save results of expensive rendering so that you do not have to conduct unnecessary rerenders. Re-renderings take time but if the results are already cached then they can be immediately delivered upon request. This helps improve the app’s performance.  
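
As a brief illustration (not code from the original article), here is a minimal sketch of memoization with React.memo and useMemo; the component names and data are placeholders:

import React, { useMemo, useState } from "react";

// Re-renders only when its `items` prop actually changes.
const ItemList = React.memo(({ items }) => (
  <ul>
    {items.map((item) => (
      <li key={item}>{item}</li>
    ))}
  </ul>
));

const FilteredList = ({ items }) => {
  const [query, setQuery] = useState("");

  // The filtered array is recomputed only when `items` or `query` changes,
  // not on every render of FilteredList.
  const filteredItems = useMemo(
    () => items.filter((item) => item.toLowerCase().includes(query.toLowerCase())),
    [items, query]
  );

  return (
    <div>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ItemList items={filteredItems} />
    </div>
  );
};

export default FilteredList;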

2.6 Flexibility

Due to its component-based design, applying modifications to the React apps is convenient. Using this approach allows you to try out various combinations of the components to build unique solutions. Components design patterns also allow you to craft a suitable user interface. Your application needs such flexibility to succeed in the marketplace.

Unlike other famous web development frameworks, React doesn’t ask you to adhere to specific guidelines or impose any opinions. This offers ample opportunities for developers to express their creativity and try to mix and match various approaches and methodologies in React development. 

2.7 Consistency

Adhering to React design patterns provides a consistent look to your application and makes it user-friendly. The uniformity helps offer a better user experience whereas simplicity makes it easy for users to navigate across the app which increases user engagement. Both of these are important factors to boost your revenues.

3. Top Design Patterns in React that Developers Should Know 

Design patterns help you resolve issues and challenges arising during development projects. With so many efficient design patterns available in the React ecosystem, it is extremely difficult to include them all in a single post. However, this section sheds some light on the most popular and effective React design patterns.  

3.1 Container and Presentation Patterns

Container and presentation patterns allow you to reuse React components easily, because this design pattern divides components into two different groups based on their logic: container components, which contain the business logic, and presentation components, which consist of the presentation logic. 

Here, the container components are responsible for fetching data and carrying out necessary computations. Meanwhile, presentation components are responsible for rendering the fetched data and computed value on the user interface of the application or website. 

When using this pattern for React app development, it is recommended that you initially use presentation components only. This will help you analyze if you aren’t passing down too many props that won’t be of any use to the intermediate components and will be further passed to the components below them.  

If you are facing this problem then you have to use container components to separate the props and their data from the components that exist in the middle of the tree structure and place them into the leaf components. 

The example for container and presentation pattern:

import React, { useEffect, useState } from "react";
import UserList from "./UserList";

const UsersContainer = () => {
  const [users, setUsers] = useState([]);
  const [isLoading, setIsLoading] = useState(false);
  const [isError, setIsError] = useState(false);

  const getUsers = async () => {
    setIsLoading(true);
    try {
      const response = await fetch(
        "https://jsonplaceholder.typicode.com/users"
      );
      const data = await response.json();
      setIsLoading(false);
      if (!data) return;
      setUsers(data);
    } catch (err) {
      setIsError(true);
    }
  };

  useEffect(() => {
    getUsers();
  }, []);

  return <UserList isLoading={isLoading} isError={isError} users={users} />;
};

export default UsersContainer;



// the component is responsible for displaying the users

import React from "react";

const UserList = ({ isLoading, isError, users }) => {
  if (isLoading && !isError) return <div>Loading...</div>;
  if (!isLoading && isError) return <div>Error occurred. Unable to load users.</div>;
  if (!users) return null;

  return (
    <>
      <h2>Users List</h2>
      <ul>
        {users.map((user) => (
          <li key={user.id}>
            {user.name} (Mail: {user.email})
          </li>
        ))}
      </ul>
    </>
  );
};

export default UserList;

3.2 Component Composition with Hooks

Hooks were introduced with React 16.8 in 2019 and quickly gained popularity. They are basic functions designed to fulfill the requirements of components, giving functional components access to state and to the React component lifecycle. State hooks, effect hooks, and custom hooks are some examples. 

Using Hooks with components allows you to make your code modular and more testable. By tying up the Hooks loosely with the components, you can test your code separately. Here is an example of Component composition with Hooks: 

// creating a custom hook that fetches users

import { useEffect, useState } from "react";

const useFetchUsers = () => {
  const [users, setUsers] = useState([]);
  const [isLoading, setIsLoading] = useState(false);
  const [isError, setIsError] = useState(false);
  const controller = new AbortController();

  const getUsers = async () => {
    setIsLoading(true);
    try {
      const response = await fetch(
        "https://jsonplaceholder.typicode.com/users",
        {
          method: "GET",
          mode: "cors",
          headers: {
            "Content-Type": "application/json",
          },
          signal: controller.signal,
        }
      );
      const data = await response.json();
      setIsLoading(false);
      if (!data) return;
      setUsers(data);
    } catch (err) {
      setIsError(true);
    }
  };

  useEffect(() => {
    getUsers();
    return () => {
      controller.abort();
    };
  }, []);

  return [users, isLoading, isError];
};

export default useFetchUsers;

Now, we have to import this custom hook to use it with the UsersContainer component.

import React from "react";
import UserList from "./UserList";
import useFetchUsers from "./useFetchUsers";

const UsersContainer = () => {
  const [users, isLoading, isError] = useFetchUsers();

  return <UserList isLoading={isLoading} isError={isError} users={users} />;
};

export default UsersContainer;

3.3 State Reducer Pattern 

When you are working on a complex React application with different states relying on complex logic, it is recommended to utilize the state reducer design pattern with your custom state logic and an initialState value. The value here can either be null or some object.

Instead of changing the state of the component, a reducer function is passed when you use a state reducer design pattern in React. Upon receiving the reducer function, the component will take action with the current state. Based on that action, it returns a new State. 

The action is an object with a type property that describes the action to be performed, along with any additional data (payload) needed to perform it. 

For example, the initial state for an authentication reducer might be an empty object, and a defined action might indicate that the user has logged in. In that case, the reducer returns a new state containing the logged-in user. 
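
A rough sketch of that authentication reducer might look like the following; the action types and state shape here are illustrative assumptions rather than code from the article:

const initialAuthState = {};

// Returns a brand-new state object for each dispatched action.
const authReducer = (state, action) => {
  switch (action.type) {
    case "LOGIN":
      // action.payload is assumed to carry the logged-in user's data
      return { user: action.payload, isLoggedIn: true };
    case "LOGOUT":
      return {};
    default:
      return state;
  }
};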

The code example for the state reducer pattern for counter is given below:

import React, { useReducer } from "react";

const initialState = {
  count: 0,
};

const reducer = (state, action) => {
  switch (action.type) {
    case "increment":
      return { ...state, count: state.count + 1 };
    case "decrement":
      return { ...state, count: state.count - 1 };
    default:
      return state;
  }
};

const Counter = () => {
  const [state, dispatch] = useReducer(reducer, initialState);

  return (
    <div>
      <p>Count: {state.count}</p>
      <button onClick={() => dispatch({ type: "increment" })}>Increment</button>
      <button onClick={() => dispatch({ type: "decrement" })}>Decrement</button>
    </div>
  );
};

export default Counter;

3.4 Provider Pattern 

If you want to stop passing props through every nested component in the tree (prop drilling), you can accomplish this with the provider design pattern. React's Context API offers you this pattern.

import React, { createContext } from "react";
import "./App.css";
import Dashboard from "./dashboard";

export const UserContext = createContext("Default user");

const App = () => {
  return (
    // the value passed here is illustrative
    <UserContext.Provider value="John Doe">
      <Dashboard />
    </UserContext.Provider>
  );
};

export default App;

// Dashboard component
import React, { useContext } from "react";
import { UserContext } from "../App";

const Dashboard = () => {
  const userValue = useContext(UserContext);
  return <div>{userValue}</div>;
};

export default Dashboard;

The above provider pattern code shows how you can use context to pass data directly to the components that need it. Both a provider and a consumer of the state are involved: in the above code, the Dashboard component using UserContext is the consumer and the App component is the provider. 

Take a look at the visual representation given below for better understanding.

Provider Pattern

When you don’t use the provider pattern, you have to pass props from component A to component D through prop drilling where components B and C act as intermediary components. But with the provider pattern, you can directly send the props from A to D. 

3.5 HOCs (Higher-Order Components) Pattern 

If you want to reuse the component logic across the entire application then you need a design pattern with advanced features. The higher-order component pattern is the right React pattern for you. It comes with various types of features like data retrieval, logging, and authorization. 

HOCs are built upon the compositional nature of the React functional components which are JavaScript functions. So, do not mistake them for React APIs. 

Any higher-order component in your application behaves much like a JavaScript higher-order function: it is a pure function with zero side effects. And just like JavaScript higher-order functions, HOCs act as decorator functions.

The structure of a higher-order React component is as given below: 

const MessageComponent = ({ message }) => {
  return <>{message}</>;
};

export default MessageComponent;



const WithUpperCase = (WrappedComponent) => {
  return (props) => {
    const { message } = props;
    const upperCaseMessage = message.toUpperCase();
    return <WrappedComponent {...props} message={upperCaseMessage} />;
  };
};

export default WithUpperCase;


import React from "react";
import "./App.css";
import WithUpperCase from "./withUpperCase";
import MessageComponent from "./MessageComponent";

const EnhancedComponent = WithUpperCase(MessageComponent);

const App = () => {
  return (
    // the message text here is illustrative
    <EnhancedComponent message="Hello world" />
  );
};

export default App;

3.6 Compound Pattern

A collection of related components that work together and complement each other is called a compound component. A card component with its many child elements is a simple example of this design pattern.

Compound Design Patterns

The functionality provided by the card component is a result of joint efforts from elements like content, images, and actions. 

import React, { useState } from "react";

const Modal = ({ children }) => {
  const [isOpen, setIsOpen] = useState(false);

  const toggleModal = () => {
    setIsOpen(!isOpen);
  };

  return (
    <div>
      {React.Children.map(children, (child) =>
        React.cloneElement(child, { isOpen, toggleModal })
      )}
    </div>
  );
};

const ModalTrigger = ({ isOpen, toggleModal, children }) => (
  <button onClick={toggleModal}>{children}</button>
);

const ModalContent = ({ isOpen, toggleModal, children }) =>
  isOpen && (
    <div>
      <button onClick={toggleModal}>×</button>
      {children}
    </div>
  );

const App = () => (
  <Modal>
    <ModalTrigger>Open Modal</ModalTrigger>
    <ModalContent>
      <h2>Modal Content</h2>
      <p>This is a simple modal content.</p>
    </ModalContent>
  </Modal>
);

export default App;

Compound components also offer an API that allows you to express connections between various components. 

3.7 Render Prop Pattern

React render prop pattern is a method that allows the component to share the function as a prop with other components. It is instrumental in resolving issues related to logic repetition. The component on the receiving end could render content by calling this prop and using the returned value. 

When the function is passed down through the children prop instead of a named prop, this variation of the pattern is often called "children as a function." 

It is difficult for a single component to contain a piece of functionality when various components across the app need it. Such functionality is called a cross-cutting concern. 

As discussed, the render prop design pattern passes the function as a prop to the child component. The parent component also shares the same logic and state as the child component. This would help you accomplish Separation of concerns which helps prevent code duplication. 

Leveraging the render prop method, you can build a single component that manages user authentication and shares its logic and state with other components of the React application. Components that require the authentication functionality and its state can then access them, so developers don't have to rewrite the same code for different components. 

Render Prop Method Toggle code example:

import React from "react";

class Toggle extends React.Component {
  constructor(props) {
    super(props);
    this.state = { on: false };
  }

  toggle = () => {
    this.setState((state) => ({ on: !state.on }));
  };

  render() {
    return this.props.render({
      on: this.state.on,
      toggle: this.toggle,
    });
  }
}

export default Toggle;



import React from "react";
import Toggle from "./toggle";

class App extends React.Component {
  render() {
    return (
      <div>
        <h2>Toggle</h2>
        <Toggle
          render={({ on, toggle }) => (
            <div>
              <p>The toggle is {on ? "on" : "off"}.</p>
              <button onClick={toggle}>Toggle</button>
            </div>
          )}
        />
      </div>
    );
  }
}

export default App;

3.8 React Conditional Design Pattern

Sometimes while programming a React application, developers have to create elements according to specific conditions. To meet these requirements, developers can leverage the React conditional design pattern. 

For example, if you add an authentication process to your app, you have to create a login and a logout button. The process of rendering these elements based on a condition is known as conditional rendering. The login button would be visible to users who are not logged in, and the logout button to those who are. 

The most common conditional statements used in this pattern are the if statement and the if/else statement. The if statement is used when only one condition needs to be checked, while the if/else (or switch/case) statement is used when more than one condition has to be handled. 

A simple example using the if/else statement is given below (it assumes that LoginButton and LogoutButton components exist):

const MyComponent = ({ isLoggedIn }) => {
  if (isLoggedIn) {
    return <LogoutButton />;
  } else {
    return <LoginButton />;
  }
};

export default MyComponent;



You can also handle multiple conditions with the switch statement, as shown below (the LoadingSpinner, ErrorMessage, and SuccessMessage components are illustrative): 

const MyComponent = ({ status }) => {
  switch (status) {
    case "loading":
      return <LoadingSpinner />;
    case "error":
      return <ErrorMessage />;
    case "success":
      return <SuccessMessage />;
    default:
      return null;
  }
};

export default MyComponent;

4. Conclusion

The React design patterns discussed in this article are some of the most widely used during development projects. You can leverage them to bring out the full potential of the React library. Therefore, it is recommended that you understand them thoroughly and implement them effectively. This would help you build scalable and easily maintainable React applications.

How to Use Typescript with React?


Key Takeaways

  1. Writing TypeScript with React.js is a lot like writing JavaScript with React.js. The main advantage of using TypeScript is that you can provide types for your component’s props which can be used to check correctness and provide inline documentation in editors.
  2. TypeScript enables developers to use modern object-oriented programming features. TypeScript Generics allows the creation of flexible react components that can be used with different data structures.
  3. With TypeScript in React Project, one can perform more compile-time checks for errors and some features become easier to develop.
  4. For TypeScript with React, one can refer to community-driven CheatSheet – React TypeScript covering useful cases and explanations in depth for various modules.

Although TypeScript is a superset of JavaScript, it is a popular programming language in its own right. Developers tend to feel confident programming with TypeScript as it allows you to specify value types in the code. It has a large ecosystem consisting of several libraries and frameworks. Many of them use TypeScript by default but in the case of React, you are given the choice whether to use TypeScript or not. 

React is a JS-based library that enables you to create UIs using a declarative, component-based method. Using TypeScript with React has proven to be very effective: the offerings of a reputed software development company typically include React services focused on crafting elegant user interfaces, while TypeScript is leveraged to identify code errors and improve the project's overall quality.

This tutorial will guide you on how to use TypeScript with React. But first, let’s clear our basics. 

1. What is Typescript?

Microsoft created a high-level programming language called TypeScript. It is statically typed. So, it automatically introduces static type checking to your codebase to improve its quality.

Although React projects mostly use JavaScript, certain features like type annotations, interfaces, and static types that are necessary to detect code errors early on are only available with TypeScript.

There are many perks to utilizing TypeScript's types, such as improved collaboration on large-scale projects, increased productivity, and improved code quality. Because it offers the means to define data formats and structures, the language ensures type safety and helps prevent runtime errors.

The conjunction of React and TypeScript enables the developers to build strongly typed React components. It also enforces the type checking in state and props which helps make your code more robust and reliable. 

TypeScript offerings such as documentation and advanced code navigation features further simplify the work of developers. You can develop robust and easy-to-maintain React applications effortlessly by leveraging the React-TypeScript integration capabilities.

Further Reading on: JavaScript vs TypeScript

2. React: With JavaScript or TypeScript? Which one is Better? 

JavaScript is a popular scripting language. More often it’s the first programming language developers learn for web development. React is a JS-based library that developers prefer to use for building UIs. 

On the other hand, TypeScript is JavaScript’s superset. So, it offers all the benefits of JavaScript. On top of that, it also provides some powerful tooling and type safety features. Using these languages has its own merits and demerits. So, the choice of whether to use TypeScript or JavaScript with React in your project boils down to your requirements.

JavaScript features such as high speed, interoperability, rich interfaces, versatility, server load, extended functionality, and less overhead make it a perfect candidate for building fast-changing apps, browser-based apps and websites, and native mobile and desktop applications.  

TypeScript features like rich IDE support, object-oriented programming, type safety, and cross-platform and cross-browser compatibility allow you to build complex and large-scale applications with robust features.

3. Why Do We Use Typescript with React?

Using TypeScript with React brings a multitude of advantages. Some of them are: 

  • Static Type Checking: TypeScript brings static typing to React projects, which helps developers identify errors early on. This early detection of potential errors and prevention of runtime errors is possible because the language enforces type annotations, making your code robust and reliable (see the short example after this list).
  • Improved Code Quality: Typescript language allows the developers to define the return values, function parameters, and strict types for variables. This helps in writing a clean and self-documenting code. The Type system of the language encourages the developers to write more structured and high-quality code. So, your code becomes easy to read and maintain. 
  • Enhanced Developer Productivity: The code editors of TypeScript come with features like real-time error checking, type inference, and auto-completion as a part of advanced tooling support. This helps developers code quickly, find mistakes, implement better coding suggestions, minimize the debugging time, and in the end enhance productivity. 
  • Better Collaboration: It might pique your interest to know that apart from providing coding advantages, TypeScript offers collaboration advantages as well. It provides contract definitions and clear interfaces through type annotations and interfaces. It helps developers understand how different modules and components interact. This improves the overall project understanding. 
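
As a small illustration of the static type checking described above, consider the following sketch; the Button component and its props are made up for this example:

import React from "react";

type ButtonProps = {
  label: string;
  onClick: () => void;
};

const Button = ({ label, onClick }: ButtonProps) => (
  <button onClick={onClick}>{label}</button>
);

// The commented line below would fail to compile:
// Type 'number' is not assignable to type 'string'.
// <Button label={42} onClick={() => console.log("clicked")} />

export default Button;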

4. Create a React Application with Typescript

Explore the steps mentioned below to create a React application with Typescript.

4.1 Prerequisites

Before we get into action, make sure you are prepared for it. What you will need includes:

  • One Ubuntu 20.04 server with the firewall enabled and a non-root user with sudo privileges. The Ubuntu 20.04 initial server setup guide can help you get started. 
  • Node.js and npm installed on the Ubuntu 20.04 server, for example via a NodeSource PPA with apt. 

4.2 Create React App

Create React App v2.1 and later supports TypeScript out of the box. While setting up a new project with CRA, you can pass TypeScript as a parameter.

npx create-react-app hello-tsx --typescript

A tsconfig.json file is generated when TypeScript is used for the project setup. 
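
For reference, a minimal tsconfig.json for a React project might look roughly like the one below; the exact options CRA generates can differ between versions, so treat this as an illustrative sketch:

{
  "compilerOptions": {
    "target": "es5",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "strict": true,
    "module": "esnext",
    "moduleResolution": "node",
    "jsx": "react-jsx",
    "esModuleInterop": true,
    "skipLibCheck": true,
    "noEmit": true
  },
  "include": ["src"]
}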

4.3 Install TypeScript in the React App

To add a TypeScript version to an existing application, you have to install it with all the necessary types. Execute the following code: 

npm install --save typescript @types/node @types/react @types/react-dom @types/jest

Next, rename your files to use the .ts and .tsx extensions. After that, start your server, which will automatically produce a tsconfig.json file. Once that happens, you can start writing React in TypeScript. 

It is important to note that adding TypeScript to existing projects doesn’t mean it will affect the app’s JS code. It would work fine and if you want, you can migrate that code to TypeScript as well. 

4.4 How to Declare Props?

The following example depicts how you can use TypeScript to type props in a React component. 

import React from 'react';

type DisplayUserProps = {
    name: string,
    email: string
};
const DisplayUser: React.FC<DisplayUserProps> = ({ name, email }) => {
  return (
    <div>
      <h2>{name}</h2>
      <p>{email}</p>
    </div>
  );
};

export default DisplayUser;

A custom type for DisplayUserProps is defined in the code above. It also includes the name and email. The generic type of React Functional Component is used to define the DisplayUser by taking pre-defined DisplayUserProps as arguments. 

So, wherever we use the DisplayUser component, we will get the data back as the props. It helps confirm the predefined types such as name and email. The component is called inside App.tsx. 

const App = () => {
  const name: string = "Cherry Rin";
  const email: string = "cherryrin@xyz.com";
  return (
    <DisplayUser name={name} email={email} />
  );
};

export default App;

After reloading the React app, when you check the UI, it renders the name and email as shown in the example below:

Cherry Rin

The TypeScript shows an error when you pass the data that isn’t part of the data structure. 


Your console will display the error message as follows:

error message

It is how TypeScript ensures that defined types are adhered to when props are passed to the DisplayUser component. This also helps enforce error checking and type safety in the development process. 

4.5 How to Declare Hooks?

useRef and useState are the two most common React hooks used in TypeScript. 

Typing the UseState hook

Without a type, the useState hook looks like the following:

const [number, setNumber] = useState(0)

With a type, the useState hook looks like:

const [number, setNumber] = useState<number>(0)

And just like that, you can declare the state value's type. The type is specified in <number>. If you or anyone else tries updating the state with a value of another type, the action will be prevented with an error message. 

error message

But if you want your state to hold values of more than one type, then you have to declare:

const [number, setNumber] = useState<number | string>(0)

After running this command, you can enter any string or number and it won’t count as an error. 

The following example depicts how you can use useState in a React component,

import DisplayUsers from './components/DisplayUsers';
import { useState } from 'react';

const App = () => {
  const [name, setName] = useState('Cherry Rin');
  const [email, setEmail] = useState('cherryrin@xyz.com');
  return (
    <DisplayUsers name={name} email={email} />
  );
};
export default App;

Typing the UseRef hook

In React, a DOM component is referenced using the useRef hook. The example given below shows how you can implement it in TypeScript with React.

const App = () => {
  const nameRef = useRef<HTMLInputElement | null>(null);

  const saveName = () => {
    if (nameRef && nameRef.current) {
      console.log("Name: ", nameRef.current.value);
    } 
  };
  return (
    <>
      <input type="text" ref={nameRef} />
      <button onClick={saveName}>Save</button>
    </>
  );
};

Output:

Output

We will be starting with the ref variable as null and declare HTMLInputElement | null as its type. While using the useRef hook with TypeScript, you can assign it to either null or to declared type. 

The benefit of typing the useRef hook is that you won't be able to read data or take actions on mismatched types, as TypeScript will prevent you. For example, you will get the following errors if you try to declare the ref with the type number. 

useRef hook

It helps you avoid making silly errors which saves time that you would have been otherwise spending on debugging. While working on large projects, where multiple people are contributing to the codebase, using TypeScript for React apps provides you with a more ordered and controlled work environment. 

4.6 How to Declare and Manage States?

If you have working experience with React or even a basic understanding of it, you would know that building an app from only simple, stateless components isn't entirely possible. To hold state values in your app component, you have to define state variables.  

The state variable is defined with a useState hook as shown below:

import React, { useState } from "react";
function App() {
  const [sampleState, setSampleState] = useState("Ifeoma Imoh");
  return <div>Hello World</div>;
}

export default App;


In this case, the type of the state is inferred automatically. By explicitly declaring the types your state accepts, you can increase its safety. 

import React, { useState } from 'react';
function App() {
  const [sampleState, setSampleState] = useState<string>("Ifeoma Imoh");
  //sampleStateTwo is a string or number
  const [sampleStateTwo, setSampleStateTwo] = useState<string | number>("");
  //sampleStateThree is an array of string
  const [sampleStateThree, setSampleStateThree] = useState<string[]>([""]);
  return (
    <div>Hello World</div>
  );
}

export default App;

4.7 React Functional Component

In TypeScript, a functional (stateless) component is defined as follows: 

type DisplayUserProps = {
    name: string,
    email: string
};

const DisplayUsers: React.FC<DisplayUserProps> = ({ name, email }) => {
  return (
    <div>
      <h2>{name}</h2>
      <p>{email}</p>
    </div>
  );
};

export default DisplayUsers;

With React.FC, we define the expected structure of the props object. Using a type alias is only one option; you can also define the props with an interface: 

interface DisplayUserProps {
    name: string;
    email: string;
};
const DisplayUsers: React.FC<DisplayUserProps> = ({ name, email }) => {
  return (
    <div>
      <h2>{name}</h2>
      <p>{email}</p>
    </div>
  );
};

4.8 How to Display the Users on the UI?

Now, what shall we do to display the user's name and email on the screen? Each object in our user list has name and email properties, and we can use them to display the user's details. 

Now, you have to replace the content in the App.tsx file with the following:

import React, { useState } from 'react';
import { UserList } from '../utils/UserList';

export type UserProps = {
    name: string,
    email: string
};

const DisplayUsers = () => {
    const [users, setUsers] = useState(UserList);

    const renderUsers = () => (
        users.map((user, index) => (
            <li key={index}>
                <h2>{user.name}</h2>
                <p>{user.email}</p>
            </li>
        ))
    );

    return (
        <div>
            <h1>User List</h1>
            <ul>{renderUsers()}</ul>
        </div>
    );
};

export default DisplayUsers;

The users array is looped over using the array map method. To destructure the name and email properties of each user object, the object destructuring method is used. Here, the user’s name and email will be displayed on the screen as an unordered list. 

Now is the time to define a user as an object with properties like name and email, each with its own data type. Next, you have to change the useState users array from:

const [users, setUsers] = useState([]);

to :

const [users, setUsers] = useState<UserProps[]>(UserList);

This specifies users as an array of the UserProps type we declared. So, if you check your App.tsx file now, you will see that it no longer has any TypeScript errors.

specify users as an array of type users’ objects

As a result, you can see that the screen is displaying a list of 5 random users.

user list

4.9 Run App

Until now, we discussed the basic yet most important concepts for React app development using TypeScript. This section will show how you can leverage them to create a simple user list app. 

Create a new React-TypeScript project like we did at the beginning of the article. After that, you have to run the code given below at the app’s root level to start the development server. 

npm start

Now, open the App.tsx file and replace its contents with the following: 

import DisplayUser, { DisplayUserProp as UserProps } from './components/DisplayUser';
import React, { useEffect, useState } from 'react';
import { UserList } from './utils/UserList';

const App = () => {
  const [users, setUsers] = useState<UserProps[]>([]);

  useEffect(() => {
    if (users.length == 0 && UserList) setUsers(UserList);
  }, [])

  const renderUsers = () => (
    users.map((user, index) => (
      <li key={index}>
        <DisplayUser name={user.name} email={user.email} />
      </li>
    ))
  );

  return (
    <div>
      <h1>User List</h1>
      <ul>{renderUsers()}</ul>
    </div>
  );
};

export default App;

Here, User data comes from the UserList.ts file which is inside the utils folder. Add the below-mentioned data to the UserList.ts file:

import { DisplayUserProp as UserProps } from "../components/DisplayUser";
export const UserList: UserProps[] = [
    {
        name: "Cherry Rin",
        email: "cherryrin@xyz.com"
    },
    {
        name: "Lein Juan",
        email: "leinnj23@xyz.com"
    },
    {
        name: "Rick Gray",
        email: "rpgray@xyz.com"
    },
    {
        name: "Jimm Maroon",
        email: "jimaroonjm@xyz.com"
    },
    {
        name: "Shailey Smith",
        email: "ssmith34@xyz.com"
    },
];

To render individual user items in your app, you have to import a DisplayUser component, which we will create next. First, go to your app's src directory and create a new components folder. In there, create a DisplayUser file with the .tsx extension and add the following code to it.

import React from 'react';

export type DisplayUserProp = {
    name: string,
    email: string
};

const DisplayUser: React.FC<DisplayUserProp> = ({ name, email }) => {
    return (
        <div>
            <h2>{name}</h2>
            <p>{email}</p>
        </div>
    );
};

export default DisplayUser;

After saving all the modifications, test your React app in the browser.

user list output on browser

5. Conclusion

In this article, we have walked through the fundamentals of using TypeScript with React development. These are concepts widely used in typical TypeScript-React projects. We also saw how you can integrate these concepts to build a simple React application. You can apply them similarly in other projects with small modifications to suit your requirements. 

If you have any queries or input on the matter, feel free to share them with us in the comments section below. We will get back to you ASAP! 

FAQs

Can You Use TypeScript with React?

Yes. When it comes to adding type definitions to JS code, developers prefer to use TypeScript. Simply by adding @types/react and @types/react-dom to your React project, you get full type support for JSX and React for the web.

Is TypeScript better for React?

With static typing, TypeScript helps the React compiler detect code errors early in the development stage. On the other hand, because JavaScript is dynamically typed, the compiler has a hard time detecting code errors.

Detailed Guide to React App Testing


Key Takeaways

  1. React app testing is crucial for delivering secure, high-performing, and user friendly application.
  2. React Apps are created using different UI components. So it is necessary to test each component separately and also how they behave when integrated.
  3. It is essential to involve Unit Testing, Integration Testing, End-to-End Testing, and SnapShot Testing for a React app as per the requirements.
  4. React Testing Library, Enzyme, Jest, Mocha, Cypress, Playwright, Selenium, JMeter, Jasmine, and TestRail are some of the key tools and libraries used for testing React apps.

React is a prominent JavaScript library that is used by app development companies to create unique and robust applications. It comes with a declarative style and gives more emphasis on composition. With the help of this technology, every React app development company in the market can transform their client’s business by creating modern web applications. When businesses grow, the size of the web application along with its complexity will grow for which the development team will have to write tests that can help in avoiding bugs.

Though testing React apps isn’t an easy task, some frameworks and libraries can make it possible for the development teams. In this article, we will go through such libraries.

1. Why do We Need to Test the Web App?

The main reason behind testing applications is to ensure that the apps work properly without any errors. Along with this, in any application, several features might require some attention from the developers as they might cause expensive iterations if not checked frequently. Some of the areas where testing is a must are:

  • Any part of the application that involves getting input from the user or retrieving data from the application’s database and offering that to the user.
  • The features of the application that are connected with call-to-action tasks where user engagement becomes necessary, need to be tested.
  • When a sequence of events renders elements that in turn trigger further functions, testing is required.

2. What to Test in React App?

Developers often get confused about what to test in a React application. The reason behind this confusion is that applications are generally dealing with simple data but sometimes they are quite sophisticated. In any case, developers need to set their priorities for testing the applications. Some of the best things to start the testing process with are:

  • Identifying the widely used React components in the applications and start testing them.
  • Identifying application features that can help in adding more business value and adding them for testing.
  • Executing edge case scenarios in high-value features of the React application.
  • Performance and stress testing applications if they are serving a large number of users like Amazon or Netflix.
  • Testing React hooks.

3. Libraries and Tools Required

React test libraries and frameworks can be beneficial to offer the best application to the end users. But all these frameworks have their specialty. Here we will have a look at some of these React testing libraries and tools for React application testing.

3.1 Enzyme

EnzymeJS

Enzyme is a React testing library that enables React app developers to traverse and manipulate the rendered output of their components. With the help of this tool, developers can render components, find elements, and interact with them. As Enzyme is designed for React, it offers two main testing methods: mount (full DOM) rendering and shallow rendering. This tool is commonly used together with Jest; a minimal example follows the list of benefits below.

Some of the benefits of Enzyme are:

  • Supports DOM rendering.
  • Shallow rendering.
  • React hooks.
  • Simulation during runtime against output.
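
To give an idea of how Enzyme reads in practice, here is a minimal shallow-rendering test sketch; it assumes an App component that renders an h2 with the text "Users:" and that the Enzyme adapter matching your React version is already configured:

import React from "react";
import { shallow } from "enzyme";
import App from "./App";

test("renders the users heading", () => {
  // shallow() renders App one level deep, without mounting child components
  const wrapper = shallow(<App />);
  expect(wrapper.find("h2").text()).toBe("Users:");
});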

3.2 Jest

JestJS

Jest is a popular React testing framework or test runner suggested by the React community. The testing team prefers this tool to test applications for large-scale companies. Firms like Airbnb, Uber, and Facebook are already using this tool.

Some of the benefits of Jest are:

  • Keeps track of large test cases.
  • Easy to configure and use.
  • Snapshot-capturing with Jest.
  • Ability to mock API functions.
  • Conduct parallelization testing method.

3.3 Mocha

MochaJS

Mocha is also a widely used testing framework. It runs on Node.js, and testers use it to check applications developed using React. It lets developers structure their tests in a very flexible manner; a small example follows the list of benefits below.

Here are some of the benefits of Mocha:

  • Easy async testing.
  • Easy test suite creation.
  • Highly extensible for mocking libraries.
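
A small Mocha test, paired with Node's built-in assert module, could look like the sketch below; the helper function is defined inline (mirroring the getFormattedUserName utility used later in this article) to keep the example self-contained:

// utility.spec.js (run with: npx mocha utility.spec.js)
const assert = require("assert");

// A small helper defined inline to keep the example self-contained.
const getFormattedUserName = (username) =>
  username.startsWith("@") ? username : "@" + username;

describe("getFormattedUserName", function () {
  it("adds @ at the beginning of the username", function () {
    assert.strictEqual(getFormattedUserName("jc"), "@jc");
  });

  it("does not add @ when it is already present", function () {
    assert.strictEqual(getFormattedUserName("@jc"), "@jc");
  });
});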

3.4 Jasmine

Jasmine

Jasmine is a simple JavaScript testing framework for browsers and Node.js. It follows a behavior-driven development (BDD) pattern, which makes it a good fit for describing an application's expected behavior before implementing it. Besides this, third-party tools like Enzyme can be used alongside Jasmine for testing React applications. 

Some of Jasmine’s benefits include:

  • No DOM is required.
  • Asynchronous function testing.
  • Front-end and back-end testing is possible.
  • Inbuilt matcher assertion.
  • Custom equality checker assertion.

Despite its many benefits, Jasmine isn’t the perfect testing framework for React apps. It doesn’t offer support for testing snapshots. For this, it requires the usage of third-party tools.

4. How to Test React Applications?

Here are the steps that can help you test a React application. 

4.1 Build a Sample React App

First of all, we will create a minimal application that displays users’ information from an API. This application will be then tested to see how React app testing works. 

Here, as we only have to focus on the front end of the application, we will use JSONPlaceholder user API. First of all, the developer needs to write the following code in the App.js file:

import { useEffect, useState } from "react";
import axios from "axios";
import { getFormattedUserName } from "./utility";
import "./App.css";


function App() {
  const [users, setUsers] = useState([]);


  // Fetch the data from the server
  useEffect(() => {
    let isMounted = true;
    const url = "https://jsonplaceholder.typicode.com/users";
    const getUsers = async () => {
      const response = await axios.get(url);
      if (isMounted) {
        setUsers(response.data);
      }
    };
    getUsers();
  }, []);


  return (
    <div>
      <h2>Users:</h2>
      <ul>
        {users.map((user) => {
          return (
            <li key={user.id}>
              {user.name} --{" "}({getFormattedUserName(user.username)})
            </li>
          );
        })}
      </ul>
    </div>
  );
}

export default App;

Then, it is time to create a file in the src folder. Name the file utility.js and write the following function in it:

export function getFormattedUserName(username) {
  return "@" + username;
}

Now, run the application using this command:

npm start

Once you run the application, you will see the following output:

User List- Output

4.2 Unit Testing

Now, let's start testing the application that we have just developed, beginning with unit testing. A unit test checks individual software units or React components separately. A unit in an application can be anything from a routine, function, module, or method to an object. Whatever unit the testing team decides to target, the objective is to verify that it produces the expected results. A unit test module uses a series of helpers provided by tools like Jest to specify the structure of the test. 

To carry out unit testing, developers can use methods like test or describe as the below-given example:

describe('my function or component', () => {
 test('does the following', () => {
   // add your testing output
 });
});

In the above example, the test block is the test case and the described block is the test suite. Here, the test suite can hold more than one test case but a test case doesn’t need to be present in a test suite. 

When any tester is writing inside a test case, he can include assertions that can validate erroneous or successful processes. 

In the below example, we can see assertions being successful:

describe('true is truthy and false is falsy', () => {
 test('true is truthy', () => {
   expect(true).toBe(true);
 });

 test('false is falsy', () => {
   expect(false).toBe(false);
 });
});

After this, let us write the first test case, targeting the getFormattedUserName function from the utility module. For this, the developer will have to build a file called utility.test.js. All the test files use this naming pattern: {file}.test.js, where {file} is the module file name that needs to be tested. 

In this code, the function will take a string as an input and will offer the same string as an output by just adding an @ symbol at its beginning. Here is an example of the same:

import { getFormattedUserName } from "./utility";


describe("utility", () => {
  test("getFormattedUserName adds @ at the beginning of the username", () => {
    expect(getFormattedUserName("jc")).toBe("@jc");
  });
});

As seen in the above code, any tester can easily specify the module and test case in the code so that if it fails, they can get an idea about the things that went wrong. As the above code states, the first test is ready, so the next thing to do is run the test cases and wait for the output. For this, the tester needs to run a simple npm command:

npm run test

If you want to run only a single test suite, use the following command:

npm run test -- -t utility

This is useful when there are other tests created by create-react-app. If everything goes well when running the above commands, you will see an output like this:

Unit test Output

Successful output.

Here, in the output, you can observe that one test passed successfully. To see what happens when something goes wrong, let's add a new test to the utility test suite using the code below:

test('getFormattedUserName function does not add @ when the username already starts with @', () => {
    expect(getFormattedUserName('@jc')).toBe('@jc');
  });

This covers a different situation: if the username already has an @ symbol at the start of the string, the function should return the username as provided, without adding another symbol. Here is the output:

Failed Unit Test

Failed test output.

As anticipated, the test failed because the value returned by the function did not match the expected output. Once the tester detects the issue, it can be fixed with the following code:

export function getFormattedUserName(username) {
  return !username.startsWith("@") ? `@${username}` : username;
}

The output of this code will be:

Unit Test Successful

As you can see, the test is a success.

4.3 SnapShot Testing

Now, let us go through another type of testing which is Snapshot testing. This type of test is used by the software development teams when they want to make sure that the UI of the application doesn’t change unexpectedly.

Snapshot testing is used to render the UI components of the application, take its snapshot, and then compare it with other snapshots that are stored in the file for reference. Here, if the two snapshots match, it means that the test is successful, and if not, then there might have been an unexpected change in the component. To write a test using this method, the tester needs the react-test-renderer library as it allows the rendering of components in React applications. 

The very first thing a tester needs to do is install the library. For this, the following command can be used:

npm i react-test-renderer

After this, it’s time to edit the file to include a snapshot test in it.

import renderer from "react-test-renderer";

// ...

test("check if it renders a correct snapshot", async () => {
  axios.get.mockResolvedValue({ data: fakeUserData });
  const tree = renderer.create(<App />).toJSON();
  expect(tree).toMatchSnapshot();
});


// ...

When the tester runs the above code, he will get the following output.

SnapShot Test Output

When this test runs, a test runner like Jest creates a snapshot file and adds it to the __snapshots__ folder. Here is how it will look:

// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`check if it renders a correct snapshot 1`] = `
<div>
  <h2>
    Users:
  </h2>
  <p>
    Loading user details...
  </p>
</div>
`;

Now, if one modifies the App component, even changing a single text value, the snapshot test will fail because the rendered output no longer matches the stored snapshot.

Renders Correct Snapshot

To make the test pass again, the tester needs to inform Jest that the change was intentional. This can be done easily while Jest is in watch mode by pressing u to update the stored snapshot. A new snapshot, as shown below, will then be taken:

SnapShot Test Successful

4.4 End-to-end Testing

Another popular type of React application testing is end-to-end testing. In this type of testing, the entire system is included, with all its complexities and dependencies. In general, UI tests are difficult and expensive, which is why end-to-end tests usually cover only the critical flows of the application, while unit tests cover the rest. This type of testing can follow various approaches (a small example follows the list):

  • Usage of platform for automated end-to-end testing.
  • Automated in-house end-to-end testing.
  • Usage of platform for manual end-to-end testing.
  • Manual in-house end-to-end testing.
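
As an example of the automated approach, a minimal Cypress end-to-end test for the sample user list app built above might look like the following sketch; the local URL, file path, and visible text are assumptions based on that sample app:

// cypress/e2e/users.cy.js
describe("user list page", () => {
  it("loads and displays users from the API", () => {
    cy.visit("http://localhost:3000");
    cy.contains("Users:");
    // expect at least one list item once the API call resolves
    cy.get("li").should("have.length.greaterThan", 0);
  });
});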

4.5 Integration Testing

Now, we will go through integration testing which is also considered an essential type of react application testing. It is done to ensure that two or more modules can work together with ease.

For this, the software testing team will have to follow the below-given steps:

First of all, one needs to install the dependencies with yarn by using the following command:

yarn add --dev jest @testing-library/react @testing-library/user-event jest-dom nock

Or if you want to install it with npm, use this command line:

npm i -D jest @testing-library/react @testing-library/user-event jest-dom nock

Now, it's time to create an integration test suite file named viewGitHubRepositoriesByUsername.spec.js; Jest will automatically pick it up.

Now, import dependencies using the code:

import React from 'react'; // so that we can use JSX syntax
import {
 render,
 cleanup,
 waitForElement
} from '@testing-library/react'; // testing helpers
import userEvent from '@testing-library/user-event' // testing helpers for imitating user events
import 'jest-dom/extend-expect'; // to extend Jest's expect with DOM assertions
import nock from 'nock'; // to mock github API
import {
 FAKE_USERNAME_WITH_REPOS,
 FAKE_USERNAME_WITHOUT_REPOS,
 FAKE_BAD_USERNAME,
 REPOS_LIST
} from './fixtures/github'; // test data to use in a mock API
import './helpers/initTestLocalization'; // to configure i18n for tests
import App from '../App'; // the app that we are going to test

After that, you can set the test suite by following this code

describe('check GitHub repositories by username', () => {
 beforeAll(() => {
   nock('https://api.github.com')
     .persist()
     .get(`/users/${FAKE_USERNAME_WITH_REPOS}/repos`)
     .query(true)
     .reply(200, REPOS_LIST);
 });

 afterEach(cleanup);

 describe('if a user of GitHub has public repositories', () => {
   it('Users can view the list of public repositories by entering their GitHub username.', async () => {
     // arrange
     // act
     // assert
   });
 });


 describe("when a user on GitHub doesn't have any public repos", () => {
   it('The user is informed that the login provided for GitHub does not have any public repositories associated with it.', async () => {
     // arrange
     // act
     // assert
   });
 });

 describe('when logged in user does not exist on Github', () => {
   it('user is presented with an error message', async () => {
     // arrange
     // act
     // assert
   });
 });
});

5. Conclusion

As seen in this blog, to conduct React app testing, the testing team needs to have a proper understanding of the React testing libraries and tools. After that, they need to find out what React testing is needed for. This can help them choose the right tool from the above-listed ones.

FAQs

How is React testing conducted?

React testing is conducted with dedicated tools and libraries that support the delivery of high-quality software: they help developers reduce the time spent verifying changes and help testers create a dependable test suite.

Is React testing library Jest?

No, React Testing Library and Jest are two different things. Jest is a test runner for writing and executing tests, while React Testing Library provides utilities for rendering and querying React components; they are commonly used together. 
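
A minimal sketch of the two working together, assuming the sample App from this article (API mocking is omitted for brevity):

import { render, screen } from "@testing-library/react";
import App from "./App";

test("shows the users heading", () => {
  render(<App />);
  // toBeInTheDocument comes from @testing-library/jest-dom
  expect(screen.getByText("Users:")).toBeInTheDocument();
});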

Where can a React code be tested?

To test a React code, testers can use any tool like React Testing Library (RTL) and Enzyme. The choice depends on the application type and testing that is required for it.

Guide to Deploy React App on Various Cloud Platforms

For any app development company, the most crucial part of the development process is deployment. This is why the development teams need to understand the different options of deployment that are available in the market and learn how to use them to ensure that the deployment process is carried out smoothly. In this blog, we will go through some of the most popular platforms that can be used by React app development companies to quickly and efficiently deploy React applications for their clients. 

Let’s discuss step by step deployment of React applications using different platforms like AWS Amplify, Vercel, Heroku and more.

1. AWS Amplify

One of the most popular platforms to deploy and host modern React applications is the AWS Amplify Console. It provides custom domain setup, globally available CDNs, password protection, and feature branch deployments.

Core Features

  • Authentication
  • DataStore
  • Analytics
  • Functions
  • Geo
  • API
  • Predictions

Pricing

  • AWS Amplify lets users start creating the backend of an application for free and pay only for specific functionality when required. Hosting is also free for the first 12 months for up to 1,000 build minutes per month; beyond that, Amplify charges $0.01 per build minute.

Deploy React App with AWS Amplify

For React app development companies, deploying an application can sometimes become a daunting task. But with the right tools, React app developers can carry out the process smoothly. One of the most effective options for this is Amazon Web Services (AWS), which provides a cost-effective and simple solution for hosting static web apps. Here, we will have a look at a step-by-step process that a developer can follow while deploying a React app on AWS.

Prerequisites:

Before starting the deployment process, here are a few prerequisites that are required:

  • A React application: The development team must have experience in working with React applications.
  • Amazon Web Services (AWS) Amplify Account: An account for AWS Amplify is required if one doesn’t have it.

Step 1: Create React App Project

The very first step of this guide is to create a React application using NPM, the Node.js package manager. NPM handles the project's dependencies and build scripts and gives developers a fast, consistent setup for modern web projects.

Here, the React developers need to open the terminal and run the following command to create a new project setup: 

npx create-react-app react-app-demo

Now, the developer has to upload the application on any version control tool like BitBucket or GitHub. This is done to directly connect the developed app to the host platform.
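
If the project is not yet under version control, pushing it to GitHub typically looks like the following; the repository URL is a placeholder, so replace it with your own.

cd react-app-demo
git init
git add .
git commit -m "Initial commit"
git branch -M main
git remote add origin https://github.com/<your-account>/react-app-demo.git
git push -u origin main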

Step 2: Select Amplify into AWS Account

The next step is to choose Amplify in the AWS account and for this, the developer needs to first log in to the AWS Account. To get started with Amplify, click the “AWS Amplify” button as shown in the below image.

select amplify

Now, click on the “Get Started” Button to start the process.

Get Started with Amplify

Now, under the Amplify Hosting option, click on the “Get Started” button to host the application.

Host app

Step 3: Choose Your React App Repo

The next step is to select the application Repo, for which choose the “GitHub” Button to link your GitHub Account to the AWS Amplify Account.

select repo

Now, in the Add repository branch from the drop-down, choose the repository that you want to use to deploy an application on the AWS Amplify Account.

choose repo

Step 4: Configure Your React App

After selecting the repository, one needs to check the configuration of the application and verify the branch whose code will be deployed. As seen in the below image, the app name defaults to the project name; if you want to change it, this is the time. After that, the developer needs to check the settings in the "Build and test settings" section and make changes if required (a sample build specification is shown below for reference).

choose branch
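
For a create-react-app project, the auto-detected settings in the "Build and test settings" section usually resemble the amplify.yml sketch below; the exact values may differ for other setups, so treat this as a reference rather than a required configuration.

version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*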

After checking everything, the developer can click on the “Next” button.

Step 5: Deploy a React App

check build command and name

Now, you will be diverted to the review page where you can check the repository details along with the app setting details. And if all looks fine, you need to click on the “Save and deploy” button to deploy the app.

save and deploy

These steps will deploy the application successfully. If the React developer wants to check the deployed application, they can do so by clicking the link shown in the image below.

build and check preview

Step 6: Preview of Your Deployed React App

While reviewing the application, the developer can check the URL and can change it if desired by configuring the website domain to an AWS Amplify account. Besides URLs, developers can configure different things like Access Control, Monitoring, Domain, Build Settings, and more.

preview
build and check preview in Amplify

Congratulations on a successful deployment! Your application is now live on AWS Amplify and accessible to the world. Share the link as needed to showcase your application.

2. Vercel

Another popular service for deploying React applications is Vercel. It simplifies both the deployment process and team collaboration while building React applications, and it supports importing source code from GitLab, GitHub, and Bitbucket. With Vercel, developers also get access to starter templates that help in creating and deploying applications, along with HTTPS, serverless functions, and continuous deployment.

Core Features

  • Infinite Scalability
  • Observability as Priority
  • Intelligent Edge Caching
  • Image Optimization
  • Automatic Failover
  • Multi-AZ Code Execution
  • Atomic Deploys

Pricing

  • When it comes to hosting Hobby sites on Vercel, there is no charge but for commercial use, the Vercel platform charges from $20 per month per seat.

Deploy React app with Vercel

In the world of web development, developers can use different types of tools to deploy a React app. Here we will go through the step-by-step process of deploying a React app on Vercel.

Prerequisites:

Before any React app developer starts with this process, here are a few prerequisites for the same:

  • A React application: The development team must have experience in working on a React application that needs to be deployed.
  • Vercel Account: An account in Vercel is required.

Step 1: Build Your Application

Step 2: Login into the Vercel Account

The developer needs to log in to the Vercel Account. For this, they have to click on the “Continue with GitHub” button to log in with the help of the GitHub account. 

Vercel Login

Step 3: Choose Your React App Git Repository

Once logged in, the developer will be asked to choose a Git repository from the connected GitHub account. Here, one needs to click on the "Import" button next to the repository containing the application that is to be deployed.

Import Repo

Step 4: Configure Your React App

Now it’s time to check all configurations of the application. Here the developer needs to check the branch code and make changes in it if required. 

Then, as seen in the below image, the project name will be set to a default by the system; the developer can change it to any preferred name for the project in the Vercel account.

Similarly, the default Build Settings commands will be set in the “Build and Output Settings” section, one can change them as per the requirements. 

Besides this, from the same page, one can set multiple environment variables in the app. For this, there is a section named “Environment Variables”.
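
As a quick illustration, in a create-react-app project only variables prefixed with REACT_APP_ are exposed to the client bundle, and they are read at build time. The variable and endpoint names below are hypothetical examples only.

// REACT_APP_API_URL is a hypothetical variable added in Vercel's "Environment Variables" section
const apiUrl = process.env.REACT_APP_API_URL;

// hypothetical request, just to show how the value would be consumed
fetch(`${apiUrl}/repos`)
  .then((response) => response.json())
  .then((data) => console.log(data));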

After checking all the configurations and making the required changes, it’s time to deploy the application. For this, the developer needs to click on the “Deploy” button. 

Set Env

Step 5: Deploy React App

After that, you will see a page saying “Congratulations – You just Deployed a New React Project to Vercel”. This means that your application has been deployed. From this page, you can get an instant review of the application by clicking on the “Instant Preview” option.

deployed React app to Vercel

 Step 6: Preview of Your Deployed React App

In the application preview page, the developer can check the URL of the deployed application and can make changes to it by configuring the website domain to a Vercel account. To make changes, the developer needs to go to the “setting tab” on Vercel. From here, a developer can also make changes in the security and environmental variables.

preview
Change Settings

Congratulations on a successful deployment! Your React app is now live on Vercel and accessible to the world. Share the link as needed to showcase your application.
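
As an alternative to the dashboard flow above, the same deployment can also be scripted with the Vercel CLI; a minimal sketch, assuming you log in to the same Vercel account:

npm install -g vercel
vercel login
vercel          # creates a preview deployment from the current project directory
vercel --prod   # creates a production deployment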

3. Firebase

Firebase is a widely used platform for developing and scaling React applications. This Google product offers services like application hosting, Cloud Firestore, authentication, cloud functions, and more.

Core Features

  • Realtime Database
  • Authentication
  • Cloud Messaging
  • Performance Monitoring
  • Test Lab
  • Crashlytics

Pricing

  • For the Spark Plan, Firebase doesn’t charge anything but this plan offers limited data storage, users, and functions. Another plan is the Blaze Plan for which Firebase charges as per the project type and its requirements.

Deploy React App with Firebase

Now, we will have a look at the React development process using Firebase after going through tools like Vercel and AWS Amplify. Here is the step-by-step process of deploying your React app on Firebase.

Prerequisites:

Before the developer starts working with Firebase, here are a few prerequisites:

  • A React application: The development team working on Firebase to deploy the React application must have experience in working on the same application. 
  • Firebase Account: An account in Firebase will be required.

Step 1: Build Your React App

Step 2: Create a Project in the Firebase Account

After logging in to the Firebase account, the developer needs to click the "Create a project" button.

create a project in Firebase

Now, one will have to type the project name and click on the “Continue” Button to start the process.

set project name

Now, click on the “Continue” button for the next step.

step 2

Select a country from the dropdown, check all checkboxes, and click on the “create project” button.

step 3

Step 3: Enable Hosting on Your App

Now, to enable the hosting setup of the application, the developer will have to select the "Hosting" tab from the left sidebar in the Firebase account and click on the "Get Started" button.

select hosting and started

Step 4: Install Firebase on Your React App

After getting started with the hosting process, the developer will have to follow an installation process. 

1. To install firebase-tools, the developer will have to run the command "npm install -g firebase-tools".

install firebase

2. Now, log in to the Firebase account from the React application's directory using the "firebase login" command.

google login

3. Initialize Firebase in the React application using the "firebase init" command.

firebase init

Now, the developer will have to select the “Hosting” option.

select hosting with optional

Now, the development team will have to choose the “Use an existing project” option.

use existing project

Here, one will have to choose a newly created project from the options.

select app

The production build of the application is generated in the build folder by default, so that folder will be used as the public directory here.

options
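
Once firebase init finishes, the chosen options are written to a firebase.json file in the project root. For a create-react-app project that uses the build folder as the public directory, it typically looks something like the sketch below; the rewrite rule appears only if you answered "yes" to the single-page app prompt.

{
  "hosting": {
    "public": "build",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [
      {
        "source": "**",
        "destination": "/index.html"
      }
    ]
  }
}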

After this, to create the production build of the React app, the developer will have to run the "npm run build" command.

run build

Step 5: Deploy React App

After this, it’s time to deploy Firebase hosting sites. For this, the developer will have to run the command “firebase deploy”.

Deploy React app to Firebase

Step 6: Preview of Your Deployed App

Once the application is deployed, the developer can preview it and configure the website domain if required from the Firebase account.

Preview of your Deployed App
preview links on cmd

Congratulations on a successful deployment! Your app is now live on Firebase and accessible to the world. Share the link as needed to showcase your application.

4. Netlify

The next popular service to deploy a React application in our list is Netlify. This is an easy-to-use service. Developers can import projects from Bitbucket, GitHub, and GitLab. They can create multiple project aliases using this service and deploy it. 

Core Features

  • Modernize Architecture
  • Faster Time-to-Market
  • Multi-cloud Infrastructure
  • Robust Integrations
  • Effortless Team Management
  • Advanced Security
  • Seamless Migration

Pricing

  • The basic plan of Netlify is free, then it offers a Pro plan that charges $19 per month for each member, and the enterprise plan is custom-made for the companies as per their requirements and project types. 

Deploy React App with Netlify

Here, to overcome the daunting task of React deployment, we will use Netlify, an essential tool for it. This is the step-by-step process of deploying your React app on Netlify.

Prerequisites:

Here are a few prerequisites for working with Netlify:

  • A React application: The development team must have experience in working on the application that needs to be deployed. 
  • Netlify Account: An account on Netlify is required.

Step 1: Build Your ReactJS App

Step 2: Login into Netlify Account

To log into the Netlify account, the developer will have to click the “Log in with GitHub” button on the Netlify home page.

Netlify login

Step 3: Choose Your React App Repo

Now, to import the existing project, the developer must click on the “Import from Git” Button to link the GitHub Account to the Netlify Account.

Import from GIT

After this, by clicking on the “Deploy with GitHub” option, the developer will be able to import the app repositories from GitHub Account.

click Deploy with GitHub

From the list, the developer will have to select the Git repository of the application they want to deploy from the connected GitHub account.

select repo

Step 4: Configure Your React App

After importing the application repository, the developer can look at all the application configurations. Here, the developer can check the code of the application and make changes to it if required. 

The system will set a default project name, which the developer can change. Similarly, the build command is set by default and can also be changed in the "Build command" section.

Besides this, from the same page, the developers can also set multiple environment variables in the React app. This can be done from the “Environment Variables” section.
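
If the React app uses client-side routing (for example React Router), it is also worth adding a redirect rule so that deep links are served by index.html. On Netlify, one common way to do this, assuming a create-react-app project, is a _redirects file placed in the public folder so that it ends up in the published build directory:

/*    /index.html   200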

Now, to deploy the application after configuring it, the developer needs to click on the “Deploy reactapp-demo” button.

Add variable

Step 5: Deploy React App

Now, the developer will have to go to the “Deploys” section and click on the “Open production deploy” option to get a preview of the deployed React app.

check deployed preview

 Step 6: Preview of Your Deployed App

While reviewing the application, the developers can also change the URL of the application from the domain configuration option in the Netlify account.

Preview
team overview

Congratulations on a successful deployment! Your React app is now live on Netlify and accessible to the world. Share the link as needed to showcase your application.

5. Heroku

Heroku is used by a large developer community to deploy React applications. This service offers support for various programming languages along with features like a custom domain, a free SSL certificate, and Git integration.

Core Features

  • Modern Open-Source Languages Support
  • Smart Containers
  • Elastic Runtime
  • Trusted Application Operations
  • Leading Service Ecosystem
  • Vertical and Horizontal Scalability
  • Continuous Integration and Deployment

Pricing

  • When someone wants to launch hobby projects, Heroku doesn’t charge anything. But for commercial projects, one will have to pay $ 25 per month as it gives advanced features like SSL, memory selection, and analytics.

Deploy React App with Heroku using Dashboard

After going through various React deployment platforms, we will now go through the step-by-step process of deploying the React application on Heroku.

Prerequisites:

Before starting with Heroku, here are a few prerequisites:

  • A React application: The developer should have worked with the same application that is going through the deployment process. 
  • Heroku Account: The development team must have an account in Heroku.

Step 1: Build Your React App

Step 2: Install Heroku on the System

The developer needs to install the Heroku CLI on the system. For this, the command "npm install heroku -g" must be run in the terminal.

cmd

If the developer wants to check whether Heroku is already installed in the system or not, he will have to run the “heroku -v” command.

success install heroku

Step 3: Login Heroku Account on the system

Now, after installing the Heroku CLI on the system, it's time to log in to the platform. For this, the "heroku login" command can be used. The GitHub account will be connected to the Heroku account in a later step.

Heroku login

After login, you can check whether the login is successful or failed.

done login

Step 4: Create React App on Heroku

Now, the developer can create a new app on Heroku by choosing the "Create New App" button on the Heroku dashboard.

create new app

The developer will have to enter the application name and then click on the “Create app” button.

create a app

Now, we need to connect the Git repository to the Heroku account.

After creating an app, the developer needs to find the option “Connect to GitHub” and choose that option.

choose gitHub

After clicking on the “Connect to GitHub” button, you will get a popup where you can write your login credentials for GitHub.

connect github

Now choose the git repository of the application that needs to be deployed and click on the  “Connect” button to connect that repository.

connect repo

After that, select the branch of the code that needs to be deployed and then click on the “Deploy Branch” button to deploy the application.

select branch and deploy

Step 5: Preview of Your Deployed React App

Once you deploy the application, you can see its preview from where you can check the URL of the application and by configuring the domain, you can change it if required. 

preview

Congratulations on a successful deployment! Your React app is now live on Heroku and accessible to the world. Share the link as needed to showcase your application.
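
The dashboard flow above can also be reproduced from the terminal with the Heroku CLI. The sketch below uses a placeholder app name, assumes the default branch is main, and assumes a buildpack that can serve the production build, for example the community create-react-app buildpack; adjust it to your own setup.

heroku create reactapp-demo
heroku buildpacks:set mars/create-react-app
git push heroku main
heroku open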

6. AWS S3

AWS S3 (Simple Storage Service) is a platform that offers object storage that is required to create the storage and recover the data & information from the internet. It offers its services via a web services interface. 

Core Features

  • Lifecycle Management
  • Bucket Policy
  • Data Protection
  • Competitor Services
  • Amazon S3 Console

Pricing

  • The cost of AWS S3 Standard, the general-purpose storage class, starts at $0.023 per GB; S3 Intelligent-Tiering, which automatically optimizes storage costs, also starts at $0.023 per GB; and the other S3 storage classes start from $0.0125 per GB.

Deploy React App to AWS S3

The last React app deployment tool in our list is Amazon S3 (Simple Storage Service). It is a simple and cost-effective solution for hosting applications and storing data. Here, we will go through the step-by-step process of deploying your React app on AWS S3.

Prerequisites:

Before starting with AWS S3, here are some of the prerequisites:

  • A React application: The development team must have experience in working on the application they want to deploy. 
  • AWS Account: An account in AWS is required to access AWS services.

Step 1: Build Your React App

In this guide, we’ll create React app using Vite, a powerful build tool designed to bridge the Here, first of all, we will create a React application and for that, the developer will have to run the following command in the terminal.

 npm create vite@latest demo-react-app

Now, move into the project folder, install the dependencies with "npm install", and create a production-ready build by running the command below:

 npm run build

This command will help the developer to create an optimized production build in the “dist” directory.

Step 2: Create an AWS S3 Bucket

Now, log in to the AWS Management Console, open the S3 service, and click on the "Create bucket" button.

Create an AWS S3 Bucket

The developer will have to choose a unique name for the bucket and then select the region that is closest to the target audience of the business for improved performance.

create bucket

Step 3: Configure S3 Bucket to Host a Static Website

Enable Static Website Hosting


Now, after entering the bucket, the next thing to do is click on the “Properties” tab from the top of the page. After that scroll down to the “Static website hosting” section inside the Properties tab and click on the “Edit” button next to the “Static website hosting” section.

From here, enable "Host a static website" and set index.html as both the Index document and the Error document of the project.

Step 4: Configure Settings and Permissions

After this, it’s time to configure the permissions and settings of the AWS S3 bucket. This will ensure that the application can be accessed by users only. For this, the developer will have to follow the below-given steps:

 Disable Block Public Access Permissions

  1. Inside the “Permissions” tab of the bucket, find the “Block public access” section as these settings specify if the business owner wants the application to be accessed by the public or not.
  2. Then click on the “Edit” button to access the settings that can help in blocking public access.
  3. Disable all the "Block public access" settings; this makes the app publicly accessible. If you don't want public access, leave these settings enabled.

Besides, while inside the "Permissions" tab, find the "Bucket Policy" section. In this section, click on the "Edit" button to create a policy that allows public read access to the application's files. Then copy and paste the below-given policy document, adjusting the bucket name in the Resource field to match your own bucket.

 {
	"Version": "2012-10-17",
	"Statement": [
	 {
		"Sid": "PublicReadGetObject",
		"Effect": "Allow",
		"Principal": "*",
		"Action": "s3:GetObject",
		"Resource": "arn:aws:s3:::www.tatvasoft.demo.com/*"
	 }
	]
}
edit bucket policy

By applying the above-given settings and permissions instructions, the AWS S3 bucket will be ready to serve your React application to the public with some controls. 

Step 5: Publish React App to S3 then Access it with a Public Link

Now, the developer will have to publish the application and for that, the following steps will be useful.

Upload the Contents of Your Build Folder to AWS S3

First, the developer will have to click on the “Upload” button to begin the process. Then select all the content present in the React application’s “dist” folder but not the folder itself. After that, the developer will have to commence the upload to copy these contents to the AWS S3 bucket.

Upload the Contents of Your Build Folder to AWS S3
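
If the AWS CLI is installed and configured with suitable credentials, the same upload can be scripted instead of done through the console; the bucket name below is a placeholder.

aws s3 sync ./dist s3://your-bucket-name --delete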

 Use the Public Link to Access Your React App

Now, after the uploading process is completed, the developer will have to return to the “Properties” tab of the S3 bucket, find the public link in the “Static website hosting” section, and click the link to open the React application in a web browser.

Static website hosting

Congratulations on a successful deployment! Your React app is now live on AWS S3 and accessible to the world. Share the link as needed to showcase your application.

7. Conclusion

In this blog, we had a look at some amazing services that can be used to host and deploy React applications. All these platforms have their own pros and cons along with different methods and approaches to deploy an application. React app development companies can choose any of these platforms as per the application’s type, complexity, and requirement. 

FAQs:

Can I deploy React app without server?

Yes, it is possible to deploy your React app without running your own server. Build the app so that the build tool bundles and minifies the JavaScript, CSS, and all dependencies into static files that are referenced by index.html. Because everything the app needs is bundled together, it can be served as a static website from any static host or CDN, with no Node.js server or NPM modules required at runtime.

Is Firebase free for deployment?

Yes, you can use Firebase for free deployment. However, its no-cost tier plan is limited to a certain level of products. To use other high-level products, you have to subscribe to its paid-tier pricing plan. 

Which is better: AWS or Firebase?

Both platforms fulfill distinct project requirements, so there is no direct competition between AWS and Firebase. If you want to speed up app development, minimize deployment time, and have seamless hosting, then Firebase is the right pick for you. But if you are working on a more sophisticated project that demands extensive custom programming and server-level access, then AWS is the better choice.

Is Netlify better than Vercel?

Serverless functions are supported by both Netlify and Vercel. Vercel is an excellent choice for serverless applications because it is built around a serverless architecture, while Netlify also makes it seamless to integrate serverless workflows into a project through its support for AWS Lambda functions.

The post Guide to Deploy React App on Various Cloud Platforms appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/deploy-react-app/feed/ 0
Staff Augmentation vs Managed Services https://www.tatvasoft.com/blog/staff-augmentation-vs-managed-services/ https://www.tatvasoft.com/blog/staff-augmentation-vs-managed-services/#respond Wed, 03 Jan 2024 06:01:53 +0000 https://www.tatvasoft.com/blog/?p=12404 Today, more than before, businesses are looking for ways to outsource their IT operations. The process of finding and employing in-house IT personnel may be lengthy, difficult, and expensive, not to mention unpleasant in the case of a fast-growing business or a temporary project.

The post Staff Augmentation vs Managed Services appeared first on TatvaSoft Blog.

]]>

Key Takeaways

  1. In comparison of staff augmentation vs managed services, Staff Augmentation Model mainly offers an extension to the existing team whereas in Managed Services, the company outsources certain functions or projects to an experienced third party organization.
  2. Staff Augmentation is often utilized with Managed Services Model for specific services at certain points in time.
  3. The Staff Augmentation Model may become risky, costly, and less productive when the resources are inexperienced and there is a lack of time for training and development.
  4. IT companies utilizing the Staff Augmentation Model are essentially sourcing external resources for their work. Alternatively, they can adopt an effective managed services model to maximize value.
  5. For short term goals, it is advisable to go with Staff Augmentation Model whereas for long term initiatives and requirement of large team, Managed Services Model is preferable.

Today, more than before, businesses are looking for ways to outsource their IT operations. The process of finding and employing in-house IT personnel may be lengthy, difficult, and expensive, not to mention unpleasant in the case of a fast-growing business or a temporary project.

IT staff augmentation vs managed services has always been an evergreen debate in the IT industry and these are the two most common types of IT outsourcing models. Both approaches are viable substitutes for hiring employees full-time, but which one works best for you will depend on the nature and scope of your projects.

With the help of the staff augmentation model, you may outsource a variety of different jobs and responsibilities. Under the managed services model, the client gives the entire problem and its resolution to an outside company. While managed services excel at the long-term management of operations like architecture and security, staff augmentation is excellent for the short-term scaling of specific operations.

In order to establish which option is preferable, we will compare staff augmentation vs managed services and explore the benefits and drawbacks of each.

1. What is IT Staff Augmentation?

In this context, “staff augmentation” refers to the practice of adding a new member to an organization’s existing team. A remote worker or team is hired for a limited time and for specific tasks, rather than being hired full-time.

They are not fixed workers, but they are completely incorporated into the internal team. Companies interested in adopting this strategy for project development will save significant time and resources.

Hiring a full-time employee, by contrast, means formally registering the worker, sitting through extensive interviews, going through a lengthy onboarding procedure, and paying employment taxes; and letting that employee go later is even more involved.

Indeed, firing an employee is a complicated process in many Western nations. In Europe, it is normal procedure to present evidence that an employee lacks the necessary level of qualification, and a specialized commission will rule on whether you may do so. Think about all the time and effort that can be wasted dealing with this bureaucracy. Staff augmentation is therefore attractive: it lets you meet your need for a highly specialized expert with far less effort and expense. Let's take a closer look at its benefits and drawbacks now.

Further Reading on:
IT Staff Augmentation vs Offshoring

Staff Augmentation vs Outsourcing

1.1 Pros of IT Staff Augmentation

Pros of IT Staff Augmentation

Effortless Teamwork
Your existing team continues to function normally alongside the additional "resources," and the collaboration works reliably.

Staffing Adaptability
As needed, staffing levels may be quickly adjusted up or down. Furthermore, individuals sync up more efficiently and rapidly than disjointed teams.

High Proficiency at Low Cost
Adding new individuals to your team helps make up for any gaps in the expertise you may have. Because you are hiring people for their specialized expertise, you won’t have to spend time teaching them anything new.

In-house Specialist Expertise
You can put your attention where it belongs on growing your business and addressing IT needs by using staff augmentation to swiftly bridge skill shortages that arise while working on a software project that requires specialized knowledge and experience.

Reduce Management Issues
By using a staffing agency, you may reduce your risk and save money compared to hiring new employees. You have access to the big picture as well as any relevant details, are able to make decisions at any point in the procedure, and are kept in the loop the whole time.

Internal Acceptance
Permanent workers are more likely to swiftly adapt to working with temporary workers than they would be with an outsourced team, and they are less likely to worry about losing their employment as a result.

Keep to the Deadlines
When you need to get more tasks done in less time, but don’t have enough people to do it, staff augmentation can help. It can aid in the timely completion of tasks and the efficient execution of the project as a whole.

1.2 Cons of IT Staff Augmentation

Cons of IT Staff Augmentation

Training is Essential
It is imperative that you familiarize your temporary workers with the company's internal procedures, as these will likely differ from the methods they have used in the past.

Lack of Managerial Resources
Bringing new team members up to speed can be a drain on the existing team’s time and energy, but this is only a problem if you lack the means and foresight to effectively oversee your IT project.

Acclimatization of New Team Members
It’s possible that your team’s productivity will dip temporarily as new members learn the ropes of the business.

Temporary IT Assistance
Hiring an in-house staff may be more cost-effective if your business routinely requires extensive IT support.

1.3 Process of IT Staff Augmentation

In most organizations, there are three main phases to the staff augmentation process:

Determining the Skill Gap
You should now be able to see where your team is lacking in certain areas of expertise and have the hiring specialists to fill those voids with the appropriate programmers.

Onboarding Recruited Staff
Experts, once hired, need to be trained in-house to become acquainted with the fundamental technical ideas and the team. Additionally, they need to be included in the client’s working environment.

Adoption of Supplemental Staff
At this point, it’s crucial that the supplementary staff actively pursues professional growth. The goal of hiring new team members is to strengthen the organization as a whole so that they can contribute significantly to the success of your initiatives.

1.4 Types of Staff Augmentation

Let us delve into the various staff augmentation models and their potential advantages for companies of any size:

Project-Based Staff Augmentation 
Designed for businesses that have a sudden demand for a dedicated team of software engineers or developers to complete a single project.

Skill-Based Staff Augmentation 
Fills skill-specific staffing shortages in industries like healthcare and financial technology with temporary developers who bring the required expertise.

Time-Based Staff Augmentation 
The time-based approach is the best choice if you want the services of external developers for a specified duration.

Hybrid Staff Augmentation 
The goal is to provide a unique solution for augmenting staff and assets by integrating two or more of the primary methods.

Onshore Staff Augmentation 
Recruiting information technology experts from the same nation as the business is a part of this approach. If your team and the IT department need to communicate and work together closely, this is the way to go.

Nearshore Staff Augmentation 
Nearshore software development makes use of staff augmentation, which involves recruiting a development team from a neighboring nation that shares the same cultural and time zone characteristics.

Offshore Staff Augmentation 
This term describes collaborating with IT experts located in a faraway nation, usually one with an extensive time difference. The best way to save money while adding staff is to hire developers from outside the country.

Dedicated Team Augmentation 
If you want highly specialized knowledge and experience, it’s best to hire a dedicated development team that works only for your company.

2. What is Managed Services?

With managed IT services, you contract out certain IT tasks to an outside company, known as a “managed service provider” (MSP). Almost any topic may be addressed with the help of a service, including cybersecurity issues, VoIP issues, backup and recovery, and more. When businesses don’t have the resources to build and run their own IT departments, they often turn to outsource for help.

Having a reliable MSP allows you to put your attention where it belongs on running your business rather than micromanaging its information technology systems.

Even so, if you pick the wrong provider, you may be stuck in a long-term service level agreement that doesn't meet your company's demands, and that might cause a lot of trouble down the road. Therefore, it is quite important that you take the MSP screening procedure seriously.

2.1 Pros of Managed Services

Pros of Managed Services

Efficient Use of Time and Money
You don’t need to buy any new equipment and also no need to pay regular salaries. In this way, you can effectively use time and money. 

Skills and Knowledge
If you outsource your business’s demands to qualified individuals, you may take advantage of their unbounded knowledge and experience to give your company a leg up on the competition.

Security
If you outsource your IT, the service provider will make sure your company's systems are secure enough to prevent data breaches.

Flexibility
Managed IT service providers, in contrast to in-house teams, are available around the clock, which boosts efficiency.

Monitoring
The service assumes control of the entire project or any part of the project, and acts as project manager, keeping tabs on all project activities and securing all required resources.

Outcome
In most cases, the managed services provider will analyze the potential dangers and propose the best course of action.

2.2 Cons of Managed Services

Cons of Managed Services

Actual Presence
Due to the distant nature of the IT managed services organization, you will be responsible for resolving any issues that develop at the physical location.

Additional Expenditure
A complete set of low-cost tools and resources is not always available. There are those who charge more than others.

Security and Control
When you engage a service provider, you are essentially giving them permission to view your company’s most private files.

Inconvenient Costs of Switching
It might be detrimental to your organization if your managed IT services provider suddenly shuts down without warning.

Changing IT Needs
Your company’s productivity and expansion potential will be stunted if you have to work with an IT service provider that doesn’t care about what it requires.

2.3 Process of Managed Services

An attitude of partnership is essential to the success of the managed services (outsourcing) model. It’s noteworthy that the idea of long-term partnerships with excellent suppliers has been more easily accepted in other sectors of an organization than in IT. Managed service providers base their whole business models on providing exceptional service to their customers, which is why they put so much effort into developing and maintaining their service delivery capabilities.

Partnership with a reliable managed services provider frees up time and resources for IT management to concentrate on maximizing technology’s contribution to the company’s bottom line. The biggest obstacle is the mistaken belief that you have to give up control in order to delegate day-to-day operations when, in reality, you always do thanks to your relationships and contracts.

Those IT departments that have grown to rely on the staff augmentation firms  might reap substantial economic and service benefits by transitioning to a managed services (outsourcing) model.

Managed service (outsourcing) models emphasize delivering “outcomes” (service levels and particular services tied to a volume of activity) for a fixed fee rather than “inputs” (resources). The client benefits from the assurance of fixed costs, while the supplier takes on the risk involved with making good on the promise of delivery.

Because the cost of meeting service level obligations can exceed the agreed price if it is not estimated or managed properly, the outsourcing provider has a strong incentive to adopt productivity tools and sound operational practices that maintain operational health, which in turn brings value to the customer.

Managed services (outsourcing) models are advantageous to the provider because they allow for more time for long-term planning, resource management, workload balancing between employees, and job allocation across a global delivery model.

2.4 Types of Managed Services

Security and Networking Solutions
Here, a managed services company often handles all network-related duties, such as setting up your company’s local area network (LAN), wireless access points (WAPs), and other connections. Options for data backup and storage are also managed by the MSP. These firms also provide reliable and faster networking and security solutions.

Security Management
This remote security infrastructure service in managed services models includes backup and disaster recovery (BDR) tools and anti-malware software, and it updates all of these tools regularly.

Communication Services
Messaging, Voice over Internet Protocol (VoIP), data, video, and other communication apps are all managed and monitored by this service. Providers can sometimes fill the role of a contact center in your stead.

Software as a Service
The service provider provides a software platform to businesses in exchange for a fee, typically in the form of a membership. Microsoft’s 365 suite of office applications, unified messaging and security programs are a few examples.

Data Analytics
Data analytics is a requirement if you’re seeking monitoring services for data management. This offering incorporates trend analysis and business analytics to plan for future success.

Support Services
In most circumstances, this choice includes everything from basic problem-solving to complex scenarios requiring IT support.

3. IT Staff Augmentation vs Managed Services

The contrasts between IT staff augmentation vs managed services are below in the following table.

Key Parameters: IT Staff Augmentation vs Managed Services

Advantages
  • IT Staff Augmentation: effortless teamwork; the ability to stretch the workforce; easier and cheaper skill expansion; proficient in-house specialists; fewer problems with management; internal acceptance; meeting deadlines
  • Managed Services: cost-effectiveness with quick results; competence and know-how; security and adaptability; rapidly observable outcomes

Disadvantages
  • IT Staff Augmentation: new team members must be integrated; suitable mainly for temporary tech support situations
  • Managed Services: you must handle on-site issues yourself due to the provider's remote nature; potentially higher expenses; discrepancies in control and security; fees incurred while changing service providers; challenges in keeping up with ever-evolving IT needs

Processes: Staff Augmentation outsources responsibilities and operations to third parties (inputs); Managed Services outsources management and solutions (outputs)
Billing: Staff Augmentation is billed for time and materials on a regular basis (usually every two weeks); Managed Services is billed as a retainer fee, typically once a year
Forms of Projects: Staff Augmentation is highly adaptable and scalable, ideal for projects with a short yet intense growth phase; Managed Services provides a strong foundation, ideal for long-term IT administration
Hiring: Staff Augmentation resources are employed by the vendor; Managed Services teams are assembled by the vendor for the engagement
Office Facilities: vendor in both models
Administration: customer (Staff Augmentation); vendor (Managed Services)
Engagement: full-time (Staff Augmentation); full-time or part-time (Managed Services)
Overhead Expenses: vendor in both models
Payroll: vendor in both models
Employee Benefits: vendor/client (Staff Augmentation); vendor only (Managed Services)
Payroll Analysis: customer (Staff Augmentation); vendor (Managed Services)
Ratings Evaluation: vendor in both models
Communication: direct communication (Staff Augmentation); through the vendor's PM (Managed Services)

Best Use Cases
  • IT Staff Augmentation: short-term requirements; minimal projects; projects requiring adaptability
  • Managed Services: long-term initiatives; outsourcing complete projects; cost savings that increase over time

4. Conclusion

Both the staff augmentation and managed services models can be reduced to their essence: staff augmentation covers short-term resourcing needs, while managed services covers long-term, outcome-based engagements.

Clearly, both staff augmentation and managed services are viable ways to implement your business ideas profitably. However, there is a significant difference between the two approaches, which makes it hard to tell which one is superior just by looking at them. The requirements are the primary factor in determining the answer.

Staff augmentation is the way to go if you need a quick fix that involves bringing in skilled workers to fill in the gaps for a limited time. You may get the desired degree of adaptability and savings using this approach. The managed services approach is ideal if you want to outsource the entire project. Your project will be managed by a group of people who are solely responsible for it. You may save money in the long run by establishing a consistent budget for your IT outsourcing.

With the help of staff augmentation, you may outsource a variety of different jobs and responsibilities. Under the managed services model, the client gives the entire problem and its resolution to an outside company. While managed services excel at the long-term management of operations like architecture and security, staff augmentation is excellent for the short-term scaling of specific operations.

In a nutshell, it’s necessary to identify your needs before jumping to a suggested conclusion since every project has its own distinctive needs and objectives.

The post Staff Augmentation vs Managed Services appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/staff-augmentation-vs-managed-services/feed/ 0
A Complete Guide to React Micro Frontend https://www.tatvasoft.com/blog/react-micro-frontend/ https://www.tatvasoft.com/blog/react-micro-frontend/#respond Tue, 05 Dec 2023 08:15:33 +0000 https://www.tatvasoft.com/blog/?p=12274 It is a difficult and challenging task for developers to manage the entire codebase of the large scale application. Every development team strives to find methods to streamline their work and speed up the delivery of finished products. Fortunately, concepts like micro frontends and microservices are developed to manage the entire project efficiently and have been adopted by application development companies.

The post A Complete Guide to React Micro Frontend appeared first on TatvaSoft Blog.

]]>

Key Takeaways

  1. When developers from various teams contribute to a single frontend monolith on top of a microservices architecture, it becomes difficult to maintain a large-scale application.
  2. To manage the large-scale or complex application, breaking down the frontend into smaller and independently manageable parts is preferable.
  3. React is a fantastic library! One can create robust Micro-Frontends using React and tools like Vite.
  4. Micro Frontend with react provides benefits like higher scalability, rapid deployment, migration, upgradation, automation, etc.

It is a difficult and challenging task for developers to manage the entire codebase of the large scale application.  Every development team strives to find methods to streamline their work and speed up the delivery of finished products. Fortunately, concepts like micro frontends and microservices are developed to manage the entire project efficiently and have been adopted by application development companies.   

Micro frontends involve breaking down the frontend side of the large application into small manageable parts. The importance of this design cannot be overstated, as it has the potential to greatly enhance the efficiency and productivity of engineers engaged in frontend code. 

Through this article, we will look at micro frontend architecture using react and discuss its advantages, disadvantages, and implementation steps. 

1. What are Micro Frontends?

The term “micro frontend” refers to a methodology and an application development approach that ensures that the front end of the application is broken down into smaller, more manageable parts which  are often developed, tested, and deployed separately from one another. This concept is similar to how the backend is broken down into smaller components in the process of microservices.

Read More on Microservices Best Practices

Each micro frontend consists of code for a subset (or “feature”) of the whole website. These components are managed by several groups, each of which focuses on a certain aspect of the business or a particular objective.

Being a widely used frontend technology, React is a good option for building a micro frontend architecture. Along with the react, we can use vite.js tool for the smooth development process of micro frontend apps. 

What are Micro frontends

2.1 Benefits of Micro Frontends

Here are the key benefits of the Micro Frontend architecture: 

Key Benefit Description
Gradual Upgrades
  • It might be a time-consuming and challenging task to add new functionality to a massive, outdated, monolithic front-end application.
  • By dividing the entire application into smaller components, your team can swiftly update and release new features via micro frontends.
  • Using multiple frameworks, many portions of the program may be focused on and new additions can be deployed independently instead of treating the frontend architecture as a single application.
  • In this way, teams can improve overall dependency management, UX, load time, design, and more.
Simple Codebases
  • Many times, dealing with a large and complicated code base becomes irritating for the developers.
  • Micro Frontend architecture separates your code into simpler, more manageable parts, and gives you the visibility and clarity you need to write better code.
Independent Deployment
  • Independent deployment of each component is possible using Micro frontend.
Tech Agnostic
  • You may keep each app independent from the rest and manage it as a component using micro frontend.
  • Each app can be developed using a different framework, or library as per the requirements.
Autonomous Teams
  • Dividing a large workforce into subgroups can increase productivity and performance.
  • Each team of developers will be in charge of a certain aspect of the product, enhancing focus and allowing engineers to create a feature as quickly and effectively as possible.

In a Reddit thread on r/AskProgramming, front-end developer u/angle_of_doom describes how a React micro frontend setup helped with AWS CloudWatch.

2.2 Limitations of Micro Frontends

Here are the key limitations of Micro Frontend architecture: 

Limitations Description
Larger Download Sizes
  • Micro Frontends are said to increase download sizes due to redundant dependencies.
  • Larger download sizes result from the fact that each micro app is built with React or a related library/framework and must download its own dependencies whenever a user accesses that particular page.
Environmental Variations
  • If the development container differs from the production container, the consequences can be severe.
  • When the two environments diverge, the micro frontend may malfunction or behave differently after release to production.
  • The universal style, which may be a component of the container or other micro frontends, is a particularly delicate aspect of this problem.
Management Complexity
  • Micro Frontend comes with additional repositories, technologies, development workflows, services, domains, etc. as per the project requirements.
Compliance Issues
  • It might be challenging to ensure consistency throughout many distinct front-end codebases.
  • To guarantee excellence, continuity, and accountability are kept throughout all teams, effective leadership is required.
  • Compliance difficulties will arise if code review and frequent monitoring are not carried out effectively.

A related thread on r/reactjs, started by u/crazyrebel123, discusses the disadvantages of micro frontends in practice.

Now, let’s see how Micro Frontend architecture one can build with React and other relevant tools. 

3. Micro Frontend Architecture Using React

Micro frontends are taking over the role of monolithic design, which has served as the standard in application development for years. Monolithic designs have a long history of popularity, and many prominent software developers and business figures remain enthusiastic supporters. Yet as time goes on, new technologies and concepts emerge that improve on what everyone is used to.

The notion of a "micro frontend" in React is not entirely new; rather, it represents an evolution of previous architectural styles. The same trends that pushed microservice architecture into the mainstream, such as social media, cloud technology, and the Internet of Things, are now driving the frontend in the same direction.

Because of the switch to continuous deployment, micro frontend with react provides additional benefits to enterprises, such as:

  • High Scalability
  • Rapid Deployment
  • Effective migration and upgrading
  • Technology-independence
  • Strong isolation between independently built parts
  • High levels of deployment and automation
  • Reduced development time and cost
  • Fewer threats to security and reliability

Let’s go through the steps of creating your first micro frontend architecture using react: 

4. Building Micro Frontend with React and Vite

4.1 Set Up the Project Structure

To begin with, let’s make a folder hierarchy.

# Create folder named react-vite-federation-demo
# Folder Hierarchy 
--/packages
----/application
----/shared

The following instructions will put you on the fast track:

mkdir react-vite-federation-demo && cd ./react-vite-federation-demo
mkdir packages && cd ./packages

The next thing to do is to use the Vite CLI to make two separate directories: 

  1. application, a react app which will use the components, 
  2. shared, which will make them available to other apps.
#./react-vite-federation-demo/packages
pnpm create vite application --template react
pnpm create vite shared --template react

4.2 Set Up pnpm Workspace

Now that you know you’ll be working with numerous projects in the package’s folder, you can set up your pnpm workspace accordingly.

For this purpose, create a package.json file in the project's root directory:

touch package.json

Write the following code to define various elements in the package.json file. 

{
  "name": "react-vite-federation-demo", 
  "version": "1.1.0",
  "private": true,   
  "workspaces": [
    "packages/*"
  ],
  "scripts": {
    "build": "pnpm  --parallel --filter \"./**\" build",
    "preview": "pnpm  --parallel --filter \"./**\" preview",
    "stop": "kill-port --port 5000,5001"
  },
  "devDependencies": {
    "kill-port": "^2.0.1",
    "@originjs/vite-plugin-federation": "^1.1.10"
  }
}

This package.json file is where you specify shared packages and scripts for developing and executing your applications in parallel.

Then, make a file named “pnpm-workspace.yaml” to include the pnpm workspace configuration:

touch pnpm-workspace.yaml

Let’s indicate your packages with basic configurations:

# pnpm-workspace.yaml
packages:
  - 'packages/*'

The packages for all the applications can now be installed:

pnpm install

4.3 Add Shared Component  (Set Up “shared” Package)

To demonstrate, let’s create a basic button component and include it in our shared package.

cd ./packages/shared/src && mkdir ./components
cd ./components && touch Button.jsx

To define the button, add the following code in Button.jsx:

import React from "react";
import "./Button.css";

export default ({ caption = "Shared Button" }) => (
  <button className="shared-button">{caption}</button>
);

Let’s add CSS file for your button:

touch Button.css

Now, to add styles, write the following code in Button.css

.shared-button {
    background-color: #ADD8E6;
    color: white;
    border: 1px solid white;
    padding: 16px 30px;
    font-size: 20px;
    text-align: center;
}

It's time to expose the button through vite-plugin-federation, so let's do that now. This requires modifying the shared package's vite.config.js file with the following settings:

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import federation from '@originjs/vite-plugin-federation'
import dns from 'dns'

dns.setDefaultResultOrder('verbatim')

export default defineConfig({
  plugins: [
    react(),
    federation({
      name: 'shared',
      filename: 'shared.js',
      exposes: {
        './Button': './src/components/Button'
      },
      shared: ['react']
    })
  ],
  preview: {
    host: 'localhost',
    port: 5000,
    strictPort: true,
    headers: {
      "Access-Control-Allow-Origin": "*"
    }
  },
  build: {
    target: 'esnext',
    minify: false,
    cssCodeSplit: false
  }
})

Set up the plugins, preview, and build sections in this file.

4.4 Use Shared Component and Set Up “application” Package

The next step is to incorporate your reusable module into your application’s code. Simply use the shared package’s Button to accomplish this:

import "./App.css";
import { useState } from "react";
import Button from "shared/Button"; // remote component exposed by the "shared" package

function Application() {
  // local state from the original example, surfaced through the shared Button's caption
  const [count] = useState(0);

  return (
    <div className="App">
      <h1>Application 1</h1>
      <Button caption={`count is ${count}`} />
    </div>
  );
}

export default Application;

The following must be done in the vite.config.js file:

import { defineConfig } from 'vite'
import federation from '@originjs/vite-plugin-federation'
import dns from 'dns'
import react from '@vitejs/plugin-react'

dns.setDefaultResultOrder('verbatim')

export default defineConfig({
  plugins: [
    react(),
    federation({
      name: 'application',
      remotes: {
        shared: 'http://localhost:5000/assets/shared.js',
      },
      shared: ['react']
    })
  ],
  preview: {
    host: 'localhost',
    port: 5001,
    strictPort: true,
  },
  build: {
    target: 'esnext',
    minify: false,
    cssCodeSplit: false
  }
})

In this step, the federation plugin is configured to consume the remote package: the remotes entry points to the shared.js bundle served by the shared application, while the rest of the file follows the same configuration format as before.

4.5 Application Launch

The following commands will build and launch your applications:

pnpm build && pnpm preview

The shared React application can now be accessed at "localhost:5000":

Launch Your Application

At “localhost:5001”, you will see your application with a button from the shared application on “localhost:5000”:

5. Conclusion

Micro frontends are unquestionably a cutting-edge design that addresses many issues with monolithic frontend architecture. With a micro frontend, you can benefit from a quick development cycle, increased productivity, incremental upgrades, straightforward codebases, autonomous delivery, autonomous teams, and more.

Given the high degree of expertise necessary to develop micro frontends with React, we advise working with professionals. Be sure to take into account the automation needs, administrative and regulatory complexities, quality, consistency, and other crucial considerations before choosing the micro frontend application design.

The post A Complete Guide to React Micro Frontend appeared first on TatvaSoft Blog.

.NET Microservices Implementation with Docker Containers https://www.tatvasoft.com/blog/net-microservices/ https://www.tatvasoft.com/blog/net-microservices/#respond Thu, 23 Nov 2023 07:19:28 +0000 https://www.tatvasoft.com/blog/?p=12223 Applications and IT infrastructure management are now being built and managed on the cloud. Today's cloud apps require to be responsive, modular, highly scalable, and trustworthy.
Containers facilitate the fulfilment of these needs by applications.

The post .NET Microservices Implementation with Docker Containers appeared first on TatvaSoft Blog.


Key Takeaways on .Net Microservices

  1. The microservices architecture is increasingly favoured for large and complex applications composed of independent, individually deployable subsystems.
  2. Container-based solutions offer significant cost reductions by mitigating deployment issues arising from failed dependencies in the production environment.
  3. With Microsoft tools, one can create containerized .NET microservices using a custom and preferred approach.
  4. Azure supports running Docker containers in a variety of environments, including the customer’s own datacenter, an external service provider’s infrastructure, and the cloud itself.
  5. An essential aspect of constructing more secure applications is establishing a robust method for exchanging information with other applications and systems.

1. Microservices – An Overview

Applications and IT infrastructure are now being built and managed in the cloud. Today's cloud apps need to be responsive, modular, highly scalable, and reliable.

Containers help applications meet these needs. That said, putting an application in a container without first deciding on a design pattern is like setting off for a new place without directions: you may still get where you're going, but it probably won't be the fastest route.

This is where .NET microservices come in. With the help of a reliable .NET development company offering microservices, software can be built and deployed in a way that meets the speed, scalability, and dependability needs of today's cloud-based applications.

2. Key Considerations for Developing .Net Microservices

When using .NET to create microservices, it’s important to remember the following points:

API Design

Since microservices depend on APIs for inter-service communication, it’s crucial to construct APIs with attention. RESTful APIs are becoming the accepted norm for developing APIs and should be taken into consideration. To prevent breaking old clients, you should plan for versioning and make sure your APIs are backward compatible.
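
To make this concrete, here is a small, hedged sketch using ASP.NET Core minimal APIs; the /orders routes and their response shapes are invented for the example:

// Program.cs - a minimal sketch of versioned, backward-compatible endpoints.
// The /orders routes and their response shapes are hypothetical.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// v1 stays unchanged so existing clients keep working.
app.MapGet("/api/v1/orders/{id}", (int id) =>
    Results.Ok(new { Id = id, Status = "Shipped" }));

// v2 adds a field without breaking v1 consumers.
app.MapGet("/api/v2/orders/{id}", (int id) =>
    Results.Ok(new { Id = id, Status = "Shipped", Carrier = "DHL" }));

app.Run();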

Data Management

Because most microservices use their own databases, ensuring data consistency and maintenance can be difficult. If you’re having trouble keeping track of data across your microservices, you might want to look into utilising Entity Framework Core, a popular object-relational mapper (ORM) for .NET.
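
As a small, hedged sketch of this idea (the Order and OrdersDbContext names are invented for the example), each microservice can own its data through its own Entity Framework Core DbContext:

using Microsoft.EntityFrameworkCore;

// Each microservice owns its own database and exposes it through its own DbContext.
public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; } = string.Empty;
}

public class OrdersDbContext : DbContext
{
    public OrdersDbContext(DbContextOptions<OrdersDbContext> options) : base(options) { }

    public DbSet<Order> Orders => Set<Order>();
}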

Testing

Microservices need to be tested extensively to assure their dependability and sturdiness. For unit testing, you can use xUnit or Moq, and for API testing, you can use Postman.
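
For illustration, a basic xUnit test might look like the following hedged sketch; the OrderService class is invented for the example:

using Xunit;

// A hypothetical piece of business logic inside a microservice.
public class OrderService
{
    public decimal ApplyDiscount(decimal total, decimal rate) => total - (total * rate);
}

public class OrderServiceTests
{
    [Fact]
    public void ApplyDiscount_ReducesTheTotal()
    {
        var service = new OrderService();

        var discounted = service.ApplyDiscount(100m, 0.10m);

        Assert.Equal(90m, discounted);
    }
}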

Monitoring and Logging

Monitoring and analysis are crucial for understanding the health of your microservices and fixing any problems that may develop. You might use monitoring and logging tools such as Azure Application Insights.

CI/CD

If you want to automate the deployment of your microservices, you should use a continuous integration and continuous delivery (CI/CD) pipeline. This will help guarantee the steady delivery and deployment of your microservices.

3. Implementation of .Net Microservices Using Docker Containers

3.1 Install .NET SDK

Let’s begin from scratch. First, install .NET 7 SDK. You can download it from this URL: https://dotnet.microsoft.com/en-us/download/dotnet/7.0  

Once you complete the download, install the package and then open a new command prompt and run the following command to check .NET (SDK) information: 

> dotnet

If the installation succeeded, you should see an output like the following in command prompt: 

.NET SDK Installation

3.2 Build Your Microservice

Open command prompt on the location where you want to create a new application. 

Type the following command to create a new app named "DemoMicroservice":

> dotnet new webapi -o DemoMicroservice --no-https -f net7.0 

Then, navigate to this new directory. 

> cd DemoMicroservice

What do these commands mean? 

Command              Meaning
dotnet new webapi    Creates a new application of type webapi (that's a REST API endpoint).
-o                   Creates the directory where your app "DemoMicroservice" is stored.
--no-https           Creates an app that runs without an HTTPS certificate.
-f                   Indicates that you are creating a .NET 7 application.

3.3 Run Microservice

Type this into your command prompt:

> dotnet run

The output will look like this: 

run microservices

The Demo Code: 

Several files were generated in the DemoMicroservices directory. It gives you a simple service which is ready to run.  

The following screenshot shows the content of the WeatherForecastController.cs file. It is located in the Controllers directory. 

Demo Microservices

Launch a browser and enter http://localhost:<port number>/WeatherForecast once the program shows that it is monitoring that address.

In this example, it shows that it is listening on port 5056. The following image shows the output at the following URL: http://localhost:5056/WeatherForecast.

WeatherForecast Localhost

You’ve successfully launched a basic service.

To stop the service from running locally using the dotnet run command, type CTRL+C at the command prompt.

3.4 Role of Containers

In software development, containerization is an approach in which a service or application, its dependencies, and configurations (in deployment manifest files) are packaged together as a container image.    

The containerized application may be tested as a whole and then deployed to the host OS in the form of a container image instance.

Software containers are like cardboard boxes: they are a standardised unit of software deployment that can hold a wide variety of programs and dependencies, and they can be moved from location to location. 

This method of software containerization allows developers and IT professionals to easily deploy applications to many environments with few code changes.

If this seems like a scenario where containerizing an application may be useful, it’s because it is. The advantages of containers are nearly identical to the advantages of microservices.

The deployment of microservices is not limited to the containerization of applications. Microservices may be deployed via a variety of mechanisms, such as Azure App Service, virtual machines, or anything else. 

Containerization’s flexibility is an additional perk. Creating additional containers for temporary jobs allows you to swiftly scale up. The act of instantiating an image (by making a container) is, from the perspective of the application, quite similar to the method of implementing a service or a web application.

In a nutshell, containers improve the whole application lifecycle by providing separation, mobility, responsiveness, versatility, and control.

All of the microservices you create in this course will be deployed to a container for execution; more specifically, a Docker container.

3.5 Docker Installation

3.5.1. What is Docker?

Docker is a set of platform-as-a-service products that use OS-level virtualization to automate the deployment of applications as portable, self-sufficient containers that can run in the cloud or on-premises. Docker is free to use, with premium tiers for additional features. 

Azure supports running Docker containers in a variety of environments, including the customer’s own datacenter, an external service provider’s infrastructure, and the cloud itself. Docker images may be executed in a container format on both Linux and Windows.

3.5.2. Installation Steps

Docker is a platform for building containers, which package an app together with its dependencies and configuration. Follow the steps below to install Docker: 

  • First download the .exe file from docker website
  • Docker’s default configuration for Windows employs Linux Containers. When asked by the installation, just accept the default settings.
  • You may be prompted to sign out of the system after installing Docker.
  • Make sure Docker is up and running.
  • Verify that Docker is at least version 20.10 if you currently have it installed.

Once the setup is complete, launch a new command prompt and enter:

> docker --version

If the command executes and some version data is displayed, then Docker has been set up properly.

3.6 Add Docker Metadata

A Docker image can only be created by following the directions provided in a text file called a Dockerfile. If you want to deploy your program in the form of a Docker container, you’ll need a Docker image.

Get back to the app directory

Since the preceding step included opening a new command prompt, you will now need to navigate back to the directory in which you first established your service.

> cd DemoMicroservice

Add a DockerFile

Create a file named “Dockerfile” with this command:

> fsutil file createnew Dockerfile 0

To open the Dockerfile, execute the following command. 

> start Dockerfile

In the text editor, replace the Dockerfile's current content with the following:

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY DemoMicroservice.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "DemoMicroservice.dll"]

Note: Keep in mind that the file needs to be named as Dockerfile and not Dockerfile.txt or anything else.

Optional: Add a .dockerignore file

A .dockerignore file limits the set of files that are read during the 'docker build' process. Fewer files mean faster builds.

If you’re acquainted with .gitignore files, the following command will create a .dockerignore file for you:

> fsutil file createnew .dockerignore 0

You can then open it in your favorite text editor manually or with this command:

> start .dockerignore

Then add the following content to it:

Dockerfile
[b|B]in
[O|o]bj

3.7 Create Docker Image

Start the process with this command:

> docker build -t demomicroservice .

Docker images may be created with the use of the Dockerfile and the docker build command.

The following command will display a catalogue of all images on your system, including the one you just made.

> docker images

3.8 Run Docker image

Here’s the command you use to launch your program within a container:

> docker run -it --rm -p 3000:80 --name demomicroservicecontainer demomicroservice

To connect to a containerized application, go to the following address: http://localhost:3000/WeatherForecast 

demo microservices with docker weatherforecast

Optionally, the following command lets you observe your running container from a different command prompt: 

> docker ps
docker ps

To cancel the docker run command that is managing the containerized service, enter CTRL+C at the prompt.

Well done! You have built a tiny, self-contained service that can be easily deployed and scaled with Docker containers.

These elements provide the foundation of a microservice.

4. Conclusion

.NET, from the inception of .NET Core to the present day, was designed from the ground up to run natively in the cloud. Its cross-platform compatibility means your .NET code will execute regardless of the operating system your Docker image is built on. .NET is also incredibly quick, with the ASP.NET Core Kestrel web server consistently surpassing its competitors, which makes it well worth incorporating into your projects.

5. FAQs

Why is .NET core good for microservices?

.NET enables developers to break down a monolithic application into smaller parts and deploy each service separately, which not only helps businesses get the product to market faster but also makes it easier to adapt to changes quickly and flexibly. For this reason, .NET Core is considered a powerful platform for creating and deploying microservices. Some other major reasons it is a good option for microservices are: 

  • Easier maintenance as with .NET core, microservices can be tested, updated, and deployed independently.
  • Better scalability is offered by the .NET core. It scales each service independently to meet the traffic demands.

What is the main role of Docker in microservices?

When it comes to a microservices architecture, .NET app developers can create applications that are independent of the host environment by encapsulating each microservice in a Docker container. Docker enables developers to package the applications they create into containers, where each container bundles an executable component with the operating-system libraries needed to run the microservice on any platform. 

The post .NET Microservices Implementation with Docker Containers appeared first on TatvaSoft Blog.

React Testing Libraries & How to Use Them https://www.tatvasoft.com/blog/react-testing-libraries/ https://www.tatvasoft.com/blog/react-testing-libraries/#respond Wed, 11 Oct 2023 09:20:31 +0000 https://www.tatvasoft.com/blog/?p=12118 Software or application testing is one of the most important parts of software development life cycle. It helps to find out and eliminate potentially destructive bugs and ensure the quality of the final product. When it comes to testing apps, there are various React testing libraries and tools available online. One can test React components similar to any JavaScript code.

The post React Testing Libraries & How to Use Them appeared first on TatvaSoft Blog.


Key Takeaways

  1. As per Statista, React is the 2nd most used web framework in the world. There are various tools available for testing react applications. So, best tools and practices must be followed by developers.
  2. Jest, and React Testing Library are the most popular tools recommended by React Community to test react applications.
  3. Other than that, tools like Chai, Mocha, Cypress.io, Jasmine, and Enzyme are widely used for testing react apps.
  4. While selecting testing tools, consider iteration speed, environment required, dependencies, and flow length.

Software or application testing is one of the most important parts of software development life cycle. It helps to find out and eliminate potentially destructive bugs and ensure the quality of the final product. When it comes to testing apps, there are various React testing libraries and tools preferred by Reactjs development companies. One can test React components similar to any JavaScript code. 

Now, let’s dive deeper on how you can select best-fit libraries or tools for your project.

Points to Consider before Selecting React Testing libraries and Tools: 

  1. Iteration Speed
    Many tools offer a quick feedback loop between changes done and displaying the results. But these tools may not model the browser precisely. So, tools like Jest are recommended for good iteration speed.
  2. Requirement of Realistic Environment
    Some tools use a realistic browser environment but they reduce the iteration speed. React testing Libraries like mocha work well in a realistic environment. 
  3. Component Dependencies
    Some React components have dependencies on modules that may not work well in the testing environment, so these modules need to be carefully mocked out with proper replacements. Tools and libraries like Jest support mocking modules. 
  4. Flow Length
    To test a long flow, frameworks and libraries like Cypress, Playwright, and Puppeteer are used to navigate between multiple assets and routes.

Now, let’s explore a few of the best options to test React components.

Best React Testing Libraries and Tools

Here is a list of a few highly recommended testing libraries for React application.

1. Jest

Jest, a testing framework developed and supported by Facebook, is the industry standard. It has been embraced by companies like Uber and Airbnb for usage in testing React components.

While looking for a React Testing Framework, the React Community highly suggests Jest. The unit test framework is self-contained, complete with an automated test runner and assertion features.

Jest

Github Stars – 42.8K
Github Forks- 6.5K

1.1 Features of Jest

  • When it comes to JavaScript projects, Jest is meant to run without any further configuration.
  • Individual test procedures are started in parallel to increase throughput.
  • Quick and risk-free.
  • Entire toolkit of Jest is presented at one place. It is well documented and well maintained. 
  • Including untested files, Jest collects code coverage information from entire projects. 

1.2 How to Install Jest?

Download and install Jest using your preferred package manager:

 
npm install --save-dev jest
or
yarn add --dev jest

1.3 First Test Using Jest:

Now, let’s create a test for a made-up function that supposedly performs a simple arithmetic operation on two values. First, you should make a file called “add.js”

    function add(x, y) {
        return x+y;
    }
    module.exports = add;      

Start by making a new file and naming it add.test.js. Here is where we’ll find the real test:

    const add = require('./add');

    test('adds 2 + 2 to equal 4', () => {
      expect(add(2, 2)).toBe(4);
    });

The following must be included in your package.json:

    {
      "scripts": {
        "test": "jest"
      }
    }

This text will be printed by Jest when you run yarn test or npm test:

PASS  ./add.test.js

✓ adds 2 + 2 to equal 4 (5ms)

The first Jest test case is complete. This check ensured that two values were equal by comparing them using expect and toBe.

2. React Testing Library

A sizable developer community stands behind the React-testing-library development. You may simply test components by imitating real-world user actions without relying on the implementation details.

React Testing Library

Github Stars – 18.1K
Github Forks- 1.1K

To test the React DOM, this package provides a set of tools that work together, similar to Enzyme, mimicking real-world user interactions. You can put your React components through their paces with the help of the React Testing Library.

2.1 Features of React Testing Library

  • Very light-weight
  • It has inbuilt DOM testing utilities
  • More accessible library

2.2 How to Install React Testing Library?

The Node Package Manager (npm) ships with Node.js, so there is nothing else to set up; just add the library to your project:

    npm install --save-dev @testing-library/react

For yarn users, follow:

    yarn add --dev @testing-library/react

The react and react-dom peerDependencies lists can be found here. React Testing Library v13+ requires React v18. If your project is still on an earlier React version, install React Testing Library v12 instead:

    npm install --save-dev @testing-library/react@12

    yarn add --dev @testing-library/react@12

First Test Case: 

Create firstApp.js in src directory. 

firstApp.js

Write the following code to print the Hello World!

    import React from "react";

    export const App = () => <h1>Hello World!</h1>;

Now, add the firstApp.test.js file in the src directory. 

    import React from "react";
    import { render } from "@testing-library/react";
    import { App } from "./firstApp";

    describe("App Component", function () {
      it("should have Hello World! string", function () {
        let { getByText } = render(<App />);
        expect(getByText("Hello World!")).toMatchInlineSnapshot(`
          <h1>
            Hello World!
          </h1>
        `);
      });
    });

3. Chai

One of the most well-known node and browser-based BDD / TDD assertion and expectation React testing libraries is called Chai. It works well with any existing JavaScript testing framework like  Mocha, Jest and Enzyme as well.

Chai

Github Stars – 8K
Github Forks- 724

To specify what should be expected in a test, you may use functionalities like expect, should, and assert. Assertions about functions are another possible use.

3.1 Features of Chai

  • Chai allows software testers to perform various types of assertions.
    • Expect
    • Assert
    • Should
  • The Chai expect and should styles chain natural-language assertions together. The difference is that the should style extends every object with a should property (see the sketch after this list). 
  • The Chai assert style offers the developer a standard assert-dot notation comparable to what is included with Node.js.
  • The assert style component additionally offers further tests and browser support.
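
As a brief sketch, here are the three styles side by side (the values are illustrative):

    // The same check written in Chai's three assertion styles.
    const chai = require('chai');
    const { expect, assert } = chai;
    chai.should();

    const total = 2 + 3;

    expect(total).to.equal(5);   // Expect style: chained natural-language assertions
    total.should.equal(5);       // Should style: every object gains a .should property
    assert.equal(total, 5);      // Assert style: classic assert-dot notation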

3.2 How to Install Chai?

Chai can be downloaded through the NPM. Just enter this into your command prompt to begin the installation process.

    $ npm install --save-dev chai

The chai.js file included in the download may also be used in the browser after it has been installed using the npm package manager. For instance:

    <script src="chai.js" type="text/javascript"></script>

3.3 First Test Using Chai:

To begin using the library, just import it into your program and select the desired approach (either assert, expect, or should).

    var chai = require('chai');  
    var assert = chai.assert;    // Using Assert style
    var expect = chai.expect;    // Using Expect style
    var should = chai.should();  // Using Should style

app.js

    function app(a, b) {
        return a * b;
    }

test.js

    describe("Multiplication", () => {
        it("Multiplies 2 and 3", () => {
            chai.expect(app(2, 3)).to.equal(6);
        });
        it("Multiplies 4 and 2", () => {
            chai.expect(app(4, 2)).to.equal(8);
        });
    });

4. Mocha

Mocha is a popular framework to test react applications. It allows you to use any assertion library, runs tests asynchronously, generates coverage reports, and works in any web browser.

Mocha

Github Stars – 22.2K
Github Forks- 3K

While putting the required code together for a test, developers use the appropriate React development tools, methodologies, and approaches. Mocha is compatible with a broad variety of testing functions and packages, and it is a good substitute for Jest thanks to the extra flexibility it offers in areas like mocking.

4.1 Features of Mocha

  • Capabilities to test synchronous and asynchronous programs using a simple interface.
  • Versatile and precise analysis
  • The capacity to execute tests in a linear manner while monitoring for and reacting to undetected exceptions by tracing them to test cases.
  • Ability to execute functions in a predetermined order and record the results to a terminal window.
  • Software state is automatically cleaned up so that test cases can run separately from  one another.

4.2 How to Install Mocha?

Installing Mocha as a development dependency on your React project is the first step toward utilizing Mocha to test your React apps.

    npm i --save-dev mocha

The following command should be used if Yarn is being used as the package manager:

    yarn add --dev mocha

To include Mocha in your package.json, update the test script first.

    {
        "scripts": {
          "test": "mocha"
        }
      }      

4.3 First Test Using Mocha:

    // test/test.js

    var assert = require('assert');
    describe('Array', function() {
      describe('#indexOf()', function() {
        it('should return -1 when the value is not present', function() {
          assert.equal([5, 4, 3, 2, 1].indexOf(6), -1);
        });
      });
    });

The test above looks for the value 6 in the array and asserts that indexOf returns -1 when the value is not found.

5. Cypress.io

Cypress is a lightning-fast end-to-end test automation framework that makes writing tests easier without the need for any extra testing library or tool. It enables testing in realistic environments, such as actual browsers and the command line.

The code may be tested in the actual browser, and browser development tools can be used in tandem with it. The test results for everything may be monitored and managed from the centralized dashboard.

Cypress.io

Github Stars – 45K
Github Forks- 3K

5.1 Features of Cypress

  • Cypress records snapshots over the course of your test runs.
  • Rapid debugging is possible because of the clearly shown errors and stack traces.
  • Cypress automatically waits for commands and assertions before moving on.
  • Testing is quick, consistent, and dependable, without flakiness.

5.2 How to Install Cypress?

On the command line, type the following to initiate a React project:

    npm create vite@latest my-awesome-app -- --template react

Open the folder and type npm install:

    cd my-awesome-app
    npm install

Adding Cypress to the program is the next step.

    npm install cypress -D

Open Cypress:

    npx cypress open

Use Cypress’s Launchpad to get help setting up your work.

5.3 First Test Using Cypress:

Returning to the Cypress testing app’s “Create your first spec” page, select “Create from component” to get started.

A prompt will appear with a list of all the component files in the app; Cypress will filter out *.config.{js,ts} and *.{cy,spec}.{js,ts,jsx,tsx} from this list. Find the Stepper component by expanding the row for Stepper.jsx.

src/components/Stepper.cy.jsx is where the following spec file is written:

    // src/components/Stepper.cy.jsx
    import React from 'react'
    import Stepper from './Stepper'

    describe('<Stepper />', () => {
      it('renders', () => {
        // Check : https://on.cypress.io/mounting-react
        cy.mount(<Stepper />)
      })
    })

The Stepper module is imported first. Next, we use describe and it to create sections for our tests within method blocks; this allows us to manage the test suite more accurately. Cypress provides these functions globally, so you won't need to import anything manually to use them.

The top-level describe block will hold all of the tests in a single file, and each it block will be a separate test. The first parameter to the describe function is the name of the test suite. The second parameter is a function that will run the tests.
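
Going a step further, a hedged follow-up spec could interact with the mounted component; the [data-cy=increment] and [data-cy=counter] selectors below are assumptions and should be adjusted to whatever your Stepper actually renders:

    // A hypothetical follow-up spec that interacts with the mounted component.
    import React from 'react'
    import Stepper from './Stepper'

    describe('<Stepper /> interactions', () => {
      it('increments the counter when the button is clicked', () => {
        cy.mount(<Stepper />)
        cy.get('[data-cy=increment]').click()
        cy.get('[data-cy=counter]').should('have.text', '1')
      })
    })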

Here’s the demo video from Cypress – Test Replay Product Demo

6. Jasmine

Jasmine is an excellent open-source BDD framework and test runner for evaluating a wide variety of JavaScript software. The user interface is put through its paces across a variety of devices and screen sizes, from smartphones to TVs. To ensure their code is bug-free, many Angular CLI programmers also consider Jasmine to be an indispensable tool. Developers typically employ it alongside Babel and Enzyme when testing React applications, and utility packages such as jasmine-enzyme make it easier to write assertions against React components.

Jasmine

Github Stars – 568
Github Forks- 429

6.1 Features of Jasmine

  • It’s quick, has little overhead, and doesn’t rely on anything except itself.
  • It provides almost all the functionality and features one requires to test the code. 
  • You may use it with the browser or Node.
  • It’s compatible with Python and Ruby also.
  • In other words, the DOM is not necessary.
  • It has a simple syntax and an extensive API that is easy to use.
  • The tests and their outcomes can be described in ordinary terms.

6.2 How to Install Jasmine?

Installing Jasmine as part of the setup is recommended because developers frequently use it in conjunction with Enzyme.

    npm install --save-dev @babel/core \
                           @babel/register \
                           babel-preset-react-app \
                           cross-env \
                           jsdom \
                           jasmine

OR 

yarn add --dev babel-cli \
            @babel/register \
            babel-preset-react-app \
            cross-env \
            enzyme \
            enzyme-adapter-react-16 \
            jasmine-enzyme \
            jsdom \
            jasmine

Follow this command to start Jasmine:

    // For NPM 
    npx jasmine init 

    // For Yarn 
    yarn run jasmine init

6.3 First Test Using Jasmine

Jasmine expects all configuration files, such as those for Babel, Enzyme, and JSDOM, to be placed in a spec folder.

    // babel.js
    require('@babel/register');

    // for typescript
    require('@babel/register')({
        "extensions": [".ts", ".tsx", ".js", ".jsx"]
    });


    describe("A suite is a function", function() {
    // For grouping related specs, there is a describe function. 
    // Typically each test file has one describe function at the top level.   

    let x;

        it("and so is a spec", function() { 
    // Specs are defined by calling the global Jasmine function it.
            x = true;
            expect(x).toBe(true);
        });
    });

    describe("The 'toBe' compares with ===", function() {

    it("and has a negative case", function() {
            expect(false).not.toBe(true);
        });

    it("and has a positive case", function() {
            expect(true).toBe(true);
        });

    });

7. Enzyme

When it comes to testing React components, developers may rely on Enzyme, a testing utility created to make the process easier. Developed at Airbnb, Enzyme is one of the most popular React testing libraries. To fully test a React application, developers often pair it with another framework like Jest, Chai, or Mocha. 

Enzyme

Github Stars – 20K
Github Forks- 2.1K

The sole purpose of the enzyme is to render components, retrieve resources, identify elements, link components, and simulate events. It may make use of assertions written in either Chai or Jest. Testing is simplified by abstracting the rendering of components in React so that you may test their results.

7.1 Features of Enzyme

  • Enzyme’s API aims to be user-friendly and adaptable by modeling itself after jQuery’s API for DOM manipulation and traversal.
  • The Enzyme Application Programming Interface allows for the Inspection of React Elements.
  • It provides shallow rendering.
  • Provides access to enterprise implementations of your component.
  • Provides execution of a complete DOM rendering.
  • There are cases where using react-hooks in shallow rendering is appropriate.

7.2 How to Install Enzyme?

For installation using npm:

    npm install --save-dev enzyme enzyme-adapter-react-16

7.3 First Test Using Enzyme

To install Enzyme, run the following command: 

    npm install --save-dev enzyme

Now, create an app.tsx file in the src folder with the following code.

    import React, { Component } from 'react';
    import './App.scss';

    class App extends React.Component {
      constructor(props: any) {
        super(props);
        this.state = {};
      }
      render() {
        return (
          <div className="App">
            <button id="ClickHere">Click Here</button>
          </div>
        );
      }
    }

    export default App;

Now, create an app.test.tsx file in the same folder and write the following code.

    import React from 'react'
    import Enzyme, { shallow } from 'enzyme'
    import Adapter from 'enzyme-adapter-react-16'
    import App from './App'

    Enzyme.configure({ adapter: new Adapter() })

    describe('First Test Case', () => {
    it('it should render button', () => {
        const wrapper = shallow(<App />)
        const buttonElement  = wrapper.find('#ClickHere');
        expect(buttonElement).toHaveLength(1);
        expect(buttonElement.text()).toEqual('Click Here');
    })
    })

Then use the “npm test” command to test the code. 

8. Conclusion

Because of React’s modular design, TDD (Test Driven Development) is improved. Finding the right technology may facilitate the implementation of this idea and help you to harvest its benefits, from testing specific parts to testing the complete system.

Combining the proper testing framework (such as Jest, Chai, or Enzyme) with the necessary assertion and manipulation libraries is the secret to establishing a flexible approach. Using component isolation tools (like Bit), you can take scalability and TDD to an entirely higher level by separating components from their surroundings.

The post React Testing Libraries & How to Use Them appeared first on TatvaSoft Blog.

Introduction to .NET MAUI https://www.tatvasoft.com/blog/net-maui/ https://www.tatvasoft.com/blog/net-maui/#respond Fri, 01 Sep 2023 05:23:49 +0000 https://www.tatvasoft.com/blog/?p=11915 .NET MAUI (Multi-platform App UI) is an open source and cross platform framework to create native mobile and desktop apps using C# and XAML. Using .NET MAUI, one can develop apps that run on
Android 
iOS
MacOS
Windows

The post Introduction to .NET MAUI appeared first on TatvaSoft Blog.


Key Takeaways

  1. .NET Multi-platform App UI (MAUI) is an open source, cross platform framework to develop native applications for Windows, iOS, macOS, and Android platforms using C# and XAML.
  2. .NET MAUI is an evolution of Xamarin.Forms with UI control redesigned for extensibility and better performance, becoming a new flagship.
  3. It also supports .NET hot reload by which you can update and modify the source code while the application is running.
  4. .NET MAUI project uses a single codebase and provides consistent and simplified platform specific development experience for the users.
  5. One can also develop apps in modern patterns like MVU, MVVM, RxUI, etc. using .NET MAUI.

1. What is .NET MAUI?

.NET MAUI (Multi-platform App UI) is an open source and cross platform framework to create native mobile and desktop apps using C# and XAML. Using this multi-platform UI, one can develop apps that run on

  1. Android 
  2. iOS
  3. MacOS
  4. Windows
What is .NET MAUI

.NET MAUI is an evolution of Xamarin.Forms, extended from mobile to desktop scenarios, with UI controls rebuilt from the ground up to improve performance. It is quite similar to Xamarin.Forms, another framework for creating cross-platform apps using a single codebase. 

The primary goal of .NET MAUI is to help you to develop as much of your app’s functionality and UI layout as possible in a single project.

The .NET MAUI is suitable for the developers who want to

  • Use a single codebase to develop apps for Android, iOS, and desktop with C# and XAML
  • Share Code, tests, and logic across all platforms
  • Share common UI layout and design across all platforms

Now, let’s look at how .NET MAUI works. 

2. How does .NET MAUI Work?

.NET MAUI is a unified solution for developing mobile and desktop app user interfaces. With it, developers can deploy an app to all supported platforms using a single code base, while still getting access to every aspect of each platform.

The Windows UI 3 (WinUI 3) library, along with its counterparts for Android, iOS, and macOS, is part of the .NET 6 and later family of .NET frameworks for app development. The .NET Base Class Library (BCL) is shared by all of these frameworks. This library hides platform specifics from your program code. The .NET runtime is essential to the BCL because it provides the environment in which your code is executed. Mono, an implementation of the .NET runtime, provides that environment for Android, iOS, and macOS. On Windows, the .NET CoreCLR serves as the execution runtime.

Each platform has its own way of building an app's visual interface and its own model for defining how the elements of an app's user interface interact with one another; nevertheless, the Base Class Library enables apps on all platforms to share their business logic. The user interface may be designed independently for each platform using a suitable framework, but that approach requires a distinct code base for each group of devices.

How does .NET MAUI work

Native app packages may be compiled from .NET MAUI code written on either a PC or a Mac:

  • When an Android app is developed with .NET MAUI, C# is compiled into intermediate language (IL), and at runtime the IL is JIT-compiled into native assembly.
  • Apps for iOS developed with .NET MAUI are converted from C# into native ARM assembly code.
  • For macOS, .NET MAUI apps employ Mac Catalyst, an Apple technology that ports your UIKit-based iOS app to the desktop and enhances it with extra AppKit and platform APIs.
  • Native Windows desktop programs developed with .NET MAUI are created with the help of the Windows User Interface 3 (WinUI 3) library.

3. What’s Similar between .NET MAUI & Xamarin Forms?

The community is still developing apps with XAML and C#. To separate our logic from the view specification, we can use Model-View-ViewModel (MVVM), ReactiveUI (RxUI), or Model-View-Update (MVU).

We can create apps for:

  • Windows Desktop
  • iOS & macOS
  • Android

It is easy to relate with .NET MAUI if you have prior experience with Xamarin. While the project configuration may shift, the code you write daily should seem like an old hat.

4. What is Unique about .NET MAUI?

If Xamarin is already available, then what makes .NET MAUI so different? To improve on Xamarin.Forms, Microsoft revamped its foundation, which boosted speed, unified the architectural systems, and brought us beyond mobile to the desktop.

Major Advances in MAUI:

4.1 Single Project Experience

You can build apps for Android, iOS, macOS, and Windows all from a single  .NET MAUI project, which abstracts away the platform-specific development experiences you’d normally face.

When developing for several platforms, using a .NET MAUI single project simplifies and standardizes the process. The following benefits are there for using  .NET MAUI single project:

  • A unified project that can develop for iOS, macOS, Android, and Windows.
  • Your .NET MAUI applications can run with a streamlined debug target option.
  • Within a single project, shared resource files can be used.
  • A single manifest file that describes the name, identifier, and release number of an app.
  • When necessary, you can use the platform’s native APIs and toolkit.
  • Simply one code-base for all platforms.

4.2 .NET Hot Reload

The ability to instantly update running apps with fresh code changes is a huge time saver for .NET developers thanks to a feature called “hot reload.” 

It helps to save time and keeps the development flow going by doing away with the need to pause for builds and deployments. Hot Reload is being improved in .NET, with full support coming to .NET MAUI and other workloads.

4.3 Cross-Platform APIs for Device Features

APIs for native device features can be accessed across platforms thanks to .NET MAUI. The .NET Multi-platform App UI (MAUI) provides access to functionality such as the following (a short C# sketch follows the list):

  • Get information about the device on which your app is installed.
  • Control of device’s sensors, including the accelerometer, compass, and gyroscope.
  • Select a single file or a batch from the storage device.
  • The capacity to monitor and identify changes in the device’s network connectivity status.
  • Read text using the device’s in-built text-to-speech engines.
  • Transfer text between applications by copying it to the system clipboard.
  • Safely store information using key-value pairs.
  • Start an authentication process in the browser that awaits a response from an app’s registered URL.
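
As a brief, hedged C# sketch of a few of these capabilities (the class, method, and key names are illustrative), the cross-platform APIs live in the Microsoft.Maui.Devices, Networking, and Storage namespaces:

using System.Threading.Tasks;
using Microsoft.Maui.Devices;
using Microsoft.Maui.Networking;
using Microsoft.Maui.Storage;

public static class DeviceFeaturesSample
{
    public static async Task ShowDeviceDetailsAsync()
    {
        // Information about the device the app is installed on.
        string summary = $"{DeviceInfo.Current.Manufacturer} {DeviceInfo.Current.Model} " +
                         $"on {DeviceInfo.Current.Platform} {DeviceInfo.Current.VersionString}";

        // Current network connectivity status.
        bool online = Connectivity.Current.NetworkAccess == NetworkAccess.Internet;

        // Safely store information using key-value pairs.
        await SecureStorage.Default.SetAsync("last_device_summary", summary + (online ? " (online)" : " (offline)"));
    }
}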

5. How to Build Your First App Using .NET MAUI? 

1. Prerequisites

Installation of the .NET Multi-platform App UI workload in Visual Studio 2022 version 17.3 or later is required.

2. Build an Application

1. Start up Visual Studio 2022. To initiate a fresh project, select the “Create a new project” option.

Start up Visual Studio 2022

2. Select MAUI from the “All project types” – menu, then choose the “.NET MAUI App” template and press the “Next” icon in the “Create a new project” window.

Create .NET MAUI App

3. Give your project a name, select a location, and then press the “Next” button in the window labeled “Configure your new project”

Configure your new project

4. Press the “Create” button after selecting the desired version of .NET in the “Additional information” window.

Additional information

5. Hold off until the project is built and its dependencies are restored.

Dependencies are restored

6. Choose Framework, and then the “.net 7.0-windows” option, from the “Debug” menu in Visual Studio’s toolbar:

Choose Framework

7. To compile and launch the application, click the “Windows Machine” icon in Visual Studio’s toolbar.

Compile and launch the application

Visual Studio will ask you to switch on Developer Mode if you haven't already. This can be done via your device's Settings: open "Settings for developers", turn on "Developer Mode", and accept the disclaimer. 

Developer Mode

8. To test this, open the app and hit the “Click me” icon many times to see the click counter rise:

Test App

6. Why .NET MAUI?

6.1 Accessibility 

.NET MAUI supports multiple approaches for accessibility experience. 

  1. Semantic Properties

Semantic properties are the primary and recommended approach for providing accessibility values in apps. 

  2. Automation Properties

Automation properties are the Xamarin.Forms approach to providing accessibility values in apps. 

One can also follow the recommended accessibility checklist from the official page for more details. 

6.2 APIs to Access Services

Since .NET MAUI was built with expansion in mind, you may keep adding features as needed. Consider the Entry control, a classic illustration of a control that displays uniquely on one platform but not another. Developers frequently wish to get rid of the underlining that Android draws underneath the text field. Using .NET MAUI, you can easily modify each and every Entry throughout your whole project with minimal additional code.  
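
As a hedged sketch of that idea (the "NoUnderline" mapping key is arbitrary, and the snippet is assumed to run in startup code such as MauiProgram.cs), removing Android's underline from every Entry can look roughly like this:

Microsoft.Maui.Handlers.EntryHandler.Mapper.AppendToMapping("NoUnderline", (handler, view) =>
{
#if ANDROID
    // On Android the platform view is an EditText; clearing its background tint hides the underline.
    handler.PlatformView.BackgroundTintList =
        Android.Content.Res.ColorStateList.ValueOf(Android.Graphics.Color.Transparent);
#endif
});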

6.3 Global Using Statements and File-Scoped Namespaces

.NET MAUI uses the new C# 10 features introduced in .NET 6, comprising global using statements and file-scoped namespaces. This is great for reducing clutter in your files. For example: 

Statements and Namespace
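
As a small hand-written illustration of these two features (the MyMauiApp.Helpers namespace and GreetingBuilder type are invented):

// GlobalUsings.cs - a global using is visible to every file in the project.
global using System.Text;

// GreetingBuilder.cs - a file-scoped namespace removes one level of indentation.
namespace MyMauiApp.Helpers;

public static class GreetingBuilder
{
    // StringBuilder resolves through the global using above; no per-file using is needed.
    public static string Build(string name) =>
        new StringBuilder("Hello, ").Append(name).Append('!').ToString();
}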

6.4 Use Blazor for Desktop and Mobile

Web developers who want to create native client apps will find .NET MAUI to be an excellent choice. You may utilize your current Blazor web UI components in your native mobile and desktop apps thanks to the integration between .NET MAUI and Blazor. .NET MAUI and Blazor allow you to create a unified user interface (UI) for mobile, desktop, and web apps.

Without the requirement for WebAssembly, .NET MAUI will run your Blazor components natively on the device and render them to an in-app web view. Since Blazor components are compiled and executed in the .NET process, they are not restricted to the web platform and may instead make use of features specific to the target platform, such as the platform's filesystem, sensors, and location services. You can even add native UI controls alongside your Blazor web UI. The result is known as a Blazor Hybrid app.

Using the provided .NET MAUI Blazor App project template, you can quickly begin working with Blazor and .NET MAUI.

.NET MAUI Blazor App project template

With this starting point, you can quickly begin developing an HTML5, CSS3, and C#-based .NET MAUI Blazor app. The .NET MAUI Blazor Hybrid guide will show you how to create and deploy your very own Blazor app.

If you already have a .NET MAUI project and wish to start using Blazor components, you may do so by adding a BlazorWebView control to it (the markup below is the commonly documented form; adjust HostPage and the root component to your project):

<BlazorWebView HostPage="wwwroot/index.html">
    <BlazorWebView.RootComponents>
        <RootComponent Selector="#app" ComponentType="{x:Type local:Main}" />
    </BlazorWebView.RootComponents>
</BlazorWebView>

Existing desktop programs can now be updated to run on the web or cross-platform with .NET MAUI, thanks to Blazor Hybrid support for WPF and Windows Forms. BlazorWebView controls for Windows Presentation Foundation and Windows Forms can be downloaded via NuGet. 

6.5 Optimized for Speed

.NET MAUI is developed for performance. .NET MAUI’s user interface controls are built on top of the native platform controls with a thin, decoupled handler-mapper design. This streamlines the display of user interfaces and makes it easier to modify controls.

In order to speed up the rendering and updating of your user interface, .NET MAUI’s layouts have been designed to follow a uniform management approach that improves the measure and setup loops. In the context of StackLayout, layouts are exposed that have already been optimized for certain use cases, such as HorizontalStackLayout and VerticalStackLayout.
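
As a small, hedged C# sketch of these optimized layouts (the labels and buttons are placeholders, and the snippet is assumed to run inside a page's constructor):

// VerticalStackLayout and HorizontalStackLayout are the optimized stack layouts.
var layout = new VerticalStackLayout
{
    Spacing = 8,
    Children =
    {
        new Label { Text = "Optimized layouts" },
        new HorizontalStackLayout
        {
            Spacing = 8,
            Children = { new Button { Text = "OK" }, new Button { Text = "Cancel" } }
        }
    }
};
Content = layout;   // assumes this runs inside a ContentPage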

The move to .NET 6 set out to reduce app size and speed up startup time. The .NET Podcast sample application, which used to take 1299 milliseconds to start up, now takes just 814.2 milliseconds, a 37.3% improvement.

To make these improvements available in a release build, these options are enabled by default.

Optimized for Speed

Quicker startup for your Android apps is possible using ahead-of-time (AOT) compilation. However, if you're trying to keep your application's size within the wifi installation threshold, full AOT can make your outputs too big. Startup Tracing is the solution to this problem. As the name suggests, it achieves an acceptable balance between performance and size by performing partial AOT only on the portions of your program that run at startup.

Benchmark numbers from Pixel 5 device tests on GitHub:

Metric                                 Android App          .NET MAUI App
JIT startup time (s)                   00:00.4387           00:01.4205
AOT startup time (vs. JIT)             00:00.3317 (76%)     00:00.7285 (51%)
Profiled AOT startup time (vs. JIT)    00:00.3093 (71%)     00:00.7098 (50%)
JIT .apk size (B)                      9,155,954            17,435,225
AOT .apk size (vs. JIT)                12,755,672 (139%)    44,751,651 (257%)
Profiled AOT .apk size (vs. JIT)       9,777,880 (107%)     23,210,787 (133%)

6.6 Native UI

With .NET MAUI, you can create uniform brand experiences across many platforms (Android, iOS, macOS, and Windows) while also making use of each system's unique design for the best possible app experience. Each system works and appears as intended right out of the box, without the need for any further widgets or ad hoc styling. For instance, on Windows .NET MAUI builds on WinUI 3, the latest native UI layer included with the Windows App SDK.

With .NET MAUI native UI, you can:

  • Create your apps using a library of more than 40 controls, layouts, and pages using C# and XAML. 
  • Built upon the solid foundation of Xamarin’s mobile controls, it extends them to include things like navigation bars, multiple windows, improved animation, and enhanced support for gradients, shadows, and other visual effects.

7. Conclusion

Microsoft's newest addition to the .NET family is the .NET Multi-platform App UI, which was created to develop apps in C#, .NET, and XAML for Windows, Android, iOS, and macOS. Also, instead of creating numerous versions of your project for different devices, you can now create a single version and distribute it across all of them. 

We hope this article helped you gain a basic introduction to .NET MAUI. There are many wonderful improvements in MAUI that are expected in the future, but we will need to remain patient a bit longer for a release candidate edition of MAUI to include them all. So, stay tuned.

The post Introduction to .NET MAUI appeared first on TatvaSoft Blog.

Java Best Practices for Developers https://www.tatvasoft.com/blog/java-best-practices/ https://www.tatvasoft.com/blog/java-best-practices/#comments Tue, 22 Aug 2023 05:37:00 +0000 https://www.tatvasoft.com/blog/?p=11749 For any developer, coding is a key task and making mistakes in it is quite possible. Sometimes, the compiler will catch the developer’s mistake and will give a warning but if it is unable to catch it, running the program efficiently will be difficult.

The post Java Best Practices for Developers appeared first on TatvaSoft Blog.


Key Takeaways

  1. Following a proper set of Java Best Practices enables the entire team to manage the Java project efficiently.
  2. Creating proper project & source files structures, using proper naming conventions, avoiding unnecessary objects and hardcoding, commenting the code in a correct way helps in maintaining the Project Code effectively.
  3. By using appropriate inbuilt methods and functionality, one can easily improve the performance of Java applications.
  4. For exception handling, developers must utilise the catch and finally block as and when required.
  5. By writing meaningful logs, developers can quickly identify and solve the errors.

For any developer, coding is a key task and making mistakes in it is quite possible. Sometimes, the compiler will catch the developer’s mistake and will give a warning but if it is unable to catch it, running the program efficiently will be difficult. And because of this, for any Java app development company, it is essential to make its team follow some of the best practices while developing any Java project. In this blog, we will go through various different types of Java best practices that will enable Java developers to create the application in a standardised manner.

1. Java Clean Coding Best Practices

Here we will have a look at the best practices of Java clean coding –

1.1 Create a Proper Project Structure

Creating the proper project structure is the first step to follow for Java clean coding best practices. Here, the developers need to divide and separate the entire code into related groups and files which enables them to identify the file objectives and avoid rewriting the same code or functions multiple times. 

A good example of the Java Project structure is as follow: 

  • Source
    • Main
      • Java
      • Resource
    • Test
      • Java 
      • Resource

Now, let’s understand each directory in the structure:

Directory Purpose
Main
  • Main source files of the project are stored in the Java folder.
  • Resource folder holds all the necessary resources.
Test
  • Test source files are stored in the Java folder.
  • Test resources files are present in the Resource folder.

Source files refer to files like the Controller, Service, Model, Entity, DTO, and Repository files, while test source files refer to the test case files written to test the code.

1.2 Use Proper Naming Conventions

Naming conventions cover how interfaces, classes, constants, variables, and methods are named. The conventions set at this stage must be followed by all the developers in your team. Some of the best practices the entire team can follow are:

  • Meaningful distinctions: This means that the names given to the variables or other identifiers must be unique and they should have a specific meaning to it. For instance, giving names like i, j, k or p, q, r isn’t meaningful. 
  • Self-explanatory: The naming convention must be such that the name of any variable reveals its intention so that it becomes easy for the entire Java development team to understand it. For instance, the name must be like “dayToExpire” instead of “dte”. This means that the name must be self-explanatory and must not require any comment to describe itself.
  • Pronounceable: The names given by the developers must be pronounceable naturally just like any other language. For instance, we can keep “generationTimestamp” instead of “genStamp”.

Besides this, there are some other general rules that are required when it comes to naming conventions and they are –

  • Methods of the Java code should have names that are starting with lowercase and are verbs. For instance, execute, stop, start, etc. 
  • Names of class and interface are nouns which means that they must start with an uppercase letter. For instance, Car, Student, Painter, etc.
  • Constant names must be in uppercase only. For instance, MIN_WIDTH, MAX_SIZE, etc.
  • Underscore must be used when the numeric value is lengthy in Java code. For instance, the new way to write lengthy numbers is int num = 58_356_823; instead of int num = 58356823;.
  • In addition to this, the use of camelCase notation is also done in Java programming naming conventions. For instance, runAnalysis, StudentManager, and more.

1.3 Avoid Creating Unnecessary Objects

Another best practice for Java clean coding is to avoid creating unnecessary objects. Object creation is one of the most memory-consuming operations in Java, so developers should only create the objects that are actually required. 

You can often avoid creating unnecessary objects by using static factory methods in preference to constructors on immutable classes. 

For example, the static factory method Boolean.valueOf(String) is always preferable to the constructor Boolean(String). 

The constructor creates a new object each time whenever it’s called, while the static factory method is not required to do so.
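
A quick illustration of the difference:

// Preferred: Boolean.valueOf reuses the cached Boolean.TRUE / Boolean.FALSE instances.
Boolean first = Boolean.valueOf("true");
Boolean second = Boolean.valueOf("true");   // no new object is created here

// Avoid: the Boolean(String) constructor allocates a new object on every call
// (it has been deprecated since Java 9 for exactly this reason).
Boolean wasteful = new Boolean("true");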

1.4 Create Proper Source File Structure

A source file holds a variety of elements. While the Java compiler enforces only a small part of the structure, a large part of it is fluid. Implementing a specific order in a source file can help improve code readability, and there are several style guides available for inspiration. Here is an element ordering that can be used in a source file: 

  1. Package Statement
  2. Import Statements
    • Static and non-static imports
  3. One top-level Class
    • Constructors
    • Class variables
    • Instance variables
    • Methods

Besides this, the developers can also group the methods as per the scope and functionalities of the application that needs to be developed. Here is a practical example of it – 

# /src/main/java/com/baeldung/application/entity/Patient.java
package com.baeldung.application.entity;

import java.util.Date;

public class Patient {
    private String patientName;
    private Date admissionDate;
    public Patient(String patientName) {
        this.patientName = patientName;
        this.admissionDate = new Date();
    }

    public String getPatientName() { 
        return this.patientName; 
    }

    public Date getAdmissionDate() {
        return this.admissionDate;
    }
}

1.5 Comment on the Code Properly

Commenting on the written code is very beneficial when other team members go through it, as it enables them to understand the non-trivial aspects. Proper care must be taken to keep comments specific and to the point; otherwise, comments can confuse developers rather than help them. 

Besides this, when it comes to commenting on the Java code, there are two types of comments that can be used.

Comment Type Description
Documentation/JavaDoc Comments
  • Documentation comments are useful as they are independent of the codebase, and their key focus is on the specification. Besides, the audience of this type of comment is codebase users.
Implementation/Block Comments
  • Implementation comments are for the developers that are working on the codebase and the comments stated here are code implementation-specific.
  • This type of comments can be in a single line as well as in multiple lines depending upon code and steps.

Here, we will have a look at the code that specifies the usage of the meaningful documentation comment:

/**
* This method is intended to add a new address for the employee.
* However do note that it only allows a single address per zip
* code. Hence, this will override any previous address with the
* same postal code.
*
* @param address an address to be added for an existing employee
*/
/*
* This method makes use of the custom implementation of equals 
* method to avoid duplication of an address with the same zip code.
*/
public void addEmployeeAddress(Address address) {
}

Implementation Comments:

class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");  // This will print "Hello, World!"
    }
}

1.6 Avoid Too Many Parameters in Any Method

When it comes to coding in Java, one of the best practices is to keep the number of parameters in a method small. Too many parameters make the method hard to read at call sites and easy to misuse. 

Let’s have a look at the example of this scenario. Here is the code where there are too many parameters –

private void employeeInformation(String empName, String designation, String departmentName, double salary, Long empId)

Here is the code with an optimized number of parameters –

private void employeeInformation(String empName, Info employeeInfo)
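
The Info class above is not defined in this article, so here is a hypothetical sketch of what such a parameter object could look like and how it shortens the method signature.

// Hypothetical parameter object grouping the employee details
public class Info {
    private final String designation;
    private final String departmentName;
    private final double salary;
    private final Long empId;

    public Info(String designation, String departmentName, double salary, Long empId) {
        this.designation = designation;
        this.departmentName = departmentName;
        this.salary = salary;
        this.empId = empId;
    }
    // getters omitted for brevity
}

// The method now takes two parameters instead of five
private void employeeInformation(String empName, Info employeeInfo) {
    // ...
}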

1.7 Use Single Quotes and Double Quotes Properly

In Java programming, single quotes define char literals, while double quotes define String literals. 

Let’s understand this with an example: 

When double quotes are used, the values are treated as strings and the + operator concatenates them as text. When single quotes are used, the characters are chars, so the + operator adds their integer (Unicode) values instead. Have a look at the code below – 

public class DemoExample {
    public static void main(String[] args) {
        System.out.println("A" + "B");
        System.out.println('C' + 'D');
    }
}

Output:-
AB
135

1.8 Write Code Properly

Code written for any Java application must be easy to read and understand. Since Java has no single mandatory formatting convention, it is necessary to define a private convention or adopt a popular one. In particular, a team should agree on indentation criteria – 

  • Use four spaces for one unit of indentation. 
  • Cap the line length; 80 characters is the traditional limit, though many teams allow more on modern displays.
  • Break long expressions after commas so that each continuation line is easy to scan. 

Here is an example – 

List<Long> employeeIds = employees.stream()
  .map(employee -> employee.getEmployeeId())
  .collect(Collectors.toCollection(ArrayList::new));

1.9 Avoid Hardcoding

Avoiding hardcoded values is another best practice that developers should follow. Hardcoding leads to duplication and makes the code difficult to change when requirements change. 

It can also cause undesirable behaviour when the values in the code are supposed to be dynamic. Hardcoded values can be refactored in the following ways – 

  • Replace the hardcoded value with a constant or enum defined in Java.
  • Replace it with a class-level constant or with a value read from configuration. 

For Example:

private int storeClosureDay = 7;
// This can be refactored to use a constant from java.time.DayOfWeek
private int storeClosureDay = DayOfWeek.SUNDAY.getValue();

1.10 Review and Remove Duplicate Code

Reviewing the code for duplication is another important practice. Sometimes two or more methods (or classes) end up with the same intent and functionality in a Java project. In such cases, remove the duplicates and reuse a single method or class wherever it is required, as sketched below.
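
As a minimal, hypothetical sketch of this kind of clean-up, two methods that differ only in a discount rate can be collapsed into a single shared method.

// Before: two near-identical methods
double priceForStudent(double amount) { return amount - (amount * 0.10); }
double priceForSenior(double amount)  { return amount - (amount * 0.15); }

// After: one method reused wherever required
double discountedPrice(double amount, double discountRate) {
    return amount - (amount * discountRate);
}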

2. Java Programming Best Practices

Here are some of the best practices of Java coding that developers can take into consideration – 

2.1 Keep Class Members as Private

Class members should be private wherever possible: the less accessible a member variable is, the easier the class is to reason about and maintain. This is why Java developers should prefer the private access modifier. Here is an example that shows what happens when the fields of a class are made public –

public class Student {
  public String name;
  public String course;
} 

Anyone with access to a Student instance can then change its fields directly, as shown in the code below –

Student student = new Student();
student.name = "George";
student.course = "Maths";

This is why private access modifiers should be used when defining class members. Private members keep the fields hidden, which prevents users of the code from changing the data except through setter methods. For example: 

public class Student {
  private String name;
  private String course;
  
  public void setName(String name) {
    this.name = name;
  }
  public void setCourse(String course) {
    this.course = course;
  }
}

Besides this, setter methods are a natural place for validation or other housekeeping tasks, as in the sketch below. 
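
A short sketch of such a validating setter for the Student class above (the non-empty rule is only an assumption for illustration):

public void setName(String name) {
    // Reject invalid input before the field is changed
    if (name == null || name.trim().isEmpty()) {
        throw new IllegalArgumentException("Student name must not be empty");
    }
    this.name = name;
}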

2.2 For String Concatenation, Use StringBuilder or StringBuffer

Another Java best practice is to use StringBuffer or StringBuilder for String concatenation. Since String objects are immutable in Java, every String manipulation such as concatenation or substring creates a new string and leaves the old one for garbage collection. These are heavy operations that generate a lot of garbage in the heap. Java therefore provides the StringBuffer and StringBuilder classes for String manipulation; they are mutable and offer append(), insert(), delete(), and substring() methods.

Here is an example of the code where the “+” operator is used –

String sql = "Insert Into Person (name, age)";
sql += " values ('" + person.getName();
sql += "', '" + person.getage();
sql += "')";

// "+" operator is inefficient as JAVA compiler creates multiple intermediate String objects before creating the final required string.

Now, let’s have a look at the example where the Java developer can use StringBuilder and make the code more efficient without creating intermediate String objects which can eventually help in saving processing time – 

StringBuilder sqlSb = new StringBuilder("Insert Into Person (name, age)");
sqlSb.append(" values ('").append(person.getName());
sqlSb.append("', '").append(person.getage());
sqlSb.append("')");
String sqlSb = sqlSb.toString();

2.3 Use Enums Instead of Interface

Using enums is a better practice than creating an interface that exists solely to declare constants without any methods. Interfaces are meant to define common behaviour, while enums define a fixed set of common values, so enums are the right tool for declaring such values. 

In the below code, you will see what creating an interface looks like –

public interface Colour {
    public static final int RED = 0xff0000;
    public static final int WHITE = 0xffffff;
    public static final int BLACK = 0x000000;
}

The main purpose of an interface is to define behaviour for polymorphism and inheritance, not to act as a holder for static constants. Therefore, the better practice is to use an enum instead. Here is the same example written as an enum –

public enum Colour {
    RED, WHITE, BLACK
}

If the colour code does matter, the enum can carry it as a field:

public enum Colour {
 
    RED(0xff0000),
    BLACK(0x000000),
    WHITE(0xffffff);
   
    private final int code;
 
    Colour(int code) {
        this.code = code;
    }
 
    public int getCode() {
        return this.code;
    }
}

If the enum grows too complex for the project, or the constants do not form a natural fixed set, a dedicated class for defining constants can be used instead. An example of this is given below – 

public class AppConstants {
    public static final String TITLE = "Application Name";
 
    public static final int THREAD_POOL_SIZE = 10;
    
    public static final int VERSION_MAJOR = 8;
    public static final int VERSION_MINOR = 2;

    public static final int MAX_DB_CONNECTIONS = 400;
 
    public static final String INFO_DIALOG_TITLE = "Information";
    public static final String ERROR_DIALOG_TITLE = "Error";
    public static final String WARNING_DIALOG_TITLE = "Warning";    
}

Looking at the code above, the unwritten rule is that enums or dedicated constant classes are a better choice than constant-only interfaces. 

2.4 Avoid Using Loops with Indexes

Developers should avoid using a loop with an index variable wherever possible. Instead, they can replace it with forEach or the enhanced for loop. 

The main reason is that the index variable is error-prone: the loop body may accidentally modify it, or the loop may start from 1 instead of 0 and silently skip an element. Here is an example that iterates over an array of Strings:

String[] fruits = {"Apple", "Banana", "Orange", "Strawberry", "Papaya", "Mango"};
 
for (int i = 0; i < fruits.length; i++) {
    doSomething(fruits[i]);
}

In the code above, the index variable i can easily be misused and cause unexpected results. To prevent this, developers should use an enhanced for loop instead:

for (String fruit : fruits) {
    doSomething(fruit);
}
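
Since Java 8, the same iteration can also be written with forEach and a lambda, which removes the index entirely; a small sketch (requires java.util.Arrays):

// Equivalent iteration using forEach and a lambda (Java 8+)
Arrays.asList(fruits).forEach(fruit -> doSomething(fruit));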

2.5 Use Array Instead of Vector.elementAt()

Vector is a legacy collection class that has been bundled with Java since its early versions. It is similar to ArrayList, but unlike ArrayList, Vector is synchronised. That means no additional synchronisation is needed when multiple threads access it, but the synchronisation overhead degrades the performance of the application. Since performance is usually the more important factor, an array (or ArrayList) should be used instead of a Vector where synchronisation is not required. 

Let’s take an example where we have used the vector.elementAt() method to access all the elements.

int size = v.size();
for(int i=size; i>0; i--)
{
    String str = v.elementAt(size-i);    
}

As a better practice, we can convert the Vector into an array first and then work with the array. 

int size = v.size();
// assuming v is declared as Vector<String>; toArray(new String[0]) returns a correctly typed array
String[] arr = v.toArray(new String[0]);

2.6 Avoid Memory Leaks

Unlike most other programming languages for software development, when developers are working with Java, they do not need to have much control over memory management. The reason behind it is that Java is a programming language that manages memory automatically. 

In spite of this, there are some Java best practices that experts use to prevent memory leaks, because any memory leak can degrade an application's performance and ultimately affect the business. 

Here are a few more points that help prevent memory leaks in Java:

  • Do not create unnecessary objects.
  • Avoid repeated String concatenation; use StringBuilder or StringBuffer instead.
  • Don't store massive amounts of data in the session, and time out the session when it is no longer used.
  • Do not call System.gc() manually, and avoid holding object references in long-lived static fields.
  • Always close connections, statements and result sets in the finally block (or, more simply, with try-with-resources, as sketched below).
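
As a sketch of the last point: since Java 7, try-with-resources closes such resources automatically, which is usually safer than a hand-written finally block. The connection URL and query below are hypothetical, and the snippet assumes the java.sql imports.

// Connection, Statement and ResultSet all implement AutoCloseable,
// so they are closed automatically when the try block exits.
String url = "jdbc:h2:mem:demo";   // hypothetical connection URL
try (Connection con = DriverManager.getConnection(url);
     Statement stmt = con.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT name FROM student")) {
    while (rs.next()) {
        System.out.println(rs.getString("name"));
    }
} catch (SQLException e) {
    e.printStackTrace();
}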

2.7 Debug Java Programs Properly

Debugging Java programs properly is another practice developers need to follow, and in an IDE such as Eclipse there is not much to set up. Right-click the project in the Package Explorer, select Debug As, and choose the Java application to debug. This creates a Debug Launch Configuration that can be used to start the application under the debugger. 

Besides this, Java developers can nowadays edit and save project code while debugging it, without restarting the entire program, thanks to Hot Code Replacement (HCR). HCR is a debugging technique supported by the Java VM that lets developers experiment with code iteratively, in a trial-and-error fashion.

Debugging allows you to run a program interactively while watching the source code and the variables during execution. A breakpoint in the source code specifies where execution should stop during debugging. Once the program is stopped, you can inspect variables, change their content, and so on. You can also set watchpoints to stop execution whenever a particular field is read or modified.

2.8 Avoid Multiple if-else Statements

Another Java programming best practice is to avoid deeply nested or redundant if-else statements. Overusing such conditions hurts readability and, especially inside looping statements like while and for, can also hurt performance, since the JVM has to evaluate the conditions again and again. 

In many cases the business logic can simply group the conditions and produce a single boolean outcome. Here is an example of what over-nested if-else statements look like and why they should be avoided -

if (condition1) {
    if (condition2) {
        if (condition3 || condition4) {
            // execute ...
        } else {
            // execute ...
        }
    }
}

Note: The nesting above should be avoided; the conditions can be combined instead:

boolean result = (condition1 && condition2) && (condition3 || condition4);

A switch statement can also be used in place of an if-else-if ladder. It executes one branch out of several based on the value of an expression, which makes it easy to dispatch execution to different parts of the code.

// switch statement 
switch(expression)
{
   // case statements
   // values must be of same type of expression
   case value1 :
      // Statements
      break; // break is optional
   
   case value2 :
      // Statements
      break; // break is optional
   
   // We can have any number of case statements
   // below is the default statement, used when none of the cases is true. 
   // No break is needed in the default case.
   default : 
      // Statements
}
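
On Java 14 and later, the more compact switch expression form can also be used; a brief sketch with hypothetical values:

// Switch expression (Java 14+): returns a value directly, no fall-through, no break needed
int dayNumber = 3;
String dayType = switch (dayNumber) {
    case 1, 7 -> "Weekend";
    case 2, 3, 4, 5, 6 -> "Weekday";
    default -> "Invalid day";
};
System.out.println(dayType);   // prints "Weekday"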

2.9 Use Primitive Types Wherever Possible

Java developers should prefer primitive types over their wrapper objects whenever possible. Primitive values live directly on the stack (or inline in their containing object), whereas wrapper objects are allocated on the heap, which is slower to access and adds garbage-collection overhead.

For example: Use int instead of Integer, double instead of Double, boolean instead of Boolean, etc.

Apart from this, developers should not rely on implicit default values; variables should be assigned explicitly when they are created. 

[Image: Java primitive types]
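
A small sketch of why this matters: summing with the boxed Long forces an object to be created on every iteration through autoboxing, while the primitive long does not.

// Slow: each += boxes a new Long object
Long boxedSum = 0L;
for (long i = 0; i < 1_000_000; i++) {
    boxedSum += i;
}

// Fast: stays entirely on primitive long values
long primitiveSum = 0L;
for (long i = 0; i < 1_000_000; i++) {
    primitiveSum += i;
}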

3. Java Exception Handling Best Practices

Here are some of the best practices for Java exception handling -

3.1 Don’t Use an Empty Catch Block

Using empty catch blocks is not the right practice in Java programming: the failure is swallowed silently and the program continues as if nothing had happened, which makes the project code much harder to debug. 

Here is an example showing how to multiply two numbers from command-line arguments - (we have used an empty catch block here)

public class Multiply {
  public static void main(String[] args) {
    int a = 0;
    int b = 0;
    
    try {
      a = Integer.parseInt(args[0]);
      b = Integer.parseInt(args[1]);
    } catch (NumberFormatException ex) {
      // The exception is silently swallowed here
    }
    
    int multiply = a * b;
    
    System.out.println(a + " * " + b + " = " + multiply);
  }
}

Normally, the parseInt() method throws a NumberFormatException when its argument is not a valid number. In the code above, that exception is swallowed by the empty catch block, so when an invalid argument is passed the corresponding variable silently keeps its default value of 0 and the program prints a misleading result.

3.2 Handle Null Pointer Exception Properly

A NullPointerException occurs when a method or field is accessed on a reference that is null. Here is a practical example of this situation -

int noOfStudents = office.listStudents().count;

This line compiles without error, but if office or the value returned by listStudents() is null, a NullPointerException is thrown at runtime. Such exceptions cannot always be avoided entirely, so developers should check for null before using a value, which lets them handle or eliminate the null explicitly. Here is an example -

private int getListOfStudents(File[] files) {
    if (files == null) {
        throw new NullPointerException("File list cannot be null");
    }
    return files.length;   // placeholder for the remaining logic
}
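
The standard library also provides helpers for this kind of check; here is a brief sketch using Objects.requireNonNull and Optional (the method names and return values are placeholders for illustration):

import java.io.File;
import java.util.Objects;
import java.util.Optional;

// Fails fast with a clear message if files is null
private int getStudentFileCount(File[] files) {
    Objects.requireNonNull(files, "File list cannot be null");
    return files.length;
}

// Or express "may be absent" explicitly with Optional
private int countStudentFiles(File[] files) {
    return Optional.ofNullable(files).map(f -> f.length).orElse(0);
}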

3.3 Use Finally Wherever Required

The finally block lets developers place important cleanup code that is guaranteed to run, whether an exception is raised or not. There are three different cases worth looking at, and we will go through each of them.

Case 1: The exception does not arise. The try block runs without throwing anything, the catch block is skipped, and the finally block executes right after the try block.

// Java program to demonstrate
// finally block in java When
// exception does not rise 
  
import java.io.*;
  
class demo{
    public static void main(String[] args)
    {
        try {
            System.out.println("inside try block");
            System.out.println(36 / 2);   // Not throw any exception
        }
        
        // Not execute in this case
        catch (ArithmeticException e) {
            
            System.out.println("Arithmetic Exception");
            
        }
        // Always execute
        finally {
            
            System.out.println(
                "finally : always executes");
            
        }
    }
}

Output -
inside try block
18
finally : always executes

Case 2: The exception arises and is handled by the catch block; the finally block executes after the catch block.

// Java program to demonstrate final block in Java
// When exception rise and is handled by catch
  
import java.io.*;
  
class demo{
    
    public static void main(String[] args)
    {
        try {
             System.out.println("inside try block");
System.out.println(34 / 0);    // This Throw an Arithmetic exception
        }
  
        // catch an Arithmetic exception
        catch (ArithmeticException e) {
  
            System.out.println(
                "catch : arithmetic exception handled.");
        }
  
        // Always execute
        finally {  
          System.out.println("finally : always executes"); 
        }
    }
}

Output -
inside try block
catch : arithmetic exception handled.
finally : always executes

Case 3: The exception arises but is not handled by the catch block, so the program terminates abnormally after the try block. Even then, the finally block still executes before the termination.

import java.io.*;
  
class demo{
    public static void main(String[] args)
    {
        try {
            System.out.println("Inside try block");  
            // Throw an Arithmetic exception
            System.out.println(36 / 0);
        }
  
        // This catch block accepts only NullPointerException, so the ArithmeticException is not caught
        catch (NullPointerException e) {
            System.out.println(
                "catch : exception not handled.");
        }
  
        // Always execute
        finally {
  
            System.out.println("finally : always executes");

        }
        // This will not execute
        System.out.println("i want to run");
    }
}

Output -
Inside try block
finally : always executes
Exception in thread "main" java.lang.ArithmeticException: / by zero
at demo.main(File.java:9)

3.4 Document the Exceptions Properly

Another best practice for Java exception handling is to document the exceptions a method can throw. Whenever a method declares an exception, it should be documented; this keeps the information on record and lets other team members handle or avoid the exception as required. To do this, add a @throws declaration to the Javadoc and describe the situation in which the exception occurs. 

If you throw any specific exception, its class name should specifically describe the kind of error. So, you don’t need to provide a lot of other additional information. Here is an example for it - 

/**
* This method does something extremely useful ...
*
* @param input
* @throws DemoException if ... happens
*/
public void doSomething(int input) throws DemoException { ... }

3.5 Do not Log and Rethrow the Exception

When an exception occurs in the application, it should either be caught and logged where it is handled, or rethrown so that a method higher up the call stack can handle and log it. Doing both at once should be avoided, because the same error then shows up multiple times in the logs. 

In other words, never log an exception and then rethrow it. Here is an example of this anti-pattern -

/* example of log and rethrow exception*/
try {
  Class.forName("example");
} catch (ClassNotFoundException ex) {
  log.warning("Class not found.");
  throw ex;
}
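
A sketch of the two acceptable alternatives: either handle and log the exception where it is caught, or rethrow it (possibly wrapped) and let the caller log it, but never both.

// Option 1: handle and log it here, without rethrowing
try {
  Class.forName("example");
} catch (ClassNotFoundException ex) {
  log.warning("Class not found, falling back to default behaviour.");
}

// Option 2: rethrow (wrapped here in a runtime exception) and let the caller log it
try {
  Class.forName("example");
} catch (ClassNotFoundException ex) {
  throw new IllegalStateException("Required class 'example' is missing", ex);
}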

3.6 Catch the Most Specific Exception First

Developers must catch the most specific exception types first. The compiler actually enforces part of this ordering: a catch block for a general exception placed before one of its subclasses makes the more specific block unreachable and causes a compile error. Ordering catches from most specific to most general also ensures each exception is handled by the most appropriate block. 

Here is an example where the first catch block handles all NumberFormatExceptions and the second handles all IllegalArgumentExceptions that are not NumberFormatExceptions.

Any other exception is caught by the last catch block.

public void catchMostSpecificExceptionFirst() {
	try {
		doSomething("A message");
	} catch (NumberFormatException e) {
		log.error(e);
	} catch (IllegalArgumentException e) {
		log.error(e);
	} catch (Exception e) {
		log.error(e);
	}
}

4. Java Logging Best Practices

Here are some of the best practices of Java Logging:

4.1 Use a Logging Framework

Using a logging framework is essential because it keeps a record of what the application does at runtime. Robust logging has to deal with concurrent access, format log messages, write to alternative destinations, and remain configurable, and a logging framework handles all of this for the developer. 

When a developer adopts a logging framework, a robust logging process can be carried out without these issues. One of the widely used logging frameworks is Apache Log4j 2. 

Additionally, you can use log levels to control the granularity of your logging, for example LOGGER.info(...), LOGGER.warn(...), LOGGER.error(...), and so on.
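
A minimal sketch of what this looks like with the Log4j 2 API (the class and method names are hypothetical, and the logging configuration file is omitted):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class OrderService {
    private static final Logger LOGGER = LogManager.getLogger(OrderService.class);

    public void processOrder(String orderId) {
        LOGGER.info("Processing order {}", orderId);
        try {
            // ... business logic ...
        } catch (Exception e) {
            // The exception passed as the last argument is logged with its stack trace
            LOGGER.error("Failed to process order {}", orderId, e);
        }
    }
}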

4.2 Write Meaningful Messages

Another best practice when logging in Java is to write meaningful messages. If log events contain meaningful, accurate messages about the situation, it is easy for the whole team to understand the behaviour of the code, and when an error occurs there is enough information to diagnose and resolve it. Consider a message like this - 

LOGGER.warn("Communication error");

A better version of the message would be - 

LOGGER.warn("Error while sending documents to events Elasticsearch server, response code %d, response message %s. The message sending will be retried.", responseCode, responseMessage);

The first message only tells you that there was a communication problem; the developer working on the issue then has to dig up the context of the error, the name of the logger, and the line of code that produced the warning.  

The second message carries all the information about the failed communication, so any developer immediately sees what went wrong. Writing messages this way makes the whole logging output far easier to understand.

4.3 Do not Write Large Logs

Writing overly large logs is not a good practice: unnecessary information reduces the value of the log because it masks the data that is actually required, and it can also create problems with bandwidth or application performance. 

Too many log messages also make it harder to read the log file and identify the relevant information when a problem occurs.

4.4 Make Sure You’re Logging Exceptions Correctly

Another Java best practice to follow while logging is to make sure exceptions are logged correctly and are not reported multiple times. Ideally, exceptions are monitored and reported through automated tools, which can also raise alerts.

4.5 Add Metadata

Adding metadata to the logs helps other developers working on the project find production issues faster. The more consistently metadata such as user or request identifiers is included, the more useful the logs become for the project. For example:

Eg: 

logger.info("This process is for following user -> {}", user.getUserName());
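
With Log4j 2, contextual metadata can also be attached through the ThreadContext map (the MDC), so every log line written during a request carries it automatically. A brief sketch with hypothetical keys, assuming user and requestId are in scope:

import org.apache.logging.log4j.ThreadContext;

// Hypothetical keys; the values become available to the layout pattern, e.g. %X{userId}
ThreadContext.put("userId", user.getUserName());
ThreadContext.put("requestId", requestId);
try {
    logger.info("This process is for following user -> {}", user.getUserName());
} finally {
    ThreadContext.clearAll();   // clean up so context does not leak between pooled threads
}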

5. Conclusion

As seen in this blog, when developing a Java application there are many things a developer must consider, since other engineers will maintain the same project in the long term. For this reason, applications should be built by following the Java best practices described above, which are the standardized approaches for writing good Java code. That way, any other developer will be comfortable handling and maintaining the software project in the future.

The post Java Best Practices for Developers appeared first on TatvaSoft Blog.
