
How to Integrate dotenv with NestJS and TypeORM

Dotenv integration with NestJS and TypeORM.

Application development that relies on third-party services almost always involves secrets such as SSH keys or API credentials. This becomes a problem when a project is handled by a team of developers, because the source code has to be pushed to a shared git repository periodically. Once the code is pushed, anyone with access to the repository can see the third-party keys.

A prominent and widely used solution to this problem is environment variables. These are local variables that hold useful information such as API keys and are made available to the application or project at runtime, without ever being committed to the repository.

A tool called dotenv makes it easy to create such variables and expose them to the application. It is an easy-to-use tool that can be added to your project with any package manager.
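For a quick picture of what dotenv does, consider a hypothetical .env file containing API_KEY=some-secret-key; a minimal sketch of loading it looks like this:

import * as dotenv from 'dotenv';

// load the variables from .env into process.env
dotenv.config();

// the value from the file is now available to the application
console.log(process.env.API_KEY); // -> 'some-secret-key'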

We will use yarn as a package manager.

First, add the package using terminal.


yarn add dotenv

Since NestJS is based on TypeScript, we also need to add the corresponding "@types" package, which provides the TypeScript type definitions for the JavaScript package.


yarn add @types/dotenv

Since the database we will use is Postgres, install the corresponding driver.


yarn add pg

Now install the TypeORM module to your nest project.


yarn add @nestjs/typeorm typeorm

Now, create TypeORM entities in your project. For this illustration, we will create a folder 'db' inside the 'src' folder of our Nest project, create another folder 'entities' inside it, and add a TypeScript file containing the information about our TypeORM entity.

For the sake of simplicity, we will create a user entity file with an 'id' field, a 'name' field, and an 'email' field.

#src/db/entities/user.entity.ts

import { BaseEntity, Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity({name: 'UserTable'})
class UserEntity extends BaseEntity {

  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  name: string;

  @Column()
  email: string;
}

export default UserEntity;

Note that this entity is explicitly named 'UserTable'. The name is optional, but it becomes useful for migrations, as we will see shortly.

Now create a migration file for this user entity. A migration file can be created with the TypeORM command-line interface using the following command:


typeorm migration:create -n CreateUserTable

This will create a migration file with a timestamp as part of its name; here, 'CreateUserTable' is the name you chose for the migration.

Now create a folder 'migrations' inside the 'db' folder and place the migration file inside it, if the CLI has not done so already.

Now create a separate file that will serve as a migration utility for defining the database schema. We can name this file migrationUtil.ts.

Inside this migration util file, create functions that return the options for various column types, namely varchar, integer, etc.

We will be creating two functions for illustration, namely ‘getIDColumn‘ and ‘getVarCharColumn‘.

#src/util/migrationUtil.ts

import { TableColumnOptions } from 'typeorm/schema-builder/options/TableColumnOptions';

class MigrationUtil {

  public static getIDColumn(): TableColumnOptions[] {
    const columns: TableColumnOptions[] = [];
    columns.push({
      name: 'userId',
      type: 'int',
      isPrimary: true,
      isNullable: false,
      isGenerated: true,
      generationStrategy: 'increment',
    });

    return columns;
  }

  public static getVarCharColumn({ name, length = '255', isPrimary = false, isNullable = false, isUnique = false, defaultValue = null }): TableColumnOptions {
    return {
      name,
      length,
      isPrimary,
      isNullable,
      isUnique,
      // only quote and set a default when one was actually provided
      default: defaultValue === null ? undefined : `'${defaultValue}'`,
      type: 'varchar',
    };
  }
}

export default MigrationUtil;

Here, 'TableColumnOptions' is a type that TypeORM provides out of the box.

The code for this file is pretty straightforward: each of these functions, when called, returns the options for a separate column of your entity table.

Now, back to the ‘CreateUserTable’ migration file, the file should look like this:

#src/db/migrations/1578306918674-CreateUserTable.ts

import { MigrationInterface, QueryRunner, Table } from 'typeorm';

export class CreateUserTable1578306918674 implements MigrationInterface {

    public async up(queryRunner: QueryRunner): Promise<any> {
        
    }

    public async down(queryRunner: QueryRunner): Promise<any> {
        
    }

}

Now, add a table to this migration file using our migration utility file as:

#src/db/migrations/1578306918674-CreateUserTable.ts

....
private static readonly table = new Table({
        name: 'UserTable',
        columns: [
          ...MigrationUtil.getIDColumn(),
          MigrationUtil.getVarCharColumn({name: 'name'}),
          MigrationUtil.getVarCharColumn({name: 'email'}),
        ],
    });

....

Note that this table is given the same name as the user entity, which makes the entity-table mapping obvious to developers. Also, finish the code for the async 'up' and 'down' methods using the QueryRunner.

The idea is to create three columns in the user table – ‘userId’, ‘name’ and ’email’.

Thus, in the end, the migration file will be looking something like this:

#src/db/migrations/1578306918674-CreateUserTable.ts

import { MigrationInterface, QueryRunner, Table } from 'typeorm';
import MigrationUtil from '../../util/migrationUtil';

export class CreateUserTable1578306918674 implements MigrationInterface {

    private static readonly table = new Table({
        name: 'UserTable',
        columns: [
          ...MigrationUtil.getIDColumn(),
          MigrationUtil.getVarCharColumn({name: 'name'}),
          MigrationUtil.getVarCharColumn({name: 'email'}),
        ],
    });

    public async up(queryRunner: QueryRunner): Promise<any> {
        await queryRunner.createTable(CreateUserTable1578306918674.table);
    }

    public async down(queryRunner: QueryRunner): Promise<any> {
        await queryRunner.dropTable(CreateUserTable1578306918674.table);
    }

}

Now, create your environment files containing environment variables. We will be creating two .env files, namely- development.env and test.env.

The environment variables for development.env will be:

#env/development.env

TYPEORM_CONNECTION = postgres
TYPEORM_HOST = 127.0.0.1
TYPEORM_USERNAME = root
TYPEORM_PASSWORD = root
TYPEORM_DATABASE = dotenv
TYPEORM_PORT = 5432
TYPEORM_ENTITIES = db/entities/*.entity{.ts,.js}
TYPEORM_MIGRATIONS = db/migrations/*{.ts,.js}
TYPEORM_MIGRATIONS_RUN = true
TYPEORM_MIGRATIONS_DIR = src/db/migrations
HTTP_PORT = 3001

And the environment variables for test.env will be:

#env/test.env

TYPEORM_CONNECTION = postgres
TYPEORM_HOST = 127.0.0.1
TYPEORM_USERNAME = root
TYPEORM_PASSWORD = root
TYPEORM_DATABASE = dotenv-test
TYPEORM_PORT = 5432
TYPEORM_ENTITIES = db/entities/*.entity{.ts,.js}
TYPEORM_MIGRATIONS = db/migrations/*{.ts,.js}
TYPEORM_MIGRATIONS_RUN = true
TYPEORM_ENTITIES_DIR = src/db/entities
HTTP_PORT = 3001

Now, create a TypeORM config file for the connection setup.

We will place this file in the ‘config‘ folder under ‘src‘ folder of the project.

#src/config/database.config.ts

import * as path from 'path';

const baseDir = path.join(__dirname, '../');
const entitiesPath = `${baseDir}${process.env.TYPEORM_ENTITIES}`;
const migrationPath = `${baseDir}${process.env.TYPEORM_MIGRATIONS}`;

export default {
  type: process.env.TYPEORM_CONNECTION,
  host: process.env.TYPEORM_HOST,
  username: process.env.TYPEORM_USERNAME,
  password: process.env.TYPEORM_PASSWORD,
  database: process.env.TYPEORM_DATABASE,
  port: Number.parseInt(process.env.TYPEORM_PORT, 10),
  entities: [entitiesPath],
  migrations: [migrationPath],
  migrationsRun: process.env.TYPEORM_MIGRATIONS_RUN === 'true',
  seeds: [`src/db/seeds/*.seed.ts`],
  cli: {
    migrationsDir: 'src/db/migrations',
    entitiesDir: 'src/db/entities',
  },
};

Here, process.env contains all our environment variables.

Note that we will specify the environment during command execution, and accordingly either 'development.env' or 'test.env' will be used as the file supplying the environment variables.

In the same folder, create another configuration file for dotenv; we will name it 'dotenv-options.ts'.

#src/config/dotenv-options.ts

import * as path from 'path';

const env = process.env.NODE_ENV || 'development';
const p = path.join(process.cwd(), `env/${env}.env`);
console.log(`Loading environment from ${p}`);
const dotEnvOptions = {
  path: p,
};

export { dotEnvOptions };

The code for this file is straightforward.

Note that the console.log call will tell us which environment file Nest picks up while executing commands; the same path is then passed as the dotenv options just below it.

Now, to successfully integrate dotenv with Nest, the official Nest docs recommend creating a config service along with a config module.

Thus, create a 'services' folder and, inside that folder, a 'config.service.ts' file.

#src/services/config.service.ts

import * as dotenv from 'dotenv';
import * as fs from 'fs';
import * as Joi from '@hapi/joi';
import { Injectable } from '@nestjs/common';
import IEnvConfigInterface from '../interfaces/env-config.interface';
import { TypeOrmModuleOptions } from '@nestjs/typeorm';
import * as path from 'path';

@Injectable()
class ConfigService {
  private readonly envConfig: IEnvConfigInterface;

  constructor(filePath: string) {
    const config = dotenv.parse(fs.readFileSync(filePath));
    this.envConfig = this.validateInput(config);
  }

  public getTypeORMConfig(): TypeOrmModuleOptions {
    const baseDir = path.join(__dirname, '../');
    const entitiesPath = `${baseDir}${this.envConfig.TYPEORM_ENTITIES}`;
    const migrationPath = `${baseDir}${this.envConfig.TYPEORM_MIGRATIONS}`;
    const type: any = this.envConfig.TYPEORM_CONNECTION;
    return {
      type,
      host: this.envConfig.TYPEORM_HOST,
      username: this.envConfig.TYPEORM_USERNAME,
      password: this.envConfig.TYPEORM_PASSWORD,
      database: this.envConfig.TYPEORM_DATABASE,
      port: Number.parseInt(this.envConfig.TYPEORM_PORT, 10),
      logging: false,
      entities: [entitiesPath],
      migrations: [migrationPath],
      migrationsRun: this.envConfig.TYPEORM_MIGRATIONS_RUN === 'true',
      cli: {
        migrationsDir: 'src/db/migrations',
        entitiesDir: 'src/db/entities',
      },
    };
  }

  /*
	  Ensures all needed variables are set, and returns the validated JavaScript object
	  including the applied default values.
  */
  private validateInput(envConfig: IEnvConfigInterface): IEnvConfigInterface {
    const envVarsSchema: Joi.ObjectSchema = Joi.object({
      NODE_ENV: Joi.string()
        .valid('development', 'test')
        .default('development'),
      HTTP_PORT: Joi.number().required(),
    }).unknown(true);

    const { error, value: validatedEnvConfig } = envVarsSchema.validate(
      envConfig,
    );
    if (error) {
      throw new Error(`Config validation error: ${error.message}`);
    }
    return validatedEnvConfig;
  }
}

export default ConfigService;

Here, 'IEnvConfigInterface' is an interface we define ourselves to make the code easier to understand:


#src/interfaces/env-config.interface.ts

export default interface IEnvConfigInterface {
  [key: string]: string;
}

dotenv.parse reads the contents of the file containing the environment variables and makes them available for use: it accepts a string or a Buffer and converts it into an object of key-value pairs.
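As a small illustration (the variable values here are just examples), dotenv.parse can be fed a Buffer directly:

import * as dotenv from 'dotenv';

// dotenv.parse accepts a string or Buffer and returns an object of key-value pairs
const parsed = dotenv.parse(Buffer.from('TYPEORM_HOST=127.0.0.1\nHTTP_PORT=3001'));
console.log(parsed); // { TYPEORM_HOST: '127.0.0.1', HTTP_PORT: '3001' }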

This object is then validated using a Joi schema object; Joi is a validation library provided by hapi. In this schema, we specify that the environment (test or development) is read from the NODE_ENV key supplied on the command line, and that if no environment is specified, it defaults to 'development'. Our envConfig field is then initialized with the validated object.

Now, create a ConfigModule and import it into the app module.

#src/modules/config.module.ts

import { Global, Module } from '@nestjs/common';
import ConfigService from '../services/config.service';

@Global()
@Module({
  providers: [
    {
      provide: ConfigService,
      useValue: new ConfigService(`env/${process.env.NODE_ENV || 'development'}.env`),
    },
  ],
  exports: [ConfigService],
})
export default class ConfigModule {
}

Here, ConfigService is registered as a provider in this module. But since our config service expects a file path through its constructor, we use 'useValue' to construct the service with an argument; this defaults to the development.env file if no environment is explicitly provided when executing CLI commands.
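The article does not show the app module wiring itself, but a minimal sketch of how the ConfigService could feed TypeORM might look like this (the file name and import paths are assumptions based on the folders used above):

#src/app.module.ts

import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import ConfigModule from './modules/config.module';
import ConfigService from './services/config.service';

@Module({
  imports: [
    ConfigModule,
    // build the TypeORM connection options from the validated env config
    TypeOrmModule.forRootAsync({
      inject: [ConfigService],
      useFactory: (configService: ConfigService) => configService.getTypeORMConfig(),
    }),
  ],
})
export class AppModule {}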

Now we will create a loader file that loads the configuration for both the database and dotenv.

We will create this file in a 'cli' folder under the 'src' folder of our project and name it 'loader.ts'.


#src/cli/loader.ts

import * as dotenv from 'dotenv';
import { dotEnvOptions } from '../config/dotenv-options';

// Make sure dbConfig is imported only after dotenv.config

dotenv.config(dotEnvOptions);
import * as dbConfig from '../config/database.config';

module.exports = dbConfig.default;

Note the comment in the code: dbConfig must be imported only after dotenv.config has run, because our database configuration depends on the environment variables that dotenv loads.

Now, in the 'scripts' section of our package.json file, we will add two entries that serve as our CLI commands for migrations.

...

"migrate:all": "ts-node ./node_modules/typeorm/cli migration:run -f src/cli/loader.ts",
"migrate:undo": "ts-node ./node_modules/typeorm/cli migration:revert -f src/cli/loader.ts"

...

Note that these commands pass our loader file to the TypeORM CLI as its configuration.

And, that’s it!

We have successfully integrated dotenv with NestJS and TypeORM.

To test this, start your database server, and then run the following CLI commands one after another:


NODE_ENV=development yarn migrate:all
NODE_ENV=test yarn migrate:all

Each run will log to the console which environment file is currently being used.



Top 10 NodeJS Frameworks For Developers in 2020

Node.js is an open-source, cross-platform runtime environment built on Chrome's V8 JavaScript engine. Its event-driven, non-blocking I/O model makes NodeJS frameworks extremely lightweight and efficient for web applications.

As a developer, one gets to use the same language for both client-side and server-side scripting, and this unique facility has driven the quick adoption of NodeJS frameworks by many developers across the globe for building web applications of any size.

Since its launch in 2009 as a tool for building scalable, server-side web applications, it has seen exponential growth in usage.

In addition, Node facilitates quick prototyping in building unique projects.

Let's check out this list of the top 10 NodeJS frameworks:

Hapi.JS

Hapi is a powerful and robust framework used for developing APIs. Its well-developed plugin system and key features such as input validation, configuration-based functionality, caching, error handling, and logging make Hapi one of the most preferred frameworks. It is used for building useful applications and providing technology solutions by several large-scale websites such as PayPal and Disney.

Hapi builds secure, powerful, scalable applications with minimal overhead and out-of-the-box functionality

Hapi follows a configuration-driven pattern, traditionally modeled to control web server operations. A unique feature is the ability to create a server on a specific IP. With extension points like 'onPreHandler', we can intercept a request before it is completed and do some pre-processing on it.
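As a rough sketch of that extension point (the port and log message are arbitrary), an 'onPreHandler' extension can be registered like this:

import * as Hapi from '@hapi/hapi';

const init = async () => {
  const server = Hapi.server({ port: 3000, host: '127.0.0.1' });

  // runs before the route handler, so the request can be inspected or modified here
  server.ext('onPreHandler', (request, h) => {
    console.log(`pre-processing ${request.method} ${request.path}`);
    return h.continue;
  });

  await server.start();
};

init();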

Express.JS

Express.js was built by TJ Holowaychuk, one of the members of the core Node project team. A large community backs this framework, so it has the advantages of continuous updates and reforms of all the core features. This is a minimalist framework that is used to build a number of mobile applications and APIs.

Express is a minimal and flexible NodeJS web application framework providing a robust set of features

Its robust API allows users to configure routes to send and receive requests between the front end and the database (acting as an HTTP server framework).

A good advantage of Express is its support for many other packages and template engines such as Pug, Mustache, EJS, and more.
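For instance, configuring a route in Express takes only a few lines (the port and path below are arbitrary):

import express from 'express';

const app = express();

// configure a route that answers GET requests on /hello
app.get('/hello', (req, res) => {
  res.json({ message: 'Hello from Express' });
});

// start the HTTP server
app.listen(3000);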

Socket.io

Socket.io: the fastest and most reliable real-time engine

It is used for building real-time web applications. It's a JavaScript library that allows bidirectional data flow between the web client and the server. Asynchronous I/O, binary streaming, and instant messaging are some of its most important features.
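A minimal sketch of that bidirectional flow, assuming Socket.io v3+ and a hypothetical 'chat message' event:

import { Server } from 'socket.io';

// attach a Socket.io server to port 3000
const io = new Server(3000);

io.on('connection', (socket) => {
  // receive an event from one client...
  socket.on('chat message', (msg) => {
    // ...and push it out to every connected client
    io.emit('chat message', msg);
  });
});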

Total.JS

Total.js is a modern and modular NodeJS framework supporting the MVC architecture. Angular.js, Polymer, Backbone.js, Bootstrap, and other client-side frameworks are fully compatible with it. This framework is extensible and asynchronous. It does not require tools such as Grunt to compress assets, which makes it easy to use. It also has NoSQL storage embedded and supports arrays and other prototypes.

TotalJS: for fast, furious, and powerful websites, REST services, and real-time applications

Total.js has some really nice distributions, like Total.js Eshop, which contains a user interface optimized for mobile devices and is downloadable by all premium members. Eshop is one of the best Node.js e-commerce systems, thanks to its many versions of a unique content management system (CMS).

Sails.JS

This MVC framework has become very popular with NodeJS developers, gaining traction through the development of chat applications, dashboards, and multiplayer games. It is best known for building data-driven APIs. It uses Waterline for object-relational mapping and database solutions. This framework is built on Node.js and uses Express.js for handling HTTP requests.

Sails


Its compatibility with Grunt modules, including LESS, SASS, Stylus, CoffeeScript, Jade, and Dust, makes it an ideal candidate for browser-based applications.

Sails is highly compatible with several front-end platforms, and developers have plenty of freedom in their development process while using this framework.

Derby

Derby is a full-stack framework for writing modern web applications

This is an MVC framework used for creating real-time mobile and web applications. Derby's Racer, a real-time data synchronization engine for Node.js, allows multi-site, real-time concurrency and data synchronization across clients and servers. Racer optimizes conflict resolution and allows real-time editing within the application by leveraging ShareJS.

Derby is an open-source, full-stack NodeJS web framework based on the MVC structure, and it is considered ideal for developing real-time collaborative applications. Using DerbyJS, developers can easily add custom code and build real-time, effective custom-made websites.

Meteor.JS

One of the most widely used NodeJS frameworks is Meteor.JS, and this list would be incomplete without it. It is a full-stack NodeJS framework that allows users to build real-time applications.

Meteor

It is used to create both mobile and web-based javascript applications.

Backed by a huge community of developers, tutorials, custom packages, and documentation, this framework has been used to create some great web and mobile applications using only JavaScript.

Loopback

Loopback is a highly extensible API framework that allows the user to create APIs that can work with any kind of web client and can be easily bridged to backend sources, even in modern applications with complex integrations. With LoopBack being an open-source framework, the user can create dynamic REST APIs with minimal or no coding.

Loopback
Highly extensible NodeJS framework for building APIs and microservices

Loopback permits developers to create SDKs and API documentation. This is possible due to the API Explorer widget that comes with LoopBack by default.

It also comes with model-relation support, third-party login and storage services, API Swagger, and better user management policies.

Koa

Koa was created by the same team that built Express.js, and it is often referred to as the next-generation NodeJS framework. Koa is unique in that it uses some really cool ECMAScript (ES6) methods that have not even landed in some browsers yet. It allows you to work without callbacks while providing a major improvement in error handling.

It requires NodeJS version 0.11 or higher.

Koa: a next-generation web framework for NodeJS

KoaJS supports the async/await keywords, which helps keep the code neat and manageable.

Also, it doesn't bundle any middleware in its core, which makes server writing with Koa faster and more enjoyable. KoaJS comes with more options for customization: it lets you build applications from scratch, where developers add only the features they need.
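A minimal Koa application illustrates the callback-free style (the port and response body are arbitrary):

import * as Koa from 'koa';

const app = new Koa();

// middleware is an async function: no callbacks, and errors can be caught with try/catch
app.use(async (ctx) => {
  ctx.body = 'Hello from Koa';
});

app.listen(3000);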

NestJS

NestJS is a framework built with Node.js, used for building efficient, scalable server-side applications. Nest uses progressive JavaScript and is written in TypeScript. Being built with TypeScript means that Nest comes with strong typing, and it combines elements of OOP (Object-Oriented Programming), FP (Functional Programming), and FRP (Functional Reactive Programming).

NestJS
NestJS Framework- a progressive NodeJS Framework for building efficient, reliable and scalable server-side applications


Advantages of NodeJS Framework

NodeJS is now emerging as one of the most commonly used development environments for building the frontend and backend of web applications, and it is a preferred environment for custom web development. Let's check some of the major advantages of NodeJS frameworks:

  • Real-time working environment
  • Simple coding experience
  • Seamless data streaming
  • Same code pattern throughout development
  • User-friendly

Final Analysis

After going through this article, we can see that the choice of a particular framework depends entirely on the kind of website or web application we plan to build. The list is endless, but we have presented the 10 most useful NodeJS frameworks based on their utilization and ubiquity in the JavaScript community.


MEAN Stack Development Influences The Future Of Web Apps

Web app development is a fast-moving realm, and its usage keeps increasing. Today's web applications demand highly competent architecture and navigation: they need to be dynamic, user-friendly, robust, and flexible. Evolving technology leaves web developers with many choices for their apps, so while choosing a suitable framework for a solution, it is essential to pick a software technology that combines the best features for the job.

MEAN stack is a growing contemporary trend in JavaScript development. This stack is one technology bundle that meets all the requirements for fully efficient development in the best possible way. An open-source JavaScript bundle for web apps, MEAN is an acronym that stands for:

  • M stands for MongoDB,
  • E stands for Express,
  • A for AngularJS and
  • N for NodeJS.
Mean Stack Development

Web developers find MEAN stack application development an attractive choice and are switching to it because it builds on the latest go-to technology: full-stack JavaScript. The flawless combination of these four robust technologies makes it the most sought-after bundle for web app development services.

What makes this stack an ideal choice for developing a website is as follows:

  • Flexible in developing for any size and type of organization.
  • Best viable technology solutions for all Business segments from startups, SMEs, or large enterprises.
  • Straightforward for frontend and backend developers to apply this framework.
  • Suitable framework for any multi-layer application.
  • Immense benefit in terms of productivity and performance.

Knowledge of JavaScript language mechanisms from the presentation layer to the database is all you need to proceed with the MEAN stack software.

A brief look into the 4 Components of MEAN

MongoDB is the open-source, NoSQL database framework for JavaScript

  • Cross-platform document-oriented database model
  • A schema-less, independent NoSQL database
  • With JavaScript, the complete application development is limited to a single language
  • Collects and stores the application’s database in MEAN
  • High scalability in both storage and performance
  • Cost-effective and useful in transferring both client and server-side data without compromising data access
  • Expandable resources, load balancing and handling increased activity durations

ExpressJS is the lightweight server-side JavaScript framework

  • Web application framework for NodeJS and simplifies the development process
  • Cuts down the entire process of writing secure code for web and mobile applications
  • Developers can add new features and enhancements
  • Minimal structure mainly used for backend development and aids decluttering
  • Building smooth server-side software with NodeJS
  • Prevents accidental redefinition of a variable, therefore, eliminating errors and time-saving

AngularJS is the web frontend JavaScript framework

  • A browser-free MVC JavaScript UI framework with data binding
  • Popular Google’s front end framework that enables smooth flow of information throughout the application
  • Enables rapid development of dynamic single-page web apps (SPA’s)
  • Modular structure and develops for both mobile and web
  • Easy-to-use templates and high scalability for full stack front end augmentation

NodeJS is the open-source JavaScript-based runtime framework

  • Built on Chrome's V8 JavaScript engine
  • JavaScript source code is compiled to native machine code before execution
  • Helps build scalable, secure web applications and comes with an integrated web server
  • Maintains a vast ecosystem of open source libraries and components
  • Quickly responds to usage spikes during runtime

The reasons why this stack is preferred for web application development are as follows:

  • Inexpensive in Nature

Its budget-friendly nature is the main reason MEAN development is a cut above other technology frameworks. Because it is full-stack development, unnecessary expenditure on resources can be eliminated for customers as well as developers: a large volume of code is reused and shared among developers, which restrains the budget considerably.

  • Full JavaScript Framework

Since the framework is entirely JavaScript, it has its own set of benefits in terms of exceptional user experience and data handling. Both Linux and Windows OS are supported. Data recovery is speedy due to the power and dependability of the framework, and both NodeJS and AngularJS contribute to building competent web apps that can handle more traffic.

  • Universal Accessibility to JSON

JSON is present all over the stack, whether it's AngularJS, MongoDB, NodeJS, or ExpressJS, which gives the advantage of a seamless expanse of data within the layers: rewriting or reformatting code is not required as data flows between layers. MEAN uses a standard JSON format for data without exception, and it also becomes increasingly simple when working with APIs.

  • Highly Scalable and so very Popular

Full-stack development with MEAN is scalable, and its ability to handle multiple users makes it a reliable choice and a business favorite. In addition, all four components are open source. Development time is also faster, owing to the presence of various frameworks, libraries, and reusable modules. Because it is swift in operation, easy to collaborate with, easy to learn, and takes less time to develop cloud-native applications, it is eventually a developer's choice.

Being open source makes it available for free, and MEAN can be easily deployed since it includes its own web server. The development potential is high thanks to the many other JavaScript resources that work with this stack. Because of this, MEAN stack web development has made avid developers look forward to working with it, and the built-in elements of JavaScript make it even easier to utilize resources in this sector.

  • Reusability and Fixing is much simpler

Using a single language across the whole application streamlines the development process. It becomes easier for developers as it eliminates the need for different specialists to develop each part of a web application. It also enables easy tracking of the entire development flow, monitoring data exchange, and catching sight of errors and bugs. The technology can be improved further with the help of a few third-party open-source tools and libraries that allow the frontend and the backend to be reworked quickly.

  • Lowered Development Expenses

A MEAN application enters the tech world ready to take advantage of all the cost savings and performance improvements of the cloud. The primary feature of this technology is that it does not incur needless expenses, so a large volume of concurrent users can be served. Code reuse across the entire application reduces reinvention, and code sharing helps developers reach their target objective within the committed time frame and allocated budget.

  • Enables Collaboration Between Developers

The stack technology has a lot of community support behind it, so finding answers to questions or even hiring assistance is straightforward. All developers speak the same programming fundamentals, so it is effortless and efficient for them to mutually understand the nuances of web app development. The advantage of hiring MEAN stack developers is that they can effectively understand, facilitate collaboration on, and manage the project with ease.

  • Access to Congruent Coding

MEAN stack makes it easy to transfer code between frames, i.e., between the server side and the client side. Code created in one framework can be moved to another without difficulty or disruption in performance, which is yet another critical feature of this technology in comparison to the rest.

  • Systematic & Exceptionally Flexible

It is incredibly swift to prototype with, because the stack has its own internal web server that lets applications start without difficulty, and the database can be scaled on demand to absorb momentary usage spikes. A consistent language and flexibility give it an added competitive edge for developers.

Some of the famous and familiar websites that use MEAN stack are Netflix, Uber, LinkedIn, Walmart, PayPal, and Yahoo. The web development frameworks and databases are enhancing every day. This is the most suitable stack technology for cutting-edge web and mobile applications development.


Various Tools Used For API Testing

What Exactly Is An API?

According to Wikipedia, 'An application programming interface (API) is an interface or communication protocol between different parts of a computer program intended to simplify the implementation and maintenance of software.'

API stands for Application Programming Interface. An API is a software intermediary that allows smooth communication between two applications.


APIs (Application Programming Interfaces), are the connecting layer between different layers of an application. Simply put, it acts as a messenger for applications, devices, and databases.

In addition, APIs are used for programming graphical user interface (GUI) components. A good API makes it easier for the developer to put all blocks together by providing all the building blocks.

The API layer contains the business logic of an application where the user’s interaction with services, data, and functions of the app is determined.


Since the API or service layer is in direct touch with both the data and presentation layers, it occupies a central place in continuous testing for QA and development teams.

Applications have three layers:

  1. Data layer
  2. Service (API) layer
  3. Presentation (UI) layer

What is API Testing?

Today, APIs are considered the epicenter of software development, connecting and transferring data and logic across disparate systems and applications. Testing them greatly improves the efficiency of the overall testing strategy and helps deliver software faster.

API Testing

While traditional testing mainly focuses on the UI (User Interface), API testing has many advantages of its own. API testing consists of making requests to one or more API endpoints and validating the responses for performance, security, functional correctness, or status, whereas UI testing focuses on validating the look and feel of the web interface. API testing also lays greater emphasis on business logic, data responses, and security and performance bottlenecks.

Various Types of API Testing

  • Unit Testing

The testing world is filled with misnomers; a simple way to "unit test" an API is to test a single endpoint with a single request, looking for a single response.

Most of the time, such unit tests are performed manually via command-line tools like cURL or with lightweight tools like SoapUI.

  • Integration Testing

Integration testing is the most common form of API testing because APIs stay at the centre of integration.

  • End-to-End Testing

End-to-End testing can help us validate the flow of data and information between a few different API connections. 

  • Performance Testing

Earlier, load testing was difficult to execute in a CI/CD environment and was performed by very few teams. LoadUI Pro is a performance testing tool for RESTful, SOAP, and other web services that enables nearly any team member to embed performance tests into their CI/CD pipeline.

Why API testing is required?

As the changes in software happen at a rapid pace, it becomes important to have tests that provide faster feedback for developers and testers. The major benefit of API testing is flexible access to the application without any user interface. 

Testing the core, code-level functionality of the application provides an early evaluation of its overall build strength before running the GUI tests.

Various API Testing Tools

API Tool Features

Let’s check the list of top API Testing Tools/ Their features, which simplifies the development process:

  1. JMeter
  2. Postman
  3. Rest assured
  4. Citrus Framework
  5. Fiddler
  6. Insomnia
  7. Powershell
  8. Taurus
  9. SoapUI
  10. Karate
  11. KatalonStudio
  12. TestNG
  13. Apiary
  14. Tricentis Tosca
  15. Swagger
  16. Apigee

Various Tools Used For API Testing:

1. JMeter

JMeter was created for load testing, and many developers also use it for functional API testing. JMeter is a simple yet powerful tool for automated testing: developers can carry out performance testing of RESTful services with JMeter scripting.

JMeter

Key Features:

  • It can use different languages like Java, JavaScript, and PHP.
  • It is designed to test web applications, as well as it has expanded its base to other test functions.
  • JMeter includes all the functionality you need to test an API, plus some extra features that can enhance your API testing efforts. 
  • It also integrates with Jenkins, which means you can include your API tests in your CI pipelines.

2. Postman

Postman is a free, easy-to-install tool used for building and testing APIs. Postman is a good option for exploratory API testing, and it is powerful enough to create more integrated solutions as needed.


Key features:

  • Writing and running tests for every request using JavaScript.
  • During testing API in Postman, the developer gets to choose required HTTP methods like GET, PUT, POST, etc.
  • Store associated endpoints into a collection.

3. REST-Assured

REST-Assured is a major tool for API testing. When using Java, REST-Assured is the best choice for API automation: it is a library tailor-made for Java developers to test and validate REST services.

REST-Assured

REST-Assured is a fluent Java library used to test HTTP-based REST services. It’s designed with testing in mind, and it integrates with any existing Java-based automation framework. 

Key Features:

  • The REST-Assured API was created so that one doesn’t necessarily need to be an HTTP expert.
  • It provides a behavior-driven development (BDD)-like, domain-specific language that makes creating API testing so simple.
  • It also has a bunch of baked-in functionalities, which means one doesn’t have to code things from scratch.

Testing and validating REST services in Java is harder than in dynamic languages such as Ruby and Groovy; REST-Assured lends that simplicity to Java, which leads developers to choose it for API testing.

4. SoapUI

SoapUI is a free, open-source tool used for testing web services.

SoapUI

Key Features:

  • SoapUI is a major API testing tool used to test web services.
  • It is commonly used for SOA ( Service Oriented Architecture) Testing.
  • It is sufficient to check both SOAP Web services as well as RESTful Web Services.

5. Karate

Karate is an open-source API test-automation tool that can script calls to HTTP endpoints and assert that JSON or XML responses are as expected.

Karate

Key Features:

  • API tests are written using BDD Gherkin syntax. But unlike most BDD frameworks (Cucumber, JBehave, SpecFlow), you don’t need to write step definitions.
  • It is easy to use since no Java knowledge is required; if you are a novice to programming, it's a great blessing.

6. Fiddler

Fiddler allows the developer to monitor, manipulate, and reuse HTTP requests.

Fiddler

Key Features:

  • It helps you debug web applications by capturing network traffic between the Internet and test computers.
  • It enables you to inspect incoming and outgoing data to monitor and modify requests and responses before the browser receives them.

7. Citrus Framework

Citrus is an open-source tool that can help to automate integration tests for any messaging protocol or data format.

Citrus

Key Features:

  • Works with REST, SOAP, HTTP, JMS, TCP/IP, and other protocols.
  • Creates tests using Java or XML.
  • If you plan to test headless technologies beyond REST services, Citrus is the tool for you.
  • It is made to handle any headless protocol, giving you an excellent solution for all your non-UI testing needs.

8. PowerShell

PowerShell is super-efficient at automating many things from the command line.

Microsoft's PowerShell

Key Features:

  • It requires only one line of code to import a web services description language (WSDL).
  • It comes factory-installed on all Windows machines.
  • It is easy to learn and very fast, as it runs from the command line without any UI overhead.

9. Insomnia

It's a free and easy-to-use tool with a visually attractive interface.

Insomnia API Tool

Major benefits of Insomnia are:

  • It allows creating HTTP requests
  • It allows viewing response details
  • It organizes your tests
  • It reuses values
  • It generates code snippets 

10. Taurus

Taurus is an automation-friendly framework for continuous testing, and it can be used together with JMeter.

The power of Taurus is that it allows developers to write their tests in YAML: a full-blown script can be described in about 10 lines of text, giving developers the ability to describe their tests in a YAML or JSON file.

Taurus

Key features:

  • Taurus allows more team members to contribute to API testing. Since test cases are written in YAML, Taurus tests are much more readable, which makes them easier to perform code reviews on.
  • Taurus fits performance testing in your CI/CD pipeline more efficiently.
  • Taurus provides a sort of abstraction layer on top of JMeter.

Taurus is great to use when the developer wants to take a more BDD-testing approach to their API testing efforts and using YAML files provides clear, easy-to-read tests.

11. Katalon Studio

Katalon Studio is considered one of the best automation testing tools for web, API, and mobile, and it is often viewed as one of the best emerging testing tools.

Katalon Studio

Key features:

  • End-to-end testing solution for testers and developers
  • Supports all kind of SOAP, REST requests
  • Works with frameworks such as BDD Cucumber, a testing approach in which test cases are written in natural language, helping communication between business stakeholders and technical staff.
  • Built-in integrations with Jenkins, JIRA, Slack, Docker, and qTest
  • Efficiently utilize Katalon UI/UX features like searching, drag & drop, built-in keywords, selecting test cases

12. TestNG

TestNG is a testing framework for the Java language inspired by JUnit and NUnit. The best feature of TestNG is that it provides easy-to-use functionality and fulfills all types of testing requirements, such as unit, integration, and functional testing.

TestNG

Key Features:

  • If TestNG is used with Selenium, one can create a prompt report showing how many test cases failed, passed, and were skipped.
  • Easily integrate with DevOps tools like Maven, Jenkins, Docker, etc.
  • Create data-driven tests using TestNG.

13. Swagger

Swagger tools come in both open-source and pro versions and are helping millions of developers and testers deliver great APIs.

Swagger

Key Features:

  • Swagger Inspector makes it easy to design, document, and test APIs.
  • One can test APIs in the cloud.
  • It supports all types of services, including REST and SOAP.
  • SwaggerHub is the platform where one can design and document APIs with OpenAPI.

14. Tricentis Tosca

Tricentis Tosca is a continuous testing tool customized for DevOps platforms, built because some of the leading tools failed to meet the needs of DevOps environments.

Tricentis Tosca

Key Features:

  • Even a beginner can understand the Tosca tool and instantly create advanced API tests from a business perspective, then integrate them throughout all scenarios.
  • Tricentis Tosca is suitable for continuous testing and test automation for mobile-based, web-based, UI, SAP, and other applications.

15. Apiary

Apiary is a complete API platform where we can design, build, develop, and document APIs.


Key Features:

  • It provides a framework to develop, test, and implement production-ready API, faster.
  • Normally, to create an API we need to define a schema for the input and the output, whereas in Apiary an API can be designed with mocked input and output.

This mocked API meets application specifications without any change in code, while data can be integrated and tested.

16. MuleSoft API

MuleSoft, aka Anypoint API Manager, is a platform where developers can build, design, manage, and publish APIs.

MuleSoft

Key Features:

  • It allows organizations to integrate with popular cloud services such as Salesforce, SAP, and many more.
  • The AnyPoint platform uses Mule as a run time engine.
  • API Manager ensures each API is secure; in simple terms, it offers full lifecycle API management.

17. Apigee

 Apigee by Google Cloud enables API managers to design, secure, publish, analyze, monitor, and monetize APIs.


Key Features:

  • It can be operated in a hybrid-cloud environment to drive digital acceleration.
  • Apigee Edge creates API proxies; using these, one can get real-time analytics data.
  • Proxies created by Apigee Edge manage security and authentication to provide better services.

Conclusion

  1. Each organization has different requirements, and they deploy different tools as per the requirements of each project.
  2. There is no such thing as the perfect tool.
  3. All API test tools work superbly well and are great options, depending on your team's requirements.
  4. The role of APIs is extremely important when analyzed from the software development and business angles.
  5. These machine-readable interfaces for resource exchange are delivery services that work under the hood and enable the needed technological connections.
  6. Much of the same functionality is accessible in all API tools, but the approach of each tool differs. The best way to experience their complete feature sets is to give them a try and see what best suits your business requirements.


Learn to Use fetch() in API Calls Easily!

Today we are going to explore the fetch function for API calls and get to know the different methods (such as GET, POST, PUT, and DELETE), call authentication, and interceptors.

Before fetch() was introduced, XMLHttpRequest (XHR) was used to make HTTP requests. An XMLHttpRequest needs two listeners set to handle the success and error cases, plus calls to open() and send(), which made it more complex to understand and implement.

A sample snapshot of XHR code performing an asynchronous GET request:

//Create the XHR object
let xhr = new XMLHttpRequest();
//Call open: request type (GET), url, true = asynchronous
xhr.open('GET', 'http://www.example.com/users', true);
//Handle the response once it has loaded
xhr.onload = function() {
  //check if the status is 200 (means everything is okay)
  if (this.status === 200) {
    //return the server response as an object with JSON.parse
    console.log(JSON.parse(this.responseText));
  }
};
//call send
xhr.send();

What is fetch() method ?

The Fetch API provides a JavaScript interface for accessing and manipulating parts of the HTTP pipeline, such as requests and responses. It also provides a global fetch() method that provides an easy and logical way to fetch resources asynchronously across the network.

Fetch Method Figure
Api Call

fetch() allows us to make network requests similar to XMLHttpRequest. The main difference is that the Fetch API uses Promises, which enables a simpler and cleaner API, avoiding callback hell and having to remember the complex API of XMLHttpRequest.

The fetch method has only one mandatory argument: the URL of the resource we wish to fetch.

What are the different methods used by fetch() ?

The most popular methods used with fetch to make an HTTP request to an API are:

  • GET
  • POST
  • PUT
  • DELETE

GET

GET requests are the most common and widely used method in APIs and websites. The GET method is used to retrieve data from the server at the specified resource. For example, say we have an API with a '/users' endpoint; making a GET request to that endpoint should return a list of all available users.

Since a GET request only requests data and does not modify any resource, it is considered a safe and idempotent method.

GET is often the default method in HTTP clients.

//set the specific API URL
const url = 'http://www.example.com/tasks';

//function to make API Call
const getFetch = async (url) => {
  const response = await fetch(url);
  //convert response to JSON format
  const myJson = await response.json();
  //return the response
  return myJson;
}
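Since getFetch is async, it returns a Promise; a hypothetical call might look like this:

//log the list of tasks once the Promise resolves
getFetch(url).then((tasks) => console.log(tasks));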

POST

The POST method sends data to the server and creates a new resource. The resource it creates is subordinate to some other parent resource. When a new resource is posted to the parent, the API service will automatically associate the new resource by assigning it an ID (new resource URI).

In short, this method is used to create a new data entry.

In web services, POST requests are used to send data to the API server to create or update a resource. The data sent to the server is stored in the request body of the HTTP request.

The simplest example is a contact form on a website. When we fill out the inputs in a form and hit Send, that data is put in the request body of the request and sent to the server. This may be JSON, XML, or query parameters (there are plenty of other formats, but these are the most common).

It's worth noting that a POST request is non-idempotent: it mutates data on the backend server (by creating or updating a resource), as opposed to a GET request, which does not change any data.

//set the specific API URL
const url = 'http://www.example.com/tasks';
//initialize the data to be posted
const data = {
    userId: 11,
    id: 2,
    title: 'coding task',
    completed: false
  }

//function to make API Call
const postFetch = async (url, data) => {
  const response = await fetch(url, {
      method: 'POST',
      headers: {
        //type of data
        'Content-Type': 'application/json'
      },
      //data to be posted on server
      body: JSON.stringify(data)
    });
  //convert response to Json format
  const myJson = await response.json();
  //return the response
  return myJson ;
}

NOTE: here we needed to pass in the request method, headers, and body. We did not pass these earlier for the GET method because those fields are configured for a GET request by default, but we must specify them for all other types of requests. In the body, we assign values to the resource's properties, stringified. Note that we do not need to assign a URI: the API will do that for us and, as you can see from the response, assigns an id to the newly created resource.

PUT

The PUT method is most often used to update an existing resource. If we want to update a specific resource (which comes with a specific URI), we can call the PUT method to that resource URI with the request body containing the complete new version of the resource we are trying to update.

Similar to POST, PUT requests are used to send data to the API to create or update a resource. The difference is that PUT requests are idempotent. That is, calling the same PUT request multiple times will always produce the same result. In contrast, calling a POST request repeatedly may have side effects of creating the same resource multiple times.

//set the specific API URL
const url = 'http://www.example.com/tasks/5';
//initialize the data 
  const data = {
    userId: 1,
    id: 5,
    title: 'hello task',
    completed: false
  }

//function to make API Call
const putFetch = async (url,data) => {
  const response = await fetch(url, {
    method: 'PUT',
    //data to be updated on server
    body: JSON.stringify(data),
    headers: {
      //type of data
      'Content-type': 'application/json; charset=UTF-8'
    }
  });
  //convert response to Json format
  const myJson = await response.json();
  //return the response
  return myJson;
}

DELETE

The DELETE method is exactly what it sounds like: it deletes the resource at the specified URI. This method is one of the more common ones in RESTful APIs, so it's good to know how it works.

If a new user is created with a POST request to /users, and it can be retrieved with a GET request to /users/{{userid}}, then making a DELETE request to /users/{{userid}} will completely remove that user.

//set the specific API URL
const url = 'http://www.example.com/tasks/3';

//function to make API Call
const deleteFetch = async (url) => {
  const response = await fetch(url, {
    method: 'DELETE'
    });
  //convert response to Json format
  const myJson = await response.json();
  //return the response
  return myJson ;
}

How to pass authentication parameters ?

We can authenticate our API call by adding the authentication parameters as a header to the fetch function. This increases the security and authenticity of the API call.

Most APIs require access permission. The user or developer has to register and accept the terms and conditions, and in return gets login credentials, which are passed in a header to get access to the API.

//set the specific API URL
const url = 'http://www.example.com/tasks';

//function to make a GET API call with authentication
const authFetch = async (url) => {
  const response = await fetch(url, {
    headers: {
      //type of data
      'Content-type': 'application/json; charset=UTF-8',
      //adding authentication to the API call; the standard request header is Authorization,
      //and Basic auth expects the 'user:password' credentials base64-encoded
      'Authorization': 'Basic ' + btoa('user:password')
      //replace user and password with the original credentials
    }
  });
  //convert response to JSON format
  const myJson = await response.json();
  //return the response
  return myJson;
}

Using Interceptors in fetch()

We can intercept requests or responses before they are handled by then or catch.

Interceptors help perform required tasks, such as changing or manipulating the URL, logging data, or adding tokens to headers, before and after making an API call. They are invoked automatically for every API call, so we don't need to explicitly intercept each one.


Interceptors enhance API calls and provide more features to make them efficient and effective. To make use of interceptors, we have to install a package named fetch-intercept in our project.

To install the fetch-intercept package, run one of these commands in the terminal:

yarn add fetch-intercept --save
//OR
npm install fetch-intercept --save

We can apply interception to requests as well as responses through the code fragment below, making our API calls smoother and less error-prone.

import fetchIntercept from 'fetch-intercept';
 
const registerIntercept = fetchIntercept.register({
    request: function (url, config) {
        // Modify the url or config here
        console.log(url);
        return [url, config];
    },
 
    requestError: function (error) {
        // Called when an error occurred during another 'request' interceptor call
        return Promise.reject(error);
    },
 
    response: function (response) {
        // Modify or log the response object
        console.log(response);
        return response;
    },
 
    responseError: function (error) {
        // Handle a fetch error
        return Promise.reject(error);
    }
});
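One detail worth noting: fetchIntercept.register returns a function that removes the registered interceptors, which is why the example above stores it in registerIntercept:

// later, when interception is no longer needed, detach the interceptors
registerIntercept();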

Conclusion

Anyone looking to build a complete application must know how to query a database, since almost all applications fetch and store data. These request methods are more than enough for a fully functional application. JavaScript's Fetch API is extremely simple to use: whether or not you've worked with APIs before, it is easy to pick up and adapt to. Nonetheless, a fundamental understanding of each of these requests, authentication, and interceptors equips you to adapt to other kinds of HTTP request methods and helps in making safe, smooth, and efficient API calls.


SQL vs NoSQL – Which Is Best for You?

Structured Query Language (SQL)

SQL is a domain-specific programming language used for managing and querying data stored in a relational database management system (RDBMS); it is also used for stream processing in RDBMSs. Relational databases use relations (typically called tables) to store data, and match data by using common characteristics within the dataset.


SQL, often pronounced "S-Q-L" or "See-Quel", is the standard language for dealing with relational databases. Invented in 1974, it is still going strong, with the latest version of the standard released in 2016. It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables.

A relational database defines relationships in the form of tables, and SQL is used effectively to insert, search, update, and delete database records.


SQL is originally based on relational algebra and tuple relational calculus, and consists of various types of statements. These statements can be classified into sublanguages: Data Query Language (DQL), Data Definition Language (DDL), Data Control Language (DCL), and Data Manipulation Language (DML).

Schema For SQL

A schema in SQL is a template, a pattern that describes the qualities of the information a database will store.

Specifically, it describes:

  • Type – The type of a piece of information describes its general attributes. For example, integers can be positive or negative and have no fractional part; knowing such characteristics makes a big difference in how efficiently the data is stored.
  • Size – The size of each piece of information determines how much space it occupies in the database. Although the price of storage has come down, it is still not practical to allow infinite storage space, so sizes are decided at the design stage, when the database is built and maintained.
  • Organization – This refers to how the information is grouped and stored according to the user's convenience and intended use. Frequently used information is stored on a priority basis, while unused or rarely used information is stored separately, making the experience comfortable for the user.

SQL provides an organized and systematic approach to accessing information through various methods like:

  • Data query
  • Data manipulations (insert, update and delete),
  • Data definition (schema creation and modification),
  • Data access control

Although SQL is essentially a declarative language, it also includes procedural elements.


Scalability

Scalability is the ability of a system, network, or process to handle a growing amount of work efficiently, or its ability to be enlarged to accommodate that growth. In other words, it is the ability of a system to optimize its performance to match the demands placed on it at any given stage.

EXAMPLES

A few examples of relational databases that use SQL are:

  • MySQL
  • Oracle
  • Microsoft SQL server
  • Sybase
  • Ingres
  • Access
  • Postgres

Model

ACID is a concept generally used by database professionals to evaluate databases and application architectures in the SQL database model; it ensures that data is stored in a safe, consistent, and robust manner.

Here, ACID stands for:

A – Atomicity – An all-or-nothing proposition: within a transaction, either all changes are saved or none are.

C – Consistency – Saved data cannot violate the database's integrity constraints; interrupted changes are rolled back so that the database returns to its state prior to the change.

I – Isolation – A transaction is not affected by other transactions happening at the same time, which avoids "mid-air collisions."

D – Durability – Once a transaction is committed, it persists; regardless of subsequent failure or system restart, its state remains unaffected.

For a reliable database, all these four attributes should be achieved.
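As an illustration of atomicity, here is a minimal sketch of a transaction using the node-postgres (pg) client; the accounts table and the transfer scenario are hypothetical. Either both UPDATE statements are committed together or, on any error, both are rolled back:

import { Pool } from 'pg';

const pool = new Pool(); // connection settings are read from environment variables

async function transfer(fromId: number, toId: number, amount: number) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, fromId]);
    await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, toId]);
    await client.query('COMMIT');   // both updates are saved together
  } catch (err) {
    await client.query('ROLLBACK'); // neither update is saved
    throw err;
  } finally {
    client.release();
  }
}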

Usage- Which jobs use SQL?

SQL statements are used to perform tasks such as updating data in, and retrieving data from, a database.

In SQL Server specifically, a job is a specified series of operations performed sequentially by SQL Server Agent. A job can perform a wide range of activities, including running Transact-SQL scripts, command-prompt applications, Microsoft ActiveX scripts, Integration Services packages, Analysis Services commands and queries, and replication tasks.

Pros

  • High speed – Using SQL queries, the user can quickly and efficiently retrieve large amounts of data from a database.
  • No coding needed – With standard SQL, it is very easy to manage a database without writing substantial amounts of code.
  • Well-defined standards – Long-established ISO and ANSI standards are strictly followed.
  • Portability – It can be used on PCs, laptops, servers, and even some mobile phones.
  • Interactive language – SQL makes it easy to communicate with the database and get answers to complex queries.

Cons

Along with some benefits, the SQL database comes with certain limitations/ disadvantages:

  • Difficult interface – SQL has a complex interface that can make it difficult for some users to work with.
  • Partial control – Users don't get full control over the database because of hidden business rules.
  • Vendor lock-in – Some databases add proprietary extensions to standard SQL, tying users to a particular vendor.
  • Cost – The operating cost of some SQL versions makes them expensive to use.

The average salary of an SQL developer:-

The average annual salary for an SQL developer in the USA is $84,328.

NoSQL

NoSQL is a non-relational database management system that does not require a fixed schema, avoids joins, and is easy to scale. NoSQL databases are used for distributed data stores with humongous data storage needs.

NoSQL stands for "not only SQL" or "not SQL." It is an alternative to traditional relational databases, in which data is placed in tables and the schema is carefully designed before the database is built.

A NoSQL database is self-describing, so it does not require a schema, and it does not enforce relations between tables in all cases. In document-oriented NoSQL databases, for example, records are JSON documents: complete entities that one can readily read and understand.
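To illustrate what self-describing means, here are two records that could live side by side in the same document collection even though their shapes differ; the field names are made up for the example:

// two documents in the same collection: no table definition forces them to share columns
const user1 = {
  name: 'Alice',
  email: 'alice@example.com'
};

const user2 = {
  name: 'Bob',
  email: 'bob@example.com',
  // extra fields can appear per document without altering any schema
  address: { city: 'Berlin', zip: '10115' },
  shoppingPreferences: ['books', 'electronics']
};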

A NoSQL database system encompasses a wide range of database technologies that can store structured, semi-structured, unstructured and polymorphic data.


'NoSQL' refers to high-performance, non-relational databases that utilize a wide variety of data models. These databases are highly recognized for their ease of use, scalable performance, strong resilience, and wide availability.

Database

According to Wikipedia “A NoSQL database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases.”


NoSQL is a cloud-friendly approach to employ for your applications.

Schema For NoSql

The formal definition of a database schema is a set of formulas or sentences called “Integrity constraints” imposed on a database.


The term "schema" refers to the organization of data, a blueprint of how the database is constructed; in the case of relational databases, construction means the division of the database into tables.

Scalability

NoSQL databases are horizontally scalable, which means they can handle increased traffic needs immediately, simply by adding more servers to the database. ‘NoSQL’ databases have the ability to become larger and more powerful, making them a preferred choice for larger or constantly evolving data sets.



Examples

Four of the most popular NoSQL databases are MongoDB, Cassandra, Redis, and Couchbase.

Model

NoSQL relies upon a softer model known as the BASE model, where BASE stands for Basically Available, Soft state, Eventual consistency.

Basically Available: guarantees the availability of the data. Soft state: the state of the system may change over time, even without new input. Eventual consistency: the system will become consistent over time, provided it stops receiving new input.

Usage

NoSQL is used for Big data and real-time web apps.

Pros

NoSQL provides high availability with a rich query language and easy scalability. The following are the main advantages of NoSQL databases.

  • Elastic scaling

An RDBMS might not scale out easily on commodity clusters, but newer NoSQL databases are designed to expand transparently to take advantage of new nodes.

  • Big data

To keep up with the growing volumes of data being stored, RDBMS capacity has been increased to match. But with rising transaction rates, the volume of data that a single RDBMS can practically manage has become difficult for organizations and enterprises worldwide to handle. NoSQL systems address this by handling big-data needs, as demonstrated in the Hadoop ecosystem.

Cons

Every database has certain advantages and some disadvantages as well; listed here are a few of the major NoSQL limitations:

  • Less community support
  • Lack of standardization
  • Interfaces and interoperability concerns

Average salary of a NoSQL developer:-

The average annual salary for a NoSQL developer in the USA is $72,174.

Major Differences To Understand in SQL and NoSql Database As Per Business Needs

To understand which data management system, SQL or NoSQL, is best for your organization, we must identify the needs of our business and then make an informed decision. In database technology there is no one-size-fits-all solution, so it is recommended to analyze SQL vs NoSQL and then decide.

Many businesses rely on both relational and nonrelational databases for different tasks, as NoSQL databases win in speed, safety, cost, and scalability, whereas the SQL database is preferred when the highly structured database is required.


One of the key differentiators is that NoSQL databases are column-oriented (or otherwise non-relational) distributed databases, whereas RDBMSs are row-oriented relational databases. They are also differentiated by how they are built, the type of information they store, and how they store it.

Relational databases are structured, like phone books; non-relational databases are document-oriented and distributed, like file folders that store everything from a person's address and phone number to their Facebook and online shopping preferences.



The major point of differences in Sql Vs NoSql databases are:

  1. Language – One of the major differences between SQL and NoSQL databases is the language. SQL databases use Structured Query Language for defining and manipulating data, which makes them widely used and extremely versatile, but also restrictive: SQL requires predefined schemas to determine the structure of the data before you start working with it. A NoSQL database, in contrast, allows a dynamic schema for unstructured data, and the data can be stored in many different ways: graph-based, document-oriented, column-oriented, or organized as a key-value store. This flexibility lets you create documents without carefully planning and defining their structure beforehand, add fields as you go, and even give each document its own unique structure; the syntax also varies from one database to another.

2. Scalability – Another big difference between SQL and NoSQL is their scalability. Most SQL databases are vertically scalable, which means that you increase the load a single server can take by adding components such as RAM, SSDs, or CPUs. In contrast, NoSQL databases are horizontally scalable, which means they handle increased traffic simply by adding more servers. NoSQL databases can therefore become larger and much more powerful, making them the preferred choice for large or constantly evolving data sets.


3. Community – Because of SQL's advanced and mature features for database management, it has a much stronger and more developed community than NoSQL. Although NoSQL is growing rapidly, its community is not yet as big or as well defined, because it is relatively new.

4. Structure – Finally, an important difference lies in their structures. SQL databases are table-based, which makes them a good option for multi-row transactions, such as accounting systems, and for legacy systems built on a relational structure. NoSQL databases are key-value pairs, wide-column stores, graph databases, or document-based in structure.

List Of Top Companies Using SQL:

  • Hootsuite
  • Gauges
  • CircleCI

List Of Top Companies Using NoSQL:

  • Uber
  • Airbnb
  • Kickstarter

Conclusion:

One of the most important decisions for your business is which database to go for. It often happens that a business requires both kinds of database at various stages of an application. The onus is on the developer to recognize the right database for a certain application and deploy it based on query and scalability needs.

  • SQL databases are suitable for transactional data where the structure does not change frequently, or at all, and where data integrity and durability are of paramount importance. They are also useful for fast analytical queries.
  • NoSQL databases provide better flexibility and scalability, yielding high performance with high availability. They are better for big data and real-time web applications.

Categories
Backend Developer Database Programming React Developers

Hooks, Getting in a New Relationship

Introducing React Hooks


In 2018, at React Conf, Hooks were officially introduced to React.

Hooks arrived as a savior for developers who were struggling to maintain hundreds of states for hundreds of components.

They let you use state and other React features without writing a class, so you can kick classes out of your components.

No need to worry; there are no plans to remove classes from React permanently, yet.

You can adopt Hooks gradually. Hooks work side by side with existing code, so there is no rush to migrate, and you don't have to learn or use Hooks right now if you don't want to.

WHY GO FOR HOOKS?

You might be wondering why you need to learn one more feature. The answer is here:

  • It helps when you need to maintain too many components and states.
  • Completely opt-in.
    You can try Hooks in a few components without rewriting any existing code.
  • A "wrapper hell" of components surrounded by layers of providers, consumers, higher-order components, render props, and other abstractions. While we could filter them out in DevTools, this points to a deeper underlying problem: React needed a better primitive for sharing stateful logic, and here Hooks made their appearance.
  • With Hooks, code reusability is improved: you can extract stateful logic from a component so it can be tested independently and reused, without changing your component hierarchy. This makes it easy to share Hooks among many components or with the community.
  • Render props and higher-order components try to solve some of the same problems but make code harder to follow, because they require you to restructure your components.
  • Components might perform data fetching in componentDidMount and componentDidUpdate. However, the same componentDidMount method might also contain unrelated logic that sets up event listeners, with cleanup performed in componentWillUnmount. Mutually related code that changes together gets split apart, while completely unrelated code ends up combined in a single method, making it easy to introduce bugs and inconsistencies.
  • It's not always possible to break these components into smaller ones because the stateful logic is spread all over the place, and it is difficult to test them. This is one reason many people prefer to combine React with a separate state-management library.
  • Class components can encourage unintentional patterns that make certain optimizations fall back to a slower path.

How Hooks Affect the Coding Style

  • Say bye! to class
Without Hooks:

Class Components

class Clock extends React.Component {
    ...
    ...
    render() {
        return (
            <div>
                <h1>...something...</h1>
            </div>
        );
    }
}
With Hooks:

Function Components

A function component has no render() method; it returns the JSX directly:

function Example() {
    ... // Hooks can be used here
    ...
    return (
        <div>
            <h1>...something...</h1>
        </div>
    );
}
OR like this:
const Example = () => {
    ... // Hooks can be used here
    ...
    return (
        <div>
            <h1>...something...</h1>
        </div>
    );
}

> you can also pass props to the function:

function Example(props) {
    ... // Hooks can be used here
    ...
}
OR like this:
const Example = (props) => {
    ... // Hooks can be used here
    ...
}

props can be accessed like this -> const v = props.value

  • Creating a local state
Without Hooks:
const state = {
    x: 10,
    y: 'hello',
    z: {
        word: "world!"
    }
}
With Hooks:

useState is used to set the initial value for a local state.

// this is How we declare a new state variable
const [color, setColor] = useState('Yellow');

// declaring multiple state variables
const [x, setX] = useState(10);
const [y, setY] = useState('hello');
const [z, setZ] = useState([{
    word: "world!",     
}]);
  • Accessing state: a Breakup With this
Without Hooks:
constructor(props) {
    super(props);
    this.state = { text: 'demo' };
}

render() {
    return (
        <div>
            <h1>This is { this.state.text }</h1>
        </div>
    );
}
With Hooks:

While using hooks, state variables can be accessed directly

const [text, setText] = useState('demo');

return (
    <div>
        <h1>This is { text }</h1>
    </div>
);
  • Changing the State
Without Hooks:
...
this.state = {
    a: 1,
    b: 2,
    fruit: 'apple'
}

...
{
    // setState merges this update into the existing state
    this.setState({
        fruit: 'orange'
    });
}
With Hooks:
const [fruit, setFruit] = useState('apple');

...
{
    setFruit('orange')
}
  • Effect of the Effect Hook
  • React runs the effects after every render, including the first render.
  • With useEffect() we can run a script after each update or after a particular change.
  • Lifecycle methods componentDidMount, componentDidUpdate or componentWillUnmount can be replaced with useEffect()
// To run with each Update
useEffect(() => {
    // Do something
});


// run only when value of "color" is changed
useEffect(() => {
    // Do something
}, [color]);


// run only on first render
useEffect(() => {
    // Do something
}, []);

Let’s see some usages, in lifecycle methods
— ComponentDidMount

Without Hooks:
componentDidMount() {
    // do something
    const cat = "tom";
    this.setState({
        animal: cat
    });
}
With Hooks:
useEffect(() => {
    // Do Something
    const cat = "tom";
    setAnimal(cat);
}, []);

— ComponentDidUpdate

Without Hooks:
componentDidUpdate() {
    // do something
    const cat = "tom";
    this.setState({
        animal: cat
    });
}
With Hooks:
useEffect(() => {
    // Do Something
    const cat = "tom";
    setAnimal(cat);
})

The above snippet runs the code on every update, including the first render, acting as a combination of componentDidMount and componentDidUpdate. If you want to prevent it from running on the first render, keep track of whether this is the first render, like this:

const [isFirstRender, setIsFirstRender] = useState(true);

useEffect(() => {
    if (isFirstRender) {
        setIsFirstRender(false);
    } else {
        // do Something
        const cat = "tom";
        setAnimal(cat);
    }
});

— ComponentWillUnmount

Without Hooks:

componentWillUnmount() {
    // Do Something
}
With Hooks:

Just return a function (named or anonymous) from the effect for the cleanup that we would otherwise do in componentWillUnmount:

useEffect(() => { 
    return () => {
        // Do something
    }
});
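For instance, a typical cleanup is removing an event listener that the effect itself added; a small sketch (the resize logging is just an example):

useEffect(() => {
    const onResize = () => console.log(window.innerWidth);
    window.addEventListener('resize', onResize);

    // the returned function runs on unmount (and before the effect re-runs)
    return () => {
        window.removeEventListener('resize', onResize);
    };
}, []);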
  • Getting the context with the Context Hook

useContext() takes a context object as the parameter and returns the corresponding context values at that time. Refer to the example below for a better understanding.

// for example, We have
const flowers = {
    sunflower: {
        petals: 25,
        color: "yellow"
    },
    daisy: {
        petals: 5,
        color: "white"
    },
    rose: {
        petals: 30,
        color: "red"
    }
};


// Creating our context
const MyContext = React.createContext( flowers.rose );


// Wrapping the component with <MyContext.Provider>
function App() {
    return (
        <MyContext.Provider value={ flowers.sunflower }>
            <MyComponent />
        </MyContext.Provider>
    )
}

The current context value is determined by the value of the value prop passed in the nearest <MyContext.Provider> in which the component is wrapped.

// ... somewhere in our function component ...
const flower = useContext(MyContext);

Since MyComponent is wrapped in a Provider whose value is flowers.sunflower, flower will have the value of sunflower:
{ petals: 25, color: "yellow" }
and can be used as
<p>Colour of sunflower is { flower.color }.</p>
The component will re-render each time the context value is updated. (The default passed to createContext, flowers.rose here, is only used when no matching Provider is found above the component.)

You must have got the 'context' of this blog if you are still here; kindly have a look at "Some rules to remember" below:

Some rules to remember

  • Never be conditional with Hooks:
    don't call Hooks inside loops or conditions; call Hooks at the top level (see the sketch after this list).
  • Don't call Hooks from nested functions:
    call them only from React function components or from custom Hooks.
    More details can be found in the official React docs.
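As a quick illustration of the first rule, here is a sketch of a conditional Hook call that breaks the rules, followed by a corrected version (the component and its props are made up):

import React, { useState } from 'react';

// Wrong: the Hook is only reached on some renders
function FormWrong({ showName }) {
    if (showName) {
        const [name, setName] = useState(''); // breaks the Rules of Hooks
        return <input value={name} onChange={e => setName(e.target.value)} />;
    }
    return null;
}

// Right: call the Hook unconditionally and branch in the JSX instead
function FormFixed({ showName }) {
    const [name, setName] = useState('');
    return showName
        ? <input value={name} onChange={e => setName(e.target.value)} />
        : null;
}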

More about Hooks

Some other commonly used Hooks are useReducer, useCallback, useMemo, and useRef.

Custom Hooks

A custom Hook is a function whose name starts with "use" and that may call other Hooks; it lets you extract component logic into reusable functions.

Let's create a custom Hook useColor that returns the color of the flower whose ID is passed as an argument:

function useColor(flowerID) {
    const [color, setColor] = useState(null);

    useEffect(() => {
        /*    Extract the value of colour of the flower from the database and set the value of color using setColor()    */
    });

    return color;
}

Now, we can use our custom hook,

{
    // To get the colour of the flower with ID = 10
    const color = useColor(10);
}

Learn more about how to create the custom hooks in detail.

See official docs for React Hooks.

Categories
Backend Developer Career Development Frontend Developers Programming

Top 35 interview questions on JQuery


We have listed some of the most frequently asked jQuery interview questions. These questions are curated by experts so that you don't have to go anywhere else. Here we bestow in-depth knowledge about jQuery so that you can bag your dream job.

1. Define JQuery?

Answer. jQuery is a free, open-source JavaScript library released under the permissive MIT License; it was first released on August 26, 2006. jQuery's syntax is designed to make it easier to navigate a document, select DOM elements, create animations, handle events, and develop Ajax applications. jQuery also provides developers with the ability to create JavaScript plug-ins.

2. What are the advantages of JQuery?

Answer.

  • It's like an enhanced version of JavaScript, so learning a new syntax carries no overhead.
  • It provides hundreds of plug-ins for almost everything.
  • jQuery has cross-browser support.
  • It eliminates the need to write complex loops and library calls for DOM scripting.
  • jQuery keeps code short, easy to read, straightforward, and reusable.

3. Name the methods that provide effects to JQuery?

Answer.

  • fadeIn()
  • fadeOut()
  • show()
  • hide()
  • toggle()

4. What is the difference between the ID selector and class Selector in JQuery?

Answer. If you have used CSS, you might already know the difference between the ID and class selectors; jQuery works the same way. The ID selector selects a single element by its ID, e.g. #element1, while the class selector selects all elements carrying a given CSS class. Use the ID selector when selecting just one element, and the class selector when selecting a group of elements that share the same CSS class. The interviewer may ask you to write code using both. Syntactically, the difference is that the former uses the "#" character and the latter uses the "." character.
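A short sketch of both selectors (the IDs and class names are made up; jQuery is assumed to be loaded):

// ID selector: matches the single element with id="header"
$('#header').hide();

// class selector: matches every element carrying class="menu-item"
$('.menu-item').css('color', 'blue');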

5. What do you mean by grouping?

Answer. If more than one selector shares the same declarations, they can be grouped together in a comma-separated list; this reduces the size of the CSS (every bit and byte is important) and makes it more readable. The following snippet applies the same background to the h1, h2, and h3 elements: h1, h2, h3 { background: blue; }.

6. Name the compatible operating system with JQuery?

  • Windows
  • Mac
  • Linux

7. How you can read, write and delete cookies in Jquery?

Answer. Using the Dough cookie plugin, we can handle cookies in jQuery. Dough is user-friendly and has powerful features.

  • Create cookie:
$.dough("cookie_name", "cookie_value");
  • Read cookie:
$.dough("cookie_name");
  • Delete cookie:
$.dough("cookie_name", "remove");

8. What do you mean by JQuery connect? And also tell how to use it?

Answer. jQuery connect is a plugin used to connect or bind one function to another; it lets you execute a function whenever another function or plugin runs. To use it, download the jQuery connect file from the official jQuery website and include that file in the HTML document. Then use $.connect to connect one function to another.

9. Is there any program for testing JQuery? If yes, name it?

Answer. Yes, there is a program for testing jQuery: QUnit is used to test jQuery, and it is very easy and efficient.

10. What do you mean by Jquery UI?

Answer. JQuery UI is a set of jQuery JavaScript Library user interactions, effects, widgets, and themes. JQuery UI works well for highly interactive web applications with a variety of controls or simple date picker pages.

11. What is the use of the HTML() method in JQuery?

Answer. The jQuery html() method is used to change the entire content of the selected elements. It replaces the content of the selected elements with new content.

$(document).ready(function(){
    $("button").click(function(){
        $("p").html("Hello <b>Codersera</b>");
    });
});

12. What is the use of Jquery.each () function?

Answer. The jQuery.each() function is a general function that loops through a collection (either an object or an array). Array-like objects with a length property are iterated by index position and value; other objects are iterated via their named properties.

Nevertheless, jQuery.each() works differently from $(selector).each(), which uses a selector to operate on DOM elements; but both iterate over a collection.

Syntax:

jQuery.each(collection, callback(indexInArray, valueOfElement))

<script type="text/javascript">
$(document).ready(function() {
    var arr = ["Mary", "John", "Garry", "Tina", "Daisy"];
    $.each(arr, function(index, value) {
        alert('Position is : ' + index + ' And Value is : ' + value);
    });
});
</script>

13. How can you debug JQuery?

Answer. There can be two ways to debug JQuery:-

  • Add the debugger; statement to the line from which you want to debug, and run Visual Studio in debug mode with the F5 key.
  • Insert a breakpoint after attaching the process.

14. Can Jquery be replaceable with JavaScript?

Answer. NO, JQuery is not a replacement for JavaScript.

15. Differentiate between prop and attr?

Answer. jQuery.prop() gets the value of a property for the first element in the matched set.

jQuery.attr() gets the value of an attribute for the first element in the matched set.

Attributes carry additional information about an HTML element and come in name="value" pairs. You can set and specify an attribute for an HTML element when the source code is written.

For example: <input id="txtBox" value="Jquery" type="text" readonly="readonly" />

Here, "id", "type" and "value" are attributes of the input element.


16. Differentiate between $(this) and this keyword in jQuery?

Answer. For many jQuery beginners it might seem a tricky question, but it is actually the easiest one. $(this) returns a jQuery object, on which you can call jQuery methods, e.g. text() to retrieve text or val() to retrieve a value, while this is the current DOM element itself, this being the JavaScript keyword that denotes the current element in scope. You cannot call jQuery methods on this until it is wrapped in the $() function, i.e. $(this).
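A small sketch inside a click handler (the button ID is hypothetical):

$('#myButton').click(function () {
    console.log(this.id);        // plain DOM element: gives "myButton"
    console.log($(this).text()); // jQuery object: jQuery methods are available
});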

17. Where JQuery can be used?

Answer.

  • DOM manipulation
  • Animation
  • Calling functions on events
  • Applying CSS, statically or dynamically

18. Differentiate between find and children methods?

Answer. The find() method is used to locate all the descendant elements of the selected element, while the children() method only travels a single level down, returning the direct children of the selected element.
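For example, given nested markup, children() stops at the first level while find() walks all descendants (the markup is assumed):

// markup: <div id="box"><p>direct</p><span><p>nested</p></span></div>
$('#box').children('p').length; // 1: only the direct <p> child
$('#box').find('p').length;     // 2: all descendant <p> elements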

19. Can you write a command that gives the version to JQuery?

Answer. The command $.ui.version returns the jQuery UI version.

20. Can you explain bind() vs live() vs delegate() methods in JQuery?

Answer. The bind() method does not attach events to elements that are added after the DOM is loaded, whereas the live() and delegate() methods also attach events to such future elements.

The difference between live() and delegate() is that chaining does not work with live(); it only operates directly on a selector or element, while delegate() supports chaining.

21. Differentiate between Map and Grep function in Jquery?

Answer. $.map() loops over each element in an array and transforms its value, while $.grep() returns a filtered array, applying a filter condition to an existing array.

The basic structure of $.map() is:

$.map(array, callback(elementOfArray, indexInArray))

Syntax for $.grep():

$.grep(array, callback(elementOfArray, indexInArray) [, invert])
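A short sketch of both (the arrays are made up):

// $.map: transform every element
var doubled = $.map([1, 2, 3], function (value, index) {
    return value * 2; // result: [2, 4, 6]
});

// $.grep: keep only the elements matching a condition
var evens = $.grep([1, 2, 3, 4], function (value, index) {
    return value % 2 === 0; // result: [2, 4]
});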

22. What are JQuery plugins?

Answer. Plugins are pieces of code. A jQuery plugin is code written in a standard JavaScript file; these files provide useful jQuery methods that can be used in combination with the jQuery library's own methods. Every statement you use in a plugin must end with a semicolon ";", and, unless otherwise explicitly noted, each method must return an object; this produces clean, stable software. Conventionally you prefix the filename with jquery, follow it with the plugin's name, and finish with .js.

23. Jquery is a client or server scripting library?

Answer. jQuery is a client-side scripting library.

24. Which sign is used as a shortcut for Jquery?

Answer. The dollar ($) sign is used as a shortcut for jQuery.

25. Name two types of CDN?

Answer.

  • Microsoft – loads jQuery from the Microsoft Ajax CDN
  • Google – loads jQuery from the Google Libraries API

26. What is the use of the JQuery filter?

Answer. The jQuery filter is used to remove values from a set based on given criteria, for example filtering certain products from the master list of products on a cart website.

27. Define the use of JQuery.data() method?

Answer. The jQuery data() method is used to associate data with DOM nodes and JavaScript objects.

28. Define the use of the serialize () method in Jquery?

Answer. The JQuery serialize() method is used to create a text string in standard URL-encoded notation. It serializes the form values so that its serialized values can be used in the URL query string while making an AJAX request.

$(document).ready(function(){
    $("button").click(function(){
        $("div").text($("form").serialize());
    });
});

29. Differentiate between $(window).load and $(document).ready function in jQuery?

Answer. $(window).load is an event that fires when the DOM and all other page content have fully loaded. It fires after the ready event.

In most cases, a script can be executed as soon as the DOM is fully loaded, so ready() is normally the best place to write your JavaScript code. But in some situations you might need to write scripts in the load() method instead, for instance to get an image's actual width and height.

The $(window).load event fires once the DOM and all CSS, images, and frames are fully loaded.

30. Differentiate between Jquery.size() and Jquery.length?

Answer. The jQuery.size() function returns the number of elements in the set. However, size() is not preferred, because jQuery has the .length property, which does the same thing without the overhead of a function call.

31. What is the use of param() method in Jquery?

Answer. The param() method in jQuery is used to create a serialized representation of an object.

32.  Differentiate between onload() and document.ready()?

Answer. We can have only one onload function on a page, but we can have more than one document.ready. document.ready is called as soon as the DOM is loaded, whereas onload is called only when the DOM and all assets, such as images, have loaded on the screen.

33. Which is the fastest selector in JQuery?

Answer. ID and element selectors are the fastest selectors in jQuery.

34. What is the slowest selector in JQuery?

Answer. Class selectors are the slowest selectors in jQuery.

35. Name the types of selectors in JQuery?

Answer.

  • CSS Selector
  • XPath Selector
  • Custom Selector
Categories
Backend Developer Database Development Top Coder

TypeORM With NEST JS Basic Tutorial

In this article, we will be using TypeORM with NestJS to integrate a database with our application. But before starting with TypeORM, let's have a brief look at the concept of object-relational mapping (ORM).

Object-relational mapping is a technique for converting data between incompatible type systems using object-oriented programming languages. In other words, ORM is a programming technique in which a metadata descriptor is used to connect object code to a relational database.

Source Wikipedia

Object code is written in object-oriented programming (OOP) languages such as C++ or Java. We will be using TypeScript to write our object-oriented code.

In addition to the data-access technique, ORM also provides simplified development, because it automates object-to-table and table-to-object conversion, resulting in lower development and maintenance costs.

Now that we have a good idea of what ORM is, let's understand what TypeORM is:

TypeORM: TypeORM is an ORM that can run in NodeJS, Browser, Cordova, PhoneGap, Ionic, React Native, NativeScript, Expo, and Electron platforms and can be used with TypeScript and JavaScript (ES5, ES6, ES7, ES8).

Topics:

  1. Creating a model( or Table ).
  2. Primary / Auto-generation column.
  3. Relationship between two or more models.
  4. Our Project.

Creating a model/ Table

The first step in the database is to create a table. With TypeORM, we create database tables through models. So models in our app will be our database tables.

Now create a sample model “Cat” for a better understanding.


export class Cat {
    id: number;
    name: string;
    breed: string;
    age: string;
}

Note: The database table is not created for each model but only for those models which are declared as entities. To declare a model as an entity, we just need to add @Entity() decorator before the declaration of the Class defining our model.

In addition to this, we should now add columns to our model, because the table that will be generated (now that the model is declared as an entity) makes no sense without any columns in it. To add a data member of a model as a column, we decorate it with the @Column() decorator.

Let us modify our above model of ‘Cats’ by adding ‘@Entity()’ and ‘@Column()’ decorator.


import { Entity, Column } from 'typeorm';

@Entity()
export class Cat {

    @Column()
    id: number;

    @Column()
    name: string;

    @Column()
    breed: string;
    
    @Column()
    age: string;

}

Primary / auto-generated primary column

For creating a column as a primary key of the database table, we need to use @PrimaryColumn() decorator instead of @Column() decorator. And for the primary column to be self-generated, we need to use @PrimaryGeneratedColumn() instead of @PrimaryColumn().

By making ‘id’ in ‘Cat’ as auto-generated primary key, our Cat model will look like this:


import { Entity, Column, PrimaryGeneratedColumn } from 'typeorm';

@Entity()
export class Cat {

    @PrimaryGeneratedColumn()
    id: number;

    @Column()
    name: string;

    @Column()
    breed: string;
    
    @Column()
    age: string;

}

Relationship between two or more models

A relationship, in the context of databases, is a situation that exists between two relational database tables when one table has a foreign key that references the primary key of the other table. Relationships allow relational databases to split and store data in different tables while linking disparate data items.

There are 3 types of relationships in relational database design :-

  • One-to-One (implemented by @OneToOne() decorator)
  • One-to-Many / Many-to-One (implemented by the @OneToMany() and @ManyToOne() decorators)
  • Many-to-Many (implemented by @ManyToMany() decorator)

Our Project

In this section, we will create a NestJS project in which we will have three tables/entities as follows:

  • UserEntity
  • BookEntity
  • GenreEntity

Relationships between the entities:

  • UserEntity and BookEntity: One-To-Many
  • BookEntity and GenreEntity: Many-To-Many

In simple words, a user can have many books and each book can belong to more than one Genre.

For now, we will create the above-mentioned entities without any relationships between them, as follows:

MyProject/db/user.entity.ts


import { Entity, PrimaryGeneratedColumn, Column, BaseEntity, OneToMany } from 'typeorm';
import BookEntity from './book.entity';
@Entity()
export default class UserEntity extends BaseEntity {

  @PrimaryGeneratedColumn()
  id: number;

  @Column({ length: 500 })
  name: string;
}

MyProject/db/book.entity.ts


import { Entity, PrimaryGeneratedColumn, Column, BaseEntity, ManyToOne, ManyToMany, JoinTable } from 'typeorm';
import UserEntity from './user.entity';
import GenreEntity from './genre.entity';

@Entity()
export default class BookEntity extends BaseEntity 
{
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ length: 500 })
  name: string;
}

MyProject/db/genre.entity.ts


import { Entity, PrimaryGeneratedColumn, Column, BaseEntity } from 'typeorm';

@Entity()
export default class GenreEntity extends BaseEntity {

  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  type: string;

}

For setting up the relation between UserEntity and BookEntity, we have to add the following code in UserEntity and BookEntity Class as follows:

MyProject/db/user.entity.ts


// 1:n relation with bookEntity 
  @OneToMany( type => BookEntity , book => book.user)
  books: BookEntity[];

type => BookEntity is a function that returns the class of the entity with which we want to make our relationship.

book => book.user states which column to be used by ‘BookEntity’ to get the associated user.

Now we have set a One-to-Many relationship from the UserEntity side. As One-to-Many is complementary to Many-to-One, we should also state the Many-to-One relationship from BookEntity to UserEntity.

MyProject/db/book.entity.ts


// n:1 relation with books
  @ManyToOne(type => UserEntity, user => user.books)
  user: UserEntity;

Similarly, to make a many-to-many relationship between BookEntity and GenreEntity, we have to add the following code:


// n:n relation with genre
  @ManyToMany(type => GenreEntity)
  @JoinTable()
  genres: GenreEntity[];

Here, the @JoinTable() decorator states that in the many-to-many relationship between BookEntity and GenreEntity, the ownership lies on the BookEntity side.

Now we are done with almost everything related to database and TypeORM. The only thing that remains is to establish a connection with the database. For this purpose, we have to create an ‘ormconfig.json’ file and add the following JSON code in it.

MyProject/ormconfig.json


{
  "type": "sqlite",
  "database": "./database.sqlite",
  "synchronize": "true",
  "entities": [
    "dist/db/entity/**/*.js"
  ],
  "cli": {
    "entitiesDir": "src/db/entity"
  }
}

The first line in the above JSON object specifies that the database we are using is ‘SQLite’.

Now we have to create the NEST controllers and services to handle the requests.

Here are the three DataTransferObjects we will be using in the further code:

MyProject/User/dto/create-user.dto.ts


export default class CreateUserDto {
  readonly name: string;
  readonly books: number[] ;
}

MyProject/User/dto/create-book.dto.ts


export default class CreateBookDto {
  readonly name: string;
  readonly userID: number;
  readonly genreIDs: number[];
}

MyProject/User/dto/create-genre.dto.ts


export default class CreateGenreDto {
  readonly type: string;
}

Below are the Controllers and Services for Users, Books, and Genres which will be handling the requests.

Users

MyProject/User/user.controller.ts


import { Body, Controller, Get, ParseIntPipe, Post, Put } from '@nestjs/common';
import { UserServices } from './user.services';
import CreateUserDto from './dto/create-user.dto';

@Controller('users')
export class UserController {
  constructor(private readonly usersServices: UserServices) {}

//'postUser()' will handle the creating of new User
  @Post('post')
  postUser( @Body() user: CreateUserDto) {
    return this.usersServices.insert(user);
  }
// 'getAll()' returns the list of all the existing users in the database
  @Get()
  getAll() {
    return this.usersServices.getAllUsers();
  }

//'getBooks()' returns all the books which are associated with the user
// provided through 'userID' in the request
  @Get('books')
  getBooks( @Body('userID', ParseIntPipe) userID: number ) {
    return this.usersServices.getBooksOfUser(userID);
  }
}

Myproject/User/user.service.ts


import { Injectable } from '@nestjs/common';
import UserEntity from '../db/entity/user.entity';
import CreateUserDto from './dto/create-user.dto';
import BookEntity from '../db/entity/book.entity';
import {getConnection} from "typeorm";

@Injectable()
export class UserServices {

  async insert(userDetails: CreateUserDto): Promise<UserEntity> {
    const userEntity: UserEntity = UserEntity.create();
    const {name } = userDetails;
    userEntity.name = name;
    await UserEntity.save(userEntity);
    return userEntity;
  }
  async getAllUsers(): Promise<UserEntity[]> {
    return await UserEntity.find();
  }
  async getBooksOfUser(userID: number): Promise<BookEntity[]> {
    console.log(typeof(userID));
    const user: UserEntity = await UserEntity.findOne({where: {id: userID}, relations: ['books']});
    return user.books;
  }
}

Myproject/User/user.module.ts


import { Module } from '@nestjs/common';
import { UserServices } from './user.services';
import { UserController } from './user.controller';
@Module({
  imports: [],
  controllers: [UserController],
  providers: [UserServices],
})
export class UserModule {}

Genre

MyProject/Genre/genre.controller.ts


import { Body, Controller, Get, Post } from '@nestjs/common';
import GenreServices from './genre.services';
import CreateGenreDto from './dto/create-genre.dto';

@Controller('genre')
export default class GenreController {
  constructor(private readonly genreServices: GenreServices) {}
  @Post('post')
  postGenre( @Body() genre: CreateGenreDto) {
    return this.genreServices.insert(genre);
  }
  @Get()
  getAll() {
    return this.genreServices.getAllGenre();
  }
}

MyProject/Genre/genre.services.ts


import { Injectable } from '@nestjs/common';
import CreateGenreDto from './dto/create-genre.dto';
import GenreEntity from '../db/entity/genre.entity';

@Injectable()
export default class GenreServices {
    async insert(genreDetails: CreateGenreDto): Promise<GenreEntity> {

    const genreEntity: GenreEntity = GenreEntity.create();
    const {type} = genreDetails;

    genreEntity.type = type;
    await GenreEntity.save(genreEntity);
    return genreEntity;
  }
  async getAllGenre(): Promise<GenreEntity[]> {
        return await GenreEntity.find();
  }
}

MyProject/Genre/genre.module.ts


import { Module } from '@nestjs/common';
import GenreServices from './genre.services';
import GenreController from './genre.controller';
@Module({
  imports: [],
  controllers: [GenreController],
  providers: [GenreServices],
})
export default class GenreModule {}

Books


MyProject/Books/books.service.ts

import { Injectable } from '@nestjs/common';
import BookEntity from '../db/entity/book.entity';
import CreateBookDto from './dto/create-book.dto';
import UserEntity from '../db/entity/user.entity';
import GenreEntity from '../db/entity/genre.entity';

@Injectable()
export class BooksService {

  async insert(bookDetails: CreateBookDto): Promise<BookEntity> {
    const { name , userID , genreIDs } = bookDetails;
    const book = new BookEntity();
    book.name = name;
    book.user = await UserEntity.findOne(userID) ;
    book.genres=[];
    for ( let i = 0; i < genreIDs.length ; i++)
    {
             const genre = await GenreEntity.findOne(genreIDs[i]);
             book.genres.push(genre);
    }
    await book.save();
    return book;
  }
  async getAllBooks(): Promise<BookEntity[]> {
    return BookEntity.find();
  }
}
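The books.module.ts below imports a BooksController from './books.controller', which the original walkthrough does not show. Here is a minimal sketch of what such a controller could look like, mirroring the User and Genre controllers above; the route names are assumptions:

MyProject/Books/books.controller.ts

import { Body, Controller, Get, Post } from '@nestjs/common';
import { BooksService } from './books.service';
import CreateBookDto from './dto/create-book.dto';

@Controller('books')
export default class BooksController {
  constructor(private readonly booksService: BooksService) {}

  // 'postBook()' handles the creation of a new book
  @Post('post')
  postBook( @Body() book: CreateBookDto) {
    return this.booksService.insert(book);
  }

  // 'getAll()' returns the list of all existing books
  @Get()
  getAll() {
    return this.booksService.getAllBooks();
  }
}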

MyProject/Books/books.module.ts


import { Module } from '@nestjs/common';
import { BooksService } from './books.service';
import BooksController from './books.controller';
@Module({
  imports: [],
  controllers: [BooksController],
  providers: [BooksService],
})
export default class BooksModule {}

Now, finally, it is time to integrate everything in app.module.ts:


import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { UserModule } from './User/user.module';
import { TypeOrmModule } from '@nestjs/typeorm';
import UserEntity from './db/entity/user.entity';
import BooksModule from './Books/books.module';
import GenreModule from './Genre/genre.module';
import BookEntity from './db/entity/book.entity';
import GenreEntity from './db/entity/genre.entity';

@Module({
  imports: [UserModule ,
            BooksModule,
            GenreModule,
    TypeOrmModule.forFeature(
      [UserEntity, BookEntity , GenreEntity],
    ),

    TypeOrmModule.forRoot(),
  ],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}
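For completeness, the application still needs the standard Nest bootstrap file to start up; this is the default main.ts that the Nest CLI generates (port 3000 is the usual default):

src/main.ts

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  // create the Nest application from the root module and start listening
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();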

Categories
Backend Developer Development Programming Startup

Is Ruby On Rails Still Worth Learning In 2020?

Ruby on Rails is a web application framework written in Ruby and released under the MIT License. Rails follows the MVC (Model-View-Controller) structure, providing default structures for a database, a web service, and web pages.

Ruby on Rails 1.0 was released on 13 December 2005, and in its early years it greatly influenced web app development through new features such as seamless database table creation and scaffolding of views, enabling rapid application development.

RoR, or Ruby on Rails, is an older, mature technology, but in the last couple of years it has faced a difficult time as many new technologies have taken over parts of this sector. Some people say Rails is dead and isn't worth it, but here we are going to see why it is still worth learning in 2020.

Where Has ROR Been The Best

Rails has been the talk of the town, and it is quite an old framework. There are a few reasons that make Rails stand out and really tough to compete with, so let's look at those reasons.

Simpler business logic Execution

RoR offers a simple and fast process for implementing difficult business logic. For example, if you need an API for your application at short notice, Rails developers can build it really fast; put a front-end framework like React or Vue on top, and you are done.

Huge Collection of Gems

Ruby has a huge collection of gems created by its developers. They act as a bridge to fill the gaps left in web apps and their services, and the best thing about them is that they are free for commercial use. The minor features that the development team might otherwise have to build from scratch can be covered with their help.

The gem collection has made development much easier: when developers get stuck implementing a feature, there is often a ready-made gem that has it sorted.

There are various companies that use Ruby on Rails in their products and apps that we would be discussing later.

Rapid development Process

Rails, or RoR, is known for its fast development process. Developers choose Ruby on Rails for its quick nature, and creating a project with Rails is quite easy.

There is a difference of 40 to 45% in development speed when creating a project with RoR instead of other stacks. In layman's terms, if a developer uses another stack for a project, it can take roughly 40% more time than with Rails.

Various types of apps which are developed using Ruby on Rails

We would be mentioning 6 well-known apps that are developed with ROR. These are widely known and you might be using these on a daily basis.

Basecamp

It is a business organizer created by David Heinemeier Hansson, the creator of Ruby on Rails, and his team. We use Slack and apps like Asana that are tough competitors of Basecamp. Basecamp currently has around 2.5 million users; it is a good alternative app, and it was developed with RoR.

Shopify

Shopify is an e-commerce platform that gives potential entrepreneurs a platform to start a business. It lets you use payment integrations, manage content, generate a domain name, and do everything else you need to start an online business. Over half a million merchants use the Shopify platform, and it has generated over $40 billion in GMV. Shopify was developed with Ruby on Rails and launched after just two months of development. For potential entrepreneurs, Shopify is a great way to launch a business, as it provides everything a business needs.

Airbnb

Airbnb, Inc. is an online marketplace for arranging or offering lodging, primarily homestays, or tourism experiences. The number of people who use Airbnb for their travel stays is increasing rapidly: it has a total of 150 million+ users, of whom around 500k use it for overnight stays. Airbnb was also created with Ruby on Rails and is one of the best-known and most popular services that use this framework.

Fiverr

Almost every freelancer knows this website, but only a few people know that it was also created with Ruby on Rails. You can get a service or hire someone, or you can offer a service yourself, starting from $5 and going up to $200. You can get every type of service, from graphics and logos to webpage design; it covers almost everything. It is also one of the popular services that use this framework. If you are a remote developer, you can also register at our website.

Github

GitHub is a popular service used by almost 26 million people. It is an app created with Ruby on Rails and is used for bug tracking, task management, and other developer features.

Bloomberg

Bloomberg is developed with Ruby on Rails and specializes in data analysis, trading services, and news; these are Bloomberg's vital revenue-generating services. Like the others, it is a popular service that uses this framework.

Consider Ruby On Rails For These Projects

As we have discussed before, there are various apps you can develop with Ruby on Rails; now let's discuss some kinds of projects that are a good fit for it.

Fast prototyping

Ruby on Rails allows companies to build a small application or an MVP in no time. The fast development process allows the early acquisition of more customers, resulting in quicker and more efficient monetization. Developing an MVP shows you what your customers need and where your focus should be, and features and basic usability can be created really quickly with RoR.

E-commerce

E-commerce is trending, and people use it to expand their businesses. E-commerce platforms provide them with all the necessary tools, and the best example is Shopify. Ruby on Rails has gems for nearly every problem, which enhance the business side and bridge the gaps; Spree Commerce, built on Rails, is a popular e-commerce option.

Data solutions

Ruby on Rails is a tremendous framework for new and advanced startup models. It has an outstanding object-relational mapper, called ActiveRecord, allowing developers to navigate the database quickly without writing SQL. In addition, Ruby on Rails integrates easily with database management systems such as PostgreSQL.

Fluctuating concept

Ruby on Rails believes in a go-with-the-flow approach, which means you need not plan everything beforehand. Everything falls into place as the process goes on, and RoR is famous for this: you can add things while moving ahead. That's why we call it a fluctuating concept; nothing has to be fixed or planned up front.

Content Services

There are many good, SEO-friendly tools for developing and maintaining content in the Ruby ecosystem. Perhaps a content-based website would be right up your alley? If so, make sure Jekyll gets a shot.

The question that arises is whether learning RoR is still worth it. To explain why this question arises, I would like to mention some drawbacks where RoR has lost ground over the years.

Few Shortcomings of Ruby on Rails

Operational speed

It must be noted that Rails is not on the cutting edge in terms of speed. If you need fast processing and low resource usage on the server, then Ruby on Rails is definitely not the way to go. Keep in mind, though, that this is an edge case: you don't really need that much pace in most projects, particularly if you're developing a startup or an MVP and don't expect hundreds of millions of requests.

Ruby language

Artificial intelligence and machine learning are the hottest technologies these days. Many modern apps offer some sort of ML integration to help users with tedious tasks, or even automate some jobs entirely by substituting software for staff.

It's a shame that the Ruby language is, to put it simply, bad at this. Python is the leading technology here, not to mention one of the world's most popular programming languages, and it is much faster than Ruby; even Java is considered one of the best technologies for the job. Unfortunately, machine learning is another major trend not followed by our beloved language, mainly due to the lack of the required libraries.

Less creativity left for the developers

If you're familiar with the design of Ruby on Rails, you probably know it is considered very thought-out: it only allows you to create your app the way Rails "wants" you to. While this has a lot going for it, building an unusual application may be a pain, as the many default modules may not leave developers sufficient room for creativity.

Wrapping up

The latest version of Ruby on Rails, 6.0.1, was released on 5 November 2019, and RoR has been evolving since the early 2000s. Being a mature technology, there is still a lot to learn about it, and many startups are using RoR. Ruby on Rails' recent and upcoming releases sound very promising: many user concerns have been addressed, and each new version adds exciting features.

Hopefully, in the near future, both Ruby and Rails will rise again, so we cannot say that RoR is obsolete or dead. If you make full use of the potential of Ruby on Rails, you will be able to develop sophisticated applications in no time. And one should not forget that RoR has a massive gem ecosystem that many new technologies don't have. So, in my opinion, it is still worth learning Ruby on Rails in 2020; there is plenty of scope left to utilize. And if we talk about paychecks, according to data from indeed.com, Ruby on Rails developers in the USA get the highest salary, followed by Python, JavaScript, Clojure, Java, and Node.js developers, in that order.